Solution of Schrödinger equation for a step potential

In quantum mechanics and scattering theory, the one-dimensional step potential is an idealized system used to model incident, reflected and transmitted matter waves. The problem consists of solving the time-independent Schrödinger equation for a particle with a step-like potential in one dimension. Typically, the potential is modelled as a Heaviside step function.

Schrödinger equation and potential function

[Figure: Scattering at a finite potential step of height V_0, shown in green. The amplitudes and directions of the left- and right-moving waves are indicated; yellow is the incident wave, blue the reflected and transmitted waves, red does not occur. E > V_0 in this figure.]

The time-independent Schrödinger equation for the wave function \psi(x) reads

H\psi(x) = \left[ -\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + V(x) \right] \psi(x) = E\psi(x),

where H is the Hamiltonian, \hbar is the reduced Planck constant, m is the mass, and E the energy of the particle. The step potential is simply the product of V_0, the height of the barrier, and the Heaviside step function:

V(x) = \begin{cases} 0, & x < 0 \\ V_0, & x \ge 0 \end{cases}

The barrier is positioned at x = 0, though any position x_0 may be chosen without changing the results, simply by shifting the position of the step by -x_0. The first term in the Hamiltonian, -\frac{\hbar^2}{2m}\frac{d^2}{dx^2}\psi, is the kinetic energy of the particle.

The step divides space into two parts: x < 0 and x > 0. In each of these parts the potential is constant, meaning the particle is quasi-free, and the solution of the Schrödinger equation can be written as a superposition of left- and right-moving waves (see free particle):

\psi_1(x) = A_\rightarrow e^{i k_1 x} + A_\leftarrow e^{-i k_1 x} \quad (x < 0), \qquad \psi_2(x) = B_\rightarrow e^{i k_2 x} + B_\leftarrow e^{-i k_2 x} \quad (x > 0),

with the wave vectors in the respective regions being

k_1 = \frac{\sqrt{2mE}}{\hbar}, \qquad k_2 = \frac{\sqrt{2m(E - V_0)}}{\hbar},

both of which have the same form as the de Broglie relation (in one dimension), p = \hbar k.

Boundary conditions

The coefficients A, B have to be found from the boundary conditions of the wave function at x = 0. The wave function and its derivative have to be continuous everywhere, so

\psi_1(0) = \psi_2(0), \qquad \frac{d}{dx}\psi_1(0) = \frac{d}{dx}\psi_2(0).

Inserting the wave functions, the boundary conditions give the following restrictions on the coefficients:

A_\rightarrow + A_\leftarrow = B_\rightarrow + B_\leftarrow, \qquad k_1 (A_\rightarrow - A_\leftarrow) = k_2 (B_\rightarrow - B_\leftarrow).

Transmission and reflection

It is useful to compare the situation to the classical case. In both cases, the particle behaves as a free particle outside of the barrier region. A classical particle with energy E larger than the barrier height V_0 will be slowed down but never reflected by the barrier, while a classical particle with E < V_0 incident on the barrier from the left would always be reflected. Once we have found the quantum-mechanical result, we will return to the question of how to recover the classical limit.

To study the quantum case, let us consider the following situation: a particle incident on the barrier from the left (A_\rightarrow). It may be reflected (A_\leftarrow) or transmitted (B_\rightarrow). Here and in the following, assume E > V_0. To find the amplitudes for reflection and transmission for incidence from the left, we set in the above equations A_\rightarrow = 1 (incoming particle), A_\leftarrow = \sqrt{R} (reflection), B_\leftarrow = 0 (no incoming particle from the right) and B_\rightarrow = \sqrt{T k_1/k_2} (transmission), and solve for T and R. The result is:

\sqrt{R} = \frac{k_1 - k_2}{k_1 + k_2}, \qquad T = \frac{4 k_1 k_2}{(k_1 + k_2)^2}, \qquad R = \left( \frac{k_1 - k_2}{k_1 + k_2} \right)^2.

The model is symmetric with respect to a parity transformation combined with an interchange of k_1 and k_2. For incidence from the right we therefore obtain the same amplitudes for transmission and reflection.
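The closed-form result is easy to check numerically. The following short Python sketch (our illustration, not part of the original article) evaluates R and T in units where m = \hbar = 1 and confirms that R + T = 1:

```python
import numpy as np

def step_probabilities(E, V0, m=1.0, hbar=1.0):
    """Reflection and transmission probabilities for a Heaviside step
    of height V0, for a particle of energy E > V0."""
    k1 = np.sqrt(2.0 * m * E) / hbar
    k2 = np.sqrt(2.0 * m * (E - V0)) / hbar
    R = ((k1 - k2) / (k1 + k2)) ** 2
    T = 4.0 * k1 * k2 / (k1 + k2) ** 2
    return R, T

R, T = step_probabilities(E=2.0, V0=1.0)
print(f"R = {R:.4f}, T = {T:.4f}, R + T = {R + T:.4f}")
# R = 0.0294, T = 0.9706, R + T = 1.0000: even at twice the step height,
# about 3% of incident particles are reflected, unlike the classical case.
```

Since R depends only on the ratio E/V_0 (the mass and \hbar cancel), the same numbers are obtained for any consistent choice of units, which is exactly the point taken up in the classical-limit discussion below.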
Analysis of the expressions

[Figure: Reflection and transmission probability at a Heaviside step potential. Dashed: classical result. Solid lines: quantum mechanics. For E < V_0 the classical and quantum problems give the same result.]

Energy less than step height (E < V_0)

For energies E < V_0, the wave vector k_2 becomes imaginary, k_2 = i\kappa with \kappa = \sqrt{2m(V_0 - E)}/\hbar, and the wave function to the right of the step decays exponentially over a distance 1/\kappa.

Energy greater than step height (E > V_0)

In this energy range the transmission and reflection coefficients differ from the classical case. They are the same for incidence from the left and from the right:

T = \frac{4 k_1 k_2}{(k_1 + k_2)^2}, \qquad R = \left( \frac{k_1 - k_2}{k_1 + k_2} \right)^2.

In the limit of large energies, E \gg V_0, we have k_1 \approx k_2 and the classical result T = 1, R = 0 is recovered. Thus there is a finite probability for a particle with an energy larger than the step height to be reflected.

Classical limit

The result obtained for R depends only on the ratio E/V_0. This seems superficially to violate the correspondence principle, since we obtain a finite probability of reflection regardless of the value of Planck's constant or the mass of the particle. For example, we seem to predict that when a marble rolls to the edge of a table, there can be a large probability that it is reflected back rather than falling off. Consistency with classical mechanics is restored by eliminating the unphysical assumption that the step potential is discontinuous. When the step function is replaced with a ramp that spans some finite distance w, the probability of reflection approaches zero in the limit wk \rightarrow \infty, where k is the wavenumber of the particle.[1]

The Heaviside step potential mainly serves as an exercise in introductory quantum mechanics, as the solution requires understanding of a variety of quantum-mechanical concepts: wavefunction normalization, continuity, and incident/reflected/transmitted amplitudes and probabilities.

A similar problem appears in the physics of normal-metal–superconductor interfaces: quasiparticles are scattered at the pair potential, which in the simplest model may be assumed to have a step-like shape. The solution of the Bogoliubov–de Gennes equation resembles that of the Heaviside step potential discussed here. In the superconductor–normal-metal case this gives rise to Andreev reflection.

References

1. D. Branson, "The correspondence principle and scattering from potential steps", American Journal of Physics, Vol. 47, 1101-1102, 1979.
• Elementary Quantum Mechanics, N.F. Mott, Wykeham Science, Wykeham Press (Taylor & Francis Group), 1972, ISBN 0-85109-270-5
• Stationary States, A. Holden, College Physics Monographs (USA), Oxford University Press, 1971, ISBN 0-19-851121-3
• Quantum Mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Outlines, McGraw-Hill (USA), 1998, ISBN (10-)007-0540187

Further reading

• The New Quantum Universe, T. Hey, P. Walters, Cambridge University Press, 2009, ISBN 978-0-521-56457-1
• Quantum Field Theory, D. McMahon, McGraw-Hill (USA), 2008, ISBN 978-0-07-154382-8
• Quantum Mechanics, E. Zaarur, Y. Peleg, R. Pnini, Schaum's Easy Outlines Crash Course, McGraw-Hill (USA), 2006, ISBN (10-)007-145533-7, ISBN (13-)978-007-145533-6
Particle Swarm Optimization and Intelligence: Advances and Applications
(Premier Reference Source)

Konstantinos E. Parsopoulos, University of Ioannina, Greece
Michael N. Vrahatis, University of Patras, Greece

Information Science Reference, Hershey • New York

Director of Editorial Content: Kristin Klinger
Director of Book Publications: Julia Mosemann
Development Editor: Joel Gamon
Publishing Assistant: Sean Woznicki
Typesetter: Deanna Zombro
Quality Control: Jamie Snavely
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.

Published in the United States of America by Information Science Reference (an imprint of IGI Global), 701 E. Chocolate Avenue, Hershey PA 17033. Tel: 717-533-8845, Fax: 717-533-8661, E-mail: cust@igi-global.com

Copyright © 2010 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Particle swarm optimization and intelligence : advances and applications / Konstantinos E. Parsopoulos and Michael N. Vrahatis, editors.
p. cm.
Summary: "This book presents the most recent and established developments of Particle swarm optimization (PSO) within a unified framework by noted researchers in the field"--Provided by publisher.
Includes bibliographical references and index.
ISBN 978-1-61520-666-7 (hardcover) -- ISBN 978-1-61520-667-4 (ebook)
1. Mathematical optimization. 2. Particles (Nuclear physics) 3. Swarm intelligence. I. Parsopoulos, Konstantinos E., 1974- II. Vrahatis, Michael N., 1955-
QC20.7.M27P37 2010
519.6--dc22
2009041376

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.

Dedication

To my wife Anastasia and our sons Vangelis and Manos
K.E. Parsopoulos

To my wife Irene
M.N. Vrahatis

Foreword

Swarm intelligence is an exciting new research field, still in its infancy compared to other paradigms in artificial intelligence. With many successful applications to a wide variety of complex problems, swarm-based algorithms have shown much promise, being efficient and robust, yet very simple to implement. A number of computational swarm-based systems have been developed in the past decade, where the approach is to model the very simple local interactions among individuals, from which complex problem-solving behaviors emerge.
One of the research areas within computational swarm intelligence is particle swarm optimization (PSO), which has its origins in bird-flocking models. Each individual, referred to as a particle, follows two very simple behaviors, i.e., to follow the best-performing individual, and to move towards the best conditions found by the individual itself. In terms of optimization, each particle moves towards two attractors, with the result that all particles converge on one solution.

Since its inception in 1995, research and application interest in PSO have increased, resulting in an exponential increase in the number of publications and applications. Research in PSO has resulted in a large number of new PSO algorithms that improve the performance of the original PSO and enable the application of PSO to different optimization problem types (e.g., unconstrained optimization, constrained optimization, multiobjective optimization, optimization in dynamic environments, and finding multiple solutions). Elaborate theoretical studies of PSO dynamics have been done, and PSO parameter sensitivity analyses have resulted in a better understanding of the influence of PSO control parameters. PSO applications vary in complexity and cover a wide range of application areas. To date, the total number of PSO publications since 1995 comes to approximately 1500.

It should be evident to the reader that the published knowledge available on PSO is vast. This provides motivation for a dedicated, up-to-date book on particle swarm optimization. However, such a task is not an easy one. These authors have succeeded in the daunting task of sifting through the large volumes of PSO literature to produce a text that focuses on the most recent and significant developments in PSO. The authors have also succeeded in conveying their significant experience in PSO development and application to the benefit of the reader. It should be noted that the intention of this book was not to produce an encyclopedia of PSO research and applications, but to provide both the novice and the experienced PSO user and researcher with an introductory as well as expert-level overview of PSO. As such, the authors provide the reader with a compact source of information on PSO, and a foundation for the development of new PSO algorithms and applications.

The book is very well organized, starting with an overview of optimization, evolutionary computation, and swarm intelligence in general. This is followed by a detailed development of the original PSO and its first improvements. A concise summary of theoretical analyses is given, followed by detailed discussions of state-of-the-art PSO models. An excellent contribution made by the book is the coverage of a wide range of real-world applications, and of different optimization problem types. Throughout, the authors have provided a book which is hands-on, making the book accessible to first-time PSO users. Another positive of the book is the collection of benchmark problems given in the appendix, and the list of resources provided.

The authors have succeeded in their objective to produce a book which covers the main trends in PSO research and applications, while still producing text that is accessible to a wide range of readers. I have no second thoughts in recommending this book, and in stating that it will be a valuable resource to the PSO practitioner and researcher.

Andries P. Engelbrecht
University of Pretoria, South Africa

Andries Engelbrecht is a professor in Computer Science at the University of Pretoria, South Africa.
He also holds the position of South African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group at the University of Pretoria, consisting of 50 Masters and PhD students. He obtained his Masters and PhD degrees in Computer Science from the University of Pretoria in 1994 and 1999, respectively. His research interests include swarm intelligence, evolutionary computation, artificial neural networks, artificial immune systems, and the application of these CI paradigms to data mining, games, bioinformatics, and finance. He has published over 130 papers in these fields in journals and international conference proceedings, and is the author of two books, "Computational Intelligence: An Introduction" and "Fundamentals of Computational Swarm Intelligence". In addition to these, he is a co-editor of the upcoming books "Applied Swarm Intelligence" and "Foundations on Computational Intelligence". He is very active in the international community, annually serving as a reviewer for over 20 journals and 10 conferences. He is an associate editor of the IEEE Transactions on Evolutionary Computation, the Journal of Swarm Intelligence, and the recent IEEE Transactions on Computational Intelligence and AI in Games. Additionally, he serves on the editorial board of 3 other international journals, and is co-guest-editor of special issues of the IEEE Transactions on Evolutionary Computation and the Journal of Swarm Intelligence. He has served on the international program committees and organizing committees of a number of conferences, organized special sessions, presented tutorials, and taken part in panel discussions. As a member of the IEEE CIS, he is a member of the Games technical committee and chair of its Swarm Intelligence for Games task force. He also serves as a member of the Computational Intelligence and Machine Learning Virtual Infrastructure Network.

Preface

Optimization is the procedure of detecting attributes, configurations or parameters of a system, to produce desirable responses. For example, in structural engineering, one is interested in detecting the best possible design to produce a safe and economic structure that adheres to specific mechanical engineering rules. Similarly, in computer science, one is interested in designing high-performance computer systems at the lowest cost, while, in operations research, corporations struggle to identify the best possible configuration of their production lines to increase their operational flexibility and efficiency. Numerous other systems (including human ones) can be mentioned, where there is a need or desire for explicit or implicit improvement.

All these problems can be scientifically resolved through modeling and optimization. Modeling offers a translation of the original (physical, engineering, economic, etc.) problem to a mathematical structure that can be handled through algorithmic optimization procedures. The model is responsible for the proper representation of all key features of the original system and its accurate simulation. Concurrently, it offers a mathematical means of identifying and modifying the system's properties to produce the most desirable outcome without requiring its actual construction, thereby saving time and cost.

The produced models are usually formulated as functions, called objective functions, in one or several variables that correspond to adaptable parameters of the system.
The model is built in such a way that, based on the particular optimality criteria per case, the most desirable system configurations correspond to the extremal values of the objective function. Thus, the original system optimization problem is transformed to an equivalent function minimization or maximization problem. The difficulty in solving this problem is heavily dependent on the form and mathematical properties of the objective function.

It is possible that a solution of the optimization problem can be achieved through an analytic approach that involves minimum effort. Unfortunately, this case is rather an exception. In most problems, complicated systems are modeled with complicated multi-dimensional functions that cannot be easily addressed. In such cases, algorithmic procedures that take full advantage of modern computer systems can be implemented to solve the underlying optimization problems numerically. Of course, only approximations of the original solutions can be obtained under this scope. Thus, computation accuracy, time criticality, and implementation effort become important aspects of the numerical optimization procedure.

To date, a multitude of algorithms that exploit favorable mathematical properties of the objective function, such as differentiability and Lipschitz continuity, have been developed. These approaches use first-order and second-order derivatives and achieve high convergence rates. However, the necessary assumptions for their application are not usually met in practice. Indeed, the blossoming of technological research and engineering has introduced a plethora of optimization problems with minimal available information regarding their form and inherent attributes. Typical properties of such problems are the existence of discontinuities, the lack of an analytical representation of the objective function, and noise dissemination.

In these circumstances, the applicability and efficiency of classical optimization algorithms are questionable, giving rise to the need for the development of different optimization methods. Early attempts in this direction focused on stochastic algorithms that employ only function values. Pure random search is the most trivial approach, although its performance degenerates rapidly with problem dimension and complexity, since it does not exploit information gained in previous steps of the algorithm. On the other hand, combinations of random and classical algorithms have offered better results; nevertheless, the necessity for strong mathematical assumptions on the objective function was still inevitable.

However, researchers gradually realized that several systems observed in nature are able to cope efficiently with similar optimization problems. Thus, the trend to study and incorporate models of natural procedures in optimization algorithms gradually gained ground, becoming an appealing alternative. Early approaches, such as simulated annealing, offered the potential of solving problems that were laborious for other algorithms. However, for a number of years, they remained at the margin of the relevant literature, due to their limited theoretical development at that time.

At the same time, a new type of algorithm was slowly but steadily emerging.
The inspiration behind their development stemmed from the study of adaptation mechanisms in natural systems, such as DNA. The underlying operations that support evolutionary mechanisms according to the Darwinian biological theory were modeled and used to evolve problem solutions, based on user-defined optimality criteria. Although these optimization approaches were initially supported by limited theoretical analyses, their promising results on complex problems previously considered intractable offered a boost to research, especially in the engineering community. Research groups in the USA and Europe managed to refine early variants of these algorithms, introducing a set of efficient approaches under the general name of evolutionary algorithms.

Theoretical studies were soon conducted, rendering these algorithms promising alternatives in cases where classical approaches were not applicable. The new type of algorithms concentrated all desirable optimization features as well as novel concepts. For example, the search was performed not by one but rather by a population of interacting search agents without central control. Stochasticity, communication, information exchange, and adaptation became part of their operation, while the requirements on the objective function were restricted to the least possible, namely the ability to perform function evaluations.

The success recognized by evolutionary approaches sparked off research all over the world. As a result, in the mid-90's a new category of algorithms appeared. Instead of modeling evolutionary procedures at the microscopic (DNA) level, these methods model populations at a macroscopic level, i.e., in terms of social structures and aggregating behaviors. Once again, nature was offering inspiration and motivation to scientists. Hierarchically organized societies of simple organisms, such as ants, bees, and fish, with a very limited range of individual responses, exhibit fascinating behaviors with identifiable traits of intelligence as a whole. The lack of a central tuning and control mechanism in such systems has triggered scientific curiosity. Simplified models of these systems were developed and studied through simulations. Their dynamics were approximated by mathematical models similar to those used in particle physics, while probability theory and stochastic processes offered a solid theoretical background for the development of a new category of algorithms under the name of swarm intelligence.

Particle swarm optimization (PSO) belongs to this category and constitutes the core subject of the book at hand. Its early precursors were simulators of social behavior that implemented rules such as nearest-neighbor velocity matching and acceleration by distance, to produce swarming behavior in groups of simple agents. As soon as the potential of these models to serve as optimization algorithms was recognized, they were refined, resulting in the first version of PSO, which was published in 1995 (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995).

Since its development, PSO has gained wide recognition due to its ability to provide solutions efficiently, requiring only minimal implementation effort. This is reflected in Fig. 1, which illustrates the number of journal papers with the term "particle swarm" in their titles published by three major publishers, namely Elsevier, Springer, and IEEE, during the years 2000-2008.

[Figure 1. Number of journal papers with the term "particle swarm" in their titles, published by three major publishers, namely Elsevier, Springer, and IEEE, during the years 2000-2008.]

Also, the potential of PSO for straightforward parallelization, as well as its plasticity, i.e., the ability to easily adapt its components and operators to assume a desired form implied by the problem at hand, has placed PSO in a salient position among intelligent optimization algorithms.
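The "minimal implementation effort" mentioned above can be made concrete: the core of the method fits in a few lines. The sketch below is our compact Python rendering of the standard global-best scheme, not the book's reference implementation (an indicative Matlab implementation is given in Appendix B), and the parameter values are common illustrative defaults. Each particle is pulled towards its own best position and the swarm's best position, with an inertia weight and velocity clamping of the kind discussed later in the book:

```python
import numpy as np

def pso(f, lower, upper, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO sketch with inertia weight and velocity clamping."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))    # particle positions
    v = np.zeros_like(x)                               # particle velocities
    vmax = 0.2 * (hi - lo)                             # velocity clamp
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]                  # best position found so far
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # inertia + cognitive (own best) + social (swarm best) components
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        v = np.clip(v, -vmax, vmax)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

best_x, best_f = pso(lambda z: np.sum(z**2), lower=[-5, -5], upper=[5, 5])
print(best_x, best_f)   # converges near the origin, the global minimizer
```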
The authors of the book at hand have contributed a remarkable amount of work on PSO and gained in-depth experience of its behavior and its handling of different problem types. This book constitutes an attempt to present the most recent and established developments of PSO within a unified framework and to share their long experience, as described in the following section, with the reader.

WHAT THIS BOOK IS AND IS NOT ABOUT

This book is not about numerical optimization or evolutionary and swarm intelligence algorithms in general. It does not aim at analyzing general optimization procedures and techniques or demonstrating the application of evolutionary algorithms. The book at hand is completely devoted to the presentation of PSO. Its main objectives are to:

1. Provide a unified presentation of the most established PSO variants, from early precursors to concurrent state-of-the-art approaches.
2. Provide a rough sketch of the established theoretical analyses and their impact on the established variants of the algorithm.
3. Provide complementary techniques that enhance its performance on particular problem types.
4. Illustrate its workings on a plethora of different problem types, providing guidelines and experimental settings along with representative results that equip the reader with the necessary background and offer a starting point for further investigation of similar problems.

All these objectives were meant to be achieved using the simplest possible notation and descriptions, to render non-expert readers capable of taking full advantage of the presented material.

In order to achieve these ambitious goals, the authors relied on their accumulated experience of more than ten years working on PSO. The majority of the presented material is based on their own work, which corresponds to more than 60 papers in international refereed scientific journals, book chapters, edited volumes, and conference proceedings, and more than a thousand citations from other researchers. A wide variety of applications is presented together with details on published results. Also, pseudocode is provided for the presented methodologies and procedures when applicable, while special mathematical manipulations and transformations are analyzed whenever needed. Besides that, many recent developments of other researchers are reported, always striving to keep the presentation simple and understandable. The provided material and references can serve as an Ariadne's thread for further acquisition and development of more sophisticated approaches.

We tried to avoid the pitfall of rendering this book another literature review on PSO. Nevertheless, there are excellent survey papers for this purpose (AlRashidi & El-Hawary, 2009; Banks et al., 2007, 2008; Hendtlass & Randall, 2001; Parsopoulos & Vrahatis, 2002; Reyes-Sierra & Coello Coello, 2006; Yang & Li, 2004). Instead, only the most important developments with potential for further extension and improvement are analyzed.
We did not aim at deluging the reader with a huge number of minor developments or reproductions of established sound results, but rather to offer the most essential concepts and considerations as the amalgam of our experience and personal viewpoint on the ongoing progress of PSO research all these years.

The target audience of the book embraces undergraduate and graduate students with a special interest in PSO and related approaches, as well as researchers and scientists who employ heuristic algorithms for problem solving in science and engineering. The level of mathematics was intentionally kept low to make descriptions comprehensible even to a multidisciplinary (non-expert) audience. Thus, elementary calculus is adequate for the comprehension of most of the presented algorithms in the first part of the book, while essential specialized knowledge of machine learning, dynamical systems, and operations research may be useful in specific chapters of the second part.

This book can also serve as an essential reference guide to established advances in PSO, as well as a stepping stone for further developments. It is distinguished from related books by its specialization solely in PSO, without, however, restricting its algorithmic material to the narrow personal achievements and considerations of the authors. Thus, it can be valuable to a wide scientific audience of both novice and expert researchers.

ORGANIZATION OF THE BOOK

The book is divided in two sections. Section 1 consists of Chapters 1 to 5 and presents basic developments in PSO along with theoretical derivations, state-of-the-art variants, and performance-enhancing techniques. Section 2 consists of Chapters 6 to 12, where various applications of PSO are presented and trends for future research are reported. The book also has two appendices. Appendix A contains descriptions of the test problems used throughout the text, especially in Section 2. Appendix B contains an indicative simple implementation of PSO in Matlab©, as well as further web resources on PSO. A brief description of each chapter is provided in the following paragraphs.

Chapter 1 introduces the reader to basic concepts of global optimization, evolutionary computation, and swarm intelligence, outlines the necessity of solving optimization problems, and identifies various problem types. Also, a rough classification of established optimization algorithms is provided, followed by the historical development of evolutionary computation. The three fundamental evolutionary approaches, namely genetic algorithms, evolutionary programming, and evolution strategies, are briefly presented, along with their basic features and operations. Then, swarm intelligence is presented, followed by short descriptions of its three main algorithms, namely ant colony optimization, stochastic diffusion search, and particle swarm optimization. Finally, reference is made to the no-free-lunch theorem to justify the necessity for further development of intelligent optimization algorithms.

Chapter 2 is devoted to the presentation of particle swarm optimization (PSO). The description begins with the main inspiration source that led to the development of its early precursors. Severe deficiencies of these approaches are pointed out and addressed by introducing new concepts, such as the inertia weight, velocity clamping, and the concept of neighborhood.
Finally, the reader is brought to the present day by an exposition of the standard contemporary developments, which are considered the state-of-the-art PSO variants nowadays.

Chapter 3 briefly exposes the fundamental theoretical derivations and issues of PSO. Emphasis is given to developments that offered new insight into configuring and tuning its parameters. The chapter begins with a discussion of initialization procedures, followed by the first attempts to investigate particle trajectories. These studies opened the way for the stability analysis of PSO, which resulted in contemporary sophisticated variants and rules for parameter setting. We then present a useful technique for the optimal tuning of PSO on specific problems, based on computational statistics. The workings of this technique are illustrated in detail on two problems. The chapter closes with a short discussion of the most common termination conditions.

Chapter 4 presents established and recently proposed PSO variants. The presented methods were selected from the extensive PSO literature according to various criteria, such as sophisticated inspiration source, close relationship to the standard PSO form, wide applicability in problems of different types, satisfactory performance and theoretical properties, number of reported applications, and potential for further development and improvement. Thus, the unified PSO, memetic PSO, composite PSO, vector evaluated PSO, guaranteed convergence PSO, cooperative PSO, niching PSO, TRIBES, and quantum PSO are described and their fundamental concepts are exposed.

Chapter 5 closes the first section of the book by presenting performance-enhancing techniques. These techniques consist of transformations of either the objective function or the problem variables, enabling PSO to alleviate local minimizers, detect multiple global minimizers, handle constraints, and solve integer programming problems. The chapter begins with a short discussion of the filled functions approach, followed by the stretching technique as an alternative for alleviating local minimizers. Next, deflection and repulsion techniques are presented as means for detecting multiple minimizers with PSO. The penalty function approach for handling constraints is discussed and two recently proposed approaches are reported. The chapter closes with the description of two rounding schemes that enable the real-valued PSO to solve integer programming problems. The techniques are thoroughly described and, wherever possible, graphically illustrated.

Chapter 6 focuses on the application of PSO to machine learning problems. Two representative cases, namely the training of artificial neural networks and learning in fuzzy cognitive maps, are first defined within a general framework, followed by illustrative examples that familiarize the reader with the main procedures and point out possible obstacles of the underlying optimization procedure.

Chapter 7 presents the application of PSO to dynamical systems and, more specifically, to the detection of periodic orbits. The transformation of the original fixed-point problem to the corresponding optimization problem is analyzed, and indicative examples on widely used nonlinear mappings are reported in the first part of the chapter. The second part thoroughly describes an important application on the detection of periodic orbits in 3-dimensional galactic potentials, illustrating the ability of PSO to detect previously unknown solutions.

Chapter 8 consists of three representative applications of PSO in operations research.
Similarly to previous chapters, attention is focused on the presentation of essential aspects of the applications rather than on reviewing the existing literature. Thus, we present the formulation of the optimization problem from the original one, along with the efficient treatment of special problem requirements that cannot be handled directly by PSO. Applications from the fields of scheduling, inventory optimization, and game theory are given, accompanied by recently published results to provide a flavor of PSO's efficiency per case.

Chapter 9 presents two interesting applications of PSO in bioinformatics and medical informatics. The first consists of the adaptation of probabilistic neural network models for medical classification tasks. The second application considers PSO variants for tackling two magnetoencephalography problems, namely source localization and the refinement of approximation models. The main points where PSO interferes with the employed computational models are analyzed, and details are provided regarding the formulation of the corresponding optimization problems and their experimental settings. Indicative results are also reported as representative performance samples.

Chapter 10 discusses the workings of PSO in two intimately related research fields with special focus on real-world applications, namely noisy and dynamic environments. Noise simulation schemes are presented and experimental results on benchmark problems are reported. Also, the application of PSO to a simulated real-world problem, namely particle identification by light scattering, is presented. Moreover, a hybrid scheme that incorporates PSO in particle filtering methods to estimate system states online is analyzed, and representative experimental results are provided. Finally, the combination of noisy and continuously changing environments is briefly discussed, with illustrative graphical representations of the performance of different PSO variants.

Chapter 11 essentially closes the second section of the book by presenting applications of PSO to three very interesting problem types, namely multiobjective, constrained, and minimax optimization problems. The largest part of the chapter is devoted to the multiobjective case, which is supported by an extensive bibliography with a rich assortment of PSO approaches developed to date. Different algorithm types are presented and briefly discussed, insisting on the most influential approaches for the specific problem types.

Chapter 12 closes the book by providing current trends and future directions in PSO research. This information can be beneficial to new researchers with an interest in conducting PSO-related research, since it enumerates open problems and active research topics.

Appendix A contains all test problems employed throughout the text. Thus, widely used unconstrained optimization problems, nonlinear mappings, inventory optimization problems, game theory problems, data sets for classification tasks in bioinformatics, multiobjective problems, constrained benchmark and engineering design problems, as well as minimax problems are reported. Citations for each problem are also provided.

Finally, Appendix B contains an indicative simple implementation of the PSO algorithm in Matlab©, as well as references to web resources for further information on developments and implementations of PSO.

Each chapter is written in a self-contained manner, although several references to the first section of the book are made in chapters devoted to applications.
We hope that readers will find our approach interesting and help us improve it by providing their comments, considerations, and suggestions.

REFERENCES

AlRashidi, M. R., & El-Hawary, M. E. (2009). A survey of particle swarm optimization applications in electric power systems. IEEE Transactions on Evolutionary Computation, 13(4), 913-918.

Banks, A., Vincent, J., & Anyakoha, C. (2007). A review of particle swarm optimization. Part I: background and development. Natural Computing, 6(4), 476-484.

Banks, A., Vincent, J., & Anyakoha, C. (2008). A review of particle swarm optimization. Part II: hybridization, combinatorial, multicriteria and constrained optimization, and indicative applications. Natural Computing, 7(1), 109-124.

Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the 6th Symposium on Micro Machine and Human Science, Nagoya, Japan (pp. 39-43). Piscataway, NJ: IEEE Service Center.

Hendtlass, T., & Randall, M. (2001). A survey of ant colony and particle swarm meta-heuristics and their application to discrete optimisation problems. In Proceedings of the Inaugural Workshop on Artificial Life, Adelaide, Australia (AL 2001) (pp. 15-25).

Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia (Vol. IV, pp. 1942-1948). Piscataway, NJ: IEEE Service Center.

Parsopoulos, K. E., & Vrahatis, M. N. (2002). Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3), 235-306.

Reyes-Sierra, M., & Coello Coello, C. A. (2006). Multi-objective particle swarm optimizers: a survey of the state-of-the-art. International Journal of Computational Intelligence Research, 2(3), 287-308.

Yang, W., & Li, Q. Q. (2004). Survey on particle swarm optimization algorithm. Engineering Science, 5(6), 87-94.

Acknowledgment

The authors wish to thank all those who offered their support during the elaboration of this book. Special thanks are due to our collaborators over all these years of research on particle swarm optimization. Also, we feel obligated to thank the pioneers, Professor Russell C. Eberhart at the Purdue School of Engineering and Technology, Indiana University Purdue University Indianapolis (IUPUI), Indianapolis (IN), USA, and Dr. James Kennedy at the Bureau of Labor Statistics, US Department of Labor, Washington (DC), USA, who created a new and fascinating research direction by introducing particle swarm optimization back in 1995.

Special thanks are also due to the anonymous reviewers, whose constructive criticism and recommendations helped us enhance the quality of this book. Our sincere gratitude also goes to Professor Andries P. Engelbrecht at the Department of Computer Science, University of Pretoria, South Africa, for writing the foreword for this book. One of us (K.E.P.) also wishes to thank the State Scholarships Foundation (IKY) of Greece for partially supporting his efforts.

The authors wish to thank their wives for their patience, understanding, and affection all these days of endless work. Additionally, Konstantinos E. Parsopoulos wishes to thank his two little sons for putting the pressure on him to eventually finalize this project, as well as his parents for their unconditional support and encouragement.

K.E. Parsopoulos and M.N. Vrahatis
Patras, Greece
June 2009

Section 1
Theory and Methods
Chapter 1
Introduction

In this chapter, we provide brief introductions to the basic concepts of global optimization, evolutionary computation, and swarm intelligence. The necessity of solving optimization problems is outlined and various problem types are reported. A rough classification of established optimization algorithms is provided, followed by the historical development of evolutionary computation. The three fundamental evolutionary approaches are briefly presented, along with their basic features and operations. Finally, the reader is introduced to the field of swarm intelligence, and a strong theoretical result is concisely reported to justify the necessity for further development of global optimization algorithms.

WHAT IS OPTIMIZATION?

Optimization is a scientific discipline that deals with the detection of optimal solutions for a problem, among alternatives. The optimality of solutions is based on one or several criteria that are usually problem- and user-dependent. For example, a structural engineering problem can admit solutions that primarily adhere to fundamental engineering specifications, as well as to the aesthetic and operational expectations of the designer. Constraints can be posed by the user or the problem itself, thereby reducing the number of prospective solutions. If a solution fulfills all constraints, it is called a feasible solution. Among all feasible solutions, the global optimization problem concerns the detection of the optimal one. However, this is not always possible or necessary. Indeed, there are cases where suboptimal solutions are acceptable, depending on their quality compared to the optimal one. This is usually described as local optimization, although the same term has also been used to describe local search in a strict vicinity of the search space.

A modeling phase always precedes the optimization procedure. In this phase, the actual problem is modeled mathematically, taking into account all the underlying constraints. The building blocks of candidate solutions are translated into numerical variables, and solutions are represented as numerical vectors. Moreover, a proper mathematical function is built, such that its global minimizers, i.e., points where its minimum value is attained, correspond to optimal solutions of the original problem. This function is called the objective function, and the detection of its global minimizer(s) is the core subject of global optimization. Instead of minimization, an optimization problem can be equivalently defined as maximization by inverting the sign of the objective function. Without loss of generality, we consider only minimization cases in the book at hand.

The objective function is accompanied by a domain, i.e., a set of feasible candidate solutions. The domain is delimited by problem constraints, which need to be quantified properly and described mathematically using equality and inequality relations. In the simplest cases, constraints are limited to bounding boxes of the variables. In harder problems, complex relations among the variables must hold in the final solution, rendering the minimization procedure rather complicated.

Analytical derivation of solutions is possible for some problems. Indeed, if the objective function is at least twice continuously differentiable and has a relatively simple form, then its minimizers are attained by determining the zeros of its gradient and verifying that its Hessian matrix is positive definite at these points. Apparently, this is not possible for functions of high complexity and dimensionality, or for functions that do not fulfill the required mathematical assumptions. In the latter case, the use of algorithms that approximate the actual solution is inevitable. Such algorithms work iteratively, producing a sequence of search points that has at least one subsequence converging to the actual minimizer.
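As a small illustration of this analytic route, the following Python sketch (our example; the quadratic test function is hypothetical and chosen only for demonstration) computes the gradient and Hessian symbolically, solves for the critical points, and checks positive definiteness:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = (x - 1)**2 + 2*(y + 3)**2                 # a smooth, twice-differentiable function

grad = [sp.diff(f, v) for v in (x, y)]        # gradient of f
critical = sp.solve(grad, [x, y], dict=True)  # zeros of the gradient
H = sp.hessian(f, (x, y))                     # Hessian matrix of f

for pt in critical:
    eigs = H.subs(pt).eigenvals()
    print(pt, eigs)  # {x: 1, y: -3} {2: 1, 4: 1}: all eigenvalues positive,
                     # so the Hessian is positive definite and (1, -3) is a minimizer
```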
Optimization has been an active research field for several decades. The scientific and technological blossoming of recent years has offered a plethora of difficult optimization problems that triggered the development of more efficient algorithms. Real-world optimization suffers from the following problems (Spall, 2003):

a. Difficulties in distinguishing global from local optimal solutions.
b. Presence of noise in solution evaluation.
c. The "curse of dimensionality", i.e., exponential growth of the search space with the problem's dimension.
d. Difficulties associated with the problem's constraints.

The different nature and mathematical characteristics of optimization problems necessitated the specialization of algorithms to specific problem categories that share common properties, such as nonlinearity, convexity, differentiability, continuity, function evaluation accuracy, etc. Moreover, the inherent characteristics of each algorithm may render it more suitable either for local or for global optimization problems. Such characteristics include, among others, stochasticity, potential for parallelization on modern computer systems, and limited computational requirements.

Today, there is a rich assortment of established algorithms for most problem types. Nevertheless, even different instances of the same problem may have different computational requirements, leaving space for the development of new algorithms and the improvement of established ones. Consequently, there will be an ongoing need for new and more sophisticated ideas in optimization theory and applications.

In the next section, we put the optimization problem into a mathematical framework, which allows the distinction between different problem types, and we identify major categories of optimization algorithms, related to the topics of the book at hand.

TYPES OF OPTIMIZATION PROBLEMS

An optimization (minimization) problem can be defined mathematically in several ways, depending on the underlying application. In general, any function, f: A → Y, defined over a domain, A, also called the search space, and with range, Y, can be subjected to optimization, given a total ordering relation over Y. In the literature, the most common optimization problems consist of the minimization of functions whose domain is a subset of the n-dimensional Euclidean space, R^n, and whose range is a subset of the real numbers. Moreover, the problem may have constraints in the form of inequality relations. Thus, the minimization problem can be formally described as:

\min_{x \in A} f(x), \quad \text{subject to} \quad C_i(x) \le 0, \quad i = 1, 2, \ldots, k,   (1)

where A ⊆ R^n is a subset of the n-dimensional Euclidean space; Y ⊆ R is a subset of the real numbers; and k is the number of constraints.
The form of the constraints in relation (1) is not restrictive, since different forms can be represented equivalently as follows:

C_i(x) \ge 0 \Leftrightarrow -C_i(x) \le 0,
C_i(x) = 0 \Leftrightarrow C_i(x) \le 0 \ \text{and} \ -C_i(x) \le 0.

Relation (1) defines a constrained optimization problem, and its constraints usually restrict the search space. A point, x ∈ A, is called a feasible point if it satisfies all the constraints; the set of all feasible points of A is called the feasible set. Obviously, in constrained problems, only solutions that satisfy all constraints (or, in some cases, slightly violate them) are acceptable. In cases where constraints are absent, we have an unconstrained optimization problem. This can be defined with relation (1) by simply omitting the constraints. By definition, in unconstrained problems, the whole domain of the objective function is the feasible set.

Figure 1 depicts the contour lines of the following constrained optimization problem:

f(x, y) = x^2 + y^2, \quad \text{subject to} \quad C(x, y) = 2x - y^2 + 1 \le 0,   (2)

for (x, y) ∈ [-3, 3]^2.

[Figure 1. Contour lines of the problem defined in equation (2). The shadowed area is the region excluded from search due to the constraint.]

Although the domain of f(x, y) is the set A = [-3, 3]^2, the constraint C(x, y), which is depicted as the parabola on the right part of Fig. 1, confines the search space by excluding the shadowed region. Note that the origin, where the global minimizer of the unconstrained problem lies, becomes infeasible for the constrained problem, while the new global minimizer lies on the parabolic boundary of the constrained feasible space. Therefore, even a relatively simple constraint may increase the complexity of a problem significantly.
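Problem (2) is small enough to verify numerically. The sketch below (our illustration, not from the book) uses SciPy's SLSQP solver; since SciPy expects inequality constraints in the form g(x) ≥ 0, the constraint C(x, y) ≤ 0 is passed as -C(x, y) ≥ 0:

```python
import numpy as np
from scipy.optimize import minimize

f = lambda p: p[0]**2 + p[1]**2               # objective of problem (2)
# constraint C(x, y) = 2x - y^2 + 1 <= 0, rewritten as -C(x, y) >= 0
con = {'type': 'ineq', 'fun': lambda p: -(2*p[0] - p[1]**2 + 1)}

res = minimize(f, x0=[-1.0, 1.0], method='SLSQP',
               bounds=[(-3, 3), (-3, 3)], constraints=[con])
print(res.x, res.fun)  # approximately [-0.5, 0.0] and 0.25: the constrained
                       # minimizer lies on the parabolic boundary, not at (0, 0)
```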
Someof the most widely-used resources are (Horst & Pardalos, 1995; Horst & Tuy, 2003; Luenberger, 1989;Nocedal & Wright, 2006; Polak, 1997; Torn & Žilinskas, 1989; Zhigljavsky & Žilinskas, 2008).Figure 1. Contour lines of the problem defined in equation (2). The shadowed area is the region excludedfrom search due to the constraint • 23. 5IntroductionUsually, optimization problems are modeled with a single objective function, which remains un-changed through time. However, many significant engineering problems are modeled with one or a setof static or time-varying objective functions that need to be optimized simultaneously. These cases giverise to the following important optimization subfields:1. Dynamic optimization: It refers to the minimization of time-varying objective functions (itshould not be confused with dynamic programming). The goal in this case is to track the positionof the global minimizer as soon as it moves in the search space. Also, it aims at providing robustsolutions, i.e., solutions that will not require heavy computational costs for refinement in case ofa slight change in the objective function.2. Multiobjective optimization, also known as multiple criteria optimization: It refers to prob-lems where two or more objective functions need to be minimized concurrently. In this case, theClassificationCriterionType ofOptimization ProblemSpecialCharacteristicsForm of the objectivefunctionand/or constraintsLinear Linear objective function and constraintsNonlinear Nonlinear objective function and/or constraintsConvex Convex objective function and feasible setQuadratic Quadratic objective function and linear constraintsStochastic Noisy evaluation of the objective function or probabilisticallydetermined problem variables and/or parametersNature of the searchspaceDiscrete Discrete variables of the objective functionContinuous Real variables of the objective functionMixed integer Both real and integer variablesNature of the problem Dynamic Time-varying objective functionMultiobjective Multitude of objective functionsTable 1. A categorization of optimization problems, based on different criteriaFigure 2. The local and global minimum of f(x) = sin(x)-x/2 in the range [-10,1] • 24. 6Introductionoptimality of solutions is redefined, since global minima of different objective functions are rarelyachieved at the same minimizers.Comprehensive books on these two subfields are (Branke, 2001; Chiang, 1991; Deb, 2001; Ehrgott &Gandibleux, 2002).A different categorization can be considered with respect to the nature of the search space and prob-lem variables:1. Discrete optimization: In such problems, the variables of the objective function assume discretevalues. The special case of integer variables is referred to as integer optimization.2. Continuous optimization: All variables of the objective function assume real values.3. Mixed integer optimization: Both integer and real variables appear in the objective function.The reader can refer to (Boros & Hammer, 2003; Floudas, 1995) for specialized introductions to thesetopics.There is a plethora of methods for solving efficiently problems in most of the aforementioned catego-ries. However, many of these approaches are based on strong mathematical assumptions that do not holdin real-world applications. For example, there are very efficient deterministic algorithms for nonlinearoptimization problems; however, they require that properties such as differentiability or convexity ofthe objective function hold. 
Unfortunately, these properties are not met in many significant applica-tions. Indeed, there are problems where the objective function is not even analytically defined, with itsvalues being obtained through complex procedures, computer programs, or measurements coming fromobservation equipment.The aforementioned modeling deficiencies render the optimization problem very difficult, whilesolutions provided by mathematical approximations of the original problem are not always satisfactory.This effect gives rise to the concept of black-box optimization, where the least possible amount of in-formation regarding the underlying problem is available, while function values of prospective solutionsare obtained as the output of a hidden, complex procedure. The demand for efficient algorithms to tacklesuch problems is continuously increasing, creating a boost in relevant research.The core of the book at hand is the particle swarm optimization algorithm and its applications, ratherthan optimization itself. Thus, in the rest of the book we will concentrate on problem categories on whichparticle swarm optimization has been applied successfully, providing significant results. In general, suchproblems include global optimization of nonlinear, discontinuous objective functions, and black-boxoptimization. Also, noisy, dynamic, and multiobjective problems will be considered. Continuous non-linear optimization will be in the center of our interest; linear, discrete, and mixed integer problems willbe considered only occasionally, since there are different approaches especially suited for these problemtypes (mainly for the linear case). In the next section, we provide a rough classification of optimizationalgorithms, based on their inherent properties.CLASSIFICATION OF OPTIMIZATION ALGORITHMSIn general, two major categories of optimization algorithms can be distinguished: deterministic andstochastic algorithms. Although stochastic elements may appear in deterministic approaches to im- • 25. 7Introductionprove their performance, this rough categorization has been adopted by several authors, perhaps due tothe similar inherent properties of the algorithms in each category (Archetti & Schoen, 1984; Dixon &Szegö, 1978). Deterministic approaches are characterized by the exact reproducibility of the steps takenby the algorithm, in the same problem and initial conditions. On the other hand, stochastic approachesproduce samples of prospective solutions in the search space iteratively. Therefore, it is almost impos-sible to reproduce exactly the same sequence of samples in two distinct experiments, even in the sameinitial conditions.Deterministic approaches include grid search, covering methods, and trajectory-based methods. Gridsearch does not exploit information of previous optimization steps, but rather assesses the quality ofpoints lying on a grid over the search space. Obviously, the grid density plays a crucial role on the finaloutput of the algorithm. On the other hand, trajectory-based methods employ search points that traversetrajectories, which (hopefully) intersect the vicinity of the global minimizer. 
Finally, covering methods aim at the detection and exclusion of parts of the search space that do not contain the global minimizer. All the aforementioned approaches have well-studied theoretical backgrounds, since their operation is based on strong mathematical assumptions (Horst & Tuy, 2003; Törn & Žilinskas, 1989).

Stochastic methods include random search, clustering, and methods based on probabilistic models of the objective function. Most of these approaches produce implicit or explicit estimation models for the position of the global minimizer, which are iteratively refined through sampling, using information collected in previous steps. They can be applied even in cases where strong mathematical properties of the objective function and search space are absent, albeit at the cost of higher computational time and limited theoretical derivations.

A more refined classification of global optimization algorithms is provided by Törn and Žilinskas (1989, p. 19):

1. Methods with guaranteed accuracy:
a. Covering methods.
2. Direct search methods:
b. Generalized descent methods.
c. Clustering methods.
d. Random search methods.
3. Indirect search methods:
e. Level set approximation methods.
f. Objective function approximation methods.

Their classification criterion differs from the previous categorization; it is rather based on the guaranteed (or not) accuracy provided by each algorithm. Covering methods represent the only category with guaranteed accuracy. They embrace methods that work in the general framework of bisection approaches, where the global minimizer is iteratively bounded within smaller intervals, omitting parts of the search space that have no further interest. In order to achieve theoretically sound results on covering methods, extensive information on the search space is required. Unfortunately, this requirement is rarely met in real-world applications, while their performance usually becomes comparable with that of exhaustive search, rendering their applicability questionable (Törn & Žilinskas, 1989).

The category of direct search methods consists of algorithms heavily based on the fundamental computation element, namely the function evaluation. Generalized descent methods additionally require first-order and second-order gradient information of the objective function to produce trajectories that intersect regions of attraction of the minimizers. For this reason, the combination of trajectory-based approaches with random search schemes that provide their initial conditions is preferable for the minimization of multimodal functions. This category also includes penalty function techniques, where the objective function is modified after the detection of a local minimizer, so that a new landscape is presented to the algorithm, where the detection of a lower minimum is possible.

Clustering approaches strive to address a crucial deficiency of the generalized descent techniques. More specifically, various iterative schemes may converge to the same minimizer, although initialized at different initial positions. This effect adds significant computational cost to the algorithm and reduces its performance. Clustering aims at initiating a single local search per detected local minimizer. This is possible by producing samples of points and clustering them in order to identify the neighborhoods of local minimizers. Then, a single local search can be conducted per cluster.
Random search algorithms are based on probability distributions to produce samples of search points. In pure random search, a new sample of points is generated at each iteration of the algorithm. Obviously, output accuracy increases as the sample size approaches infinity. In practice, this approach is inefficient, since it requires a vast number of function evaluations to produce acceptable results, even for problems of moderate dimensionality. For this reason, the performance of pure random search is considered the worst acceptable performance for an algorithm. In improved variants, pure random search has been equipped with a local search procedure, applied either on the best or on several sampled points, giving rise to single-start and multistart methods, respectively.
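A minimal sketch of this idea follows, in Python. The local searcher (scipy.optimize.minimize), the sample sizes, and the function names are illustrative assumptions. It samples uniformly at random and then refines the most promising samples locally, i.e., a multistart scheme; a single-start scheme would refine only the best sample:

import numpy as np
from scipy.optimize import minimize

def multistart(f, bounds, n_samples, n_starts, rng=np.random.default_rng(0)):
    # Pure random sampling, followed by local searches from the n_starts best samples.
    lo, hi = np.array(bounds).T
    samples = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    values = np.array([f(x) for x in samples])
    best_x, best_f = None, np.inf
    for i in np.argsort(values)[:n_starts]:
        res = minimize(f, samples[i], bounds=bounds)  # local refinement
        if res.fun < best_f:
            best_x, best_f = res.x, res.fun
    return best_x, best_f

# The multimodal example of Figure 2: f(x) = sin(x) - x/2 on [-10, 1].
f = lambda x: float(np.sin(x[0]) - x[0] / 2.0)
print(multistart(f, bounds=[(-10.0, 1.0)], n_samples=200, n_starts=5))

Clustering the samples before launching the local searches, as described above, would avoid spending several refinements on the same region of attraction.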
Significant efforts towards the improvement of random search methods have led to the development of novel algorithms that draw inspiration from stochastic phenomena in nature. These algorithms are called heuristics or metaheuristics, and they simulate fundamental elements and procedures that produce evolution and intelligent behaviors in natural systems. This category of algorithms is intimately related to modern research fields, such as evolutionary computation and swarm intelligence, and it will be henceforth in the center of our attention.

The last category, indirect search, consists of methods that build models of either the objective function or its level sets by exploiting local information. In the first case, Bayesian models are employed to approximate the objective function using random variables, while in the latter, polynomial approximations are used to fit its level sets. Although these approaches are very appealing theoretically, in practice there are several implementation problems that need to be addressed to render them applicable within an algorithmic framework (Törn & Žilinskas, 1989).

In the book at hand, we focus on variants of particle swarm optimization. These approaches belong rather to the category of direct search methods. However, regardless of different classifications, there is a general feeling that the most efficient global optimization methods combine features or algorithms of different types. This will also become apparent in the rest of the book. In the next section, we provide a brief flashback to the development of evolutionary computation, which constitutes the precursor of swarm intelligence and, consequently, of particle swarm optimization.

THE DEVELOPMENT OF EVOLUTIONARY COMPUTATION

The term evolutionary computation is used to describe a category of heuristic optimization methods that lie in the intersection of global optimization with computational intelligence. Evolutionary algorithms combine elements such as stochasticity, adaptation, and learning, in order to produce intelligent optimization schemes. Such schemes can adapt to their environment through evolution, even if it changes dynamically, while exploiting information from previous search steps.

The special structure of evolutionary algorithms is based on biological principles from the Darwinian theory of species. More specifically, they assume populations of potential solutions that evolve iteratively. Evolution occurs as the outcome of special operators on the population, such as crossover (also called recombination), mutation, and selection. All these operators work in direct analogy with their corresponding biological procedures. It is precisely these procedures that, according to Darwinian theory, promoted the biological evolution of humans and animals through natural selection.

The first applications of Darwinian principles to problem solving appeared in the mid-50's in the field of machine learning, and a few years later in optimization (Friedberg, 1958; Bremermann, 1962). However, it was not until the 90's that the term "evolutionary computation" was used to describe this new and promising scientific field. In the meantime, the three major evolutionary approaches, namely evolutionary programming, genetic algorithms, and evolution strategies, were developed in Europe and the USA. After the establishment of relevant annual meetings and journals, the evolutionary research community was able to exchange knowledge and present its latest developments to scientists and engineers from different disciplines. This has led to a continuous growth of the field, as reflected in the number of published books, journals, and conferences organized all over the world.

In the following sections, we sketch the development of the three major evolutionary approaches over the last 50 years. Our aim is the exposition of basic principles and operations, also discussed later in the present chapter. A more detailed history of evolutionary algorithms is provided by De Jong et al. (1997).

Evolutionary Programming

Evolutionary programming was developed by L.J. Fogel in the mid-60's as a learning process for finite-state machines (Fogel, 1962; Fogel, 1964). More specifically, Fogel aimed at evolving an algorithm (a program) in an environment that consisted of symbols from a finite alphabet. The main goal was to produce an algorithm with traits of intelligence, i.e., the ability to predict its environment and trigger proper responses based on its predictions. This was possible by observing the sequence of passing symbols and producing an output that should predict the next symbol of the environment accurately.

A pay-off function was constructed to assess performance based on prediction quality. For this purpose, a prediction error measure (e.g., squared or absolute error) was used. Thus, a population of finite-state machines was exposed to the environment, i.e., the symbol strings observed so far. For each input symbol, a predicted output symbol was produced by each machine of the population, and its prediction quality was assessed with the pay-off function. This procedure was repeated for all input symbols, and the average pay-off value per machine was assigned as its fitness value. Then, each machine was mutated by proper stochastic operations that changed one of its fundamental elements, such as states, state transitions, output symbols, etc.

The mutated machines were exposed to the same environment and evaluated with the aforementioned procedure. Those with the highest fitness values were selected to comprise the population in the next iteration, also called generation, of the algorithm. This evolution procedure was repeated until a machine with exact predictions for all symbols was produced. Then, a new symbol was added to the known environment and the population was evolved anew. Detailed descriptions of these experiments, together with several applications, are provided by Fogel et al. (1964, 1965, 1966).
After the 80's, research on evolutionary programming matured, and a plethora of different approaches and applications appeared in the literature. The traveling salesman problem, neural networks training, scheduling, and continuous optimization are just a fraction of the problems that were addressed by evolutionary programming (Fogel, 1988; Fogel & Fogel, 1988; Fogel et al., 1990; McDonnell et al., 1992), while studies on the adaptation of its parameters were also appearing (Fogel et al., 1991). Today, there is such a huge amount of work on evolutionary programming and its relative field of genetic programming, that any attempt at a comprehensive presentation in the limited space of a book is condemned to failure. For further information, we refer the interested reader to specialized literature sources such as Eiben and Smith (2003), Fogel et al. (1966), Koza (1992), Langdon and Poli (2002).

Genetic Algorithms

Genetic algorithms were introduced by J.H. Holland in the mid-60's. By that time, Holland's research was focused on understanding adaptive systems capable of responding to environmental changes. Continuous competition and generation of offspring seemed to be the fundamental elements of long-term adaptive behaviors in natural systems. The idea of simulating these elements for designing artificial adaptive systems was very appealing. The core of Holland's ideas appeared in several dissertations of his students, establishing a new optimization methodology under the name of genetic algorithms (Bagley, 1967; Cavicchio, 1970; Hollstien, 1971). Holland also studied solution representation schemes, and he developed a theoretical framework for the analysis of evolution in adaptive systems, producing a very strong theoretical result: the schema theorem (Goldberg, 1989; Holland, 1992). A further boost to genetic algorithms research was provided by the work of De Jong (1975), who produced convincing evidence that genetic algorithms could serve as an efficient optimization algorithm in difficult problems.

Today, genetic algorithms constitute one of the most popular heuristics for global optimization. They exploit populations of candidate solutions, similarly to the rest of the evolutionary algorithms. The distinguishing characteristic of their essential variants is the binary representation of solutions, which requires a translation function between binary vectors and the actual variables of the problem. For example, in real-valued optimization problems, all real candidate solutions must be translated from and to binary vectors. In shape optimization problems, a proper representation between structures and binary vectors is required. The biological notation is retained in genetic algorithms; thus, the binary representation of a solution is called its chromosome or genotype, while the actual (e.g., real-valued) form is called its phenotype. Obviously, the size of chromosomes depends on the range of the variables and the required accuracy.
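As a concrete illustration of this genotype-phenotype translation, the following Python sketch decodes a bit string into a real value in [lo, hi] and estimates how many bits a given range and accuracy demand. The function names and the accuracy value are assumptions made for the example, not definitions from the text:

def decode(bits, lo, hi):
    # Map a binary chromosome (genotype) to a real value (phenotype) in [lo, hi].
    as_int = int("".join(map(str, bits)), 2)
    return lo + (hi - lo) * as_int / (2**len(bits) - 1)

def bits_needed(lo, hi, accuracy):
    # Chromosome length grows with the variable range and the required accuracy.
    n = 1
    while (hi - lo) / (2**n - 1) > accuracy:
        n += 1
    return n

n = bits_needed(-5.0, 5.0, accuracy=1e-3)   # 14 bits for this range and accuracy
print(n, decode([1] * n, -5.0, 5.0))        # the all-ones genotype decodes to hi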
Evolution in genetic algorithms is achieved by three fundamental genetic operators: selection, crossover, and mutation. Based on a ranking scheme that promotes the best individuals of the population, i.e., those with the lowest function values, a number of parents are selected from the current population. The parents are stochastically combined and produce offspring that carry parts of their chromosomes. Then, a stochastic mutation procedure alters one or more digits in each chromosome, and the mutated offspring are evaluated. Finally, the best among all individuals (parents and offspring) are selected to comprise the population of the next generation. The algorithm is terminated as soon as a solution of desirable quality is found or the maximum available computational budget is exhausted.

The aforementioned procedures are subject to modifications, depending on the employed variant of the algorithm. For example, if real-valued representation is used, the crossover operator can be defined as a numerical (usually convex) combination of the real vectors. A more representative description of the main genetic operators is provided in a following section. The application domain of genetic algorithms is very wide, embracing almost all engineering disciplines. Detailed presentations of genetic algorithms, as well as a plethora of references for further inquiry, can be found in Falkenauer (1997), Goldberg (2002), Michalewicz (1999), Mitchell (1996), Vose (1999).

Evolution Strategies

Evolution strategies were developed in the 60's by Rechenberg (1965), Schwefel (1965) and Bienert (1967), for solving shape optimization engineering problems. Their main difference compared to previous evolutionary approaches was the lack of a crossover operator (only mutation and selection were used). In early implementations, discrete probability distributions, such as the binomial one, were used to mutate the shape of structures. Instead of a population, a single search point was used to produce one descendant per generation. In later implementations, continuous probability distributions (e.g., Gaussians) were also considered, especially for solving numerical optimization problems (Schwefel, 1965).

The parameters of the employed distributions have a severe impact on the quality of descendants produced by the mutation operator. For this purpose, evolution strategies incorporate a self-adaptation scheme for their parameters. More recent approaches also consider independent adaptation of each solution component, adding flexibility to the algorithm. These characteristics render evolution strategies a more appealing approach for solving numerical optimization problems than the evolutionary algorithms presented in the previous sections (Beyer, 2001; Beyer & Schwefel, 2002; Hansen & Ostermeier, 2001; Schwefel, 1995).

Today, research in evolution strategies has been advanced mostly thanks to the work of several groups in Germany. The previously described scheme with one parent and one descendant is denoted as (1+1)-ES and uses only mutation. More generalized variants also apply recombination among parents, and they are denoted as:

(μ/ρ + λ)-ES or (μ/ρ, λ)-ES,

where μ stands for the total number of parents; ρ ≤ μ is the number of parents selected to generate offspring; and λ is the number of offspring. The sign "+" stands for the plus-strategy, which selects the μ best individuals among both the μ parents and the λ offspring, to comprise the population in the next generation. On the other hand, the sign "," stands for the comma-strategy, which retains the μ best among the λ offspring in the next generation. Obviously, in this case the constraint, μ < λ, must hold.
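The difference between the plus- and comma-strategies can be made concrete with a small Python sketch. It is a bare-bones scheme under simplifying assumptions: Gaussian mutation with a fixed step size sigma, no recombination, and none of the self-adaptation of strategy parameters discussed next; all names and constants are illustrative:

import numpy as np

def evolution_strategy(f, x0, mu=5, lam=20, sigma=0.3, iters=100, plus=True,
                       rng=np.random.default_rng(0)):
    # A minimal (mu + lam)-ES / (mu, lam)-ES with fixed-step Gaussian mutation.
    parents = x0 + sigma * rng.standard_normal((mu, len(x0)))
    for _ in range(iters):
        idx = rng.integers(0, mu, size=lam)               # each offspring mutates one parent
        offspring = parents[idx] + sigma * rng.standard_normal((lam, len(x0)))
        pool = np.vstack([parents, offspring]) if plus else offspring  # "+" vs ","
        fitness = np.array([f(x) for x in pool])
        parents = pool[np.argsort(fitness)[:mu]]          # keep the mu best
    return parents[0], f(parents[0])

sphere = lambda x: float(np.sum(x**2))
print(evolution_strategy(sphere, x0=np.array([3.0, -4.0]), plus=False))

Note that plus=False (the comma-strategy) only makes sense under μ < λ, exactly as stated above.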
An interesting issue in recent implementations is the mutation of strategy parameters and their use to control the statistical properties of mutation for the actual variables. A relatively recent sophisticated approach, called covariance matrix adaptation evolution strategy (Hansen & Ostermeier, 2001; Ostermeier et al., 1994), uses mutation distributions obtained through the adaptation of a covariance matrix, based on information from previous steps. This feature improves performance significantly, since it can capture the local shape of the objective function. This algorithm is typically denoted as (μ/μI, λ)-CMA. Further information on the theory and practice of evolution strategies can be found in Arnold (2002), Bäck (1996), Beyer (2001), Beyer and Schwefel (2002), Hansen and Ostermeier (2001).

FUNDAMENTAL EVOLUTIONARY OPERATIONS

A standard evolutionary algorithm can be described with the following steps:

Initialize population.
Evaluate population.
While (stopping condition not met)
  Apply selection.
  Apply crossover (recombination).
  Apply mutation.
  Evaluate generated individuals.
  Update population.
End While

Apart from the evaluation procedure, which is typically based on the objective function, the rest of the procedures and operations constitute distinguishing features of an evolutionary algorithm. A plethora of different variants per operator have been developed to enhance efficiency in specific problem types. The form of these operators usually depends on the solution representation. For example, mutation and recombination between individuals with binary representation differ significantly from their counterparts for real number representations. It is outside the scope of this book to present all available approaches. However, in the following paragraphs, we present the basic concepts, with brief discussions of their most common issues.
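A direct, hedged transcription of the loop above into Python may help fix ideas. The operators are passed in as functions precisely because their form depends on the representation, and the toy operators in the demo are illustrative assumptions only:

import random

def evolutionary_algorithm(f, init, select, crossover, mutate, generations=100):
    # Skeleton of the pseudocode above; operators are plugged in per representation.
    population = init()
    fitness = [f(x) for x in population]                 # evaluate population
    for _ in range(generations):                         # stopping condition
        parents = select(population, fitness)            # apply selection
        offspring = [mutate(c) for c in crossover(parents)]
        off_fit = [f(x) for x in offspring]              # evaluate generated individuals
        ranked = sorted(zip(population + offspring, fitness + off_fit),
                        key=lambda t: t[1])              # update population (elitist)
        population, fitness = map(list, zip(*ranked[:len(population)]))
    return population[0], fitness[0]

# Toy demo: minimize x**2 with trivially simple illustrative operators.
init = lambda: [[random.uniform(-5, 5)] for _ in range(20)]
select = lambda pop, fit: [x for x, _ in sorted(zip(pop, fit), key=lambda t: t[1])[:10]]
crossover = lambda ps: [[(a[0] + b[0]) / 2] for a, b in zip(ps, list(reversed(ps)))]
mutate = lambda x: [x[0] + random.gauss(0, 0.1)]
print(evolutionary_algorithm(lambda x: x[0] ** 2, init, select, crossover, mutate))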
Population Initialization

Population initialization is the procedure where the first generation of the population is determined within the search space. Obviously, it is highly related to the overall available information on the problem at hand. For example, if there is prior knowledge of any special characteristics of the objective function implying a region that contains the global minimizer, then it would be reasonable to assign the whole or the largest part of the population to this region. On the other hand, if there is no such information, then treating each region of the search space equivalently would be the most appropriate choice.

Favoring or prohibiting regions of the search space without special reason can slow down convergence, in the best case, or get the algorithm trapped in local minima, in the worst. Furthermore, a necessary and sufficient condition for an algorithm to achieve convergence in optimization problems that lack favorable mathematical properties is the ability to produce sequences of search points that are everywhere dense in the domain of the objective function, as stated in the following theorem.

Theorem (Zhigljavsky & Žilinskas, 2008, p. 13): Let the objective function, f(x), be continuous in the neighborhood D ⊂ R^n of a global minimizer, x*, and its feasible region, A, be compact. Then, a global minimization algorithm converges in the sense that, f(xn) → f(x*) as n → ∞, if and only if it generates a sequence of points, xi, which is everywhere dense in A.

The most common initialization procedure is the uniform dispersion of the population within the search space. Uniformity can be considered either deterministically or stochastically. In the first case, the population is initialized on the nodes of a grid that equidistantly covers the whole search space, while, in the latter and most common case, the population is produced through probabilistic sampling, following a uniform distribution over the search space. The final choice between the two approaches usually depends on the form of the specific search space.

More sophisticated initialization approaches employ deterministic algorithms and/or heuristics to provide the initial population. In this way, the local tendency of the objective function is revealed to the evolutionary algorithm in its first steps. Such an approach will be presented in a later chapter of the book at hand. According to a rule of thumb, the initialization scheme shall not be instrumental for the algorithm if there is no special information that justifies a biased assignment of the initial population.

Selection

Selection is the procedure of selecting individuals from the population to form a pool of parents that will be used to produce offspring through recombination. The criterion for selecting an individual can be either stochastic or deterministic. Nevertheless, it is always dependent on its function value. Thus, deterministic selection approaches directly select the best individuals, i.e., those with the lowest function values, while stochastic approaches perform a probabilistic selection, assigning higher probabilities to the best individuals. There are many different selection schemes reported in the literature. We will describe two of the most common schemes, namely tournament selection and roulette-wheel selection (also called fitness proportionate selection).

Tournament selection consists of a series of tournaments among a randomly selected set of individuals, adding each time one of them into the parent pool. To put it more formally, let the population, P, consist of N individuals, m be a fixed integer from the set {2, 3,…, N}, and k be the number of parents to be selected. Then, tournament selection can be described with the following pseudocode:

Do (i = 1…k)
  Choose randomly m individuals from the population P.
  Select one among the m individuals.
  Add the selected individual into the parent pool.
End Do

Deterministic tournament always selects the best one among the m individuals, i.e., the one with the lowest function value, thereby promoting elitism. On the other hand, stochastic tournament uses function values to assign selection probabilities to individuals, and then performs a probabilistic selection among them. Thus, the overall best individual (among the m) has a selection probability, p, the second best, p(1-p), the third best, p(1-p)^2, etc. The deterministic variant can be considered as a special case of the stochastic one, for p = 1. Also, the selected individual can be either replaced back into the population, and possibly reselected in the next tournament, or removed from the population.

Roulette-wheel selection associates each individual with a probability depending on its function value, similarly to stochastic tournament selection. More specifically, if we denote with fi the function value of the i-th individual, then it is assigned a probability:

pi = fi / (f1 + f2 + … + fN).

Assuming that, p0 = 0, a random number, q, is generated from a uniform distribution in the range [0,1], and the k-th individual is selected to join the parent pool, where k ∈ {1, 2,…, N} is an index such that:

p0 + p1 + … + pk-1 ≤ q < p0 + p1 + … + pk. (3)

This procedure resembles a roulette-wheel spin, with each individual occupying a portion, pi, of the wheel. Roulette-wheel selection can be described with the following pseudocode:

Assign to each individual a probability, pi.
Do (i = 1…k)
  Generate a uniformly distributed random number in [0,1].
  Find index k such that relation (3) holds.
  Add the k-th individual to the parent pool.
End Do
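Python counterparts of the two pseudocode fragments are sketched below. They are hedged in one respect: the roulette probabilities follow the formula given above, which presupposes that larger function values should be favored; for minimization one would first transform the values into a fitness measure (e.g., fmax - fi), an assumption left outside the sketch. All function names are illustrative:

import random

def tournament(population, f_values, k, m, p=1.0, rng=random.Random(0)):
    # Deterministic (p = 1) or stochastic tournament selection, with replacement.
    pool = []
    for _ in range(k):
        contestants = sorted(rng.sample(range(len(population)), m),
                             key=lambda i: f_values[i])      # best = lowest f
        for rank, i in enumerate(contestants):
            # Best wins w.p. p, second w.p. p(1-p), third w.p. p(1-p)^2, ...
            if rng.random() < p or rank == m - 1:
                pool.append(population[i])
                break
    return pool

def roulette(population, fitness, k, rng=random.Random(0)):
    # Fitness-proportionate selection implementing relation (3).
    total = sum(fitness)
    pool = []
    for _ in range(k):
        q, acc = rng.random(), 0.0
        for x, fi in zip(population, fitness):
            acc += fi / total        # partial sums bracket q, as in relation (3)
            if q < acc:
                pool.append(x)
                break
        else:
            pool.append(population[-1])   # guard against floating-point round-off
    return pool

pop, fit = ["a", "b", "c", "d"], [4.0, 3.0, 2.0, 1.0]
print(tournament(pop, fit, k=3, m=2))
print(roulette(pop, fit, k=3))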
Obviously, roulette-wheel selection offers a high chance of survival to less fit individuals. A thorough discussion and analysis of selection schemes can be found in Baker (1985), Blickle and Thiele (1995), Goldberg and Klösener (1991), Michalewicz (1999).

Crossover or Recombination

Crossover or recombination is the procedure of recombining the information carried by two individuals to produce new offspring. This is in direct analogy to biological reproduction, where DNA sequences of parents are mixed to produce offspring DNA sequences that combine their genetic information. There are different forms of crossover schemes for different representations of the solutions. Since the binary representation of genetic algorithms closely resembles its natural counterpart, we will focus on it in the rest of this section.

Let, p = {p1, p2,…, pn} and q = {q1, q2,…, qn}, be two n-dimensional binary parent vectors, selected randomly from the parent pool generated by the selection procedure. Then, a crossover point, k ∈ {1, 2,…, n-1}, is defined, and each parent is divided in two parts that are recombined to produce two offspring, o1 = {p1, p2,…, pk, qk+1, qk+2,…, qn} and o2 = {q1, q2,…, qk, pk+1, pk+2,…, pn}. If we denote with the symbols "⊗" and "⊕" a bit of information of the two parents, p and q, respectively (thus ⊗ and ⊕ can be either 0 or 1), then crossover can be represented schematically as follows:

parent p: ⊗⊗⊗⊗⊗⊗ | ⊗⊗⊗⊗
parent q: ⊕⊕⊕⊕⊕⊕ | ⊕⊕⊕⊕
- - - - - - - - - - - - - - - - - - - - - - - - - -
offspring o1: ⊗⊗⊗⊗⊗⊗ | ⊕⊕⊕⊕
offspring o2: ⊕⊕⊕⊕⊕⊕ | ⊗⊗⊗⊗

This procedure is also called one-point crossover, as it uses a single crossover point. Similarly, we can have 2-point crossover, where two crossover points are used:

parent p: ⊗⊗⊗ | ⊗⊗⊗ | ⊗⊗⊗⊗
parent q: ⊕⊕⊕ | ⊕⊕⊕ | ⊕⊕⊕⊕
- - - - - - - - - - - - - - - - - - - - - - - - - - -
offspring o1: ⊗⊗⊗ | ⊕⊕⊕ | ⊗⊗⊗⊗
offspring o2: ⊕⊕⊕ | ⊗⊗⊗ | ⊕⊕⊕⊕

In general, we can have an arbitrary number of crossover points, producing multi-point crossover schemes.

In all the aforementioned schemes, the dimension of the parents is inherited by their offspring. Thus, recombining n-dimensional parents will produce n-dimensional offspring, which is desirable in most numerical optimization problems. However, there are applications where the dimensionality of candidate solutions may not necessarily be fixed. In such cases, the crossover point may differ between the two parents, producing offspring of different dimensionality:

parent p: ⊗⊗⊗ | ⊗⊗⊗⊗⊗⊗⊗
parent q: ⊕⊕⊕⊕⊕⊕ | ⊕⊕⊕⊕
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
offspring o1: ⊗⊗⊗ | ⊕⊕⊕⊕
offspring o2: ⊕⊕⊕⊕⊕⊕ | ⊗⊗⊗⊗⊗⊗⊗

This approach is called cut-and-splice crossover. Another common scheme is uniform crossover, where each bit is independently compared between the two parents and switched with a probability equal to 0.5.
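The binary schemes above translate almost line-for-line into code. The following Python sketch (the function names and the ten-bit demo parents are illustrative assumptions) implements one-point and uniform crossover:

import random

def one_point_crossover(p, q, rng=random.Random(0)):
    # Swap the tails of two equal-length parents at a random crossover point k.
    k = rng.randint(1, len(p) - 1)          # k in {1, ..., n-1}
    return p[:k] + q[k:], q[:k] + p[k:]

def uniform_crossover(p, q, rng=random.Random(0)):
    # Exchange each bit independently with probability 0.5.
    pairs = [(a, b) if rng.random() < 0.5 else (b, a) for a, b in zip(p, q)]
    o1, o2 = zip(*pairs)
    return list(o1), list(o2)

p, q = [0] * 10, [1] * 10                   # the ⊗-parent and ⊕-parent of the diagrams
print(one_point_crossover(p, q))
print(uniform_crossover(p, q))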
The presented crossover schemes can also be applied to real-valued cases by recombining the real components of the parent vectors. However, arithmetic recombination schemes are usually preferred in such cases. According to this scheme, a real number, a, is randomly (and uniformly) drawn from the range (0,1), and the two parent vectors, p and q, are recombined through a convex linear combination, producing two offspring:

offspring o1: o1 = a p + (1-a) q
offspring o2: o2 = (1-a) p + a q

Different weights can also be used for the two parents in the linear combination. Also, all the presented crossover and recombination schemes can use more than two parents and produce more than two offspring. Further information on crossover and recombination can be found in Liepins and Vose (1992), Michalewicz (1999), Rowe et al. (2002), Vose (1999).

Mutation

Mutation is a fundamental biological operation. It enables organisms to change one or more biological properties radically, in order to fit an environmental change or continue their evolution by producing offspring with higher chances of survival. In nature, mutation constitutes an abrupt change in the genotype of an organism, and it can be either inherited from parents by children or acquired by the organism itself.

A DNA mutation may result in the modification of small part(s) of the DNA sequence or rather big sections of a chromosome. Its effect on the organism depends heavily on the mutated genes. While mutations of less significant genes have small positive or negative effects, there are mutations that trigger radical changes in the behavior of several genes. These mutations alter genes that control the activation of other genes. Thus, they have a crucial impact that can possibly affect the whole structure of the organism.

Modeling its biological counterpart, mutation in evolutionary algorithms constitutes a means for retaining diversity in the population, hindering individuals from becoming very similar; an undesired behavior that leads to search stagnation and entrapment in local minima. On the other hand, mutation strength shall be balanced enough to permit convergence. For this purpose, mutation is applied with a probability on each component of an individual. If crossover is used, mutation is usually applied on the generated offspring. Otherwise, it can be applied directly on the actual population.

In binary representations, random bit flips with a prespecified probability, called the mutation rate, are the most common mutation scheme. Let the population consist of N individuals, P = {p1, p2,…, pN}, with pi = {pi1, pi2,…, pin}, and pij ∈ {0,1}, for all i = 1, 2,…, N, and j = 1, 2,…, n. Also, let, a ∈ [0,1], be a prespecified mutation rate. Then, mutation can be described with the following pseudocode:

Do (i = 1…N)
  Do (j = 1…n)
    Generate a uniformly distributed random number, r ∈ [0,1].
    If (r < a) Then
      Mutate the component pij.
    End If
  End Do
End Do

Although the standard mutation scheme in binary representations consists of the aforementioned simple bit flip, arithmetic mutation has also been used in several applications. According to this, binary arithmetic operations are applied on the component(s) of the mutated vector.

In real-valued representations, mutation can be defined as the replacement of a vector component with a random number distributed over its corresponding range. In numerical optimization problems, probabilistic schemes are frequently used. These schemes alter an individual by adding a random vector drawn from a probability distribution. The Gaussian and uniform distributions are the most common choices in such mutations.
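Both flavors fit in a few lines of Python; in this hedged sketch (the parameter values are illustrative assumptions), the first function is the binary bit-flip scheme of the pseudocode above, and the second is a Gaussian perturbation for real-valued vectors:

import random

def bit_flip_mutation(chromosome, rate, rng=random.Random(0)):
    # Flip each bit independently with probability `rate` (the mutation rate a).
    return [1 - b if rng.random() < rate else b for b in chromosome]

def gaussian_mutation(x, sigma, rate=1.0, rng=random.Random(0)):
    # Real-valued mutation: add N(0, sigma^2) noise to each component w.p. `rate`.
    return [xi + rng.gauss(0.0, sigma) if rng.random() < rate else xi for xi in x]

print(bit_flip_mutation([0, 1, 1, 0, 1, 0, 0, 1], rate=0.1))
print(gaussian_mutation([2.5, -1.0, 0.3], sigma=0.2))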
Further information on mutation operators can be found in Bäck (1996), Goldberg (1989), Kjellström (1991), Michalewicz (1999), Mitchell (1996), Motwani and Raghavan (1995), Mühlenbein and Schlierkamp-Voosen (1995).

SWARM INTELLIGENCE

Swarm intelligence is a branch of artificial intelligence that studies the collective behavior and emergent properties of complex, self-organized, decentralized systems with social structure. Such systems consist of simple interacting agents organized in small societies (swarms). Although each agent has a very limited action space and there is no central control, the aggregated behavior of the whole swarm exhibits traits of intelligence, i.e., an ability to react to environmental changes and decision-making capacities.

The main inspiration behind the development of swarm intelligence stems directly from nature. Fish schools, bird flocks, ant colonies, and animal herds, with their amazing self-organization capabilities and reactions, produce collective behaviors that cannot be described simply by aggregating the behavior of each team member. This observation has stimulated scientific curiosity regarding the underlying rules that produce these behaviors. The study of rules and procedures that promote intelligent behavior and pattern emergence through collaboration and competition among individuals gave rise to the fields of collective intelligence and emergence. Human teams have also been shown to share many of these properties, rendering collective intelligence and emergence inter-disciplinary scientific fields that intersect, among others, with mathematics, sociology, computer science, and biology (Goldstein, 1999; Holland, 1998; Lévy, 1999; Szuba, 2001).

In the global optimization framework, swarm intelligence appeared in 1989 as a set of algorithms for controlling robotic swarms (Beni & Wang, 1989). Then, within six years, the three main swarm intelligence optimization algorithms, namely ant colony optimization, stochastic diffusion search, and particle swarm optimization, were developed. Although there are philosophical and operational differences between evolutionary and swarm intelligence algorithms, they were all categorized as evolutionary computation approaches in the mid-90's. This binding was made due to their inherent similarities, such as stochasticity, use of populations, types of application fields, as well as the scientific audience that was primarily interested in these approaches. Thus, swarm intelligence papers were mainly hosted in special sessions on evolutionary algorithms and published in international scientific journals under the topic of evolutionary computation.

A few years later, there was an exponential increase in the number of works related to swarm intelligence, which made its autonomous presentation indispensable. Today, there are specialized symposia and a large number of special sessions devoted to the latest developments in swarm intelligence. The increasing number of journal papers, as well as the establishment of new journals, reveal the extensive interest of the scientific community in swarm intelligence, whose fundamental algorithms are briefly described in the following sections.

Ant Colony Optimization

Dorigo (1992) introduced a stochastic optimization algorithm for combinatorial problems. The algorithm was inspired by the behavior of ants in search of food, and was named ant colony optimization (Bonabeau et al., 1999; Dorigo & Stützle, 2004).
Its workings were heavily based on the concept of stigmergy, i.e., the indirect stimulation of an agent's action by traces previously left in the environment by other agents. In natural ant colonies, stigmergy arises due to pheromones laid on the ground by the ants during their search for food. Initially, ants perform a random search around their nest for food. As soon as a food source is found, they bring food back to their nest, laying pheromones on the ground. The shortest path between the nest and the food source carries more pheromone than the rest of the paths, due to the more frequent passage of ants and pheromone evaporation. Thus, an ant starting its route from the nest will be strongly stimulated by higher pheromone levels and follow the shortest path to the food source, also laying its own pheromone on it. This is a simple yet efficient way of nature for solving shortest-path problems.

A very similar procedure is followed by the ant colony optimization algorithm to solve combinatorial optimization problems. Artificial ants start from an initial search point (nest) and build the components of a new potential solution one-by-one. For each component, a probabilistic decision is made among alternatives. Each alternative carries a pheromone level that determines its selection probability. The pheromone of each selected alternative is updated at the end of the tour, based on the quality of the obtained solution, so that components of solutions with low function values (shortest route length) are assigned higher pheromone levels. In order to avoid rapid biasing towards a suboptimal route, pheromone evaporation also takes place. A plethora of different approaches that promote elitism within the swarm or use special pheromone update schemes have been proposed in the literature. Comprehensive presentations of the ant colony optimization algorithm can be found in Bonabeau et al. (1999), Dorigo and Stützle (2004).
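The two core mechanisms, pheromone-proportional choice and evaporation-plus-reinforcement, can be sketched as follows. This is a deliberately simplified Python model that omits the heuristic desirability term used by most practical ACO variants; all names and constants are assumptions made for the example:

import random

def choose(alternatives, pheromone, rng=random.Random(0)):
    # Pick an alternative with probability proportional to its pheromone level.
    total = sum(pheromone[a] for a in alternatives)
    q, acc = rng.random() * total, 0.0
    for a in alternatives:
        acc += pheromone[a]
        if q < acc:
            return a
    return alternatives[-1]                # floating-point round-off guard

def update(pheromone, tour, tour_length, rho=0.1, Q=1.0):
    # Evaporate all trails, then reinforce the components of the finished tour.
    for a in pheromone:
        pheromone[a] *= (1.0 - rho)        # evaporation avoids premature biasing
    for a in tour:
        pheromone[a] += Q / tour_length    # shorter tours deposit more pheromone

pheromone = {"edge1": 1.0, "edge2": 1.0, "edge3": 1.0}
tour = [choose(list(pheromone), pheromone)]
update(pheromone, tour, tour_length=5.0)
print(pheromone)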
Stochastic Diffusion Search

Stochastic diffusion search was initially introduced by Bishop (1989) as a search heuristic for pattern matching. Similarly to ant colony optimization, it uses populations of communicating individuals to perform stochastic search. However, it uses a special communication scheme, where information is exchanged through one-to-one direct communication between individuals. Candidate solutions are partially evaluated by each individual, and the information is diffused to other individuals through the aforementioned communication scheme. Thus, promising solutions are associated with portions of the swarm that carry their (partial) information.

The special partial evaluation scheme of stochastic diffusion search has proved very useful in cases where the objective function is computationally expensive and decomposable, i.e., the complete evaluation can be done in several consecutive independent parts. Obviously, this property provides extensive synchronous and asynchronous parallelization capabilities to the algorithm.

The application field of stochastic diffusion search is wide, including, among others, text and object recognition (Bishop, 1989; Bishop & Torr, 1992), robotics (Beattie & Bishop, 1998), face recognition (Grech-Cini & McKee, 1993), and wireless networks (Whitaker & Hurley, 2002). Also, a sound mathematical framework has been developed for its theoretical analysis (Nasuto, 1999; Nasuto & Bishop, 1999; Myatt et al., 2004).

Particle Swarm Optimization

Particle swarm optimization was developed by Kennedy and Eberhart (1995) as a stochastic optimization algorithm based on social simulation models. The algorithm employs a population of search points that moves stochastically in the search space. Concurrently, the best position ever attained by each individual, also called its experience, is retained in memory. This experience is then communicated to part or the whole population, biasing its movement towards the most promising regions detected so far. The communication scheme is determined by a fixed or adaptive social network that plays a crucial role in the convergence properties of the algorithm.

The development of particle swarm optimization was based on concepts and rules that govern socially organized populations in nature, such as bird flocks, fish schools, and animal herds. Unlike the ant colony approach, where stigmergy is the main communication mechanism among individuals through their environment, in such systems communication is rather direct, without altering the environment.

The book at hand is completely devoted to the particle swarm optimization algorithm and its applications. In the following chapters, we will present its standard concepts and variants, as well as modifications that enhance its performance, in detail, while also discussing various applications. Thus, we postpone our further consideration of particle swarm optimization until the next chapter, which is completely devoted to its detailed description.

THE NO-FREE-LUNCH THEOREM

Wolpert and Macready (1997) developed one of the strongest theoretical results in optimization, namely the no-free-lunch theorem. Its main conclusion can be summarized with the statement:

Any two algorithms are equivalent in terms of their performance when it is averaged over all problem instances and metrics.

Alternatively, we can say that if an algorithm A outperforms algorithm B for some problems, then there is exactly the same number of problems where B outperforms A. The theorem holds for finite search spaces and algorithms that do not re-evaluate sampled points. Under these assumptions, and considering an algorithm as a mapping between an objective function (input) and its sequence of function evaluations on the sampled points (output), there is no free lunch if and only if the distribution of objective functions remains invariant under permutations of the space of sampled solutions (Igel & Toussaint, 2004; Wolpert & Macready, 1997).

There is an ongoing discussion regarding the applicability of the theorem in practice (Droste et al., 2002). Although its main assumptions do not hold precisely in practice, they can be verified approximately. Nevertheless, even if it holds only approximately, its main conclusion remains significant, yielding that the special characteristics of each problem are the key elements for selecting the most suitable algorithm. Therefore, effort shall be devoted to revealing these elements; otherwise, the choice of algorithm will most probably be suboptimal.

Under this prism, there is a strong merit in developing new algorithms that take full advantage of the special characteristics of a given problem, or in fitting the established approaches to specific classes of problems.
An extensive archive of developments on the no-free-lunch theorem is maintained at a dedicated website.

SYNOPSIS

This chapter introduces the reader to the basic concepts of optimization and provides a brief presentation of the major developments in fields relevant to its main topic, namely the particle swarm optimization algorithm. Short introductions were provided for global optimization problems and algorithms. The main concepts and operations of evolutionary algorithms were briefly discussed along with the most common approaches, providing a background on nature-inspired computation. Swarm intelligence complemented these ideas by considering natural systems at a higher level (self-organization, communication) rather than their building blocks (genes, biological evolution). The final section of the chapter justifies the interest and necessity for further research on more efficient and specialized algorithms.

The chapter did not aim at covering all presented topics in detail. This task would require several books to even enumerate all results per topic. For this purpose, selected references were provided to stimulate the non-expert reader, providing a thread for further inquiry. The next chapter is devoted to the detailed presentation of the particle swarm optimization algorithm, from early precursors to its contemporary variants.

REFERENCES

Archetti, F., & Schoen, F. (1984). A survey on the global optimization problem: General theory and computational approaches. Annals of Operations Research, 1(1), 87–110. doi:10.1007/BF01876141

Arnold, D. V. (2002). Noisy optimization with evolution strategies. Berlin: Springer.

Bäck, T. (1996). Evolutionary algorithms in theory and practice: Evolution strategies, evolutionary programming, genetic algorithms. UK: Oxford University Press.

Bagley, J. D. (1967). The behavior of adaptive systems which employ genetic and correlation algorithms. Ph.D. thesis, University of Michigan, USA.

Baker, J. E. (1985). Adaptive selection methods for genetic algorithms. In Proceedings of the 1st International Conference on Genetic Algorithms and their Applications, Pittsburgh (PA), USA (pp. 101–111).

Beattie, P. D., & Bishop, J. M. (1998). Self-localisation in the "Senario" autonomous wheelchair. Journal of Intelligent & Robotic Systems, 22, 255–267. doi:10.1023/A:1008033229660

Beni, G., & Wang, J. (1989). Swarm intelligence in cellular robotic systems. In P. Dario, G. Sandini & P. Aebischer (Eds.), Robotics and biological systems: Towards a new bionics, NATO ASI Series, Series F: Computer and System Science, Vol. 102 (pp. 703–712).

Beyer, H.-G. (2001). The theory of evolution strategies. Berlin: Springer.

Beyer, H.-G., & Schwefel, H.-P. (2002). Evolution strategies: A comprehensive introduction. Natural Computing, 1(1), 3–52. doi:10.1023/A:1015059928466

Bienert, P. (1967). Aufbau einer Optimierungsautomatik für drei Parameter. Dipl.-Ing. thesis, Technical University of Berlin, Institute of Measurement and Control Technology, Germany.

Bishop, J. M. (1989). Stochastic searching network. In Proceedings of the 1st IEE Conference on Artificial Neural Networks, London, UK (pp. 329–331).

Bishop, J. M., & Torr, P. (1992). The stochastic search network. In R. Linggard, D.J. Myers, C. Nightingale (Eds.), Neural networks for images, speech and natural language (pp. 370–387). New York: Chapman & Hall.

Blickle, T., & Thiele, L. (1995). A comparison of selection schemes used in genetic algorithms (Tech. Rep. 11). Zürich, Switzerland: Swiss Federal Institute of Technology.

Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial systems. UK: Oxford University Press.
Boros, E., & Hammer, P. L. (Eds.). (2003). Discrete optimization: The state of the art. Amsterdam: Elsevier Science.

Branke, J. (2002). Evolutionary optimization in dynamic environments. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Bremermann, H. J. (1962). Optimization through evolution and recombination. In M.C. Yovits, G.T. Jacobi, & G.D. Goldstein (Eds.), Self-organizing systems 1962 (Proceedings of the conference on self-organizing systems, Chicago, Illinois). Washington, DC: Spartan Books.

Cavicchio, D. J. (1970). Adaptive search using simulated evolution. Ph.D. thesis, University of Michigan, USA.

Chiang, A. C. (1991). Elements of dynamic optimization. Prospect Heights, IL: Waveland Press.

De Jong, K., Fogel, D. B., & Schwefel, H.-P. (1997). A history of evolutionary computation. In T. Bäck, D.B. Fogel, & Z. Michalewicz (Eds.), Handbook of evolutionary computation (pp. A2.3:1–12). New York: IOP Press.

De Jong, K. A. (1975). Analysis of behavior of a class of genetic adaptive systems. Ph.D. thesis, University of Michigan, USA.

Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. Chichester, UK: John Wiley & Sons.

Dixon, L. C. W., & Szegö, G. P. (1978). The global optimization problem: An introduction. In L.C.W. Dixon & G.P. Szegö (Eds.), Towards global optimization 2 (pp. 1–15). Amsterdam: North-Holland.

Dorigo, M. (1992). Optimization, learning and natural algorithms. Ph.D. thesis, Politecnico di Milano, Italy.

Dorigo, M., & Stützle, T. (2004). Ant colony optimization. Cambridge, MA: MIT Press.

Droste, S., Jansen, T., & Wegener, I. (2002). Optimization with randomized search heuristics: The (A)NFL theorem, realistic scenarios, and difficult functions. Theoretical Computer Science, 287(1), 131–144. doi:10.1016/S0304-3975(02)00094-4

Ehrgott, M., & Gandibleux, X. (Eds.). (2002). Multiple criteria optimization: State of the art annotated bibliographic surveys. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Eiben, A. E., & Smith, J. E. (2003). Introduction to evolutionary computing. Berlin, Heidelberg: Springer.

Falkenauer, E. (1997). Genetic algorithms and grouping problems. Chichester, UK: John Wiley & Sons Ltd.

Floudas, C. A. (1995). Nonlinear and mixed-integer optimization: Fundamentals and applications. New York: Oxford University Press.

Fogel, D. B. (1988). An evolutionary approach to the travelling salesman problem. Biological Cybernetics, 60(2), 139–144. doi:10.1007/BF00202901

Fogel, D. B., & Fogel, L. J. (1988). Route optimization through evolutionary programming. In Proceedings of the 22nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove (CA), USA (Vol. 2, pp. 679–680).

Fogel, D. B., Fogel, L. J., & Atmar, J. W. (1991). Meta-evolutionary programming. In R.R. Chen (Ed.), Proceedings of the 25th Asilomar Conference on Signals, Systems and Computers, Pacific Grove (CA), USA (Vol. 1, pp. 540–545).

Fogel, D. B., Fogel, L. J., & Porto, V. W. (1990). Evolving neural networks. Biological Cybernetics, 63(6), 487–493. doi:10.1007/BF00199581

Fogel, L. J. (1962). Autonomous automata. Industrial Research, 4, 14–19.

Fogel, L. J. (1964). On the organization of intellect. Ph.D. thesis, University of California at Los Angeles, USA.

Fogel, L. J., Owens, A. J., & Walsh, M. J. (1964). On the evolution of artificial intelligence. In Proceedings of the 5th National Symposium on Human Factors and Electronics, San Diego (CA), USA (pp. 63–76).
Fogel, L. J., Owens, A. J., & Walsh, M. J. (1965). Artificial intelligence through a simulation of evolution. In A. Callahan, M. Maxfield, & L.J. Fogel (Eds.), Biophysics and cybernetic systems (pp. 131–156). Washington, DC: Spartan Books.

Fogel, L. J., Owens, A. J., & Walsh, M. J. (1966). Artificial intelligence through simulated evolution. New York: Wiley.

Friedberg, R. M. (1958). A learning machine: Part I. IBM Journal, 2, 2–13.

Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley.

Goldberg, D. E. (2002). The design of innovation: Lessons from and for competent genetic algorithms. Reading, MA: Addison-Wesley.

Goldberg, D. E., & Klösener, K. H. (1991). A comparative analysis of selection schemes used in genetic algorithms. In G. Rawlins (Ed.), Foundations of genetic algorithms (pp. 69–93). San Mateo, CA: Kaufmann.

Goldstein, J. (1999). Emergence as a construct: History and issues. Emergence: Complexity and Organization, 1, 49–72.

Grech-Cini, H. J., & McKee, G. T. (1993). Locating the mouth region in images of human faces. In P.S. Schenker (Ed.), Proceedings of SPIE - The International Society for Optical Engineering, Sensor Fusion, Boston (MA), USA (Vol. 2059, pp. 458–465).

Hansen, N., & Ostermeier, A. (2001). Completely derandomized self-adaptation in evolution strategies. Evolutionary Computation, 9(2), 159–195.

Holland, J. H. (1992). Adaptation in natural and artificial systems. Cambridge, MA: MIT Press.

Holland, J. H. (1998). Emergence from chaos to order. New York: Perseus Books.

Hollstien, R. B. (1971). Artificial genetic adaptation in computer control systems. Ph.D. thesis, University of Michigan, USA.

Horst, R., & Pardalos, P. M. (Eds.). (1995). Handbook of global optimization. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Horst, R., & Tuy, H. (2003). Global optimization: Deterministic approaches. Berlin: Springer-Verlag.

Igel, C., & Toussaint, M. (2004). A no-free-lunch theorem for non-uniform distributions of target functions. Journal of Mathematical Modelling and Algorithms, 3, 313–322. doi:10.1023/B:JMMA.0000049381.24625.f7

Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia (pp. 1942–1948).

Kjellström, G. (1991). On the efficiency of Gaussian adaptation. Journal of Optimization Theory and Applications, 71(3), 589–597. doi:10.1007/BF00941405

Koza, J. R. (1992). Genetic programming: On the programming of computers by means of natural selection. Cambridge, MA: MIT Press.

Langdon, W. B., & Poli, R. (2002). Foundations of genetic programming. Berlin: Springer.

Lévy, P. (1999). Collective intelligence: Mankind's emerging world in cyberspace. New York: Perseus Books.

Liepins, G., & Vose, M. (1992). Characterizing crossover in genetic algorithms. Annals of Mathematics and Artificial Intelligence, 5, 27–34. doi:10.1007/BF01530778

Luenberger, D. G. (1989). Linear and nonlinear programming. Reading, MA: Addison-Wesley.

McDonnell, J. R., Andersen, B. D., Page, W. C., & Pin, F. (1992). Mobile manipulator configuration optimization using evolutionary programming. In D.B. Fogel & W. Atmar (Eds.), Proceedings of the 1st Annual Conference on Evolutionary Programming, La Jolla (CA), USA (pp. 52–62).

Michalewicz, Z. (1999). Genetic algorithms + data structures = evolution programs. Berlin: Springer.

Mitchell, M. (1996). An introduction to genetic algorithms. Cambridge, MA: MIT Press.

Motwani, R., & Raghavan, P. (1995). Randomized algorithms. Cambridge University Press.
Mühlenbein, H., & Schlierkamp-Voosen, D. (1995). Analysis of selection, mutation and recombination in genetic algorithms. In W. Banzhaf & F.H. Eeckman (Eds.), Evolution as a computational process, Lecture Notes in Computer Science, Vol. 899 (pp. 142–168). Berlin: Springer.

Myatt, D. M., Bishop, J. M., & Nasuto, S. J. (2004). Minimum stable convergence criteria for stochastic diffusion search. Electronics Letters, 40(2), 112–113. doi:10.1049/el:20040096

Nasuto, S. J. (1999). Analysis of resource allocation of stochastic diffusion search. Ph.D. thesis, University of Reading, UK.

Nasuto, S. J., & Bishop, J. M. (1999). Convergence analysis of stochastic diffusion search. Journal of Parallel Algorithms and Applications, 14(2), 89–107.

Nocedal, J., & Wright, S. J. (2006). Numerical optimization. New York: Springer.

Ostermeier, A., Gawelczyk, A., & Hansen, N. (1994). A derandomized approach to self-adaptation of evolution strategies. Evolutionary Computation, 2(4), 369–380. doi:10.1162/evco.1994.2.4.369

Polak, E. (1997). Optimization: Algorithms and consistent approximations. New York: Springer.

Rechenberg, I. (1965). Cybernetic solution path of an experimental problem. Royal Aircraft Establishment Library, Translation 1122.

Rowe, J. E., Vose, M. D., & Wright, A. H. (2002). Group properties of crossover and mutation. Evolutionary Computation, 10(2), 151–184. doi:10.1162/106365602320169839

Schwefel, H.-P. (1965). Kybernetische Evolution als Strategie der experimentellen Forschung in der Strömungstechnik. Dipl.-Ing. thesis, Technical University of Berlin, Hermann Föttinger Institute for Hydrodynamics, Germany.

Schwefel, H.-P. (1995). Evolution and optimum seeking. New York: Wiley & Sons.

Spall, J. C. (2003). Introduction to stochastic search and optimization. Hoboken, NJ: John Wiley & Sons Ltd.

Szuba, T. (2001). Computational collective intelligence. New York: Wiley & Sons.

Törn, A., & Žilinskas, A. (1989). Global optimization. Berlin: Springer.

Vose, M. D. (1999). The simple genetic algorithm: Foundations and theory. Cambridge, MA: MIT Press.

Whitaker, R. M., & Hurley, S. (2002). An agent based approach to site selection for wireless networks. In Proceedings of the 2002 ACM Symposium on Applied Computing, Madrid, Spain (pp. 574–577).

Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1), 67–82. doi:10.1109/4235.585893

Zhigljavsky, A., & Žilinskas, A. (2008). Stochastic global optimization. New York: Springer.

Chapter 2
Particle Swarm Optimization

This chapter is devoted to particle swarm optimization (PSO), from early precursors to contemporary standard variants. The presentation begins with the main inspiration source behind its development, followed by early variants and a discussion of their parameters. Severe deficiencies of the early variants are also pointed out, and their solutions are reported in relative historical order, bringing the reader to contemporary developments, considered as the state-of-the-art PSO variants today.

MAIN INSPIRATION SOURCE

Bird flocks, fish schools, and animal herds constitute representative examples of natural systems where aggregated behaviors are met, producing impressive, collision-free, synchronized moves.
In such systems, the behavior of each group member is based on simple inherent responses, although their outcome is rather complex from a macroscopic point of view. For example, the flight of a bird flock can be simulated with relative accuracy by simply maintaining a target distance between each bird and its immediate neighbors. This distance may depend on its size and desirable behavior. For instance, fish retain a greater mutual distance when swimming carefree, while they concentrate in very dense groups in the presence of predators. The groups can also react to external threats by rapidly changing their form, breaking into smaller parts and re-uniting, demonstrating a remarkable ability to respond collectively to external stimuli in order to preserve personal integrity.

Similar phenomena are observed in physical systems. A typical example is the particle aggregation caused by direct attraction between particles due to Brownian motion or fluid shear. Humans too are characterized by agnate behaviors, especially at the level of social organization and belief formulation. However, these interactions can become very complex, especially in the belief space, where, in contrast to the physical space, the same point (a belief or an idea) can be occupied concurrently by large groups of people without collisions. The aforementioned aggregating behaviors, characterized by the simplicity of animal and physical systems or the abstractness of human social behavior, intrigued researchers and motivated their further investigation through extensive experimentation and simulations (Heppner & Grenander, 1990; Reynolds, 1987; Wilson, 1975).

Intense research in systems where collective phenomena are met prepared the ground for the development of swarm intelligence, briefly described in the previous chapter. Notwithstanding their physical or structural differences, such systems share common properties, recognized as the five basic principles of swarm intelligence (Millonas, 1994):

1. Proximity: Ability to perform space and time computations.
2. Quality: Ability to respond to environmental quality factors.
3. Diverse response: Ability to produce a plurality of different responses.
4. Stability: Ability to retain robust behaviors under mild environmental changes.
5. Adaptability: Ability to change behavior when it is dictated by external factors.

Moreover, the social sharing of information among individuals in a population can provide an evolutionary advantage. This general belief, which was suggested in several studies and supported by numerous examples from nature, constituted the core idea behind the development of PSO.

EARLY VARIANTS OF PSO

The early precursors of PSO were simulators of social behavior for visualizing bird flocks. Nearest-neighbor velocity matching and acceleration by distance were the main rules employed to produce swarming behavior by simple agents in their search for food, in simulation experiments conducted by Russell C. Eberhart (Purdue School of Engineering and Technology, Indiana University Purdue University Indianapolis) and James Kennedy (Bureau of Labor Statistics, Washington, DC). After realizing the potential of these simulation models to perform optimization, Eberhart and Kennedy refined their model and published the first version of PSO in 1995 (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995).

Putting it in a mathematical framework, let, A ⊂ R^n, be the search space, and, f: A → Y ⊆ R, be the objective function.
In order to keep descriptions as simple as possible, we assume that A is also the feasible space of the problem at hand, i.e., there are no further explicit constraints posed on the candidate solutions. Also, note that no additional assumptions are required regarding the form of the objective function and search space. As mentioned in the previous chapter, PSO is a population-based algorithm, i.e., it exploits a population of potential solutions to probe the search space concurrently. The population is called the swarm and its individuals are called particles; this nomenclature is retained from similar models in the social sciences and particle physics. The swarm is defined as a set:

S = {x1, x2,…, xN},

of N particles (candidate solutions), defined as:

xi = (xi1, xi2,…, xin)^T ∈ A,  i = 1, 2,…, N.

Indices are arbitrarily assigned to particles, while N is a user-defined parameter of the algorithm. The objective function, f(x), is assumed to be available for all points in A. Thus, each particle has a unique function value, fi = f(xi) ∈ Y.

The particles are assumed to move within the search space, A, iteratively. This is possible by adjusting their position using a proper position shift, called velocity, and denoted as:

vi = (vi1, vi2,…, vin)^T,  i = 1, 2,…, N.

Velocity is also adapted iteratively to render particles capable of potentially visiting any region of A. If t denotes the iteration counter, then the current position of the i-th particle and its velocity will henceforth be denoted as xi(t) and vi(t), respectively.

Velocity is updated based on information obtained in previous steps of the algorithm. This is implemented in terms of a memory, where each particle stores the best position it has ever visited during its search. For this purpose, besides the swarm, S, which contains the current positions of the particles, PSO also maintains a memory set:

P = {p1, p2,…, pN},

which contains the best positions:

pi = (pi1, pi2,…, pin)^T ∈ A,  i = 1, 2,…, N,

ever visited by each particle. These positions are defined as:

pi(t) = arg min { f(xi(t')) : t' ≤ t },

where t stands for the iteration counter.

PSO is based on simulation models of social behavior; thus, an information-exchange mechanism shall exist to allow particles to mutually communicate their experience. The algorithm approximates the global minimizer with the best position ever visited by all particles. Therefore, it is a reasonable choice to share this crucial information. Let g be the index of the best position, i.e., the one with the lowest function value in P at a given iteration t:

pg(t) = arg min { f(pi(t)) : i = 1, 2,…, N }.

Then, the early version of PSO is defined by the following equations (Eberhart & Kennedy, 1995; Eberhart et al., 1996; Kennedy & Eberhart, 1995):

vij(t+1) = vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t)), (1)

xij(t+1) = xij(t) + vij(t+1), (2)

i = 1, 2,…, N,  j = 1, 2,…, n,

where t denotes the iteration counter; R1 and R2 are random variables uniformly distributed within [0,1]; and c1, c2 are weighting factors, also called the cognitive and the social parameter, respectively. In the first version of PSO, a single weight, c = c1 = c2, called the acceleration constant, was used instead of the two distinct weights in equation (1). However, the latter offers better control of the algorithm, which led to its predominance over the first version.

At each iteration, after the update and evaluation of the particles, the best positions (memory) are also updated.
Thus, the new best position of xi at iteration t+1 is defined as follows:

pi(t+1) = xi(t+1), if f(xi(t+1)) ≤ f(pi(t)),
pi(t+1) = pi(t), otherwise.

The new determination of the index g for the updated best positions completes an iteration of PSO.

The operation of PSO is provided in pseudocode in Table 1. Particles are usually initialized randomly, following a uniform distribution over the search space, A. This choice treats each region of A equivalently; therefore, it is mostly preferable in cases where there is no information on the form of the search space or the objective function that would require a different initialization scheme. Additionally, it is implemented fairly easily, as all modern computer systems are equipped with a uniform random number generator.

Table 1. Pseudocode of the operation of PSO

Input: Number of particles, N; swarm, S; best positions, P.
Step 1. Set t ← 0.
Step 2. Initialize S and set P ≡ S.
Step 3. Evaluate S and P, and define the index g of the best position.
Step 4. While (termination criterion not met)
Step 5.   Update S using equations (1) and (2).
Step 6.   Evaluate S.
Step 7.   Update P and redefine the index g.
Step 8.   Set t ← t+1.
Step 9. End While
Step 10. Print best position found.

The previous velocity term, vij(t), on the right-hand side of equation (1), offers a means of inertial movement to the particle by taking its previous position shift into consideration. This property can prevent the particle from becoming biased towards the involved best positions, which could entrap it in local minima if suboptimal information is carried by both (e.g., if they both lie in the vicinity of a local minimizer). Furthermore, the previous velocity term serves as a perturbation for the global best particle, xg. Indeed, if a particle, xi, discovers a new position with a lower function value than the best one, then it becomes the global best (i.e., g ← i) and its best position, pi, will coincide with pg and xi in the next iteration. Thus, the two stochastic terms in equation (1) will vanish. If there were no previous velocity term in equation (1), the aforementioned particle would stay at the same position for several iterations, until a new best position was detected by another particle. Contrary to this, the velocity term allows the particle to continue its search, following its previous position shift.

The values of c1 and c2 can affect the search ability of PSO by biasing the sampled new positions of a particle, xi, towards the best positions, pi and pg, respectively, as well as by changing the magnitude of the search. For example, consider the two cases illustrated in Fig. 1. Let xi = (0,0)^T, denoted with the cross symbol, be the current position of a particle. Also, let pi = (2,1)^T and pg = (1,3)^T be its own best and the overall best position, denoted with a star and a square symbol, respectively. Moreover, for simplicity, let its current velocity, vi, be equal to zero. Then, Fig. 1 represents 1000 possible new positions of xi for c1 = c2 = 1.0 (left part) and c1 = c2 = 2.0 (right part).

Figure 1. Candidate new positions of the particle xi = (0,0)^T (cross) with pi = (2,1)^T (star) and pg = (1,3)^T (square), for the cases (A) c1 = c2 = 1.0, and (B) c1 = c2 = 2.0.

Apparently, the magnitude of the search differs significantly in the two cases. If better global exploration is required, then high values of c1 and c2 can provide new points in relatively distant regions of the search space. On the other hand, a more refined local search around the best positions achieved so far would require smaller values for the two parameters. Also, choosing c1 > c2 biases sampling towards the direction of pi, while in the opposite case, c1 < c2, sampling towards the direction of pg is favored.
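Before moving on, a minimal C++ sketch of equations (1) and (2) and the loop of Table 1 may be helpful. The sphere objective, the box bounds, and all parameter values are illustrative assumptions, not part of the original formulation; note also that, anticipating the discussion below, R1 and R2 are sampled afresh for every coordinate:

// Minimal sketch of the early PSO of equations (1)-(2) and Table 1.
// The sphere objective f(x) = x^T x, the bounds, and the parameters
// are illustrative assumptions only.
#include <iostream>
#include <random>
#include <vector>

using Vec = std::vector<double>;

double f(const Vec& x) {                      // objective function
    double s = 0.0;
    for (double xi : x) s += xi * xi;
    return s;
}

int main() {
    const int N = 20, n = 2, Tmax = 500;      // swarm size, dimension, iterations
    const double c1 = 2.0, c2 = 2.0;          // cognitive and social parameters
    const double a = -100.0, b = 100.0;       // box bounds of the search space A

    std::mt19937 gen(42);
    std::uniform_real_distribution<double> U(0.0, 1.0);

    // Step 2: initialize the swarm S uniformly over A and set P = S.
    std::vector<Vec> x(N, Vec(n)), v(N, Vec(n, 0.0)), p(N, Vec(n));
    Vec fp(N);
    int g = 0;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < n; ++j) x[i][j] = a + U(gen) * (b - a);
        p[i] = x[i];
        fp[i] = f(p[i]);
        if (fp[i] < fp[g]) g = i;             // Step 3: index g of the best position
    }

    for (int t = 0; t < Tmax; ++t) {          // Step 4: main loop
        for (int i = 0; i < N; ++i) {
            for (int j = 0; j < n; ++j) {
                double R1 = U(gen), R2 = U(gen);     // fresh sample per coordinate
                v[i][j] += c1 * R1 * (p[i][j] - x[i][j])
                         + c2 * R2 * (p[g][j] - x[i][j]);   // equation (1)
                x[i][j] += v[i][j];                          // equation (2)
            }
            double fx = f(x[i]);              // Steps 6-7: evaluate, update memory
            if (fx <= fp[i]) { p[i] = x[i]; fp[i] = fx; if (fx < fp[g]) g = i; }
        }
    }
    std::cout << "best f = " << fp[g] << "\n";   // Step 10
}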
This effect can be useful in cases where special information regarding the form of the objective function is available. For instance, in convex unimodal objective functions, a choice that promotes sampling closer to pg is expected to be more efficient, if combined with a proper search magnitude.

We must also note that PSO operates on each coordinate direction independently. At this point, we shall mention a typical mistake made by several researchers, especially when the PSO equations are considered in their vectorial form:

vi(t+1) = vi(t) + c1 R1 (pi(t) - xi(t)) + c2 R2 (pg(t) - xi(t)), (3)

xi(t+1) = xi(t) + vi(t+1), (4)

i = 1, 2,…, N,

where all vector operations are performed componentwise. In this case, R1 and R2 should be considered random n-dimensional vectors with their components uniformly distributed within [0,1]. Instead, R1 and R2 were often taken as random one-dimensional values, similarly to equation (1), resulting in a scheme that uses the same random number for all direction components of the corresponding difference vector in equation (3). The effect of this choice is illustrated in Fig. 2, where 1000 possible new positions of the same particle, xi, as in Fig. 1, are generated with the (correct) configuration of equation (1) (left part) and the (wrong) configuration of equation (3) with random values used instead of random vectors (right part). Obviously, the latter case restricts sampling to a parallelepiped region between the two best positions, pi and pg.

Figure 2. Candidate new positions generated by equation (3) for the particle xi = (0,0)^T (cross), with pi = (2,1)^T (star), pg = (1,3)^T (square), and c1 = c2 = 2.0, when (A) R1 and R2 are random n-dimensional vectors, and (B) R1 and R2 are random one-dimensional values.

In most optimization applications, it is desirable or inevitable to consider only particles lying within the search (feasible) space. For this purpose, bounds are imposed on the position of each particle, xi, to restrict it within the search space, A. If a particle takes an undesirable step out of the search space after the application of equation (2), it is immediately clamped at the boundary. In the simplest case, where the search space can be defined as a box:

A = [a1,b1] × [a2,b2] × ... × [an,bn], (5)

with aj, bj ∈ R, j = 1, 2,…, n, the particles are restricted as follows:

xij(t+1) = aj, if xij(t+1) < aj,
xij(t+1) = bj, if xij(t+1) > bj,

i = 1, 2,…, N,  j = 1, 2,…, n.

Alternatively, a bouncing movement off the boundary back into the search space has been considered, similar to a ball bouncing off a wall (Kao et al., 2007). The popularity of this approach is limited, since it requires modeling the particle motion with more complex physical equations. In cases where A cannot be defined as a box, special problem-dependent conditions may be necessary to restrict the particles.
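A small sketch of this box clamping, continuing the C++ fragment given earlier (the Vec alias and per-coordinate bound vectors are carried over from that sketch):

// Clamp a particle to the box A of equation (5) after the position update
// of equation (2); lo[j] and hi[j] hold the bounds aj and bj.
void clampToBox(Vec& x, const Vec& lo, const Vec& hi) {
    for (std::size_t j = 0; j < x.size(); ++j) {
        if (x[j] < lo[j]) x[j] = lo[j];
        else if (x[j] > hi[j]) x[j] = hi[j];
    }
}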
FURTHER REFINEMENT OF PSO

Early PSO variants performed satisfactorily on simple optimization problems. However, their crucial deficiencies were revealed as soon as they were applied to harder problems with large search spaces and a multitude of local minima. In the following paragraphs, the refinements developed to address the deficiencies of the original PSO model are reported and discussed.

Swarm Explosion and Velocity Clamping

The first significant issue, verified by several researchers, was the swarm explosion effect. It refers to the uncontrolled increase in the magnitude of the velocities, resulting in swarm divergence. This deficiency is rooted in the lack of a mechanism for constricting velocities in the early PSO variants, and it was straightforwardly addressed by using strict bounds to clamp velocities at desirable levels, preventing particles from taking extremely large steps from their current position.

More specifically, a user-defined maximum velocity threshold, vmax > 0, is considered. After determining the new velocity of each particle with equation (1), the following restrictions are applied prior to the position update with equation (2):

|vij(t+1)| ≤ vmax,  i = 1, 2,…, N,  j = 1, 2,…, n. (6)

In case of violation, the corresponding velocity component is set directly to the closest velocity bound, i.e.:

vij(t+1) = vmax, if vij(t+1) > vmax,
vij(t+1) = -vmax, if vij(t+1) < -vmax.

If necessary, different velocity bounds per direction component can be used. The value of vmax is usually taken as a fraction of the search space size per direction. Thus, if the search space is defined as in equation (5), a common maximum velocity for all direction components can be defined as follows:

vmax = min{bi - ai} / k,  i = 1, 2,…, n.

Alternatively, separate maximum velocity thresholds per component can be defined as:

vmax,i = (bi - ai) / k,  i = 1, 2,…, n,

with k = 2 being a common choice. Of course, if the problem at hand requires smaller particle steps, then higher values of k shall be used. For example, if the search space has a multitude of minimizers with narrow regions of attraction close to each other, then k shall assume adequately large values to prevent particles from overflying them. On the other hand, k shall not take values so small or so large that they encumber satisfactory search progress.

Figure 3 illustrates the impact of a large and a small value of k on swarm diversity, which is defined as the mean of the standard deviations of the particles per coordinate direction. The cases k = 2 and k = 10 are illustrated for a swarm of 20 particles with c1 = c2 = 2, minimizing the 2-dimensional objective function, f(x) = x^T x, in the range [-100,100]^2 for 500 iterations. Evidently, the value k = 10 corresponds to smaller diversity with mild fluctuations, in contrast to k = 2, where diversity is almost five times larger, with wide fluctuations, obviously due to the larger position shifts taken by the particles.

Figure 3. Swarm diversity during search for k = 2 (dotted line) and k = 10 (solid line). The plots pertain to a swarm of 20 particles with c1 = c2 = 2, minimizing the 2-dimensional instance of test problem TPUO-1, defined in Appendix A of the book at hand, in the range [-100,100]^2 for 500 iterations.

Velocity clamping offered a simple yet efficient solution to the problem of swarm explosion. However, it did not address the problem of convergence. The particles were now able to fluctuate around their best positions, but they were unable either to converge on a promising position or to perform a refined search around it. This problem was addressed by the introduction of a new parameter into the original PSO model, as described in the next section.
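A sketch of the clamping rule of equation (6), again continuing the earlier C++ fragment; the choice k = 2 in the usage comment is the common default quoted above:

// Clamp every velocity component to [-vmax, vmax], as in equation (6).
void clampVelocity(Vec& v, double vmax) {
    for (double& vj : v) {
        if (vj > vmax) vj = vmax;
        else if (vj < -vmax) vj = -vmax;
    }
}
// Usage inside the main loop, right after equation (1), for a common
// box [a, b]^n and k = 2:
//   clampVelocity(v[i], (b - a) / 2.0);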
The Concept of Inertia Weight

Although the use of a maximum velocity threshold improved the performance of the early PSO variants, it was not adequate to render the algorithm efficient in complex optimization problems. Despite the alleviation of swarm explosion, the swarm was not able to concentrate its particles around the most promising solutions in the last phase of the optimization procedure. Thus, even if a promising region of the search space was roughly detected, no further refinement was made, with the particles instead oscillating on wide trajectories around their best positions.

The reason for this deficiency was shown to be an inability to control velocities. Refined search in promising regions, i.e., around the best positions, requires strong attraction of the particles towards them, and small position shifts that prevent escape from their close vicinity. This is possible by reducing the perturbations that shift particles away from the best positions; an effect attributed to the previous velocity term in equation (1). Therefore, the effect of the previous velocity on the current one shall fade for each particle. For this purpose, a new parameter, w, called the inertia weight, was introduced into equation (1), resulting in a new PSO variant (Eberhart & Shi, 1998; Shi & Eberhart, 1998a; Shi & Eberhart, 1998b):

vij(t+1) = w vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t)), (7)

xij(t+1) = xij(t) + vij(t+1), (8)

i = 1, 2,…, N,  j = 1, 2,…, n.

The rest of the parameters remain the same as for the early PSO variant of equations (1) and (2). The inertia weight shall be selected such that the effect of vij(t) fades during the execution of the algorithm. Thus, a value of w decreasing with time is preferable. A very common choice is the initialization of w to a value slightly greater than 1.0 (e.g., 1.2), to promote exploration in the early optimization stages, followed by a linear decrease towards zero to eliminate oscillatory behaviors in later stages. Usually, a strictly positive lower bound on w (e.g., 0.1) is used to prevent the previous velocity term from vanishing.

In general, a linearly decreasing scheme for w can be described mathematically as follows:

w(t) = wup - (wup - wlow) t / Tmax, (9)

where t stands for the iteration counter; wlow and wup are the desirable lower and upper bounds of w; and Tmax is the total allowed number of iterations. Equation (9) produces a linearly decreasing, time-dependent inertia weight with starting value, wup, at iteration t = 0, and final value, wlow, at the last iteration, t = Tmax.
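In code, equation (9) and its use in equation (7) amount to only a few lines; the defaults below are the common choices quoted in the text, and the fragment again continues the earlier sketch:

// Linearly decreasing inertia weight of equation (9).
double inertiaWeight(int t, int Tmax, double wUp = 1.2, double wLow = 0.1) {
    return wUp - (wUp - wLow) * static_cast<double>(t) / Tmax;
}
// Velocity update of equation (7):
//   double w = inertiaWeight(t, Tmax);
//   v[i][j] = w * v[i][j] + c1 * R1 * (p[i][j] - x[i][j])
//                         + c2 * R2 * (p[g][j] - x[i][j]);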
Figure 4 illustrates the diversity of a swarm of 20 particles updated with equations (7) and (8), with c1 = c2 = 2, minimizing the 2-dimensional instance of test problem TPUO-1, defined in Appendix A of the book at hand, in the range [-100,100]^2, with (solid line) and without (dotted line) a decreasing inertia weight. As in Fig. 3, diversity is defined as the mean of the standard deviations of the particles per coordinate direction. Obviously, the use of the inertia weight has a tremendous effect on swarm diversity, which almost vanishes after 300 iterations, in contrast to the case of simple velocity clamping, which retains almost the same diversity levels throughout the search.

Figure 4. Swarm diversity during search, with (solid line) and without (dotted line) inertia weight. The plots pertain to a swarm of 20 particles with c1 = c2 = 2, minimizing the 2-dimensional instance of test problem TPUO-1, defined in Appendix A of the book at hand, in the range [-100,100]^2 for 500 iterations and vmax = 50. The inertia weight decreases linearly from 1.2 to 0.1.

In addition, we observe that, in the case of the inertia weight, there is an increase in swarm diversity over almost the first 100 iterations. This effect can be attributed to the initial value, wup = 1.2, of the inertia weight. Since this value is greater than 1.0, the previous velocity term has a greater impact in equation (7) than in equation (1). This results in a temporary swarm explosion that enhances the exploration capabilities even of a poorly initialized swarm. After almost 90 iterations, the inertia weight assumes values smaller than 1.0 and diversity starts declining towards zero, thereby promoting exploitation. As discussed in the next chapter, different schemes for inertia weight adaptation can be used to produce different behaviors of the algorithm.

The striking performance improvement gained by using the inertia weight PSO variant with velocity clamping rendered it the most popular PSO approach for a few years. However, although particles were able to avoid swarm explosion and converge around the best positions, they still got trapped easily in local minima, especially in complex problems. This deficiency was addressed by introducing a more sophisticated information-sharing scheme among the particles, as described in the following section.

The Concept of Neighborhood

The use of the inertia weight equipped PSO with convergence capabilities, as described in the previous section. However, it did not suffice to raise its efficiency to the most satisfactory levels in complex, multimodal environments. As depicted in Fig. 4, after a number of iterations, the swarm collapses due to complete diversity loss. This implies that further exploration is not possible and the particles can perform only a local search around their convergence point, which most probably lies in the vicinity of the overall best position. Although the effect of fast convergence can be mild in simple optimization problems, especially in unimodal and convex functions, it becomes detrimental in high-dimensional, complex environments.

This deficiency can be attributed to the global information-exchange scheme that allows each particle to know the overall best position instantly at each iteration. Under this scheme, all particles assume new positions in regions related to the same overall best position, reducing the exploration capabilities of the swarm.

The aforementioned problem was addressed by introducing the concept of neighborhood. The main idea was the reduction of the global information-exchange scheme to a local one, where information is diffused only in small parts of the swarm at each iteration. More precisely, each particle assumes a set of other particles to be its neighbors and, at each iteration, it communicates its best position only to these particles, instead of to the whole swarm. Thus, information regarding the overall best position is initially communicated only to the neighborhood of the best particle, and successively to the rest through their neighbors.

To put it formally, let xi be the i-th particle of a swarm, S = {x1, x2,…, xN}. Then, a neighborhood of xi is defined as a set:

NBi = {xn1, xn2,…, xns},

where {n1, n2,…, ns} ⊆ {1, 2,…, N} is the set of indices of its neighbors. The cardinality, |NBi|, of this set is called the neighborhood size.
If gi denotes the index of the best particle in NBi, i.e.:

pgi(t) = arg min { f(pj(t)) : xj ∈ NBi },

then equations (7) and (8) are modified as follows:

vij(t+1) = w vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgi,j(t) - xij(t)), (10)

xij(t+1) = xij(t) + vij(t+1), (11)

i = 1, 2,…, N,  j = 1, 2,…, n.

The only difference between equations (7) and (10) is the index of the best position in the second parenthesis. According to equation (10), the particle moves towards its own best position as well as the best position of its neighborhood, instead of the overall best position used in equation (7).

The scheme for determining the neighbors of each particle is called the neighborhood topology. A scheme that readily comes to mind is the formation of neighborhoods based on the actual distances of the particles in the search space. According to this, each particle would be assigned a neighborhood consisting of a number, s, of the particles that lie closest to its current position. Although simple, this scheme requires the computation of N(N+1)/2 distances between particles at each iteration, given that the search space is equipped with a proper metric. The computational burden of this approach can become prohibitive when a large number of particles is used. Moreover, it exhibits a general trend of forming particle clusters that can be easily trapped in local minima. For these reasons, this neighborhood topology was not established as the most promising solution.

The idea of forming neighborhoods based on arbitrary criteria was promoted in order to alleviate the particle-clustering effects produced by distance-based neighborhood topologies. The simplest and directly applicable alternative is the formation of neighborhoods based on particle indices. According to this, the i-th particle assumes neighbors with neighboring indices. Thus, the neighborhood of xi can be defined as:

NBi = {xi-r, xi-r+1,…, xi-1, xi, xi+1,…, xi+r-1, xi+r},

as if the particles were lying on a ring and each one were connected only to its immediate neighbors. This scheme is illustrated in Fig. 5 (left) and is called the ring topology, while the parameter r that determines the neighborhood size is called the neighborhood radius. Obviously, indices recycle in this topology, i.e., index i = 1 follows immediately after index i = N on the ring.

The PSO variant that uses the overall best position of the swarm can be considered a special case of the aforementioned ring scheme, where each neighborhood is the whole swarm, i.e., NBi ≡ S, for all i = 1, 2,…, N. To distinguish between the two approaches, the variant that employs the overall best position is called the global PSO variant (often denoted as gbest), while the one with strictly smaller neighborhoods is called the local PSO variant (denoted, respectively, as lbest). The gbest scheme is also called the star topology, and it is graphically depicted in Fig. 5 (right), where all particles communicate with the best one.

Figure 5. Common neighborhood topologies of PSO: ring (left) and star (right).
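Determining the local best index of equation (10) under the ring topology is a short computation; a hedged sketch, continuing the earlier C++ fragment (fp[i] holds f(pi), as before):

// Index of the best position within the ring neighborhood of particle i,
// for neighborhood radius r; indices wrap around as on a ring.
int ringBest(int i, int r, int N, const Vec& fp) {
    int best = i;
    for (int off = -r; off <= r; ++off) {
        int j = ((i + off) % N + N) % N;   // wrap negative indices
        if (fp[j] < fp[best]) best = j;
    }
    return best;
}
// Equation (10) then uses p[ringBest(i, 1, N, fp)] in place of p[g].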
The effect of using lbest instead of gbest on swarm diversity is illustrated in Fig. 6 for a swarm of 20 particles with c1 = c2 = 2 and a decreasing inertia weight, with Tmax = 500, wup = 1.2, wlow = 0.1, and vmax = 5.12, minimizing the 10-dimensional Rastrigin function, defined as test problem TPUO-3 in Appendix A of the book at hand, in the range [-5.12, 5.12]^10. The solid line represents swarm diversity (as defined in the previous section) for lbest with ring topology and radius r = 1, while the dotted line stands for the corresponding gbest case. There is an apparent difference in swarm diversity between gbest and lbest, which becomes more intense as the problem dimension increases.

Figure 6. Swarm diversity during search using the lbest (solid line) and the gbest (dotted line) PSO variant. The plots pertain to a swarm of 20 particles with c1 = c2 = 2, minimizing the 10-dimensional Rastrigin function (TPUO-3 of Appendix A) in the range [-5.12, 5.12]^10 for 500 iterations and vmax = 5.12. The inertia weight decreases linearly from 1.2 to 0.1, while ring topology with radius r = 1 is used for the lbest case.

Although the ring topology is adequately simple and efficient, different topologies have been proposed in the literature (Kennedy, 1999; Mendes et al., 2003). Also, the topology can change with time instead of remaining fixed throughout a run. Such dynamic topologies have been used in multiobjective optimization problems (Hu & Eberhart, 2002). Moreover, each particle can have its own individual (fixed or dynamic) topology, providing high flexibility to the user and the ability to fit any special requirements of the problem at hand. Nevertheless, the vast majority of lbest models in the literature are based on the ring topology; hence, it can be considered the standard choice for local PSO variants.

The introduction of neighborhoods enhanced the performance of PSO significantly, offering a boost to research towards the development of more competitive and sophisticated variants that incorporate all the concepts presented so far. In the next section, we present the most established contemporary PSO variants, which are widely used in applications and considered the state-of-the-art nowadays.

CONTEMPORARY STANDARD PSO

The efficiency of the presented PSO variants attracted the interest of the scientific community. A remarkable number of scientists and engineers tested PSO against the established evolutionary algorithms in a variety of applications, producing very promising results. The simplicity of PSO allowed scientists from various disciplines, with limited background in computer science and programming skills, to use PSO as an efficient optimization tool in applications where classical optimization methods were inefficient.

The blossoming research prompted researchers to also investigate the theoretical properties of PSO, offering a better understanding of its operation and dynamics, as well as mathematical guidance for proper parameter configuration. However, from the very first moment, it became obvious that such a theoretical analysis would be a difficult task due to some particularities: PSO incorporates stochastic elements, but the search is not based on probability distributions. Thus, a direct probabilistic analysis based on adaptive distributions was not possible. For this purpose, deterministic approximations of the original PSO model were investigated first, while stochasticity was introduced into the studied models as a perturbation factor of the considered deterministic systems.

Ozcan and Mohan (1999) published the first theoretical investigation in multi-dimensional spaces, providing closed-form equations for particle trajectories. Their study focused on the early PSO model of equations (1) and (2), and they showed that particles actually move on sinusoidal waves per coordinate of the search space, with stochasticity offering a means to manipulate the frequency and amplitude of these waves. A few years later, this interesting result was followed by a thorough investigation by Clerc and Kennedy (2002), who considered different generalized PSO models and performed a dynamical-system analysis of their convergence.
Clerc and Kennedy's analysis offered a solid theoretical background to the algorithm, and it established one of the investigated models as the default contemporary PSO variant. This model is defined by the following equations:

vij(t+1) = χ [vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t))], (12)

xij(t+1) = xij(t) + vij(t+1), (13)

i = 1, 2,…, N,  j = 1, 2,…, n,

where χ is a parameter called the constriction coefficient or constriction factor, while the rest of the parameters remain the same as for the previously described PSO models. Obviously, this PSO variant is algebraically equivalent to the inertia weight variant defined by equations (7) and (8). However, it is distinguished in the literature due to its theoretical properties, which imply the following explicit selection of its parameters (Clerc & Kennedy, 2002):

χ = 2 / |2 - φ - sqrt(φ^2 - 4φ)|,

where φ = c1 + c2, and φ > 4. Based on this equation, the setting:

χ = 0.729,  c1 = c2 = 2.05,

is currently considered the default parameter set of the constriction coefficient PSO variant.

The velocity update of equation (12) corresponds to the gbest (global best) PSO model. Naturally, the concept of neighborhood can alternatively be used by replacing the global best component, pgj(t), in the second parenthesis of equation (12), with the corresponding local best component. Thus, the local variant (lbest) of equation (12) becomes:

vij(t+1) = χ [vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgi,j(t) - xij(t))], (14)

where gi denotes the index of the best particle in the neighborhood of xi. The use of equation (14) instead of equation (12) has the same effect on the exploration/exploitation properties of the constriction coefficient PSO variant as for the previously described inertia weight variant.
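A small C++ sketch of the parameter formula above; the numerical check in the comments simply evaluates it at the default c1 = c2 = 2.05:

#include <cmath>

// Constriction coefficient of Clerc and Kennedy (2002):
// chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, valid for phi = c1 + c2 > 4.
double constriction(double c1, double c2) {
    double phi = c1 + c2;
    return 2.0 / std::fabs(2.0 - phi - std::sqrt(phi * phi - 4.0 * phi));
}
// constriction(2.05, 2.05) evaluates to approximately 0.7298, matching the
// default setting chi = 0.729 quoted above. Velocity update of equation (12):
//   v[i][j] = chi * (v[i][j] + c1 * R1 * (p[i][j] - x[i][j])
//                            + c2 * R2 * (p[g][j] - x[i][j]));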
The stochastic parameters, R1 and R2, are considered to be uniformly distributed within the range [0,1]. Thus, the prospective new positions of a particle are distributed in a rectangle, as illustrated in the left part of Fig. 7, similarly to the rectangular areas illustrated in Fig. 1. Alternatively, R1 and R2 can be uniformly distributed within a sphere centered at the origin, with radius equal to 1, as illustrated in the right part of Fig. 7. A different approach suggests that R1 and R2 are normally distributed with a Gaussian distribution, N(μ,σ^2). In this case, the distribution of the new prospective positions can differ significantly from the previous two cases, depending heavily on the values of μ and σ, which, in turn, are usually dependent on the parameters c1 and c2. Figure 8 illustrates this dependency for two different cases, namely μ = 0 and μ = c/2, with σ = c/4 and c = c1 = c2 = 2.05.

Figure 7. Candidate new positions of the particle xi = (0,0)^T (cross) with pi = (2,1)^T (star) and pg = (1,3)^T (square), for the cases of (A) rectangular distribution of R1, R2, and (B) spherical distribution of R1, R2. The velocity of xi is set to vi = (0,0)^T for simplicity, and the default parameters, χ = 0.729, c1 = c2 = 2.05, are used.

Figure 8. Candidate new positions of the particle xi = (0,0)^T (cross) with pi = (2,1)^T (star) and pg = (1,3)^T (square), for normally distributed values of R1 and R2, with mean value (A) μ = 0, and (B) μ = c/2, and standard deviation σ = c/4. The velocity of xi is set to vi = (0,0)^T for simplicity, and the default parameters, χ = 0.729, c1 = c2 = 2.05, are used.

The aforementioned distributions appear in the vast majority of relevant works, with the rectangular one being the most popular. Different distributions are also used, although less frequently. Most of them are reported and thoroughly discussed in Clerc (2006, Chapter 8). Although they have shown potential for enhanced efficiency in specific problems, these approaches cannot be characterized as standard PSO variants. The rectangular distribution offers a simplicity and efficiency that have established it as the default choice. The spherical distribution is quite similar to the rectangular one, although it further restricts the range of possible new particle positions.

On the other hand, Gaussian distributions offer a completely different potential to the algorithm. Depending on their parameters, a particle can move even in a direction opposite to the involved best positions, exhibiting completely different dynamics than the standard PSO models. The efficiency of this model always depends on the problem at hand. If PSO has detected the most promising regions of the search space, then Gaussian distributions can slow down convergence. On the other hand, if there is a plethora of local minima with narrow basins of attraction, positioned closely to the global one, then this approach can work beneficially for the algorithm. Nevertheless, the choice of distribution has an obvious effect on performance; thus, careful choices are needed. For a given problem, experimental evidence from tackling similar problems can be a substantial criterion for such a choice.

CHAPTER SYNOPSIS

This chapter presented a brief history of the most important landmarks in the development of PSO, from its early precursors to the contemporary standard variants, describing the sources of inspiration behind the development of PSO, as well as its first variants. Major deficiencies, such as the swarm explosion effect and the inability to converge, were identified in the early variants and addressed either by adding new parameters or by clamping the magnitude of existing ones. Later developments regarding the scope of information exchange among particles offered better control of the exploration/exploitation properties of PSO. In the last section, we presented the variant that is considered the standard contemporary PSO. It is accompanied by an explicit scheme for determining its parameters, while it incorporates all previous performance-enhancing developments, retaining remarkable simplicity and flexibility.

In the next chapter, we provide a closer inspection of the theoretical properties of PSO, along with several practical issues, such as swarm initialization, parameter selection, termination conditions, and sensitivity analysis of the algorithm.

REFERENCES

Angeline, P. J. (1998). Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In V. W. Porto, N. Saravanan, D. Waagen, & A. E. Eiben (Eds.), Evolutionary Programming VII (pp. 601-610). Berlin: Springer.

Clerc, M. (2006). Particle swarm optimization. London: ISTE Ltd.

Clerc, M., & Kennedy, J. (2002). The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58-73. doi:10.1109/4235.985692
Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In Proceedings of the 6th Symposium on Micro Machine and Human Science, Nagoya, Japan (pp. 39-43).

Eberhart, R. C., & Shi, Y. (1998). Comparison between genetic algorithms and particle swarm optimization. In V. W. Porto, N. Saravanan, D. Waagen, & A. E. Eiben (Eds.), Evolutionary Programming VII (pp. 611-616). Berlin: Springer.

Eberhart, R. C., Simpson, P., & Dobbins, R. (1996). Computational intelligence PC tools. Boston: Academic Press Professional.

Heppner, F., & Grenander, U. (1990). A stochastic nonlinear model for coordinated bird flocks. In S. Krasner (Ed.), The Ubiquity of Chaos (pp. 233-238). Washington, DC: AAAS Publications.

Hu, X., & Eberhart, R. C. (2002). Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Proceedings of the 2002 IEEE Congress on Evolutionary Computation, Honolulu (HI), USA (pp. 1677-1681).

Kao, I. W., Tsai, C. Y., & Wang, Y. C. (2007). An effective particle swarm optimization method for data clustering. In Proceedings of the 2007 IEEE International Conference on Industrial Engineering and Engineering Management, Singapore (pp. 548-552).

Kennedy, J. (1999). Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance. In Proceedings of the 1999 IEEE Congress on Evolutionary Computation, Washington (DC), USA (pp. 1931-1938).

Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Perth, Australia (Vol. IV, pp. 1942-1948).

Mendes, R., Kennedy, J., & Neves, J. (2003). Watch thy neighbor or how the swarm can learn from its environment. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, Indianapolis (IN), USA (pp. 88-94).

Millonas, M. M. (1994). Swarms, phase transitions, and collective intelligence. In C. G. Langton (Ed.), Artificial Life III (pp. 417-445). Reading, MA: Addison-Wesley.

Ozcan, E., & Mohan, C. K. (1999). Particle swarm optimization: surfing the waves. In Proceedings of the 1999 IEEE Congress on Evolutionary Computation, Washington (DC), USA (pp. 1939-1944).

Reynolds, C. W. (1987). Flocks, herds, and schools: a distributed behavioral model. Computer Graphics, 21(4), 25-34. doi:10.1145/37402.37406

Shi, Y., & Eberhart, R. C. (1998a). A modified particle swarm optimizer. In Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, Anchorage (AK), USA (pp. 69-73).

Shi, Y., & Eberhart, R. C. (1998b). Parameter selection in particle swarm optimization. Lecture Notes in Computer Science, 1447, 591-600. London: Springer. doi:10.1007/BFb0040810

Wilson, E. O. (1975). Sociobiology: the new synthesis. Cambridge, MA: Belknap Press.

Chapter 3
Theoretical Derivations and Application Issues

This chapter deals with fundamental theoretical investigations and application issues of PSO. We are mostly interested in developments that offer new insight into configuring and tuning the parameters of the method. For this purpose, the chapter opens with a discussion of initialization techniques, followed by brief presentations of investigations on particle trajectories and the stability analysis of PSO. A useful technique based on computational statistics is also presented for the optimal tuning of the algorithm on specific problems. The chapter closes with a short discussion of termination conditions.

INITIALIZATION TECHNIQUES

Initialization is perhaps the least studied phase of PSO and other evolutionary algorithms.
This may be due to the general demand for developing algorithms that are not very sensitive to their initial conditions. However, it can be verified experimentally that, in various problems, initialization can have a significant impact on performance.

As already mentioned in the previous chapter, uniform random initialization is the most popular scheme in evolutionary computation, due to the necessity of treating equally each part of a search space with unrevealed characteristics. However, alternative initialization methodologies that use different probability distributions or employ direct search methods to provide the first steps of the algorithm have proved very useful.

In the following sections, we discuss the most common probabilistic initialization techniques. In addition, we present a scheme based on the nonlinear simplex method of Nelder and Mead, which has been shown to work beneficially for the initialization of PSO (Parsopoulos & Vrahatis, 2002).

Random Probabilistic Initialization

In the framework of PSO, the quantities that need to be initialized prior to application are the particles as well as their velocities and best positions. The best positions consist of the best solutions already detected by each particle, while the current particle positions represent candidate new solutions. Since no information on the promising regions of the search space is expected to be available prior to initialization, the initial particles and the corresponding best positions are taken to coincide. Also, in constrained optimization, we are interested in detecting feasible solutions, i.e., solutions that do not violate the problem constraints. For this purpose, the initialization of the swarm and best positions within the feasible search space, A ⊂ R^n, is desirable.

The most common technique in evolutionary computation is random uniform initialization. According to this, each particle of the initial swarm and, consequently, of the initial best positions is drawn by sampling a uniform distribution over the search space A. The applicability of this approach depends on the form of the search space. If A is given as an n-dimensional bounded box:

A = [a1,b1] × [a2,b2] × ... × [an,bn],

then any of the available pseudo-random number generators, such as the number-theoretically generated Sobol sequences (Press et al., 1992, Chapter 7), can be used directly to produce uniformly distributed numbers within it.

In practice, it is very common to exploit the one-dimensional pseudo-random generators that accompany all modern computer systems. Thus, each component of a particle is generated as a uniformly distributed pseudo-random value within the interval [0,1] and then scaled to the magnitude of the corresponding direction of A. This procedure is described in the pseudocode of Table 1, where we use the function drand48(), provided in the C/C++ programming environment, as the pseudo-random generator in [0,1]. The produced value is then scaled to the corresponding direction of the search space, so that the produced particles lie strictly within A. In addition, we scale the produced pseudo-random values of the velocity components, in order to clamp them within their limits, [-vmax, vmax], as described in the previous chapter.
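A hedged C++ sketch of this scaling scheme, mirroring the procedure of Table 1 but using the standard <random> facilities in place of drand48() (the bounds and vmax are assumed given):

#include <random>
#include <vector>

// Uniform random initialization over A = [a1,b1] x ... x [an,bn],
// mirroring Table 1: positions scaled into [aj,bj], best positions set
// equal to the particles, velocities scaled into [-vmax, vmax].
void initialize(std::vector<std::vector<double>>& x,
                std::vector<std::vector<double>>& v,
                std::vector<std::vector<double>>& p,
                const std::vector<double>& a, const std::vector<double>& b,
                double vmax, std::mt19937& gen) {
    std::uniform_real_distribution<double> U(0.0, 1.0);
    for (std::size_t i = 0; i < x.size(); ++i) {
        for (std::size_t j = 0; j < x[i].size(); ++j) {
            x[i][j] = a[j] + U(gen) * (b[j] - a[j]);   // Step 3
            p[i][j] = x[i][j];                          // Step 4
            v[i][j] = -vmax + 2.0 * U(gen) * vmax;      // Step 5
        }
    }
}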
When a large number of subsequent experiments is conducted, re-initialization of the pseudo-random generator with a different seed may occasionally be necessary, in order to obtain unbiased experimental results.

Mathematically speaking, the particles produced by the aforementioned procedure do not exactly follow the multi-dimensional uniform distribution over A. Despite this theoretical deficiency, random uniform initialization became the technique of choice; a popularity that can be attributed to the following properties:

1. It can be implemented easily in any computer system and programming language.
2. In many applications, except for some complex constrained optimization problems, the feasible search space can be given in the form of a bounded box or approximated by a sequence of such boxes.
3. It is suitable for time-critical applications, as the generation of numbers is adequately fast, requiring only minor computational effort.
4. It does not have a partiality for any region of the search space.

The last property is very important in PSO. Sutton et al. (2006) have empirically shown that initialization affects performance in multi-funnel landscapes. In such landscapes, there is no single global tendency towards the global minimum, while the best local minima are not clustered closely. Preliminary experiments revealed that, if two funnels exist and most (nearly 80%) of the particles are initialized in one funnel, then PSO has an up to five times greater probability of converging into this, rather than the other, funnel. This property seems to hold regardless of swarm size, underlining the importance of proper initialization in complex problems.

On the other hand, if items of information are available regarding the location of the global minimizer in the search space, it makes more sense to initialize the majority of the swarm around it. This can be done by replacing the uniform distribution with a Gaussian one. The mean value and standard deviation of the Gaussian shall depend on the available information. For example, the mean value can be selected to lie somewhere in the conjectured region of the global minimizer, while the standard deviation can be a fraction of the expected distance of the mean value from the global minimizer. It is up to the user to exploit as much of the available information as possible.
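A short sketch of such Gaussian-biased sampling; the mean vector mu and the per-coordinate sigma values are user-supplied assumptions, as just described:

#include <random>
#include <vector>

// Gaussian-biased initialization: each coordinate is drawn from
// N(mu[j], sigma[j]^2), where mu is a conjectured location of the global
// minimizer and sigma[j] a fraction of its expected distance from mu.
std::vector<double> gaussianParticle(const std::vector<double>& mu,
                                     const std::vector<double>& sigma,
                                     std::mt19937& gen) {
    std::vector<double> x(mu.size());
    for (std::size_t j = 0; j < mu.size(); ++j) {
        std::normal_distribution<double> G(mu[j], sigma[j]);
        x[j] = G(gen);
    }
    return x;
}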
Alternatively to random initialization, a grid that covers the search space with equidistant points can be considered. In this case, particles are initialized on the grid nodes, while their initial velocities are taken randomly. This approach is characterized by explicit fairness regarding the coverage of the different regions of the search space; however, it suffers from a series of deficiencies that reduce its popularity. More specifically, its deterministic nature does not adhere to the philosophy of PSO and, more generally, of stochastic optimization algorithms. In addition, the grid construction can become laborious even in search spaces with simple but non-rectangular shapes. Finally, experimentation shows that, in most cases, it is not accompanied by a statistically significant improvement in performance.

Regarding velocity, although the most common practice is random initialization, it can alternatively be initialized to zero. In this case, the particles obtain acceleration in the first iteration based solely on their distance from the best positions. Under this assumption, the overall best particle will remain at its initial position until another particle finds a better one. Also, particles initialized close to the overall best position will obtain small velocities, becoming more prone to getting stuck in local minima. For these reasons, random initialization of velocities to non-zero values is preferable.

Table 1. Pseudocode for random uniform initialization. We employ the drand48() pseudo-random generator, provided in the C/C++ programming environment, to produce uniformly distributed pseudo-random values within [0,1]

Input: Number of particles, N; dimension, n; velocity bounds, [-vmax, vmax]; and search space, A = [a1,b1] × [a2,b2] × ... × [an,bn].
Step 1. Do (i = 1…N)
Step 2.   Do (j = 1…n)
Step 3.     Set particle component xij = aj + drand48() (bj - aj).
Step 4.     Set best position component pij = xij.
Step 5.     Set velocity component vij = -vmax + 2 drand48() vmax.
Step 6.   End Do
Step 7. End Do

Experimental studies revealed that, in some benchmark problems, initialization that biases the swarm towards more promising regions of the search space can offer a significant reduction of the required computational burden (Parsopoulos & Vrahatis, 2002). In unexplored search spaces, this biasing can be the outcome of a simple direct search algorithm applied for a few iterations prior to PSO. Such an approach, which uses the nonlinear simplex method, is presented in the next section.

Initialization Using the Nonlinear Simplex Method

The nonlinear simplex method (NSM) was developed by Nelder and Mead (1965) for function minimization tasks. NSM requires only function evaluations, and it is considered a good starting procedure when the figure of merit is to "get something to work quickly", especially in noisy problems. In addition, it is characterized by a geometrical naturalness that makes it attractive to work through (Press et al., 1992, Chapter 10).

NSM is based on the mathematical structure of the simplex. An n-dimensional simplex, also called an n-simplex, is the convex hull of a set of (n+1) affinely independent points in R^n, i.e., (n+1) points in general position, in the sense that no m-dimensional hyperplane contains more than (m+1) of them. Thus, a 2-dimensional simplex is a triangle, while a 3-dimensional simplex is a tetrahedron. In general, we consider only non-degenerate simplices, i.e., simplices that enclose a finite inner n-dimensional volume. In such cases, if any vertex of the simplex coincides with the origin, the remaining n vertices define directions that span the n-dimensional vector space.

Figure 1. Possible moves of a simplex (the worst and best vertex are denoted): (A) Reflection of the worst vertex against the face of the best vertex; (B) Reflection and expansion; (C) Contraction towards the face of the best vertex; and (D) Multiple contraction towards the best vertex.

The operation of NSM starts with an initial simplex and takes a series of steps in which its worst vertex, i.e., the one with the highest function value, is mostly moved through the opposite face of the simplex, hopefully to a new point with a lower function value. If possible, the simplex expands in one direction or another to take larger steps. When the method reaches a "valley floor", the simplex is contracted in the transverse direction to ooze down the valley, or it can be contracted in all directions, pulling itself around its lowest point. The possible moves of a simplex are illustrated in Fig. 1.
An implementation of NSM is provided in Press et al. (1992, Chapter 10). In practice, NSM is usually applied until a minimizer is detected. Then, it is restarted at the detected minimizer to make sure that the stopping criteria have not been fulfilled by a possible anomalous step. Thus, after restarting, the detected minimizer constitutes a vertex of the new initial simplex, while the remaining n vertices are taken randomly. This restarting scheme is not expected to be computationally expensive, since the algorithm has already converged to one of its initial simplex vertices.

The convergence properties of NSM are in general poor but, in many applications, it has been shown to be very useful, especially in cases of noisy functions and problems with imprecise data. A more efficient variant of NSM was proposed and analyzed by Torczon (1991), along with its convergence properties. The reader is referred to the original paper for more details (Torczon, 1991).

The motives for using NSM for the initialization of PSO lie in its ability to take quick downhill steps, combined with the performance improvement achieved when PSO is seeded with a good initial swarm (Parsopoulos & Vrahatis, 2002). The technique works as follows: suppose that we start NSM with an initial simplex in the n-dimensional search space, A. Then, the (n+1) vertices of the initial simplex constitute the first (n+1) particles of the swarm. Next, NSM is applied for N-(n+1) iterations, where N is the required swarm size. At each NSM iteration, the new simplex vertex produced by NSM is accepted as a new initial particle into the swarm. Thus, the initial swarm is equipped with information gained in the initial iterations of NSM. As soon as the initial swarm is filled with N particles that served as simplex vertices, NSM stops and PSO starts its operation with the produced initial swarm (Parsopoulos & Vrahatis, 2002). The technique is described in pseudocode in Table 2.

Table 2. Pseudocode for initialization using the NSM algorithm

Input: Number of particles, N; dimension, n; initial swarm, S ≡ ∅.
Step 1. Generate the initial n-dimensional simplex with vertices sv1, sv2,…, svn+1.
Step 2. Set S = S ∪ {sv1, sv2,…, svn+1}.
Step 3. Do (i = 1…N-(n+1))
Step 4.   Perform an NSM step and obtain a new acceptable vertex, svn+1+i.
Step 5.   Set S = S ∪ {svn+1+i}.
Step 6. End Do
Step 7. Set S to be the initial swarm of PSO.

Parsopoulos and Vrahatis (2002) applied the NSM-based initialization technique with the inertia weight PSO variant to several widely used benchmark problems from the optimization literature, and their results suggested that the convergence rates of PSO can be increased significantly. Table 3 reports the achieved improvement, in terms of the required function evaluations, on several test problems described in Appendix A of the book at hand, for a solution accuracy of 10^-3. The improvement on the considered problems ranged from 2.9% up to 35.1%, with a mean improvement of 11.02%, suggesting that NSM-based initialization can be beneficial for PSO. For further details, the reader is referred to the original paper (Parsopoulos & Vrahatis, 2002).

Table 3. Performance improvement of PSO, in terms of the required function evaluations, using the NSM initialization technique. Percentages are derived from results reported in Parsopoulos & Vrahatis (2002)

Problem    Dim.   Improvement (%)
TPUO-1      2      22.3%
TPUO-2      2       7.2%
TPUO-3      6      15.0%
TPUO-4      2      35.1%
TPUO-10     2      19.0%
TPUO-11     2       7.9%
TPUO-12     2       7.7%
TPUO-13     2       6.2%
TPUO-14     2       9.5%
TPUO-15     2       4.7%
TPUO-16     3       2.9%
TPUO-17     6       7.4%
TPUO-17     9       5.7%
TPUO-18     6       3.7%
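A hedged sketch of the seeding loop of Table 2 in C++. The nsmStep routine below is a deliberately simplified Nelder-Mead step (reflection with a contraction fallback; the expansion and multiple-contraction moves of Fig. 1 are omitted for brevity), and the objective f is assumed to be the one from the earlier sketches:

#include <algorithm>
#include <vector>

using Vec = std::vector<double>;
double f(const Vec& x);   // objective function, as in the earlier sketches

// One simplified NSM step: reflect the worst vertex through the centroid
// of the remaining vertices; if the reflection does not improve, contract
// the worst vertex towards the centroid instead.
Vec nsmStep(std::vector<Vec>& sx) {
    auto worst = std::max_element(sx.begin(), sx.end(),
        [](const Vec& u, const Vec& w) { return f(u) < f(w); });
    const std::size_t n = sx[0].size();
    Vec centroid(n, 0.0);
    for (const Vec& vert : sx)
        if (&vert != &*worst)
            for (std::size_t j = 0; j < n; ++j)
                centroid[j] += vert[j] / (sx.size() - 1);
    Vec refl(n), contr(n);
    for (std::size_t j = 0; j < n; ++j) {
        refl[j]  = 2.0 * centroid[j] - (*worst)[j];     // reflection
        contr[j] = 0.5 * (centroid[j] + (*worst)[j]);   // contraction
    }
    *worst = (f(refl) < f(*worst)) ? refl : contr;
    return *worst;
}

// Seeding scheme of Table 2: the initial swarm collects the n+1 vertices of
// the starting simplex plus the N-(n+1) vertices subsequently accepted by NSM.
std::vector<Vec> nsmInitialSwarm(std::vector<Vec> simplex, int N) {
    std::vector<Vec> S(simplex.begin(), simplex.end());   // Steps 1-2
    const int n = static_cast<int>(simplex[0].size());
    for (int i = 0; i < N - (n + 1); ++i)                 // Steps 3-6
        S.push_back(nsmStep(simplex));
    return S;                                             // Step 7
}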
In the following sections, we concisely present the most important theoretical works that contributed to a better understanding of the basic operation mechanisms of PSO and provided fundamental rules for its parameter setting.

THEORETICAL INVESTIGATIONS AND PARAMETER SELECTION

The theoretical analysis of PSO has proved to be harder than expected. This can be ascribed mostly to the nature of the algorithm. First, it is stochastic; thus, it cannot be studied through deterministic-oriented approaches. Second, it is not a purely probabilistic algorithm; therefore, stochastic investigations using adaptive probability densities are not valid. Moreover, its stochasticity can be regarded as a mutation, similarly to evolutionary algorithms; however, this mutation depends on non-statistical information (the best positions) that potentially changes at each iteration.

The lack of a mathematical tool that could be used directly for the theoretical investigation of PSO forced researchers to approximate it with simplified deterministic models. These models were studied with classical mathematical methodologies, such as dynamical systems theory and stochastic analysis, and the derived conclusions were generalized to the actual PSO case by infusing stochasticity and analyzing its implications. Nevertheless, the practical value of the obtained results varies, since stochasticity can often annul deterministic theoretical results.

In the following sections, we describe the most influential theoretical investigations of PSO and report their most crucial derivations regarding its dynamics and parameter settings. Our primary aim is to expose the fundamental assumptions on which the theoretical studies were built, as well as to underline their impact on the form and parameter configuration of PSO. The insight gained can be very useful in applying PSO to new, challenging problems.

One-Dimensional Particle Trajectories

The first study on particle trajectories was conducted by Kennedy (1998). The study considered the early PSO version without inertia weight or constriction coefficient. This version was described by equations (1) and (2) of Chapter Two, which are reproduced below for completeness:

vij(t+1) = vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t)),

xij(t+1) = xij(t) + vij(t+1),

i = 1, 2,…, N,  j = 1, 2,…, n.

Kennedy (1998) assumed a simplified one-dimensional PSO model; therefore, the index j can be omitted from the equations. In addition, if we let:

φ1 = c1 R1 and φ2 = c2 R2,

the equations become:

vi(t+1) = vi(t) + φ1 (pi(t) - xi(t)) + φ2 (pg(t) - xi(t)),

xi(t+1) = xi(t) + vi(t+1),

i = 1, 2,…, N.

As discussed in previous chapters, the use of the previous velocity term, vi(t), for updating the current one equips particles with the ability to perform an oscillatory move around the best positions, rather than moving aggressively towards them. In fact, each particle follows a trajectory around the weighted mean of the best positions (Kennedy, 1998):

(φ1 pi + φ2 pg) / (φ1 + φ2). (1)

On the other hand, the best particle, for which pi and pg coincide by definition, performs the following velocity update:

vi(t+1) = vi(t) + (φ1 + φ2) (pi(t) - xi(t)).

The effective parameter that controls the velocity of all particles is:

φ = φ1 + φ2.

The weighted mean of the best positions in equation (1) is unpredictable when φ1 and φ2 assume random values. However, in the long run, the sequence of particle positions, xi, approximates an average of pi and pg.
This effect is imputed to the uniform distribution of φ1 and φ2, which is expected to equalize them on average. If we further simplify the system by considering a solitary particle and a constant best position, we obtain the following PSO model (Kennedy, 1998):

v(t+1) = v(t) + φ (p - x(t)), (2)

x(t+1) = x(t) + v(t+1), (3)

where p is the best position of the (single) particle. The shape of the trajectory traversed by x depends heavily on φ, while v and p have an impact on its amplitude. The assumption of a constant p is not inconsistent with reality, since best positions are expected to vary less frequently and to finally stabilize during the last few iterations of a run.

Kennedy (1998) performed an experimental study of the model defined by equations (2) and (3) for varying values of the control parameter, φ. The obtained trajectories for different values of φ in the intervals (0,1], (1,2], (2,3], (3,4], and for φ > 4, are illustrated in Figs. 2, 3, 4, and 5, respectively. In all figures, the initial values, x(0) = 0, v(0) = 2, and a fixed p = 0 were used.

Figure 2. Particle trajectories for fixed values of φ in (0,1].
Figure 3. Particle trajectories for fixed values of φ in (1,2].
Figure 4. Particle trajectories for fixed values of φ in (2,3].
Figure 5. Particle trajectories for fixed values of φ in (3,4].

There is an apparent influence of φ on the produced particle trajectories. Different values of φ produce oscillations of different amplitude and frequency around the best position, p. When φ becomes equal to 4, the particle diverges to infinity, as illustrated in Fig. 5. In all other cases, the particle oscillates without diverging or converging towards p, although the period of its cycles depends on φ. Kennedy offers a comprehensive per-case discussion, which reveals that there is a critical upper limit, φ = 4, after which divergence occurs (Kennedy, 1998).

An interesting question arises regarding the validity of these observations in the original PSO case, i.e., when φ is stochastic. If the fixed φ is replaced with a random value, then the system becomes unpredictable, since particles take steps from all possible sequences of trajectories produced by the corresponding fixed values of φ. In addition, the system inherits the undesirable property of swarm explosion (Kennedy, 1998), discussed in the previous chapter. Swarm explosion occurs even when the random values of φ are constrained to ranges that exclude the critical upper limit, φ = 4. This is illustrated in Fig. 6 for values of φ sampled uniformly within the ranges (0,1), (0,2), (0,3), and (0,4). We also notice a remarkable difference of scale in Fig. 6 as the range increases towards φ = 4.

Figure 6. Particle trajectories for random values of φ, uniformly distributed within the ranges (0,1), (0,2), (0,3), and (0,4).

As mentioned in the previous chapter, the explosion effect can be addressed by imposing an upper limit on the absolute values of the velocities. The effect of using a maximum velocity, vmax = 5, in the experiments of Fig. 6 is illustrated in Fig. 7. Clearly, the behavior of the particle changes radically, with the oscillation amplitude being significantly reduced. Although in our example the value of vmax is arbitrarily selected, estimates of scale and correlation dimension for the problem at hand can be very useful in selecting more appropriate velocity limits (Kennedy, 1998).

Figure 7. Particle trajectories for random values of φ, uniformly distributed within the ranges (0,1), (0,2), (0,3), and (0,4), using a maximum velocity, vmax = 5.
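The deterministic model of equations (2) and (3) is easy to reproduce; a small C++ sketch with Kennedy's settings x(0) = 0, v(0) = 2, and p = 0 (the particular value of φ and the number of steps are illustrative):

#include <cstdio>

// Trace the solitary-particle model of equations (2) and (3):
// v(t+1) = v(t) + phi * (p - x(t)),  x(t+1) = x(t) + v(t+1),
// with x(0) = 0, v(0) = 2, and fixed p = 0, as in Figs. 2-5.
int main() {
    const double phi = 2.9, p = 0.0;   // try values in (0,4), then 4 and beyond
    double x = 0.0, v = 2.0;
    for (int t = 0; t < 50; ++t) {
        v += phi * (p - x);            // equation (2)
        x += v;                        // equation (3)
        std::printf("%3d  %+.4f\n", t + 1, x);
    }
}

For φ in (0,4) the printed positions oscillate around p without settling, while from φ = 4 onwards their magnitude grows without bound, reproducing the divergence seen in Fig. 5.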
The work of Kennedy constituted a milestone in the theoretical investigation of PSO. It identified that a critical value of φ exists, and that, in combination with vmax, it has a crucial impact on the behavior of the considered simplified models. The next step was taken one year later by Ozcan and Mohan (1999). They generalized Kennedy's study to multi-dimensional search spaces and derived closed form equations of particle trajectories, as described in the following section.

Figure 6. Particle trajectories for random values of φ, uniformly distributed within the ranges (0,1), (0,2), (0,3), and (0,4)

Figure 7. Particle trajectories for random values of φ, uniformly distributed within the ranges (0,1), (0,2), (0,3), and (0,4), using a maximum velocity, vmax = 5

Multi-Dimensional Particle Trajectories

Similarly to Kennedy (1998), the early PSO version without inertia weight or constriction coefficient was studied by Ozcan and Mohan (1998, 1999). The only difference between the two studies was the use of the local PSO variant (lbest) with neighborhoods, instead of the global (gbest) variant used by Kennedy. The original local PSO model:

vij(t+1) = vij(t) + φ1(pij(t) − xij(t)) + φ2(plj(t) − xij(t)),     (4)
xij(t+1) = xij(t) + vij(t+1),     (5)
i = 1, 2,…, N, j = 1, 2,…, n,

where l denotes the best particle in the neighborhood of xi, and φ1, φ2 are samples of a uniform distribution in [0, c1] and [0, c2], respectively, was simplified by assuming constant values for pij, plj, φ1, and φ2. No further simplification regarding the dimensionality or the swarm size was made. Substituting equation (4) into equation (5), a recursive formula for the trajectory of the particle is obtained (Ozcan & Mohan, 1999):

xij(t) − (2 − φ1 − φ2) xij(t−1) + xij(t−2) = φ1 pij + φ2 plj.     (6)

The initial position and velocity, xij(0) and vij(0), respectively, serve also as initial conditions for the recursion. Thus, the first step is defined as:

xij(1) = xij(0)(1 − φ1 − φ2) + vij(0) + φ1 pij + φ2 plj.

Closed form solutions for equation (6) were obtained using methodologies for solving non-homogeneous linear recurrence equations, yielding the displacement (Ozcan & Mohan, 1999):

xij(t) = ηij αij^t + ιij βij^t + κij,     (7)

where,

δij = √((2 − φ1 − φ2)² − 4),     (8)
αij = (2 − φ1 − φ2 + δij)/2,     (9)
βij = (2 − φ1 − φ2 − δij)/2,     (10)
ηij = (0.5 − (φ1 + φ2)/(2δij)) xij(0) + vij(0)/δij + (φ1 pij + φ2 plj)/(2δij) − (φ1 pij + φ2 plj)/(2φ1 + 2φ2),     (11)
ιij = (0.5 + (φ1 + φ2)/(2δij)) xij(0) − vij(0)/δij − (φ1 pij + φ2 plj)/(2δij) + (φ1 pij + φ2 plj)/(2φ1 + 2φ2),     (12)
κij = (φ1 pij + φ2 plj)/(φ1 + φ2),     (13)

with special cases arising for φ1 + φ2 = 2.

Two major cases, namely real and complex values of δij, are distinguished in the analysis of Ozcan and Mohan (1999). In the first, the trajectory is governed by equations (7)-(13), while in the latter, which corresponds to the values:

0 < φ1 + φ2 < 4,

αij and βij become complex numbers. Assuming that:

θij = atan(|δij| / |2 − φ1 − φ2|),

and with proper mathematical manipulations, equation (7) becomes (Ozcan & Mohan, 1999):

xij(t) = Κij sin(θij t) + Λij cos(θij t) + κij,     (14)

with,

Κij = (2vij(0) − (φ1 + φ2) xij(0) + φ1 pij + φ2 plj) / |δij|,
Λij = xij(0) − κij.

For φ1 + φ2 > 2, the value of θij shall be increased by π.
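As a quick sanity check of the closed form, the sketch below (our own hedged illustration; variable names and the chosen constants are ours) iterates the recursion (6) and compares it with equation (14) for a single dimension and constant weights. The rotation angle is computed as arccos((2 − φ1 − φ2)/2), which coincides with the atan expression above once the proper quadrant of θij is selected.

```python
import math

phi1, phi2, p, pl = 0.7, 0.9, 1.0, 3.0   # arbitrary constants with 0 < phi1+phi2 < 4
x0, v0 = 0.0, 2.0
phi = phi1 + phi2

# Recursion (6): x(t) = (2 - phi) x(t-1) - x(t-2) + phi1*p + phi2*pl
xs = [x0, x0 * (1 - phi) + v0 + phi1 * p + phi2 * pl]
for t in range(2, 20):
    xs.append((2 - phi) * xs[-1] - xs[-2] + phi1 * p + phi2 * pl)

# Closed form (14) for the complex case 0 < phi < 4
kappa = (phi1 * p + phi2 * pl) / phi
delta = math.sqrt(4 - (2 - phi) ** 2)    # |delta_ij|
theta = math.acos((2 - phi) / 2)         # quadrant-correct theta_ij
K = (2 * v0 - phi * x0 + phi1 * p + phi2 * pl) / delta
L = x0 - kappa
for t, x in enumerate(xs):
    assert abs(x - (K * math.sin(theta * t) + L * math.cos(theta * t) + kappa)) < 1e-9
print("closed form matches the recursion")
```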
Several regions of interest can be identified based on the value φ = φ1 + φ2 within the range (0,4), while special cases arise for φ = 0, 2, and 4 (Ozcan & Mohan, 1999). If,

φ ∈ (0, 2−√3] ∪ (2+√3, 4],

then it holds that |δij| ≤ 1. For increasing values of φ, Κij becomes larger, resulting in larger steps of the particle. On the other hand, if,

φ ∈ (2−√3, 2) ∪ (2, 2+√3),

then it holds that |δij| > 1, and the particle moves randomly around a weighted average of the best positions, pi and pl, with step sizes taken randomly from a sinusoidal form (Ozcan & Mohan, 1999). For φ > 4, the amplitude of the movement grows exponentially, resulting in divergence. This verifies the observations of Kennedy (1998) reported in the previous section. For a thorough analysis of the special cases, φ = 0, 2, and 4, the reader is referred to the original paper by Ozcan and Mohan (1999).

Besides the positions, Ozcan and Mohan (1999) also analyzed the velocities. Equations (4) and (5) produce, after substitution, the following recursive formula:

vij(t) − (2 − φ1 − φ2) vij(t−1) + vij(t−2) = 0,     (15)

with initial conditions, vij(0), and,

vij(1) = vij(0) − (φ1 + φ2) xij(0) + φ1 pij + φ2 plj,

which follows directly from equation (4). Similarly to equation (6), closed form solutions can be given for equation (15):

vij(t) = ρij αij^t + τij βij^t,     (16)

where,

ρij = vij(0)/2 + ((φ1 + φ2)/(2δij)) vij(0) − ((φ1 + φ2)/δij) xij(0) + (φ1 pij + φ2 plj)/δij,     (17)
τij = vij(0)/2 − ((φ1 + φ2)/(2δij)) vij(0) + ((φ1 + φ2)/δij) xij(0) − (φ1 pij + φ2 plj)/δij.     (18)

Analysis akin to that for the particle trajectories can be made for the velocity regions of interest. Indeed, for values:

φ = φ1 + φ2 ∈ (0,4),

the velocity can be given in the form (Ozcan & Mohan, 1999):

vij(t) = Υij sin(θij t) + Φij cos(θij t),     (19)

with,

Υij = ((φ1 + φ2) vij(0) − 2(φ1 + φ2) xij(0) + 2(φ1 pij + φ2 plj)) / |δij|,
Φij = vij(0),

and,

θij = atan(|δij| / |2 − φ1 − φ2|),

while, for φ1 + φ2 > 2, the value of θij shall be increased by π. The reader is referred to the original paper for further details (Ozcan & Mohan, 1999).

In the last part of their study, Ozcan and Mohan (1999) considered equipping their PSO model with inertia weight. The corresponding analysis showed that a constant inertia weight can alter the boundaries between areas of interest, and, consequently, modify the convergence properties of the model. Thus, a careless selection of the inertia weight can have a detrimental effect on efficiency. The case of decreasing inertia weight required more thorough investigation, and was put aside as a topic of future research.

The works of Kennedy (1998) and Ozcan and Mohan (1999) offered a good starting point for unraveling the Ariadne's thread of PSO dynamics, attracting the interest of many researchers. Although a small step was taken towards the theoretical analysis of the algorithm by providing rough parameter bounds, complete fine-tuning was still an open problem. It was not until three years later that a very important - perhaps the most influential - theoretical work on PSO, namely the stability analysis due to Clerc and Kennedy (2002), was published. In this work, new variants of the algorithm were proposed and mathematical formulae for determining its parameters were reported, providing the state-of-the-art PSO variants until nowadays.

Stability Analysis of Particle Swarm Optimization

Clerc and Kennedy (2002) considered the original local PSO (lbest) model:

vij(t+1) = vij(t) + φ1(pij(t) − xij(t)) + φ2(plj(t) − xij(t)),
xij(t+1) = xij(t) + vij(t+1),
i = 1, 2,…, N, j = 1, 2,…, n,

where l denotes the best particle in the neighborhood of xi; and φ1, φ2 are samples of a uniform distribution bounded by an upper limit, φmax.
The velocity is also considered to be clamped by a parameter, vmax, as follows:

|vij| ≤ vmax, for all i and j.

The velocity update can be simplified to the algebraically equivalent form:

vij(t+1) = vij(t) + φ(pij(t) − xij(t)),

where pij is no longer the best position of the particle but the aggregated position:

pij = (φ1 pij + φ2 plj) / (φ1 + φ2),

with φ = φ1 + φ2. Stripping the system further down, pij is assumed to be constant and henceforth denoted simply as p. The parameter φ is also considered to be constant, similarly to the analyses described in previous sections.

The first model studied by Clerc and Kennedy (2002) was one-dimensional and assumed a reduced population size, i.e., it consisted of a deterministic one-dimensional particle that moves according to the following equations:

v(t+1) = v(t) + φ(p − x(t)),     (20)
x(t+1) = x(t) + v(t+1).     (21)

Notice that all subscripts denoting particle index and direction are dropped. We can now distinguish two time cases for the particle movement: discrete and continuous. We describe each case separately in the following sections.

The Discrete-Time Case

The first part of the study of Clerc and Kennedy (2002) was purely algebraic, analyzing the movement of a particle in discrete time. The system of equations (20) and (21) was written as a dynamical system:

vt+1 = vt + φ yt,
yt+1 = −vt + (1 − φ) yt,     (22)

with t = 0, 1, 2,…, and yt = p − xt, where p is the aggregated best position defined in the previous section. In matrix form, the system reads:

Pt+1 = M Pt, so that Pt = M^t P0,     (23)

where M^t denotes the t-th power of the matrix M, and,

Pt = ( vt )        M = (  1     φ   )
     ( yt ),           ( −1   1−φ  ).

Its behavior depends on the eigenvalues of M, which are defined as follows:

λ1 = 1 − φ/2 + √(φ² − 4φ)/2,
λ2 = 1 − φ/2 − √(φ² − 4φ)/2.     (24)

The critical value, φ = 4, previously identified by Kennedy (1998) and Ozcan and Mohan (1999), appears again as a special case, since it constitutes the limit between the cases of two different real eigenvalues, one eigenvalue of multiplicity 2, and two complex conjugate eigenvalues.

Since the matrix M^t completely defines the system in equation (23), we can use linear algebra to make it more amenable to investigation through further simplification. Thus, if λ1 ≠ λ2, or equivalently, φ ≠ 4, there exists a matrix, A, which produces the following similarity transformation of M:

L = A M A⁻¹ = ( λ1   0  )
              ( 0    λ2 ).

Assuming that A has the canonical form:

A = ( a   1 )
    ( c   1 ),

and by doing the proper mathematical manipulations, it is derived that (Clerc & Kennedy, 2002):

A = ( (φ + √(φ² − 4φ))/(2φ)   1 )
    ( (φ − √(φ² − 4φ))/(2φ)   1 ).

Since it holds that:

L = A M A⁻¹ ⇔ A⁻¹ L A = M,

we can go back to equation (23) and substitute M with its equivalent, A⁻¹ L A:

Pt+1 = M Pt ⇔ Pt+1 = A⁻¹ L A Pt ⇔ A Pt+1 = L A Pt.

If we define Qt = A Pt, then the system is described by:

Qt+1 = L Qt, so that Qt = L^t Q0,     (25)

where L is a diagonal matrix, with its main diagonal consisting of the eigenvalues of M. Thus, its t-th power is defined as:

L^t = ( λ1^t    0   )
      ( 0     λ2^t  ).

The study now becomes easier, because cyclic behavior, determined by Qt = Q0, can be achieved simply by requiring that L^t is the unit matrix in equation (25), or equivalently:

λ1^t = λ2^t = 1.

In the case of φ ∈ (0,4), the eigenvalues defined in equation (24) become complex, i.e., λ1 = cos θ + i sin θ and λ2 = cos θ − i sin θ. Thus, we have:

λ1^t = cos(θt) + i sin(θt),
λ2^t = cos(θt) − i sin(θt),

and the system cycles for θ = 2kπ/t, with the corresponding solutions of φ given by:

φ = 2(1 − cos(2kπ/t)), k = 1, 2,…, t−1.
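These cycle conditions are easy to verify numerically. In the hedged sketch below (our own construction; the initial state is arbitrary), we pick t = 12 and k = 1, set φ = 2(1 − cos(2π/12)), and iterate the system (22); after exactly 12 steps the state returns to its initial value.

```python
import math

t_cycle, k = 12, 1
phi = 2 * (1 - math.cos(2 * math.pi * k / t_cycle))  # cycle-inducing value of phi

v, y = 1.0, 0.5   # arbitrary initial state P0 = (v0, y0)
states = [(v, y)]
for _ in range(t_cycle):
    v, y = v + phi * y, -v + (1 - phi) * y   # one step of system (22)
    states.append((v, y))

print(states[0])    # (1.0, 0.5)
print(states[-1])   # returns to (1.0, 0.5) up to rounding error
```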
On the other hand, if φ > 4, there is no cyclic behavior of the system, and it is proved that the distance of Pt from the origin is monotonically increasing (Clerc & Kennedy, 2002). In the limit case, φ = 4, the system will either have an oscillatory behavior, i.e.,

Pt+1 = −Pt,

if the initial vector P0 is an eigenvector of M, or it will exhibit an almost linear increase or decrease of the norm ||Pt||, for y0 > 0 or y0 < 0, respectively (Clerc & Kennedy, 2002).

These derivations provide useful intuition regarding the behavior of the system in the discrete-time case under different values of φ. Let us now study the same system from the continuous-time viewpoint.

The Continuous-Time Case

Clerc and Kennedy (2002) also conducted a continuous-time analysis based on differential equations. More specifically, equations (20) and (21) were merged to produce the following recurrent velocity equation:

v(t+2) + (φ − 2) v(t+1) + v(t) = 0.

This becomes a second-order differential equation in continuous time (Clerc & Kennedy, 2002):

∂²v/∂t² − ln(λ1 λ2) ∂v/∂t + ln(λ1) ln(λ2) v = 0,

where λ1 and λ2 are the solutions of the equation:

λ² + (φ − 2) λ + 1 = 0,

resulting in the forms of equation (24). Thus, the general form of the velocity becomes (Clerc & Kennedy, 2002):

v(t) = c1 λ1^t + c2 λ2^t,

while y(t) has the form:

y(t) = (c1 λ1^t (λ1 − 1) + c2 λ2^t (λ2 − 1)) / φ,

where c1 and c2 depend on v(0) and y(0). If λ1 ≠ λ2, then c1 and c2 have the following expressions (Clerc & Kennedy, 2002):

c1 = (φ y(0) − (λ2 − 1) v(0)) / (λ1 − λ2),
c2 = (φ y(0) − (λ1 − 1) v(0)) / (λ2 − λ1).

Analysis similar to the discrete-time case was made for the regions of interest, in terms of the value of φ. Thus, it was shown that system explosion depends on whether the condition:

max{|λ1|, |λ2|} > 1,

holds (Clerc & Kennedy, 2002). After these additional confirmations, Clerc and Kennedy (2002) generalized their model to approximate the original PSO form. Their major developments are reported in the next section.

Generalized System Representations

Clerc and Kennedy (2002) extended their study to more generalized models. The system of equation (22), produced by equations (20) and (21), was generalized by adding new coefficients:

vt+1 = α vt + βφ yt,
yt+1 = −γ vt + (δ − ηφ) yt,     (26)

with φ being a positive real number. The new system matrix is:

M′ = (  α      βφ    )
     ( −γ    δ − ηφ  ),

and let λ1′, λ2′ be its eigenvalues. Then, the system of equation (26) can be written in its analytic form as follows:

v(t) = c1 λ1′^t + c2 λ2′^t,
y(t) = (1/(βφ)) (c1 (λ1′ − α) λ1′^t + c2 (λ2′ − α) λ2′^t),     (27)

with,

c1 = (βφ y(0) − (λ2′ − α) v(0)) / (λ1′ − λ2′),
c2 = (βφ y(0) − (λ1′ − α) v(0)) / (λ2′ − λ1′).

We can now define two constriction coefficients (recall the corresponding PSO variant described in the previous chapter), χ1 and χ2, such that:

λ1′ = χ1 λ1 and λ2′ = χ2 λ2,

where λ1 and λ2 are defined by equation (24). Then, the following forms of χ1, χ2 are obtained (Clerc & Kennedy, 2002):

χ1 = ((α + δ − ηφ) + √((α − δ + ηφ)² − 4βγφ)) / (2 − φ + √(φ² − 4φ)),
χ2 = ((α + δ − ηφ) − √((α − δ + ηφ)² − 4βγφ)) / (2 − φ − √(φ² − 4φ)).     (28)

It is worth noting that the system of equation (26) requires an integer time parameter, t, while its analytical equivalent of equation (27) can also admit any positive real value of t. In the latter case, v(t) and y(t) become complex numbers, and the behavior of the system can be investigated in a 5-dimensional search space.

Clerc and Kennedy (2002) distinguished the following four interesting model classes, characterized by different relations among the five parameters, α, β, γ, δ, and η, so that real constriction coefficients are ensured:
1. Class 1: This class of models is obtained for,

α = δ and βγ = η².

In order to ensure real coefficients, one can simply take the additional condition:

χ1 = χ2 = χ ∈ R.

Then, a class of solutions is given by assuming that:

α = β = γ = δ = η = χ.

2. Class 1′: This class of models is obtained for,

α = β and γ = δ = η.

Taking again, χ1 = χ2 = χ, we obtain:

α = (2 − φ)χ + φ − 1 and γ = χ or γ = χ/(φ − 1).

3. Class 1′′: This class of models is obtained for,

α = β = γ = η,

and,

α = (χ(φ − 2) ± χ√((φ − 2)² + 4(φ − 1))) / (2(φ − 1)).

The case of δ = 1 has been studied extensively for historical reasons (Clerc & Kennedy, 2002).

4. Class 2: This class is obtained for,

α = β = 2δ and η = 2γ,

and, for 2γφ > δ, we have:

χ1 = 2δ / (2 − φ + √(φ² − 4φ)),
χ2 = (4δ − 4γφ) / (2 − φ − √(φ² − 4φ)).

Also, Clerc and Kennedy (2002) provided a set of conditions that involve the maximum value, φmax, of φ, such that the system remains continuous and real. The reader is referred to the original paper for more details.

From the analysis above, the following models that use a single constriction coefficient were distinguished and studied further (Clerc & Kennedy, 2002):

1. Model Type 1: This model is defined as:

v(t+1) = χ(v(t) + φ y(t)),
y(t+1) = χ(−v(t) + (1 − φ) y(t)),

with its convergence criterion being satisfied for χ < min(1/|λ1|, 1/|λ2|), resulting in the coefficient (Clerc & Kennedy, 2002):

χ = κ / |λ2|, κ ∈ (0,1).

2. Model Type 1′: This model is defined as:

v(t+1) = χ(v(t) + φ y(t)),
y(t+1) = χ(−v(t) + (1 − φ) y(t)),

with coefficient (Clerc & Kennedy, 2002):

χ = κ / |λ2|, κ ∈ (0,1), φ ∈ (0,2).

Further investigation revealed that convergence can also be achieved for φ ∈ (0,4) under an appropriate choice of χ, but not for higher values of φ (Clerc & Kennedy, 2002).

3. Model Type 1′′: This model is defined as,

v(t+1) = χ(v(t) + φ y(t)),
y(t+1) = −χ v(t) + (1 − χφ) y(t),     (29)

and, with proper substitutions, it results in the following form:

v(t+1) = χ[v(t) + φ(p − x(t))],
x(t+1) = x(t) + v(t+1),

which is the constriction coefficient PSO variant, described in the previous chapter as one of the state-of-the-art variants nowadays. Special analysis for this approach resulted in the following relation for the determination of the constriction coefficient (Clerc & Kennedy, 2002):

χ = 2κ / |2 − φ − √(φ² − 4φ)|, for φ > 4,
χ = √κ, otherwise,     (30)

for κ ∈ (0,1).

So far, the analysis was restricted to one-dimensional particles, with constant φ and a single fixed best position, p. Clerc and Kennedy (2002) generalized their theoretical derivations by using random values of φ and two vector terms for velocity update, as in the original PSO model. Thus, they showed that system explosion can be controlled simply by using proper values of the constriction coefficient.

Different PSO variants can be defined by using either of the aforementioned model types. Then, convergence rates depend on the selected system parameters. For example, the value κ = 1, which is a limit case for the models above, results in slow convergence, thereby promoting better exploration of the search space. This justifies the default parameter setting:

χ = 0.729, c1 = c2 = 2.05,

of the constriction coefficient PSO variant reported in the previous chapter, which corresponds to equation (30) evaluated for κ = 1 and φ1 = φ2 = 2.05 (hence, φ = φ1 + φ2 = 4.1).
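To make the constriction formula tangible, the following hedged Python sketch (our own illustration; the test function and all names are arbitrary choices, not part of the original analysis) computes χ from equation (30) and uses it in a minimal gbest constriction PSO. Note that χ evaluated at φ = 4.1 with κ = 1 reproduces the default value 0.729 quoted above.

```python
import math
import random

def constriction(phi, kappa=1.0):
    """Equation (30): constriction coefficient for a given phi = c1 + c2."""
    if phi > 4:
        return 2 * kappa / abs(2 - phi - math.sqrt(phi * phi - 4 * phi))
    return math.sqrt(kappa)

print(round(constriction(4.1), 3))   # 0.729

def sphere(x):                        # arbitrary test function for this demo
    return sum(xi * xi for xi in x)

def pso_con(f, dim=2, N=20, iters=200, c1=2.05, c2=2.05, lo=-10.0, hi=10.0):
    """Minimal gbest PSO with constriction coefficient (Type 1'' model)."""
    chi = constriction(c1 + c2)
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(N)]
    v = [[0.0] * dim for _ in range(N)]
    p = [xi[:] for xi in x]                        # personal best positions
    fp = [f(xi) for xi in x]
    g = p[min(range(N), key=lambda i: fp[i])][:]   # global best position
    for _ in range(iters):
        for i in range(N):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][j] = chi * (v[i][j]
                                 + c1 * r1 * (p[i][j] - x[i][j])
                                 + c2 * r2 * (g[j] - x[i][j]))
                x[i][j] += v[i][j]
            fx = f(x[i])
            if fx < fp[i]:
                fp[i], p[i] = fx, x[i][:]
                if fx < f(g):
                    g = x[i][:]
    return g, f(g)

random.seed(0)
best, val = pso_con(sphere)
print(val)   # a value close to zero
```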
The nice performance properties of the Type 1′′ model were also verified experimentally by Clerc and Kennedy (2002), and it was finally established as the state-of-the-art variant of PSO.

Clerc and Kennedy's analysis (2002) was perhaps the most important theoretical work on PSO. However, it was very complicated for readers with a poor mathematical background. One year after its publication, their work was reconsidered by Trelea (2003). He provided a simplified view of the previous complex analysis and performed a series of experiments on typical benchmark problems. Besides the typical parameter setting, χ = 0.729 and c1 = c2 = 2.05, he also investigated a promising alternative, namely χ = 0.6 and c1 = c2 = 2.83. His results verified that the latter can also be a beneficial setting, retaining nice convergence properties. Thus, it can now be considered as an alternative parameter setting for the constriction coefficient version of PSO.

Different PSO variants also became the subject of further theoretical investigations. Zheng et al. (2003) presented an analysis for the inertia weight variant. They considered a PSO model with increasing inertia weight, w ∈ (0.4, 0.9), and φ1, φ2 ∈ (0.5, 2.0), claiming that it outperforms standard PSO, at least on a small set of widely used benchmark problems. Van den Bergh and Engelbrecht (2006) reviewed the existing theoretical works a few years later, also providing a study on the inertia weight variant. Their results agreed with previous studies, further improving our understanding of the most popular PSO variants. A few more papers by different authors appeared in the past two years (Jiang et al., 2007; Xiao et al., 2007); however, they mostly reproduced or reviewed existing results, thus we refer the interested reader directly to these papers for further details.

The presented trajectory and stability analyses determined a set of efficient PSO variants, and provided instructions on parameter setting to achieve satisfactory convergence properties. However, we must note that all these studies consider convergence as the ability to reach a state of equilibrium, rather than to achieve a true local or global optimum. The latter would require additional information on the mathematical properties of the objective function. At the same time, the obtained guidelines for parameter setting referred to rather general environments, without taking into consideration any peculiarities of the problem at hand. However, if only a specific problem is of interest, then one can design algorithms specifically suited to it. Of course, this does not adhere to the requirement to "provide satisfactory solutions as soon as possible"; however, it can save much computational effort if instances of the same complex problem are considered repeatedly. Sensitivity analysis using computational statistics has proved to be a very useful tool in such cases (Bartz-Beielstein et al., 2004). An excellent contribution to experimental research in evolutionary computation is provided in Bartz-Beielstein (2006).

In the next section, we expose the design of PSO algorithms through computational statistics methodologies, and illustrate its application on a benchmark and a real world problem.

DESIGN OF PSO ALGORITHMS USING COMPUTATIONAL STATISTICS

Bartz-Beielstein et al. (2004) proposed an approach for determining the parameters of PSO, tailored to the optimization problem at hand.
The approach employs techniques from computational statistics and statistical experimental design, and is applicable with any optimization algorithm (Bartz-Beielstein, 2006). Its operation was illustrated on a well-known benchmark problem, as well as on a simplified model of a real world application, which involves the optimization of an elevator group controller. The following sections present the fundamental concepts of the technique, along with results for the aforementioned applications.

Fundamental Concepts of Computational Statistics

Computational statistics is a scientific field that embraces computationally intensive methods (Gentle et al., 2004), such as experimental design and regression analysis, which can be used to analyze the experimental setting of algorithms on specific test problems. A fundamental issue in these approaches is the determination of variables with significant impact on performance, which can be quantitatively defined as the averaged best function value obtained over a number of independent experiments. Such measures have been used for the empirical analysis of PSO (Shi & Eberhart, 1999), attempting to answer fundamental questions such as:

a. How do variations of swarm size influence performance?
b. Are there interactions between swarm size and the inertia weight value?

Addressing such issues can enhance our understanding of the operation of PSO, and help towards the design of more efficient variants.

The approach of Bartz-Beielstein et al. (2004) combines three types of statistical techniques, namely design of experiments (DOE), classification and regression trees (CART), and design and analysis of computer experiments (DACE). Thorough descriptions of these methodologies can be found in Breiman et al. (1984), Montgomery (2001), and Santner et al. (2003), while Bartz-Beielstein and Markon (2004) offer a comparison of the three methodologies for direct search algorithms. Below, we provide a rough description of DACE, which constitutes the core methodology in the study of Bartz-Beielstein et al. (2004) on designing PSO algorithms.

DACE: Design and Analysis of Computer Experiments

Let ad denote a vector, called the algorithm design, which contains specific settings of an algorithm. In PSO, ad can contain parameters such as the swarm size, the social and cognitive parameters, the inertia weight, etc. A design can be represented by a single vector, and the optimal one is denoted as ad*. Also, let pd represent a problem design, i.e., a structure that contains problem-related information, such as its dimension, the available computational resources (number of function evaluations), etc. Then, a run of an algorithm can be treated as a mapping of the two designs, ad and pd, to a stochastic output, Y(ad, pd).

The main goal in DACE is the determination of the design, ad*, which optimizes performance in terms of the required number of function evaluations. DACE was introduced for deterministic computer experiments; therefore, its use in stochastic cases requires its repeated application. The specification of a process model in DACE is similar to the selection of a linear or quadratic model in classical regression, and is analyzed in the following paragraphs.

DACE can be very useful for interpolating observations from computationally expensive simulations. For this purpose, a deterministic function shall be evaluated at m different design points.
Sacks et al. (1989) expressed a dynamic response, y(x), for a d-dimensional input vector, x, as the realization of a regression model, F, and a stochastic process, Z, as follows:

Y(x) = F(β, x) + Z(x).     (31)

This model generalizes the classical regression model, Y(x) = βx + ε. The stochastic process Z(x) is assumed to have zero mean, and covariance equal to:

V(ω, x) = σ² R(θ, ω, x),

between Z(ω) and Z(x), where σ² is the process variance and R(θ, ω, x) is a correlation model. The correlation function should be chosen with respect to the underlying process (Isaaks & Srivastava, 1989). Lophaven et al. (2002) provide a useful discussion of seven such models. Bartz-Beielstein et al. (2004) used correlations of the form:

R(θ, ω, x) = ∏_{j=1}^{d} Rj(θj, ωj − xj),

with Gaussian correlation functions:

Rj(θj, hj) = exp(−θj hj²),

where hj = ωj − xj and θj > 0. Then, the regression model can be defined by using ρ functions:

fj: R^d → R, j = 1, 2,…, ρ,

as follows (Sacks et al., 1989):

F(β, x) = Σ_{j=1}^{ρ} βj fj(x) = f(x)^T β,

where,

f(x) = (f1(x), f2(x),…, fρ(x))^T and β = (β1, β2,…, βρ)^T.

DACE also provides the mean square-error of the predictor, i.e., an estimation of the prediction error at an untried point.
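The following Python sketch (our own simplified illustration, not the DACE toolbox itself) shows the core of such a model for the special case of a constant regression term F(β, x) = β (ordinary kriging), using the Gaussian correlation above; the study described here used a richer second-order polynomial regression, so this is only meant to convey the mechanics.

```python
import numpy as np

def gauss_corr(theta, A, B):
    """R(theta, w, x) = prod_j exp(-theta_j (w_j - x_j)^2), for all pairs."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2         # pairwise squared diffs
    return np.exp(-(d2 * theta).sum(axis=2))

def fit_predict(X, y, Xnew, theta):
    """Ordinary kriging predictor: beta + r(x)^T R^{-1} (y - beta*1)."""
    R = gauss_corr(theta, X, X) + 1e-10 * np.eye(len(X))  # jitter for stability
    Rinv_1 = np.linalg.solve(R, np.ones(len(X)))
    beta = (Rinv_1 @ y) / Rinv_1.sum()                # generalized LS mean
    r = gauss_corr(theta, Xnew, X)
    return beta + r @ np.linalg.solve(R, y - beta)

# toy usage: interpolate a deterministic response at m = 6 design points
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(6, 1))
y = np.sin(4 * X[:, 0])
print(fit_predict(X, y, X, theta=np.array([10.0])))   # reproduces y (interpolation)
```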
In the next section, we expose the basic steps of its application for producing sequential designs of an algorithm.

Application of DACE on Sequential Designs

The algorithm design, ad, must be specified prior to experimentation with an algorithm. Designs that use sequential sampling are often more efficient than designs with fixed sample sizes. In this case, an initial design, ad(0), is specified, and the information gained in the first runs is exploited to define the next design, ad(1), and so on. Thus, new design points are chosen in a more sophisticated manner that enhances performance in many practical situations.

Several sequential sampling approaches with adaptation have been proposed for DACE (Sacks et al., 1989). Bartz-Beielstein et al. (2004) adopted a sequential sampling based on the expected improvement of the algorithm. The concept of improvement arose in the analysis of Santner et al. (2003, p. 178) and is defined as follows: let ymin(k) denote the smallest detected function value after k runs of a heuristic global optimization algorithm; x ∈ ad be a component of the design; and y(x) be the response of the algorithm, which is a realization of Y(x) defined in equation (31). Then, the improvement of the algorithm is defined as:

∆ = ymin(k) − y(x), if ymin(k) − y(x) > 0,
∆ = 0, otherwise.     (32)

The discussion in Santner et al. (2003) leads to the conclusion that new design points are promising if there is either a high probability that their predicted output is below the current observed minimum, and/or there is great uncertainty in the predicted output. This result is in line with the general intention to avoid sites that guarantee worse results.

The complete sequential design approach consists of the twelve steps reported in Table 4.

Table 4. The twelve steps of the sequential approach for tuning the performance of direct search algorithms

Step 1. Pre-experimental planning.
Step 2. Scientific hypothesis.
Step 3. Statistical hypothesis.
Step 4. Specifications:
  a. Optimization problem.
  b. Constraints.
  c. Initialization method.
  d. Termination method.
  e. Algorithm (important factors).
  f. Initial experimental design.
  g. Performance measure.
Step 5. Experimentation.
Step 6. Statistical modeling of data and prediction.
Step 7. Evaluation and visualization.
Step 8. Optimization.
Step 9. Termination: if the obtained solution is good enough, or the maximum number of iterations has been reached, then go to Step 11.
Step 10. Update design and go to Step 5.
Step 11. Rejection/acceptance of the statistical hypothesis.
Step 12. Objective interpretation of the results from Step 11.

Step 1 consists of pre-experimental planning. At this stage, the practitioner defines the object of study, as well as the exact sources from which data will be collected. Although this step seems trivial, formulating a generally accepted goal is not a simple task in practice. Discovery, confirmation, and robustness are the only three possible scientific goals of an experiment. Discovery asks what happens if new operators are implemented. Confirmation analyzes how the algorithm behaves on different problems, and robustness asks for conditions that decrease performance.

Statistical methods like run-length distributions (RLD) provide suitable means for measuring performance and describing qualitatively the behavior of optimization algorithms. The construction of an RLD plot requires k runs of the algorithm on a given problem instance, using different random number generator seeds. For each run, the maximum number of function evaluations, tmax, is set to a relatively high value. For each successful run, the number of required function evaluations, trun, is recorded. If the run fails, trun is set to infinity. These results are then represented by an empirical cumulative distribution function (CDF). Let trun(j) be the run-length of the j-th successful run. Then, the empirical CDF is defined as (Hoos, 1998):

Pr(trun ≤ t) = Trun(t) / k,

where Trun(t) denotes the number of indices, j, such that trun(j) ≤ t.

Step 2 consists of formulating a scientific hypothesis, based on the determined experimental goal. For example, a hypothesis can be a statement such as "the employed scheme improves the performance of the algorithm." Then, in Step 3, a statistical hypothesis, e.g., "there is no difference in means when comparing the performances of two competing schemes," needs to be determined. Step 4 requires the specification of an optimization problem, along with its essential elements: possible constraints (e.g., maximum number of function evaluations); the initialization method; the termination conditions; the algorithm and its important factors; an initial experimental design; and a performance measure.

Recalling the discussion in the beginning of the current chapter, initialization can be either deterministic or random, with the latter being the most popular for population-based stochastic algorithms. In general, we can have the following types of initialization for the repeated application of an algorithm:

(I-1) Deterministic initialization with constant seed: According to this scheme, the algorithm is repeatedly initialized to the same point, x(0), which is defined explicitly by the user.

(I-2) Deterministic initialization with different seeds: The algorithm is initialized to a different point for each run.
The initial points are user-defined and they all lie within an interval, [xlow, xup].

(I-3) Random initialization with constant seed: This is the random counterpart of (I-1), where the algorithm assumes the same, randomly selected, initial point for each run.

(I-4) Random initialization with different seeds: This is the random counterpart of (I-2), where the algorithm is initialized at each run to a different point, randomly selected within an interval, [xlow, xup].

The scheme (I-4) is typically used with evolutionary algorithms. On the other hand, (I-1) is a common choice for local gradient-based algorithms, such as quasi-Newton methods, where the user must provide an initial point sufficiently close to the global minimum in order to guarantee convergence.

Termination of the algorithm occurs when one or more user-defined conditions are met. The most common termination conditions are:

(T-1) Domain convergence: The sequence of candidate solutions produced by the algorithm converges to a minimizer.

(T-2) Function value convergence: The function values of the produced candidate solutions converge to a minimum.

(T-3) Algorithm stalls: The algorithm is not able to produce any new candidate solutions. Such cases occur when, for example, an evolutionary algorithm loses its diversity completely, or the velocities in PSO become practically equal to zero.

(T-4) Exhausted resources: This case refers to the inevitable termination of the algorithm when the maximum available computational budget (e.g., number of function evaluations or CPU time) is exhausted.

A discussion on the termination conditions commonly used with PSO is provided in the final section of the present chapter.

After determining the algorithm and the initial design, performance measures must be defined. In evolutionary computation, performance is usually measured by statistical moments of the number of function evaluations required to find a solution with a desirable accuracy. The mean value, standard deviation, minimum, and maximum number of function evaluations are such measures, used in the vast majority of experimental works.

In Step 5, the experiment is finally conducted. Preliminary (pilot) runs can give rough estimates regarding the experimental error, run times, and consistency of the experimental design. At this stage, RLDs can be very useful. For probabilistic search algorithms, functions may be evaluated several times (Santner et al., 2003). The experimental results provide the basis for modeling and prediction in Step 6, where the model is fitted and a predictor is obtained per response.

The model is evaluated in Step 7, where visualization techniques can also be applied. For example, simple graphical methods from exploratory data analysis, as well as histograms and scatter plots, can be used for the detection of outliers. If inappropriate initial ranges were chosen for the designs (e.g., very wide initial ranges), the visualization of the predictor can provide more suitable (narrower) ranges for the next stage. Several techniques for assessing the validity of the model have been proposed. Cross validation predictions versus actual values, as well as standardized cross validation residuals versus cross validation predictions, are widely used. Sensitivity analysis can be used to ascertain the dependence of the statistical model on its factors. Thorough presentations of variance-based methods for sensitivity analysis can be found in Chan et al. (1997, 2000), Saltelli (2002), and Saltelli et al. (2000), while Santner et al. (2003, p. 193) provide a description of ANOVA-type decompositions.
Computation of sensitivity indices can be done by decomposing the response into an average, main effects per input, two-input interactions, and higher-order interactions (Sacks et al., 1989, p. 417). Additionally, graphical methods can be used to visualize the effects of different factors, and their interactions, on the predictors. Predicted values can be plotted to support the analysis, while the MSE is used to assess prediction accuracy. At this point, we must underline that statistical models provide only guidelines for further experiments, not proofs that connect factors with particular effects. If the predicted values are inaccurate, the experimental setup has to be reconsidered. This especially concerns the specification of the scientific goals and the ranges of the design variables. Otherwise, if further experiments are necessary, new promising design points can be determined in Step 8.

Finally, a termination criterion is checked in Step 9. If it is not fulfilled, new candidate design points can be generated in Step 10, based on the expected improvement defined by equation (32). A new design point is selected if there is a high probability that the predicted output is below the currently observed minimum, and/or it is characterized by large uncertainty. Otherwise, if the termination criterion is fulfilled and the obtained solution is good enough, the final statistical evaluation takes place in Step 11, summarizing the results. A comparison between the first and the improved configuration shall be made. For this purpose, techniques from exploratory data analysis can complement the analysis at this stage. Besides that, graphical representations, such as boxplots, histograms, and RLDs, can be used to support the final statistical decision.

Finally, the scientific importance of the results remains to be decided in Step 12, since any difference, although statistically significant, can be scientifically meaningless. Thus, it is up to the practitioner to assess the importance of the results based on personal experience. The experimental setup should be considered again at this stage, and questions like "have suitable test functions or performance measures been chosen?", or "did floor or ceiling effects occur?" must be answered. Simple test problems may cause such ceiling effects. If two algorithms, A and B, achieve their maximum level of performance (or close to it), then the hypothesis "performance of A is better than performance of B" should not be confirmed (Cohen, 1995). Floor effects describe the same phenomenon on the opposite side of the performance scale, i.e., the test problem is so hard that nearly no algorithm can solve it correctly. Such effects can occur when the number of function evaluations is very small. In these cases, performance profiles can help the practitioner decide whether ceiling effects have occurred or not.
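As a small illustration of how Steps 8-10 can rank candidate design points, the hedged sketch below (our own; the scoring rule is a simplification of expected-improvement ideas, not the exact criterion of the study) combines the improvement of equation (32) with the predictor's uncertainty, favoring points that are predicted to beat the current minimum and/or carry a large MSE.

```python
def improvement(y_min_k, y_pred):
    """Equation (32): positive part of y_min(k) - y(x)."""
    return max(y_min_k - y_pred, 0.0)

def rank_candidates(candidates, y_min_k, weight=1.0):
    """candidates: list of (design_point, predicted_value, predicted_mse).
    Scores promising points: predicted improvement plus an uncertainty bonus."""
    scored = [(improvement(y_min_k, y_hat) + weight * mse ** 0.5, point)
              for point, y_hat, mse in candidates]
    scored.sort(reverse=True)          # best candidates first
    return [point for _, point in scored]

# toy usage with three hypothetical design points:
cands = [((21, 2.25), 40.0, 4.0), ((30, 2.22), 80.0, 25.0), ((11, 1.58), 95.0, 1.0)]
print(rank_candidates(cands, y_min_k=75.0))
```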
In the next section, we put forward the sequential design technique for tuning PSO on a benchmark function, as well as on a simulated real world problem.

Applications of PSO with the Sequential Design Approach

Bartz-Beielstein et al. (2004) applied the procedure described in the previous sections on both the inertia weight PSO variant, defined as:

vij(t+1) = w vij(t) + c1R1(pij(t) − xij(t)) + c2R2(pgj(t) − xij(t)),
xij(t+1) = xij(t) + vij(t+1),
i = 1, 2,…, N, j = 1, 2,…, n,

and the constriction coefficient variant, defined as:

vij(t+1) = χ [vij(t) + c1R1(pij(t) − xij(t)) + c2R2(pgj(t) − xij(t))],
xij(t+1) = xij(t) + vij(t+1),
i = 1, 2,…, N, j = 1, 2,…, n.

We will henceforth denote the two variants as PSO[in] and PSO[con], respectively. In PSO[in], the parameters:

wscale, witerScale ∈ [0,1],

were used to parameterize the linear decrease of the inertia weight, w, from a maximum value, wmax, to a scaled value, wmax × wscale, over tmax × witerScale iterations of the algorithm, where tmax stands for the maximum allowed number of iterations. For the remaining (1 − witerScale) × tmax iterations, the inertia weight remains constant at the value wmax × wscale (a small code sketch of this schedule is given below, after the design tables). In parallel, all particles were considered to lie within a range, [xmin, xmax]^n, while the velocity was clamped within [−vmax, vmax], as follows:

xmin ≤ xij ≤ xmax and −vmax ≤ vij ≤ vmax,
i = 1, 2,…, N, j = 1, 2,…, n.

In the experiments of Bartz-Beielstein et al. (2004), the global (gbest) PSO variant was used, i.e., pg denotes the best particle of the whole swarm. Table 5 reports all exogenous parameters of PSO[in] and PSO[con], with the latter given in terms of its equivalent inertia weight representation. In fact, PSO[con] is equivalent to PSO[in] for the parameter values, wmax = 0.729, c1 = c2 = 1.4944, wscale = 1.0, and witerScale = 0.0.

Table 5. The exogenous parameters of both PSO variants, with the constriction coefficient variant represented in an equivalent inertia weight form

Symbol       Range   PSO[in]   PSO[con]
N            N       40        40
c1           R+      2         1.4944
c2           R+      2         1.4944
wmax         R+      0.9       0.729
wscale       R+      0.4       1.0
witerScale   R+      1.0       0.0
vmax         R+      100       100

The problem design, pd, which summarizes all information of Step 4 in the sequential design of Table 4, is reported in Table 6. More specifically, the number of the experiment; the number of runs, k; the maximum number of function evaluations, tmax; the problem dimension, n; the initialization and termination methods; the lower, xmin, and upper, xmax, initialization bounds; as well as the corresponding optimization problems are reported. The considered problems are discussed in the following sections.

Table 6. Problem design, pd, for the PSO runs

Exp.   k    tmax   n    Init.   Term.   xmin   xmax   Problem
1      50   2500   10   (I-4)   (T-4)   15     30     Rosenbrock
2      50   1000   12   (I-3)   (T-4)   −10    10     S-Ring

The algorithm design, ad, is described in Table 7, summarizing tasks (e) to (f) of Step 4. An experimental design, ed, consists of both (problem and algorithm) designs. The results reported in Table 7 refer to PSO[in] of Experiment 1 in Table 6, which optimizes the 10-dimensional Rosenbrock function. Latin hypercube designs (LHD) were adopted in the experiments, as reported by Bartz-Beielstein et al. (2004). We denote with ad(l) and ad(u) the lower and upper bounds, respectively, for the generation of the LHD, while ad* denotes the parameter settings of the improved design found by the sequential approach.

Table 7. Algorithm design, ad, for the PSO[in] variant that corresponds to Experiment 1 of Table 6, which optimizes the 10-dimensional Rosenbrock function

Design   N     c1        c2        wmax       wscale     witerScale   vmax
ad(l)    5     1.0       1.0       0.7        0.2        0.5          10
ad(u)    100   2.5       2.5       0.99       0.5        1            750
ad*      21    2.25413   1.74587   0.788797   0.282645   0.937293     11.0496

PSO[con] requires the determination of four exogenous strategy parameters, namely the swarm size, N; the constriction factor, χ; the parameter φ = c1 + c2; and the maximum velocity, vmax. Aslett et al. (1998) reported that, according to their experience with the stochastic process model, ten times the expected number of important factors constitutes an adequate number of runs for the initial LHD. Thus, an LHD with at least m = 15 design points was chosen. This is the minimum number of design points to fit a DACE model that consists of a second-order polynomial regression model and a Gaussian correlation function, since the former requires 1 + Σ_{i=1}^{4} i = 11 design points, while the latter requires 4 design points. Note that, for m = 15, there are no degrees of freedom left to estimate the mean square-error of the predictor (Santner et al., 2003).
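Returning to the PSO[in] parameterization above, the wscale/witerScale schedule is easy to express in code. The sketch below (our own reading of the description; the function name is ours) returns the inertia weight at iteration t, decreasing linearly from wmax to wmax × wscale over the first witerScale × tmax iterations and staying constant afterwards.

```python
def inertia_weight(t, t_max, w_max=0.9, w_scale=0.4, w_iter_scale=1.0):
    """Inertia weight schedule of PSO[in], parameterized by wscale and witerScale."""
    t_dec = w_iter_scale * t_max          # length of the decreasing phase
    w_end = w_max * w_scale               # final (scaled) inertia weight
    if t_dec > 0 and t < t_dec:
        return w_max - (w_max - w_end) * (t / t_dec)   # linear decrease
    return w_end                          # constant for the remaining iterations

# default PSO[in] design of Table 5: w goes from 0.9 down to 0.36 over all t_max
print([round(inertia_weight(t, 100), 3) for t in (0, 50, 99, 100)])
```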
Let us now analyze the application of PSO on each considered problem individually.

Application on the Rosenbrock Function

The first test case considered by Bartz-Beielstein et al. (2004) was the optimization of the Rosenbrock function. This is a well-known benchmark problem, defined as TPUO-2 in Appendix A of the book at hand, and reproduced below for presentation consistency:

f(x) = Σ_{i=1}^{n−1} [(1 − xi)² + 100(xi+1 − xi²)²],     (33)

with x ∈ R^n. The selection of this simple problem was based on the intention of Bartz-Beielstein et al. (2004) to demonstrate the workings of DACE in a simple manner, as well as to reveal its potential to improve performance in terms of the required number of function evaluations.
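For reference, equation (33) translates directly into code; the short sketch below (ours) evaluates the function and checks the global minimum at x* = (1,…,1).

```python
def rosenbrock(x):
    """Rosenbrock function of equation (33)."""
    return sum((1 - x[i]) ** 2 + 100 * (x[i + 1] - x[i] ** 2) ** 2
               for i in range(len(x) - 1))

print(rosenbrock([1.0] * 10))   # 0.0 at the global minimizer x* = (1, ..., 1)
print(rosenbrock([0.0] * 10))   # 9.0: nine terms, each contributing (1-0)^2 = 1
```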
Let us now analyze each step of Table 4 for the case of PSO[in] on the Rosenbrock function, as a comprehensive example of the complete application of DACE (Bartz-Beielstein et al., 2004):

(Step 1) Pre-experimental planning: Pre-experimental tests to explore the optimization potential supported the assumption that tuning might improve the performance of PSO on the Rosenbrock function. RLDs revealed the existence of a configuration able to complete the run successfully using less than 8000 iterations for nearly 80% of the cases. This was less than half the number of function evaluations used in the reference study of Shi and Eberhart (1999), justifying the usefulness of further analysis.

(Step 2) Scientific hypothesis: There exists a parameterization, ad*, of PSO[in] that improves its efficiency significantly.

(Step 3) Statistical hypothesis: PSO[in] with the parameterization ad* outperforms PSO[in] with the default parameterization, ad(0), used in (Shi & Eberhart, 1999).

(Step 4) Specification: The 10-dimensional instance of the Rosenbrock function defined by equation (33) is the optimization problem under consideration. Its global minimizer is x* = (1, 1,…, 1)^T, with global minimum, f* = 0. In accordance with the experimental design of Shi and Eberhart (1999), the mean fitness value of the best particle of the swarm was recorded over 50 runs, and is denoted as fB(50). For the production of RLD plots, a threshold value to distinguish successful from unsuccessful runs was specified. Thus, a run configuration was classified as successful if fB(50) < fShi, where fShi = 96.1715 is the corresponding value reported in (Shi & Eberhart, 1999). The initial problem design of Table 6 was used, together with the corresponding algorithm design of Table 7, which covers a wide range of interesting parameter settings (regions of interest). No problem-specific knowledge of the Rosenbrock function was used, expecting that the sequential approach would guide the search towards promising regions.

(Step 5) Experimentation: Table 8 reports the optimization process. Each line in Table 8 corresponds to one optimization step of the sequential approach. At each step, two new designs are generated and the best one is re-evaluated. The number, k, of runs of the algorithm designs is increased (doubled) if a design has performed best twice or more, with starting value, k = 2. For example, design 14 performs best at iteration 1 and iteration 3. It has been evaluated 4 times; therefore, the number of evaluations is set to 4 for every newly generated design. This provides a fair comparison and reduces the risk of incorrectly selecting a worse design.

(Step 6) Statistical modeling and prediction: The response is modeled as the realization of a regression model and a random process, as described in equation (31). For this purpose, a Gaussian correlation function and a regression model with a polynomial of order 2 were used. Hence, the model is defined by:

Y(x) = Σ_{j=1}^{ρ} βj fj(x) + Z(x),

where Z(x) is a random process with zero mean, and covariance defined as:

V(ω, x) = σ² R(θ, ω, x),

with correlation function:

R(θ, ω, x) = ∏_{j=1}^{n} exp(−θj (ωj − xj)²).

Additionally, at certain stages, a tree-based regression model was constructed to determine parameter settings that produce outliers.

(Step 7) Evaluation and visualization: The MSE and the predicted values can be plotted to support the numerical analysis. For this purpose, the DACE toolbox of Lophaven et al. (2002) can be used. For example, the interaction between c1 and c2 is shown in Fig. 8. Values of c1 and c2 with c1 + c2 > 4 generate outliers that might disturb the analysis. To alleviate these outliers, a design correction was applied, by requiring c1 = 4 − c2 whenever c1 + c2 > 4. The right part of Fig. 8 illustrates the estimated MSE. Since no design point has been placed for 1 < c1 < 1.25 and 2.25 < c2 < 2.5, the MSE is relatively high there. This might be an interesting region where a new design point will be placed during the next iteration. Figure 9 depicts the same situation as Fig. 8, after the design correction. In this case, a high MSE is associated with the region c1 + c2 > 4, but no design point is placed there.

(Step 8) Optimization: Based on the expected improvement defined in equation (32), two new design points, ad(1) and ad(2), are generated. These designs are evaluated and their performance is compared to that of the current best design. Then, the best design found so far is re-evaluated. The iteration terminates if a design was evaluated for k = 50 times, and the solution is obtained. The parameter values in the final model, as reported in Table 8, are, N = 21, c1 = 2.25413, c2 = 1.74587, wmax = 0.788797, wscale = 0.282645, witerScale = 0.937293, and vmax = 11.0496 (Bartz-Beielstein et al., 2004).
This step also covers the termination procedure (Step 9) and the design update (Step 10) of the sequential design reported in Table 4.

(Step 11) Rejection/acceptance of the statistical hypothesis: Finally, the configuration of Shi and Eberhart (1999) is compared to the optimized one. The final (tuned) and the first configuration are applied 50 times each. Histograms and boxplots are illustrated in Fig. 10 for both PSO variants. Clearly, the tuned design of PSO[in] exhibits significant performance improvement. The corresponding statistical analysis is reported in Table 9 (Bartz-Beielstein et al., 2004). Performing a classical t-test indicates that the null hypothesis "there is no difference in the mean performance of the two algorithms" can be rejected at the 5% level.

Table 8. Optimizing the 10-dimensional Rosenbrock function with PSO[in]. Each row represents the best algorithm design at the corresponding tuning stage. Note that function values (reported in the first column) can worsen (increase) although the design is improved. This happens due to noise in the results. The probability that a seemingly good function value is in fact worse exists. However, it decreases during the sequential procedure, because the number of re-evaluations is increased. The number of repeats is doubled if a configuration performs best twice. These configurations are marked with an asterisk

y         N    c1        c2        wmax       wscale     witerScale   vmax      Conf.
6.61557   26   1.45747   1.98825   0.712714   0.481718   0.683856     477.874   14
18.0596   39   1.30243   1.84294   0.871251   0.273433   0.830638     289.922   19
71.4024   26   1.45747   1.98825   0.712714   0.481718   0.683856     477.874   14*
78.0477   30   2.21960   1.26311   0.944276   0.289710   0.893788     237.343   3
75.6154   30   2.21960   1.26311   0.944276   0.289710   0.893788     237.343   3*
91.0935   18   1.84229   1.69903   0.958500   0.256979   0.849372     95.1392   35
91.5438   21   1.05527   1.25077   0.937259   0.498268   0.592607     681.092   43
93.7541   11   1.58098   2.41902   0.728502   0.469607   0.545451     98.9274   52
93.9967   93   1.71206   1.02081   0.966302   0.378612   0.972556     11.7651   20
99.4085   39   1.30243   1.84294   0.871251   0.273433   0.830638     289.922   19*
117.595   11   1.13995   2.31611   0.785223   0.236658   0.962161     56.9096   57
146.047   12   1.51468   2.48532   0.876156   0.392995   0.991074     261.561   1
147.410   22   1.72657   2.27343   0.710925   0.235521   0.574491     50.5121   54
98.3663   22   1.72657   2.27343   0.710925   0.235521   0.574491     50.5121   54*
41.3997   21   2.25413   1.74587   0.788797   0.282645   0.937293     11.0496   67*
43.2249   21   2.25413   1.74587   0.788797   0.282645   0.937293     11.0496   67*
53.3545   21   2.25413   1.74587   0.788797   0.282645   0.937293     11.0496   67*

(Step 12) Objective interpretation: The statistical results from Step 11 suggest that PSO with the tuned design performs better (on average) than the default design. Comparing the parameters of the improved design, ad*, reported in Table 7, with the default setting of PSO, no significant differences are observed, except for vmax, which appears to be relatively small. A swarm size of about 20 particles was shown to be a good value for this problem.

The analysis and tuning procedure described above were based solely on the average function value over 50 runs. However, in a different optimization context, this measure may be irrelevant, and the best function value (minimum) or the median could be used alternatively. In this case, an optimization procedure similar to the one presented can be conducted, although the resulting optimal designs may differ from the ones reported above.

Figure 8. Predicted values (left) and MSE (right).
As we can see in the left part of the figure, c1 + c2 > 4 produces outliers that complicate the analysis.

Figure 9. Predicted values (left) and MSE (right). The design correction avoids settings with c1 + c2 > 4 that produce outliers (left). Therefore, a high mean squared error exists in the excluded region (right)

The aforementioned procedure focused on the case of PSO[in], to provide a comprehensive example of the sequential design. PSO[con] was tuned in a similar manner (Bartz-Beielstein et al., 2004). The initial LHD for PSO[con] is reported in Table 10, where ad(l) and ad(u) denote the lower and upper bounds of the region of interest, respectively, ad* is the improved design found by the sequential procedure, and ad(Clerc) is the default design recommended in (Clerc & Kennedy, 2002). From the numerical results reported in Table 9 and the corresponding graphical representations (histograms and boxplots) in Fig. 10, we can derive that there is no significant difference between the performance of ad* and ad(Clerc). This constitutes an experimental verification of the analysis of Clerc and Kennedy (2002) presented previously in the current chapter.

Application on a Real World Problem

Bartz-Beielstein et al. (2004) also applied the sequential design approach on a real-world problem, namely the optimization of an elevator group controller. The construction of elevators for high-rise buildings is a challenging task. The elevator group controller is a central part of an elevator system, with the duty of assigning elevator cars to service calls in real-time, while optimizing the overall service quality, traffic throughput, and/or energy consumption. The elevator supervisory group control (ESGC) problem can be classified as a combinatorial optimization problem (Barney, 1986; Markon & Nishikawa, 2002; So & Chan, 1999), and, due to many difficulties in analysis, design, simulation, and control, it has been studied for a long time.

Figure 10. Histograms and boxplots for both PSO variants. Left: Solid lines and light bars represent the improved design. Right: The default configuration is denoted with the index 1, whereas index 2 denotes the improved variant. Top: Both plots indicate that the tuned inertia weight PSO version performs better than the default version. Bottom: No clear difference can be detected when comparing the default with the improved constriction factor PSO variant

The elevator group controller determines the floors where the elevator cars should go. Since it is responsible for the allocation of elevators to hall calls, a control strategy, π, also called a policy, is needed. One important goal in designing an efficient controller is the minimization of the passenger waiting time, which is defined as the time needed until a passenger can enter an elevator after having requested service. Service time includes the waiting time and, additionally, the time that a passenger spends in the elevator car. During a day, different traffic patterns can be observed. For example, in office buildings, "up-peak" traffic is observed in the morning, when people start working, and "down-peak" traffic is observed in the evening. Most of the day, traffic is balanced, with much lower intensity than at peak times.
Lunchtime traffic consists of two (often overlapping) phases, where people first leave the building for lunch or head for a restaurant floor, and then get back to work (Markon, 1995).

Fujitec, one of the world's leading elevator manufacturers, developed a controller that uses a neural network and a set of fuzzy controllers. The weights of the output layer of the neural network can be modified and optimized, thereby resulting in a more efficient controller. The associated optimization problem is complex, since the distribution of local optima in the search space is unstructured, and there are flat plateaus of equal function values. Additionally, the objective function is contaminated by noise and changes dynamically, as it is influenced by the stochastic behavior of customers. Experiments have shown that gradient-based optimization techniques cannot be applied successfully on such problems, suggesting the application of stochastic population-based algorithms (Beielstein et al., 2003).

Elevator group controllers and their related policies are usually incomparable. To enable comparability for benchmark tests, a simplified elevator group control model, called S-ring (which stands for "sequential ring"), was developed. This model enables fast and reproducible simulations, and it is applicable to different buildings and traffic patterns. In addition, it is characterized by scalability, and it can be easily extended. Thus, it is appropriate as a test problem generator.

Table 10. Algorithm design, ad, of PSO[con] for the 10-dimensional Rosenbrock function. We denote with ad(l) and ad(u) the lower and upper bounds, respectively, for the generation of the LHD, while ad* denotes the parameter settings of the improved design that was found by the sequential approach

Design      N     χ       φ       vmax
ad(l)       5     0.68    3.0     10
ad(u)       100   0.8     4.5     750
ad*         17    0.759   3.205   324.438
ad(Clerc)   20    0.729   4.1     100

Table 9. Results for the Rosenbrock function. Default designs, ad(Shi) and ad(Clerc), from (Shi & Eberhart, 1999) and (Clerc & Kennedy, 2002), respectively, as well as the improved designs obtained for k = 50 runs, are reported

Design      Algorithm   Mean         Median     StD          Min       Max
ad(Shi)     PSO[in]     1.8383×10³   592.1260   3.0965×10³   64.6365   18519
ad*         PSO[in]     39.7029      9.4443     55.3830      0.7866    254.1910
ad(Clerc)   PSO[con]    162.02       58.51      378.08       4.55      2.62×10³
ad*         PSO[con]    116.91       37.65      165.90       0.83      647.91

The approach presented in (Bartz-Beielstein et al., 2004) uses the simulator of Fujitec, which is depicted in Fig. 11. This simulator is characterized by high accuracy, but heavy computational cost. The corresponding coarse (surrogate) S-ring model is depicted in Fig. 12. This is fast to solve, albeit at the cost of lower accuracy. Their approach also incorporates space mapping (SM) techniques, which are used to iteratively update and optimize surrogate models (Bandler et al., 2004), while the main goal is the computation of an improved solution with a minimal number of function evaluations.

To put it more formally, let i denote a site in an elevator system. A 2-bit state, (si, ci), is associated with it, where si is set to 1 if a server is present at the i-th site, or 0 otherwise, while ci is set to 1 if there is at least one waiting passenger at site i, or 0 otherwise. Figure 12 depicts a typical S-ring configuration. The state of the system at time t can be described by a vector:

x(t) = (s0(t), c0(t),…, sd−1(t), cd−1(t)) ∈ {0,1}^{2d}.     (34)
Thus, the state of the system depicted in Fig. 12 is given by a vector of the form x(t) = (0,1,0,0,…,0,1,0,0)^T, where there is a customer waiting at the first site (c0 = 1) but no elevator present (s0 = 0), and so on. The dynamics of the system are modeled through a state transition table. The state evolution is sequential, scanning the sites from d-1 down to 0, and then again around from d-1.

The ascending and descending elevator movements can be considered as a loop. This motivates the ring structure. Each time step considers one of the floor queues, where passengers may arrive with a specific probability. Consider the situation at the third site (the upwards direction on the third floor) in Fig. 12. Since a customer is waiting and a server is present, the controller has to make a decision. The elevator car can either serve ("take" decision) or ignore the customer ("pass" decision). The former decision would change the values of the corresponding bits of x(t) from (1,1) to (0,1), while the latter would change them to (1,0).

Figure 11. Fujitec's simulator for the visualization of the elevator system dynamics in the case of a building with 15 floors and 6 elevators.

The operational rules of this model are very simple. Thus, it is easily reproducible and suitable for benchmark testing. Despite its simplicity, it is hard to find the optimal policy, π*, even for a small S-ring. The actual π* is not obvious, and its difference from heuristic suboptimal policies is non-trivial (Bartz-Beielstein et al., 2004). The transition of the S-ring model from simulation to the corresponding optimization problem requires the introduction of an objective function. The function that counts the sites with waiting customers at time t is defined as:

Q(t) = Q(x, t) = \sum_{i=0}^{d-1} c_i(t).

The steady-state, time-average number of sites with waiting customers in queue is given by:

Q_S = \lim_{T \to \infty} \frac{1}{T} \int_0^T Q(t) \, dt,

with probability 1. For a given S-ring configuration, the basic optimal control problem is the detection of a policy, π*, such that the steady-state, time-average expected number, QS, of sites with waiting passengers in the system is minimized, i.e.,

\pi^* = \arg\min_{\pi} Q_S(\pi).

The policy can be represented by a 2d-dimensional vector, y ∈ R^{2d}. Let θ: R → {0,1} be the Heaviside function, defined as:

\theta(z) = \begin{cases} 0, & z < 0, \\ 1, & z \ge 0, \end{cases}

let x = x(t) be the state at time t, as defined by equation (34), and let y ∈ R^{2d} be a weight vector. Then, a linear discriminator or perceptron,

\pi(x, y) = \theta(\langle x, y \rangle), (35)

where \langle x, y \rangle = \sum_i x_i y_i, can be used to model the decision process in a compact manner. For a given vector, y, which represents a policy, and a given vector, x, which represents the state of the system, a "take" decision occurs if π(x, y) = 1, i.e., if ⟨x, y⟩ ≥ 0. Otherwise, the elevator ignores the customer.

Figure 12. The S-ring elevator model. The case of a building with 6 floors and 3 elevators is illustrated.

The most obvious heuristic policy is the greedy one, i.e., to always serve the customer if possible, which is represented by the 2d-dimensional vector y0 = (1, 1,…, 1)^T. This vector guarantees that the result in equation (35) always equals 1, which is interpreted as a "take" decision (a minimal sketch of this decision rule is given below). However, this policy is not optimal, except in the case of heavy traffic.
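To make equations (34) and (35) concrete, the following minimal Python sketch, which is ours and not taken from the cited works, encodes an S-ring state vector, counts the occupied queues, and evaluates the perceptron policy for the greedy weight vector y0. All names and the example state are illustrative.

```python
import numpy as np

def q_waiting(x, d):
    """Q(t): number of sites with waiting customers, i.e., the sum of the c_i bits."""
    # x is the 2d-dimensional state (s_0, c_0, ..., s_{d-1}, c_{d-1}) of eq. (34).
    return int(sum(x[2 * i + 1] for i in range(d)))

def policy(x, y):
    """Perceptron policy of equation (35): 'take' (1) iff <x, y> >= 0."""
    return 1 if np.dot(x, y) >= 0.0 else 0

d = 6                                   # number of sites
# Example state: a customer waits at site 0 (c0 = 1) with no server there (s0 = 0).
x = np.array([0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0])
y_greedy = np.ones(2 * d)               # greedy policy y0 = (1, ..., 1)^T

print(policy(x, y_greedy))              # 1 -> "take"
print(q_waiting(x, d))                  # 2 occupied queues in this state
```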
This means that a good policy must occasionally bypass some customers to protect the system from a bunching effect, which occurs when nearly all elevator cars are positioned in close proximity to each other.

The perceptron S-ring can serve as a benchmark problem for different optimization algorithms, since it relies on a fitness function that maps R^{2d} to R. In general, a policy, π, can be realized as a look-up table of the system state, x. Then, the optimal policy, π*, could be found by enumerating all possible policies and selecting the one with the lowest value of Q(π). Since this count grows exponentially with d, the enumerative approach would not work for any but the smallest cases.

Bartz-Beielstein et al. (2004) used the S-ring simulator to define a 12-dimensional optimization problem with noisy function values, and applied the sequential design technique with PSO to it. The number of function evaluations was limited to 1000 for each optimization run, which appears to be a realistic choice for real-world applications. The related problem design was reported as Experiment 2 in Table 6 in the previous section, along with the design of the Rosenbrock function.

Similarly to the analysis for the Rosenbrock function, the constriction coefficient PSO variant, PSO[con], as well as the inertia weight variant, PSO[in], were analyzed. The former requires only 4 exogenous strategy parameters, while 7 parameters have to be specified for the latter. Table 11 contains the results in terms of the obtained solution values. Optimizing PSO[in] improved its robustness, as observed in Table 11. The average function value decreased from 2.61 to 2.51, which is a significant difference. However, it is very important to note that the minimum function value could not be improved, but increased slightly from 2.4083 to 2.4127, i.e., the tuning procedure was able to find an algorithm design that prevents outliers and produces robust solutions at the cost of an aggressive exploratory behavior. Nevertheless, if the requirement of finding a solution with a minimum function value had been specified as the optimization goal, then different optimal designs would have been detected (Bartz-Beielstein et al., 2004).

Although function values look slightly better, the tuning process produced no significant improvement for PSO[con]. It seems that PSO[con] was unable to escape plateaus of equal fitness. This is an already identified property of the employed gbest PSO version (Bartz-Beielstein et al., 2004), and it occurred independently of the parameterization of the exogenous strategy parameters. Besides PSO[con], Bartz-Beielstein et al. (2004) also used the NSM method, described in the beginning of this chapter, as well as a quasi-Newton gradient-based approach, against PSO[in]. However, all algorithms were outperformed
A draw-back of the sequential design approach, which is common to all statistical methods in this field, is thedetermination of a good initial design. This may be a very interesting direction for future research.TERMINATION CONDITIONSIn the previous sections, we mentioned some of the most common termination conditions used in prac-tice. This is perhaps the most user-dependent phase of the optimization procedure for any optimizationalgorithm. The decision for stopping the algorithm can depend on several criteria, related to the availableproblem information, resources, or the ability of the algorithm to attain further solutions.Let {xi}i=1,2,…denote the sequence of solutions produced by an algorithm, with xi= (xi1, xi2,…, xin)T,and {fi}i=1,2,…be the corresponding sequence of function values, i.e., fi= f(xi), for all i. For example, inPSO this sequence may consist of the overall best positions and their function values. Let, also, x* be a(local or global) minimizer of the objective function, and f* = f(x*) be the corresponding (local or global)minimum. Subsequently, we can roughly distinguish four categories of stopping conditions, which aredescribed in the following sections. All cases refer to the unconstrained optimization problem, whileslight modifications in formulations may be required under the presence of constraints.Convergence in Search SpaceIf ||∙|| denotes a distance measure in the search space, A, then convergence in search space is definedas:lim || *|| 0.iix x→∞− =Since the available number of iterations is always finite, the convergence criterion can be relaxed asfollows: for any desirable accuracy, ε > 0, there is an integer, k > 0, such that:Table 11. Results for the S-ring model. Default designs, ad[Shi]and ad[Clerc], from (Shi & Eberhart, 1999) and(Clerc & Kennedy, 2002), respectively, as well as the improved design for k = 50 runs, are reportedDesign Algorithm Mean Median StD Min Maxad(Shi)PSO[in]2.6152 2.5726 0.4946 2.4083 5.9988ad* PSO[in]2.5171 2.5112 0.0754 2.4127 2.6454ad(Clerc)PSO[con]4.1743 2.6252 1.7021 2.5130 5.9999ad* PSO[con]4.1707 2.6253 1.7055 2.5164 5.9999 • 99. 81Theoretical Derivations and Application Issues||xi– x*|| ≤ ε, for all i ≥ k. (36)Thus, the algorithm is terminated as soon as a solution adequately close to the minimizer is detected.In practice, the minimizer, x*, is unknown; thus, we can identify convergence by monitoring the gradi-ent at the approximating solutions. However, in order to extract sound conclusions through gradients,the existence of strong mathematical properties, such as continuous differentiability, are required forthe objective function. If f(x) is twice differentiable in A, and its gradient and Hessian matrix at xiaredefined as:T1 2( ) ( ) ( )( ) , , ,i i iinf x f x f xf xx x x ∂ ∂ ∂∇ =  ∂ ∂ ∂ ,2 221 122 221( ) ( )( )( ) ( )i inii in nf x f xx x xf xf x f xx x x ∂ ∂ ∂ ∂ ∂  ∇ = ∂ ∂  ∂ ∂ ∂   respectively, then xiis identified as an approximation of the minimizer, x*, with accuracy, ε > 0, if itholds that:||∇f(xi)|| < ε and zT∇2f(xi)z > 0, (37)for all non-zero vectors z (i.e., the Hessian matrix is positive definite).The aforementioned termination conditions for unconstrained optimization can be applied reliablyonly in noiseless functions with nice mathematical properties. Moreover, they require the computationof first-order and second-order derivatives, which are not always available in complex problems. 
The aforementioned termination conditions for unconstrained optimization can be applied reliably only to noiseless functions with nice mathematical properties. Moreover, they require the computation of first-order and second-order derivatives, which are not always available in complex problems. In addition, there is no way to distinguish whether the obtained minimizer is the global one, unless additional restrictions (e.g., convexity) are imposed on the form of the objective function. PSO and, in general, evolutionary algorithms have been designed to solve problems where the aforementioned mathematical properties are not necessarily met. Thus, this type of termination condition is of limited practical interest.

Convergence in Function Values

There are several optimization problems where the global minimum, f*, is a priori known due to the form of the objective function. For example, neural network training is equivalent to the minimization of a function that is usually defined as the summed square error of the network's output. This function, by construction, has a global minimum equal to zero. Similarly, fixed points of nonlinear mappings can be detected by minimizing a sum of square or absolute errors. Again, this objective function has a global minimum equal to zero.

In such cases, the following condition of convergence in function values can be used as the termination criterion:

|f_i - f^*| ≤ ε, (38)

for a user-defined accuracy, ε > 0. Although this condition has milder mathematical requirements than that of equation (37), its applicability is questionable in general cases, since f* is known, or can be bounded from below, only for specific types of functions. However, it has been recognized as the most popular termination condition for performance studies on benchmark problems. In these cases, the global minimum and minimizers are known, and the practitioner is interested in the required number of function evaluations for the detection of a global minimizer with a prespecified accuracy. Thus, the algorithm is usually stopped as soon as it finds a solution with function value adequately close to the known global minimum. Due to its popularity, we will refer to this termination criterion repeatedly in the rest of the book at hand.
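Criterion (38) is a one-line check; the following minimal sketch, with illustrative values, shows the typical benchmark usage where the global minimum of the test function is known to be zero.

```python
def reached_accuracy(f_i, f_star, eps):
    """Condition (38): the best value is within eps of the known global minimum."""
    return abs(f_i - f_star) <= eps

# Benchmark-style usage with placeholder numbers:
print(reached_accuracy(3.2e-3, 0.0, eps=1e-2))  # True -> terminate
```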
Computational Budget Limitations

In modern applications, the available time for computation is usually limited. Time-critical decisions and on-line control of systems require algorithms that provide satisfactory solutions within very restrictive time frames. An example of such a time-critical problem is the elevator controller described previously in the current chapter. In addition, concurrent application servers and computer clusters usually have a multitude of processors available to a large number of users, who require the fastest execution of their programs with the shortest waiting time in the scheduler queue. Thus, queued jobs have explicit time constraints, translated into months, days, minutes, or seconds, based on their priority. Therefore, limitations are usually posed on the available computational time (CPU time) for the execution of an algorithm.

Limitations are also imposed for reasons of comparison. In order to have fair comparisons among algorithms on a specific problem, they must assume the same computational budget. However, a significant issue arises at this point. The time needed for the execution of a program depends heavily on the implementation, programming language, programmer skills, and machine load at the time of execution. Thus, any comparison between two algorithms in terms of the required CPU time, without taking these factors into consideration, is condemned to be biased. For this reason, researchers have made a compromise. The most computationally expensive part of solving a complex problem is expected to be the evaluation of the objective function, which may be computed through complex mathematical procedures (e.g., finite element simulations or integrations of dynamical systems) or become available directly from experimental devices. Thus, the time required for all function evaluations during the execution of an algorithm is expected to constitute the largest fraction of the overall computational burden. For this purpose, the required number of function evaluations serves very often as a performance measure for optimization algorithms.

Based on the aforementioned discussion, we can define two termination conditions. If t is the CPU time (e.g., in seconds) required by the algorithm from the beginning of its execution, and q denotes the corresponding number of function evaluations required so far, then the following termination conditions are defined:

t ≥ tmax, (39)

and,

q ≥ qmax, (40)

where tmax is the maximum available CPU time, and qmax is the maximum allowed number of function evaluations. Thus, the algorithm will stop as soon as its CPU time exceeds the available time frame or the number of function evaluations required so far exceeds an upper limit. For the reasons explained above, the condition of equation (40) is preferred to that of equation (39).

However, there are algorithms that, although requiring only a low number of function evaluations, operate based on very complicated and time-consuming procedures. Therefore, their execution time can be comparable with that of algorithms that need more function evaluations to provide results of the same quality. Techniques in evolutionary multi-objective optimization with sophisticated archiving procedures are typical examples of such algorithms. In these cases, it would be unfair to use a termination condition that monitors only the number of function evaluations. Hence, a combination of the two termination conditions described above would be more reasonable.

Finally, we must note that, in evolutionary algorithms, it is very common to use the number of generations (iterations) instead of function evaluations as a stopping criterion. This is equivalent to the condition of equation (40), assuming that the population size and the number of function evaluations per population member are fixed at each iteration of the algorithm. However, if there is a variable number of function evaluations per iteration, then this stopping criterion is not valid.

The presented termination conditions have been widely used in the PSO literature, especially in cases where no information regarding the objective function is available or a new and unexplored problem is considered. Naturally, if required, they can be used in conjunction with other termination conditions, such as the ones presented in previous sections.
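Conditions (39) and (40) translate directly into code. The sketch below is a minimal illustration with placeholder limits; it measures wall-clock time rather than strict CPU time, which is a simplifying assumption.

```python
import time

def budget_exhausted(t_start, evals, t_max=3600.0, q_max=10**5):
    """Conditions (39)-(40): stop when the elapsed time or the number of
    function evaluations exceeds its budget (illustrative limits)."""
    return (time.time() - t_start) >= t_max or evals >= q_max

# Typical usage inside an optimization loop:
t_start = time.time()
print(budget_exhausted(t_start, evals=99_000))   # False: both budgets remain
```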
Search Stagnation

The final category of termination criteria consists of performance-related conditions. Monitoring the progress of an algorithm during the optimization procedure provides insight regarding its efficiency and potential for further improvement of the obtained solutions. The lack of such potential is called search stagnation, and it can be attributed to several factors.

In evolutionary algorithms, search stagnation can be identified by monitoring changes of the overall best solution within a specific number of iterations. An algorithm is considered to suffer from stagnation if its best solution has not been improved for a number, tframe, of consecutive iterations, which is defined as a fraction of the maximum number of iterations, tmax:

tframe = h tmax, h ∈ (0,1).

Alternatively, one can identify search stagnation by monitoring the diversity of the population, which is usually defined as its spread in the search space. The standard deviation of the population is a commonly used diversity measure. If it falls under a prespecified (usually problem-dependent) threshold, then the population is considered to have collapsed onto a single point, having limited potential for further improvement. Moreover, special features of the algorithm can be used to define diversity. For example, in PSO, if the velocities of all particles become smaller than a threshold, then the swarm can be regarded as immobilized. Thus, its ability for further improvement is limited, and it is questionable whether its further execution can offer any gain.

Search stagnation has been widely used as a stopping criterion in the evolutionary computation literature. However, the user must take special care regarding the employed stagnation measures. Some of them depend on the scaling of the problem and need a preprocessing stage to identify their thresholds, while others can be misleading, especially in flat areas of the objective function, where diversity may be satisfactory but the search is inefficient.

On the Proper Selection of the Termination Condition

Unfortunately, there is no general rule for selecting a proper termination condition applicable to all algorithms and problems. The number of cases where only one of the presented termination criteria is adequate is very limited. Thus, the user has to combine several criteria to ensure that the algorithm is not stalled and is worth continuing its execution.

The most common termination condition in the PSO literature is the combination of equations (38) and (40), along with a measure of search stagnation. Thus, PSO stops as soon as it exceeds the maximum allowed number of function evaluations, or has found a solution within the desired accuracy, or has not improved its performance for a number of consecutive iterations. These are also the termination criteria that will be used in most applications presented in the rest of the book.

Less frequently, the termination condition defined by equation (40) stands alone, i.e., the algorithm is left to perform a prespecified number of function evaluations and then stops. This is very useful for producing performance plots (e.g., best function value against iterations) for graphical comparisons. Nevertheless, proper termination conditions must be carefully selected, taking into consideration any possible special features of the algorithms or problem-related peculiarities, in order to derive sound conclusions and perform fair comparisons.
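As a rough illustration of the combined criterion described above, the following sketch merges equations (38) and (40) with a stagnation window tframe = h tmax. All thresholds and counters are placeholders to be maintained by the caller.

```python
def should_stop(evals, q_max, f_best, f_star, eps, stalled_iters, t_frame):
    """Common PSO combination: evaluation budget (40), accuracy (38),
    and a stagnation window of t_frame consecutive non-improving iterations."""
    return (evals >= q_max                      # budget exhausted
            or abs(f_best - f_star) <= eps      # desired accuracy reached
            or stalled_iters >= t_frame)        # search stagnation

# Example with illustrative numbers (h = 0.1, t_max = 10**4 iterations):
print(should_stop(evals=5_000, q_max=10**5, f_best=0.8, f_star=0.0,
                  eps=1e-2, stalled_iters=1_200, t_frame=0.1 * 10**4))  # True
```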
CHAPTER SYNOPSIS

This chapter was devoted to a series of critical issues in the theory and practice of PSO. The initialization procedure was discussed and, besides the typical random initialization, a sophisticated technique based on a direct search algorithm was presented. We also described the most influential theoretical developments of PSO. Particle trajectory studies, as well as the stability analysis of PSO, were briefly presented, and rules on parameter selection and tuning were derived. The contemporary state-of-the-art variants of PSO stemmed from these studies. Moreover, we analyzed a useful sequential design technique based on computational statistics for the optimal tuning of PSO on specific tasks. Its application to a benchmark problem, as well as to a real-world problem, revealed its potential for designing better PSO algorithms. The chapter concluded with the most common termination conditions, underlining the pitfalls that accompany each choice.

REFERENCES

Aslett, R., Buck, R. J., Duvall, S. G., Sacks, J., & Welch, W. J. (1998). Circuit optimization via sequential computer experiments: design of an output buffer. Journal of the Royal Statistical Society, Series C, Applied Statistics, 47(1), 31–48. doi:10.1111/1467-9876.00096

Bandler, J. W., Cheng, Q. S., Dakroury, S. A., Mohamed, A. S., Bakr, M. H., Madsen, K., & Søndergaard, J. (2004). Space mapping: the state of the art. IEEE Transactions on Microwave Theory and Techniques, 52(1), 337–361. doi:10.1109/TMTT.2003.820904

Barney, G. (1986). Elevator traffic analysis, design and control. MA: Cambridge University Press.

Bartz-Beielstein, T. (2006). Experimental research in evolutionary computation. Heidelberg, Germany: Springer.

Bartz-Beielstein, T., & Markon, S. (2004). Tuning search algorithms for real-world applications: a regression tree based approach. In Proceedings of the 2004 IEEE Congress on Evolutionary Computation, Portland (OR), USA (pp. 1111–1118).

Bartz-Beielstein, T., Parsopoulos, K. E., & Vrahatis, M. N. (2004). Design and analysis of optimization algorithms using computational statistics. Applied Numerical Analysis & Computational Mathematics, 1(3), 413–433. doi:10.1002/anac.200410007

Beielstein, T., Ewald, C.-P., & Markon, S. (2003). Optimal elevator group control by evolution strategies. In E. Cantú-Paz et al. (Eds.), Lecture Notes in Computer Science, Vol. 2724 (pp. 1963–1974). Berlin: Springer.

Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Florence, KY: Wadsworth.

Chan, K., Saltelli, A., & Tarantola, S. (1997). Sensitivity analysis of model output: variance-based methods make the difference. In Proceedings of the 29th Conference on Winter Simulation (WSC'97), Atlanta (GA), USA (pp. 261–268).

Chan, K. S., Tarantola, S., Saltelli, A., & Sobol', I. M. (2000). Variance based methods. In A. Saltelli, K. Chan & E. M. Scott (Eds.), Sensitivity Analysis (Probability and Statistics Series) (pp. 167–197). New York: John Wiley & Sons.

Clerc, M., & Kennedy, J. (2002). The particle swarm - explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58–73. doi:10.1109/4235.985692

Cohen, P. R. (1995). Empirical methods for artificial intelligence. Cambridge, MA: MIT Press.

Gentle, J. E., Härdle, W., & Mori, Y. (2004). Handbook of computational statistics. Berlin: Springer.

Hoos, H. H. (1998). Stochastic local search – methods, models, applications. PhD thesis, Technische Universität Darmstadt, Germany.

Isaaks, E. H., & Srivastava, R. M. (1989). An introduction to applied geostatistics. UK: Oxford University Press.

Jiang, M., Luo, Y. P., & Yang, S. Y. (2007). Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm. Information Processing Letters, 102(1), 8–16. doi:10.1016/j.ipl.2006.10.005

Kennedy, J. (1998). The behavior of the particles. In V. W. Porto, N. Saravanan, D. Waagen & A. E.
Eiben (Eds.), Evolutionary Programming VII, Lecture Notes in Computer Science, Vol. 1447 (pp. 581–589). Berlin: Springer-Verlag.

Lophaven, S. N., Nielsen, H. B., & Søndergaard, J. (2002). Aspects of the Matlab toolbox DACE (Tech. Rep. IMM-REP-2002-13). Informatics and Mathematical Modelling, Technical University of Denmark.

Markon, S. (1995). Studies on applications of neural networks in the elevator system. PhD thesis, Kyoto University, Japan.

Markon, S., & Nishikawa, Y. (2002). On the analysis and optimization of dynamic cellular automata with application to elevator control. In Proceedings of the 10th Japanese-German Seminar on Nonlinear Problems in Dynamical Systems, Theory and Applications, Ishikawa, Japan.

Montgomery, D. C. (2001). Design and analysis of experiments. New York: John Wiley & Sons.

Nelder, J. A., & Mead, R. (1965). A simplex method for function minimization. The Computer Journal, 7, 308–313.

Ozcan, E., & Mohan, C. K. (1998). Analysis of a simple particle swarm optimization problem. In C. Dagli et al. (Eds.), Proceedings of the Conference on Artificial Neural Networks in Engineering (ANNIE'98), St. Louis (MO), USA (pp. 253–258).

Ozcan, E., & Mohan, C. K. (1999). Particle swarm optimization: surfing the waves. In Proceedings of the 1999 IEEE Congress on Evolutionary Computation, Washington (DC), USA (pp. 1939–1944).

Parsopoulos, K. E., & Vrahatis, M. N. (2002). Initializing the particle swarm optimizer using the nonlinear simplex method. In A. Grmela & N. E. Mastorakis (Eds.), Advances in Intelligent Systems, Fuzzy Systems, Evolutionary Computation (pp. 216–221). WSEAS Press.

Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (1992). Numerical recipes in Fortran 77. MA: Cambridge University Press.

Sacks, J., Welch, W. J., Mitchell, T. J., & Wynn, H. P. (1989). Design and analysis of computer experiments. Statistical Science, 4(4), 409–435. doi:10.1214/ss/1177012413

Saltelli, A. (2002). Making best use of model evaluations to compute sensitivity indices. Computer Physics Communications, 145(2), 280–297. doi:10.1016/S0010-4655(02)00280-1

Saltelli, A., Tarantola, S., & Campolongo, F. (2000). Sensitivity analysis as an ingredient of modeling. Statistical Science, 15(4), 377–395. doi:10.1214/ss/1009213004

Santner, T. J., Williams, B. J., & Notz, W. I. (2003). The design and analysis of computer experiments. Berlin: Springer.

Shi, Y., & Eberhart, R. C. (1999). Empirical study of particle swarm optimization. In Proceedings of the 1999 IEEE Congress on Evolutionary Computation, Washington (DC), USA (pp. 1945–1950).

So, A. T., & Chan, W. L. (1999). Intelligent building systems. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Sutton, A. M., Whitley, D., Lunacek, M., & Howe, A. (2006). PSO and multi-funnel landscapes: how cooperation might limit exploration. In Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation (GECCO'06), Seattle (WA), USA (pp. 75–82).

Torczon, V. (1991). On the convergence of the multidirectional search algorithm. SIAM Journal on Optimization, 1, 123–145. doi:10.1137/0801010

Trelea, I. C. (2003). The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 85(6), 317–325. doi:10.1016/S0020-0190(02)00447-7

Van den Bergh, F., & Engelbrecht, A. P. (2006). A study of particle swarm optimization particle trajectories. Information Sciences, 176(8), 937–971. doi:10.1016/j.ins.2005.02.003

Xiao, R.-Y., Li, B., & He, X.-P. (2007).
The particle swarm: parameter selection and convergence. In D.-S. Huang, L. Heutte & M. Loog (Eds.), Advanced Intelligent Computing Theories and Applications (Vol. 2, pp. 396–402). Berlin: Springer.

Zheng, Y.-L., Ma, L.-H., Zhang, L.-Y., & Qian, J.-X. (2003). On the convergence analysis and parameter selection in particle swarm optimization. In Proceedings of the 2nd International Conference on Machine Learning and Cybernetics (ICMLC 2003), Xi'an, China (pp. 1802–1807).

Chapter 4
Established and Recently Proposed Variants of Particle Swarm Optimization

DOI: 10.4018/978-1-61520-666-7.ch004

In this chapter, we describe established and recently proposed variants of PSO. Due to the rich PSO literature, the choice among different variants proved to be very difficult. Thus, we were compelled to set some criteria and select those variants that best suit them. For this purpose, we considered the following criteria:

1. Sophisticated inspiration source.
2. Close relationship to the standard PSO.
3. Wide applicability in problems of different types.
4. Performance and theoretical properties.
5. Number of reported applications.
6. Potential for further development and improvements.

Thus, we excluded variants based on complicated hybrid schemes that combine other algorithms, where it is not evident which algorithm triggers which effect, as well as over-specialized schemes that refer only to one problem type or instance.

Under this prism, we selected the following methods: unified PSO, memetic PSO, composite PSO, vector evaluated PSO, guaranteed convergence PSO, cooperative PSO, niching PSO, TRIBES, and quantum PSO. Albeit possibly omitting an interesting approach, the aforementioned variants sketch a rough picture of the current status of the PSO literature, exposing the main ideas and features that constitute the core of research nowadays.

UNIFIED PARTICLE SWARM OPTIMIZATION

In the previous chapters, we emphasized the importance of the two main phases, exploration and exploitation, of the search procedure in a population-based algorithm such as PSO. The former phase is responsible for the detection of the most promising regions of the search space, while the latter promotes convergence of particles towards the best solutions. These two phases can take place either once or successively during the execution of the algorithm. Their impact on its performance necessitates the special handling of inherent features that cause transitions from one phase to another, altering its search dynamics.

In the second chapter of the book at hand, we discussed the concept of neighborhood and underlined the influence of its size on the convergence properties of the algorithm. Two PSO variants were distinguished, namely the global (gbest), where the whole swarm is considered as the neighborhood of each particle, and the local (lbest), where neighborhoods are strictly smaller.
Although gbest is a special case of lbest, we distinguish between them due to their different outcomes in the exploration/exploitation properties of PSO. More specifically, the global variant converges faster towards the overall best position than the local one in the most common neighborhood topologies; therefore, it stands out for its exploitation ability. On the other hand, the local variant has better exploration properties, since information regarding the best position of each particle is gradually communicated to the rest through their neighbors. Thus, the overall best position attracts the particles gradually, providing the opportunity to avoid suboptimal solutions. Apparently, the choice of neighborhood topology and size significantly affects the trade-off between exploration and exploitation, albeit there is no formal procedure to determine it optimally.

In practice, the most common neighborhood configuration of PSO consists of a ring topology, applied either to gbest or to lbest with radius equal to one. Under such configurations, the algorithm is biased towards exploitation or exploration, depending on the complexity of the problem at hand. Neighborhoods with larger radii that implicitly interpolate between the two extremes are used less frequently. The development of unified particle swarm optimization (UPSO) was motivated by the intention to combine the two extremal variants (in terms of their exploration/exploitation properties) in a generalized manner, aiming to produce new schemes that combine their properties.

UPSO was developed by Parsopoulos and Vrahatis (2004) as an algorithm that harnesses the global and local PSO variants in a unified scheme, without imposing additional computational burden in terms of function evaluations. The constriction coefficient velocity update of PSO was used, although the unified scheme can be defined for different schemes as well. Let N be the swarm size, and let n denote the dimension of the problem at hand. Also, let Gi(t+1) denote the velocity update of the i-th particle, xi, for the global PSO variant with constriction coefficient, which is defined as:

Gij(t+1) = χ [vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t))], (1)

and let Li(t+1) denote the corresponding velocity update for the local PSO variant:

Lij(t+1) = χ [vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (plj(t) - xij(t))], (2)

where g denotes the index of the overall best position, while l denotes the best position in the neighborhood of xi (to simplify our notation, we use the index l instead of the index gi used in Chapter Two). Then, the main UPSO scheme is defined as (Parsopoulos & Vrahatis, 2004):

vij(t+1) = u Gij(t+1) + (1-u) Lij(t+1), (3)

xij(t+1) = xij(t) + vij(t+1), (4)

i = 1, 2,…, N, j = 1, 2,…, n,

where u ∈ [0,1] is a new parameter, called the unification factor, which controls the influence of the global and local velocity updates. The rest of the parameters are the same as for the standard PSO.

According to equations (3) and (4), the new position shift of a particle in UPSO consists of a weighted combination of the gbest and lbest position shifts. The unification factor balances the influence of these two search directions. The original global and local PSO variants constitute special cases of UPSO for u = 1 and u = 0, respectively. All intermediate values, u ∈ (0,1), define UPSO variants that combine the exploration/exploitation properties of gbest and lbest.
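Equations (1)-(4) can be sketched in a few lines of Python. The following single-particle step is our illustration rather than the authors' implementation; it draws the random factors componentwise and independently for the two components, which is one common convention, and uses the constriction values χ = 0.729, c1 = c2 = 2.05 quoted later in this section.

```python
import numpy as np

rng = np.random.default_rng()

def upso_step(x, v, p, p_g, p_l, u, chi=0.729, c1=2.05, c2=2.05):
    """One UPSO update (equations (1)-(4)) for a single particle.
    x, v, p: current position, velocity, and own best position;
    p_g, p_l: overall best and neighborhood best; u: unification factor."""
    n = len(x)
    # Equation (1): global (gbest) velocity component.
    G = chi * (v + c1 * rng.random(n) * (p - x) + c2 * rng.random(n) * (p_g - x))
    # Equation (2): local (lbest) velocity component.
    L = chi * (v + c1 * rng.random(n) * (p - x) + c2 * rng.random(n) * (p_l - x))
    v_new = u * G + (1.0 - u) * L        # equation (3)
    return x + v_new, v_new              # equation (4)

# The data of Fig. 1: u = 0.0 recovers lbest PSO, u = 1.0 recovers gbest PSO.
x, v = np.zeros(2), np.zeros(2)
x_new, v_new = upso_step(x, v, p=np.array([1.0, 2.0]),
                         p_g=np.array([2.0, 5.0]),
                         p_l=np.array([5.0, -3.0]), u=0.5)
print(x_new)
```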
The position update of a particle for different values of the unification factor is illustrated in Fig. 1. More specifically, let xi = (0,0)^T be the current position of a particle, denoted with the cross symbol, and pi = (1,2)^T be its own best position, denoted with a square. Also, let pg = (2,5)^T be the overall best position, denoted with a circle, and pl = (5,-3)^T be the best position in its neighborhood, denoted with a triangle. Then, Fig. 1 illustrates the distribution of 3000 possible new positions of xi, for values of u ranging from 0.0 (local PSO) up to 1.0 (global PSO) with increments of 0.1. For simplicity, the current velocity, vi, of the particle is considered to be equal to zero in all cases.

Figure 1. The distribution of 3000 possible new positions (light grey points) of a particle using the update scheme of UPSO defined by equations (3) and (4), for unification factor values ranging from 0.0 (local PSO) up to 1.0 (global PSO) with increments of 0.1. The cross symbol denotes the current position of the particle, xi = (0,0)^T; the square denotes its own best position, pi = (1,2)^T; the circle denotes the overall best position, pg = (2,5)^T; and the triangle denotes the best position, pl = (5,-3)^T, in the neighborhood of xi. For simplicity, the current velocity, vi, is considered to be equal to zero in all cases.

Evidently, lower values of u correspond to distributions biased towards the local best position (denoted with a triangle), since the lbest position shift dominates in equation (3). Increasing u towards 1.0 results in a contraction of the distribution shape and an expansion towards the overall best position (denoted with a circle). Thus, u can control the distribution of new positions for each particle, controlling its exploration/exploitation properties.

Besides the main UPSO scheme, Parsopoulos and Vrahatis (2004) added further stochasticity to enhance its exploration properties by introducing a new stochastic parameter in equation (3). This addition produces the following two different schemes:

vij(t+1) = r3 u Gij(t+1) + (1-u) Lij(t+1), (5)

which is mostly based on the local variant, and, alternatively:

vij(t+1) = u Gij(t+1) + r3 (1-u) Lij(t+1), (6)

which is mostly based on the global variant (Parsopoulos & Vrahatis, 2004). The stochastic parameter, r3, follows a Gaussian distribution:

r3 ~ N(μ, σ²),

with mean value μ and standard deviation σ. The use of r3 imitates the mutation operation in evolutionary algorithms. However, in the case of UPSO, mutation is biased towards directions consistent with the PSO dynamics, in contrast to the purely random mutation used in evolutionary algorithms. Following the assumptions of Matyas (1965), a proof of convergence in probability was derived for the UPSO scheme of equations (5) and (6) (Parsopoulos & Vrahatis, 2004).

The effect of mutation is illustrated in Fig. 2, using the same data as in Fig. 1. The distributions of new positions are illustrated for the standard UPSO scheme with u = 0.2 (right part), as well as for its mutated counterpart of equation (5) with μ = 0.0 and σ = 0.1 (left part). Mutated UPSO is clearly more biased towards the local best position (denoted with a triangle). In addition, the spread of possible positions is wider, promoting exploration as intended.

Figure 2. Distribution of 3000 possible new positions of a particle using UPSO, (A) with mutation, and (B) without mutation. The same data as in Fig. 1 are used. The unification factor is u = 0.2, and the mutated UPSO of equation (5) is used with μ = 0.0 and σ = 0.1.

At this point, we must note that the choice between equations (5) and (6) shall take into consideration the value of the unification factor. In cases where u < 0.5, the local search direction, Li, has the dominant weight in equation (3); hence, the algorithm is mostly based on it. In this case, it is better to use the mutation scheme of equation (5), where the non-dominant (i.e., the global) search direction is mutated. Otherwise, the dominant local search direction would probably be degraded, reducing efficiency. The opposite must hold for u > 0.5, where the global search direction, Gi, is in charge of equation (3). In this case, it is better to use the mutation scheme of equation (6), where the local search direction is mutated.

The selection of appropriate values for μ and σ depends on the problem at hand. Usually, μ = 0.0 and small values of σ are adequate to enhance the exploration capability of PSO. However, different values may prove to be better, depending on the shape of the objective function.
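The mutated variants differ from equation (3) only in the Gaussian factor r3. The sketch below folds the u-dependent choice between equations (5) and (6), as discussed above, into one illustrative function; G and L are the velocity components computed as in the previous sketch.

```python
import numpy as np

rng = np.random.default_rng()

def mutated_velocity(G, L, u, mu=0.0, sigma=0.1):
    """Mutated UPSO, equations (5)-(6): a factor r3 ~ N(mu, sigma^2)
    scales the non-dominant search direction."""
    r3 = rng.normal(mu, sigma)
    if u < 0.5:
        return r3 * u * G + (1.0 - u) * L    # equation (5): lbest dominates
    else:
        return u * G + r3 * (1.0 - u) * L    # equation (6): gbest dominates

print(mutated_velocity(np.array([1.0, 2.5]), np.array([2.5, -1.5]), u=0.2))
```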
Parsopoulos and Vrahatis (2007) offered an extensive experimental study on the selection and adaptation of the unification factor. Their analysis is briefly reported in the next section.

Parameter Selection and Adaptation in UPSO

As already mentioned, there is an obvious dependence of the UPSO dynamics on the unification factor, since it controls the balance between its exploration/exploitation properties. Small values favor the local position shift component, thereby resulting in better exploration, while large values favor the global component, promoting exploitation. Values around the middle point, u = 0.5, are expected to produce more balanced schemes with respect to their exploration/exploitation capabilities. However, such balanced versions fail to take advantage of any special structure of the objective function (e.g., unimodality, convexity, etc.). In such cases, unification factors closer to 0.0 or 1.0 may exhibit superior performance. Moreover, online adaptation of the unification factor is intuitively expected to enhance performance.

Parsopoulos and Vrahatis (2007) considered a set of different selection and adaptation schemes for the unification factor. Also, they applied them to a set of widely used benchmark functions to reveal the potential of UPSO for self-adaptation in different environments, and to distinguish its most promising configurations.

Following the categorization of Angeline (1995) for evolutionary parameters, the unification factor can be considered either at swarm level, where the same value is assigned to all particles, or at particle level, where each particle assumes its own independent value. In the first case, particles have the same bent for exploration/exploitation, resulting in swarms with aggregating behavior. In the latter case, each particle has its own special exploration/exploitation trade-off, resulting in swarms with higher behavioral diversity. These adaptation schemes are described in the following sections.

Quantized Unification Factor

This is a swarm-level scheme, where all particles have the same fixed unification factor value. Parsopoulos and Vrahatis (2007) considered the following set:
W = {0.0, 0.1, 0.2,…, 1.0},

of equidistant values of u ∈ [0,1], and they studied the performance of each value separately to gain intuition regarding the most promising values per problem.

Increasing Unification Factor

This is also a swarm-level scheme, with all particles having the same unification factor, which is initialized at 0.0 and increases up to 1.0 during the execution of the algorithm. Consequently, exploration is favored in the first stages, where u lies closer to 0.0 (lbest), while exploitation is promoted at the final stages, where u assumes higher values close to 1.0, approximating gbest.

Let t denote the iteration number; u(t) be the unification factor at iteration t; and Tmax denote the maximum number of iterations. Three different increasing schedules were considered in the study of Parsopoulos and Vrahatis (2007):

1. Linear: The unification factor is linearly increased from 0.0 up to 1.0, according to the scheme:

u(t) = \frac{t}{T_{\max}},

which corresponds to a smooth and relatively slow transition from exploration to exploitation.

2. Modular: The unification factor increases repeatedly from 0.0 to 1.0 every q iterations, according to the scheme:

u(t) = \frac{t \bmod (q+1)}{q},

which repeatedly modifies the dynamics of the algorithm from exploration to exploitation throughout its execution. The value of q is selected as a reasonable fraction of the maximum number of iterations. In the experiments reported in Parsopoulos and Vrahatis (2007), the value q = 10² was used for Tmax = 10⁴.

3. Exponential: The unification factor increases from 0.0 to 1.0 exponentially, according to the scheme:

u(t) = \exp\left( \frac{t \log(2.0)}{T_{\max}} \right) - 1.0.

This scheme results in a mild transition from exploration to exploitation in early iterations, while the transition is accelerated in later stages of the execution.

Figure 3 graphically illustrates the unification factor under the aforementioned increasing schemes for 1000 iterations; a small code sketch of the three schedules follows the caption.

Figure 3. The linear, modular, and exponential increasing schemes of the unification factor for 1000 iterations.
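The three schedules translate directly into functions of the iteration counter. The sketch below follows the formulas as reconstructed above, with q = 100 as in the reported experiments; it is an illustration, not the authors' code.

```python
import math

def u_linear(t, t_max):
    return t / t_max                                    # smooth, slow transition

def u_modular(t, q=100):
    return (t % (q + 1)) / q                            # sweeps 0 -> 1 repeatedly

def u_exponential(t, t_max):
    return math.exp(t * math.log(2.0) / t_max) - 1.0    # mild early, fast late

t_max = 1000
print(u_linear(500, t_max), u_modular(500), u_exponential(500, t_max))
print(u_linear(t_max, t_max), u_exponential(t_max, t_max))  # both reach 1.0
```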
Sigmoid Unification Factor

This is also a swarm-level scheme, similar to the increasing schemes described in the previous section. All particles assume the same unification factor, which is increased according to the relation:

u(t) = F_{\mathrm{sig}}\left( t - \frac{T_{\max}}{20} \right),

where

F_{\mathrm{sig}}(x, \lambda) = \frac{1}{1 + \exp(-\lambda x)}.

This scheme results in a sigmoid transition from exploration to exploitation. It is considered separately from the other increasing schemes due to the form of the sigmoid, which depends on the value of the parameter λ. Parsopoulos and Vrahatis (2007) investigated the values λ = 10⁻¹, 10⁻², and 10⁻³. The corresponding sigmoids are depicted in Fig. 4 for Tmax = 10³. The sigmoid has also been used as an activation function in artificial neural networks (Parsopoulos & Vrahatis, 2007).

Figure 4. The sigmoid scheme for adapting the unification factor, for λ = 10⁻¹, 10⁻², and 10⁻³.

Swarm Partitioning

This is a particle-level scheme, where the swarm is divided into partitions consisting of a prespecified number of particles, called the partition size. All particles in the same partition share the same unification factor, while each partition has a different value. Parsopoulos and Vrahatis (2007) considered a swarm with 11 partitions, each assuming a value of u from the set W = {0.0, 0.1, 0.2,…, 1.0}. Depending on the swarm size, it is possible to have partitions of different sizes. For example, a swarm of size 100 can be partitioned into 10 partitions of size 9 and one partition of size 10. The slightly different size of the last partition is not expected to modify the dynamics of the scheme significantly.

Special care shall be taken in determining the neighborhoods of partitioned swarms. If a neighborhood consists mostly of particles of the same partition, then they will share the same unification factor, thereby biasing its search capabilities. For example, in the ring topology, if the first k particles, x1, x2,…, xk, are assigned to partition 1, the next k particles, xk+1, xk+2,…, x2k, to partition 2, and so on, then most of the neighborhoods will have the aforementioned undesired property. A better spread of particles of different partitions over different ring neighborhoods can be achieved by assigning particles to partitions in a non-sequential manner, such that the i-th particle is assigned to the (1 + (i-1) mod k)-th partition (Parsopoulos & Vrahatis, 2007); see the code sketch after this subsection. Put simply, the first k particles of the swarm are assigned to partitions 1 to k, respectively, one particle per partition. Then, the assignment starts over, with xk+1 assigned to partition 1, xk+2 to partition 2, and so on. The procedure is illustrated schematically below for k = 11 partitions:

Partition 1  ← x1, x12, …
Partition 2  ← x2, x13, …
  ⋮
Partition 11 ← x11, x22, …

Using this scheme, particles with different unification factors are allowed to interact by sharing information with their neighbors, while a satisfactory appearance frequency of the different unification factor values is retained in the swarm.

Dominance of the Best

This is a particle-level scheme based on swarm partitioning. More specifically, the swarm is divided into partitions and, after a number of iterations, the partition that contains the best particle of the swarm gains an additional particle from the partition where the worst particle belongs. The minimum size of a partition is set to 1 to prevent its elimination and the consequent loss of behavioral diversity. If the size of the worst partition is equal to 1, then a particle from the immediately next worst partition with size higher than 1 is attributed to the best partition when required. If all but one of the partitions have sizes equal to 1, then no particle migration takes place.

Dominance of the best incorporates an award system for the best partition to strengthen the influence of the best unification factor in the swarm, although it preserves the existence of all different values of u by imposing a minimum partition size. This mechanism prevents the swarm from being conquered rapidly by the best performing unification factor, which could be detrimental to its overall performance.
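The non-sequential assignment used in swarm partitioning reduces to a one-line index computation; a minimal sketch with illustrative defaults:

```python
def partition_of(i, k=11):
    """Particle x_i (1-indexed) is assigned to partition 1 + ((i-1) mod k),
    spreading the partitions evenly around the ring topology."""
    return 1 + (i - 1) % k

# The first k particles fill partitions 1..k; x_{k+1} starts over at partition 1.
print([partition_of(i) for i in range(1, 14)])  # [1, 2, ..., 11, 1, 2]
```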
Self-Adaptive Unification Factor

In this particle-level scheme, each particle, xi = (xi1, xi2,…, xin)^T, has its own unification factor, ui, which is incorporated in the particle as an additional variable, augmenting the dimension of the problem. Hence, the particle xi is defined as:

xi = (xi1, xi2,…, xin, ui)^T ∈ A × [0,1], i = 1, 2,…, N,

where A is the original search space and n is the problem dimension. According to the self-adaptive scheme, UPSO is allowed to determine the optimal unification factor for each particle individually, by capturing online any possible special structure of the problem at hand.

Empirical Analysis of UPSO

Parsopoulos and Vrahatis (2007) performed an extensive experimental analysis of the aforementioned schemes for the selection and adaptation of the unification factor. Their experiments used the test problems TPUO-1–TPUO-5, reported in Appendix A of the book at hand. The corresponding dimension, search space, and desired accuracy per test problem are reported in Table 1.

For each test problem, 100 independent experiments were conducted per scheme. The swarm size was set to 30 in all cases. This choice was based on the promising results reported in Trelea (2003). The swarm was allowed to perform a maximum number, Tmax = 10⁴, of iterations. The particles were constrained within the ranges reported in Table 1, and a maximum value, vmax, equal to half the corresponding range of the search space, was imposed on the velocities. The rest of the parameters were set to their default values, χ = 0.729, c1 = c2 = 2.05 (Clerc & Kennedy, 2002). For the computation of the local PSO component in UPSO, a ring neighborhood topology of radius r = 1 was used.

In the sigmoid scheme, the parameters λ = 10⁻¹, 10⁻², and 10⁻³ were used for the transition of the unification factor from 0.0 to 1.0 in the first 1000 iterations. The decision to complete the transition in 1000 iterations, instead of the maximum number, was based on the observation that, in most cases, PSO required fewer than 1000 iterations to converge (Parsopoulos & Vrahatis, 2007).

In swarm partitioning, since the swarm size was 30 and the number of partitions was 11, some partitions of 3 particles and some of 2 particles were considered. More specifically, partitions with u ranging from 0.0 up to 0.7 consisted of 3 particles, while the remaining partitions consisted of 2 particles. The same also held for the initialization of dominance of the best, where, additionally, a partition update was performed every 20 iterations.

In the self-adaptive scheme, the unification factor of each particle was randomly initialized using a uniform distribution over the range [0.3,0.6]. Initially, the full range [0,1] was used; however, the algorithm was unable to reach the desired goal within the maximum number of iterations, although it was moving close to the global minimizers. This inability is attributed to the increased dimension of the particles after the inclusion of u. Thus, either the maximum number of iterations should be increased or a smaller range for u should be used, in order to have a fair comparison of the self-adaptive scheme with the rest. To this end, Parsopoulos and Vrahatis (2007) retained the maximum number of iterations, because the same value was used in related works (Trelea, 2003), and preferred to constrain u within [0.3,0.6]. The selection of the range [0.3,0.6] was based on the promising results obtained through the quantized scheme for this range.
Besides a plethora of tables containing the statistical analyses of their results, in terms of the required number of function evaluations to reach the global minimizer with the desired accuracy, Parsopoulos and Vrahatis (2007) also performed t-tests to study the statistical significance of their results. Each scheme was tested against all other schemes at a significance level of 99%.

Due to spatial limitations, we omit the detailed presentation of all the tables reported in Parsopoulos and Vrahatis (2007); however, we provide a short discussion per problem in the following paragraphs. In addition, in Table 2, we report the overall best approach per test problem, in terms of its success percentage over 100 experiments and the mean required number of iterations. For reasons of comparison, the second best approach, as well as the gbest and lbest PSO variants, are also reported. The adaptation schemes are denoted as follows: "Quan" for quantized; "Sig" for sigmoid; "Self" for self-adaptive; and "Part" for swarm partitioning.

Table 1. Configuration of the test problems. All problems are defined in Appendix A of the book at hand.

Problem  Dimension  Range             Accuracy
TPUO-1   30         [-100, 100]^30    10^-2
TPUO-2   30         [-30, 30]^30      10^2
TPUO-3   30         [-5.12, 5.12]^30  10^2
TPUO-4   30         [-600, 600]^30    10^-1
TPUO-5   2          [-100, 100]^2     10^-5

In TPUO-1, all UPSO schemes were 100% successful, outperforming the local and global PSO variants, which correspond to the values u = 0.0 and u = 1.0 of the quantized scheme, respectively. Most of the algorithms had statistically significant differences in their performance. Only the case of u = 1.0 (pure global PSO) in the quantized scheme exhibited a success rate smaller than 100%. At first glance, this contradicts the claim that u = 1.0 promotes exploitation, since the problem is unimodal and convex and, thus, should be solved efficiently by an exploitation-oriented PSO variant. The explanation for this inferior performance lies in the high dimensionality of the problem. The gbest PSO converges rapidly towards the global minimizer, although with a different convergence rate per coordinate direction. Thus, it can be trapped in suboptimal solutions, although close to the actual global minimizer. The quantized scheme with u ∈ [0.4,0.7], along with the sigmoid scheme with λ = 10⁻³ and the self-adaptive scheme, exhibited the best performance, with small differences among them, followed by dominance of the best and swarm partitioning.

In TPUO-2, the quantized scheme with u = 0.2 was the most promising. As expected, exploration-oriented UPSO variants with u closer to 0.0 performed better than exploitation-oriented variants with u closer to 1.0. The sigmoid scheme with λ = 10⁻³ and the self-adaptive scheme also performed well, without statistically significant differences between them. In this problem, dominance of the best exhibited better performance than swarm partitioning.

TPUO-3 is highly multimodal. Thus, it was anticipated that balanced UPSO versions would perform better. Indeed, the value u = 0.5 proved to be the best, while unification factors higher than 0.7 exhibited poor performance. The self-adaptive scheme had the best overall performance. Also, the modular scheme exhibited superior performance in this problem.
This suggests that, in highly multimodal functions, the iterative modification of the unification factor from 0.0 to 1.0 can provide better results than fixed values.

In TPUO-4, the value u = 0.5 was again the most promising, with exploration-oriented UPSO variants outperforming the exploitation-oriented ones. The self-adaptive and dominance-of-the-best schemes were also shown to be efficient, with all algorithms having marginal performance differences among them. Finally, in TPUO-5, the quantized scheme with unification factor u = 0.3 was the best among all quantized schemes, while swarm partitioning and the self-adaptive scheme had the best overall performance among all schemes.

Summarizing the results, u = 0.2 and u = 0.3 were the only cases of the quantized scheme with success percentages of 100% for all test problems, although, in some cases, they were outperformed by more balanced UPSO versions, such as u = 0.5. Also, the linearly increasing and self-adaptive schemes were successful in all test problems, with the latter always outperforming both the local and global PSO variants with respect to the mean number of iterations. Moreover, it was observed that the final unification factor (i.e., that of the solution) in the adaptive schemes rarely came in line with the best unification factor observed in the quantized scheme. This is an indication that, for a given problem, the adaptive schemes were able to capture its shape and change their behavior accordingly, based on their performance during execution.

Table 2. Results for the overall best performing and the second best performing scheme in the experiments of Parsopoulos and Vrahatis (2007), along with those of the standard gbest and lbest PSO variants. The adaptation schemes are denoted as follows: "Quan" for quantized; "Sig" for sigmoid; "Self" for self-adaptive; and "Part" for swarm partitioning. For each case, the success percentage over 100 experiments, as well as the mean required number of iterations, are reported.

Problem           Overall Best    Second Best     gbest PSO       lbest PSO
TPUO-1  Method    Quan (u = 0.5)  Quan (u = 0.4)  Quan (u = 1.0)  Quan (u = 0.0)
        Success   100%            100%            91%             100%
        Mean      1.921×10²       1.930×10²       1.231×10³       5.698×10²
TPUO-2  Method    Sig (λ = 10⁻³)  Quan (u = 0.2)  Quan (u = 1.0)  Quan (u = 0.0)
        Success   100%            100%            68%             100%
        Mean      1.931×10²       2.401×10²       3.583×10³       4.673×10²
TPUO-3  Method    Self            Quan (u = 0.5)  Quan (u = 1.0)  Quan (u = 0.0)
        Success   100%            100%            52%             95%
        Mean      1.273×10²       1.313×10²       4.895×10³       9.628×10²
TPUO-4  Method    Quan (u = 0.5)  Quan (u = 0.3)  Quan (u = 1.0)  Quan (u = 0.0)
        Success   100%            100%            90%             100%
        Mean      1.794×10²       1.957×10²       1.299×10³       5.317×10²
TPUO-5  Method    Quan (u = 0.3)  Part            Quan (u = 1.0)  Quan (u = 0.0)
        Success   100%            100%            76%             99%
        Mean      4.074×10²       4.102×10²       2.674×10        8.956×10²

The highest diversities of unification factor values at the solutions were observed for TPUO-3 and TPUO-5, while narrower ranges were obtained for the rest of the problems. Overall, for every test problem there was a UPSO version that outperformed the standard PSO variants. The values u = 0.2 and u = 0.3 were shown to be the most effective values of the unification factor, while u = 0.5 constitutes a good choice for a more balanced search and faster convergence, although at the risk of slightly reduced success rates. The linearly increasing, sigmoid, and self-adaptive schemes were robust and reliable, with the latter exhibiting considerably better performance.

The reported results reveal the potential of UPSO to be a very promising approach.
UPSO's potential is also reflected in the number of its applications in different scientific fields; such applications are reported in following chapters of the book at hand. The next section, on memetic PSO, presents a recently proposed and efficient approach that incorporates local search.

MEMETIC PARTICLE SWARM OPTIMIZATION

Memetic PSO (MPSO) is a hybrid algorithm that combines PSO with local search techniques. MPSO consists of two main components: a global one, which is responsible for global search, and a local one, which performs more refined search around roughly detected solutions. In the next section, we briefly describe the fundamental concepts of memetic algorithms (MAs) and provide the necessary background for the presentation of MPSO.

Fundamental Concepts of Memetic Algorithms

MAs comprise a family of population-based heuristic search algorithms designed for global optimization tasks. The main inspiration behind their development was the concept of the meme, as coined by Dawkins (1976), which represents a unit of cultural evolution that admits refinement. Memes can also represent models of adaptation in natural systems that combine evolutionary adaptation with individual learning within a lifetime. MAs include a stage of individual optimization or learning, usually in the form of a local search procedure, as part of their search operation.

MAs were first proposed by Moscato (1989), where simulated annealing was used for local search, with a competitive and cooperative game between agents interspersed with the use of a crossover operator, to tackle the traveling salesman problem. This method gained wide acceptance due to its ability to solve difficult problems.

Although MAs bear a similarity to genetic algorithms (GAs) (Goldberg, 1989), they mimic cultural rather than biological evolution. Indeed, GAs employ a combination of selection, recombination, and mutation, similar to that applied to genes in natural organisms. However, genes are usually not modified during a lifetime, whereas memes are. Therefore, most MAs can be interpreted rather as a cooperative/competitive algorithm of optimizing agents.

In general, an evolutionary MA can be described with the procedure reported in Table 3. In particular, at the beginning, the population is initialized within the search space. The local search algorithm is initialized on one or more population members and performs local search from each one. Then, the produced solutions are evaluated and evolutionary operators are applied to produce offspring. Local search is applied on these offspring, and selection chooses the individuals that will constitute the parent population in the next generation. The termination condition can include various criteria, such as time expiration and/or a maximum generations limit.

Table 3. Pseudocode of a generic memetic evolutionary algorithm

Input: Population, evolutionary operators, local search algorithm.
Step 1.  Initialize population.
Step 2.  Apply local search.
Step 3.  Evaluate population.
Step 4.  While (termination condition is false)
Step 5.    Apply recombination.
Step 6.    Apply mutation.
Step 7.    Apply local search.
Step 8.    Evaluate population.
Step 9.    Apply selection.
Step 10. End While
Step 11. Report best solution.
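As a concrete rendering of Table 3, the sketch below implements the generic loop in Python. The operators evolve and local_search are placeholders supplied by the user, and the truncation-style selection in Step 9 is only one of several possible choices; none of these specifics are prescribed by the table.

def memetic_algorithm(init_pop, evaluate, evolve, local_search, max_gens=100):
    """Generic memetic EA following Table 3 (sketch; operators are user-supplied)."""
    population = [local_search(ind) for ind in init_pop()]      # Steps 1-2
    fitness = [evaluate(ind) for ind in population]             # Step 3
    n = len(population)
    for _ in range(max_gens):                                   # Step 4
        offspring = evolve(population, fitness)                 # Steps 5-6
        offspring = [local_search(ind) for ind in offspring]    # Step 7
        population += offspring
        fitness += [evaluate(ind) for ind in offspring]         # Step 8
        order = sorted(range(len(population)), key=fitness.__getitem__)
        population = [population[i] for i in order[:n]]         # Step 9: keep the
        fitness = [fitness[i] for i in order[:n]]               # n fittest (minimization)
    return population[min(range(n), key=fitness.__getitem__)]   # Step 11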
The first implementations of MAs were hybrid algorithms combining GAs as the global search component with a local search (GA-LS) (Belew et al., 1991; Hart, 1994; Hinton & Nowlan, 1987; Keesing & Stork, 1991; Muhlenbein et al., 1988). The GA-LS hybrid scheme was interesting due to the interaction between its local and global search components. An important issue of these interactions, also met in natural systems, is the Baldwin effect (Belew, 1990; Hinton & Nowlan, 1987), where learning proves to speed up the rate of evolutionary change. Similar effects have been observed in several GA-LS schemes (Belew et al., 1991; Hinton & Nowlan, 1987; Keesing & Stork, 1991).

MAs have also been successfully applied in combinatorial optimization, especially for the approximate solution of NP-hard optimization problems, where their success can be attributed to the synergy of the employed global and local search components (Krasnogor, 2002; Land, 1998; Merz, 1998).

Recently Proposed Memetic PSO Schemes

MPSO combines PSO with a local search algorithm. Besides the selection of the most appropriate local search method, three major questions arise spontaneously. Henceforth, we will call these questions the fundamental memetic questions (FMQs):

(FMQ 1) When should local search be applied?
(FMQ 2) Where should local search be applied?
(FMQ 3) What computational budget shall be accredited to the local search algorithm?

It is a matter of user experience to provide appropriate answers to these questions prior to the application of an MA to a complex problem. For this purpose, a preprocessing phase for the determination of the most promising choices can be valuable in unstudied problems.

Petalas et al. (2007b) proposed an MPSO approach that uses the random walk with directional exploitation local search method (Rao, 1992), and studied its performance against the standard PSO on several benchmark problems. In another application framework, related to learning in fuzzy cognitive maps, Petalas et al. (2007a) extended their scheme by using the local search algorithms of Hooke and Jeeves (Hooke & Jeeves, 1961) and Solis and Wets (Solis & Wets, 1981). More recently, an entropy-based MPSO method was proposed and studied for the detection of periodic orbits of nonlinear mappings (Petalas et al., 2007c). Let us take a closer look at each of the aforementioned schemes in the following sections.

Memetic PSO with Random Walk with Directional Exploitation Local Search

In (Petalas et al., 2007b), the following schemes regarding the point of application of the local search (which is related to FMQ 2) were proposed (a sketch of how these selection rules can be coded follows the list):

(Scheme 1) Local search is applied on the overall best position, pg, of the swarm, where g is the index of the best particle.
(Scheme 2) For each best position, pi, i = 1, 2,…, N, a random number, r ∈ [0,1], is generated and, if r < ε for a prescribed threshold, ε > 0, local search is applied on pi.
(Scheme 3) Local search is applied on the overall best position, pg, as well as on some randomly selected best positions, pi, i ∈ {1, 2,…, N}.
(Scheme 4) Local search is applied on the overall best position, pg, as well as on the best positions, pi, i ∈ {1, 2,…, N}, for which it holds that ||pg − pi|| > cΔ(A), where c ∈ (0,1) and Δ(A) is the diameter of the search space A or an approximation of it.

These schemes can be applied either in every iteration of the algorithm or only at some iterations.
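The four schemes reduce to simple selection rules over the indices of the best positions. The following sketch is ours; the fraction of random extras in Scheme 3 and the default parameter values are illustrative assumptions.

import numpy as np

def select_for_local_search(best_positions, g, scheme=1, eps=0.1, c=0.5, diam=None):
    """Indices of the best positions on which local search is applied (Schemes 1-4).

    best_positions : (N, n) array of the best positions pi
    g              : index of the overall best position pg
    eps            : probability threshold of Scheme 2
    c, diam        : parameters of Scheme 4; diam approximates the diameter Δ(A)
    """
    N = len(best_positions)
    if scheme == 1:        # only the overall best position
        return [g]
    if scheme == 2:        # each pi independently, with probability eps
        return [i for i in range(N) if np.random.rand() < eps]
    if scheme == 3:        # the overall best plus a few randomly chosen pi
        extras = np.random.choice(N, size=max(1, N // 10), replace=False)
        return sorted({g} | set(extras.tolist()))
    if scheme == 4:        # the overall best plus the pi lying far from it
        if diam is None:
            raise ValueError("Scheme 4 needs an estimate of the diameter of A")
        d = np.linalg.norm(best_positions - best_positions[g], axis=1)
        return sorted({g} | {i for i in range(N) if d[i] > c * diam})
    raise ValueError("scheme must be 1, 2, 3 or 4")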
Of course, different approaches can also be considered, e.g., application of local search to all particles. However, empirical studies suggest that such schemes are costly in terms of function evaluations, while, in practice, only a small percentage (about 5%) of the particles needs to be considered as application points for local search (Petalas et al., 2007b). These conclusions were also verified by Hart (1994) for the case of GA-LS hybrid schemes.

A pseudocode for the MPSO algorithm is reported in Table 4. In addition, a proof of convergence in probability was derived in (Petalas et al., 2007b) for the approach with the random walk with directional exploitation (RWDE) (Rao, 1992) local search algorithm, which is sketched below.

Table 4. Pseudocode of the MPSO algorithm

Input: N, χ, c1, c2, xmin, xmax (lower and upper bounds), f(x) (objective function)
Step 1.  Set t ← 0.
Step 2.  Initialize xi(t), vi(t), pi(t), i = 1, 2,…, N.
Step 3.  Evaluate f(xi(t)), i = 1, 2,…, N.
Step 4.  Update indices, gi, of best particles.
Step 5.  While (stopping condition not met)
Step 6.    Update velocities, vi(t+1), and particles, xi(t+1), i = 1, 2,…, N.
Step 7.    Constrain particles within bounds [xmin, xmax].
Step 8.    Evaluate f(xi(t+1)), i = 1, 2,…, N.
Step 9.    Update best positions, pi(t+1), and indices, gi.
Step 10.   If (local search is applied) Then
Step 11.     Choose a position pq(t+1), q ∈ {1, 2,…, N}, according to Schemes 1-4.
Step 12.     Apply local search on pq(t+1) and obtain a solution, y.
Step 13.     If (f(y) < f(pq(t+1))) Then pq(t+1) ← y.
Step 14.   End If
Step 15.   Set t ← t+1.
Step 16. End While

RWDE is an iterative stochastic optimization method. It generates a sequence of approximations of the optimizer by assuming a random search direction. RWDE can be applied both on discontinuous and non-differentiable functions. Moreover, it has been shown to be effective in cases where other methods fail due to difficulties posed by the form of the objective function, e.g., sharply varying functions and shallow regions (Rao, 1992). Let x(t) be the approximation generated by RWDE at the t-th iteration, with x(1) denoting the initial value. Let also λinit be a starting step length and Tmax be the maximum number of iterations, while f(x) denotes the objective function. Then, RWDE is described by the pseudocode of Table 5.

Table 5. Pseudocode of the RWDE algorithm

Input: Initial point, x(1); initial step, λinit; maximum iterations, Tmax.
Step 1.  Set t ← 0 and λ ← λinit.
Step 2.  Evaluate f1 = f(x(1)).
Step 3.  While (t < Tmax)
Step 4.    Set t ← t+1.
Step 5.    Generate a unit-length random vector, z(t).
Step 6.    Evaluate f´ = f(x(t)+λz(t)).
Step 7.    If (f´ < ft) Then
Step 8.      Set x(t+1) = x(t)+λz(t).
Step 9.      Set λ ← λinit and ft+1 ← f´.
Step 10.   Else
Step 11.     Set x(t+1) = x(t).
Step 12.     If (f´ > ft) Then Set λ ← λ/2.
Step 13.   End If
Step 14. End While
Step 15. Report solution.

Alternatively to the rather simple RWDE, more sophisticated stochastic local search algorithms can be used (Hoos & Stützle, 2004). Petalas et al. (2007b) justified their choice of RWDE by stating that their main goal was to designate MPSO as an efficient and effective method, rather than conducting a thorough investigation of the employed local search and its convergence properties. Additionally, the simplicity of RWDE, along with its adequate efficiency, was also taken into consideration. Moreover, RWDE does not make any continuity or differentiability assumptions on the objective function; thus, it is consistent with the PSO framework, which requires function values solely. For all these reasons, RWDE was preferred over gradient-based local search algorithms (Petalas et al., 2007b).
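Table 5 translates almost line-for-line into Python. The sketch below assumes only that the objective f accepts a NumPy array and returns a float; the direction in Step 5 is drawn as a normalized Gaussian vector, one standard way of producing a unit-length random direction.

import numpy as np

def rwde(f, x, lam_init=1.0, t_max=50):
    """Random walk with directional exploitation (Rao, 1992), after Table 5."""
    lam, fx = lam_init, f(x)                      # Steps 1-2
    for _ in range(t_max):                        # Steps 3-4
        z = np.random.randn(x.size)               # Step 5: random direction,
        z /= np.linalg.norm(z)                    # normalized to unit length
        trial = x + lam * z
        f_trial = f(trial)                        # Step 6
        if f_trial < fx:                          # Steps 7-9: success, reset step
            x, fx, lam = trial, f_trial, lam_init
        elif f_trial > fx:                        # Steps 11-12: failure, halve
            lam /= 2.0                            # the step and stay put
    return x, fx                                  # Step 15

# Example: x, fx = rwde(lambda v: float(np.sum(v**2)), np.random.uniform(-5, 5, 10))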
Petalas et al. (2007b) performed extensive experiments with the MPSO with RWDE on test problems TPUO-1 to TPUO-9, defined in Appendix A of the book at hand. The dimension of each problem, the search space, as well as the required accuracy in their experiments, are reported in Table 6. The maximum number of iterations for every problem was set to Tmax = 10^4. Three different swarm sizes, N = 15, 30, and 60, were used to study the corresponding scaling properties of MPSO. Both the global and local PSO variants were equipped with RWDE, resulting in the corresponding memetic schemes, which are henceforth denoted as MPSOg[RW] and MPSOl[RW], respectively.

Table 6. Configuration of the test problems. All problems are defined in Appendix A of the book at hand.

Problem   Dimension   Range               Accuracy
TPUO-1    30          [-100, 100]^30      10^-2
TPUO-2    30          [-30, 30]^30        10^2
TPUO-3    30          [-5.12, 5.12]^30    10^2
TPUO-4    30          [-600, 600]^30      10^-1
TPUO-5    2           [-100, 100]^2       10^-5
TPUO-6    30          [-32, 32]^30        10^-3
TPUO-7    4           [-1000, 1000]^4     10^-6
TPUO-8    30          [-50, 50]^30        10^-6
TPUO-9    30          [-50, 50]^30        10^-2

The memetic approaches were compared against their standard PSO counterparts. For this purpose, 50 experiments were performed for each test problem and algorithm. An experiment was considered successful if the global minimizer was detected with the required accuracy within the maximum number of iterations. In an attempt to achieve the best possible performance of the considered memetic approaches per test problem, Petalas et al. (2007b) used a different configuration of RWDE for each problem, based on observations in preliminary experiments (preprocessing phase). These configurations are reported in Table 7. The first column of the table denotes the problem, while the second stands for the swarm size, N. The third and fourth columns report the number of iterations and the initial step size used by RWDE, respectively. The fifth column has the value "Yes" in cases where RWDE was applied only on the best particle of the swarm. On the other hand, if RWDE was applied with a probability on the best position of each particle, then this probability is reported in the sixth column. Finally, the last column shows the frequency of application of the local search. For instance, the value "1" corresponds to application of local search at each iteration, while "20" corresponds to application every 20 iterations.

Table 7. Configuration of the RWDE local search in (Petalas et al., 2007b). For each test problem and swarm size, N, the number of iterations (Iter) and initial step size (Step) are reported for each algorithm. The value "Yes" under the column "Best" denotes that RWDE was applied only on the best particle of the swarm. If RWDE was applied with a probability on the best position of each particle, then this probability is reported under the "Prob" column, while column "Freq" shows the frequency of application of RWDE (e.g., "1" corresponds to application at each iteration, while "20" corresponds to application every 20 iterations).

                      MPSOg[RW]                          MPSOl[RW]
Problem   N      Iter.  Step  Best  Prob.  Freq.    Iter.  Step  Best  Prob.  Freq.
TPUO-1    15     5      1.0   Yes   -      1        10     1.0   Yes   -      1
          30     5      1.0   Yes   -      1        10     1.0   Yes   -      1
          60     5      1.0   Yes   -      1        5      1.0   Yes   -      1
TPUO-2    15     10     1.0   Yes   -      1        8      0.5   Yes   -      50
          30     5      1.0   Yes   -      1        5      1.0   Yes   -      30
          60     5      1.0   Yes   -      1        5      1.0   Yes   -      1
TPUO-3    15     5      1.0   -     0.2    1        5      1.0   Yes   -      20
          30     5      1.0   -     0.2    1        10     1.0   Yes   -      20
          60     5      1.0   -     0.1    1        5      1.0   Yes   -      1
TPUO-4    15     5      4.0   Yes   -      1        10     8.0   Yes   -      1
          30     5      4.0   Yes   -      1        10     8.0   Yes   -      1
          60     5      4.0   Yes   -      1        10     8.0   Yes   -      1
TPUO-5    15     8      1.0   -     0.3    1        8      1.0   -     0.3    2
          30     8      1.0   -     0.2    1        8      1.0   -     0.1    1
          60     8      1.0   -     0.1    1        8      1.0   -     0.1    2
TPUO-6    15     5      1.0   -     0.5    1        5      1.0   Yes   -      20
          30     5      1.0   -     0.5    1        5      1.0   Yes   -      20
          60     5      1.0   -     0.4    1        5      1.0   Yes   -      20
TPUO-7    15     5      1.0   Yes   -      20       5      1.0   Yes   -      20
          30     5      1.0   Yes   -      20       5      1.0   Yes   -      20
          60     5      1.0   Yes   -      20       5      1.0   Yes   -      20
TPUO-8    15     5      1.0   -     0.8    1        5      1.0   Yes   0.3    1
          30     5      1.0   -     0.5    1        5      1.0   Yes   -      2
          60     5      1.0   -     0.3    1        5      1.0   Yes   -      2
TPUO-9    15     5      1.0   -     0.6    1        3      1.0   -     0.5    1
          30     5      1.0   -     0.5    1        5      1.0   -     0.1    1
          60     5      1.0   -     0.3    1        10     1.0   Yes   -      2

For the local variants of PSO and MPSO, a ring neighborhood of radius r = 1 was used for TPUO-1 to TPUO-6 and TPUO-8, while a radius equal to r = 2 provided better results for TPUO-7 and TPUO-9 (Petalas et al., 2007b). The results of the best performing MPSO variants (among those reported in Table 7) and those of the corresponding standard PSO variants are reported in Table 8 (Petalas et al., 2007b). More specifically, for each test problem and algorithm, the swarm size, N, the number of successes (over 50 experiments), as well as the mean number of function evaluations of the best performing variant per algorithm, are reported. The best performing variant was defined as the one with the highest number of successes and the smallest mean number of function evaluations (Petalas et al., 2007b). Instead of iterations, function evaluations were used as the performance measure, since memetic approaches do not necessarily perform the same number of function evaluations per iteration, due to the application of local search.
In addition, the reported mean number of function evaluations was computed for all approaches using only information from their successful experiments, in order to provide an estimation of the expected number of function evaluations in a successful run (Petalas et al., 2007b).

Table 8. Results of the best performing variants (among those reported in Table 7) of MPSOg[RW], MPSOl[RW], PSOg and PSOl. For each problem and algorithm, the swarm size, N, the number of successes (over 50 experiments), as well as the mean number of function evaluations of the best performing variant, are reported.

Problem              MPSOg[RW]   MPSOl[RW]   PSOg      PSOl
TPUO-1   N           15          15          60        15
         Success     50          50          48        50
         Mean        6009.7      7318.3      16360.0   8467.5
TPUO-2   N           15          15          60        15
         Success     50          50          39        50
         Mean        9275.5      7679.0      26420.0   8004.9
TPUO-3   N           60          30          60        30
         Success     50          50          40        50
         Mean        19494.8     13815.3     10042.5   16848.0
TPUO-4   N           15          15          60        15
         Success     50          50          49        50
         Mean        5956.5      6588.2      14891.0   8006.4
TPUO-5   N           30          60          60        60
         Success     50          50          44        50
         Mean        12425.4     18080.2     11877.3   27240.0
TPUO-6   N           60          15          60        15
         Success     50          50          20        50
         Mean        79473.6     12978.2     25116.0   12733.5
TPUO-7   N           15          15          60        15
         Success     50          50          50        50
         Mean        2094.3      2685.8      5616.0    2686.2
TPUO-8   N           30          30          60        60
         Success     49          50          33        50
         Mean        78764.5     38916.5     30172.7   74986.8
TPUO-9   N           30          15          30        60
         Success     50          50          38        50
         Mean        34710.7     17579.2     17416.6   25918.8

Clearly, the memetic variants outperformed their corresponding standard PSO variants. More specifically, for the global variants, MPSOg[RW] has higher success rates than PSOg in all problems. In some cases, this comes at the cost of some extra function evaluations, although in most problems MPSOg[RW] was computationally cheaper than PSOg. The impact of swarm size appears to be similar for both the memetic and the corresponding standard PSO variants, with larger swarms requiring more function evaluations but exhibiting better success rates (Petalas et al., 2007b).

Similar conclusions are also derived for the local variants. The superiority of PSOl over PSOg, in terms of success rate, is inherited by its memetic counterpart, MPSOl[RW], over MPSOg[RW]. Moreover, MPSOl[RW] performed better than PSOl in almost all cases, achieving high success rates with a significantly smaller number of function evaluations (Petalas et al., 2007b). MPSOl[RW] had better success rates than MPSOg[RW] in all test problems, although at the cost of slower convergence. This is also an indication that the neighborhood radius in memetic PSO approaches has the same effect on convergence as for the standard PSO methods.

Interestingly, in many cases MPSOg[RW] outperformed even PSOl, which is a promising indication that the use of RWDE with global PSO variants can significantly enhance their exploration capabilities.
Petalas et al. (2007b) also performed t-tests in cases where the superiority of one algorithm over another was not clear. The tests were conducted using the null hypothesis that the mean numbers of required function evaluations of the two algorithms are equal, at a 99% significance level. Their conclusions suggested that in TPUO-1, where MPSOg[RW] was compared with PSOl (they both achieved 50 successes), the null hypothesis could be rejected, i.e., MPSOg[RW] performed better than PSOl, for a swarm size equal to 15. The same holds for MPSOl[RW] against PSOl and for MPSOg[RW] against MPSOl[RW], which seems to be a natural consequence attributed to the simplicity and unimodality of TPUO-1. Similar tests were performed for TPUO-7, where MPSOg[RW] was proved to be statistically superior to PSOl.

Overall, MPSO outperformed PSO, exhibiting significantly better performance in most test problems. Petalas et al. (2007b) also presented results for constrained, minimax, and integer optimization problems, to further justify the superiority of memetic PSO approaches over the standard PSO. The reader is referred to the original paper for a thorough presentation.

RWDE was not the only local search algorithm combined with PSO in memetic schemes. Different approaches that employ the Hooke and Jeeves (Hooke & Jeeves, 1961) and the Solis and Wets (Solis & Wets, 1981) local search were proposed and applied on learning problems in fuzzy cognitive maps (Petalas et al., 2007a). Besides the different local search methods, an entropy-based MPSO (Petalas et al., 2007c) was proposed in an attempt to tackle the issues raised by the FMQs described in the previous section. We present this approach in the next section.

Entropy-Based Memetic PSO

The entropy-based memetic PSO (henceforth denoted as E-MPSO) was introduced by Petalas et al. (2007c), and it is based on the concept of Shannon's information entropy (SIE) (Shannon, 1964). SIE has been used as a diversity measure in genetic programming (Burke et al., 2004; Rosca, 1995). Entropy has also been used for developing diversity-preserving techniques in multiobjective evolutionary algorithms (Cui et al., 2001), as well as for determining the initial conditions of local search in parallel memetic schemes (Tang et al., 2006).

Let P be a population divided into k phenotype classes, and let qk be the proportion of P occupied by partition k at a given time. Then, SIE is defined as (Rosca, 1995):

SIE(P) = -Σk qk log qk,

representing the amount of chaos in the system. Large values of entropy correspond to small values of qk, i.e., the partitions are almost equally significant. On the other hand, small values of entropy correspond to larger values of qk, i.e., a significant number of individuals are concentrated in a few partitions. Therefore, high entropy indicates higher population diversity, in direct analogy to physical systems (Shannon, 1964).
In the context of PSO, let S = {x1, x2,…, xN} be a swarm consisting of N particles, and P = {p1, p2,…, pN} be the corresponding best positions. Then, at a given iteration, t, SIE is defined as:

SIEt(P) = -Σi=1,…,N qi(t) log qi(t),     (7)

where,

qi(t) = f(pi(t)) / Σj=1,…,N f(pj(t)),

i.e., qi(t) is the contribution of the function value f(pi(t)) of the best position pi(t) to the sum of all best position function values. This quantity has been widely used as a fitness measure in evolutionary algorithms.

The SIE of equation (7) measures the spread of best position values, providing information regarding the behavior of PSO. High SIE values correspond to widely spread function values of the best positions, while a smaller SIE indicates similar function values. Also, a rapidly changing value of SIE is an indication of rapidly changing best positions, while slight changes of SIE indicate that the relative differences among function values of the population remain almost unchanged, an effect produced also by search stagnation (Petalas et al., 2007c).

Therefore, SIE can be used as a criterion for deciding, at the swarm level, whether to apply local search. More specifically, the changes in the value of SIE are monitored at equidistant moments (e.g., every k iterations), and, if the difference of two consequent measurements is smaller than a user-defined threshold, TSIE, i.e.,

|SIEt(P) - SIEt-k(P)| ≤ TSIE,

then the local search component of the algorithm is evoked.

However, only some of the best positions will serve as initial conditions for the local search. For this purpose, a randomized non-elitist selection of best positions is performed (Petalas et al., 2007c). Thus, for each best position, pi, i = 1, 2,…, N, a random value, Ri, uniformly distributed within [0,1], is drawn. If,

Ri ≤ qs,

where qs is a user-defined selection probability, then pi is selected as an initial condition for local search; otherwise it is ignored. The non-elitism can prevent premature convergence to local minima, while the selection pressure imposed by the user-defined threshold prevents excessively large numbers of function evaluations. A pseudocode describing the operation of E-MPSO is reported in Table 9. The reader is referred to Petalas et al. (2007c) for further details.

Table 9. Pseudocode of the E-MPSO algorithm

Input: N (swarm size), P (best positions), parameters TSIE, qs, k.
Step 1.  Set t ← 0 and SIEprev ← SIEt(P).
Step 2.  While (stopping condition not met)
Step 3.    Set t ← t+1.
Step 4.    Update swarm and best positions, P.
Step 5.    If (mod(t, k) = 0) Then
Step 6.      Compute SIEt(P).
Step 7.      If (|SIEt(P) - SIEprev| ≤ TSIE) Then
Step 8.        Do (i = 1…N)
Step 9.          Draw a random number, Ri ∈ [0,1].
Step 10.         If (Ri ≤ qs) Then
Step 11.           Apply local search on pi(t).
Step 12.           Update pi(t) if better solution found.
Step 13.         End If
Step 14.       End Do
Step 15.     End If
Step 16.     Set SIEprev ← SIEt(P).
Step 17.   End If
Step 18. End While
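The entropy computation of equation (7) and the triggering test of Steps 6-16 are straightforward to code. The sketch below is ours: the local search is treated as a black box, and, as equation (7) implicitly requires, the f(pi) values are assumed non-negative; the guard against zero terms is our addition.

import numpy as np

def swarm_entropy(best_values):
    """SIE of equation (7); best_values holds the f(pi) of the swarm."""
    q = np.asarray(best_values, dtype=float)
    q = q / q.sum()            # qi(t): relative contribution of f(pi(t))
    q = q[q > 0]               # zero terms contribute nothing to the sum
    return float(-np.sum(q * np.log(q)))

def entropy_triggered_local_search(best_pos, best_val, f, local_search,
                                   sie_prev, T_sie=1e-3, q_s=0.2):
    """One E-MPSO monitoring step (Steps 6-16 of Table 9)."""
    sie_now = swarm_entropy(best_val)
    if abs(sie_now - sie_prev) <= T_sie:          # stagnation detected
        for i in range(len(best_pos)):
            if np.random.rand() <= q_s:           # randomized, non-elitist pick
                y = local_search(f, best_pos[i])
                fy = f(y)
                if fy < best_val[i]:              # keep only improvements
                    best_pos[i], best_val[i] = y, fy
    return sie_now                                # becomes SIEprev for the next check

In E-MPSO itself, this check runs only every k iterations (Step 5 of Table 9), not at every swarm update.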
All PSO variants studied so far in this chapter essentially use a single swarm. In the next section, we present a multi-swarm approach especially suited for multiobjective problems.

VECTOR EVALUATED PARTICLE SWARM OPTIMIZATION

Vector evaluated PSO (VEPSO) was introduced by Parsopoulos and Vrahatis (2002a, 2002b) as a multi-swarm PSO variant for multiobjective optimization (MO) problems, and it was extended to parallel implementations in (Parsopoulos et al., 2004). In MO problems, a set of K objective functions, f1(x),…, fK(x), needs to be minimized concurrently. The concept of optimality must change to fit the MO framework. The main goal in MO problems is the detection of Pareto optimal points, i.e., points where a small perturbation in any direction will cause an immediate increase in at least one of the objective functions. A detailed description of the MO framework, along with relevant applications of PSO, is provided in a separate chapter later in the book. Thus, in the following paragraphs, we focus only on the description of the VEPSO approach, assuming that the reader is familiar with the fundamental concepts and definitions of MO. For the unfamiliar reader, Chapter Eleven offers a nice introduction to these fundamental concepts.

Let fk: A→R, k = 1, 2,…, K, be a set of n-dimensional objective functions that need to be minimized concurrently. VEPSO utilizes a set of K swarms, S1, S2,…, SK, one for each objective function. The i-th particle of the k-th swarm is denoted as xi[k]; the corresponding best position as pi[k]; and its velocity as vi[k]. The swarm Sk is evaluated only with its corresponding objective function, fk, for all k = 1, 2,…, K. Swarms exchange information among themselves by sharing their individual findings, in order to direct the search towards the Pareto optimal set, i.e., the set of all Pareto optimal points. Assuming that the swarm Sk consists of Nk particles, k = 1, 2,…, K, its update equations in VEPSO are defined as (Parsopoulos & Vrahatis, 2002a, 2002b):

vij[k](t+1) = χ[k] [vij[k](t) + c1[k] r1 (pij[k](t) - xij[k](t)) + c2[k] r2 (pgs,j[s](t) - xij[k](t))],     (8)

xij[k](t+1) = xij[k](t) + vij[k](t+1),     (9)

k = 1, 2,…, K, i = 1, 2,…, Nk, j = 1, 2,…, n,

where χ[k] is the constriction coefficient of the k-th swarm; c1[k], c2[k] are its cognitive and social coefficients, respectively; r1, r2 are random values uniformly distributed within [0,1]; and gs is the index of the overall best position of the s-th swarm, with s ∈ {1, 2,…, K} and s ≠ k.

VEPSO permits the parameter configuration of each swarm independently. Thus, the number of particles as well as the values of the PSO parameters may differ per swarm. The inertia weight variant of VEPSO can be defined in direct analogy with the constriction coefficient case of equation (8). A notable characteristic is the insertion of the best position of another swarm in the update of Sk.
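Equations (8)-(9) differ from plain constriction PSO only in the social term, which points to the overall best position of another swarm. The sketch below is ours and assumes the ring migration scheme discussed next, plus a single χ, c1, c2 shared by all swarms for brevity.

import numpy as np

def vepso_step(swarms, vels, bests, gbest, fs, chi=0.729, c1=2.05, c2=2.05):
    """One synchronous VEPSO iteration for K swarms, eqs. (8)-(9) (sketch).

    swarms, vels, bests : lists of (Nk, n) arrays per swarm
    gbest               : list with the overall best index of each swarm
    fs                  : the K objective functions, one per swarm
    """
    K = len(swarms)
    for k in range(K):
        s = (k - 1) % K                        # ring migration: swarm k receives
        donor = bests[s][gbest[s]]             # the overall best of swarm k-1
        x, v, p = swarms[k], vels[k], bests[k]
        r1, r2 = np.random.rand(2, *x.shape)
        v[:] = chi * (v + c1 * r1 * (p - x) + c2 * r2 * (donor - x))  # eq. (8)
        x[:] = x + v                                                  # eq. (9)
        vals = np.apply_along_axis(fs[k], 1, x)    # evaluate with fk only
        old = np.apply_along_axis(fs[k], 1, p)
        better = vals < old
        p[better] = x[better]
        gbest[k] = int(np.argmin(np.where(better, vals, old)))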
This information-exchanging scheme among swarms has a prominent position in VEPSO. It can be clearly viewed as a migration scheme, where particles migrate from one swarm to another according to a connecting topology. Figure 5 illustrates the ring migration topology, which corresponds to the following choice of s in equation (8) (Parsopoulos et al., 2004):

s = K, if k = 1,
s = k - 1, if k = 2,…, K.

[Figure 5. Schematic representation of the ring migration topology for K swarms]

The similarities with the corresponding ring neighborhood topology, defined for PSO in Chapter Two, are obvious. An alternative could be a random assignment of s for each swarm. Further constraints can be posed on the selection of s, e.g., allowing the best position of a swarm to migrate only to one of the rest (this property holds for the ring topology but not for the random case).

The performance of VEPSO with two swarms was investigated in (Parsopoulos & Vrahatis, 2002a, 2002b). One of its advantages is the potential for direct parallelization of the algorithm by distributing one swarm per machine. In Parsopoulos et al. (2004), a parallel VEPSO approach was proposed, where up to 10 swarms were distributed over 10 machines of a small computer cluster and their communication was performed through Ethernet interconnections. Experiments on widely used MO test problems revealed that the number of swarms, in combination with the Ethernet speed, has a crucial impact on performance, since the execution time of the swarms becomes comparable to the communication time between machines, thereby raising synchronization issues. The reader is referred to the original paper for further details.

COMPOSITE PARTICLE SWARM OPTIMIZATION: A META-STRATEGY APPROACH

Composite PSO (COMPSO) was introduced by Parsopoulos and Vrahatis (2002a) as a meta-strategy that employs an evolutionary algorithm to control the parameters of PSO during optimization. For this purpose, the differential evolution (DE) algorithm was used. DE is a probabilistic population-based algorithm that works similarly to PSO, using differences of vectors instead of pure probabilistic sampling to produce new candidate solutions. In addition, it is a greedier algorithm than PSO, due to a selection mechanism that retains only the best individuals in a single population. Thus, it can be quite fast but more prone to getting stuck in local minima than PSO. The selection of an algorithm other than PSO itself for controlling its parameters was dictated by observations in early experiments. These observations implied that PSO was unable to control its own parameters during optimization, at least in an efficient manner (Parsopoulos & Vrahatis, 2002a).

The meta-strategy approach assumes a standard PSO swarm for the minimization of the objective function in the search space, and a DE population of PSO parameter values that minimizes a performance criterion in the space of parameters. These two minimization procedures take place concurrently, by switching repeatedly between the swarm and the parameter population (Parsopoulos & Vrahatis, 2002a). The performance criterion for the latter is usually related to the current performance of the swarm, as reflected in the achieved progress in minimizing the objective function.

In the next paragraphs, we briefly describe the DE algorithm and present the COMPSO approach, along with conclusions derived by Parsopoulos and Vrahatis (2002a) for widely used benchmark problems.
The Differential Evolution Algorithm

DE was introduced by Storn and Price (1997). It is a population-based optimization algorithm that utilizes a population, P = {x1, x2,…, xN}, of N individuals to probe the search space. Each individual is an n-dimensional vector, xi = (xi1, xi2,…, xin)T, i = 1, 2,…, N, where n is the dimension of the problem. At each iteration, and for each individual, xi, i = 1, 2,…, N, a mutation operator is applied on xi to produce a new candidate solution. There are several established mutation operators:

vi(t+1) = xr1(t) + F (xr2(t) - xr3(t)),     (10)
vi(t+1) = xg(t) + F (xr1(t) - xr2(t)),     (11)
vi(t+1) = xi(t) + F (xg(t) - xi(t)) + F (xr1(t) - xr2(t)),     (12)
vi(t+1) = xg(t) + F (xr1(t) - xr2(t)) + F (xr3(t) - xr4(t)),     (13)
vi(t+1) = xr1(t) + F (xr2(t) - xr3(t)) + F (xr4(t) - xr5(t)),     (14)

where r1, r2, r3, r4, and r5 are mutually different randomly selected indices that also differ from i, while F ∈ [0,2] is a user-defined parameter. The index g denotes the best individual of the population, i.e., the one with the lowest function value. Thus, the new candidate solution is produced by combining individuals from the current population. The similarity of some operators, e.g., equation (12), with the velocity update of PSO is apparent, suggesting an intimate relation between the two algorithms.

After mutation, a crossover operator is applied to produce a trial vector, ui = (ui1, ui2,…, uin)T, as follows:

uij(t+1) = vij(t+1), if (Rj ≤ CR) or (j = rnbr(i)),
uij(t+1) = xij(t), if (Rj > CR) and (j ≠ rnbr(i)),

j = 1, 2,…, n,

where Rj is the j-th evaluation of a uniform random number generator in [0,1]; CR is a user-defined crossover constant in [0,1]; and rnbr(i) is a randomly selected index from {1, 2,…, n}. If the trial vector improves the function value of xi, then it replaces it in the population:

xi(t+1) = ui(t+1), if f(ui(t+1)) < f(xi(t)),
xi(t+1) = xi(t), otherwise.

Thus, DE always stores the detected best positions in its population and operates directly on them, in contrast to PSO, where the best positions are maintained in a separate population. This feature renders DE a greedy algorithm, where convergence is achieved quickly, but possibly at the cost of lower efficiency.
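A minimal DE generation using the mutation operator of equation (10), together with the crossover and selection rules above, can be sketched as follows (our code; F and CR as defined in the text):

import numpy as np

def de_step(pop, f_vals, f, F=0.5, CR=0.9):
    """One DE generation: mutation by eq. (10), binomial crossover, greedy selection.
    Requires at least four individuals, so that r1, r2, r3 can differ from i."""
    N, n = pop.shape
    new_pop, new_vals = pop.copy(), f_vals.copy()
    for i in range(N):
        r1, r2, r3 = np.random.choice([j for j in range(N) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])      # mutation, eq. (10)
        u = pop[i].copy()
        mask = np.random.rand(n) <= CR             # Rj <= CR per coordinate
        mask[np.random.randint(n)] = True          # rnbr(i): one forced coordinate
        u[mask] = v[mask]                          # binomial crossover
        fu = f(u)
        if fu < f_vals[i]:                         # greedy one-to-one selection
            new_pop[i], new_vals[i] = u, fu
    return new_pop, new_vals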
The Composite PSO Approach

Parsopoulos and Vrahatis (2002a) used the inertia weight PSO variant, defined by equations (7) and (8) of Chapter Two, for the COMPSO approach. The three parameters of the swarm are the inertia weight, w, and the cognitive and social parameters, c1 and c2, respectively. For these parameters, a DE population of 3-dimensional individuals is defined, where the components of each individual correspond to a setting of the three PSO parameters.

Let St = {x1(t), x2(t),…, xN(t)} be the swarm that operates in the search space of the problem at hand, at the t-th iteration. Then, the particles are updated as follows (Parsopoulos & Vrahatis, 2002a):

vij(t+1) = w vij(t) + c1 R1 (pij(t) - xij(t)) + c2 R2 (pgj(t) - xij(t)),     (15)

xij(t+1) = xij(t) + vij(t+1),     (16)

i = 1, 2,…, N, j = 1, 2,…, n.

Let g denote the best particle at each iteration. For a given swarm, St, a DE population, P = {q1, q2,…, qM}, of 3-dimensional individuals, qm = (wm, c1m, c2m), m = 1, 2,…, M, is defined. The swarm St is updated and evaluated using each parameter set qm individually in equation (15). The function value of the best particle of the updated swarm is adopted as the function value of the individual (parameter set) qm. Then, the DE population is updated using the DE operators, producing a new population of parameters. The procedure continues again with the update of the swarm with each set of parameters, individually, and so on. DE is executed for a specific number of iterations, although different termination criteria can be used. The best parameter set provided by DE after its termination is used by PSO to produce the next iteration of the swarm. The algorithm is presented in pseudocode in Table 10.

Table 10. Pseudocode of the COMPSO algorithm

Input: Initial swarm, S0; velocities, V0; and t = -1.
Step 1.  Set t ← t+1 and t´ ← 0.
Step 2.  Generate a population, Pt´, of vectors qm = {qm1, qm2, qm3}, m = 1, 2,…, M.
Step 3.  Do (m = 1…M)
Step 4.    Set w ← qm1, c1 ← qm2, c2 ← qm3.
Step 5.    Determine temporary swarm, St´, and velocities, Vt´, by using the parameters, w, c1 and c2, of Step 4 in equations (15) and (16).
Step 6.    Evaluate St´ and find the index g of its best particle, xg.
Step 7.    Set f(qm(t´)) ← f(xg).
Step 8.  End Do
Step 9.  Apply DE mutation, crossover and selection on Pt´ and generate a new population, Pt´+1.
Step 10. Find the best individual, q*, of Pt´+1.
Step 11. If (stopping criterion of DE is not met) Then
Step 12.   Set t´ ← t´+1.
Step 13.   Go To Step 3.
Step 14. Else
Step 15.   Go To Step 17.
Step 16. End If
Step 17. Set (w, c1, c2) ← q* and update St, Vt, with equations (15) and (16).
Step 18. End While
Step 19. If (stopping criterion of PSO is not met) Then
Step 20.   Go To Step 1.
Step 21. Else
Step 22.   Terminate algorithm.
Step 23. End If

Parsopoulos and Vrahatis (2002a) applied COMPSO to a set of widely used benchmark problems, reported in Appendix A of the book at hand. For each problem, they performed 25 independent experiments using the DE operator defined in equation (10). The parameters of PSO were bounded as follows:

0.4 ≤ w ≤ 1.2, 0.1 ≤ c1, c2 ≤ 4.
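The heart of Table 10 (Steps 3-8) is the evaluation of a DE individual qm = (w, c1, c2): the current swarm is advanced one trial step with those parameters, and the best function value obtained becomes the fitness of qm. The following sketch is ours; pso_step is an illustrative helper for equations (15)-(16), and the de_step function from the previous section can then evolve the parameter population.

import numpy as np

def pso_step(x, v, p, pvals, w, c1, c2):
    """One inertia-weight PSO update, eqs. (15)-(16)."""
    g = p[np.argmin(pvals)]
    R1, R2 = np.random.rand(2, *x.shape)
    v_new = w * v + c1 * R1 * (p - x) + c2 * R2 * (g - x)
    return x + v_new, v_new

def compso_fitness(q, x, v, p, pvals, f):
    """Fitness of a parameter vector q = (w, c1, c2), Steps 4-7 of Table 10."""
    w, c1, c2 = q
    x_trial, _ = pso_step(x, v, p, pvals, w, c1, c2)   # temporary swarm
    return min(f(xi) for xi in x_trial)                # best value found

After DE terminates on compso_fitness, the winning (w, c1, c2) is used to produce the actual next iteration of the swarm (Step 17).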
Besides the increased efficiency of COMPSO, a very interesting property is observed in the results: the mean values assigned by the DE algorithm to the three PSO parameters are very close to the values recognized later, in the stability analysis of Clerc and Kennedy, as "optimal" for the constriction coefficient variant of PSO (Clerc & Kennedy, 2002). These values, along with the corresponding test problems and dimensions, are reported in Table 11. We refer the reader to the original paper by Parsopoulos and Vrahatis (2002a) for further details on the configuration of the algorithms.

Table 11. Mean values of w, c1, c2, assigned by COMPSO to the inertia weight PSO variant through DE. All test problems are reported in Appendix A of the book at hand.

Problem   Dim.   Mean w   Mean c1   Mean c2
TPUO-2    2      0.7337   2.0421    1.1943
TPUO-3    6      0.7027   2.0462    0.9881
TPUO-4    2      0.7082   2.0171    1.3926
TPUO-4    10     0.6415   2.1291    1.4534
TPUO-10   2      0.7329   2.0197    1.3148
TPUO-11   2      0.7153   2.1574    1.2706
TPUO-13   2      0.7085   2.0378    1.2712
TPUO-15   2      0.7023   2.0145    1.2098
TPUO-16   3      0.7053   2.1062    1.1694
TPUO-17   6      0.7179   1.9796    1.3812
TPUO-19   3      0.7739   1.9722    1.3063

In Table 11, we can see that the inertia weight assumed values close to the default one of the constriction coefficient PSO variant, i.e., χ = 0.729. Also, the assigned values of c1 and c2 seem to be interrelated, with their sum having an average equal to 3.3, while the corresponding sum for the original constriction coefficient PSO variant is almost 2.99 (the convergence regions of PSO were theoretically obtained for values higher than 2.916).

Moreover, the DE algorithm always assigns a larger value to c1 than to c2. This tends to enforce the global perspective of COMPSO, i.e., to maintain the diversity of the swarm for a large number of iterations and, consequently, avoid premature convergence to local minima. Parsopoulos and Vrahatis (2002a) also reported that the behavior of COMPSO was not significantly sensitive to different values of the parameters F and CR of DE.

COMPSO was shown to provide a means of reliable control of the parameters of PSO. Of course, this comes at the cost of extra computational effort. However, it might be inevitable to pay this effort, since the determination of proper parameters in unexplored search spaces often requires a large number of preliminary experiments. Additionally, COMPSO controls the parameters during optimization, changing them radically if needed, thereby preventing bad outcomes due to wrong parameter settings. COMPSO has offered a good starting point for research in a field of general interest, namely the field of meta-strategies (also called meta-heuristics). DE can be replaced by any algorithm, and numerous modifications can be made to the original COMPSO scheme. Thus, a lot of research is still needed to reveal the full potential of this apparently promising approach.

GUARANTEED CONVERGENCE PARTICLE SWARM OPTIMIZATION

The guaranteed convergence PSO (GCPSO) was introduced by Van den Bergh and Engelbrecht (2002), based on the finding that the inertia weight variant of PSO can fail to converge even on local minimizers if velocities become very small (Van den Bergh, 2002). To address this problem, a modified update scheme was introduced for the overall best particle, based on the local search algorithm of Solis and Wets (1981).

More specifically, if the current and the best position of the i-th particle at iteration t coincide with the overall best position, i.e., xi(t) = pi(t) = pg(t), then the position update of xi will depend only on the previous velocity term, w vi(t). Thus, if the velocity is very close to zero, the particle will be almost immobilized. Since, in the long run, all particles are expected to approach the global best position, it is possible that the aforementioned deficiency will hold for most of them, resulting in search stagnation (Van den Bergh & Engelbrecht, 2002).

The problem can be alleviated by introducing a different update scheme for the overall best particle. Let g be the index of the best particle in the inertia weight variant of PSO, defined by equations (7) and (8) in Chapter Two. Then, according to GCPSO, the best particle is updated using the equations (Van den Bergh & Engelbrecht, 2002):

vgj(t+1) = -xgj(t) + pgj(t) + w vgj(t) + ρ(t)(1-2r),     (17)

xgj(t+1) = xgj(t) + vgj(t+1),     (18)

j = 1, 2,…, n,

where n is the dimension of the problem; r is a uniformly distributed random number in [0,1]; and ρ(t) is a scaling factor. The rest of the particles are updated using the standard equations (7) and (8) in Chapter Two. Equations (17) and (18) can be combined by substitution, resulting in the equivalent scheme:

xgj(t+1) = pgj(t) + w vgj(t) + ρ(t)(1-2r).     (19)
According to this update, the best particle will not stagnate; instead, it generates new candidate solutions randomly in an area surrounding the overall best position, pg (Van den Bergh & Engelbrecht, 2002).

In addition, GCPSO uses two counters to track the numbers of consecutive successes and failures in updating the best position, where success is defined as the improvement of the overall best function value, and failure otherwise. Thus, if MSUC(t) denotes the number of consecutive successes and MFAIL(t) the number of consecutive failures at iteration t, then the scaling factor ρ(t) is updated as follows (Van den Bergh & Engelbrecht, 2002):

ρ(t+1) = 2 ρ(t), if MSUC(t) > sSUC,
ρ(t+1) = (1/2) ρ(t), if MFAIL(t) > sFAIL,
ρ(t+1) = ρ(t), otherwise,     (20)

where sSUC and sFAIL are user-defined parameters. In addition to equation (20), the following rules apply on MSUC and MFAIL (Van den Bergh & Engelbrecht, 2002):

MSUC(t+1) > MSUC(t) ⇒ MFAIL(t+1) = 0,
MFAIL(t+1) > MFAIL(t) ⇒ MSUC(t+1) = 0,

such that only the numbers of consecutive successes and failures are preserved. The update of equation (20) resembles that of Solis and Wets (1981) for the step update in random local search.

The adaptation of the scaling factor controls the sampling volume of the random search, where consecutive successes increase the volume to allow larger steps, while failures have the opposite effect of reducing the volume. The value ρ(0) = 1.0 was experimentally shown to be a good initial value for the scaling factor. Regarding the rest of the parameters, the values sFAIL = 5 and sSUC = 15 are recommended by Van den Bergh and Engelbrecht (2002). This setting implies that GCPSO will penalize bad values of ρ(t) faster than it rewards good ones. Alternatively, dynamic control of these parameters was proposed as a means to prevent rapid oscillation of ρ(t) (Van den Bergh & Engelbrecht, 2002). Nevertheless, a lower bound on the value of ρ(t) may be beneficial for the algorithm, preventing it from taking almost zero values.
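Equations (19)-(20) together give the following update for the best particle (a minimal sketch; the non-best particles follow the standard PSO update and are not shown):

import numpy as np

def gcpso_best_update(x_g, v_g, p_g, w, rho, m_suc, m_fail, improved,
                      s_suc=15, s_fail=5):
    """Best-particle update of GCPSO, eqs. (19)-(20) (sketch).

    improved : whether the last iteration improved the overall best value
    """
    r = np.random.rand(x_g.size)
    x_new = p_g + w * v_g + rho * (1.0 - 2.0 * r)   # eq. (19): probe around pg
    v_new = x_new - x_g                             # consistent with eqs. (17)-(18)
    if improved:                                    # only consecutive counts survive
        m_suc, m_fail = m_suc + 1, 0
    else:
        m_suc, m_fail = 0, m_fail + 1
    if m_suc > s_suc:                               # successes: enlarge the volume
        rho *= 2.0
    elif m_fail > s_fail:                           # failures: shrink it
        rho /= 2.0
    return x_new, v_new, rho, m_suc, m_fail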
GCPSO has been proved to converge on local minima (Van den Bergh, 2002), based on the convergence proof of Solis and Wets for the corresponding random local search algorithm (Solis & Wets, 1981). In parallel, Van den Bergh and Engelbrecht (2002) reported results from the application of GCPSO on 30-dimensional instances of the test problems TPUO-1, TPUO-3, TPUO-6, and TPUO-20, described in Appendix A of the book. Table 12 reports the improvement percentages achieved by GCPSO against PSO on the aforementioned test problems, in terms of success in achieving the required accuracy over 50 independent experiments, as well as in the required number of function evaluations in the successful experiments, based on the results reported in (Van den Bergh & Engelbrecht, 2002). For the comparisons, the inertia weight PSO variant was used, although its parameter setting implied the constriction coefficient variant, i.e., w = 0.72, c1 = c2 = 1.49 (Van den Bergh & Engelbrecht, 2002).

Table 12. Improvement percentages achieved by GCPSO against PSO for two swarm sizes, N = 10, 20, in terms of success in achieving the required accuracy over 50 independent experiments. The required number of function evaluations in successful experiments is also reported. The numbers are based on the results provided in (Van den Bergh & Engelbrecht, 2002). A dash denotes that both algorithms were 100% successful, while negative values denote worse performance.

                       GCPSO Improvement (%)
Problem     N      Success     Func. Eval.
TPUO-1      10     4.16%       89.89%
            20     -           32.84%
TPUO-3      10     -20.00%     22.53%
            20     -8.16%      30.38%
TPUO-6      10     9.09%       24.44%
            20     24.32%      16.94%
TPUO-20     10     31.57%      73.35%
            20     -           22.14%

In the unimodal test problems, TPUO-1 and TPUO-20, GCPSO exhibited superior performance to PSO, especially in the case of small swarm sizes, where the improvement in function evaluations was higher than 70%. Improvement was also observed in the number of successes, although only for the case of N = 10 particles. This is indicative of the efficiency of GCPSO in unimodal problems, which is the problem type that suits it best (since it is proven to be a local search algorithm).

In the remaining (multimodal) problems, GCPSO had ambiguous performance. While in TPUO-6 there was a significant improvement in the number of successes, especially for larger swarms, in TPUO-3 inferior performance was observed. However, in both cases, GCPSO remained the fastest algorithm, requiring up to 30% fewer function evaluations in successful experiments.

In conclusion, GCPSO is a PSO variant perfectly suited for local optimization, accompanied by a convergence proof. This property renders it a suitable and efficient choice for hybrid PSO schemes that combine local and global optimization components, such as the memetic approaches described in the previous sections. The next section presents a different approach that promotes cooperation among particles to probe the search space.

COOPERATIVE PARTICLE SWARM OPTIMIZATION

Cooperation can be defined as the interaction of individuals for the exchange of information that can enhance their search capabilities during the optimization procedure. This description is rather abstract and embraces different approaches. In some sense, even a simple evolutionary algorithm with a recombination operator can be considered a cooperative algorithm, with information being exchanged through recombination. Cobb (1992) considered this kind of cooperation for a GA-based scheme, while Potter and De Jong (2000) offer a comprehensive description of the main issues arising in cooperative evolutionary approaches.

Cooperation can be either explicit, through direct one-to-one communication among individuals, or implicit, through a shared memory for storing information. In the latter case, each individual can contribute information to the shared memory, placing it at the disposal of the rest (Clearwater et al., 1992). The shared information can be either a complete or a partial solution of the problem. In the latter case, the shared information shall be combined with different parts of other individuals to form a complete candidate solution, i.e., the contribution of each individual pertains only to some of the coordinate directions of the problem.

Potter and De Jong (1994) proposed an approach based on GAs, where one population per coordinate direction of the solution vector was used to probe the search space. Thus, each population was performing a one-dimensional optimization, and cooperation was the communication channel among them that allowed the construction of complete solutions.

The same ideas were adopted by Van den Bergh and Engelbrecht (2004) for the development of cooperative PSO (CPSO) algorithms. More specifically, if n is the dimension of the problem, then, instead of using a single swarm, S, of N particles, CPSO employs n swarms, S1, S2,…, Sn, of Ni, i = 1, 2,…, n, particles each, and each swarm copes only with a single coordinate direction of the solution vector.
A fundamental question arises in such algorithms: how shall each particle be evaluated, since a complete (n-dimensional) solution requires components encoded in other swarms? This question implies the following issues that need to be addressed:

1. How shall particles be selected from each swarm to form complete (n-dimensional) candidate solutions?
2. How shall a particle be penalized or rewarded for its contribution to the quality of solutions?

The answers to these issues determine the special features of the algorithm and have a significant impact on its performance. Therefore, they shall be treated carefully, based on the available data and problem types. Van den Bergh and Engelbrecht (2004) proposed two different CPSO approaches, described in the following sections.

The CPSO-SK Algorithm

This CPSO approach uses the concept of the context vector to tackle the function evaluation problem. Since a direct evaluation of the one-dimensional particles of each swarm is impossible with an n-dimensional objective function, a context that provides the missing information, i.e., the remaining (n-1) coordinates, to each particle is used. This context is an n-dimensional vector where a particle contributes its coordinate value, while the remaining (n-1) coordinates are fixed to the values of the best particles of the other swarms. Van den Bergh and Engelbrecht (2004) denoted this approach as CPSO-SK.

Putting it formally, let xi[k] be the i-th (one-dimensional) particle in the k-th swarm, Sk, for i = 1, 2,…, Ni, and k = 1, 2,…, n. Also, let pg[k] be the (one-dimensional) overall best position of Sk. Then, the particle xi[k] is evaluated with the n-dimensional objective function, f(z), by constructing the n-dimensional context vector:

zi[k] = (pg[1], pg[2],…, pg[k-1], xi[k], pg[k+1],…, pg[n])T,     (21)

and setting:

f(xi[k]) = f(zi[k]).

Then, each one-dimensional swarm is updated according to its PSO velocity and position update, which may not be identical for all swarms. Van den Bergh and Engelbrecht (2004) used the inertia weight variant of PSO. The update of the best positions of each swarm follows the standard PSO rules exactly, using the objective function value assigned to each particle with the aforementioned procedure. The objective function is re-evaluated each time one of the context vector components changes. The context vector that contains all the overall best positions of the swarms:

z* = (pg[1], pg[2],…, pg[n])T,

is, by definition, the overall best solution detected by CPSO-SK for the problem at hand.
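Equation (21) in code: to evaluate the i-th one-dimensional particle of swarm k, splice its value into the vector of the other swarms' best components (a minimal sketch, ours):

import numpy as np

def context_value(f, swarm_bests, k, xi):
    """Evaluate the i-th particle of swarm k through eq. (21).

    f           : the n-dimensional objective function
    swarm_bests : length-n sequence holding each swarm's best component pg[k]
    """
    z = np.array(swarm_bests, dtype=float)  # everyone else's best component...
    z[k] = xi                               # ...with coordinate k replaced by xi
    return f(z)

# The overall CPSO-SK solution is the full context vector z* = (pg[1],..., pg[n]).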
CPSO-SK can also be considered with swarms of different dimensionality. Thus, instead of using n one-dimensional swarms, S1, S2,…, Sn, one can use just m < n swarms, such that:

d1 + d2 + … + dm = n, with di ≥ 1, i = 1, 2,…, m.

This modification is important especially in cases where correlations exist among the directions of the objective function. If correlated directions are assigned to different swarms, then their nonaligned changes will be detrimental for the algorithm (Van den Bergh & Engelbrecht, 2004). Unfortunately, these correlations are not always known to the user.

The performance of CPSO-SK depends on several factors related to the following issues:

1. Selection of the underlying PSO variant per swarm
2. Selection of the context vector building scheme

Regarding the first issue, the inertia weight PSO variant was used in the approach of Van den Bergh and Engelbrecht (2004). Of course, any other PSO variant could be employed as well, and the convergence properties of the CPSO-SK algorithm would then depend heavily on the employed algorithm. In addition, different PSO variants can be used per swarm, combining their different properties. To the best of our knowledge, no such cooperative scheme has been proposed for CPSO to date.

Regarding the second issue, the proposed context vector building scheme employed the best particle per swarm. Alternatively, a randomly selected best position could be used from each swarm, replacing equation (21) with:

zi[k] = (pr1[1], pr2[2],…, prk-1[k-1], xi[k], prk+1[k+1],…, prn[n])T,     (22)

where r1, r2,…, rn are randomly selected indices. Also, a combination of equations (21) and (22) could be used in a competitive manner, by evaluating both and selecting the best one. This corresponds to a cooperation scheme that is more biased towards exhaustive search.

Preliminary experiments revealed performance deficiencies of CPSO-SK, which appears to be prone to getting stuck in local minima. Van den Bergh and Engelbrecht (2004) proposed a hybrid scheme that combines CPSO-SK with the standard PSO to address these deficiencies. This scheme is described in the following section.

The CPSO-HK Algorithm

This hybrid scheme, denoted as CPSO-HK, combines CPSO-SK with PSO (Van den Bergh & Engelbrecht, 2004). The main idea is the application of PSO as soon as the search of CPSO-SK stagnates. It would be ideal if one could identify when CPSO-SK stagnates, in order to switch over to the standard PSO. However, this is not always possible, dictating the necessity of a different scheme. To this end, the application of each algorithm alternately at each iteration was shown to constitute a promising approach. The search can be further enhanced by exchanging information between the active and the idle algorithm at the end of each iteration, in terms of the discovered solutions.

Such an information exchange scheme can be the replacement of some particles of one algorithm with the overall best position of the other (Van den Bergh & Engelbrecht, 2004). More specifically, after the application of CPSO-SK at iteration t, the produced context vector replaces a randomly selected particle of the standard PSO. Then, in the next iteration, t+1, the standard PSO is applied and updates its overall best position, whose direction components replace the best positions in randomly selected swarms of CPSO-SK. Notice that the replacements shall not affect the overall best position detected by each algorithm; otherwise, performance will most likely be reduced.

Moreover, Van den Bergh and Engelbrecht (2004) noticed that if information exchange takes place among all particles of CPSO-SK and PSO, this can result in inferior performance of CPSO-HK compared to more conservative schemes. Loss of diversity and deletion of good particles have been identified as the reasons behind this effect. Thus, using only part (about half) of the particles per algorithm to exchange information has been shown to be a more efficient scheme (Van den Bergh & Engelbrecht, 2004).
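The alternation and the mutual injection of solutions can be sketched as follows. This is our own simplified rendering: cpso_step and pso_step stand for one iteration of the respective algorithms, and the injection targets on the CPSO side are simplified relative to the original description.

import numpy as np

def cpso_hk_iteration(t, pso_x, pso_gbest, swarm_bests, cpso_step, pso_step, rng):
    """One CPSO-HK iteration (sketch): the two components alternate and inject
    their best solutions into each other.

    pso_x       : (N, n) array of plain-PSO particle positions
    pso_gbest   : index of the plain-PSO overall best particle (never replaced)
    swarm_bests : length-n array of the CPSO swarms' best components
    """
    n = len(swarm_bests)
    if t % 2 == 0:                                   # CPSO-SK is active
        z_star = cpso_step(swarm_bests)              # updated context vector
        j = int(rng.integers(len(pso_x)))            # a random PSO particle...
        if j != pso_gbest:                           # ...but never the overall best
            pso_x[j] = np.asarray(z_star, dtype=float)
    else:                                            # plain PSO is active
        g = pso_step(pso_x)                          # new PSO overall best (n-dim)
        for k in rng.choice(n, size=max(1, n // 2), replace=False):
            swarm_bests[k] = g[k]                    # inject into about half the swarms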
Van den Bergh and Engelbrecht (2004) applied CPSO-HK and CPSO-SK on the test problems TPUO-2, TPUO-3, TPUO-4, TPUO-6, and TPUO-20, reported in Appendix A of the book at hand. Moreover, they compared these two approaches with the standard PSO, as well as with the cooperative co-evolutionary GA (CCGA) approach presented in (Potter & De Jong, 1994), providing a significant amount of experimental results. Both CPSO approaches exhibited competitive performance with the rest of the algorithms. Especially for small swarms (10 particles), the CPSO approaches were shown to be very fast and obtained satisfactory results even for rotated versions of the test problems.

There seems to be a difference in performance between CPSO approaches with one-dimensional and higher-dimensional swarms when the coordinate axes are rotated, with the latter exhibiting superior performance. On the other hand, in problems with equidistantly distributed local minima, one-dimensional swarms performed very well. For a detailed presentation and discussion of the results, the reader is referred to the original paper of Van den Bergh and Engelbrecht (2004).

Although several important issues need further investigation, CPSO approaches were shown to exhibit promising behavior, rendering them an interesting alternative in cases where the standard PSO fails to provide satisfactory results. In the next section, we describe a PSO variant that can find several solutions of a problem concurrently by using niches of particles.

NICHING PARTICLE SWARM OPTIMIZATION

Niching is a framework for developing evolutionary algorithms capable of locating several minimizers of the objective function (Horn, 1997). There are two types of niching: parallel niching, where several niches are recognized in a population and maintained simultaneously, and sequential niching, where niching is applied iteratively on the problem, while a procedure ensures that already detected solutions will not be detected again.

Niching PSO (NPSO) was proposed by Brits et al. (2002) as a PSO variant capable of locating several solutions of a problem simultaneously. Hence, it can be categorized as a parallel niching approach. In the original paper, a number of subswarms following the GCPSO variant, described in a previous section, were considered (Brits et al., 2002).

NPSO utilizes a swarm, S, of N particles. The swarm is initialized uniformly within the search space, A ⊂ Rn. Initialization plays a crucial role, since a uniform distribution of the particles is required at the beginning by the algorithm. For this purpose, Brits et al. (2002) used Faure sequences of random numbers in their implementation (Thiémard, 1998). Then, a number of iterations is performed using only the cognitive term in the velocity update:

vij(t+1) = w vij(t) + c1 r1 (pij(t) - xij(t)).     (23)

This allows each particle to explore the search space locally, without being influenced by the rest. The model of equation (23) is also called the cognitive-only model.

Niches shall now be identified in the swarm. Brits et al. (2002) tracked the variance, σi, of the function value of each particle, xi, for a number of iterations, tσ. When σi falls under a threshold, δ, a new subswarm is created, consisting of xi and its closest neighbor. Claiming that it avoids the use of tunable parameters, Brits et al. (2002) proposed this approach as an alternative to the approach of Parsopoulos and Vrahatis (2001). However, in contrast to these claims, the proposed approach still uses a tunable parameter, δ, that requires user intervention.
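The cognitive-only update of equation (23) and the variance test that spawns subswarms can be sketched as follows (our code; δ and tσ as in the text, while the normalization of σi over the search space bounds is omitted for brevity):

import numpy as np
from collections import deque

def cognitive_only_step(x, v, p, w=0.72, c1=1.49):
    """Eq. (23): each particle searches around its own best position only."""
    r1 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (p - x)
    return x + v_new, v_new

def detect_niches(f_history, x, delta=1e-4):
    """Pair particle i with its closest neighbor once the variance of its
    last t_sigma function values falls below delta."""
    niches = []
    for i, hist in enumerate(f_history):      # hist: deque of the last t_sigma values
        if len(hist) == hist.maxlen and np.var(list(hist)) < delta:
            d = np.linalg.norm(x - x[i], axis=1)
            d[i] = np.inf                     # exclude the particle itself
            niches.append((i, int(np.argmin(d))))
    return niches

# With t_sigma = 3 as in Brits et al. (2002):
# f_history = [deque(maxlen=3) for _ in range(N)]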
Niches shall now be identified in the swarm. Brits et al. (2002) tracked the variance, σi, of the function value of each particle, xi, for a number of iterations, tσ. When σi falls under a threshold, δ, a new subswarm is created, consisting of xi and its closest neighbor. Claiming that it avoids the use of tunable parameters, Brits et al. (2002) proposed this approach as an alternative to the approach of Parsopoulos and Vrahatis (2001). However, in contrast to these claims, the proposed approach still uses a tunable parameter, δ, that requires user intervention.

Nevertheless, both approaches are useful, although in different cases. For example, in square-error objective functions (e.g., neural network training, model identification problems, etc.), where the global minimum is by definition equal to zero, the approach of Parsopoulos and Vrahatis (2001) is more suitable. On the other hand, if no information is known on the form of the objective function, the approach of Brits et al. (2002) can be very useful. Perhaps a combination of the two approaches might be superior to each one separately, a fact identified also by Brits et al. (2002).

In order to avoid problem dependencies, the parameter σi was normalized based on the upper and lower bounds of the particles. The closest neighbor of xi is defined as follows:

xicl = argmin{||xj - xi||}, j ≠ i.

Let S[j] denote the j-th subswarm, and xi[j] denote its i-th particle. Then, its radius is defined as follows:

R[j] = max{||xgj[j] - xi[j]||}, i ≠ gj,

where gj is the index of the global best particle of the j-th subswarm. Notice that the radius of a subswarm can become arbitrarily small, if its particles have all converged on the same solution.

If a particle xi moves within the range of the j-th subswarm:

||xi - xgj[j]|| ≤ R[j],

then it is absorbed by S[j]. Moreover, two subswarms that intersect:

||xgj1[j1] - xgj2[j2]|| < R[j1] + R[j2],   (24)

can be merged, since they are expected to probe the same regions of the search space. In the special case where both subswarms have radius equal to zero, a sufficiently small value, μ > 0, can be used as a threshold in the right part of equation (24).

Each subswarm of the NPSO approach uses the GCPSO variant, which was proved to be locally convergent. In the approach of Brits et al. (2002), subswarms were let to consist of 2 particles only, while the variance of each particle was tracked for tσ = 3 iterations. The pseudocode of NPSO is given in Table 13.

Table 13. Pseudocode of the NPSO algorithm
Input: Main swarm and GCPSO parameters.
Step 1. Initialize the main swarm uniformly within the search space.
Step 2. Apply one iteration of the model defined by equation (23).
Step 3. Evaluate particles of the main swarm.
Step 4. Do (number of subswarms)
Step 5. Apply one iteration of GCPSO on the subswarm.
Step 6. Evaluate the particles of the subswarm.
Step 7. Update the radius of the subswarm.
Step 8. End Do
Step 9. Merge intersecting subswarms.
Step 10. Let subswarms absorb particles from the main swarm, if they have entered their range.
Step 11. Check condition for producing a new subswarm for each particle of the main swarm.
Step 12. If (stopping criteria not met) Then
Step 13. Go To Step 2.
Step 14. Else
Step 15. Terminate algorithm.
Step 16. End If
Step 17. Report solutions.

Brits et al. (2002) applied the NPSO approach on four one-dimensional and one 2-dimensional test problems, with very promising results. However, they identified a number of issues that need to be resolved, such as:

a. Sensitivity of the algorithm to the parameters σi and μ.
b. Correlations between the number of niches, the number of solutions, and swarm size.
c. Impact of using algorithms other than GCPSO in subswarms.

These issues, along with the need for further experimentation in high-dimensional problems, opened the way for the development of more efficient NPSO approaches (Bird & Li, 2006). The sequential niching approaches are also related to techniques presented in the next chapter; thus, we shall re-examine these issues later.
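The niche bookkeeping described above, namely the radius computation, the absorption test, and the merging test of equation (24), can be sketched as follows; the helper names and the tolerance value μ are illustrative.

import numpy as np

def subswarm_radius(sub, g):
    # Radius of a subswarm: largest distance of a member from its best
    # particle sub[g] (zero if all members have converged to one point).
    return max(np.linalg.norm(sub[i] - sub[g])
               for i in range(len(sub)) if i != g)

def is_absorbed(x, sub_best, radius):
    # A main-swarm particle entering the range of a subswarm is absorbed.
    return np.linalg.norm(x - sub_best) <= radius

def should_merge(best1, r1, best2, r2, mu=1e-6):
    # Intersecting subswarms are merged, per equation (24); mu replaces
    # the right-hand side in the degenerate case of two zero radii.
    return np.linalg.norm(best1 - best2) < max(r1 + r2, mu)

sub = [np.array([0.0, 0.0]), np.array([0.1, 0.0])]    # 2-particle subswarm
r = subswarm_radius(sub, g=0)                         # 0.1
print(is_absorbed(np.array([0.05, 0.0]), sub[0], r))              # True
print(should_merge(sub[0], r, np.array([0.05, 0.0]), 0.0))        # True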
The next section presents a PSO variant that attempts to minimize user intervention for parameter settings, by using a parameter-free scheme.

TRIBES

The TRIBES algorithm was proposed by Clerc (2006, Chapter 11) as a black-box PSO variant with capabilities of self-adaptation. Its foundation stone is the concept of the tribe, which describes a special neighborhood connection scheme for exchanging information among particles, while the same structure and update rules as in PSO are used.

If a particle y shares its own memory, i.e., its best position, with another particle, x, then y is called an informant of x (Clerc, 2006), and we will denote this connection as:

inf(y, x) = 1.

On the other hand, if y is not an informant of x, we will denote it as:

inf(y, x) = 0.

In the PSO framework, it holds that inf(y, x) = inf(x, y), i.e., if y is an informant of x, then x is also an informant of y. This convention is adopted also in TRIBES. As we have already seen in neighborhood topologies of PSO, this information exchange channel is depicted as an arc between y and x. In most neighborhood topologies, only a number of particles are connected with a given particle x, i.e., the number of its informants is smaller than the swarm size, N. Now, we can give the following definition:

Definition 1. A tribe of a swarm, S, is a subset, T ⊆ S, of particles where any one of them is an informant of the rest in the same subset, i.e.:

inf(xi, xj) = 1, for all i and j such that xi, xj ∈ T.

In a graph-theoretical sense, a tribe is a symmetrical clique in the graph representing the neighborhood topology of the swarm (Clerc, 2006). Obviously, if Tj, j = 1, 2, ..., J, are tribes of a swarm, S, then it holds that:

S = ∪j=1..J Tj,

where J ≤ N.

Each tribe resembles a team that moves in the search space with complete cooperation among its members, while communicating with the rest of the tribes. This concept brings to mind the cooperative and niching techniques described in previous sections. Indeed, TRIBES can be considered a cooperative/niching approach, in the sense that several teams cooperate to probe the search space simultaneously.

Communication among tribes is crucial for the operation of the algorithm. Clerc (2006) requires that any two particles, y and x, of the swarm must have a connecting path in the neighborhood graph, i.e., there shall exist a set of particles, {z1, z2, ..., zC}, with C < N-1, such that:

inf(y, z1) = inf(z1, z2) = ... = inf(zC, x) = 1.

Thus, each tribe must have at least one communication channel with another tribe. However, since the algorithm is self-adaptive, it can generate or delete some tribes; therefore, the connection channels are expected to change dynamically, without losing the aforementioned interconnection property among the remaining tribes.
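Viewing the informant relation as a boolean adjacency matrix, Definition 1 can be checked directly, as in the following sketch; the matrix values and the helper name are illustrative.

import numpy as np

def is_tribe(inf_matrix, members):
    # Definition 1: a tribe is a set of mutually informing particles,
    # i.e. a symmetrical clique of the neighborhood graph.
    # inf_matrix[i, j] == True means x_j shares its best position with x_i.
    return all(inf_matrix[i, j] and inf_matrix[j, i]
               for i in members for j in members if i != j)

inf_matrix = np.array([[1, 1, 0],
                       [1, 1, 1],
                       [0, 1, 1]], dtype=bool)
print(is_tribe(inf_matrix, [0, 1]))   # True: 0 and 1 are mutual informants
print(is_tribe(inf_matrix, [0, 2]))   # False: no channel between 0 and 2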
Regarding the particles of a tribe, quality measures are defined to assess their performance through time. If x(t) is a particle at iteration t, and p(t) is its best position, then x(t) is:

neutral, if p(t) = p(t-1),
good, if p(t) is better than p(t-1),
excellent, if p(t) is better than p(t-1), and p(t-1) is better than p(t-2),

i.e., TRIBES considers a history of the best positions, rather than only the last one, which is the case in standard PSO (Clerc, 2006). Within the context of a tribe, one can also define the worst and the best particle, as the ones with the worst and best function value, respectively.

Based on the quality of its particles, a tribe, T, of size NT, is characterized as good or bad as follows:

T is good, if NT[good] > r; bad, otherwise,

where NT[good] ≤ NT is the number of good particles of T, and r ∈ {0, 1, ..., NT} is a randomly selected integer. This probabilistic rule provides the space to build new tribes, based on the following set of accompanying evolution rules (Clerc, 2006):

1. Elimination of a particle: A particle can be eliminated only from the best tribe, and it can only be its worst one. This restriction minimizes the risk of deleting a particle that carries crucial information regarding the actual solution of the problem. If a tribe consists of only one particle, then it will be deleted only if it has at least one informant with better performance. Deleting a particle will probably require a re-organization of the communication channels between particles and tribes. Thus, if the tribe Tj consists of more than one particle and its worst particle, xworst[Tj], is connected to the k-th particle of another tribe, Ti, i.e.:

inf(xworst[Tj], xk[Ti]) = 1,

then deleting the worst particle will result in the following connection change:

inf(xbest[Tj], xk[Ti]) = 1,

i.e., the connections of the deleted particle are inherited by the best particle of its tribe. On the other hand, if Tj consists of one particle solely, connected with the ki-th particle of tribe Ti as well as with the kl-th particle of tribe Tl, i.e.:

inf(x[Tj], xki[Ti]) = 1 and inf(x[Tj], xkl[Tl]) = 1,

then deleting this particle produces the following update:

inf(xkl[Tl], xki[Ti]) = 1,

i.e., the tribes Ti and Tl are connected directly, while Tj vanishes. Notice that x[Tj] will be deleted only if at least one of its informants, xkl[Tl] or xki[Ti], has a better function value.

2. Generation of new particles: Each tribe that was characterized as "bad" generates two new particles; a free and a confined one. The free particle is generated randomly, using a uniform distribution, either within the whole search space or on one of its sides or vertices. On the other hand, the confined particle is generated randomly in a more restricted region. More specifically, let Tj be the tribe under consideration and gj be the index of its best particle, with current position xgj and best position pgj. Let also z be the best informant of xgj, and zg be its best position. Then, the confined particle will be generated randomly and uniformly in a sphere of center zg and radius ||zg - pgj||. Therefore, for each "bad" tribe, the free particle is initialized such that it has a higher probability of discovering new promising regions, while the confined one is initialized in already identified promising regions. All particles generated from "bad" tribes form a new tribe, which retains connections with its originating "bad" tribes (a small sketch of the "good/bad" rule and of the confined-particle sampling follows this list).
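The probabilistic "good/bad" rule and the sampling of a confined particle can be sketched as follows. Sampling uniformly inside a sphere by scaling the radius with u**(1/n) is a standard device of ours and not part of Clerc's description; all names are illustrative.

import numpy as np

def tribe_is_good(n_good, n_total, rng=None):
    # A tribe with n_good good particles out of n_total members is "good"
    # if n_good exceeds a random integer r drawn from {0, 1, ..., n_total}.
    rng = rng or np.random.default_rng()
    return n_good > rng.integers(0, n_total + 1)   # upper bound exclusive

def confined_particle(z_g, p_g, rng=None):
    # Sample uniformly inside the sphere centered at z_g (best position of
    # the best informant) with radius ||z_g - p_g||; the factor u**(1/n)
    # makes the draw uniform over the ball's volume.
    rng = rng or np.random.default_rng()
    z_g, p_g = np.asarray(z_g, float), np.asarray(p_g, float)
    n = len(z_g)
    direction = rng.normal(size=n)
    direction /= np.linalg.norm(direction)
    radius = np.linalg.norm(z_g - p_g)
    return z_g + radius * rng.random() ** (1.0 / n) * direction

print(tribe_is_good(n_good=2, n_total=4))
print(confined_particle([0.0, 0.0], [1.0, 0.0]))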
The gain from the aforementioned evolution rules can be revealed only after a few iterations of the algorithm are performed. Therefore, it is not wise to perform adaptations very often. Clerc (2003) proposes the following rule: after each adaptation, the diameter of the relation graph among the particles, i.e., the number of arcs constituting the shortest path between any pair of particles of different tribes, is calculated. The maximum among the lengths of these shortest paths provides an estimation of the number of iterations required for transmitting information carried by any particle to any other particle. If this number is equal to L after an adaptation, the next adaptation shall take place after L/2 iterations.

In practice, the application of TRIBES requires a randomly generated initial particle, which constitutes the initial tribe. This particle is updated using the PSO rules (any PSO variant can be applied) and, if it is not improved in the first iteration, a second particle is generated, producing a second tribe. If in the next iteration both tribes are bad, then two new particles are generated from each one. The four new particles constitute a new tribe, and the procedure continues in the same manner. Thus, as long as the algorithm does not exhibit an improvement, TRIBES will increase the number of particles, trying to equip the search mechanism with more search units. As soon as good positions start being detected, TRIBES will decrease the number of particles, by deleting the worst among them, thereby retaining a trade-off between computational cost and efficient search (Clerc, 2003).

Clerc tested TRIBES under different PSO update schemes per particle, even per coordinate direction (Clerc, 2003, p. 145). He also proposed a generalized comparison scheme between particles, rendering the algorithm applicable even in spaces without a metric (Clerc, 2003, p. 146). Moreover, he reported results from the application of TRIBES on several benchmark problems and provided its source code freely on his web page.

The popularity of TRIBES is still limited in the literature, perhaps due to its more complex structure compared to PSO, while the reported results are still ambiguous, admitting significant criticism. Nevertheless, TRIBES is a very interesting and promising approach for future-generation variants of PSO. The user needs to specify only the objective function, its search space, the desired accuracy, and a maximum number of function evaluations. Although a black-box algorithm that can adapt to any problem by responding to stimulations during the optimization procedure, taking all decisions without user intervention, may seem a very ambitious or even misbegotten task, it still remains the holy grail of intelligent optimization and artificial intelligence. Thus, any effort towards this direction shall be embraced and improved, to enrich our experience and provide hints that can lead to the design of new approaches closer to this goal. In our opinion, TRIBES constitutes a first step towards this direction.

QUANTUM PARTICLE SWARM OPTIMIZATION

Quantum PSO (QPSO) was introduced by Sun et al. (2004a). Although it is considered a PSO variant, its operation is placed in a rather different framework than standard PSO. Thus, while PSO follows a Newtonian approach for the movement of the particles, QPSO assumes a quantum behavior of the particles, based on laws of quantum mechanics.

In quantum mechanics, the time-dependent Schrödinger equation:

jħ (∂/∂t) Ψ(r, t) = Ĥ(r) Ψ(r, t),   (25)

has a dominant position, where:

Ĥ(r) = -(ħ²/2m) ∇² + V(r),   (26)

is the Hamiltonian operator; ħ is the reduced Planck constant; m is the mass of the particle; and V(r) is a potential energy distribution function.
The squared amplitude, Q = |Ψ|², of the wave function Ψ(r, t) in equation (25) serves as a probability measure for the movement of the particle, under the normalization:

∫∫∫ |Ψ(r, t)|² dx dy dz = 1.

In QPSO, the swarm is considered as a quantum system, where each particle has a quantum state based on the employed wave function, while it moves in a delta potential well (DPW) towards a position p. Influenced by the analysis of Clerc and Kennedy (2002), presented in the previous chapter, Sun et al. (2004a) assumed that the position, p, of a particle, xi, is defined as the weighted mean of its best position, pi, and the overall best position of the swarm, pg, as follows:

pj = (φ1 pij + φ2 pgj) / (φ1 + φ2),   j = 1, 2, ..., n,   (27)

where φ1 = c1 r1 and φ2 = c2 r2, with c1, c2 being the cognitive and social parameter of PSO, respectively, and r1, r2 being uniformly distributed random numbers in [0,1].

To illustrate the workings of QPSO, let us consider the simplest, one-dimensional case. Assuming that the center, p, of the potential is defined by equation (27), the DPW is defined as (Sun et al., 2004a):

V(x) = -γ δ(x - p) = -γ δ(y),   (28)

where y = p - x. Through proper mathematical manipulations (Sun et al., 2004a), we obtain the following wave function:

Ψ(y) = (1/√L) exp(-|y| / L),   (29)

and, hence, a probability measure:

Q(y) = |Ψ(y)|² = (1/L) exp(-2|y| / L),   (30)

where L = ħ²/(mγ).

So far, we have obtained a probability density of the particle positions. However, this is not adequate to serve as an algorithm, since the evaluation of a particle requires an exact position. Therefore, the position of the particle shall be gauged; this procedure is called collapse of the quantum state to the classical state.

Collapsing is possible through Monte Carlo simulation. More specifically, let s be a random number uniformly distributed in (0, 1/L). Then, s can be written as:

s = (1/L) u,

where u is a random number uniformly distributed in (0,1). Substituting Q(y) in the left part of equation (30) with s, and solving for x, results in the following QPSO model (Sun et al., 2004a):

x = p ± (L/2) ln(1/u).   (31)

Equation (31) provides two possible new positions of the particle, which are measurable with the objective function. Sun et al. (2004a) provided a convergence proof of this QPSO model on the position p. The parameter L is the only control parameter that appears in the update equations of QPSO. The lack of other user-defined parameters renders QPSO an interesting approach.

Alternatively to the DPW, different quantum field models can be used. Sun et al. (2004a) proposed different wave functions that result in different update schemes for the particles. Therefore, if S is a swarm of size N, xi is the i-th particle, pi is its best position, pg is the overall best position, and t is the iteration counter, we can describe the general form of QPSO as follows:

xi(t) = p(t) + F(L, ±u),   (32)

where,

p(t) = (φ1 pi(t) + φ2 pg(t)) / (φ1 + φ2),

with i = 1, 2, ..., N; u is a uniformly distributed number in (0,1); and F is a functional form, obtained through the inversion of the probability density function, thereby depending on the employed quantum field model. Table 14 shows the pseudocode of QPSO, and Table 15 provides the three most popular potential well models (Mikki & Kishk, 2006; Sun et al., 2004a; Sun et al., 2004b).
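A single iteration of the delta-well model can be sketched as follows. Tying L to the distance |x - p| through a contraction factor beta is one common parameterization in the QPSO literature, used here as an illustrative way of fixing the sole control parameter; it is not the only option discussed by Sun et al. (2004a).

import numpy as np

def qpso_step(x, p_best, g_best, c1=2.0, c2=2.0, beta=0.75, rng=None):
    # One QPSO iteration with the delta potential well: compute the
    # attractor p per equation (27), then collapse each coordinate to
    # p +/- (L/2) ln(1/u) per equation (31).
    rng = rng or np.random.default_rng()
    phi1 = c1 * rng.random(x.shape)
    phi2 = c2 * rng.random(x.shape)
    p = (phi1 * p_best + phi2 * g_best) / (phi1 + phi2)   # equation (27)
    L = 2.0 * beta * np.abs(x - p)       # illustrative choice of L
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, -1.0, 1.0)
    return p + sign * (L / 2.0) * np.log(1.0 / u)         # equation (31)

# 4 particles in 2 dimensions; g_best broadcasts over the swarm.
rng = np.random.default_rng(1)
x = rng.uniform(-5.0, 5.0, (4, 2))
x = qpso_step(x, p_best=x.copy(), g_best=x[0].copy(), rng=rng)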
Sun et al. (2004a) have recognized the sensitivity of QPSO to the parameter L (or, equivalently, to the parameter q in Table 15). However, their experiments on widely used test problems revealed that QPSO can become an efficient approach under proper fine-tuning, as most of the PSO variants. Its different philosophy compared to the rest of the PSO variants, as well as its susceptibility to improvements (Coelho, 2008; Xi et al., 2008) and its interesting applications (Coelho & Mariani, 2008; Liu et al., 2009), rendered QPSO a noteworthy approach.

Table 14. Pseudocode of the QPSO algorithm
Input: Swarm S, swarm size N.
Step 1. Initialize swarm and best positions.
Step 2. Find the index g of the best particle.
Step 3. While (stopping condition not met)
Step 4. Do (i = 1...N)
Step 5. Compute position p(t) using pi(t) and pg(t).
Step 6. Draw a random number u ~ U(0,1).
Step 7. Set L = L(q, u, |xi(t) - p(t)|).
Step 8. Draw a random number R ~ U(0,1).
Step 9. If (R > 0.5) Then
Step 10. xi(t+1) = xi(t) + F(L, u)
Step 11. Else
Step 12. xi(t+1) = xi(t) + F(L, -u)
Step 13. End If
Step 14. End Do
Step 15. End While
Step 16. Report solutions.

Table 15. The QPSO update equations for different potential energy models
Delta potential well: xi(t+1) = p(t) ± (ln(1/u) / (2q ln 2)) |xi(t) - p(t)|
Harmonic oscillator: xi(t+1) = p(t) ± (√(ln(1/u)) / (0.47694 q)) |xi(t) - p(t)|
Square well: xi(t+1) = p(t) ± (1/(0.6574 q)) cos⁻¹(√u) |xi(t) - p(t)|

CHAPTER SYNOPSIS

We presented some of the most established and recently proposed variants of PSO, holding a dominant position in the PSO literature. The presentation detail was kept at a low level, to make the underlying ideas attainable by readers with different scientific backgrounds. Pseudocode accompanied the algorithms, providing a sketch for their implementation in any programming language.

Our aim was the exposition of the main ideas and features that constitute the core of research in the development of modern PSO variants, without focusing on specific problem types or instances. The presented variants served as prototypes for accomplishing our goal. However, they shall not be considered as the unique PSO variants for solving optimization problems efficiently. In fact, over-specialized PSO approaches may effectively exploit the inherent properties of a specific problem type, producing even better results than the exposed methods. Such approaches were out of the scope of this chapter, although the interested reader can easily access the relative literature, starting from the sources reported herein.

REFERENCES

Angeline, P. J. (1995). Adaptive and self-adaptive evolutionary computations. In M. Palaniswami, Y. Attikiouzel, R. Marks, D. Fogel & T. Fukuda (Eds.), Computational intelligence: A dynamic systems perspective (pp. 152-163). Washington, DC: IEEE Press.

Belew, R. K. (1990). Evolution, learning and culture: computational metaphors for adaptive algorithms. Complex Systems, 4, 11–49.

Belew, R. K., McInerny, J., & Schraudolph, N. N. (1991). Evolving networks: using the genetic algorithm with connectionist learning. In C. Langton, C. Taylor, J. Farmer & S. Rasmussen (Eds.), Artificial Life II (pp. 511-547). New York: Addison-Wesley.

Bird, S., & Li, X. (2006). Adaptively choosing niching parameters in a PSO. In Proceedings of the 2006 Genetic and Evolutionary Computation Conference (GECCO'06), Seattle (WA), USA (pp. 3-9).

Brits, R., Engelbrecht, A. P., & Van den Bergh, F. (2002). A niching particle swarm optimizer. In Proceedings of the 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAL 2002), Singapore (pp. 692-696).

Burke, E. K., Gustafson, S., & Kendall, G. (2004). Diversity in genetic programming: an analysis of measures and correlation with fitness. IEEE Transactions on Evolutionary Computation, 8(1), 1098–1107. doi:10.1109/TEVC.2003.819263
H., Hogg, T., & Huberman, B. A. (1992). Cooperative problem solving. In Computation:The micro and macro view (pp. 33-70). Singapore: World Scientific.Clerc, M. (2006). Particle swarm optimization. London: ISTE Ltd.Clerc, M., & Kennedy, J. (2002). The particle swarm - explosion, stability, and convergence in amultidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58–73.doi:10.1109/4235.985692Table 15. The QPSO update equations for different potential energy modelsPotential well QPSO update equationDelta potential well x t p tuqx t p ti i( ) ( )ln( / )ln( ) ( )+ = ± -112 2Harmonic oscillator x t p tuqx t p ti i( ) ( )ln( / ).( ) ( )+ = ± -110 47694Square well10.6574( 1) ( ) cos ( ) ( ) ( )i ix t p t u x t p tq • 147. 129Established and Recently Proposed Variants of Particle Swarm OptimizationCobb, H. G. (1992). Is the genetic algorithm a cooperative learner? In Foundations of genetic algorithms2 (pp. 277-296). San Mateo, CA: Kaufmann.Coelho, L. S. (2008). A quantum particle swarm optimizer with chaotic mutation operator. Chaos, Soli-tons, and Fractals, 37(5), 1409–1418. doi:10.1016/j.chaos.2006.10.028Coelho, L. S., & Mariani, V. C. (2008). Particle swarm approach based on quantum mechanics and har-monic oscillator potential well for economic load dispatch with valve-point effects. Energy Conversionand Management, 49(11), 751–759.Cui,X., Li,M., &Fang,T.(2001).Studyofpopulationdiversityofmultiobjectiveevolutionaryalgorithmbased on immune and entropy principles. In Proceedings of the 2001 IEEE Congress on EvolutionaryComputation (CEC’01), Seoul, Korea (pp. 1316–1321).Dawkins, R. (1976). The selfish gene. New York: Oxford University Press.Goldberg, D. E. (1989). Genetic algorithms in search, optimization and machine learning. Reading,MA: Addison Wesley.Hart, W. E. (1994). Adaptive global optimization with local search. PhD thesis, University of California,San Diego, USA.Hinton, G. E., & Nowlan, S. J. (1987). How learning can guide evolution. Complex Systems, 1, 495–502.Hooke, R., & Jeeves, T. A. (1961). Direct search solution of numerical and statistical problems. Journalof the ACM, 8, 212–229. doi:10.1145/321062.321069Hoos, H. H., & Stützle, T. (2004). Stochastic local search: foundations and applications. San Mateo,CA: Kaufmann.Horn, J. (1997). The nature of niching: genetic algorithms and the evolution of optimal, cooperativepopulations. PhD thesis, University of Illinois, Illinois Genetic Algorithm Lab, Urbana, USA.Keesing, R., & Stork, D. G. (1990). Evolution and learning in neural networks: the number and distri-bution of learning trials affect the rate of evolution. In R. Lippmann, J. Moody, & D. Touretzky (Eds.),Proceedings of the 1990 Conference on Advances in Neural Information Processing Systems (NIPS 3),Denver, Colorado (pp. 804–810).Krasnogor, N. (2002). Studies on the theory and design space of memetic algorithms. PhD thesis, Uni-versity of the West of England, Bristol, UK.Land, M. W. S. (1998). Evolutionary algorithms with local search for combinatorial optimization. PhDthesis, University of California, San Diego, USA.Liu, L., Sun, J., Zhang, D., Du, G., Chen, J., & Xu, W. (2009). Culture conditions optimization of hy-aluronic acid production by Streptococcus zooepidemicus based on radial basis function neural networksand quantum-behaved particle swarm optimization algorithm. Enzyme and Microbial Technology, 44(1),24–32. doi:10.1016/j.enzmictec.2008.09.015 • 148. 130Established and Recently Proposed Variants of Particle Swarm OptimizationMatyas, J. 
Matyas, J. (1965). Random optimization. Automatization and Remote Control, 26, 244–251.

Merz, P. (1998). Memetic algorithms for combinatorial optimization (fitness landscapes and effective search strategies). PhD thesis, Department of Electrical Engineering and Computer Science, University of Siegen, Germany.

Mikki, S. M., & Kishk, A. A. (2006). Quantum particle swarm optimization for electromagnetics. IEEE Transactions on Antennas and Propagation, 54(10), 2764–2775. doi:10.1109/TAP.2006.882165

Moscato, P. (1989). On evolution, search, optimization, genetic algorithms and martial arts. Towards memetic algorithms (Tech. Rep. C3P Report 826). Caltech Concurrent Computation Program, California, USA.

Muhlenbein, M., Gorges-Schleiter, M., & Kramer, O. (1988). Evolution algorithms in combinatorial optimization. Parallel Computing, 7, 65–85. doi:10.1016/0167-8191(88)90098-1

Parsopoulos, K. E., Tasoulis, D. K., & Vrahatis, M. N. (2004). Multiobjective optimization using parallel vector evaluated particle swarm optimization. In Proceedings of the 2004 IASTED International Conference on Artificial Intelligence and Applications (AIA 2004), Innsbruck, Austria (Vol. 2, pp. 823-828).

Parsopoulos, K. E., & Vrahatis, M. N. (2001). Modification of the particle swarm optimizer for locating all the global minima. In V. Kurkova, N. C. Steele, R. Neruda, & M. Karny (Eds.), Artificial Neural Networks and Genetic Algorithms (pp. 324-327). Wien: Springer.

Parsopoulos, K. E., & Vrahatis, M. N. (2002a). Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3), 235–306. doi:10.1023/A:1016568309421

Parsopoulos, K. E., & Vrahatis, M. N. (2002b). Particle swarm optimization method in multiobjective problems. In Proceedings of the 2002 ACM Symposium on Applied Computing (SAC 2002), Madrid, Spain (pp. 603-607).

Parsopoulos, K. E., & Vrahatis, M. N. (2004). UPSO: a unified particle swarm optimization scheme. In T. Simos & G. Maroulis (Eds.), Lecture Series on Computer and Computational Sciences (Vol. 1, pp. 868-873). Zeist, The Netherlands: VSP International Science Publishers.

Parsopoulos, K. E., & Vrahatis, M. N. (2007). Parameter selection and adaptation in unified particle swarm optimization. Mathematical and Computer Modelling, 46(1-2), 198–213. doi:10.1016/j.mcm.2006.12.019

Petalas, Y. G., Parsopoulos, K. E., Papageorgiou, E. I., Groumpos, P. P., & Vrahatis, M. N. (2007a). Enhanced learning in fuzzy simulation models using memetic particle swarm optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS'07), Honolulu (HI), USA (pp. 16-22).

Petalas, Y. G., Parsopoulos, K. E., & Vrahatis, M. N. (2007b). Memetic particle swarm optimization. Annals of Operations Research, 156(1), 99–127. doi:10.1007/s10479-007-0224-y

Petalas, Y. G., Parsopoulos, K. E., & Vrahatis, M. N. (2007c). Entropy-based memetic particle swarm optimization for computing periodic orbits of nonlinear mappings. In Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC'07), Singapore (pp. 2040-2047).

Potter, M. A., & De Jong, K. A. (1994). A cooperative coevolutionary approach to function optimization. In The Third Parallel Problem Solving From Nature (pp. 249-257). Berlin: Springer.

Potter, M. A., & De Jong, K. A. (2000). Cooperative coevolution: an architecture for evolving coadapted subcomponents. Evolutionary Computation, 8(1), 1–29. doi:10.1162/106365600568086

Rao, S. S. (1992). Optimization: theory and applications. New York: Wiley Eastern.
Rosca, J. P. (1995). Entropy-driven adaptive representation. In Proceedings of the Workshop on Genetic Programming: From Theory to Real-World Applications, Tahoe City (CA), USA (pp. 23–32).

Shannon, C. E. (1964). The mathematical theory of communication. Champaign: University of Illinois Press.

Solis, F., & Wets, R. (1981). Minimization by random search techniques. Mathematics of Operations Research, 6, 19–30. doi:10.1287/moor.6.1.19

Storn, R., & Price, K. (1997). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11, 341–359. doi:10.1023/A:1008202821328

Sun, J., Feng, B., & Xu, W. (2004a). Particle swarm optimization with particles having quantum behavior. In Proceedings of the 2004 IEEE Congress on Evolutionary Computation (CEC'04), Portland (OR), USA (pp. 325-331).

Sun, J., Xu, W., & Feng, B. (2004b). A global search strategy for quantum-behaved particle swarm optimization. In Proceedings of the 2004 IEEE Conference on Cybernetics and Intelligent Systems, Singapore (pp. 111-116).

Tang, J., Lim, M. H., & Ong, Y. S. (2006). Adaptation for parallel memetic algorithm based on population entropy. In Proceedings of the 2006 Genetic and Evolutionary Computation Conference (GECCO'06), Seattle (WA), USA (pp. 575–582).

Thiémard, E. (1998). Economic generation of low-discrepancy sequences with a b-ary gray code (Tech. Rep. EPFL-DMA-ROSO, RO981201). Department of Mathematics, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland.

Trelea, I. C. (2003). The particle swarm optimization algorithm: convergence analysis and parameter selection. Information Processing Letters, 85(6), 317–325. doi:10.1016/S0020-0190(02)00447-7

Van den Bergh, F. (2002). An analysis of particle swarm optimizers. PhD thesis, Department of Computer Science, University of Pretoria, South Africa.

Van den Bergh, F., & Engelbrecht, A. P. (2002). A new locally convergent particle swarm optimizer. In Proceedings of the 2002 IEEE International Conference on Systems, Man and Cybernetics (SMC'02), Hammamet, Tunisia (Vol. 3, pp. 96-101).

Van den Bergh, F., & Engelbrecht, A. P. (2004). A cooperative approach to particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 225–239. doi:10.1109/TEVC.2004.826069

Xi, M., Sun, J., & Xu, W. (2008). An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Applied Mathematics and Computation, 205(2), 751–759. doi:10.1016/j.amc.2008.05.135

Chapter 5
Performance-Enhancing Techniques
DOI: 10.4018/978-1-61520-666-7.ch005

This chapter presents techniques that have proved to be very useful in enhancing the performance of PSO in various optimization problem types. They consist of transformations of either the objective function or the problem variables, enabling PSO to alleviate local minimizers, detect multiple minimizers, handle constraints, and solve integer optimization problems. The chapter begins with a short discussion of the filled functions approach, and then presents the stretching technique as an alternative for alleviating local minimizers.
Next, we present the deflection and repulsion techniques as a means for detecting multiple global minimizers with PSO, followed by a penalty function approach for constraint handling. The chapter closes with the description of two rounding schemes that enable the continuous, real-valued PSO to solve integer programming problems. All techniques are thoroughly described and graphically illustrated whenever possible.

INTRODUCTION

The alleviation of local minimizers and the detection of several (global or local) ones has been a topic of ongoing research for many years in global optimization (Torn & Žilinskas, 1989). For this purpose, various techniques have been developed and incorporated in the context of global optimization algorithms.

The simplest approach to tackle the aforementioned problem is the multistart technique, where the algorithm is restarted from different initial conditions every time it has converged to a minimizer. However, this technique does not include a mechanism for preventing the algorithm from converging anew to the same or a worse minimizer in subsequent restarts. This deficiency limits the applicability of multistart, unless it is combined with different techniques that guide the algorithm towards more promising regions of the search space than those already detected (Torn & Žilinskas, 1989).

Alternatively, one can modify the objective function after the detection of a minimizer, in such a way that its new form excludes already detected minimizers. In this context, Goldstein and Price (1971) proposed an efficient algorithm for the minimization of algebraic functions, which requires high-order derivatives of the involved polynomials. Later, they generalized their technique to non-polynomial problems, using a transformation that makes use of the Hessian of the objective function. However, the required derivatives and Hessian are not always available, nor can they always be estimated accurately, thereby restricting the applicability of this method to problems with nice mathematical properties. Shusterman (1979) proposed a similar approach, but the produced objective function becomes very flat after the detection of a small number of minimizers, setting obstacles for most optimization algorithms.

Vilkov et al. (1975) proposed the popular tunneling method for one-dimensional functions. This method can direct the search point away from a detected minimizer, towards the region of attraction of a better one. The algorithm was generalized to multi-dimensional problems by Montalvo (1979) and Gómez and Levy (1982). However, the constructed hypersurface becomes increasingly flat with the number of detected minimizers, imposing the same difficulties as the previous methods on the algorithms.

Filled functions constitute another popular approach, developed by Ge (1987, 1990). They consist of a transformation of the objective function after the detection of a minimizer. More specifically, let x* be a detected (local) minimizer of the objective function, f(x). Then, f(x) is transformed as follows:

T(x; r, p) = f(x) + (a / (r + f(x))) exp(-||x - x*||² / p²),   (1)

where a, r, and p are user-defined parameters that control the magnitude of the transformation. The term:

a / (r + f(x)),

inverts the objective function, transforming the local minimizer, x*, to a local maximizer.
The exponential term:

exp(-||x - x*||² / p²),

imposes a penalty on each point x that grows exponentially as x approaches x*, confining the effect of the transformation to a neighborhood of the detected minimizer. The parameter a determines the magnitude of the produced local maximum at x*, with larger values producing higher local maxima. The parameter r is used to avoid division by zero in equation (1), and it shall be selected taking into consideration both the magnitude of the objective function and the value of a. Finally, p is perhaps the most crucial parameter, since it determines the scope of the transformation. Small values of p result in more local effects than (even slightly) higher values.

The parameters a, r, and p have a crucial impact on the shape of the filled function. An erroneous choice can be detrimental for the optimization procedure, while even small differences can produce significantly different landscapes. We illustrate this parameter sensitivity with the example of Fig. 1, where the filled function of equation (1) (illustrated with a dashed line) is applied on the one-dimensional instance of TPUO-3 (Rastrigin function), defined in Appendix A of the book at hand and illustrated with a solid line. The transformation is applied for the local minimizer x* = -1, fixed parameters a = 50 and r = 0.1, and three different values of p, namely 0.01, 0.1, and 1.0.

Figure 1. The filled function (dashed line) of equation (1) applied on the one-dimensional instance of TPUO-3 (Rastrigin function - solid line) defined in Appendix A, for the local minimizer x* = -1, a = 50, r = 0.1, and p = 0.01, 0.1, and 1.0

As we can see, the value p = 0.01 produces a very local effect around the detected minimizer, leaving the rest of the function unaffected. Increasing the value of p to 0.1 produces a wider alteration around x*, although the effect remains local. However, a further increase to 1.0 has a tremendous effect on the function, especially for the global minimizer that lies at zero, to the right of x*. The effect becomes even stronger if we further vary the rest of the parameters. Nevertheless, it is expected that an appropriate set of parameters will be problem-dependent. Indeed, Parsopoulos and Vrahatis (2004, p. 13) provided graphical evidence for a polynomial function, where the setting a = 200 (and a = 400), r = 0, and p = 1 was promising, while the same setting is rather baneful for the function illustrated in Fig. 1. Finally, one can notice that the filled function introduces new local minima around the detected minimizer; an effect also known as the Mexican hat effect.

The aforementioned deficiencies dictate the cautious use of filled functions, suggesting that a remarkable effort may be needed to discover an appropriate configuration for the problem at hand. More recent methods, based on ideas similar to the concept of filled functions, have been proposed and used successfully with PSO for detecting several global minimizers and alleviating local ones. These techniques can also be considered as sequential niching schemes, described in the previous chapter, and they are exposed in the following sections.
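For concreteness, the transformation of equation (1) can be sketched as follows, using the parameter values of the Fig. 1 example and a Rastrigin-like toy objective; both the helper names and the toy function are illustrative.

import numpy as np

def filled(f, x, x_star, a=50.0, r=0.1, p=0.1):
    # Filled-function transformation of equation (1): a/(r + f(x)) inverts
    # the landscape near the detected minimizer x_star, and the exponential
    # factor confines the effect to a neighborhood of scope p.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    x_star = np.atleast_1d(np.asarray(x_star, dtype=float))
    bump = np.exp(-np.dot(x - x_star, x - x_star) / p ** 2)
    return f(x) + (a / (r + f(x))) * bump

# Rastrigin-like toy with a local minimizer near x = -1:
f = lambda x: float(x[0] ** 2 + 10.0 - 10.0 * np.cos(2 * np.pi * x[0]))
print(filled(f, [-1.0], [-1.0]))   # peak produced at the detected minimizer
print(filled(f, [2.0], [-1.0]))    # far away: essentially f(2.0)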
THE STRETCHING TECHNIQUE FOR ALLEVIATING LOCAL MINIMIZERS

Stretching was introduced to PSO as a means for alleviating local minima in Parsopoulos et al. (2001). It works similarly to filled functions, i.e., after the detection of a local minimizer, a transformation is applied on the objective function such that the higher local minima vanish, while the lower minima (including the global one) remain unaffected. Stretching has a wide effect on the whole range of the objective function and is properly suited for alleviating local minimizers, especially in highly multimodal problems.

More specifically, if f(x) is the objective function and x* is a detected local minimizer, stretching is defined as the following two-stage transformation:

G(x) = f(x) + γ1 ||x - x*|| [1 + sign(f(x) - f(x*))],   (2)

H(x) = G(x) + γ2 [1 + sign(f(x) - f(x*))] / tanh(μ (G(x) - G(x*))),   (3)

where γ1, γ2, and μ are user-defined parameters, while sign(z) is the three-valued sign function, defined as:

sign(z) = +1, if z > 0; 0, if z = 0; -1, if z < 0.

Thus, after the detection of x*, all new candidate solutions will be evaluated using equations (2) and (3), instead of the original objective function f(x). If a new local minimizer with lower function value than f(x*) is detected, it replaces x* in equations (2) and (3). Thus, stretching is applied on a sequence of local minimizers with monotonically decreasing function values, which, under proper assumptions on the number of local minima, can lead the algorithm to the global one.

The first transformation, G(x), stretches the function upwards, covering higher local minima, while the second one, H(x), transforms the detected minimizer into a maximizer. The whole procedure leaves all minimizers with lower function values unaffected. This is illustrated in Fig. 2 for the same objective function as in Fig. 1, i.e., the one-dimensional instance of TPUO-3 (solid line), and for the local minimizer x* = -1, with parameters γ1 = γ2 = 20 and μ = 0.1. The first transformation, G(x), is illustrated with a dotted line, while the final form, H(x), of the stretched objective function is illustrated with a dashed line. We can clearly see the valley shaped on the right of the stretched minimizer, around the position of the global one at zero, as well as the maximum (central peak) introduced at the (previously local minimizer) x* = -1. It is intuitively evident that an algorithm applied on the transformed landscape will have a higher probability of convergence to the global minimizer.

Figure 2. The stretching transformation applied on the one-dimensional instance of TPUO-3 (Rastrigin function - solid line) defined in Appendix A, for the local minimizer x* = -1, and parameters γ1 = γ2 = 20, μ = 0.1

We can state the following significant remarks regarding the application of stretching:

Remark 1. Stretching can also be applied on non-minimizers. Indeed, it can be applied to any point except the global minimizer, in order to simplify the landscape by reducing the number of local minima.

Remark 2. Stretching shall not be applied on the global minimizer. Since it eliminates all minimizers with the same or higher values, it would destroy all remaining global and local minima. Thus, stretching is not appropriate for detecting several global minimizers of a function.

Remark 3. Stretching introduces new local minima. We can clearly distinguish the two new local minima introduced on both sides of the stretched local minimizer, x* = -1, in Fig. 2. This is the Mexican hat effect that was also verified for the filled functions in Fig. 1.

Remark 4. Stretching is not suitable for detecting all local minimizers. Its application on a local minimizer will destroy all higher local minimizers.

Remark 5. Stretching is sensitive to the values of its parameters. Similarly to the filled functions, stretching requires a proper parameter set to work efficiently. Although the negative effects of a bad parameter set are not as extensive as in filled functions, it can produce transformations of inferior quality. Parsopoulos and Vrahatis (2004) have illustrated this sensitivity for different parameter settings.
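A direct transcription of equations (2) and (3) is given below; the parameter values follow the Fig. 2 example, and the small eps term is ours, added only to keep the division finite exactly at x*, where the tanh term vanishes and H produces the intended central peak.

import numpy as np

def stretched(f, x, x_star, g1=20.0, g2=20.0, mu=0.1, eps=1e-12):
    # Two-stage stretching of equations (2)-(3). For points with
    # f(x) < f(x_star), sign = -1 and both correction terms vanish,
    # so lower minima (including the global one) are left untouched.
    x = np.asarray(x, dtype=float)
    x_star = np.asarray(x_star, dtype=float)
    s = np.sign(f(x) - f(x_star))
    G = f(x) + g1 * np.linalg.norm(x - x_star) * (1.0 + s)     # eq. (2)
    G_star = f(x_star)          # G(x_star) = f(x_star), since sign(0) = 0
    return G + g2 * (1.0 + s) / (np.tanh(mu * (G - G_star)) + eps)  # eq. (3)

f = lambda x: float(x[0] ** 2 + 10.0 - 10.0 * np.cos(2 * np.pi * x[0]))
print(stretched(f, np.array([-1.2]), np.array([-1.0])))  # raised upwards
print(stretched(f, np.array([0.0]), np.array([-1.0])))   # global minimum: unchanged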
Although the negative effects ofa bad parameter set are not as extensive as in filled functions, it can produce transformations of • 155. 137Performance-Enhancing Techniquesinferior quality. Parsopoulos and Vrahatis (2004) have illustrated this sensitivity for different pa-rameter settings.Summarizing, stretching can be a very useful technique for alleviating local minima, combined easilywith any evolutionary algorithm. It has been used with PSO on several problems with very promisingresults (Parsopoulos et al., 2001; Parsopoulos & Vrahatis, 2002a, 2004). However, its use requires someprecautions such as a nice parameter setting, which can be determined through a preprocessing phaseprior to the application of PSO. In parallel, a mechanism for distinguishing between local and globalminimizers can be useful to avoid destructing all global minimizers of the objective function. The latterissue can be easily addressed in functions with known global minima, e.g., error functions where theglobal minimum is known to be equal to zero or bounded below.Stretching can be applied anew to an arbitrary number of points, always using the original objectivefunction, f(x), in equations (2) and (3) for each new minimizer. An improved variant that claims to ad-dress the Mexican hat effect has been reported in Wang and Zhang (2007).Alternatively, the undesirableeffect can also be tackled by using a repulsion technique that keeps the algorithm away from the newlyintroduced local minima. This technique is discussed in a later section, after the presentation of deflec-tion, an intimately related technique for detecting several minimizers.THE DEFLECTION TECHNIqUE FOR DETECTING SEVERAL MINIMIZERSDeflection was introduced by Magoulas et al. (1997) as a technique for detecting several (local or global)minimizers of the error function in neural network training problems. Later, it was combined with PSOfor solving widely used optimization problems (Parsopoulos & Vrahatis, 2004). Deflection works simi-Figure 2. The stretching transformation applied on the one-dimensional instance of TPUO-3(Rastriginfunction - solid line) defined in Appendix A, for the local minimizer x* = -1, and parameters, γ1= γ2=20, μ = 0.1 • 156. 138Performance-Enhancing Techniqueslarly to the previously presented approaches, i.e., it transforms the objective function so that alreadydetected minimizers are converted to maximizers. However, it has only one additional constraint; it canbe applied only on non-negative functions.Let f(x)>0 be the objective function, and xi*, i = 1, 2,…, m, be the already detected minimizers. Then,deflection consists of the following transformation:* 11( ) ( ) ( ; , ) ,mi i iiD x f x T x x −== × λ∏ (4)where Ti, i = 1, 2,…, m, are proper functions, and λi, i = 1, 2,…, m, are positive relaxation parameters.The functions Tishall be selected such that D(x) has exactly the same minimizers with f(x) except xi*,i = 1, 2,…, m. In other words, any sequence of points converging to a minimizer xi* shall not producea minimum at D(xi*). A category of functions that satisfy the aforementioned deflection property is thefollowing (Magoulas et al., 1997):Ti(x; xi*, λi) = tanh(λi||x-xi*||). 
The effect of the deflection procedure is intended to be local, around the position of the detected minimizer, so that the rest of the minimizers remain unaffected.

In contrast to stretching, deflection uses all detected minimizers to transform the objective function. Thus, if a new minimizer is detected, it adds a new term to the product of equation (4), instead of replacing the previous minimizer as in equations (2) and (3). Therefore, deflection is especially suited for detecting a multitude of (local or global) minimizers of the objective function. Similarly to stretching, deflection can be applied on any point of the search space, regardless of its qualification as a minimizer, although without altering its surrounding minimizers. Therefore, in contrast to stretching, it can also be used on global minimizers.

The requirement of a positive objective function lies in the fact that, if f(x*) < 0 for a given minimizer, x*, then the transformation of equation (4) will retain all minimizers (including x*), shifting their values towards -∞. On the other hand, if f(x*) = 0, then also D(x*) = 0. Thus, a global minimum equal to zero cancels the deflection transformation, raising an issue that can be addressed through the use of a non-negative shift. In this case, deflection is applied on the shifted objective function:

f´(x) = f(x) + c,   (6)

where c > 0 is a suitable positive constant, such that f´(x) > 0 for all x. The shifted objective function f´(x) has exactly the same minimizers as f(x), although all its minima are shifted upwards by c.

The workings of deflection on the same problem as in Fig. 2 are illustrated in Fig. 3. More specifically, the one-dimensional instance of TPUO-3 (Rastrigin function), defined in Appendix A and illustrated with a solid line, is used, and deflection (dashed line) is applied on its local minimizer, x1* = -1, as well as on the global minimizer, xg* = 0, with parameters λ1 = λg = 1. The parameter c = 1 was used in equation (6) to shift the objective function and enable the deflection of the global minimizer xg* = 0. Otherwise, the deflection would have an effect only on the local minimizer, which has an original function value equal to f1* = 1.

Figure 3. The deflection transformation applied on the one-dimensional instance of TPUO-3 (Rastrigin function - solid line) defined in Appendix A, on the local minimizer x1* = -1, as well as on the global minimizer xg* = 0, with parameters λ1 = λg = 1

Equation (4) implies that the deflected value of any given point x depends on three factors: the magnitude of its value, f(x); its distance from the deflected minimizers, ||x - xi||; and the corresponding parameters, λi. Regarding the first factor, it is clear that D(x) increases proportionally to f(x) for fixed values of the functions Ti(x; xi*, λi). For this reason, the peaks produced in the regions of deflected minimizers with higher values are expected to also be higher than those of minimizers with smaller values. This effect is also observed in Fig. 3, where the region of the global minimizer positioned at zero assumes lower values than that of the local minimizer at -1. Regarding the second factor, points at an adequate distance from the deflected minimizers will assume values close to their original ones, while points that lie close to a deflected minimizer will be more affected by the transformation, assuming higher values. Finally, keeping x and xi fixed in equation (5) while increasing λi results in monotonically increasing values of Ti(x; xi*, λi) that approach 1. Thus, the deflected value of x will be close to its original one. On the other hand, decreasing λi towards zero produces values of Ti(x; xi*, λi) close to zero. In this case, the value of D(x) will become very large, due to the inversion of the functions Ti in equation (4).
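A minimal sketch of equations (4)-(6) follows; the toy objective and the parameter values are illustrative, and the function diverges at each deflected minimizer, producing the intended peaks.

import numpy as np

def deflected(f, x, minimizers, lambdas, c=0.0):
    # Deflection of equations (4)-(6): every detected minimizer contributes
    # a factor tanh(lambda_i ||x - x_i*||)^(-1); a shift c > 0 keeps the
    # objective positive, so that the transformation is not cancelled.
    x = np.asarray(x, dtype=float)
    value = f(x) + c                                   # equation (6)
    for x_star, lam in zip(minimizers, lambdas):
        value /= np.tanh(lam * np.linalg.norm(x - np.asarray(x_star)))
    return value

f = lambda x: float(x[0] ** 2 + 10.0 - 10.0 * np.cos(2 * np.pi * x[0]))
# Deflect both the local minimizer at -1 and the global one at 0 (c = 1):
print(deflected(f, [0.5], [[-1.0], [0.0]], [1.0, 1.0], c=1.0))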
Figure 4 illustrates the effect of different values of λi on deflection. The same objective function as in Fig. 3 is used, while deflection is applied on the local minimizers, x1* = -2 and x2* = -1, as well as on the global one, xg* = 0, with parameter values λi = 0.8 (dash-dotted line), λi = 1.0 (dashed line), and λi = 2.0 (dotted line). Apparently, increasing values of λi have a milder effect on the objective function (a shifted function as in Fig. 3 was assumed also in Fig. 4).

Figure 4. The deflection transformation applied on the one-dimensional instance of TPUO-3 (Rastrigin function - solid line) defined in Appendix A, on the local minimizers x1* = -2, x2* = -1, as well as on the global minimizer xg* = 0, for different parameter values, λi = 0.8 (dash-dotted line), λi = 1.0 (dashed line), and λi = 2.0 (dotted line), for all i

The Mexican hat effect, which accompanies filled functions and stretching, is present also in deflection, as we observe in Figs. 3 and 4. Local minimizers introduced by this effect can still trap an algorithm, although they have remarkably higher values than the originally deflected minimizers. To alleviate this drawback, Parsopoulos and Vrahatis (2004) proposed the repulsion technique, described in the next section.

THE REPULSION TECHNIQUE

The repulsion technique was introduced by Parsopoulos and Vrahatis (2004) for preventing the attraction of PSO to the local minimizers artificially introduced by stretching and deflection. The underlying idea is intuitively appealing and straightforwardly realizable: after the detection of a minimizer and the application of a (stretching or deflection) transformation on it, a repulsion area is defined around it, and any particle that falls into this area is repelled away.

To put it formally, let X* = {xj*; j = 1, 2, ..., m} be the set of already detected minimizers, and S = {x1, x2, ..., xN} be the swarm at a given iteration. After the position update of the particles using any PSO variant, the distance between each particle and each detected minimizer, defined as (Parsopoulos & Vrahatis, 2004):

dij = d(xi, xj*) = ||xi - xj*||,   i = 1, 2, ..., N,   j = 1, 2, ..., m,   (7)

is computed, to check whether the particle lies in the repulsion area of the minimizer. In general, this area is defined for each minimizer as a spherical region, centered at the minimizer, with radius rij, i = 1, 2, ..., N, j = 1, 2, ..., m. Hence, if dij ≤ rij, then xi shall be repelled away from xj*. This is achieved with a correction of the particle position, as follows (Parsopoulos & Vrahatis, 2004):

xi = xi + ρij zij,   (8)

where ρij is a fixed parameter determining the repulsion strength, and zij is the unitary vector with direction from xj* towards xi, defined as:
The repulsion procedure is described in pseudocode in Table 1.Parsopoulos and Vrahatis (2004) illustrated the workings of repulsion in conjunction with the deflec-tion technique on test problem TPUO-21, defined in Appendix A of the book at hand, which has the shapeof an egg holder, as depicted in the left part of Fig. 5. Their experiment aimed at the detection of all 12global minimizers within the range [-5,5]2. As they report, the plethora of equally attractive regions ofthe search space results in a rambling movement of the particles, which is illustrated in the right part ofFig. 5. Moreover, there is no guarantee that after an (even large) number of restarts, the algorithm willbe able to detect all global minimizers. This problem was addressed by applying deflection with repul-sion after the detection of each global minimizer.The experiments used the constriction coefficient variant, PSO[co], as well as the inertia weight vari-ant, PSO[in]. The corresponding parameter setting is reported in Table 2, while the results are reportedin Table 3. Both variants were able to detect all 12 minimizers, with PSO[co]being more efficient thanPSO[in]. The values of the parameters rijand ρijare crucial, since they may result in the inclusion ofneighboring minimizers in the repulsion area of a detected one. For example, this could happen if rij=3 was used in the example of Fig. 5. Hence, in the application of repulsion, an estimation of the relativedistance among the desirable minimizers can be more than useful. Otherwise, a preprocessing phase ofpreliminary experiments using different parameter settings can offer the necessary information.Parsopoulos and Vrahatis (2004) also reported a plethora of experimental results on the problem ofdetecting periodic orbits of nonlinear mappings, where all techniques described so far find a rich field ofapplication. We postpone the discussion of these results until a later chapter devoted to the applicationsTable 1. Pseudocode of the repulsion procedureInput: Set of detected minimizers, X*; swarm, S; parameters rijand ρijfor all i and j.Step 1. Do (i = 1…N)Step 2. If (X* ≠ ∅) ThenStep 3. Do (j = 1…m)Step 4. Computedijusing equation (7).Step 5. If (dij≤ rij) ThenStep 6. Compute zijusing equation (9).Step 7. Update particle position xiusing equation (8).Step 8. End IfStep 9. End DoStep 10. End IfStep 11. End Do • 160. 142Performance-Enhancing Techniquesof PSO in dynamical systems. The next section discusses the penalty function technique for tacklingconstrained optimization problems with PSO.THE PENALTY FUNCTION TECHNIqUE FORCONSTRAINED OPTIMIZATION PROBLEMSIn Chapter 1, we defined the constrained optimization problem as follows:min ( ), subject to ( ) 0, 1, 2,..., ,ix Af x C x i k∈≤ = (10)where k is the number of constraints. The form of constraints in relation (10) is not restrictive, sincedifferent forms can be represented equivalently as follows:Ci(x) ≥ 0 ⇔ -Ci(x) ≤ 0,Figure 5. (Left) plot of test problem TPUO-21(egg holder), defined in Appendix A of the book at hand. Inthe range [-5,5]2, it has 12 global minimizers at the points (k1π/2, k2π)T, for k1= ±1,±2, and k2= 0,±1.(Right) contour plot of TPUO-21and the rambling movement of a particle for 30 iterations. Darker contourlines denote the regions of the global minimizersTable 2. Parameters of the constriction coefficient and inertia weight PSO variants for detecting allglobal minimizers of test problem TPUO-21, illustrated in Fig. 
The next section discusses the penalty function technique for tackling constrained optimization problems with PSO.

THE PENALTY FUNCTION TECHNIQUE FOR CONSTRAINED OPTIMIZATION PROBLEMS

In Chapter 1, we defined the constrained optimization problem as follows:

min_{x∈A} f(x),   subject to   Ci(x) ≤ 0,   i = 1, 2, ..., k,   (10)

where k is the number of constraints. The form of the constraints in relation (10) is not restrictive, since different forms can be represented equivalently as follows:

Ci(x) ≥ 0 ⇔ -Ci(x) ≤ 0,
Ci(x) = 0 ⇔ Ci(x) ≤ 0 and -Ci(x) ≤ 0.

One of the most popular approaches for tackling constrained problems is the use of penalty functions. In general, a penalty function is defined as:

fP(x) = f(x) + P(x),   x ∈ A ⊂ Rn,

where f(x) is the original objective function and P(x) is a penalty term. Obviously, P(x) shall be selected such that:

P(x) = 0, if x is a feasible point; P(x) = a > 0, otherwise,

in order to penalize only infeasible solutions. Also, P(x) can be either fixed to a prescribed value for all infeasible solutions, or proportional to the number of violated constraints and the degree of violation.

Recently, the following penalty function was used with evolutionary algorithms and PSO, exhibiting very promising results (Parsopoulos & Vrahatis, 2002b; Yang et al., 1997):

fP(x) = f(x) + h(t) H(x),   x ∈ A ⊂ Rn,   (11)

where f(x) is the original objective function; h(t) is a penalty value, dynamically changing with the iteration number, t; and H(x) is a penalty factor, defined as follows:

H(x) = Σi=1..k θ(qi(x)) qi(x)^γ(qi(x)),   (12)

where qi(x) = max{0, Ci(x)}, i = 1, 2, ..., k; θ(qi(x)) is a multi-stage assignment function; and γ(qi(x)) is the power of the penalty function (Homaifar et al., 1994).

The aforementioned penalty function takes all constraints into consideration, based on their corresponding degree of violation, while the user can manipulate each one independently, based on its level of importance. Also, the penalty value, h(t), adds an additional degree of freedom, allowing the dynamic change of the penalty magnitude during the optimization procedure.

An alternative penalty function is defined in Coello Coello (1999), as follows:

fP(x) = f(x) + H(x),   (13)

with,

H(x) = w1 HNVC(x) + w2 HSVC(x),   (14)

where HNVC(x) is the number of violated constraints, and HSVC(x) is the sum of the violated constraints, defined as follows:

HSVC(x) = Σi=1..k max{0, Ci(x)}.

The weights w1 and w2 permit the user to determine the importance of each penalty term, based on the problem at hand.

Parsopoulos and Vrahatis (2002b) investigated the performance of PSO equipped with the penalty function of equations (11) and (12) on several widely used constrained optimization test problems. They also investigated unified PSO (UPSO) on constrained engineering design problems, using the penalty function of equations (13) and (14) (Parsopoulos & Vrahatis, 2005). We postpone the discussion of their results until a later chapter that deals with applications of PSO to this type of problems.
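A sketch of the penalty scheme of equations (11) and (12) follows. The multi-stage assignment θ(q), the power γ(q), and the fixed value standing in for the schedule h(t) are illustrative placeholders, not the settings used in the cited studies.

import numpy as np

def penalized(f, constraints, x, h=100.0,
              theta=lambda q: 10.0,
              gamma=lambda q: 2.0 if q >= 1.0 else 1.0):
    # Penalty function of equations (11)-(12): every violated constraint
    # C(x) <= 0 contributes theta(q) * q**gamma(q), with q the degree of
    # violation; h plays the role of the dynamic penalty value h(t).
    H = 0.0
    for C in constraints:
        q = max(0.0, C(x))
        H += theta(q) * q ** gamma(q)
    return f(x) + h * H

# Example: minimize x^2 subject to 1 - x <= 0 (i.e. x >= 1).
f = lambda x: float(x[0] ** 2)
C = lambda x: 1.0 - float(x[0])
print(penalized(f, [C], np.array([0.5])))   # infeasible: heavily penalized
print(penalized(f, [C], np.array([1.5])))   # feasible: plain f = 2.25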
The next section closes this chapter with a discussion of rounding schemes that allow continuous-based PSO approaches to work efficiently on integer optimization problems.

ROUNDING TECHNIQUES FOR INTEGER OPTIMIZATION

PSO was originally introduced for continuous optimization problems. Both its philosophy and operation imply the existence of continuous variables. This property raises a rational question regarding its efficiency on integer subspaces of the n-dimensional Euclidean space. A common approach for applying continuous optimization methods on integer problems is the transformation of the integer problem to a corresponding continuous problem. Then, the continuous problem is solved and integer solutions are obtained by truncating the detected continuous ones. Such approaches have been used with traditional branch and bound optimization algorithms, which transform the integer problem to a corresponding continuous one, and apply quadratic programming techniques to solve it (Lawler & Wood, 1966; Manquinho et al., 1997).

Laskari et al. (2002) considered the aforementioned approach in the context of PSO for solving integer programming problems. More specifically, they applied the standard PSO update rules, while each particle component was rounded to the nearest integer value. Hence, PSO retained its dynamics, while the produced solutions were integer. To put it more formally, let x_i = (x_i1, x_i2,…, x_in)^T be the i-th particle of the swarm, with components x_ij ∈ R, j = 1, 2,…, n. Then, its rounded counterpart, z_i = (z_i1, z_i2,…, z_in)^T, defined as:

z_{ij} = \lfloor x_{ij} + 0.5 \rfloor, \quad j = 1, 2, \ldots, n,    (15)

replaces x_i in the swarm at each iteration. Therefore, the swarm always consists of integer particles, although their velocities can be real-valued. Obviously, following this approach may result in multiple appearances of the same integer particle in the swarm, since each integer vector corresponds to an infinite number of different real-valued vectors rounded with equation (15). The original real-valued trajectory of a single particle (solid line) and its corresponding integer trajectory (dashed-dotted line) are depicted in Fig. 6 for the 2-dimensional instance of test problem TPUO-1, defined in Appendix A of the book at hand, for 10 iterations.

Figure 6. Original (real-valued) trajectory of a single particle (solid line) and its corresponding integer trajectory (dashed-dotted line) for 10 iterations, produced with the rounding scheme of equation (15), on the contour plot of the 2-dimensional instance of test problem TPUO-1 defined in Appendix A of the book at hand.

Intuitively, the repeated rounding at each iteration may result in search stagnation, especially in cases where particles cluster close to each other. To tackle this potential deficiency, Laskari et al. (2002) also investigated a different scheme with gradual truncation of the particles, in order to retain the dynamics of PSO while gradually pushing the particles towards integer values. According to this scheme, particles are truncated to their first k1 decimal digits in the first T iterations, to k2 < k1 decimal digits in the next T iterations, and so on, until k_i reaches zero. Typically, one can consider k_{i+1} = k_i - 2, i.e., two decimal digits are eliminated from the particle components after every T iterations of the algorithm. A more aggressive scheme would be k_{i+1} = k_i / 2, where the transformation of the particles to integers is rapid. The first of these schemes requires that the initial number of decimal digits be even, while different schemes with dynamically changing values of k_i can also be considered. Laskari et al. (2002) applied the aforementioned approaches in the context of PSO on several integer programming test problems reported in Appendix A. We postpone the discussion of their results until a later chapter of the book, which considers the application of PSO on such problems.
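Both schemes are short to state in code; the sketch below assumes NumPy arrays for the particle positions.

```python
import numpy as np

def round_nearest(x):
    """Equation (15): z_ij = floor(x_ij + 0.5), nearest-integer rounding."""
    return np.floor(x + 0.5)

def truncate_digits(x, k):
    """Keep only the first k decimal digits of every particle component."""
    factor = 10.0 ** k
    return np.trunc(x * factor) / factor

def digits_schedule(t, T, k0):
    """Gradual truncation: reduce k by 2 every T iterations (k0 even), so the
    particles become integer after (k0 / 2) * T iterations."""
    return max(0, k0 - 2 * (t // T))
```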
CHAPTER SYNOPSIS

This chapter was devoted to the presentation of techniques that enhance PSO performance, rendering it efficient in demanding applications. All the presented techniques are based on transformations of the objective function and/or the problem variables. Additional parameters are introduced in most cases, which require the interference of the user. A preprocessing phase of preliminary experiments can be very useful for determining the proper values of these parameters. In some cases, wrong values can be detrimental for the algorithm; thus, the user should pay special attention when applying these techniques, to avoid undesirable effects.

REFERENCES

Coello, C. A. C. (1999). Self-adaptive penalties for GA-based optimization. In Proceedings of the 1999 IEEE Congress on Evolutionary Computation (CEC'99), Washington (DC), USA (pp. 573-580).

Ge, R. P. (1987). The theory of the filled function method for finding a global minimizer of a nonlinearly constrained minimization problem. Journal of Computational Mathematics, 5, 1-9.

Ge, R. P. (1990). A filled function method for finding a global minimizer of a function of several variables. Mathematical Programming, 46, 191-204. doi:10.1007/BF01585737

Goldstein, A. A., & Price, J. F. (1971). On descent from local minima. Mathematics of Computation, 25(3), 569-574. doi:10.2307/2005219

Gómez, S., & Levy, A. V. (1982). The tunneling method for solving the constrained global optimization problem with several nonconnected feasible regions. In Lecture Notes in Mathematics, Vol. 909 (pp. 34-47). Berlin: Springer.

Homaifar, A., Lai, A. H.-Y., & Qi, X. (1994). Constrained optimization via genetic algorithms. Simulation, 62(4), 242-254. doi:10.1177/003754979406200405

Laskari, E. C., Parsopoulos, K. E., & Vrahatis, M. N. (2002). Particle swarm optimization for integer programming. In Proceedings of the 2002 IEEE Congress on Evolutionary Computation (CEC'02), Honolulu (HI), USA (pp. 1582-1587).

Lawler, E. L., & Wood, D. W. (1966). Branch and bound methods: A survey. Operations Research, 14, 699-719. doi:10.1287/opre.14.4.699

Magoulas, G. D., Vrahatis, M. N., & Androulakis, G. S. (1997). On the alleviation of local minima in backpropagation. Nonlinear Analysis: Theory, Methods & Applications, 30(7), 4545-4550.

Manquinho, V. M., Marques Silva, J. P., Oliveira, A. L., & Sakallah, K. A. (1997). Branch and bound algorithms for highly constrained integer programs (Tech. Rep.). Cadence European Laboratories, Portugal.

Montalvo, A. (1979). Development of a new algorithm for the global minimization of functions. PhD thesis, Universidad Nacional Autónoma de México, Mexico.

Parsopoulos, K. E., Plagianakos, V. P., Magoulas, G. D., & Vrahatis, M. N. (2001). Objective function "stretching" to alleviate convergence to local minima. Nonlinear Analysis: Theory, Methods & Applications, 47(5), 3419-3424.

Parsopoulos, K. E., & Vrahatis, M. N. (2002a). Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3), 235-306. doi:10.1023/A:1016568309421

Parsopoulos, K. E., & Vrahatis, M. N. (2002b). Particle swarm optimization method for constrained optimization problems. In P. Sincak, J. Vascak, V. Kvasnicka, & J. Pospichal (Eds.), Intelligent Technologies - Theory and Applications: New Trends in Intelligent Technologies (Frontiers in Artificial Intelligence and Applications series, Vol. 76) (pp. 214-220). IOS Press.

Parsopoulos, K. E., & Vrahatis, M. N. (2004). On the computation of all global minimizers through particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3), 211-224. doi:10.1109/TEVC.2004.826076
Parsopoulos, K. E., & Vrahatis, M. N. (2005). Unified particle swarm optimization for solving constrained engineering optimization problems. Lecture Notes in Computer Science, 3612, 582-591.

Shusterman, L. B. (1979). The method of successive elimination in search for the global optimum of multiextremal algebraic functions. Radioelektronika, 79(6), 58-63.

Torn, A., & Žilinskas, A. (1989). Global optimization. Heidelberg, Germany: Springer.

Vilkov, A. V., Zhidkov, N. P., & Schedrin, B. M. (1975). A method of search for the global minimum of a function of one variable. Journal of Computational Mathematics and Mathematical Physics, 75, 1040-1043.

Wang, Y.-J., & Zhang, J.-S. (2007). A new constructing auxiliary function method for global optimization. Mathematical and Computer Modelling, 47(11-12), 1396-1410. doi:10.1016/j.mcm.2007.08.007

Yang, J.-M., Chen, Y.-P., Horng, J.-T., & Kao, C.-Y. (1997). Applying family competition to evolution strategies for constrained optimization. Lecture Notes in Computer Science, 1213, 201-211. Berlin: Springer. doi:10.1007/BFb0014812

Section 2: Applications of Particle Swarm Optimization

Chapter 6: Applications in Machine Learning

DOI: 10.4018/978-1-61520-666-7.ch006

This chapter presents the fundamental concepts regarding the application of PSO on machine learning problems. The main objective in such problems is the training of computational models for performing classification and simulation tasks. It is not our intention to provide a literature review of the numerous related applications. Instead, we aim at providing guidelines for the application and adaptation of PSO on this problem type. To achieve this, we focus on two representative cases, namely the training of artificial neural networks, and learning in fuzzy cognitive maps. In each case, the problem is first defined in a general framework, and then an illustrative example is provided to familiarize readers with the main procedures and possible obstacles that may arise during the optimization process.

INTRODUCTION

Machine learning is the field of artificial intelligence that deals with algorithms that render computational models capable of learning and adapting to their environment. From an abstract viewpoint, machine learning is the procedure of extracting information, in the form of patterns or rules, from data. This purpose requires the use of computational methods. Human interaction can also be beneficial to the algorithms within a collaborative framework. However, the elimination of this necessity still remains the main challenge in the development of intelligent systems. There are different types of machine learning procedures, based on the desired outcome as well as on the degree of human intervention:

a. Supervised learning: The algorithm builds a mapping between a set of presented input data and a set of desired outputs. This is possible by altering the parameters of the computational model so that the error between the produced and the desired output is minimized.
b. Unsupervised learning: The algorithm tunes the computational model to regularities of the available data, without a task-oriented measure of quality. This is possible through competition among the modules of the computational model.

c. Semi-supervised learning: This is a combination of the two previous approaches that employs both learning with explicit input-output examples and non-labeled examples.

d. Reinforcement learning: The algorithm learns an input-output mapping by continuously interacting with the environment, which in turn admits an impact from every taken step, providing feedback to the model.

Several additional learning subtypes, which are outside the scope of the book at hand, can be distinguished. Supervised learning constitutes a very prosperous application field for evolutionary algorithms and PSO, due to the existence of explicit performance measures. These measures usually come in the form of objective functions in the parameters of the computational model. Thus, training procedures aim at the detection of parameter values that minimize the model's error in learning a set of presented examples. In this context, PSO has been applied for training artificial neural networks and fuzzy cognitive maps. The rest of this chapter is dedicated to an overview of these applications.

TRAINING ARTIFICIAL NEURAL NETWORKS WITH PSO

In the following sections, we briefly present the problem of neural network training for the most common case of feedforward neural networks. We also present an illustrative example for the logical XOR classification task. Further applications are also reported.

The Multi-Layer Perceptron Model

Artificial neural networks (NNs) are computational models based on the operation of biological neural networks, which constitute the information processing mechanism of the human brain. Their structure is based on the concept of the artificial neuron, which resembles biological neurons, as their main processing unit. The artificial neuron constitutes a nonlinear mapping between a set of input and a set of output data. Thus, if the input data are represented as a vector, Q = (q1, q2,…, qm)^T, the artificial neuron implements the function:

y = F\left( b + \sum_{i=1}^{m} w_i q_i \right),

where F is the transfer function; w_i, i = 1, 2,…, m, are the weights; and b is a bias. In order to retain a compact notation, we will henceforth represent the bias as a weight, w_0, with an auxiliary constant input, q_0 = 1.

The training of the neuron to learn an input-output pair, {Q, y}, is the procedure of detecting proper weights so that the output y is obtained when Q is presented to the neuron. Obviously, this procedure can be modeled as an error minimization problem:

\min_{w_i} \left( y - F\left( \sum_{i=0}^{m} w_i q_i \right) \right)^2.

If more than one input vector, Q1, Q2,…, QK, is to be learned by the neuron, then the objective function is augmented with a separate square-error term for each input vector:

\min_{w_i} \sum_{k=1}^{K} \left( y_k - F\left( \sum_{i=0}^{m} w_i q_{ki} \right) \right)^2,

where q_ki is the i-th component of the k-th input vector, Q_k = (q_k1, q_k2,…, q_km)^T (recall that w_0 is the bias of the neuron, with q_k0 = 1 for all k). This kind of training procedure is also known as batch training, as all input vectors are presented to the neuron prior to any change of its weights. Obviously, the dimension of the minimization problem is equal to the number of weights and biases, which is in direct correlation with the dimension of the input vector.
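The batch-training error of a single neuron translates directly into a few lines of code. The sketch below is a minimal illustration, assuming the logistic transfer function that is introduced formally in the next paragraph.

```python
import numpy as np

def logistic(x, lam=1.0):
    # F(x, lambda) = 1 / (1 + exp(-lambda * x)).
    return 1.0 / (1.0 + np.exp(-lam * x))

def neuron_batch_error(w, Q, y):
    """Batch squared error of a single neuron.

    w : (m+1,) weight vector, with w[0] playing the role of the bias
        (auxiliary constant input q_0 = 1).
    Q : (K, m) matrix of input vectors; y : (K,) desired outputs.
    """
    Q1 = np.hstack([np.ones((Q.shape[0], 1)), Q])   # prepend q_0 = 1
    return float(np.sum((y - logistic(Q1 @ w)) ** 2))
```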
The transfer function, F(x), is selected based on the desired properties that shall be attributed to the model. Although linear transfer functions can be used, nonlinear functions equip the model with enhanced classification capabilities. Thus, nonlinear transfer functions are most commonly used, with sigmoid functions, defined as:

F(x, \lambda) = \frac{1}{1 + \exp(-\lambda x)},

being the most common choice. In addition, alternative transfer functions have been proposed in the literature (Magoulas & Vrahatis, 2006).

The gathering of artificial neurons, interconnected and structured in layers, produces various NN models. The most popular one is the multi-layer perceptron. The first layer in such a structure is called the input layer. Its neurons are commissioned simply to forward the input vector to the neurons of the next layer through their weighted interconnections. Thus, they do not perform any computation and, hence, do not possess biases or transfer functions. The output neurons constitute the output layer, while all intermediate layers are called hidden layers, and their neurons admit biases and transfer functions. Since each layer forwards its output strictly to the neurons of the next one, these NN models are also called feedforward NNs. Figure 1 illustrates the structure of such a NN architecture.

Figure 1. A multi-layer perceptron model.

Let L be the number of layers, including the input and output layers, and let N1, N2,…, NL be the numbers of neurons per layer. Then, the number of weights between the l-th and the (l+1)-th layer, including the biases of the neurons of the (l+1)-th layer, is equal to N_{l+1}(1 + N_l). Thus, the total number of weights and biases of the whole NN is equal to:

n = \sum_{l=1}^{L-1} N_{l+1} \left( 1 + N_l \right).    (1)

Let {Q_k, y_k}, k = 1, 2,…, K, be the training patterns, i.e., input-output pair examples, with Q_k = (q_k1, q_k2,…, q_km)^T and y_k = (y_k1, y_k2,…, y_kD)^T. Obviously, it must hold that N1 = m and NL = D. Also, let w_ij^[l] denote the weight of the interconnection between the i-th neuron of the l-th layer and the j-th neuron of the (l+1)-th layer, with l = 1, 2,…, L-1, i = 0, 1, 2,…, N_l, and j = 1, 2,…, N_{l+1} (the value i = 0 stands for the biases of the neurons of the next layer). Then, the training of the network is equivalent to the following minimization problem (Magoulas & Vrahatis, 2006):

\min_{w_{ij}^{[l]}} \sum_{k=1}^{K} \sum_{d=1}^{D} \left( Y_{kd} - y_{kd} \right)^2,    (2)

where Y_kd is the actual output of the d-th output neuron when the input vector Q_k is presented to the network, while y_kd is the corresponding desired output. This error function is not the only possible choice for the objective function. A variety of distance functions are available in the literature, such as the Minkowsky, Mahalanobis, Camberra, Chebychev, quadratic, correlation, Kendall's rank correlation and chi-square distance metrics; the context-similarity measure; the contrast model; hyperrectangle distance functions; and others (Magoulas & Vrahatis, 2006; Wilson & Martinez, 1997).

The minimization problem defined in equation (2) can be straightforwardly solved using PSO by encoding the network weights as particle components. Thus, each particle constitutes an individual weight configuration, evaluated in the context of equation (2). The underlying optimization problem is high-dimensional and can be highly nonlinear under sigmoid transfer functions.
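The following sketch shows one way of wiring equation (2) to a swarm optimizer: a particle is a flat parameter vector, decoded into per-layer weight matrices and biases whose sizes follow equation (1). The layout (weights first, then biases, layer by layer) is an illustrative choice, not one prescribed by the text.

```python
import numpy as np

def unpack(p, sizes):
    """Decode a particle p into per-layer weight matrices and bias vectors.
    sizes = [N1, ..., NL]; len(p) must equal n from equation (1)."""
    weights, biases, pos = [], [], 0
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = np.reshape(p[pos:pos + n_in * n_out], (n_in, n_out))
        pos += n_in * n_out
        b = np.asarray(p[pos:pos + n_out])
        pos += n_out
        weights.append(W)
        biases.append(b)
    return weights, biases

def forward(weights, biases, Q, lam=1.0):
    """Feedforward pass with the logistic transfer function."""
    A = Q
    for W, b in zip(weights, biases):
        A = 1.0 / (1.0 + np.exp(-lam * (A @ W + b)))
    return A

def nn_error(p, sizes, Q, Y):
    """Objective of equation (2): summed squared output error."""
    weights, biases = unpack(p, sizes)
    return float(np.sum((forward(weights, biases, Q) - Y) ** 2))
```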
All PSO concepts and properties still hold in the case of NN training, since the formulation of the problem as a problem of global optimization adheres to the general optimization framework that governs all PSO developments discussed in the first part of the book at hand.

Learning the eXclusive OR: A Simple Example

Let us now illustrate the application of PSO in NN training with a simple example, namely the eXclusive OR (XOR) problem. The main goal is to train a NN to learn the logical XOR operation, which is defined with the truth table of Table 1. A properly trained NN shall admit a two-dimensional input vector from the set {(0,0), (0,1), (1,0), (1,1)} and produce the correct output reported in Table 1. A NN with a single hidden layer of two neurons and a logistic transfer function is used to achieve this (Haykin, 1999, p. 175).

Table 1. Truth table of the logical XOR operation

A   B   A XOR B
0   0   0
0   1   1
1   0   1
1   1   0

Setting the problem in the framework of the previous section, there will be three layers, i.e., L = 3. The first (input) layer consists of N1 = 2 neurons, since the input vectors are 2-dimensional. The hidden layer consists of N2 = 2 neurons, and the third (output) layer consists of N3 = 1 neuron, since the output can be either "0" or "1". There is a total number of 6 weights in the interconnections among all neurons, plus 3 biases of the neurons in the hidden and output layers. Hence, there is a total of 9 parameters that need to be specified, i.e., the corresponding optimization problem is 9-dimensional, as is also derived from equation (1). The corresponding objective function for this specific problem can also be given in a closed-form expression, as reported in Parsopoulos and Vrahatis (2002).

PSO can now be applied using a swarm of 9-dimensional particles. Each particle corresponds to a different weight configuration of the NN, and it is evaluated based on equation (2). However, the problem is highly nonlinear, with a plethora of local minima as well as flat regions. This can impose difficulties on the detection of the global minimizer for any optimization algorithm. In this case, the stretching technique of Chapter Five can be used.

Parsopoulos and Vrahatis (2002) applied the inertia weight PSO variant on this problem, using a swarm size equal to 80, while the inertia weight was decreased from 1.0 towards 0.4; c1 = c2 = 0.5; and the weights and biases were initialized randomly and uniformly in the range [-1,1]^9. As the global minimum of the objective function is by definition equal to zero, one can identify whether the algorithm has detected the global minimum or not. Parsopoulos and Vrahatis (2002) performed 100 experiments and, if PSO failed to converge to a global minimizer, stretching was applied and the algorithm continued its operation on the transformed function. The obtained results revealed that stretching was able to increase the efficiency of PSO from 77% to 100%, in terms of the number of successful experiments, at the cost of an increased mean number of iterations, as reported in Table 2.

Table 2. Results for the XOR problem with and without stretching

Statistic   Standard PSO   Stretched cases only   PSO with stretching
Success     77%            23%                    100%
Mean        1459.7         29328.6                7869.6
St.D.       1143.1         15504.2                13905.4

Alternatively to stretching, the deflection procedure, which has a more local effect on the objective function, can be used, as presented in Chapter Five. In general, our experience shows that these two techniques can be valuable tools for tackling machine learning problems with PSO.
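Using the nn_error sketch from the previous section, the XOR setup of this example takes the following form. The pso_minimize call stands in for any PSO driver; it is a hypothetical helper, not a function defined in the book.

```python
import numpy as np

# XOR training patterns from Table 1.
Q = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

sizes = [2, 2, 1]     # L = 3 layers; equation (1) gives n = 9 parameters

p = np.random.uniform(-1.0, 1.0, size=9)   # one randomly initialized particle
print(nn_error(p, sizes, Q, Y))            # error of this weight configuration

# best = pso_minimize(lambda p: nn_error(p, sizes, Q, Y),
#                     dim=9, swarm_size=80)   # hypothetical PSO driver
```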
FURTHER APPLICATIONS

In the XOR problem, as well as in the general framework presented so far in this chapter, human intervention was needed to determine crucial properties of the NN, such as the number of hidden layers and neurons. These requirements can be reduced by encoding the corresponding parameters, L, N2,…, N_{L-1}, as components of the particles of PSO. The numbers of neurons in the input and output layers, N1 and NL, respectively, are excluded, since they depend solely on the dimensions of the input and output vectors. Thus, the number of hidden layers, neurons, weights, and biases can all be considered as parameters to be determined by PSO. In this case, special care shall be taken, since the produced particles will not necessarily be of the same size.

For example, in the XOR problem, a network with 1 hidden layer and 3 hidden neurons would be modeled with a 15-dimensional particle, instead of the 9-dimensional case of 2 hidden neurons presented in the previous section. This particle would consist of the number of layers, L = 3, the number of hidden neurons in the hidden layer, N2 = 3, and 13 weights and biases. Similarly, a network with 2 hidden layers and 2 hidden neurons in each would result in an 18-dimensional particle containing L, N2, N3, and 15 weights and biases. Clearly, if the swarm consists of particles with different dimensions, the standard PSO update equations must be properly modified. This is possible by truncating or augmenting each particle; such approaches are reported in (Binos, 2002; Zhang et al., 2000). A rough sketch of such an architecture-encoding particle is given at the end of this section.

Moreover, instead of feedforward NNs, different network types have also been tackled with different PSO variants. For instance, Ismail and Engelbrecht (2000) used PSO to train product unit NNs, while Van den Bergh and Engelbrecht (2000) applied cooperative PSO to feedforward NNs with promising results. Al-kazemi and Mohan (2002) proposed a multi-phase PSO approach to train feedforward NNs, using many swarms that combine different search criteria and hill-climbing. A hybrid approach that combines GAs with PSO for designing recurrent NNs to solve a dynamic plant control problem is proposed by Juang (2004). In this case, PSO and the genetic algorithm produce an equal portion of a population of recurrent NN designs at each iteration, while PSO additionally refines the elite individuals of the population. Finally, very interesting applications for analyzing the levels of pollution in central Hong Kong and the stage prediction of the Shing Mun river are reported in Lu et al. (2002) and Chau (2006), respectively.
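The fragment below sketches how an architecture-encoding particle of the kind described above could be decoded. The component layout (L first, then the hidden layer sizes, then the flat parameter vector) and the rounding of the architecture components to integers are illustrative assumptions, not the encodings of the cited works.

```python
import numpy as np

def decode_architecture(p, n_in, n_out):
    """Split a variable-architecture particle into layer sizes and parameters.

    p[0]       : number of layers L (rounded to an integer),
    p[1:L-1]   : hidden layer sizes N2, ..., N_{L-1} (rounded),
    p[L-1:]    : the n weights and biases of equation (1).
    """
    L = int(np.floor(p[0] + 0.5))
    hidden = [max(1, int(np.floor(v + 0.5))) for v in p[1:L - 1]]
    sizes = [n_in] + hidden + [n_out]
    n = sum(n2 * (1 + n1) for n1, n2 in zip(sizes[:-1], sizes[1:]))
    return sizes, p[L - 1:L - 1 + n]
```

For the XOR example with L = 3 and N2 = 3, this layout yields exactly the 15-dimensional particle mentioned above: two architecture components plus 13 weights and biases.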
FUZZY COGNITIVE MAPS LEARNING WITH PSO

Learning in fuzzy cognitive maps constitutes an active research field with many significant applications in industry, engineering, and bioinformatics. In the following sections, we present the latest developments on the application of PSO on this machine learning task.

Fuzzy Cognitive Maps

Fuzzy cognitive maps (FCMs) are simulation models that combine concepts from artificial neural networks and fuzzy logic. The neuro-fuzzy representation equips FCMs with an inherent ability for abstraction in knowledge representation and adaptation, rendering them a very useful tool for modeling and studying complex systems. To date, FCMs have been used in a plethora of applications in diverse scientific fields, including social and organizational systems (Craiger et al., 1996; Taber, 1991, 1994), circuit design (Styblinski & Meyer, 1988), industrial process control (Stylios et al., 1999), supervisory control systems (Groumpos & Stylios, 2000; Stylios & Groumpos, 1998; Stylios et al., 1999), and bioinformatics (Georgopoulos et al., 2003; Parsopoulos et al., 2004).

FCMs were originally introduced by Kosko (1986) as directional graphs with feedback. Their representation is similar to that of causal concept maps, consisting of nodes that represent key concepts of the simulated system, and links among them that represent their causal relationships. The degree of causality between two concepts is represented with a numerical weight on their interconnecting link. A simple FCM with 5 concepts and 7 weights is depicted in Fig. 2.

Figure 2. A simple FCM with 5 concepts (nodes) and 7 weights (arcs).

To put it formally, let M be the number of concepts (nodes), C_i, i = 1, 2,…, M, of the FCM. Each concept assumes a numerical value, A_i ∈ [0,1], i = 1, 2,…, M, that quantifies C_i or its effect. An edge with direction from C_i to another concept, C_j, denotes a causality relationship between them. The link is weighted with a numerical value, w_ij ∈ [-1,1]. Positive weights denote positive causality; hence, an increase in the value, A_i, of the concept C_i triggers an increase in the value, A_j, of C_j. On the other hand, negative causality is expressed with negative weights, and an increase in A_i results in a decrease of A_j, and vice versa. The weights of an FCM can be represented with a matrix:

W = \begin{pmatrix} w_{11} & w_{12} & \cdots & w_{1M} \\ w_{21} & w_{22} & \cdots & w_{2M} \\ \vdots & \vdots & \ddots & \vdots \\ w_{M1} & w_{M2} & \cdots & w_{MM} \end{pmatrix},

where the i-th row includes the causality relationships between C_i and the rest of the concepts. Zero weights, w_ij = 0, are used to represent the absence of causality (and therefore of interconnection) between C_i and C_j.

The initial setting of an FCM, including both its design and the initial values of weights and concepts, is determined by a group of experts with in-depth knowledge of the considered problem (Stylios et al., 1999; Stylios & Groumpos, 2000). In order to avoid the error-prone procedure of directly assigning numerical values to concepts and weights, linguistic modifiers are used and then converted to fuzzy functions. Thus, numerical values are finally assigned through a fuzzification-defuzzification procedure. If needed, the experts can impose strict bounds on these values to retain their physical meaning. After the initial configuration, the FCM behaves similarly to a discrete dynamical system. Keeping the weights fixed, the concept values are let to converge to a stable state by applying the following iterative update rule (Kosko, 1997; Stylios & Groumpos, 2004):

A_i(t+1) = F\left( A_i(t) + \sum_{k=1, k \neq i}^{M} w_{ki} A_k(t) \right),    (3)

where t stands for the iteration counter; A_i(t+1) is the value of concept C_i at iteration t+1; A_k(t) is the value of C_k at iteration t; and w_ki is the weight of the link from C_k to C_i.
The function F is usually a sigmoid, similarly to the case of the feedforward neural networks described in the previous sections. After its convergence, which usually requires a small number of iterations of equation (3), the FCM shall be capable of simulating the underlying system accurately, and desirable values shall be assumed by the concepts. Unfortunately, this is not always possible. Wide opinion variations among the experts are translated into weight values that are incapable of leading the system to desirable states. In such cases, a learning algorithm is needed to modify the weights further, within their bounds, so that desirable steady states can be achieved.

There are just a few established learning algorithms for FCMs, and they can be classified into two major categories. The first one consists of algorithms based on rules for unsupervised training of artificial neural networks (Kosko, 1997). The second category consists of evolutionary algorithms (Khan et al., 2004; Koulouriotis et al., 2001) and PSO-based approaches (Papageorgiou et al., 2004, 2005; Parsopoulos et al., 2003, 2004; Petalas et al., 2007, 2009). The latter approaches are described in the following section.

A Learning Approach Based on PSO

The learning procedure is similar, to some extent, to that of neural network training. Let M be the number of concepts, C_i, and A_i ∈ [0,1], i = 1, 2,…, M, be their values. Let also M* ≤ M be the number of concepts whose output is of interest, i.e., whose values are crucial for the operation of the simulated system. These concepts are called output concepts, and we denote them as C_o1, C_o2,…, C_oM*. Their values, A_o1, A_o2,…, A_oM*, are monitored and used as the performance criterion for the learning procedure. Hence, the user is interested in detecting a weight matrix, W = [w_ij], i, j = 1, 2,…, M, so that the converged FCM attains a desirable steady state, while its weights retain their physical meaning. A desirable steady state for a given weight matrix, W, shall provide output concept values that lie within prespecified bounds:

A_{oi}^{[\min]} \le A_{oi} \le A_{oi}^{[\max]}, \quad i = 1, 2, \ldots, M^*,

considered to be crucial for the proper operation of the simulated system. A proper objective function that guarantees this property shall be defined for the learning procedure. Papageorgiou et al. (2005) proposed the following objective function:

f(W) = \sum_{i=1}^{M^*} \left| A_{oi}^{[\min]} - A_{oi} \right| H\!\left( A_{oi}^{[\min]} - A_{oi} \right) + \sum_{i=1}^{M^*} \left| A_{oi} - A_{oi}^{[\max]} \right| H\!\left( A_{oi} - A_{oi}^{[\max]} \right),    (4)

where H is the Heaviside function:

H\!\left( A_{oi}^{[\min]} - A_{oi} \right) = \begin{cases} 0, & \text{if } A_{oi}^{[\min]} - A_{oi} < 0, \\ 1, & \text{otherwise}, \end{cases} \qquad H\!\left( A_{oi} - A_{oi}^{[\max]} \right) = \begin{cases} 0, & \text{if } A_{oi} - A_{oi}^{[\max]} < 0, \\ 1, & \text{otherwise}, \end{cases}

and A_oi, i = 1, 2,…, M*, are the steady-state values of the output concepts, obtained by applying the iterative rule of equation (3) with the weight matrix W.

The objective function of equation (4) is actually a penalty function. Its global minimizers are weight matrices that produce output concept values within the prespecified bounds. Any other weight matrix is penalized by an amount proportional to the degree of bound violation. In contrast to gradient-based approaches, the non-differentiability of f(W) does not constitute an obstacle to the application of PSO. Possible further requirements implied by the problem at hand can easily be incorporated in equation (4) to achieve a desirable weight matrix. PSO is applied straightforwardly using the objective function of equation (4).
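The simulation rule (3) and the penalty (4) can be sketched as follows. The sigmoid slope λ = 1 and the initial concept values are assumptions of this illustration, since the text does not fix them here.

```python
import numpy as np

def fcm_steady_state(W, A0, lam=1.0, tol=1e-8, max_iter=1000):
    """Iterate equation (3) until the concept values stabilize.
    W[k, i] is the weight of the link from C_k to C_i (zero diagonal,
    since FCMs have no self-feedback)."""
    A = np.asarray(A0, dtype=float)
    for _ in range(max_iter):
        A_new = 1.0 / (1.0 + np.exp(-lam * (A + W.T @ A)))
        if np.max(np.abs(A_new - A)) < tol:
            break
        A = A_new
    return A_new

def fcm_objective(W, A0, out_idx, lo, hi):
    """Penalty objective of equation (4) over the output concepts."""
    A = fcm_steady_state(W, A0)[out_idx]
    below = np.maximum(lo - A, 0.0)     # |A_min - A| * H(A_min - A)
    above = np.maximum(A - hi, 0.0)     # |A - A_max| * H(A - A_max)
    return float(np.sum(below) + np.sum(above))
```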
Each particle of the swarm is a weight matrix, encoded as a vector by considering its rows in turn:

W = [\underbrace{w_{12}, \ldots, w_{1M}}_{\text{row 1}}, \underbrace{w_{21}, w_{23}, \ldots, w_{2M}}_{\text{row 2}}, \ldots, \underbrace{w_{M1}, \ldots, w_{M,M-1}}_{\text{row } M}]^T.

The elements, w_11, w_22,…, w_MM, of the main diagonal of W are omitted in the vectorial representation, as, by definition, FCMs have no self-feedback in their nodes, and therefore the corresponding weights are all equal to zero, w_ii = 0, i = 1, 2,…, M. The dimension of the particles for an FCM with M concepts will be at most equal to M×(M-1), although in many applications it is significantly smaller, due to the sparsity of the weight matrices.

Each particle (weight matrix W) is evaluated with f(W), by using the weight matrix W in the FCM and letting it converge to a steady state, in order to obtain the required concept values involved in equation (4). Subsequently, the PSO update equations are used to produce new weight matrices (particles). Initialization of each weight is performed randomly and uniformly either in [-1,0], if it is specified as negative by the experts, or in [0,1] otherwise. Obviously, there is no restriction regarding the employed PSO variant, while the objective function can be modified accordingly to fit the framework of different problems. A flowchart of the proposed PSO-based learning procedure is depicted in Fig. 3.

Figure 3. Flowchart of the PSO-based learning procedure.

In the next section, an illustrative example of the learning procedure is provided for an industrial process control problem.

Industrial Process Control: An Illustrative Example

This problem was addressed by Papageorgiou et al. (2005), and it is ideal for illustrating the PSO-based learning procedure. The problem, previously considered by Stylios and Groumpos (1998), consists of the simulation of a simple process control problem from industry. The system, which is illustrated in Fig. 4, consists of a tank and three valves, denoted as V1, V2, and V3, which control the amount of liquid in the tank. Valves V1 and V2 pour two liquid chemicals, whose chemical reaction produces a new liquid, into the tank. A sensor (gauger) is sunk into the tank and gauges the specific gravity, G, of the produced liquid. If G attains a value within a desirable range, [Gmin, Gmax], then V3 opens and empties the tank. There is also a security limit on the height, T, of the liquid in the tank, which shall not exceed lower and upper bounds, Tmin and Tmax, respectively. Therefore, the main goal of this simple control process is the preservation of G and T within the desirable limits:

Gmin ≤ G ≤ Gmax,  Tmin ≤ T ≤ Tmax.

Figure 4. The industrial process control system.

A group of experts designed an FCM that simulates the system, following the procedure described in the previous sections. First, they decided on the number of concepts and their interactions, which are described as "negative", "positive", and "no influence", based on the causality relationships among them. The corresponding FCM is depicted in Fig. 5 and consists of the following five concepts:

1. Concept C1: Height of liquid in the tank. It depends on the state of valves V1, V2, and V3.
2. Concept C2: State of valve V1 (open, closed, or partially open).
A consensus among experts was attained regarding the di-rections of the links. Also, the initial weights were determined by assigning linguistic variables, such as“weak”, “strong” etc., along with the corresponding fuzzy sets, as defined in Cox (1999). The linguisticvariables are combined to a single linguistic weight using the widely used SUM technique (Lin & Lee,1996). This weight is transformed to a numerical value using the center of area defuzzification method(Kosko, 1992; Lin & Lee, 1996).All experts agreed on the same range for the weights w21, w31and w41, while most of them agreedfor w12and w13(Papageorgiou et al., 2005). However, no agreement was attained for w15, w52, and w54,where their opinions varied significantly. The ranges as implied by the fuzzy regions of the weights arereported in Table 3.PSO was applied on the corresponding 8-dimensional optimization problem for the detection of aweight setting that retains the output concepts, C1and C5, within the following bounds assigned by theexperts:0.68 ≤ A1≤ 0.70, 0.78 ≤ A5≤ 0.85. (5)Thus, the objective function of equation (4) becomes:f W A H A A H A( ) . . . ..= - -( )+ - -( )+0 68 0 68 0 70 0 700 781 1 1 1-- -( )+ - -( )A H A A H A5 5 5 50 78 0 85 0 85. . . .Figure 5. The FCM that simulates the system of Fig. 4 • 178. 160Applications in Machine LearningAdditionally, each weight was constrained in the range [-1,0] or [0,1] to avoid physically meaninglessweights.Papageorgiouetal.(2005)consideredtwodifferentscenarios.Inthefirst,theyadmittedallconstraintsposed by the experts. In the second, they considered only those constraints where a unanimous agreementamong the experts was achieved. In both cases they used the constriction coefficient PSO variant withring neighborhood topology of radius r = 3, and a swarm size equal to 20. Also, the default parameters,χ = 0.729, c1= c2= 2.05, were used, while the desired accuracy on the objective function value was setto 10-8. For each case, 100 independent experiments were performed and their results were statisticallyanalyzed. The most important observations and conclusions are reported in the following sections.Scenario One: All Constraints are RetainedIn the first set of experiments, all constraints defined by the experts in Table 3 were used. However, asreported in Papageorgiou et al. (2005), no solution that fulfills the relations (5) was found in 100 ex-periments. This is a strong indication that the provided weight bounds are not proper and, consequently,they cannot lead the FCM to a desirable steady state. The best of the obtained weight matrices was thefollowing:W*. . . . .. . . . .. . .=- -0 00 0 35 0 20 0 00 0 400 40 0 00 0 00 0 00 0 000 50 0 00 0 00 0.. .. . . . .. . . . .00 0 000 80 0 00 0 00 0 00 0 000 00 0 75 0 00 0 20 0 00-æèçççççççççççççççççöø÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷÷,which leads the FCM to the following steady state:A1= 0.6723, A2= 0.7417, A3= 0.6188, A4= 0.6997, A5= 0.7311.This steady state clearly violates both constraints in relation (5).Assumingthattheconstraintsfortheweightsw15,w52,and w54,forwhichtheopinionsofexpertsvariedsignificantly, were the reason for the disability of the algorithm to achieve a solution, Papageorgiou etal. (2005) attempted to solve the problem by omitting them. Thus, they experimented by omitting theconstraints, initially one by one, and subsequently in pairs. Despite their effort, again, no solution wasdetected.Table 3. Ranges of the weights as implied by their fuzzy regions for the industrial process control prob-lem of Fig. 
Finally, they omitted all three constraints, allowing the three weights to assume values in the whole range [0,1] (they all have positive signs). In this case, proper weight matrices were obtained, although in substantially different ranges than those determined by the experts (Papageorgiou et al., 2005). Thus, the algorithm was able to correct the inconsistencies of the experts in a very efficient manner, requiring at most 620 function evaluations to find a solution with the desirable accuracy. Of course, due to the interactions among weights and concepts, there is a multitude of different optimal matrices. Indicatively, we report one of the obtained optimal matrices:

W^* = \begin{pmatrix} 0.00 & -0.45 & -0.20 & 0.00 & 0.84 \\ 0.40 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.50 & 0.00 & 0.00 & 0.00 & 0.00 \\ -0.80 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.00 & 0.99 & 0.00 & 0.10 & 0.00 \end{pmatrix},

which leads the FCM to the desirable steady state:

A1 = 0.6805, A2 = 0.7798, A3 = 0.6176, A4 = 0.6816, A5 = 0.7967.

Scenario Two: Only Unanimously Agreed Constraints are Retained

In the second scenario, Papageorgiou et al. (2005) retained only the unanimously agreed constraints, i.e., those of the weights w21, w31, and w41. The rest of the weights were let to move unrestricted in [-1,0] or [0,1], depending on their sign. In this case, it was observed that the three constrained weights assumed values in remarkably narrower ranges than those proposed by the experts. This implies that PSO can also be used for the further refinement of the assigned bounds. Numerous experiments were performed, with the three weights being either fixed or moving in their corresponding ranges (Papageorgiou et al., 2005). Some of the obtained optimal weight matrices were the following:

W^* = \begin{pmatrix} 0.00 & -0.44 & -0.10 & 0.00 & 1.00 \\ 0.40 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.50 & 0.00 & 0.00 & 0.00 & 0.00 \\ -0.81 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.00 & 1.00 & 0.00 & 0.13 & 0.00 \end{pmatrix},

which leads the FCM to the steady state:

A1 = 0.6805, A2 = 0.7872, A3 = 0.6390, A4 = 0.6898, A5 = 0.8172;

W^* = \begin{pmatrix} 0.00 & -0.27 & -0.20 & 0.00 & 1.00 \\ 0.40 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.50 & 0.00 & 0.00 & 0.00 & 0.00 \\ -0.81 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.00 & 1.00 & 0.00 & 0.10 & 0.00 \end{pmatrix},

which leads the FCM to the steady state:

A1 = 0.6816, A2 = 0.8090, A3 = 0.6174, A4 = 0.6822, A5 = 0.8174;

and

W^* = \begin{pmatrix} 0.00 & -0.23 & -0.13 & 0.00 & 0.86 \\ 0.40 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.50 & 0.00 & 0.00 & 0.00 & 0.00 \\ -0.81 & 0.00 & 0.00 & 0.00 & 0.00 \\ 0.00 & 0.92 & 0.00 & 0.12 & 0.00 \end{pmatrix},

which leads the FCM to the steady state:

A1 = 0.6817, A2 = 0.7985, A3 = 0.6323, A4 = 0.6860, A5 = 0.8007.

Although different, all the obtained weight matrices lead the FCM to desirable steady states. The weights w13, w15, w52, and w54 converged to regions significantly different from those suggested by the experts, while w21, w31, and w41 remained almost fixed at values close to the initial ones suggested by the experts. Also, the weight w12 deviates slightly from its initial region. These interesting observations constitute a small fraction of the information derived from the application of the PSO-based learning procedure on the industrial process control problem.
However, it suffices to reveal the potential of the PSO-based learning algorithm to significantly increase our intuition regarding the problem dynamics, and to help towards the further refinement of expert knowledge. Moreover, it can provide robust solutions in cases where the experts disagree or doubt their decisions. The next section briefly reports further developments and applications of PSO-based learning procedures.

Further Developments on PSO-Based Learning Algorithms

The PSO-based learning procedure was introduced in Parsopoulos et al. (2003) and further evaluated in Papageorgiou et al. (2005) for the industrial process control problem presented in the previous section. Its efficiency and apparent practical value prompted its further use in different applications, such as the modeling of radiation therapy systems (Parsopoulos et al., 2004), and more complex industrial process control problems (Papageorgiou et al., 2004).

Concurrently, different PSO variants were assessed on the FCM learning problem. One of the most efficient approaches was recently proposed by Petalas et al. (2007, 2009). This approach utilizes the memetic PSO (MPSO) algorithm presented in Chapter Four. More specifically, MPSO approaches, equipped with either the Hooke and Jeeves or the Solis and Wets local search, were investigated on interesting FCM learning problems, including a complex industrial process control problem, the simulation of a radiation therapy system, a heat exchanger problem, and the simulation of an ecological industrial park problem (Petalas et al., 2009). Moreover, MPSO compared favorably against the DE and GA algorithms on these problems. The reader is referred to the original paper for the complete presentation of these applications. Besides the form of equation (4), different objective functions can be used, depending on the problem at hand. In the following, we briefly describe the radiation therapy problem, where such an objective function is used.

Radiotherapy is a popular means of cancer treatment. It is a complex process that involves a large number of treatment variables. The main objective of radiotherapy is the delivery of the highest possible amount of radiation to the tumor, while minimizing the exposure of healthy tissue and critical organs to the radiation. Hence, treatment planning and doctor-computer interaction are required prior to the actual final treatment (Parsopoulos et al., 2004; Petalas et al., 2009). The radiation therapy process is modeled by a supervisor-FCM, which is constructed by experts and consists of the following six concepts:

1. Concept C1: Tumor localization.
2. Concept C2: Dose prescribed by the treatment planning.
3. Concept C3: Machine-related factors.
4. Concept C4: Human-related factors.
5. Concept C5: Patient positioning and immobilization.
6. Concept C6: Final dose received by the targeted tumor.

The corresponding FCM is depicted in Fig. 6.

Figure 6. The supervisor-FCM for the radiotherapy problem.

The main objective in this problem is the maximization of the final dose received by the tumor, which is described by C6, as well as the maximization of the dose, C2, prescribed by the treatment planning, as defined by the AAPM and ICRP protocols for the determination of the acceptable dose per organ and part of the human body (Khan, 1994; Wells & Niederer, 1998; Willoughby et al., 1996).
Thus, the objective function is modeled so that the values of the two aforementioned concepts are maximized, instead of being restricted within bounds as in equation (4), and it is defined as:

f(W) = -A_2 - A_6,

where the negative sign is used to transform the maximization of the positive values A2 and A6 into an equivalent minimization problem.

This objective function was easily addressed with PSO, as reported in Parsopoulos et al. (2004). In their experiments, both the inertia weight and the constriction coefficient PSO variants were investigated and compared against two DE variants. The standard parameter setting, χ = 0.729, c1 = c2 = 2.05, and a decreasing inertia weight from 1.2 to 0.1 were employed, while the desirable bounds for the output concepts were determined by the experts as follows:

0.80 ≤ A2 ≤ 0.95,  0.90 ≤ A6 ≤ 0.95.

All algorithms detected the same solution matrix (Parsopoulos et al., 2004):

W^* = \begin{pmatrix} 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.5 \\ 0.4 & 0.0 & 0.0 & 0.0 & 0.0 & 0.7 \\ 0.0 & -0.2 & 0.0 & 0.0 & 0.0 & -0.1 \\ 0.0 & 0.0 & 0.0 & 0.0 & -0.2 & -0.2 \\ 0.0 & 0.0 & 0.0 & -0.6 & 0.0 & 0.8 \\ 0.4 & 0.9 & 0.0 & 0.0 & 0.9 & 0.0 \end{pmatrix},

which leads the FCM to the steady state:

A1 = 0.819643, A2 = 0.819398, A3 = 0.659046, A4 = 0.501709, A5 = 0.824788, A6 = 0.916315.

An interesting point is the high positive influence of concept C6 (final dose) on concept C2 (dose prescribed by the treatment planning), as well as on concept C5 (patient positioning). This means that if we succeed in delivering the maximum dose to the target volume, then the initially calculated dose from the treatment planning is the desired one; the same happens with patient positioning (Parsopoulos et al., 2004).
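In code, this maximization variant changes only the objective wrapper around the FCM simulation sketched earlier; λ = 1 and the uniform initial concept values remain assumptions of the illustration.

```python
import numpy as np

def radiotherapy_objective(W):
    """f(W) = -A2 - A6: maximize the prescribed and the delivered dose."""
    A = fcm_steady_state(W, A0=0.5 * np.ones(6))   # initial values assumed
    return -(A[1] + A[5])                          # indices 1, 5 are C2, C6
```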
The convenient manipulation and easy adaptation of the PSO-based learning scheme to different problems render it a valuable tool in machine learning tasks. Using these developments as a starting point, new simulation models were also introduced. Such a model is the interval cognitive map introduced by Petalas et al. (2005), which is based on modeling the concepts and weights with interval numbers. The presentation of the interval arithmetic methodology for learning in interval FCM models is outside the scope of the book at hand, and is thus omitted.

CHAPTER SYNOPSIS

Fundamental concepts of applying PSO on machine learning problems were presented and analyzed. The problems were restricted to the representative cases of training feedforward artificial neural networks and learning in fuzzy cognitive maps. For each case, an illustrative example was used to introduce the reader to the workings of the corresponding PSO-based approach. Thus, the training of a feedforward neural network with one hidden layer for learning the XOR logical operation, as well as the simulation of an industrial process control problem with a fuzzy cognitive map, were analyzed. Moreover, elements for further study were provided in the reported literature, along with brief references to related applications.

REFERENCES

Al-kazemi, B., & Mohan, C. K. (2002). Training feedforward neural networks using multi-phase particle swarm optimization. In L. Wang, J. C. Rajapakse, K. Fukushima, S.-Y. Lee & X. Yao (Eds.), Proceedings of the 9th International Conference on Neural Information Processing (ICONIP'02), Singapore (pp. 2615-2619).

Binos, T. (2002). Evolving neural network architecture and weights using an evolutionary algorithm. MSc thesis, RMIT University, Melbourne, Australia.

Chau, K. W. (2006). Particle swarm optimization training algorithm for ANNs in stage prediction of Shing Mun river. Journal of Hydrology, 329, 363-367. doi:10.1016/j.jhydrol.2006.02.025

Cox, E. (1999). The fuzzy systems handbook. Cambridge, MA: Academic Press.

Craiger, J. P., Goodman, D. F., Weiss, R. J., & Butler, A. (1996). Modeling organizational behavior with fuzzy cognitive maps. International Journal of Computational Intelligence and Organizations, 1, 120-123.

Georgopoulos, V., Malandraki, G., & Stylios, C. (2003). A fuzzy cognitive map approach to differential diagnosis of specific language impairment. Journal of Artificial Intelligence in Medicine, 29(3), 261-278. doi:10.1016/S0933-3657(02)00076-3

Groumpos, P. P., & Stylios, C. D. (2000). Modelling supervisory control systems using fuzzy cognitive maps. Chaos, Solitons, and Fractals, 11, 329-336. doi:10.1016/S0960-0779(98)00303-8

Haykin, S. (1999). Neural networks: A comprehensive foundation. Upper Saddle River, NJ: Prentice Hall.

Ismail, A., & Engelbrecht, A. P. (2000). Global optimization algorithms for training product unit neural networks. In Proceedings of the 2000 IEEE International Joint Conference on Neural Networks (IJCNN'00), Como, Italy (pp. 132-137).

Juang, C.-F. (2004). A hybrid of genetic algorithm and particle swarm optimization for recurrent network design. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 34(2), 997-1006. doi:10.1109/TSMCB.2003.818557

Khan, F. (1994). The physics of radiation therapy. Baltimore: Williams & Wilkins.

Khan, M. S., Khor, S., & Chong, A. (2004). Fuzzy cognitive maps with genetic algorithm for goal-oriented decision support. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 12, 31-42. doi:10.1142/S0218488504003028

Kosko, B. (1986). Fuzzy cognitive maps. International Journal of Man-Machine Studies, 24, 65-75. doi:10.1016/S0020-7373(86)80040-2

Kosko, B. (1992). Neural networks and fuzzy systems. Upper Saddle River, NJ: Prentice Hall.

Kosko, B. (1997). Fuzzy engineering. Upper Saddle River, NJ: Prentice Hall.

Koulouriotis, D. E., Diakoulakis, I. E., & Emiris, D. M. (2001). Learning fuzzy cognitive maps using evolution strategies: A novel schema for modeling and simulating high-level behavior. In Proceedings of the 2001 IEEE Congress on Evolutionary Computation (CEC'01), Seoul, Korea (pp. 364-371).

Lin, C. T., & Lee, C. S. (1996). Neural fuzzy systems: A neuro-fuzzy synergism to intelligent systems. Upper Saddle River, NJ: Prentice Hall.

Lu, W. Z., Fan, H. Y., Leung, A. Y. T., & Wong, J. C. K. (2002). Analysis of pollutant levels in central Hong Kong applying neural network method with particle swarm optimization. Environmental Monitoring and Assessment, 79, 217-230. doi:10.1023/A:1020274409612

Magoulas, G. D., & Vrahatis, M. N. (2006). Adaptive algorithms for neural network supervised learning: A deterministic optimization approach. International Journal of Bifurcation and Chaos in Applied Sciences and Engineering, 16(7), 1929-1950. doi:10.1142/S0218127406015805

Papageorgiou, E. I., Parsopoulos, K. E., Groumpos, P. P., & Vrahatis, M. N. (2004). Fuzzy cognitive maps learning through swarm intelligence. In Lecture Notes in Artificial Intelligence (LNAI), Vol. 3070 (pp. 344-349). Berlin: Springer.

Papageorgiou, E. I., Parsopoulos, K. E., Stylios, C. D., Groumpos, P. P., & Vrahatis, M. N. (2005). Fuzzy cognitive maps learning using particle swarm optimization. Journal of Intelligent Information Systems, 25(1), 95-121. doi:10.1007/s10844-005-0864-9
Parsopoulos, K. E., Papageorgiou, E. I., Groumpos, P. P., & Vrahatis, M. N. (2003). A first study of fuzzy cognitive maps learning using particle swarm optimization. In Proceedings of the 2003 IEEE Congress on Evolutionary Computation (CEC'03), Canberra, Australia (pp. 1440-1447).

Parsopoulos, K. E., Papageorgiou, E. I., Groumpos, P. P., & Vrahatis, M. N. (2004). Evolutionary computation techniques for optimizing fuzzy cognitive maps in radiation therapy systems. Lecture Notes in Computer Science, 3102, 402-413.

Parsopoulos, K. E., & Vrahatis, M. N. (2002). Recent approaches to global optimization problems through particle swarm optimization. Natural Computing, 1(2-3), 235-306. doi:10.1023/A:1016568309421

Petalas, Y. G., Papageorgiou, E. I., Parsopoulos, K. E., Groumpos, P. P., & Vrahatis, M. N. (2005). Interval cognitive maps. In Proceedings of the International Conference of Numerical Analysis and Applied Mathematics (ICNAAM 2005), Rhodes, Greece (pp. 882-885).

Petalas, Y. G., Parsopoulos, K. E., Papageorgiou, E. I., Groumpos, P. P., & Vrahatis, M. N. (2007). Enhanced learning in fuzzy simulation models using memetic particle swarm optimization. In Proceedings of the 2007 IEEE Swarm Intelligence Symposium (SIS'07) (pp. 16-22). Washington, DC: IEEE.

Petalas, Y. G., Parsopoulos, K. E., & Vrahatis, M. N. (2009). Improving fuzzy cognitive maps learning through memetic particle swarm optimization. Soft Computing, 13(1), 77-94. doi:10.1007/s00500-008-0311-2

Styblinski, M. A., & Meyer, B. D. (1988). Fuzzy cognitive maps, signal flow graphs, and qualitative circuit analysis. In Proceedings of the 2nd IEEE International Conference on Neural Networks, San Diego (CA), USA (pp. 549-556).

Stylios, C. D., Georgopoulos, V., & Groumpos, P. P. (1999). Fuzzy cognitive map approach to process control systems. Journal of Advanced Computational Intelligence and Intelligent Informatics, 3(5), 409-417.

Stylios, C. D., & Groumpos, P. P. (1998). The challenge of modelling supervisory systems using fuzzy cognitive maps. Journal of Intelligent Manufacturing, 9, 339-345. doi:10.1023/A:1008978809938

Stylios, C. D., & Groumpos, P. P. (2000). Fuzzy cognitive maps in modeling supervisory control systems. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, 8(2), 83-98.

Stylios, C. D., & Groumpos, P. P. (2004). Modeling complex systems using fuzzy cognitive maps. IEEE Transactions on Systems, Man and Cybernetics: Part A, 34(1), 159-165.

Taber, R. (1991). Knowledge processing with fuzzy cognitive maps. Expert Systems with Applications, 2, 83-87. doi:10.1016/0957-4174(91)90136-3

Taber, R. (1994). Fuzzy cognitive maps model social systems. AI Expert, 9, 8-23.

Van den Bergh, F., & Engelbrecht, A. P. (2000). Cooperative learning in neural networks using particle swarm optimizers. South African Computer Journal, 26, 84-90.

Wells, D., & Niederer, J. (1998). A medical expert system approach using artificial neural networks for standardized treatment planning. International Journal of Radiation Oncology, Biology, Physics, 41(1), 173-182. doi:10.1016/S0360-3016(98)00035-2

Willoughby, T., Starkschall, G., Janjan, N., & Rosen, I. (1996). Evaluation and scoring of radiotherapy treatment plans using an artificial neural network. International Journal of Radiation Oncology, Biology, Physics, 34(4), 923-930. doi:10.1016/0360-3016(95)02120-5
Wilson, D. R., & Martinez, T. R. (1997). Improved heterogeneous distance functions. Journal of Artificial Intelligence Research, 6, 1-34.

Zhang, C., Shao, H., & Li, Y. (2000). Particle swarm optimisation for evolving artificial neural network. In Proceedings of the 2000 IEEE International Conference on Systems, Man, and Cybernetics, Nashville (TN), USA (pp. 2487-2490).

Chapter 7: Applications in Dynamical Systems

DOI: 10.4018/978-1-61520-666-7.ch007

This chapter is devoted to the application of PSO in dynamical systems. The core subject of the chapter is the problem of detecting periodic orbits of nonlinear mappings. This problem is very interesting and significant, as the study of periodic orbits can reveal several crucial properties of a dynamical system. Traditional root-finding algorithms, such as the Newton-family methods, are widely applied on such problems. However, obstacles arise as soon as non-differentiable or discontinuous mappings come under investigation. In such cases, PSO has been shown to be a very useful and efficient alternative. The chapter aims at presenting fundamental ideas and specific application issues. We thoroughly discuss the transformation of the original problem to a corresponding global optimization task. The application of the deflection technique, presented in Chapter Five, for computing several periodic orbits is analyzed, and the algorithm is illustrated on well known benchmark problems. Finally, we present and discuss a very significant application, i.e., the detection of periodic orbits in 3-dimensional galactic potentials.

INTRODUCTION

Nonlinear mappings are widely used to model conservative or dissipative dynamical systems (Birkhoff, 1917; Bountis & Helleman, 1981; Greene, 1979; Polymilis et al., 1997, 2000, 2003; Skokos, 2001a; Skokos et al., 1997; Vrahatis, 1995; Vrahatis & Bountis, 1994; Vrahatis et al., 1993, 1996, 1997; Verhulst, 1990). Points that are invariant under the mapping possess a central role in its analysis. Such points are called fixed points or periodic orbits of the mapping (Verhulst, 1990). To put it formally, let Φ(x) be a nonlinear mapping, defined as:

\Phi(x) = (\Phi_1(x), \Phi_2(x), \ldots, \Phi_n(x))^T : \mathbb{R}^n \to \mathbb{R}^n.    (1)

Then, an n-dimensional point:

x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n,

is called a fixed point of Φ(x) if it holds that:

\Phi(x) = x,

i.e., the image of x through the mapping Φ(x) remains equal to x. In addition, x is called a fixed point of order p, or a periodic orbit of period p, of Φ(x), if it holds that:

\Phi^p(x) \equiv \underbrace{\Phi(\Phi(\cdots\Phi(x)\cdots))}_{p \text{ times}} = x,    (2)

i.e., the image of x after p subsequent applications of the mapping Φ(x) is equal to x. The numerical detection of periodic orbits is a very challenging task, as analytical expressions are rarely available, and then only for polynomial mappings of low degree and low periods (Helleman, 1977; Hénon, 1969). Classical efficient methods constitute a typical tool for tackling the problem numerically.
However, these algorithms often fail, a failure attributed to the nonexistence (or poor behavior) of partial derivatives in the neighborhood of the fixed points, or to their sensitivity to the large values assumed by the mapping in the neighborhood of saddle-hyperbolic periodic orbits, which are unstable in the linear approximation.

These pathological cases can be remedied by using different methods, such as topological degree-based methods, named characteristic bisection methods (or sign methods) (Vrahatis, 1995; Vrahatis & Bountis, 1994; Vrahatis et al., 1993, 1996, 1997). These methods require only the accurate knowledge of the sign of the components of the considered function. Furthermore, evolutionary computation and swarm intelligence offer an abundance of alternative tools. Parsopoulos and Vrahatis (2003, 2004) offered the first studies of PSO on the detection of periodic orbits of nonlinear mappings. In parallel, they combined PSO with the deflection technique, described in Chapter Five, introducing a scheme capable of detecting multiple periodic orbits of the same period. Their results on widely used benchmark problems justified the efficiency of this approach, rendering it a valuable tool.

An important application of this methodology was published by Skokos et al. (2005), where the aforementioned PSO approach was applied for the detection of periodic orbits in 3-dimensional galactic potentials. The obtained results were very promising, enriching the available knowledge on the studied models by detecting new and unknown periodic orbits. The methodology was further improved with the introduction of an entropy-based memetic PSO approach by Petalas et al. (2007). All these advances are described in detail in the following sections.

DETECTION OF PERIODIC ORBITS OF NONLINEAR MAPPINGS USING PSO

The application of PSO for the detection of periodic orbits of nonlinear mappings requires the development of a proper optimization framework. This framework is needed to transform the nonlinear system-solving problem of equation (2) to a proper minimization task, suitable for the application of PSO. This can be accomplished by proper mathematical manipulations of equation (2), as described in the following paragraphs.

Let Φ(x) be an n-dimensional nonlinear mapping defined as in equation (1), and let O_n = (0, 0, \ldots, 0)^{\top} be the origin of the n-dimensional Euclidean space, \mathbb{R}^n. Also, let p be an integer denoting the desired period. Then, by definition, we can derive the following:

\Phi^p(x) = x \;\Rightarrow\; \Phi^p(x) - x = O_n \;\Rightarrow\; \begin{cases} \Phi_1^p(x) - x_1 = 0, \\ \Phi_2^p(x) - x_2 = 0, \\ \quad \vdots \\ \Phi_n^p(x) - x_n = 0. \end{cases} \qquad (3)

All solutions of this nonlinear system are also periodic orbits of period p of Φ(x). We can now define the following objective function:

f(x) = \sum_{i=1}^{n} \left( \Phi_i^p(x) - x_i \right)^2.
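To make the transformation concrete, here is a minimal Python sketch of this objective function, using the 2-D Hénon map (a common benchmark in this literature) as the mapping Φ; the parameter values a = 1.4, b = 0.3 are an illustrative assumption, not necessarily the settings used in this chapter:

    import numpy as np

    def henon(x, a=1.4, b=0.3):
        # One application of the 2-D Henon map (illustrative choice of mapping).
        return np.array([1.0 - a * x[0] ** 2 + x[1], b * x[0]])

    def phi_p(x, p, mapping=henon):
        # p-fold composition Phi^p(x): apply the mapping p times.
        for _ in range(p):
            x = mapping(x)
        return x

    def objective(x, p, mapping=henon):
        # f(x) = sum_i (Phi_i^p(x) - x_i)^2; f vanishes exactly at period-p orbits.
        x = np.asarray(x, dtype=float)
        d = phi_p(x, p, mapping) - x
        return float(np.dot(d, d))

A particle swarm (or any derivative-free global optimizer) minimizing `objective` needs only function values, which is precisely why the approach survives non-differentiable or discontinuous mappings; global minimizers with f(x) = 0 are period-p orbits.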
The Wikipedia article on Moseley's law seems to show that the screening of heavy atoms is by exactly 1 electron charge (in the limit of large Z, within experimental precision and nonrelativistic limits). But why is this exactly one unit? The other K-shell electron is not screening exactly one unit, and this seems to be a conspiracy of other electrons. I suspect it is because of an unappreciated hole-picture of deep holes in heavy atoms (electrons missing in deep shells), and I will describe this theory briefly.

If you remove an electron from close to the nucleus, the electron-hole behaves as an object with positive charge and negative mass (this is why it orbits the nucleus that it is repelled by). The state is not quite a vacuum, because of the presence of other electrons, but the rigidity of the Fermi liquid near the nucleus of a heavy atom means that the hole behaves as a single particle. This single-particle behavior is in the potential background of the nucleus and the other electrons, and it is possible that the result can give an exact 1 unit of screening. I developed the formalism a little bit to see what the form should be, but I did not see any reason for 1 unit of screening. Perhaps there is none, but it looks to be more than a coincidence.

I am confident that your explanation is at least on the right track. At any rate, even the Wikipedia article does say that $(Z-1)$ arises because of differences in the electron-electron interactions between the initial and final states, and holes could be very useful to quantify this difference. It must be possible to justify the number 1 in some way. – Luboš Motl Apr 13 '12 at 10:06

Ron, what about charge conservation? In truth there are Z-1 electrons going around, and the hole must take its apparent charge from the Z of the nucleus to have charge conservation of the system from afar. Charge number is quantized, after all, in units of e. This might leave the nucleus at Z-1. – anna v Apr 13 '12 at 10:38

@annav: This doesn't work--- the Z-1 is only for the K-shell, and I don't see a reason this is linked to the number of electrons. I also am not sure this is nonrelativistically exact--- I just saw a convergence in the values at large Z by eye. – Ron Maimon Apr 13 '12 at 19:32

Are you saying that by "remove" you mean the electron is on a higher energy shell but still attached to the atom? If it is off the atom then the atom is ionized and will have a charge +1. – anna v Apr 14 '12 at 3:10

Look at "binding energy" here: en.wikipedia.org/wiki/Ionization_energy. – anna v Apr 14 '12 at 3:14

1 Answer

I doubt that it is exactly 1. See Effective_nuclear_charge for the references therein. The Clementi tables go up to Radon and use Slater-type orbitals. You can calculate it on your own with a quantum chemistry program, defining the correct symmetry group for your excited wavefunction with an empty K-shell. The 1s electron coefficients are not integer, but close to (Z-1). Another important point is the inclusion of relativistic effects, most important for s-shell electrons in heavy elements. E.g. the hyperfine interaction (the Fermi contact term) is increased by about 20% for heavy elements.

I know it isn't exactly 1 for finite Z; the question is whether it asymptotes to 1 for large Z. I see it pass 1, but I think for large atoms it is even closer to 1. It might be coincidental, but I don't think so.
– Ron Maimon Apr 13 '12 at 16:19

I don't want relativistic--- the question is whether a nonrelativistic enormous-Z potential with Z electrons and one K-shell hole has energy given by the Bohr model with (Z-1). It's a well defined mathematical question, and I don't see a clear "no". If you find the answer for Z=400 and it is significantly different from 1, that is good evidence that the asymptotic value is not exactly 1, but maybe 1.08 or something. I was trying to see if there is a reason for a near-1 value, and this limit of Z going to infinity is the only thing I could think of. But +1 for the link. – Ron Maimon Apr 13 '12 at 19:31

Also, you might be right, it might not be 1 exactly, and if you give a little evidence, I will accept. The problem is that I don't know how far you can get a good solution for the K-shell screening--- I never looked at the numerical methods. Experimentally, extremely heavy atoms could screw up the convergence due to relativistic corrections. – Ron Maimon Apr 13 '12 at 19:46

Please wait for an answer from another user who is a specialist in x-ray analysis, or a particle physicist. The Schrödinger equation is analytically solvable just for the hydrogen atom; for others you have to rely on numerical methods. I stumbled over this problem from the reverse direction - the effective potential of the valence electron for alkalis. There is always this assumption of a "light electron", with the electron cloud screening Z-1 charge. But this picture is not right at all. – Alex1167623 Apr 13 '12 at 19:52

Yes--- the valence picture is only a crude, rough approximation. But the inner picture should be mathematically correct because of the large separation between inner shells and outer shells in both energy and distance. This means that in the limit of large Z, the inner shell transitions and orbits are exactly (nonrelativistically) described by a one-hole dynamical picture, where the x-ray transitions are the negative mass, positive charge hole going up in "n" (down in energy--- it has negative mass), with matrix elements given by the hole dipole moment. This isn't in the literature; it should be. – Ron Maimon Apr 13 '12 at 20:14
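For a rough empirical feel for how close the K-shell screening constant sits to 1, one can invert Moseley's law, E_Kα ≈ (3/4)·Ry·(Z−σ)², for tabulated Kα1 energies. The Python sketch below is a hedged back-of-the-envelope check: the energies are approximate literature values, and Kα1 includes exactly the relativistic/fine-structure effects the answer warns about, so the drift of σ away from 1 at large Z does not by itself settle the nonrelativistic large-Z limit asked about in the question:

    # Invert Moseley's law E ~ (3/4) * Ry * (Z - sigma)^2 to estimate the
    # screening constant sigma from approximate K-alpha1 energies (in eV).
    RY_EV = 13.606  # Rydberg energy in eV

    k_alpha1_ev = {
        "Cu": (29, 8048.0),
        "Mo": (42, 17479.0),
        "W": (74, 59318.0),
    }

    for element, (z, energy) in k_alpha1_ev.items():
        sigma = z - (energy / (0.75 * RY_EV)) ** 0.5
        print(f"{element}: Z = {z}, sigma ~ {sigma:+.2f}")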
De Broglie Waves and Complex Numbers

Jan 5, 2008 #1

We used complex variables to describe the wave function. People do that in acoustics and optics too, strictly for convenience, because the real and imaginary parts are redundant. The wave function of quantum mechanics is "necessarily" complex; it's not just for convenience that we use complex numbers in quantum theory. Is there any physical reason for the wave function to be complex?

Last edited: Jan 5, 2008

Jan 5, 2008 #2
Science Advisor

I think there is more than one way to answer this question. One way to see that complex numbers are built into the theory, rather than simply a convenience as in your earlier examples, is that Schrodinger's equation is a heat equation with an imaginary dispersion coefficient. Thus the solutions are intrinsically complex. The real and imaginary parts of these solutions would also obey the Schrodinger equation, since the equation is linear, but they do not generally also match the boundary conditions, and are therefore not generally solutions for your physical systems (UNLIKE your counterexamples of acoustics and optics). For a concrete example: a "left-moving particle" has a wavefunction [itex]e^{ikx}[/itex], NOT cos(kx) or sin(kx). That might not be what you call a "physical reason", but it is a mathematical one.

Jan 6, 2008 #3

I'll attempt an explanation without using equations. The wave function represents the state of the system. Because its time and space derivatives must be proportional to the wave function itself, it takes an exponential form. If the argument in the exponent is real, the wave function will either grow without limit or decay away. If we want the system to persist, we need to make the wave function periodic, and that requires an i in the argument. By the way, de Broglie waves don't have to be complex, but Schroedinger waves do. The Schroedinger equation contains an explicit i.

Jan 6, 2008 #4
Homework Helper, Gold Member

Could you expand on the 'must be' please? Why must they be?

Jan 6, 2008 #5

For example, if the energy is constant, the time derivative of the wave function is proportional to the energy multiplied by the wave function (the eigenvalue equation). If we are going to use wave functions to describe motion, then they must be complex.

Jan 6, 2008 #6
Homework Helper

They don't have to be. An obvious example is the particle in a box, the solutions of which are not proportional to their odd spatial derivatives. Another example is the ground state (or any state) of a simple harmonic oscillator. Another example is *any* function other than an exponential. So, I think that he must just be saying that the Schrodinger equation is linear. But this does not mean that the solutions are always proportional to their space and time derivatives.

Jan 6, 2008 #7
Gold Member

Xeinstein - there is no physical reason why complex numbers should be used in QM. If there were, it would be equivalent to saying it only works in German, or in base-8 arithmetic. There are perfectly good formulations of QM that do not use complex numbers. It is merely a (great) convenience.

Jan 6, 2008 #8

You are correct, of course: wave functions do not need to be complex. I was addressing the question about why they are (when they are). But even the solutions to Schroedinger's time-independent equation that you give as examples are associated with a time-dependent factor that is complex.
Jan 7, 2008 #9

Regarding the Schrödinger equation from a mathematical point of view: it has just one derivative in time, whereas the Maxwell equations have a second derivative in time. That's the big difference, and it requires the solution of Schrödinger's equation to be complex. Physically there is another reason: energy conservation demands that the solution be invariant under time transformation. If the solution were real and you replaced t by -t, there would be no invariance; but replacing the imaginary part it by -it accomplishes the requirement. Another reason is the spin description. The whole theory is only possible in the complex sphere (for one spin, the Bloch sphere, minus the part for the unity matrix). Spin eigenstates have to be orthogonal, and for one spin the vector has to be 2-dimensional (because there are two possibilities: spin up and spin down). Further, there have to be 3 eigenstates, and that is possible only if two are in real space and one is imaginary. So complex space enables more orthogonal eigenstates. And then, I guess, what's most important: the only way to write a vector product as an integral is in complex space, namely, in quantum physics, the Hilbert space. I can't remember exactly, but it has something to do with the bilinear form... There would certainly be more reasons... but that's what's crossing my mind right now. (No guarantee that it is correct!)
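A small numerical check of the concrete example in post #2, assuming ħ = 1 and a finite-difference derivative: the plane wave e^{ikx} is an eigenfunction of the momentum operator p = −i d/dx, while its real part cos(kx) is an equal mix of +k and −k, which is why taking the real part destroys the single-direction boundary condition:

    import numpy as np

    # p = -i d/dx acting on exp(ikx) versus cos(kx), with hbar = 1.
    k = 2.0
    x = np.linspace(0.0, 10.0 * np.pi, 4001)
    dx = x[1] - x[0]

    def p_op(psi):
        # Finite-difference momentum operator -i d/dx.
        return -1j * np.gradient(psi, dx)

    plane = np.exp(1j * k * x)   # definite momentum k
    cosine = np.cos(k * x)       # superposition of +k and -k

    interior = slice(10, -10)    # ignore one-sided differences at the edges
    print(np.allclose(p_op(plane)[interior], k * plane[interior], atol=1e-3))   # True
    print(np.allclose(p_op(cosine)[interior], k * cosine[interior], atol=1e-3)) # False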
Three-dimensional Spatiotemporal Accessible Solitons in a PT-symmetric Potential

Journal of the Optical Society of Korea. 2012. Dec, 16(4): 425-431
Copyright ©2012, Optical Society of Korea
Received: August 17, 2012; Accepted: October 11, 2012; Published: December 25, 2012

Wei-Ping Zhong, Department of Electronic and Information Engineering, Shunde Polytechnic, Guangdong Province, Shunde 528300, China
Milivoj R. Belić, Texas A&M University at Qatar, P.O. Box 23874 Doha, Qatar
Tingwen Huang

Abstract. Utilizing the three-dimensional Snyder-Mitchell model with a PT-symmetric potential, we study the influence of PT symmetry on beam propagation in strongly nonlocal nonlinear media. The complex Coulomb potential is used as the PT-symmetric potential. A localized spatiotemporal accessible soliton solution of the model is obtained. Specific values of the modulation depth for different soliton parameters are discussed. Our results reveal that in these media the localized solitons can exist in various shapes, such as single-layer and multi-layer disk-shaped structures, as well as vortex-ring and necklace patterns.

Recently, the study of systems exhibiting parity-time (PT) symmetry has drawn a great deal of attention. The underlying idea is to extend canonical quantum mechanics by introducing a class of non-Hermitian Hamiltonians which exhibit entirely real eigenvalue spectra below a certain phase-transition point [1], although the potentials in these Hamiltonians are complex-valued. A necessary condition for the Hamiltonian to be PT-symmetric is that its potential V(x), being complex, is subject to the spatial-symmetry constraint V(x) = V*(-x). The complex PT-symmetric potentials can be realized in the most straightforward way in optics, by combining the spatial modulation of the refractive index with properly placed gain and loss [2]. This possibility has excited extensive theoretical [3, 4] and experimental [5] research. Pioneering theoretical works [2, 3, 4] stimulated recent experimental studies that eventually resulted in the observation of PT symmetry breaking in both active [5] and passive [6] optically coupled systems. This will probably enable the manufacturing of integrated PT photonic devices with extraordinary capabilities, such as double refraction or energy-flow tailoring. A new direction in nonlinear optical research, concerning PT optical lattices [7] and the related PT-based solitons, can be envisaged. The existence and propagation dynamics of one-dimensional (1D) optical solitons in a PT-symmetric linear periodic potential have been examined in detail in Ref. [8]. Hence, further exploration of the general properties of solitons in multidimensional PT-symmetric potentials is warranted. An intriguing feature of PT-symmetric potentials is the spontaneous breakdown of PT symmetry above a threshold level of the strength of the imaginary part of the potential. Above that level, the eigenfunctions of the Hamiltonian cease to be eigenfunctions of the PT operator, even though the PT symmetry remains in force. The complex Coulomb potential, which will be used as the PT potential in this paper, was among the early potentials studied in the PT-symmetric setting [9]. It turned out, however, that in 1D it cannot be treated on the real x-axis, but only on some trajectories in the complex x plane [10, 11].
In the multidimensional case it is treatable in the quantum mechanical sense, but as a part of a more complicated PT-symmetric potential involving parabolic and quartic potentials [11]. We treat it as a nonlinear optical problem, which may not make much sense quantum mechanically, but represents a viable system optically.

Optical spatial solitons, which are self-trapped optical beams that exist by virtue of the balance between diffraction and nonlinearity, have lately been extensively studied in nonlocal nonlinear media [12, 13]. It has been found that the nonlocality can prevent the collapse of self-focusing beams in media with cubic nonlinearity [14], suppress azimuthal instabilities of vortex solitons [15], and stabilize Laguerre soliton clusters, azimuthons, and multipole solitons [12]. The evolution of optical beams in nonlocal nonlinear media is governed by the nonlocal nonlinear Schrödinger (NNS) equation [12, 16]. Of particular importance is the case, referred to as the strongly nonlocal case, in which the characteristic length of nonlocality is much larger than the beam width. In 1997, Snyder and Mitchell [16] simplified the NNS equation in the case of strong nonlocality to a linear model, called the Snyder-Mitchell model [13]. Subsequently, Assanto et al. demonstrated theoretically [17] and experimentally [18] that nematic liquid crystals are some of the strongly nonlocal nonlinear (NN) media. The Snyder-Mitchell model can support solitons with new properties, the so-called "accessible solitons" [16], which are described by the solutions of a linear differential equation, tantamount to the high-dimensional quantum harmonic oscillator [12, 13]. Owing to this feature, strong nonlocality exerts a stabilizing influence on the dynamics of solutions, as the solutions of linear systems cannot be unstable or chaotic.

Light bullets (LBs), or optical spatiotemporal solitons [19], in which both the diffraction and the group-velocity dispersion are balanced by the nonlinearity, are challenging subjects in multidimensional nonlinear optics [20]. In addition to their fundamental significance as particle-like waves, light bullets can find applications in long- and short-distance communications, all-optical switching, and digital computing, among others [21]. In this paper we demonstrate that a class of new 3D light bullets originating in a PT-symmetric potential can be supported by strongly nonlocal nonlinear media. We display rather unusual properties of these 3D light bullets. However, the main aim of the present work is to perform a detailed study of the 3D NNS equation in strongly NN media in the presence of a PT-symmetric potential, in a region where optical solitons can exist. Thus, we concentrate on obtaining the localized solutions of the 3D strongly NNS equation, without regarding them a priori as stationary eigenfunctions of the corresponding Hamiltonian operator with the corresponding real or complex eigenvalues.

The rest of the article is organized as follows. In Section II, the 3D Snyder-Mitchell model with a PT-symmetric potential is introduced, and the localized soliton solutions are constructed. The properties of 3D localized accessible solitons with the PT-symmetric potential are explored. We also illustrate and discuss some examples of the exact solutions obtained in Section III. The article is concluded in Section IV.
We begin the analysis from the scaled (3+1)D spatiotemporal nonlinear Schrödinger equation [13, 22, 23]

[Eq. (1): equation image not reproduced in this copy]

which governs the propagation of a slowly varying field envelope u along the propagation coordinate z in a nonlinear nonlocal optical medium. Here ∇² is the full 3D spatiotemporal Laplacian, r [defined by an equation image] is the radial coordinate, τ is the retarded time in the reference frame moving with the pulse, and the term [equation image] represents the nonlocal nonlinearity induced by the optical beam intensity I = |u|². W denotes the external potential. We assume N to be of the form [equation image], where the kernel [equation image] is the normalized symmetric real response function of the medium, whose characteristic length determines the degree of nonlocality. The setting chosen in this paper is reminiscent of the X-wave generation geometry [24]; the wave equations are similar, and there exist natural linear and nonlinear regimes of wave packet dynamics. The major difference is that we consider a nonlocal medium with anomalous dispersion. In the case of strong nonlocality, Eq. (1) can be simplified to the 3D Snyder-Mitchell model in spherical coordinates [13, 22, 25]:

[Eq. (2): equation image not reproduced]

where s is a parameter proportional to the beam power. To analyze the PT symmetry relevant for Eq. (2), we choose the complex Coulomb potential

[equation image not reproduced]

where μ (≠ 0) is a real constant. Obviously, when μ = 0, Eq. (2) is simplified to the general 2D NNS equation in strongly NN media [25]. The second term in Eq. (2) represents diffraction, the third term originates from the optical nonlinearity, and the fourth term is the external potential function. It is noted that Eq. (2) is rather generic; one may assign different physical interpretations to essentially the same type of equation by choosing different physical systems and variables. For example, in a typical quantum mechanical setting one may understand Eq. (2) as the scaled Schrödinger equation for the wave function of a particle moving in the potential [equation image]. An often analyzed similar PT version involves both quadratic and quartic anharmonic oscillator terms [equation image] [26]. Hence, it is the parameter μ that decides whether Eq. (2) is PT-symmetric or not. It is easy to see that V*(-r) = V(r). Thus, Eq. (2) is a system with a PT-symmetric potential.

We treat Eq. (2) in spherical coordinates, by the method of separation of variables. Defining the complex optical field as u(z, r, θ, ϕ) = F(z, r) Y(θ, ϕ), with separated angular variables, the separation yields the following two equations:

[Eqs. (3A) and (3B): equation images not reproduced]

where l is a non-negative integer. Equation (3A) has the spherical harmonics as the solution [equation image]; the associated Legendre polynomials [equation image] enter this solution, and ϕ is the azimuthal angle. The parameter 0 ≤ q ≤ 1 determines the modulation depth of the beam intensity. The parameter m is a real nonnegative integer, called the topological charge.

Now, we consider the solution of Eq. (3B). Following Refs. [12, 25], we define F(z, r) = A(z, r) e^{iB(z,r)}, where A(z, r) and B(z, r) are real functions of z and r. With this variable change, and after a little algebra, we transform Eq. (3B) into two coupled equations for A and B:

[Eqs. (5A) and (5B): equation images not reproduced]

These equations can be treated by the self-similar method [25].
To treat Eqs. (5), the amplitude A(z, r) and the phase B(z, r) of the beam are further defined as [25]:

[Eqs. (6): equation images not reproduced]

where Ω(z, r) is the self-similarity variable and w(z) is the pulse width. As a consequence of these definitions, Eq. (5B) yields:

[Eqs. (7) and (8): equation images not reproduced]

Note that the parameter b remains invariant on propagation. Furthermore, it should be noted that Eqs. (7) and (8) are universally applicable to all types of self-similar pulses. Here and in what follows, the symbols containing the subscript "0" are used to represent the initial values of the corresponding parameters, at distance z = 0. By means of Eqs. (6), (7) and (8), a nonlinear differential equation for F is derived from Eq. (5A):

[Eq. (9): equation image not reproduced]

where c_z is the derivative of c with respect to z. In order to solve Eq. (9), we introduce another variable transformation [equation image]; from Eq. (9) one obtains:

[Eq. (10): equation image not reproduced]

To simplify Eq. (10), yet another variable transformation is introduced, Ω² = R; thus, we find:

[Eq. (11): equation image not reproduced]

In the end, to obtain tractable solutions, we restrict their generality by choosing special forms of some parameters:

[Eqs. (12A) and (12B): equation images not reproduced]

where n is a nonnegative integer and w₀ (= constant) is the initial beam width of the pulse. Using Eq. (12B), we find that Eq. (11) becomes:

[Eq. (13): equation image not reproduced]

The solution of Eq. (13) can be written in terms of the confluent hypergeometric functions. Thus, when w = w₀, an exact single accessible light bullet solution to Eq. (2) with a PT-symmetric potential can be written as:

[Eq. (14): equation image not reproduced]

where Y_lm(θ, ϕ) are the spherical harmonics and ₁F₁ is the confluent hypergeometric function of the first kind. It is straightforward to see that |u(z, r, θ, ϕ)| vanishes as r → ∞, i.e., Eq. (14) represents a localized solitary solution. Arbitrariness in the choice of the soliton parameters n, m and l included in the above solution (14) implies that the beam field u(z, r, θ, ϕ) may possess a rich structure. For μ = 0 the solution (14) goes over to the solution (21) in Ref. [25], apart from a constant factor. It should be noted that the solution (14) differs from the solution in the absence of the PT potential, i.e. when μ = 0, by a complex factor exp[i(μr/w₀ - μ²z/2)]. Thus, the influence of the complex Coulomb potential is to modulate the accessible solitons that exist in the absence of it. The intensity distributions |u_nlm|² remain the same in both cases.

To better understand the 3D soliton dynamics, we introduce some special types of localized solutions for the optical field expressed by Eq. (14), via suitable selections of the nonnegative integer parameters (n, m, l). We focus attention on the distributions of the optical intensity I = |u|². In the following examples, we further fix the beam width w₀ = 1.

[FIG. 1. Intensity distributions of the LBs with the spherical structures for m = 0 and n = 0, 2, 4 from left to right, respectively.]

[FIG. 2. LB distributions with disk- and ring-shaped profiles for m = 0; the parameters are: (a) n = 2, l = 4; (b) n = 1, l = 4; (c) n = 0, l = 6.]

First, we address the simplest case m = 0. Then the LB intensity does not depend on the modulation depth q. Because the parameters n and l are arbitrary, various light bullet structures can be obtained. If the parameters n and l are chosen as zero, from Eq. (14) we find that ₁F₁(0, 3/2, r²) = 1; the beam is then called the fundamental light bullet and forms a single-layer sphere, see Fig. 1(a).
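Before walking through the remaining figures, here is a small Python sketch of an intensity profile of the hypergeometric-Gaussian type commonly used for accessible solitons. Because the equation images in this copy did not survive extraction, the radial form below is an assumption, chosen only to be consistent with the special case ₁F₁(0, 3/2, r²) = 1 quoted above; the exact radial factor and normalization of Eq. (14) may differ, so treat this as a labeled illustration, not a verbatim reconstruction:

    import numpy as np
    from scipy.special import hyp1f1, lpmv

    def intensity(r, theta, phi, n=0, l=0, m=0, q=1.0, w0=1.0):
        # Assumed accessible-soliton intensity |u|^2: a hypergeometric-Gaussian
        # radial factor, an associated-Legendre polar factor, and an azimuthal
        # modulation of depth q (q = 1: vortex ring, q = 0: necklace).
        rho = (r / w0) ** 2
        radial = (r / w0) ** l * np.exp(-rho / 2.0) * hyp1f1(-n, l + 1.5, rho)
        polar = lpmv(m, l, np.cos(theta))
        azimuthal2 = np.cos(m * phi) ** 2 + q ** 2 * np.sin(m * phi) ** 2
        return radial ** 2 * polar ** 2 * azimuthal2

    # Radial cut in the mid-plane (theta = pi/2) for a Fig. 2(a)-like case.
    r = np.linspace(0.0, 6.0, 300)
    profile = intensity(r, np.pi / 2, 0.0, n=2, l=4, m=0)
    print(float(profile.max()))

Note that for m = 0 the q-dependence drops out of this form, matching the remark above that the m = 0 bullets do not depend on the modulation depth.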
Keeping l = 0 and increasing the parameter n, multilayered structures are obtained; a typical example is presented in Fig. 1(b) for n = 2. Similarly, we can construct a higher-order fundamental LB for larger n; i.e., for n = 4 the intensity distribution is exhibited in Fig. 1(c). In general, there exist n + 1 layered spheres for such a soliton. For n ≠ 0 and l ≠ 0, higher-order LBs can be excited. Figure 2(a) displays the results for n = 2, l = 4, which features three coaxial rings in the mid-plane; there are five layers stacked along the τ-axis. Figure 2(b) displays similar structures for n = 1, l = 4, with two rings in the mid-plane, and two disks above and below the central rings. Finally, Fig. 2(c), corresponding to n = 0, l = 6, shows a single middle ring and, once again, ring- and disk-shaped objects along the τ-axis.

For q = 1 in Eq. (14), we obtain the vortex ring beam for a nonnegative integer l and a positive integer m. As an interesting case we pick the parameters m = l (≠ 0). An example of such a vortex ring LB is shown in Fig. 3. The soliton parameters are: (a) m = l = 4, n = 0; (b) m = l = 3, n = 1; (c) m = l = 2, n = 2. The number of rings in the horizontal direction is determined by n. Changing the modulation depth from q = 1 to 0 < q < 1, we find that the LBs modulate azimuthally, see Fig. 4. It is noted that the outer modulation is more distinct than the inner one. Actually, the formation of a vortex ring beam is the result of the periodic azimuthal modulation functions cos(mϕ) and sin(mϕ). For an integer m > 0 (m ≠ l), and in the limit q = 1, multilayered vortex ring LBs along the vertical τ-axis are found. In Fig. 5 we depict some properties of the vortex ring LBs. It is seen that for the same m, the larger the parameter n, the larger the soliton radius in the horizontal plane. The optical intensity is zero at the τ-axis, which is the location of the topological defect.

[FIG. 3. Intensity profiles of the vortex ring LBs for q = 1 and m = l. (a) m = l = 4, n = 0; (b) m = l = 3, n = 1; (c) m = l = 2, n = 2.]

[FIG. 4. Intensity distribution of the LBs from Fig. 3. The setup is the same as in Fig. 3, except for q = 0.95.]

[FIG. 5. Vortex-ring solitons, for q = 1 and m ≠ l. Parameters (n, l, m) have the following values: (a) (0,4,1); (b) (1,4,2); (c) (2,4,2). The figure layout is as in Fig. 3.]

For q = 0 and l = m in Eq. (14), we obtain single-layer and multi-layer necklace beams for a positive integer m. A typical example of such a necklace is shown in Fig. 6 for l = m = 4, along with the axisymmetric distribution. Self-trapped localized structures with a large number of azimuthal petals and multi-layered necklaces may exhibit a strong effective stabilization in strongly NN media [12, 13]. Figure 7 displays the intensity distribution of multi-layer necklace solitons in the vertical direction, which exhibit similar patterns. These examples are obtained for positive integers (n, l, m) in Eq. (14). The parameters are: (a) (0,4,2); (b) (1,4,3); (c) (2,4,3). In these solutions, the necklace structure is still formed, due to the periodic azimuthal modulation.

[FIG. 6. Single- and multi-layer necklace LBs in the horizontal plane for l = m = 4 and n = 0, 1, 2 from left to right. The setup is as in Fig. 3, except for q = 0.]

[FIG. 7. Structures of multi-layer necklace solitons in the vertical direction. The setup is the same as in Fig. 6, except for l ≠ m.]
Note that these solitons form multi-layers, with the outer ones more strongly modulated than the inner counterparts. Interesting structures are seen in Figs. 6 and 7. We find that the larger the parameter m, the larger the necklace radius. It is seen that the soliton distributions change regularly with the azimuthal angle. The number of beads in each layer is determined by m, and the number of layers is determined by n. These solitons contain 2m(n + 1) necklaces and form n + 1 necklace layers in the horizontal direction. The number of necklace layers in the vertical direction is determined by l.

We have introduced a class of self-trapped LB solutions of the NNS equation in strongly NN media with a PT-symmetric potential in the form of a complex Coulomb potential. We were not much concerned with the details of the eigenvalue spectrum, but with the solution of the NNS equation with a specific PT-symmetric potential. Analytical accessible soliton solutions are obtained with the help of the self-similar method for solving such evolution partial differential equations. They are given in terms of the confluent hypergeometric function of the first kind and spherical harmonics. We find that, in addition to the fundamental LBs, these solutions may come in the form of 3D single-layer and multi-layer disk-shaped, vortex ring and necklace LBs.

This work was supported by the National Natural Science Foundation of China under Grant No. 61275001 and by the Natural Science Foundation of Guangdong Province, China, under Grant No. 1015283001000000. The work at the Texas A&M University at Qatar is supported by the NPRP 09-462-1-074 project of the Qatar National Research Fund.

References

[1] Bender, C. M., & Boettcher, S. (1998). "Real spectra in non-Hermitian Hamiltonians having PT symmetry," Phys. Rev. Lett. 80, 5243-5246.
[2] El-Ganainy, R., Makris, K. G., Christodoulides, D. N., & Musslimani, Z. H. (2007). "Theory of coupled optical PT-symmetric structures," Opt. Lett. 32, 2632-2634.
[3] Christodoulides, D. N., Lederer, F., & Silberberg, Y. (2003). "Discretizing light behavior in linear and nonlinear waveguide lattices," Nature (London) 424, 817-823.
[4] Abdullaev, F. Kh., Kartashov, Y. V., Konotop, V. V., & Zezyulin, D. A. (2011). "Solitons in PT-symmetric nonlinear lattices," Phys. Rev. A 83, 041805.
[5] Ruter, C. E., Makris, K. G., El-Ganainy, R., Christodoulides, D. N., Segev, M., & Kip, D. (2010). "Observation of parity-time symmetry in optics," Nat. Phys. 6, 192-195.
[6] Guo, A., Salamo, G. J., Duchesne, D., Morandotti, R., Volatier-Ravat, M., Aimez, V., Siviloglou, G. A., & Christodoulides, D. N. (2009). "Observation of PT-symmetry breaking in complex optical potentials," Phys. Rev. Lett. 103, 093902.
[7] Makris, K. G., El-Ganainy, R., Christodoulides, D. N., & Musslimani, Z. H. (2010). "PT-symmetric optical lattices," Phys. Rev. A 81, 063807.
[8] Musslimani, Z. H., Makris, K. G., El-Ganainy, R., & Christodoulides, D. N. (2008). "Optical solitons in PT periodic potentials," Phys. Rev. Lett. 100, 030402.
[9] Znojil, M., & Levai, G. (2000). "The Coulomb-harmonic oscillator correspondence in PT symmetric quantum mechanics," Phys. Lett. A 271, 327.
[10] Levai, G. (2009). "Spontaneous breakdown of PT symmetry in the complex Coulomb potential," Pramana 73, 329-335.
[11] Lévai, G., Siegl, P., & Znojil, M. (2009). "Scattering in the PT-symmetric Coulomb potential," J. Phys. A 42, 295201.
[12] Zhong, W. P., & Yi, L. (2007). "Two-dimensional Laguerre-Gaussian soliton family in strongly nonlocal nonlinear media," Phys. Rev. A 75, 061801.
[13] Zhong, W. P., Yi, L., Xie, R. H., Belić, M., & Chen, G. (2008). "Robust three-dimensional spatial soliton clusters in strongly nonlocal media," J. Phys. B: At. Mol. Opt. Phys. 41, 025402.
[14] Bang, O., Krolikowski, W., Wyller, J., & Rasmussen, J. J. (2002). "Collapse arrest and soliton stabilization in nonlocal nonlinear media," Phys. Rev. E 66, 046619.
[15] Buccoliero, D., Desyatnikov, A. S., Krolikowski, W., & Kivshar, Y. S. (2008). "Spiraling multivortex solitons in nonlocal nonlinear media," Opt. Lett. 33, 198-200.
[16] Snyder, A., & Mitchell, J. (1997). "Accessible solitons," Science 276, 1538-1541.
[17] Conti, C., Peccianti, M., & Assanto, G. (2003). "Route to nonlocality and observation of accessible solitons," Phys. Rev. Lett. 91, 073901.
[18] Conti, C., Peccianti, M., & Assanto, G. (2004). "Observation of optical spatial solitons in a highly nonlocal medium," Phys. Rev. Lett. 92, 113902.
[19] Silberberg, Y. (1990). "Collapse of optical pulses," Opt. Lett. 15, 1282-1284.
[20] Malomed, B. A., Mihalache, D., Wise, F., & Torner, L. (2005). "Spatiotemporal optical solitons," J. Opt. B: Quantum Semiclassical Opt. 7, R53-R72.
[21] Abdullaev, F. K., & Konotop, V. V. (2004). Nonlinear Waves: Classical and Quantum Aspects. Kluwer Academic Publishers, Dordrecht, Netherlands.
[22] Zhong, W. P., Belić, M., Xie, R., Huang, T., & Lu, Y. (2010). "Three-dimensional spatiotemporal solitary waves in strongly nonlocal media," Opt. Commun. 283, 5213-5217.
[23] Zhong, W. P., Belić, M., Assanto, G., Malomed, B. A., & Huang, T. (2011). "Light bullets in the spatiotemporal nonlinear Schrödinger equation with a variable negative diffraction coefficient," Phys. Rev. A 84, 043801.
[24] Faccio, D., Averchi, A., Couairon, A., Kolesik, M., Moloney, J. V., Dubietis, A., Tamosauskas, G., Polesana, P., Piskarskas, A., & Di Trapani, P. (2007). "Spatio-temporal reshaping and X wave dynamics in optical filaments," Opt. Express 15, 13077-13095.
[25] Zhong, W. P., & Belić, M. (2009). "Three-dimensional optical vortex and necklace solitons in highly nonlocal nonlinear media," Phys. Rev. A 79, 023804.
[26] Znojil, M. (2000). "Quasi-exactly solvable quartic potentials with centrifugal and Coulombic terms," arXiv:math-ph/0002036v2.
Acceleration Signals From Superconductors
by David Sears Schroeder

Revised experimental set-up on 28 March, 2016. Front row, left to right: high voltage source, X-Y positioning system for accelerometer module, auxiliary trigger board to control oscilloscope sweep (allows isolation of acoustic signals, based on the assumption that the anomalous signal propagates at light speed). Back row: new 1 GHz sampling rate, 70 MHz bandwidth oscilloscope, and cryostat.

26 April 2016: set-up for LN2 test and custom-built positioning system.
Photos: overall test set-up; close-up of Nb-Ti rod aligned with ADXL203 microchip (inside Budbox); completely built X-Y positioning system; rear view of positioning system for accelerometer module; experiment history and more photos.

Since 1992, acceleration effects in the vicinity of superconductors or superfluids, tens of orders of magnitude larger than General Relativity allows, have been claimed. By far the most convincing of these reports has come from the Austrian Research Center (ARC) (Tajmar et al., 2003-2007). It is speculated that these signals constitute a tiny residual of a gravity-emulating force, 40 orders of magnitude stronger than its classical counterpart. A supersymmetric quantum that ranges to 10^-19 m (TeV scale) is the proposed source of this field. Such a quantum might arise naturally at the boundary of a parallel 3+1 negative-energy space and our 3+1 positive-energy braneworld. Effectively, such quanta would possess zero net energy, and thus not violate energy conservation. Their inherent field structure would imprint an Alcubierre topology on spacetime. The resulting geodesic hypersurfaces would neutralize angular acceleration forces on electrons and nuclei, which can reach 10^22 g's or more in the hydrogen atom. This would explain the absence of synchrotron radiation, and the consequent stability of atomic structures, at a more fundamental level than the quantum mechanical requirement for integer-wavelength orbits. Vacuum polarization from this field is speculated to momentarily evolve massless spin-2 gravitons, of both the negative-energy and positive-energy variety, in response to acceleration, until equilibrium is restored. Macroscopic coherence in superconductors would raise these exceedingly brief graviton 'bursts' to detectable levels. The burst of negative-energy gravitons, directed opposite to the condensate's acceleration, is speculated to create the momentary repulsive force seen in these superconductor experiments. It is further speculated that a continuous inflow of negative-energy gravitons across our Universe's brane boundary, from the negative-energy brane, is the cause of cosmological inflation. In short, a common denominator may underlie atomic stability at the smallest scales, cosmic inflation at the largest scales, and anomalous phenomena seen in superconductor experiments. If these speculations are valid, the Alcubierre warp, in microscopic form, would be ubiquitous in nature.

About the Experiments

While the concept of "gravity shielding" has long been discounted on theoretical grounds (it is incompatible with General Relativity), the creation of transitory acceleration pulses via quantum processes may explain any genuine signal that was observed.
Based on a theoretical idea (A Physical Interpretation of Matter Waves), only acceleration of the bulk superconductor, supercurrent, or superfluid will produce an acceleration signal, and then only for the duration of the acceleration. Thus the experiments have, to some degree, replicated Podkletnov's "Impulse Gravity Generator", which would indeed have sharply accelerated the supercurrent. For the record, in a series of experimental runs in late March 2010, apparent acceleration pulses were detected with a PM-3214 oscilloscope. The scope was triggered by the accelerometer's output every time a 40 mfd capacitor (charged to 300 volts) was discharged through the superconductor. Control runs seemed to rule out electromagnetic pulses (EMPs) as the culprit, but more runs are needed under identical conditions.

The new set-up, pictured above, has most of the experimental components secured to an 11 by 11 by 3/4 inch oak platform. The control box, with a cable leading to the high voltage circuit, has been replaced by a 433 MHz RF link that allows charging and discharging of the capacitor bank remotely. The small remote, on a keychain, is visible on the right side of the photo. This eliminates the danger of having nearly 1000 volts accidentally reaching the hand-held control box. The aluminum project box in the foreground houses the accelerometer and associated circuitry. To its left is the cryostat, with slidable fiberglass tabs supporting the anode and superconductor. A kit LCD voltmeter has been mounted on a vertical metal frame for monitoring the charging voltage. Only two of the four relays are used: one for starting and stopping capacitor charging, the other to trigger discharge through the superconductor load.

An ADXL203 plus/minus 1.7 g accelerometer chip (resolution 1 milli-g), aligned with the supercurrent axis, monitors for signal. It is enclosed in a 2" by 4" by 6" aluminum project box for RF (radio frequency) isolation. The accelerometer's output is first referenced to analog common in a 5 volt bi-polar supply, established by 7805 and 7905 regulators that, in turn, are fed by a pair of 9 volt batteries mounted outside the case. The second op-amp on the 747 chip provides a 10-to-1 signal gain. Shielded coax cables, both within the box to through-panel BNC connectors, and from the box to the oscilloscope, add further RF isolation. Signals can be tapped either directly from the ADXL203's output, from the referencing stage, or from the final amplifier output.

Using a solid state accelerometer overcomes a pitfall in previous attempts to measure acceleration phenomena from superconductors with a digital scale and target mass. Any brief acceleration pulse would have been averaged over the sampling interval of the digital scale, and further diluted by the large inertial mass of the target. Moreover, negative results would be expected for a static superconductor, in which the Bose condensate is not being accelerated, if the theory presented here is correct. Back in 2010, electric discharge directly through the superconductor was the sole method tried, and it hasn't been attempted since. The YBCO superconductor, which yielded signals by this method, was accidentally ruined when silver epoxy was applied to both sides of it, in an effort to obtain electrical contact over its entire surface. Therefore a coil was wound on a fiberglass cylinder slightly larger than the superconductor.
Discharging the capacitor through this coil induces a circulating supercurrent in the tangential plane of the superconductor. The induction method was tried in late May of 2010, with interesting results. In the superconductive state, the PM-3214 scope was triggered by the accelerometer's output in multiple runs when the 40 mfd capacitor was discharged through the solenoid. Scope triggering was not observed after the YBCO chip transitioned into its non-superconducting state. Various combinations were tried to duplicate the triggering, but only a 300-350 volt discharge from the 40 mfd capacitor, with the YBCO chip in the superconductive state, produced results, in 3 runs. While tantalizing, since the triggering was close to the noise threshold of the system, it is not irrefutable proof of anomalous phenomena. Tests with a non-superconducting aluminum blank were carried out in early June, 2010, and did not duplicate the triggering effect seen with the YBCO chip in the superconductive state.

Matter Waves, Superconductor Anomalies and Dark Energy

De Broglie (matter) waves underlie all of chemistry, and even biology, at the molecular scale. What matter waves do has long been elucidated through the de Broglie and Schrödinger equations and Born's statistical interpretation, but what they actually are, or consist of, remains an unanswered question. As every freshman college physics student learns, matter waves are intimately linked to nature's fundamental unit of action - Planck's constant - through the relation λ = h/p, where λ is the wavelength associated with a particle, p is the particle's momentum, and h is Planck's constant. De Broglie showed that for stable orbits to exist, the relation nλ = 2πR, where n is an integer and R the radius of the orbit, must be satisfied. Erwin Schrödinger was once of the opinion that matter waves represented a real disturbance in space, analogous to the field variables in electromagnetic waves [3]. Since the wavefunction for a particle, ψ(x,t), where x is position in space and t time, concerns the probable position of the particle at a given time, it utilizes the same parameters as general relativistic gravity - space and time. To be more precise, the intensity of the gravitational field, at a given locale, is determined by the amount of contraction of measuring rods and slowing of clocks. But, in contrast to the feebleness of Newtonian gravity, matter waves modulate the location (via probability) of fundamental particles as robustly as do electric and magnetic fields.
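As a quick sanity check of the two de Broglie relations quoted in this section, the Python numbers below verify that exactly one de Broglie wavelength fits around the n = 1 Bohr orbit of hydrogen (a standard textbook computation, included here only as an illustration):

    import math

    # Check de Broglie's quantization condition n * lambda = 2 * pi * R
    # for the hydrogen ground state (n = 1) of the Bohr model.
    h = 6.62607015e-34       # Planck constant, J*s
    m_e = 9.1093837015e-31   # electron mass, kg
    a0 = 5.29177210903e-11   # Bohr radius, m
    v1 = 2.18769126364e6     # electron speed in the n = 1 Bohr orbit, m/s

    wavelength = h / (m_e * v1)        # de Broglie wavelength, lambda = h/p
    circumference = 2.0 * math.pi * a0

    print(f"lambda       = {wavelength:.4e} m")
    print(f"2*pi*R (n=1) = {circumference:.4e} m")  # both ~3.32e-10 m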
In accordance with Maxwell's laws, a changing 'length current' should give rise to a changing 'time current' and visa-versa. The amplitudes of these two variables would cyclically rise and fall, in step, as the length-time wave propagates past an observer. Clearly, an observer (particle) entrained at a crossover point of a length-time wave (where the wave transitions from a positive to negative vacuum condition) would be continually preceded, within 1/2 wavelength, by a region of contracting spacetime, and trailed within 1/2 wavelength by expanding spacetime (incidentally, this "crossover" point corresponds to the boundary between a higher dimensional "bulk" space, and our 3+1 brane. String theory proscribes that all open-ended particles exist at this boundary - see below). Such a local distortion of spacetime is the metric signature of an Alcubierre warp6. It is proposed to underlie the absense of synchrotron radiation in stable atomic orbits, by creating a local free-fall geodesic for orbiting electrons. This scenario assumes that electrons are 'modulated' by the oscillating length and time fields of virtual length-time 'photons', just as virtual (electromagnetic) photons modify other aspects of real particles, as proscribed by quantum electrodynamics (QED). These oscillating length and time fields are postulated to be the "internal periodic phenomena" all particles are subject to, as predicted by Louis DeBroglie in his 1923 Comptes Rendus note5. But, such a gravity-emulating, Maxwell gauge field cannot be massless, otherwise it would have long since been detected. If it exists at all, it must be in the unexplored supersymmetry realm between 1 and 100 TeV. The warp field of a length-time 'photon' would, accordingly, take the form of a micro-warp in the 10-17 to 10-19 meter range. In this view, the lobe-like complexity of electron orbits would stem from oscillations of the length and time variables, confined within a 10-19 meters, effective warp 'bubble', that should act like a cavity resonator. Thus, throughout its complex gyrations, an orbiting electron would locally be shielded from inertial forces, as the amplitude and orientation of the micro-warp synchronizes with the dynamically changing angular acceleration vector. Large amplitude expansions/contractions of spacetime within the micro-warp's operational radius, stemming from di-pole gravity 40 magnitudes greater than Newtonian gravity, must lead to correspondingly large synchronization (sync) shifts. Since this micro-warp concept is based on extra dimensions of space, a logical deduction is that during the contraction cycle the volume of space within the warp 'bubble' shrinks to the size of the extra dimension(s) and expands into them. Having the higher dimensional bulk serve as the source and sink of spacetime (gravitons) for these alternating expansions and contractions would obviate the need for negative matter to implement an Alcubierre warp. From 2003 to 2007, a group of researchers, led by Martin Tajmar, at the Austrian Research Center, detected anomalously large (up to 277 micro-gs) acceleration signals from a rapidly spun-up, ring shaped, niobium superconductor. They interpreted this acceleration signal (which opposed the applied acceleration) to be a gravitoelectric field, induced by a time-varying gravitomagnetic field. When they attempted to detect the gravitomagnetic field directly with sensitive gyroscopes, they found only 1% of the signal they were expecting. 
Furthermore, this supposed gravitomagnetic field did not follow the inverse square rule, as was expected. Since only an acceleration field was detected, an alternative explanation is proposed. Cooper pairs move as a supercurrent through the lattice, progressively bonding from one lattice site to another as they advance. If the acceleration-nulling dipole field really exists, then all Cooper pairs, and their proton (lattice) partners, would experience zero-g acceleration within the 10^-19 meter frame of this field, for all components of momentum. Effectively, perfect superconduction would correspond to an acceleration-free dance for both the moving Cooper pairs and the flexing lattice sites, as this field exactly cancels the acceleration components apparent to external observers. When the experimenters applied an acceleration to the body of the superconductor, this perfect balance was briefly upset. Since this hypothetical length-time (LT) field is a gauge field, like long-range electromagnetism, it would respond, like that field, by trying to 'brake' the applied acceleration. The problem is that the LT field ranges only to 10^-19 meters, so its long range detection is an issue. The proposed explanation is that the LT field associated with each electron and proton functions as a micro-pump for shuttling massless gravitons between the extra dimensional bulk and our 3-brane. Assuming fundamental particles are fixed to the brane 'wall' separating our 3D space and the extra dimensions, and enveloped by virtual micro-warps, each particle would see every other particle cyclically receding and advancing in position relative to every other particle. The resulting sync shifts would induce forward/backward translations in time - each particle seeing every other particle oscillating between the past and future, but averaging to the local present. Such temporal oscillations could underlie the weird, non-classical aspects of quantum mechanics, as illustrated in John Cramer's transactional hypothesis.

The electromagnetic-gravity duality, implied in the existence of a length-time Maxwell field, is postulated to be embraced within one of six dualities between the forces comprising the superforce. Three forces comprise the superforce above the electroweak synthesis - the strong force, the electroweak force, and gravity - which would converge in strength in the TeV scale if non-compact extra dimensions were indeed a reality. This yields six dualities by the permutation rule N!, where N = 3. These six dualities are proposed to correspond to the five 10D string theories and 11D supergravity that make up the tableau of M-Theory. Each of these field theories is speculated to reside on its own m+n "brane" in the 5D "bulk", where m and n are integers denoting the number of space and time dimensions, respectively. It is also intriguing that the most recent measurements of dark matter by a Cambridge University team show that 'dark matter' composes between 80-85% of the matter of the universe. It has been suggested that dark matter is really matter sequestered on nearby branes in the higher dimensional bulk. If our brane is but one of six, and all branes are about equal in extent (in terms of total mass-energy), then 5/6ths (83.3%) of the matter of the 'multiverse' would be hidden background matter on the other 5 branes - right smack in the middle of the Cambridge team's estimate. Finally, this Maxwell length-time field would be massless on a "3-brane" whose 'spacetime' has electric and magnetic parameters.
Such a 3+1 (3 electric/1 magnetic) brane would constitute an S-dual version of our 3+1 (3 length/1 time) brane universe. Conversely, our photon would underlie matter waves in their universe, since it would have a TeV-range mass and exhibit their form of gravity in a dipole form, but range to less than 10^-19 meters.

Additional Writings by Author
The Multifamily Structure of Matter
Supersymmetry with a Triplet Higgs
A Physical Interpretation of Matter Waves
Experiment Write-Up on Gravatar

Copyright 1998, David Sears Schroeder
KLUEDO RSS Feed
KLUEDO Dokumente/documents
https://kluedo.ub.uni-kl.de/index/index/
Fri, 09 Jan 2015 13:27:20 +0200

Stochastic Modeling and Approximation of Turbulent Spinning Processes
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4168
In some processes for spinning synthetic fibers, the filaments are exposed to highly turbulent air flows to achieve a high degree of stretching (elongation). The quality of the resulting filaments, namely thickness and uniformity, is thus determined essentially by the aerodynamic force coming from the turbulent flow. Up to now, there has been a gap between the elongation measured in experiments and the elongation obtained by the numerical simulations available in the literature. The main focus of this thesis is the development of an efficient and sufficiently accurate simulation algorithm for the velocity of a turbulent air flow, and its application to turbulent spinning processes. In stochastic turbulence models the velocity is described by an \(\mathbb{R}^3\)-valued random field. Based on an appropriate description of the random field by Marheineke, we have developed an algorithm that fulfills our requirements of efficiency and accuracy. Applying the resulting stochastic aerodynamic drag force to the fibers then allows the simulation of the fiber dynamics, modeled by a random partial differential algebraic equation system, as well as a quantization of the elongation in a simplified random ordinary differential equation model for turbulent spinning. The numerical results are very promising: whereas the numerical results available in the literature can only predict elongations up to order \(10^4\), we obtain an order of \(10^5\), which is closer to the elongations of order \(10^6\) measured in experiments.
Florian Hübsch, doctoral thesis
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4168
Tue, 01 Sep 2015 13:27:20 +0200

Construction of a Mittag-Leffler Analysis and its Applications
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4157
Motivated by the results of infinite dimensional Gaussian analysis, and especially white noise analysis, we construct a Mittag-Leffler analysis. This is an infinite dimensional analysis with respect to non-Gaussian measures of Mittag-Leffler type, which we call Mittag-Leffler measures. Our results indicate that the Wick ordered polynomials, which play a key role in Gaussian analysis, cannot be generalized to this non-Gaussian case. We provide evidence that a system of biorthogonal polynomials, called a generalized Appell system, is applicable to the Mittag-Leffler measures, instead of using Wick ordered polynomials. With the help of an Appell system, we introduce a test function space and a distribution space. Furthermore, we give characterizations of the distribution space, and we characterize the weakly integrable functions and the convergent sequences within the distribution space. We construct Donsker's delta in a non-Gaussian setting as an application. In the second part, we develop a grey noise analysis. This is a special application of the Mittag-Leffler analysis. In this framework, we introduce generalized grey Brownian motion and prove differentiability in a distributional sense and the existence of generalized grey Brownian motion local times. Grey noise analysis is then applied to the time-fractional heat equation and the time-fractional Schrödinger equation. We prove a generalization of the fractional Feynman-Kac formula for distributional initial values.
Using this formula, we find a Green's function for the time-fractional heat equation which coincides with the solutions given in the literature. Florian Jahnert doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4157 Tue, 18 Aug 2015 08:32:00 +0200

Aspects and Applications of the Wilkie Investment Model https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4137
The Wilkie model is a stochastic asset model, developed by A.D. Wilkie in 1984 to explore the behaviour of the investment factors of insurers within the United Kingdom. Nevertheless, no analysis to date has studied the Wilkie model in a portfolio optimization framework. The Wilkie model originally considers a discrete-time horizon, and we apply its concepts to develop a suitable ARIMA model for Malaysian data using the Box-Jenkins methodology. We obtain estimated parameters for each sub-model within the Wilkie model that suit the case of Malaysia, which permits us to analyse the results from both a statistical and an economic viewpoint. We then review the continuous-time case, initially introduced by Terence Chan in 1998. The continuous-time Wilkie model is then employed to develop the wealth equation of a portfolio that consists of a bond and a stock. We are interested in building portfolios based on three well-known trading strategies: a self-financing strategy, a constant growth optimal strategy, and a buy-and-hold strategy. In dealing with the portfolio optimization problems, we use the stochastic control technique, consisting of the maximization problem itself, the Hamilton-Jacobi equation, the solution to the Hamilton-Jacobi equation and finally the verification theorem. In finding the optimal portfolio, we obtain the specific solution of the Hamilton-Jacobi equation and prove the solution via the verification theorem. For a simple buy-and-hold strategy, we use mean-variance analysis to solve the portfolio optimization problem. Norizarina Ishak doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4137 Tue, 11 Aug 2015 11:06:03 +0200

Coercive functions from a topological viewpoint and properties of minimizing sets of convex functions appearing in image restoration https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4100
Many tasks in image processing can be tackled by modeling an appropriate data fidelity term \(\Phi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and then solving one of the regularized minimization problems \begin{align*} &{}(P_{1,\tau}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \big\{ \Phi(x) \;{\rm s.t.}\; \Psi(x) \leq \tau \big\} \\ &{}(P_{2,\lambda}) \qquad \mathop{\rm argmin}_{x \in \mathbb R^n} \{ \Phi(x) + \lambda \Psi(x) \}, \; \lambda > 0 \end{align*} with some function \(\Psi: \mathbb{R}^n \rightarrow \mathbb{R} \cup \{+\infty\}\) and a good choice of the parameter(s). Two tasks arise naturally here: \begin{align*} {}& \text{1. Study the solver sets \({\rm SOL}(P_{1,\tau})\) and \({\rm SOL}(P_{2,\lambda})\) of the minimization problems.} \\ {}& \text{2. 
Ensure that the minimization problems have solutions.} \end{align*} This thesis provides contributions to both tasks. Regarding the first task, for a more special setting we prove that there are intervals \((0,c)\) and \((0,d)\) such that the set-valued curves \begin{align*} \tau \mapsto {}& {\rm SOL}(P_{1,\tau}), \; \tau \in (0,c) \\ {} \lambda \mapsto {}& {\rm SOL}(P_{2,\lambda}), \; \lambda \in (0,d) \end{align*} are the same, up to an order reversing parameter change \(g: (0,c) \rightarrow (0,d)\). Moreover, we show that the solver sets change constantly as \(\tau\) runs from \(0\) to \(c\) and \(\lambda\) runs from \(d\) to \(0\). In the presence of lower semicontinuity, the second task is accomplished if we additionally have coercivity. We regard lower semicontinuity and coercivity from a topological point of view and develop a new technique for proving lower semicontinuity plus coercivity. Dropping any lower semicontinuity assumption, we also prove a theorem on the coercivity of a sum of functions. René Ciak doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4100 Tue, 09 Jun 2015 15:50:38 +0200

Upscaling Approaches for Nonlinear Processes in Lithium-Ion Batteries https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4086
Lithium-ion batteries are broadly used nowadays in all kinds of portable electronics, such as laptops, cell phones, tablets, e-book readers, digital cameras, etc. They are preferred to other types of rechargeable batteries due to their superior characteristics, such as light weight and high energy density, no memory effect, and a large number of charge/discharge cycles. The high demand for and applicability of Li-ion batteries naturally give rise to the unceasing necessity of developing better batteries in terms of performance and lifetime. The aim of the mathematical modelling of Li-ion batteries is to help engineers test different battery configurations and electrode materials faster and more cheaply. Lithium-ion batteries are multiscale systems. A typical Li-ion battery consists of multiple connected electrochemical battery cells. Each cell has two electrodes - anode and cathode - as well as a separator between them that prevents a short circuit. Both electrodes have a porous structure composed of two phases - solid and electrolyte. We call the lengthscale of the whole electrode the macroscale, and the lengthscale at which we can distinguish the complex porous structure of the electrodes the microscale. We start from a Li-ion battery model derived on the microscale. The model is based on nonlinear diffusion-type equations for the transport of lithium ions and charges in the electrolyte and in the active material. Electrochemical reactions on the solid-electrolyte interface couple the two phases. The interface kinetics is modelled by the highly nonlinear Butler-Volmer interface conditions. Direct numerical simulations with standard methods, such as the Finite Element Method or the Finite Volume Method, lead to ill-conditioned problems with a huge number of degrees of freedom which are difficult to solve. Therefore, the aim of this work is to derive upscaled models on the lengthscale of the whole electrode, so that we do not have to resolve all the small-scale features of the porous microstructure, thus reducing the computational time and cost. We do this by applying two different upscaling techniques - the Asymptotic Homogenization Method and the Multiscale Finite Element Method (MsFEM).
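For reference, a textbook form of the Butler-Volmer condition mentioned above relates the interfacial current density \(i\) to the overpotential \(\eta\) (this is the generic electrochemistry formulation; the exact scaling used in the thesis may differ):
\[ i = i_0 \left[ \exp\!\left( \frac{\alpha_a F \eta}{R T} \right) - \exp\!\left( -\frac{\alpha_c F \eta}{R T} \right) \right], \]
with exchange current density \(i_0\), anodic and cathodic transfer coefficients \(\alpha_a\) and \(\alpha_c\), Faraday constant \(F\), gas constant \(R\) and temperature \(T\). The exponential dependence on \(\eta\) is what makes the interface coupling highly nonlinear.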
We consider the electrolyte and the solid as two self-complementary perforated domains, and we exploit this idea with both upscaling methods. The first method is restricted to periodic media and periodically oscillating solutions, while the second can be applied to randomly oscillating solutions and is based on the Finite Element Method framework. We apply the Asymptotic Homogenization Method to derive a coupled macro-micro upscaled model under the assumption of a periodic electrode microstructure. A crucial step in the homogenization procedure is the upscaling of the Butler-Volmer interface conditions. We rigorously determine the asymptotic order of the interface exchange current densities, and we perform a comprehensive numerical study in order to validate the derived homogenized Li-ion battery model. In order to upscale the microscale battery problem in the case of a random electrode microstructure, we apply the MsFEM, extended to problems in perforated domains with Neumann boundary conditions on the holes. We conduct a detailed numerical investigation of the proposed algorithm and show numerical convergence of the method that we design. We also apply the developed technique to a simplified two-dimensional Li-ion battery problem and show numerical convergence of the solution obtained with the MsFEM to the reference microscale one. Vasilena Taralova doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4086 Thu, 28 May 2015 09:01:35 +0200

Simulation of Degradation Processes in Lithium-Ion Batteries https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4085
Lithium-ion batteries are increasingly becoming a ubiquitous part of our everyday life - they are present in mobile phones, laptops, tools, cars, etc. However, there are still many concerns about their longevity and their safety. In this work we focus on the simulation of several degradation mechanisms on the microscopic scale, where one can resolve the active materials inside the electrodes of the lithium-ion batteries as porous structures. We mainly study two aspects - heat generation and mechanical stress. For the former we consider an electrochemical non-isothermal model on the spatially resolved porous scale to observe the temperature increase inside a battery cell, as well as to observe the individual heat sources and assess their contributions to the total heat generation. As a result of our experiments, we determined that the temperature has very small spatial variance for our test cases, which thus allows for an ODE formulation of the heat equation. The second aspect that we consider is the generation of mechanical stress as a result of the insertion of lithium ions into the electrode materials. We study two approaches - small strain models and finite strain models. For the small strain models, the initial geometry and the current geometry coincide. The model considers a diffusion equation for the lithium ions and an equilibrium equation for the mechanical stress. First, we test a single perforated cylindrical particle using different boundary conditions for the displacement and Neumann boundary conditions for the diffusion equation. We also test cylindrical particles with boundary conditions for the diffusion equation in the electrodes coming from an isothermal electrochemical model for the whole battery cell. For the finite strain models we take into consideration the deformation of the initial geometry as a result of the intercalation and the mechanical stress.
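To make the small strain setting described above concrete, a common generic chemo-mechanical formulation (the constitutive details in the thesis may differ) couples a diffusion equation with mechanical equilibrium via a concentration-dependent eigenstrain:
\[ \partial_t c = \nabla \cdot (D \nabla c), \qquad \operatorname{div} \sigma = 0, \qquad \sigma = \mathbb{C} : \Big( \varepsilon(u) - \frac{\Omega}{3}(c - c_0) I \Big), \]
where \(c\) is the lithium concentration, \(u\) the displacement, \(\varepsilon(u)\) the linearized strain, \(\Omega\) the partial molar volume and \(c_0\) a reference concentration; the insertion of lithium thus acts on the mechanics much like thermal expansion.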
We compare two elastic models to study the sensitivity of the predicted elastic behavior to the specific model used. We also consider a softening of the active material dependent on the concentration of the lithium ions, using data for silicon electrodes. We recover the general behavior of the stress from known physical experiments. Some models, like the mechanical models we use, depend on the local values of the concentration to predict the mechanical stress. In that sense we perform a short comparative study between the Finite Element Method with tetrahedral elements and the Finite Volume Method with voxel volumes for an isothermal electrochemical model. The spatial discretizations of the PDEs are done using the Finite Element Method. For some models we have discontinuous quantities, for which we adapt the FEM accordingly. The time derivatives are discretized using the implicit backward Euler method. The nonlinear systems are linearized using Newton's method. All of the discretized models are implemented in a C++ framework developed during the thesis. Maxim Taralov doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4085 Thu, 28 May 2015 08:47:34 +0200

Isogeometric Finite Element Analysis of Nonlinear Structural Vibrations https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4079
In this thesis we present a new method for the nonlinear frequency response analysis of mechanical vibrations. For an efficient spatial discretization of the nonlinear partial differential equations of continuum mechanics we employ the concept of isogeometric analysis. Isogeometric finite element methods have already been shown to possess advantages over classical finite element discretizations in terms of exact geometry representation and higher accuracy of numerical approximations using spline functions. For computing the nonlinear frequency response to periodic external excitations, we rely on the well-established harmonic balance method. It expands the solution of the nonlinear ordinary differential equation system resulting from the spatial discretization as a truncated Fourier series in the frequency domain. A fundamental aspect for enabling large-scale and industrial application of the method is model order reduction of the spatial discretization of the equation of motion. We therefore propose the use of a modal projection method enhanced with modal derivatives, providing second-order information. We investigate the concept of modal derivatives theoretically, and using computational examples we demonstrate the applicability and accuracy of the reduction method for nonlinear static computations and vibration analysis. Furthermore, we extend nonlinear vibration analysis to incompressible elasticity using isogeometric mixed finite element methods. Oliver Weeger doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4079 Wed, 20 May 2015 11:46:03 +0200

Isogeometric Shell Discretizations for Flexible Multibody Dynamics https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4076
This work aims at including nonlinear elastic shell models in a multibody framework. We focus our attention on Kirchhoff-Love shells and explore the benefits of an isogeometric approach, the latest development in finite element methods, within a multibody system. Isogeometric analysis extends isoparametric finite elements to more general functions such as B-Splines and Non-Uniform Rational B-Splines (NURBS) and works on exact geometry representations even at the coarsest level of discretization.
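For concreteness, the univariate NURBS basis functions referred to here have the standard form
\[ R_{i,p}(\xi) = \frac{N_{i,p}(\xi)\, w_i}{\sum_{j} N_{j,p}(\xi)\, w_j}, \]
where the \(N_{i,p}\) are B-Spline basis functions of degree \(p\) and the \(w_i > 0\) are weights; for equal weights the NURBS reduce to B-Splines, while suitable weights allow conic sections to be represented exactly, which is what makes exact geometry representation possible.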
Using NURBS as basis functions, the high regularity requirements of the shell model, which are difficult to achieve with standard finite elements, are easily fulfilled. A particular advantage is the promise of simplifying the mesh generation step, as mesh refinement is easily performed without the need for communication with the geometry representation in a Computer-Aided Design (CAD) tool. Quite often the domain consists of several patches, each parametrized by means of NURBS, and these patches are then glued together by means of continuity conditions. Although the techniques known from domain decomposition can be carried over to this situation, the analysis of shell structures is substantially more involved, as additional angle preservation constraints between the patches might arise. In this work, we address this issue in the stationary and transient case and make use of the analogy to constrained mechanical systems with joints and springs as interconnection elements. The starting point of our work is the bending strip method, a penalty approach that adds extra stiffness to the interface between adjacent patches and which is found to lead to a so-called stiff mechanical system that might suffer from ill-conditioning and severe stepsize restrictions during time integration. As a remedy, an alternative formulation is developed that improves the condition number of the system and removes the dependence on the penalty parameter. Moreover, we study another alternative formulation with continuity constraints applied to triples of control points at the interface. The approach presented here to tackle stiff systems is quite general and can be applied to all penalty problems fulfilling certain regularity requirements. The numerical examples demonstrate an impressive convergence behavior of the isogeometric approach even for a coarse mesh, while offering substantial savings with respect to the number of degrees of freedom. We show a comparison between the different multipatch approaches and observe that the alternative formulations are well conditioned, independent of any penalty parameter, and give the correct results. We also present a technique to couple the isogeometric shells with multibody systems using a pointwise interaction. Anmol Goyal doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4076 Tue, 19 May 2015 09:55:55 +0200

Portfolio Optimization and Stochastic Control under Transaction Costs https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4073
This thesis is concerned with stochastic control problems under transaction costs. In particular, we consider a generalized menu cost problem with partially controlled regime switching, general multidimensional running cost problems, and the maximization of long-term growth rates in incomplete markets. The first two problems are considered under a general cost structure that includes a fixed cost component, whereas the latter is analyzed under proportional and Morton-Pliska transaction costs. For the menu cost problem and the running cost problem we provide an equivalent characterization of the value function by means of a generalized version of the Ito-Dynkin formula, instead of the more restrictive, traditional approach via quasi-variational inequalities (QVIs). Based on the finite element method and weak solutions of QVIs in suitable Sobolev spaces, the value function is constructed iteratively.
In addition to the analytical results, we study a novel application of the menu cost problem in management science. We consider a company that aims to implement an optimal investment and marketing strategy and must decide when to issue a new version of a product and when and how much to invest in marketing. For the long-term growth rate problem we provide a rigorous asymptotic analysis under both proportional and Morton-Pliska transaction costs in a general incomplete market that includes, for instance, the Heston stochastic volatility model and the Kim-Omberg stochastic excess return model as special cases. By means of a dynamic programming approach, leading-order optimal strategies are constructed and the leading-order coefficients in the expansions of the long-term growth rates are determined. Moreover, we analyze the asymptotic performance of Morton-Pliska strategies in settings with proportional transaction costs. Finally, pathwise optimality of the constructed strategies is established. Yaroslav Melnyk doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4073 Mon, 18 May 2015 10:01:57 +0200

Robustness for regression models with asymmetric error distribution https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4046
In this work we focus on regression models with asymmetric error distributions, more precisely, with extreme value error distributions. This thesis arises in the framework of the project "Robust Risk Estimation". Starting in July 2011, this project received three years of funding from the Volkswagen Foundation in the call "Extreme Events: Modelling, Analysis, and Prediction" within the initiative "New Conceptual Approaches to Modelling and Simulation of Complex Systems". The project involves applications in Financial Mathematics (operational and liquidity risk), Medicine (length of stay and cost), and Hydrology (river discharge data). These applications are bridged by the common use of robustness and extreme value statistics. Within the project, issues arise in each of these applications which can be dealt with by means of Extreme Value Theory, adding extra information in the form of regression models. The particular challenge in this context concerns asymmetric error distributions, which significantly complicate the computations and make the desired robustification extremely difficult. To this end, this thesis makes a contribution. The work consists of three main parts. The first part is focused on the basic notions and gives an overview of the existing results in Robust Statistics and Extreme Value Theory. We also provide some diagnostics, an important achievement of our project work. The second part of the thesis presents a deeper analysis of the basic models and tools used to achieve the main results of the research. The second part is the most important part of the thesis and contains our own contributions. First, in Chapter 5, we develop robust procedures for the risk management of complex systems in the presence of extreme events. The mentioned applications involve time structure (e.g. hydrology); therefore, we provide extreme value theory methods with time dynamics. To this end, in the framework of the project we considered two strategies. In the first one, we capture the dynamics with a state-space model and apply extreme value theory to the residuals; in the second one, we integrate the dynamics by means of autoregressive models, where the regressors are described by generalized linear models.
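For reference, the classical (non-robust) Kalman filter recursion underlying the first strategy is, for a linear state-space model \(x_t = A x_{t-1} + v_t\), \(y_t = H x_t + \varepsilon_t\) with noise covariances \(Q\) and \(R\):
\begin{align*} \hat{x}_{t|t-1} &= A \hat{x}_{t-1|t-1}, \qquad P_{t|t-1} = A P_{t-1|t-1} A^\top + Q, \\ K_t &= P_{t|t-1} H^\top (H P_{t|t-1} H^\top + R)^{-1}, \\ \hat{x}_{t|t} &= \hat{x}_{t|t-1} + K_t (y_t - H \hat{x}_{t|t-1}), \qquad P_{t|t} = (I - K_t H) P_{t|t-1}. \end{align*}
Robust versions of the type described next essentially bound the influence of the innovation \(y_t - H \hat{x}_{t|t-1}\), so that single outlying observations cannot dominate the update.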
More precisely, since the classical procedures are not appropriate in the presence of outliers, for the first strategy we rework the classical Kalman smoother and extended Kalman procedures in a robust way for different types of outliers, and we illustrate the performance of the new procedures in a GPS application and a stylized outlier situation. To apply the shrinking-neighborhood approach we need some smoothness; therefore, for the second strategy, we derive smoothness of the generalized linear model in terms of L2 differentiability and give sufficient conditions for it in the cases of stochastic and deterministic regressors. Moreover, we introduce time dependence in these models by linking the distribution parameters to their own past observations. The advantage of our approach is its applicability to error distributions with a higher dimensional parameter and to regressors of possibly different length for each parameter. Further, we apply our results to models with generalized Pareto and generalized extreme value error distributions. Finally, we create an exemplary implementation of the fixed point iteration algorithm for the computation of the optimally robust influence curve in R. Here we do not aim to provide the most flexible implementation, but rather sketch how it should be done and retain points of particular importance. In the third part of the thesis we discuss three applications - operational risk, hospitalization times and hydrological river discharge data - apply our code to a real data set taken from the Jena university hospital ICU, and provide the reader with various illustrations and detailed conclusions. Daria Pupashenko doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4046 Thu, 16 Apr 2015 13:53:08 +0200

Worst-Case Portfolio Optimization: Transaction Costs and Bubbles https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4045
In this thesis we extend the worst-case modeling approach, as first introduced by Hua and Wilmott (1997) (option pricing in discrete time) and Korn and Wilmott (2002) (portfolio optimization in continuous time), in various directions. In the continuous-time worst-case portfolio optimization model (as first introduced by Korn and Wilmott (2002)), the financial market is assumed to be under the threat of a crash, in the sense that the stock price may crash by an unknown fraction at an unknown time. It is assumed that only an upper bound on the size of the crash is known and that the investor prepares for the worst possible crash scenario. That is, the investor aims to find the strategy maximizing her objective function in the worst-case crash scenario. In the first part of this thesis, we consider the model of Korn and Wilmott (2002) in the presence of proportional transaction costs. First, we treat the problem without crashes, show that the value function is the unique viscosity solution of a dynamic programming equation (DPE), and then construct the optimal strategies. We then consider the problem in the presence of crash threats, derive the corresponding DPE and characterize the value function as its unique viscosity solution. In the last part, we consider the worst-case problem with a random number of crashes by proposing a regime switching model in which each state corresponds to a different crash regime. We interpret each of the crash-threatened regimes of the market as a state in which a financial bubble has formed which may lead to a crash.
In this model, we prove that the value function is a classical solution of a system of DPEs and derive the optimal strategies. Christoph Belak doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4045 Tue, 07 Apr 2015 10:17:10 +0200

Modeling and design optimization of textile-like materials via homogenization and one-dimensional models of elasticity https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4019
The work consists of two parts. In the first part, an optimization problem for structures of linear elastic material with contact, modeled by Robin-type boundary conditions, is considered. The structures model textile-like materials and possess certain quasiperiodicity properties. The homogenization method is used to represent the structures by homogeneous elastic bodies and is essential for the formulation of the effective stress and Poisson's ratio optimization problems. At the micro-level, the classical one-dimensional Euler-Bernoulli beam model, extended with jump conditions at contact interfaces, is used. The stress optimization problem is of PDE-constrained optimization type, and the adjoint approach is exploited. Several numerical results are provided. In the second part, a nonlinear model for the simulation of textiles is proposed. The yarns are modeled by a hyperelastic law and have no bending stiffness. The friction is modeled by the Capstan equation. The model is formulated as a problem with rate-independent dissipation, and the basic continuity and convexity properties are investigated. The part ends with numerical experiments and a comparison of the results to a real measurement. Vladimir Shiryaev doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4019 Mon, 09 Mar 2015 14:42:08 +0100

Modeling and Simulation of a Moving Rigid Body in a Rarefied Gas https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4012
We present a numerical scheme to simulate a moving rigid body of arbitrary shape suspended in a rarefied gas micro flow, in view of applications to complex computations of moving structures in micro or vacuum systems. The rarefied gas is simulated by solving the Boltzmann equation using a DSMC particle method. The motion of the rigid body is governed by the Newton-Euler equations, where the force and the torque on the rigid body are computed from the momentum transfer of the gas molecules colliding with the body. The resulting motion of the rigid body in turn affects the gas flow in its surroundings; this means that a two-way coupling has been modeled. We validate the scheme by performing various numerical experiments in 1-, 2- and 3-dimensional computational domains: a 1-dimensional actuator problem, a 2-dimensional cavity-driven flow problem, Brownian diffusion of a spherical particle with both translational and rotational motion, and finally thermophoresis on a spherical particle. We compare the numerical results obtained from the simulations with the existing theories in each test example. Samir Shrestha doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4012 Wed, 04 Mar 2015 11:43:53 +0100

Testrig optimization by block loads: Remodelling of damage as Gaussian functions and their clustering method https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4003
In automotive testrigs we apply load time series to components such that the outcome is as close as possible to some reference data.
The testing procedure should in general be less expensive and at the same time take less time for testing. In my thesis, I propose a testrig damage optimization problem (WSDP). This approach improves upon the testrig stress optimization problem (TSOP) used as the state of the art by industry experts. In both the (TSOP) and the (WSDP), we optimize the load time series for a given testrig configuration. As the name suggests, in the (TSOP) the reference data is the stress time series. The detailed behaviour of the stresses as functions of time is sometimes not the most important topic; instead, the damage potential of the stress signals is considered. Since damage is not part of the objectives in the (TSOP), the total damage computed from the optimized load time series is not optimal with respect to the reference damage. Additionally, the load time series obtained is as long as the reference stress time series, and the total damage computation needs cycle counting algorithms and Goodman corrections. The use of cycle counting algorithms makes the computation of damage from load time series non-differentiable. To overcome these issues, this thesis uses block loads for the load time series. Using block loads makes the damage differentiable with respect to the load time series. Additionally, in some special cases it is shown that damage is convex when block loads are used, and no cycle counting algorithms are required. Using load time series with block loads enables us to use damage in the objective function of the (WSDP). During every iteration of the (WSDP), we have to find the maximum total damage over all plane angles. The first attempt at solving the (WSDP) uses a discretization of the interval for the plane angle to find the maximum total damage at each iteration. This is shown to give unreliable results and makes the maximum total damage function non-differentiable with respect to the plane angle. To overcome this, the damage function for a given surface stress tensor due to a block load is remodelled by Gaussian functions. The parameters for the new model are derived. When we model the damage by Gaussian functions, the total damage is computed as a sum of Gaussian functions. Finding the plane with the maximum damage is similar to finding the modes of Gaussian Mixture Models (GMM), the difference being that the Gaussian functions used in GMM are probability density functions, which is not the case in the damage approximation presented in this work. We derive conditions for a single maximum of sums of Gaussian functions, similar to the ones given for the unimodality of GMM by Aprausheva et al. in [1]. Using these conditions, we give a clustering algorithm that merges the Gaussian functions of the sum into clusters. Each cluster obtained through clustering is such that it has a single maximum in the absence of the other Gaussian functions of the sum. The approximate point of the maximum of each cluster is used as the starting point for a fixed point equation on the original damage function to get the actual maximum total damage at each iteration. We implement the method for the (TSOP) and the two methods (with discretization and with clustering) for the (WSDP) on two example problems. The results obtained from the (WSDP) using discretization are shown to be better than the results obtained from the (TSOP). Furthermore, we show that the (WSDP) using the clustering approach to find the maximum total damage takes fewer iterations and is more reliable than using discretization.
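To illustrate the kind of fixed point iteration described above, consider a generic mean-shift-type iteration for a sum of Gaussians (a sketch of this class of methods, not necessarily the thesis's exact algorithm): for \(D(\theta) = \sum_i a_i \exp\big(-(\theta - \mu_i)^2 / (2\sigma_i^2)\big)\) with \(a_i > 0\), setting \(D'(\theta) = 0\) gives the fixed point equation
\[ \theta = \frac{\sum_i \frac{a_i}{\sigma_i^2} \exp\big(-(\theta - \mu_i)^2 / (2\sigma_i^2)\big)\, \mu_i}{\sum_i \frac{a_i}{\sigma_i^2} \exp\big(-(\theta - \mu_i)^2 / (2\sigma_i^2)\big)}, \]
a weighted mean of the centers \(\mu_i\); iterating this map from a point inside a cluster typically converges to that cluster's local maximum, which is why one good starting point per cluster suffices.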
Chhitiz Buchasia doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/4003 Tue, 24 Feb 2015 11:08:29 +0100

Combinations of Boolean Groebner Bases and SAT Solvers https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3958
In this thesis, we combine Groebner bases with SAT solvers in different ways. Both SAT solvers and Groebner basis techniques have their own strengths and weaknesses, and combining them can compensate for these weaknesses. The first combination uses Groebner techniques to learn additional binary clauses for the SAT solver from a selection of clauses. This combination was first proposed by Zengler and Kuechlin. However, in our experiments, about 80 percent of the Groebner basis computations give no new binary clauses. By selecting a smaller and more compact input for the Groebner basis computations, we can significantly reduce the number of inefficient Groebner basis computations and learn many more binary clauses. In addition, the new strategy can reduce the solving time of a SAT solver in general, especially for large and hard problems. The second combination uses an all-solution SAT solver and interpolation to compute Boolean Groebner bases of Boolean elimination ideals of a given ideal. Computing the Boolean Groebner basis of the given ideal is inefficient when we want to eliminate most of the variables from a big system of Boolean polynomials. Therefore, we propose a more efficient approach to handle such cases. In this approach, the given ideal is translated into a CNF formula. Then an all-solution SAT solver is used to find the projection of all solutions of the given ideal. Finally, an algorithm, e.g. the Buchberger-Moeller algorithm, is used to associate the reduced Groebner basis with the projection. We also optimize the Buchberger-Moeller algorithm for the lexicographical ordering and compare it with Brickenstein's interpolation algorithm. Finally, we apply Groebner basis and abstraction techniques to the verification of digital designs that contain complicated data paths. For a given design, we construct an abstract model. Then, we reformulate it as a system of polynomials in the ring \({\mathbb Z}_{2^k}[x_1,\dots,x_n]\). The variables are ordered in such a way that the system is already a Groebner basis w.r.t. the lexicographical monomial ordering. Finally, the normal form is employed to prove the desired properties. To evaluate our approach, we verify the global property of a multiplier and of a FIR filter using the computer algebra system Singular. The results show that our approach is much faster than the commercial verification tool from Onespin on these benchmarks. Thanh Hung Nguyen doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3958 Thu, 18 Dec 2014 14:11:19 +0100

Multilevel Constructions https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3942
The thesis consists of two chapters. The first chapter makes a deep investigation of the MLMC method. In particular, we take an optimisation view of the estimate: rather than fixing the number of discretisation points \(n_i\) to be a geometric sequence, we try to find an optimal setup for the \(n_i\) such that, for a fixed error, the estimate can be computed within minimal time. In the second chapter we propose to enhance the MLMC estimate with the weak extrapolation technique. This technique helps to improve the order of weak convergence of a scheme and, as a result, to reduce the computational cost of an estimate.
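For context, the standard MLMC estimator in the sense of Giles, on which such an analysis builds, writes the expectation of a quantity \(P_L\) on the finest discretisation level as a telescoping sum
\[ \mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L} \mathbb{E}[P_\ell - P_{\ell-1}], \]
and estimates each term by an independent Monte Carlo average with \(N_\ell\) samples. Since the corrections \(P_\ell - P_{\ell-1}\) have small variance, most samples can be placed on coarse, cheap levels; the optimisation mentioned above concerns precisely the choice of the levels and the sample numbers.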
In particular, we study a high order weak extrapolation approach, which is known to be inefficient in the standard setting; however, a combination of the MLMC method and weak extrapolation yields an improvement of the MLMC method. Anton Kostiuk doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3942 Wed, 10 Dec 2014 08:29:03 +0100

Zinsoptimiertes Schuldenmanagement https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3931
Interest-optimized debt management aims to find the most efficient possible trade-off between the expected financing costs on the one hand and the risks to the public budget on the other. To approach this tension, we build, for the first time, a bridge between the problems of debt management and the methods of continuous-time dynamic portfolio optimization. The key element is a new metric for measuring financing costs, the perpetual costs. These reflect the average future financing costs and comprise both the already known interest payments and the still unknown costs of necessary follow-up financing. The volatility of the perpetual costs therefore also represents the risk of a given strategy; the longer the term of a financing, the smaller the fluctuation of the perpetual costs. The perpetual costs are the product of the present value of a debt portfolio and the perpetual rate, which is independent of the portfolio. To model the present value, we draw on the concept of a self-financing bond portfolio known from dynamic portfolio optimization, based here on a multidimensional affine-linear interest rate model. The growth of the debt portfolio is slowed or prevented by including the government's primary surplus as an external inflow in the self-financing model. Because of the variety of possible financing instruments, we do not choose their value weights as control variables, but instead control the sensitivities of the portfolio to different interest rate movements. From optimal sensitivities, optimal value weights for a wide variety of financing instruments can then be derived in a subsequent step. We demonstrate this by way of example using rolling-horizon bonds of different maturities. Finally, we solve two optimization problems with methods of stochastic control theory, in each case maximizing the expected utility of the perpetual costs. The utility functions are adapted to debt management and are characterized in particular by the fact that higher costs entail lower utility. In the first problem we consider a power utility function with constant relative risk aversion; in the second we choose a utility function which guarantees compliance with a prescribed debt or cost ceiling. Christoph Peters doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3931 Mon, 24 Nov 2014 09:09:39 +0100

Variance Reduction Procedures for Market Risk Estimation https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3885
Monte Carlo simulation is one of the commonly used methods for risk estimation on financial markets, especially for option portfolios, where any analytical approximation is usually too inaccurate.
However, the usually high computational effort for complex portfolios with a large number of underlying assets motivates the application of variance reduction procedures. Variance reduction for estimating the probability of high portfolio losses has been extensively studied by Glasserman et al. A great variance reduction is achieved by applying an exponential twisting importance sampling algorithm together with stratification. The popular and much faster Delta-Gamma approximation replaces the portfolio loss function in order to guide the choice of the importance sampling density, and it plays the role of the stratification variable. The main disadvantage of the proposed algorithm is that it is derived only for the case of Gaussian and some heavy-tailed changes in risk factors. Hence, our main goal is to preserve the main advantage of Monte Carlo simulation, namely its ability to perform a simulation under alternative assumptions on the distribution of the changes in risk factors, also in the variance reduction algorithms. Step by step, we construct new variance reduction techniques for estimating the probability of high portfolio losses. They are based on the idea of the Cross-Entropy importance sampling procedure. More precisely, the importance sampling density is chosen as the one closest to the optimal importance sampling density (the zero variance estimator) out of some parametric family of densities, with respect to the Kullback-Leibler cross-entropy. Our algorithms are based on special choices of the parametric family and can use any approximation of the portfolio loss function. A special stratification is developed, so that any approximation of the portfolio loss function under any assumption on the distribution of the risk factors can be used. The constructed algorithms can easily be applied for any distribution of risk factors, no matter whether light- or heavy-tailed. The numerical study exhibits a greater variance reduction than that of the algorithm of Glasserman et al. The use of a better approximation may improve the performance of our algorithms significantly, as is shown in the numerical study. The literature on the estimation of the popular market risk measures, namely VaR and CVaR, often refers to the algorithms for estimating the probability of high portfolio losses, describing the corresponding transition process only briefly; hence, we give a step-by-step discussion of this problem. Results necessary to construct confidence intervals for both measures under the mentioned variance reduction procedures are also given. Mykhailo Pupashenko doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3885 Wed, 01 Oct 2014 09:47:40 +0200

New aspects of optimal investment in continuous time https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3867
This thesis deals with some new aspects of continuous-time portfolio optimization using the stochastic control method. First, we extend the Busch-Korn-Seifried model for a large investor by using the Vasicek model for the short rate, and the problem is solved explicitly for two types of intensity functions. Next, we justify the existence of the constant proportion portfolio insurance (CPPI) strategy in a framework containing a stochastic short rate and a Markov switching parameter. The effect of the Vasicek short rate on the CPPI strategy has been studied by Horsky (2012).
This part of the thesis extends his research by including a Markov switching parameter, and the generalization is based on the Bäuerle-Rieder investment problem. Explicit solutions are obtained for the portfolio problem without the money market account as well as for the portfolio problem with the money market account. Finally, we apply the method used in the Busch-Korn-Seifried investment problem to explicitly solve the portfolio optimization with a stochastic benchmark. Nhat Thu Tran doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3867 Tue, 09 Sep 2014 12:50:40 +0200

Edgeworth Expansions for Binomial Trees https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3861
In the theory of option pricing one is usually concerned with evaluating expectations under the risk-neutral measure in a continuous-time model. However, very often these values cannot be calculated explicitly, and numerical methods need to be applied to approximate the desired quantity. Monte Carlo simulations, numerical methods for PDEs and the lattice approach are the methods typically employed. In this thesis we consider the latter approach, with the main focus on binomial trees. The binomial method is based on the concept of weak convergence. The discrete-time model is constructed so as to ensure convergence in distribution to the continuous process. This means that the expectations calculated in the binomial tree can be used as approximations of the option prices in the continuous model. The binomial method is easy to implement and can be adapted to options with different types of payout structures, including American options. This makes the approach very appealing. However, the problem is that in many cases the convergence of the method is slow and highly irregular, and even a fine discretization does not guarantee accurate price approximations. Therefore, ways of improving the convergence properties are required. We apply Edgeworth expansions to study the convergence behavior of the lattice approach. We propose a general framework that allows us to obtain asymptotic expansions for both multinomial and multidimensional trees. This information is then used to construct advanced models with superior convergence properties. In binomial models we usually deal with triangular arrays of lattice random vectors, in which case the available results on Edgeworth expansions for lattices are not directly applicable. Therefore, we first present Edgeworth expansions which are also valid in the binomial tree setting. We then apply these results to the one-dimensional and multidimensional Black-Scholes models. We obtain third order expansions for general binomial and trinomial trees in the 1D setting and construct advanced models for digital, vanilla and barrier options. Second order expansions are provided for the standard 2D binomial trees, and advanced models are constructed for the two-asset digital and the two-asset correlation options. We also present advanced binomial models for the multidimensional setting. Alona Bock doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3861 Tue, 02 Sep 2014 09:07:50 +0200

Portfoliooptimierung im Binomialmodell https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3849
The dissertation "Portfoliooptimierung im Binomialmodell" (portfolio optimization in the binomial model) addresses the question of the extent to which the problem of optimal portfolio selection is solvable in the binomial model, and to what extent the results carry over to the continuous model.
In addition to the classical model without costs and without changes in the market situation, model extensions are also investigated. Henriette Kröner doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3849 Thu, 14 Aug 2014 08:15:18 +0200

Algorithms in SINGULAR: Parallelization, Syzygies, and Singularities https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3840
This thesis, whose subject is located in the field of algorithmic commutative algebra and algebraic geometry, consists of three parts. The first part is devoted to parallelization, a technique which allows us to take advantage of the computational power of modern multicore processors. First, we present parallel algorithms for the normalization of a reduced affine algebra A over a perfect field. Starting from the algorithm of Greuel, Laplagne, and Seelisch, we propose two approaches. For the local-to-global approach, we stratify the singular locus Sing(A) of A, compute the normalization locally at each stratum, and finally reconstruct the normalization of A from the local results. For the second approach, we apply modular methods to both the global and the local-to-global normalization algorithms. Second, we propose a parallel version of the algorithm of Gianni, Trager, and Zacharias for primary decomposition. For the parallelization of this algorithm, we use modular methods for the computationally hardest steps, such as the computation of the associated prime ideals in the zero-dimensional case and the standard bases computations. We then apply an innovative fast method to verify that the result is indeed a primary decomposition of the input ideal. This allows us to skip the verification step at each of the intermediate modular computations. The proposed parallel algorithms are implemented in the open-source computer algebra system SINGULAR. The implementation is based on SINGULAR's new parallel framework, which has been developed as part of this thesis and which is specifically designed for applications in mathematical research. In the second part, we propose new algorithms for the computation of syzygies, based on an in-depth analysis of Schreyer's algorithm. Here, the main ideas are that we may leave out so-called "lower order terms" which do not contribute to the result of the algorithm, that we do not need to order the terms of certain module elements which occur at intermediate steps, and that some partial results can be cached and reused. Finally, the third part deals with the algorithmic classification of singularities over the real numbers. First, we present a real version of the Splitting Lemma and, based on the classification theorems of Arnold, algorithms for the classification of the simple real singularities. In addition to the algorithms, we also provide insights into how real and complex singularities are related geometrically. Second, we explicitly describe the structure of the equivalence classes of the unimodal real singularities of corank 2. We prove that the equivalences are given by automorphisms of a certain shape. Based on this theorem, we explain in detail how the structure of the equivalence classes can be computed using SINGULAR and present the results in concise form. Probably the most surprising outcome is that the real singularity type \(J_{10}^-\) is actually redundant.
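As background for the second part above, the standard definition: given elements \(f_1, \dots, f_k\) of a module over a ring \(R\), the syzygies are the relations
\[ \operatorname{Syz}(f_1, \dots, f_k) = \Big\{ (a_1, \dots, a_k) \in R^k : \sum_{i=1}^{k} a_i f_i = 0 \Big\}, \]
which form an \(R\)-module themselves; Schreyer's algorithm computes a Groebner basis of this syzygy module from a Groebner basis of the input, and the improvements described concern exactly this computation.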
Andreas Steenpass doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3840 Wed, 30 Jul 2014 10:37:00 +0200

Algorithmic aspects of tropical intersection theory https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3823
In the first part of this thesis we study algorithmic aspects of tropical intersection theory. We analyse how divisors and intersection products on tropical cycles can actually be computed using polyhedral geometry. The main focus is the study of moduli spaces, where the underlying combinatorics of the varieties involved allow a much more efficient way of computing certain tropical cycles. The algorithms discussed here have been implemented in an extension for polymake, a software for polyhedral computations. In the second part we apply the algorithmic toolkit developed in the first part to the study of tropical double Hurwitz cycles. Hurwitz cycles are a higher-dimensional generalization of Hurwitz numbers, which count covers of \(\mathbb{P}^1\) by smooth curves of a given genus with a certain fixed ramification behaviour. Double Hurwitz numbers provide a strong connection between various mathematical disciplines, including algebraic geometry, representation theory and combinatorics. The tropical cycles have a rather complex combinatorial nature, so it is very difficult to study them purely "by hand". Being able to compute examples has been very helpful in coming up with theoretical results. Our main result states that all marked and unmarked Hurwitz cycles are connected in codimension one and that for a generic choice of simple ramification points the marked cycle is a multiple of an irreducible cycle. In addition, we provide computational examples to show that this is the strongest possible statement. Simon Hampe doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3823 Thu, 03 Jul 2014 09:26:06 +0200

Efficient Algorithms for Flow Simulation related to Nuclear Reactor Safety https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3826
Safety analysis is of utmost importance for operating Nuclear Power Plants (NPP). The overall modeling and simulation of the physical and chemical processes occurring in the course of an accident is an interdisciplinary problem, with origins in fluid dynamics, numerical analysis, reactor technology and computer programming. The aim of the study is therefore to create the foundations of a multi-dimensional non-isothermal fluid model for an NPP containment and a software tool based on it. The numerical simulations allow us to analyze and predict the behavior of NPP systems under different working and accident conditions, and to develop proper action plans for minimizing the risks of accidents and/or minimizing the consequences of possible accidents. A very large number of scenarios have to be simulated, and at the same time acceptable accuracy for the critical parameters, such as radioactive pollution, temperature, etc., has to be achieved. The existing software tools are either too slow or not accurate enough. This thesis deals with developing customized algorithms and software tools for the simulation of isothermal and non-isothermal flows in a containment pool of an NPP. Requirements for such software are formulated, and proper algorithms are presented. The goal of the work is to achieve a balance between accuracy and speed of calculation, and to develop a customized algorithm for this special case.
Different discretization and solution approaches are studied, and those which correspond best to the formulated goal are selected, adjusted, and, when possible, analysed. A fast directional splitting algorithm for the Navier-Stokes equations in complicated geometries, in the presence of solid and porous obstacles, is at the core of the algorithm. Developing a suitable pre-processor and customized domain decomposition algorithms is an essential part of the overall algorithm and software. Results from numerical simulations in test geometries and in real geometries are presented and discussed. Tatiana Gornak doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3826 Thu, 03 Jul 2014 08:29:14 +0200

Efficient algorithms for Asymmetric Flow Field Flow Fractionation https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3811
This thesis is devoted to the modeling and simulation of Asymmetric Flow Field Flow Fractionation, a technique for separating particles on the submicron scale. This process is part of the large family of Field Flow Fractionation techniques and has a very broad range of industrial applications, e.g. in microbiology, chemistry, pharmaceutics and environmental analysis. Mathematical modeling is crucial for this process because, due to the very nature of the process, lab experiments are difficult and expensive to perform. On the other hand, there are several challenges for the mathematical modeling: the huge dominance (up to \(10^6\) times) of the flow over the diffusion, and the highly stretched geometry of the device. This work is devoted to developing fast and efficient algorithms which take into account the challenges posed by the application and provide reliable approximations for the quantities of interest. We present a new Multilevel Monte Carlo method for estimating distribution functions on a compact interval, which are of main interest for Asymmetric Flow Field Flow Fractionation. Error estimates for this method in terms of computational cost are also derived. We optimize the flow control at the focusing stage under the given constraints on the flow and present important ingredients for further optimization, such as a two-grid Reduced Basis method specially adapted to the Finite Volume discretization approach. Tigran Nagapetyan doctoralthesis https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/3811 Wed, 04 Jun 2014 09:42:28 +0200
My long, complexity-theoretic journey

So, what was I doing these past few weeks that could possibly take precedence over writing ill-considered blog entries that I’d probably regret for the rest of my life?

1. On the gracious invitation of Renato Renner, I visited one of Al Einstein’s old stomping-grounds: ETH Zürich.  There I gave a physics colloquium called How Much Information Is In A Quantum State?, as well as a talk on my paper Quantum Copy-Protection and Quantum Money, which has been more than three years in the procrastinating.  Though I was only in Switzerland for three days, I found enough time to go hiking in the Swiss Alps, if by “Swiss Alps” you mean a 200-foot hill outside the theoretical physics building.  I’m quite proud of having made it through this entire trip—my first to Switzerland—without once yodeling or erupting into cries of “Riiiiiiicola!”  On the other hand, what with the beautiful architecture, excellent public transportation, and wonderful hosts, it was a struggle to maintain my neutrality.

2. On the plane to and from Switzerland, I had the pleasure of perusing Computational Complexity: A Modern Approach, by Sanjeev Arora and Boaz Barak, which has just been published after floating around the interweb for many years.  If you’re a hardcore complexity lover, I can recommend buying a copy in the strongest terms.  The book lives up to its subtitle, concentrating almost entirely on developments within the last twenty years.  Classical complexity theorists should pay particular attention to the excellent quantum computing chapter, neither of whose authors has the slightest background in the subject.  You see, people, getting quantum right isn’t that hard, is it?  The book’s only flaw, an abundance of typos, is one that can and should be easily fixed in the next edition.

3. I then visited the National Institute of Standards and Technology—proud keepers of the meter and the kilogram—at their headquarters in Gaithersburg, MD.  There I gave my talk on Quantum Complexity and Fundamental Physics, a version of the shtick I did at the QIS workshop in Virginia.  Afterwards, I got to tour some of the most badass experimental facilities I’ve seen in a while.  (Setting standards and making precision measurements: is there anything else that sounds so boring but turns out to so not be?)  A highlight was the Center for Neutron Research, which houses what’s apparently the largest research reactor still operating in the US.  This thing has been operating since 1967, and it shoots large numbers of slow-moving neutrons in all directions so that archaeologists, chemists, physicists, etc. can feed off the trough and do their experiments.  The basic physics that’s been done there recently has included setting bounds on possible nonlinearities in the Schrödinger equation (even though any nonlinearity, no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time), as well as observing the photons that the Standard Model apparently predicts are emitted 2% of the time when a neutron decays.  I also got to see one of the world’s least jittery floors: using dynamical feedback, they apparently managed to make this floor ~10^7 times less jittery than a normal floor, good enough that they can run a double-slit experiment with slow neutrons on top of it and see the interference pattern.  (Before you ask: yes, I wanted to jump on the floor, but I didn’t.  Apparently I would’ve messed it up for a day.)
I have to add: the few times I’ve toured a nuclear facility, I felt profoundly depressed by the “retro” feel of everything around me: analog dials, safety signs from the 60s…  Why are no new reactors being built in the US, even while their value as stabilization wedges becomes increasingly hard to ignore?  Why are we unwilling to reprocess spent fuel rods like France does?  Why do people pin their hopes on the remote prospect of controlled fusion, ignoring the controlled fission we’ve had for half a century?  Why, like some horror-movie character unwilling to confront an evil from the past, have we decided that a major technology possibly crucial to the planet’s survival must remain a museum piece, part of civilization’s past and not its future?  Of course, these are rhetorical questions.  While you can be exposed to more radiation flying cross-country than working at a nuclear reactor for months, and while preventing a Chernobyl is as easy as using shielding and leaving on the emergency cooling system, human nature is often a more powerful force than physics.

4. Next I went to STOC’2009 in Bethesda, MD.  Let me say something about a few talks that are impossible not to say something about.

First, in what might or might not turn out to be the biggest cryptographic breakthrough in decades, Craig Gentry has proposed a fully homomorphic encryption scheme based on ideal lattices: that is, a scheme that lets you perform arbitrary computations on encrypted data without decrypting it.  Currently, Gentry’s scheme is not known to be breakable even by quantum computers—despite a 2002 result of van Dam, Hallgren, and Ip, which said that if a fully homomorphic encryption scheme existed, then it could be broken by a quantum computer.  (The catch?  Van Dam et al.’s result applied to deterministic encryption schemes; Gentry’s is probabilistic.)

Second, Chris Peikert (co-winner of the Best Paper Award) announced a public-key cryptosystem based on the classical worst-case hardness of the Shortest Vector Problem.  Previously, Regev had given such a cryptosystem based on the assumption that there’s no efficient quantum algorithm for SVP (see also here for a survey).  The latter was a striking result: even though Regev’s cryptosystem is purely classical, his reduction from SVP to breaking the cryptosystem was a quantum reduction.  What Peikert has now done is to “dequantize” Regev’s security argument by thinking very hard about it.  Of course, one interpretation of Peikert’s result is that classical crypto people no longer have to learn quantum mechanics—but a better interpretation is that they do have to learn QM, if only to get rid of it!  I eagerly await Oded Goldreich’s first paper on quantum computing (using it purely as an intellectual tool, of course).

Third, Robin Moser (co-winner of the Best Paper Award and winner of the Best Student Paper Award) gave a mindblowing algorithmic version of the Lovász Local Lemma.  Or to put it differently, Moser gave a polynomial-time algorithm that finds a satisfying assignment for a k-SAT formula, assuming that each clause intersects at most 2^(k-2) other clauses.  (It follows from the Local Lemma that such an assignment exists.)  Moser’s algorithm is absurdly simple: basically, you repeatedly pick an unsatisfied clause, and randomly set its variables so that it’s satisfied.
Then, if doing that has made any of the neighboring clauses unsatisfied, you randomly set their variables so that they’re satisfied, and so on, recursing until all the damage you’ve caused has also been fixed.  The proof that this algorithm actually halts in polynomial time uses a communication argument that, while simple, seemed so completely out of left field that when it was finished, the audience of theorists sort of let out a collective gasp, as if a giant black “QED” box were hovering in the air.

Fourth, Babai, Beals, and Seress showed that if G is a matrix group over a finite field of odd order, then the membership problem for G can be solved in polynomial time, assuming an oracle for the discrete logarithm problem.  This represents the culmination of about 25 years of work in computational group theory.  I was all pumped to announce an important consequence of this result not noted in the abstract—that the problem is therefore solvable in quantum polynomial time, because of Shor’s discrete log algorithm—but Laci, alas, scooped me on this highly nontrivial corollary in his talk.

5. Finally, I took the train up to Princeton, for a workshop on “Cryptography and Complexity: Status of Impagliazzo’s Worlds”.  (For the insufficiently nerdy: the worlds are Algorithmica, where P=NP; Heuristica, where P≠NP but the hard instances of NP-complete problems are hard to find; Pessiland, where the hard instances are easy to find but none of them can be used for cryptographic one-way functions; Minicrypt, where one-way functions do exist, enabling private-key cryptography, but not the trapdoor one-way functions needed for public-key cryptography; and Cryptomania, where trapdoor one-way functions exist, and cryptography can do pretty much anything you could ask.)  I gave a talk on Impagliazzo’s worlds in arithmetic complexity, based on ongoing joint work with Andy Drucker (where “ongoing” means we’re pretty sure more of our results are correct than would be expected by random guessing).

Tell you what: since it’s been a long time, feel free to ask whatever you feel like in the comments section, whether related to my journeys or not.  I’ll try to answer at least a constant fraction of questions.

56 Responses to “My long, complexity-theoretic journey”

1. Sean Carroll Says: And yet, Scott has been awesome enough to find time to read a book draft for me and offer excellent comments. Which is to say, very awesome.

2. Brian Says: Good to see you’re back, Scott. It’s always fun to see what you have to say – and occasionally I understand some of it!

3. asterix Says: Why was Gentry’s work less significant than Peikert’s work? I don’t mean this in a competitive way. I am not a crypto expert, but to me they both sound like astounding results, and I’m wondering why one is considered more of a breakthrough than the other. I assume Peikert’s may have some easy-to-explain idea behind it (like Moser’s) whereas Gentry’s is more technical? Is homomorphic encryption less important than using worst-case SVP problems to get crypto systems? Thanks.

4. John Sidles Says: Scott asserts: Any nonlinearity [in the Schrödinger equation], no matter how small, could be used to send superluminal signals and solve NP-complete problems in polynomial time. Gee whiz … Scott ….
for this to be true … don’t you have to add a pretty lengthy conditional clause: “in the Schrödinger equation on vector state-spaces having exponentially large dimension.” One reason for focusing on this stipulation is that in the world of practical calculations (meaning, PSPACE and PTIME resources) and also in the world of practical experiments (meaning, finite-temperature and/or noisy laboratories), it is commonplace to compute on tensor network state-spaces … which definitely are not vector spaces, but rather are Kähler manifolds. In working through this manifold-oriented framework for QIS/QIT, our QSE Group translated chapters 2 and 8 of Nielsen and Chuang’s textbook into the language of Kähler manifolds as contrasted with Hilbert space. This was a fun exercise (in which Ashtekar and Schilling preceded us by a decade, I will mention). The resulting QIS/QIT mathematical framework proved sufficiently compact that we could summarize it on one page … and this compactness proves to be very convenient for organizing large-scale QIS/QIT calculations. Now, it is true that when QIS/QIT is formulated as a manifold theory, the mathematical focus naturally shifts from the (linear) Hamiltonian dynamics of superposition to the (nonlinear) concentration dynamics that is generic to Lindbladians. But isn’t this a good thing … when it helps us to efficiently simulate large-scale quantum systems? As far as our QSE Group knows, there is nothing in orthodox Lindbladian quantum dynamics that permits “sending superluminal signals and solving NP-complete problems.” Because isn’t Lindbladian dynamics so constructed as to rule out these possibilities? Even when the Lindbladian dynamics concentrates quantum trajectories onto a non-vector manifold? The point being, noise and nonlinearity are valuable QIS/QIT resources—to be treasured, not scorned! :)

5. Domenic Denicola Says: Wow, thanks for enlightening us bystanders; those results from point #4 are really cool!

6. Arne Peeters Says: Unrelated (but you said it’s ok): It’s now 4 years on, so what’s your updated opinion on “waste papers”? Would you still write that post (given a sufficient reason to procrastinate 😉 ) and if not, what would you write instead?

7. Carl Says: If I die… Tell my wife… “Hello…”.

8. Scott Says: Why was Gentry’s work less significant than Peikert’s work? asterix: Please be assured, you’re not the only one to have asked that question! Since I wasn’t privy to the decision, and have no strong personal feelings about it, I suppose it’s OK for me to say what I know (and if it isn’t, and I end up using this blog to blab about something I shouldn’t one more time … well, who’s counting? :-) ). My understanding is that the PC had some concerns about the correctness of at least part of Gentry’s paper (and maybe about the underlying assumptions—though I’m just guessing there), and didn’t want to risk looking foolish by (e.g.) giving a Best Paper Award for a cryptosystem that might be broken half a year later. The trouble, of course, is that in a situation like this one, the PC runs that risk no matter what it does! :-) What I can say with confidence is the following: (1) Both Peikert’s and Moser’s papers would be clear contenders for the Best Paper Award in an ordinary year. (2) Gentry’s might someday be seen as the best paper ever to have been passed over for the Best Paper Award.

9.
Scott Says: John: You’re right, of course; I was talking about adding a nonlinear term to the Schrödinger equation while keeping the “rest” of QM (the state space, the measurement rule, etc.) unchanged. In reality, though, my view is that any nonlinear term (no matter how small) would amount to a complete collapse of QM—so that conditioned on finding such a term, the state space and everything else would seem like fair game as well.

10. anon Says: Yeah, what’s the deal? Could some crypto experts give their opinion on the extent to which the Gentry paper is correct? If it’s 100% correct then it’s completely amazing, right?

11. harrison Says: Scott, I haven’t read either of the papers, so forgive me if I’m missing something trivial, but it seems like if you used a sufficiently strong PRNG with Gentry’s cryptosystem, wouldn’t it then be (quantum) breakable by van Dam et al.? And therefore, either no PRNG exists which can fool a quantum computer, or Gentry’s argument is flawed? What am I missing here?

12. Aspiring Blogger Says: Scott, I love your blog and have been following it for a long time. Sometimes I entertain the thought of creating a forum as entertaining and intellectually rich as this one, but I ask myself — how much effort would it take? Since you opened the floor for questions, can you comment on this? How much time do you spend (or how much sleep do you lose!) on your blog? Any tips or admonitions for aspiring bloggers?

13. Scott Says: Harrison, as I understand it, the main issue is whether you can efficiently recognize an encryption of the all-0 string. Van Dam et al. assume you can (as would be the case, in particular, for any deterministic encryption scheme), while in Gentry’s scheme you presumably can’t. In which case, if you replaced the randomness in Gentry’s scheme by the output of a cryptographic PRG (with random seed), it would still be hard to recognize an encryption of the all-0 string (since otherwise, you’d get a polynomial-time algorithm for distinguishing the PRG output from true randomness, contrary to assumption). I trust others will correct me if I’m wrong.

14. Scott Says: Aspiring Blogger: Like many questions I’ve gotten over the years and never answered, yours really deserves a post of its own! Briefly: right now the blog doesn’t take much of my time, since I hardly ever update it. Back when I updated it every other day, it took maybe half my time. But that’s a statement more about me than about blogging: I understand many other bloggers are able to dash off a decent entry in 20 minutes; I’m more than an order of magnitude less efficient. Now to watch Colbert…

15. Bram Cohen Says: That paper should be called ‘Walksat finds the Lovasz Local Minima in polynomial time’. That result has of course been known for the special case of 2-clauses for a long time. There’s another possible world – one in which P=NP in circuit complexity but there’s no finite TM that solves arbitrarily large NP-complete problems in polynomial time.

16. Martin Schwarz Says: Hi Scott, I’m glad you’re back blogging! By the way, why didn’t you make it to your Vienna, Austria, talk this week? I would have enjoyed watching one of your terrific talks live and meeting you in person. best regards,

17.
Scott Says: Bram: There’s actually a huge number of possible worlds not covered by Russell’s classification (the one where P=NP but the algorithm takes n^10000 time; the one where there’s a uniform algorithm that solves SAT in polynomial time on particular input lengths only; the one where P≠NP but NP⊆BQP; the one where public-key crypto is possible using “lossy” trapdoor one-way functions, but ordinary TDOWFs don’t exist…). Indeed, Russell pointed out at the workshop that for the foreseeable future, the worlds are in far more danger of proliferating than they are of collapsing! Incidentally, I can easily imagine an alternative history of theoretical computer science, where instead of using complexity classes as our basic concept, we directly used the Impagliazzo-worlds (which are basically possibilities for collapses of complexity classes). Of course it might get cumbersome, as in principle there could be exponentially more of the latter than of the former. So maybe people who complain about the size of the current Complexity Zoo should count their blessings! On the other hand, I conjecture that Cryptomania, Pessiland, etc. would be a much easier public-relations sell than BQP and PSPACE.

18. Nagesh Says: Hi Scott, Nice to hear a summary of your recent travels. I myself wanted to summarize my own travels I have been doing lately, even though most of it was for family reasons :) So, as for the questions: can I ask what you proposed to do in your NSF CAREER proposal? Can you share the title (and proposal) if it’s ok?

19. Scott Says: Nagesh, the title of my CAREER proposal was “Limits on Efficient Computation in the Physical World” (same as my thesis title). The main things I proposed to work on, besides education, outreach, and diversifying, were (1) BQP vs. the polynomial hierarchy, (2) the need for structure in quantum speedups (e.g., quantum query complexity lower bounds for almost-total Boolean functions), and (3) non-relativizing techniques in quantum complexity theory. I understand these things are generally not made public, but email me if you want a copy.

20. Aspiring Complexity Researcher Says: Dear Scott, I am an advanced doctoral student in a systems area at a small school but have fallen in love with topics in complexity, tcs, discrete math and such. (And I am not a natural genius, but I am creative and enthusiastic about tcs!) What is your best suggestion for me as to ways and means by which I could make valuable contributions to any of the areas I mentioned above? So far, I have made tiny contributions and have been busy trying to squeeze time to read more about these things. But, for example, isn’t having a theorist to interact with / mentor essential? After graduation, what could I do to achieve my goal best? How could I, say, become a postdoc with a theory group without having done a lot of theory work? Right now, with the economy and what not, the future seems bleak for my love interest. Any thoughtful (as usual) guidance would be welcome. Of course, I enjoy your blog and all the remarkable work you are doing for tcs. I have also heard you talk at Harvard once. You inspire many like me. Please know that I am always wishing the very best for you. Thank you!

21. Scott Says: Martin: My apologies! I’d never been to Vienna, and had really wanted to go. Alas, I ended up having back-to-back travel in the two weeks prior, and urgently needed to get back to MIT as I had five summer students to meet and get started on their projects. I hope the workshop went well!
Warning to All Workshop Organizers: For whatever reason, I have an extraordinarily hard time saying ‘no’ to anyone, until I’m forced to by circumstances.

22. Scott Says: Aspiring Complexity Researcher: Given how far you’ve already gone in your studies, it sounds to me like your best bet is to pursue whatever career you were going to pursue in systems (whether that’s in industry or academia), and then look for connections with theory and for theorists to talk with. Quite a few scientists gradually change areas over time, so that what they eventually end up doing might be completely unrelated to what they got their PhD in. But they usually start by doing what they got their PhD in. :-) And the paths between different parts of CS happen to be particularly well-trodden ones: we really do talk to each other!

23. Nagesh Says: Thanks a lot for responding! I will email you now :)

24. Bobby Says: Regarding result #4, how is that different than having the input data encrypted with a public key, and the “computation” being the process of appending a message that encapsulates the computation to the end of the input data? The decryption process in this case would include both decrypting the input data and applying the computation message. I can see that it’s possible that this asymmetric encryption + messages process would be susceptible to “easy” quantum cracking, or that it may be provable that the method given in paper #4 is quicker to decrypt. Possibly that’s all that’s new that the paper is illuminating. However, given that the computation can easily add information to the encrypted message, it must be the case that the output data after the computation can be larger than the input data, so the process of computation must be able to lengthen the time it takes to decrypt the data. Even worse, if you can’t derive information about the encrypted data by the computation process, if the computation conditionally would add information to the encrypted data depending on the encrypted values, it must always add information to the encrypted data *even if the computation has no actual effect on the original data*. I.e. if the computation is “f(y) is y if the y

25. Bobby Says: Hrm, last post got truncated. Continuation: I.e. if the computation is “f(y) is y if y < 40, otherwise f(y) is the result of a lookup in some arbitrary table using y as the index”, then the process of applying the computation to the encrypted data must encode all the information in the table into the encrypted result, even if the encrypted y is 39. Did I misunderstand what the paper is demonstrating? (BTW, I think the truncation was because I included a < that I didn’t encode as &lt;)

26. komponisto Says: Would you consider doing a Bloggingheads diavlog with Eliezer Yudkowsky?

27. Scott Says: Komponisto: LOL! By exploiting my backwards-in-time causal powers, I just did a diavlog with Eliezer this afternoon. It will be up shortly.

28. Responder Says: Hi Bobby, The point is that you don’t trust the person doing the computation, so you don’t want him to be able to decrypt the input, compute, then append the answer, because then he’d see the input. Imagine cloud computing. I have confidential data that I’d like to process, but I have very little computing power. I could pay amazon.com, send them the data encrypted, and have them process it on their million computers and send me the results, while being assured that amazon hasn’t learned anything about my confidential data.

29. d Says: Ha ha funny:

30.
Anon Says: How many of the “ten most annoying questions in quantum computing” (http://scottaaronson.com/blog/?p=112) are still unsolved?

31. Scott Says: Anon, here’s the status of questions 1-9 (10 not being a real question):
1. Solved by Montanaro and Shepherd.
2. Under the conjecture that the provers only need to share a finite-dimensional Hilbert space, Doherty et al. prove a recursive upper bound, which is already highly nontrivial. Without that conjecture, no upper bound is known.
3. In my paper with Beigi et al., we solved this problem assuming a weak version of the Additivity Conjecture from quantum information theory (the general Additivity Conjecture is now known to be false, but our version still seems plausible). No unconditional result is known.
4. Solved by Sheridan, Maslov, and Mosca (contrary to my conjecture, the answer is yes).
5. I’m not sure whether this particular question is still open or not—does anyone else? What I know is that Anup Rao solved a closely related problem, by proving a concentration bound for parallel repetition of the CHSH game.
6. I realized shortly after posing this problem that an affirmative answer follows from, e.g., the Childs et al. conjoined tree problem.
7. Still open.
8. Still open, as far as I know.
9. Still open, though I have a plausible strategy for closing it that I haven’t pursued—thanks for reminding me! :-)

32. milkshake Says: Nuclear reactors: an awesome one-stop source for all questions nuclear is Garwin’s Archive. He explains why re-processing currently does not make any economic sense (it saves less than 20% of uranium at a ridiculous cost; it actually increases the volume of radioactive waste without much reducing its radioactivity; the possibility of separated reactor plutonium theft is a serious proliferation risk; etc.). France and Japan got into the re-processing business in anticipation of breeder reactors which never really took off. Even if you have a pure, already separated weapons-grade plutonium surplus that you want to dispose of, it is actually cheaper and less problematic to mix it with some highly radioactive waste and bury it in a mine depository rather than blend it into a reactor fuel to save uranium. Retro-looking reactors: Freeman Dyson has a wonderful reminiscence in Disturbing the Universe about his time in the 50s with Teller and Freddy de Hoffmann at General Atomics, designing the reactors. I think the NIST facility uses a different (heavy-water) design than the TRIGA and the graphite-moderated reactors that Dyson worked on, but I think his fond memories of the little red-brick schoolhouse where they ran reactor calculations over the summer capture well the initial momentum, the enthusiasm which gradually evaporated as the accountants, MBAs and government regulators took over the industry. Nuclear accidents: safety is expensive, and private companies operating nuclear reactors in the US and Japan were cutting corners and skimping on upgrades in the past. Plus there is normal human stupidity and complacency. There were a few horribly close calls, not just the Three Mile Island accident.

33. Anon Says: Thanks for the update! What does Question 7 mean? What sort of oracle are we looking for?

34. computational simplicity Says: Hi, a general question: Can you give us a list of 10 weblogs that you truly enjoy and read regularly? They don’t have to be CS-oriented, just what you enjoy the most. Also, they don’t have to be *blogs* (a news site, webcomic, or anything like that is okay). You can give more than 10 if you feel like it.

35.
Scott Says: Anon, we’re looking for a classical oracle: that is, one that maps each classical basis state |x⟩ to (-1)^f(x)|x⟩, for some Boolean function f. There’s certainly a quantum oracle, namely U itself!

36. Scott Says: Computational Simplicity: Look at the blogroll to the right! A few others that I occasionally read: Andrew Sullivan, FiveThirtyEight, Lubos Motl, Bitch PhD, I Blame the Patriarchy.

37. John Sidles Says: Scott, your above-linked Zurich talk How Much Information Is In A Quantum State? is really excellent, and the numerical experiments you are doing with Eyal Dechter provide a striking example of Terry Tao’s maxim that “progress generally starts in the concrete and then flows to the abstract.” This inspires us to flex the narrative to be even more concrete … without (AFAICT) changing any of the fundamental mathematics. We can accomplish this by (1) altering Alice’s motive from conveying information to concealing her activities (which is always more fun!) and (2) altering the quantum informatic framework from informatic/algebraic to stochastic/geometric (which provides a broader perspective on how these ideas work). We suppose that Bob has in his laboratory an ion-trap containing (say) 100 trapped ions. Every morning, Bob performs just one tomographic measurement on these ions (it’s a long-running experiment). And every afternoon, Bob prepares (the same) quantum state for the following day’s tomographic measurements. Thus Bob’s life is pretty boring — every afternoon the (identical) state preparation, every morning a tomographic measurement of the previous day’s state. Alice is spying on Bob. Every night, Alice sneaks into Bob’s lab and measures his carefully-prepared state (thereby destroying it). To conceal her activities, Alice performs covert measurement-and-control operations on Bob’s ions, leaving behind a state ρ that Bob will measure the following morning. Alice’s goal is that Bob’s daytime tomographic measurements reliably yield (as your lecture puts it) “Tr(Eρ) for most measurements E drawn from some probability measure D.” So Alice is secretly “dry-labbing” Bob’s experiment … leaving behind states that are informatically indistinguishable (by Bob) from undisturbed states. Of course, Alice has finite resources in information and time … she has to be done preparing the ions before Bob arrives the following morning. Obviously it’s a challenging task — she has 100 ions to entangle! To make Alice’s life harder, we assume that she does not know Bob’s experimental protocol (otherwise she could just duplicate it), but instead only knows the specified outcome distribution D. How can Alice achieve her quantum deception goal? Or is it impossible? Equally interesting—and not addressed in the lecture—does Alice really have to physically restore Bob’s quantum state? Could Alice instead install a (classical) “root kit” on Bob’s tomographic measurement software; a root kit that could reliably simulate every morning (with classical resources) any tomographic measurement that Bob might specify that morning? —- End of Part 1 —-

38. Bobby Says: Responder: I apologize, I meant to mention in my post that it seemed theoretically possible that the method given in the paper would allow the decryption process to perform better, in that the computation work would have been done by the other computers.
However, without something explicit in the paper which asserts that the complexity of the decryption is bounded, and what’s more that the increase in size of the encrypted package by the computation process is bounded, it seems to me that there are no guarantees. Also, honestly, my initial reaction to seeing the paper’s description was that it would have some profound implications, beyond letting you play performance games. After I recognized that you could do the same thing modulo performance by conventional methods, I thought I would see if someone saw a flaw in my reasoning.

39. Bram Cohen Says: If you really want your head to explode, go read up on liquid fluoride thorium reactors. There are vastly safer and cheaper ways of getting nuclear energy than we have now, which have the unfortunate downsides of being radically different from what’s used currently, so no one’s an expert in them, and far too inherently safe to have any weapons use at all, so the military won’t fund them, and politics has basically killed them for the last fifty years.

40. Jonathan Vos Post Says: “If you really want your head to explode” — and who wouldn’t want that?

41. Jr Says: What do you think is the most important open mathematical problem outside of TCS? In science, outside of math and computer science?

42. Jr Says: Also, why do you think religion exists?

43. Scott Says: I was going to say the Riemann hypothesis, but we all know that’s just a derandomization problem. :-) So maybe the Langlands conjectures? But those, too, could conceivably turn out to be relevant to circuit lower bounds via Mulmuley’s program… Whatever the answer is, “important” presumably means that a significant fraction of mathematicians would need to agree. So old standbys like 3x+1, twin primes, and the transcendence of π+e are presumably out… :-) In science, outside of math and computer science? In fundamental science, here are the first four things that popped into my head:
1. Why sex, sleep, and homosexuality exist
2. Extraterrestrial life (or even “earth-like” extrasolar planets, or non-DNA/RNA-based life on earth)
3. Physics beyond the Standard Model (wherever progress turns out to happen—electroweak symmetry breaking, Λ, ultra-high-energy cosmic rays, the Pioneer anomaly?)
4. Not clear whether there’s anything new and compelling with actual technical content to say about consciousness, free will, the anthropic principle, or the quantum measurement problem, but if there were, that would certainly count
In applied science (similarly, first four things that popped into my head):
1. Cheaper, more efficient solar cells (likewise, cheaper, safer nuclear reactors)
2. Mass manufacturing of wacky materials like carbon nanotubes, so we can haz SPACE ELEVATORS!
3. The ability to google and edit your own genome
4. Batteries that last

44. Scott Says: Also, why do you think religion exists? Once you accept that for almost all of history, and in most of the world today, the “purpose of life” has been to maintain cohesive tribes in which the men valiantly fight the rival tribes, the women stay faithful and raise children, etc.—and that uncovering the true nature of the physical universe only ever enters the picture insofar as it directly advances those goals—the question becomes, why shouldn’t religion exist?

45.
John Sidles Says: A Google search for “space elevator elastic energy” will find a literature replete with sobering engineering realities … that’s why I work in quantum spin microscopy instead … where the realities are sobering, but not as sobering. As Pope put it: “Shallow draughts intoxicate the brain, but drinking largely sobers us again.” Here “largely” has the seventeenth-century meaning given in Samuel Johnson’s dictionary: “amply, widely, copiously”

46. Jonathan Vos Post Says: I like Scott’s Comment #43 (half of a twin prime pair). Massively compressing my impressions:
1. Why sex, sleep, and homosexuality exist — as problems in reconstructing path-dependent models embedded in Evolution by Natural Selection. Otherwise we can take the myth that humans were once 4-armed, 4-legged, unisexual, and were bifurcated, always seeking our other halves.
2. Extraterrestrial life (or even “earth-like” extrasolar planets, or non-DNA/RNA-based life on earth) — interesting recent publications extrapolating back to BEFORE the RNA World. And a Strong Gaia hypothesis suggests doubling the Drake Equation approximation.
3. Physics beyond the Standard Model — and the Cosmology that derives from that, via Quantum Cosmology arguments.
4. Consciousness — I’ve been speaking with Christof Koch, whose 20 years of work with Francis Crick has yielded some amazing experimental results, in multiple processes competing in the human brain’s visual/semantic subsystems, with the interference nicely measurable, but below conscious awareness. Again, old Bayesian rationalists deny the demonstrable circuitry of the human brain. Crick & Koch asked what are the neural Correlates of Consciousness, including: is there a minimum complexity of a system for it to be able to be a substrate for consciousness? And why, if our immune system, or enteric nervous system, or genome exceeds that threshold, is the immune, gut, or genetic network NOT conscious?

47. Jonathan Vos Post Says: Likewise, educated first impressions:
1. Cheaper, more efficient solar cells [I keep in touch with Dr. Geoffrey Landis, a real expert; and with the IdeaLab solar companies] (likewise, cheaper, safer nuclear reactors [also: smaller, down to the scale of an individual business or home])
2. Mass manufacturing of wacky materials like carbon nanotubes, so we can haz SPACE ELEVATORS! [some stranger molecules being investigated; meanwhile Space Elevators already feasible for the Moon]
3. The ability to google and edit your own genome [The humanist ethic begins with the belief that humans are an essential part of nature. Through human minds the biosphere has acquired the capacity to steer its own evolution, and now we are in charge. Humans have the right and the duty to reconstruct nature so that humans and biosphere can both survive and prosper. For humanists, the highest value is harmonious coexistence between humans and nature. The greatest evils are poverty, underdevelopment, unemployment, disease and hunger, all the conditions that deprive people of opportunities and limit their freedoms. — HERETICAL THOUGHTS ABOUT SCIENCE AND SOCIETY By Freeman Dyson]
4. Batteries that last [Cowan’s Heinlein Concordance: Shipstone 1. Common power source. It involved intensive solar collection and energy storage but was not otherwise described. It apparently replaced almost all other sources of energy. The name also applied to the conglomerate that apparently owned most of the corporations on and off Earth…. In effect, Shipstone controlled the entire economy.
A feud among different factions resulted in the overthrow and disruption of many Earth governments, particularly in North America. (Friday) (To Sail Beyond the Sunset) [Compare D. D. Harriman’s extensive holdings and economic influence in earlier stories, and the more benevolent depiction of an unlimited power source in “Let There Be Light”.]

48. Bram Cohen Says: Scott, does survey propagation constitute a full-blown exhaustive search algorithm, as opposed to just a stochastic search algorithm (I think the answer is ‘yes’, but just checking)? And if so, would it apply naturally to Algorithm X type problems, and if it does, do you think it would on some instances be faster than dancing links in practice?

49. coder Says: the homomorphic cryptosystem reminds me of McEliece and other coding cryptosystems. these too resist quantum attacks but require copious entropy. poor entropy sources are the bane of a cryptographer’s existence; the concerns in #11 are relevant but well understood. many computers have hardware entropy sources these days…

50. Zack Says: Is there a way to construct quantum money that allows one to make change? That is, I have a quantum banknote worth $A, I want to be able to convert it into two banknotes worth $X and $Y, where X + Y = A is enforced, without communicating with the issuing authority. Conversely, I would also like to be able to merge banknotes worth $X and $Y into a single banknote worth $(X+Y), again without communication.

51. Raoul Ohio Says: While of course your religion is the one true religion, it is interesting to speculate about what’s the deal with all the wrong ones. One of the weisenheimer columnists in Scientific American has an interesting model: The standard human mental kit fails to include a working BS detector. If true, this explains lots of other curious things. He fleshes the model into a mini theory by speculating how evolution produced this state of affairs. His guess is that higher brain functions are largely pattern recognition, and overreacting to any plausible threat pays off when there are lions about. This is an advantage, natural-selection-wise, leading to most people not thinking critically about whatever. The virus theory, which I first read about in “Gödel, Escher, Bach”, is also good for a few laughs. An entertaining variation has religion as kind of a Ponzi scheme; the various priesthoods may or may not be in on the joke. Putting on your optimization hat, can anyone think of a better scam than trading infinite bliss in the next life for money and obedience right now?

52. Scott Says: Zack: That’s an extremely interesting question! “Merging” two banknotes can in some sense be trivially done, by just putting the banknotes side-by-side. It’s “splitting” a banknote that’s the problem—at least, assuming you don’t want the number of qubits in a banknote to grow linearly with its value (in which case we could just let an $n banknote consist of n $1 notes). Another simple observation is that ordinary cash does not provide the functionality you ask for, and yet we seem to make do anyway, mostly by choosing denominations ($100, $20, $10, $5, $1…) in such a way that if there are enough people at the restaurant table, then w.h.p. it’s possible to make change. Probably the first step should be to find one quantum money scheme with reasonable evidence for its security, that at least provides the same functionalities as ordinary cash! Then we can worry about additional functionalities like the one you ask for.
[Note: Using the ideas from my CCC paper, it shouldn’t be hard to construct a quantum oracle relative to which a quantum money scheme with the “splitting” and “merging” functionalities exists. The hard part, as usual, is to find an explicit scheme, one that works even with no oracle.]

53. Scott Says: Bram: Survey propagation is basically a local search algorithm; it doesn’t find refutations for unsatisfiable instances. I’m sorry I don’t know the answers to your other questions.

54. Bram Cohen Says: Hrmph, my reading of survey propagation was way off. Is there an intro to it for non-mathematicians? I can read reference code, but not speak math.

55. Zack Says: It’s true that regular old cash is not splittable, but I observe that cash is falling into disuse (replaced by debit and credit cards) and speculate that not having to deal with change is a major reason for this. Debit and credit cards, of course, do require communication with the bank. If quantum banknotes could be split and merged, then they would solve a practical problem with cash (unsplittability) as well as one that’s not really a problem in practice (unforgeability)…

56. Bobby Says: Can someone answer my question from comment #24? In short: I understand that the paper does demonstrate a system with at least one property that the classical system I give above doesn’t have. Excerpted from comment 24: My question is, does the method discussed in the paper provide anything else that a classical system of concatenating messages listing the operations to be performed wouldn’t provide?
In many areas of mathematics (PDE, algebra, combinatorics, geometry), when we have difficulty in coming up with a solution to a problem, we consider various notions of "generalized solutions". (There are also other reasons to generalize the notion of a solution in various contexts.) I would like to collect a list of "generalized solutions" concepts in various areas of mathematics, hoping that looking at these various concepts side by side can be useful and interesting.

Let me demonstrate what I mean by an example from graph theory: A perfect matching in a graph is a set of disjoint edges such that every vertex is included in precisely one edge. A fractional perfect matching is an assignment of non-negative weights to the edges so that for every vertex, the sum of weights is 1. In combinatorics, moving from a notion described by a 0-1 solution for a linear programming problem to the solution over the reals is called the LP relaxation of a problem, and it is quite important in various contexts.

(There are, of course, useful papers or other resources on generalized solutions in specific areas. It will be useful to have links to those, but not as a substitute for actual answers with some details.)

Comment: Actually the scope of the answers is much larger than what I thought! (But I cannot formally define what was the more restricted scope I had in mind.) – Gil Kalai Oct 8 '10 at 21:16

16 Answers

Partial Differential Equations (PDE) is a topic where generalizing the notion of solutions is a daily activity. The most obvious generalization has been the notion of weak solutions, which means that a solution $u$ is not necessarily differentiable enough times for the derivatives involved in the equation to make sense; but an integration against test functions, followed by an integration by parts, cures the problem. The best-known example is that of the Laplace equation $$\Delta u=f\qquad\hbox{over }\Omega,$$ where it is enough for $u$ to have locally integrable first-order derivatives, by rewriting the equation as a variational formulation (Dirichlet principle) $$\int_\Omega \nabla u\cdot\nabla v\,dx=-\int_\Omega fv\,dx$$ for every $v\in{\mathcal C}^1_c(\Omega)$ (subscript $c$ means compact support). What is important in this process is to satisfy the rule:

If $u$ has enough derivatives that the equation makes sense pointwise, then it is a weak solution if and only if it is a classical solution.

Let us mention in passing that in order to use the full strength of functional analysis and operator theory, this weak notion of solutions led to the birth of Sobolev spaces and distribution theory (L. Schwartz). This framework has been used for nonlinear equations and systems too, for instance for the Navier-Stokes, Euler, Schrödinger equations, ... An important question is whether this framework is accurate or not. By accurate, we mean that boundary and/or initial data yield a unique solution, which depends continuously on the data. This is the question of well-posedness. In many cases, functional analysis, sometimes combined with topological arguments, yields an existence theorem. A celebrated one is J. Leray's existence result for the Navier-Stokes equations of an incompressible fluid. However, uniqueness is often another matter, a difficult one. For a $3$-dimensional fluid, uniqueness for Navier-Stokes is a one-million-US-dollar open question. Uniqueness is often (but not always) associated to regularity.
In many situations, there are weak-strong uniqueness results, which state that if a classical, or regular enough, solution exists, then there does not exist any other weak solution (say, in a class where we do have an existence result). It is an if-theorem, in the absence of an existence result for strong solutions. For elliptic and parabolic equations, the regularity theory is a topic of its own. Whereas regularity is often expected in elliptic or parabolic equations and systems, it is not for hyperbolic ones, because we know that singularities do propagate, and that they can even be created in finite time thanks to nonlinear effects. Then the notion of weak solutions becomes meaningful, in that it translates into mathematical terms the physical notion of conserved quantities. It gives algebraic relations for the jump of the solution and its derivatives across discontinuities (Rankine-Hugoniot relations).

Finally, I like a lot the way the theories of nonlinear elliptic equations and of Hamilton-Jacobi equations have developed in the past decades. At the beginning, it was observed that the maximum principle, known for classical solutions, remains valid for weak ones. This suggested, when the nonlinearity is so strong that a variational formulation is not available, that the maximum principle itself be used to define a notion of viscosity solution. The idea is to test the PDE at $x_0$ with a test function $\phi$ comparable to $u$ (either $\phi\le u$ or $\phi\ge u$ locally) and touching $u$ at $x_0$. This has been extremely powerful.

Comment: In the third to last paragraph, you wrote: "Uniqueness is often (but not always) associated to uniqueness". Is that intentional? Perhaps one of the two uniquenesses ought to be regularity? – Willie Wong Oct 7 '10 at 17:23
Comment: Of course! Thank you for careful reading. I correct immediately. – Denis Serre Oct 8 '10 at 6:38

Formal solutions to partial differential relations

Given a partial differential relation, that is, a subset $\mathcal{R} \subset J^k(\mathbb{R}^n, \mathbb{R}^m)$ of the space of $k$-jets of smooth maps $\mathbb{R}^n \to \mathbb{R}^m$, one can consider the space of smooth (say) maps $f$ from an $n$-manifold $N$ to an $m$-manifold $M$ such that $J^k(f) \in \mathcal{R}$, i.e. so that the $k$-jet of the function lies in the subspace $\mathcal{R}$ at each point. Call the space of such maps $\mathrm{Sol}_\mathcal{R}(N, M)$. On the other hand, we can consider the bundle $J^k(N, M) \to N$ of $k$-jets of maps from $N$ to $M$, and the associated subbundle $\mathcal{R}(N, M) \to N$, and call the space of sections of this last bundle $\mathrm{FSol}_\mathcal{R}(N,M)$, the space of formal solutions. This space is far easier to analyse, for example because constructing sections of a bundle is a purely homotopy-theoretic problem. Taking derivatives gives a comparison map $$\mathrm{Sol}_\mathcal{R}(N, M) \to \mathrm{FSol}_\mathcal{R}(N, M).$$ If $\mathcal{R}$ is open in $J^k(\mathbb{R}^n, \mathbb{R}^m)$ and the manifold $N$ is open, Gromov showed that the comparison map is a homotopy equivalence. In particular, if the space of formal solutions is non-empty, so is the space of actual solutions.

Moduli problem: find a good parametrization of geometric objects of some type; the parametrization should form a collection equipped with some natural geometric structure, therefore being a geometric object in its own right.
While the naive "parameter space" is a set, in the structured formulation it is replaced by a moduli space which classifies the geometric objects we started with. In the simplest case, the moduli problem is representable by a space in the usual sense, an object in more or less the same category as the original geometric objects: for example a manifold or a scheme where the original objects were manifolds or schemes. With harder problems the moduli lead to more and more general kinds of objects. This motivated new types of spaces such as stacks, higher stacks, derived stacks and so on. It appears that, starting from the original geometric category, most of the generalized objects needed to solve the moduli problem live in some nice geometric subcategory (e.g. algebraic stacks) of the category of (possibly categorified) presheaves or sheaves on the original category, including higher versions like simplicial presheaves and so on. The original category embeds by the corresponding version of the Yoneda embedding into the category of (pre)sheaves. The new ambient category of presheaves not only more generically has a solution to the moduli problem, but also has many other improved natural properties, like closedness under limits.

Cohomology theories, various generalized cocycles, generalized smoothness notions and so on can also be accommodated after the Yoneda embedding into a homotopy-correct version of the presheaf category, as in the emerging subject of derived geometry. In the original terms of non-generalized spaces, one would need to use all kinds of difficult and dirty techniques to define and study the generalized notions, for example introducing various piecewise-continuous cocycles, multivalued or infinite-dimensional models and so on. Methods depending on the Yoneda philosophy give a rather universal setting in which to attack moduli problems and many other problems (like deformation theory), often allowing one to avoid constructing very elaborate but ad hoc modifications of the original concepts. Inside the bigger category it may be easier to cut out some nice geometric subcategory of geometric spaces which includes the solutions to the moduli problem than to construct some similar category in terms of the original geometry. Of course, sometimes the difficult elementary models have their own specific strengths, which do not follow from the application of general methods.

Affine schemes: Given any ring $R$, try to find a map from it into a local ring $L$ which is initial among maps to local rings (i.e. any other map from $R$ into a local ring should factor through this one, followed by a map of local rings, i.e. one such that the preimage of the maximal ideal is the maximal ideal). Such a thing does not exist, unless $R$ is already local. But if we allow $L$ to be a ring object living in a different topos than that of sets, then it exists: It is the local ring object living in $Sh(Spec R)$ given by the structure sheaf $\mathcal{O}_{Spec R}$ (see also my post here)

Given a set of polynomial diophantine equations, it is useful to study solutions in any ring, instead of just studying integer solutions. (This is the "functorial point of view" of a scheme over $\mathbb Z$.)
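As a concrete instance of this functorial point of view, one can take a single equation, say $y^2 = x^3 - x$ (chosen here purely for illustration), and ask for its solution set in each ring $R$; over the finite rings $\mathbb{Z}/p\mathbb{Z}$ the solutions can simply be counted:

```python
def count_points(p):
    """Number of solutions of y^2 = x^3 - x in the ring Z/pZ."""
    return sum((y * y - (x ** 3 - x)) % p == 0
               for x in range(p) for y in range(p))

for p in [3, 5, 7, 11, 13]:
    print(p, count_points(p))   # 3, 7, 7, 11, 7 affine points
```

The same polynomial thus defines a whole family of solution sets, one per ring, and it is this assignment $R \mapsto \{\text{solutions in } R\}$ that the scheme packages into a single object.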
Well, then I'll start with the most obvious generalized solutions:

• weak solutions to PDEs
• Schwartz's generalized functions aka distributions,
• Colombeau's algebra(s) of generalized functions and
• various other kinds of generalized functions
• Quasi-Minima in functional analysis: A quasi-minimum of a functional $\mathcal{F}$ is a $u$ such that $\mathcal{F}u\leq Q\mathcal{F}v$ for all $v$ (with some constant $Q\geq 1$)
• Every solution of a polynomial equation within $\mathbb{C}$ can be a generalized solution if your problem is something that has only real (maybe some geometric problem) or only integer or even only natural (maybe something from number theory) solutions. But considering all complex solutions to your particular equation often gives a very elegant treatment of the problem.

Ideals in rings of integers of number fields arose as "ideal numbers"...

Grothendieck topologies (or: toposes as generalized spaces): There is no topology on a general scheme which is e.g. fine enough to give back the cohomological dimensions expected from geometry, but with a more general notion of covering (or: of space) this works out.

Quotient "spaces": While quotients, e.g. of group actions, in geometry are often degenerate, several generalized notions of quotient space help here: Sheaf quotients, Orbifolds, Algebraic Spaces, Stack quotients, Homotopy quotients, Non-commutative quotients, GIT quotients, ... (similarly with moduli spaces)

Complex numbers arose as ideal solutions of polynomial equations with real coefficients, I guess.

Comment: I edited another answer of yours where you wrote "Polinomial". Are you doing it intentionally or is there any other reason? Thanks – Unknown Oct 7 '10 at 16:17
Comment: No, I'm just careless :-) If you clean up those things I have no objections - thanks! – Peter Arndt Oct 8 '10 at 0:21

Generalized Eigenvector

I'm surprised no one has yet mentioned the first example an undergraduate is likely to see. Suppose $A$ is a linear map from a finite dimensional vector space $V$ to $V$, with eigenvalue $\lambda$. Any nonzero vector in $\text{ker}(A-\lambda I)^k$ for some $k\ge1$ is called a generalized eigenvector for eigenvalue $\lambda$. These are used in proving the existence of the Jordan Canonical Form.

In linear algebra (linear inverse problems) one generalizes the notion of a solution of a linear operator equation $Ax=y$ to
1. a "best approximation" if there is no solution, i.e. minimizing the functional $\|Ax-y\|$,
2. a "minimum-norm solution" if there is a subspace of solutions, i.e. taking that solution of $Ax=y$ which has minimal norm,
3. both (if the best approximation is not unique),
leading to the Moore-Penrose inverse.

Virtual knots

Louis Kauffman generalized knots by introducing virtual crossings, virtual knots and virtual Reidemeister moves. He obtained some interesting developments in knot theory.

One of the most fruitful notions of generalized solution in optimization and combinatorics is linear programming relaxation. Quoting from the Wikipedia article: In mathematics, the linear programming relaxation of a 0-1 integer program is the problem that arises by replacing the constraint that each variable must be 0 or 1 by a weaker constraint, that each variable belong to the interval [0,1].
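Returning to the question's own example, here is a minimal sketch (my own, assuming SciPy's linprog) of the LP relaxation of perfect matching on the triangle. The triangle has an odd number of vertices, so no 0-1 perfect matching exists, yet the relaxed program is feasible, with every edge receiving weight 1/2.

```python
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1), (1, 2), (0, 2)]           # the triangle graph
A_eq = np.zeros((3, len(edges)))           # one equality constraint per vertex
for j, (u, v) in enumerate(edges):
    A_eq[u, j] = A_eq[v, j] = 1.0          # edge j covers vertices u and v
res = linprog(c=np.zeros(len(edges)),      # pure feasibility, no objective
              A_eq=A_eq, b_eq=np.ones(3),  # incident weights sum to 1
              bounds=[(0, 1)] * len(edges))
print(res.x)                               # [0.5 0.5 0.5]
```

Here the integer program is infeasible while its relaxation is not, which is exactly the gap that makes LP relaxations informative.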
A form of "generalized solution" which I saw in various areas (combinatorial optimization problems, diophantine equations, computational complexity, and others) is "statistical physics relaxation". You regard your original problem as a "temperature 0" case of a more general problem, and try to gain insight on the original problem based on statistical-physics insights for the generalized problem. I am not sure what the general recipe for this approach is, and I will be happy to see an edited version with further explanation and links.

Comment: I'll mention a recent paper by Baez and Stay on 'Algorithmic thermodynamics', arxiv.org/abs/1010.2067, which contains results about randomness and complexity, depending on a temperature parameter, in which, to quote, "the randomness described by Chaitin and Tadaki then arises as the infinite-temperature limit." – David Roberts Dec 6 '10 at 12:23

I think that such an example is the use of the Residue theorem to calculate contour integrals. You use complex analysis to solve a problem in real analysis.
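A worked instance of that last answer (a sketch assuming SymPy, with the integrand chosen for illustration): the real integral of $1/(1+x^2)$ over the whole line equals $2\pi i$ times the residue at the pole $z = i$ in the upper half-plane.

```python
import sympy as sp

z, x = sp.symbols('z x')
r = sp.residue(1 / (1 + z**2), z, sp.I)                   # residue at the pole z = i
print(sp.simplify(2 * sp.pi * sp.I * r))                  # pi, via the residue theorem
print(sp.integrate(1 / (1 + x**2), (x, -sp.oo, sp.oo)))   # pi, computed directly
```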
Guest Blog (commentary invited by editors of Scientific American)

Are Parallel Universes Unscientific Nonsense? Insider Tips for Criticizing the Multiverse

Although we don’t know whether parallel universes exist, we know something else about them with certainty: many people instinctively dislike them, and whenever a physicist writes a book about them, the Web erupts with claims that they are unscientific nonsense. My new book Our Mathematical Universe proved to be no exception. “Is this still science?” the biologist Mark Buchanan wondered on the pages of New Scientist, “Or has inflationary cosmology veered towards something akin to religion?” The physicist Peter Woit dismissed it as “grandiose nonsense”.

[Figure: Baby pictures of our universe 400,000 years after the Big Bang taken by the Planck Satellite match the predictions of the theory of inflation. / Max Tegmark]

If you’re a multiverse skeptic, you should know that there are many potential weaknesses in the case for parallel universes, and I hope you’ll find my cataloging of these weaknesses below useful. To identify these weaknesses in the pro-multiverse arguments, we first need to review what the arguments are. Many physicists have explored various types of parallel universes in recent books, including Sean Carroll, David Deutsch, Brian Greene, Michio Kaku, Martin Rees, Leonard Susskind and Alexander Vilenkin. Interestingly, not a single one of these books (my own included) makes any outright claims that parallel universes exist. Instead, all their arguments involve what logicians know as “modus ponens”: that if X implies Y and X is true, then Y must also be true. Specifically, they argue that if some scientific theory X has enough experimental support for us to take it seriously, then we must take seriously also all its predictions Y, even if these predictions are themselves untestable (involving parallel universes, for example).

As a warm-up example, let’s consider Einstein’s theory of General Relativity. It’s widely considered a scientific theory worthy of taking seriously, because it has made countless correct predictions - from the gravitational bending of light to the time dilation measured by our GPS phones. This means that we must also take seriously its prediction for what happens inside black holes, even though this is something we can never observe and report on in Scientific American. If someone doesn’t like these black hole predictions, they can’t simply opt out of them and dismiss them as unscientific: instead, they need to come up with a different mathematical theory that matches every single successful prediction that general relativity has made - yet doesn’t give the disagreeable black hole predictions. This has proven a remarkably difficult task, eluding many brilliant scientists for about a century. In other words, for a theory to be testable (and hence scientific), we don’t have to be able to test all its predictions, merely one of its predictions.

So are there parallel universes, or is the universe we observe (the spherical region of space from which light has had time to reach us during the 13.8 billion years since our Big Bang) all that exists? We don’t know. The interesting claim that these books collectively make is that various theories imply that various types of parallel universes exist (see table), so that by modus ponens, if we take any of these theories seriously, we’re forced to take seriously also some parallel universes.
Conversely, if we can experimentally rule out any of these theories based on their other predictions, we’ve destroyed the evidence for the corresponding parallel universes.

For example, Alan Guth, Andrei Linde and Alexander Vilenkin have argued that the cosmological theory of inflation generically predicts the Level I multiverse: a single space so large that it contains many universe-sized regions. Inflation may or may not turn out to be correct, but the recent confirmation of many of its predictions by cosmic microwave background experiments etc. has caused it to emerge as the most popular scientific theory for what happened early on, and ongoing experiments may provide additional tests.

A second argument is that if we add to inflation the separate assumption that the correct theory of quantum gravity (say string theory, loop quantum gravity or some competitor) has more than one homogeneous solution (just as the equations for water have three solutions corresponding to ice, liquid water and steam), then this implies the Level II multiverse: a single space containing universe-sized regions with each kind of space. A third argument, first made by Hugh Everett III, is that the bare-bones theory of quantum mechanics free from so-called wave-function collapse implies a third type of multiverse. A fourth argument, made in my book, is that if there’s an external reality completely independent of us humans, then there’s a fourth type of multiverse realizing all mathematically possible universes.

The most persistent and acrimonious debates often occur when the debating parties misunderstand each other, so it’s important that people on both sides of the multiverse debate be as explicit as they can about what they’re claiming. The fact that parallel universes aren’t a theory, but predictions of certain theories, means that there are three (and only three) logically possible lines of attack on parallel universes, corresponding to three types of claims:

A. One or more of these “X implies Y” arguments is incorrect (inflation doesn’t predict Level I, say).
B. One or more of the theories X predicting multiverses are incorrect (inflation, say).
C. Parallel universes are indeed predicted by scientific theories, but scientists shouldn’t waste their time thinking about such topics.

Since C is a matter of personal opinion, let’s focus in more detail on A and B, which are straightforward scientific claims that can hopefully one day be settled by calculation and observation. We’ll see that there’s no shortage of such scientific lines of attack. Before delving into them, however, it’s worth noting that the best way to weaken a strong case is to overstate it, so multiverse skeptics should avoid undermining their case by going beyond A, B and C with vague and unscientific claims of “fantasy,” “nonsense,” etc.

A type-A attack on Level I would show that inflation doesn’t produce a Level I multiverse. Although Guth, Linde and Vilenkin have shown that almost any inflation model produces an infinite space, the “almost” allows a line of attack: there are still some models which don’t, and even though they’ve been criticized as contrived, they remain a logical possibility. A type-B attack on Level I would weaken the case for inflation.
This could happen either through theoretical progress (for example, proof that competing theories such as the ekpyrotic universe or string gas cosmology are free from the obstacles currently limiting their popularity) or through new experimental results disagreeing with generic inflation predictions (for example, detection of a small but non-zero curvature of space, or growing evidence that the claimed anomalies in the cosmic microwave background images from the Planck satellite need to be taken seriously).

A more radical and potentially devastating type-B attack is to question the assumption that space can be stretched out indefinitely. Although it’s a standard assumption in physics that physical space is continuous, with even the smallest volume containing infinitely many points, it’s an Achilles heel in the sense that we have no experimental evidence for anything truly continuous or infinite in nature. Contrariwise, we suspect that our intuitive picture of space breaks down on tiny scales. Killing the continuum could kill eternal inflation, resulting in a Level I multiverse that is merely large but not infinite, potentially eliminating the prediction that there are near-identical copies of you far out in space.

Any of the above-mentioned Level I attacks could torpedo Level II as well. A second line of attack against Level II is to challenge the other assumption upon which it rests: that the correct theory of quantum gravity has more than one homogeneous solution. If further work on quantum gravity leads to a theory with a unique solution that matches what we experimentally observe, Level II will have had the rug pulled from under it. A third line of attack is to give a compelling explanation for the observed fine-tuning of physical constants that doesn’t rely on a Level II multiverse.

Since the Level III multiverse is implied by the (collapse-free) Schrödinger equation of quantum mechanics, it can be demolished with a type-B attack: an experimental demonstration of a violation of the Schrödinger equation. For example, if the current multi-million dollar attempts to build quantum computers fail and the cause is determined to be that the Schrödinger equation is violated by some form of wavefunction collapse process, then there are no Level III parallel universes.

The Level IV multiverse is also vulnerable to a type-B attack: we can simply reject the notion that there’s an external reality completely independent of us humans, for example in the spirit of Niels Bohr’s famous dictum, “no reality without observation”. A second type-B attack option is to falsify the mathematical universe hypothesis by demonstrating that there’s some physical phenomenon that has no mathematical description.

In summary, there is no shortage of potential weaknesses in the arguments for parallel universes. Attacking all these weaknesses involves doing interesting experimental and theoretical physics research. If any of the attacks succeed, the corresponding multiverse evidence is discredited. Conversely, if all the attacks fail, then we’ll be forced to take parallel universes more seriously whether we like them or not - such are the rules of science. In this way, parallel universes are no different from any other scientific idea.
Inspired by this question: Are these two quantum systems distinguishable? and discussion therein.

Given an ensemble of states, the randomness of a measurement outcome can be due to classical reasons (a classical probability distribution of states in the ensemble) and quantum reasons (an individual state can be a superposition of states). Because a classical system cannot be in a superposition of states, and in principle the state can be directly measured, the probability distribution is directly measurable. So any differing probability distributions are distinguishable. However, in quantum mechanics, an infinite number of different ensembles can have the same density matrix.

What assumptions are necessary to show that, if two ensembles initially have the same density matrix, there is no way to apply the same procedure to both ensembles and achieve different density matrices? (i.e. that the 'redundant' information regarding what part of Hilbert space is represented in the ensemble is never retrievable, even in principle)

To relate to the referenced question: for example, suppose we could generate an interaction that evolved

1) an ensemble of states $|0\rangle + e^{i\theta}|1\rangle$ with a uniform distribution in $\theta$ into
2) an ensemble of states $|0\rangle + e^{i\phi}|1\rangle$ with a non-uniform distribution in $\phi$.

Such a mapping of vectors in Hilbert space can be 1-to-1, but it doesn't appear it can be done with a linear operator. So it hints that we can probably prove an answer to the question using only the assumption that states are vectors in a Hilbert space and the evolution is a linear operator.

Can someone give a simple proof showing that two ensembles with initially the same density matrix can never evolve into two different density matrices? Please be explicit about what assumptions you make.

Update: I guess to prove they are indistinguishable, we'd also need to show that non-unitary evolution, like the projection from a measurement, can't eventually allow one to distinguish the underlying ensemble either. Such as perhaps using correlations between multiple measurements, or possibly, instead of asking something with only two answers, asking something with more than two, so that finally the distribution of answers needs more than just the expectation value to characterize the results.

Hah! I addressed your update in my answer before I even saw it. – Keenan Pepper Apr 6 '11 at 1:05

You only need to assume

1. the Schrödinger equation (yes, the same old linear Schrödinger equation, so the proof doesn't work for weird nonlinear quantum-mechanics-like theories)
2. the standard assumptions about projective measurements (i.e. the Born rule and the assumption that after you measure a system it gets projected into the eigenspace corresponding to the eigenvalue you measured)

Then it's easy to show that the evolution of a quantum system depends only on its density matrix, so "different" ensembles with the same density matrix are not actually distinguishable.

First, you can derive from the Schrödinger equation a time evolution equation for the density matrix. This shows that if two ensembles have the same density matrix and they're just evolving unitarily, not being measured, then they will continue to have the same density matrix at all future times.
The equation is

$$\frac{d\rho}{dt} = \frac{1}{i\hbar} \left[ H, \rho \right]$$

Second, when you perform a measurement on an ensemble, the probability distribution of the measurement results depends only on the density matrix, and the density matrix after the measurement (of the whole ensemble, or of any sub-ensemble for which the measurement result was some specific value) depends only on the density matrix before the measurement.

Specifically, consider a general observable (assumed to have discrete spectrum for simplicity) represented by a hermitian operator $A$. Let the diagonalization of $A$ be

$$A = \sum_i a_i P_i$$

where $P_i$ is the projection operator into the eigenspace corresponding to eigenvalue (measurement outcome) $a_i$. Then the probability that the measurement outcome is $a_i$ is

$$p(a_i) = \operatorname{Tr}(\rho P_i)$$

This gives the complete probability distribution of $A$. The density matrix of the full ensemble after the measurement is

$$\rho' = \sum_i P_i \rho P_i$$

and the density matrix of the sub-ensemble for which the measurement value turned out to be $a_i$ is

$$\rho'_i = \frac{P_i \rho P_i}{\operatorname{Tr}(\rho P_i)}$$

Since none of these equations depend on any property of the ensemble other than its density matrix (e.g. the pure states and probabilities of which the mixed state is "composed"), the density matrix is a full and complete description of the quantum state of the ensemble.

Oh, and for the case of an observable $A$ with a continuous spectrum, it works basically the same way. For mathematicians it might get more hairy, but as a physicist I have no problem just saying "replace all the summation signs with integrals". – Keenan Pepper Apr 6 '11 at 0:59

You don't even need to assume the Schrödinger equation, but only the fact that the evolution of a quantum state is unitary. – Frédéric Grosshans Apr 10 '11 at 19:14

Density matrices are an alternative description of quantum mechanics. Consequently, if two ensembles have the same density matrix, they are not distinguishable. For example, consider the unpolarized spin-1/2 density matrix, which can be modeled as a system that is half pure states in the +x direction and half in the −x direction, or alternatively, as half pure states in the +z direction (i.e. spin up) and half in the −z direction (i.e. spin down):

$$\begin{pmatrix}0.5&0\\0&0.5\end{pmatrix} = 0.5\rho_{+x}+0.5\rho_{-x} = 0.5\rho_{+z}+0.5\rho_{-z}$$

Now compute the average value of an operator $H$ with respect to these ensembles. Let

$$H = \begin{pmatrix}h_{11}&h_{12}\\h_{21}&h_{22}\end{pmatrix}$$

then the averages for the four states involved are:

$$\begin{array}{rcl} \langle H\rangle_{+x} &=& 0.5(h_{11}+h_{12}+h_{21}+h_{22})\\ \langle H\rangle_{-x} &=& 0.5(h_{11}-h_{12}-h_{21}+h_{22})\\ \langle H\rangle_{+z} &=& h_{11}\\ \langle H\rangle_{-z} &=& h_{22} \end{array}$$

From the above, it's clear that taking the average over ±x will give the same result as taking the average over ±z; that is, in both cases the ensemble will give an average of

$$\langle H\rangle = 0.5(h_{11}+h_{22})$$

Any preparation of the system amounts to an operator acting on the states, and so $H$ can stand for a general operation. Therefore there is no way of distinguishing an unpolarized mixture of ±x from an unpolarized mixture of ±z. The argument for general density matrices is similar, but I think this gets the point across.
Are you saying instead of representing a state as a vector in Hilbert space, it is sufficient to represent a state as a density matrix? It seems like this view would change the counting of physical states and would have an effect in statistical mechanics or thermodynamics of a system. It almost seems like you would be reducing the entropy by mixing two ensembles. – Ginsberg Apr 6 '11 at 0:32

Either way, the whole point of the question was to see a concrete mathematical proof. Instead of just saying it is so, can you please show how it is so, such that I can learn more? – Ginsberg Apr 6 '11 at 0:34

@Ginsberg: Yes, a density matrix is equivalent to a collection of pure states (presumably represented by state vectors) along with a probability density for the pure states. I've not found the reference I was looking for, so I'll type up an outline of a proof and edit it in. – Carl Brannen Apr 6 '11 at 0:45
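To complement the proofs above, here is a small numerical sketch (my addition, not from the original thread; the states, mixing weights and measurement axis are illustrative assumptions): two ensembles mixed along ±x and ±z have the same density matrix, and therefore identical Born-rule statistics for any projective measurement.

```python
# Two different ensembles with the same density matrix give identical
# measurement statistics: a 50/50 mixture along +/-x versus along +/-z.
import numpy as np

def projector(v):
    """Projector |v><v| for a state vector v (normalized first)."""
    v = v / np.linalg.norm(v)
    return np.outer(v, v.conj())

up_z, down_z = np.array([1, 0], complex), np.array([0, 1], complex)
up_x = (up_z + down_z) / np.sqrt(2)
down_x = (up_z - down_z) / np.sqrt(2)

rho_x = 0.5 * projector(up_x) + 0.5 * projector(down_x)
rho_z = 0.5 * projector(up_z) + 0.5 * projector(down_z)
print(np.allclose(rho_x, rho_z))   # True: same density matrix

# Born-rule probability Tr(rho P) for a measurement along an arbitrary
# axis agrees for both ensembles, so nothing can tell them apart.
theta = 0.7                        # arbitrary measurement direction
axis_up = np.array([np.cos(theta / 2), np.sin(theta / 2)], complex)
P = projector(axis_up)
print(np.trace(rho_x @ P).real, np.trace(rho_z @ P).real)  # both 0.5
```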
From what I remember in my undergraduate quantum mechanics class, we treated scattering of non-relativistic particles from a static potential like this:

1. Solve the time-independent Schrödinger equation to find the energy eigenstates. There will be a continuous spectrum of energy eigenvalues.
2. In the region to the left of the potential, identify a piece of the wavefunction that looks like $Ae^{i(kx - \omega t)}$ as the incoming wave.
3. Ensure that to the right of the potential, there is no piece of the wavefunction that looks like $Be^{-i(kx + \omega t)}$, because we only want to have a wave coming in from the left.
4. Identify a piece of the wavefunction to the left of the potential that looks like $R e^{-i(kx + \omega t)}$ as a reflected wave.
5. Identify a piece of the wavefunction to the right of the potential that looks like $T e^{i(kx - \omega t)}$ as a transmitted wave.
6. Show that $|R|^2 + |T|^2 = |A|^2$. Interpret $\frac{|R|^2}{|A|^2}$ as the probability for reflection and $\frac{|T|^2}{|A|^2}$ as the probability for transmission.

This entire process doesn't seem to have anything to do with a real scattering event, where a real particle is scattered by a scattering potential: we do all our analysis on stationary waves. Why should such a naive procedure produce reasonable results for something like Rutherford's foil experiment, in which alpha particles are in motion as they collide with nuclei, and in which the wavefunction of the alpha particle is typically localized in a (moving) volume much smaller than the scattering region?

Essentially because the dynamical problem only interests you in the limit where $T_i \to \infty$, $T_f \to \infty$, and by the Lippmann-Schwinger equation it can be shown that all you need to do is to match the asymptotic states of the time-independent Hamiltonian (which is precisely what you describe, although nobody will tell you this in the undergraduate class). This can be developed more fully into S-matrix theory, fundamental to all scattering problems. I'll see if I can get to a more complete answer later. – Marek Jul 22 '11 at 11:50

This really bothered me too when I first took quantum mechanics. – Ted Bunn Jul 22 '11 at 14:11

Allow me to say that this is a very interesting question with even more interesting answers. – Constantine Black Feb 23 at 10:37

This is fundamentally no more difficult than understanding how quantum mechanics describes particle motion using plane waves. If you have a delocalized wavefunction $\exp(ipx)$ it describes a particle moving to the right with velocity p/m. But such a particle is already everywhere at once, and only superpositions of such states are actually moving in time:

$$\int \psi_k(p) e^{ipx - iE(p) t} dp$$

where $\psi_k(p)$ is a sharp bump at $p=k$, not a delta-function, but narrow. The superposition using this bump gives a wide spatial waveform centered at x=0 at t=0. At large negative times, the fast phase oscillation kills the bump at x=0, but it creates a new bump at those x's where the phase is stationary, that is, where

$${\partial\over\partial p}( p x - E(p)t ) = 0$$

or, since the superposition is sharp near k, where

$$ x = E'(k)t$$

which means that the bump is moving with a steady speed as determined by Hamilton's laws. The total probability is conserved, so that the integral of psi squared on the bump is conserved.
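A minimal numerical sketch of this stationary-phase argument (my illustration, not part of the original answer; units with ħ = m = 1 and all grid parameters are assumptions): superposing plane waves with a narrow momentum bump around k₀ produces a spatial bump whose center moves at E′(k₀) = k₀.

```python
# Free wave packet: superpose exp(i p x - i E(p) t) over a narrow momentum
# bump centered at p = k0 and check that the probability bump moves at the
# group velocity E'(k0) = k0 (units with hbar = m = 1).
import numpy as np

k0, sigma_p = 2.0, 0.1                  # bump center and width in p-space
p = np.linspace(k0 - 1.0, k0 + 1.0, 1001)
amp = np.exp(-(p - k0)**2 / (2 * sigma_p**2))   # sharp bump at p = k0
x = np.linspace(-30.0, 80.0, 1501)

def packet_center(t):
    E = p**2 / 2                        # free dispersion E(p) = p^2 / 2m
    psi = np.exp(1j * (np.outer(x, p) - E * t)) @ amp  # sum over momenta
    prob = np.abs(psi)**2
    return np.sum(x * prob) / np.sum(prob)

for t in (0.0, 10.0, 20.0):
    print(t, packet_center(t))          # center is approximately k0 * t = 2 t
```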
The actual time-dependent scattering event is a superposition of stationary states in the same way. Each stationary state describes a completely coherent process, where a particle in a perfect sinusoidal wave hits the target and scatters outward, but because it is an energy eigenstate, the scattering is completely delocalized in time. If you want a collision which is localized, you need to superpose, and the superposition produces a natural scattering event, where a wave-packet comes in, reflects and transmits, and goes out again. If the incoming wavepacket has an energy which is relatively sharply defined, all the properties of the scattering process can be extracted from the corresponding energy eigenstate.

Given the solutions to the stationary eigenstate problem $\psi_p(x)$ for each incoming momentum $p$, so that at large negative x, $\psi_p(x) = \exp(ipx) + A \exp(-ipx)$ and $\psi_p(x) = B\exp(ipx)$ at large positive x, superpose these waves in the same way as for a free particle:

$$\int dp\, \psi_k(p) \psi_p(x) e^{-iE(p)t}$$

At large negative times, the phase is stationary only for the incoming part, not for the outgoing or reflected part. This is because each of the three parts describes a free-particle motion, so if you understand where a free particle with that momentum would classically be at that time, this is where the wavepacket is nonzero. So at negative times, the wavepacket is centered at

$$ x = E'(k)t$$

For large positive t, there are two places where the phase is stationary: those x where

$$ x = - E'(k) t$$
$$ x = E_2'(k) t$$

where $E_2'(k)$ is the change in phase of the transmitted k-wave in time (it can be different from the energy if the potential has an asymptotically different value at $+\infty$ than at $-\infty$). These two stationary-phase regions are where the reflected and transmitted packets are located. The coefficients of the reflected and transmitted packets are A and B. If A and B were of unit magnitude, the superposition would conserve probability. So the actual transmission and reflection probabilities for a wavepacket are the squares of the magnitudes of A and B, as expected.

First suppose that the Hamiltonian $H(t) = H_0 + H_I(t)$ can be decomposed into free and interaction parts. It can be shown (I won't derive this equation here) that the retarded Green function for $H(t)$ obeys the equation

$$G^{(+)}(t, t_0) = G_0^{(+)}(t, t_0) - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' \, G_0^{(+)}(t,t') H_I(t') G^{(+)}(t', t_0)$$

where $G_0^{(+)}$ is the retarded Green function for $H_0$. Letting this equation act on a state $\left| \psi(t_0) \right>$, it becomes

$$\left| \psi(t) \right> = \left| \varphi(t) \right> - {i \over \hbar} \int_{-\infty}^{\infty} {\rm d} t' \, G_0^{(+)}(t,t') H_I(t')\left| \psi(t') \right> $$

where $\left|\varphi(t)\right> = G_0^{(+)}(t,t_0) \left| \psi(t_0) \right>$. Now, we suppose that until $t_0$ there is no interaction, and so we can write $\left |\psi(t_0) \right>$ as a superposition of momentum eigenstates

$$\left| \psi(t_0) \right> = \int {\rm d}^3 \mathbf p \, a(\mathbf p) e^{-{i \over \hbar} E t_0} \left| \mathbf p \right>.$$

A similar decomposition will also hold for $\left| \varphi(t) \right>$. This should inspire us to write $\left| \psi(t) \right >$ as

$$\left| \psi(t) \right> = \int {\rm d}^3 \mathbf p \, a(\mathbf p) e^{-{i \over \hbar} E t} \left| \psi^{(+)}_{\mathbf p} \right>$$

where the states $\left| \psi^{(+)}_{\mathbf p} \right>$ are to be determined from the equation for $\left|\psi(t) \right>$.
Now, the amazing thing (which I again won't derive due to the lack of space) is that these states are actually eigenstates of $H$:

$$H \left| \psi^{(+)}_{\mathbf p} \right> = E \left| \psi^{(+)}_{\mathbf p} \right>$$

for $E = {\mathbf p^2 \over 2m}$ (here we assumed that the free part is simply $H_0 = {{\mathbf p}^2 \over 2m}$ and that $H_I(t)$ is independent of time). Similarly, one can derive advanced eigenstates from the advanced Green function:

$$H \left| \psi^{(-)}_{\mathbf p} \right> = E \left| \psi^{(-)}_{\mathbf p} \right>.$$

Now, in one dimension and for an interaction Hamiltonian of the form $\left< \mathbf x \right| H_I \left| \mathbf x' \right> = \delta(\mathbf x - \mathbf x') U(\mathbf x)$ it can be further shown that

$$\psi^{(+)}_p \sim \begin{cases} e^{{i \over \hbar}px} + A(p) e^{-{i \over \hbar}px} & x< -a \cr B(p)e^{{i \over \hbar}px} & x> a \end{cases}$$

where $a$ is such that the potential vanishes for $|x| > a$, and $A(p)$ and $B(p)$ are coefficients fully determined by the potential $U(x)$. A similar discussion again applies to the wavefunctions $\psi^{(-)}_p$. Thus we have succeeded in reducing the dynamical problem to a stationary problem by writing the non-stationary states $\psi(t, x)$ in terms of the stationary $\psi^{(+)}_p(x)$.

−1 This answer is no good. You are turning off the scattering potential at $t=-\infty$ for no reason; the Hamiltonian in a scattering problem of the sort the OP is asking about is time independent. The answer is ridiculously formal, and all the interesting things are in the "it can be shown..."s. – Ron Maimon Aug 17 '11 at 2:32

@Ron: I don't quite understand your objection. Physically, the $t = -\infty$ part of the potential never matters in a scattering problem, since particles are infinitely far away from the potential (which is usually generated by their being close anyway). So this is only a technicality that I prefer to work with, and it doesn't change anything (rather, it's very convenient in more general situations). As for the "it can be shown" parts... well, I can show them, but the answer would be twice as long. Will you remove the downvote if I include the derivations? And as for being formal... so what? – Marek Aug 17 '11 at 6:26

The answer to this is the same as the answer to why you solve the time-independent Schrödinger equation (TISE) to find the time evolution of a bound particle. First you solve the TISE to find the stationary states $\psi_n$, then you write the particle's wavefunction $\Psi(t=0)$ in terms of a superposition of the $\psi_n$. Since you know how the stationary states evolve in time, you now know (at least in principle) how ANY wavefunction evolves in time.

It's the same thing for scattering. You figure out what happens for the energy eigenstates, and now you know what will happen for any wavepacket (which you would write as a superposition of energy eigenstates, of course). And here it's even easier than the bound states: if all you care about is R and T, and your wavepacket has a narrow range of energies (for which T is nearly constant), then the value of T for your wavepacket is the same as what you just calculated for the energy eigenstate. Huzzah!

If your wavepacket involves a superposition of a wide range of energies, with a wide range of T's, then your life will be more complicated, of course. But in scattering experiments, folks usually try to employ nearly monoenergetic beams.
Because quantum mechanics classes spend so much time mired in the details of solving the TISE (either for scattering or bound states), they often lose sight of one of the motivations for solving the TISE: it's a tool for finding the time behavior of any initial condition.

I'm baffled by @Marek's statement that the Hamiltonian is explicitly time-dependent. It certainly doesn't need to be and often isn't. For instance, Rutherford scattering: $H=p^2/(2m)+q_1q_2/(4\pi\epsilon_0r)$. Note the absence of time dependence. In a scattering situation, the wavefunction is time-dependent, not generally the Hamiltonian. In any situation in which the Hamiltonian is explicitly time-dependent, the procedure described in the original question wouldn't work, so in the context of this question we're certainly assuming time-independent Hamiltonians. – Ted Bunn Jul 22 '11 at 18:49

@Ted: also note that the process Mark describes is not what AC describes in his answer. We don't evolve solutions in time at all. To give a complete justification one needs to proceed as in the usual scattering theory (which is best dealt with in the Dirac picture and not the Schrödinger picture). This is a huge subject and it certainly is not about simple solving of the TISE (even though it can be reduced to this sometimes)... – Marek Jul 22 '11 at 19:36

I don't dispute any of this, but I don't think any of it is relevant to the question at hand. Note that it's explicitly about scattering from a static potential. One should be able to understand why the "usual" undergraduate quantum mechanics procedure for treating, e.g., Rutherford scattering, or scattering from a delta-function potential, or a square barrier gives the right answer. (Continued ...) – Ted Bunn Jul 22 '11 at 19:47

There's no need to introduce time-dependence in any of those cases: you could solve the time-dependent equation numerically for a wave packet, or you can solve the time-independent Schrödinger equation analytically. As I understand it, Mark's question is why those two ways of treating the problem give the same answer. – Ted Bunn Jul 22 '11 at 19:49

@Ted: well, I was just trying to describe why the problem is about something else than simple solving of the TISE. As for the real justification, I hinted at it in my comment under the question: it follows from the L-S equation. What AC describes is either another way of solving the scattering problem (and so irrelevant to the question) or a (wrong) justification of why the "usual" way works. Either way, I find this answer unsatisfactory. – Marek Jul 22 '11 at 20:12

Here I would like to expand some of the arguments given in Ron Maimon's nice answer.

i) Setting. Let us divide the 1D $x$-axis into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ in the middle region $II$ having compact support. (Clearly, there are physically relevant potentials that do not have compact support, e.g. the Coulomb potential, but this assumption simplifies the following discussion concerning the notion of asymptotic states.)

ii) Time-independent and monochromatic. The particle is free in the regions $I$ and $III$, so we can solve the time-independent Schrödinger equation

$$\hat{H}\psi(x) ~=~E \psi(x), \qquad\qquad \hat{H}~=~ \frac{\hat{p}^2}{2m}+V(x),\qquad\qquad E> 0, \tag{1}$$

exactly there.
We know that the 2nd order linear ODE has two linearly independent solutions, which in the free regions $I$ and $III$ are plane waves

$$ \psi_{I}(x) ~=~ \underbrace{a^{+}_{I}(k)e^{ikx}}_{\text{incoming right-mover}} + \underbrace{a^{-}_{I}(k)e^{-ikx}}_{\text{outgoing left-mover}}, \qquad k> 0, \tag{2} $$

$$ \psi_{III}(x) ~=~ \underbrace{a^{+}_{III}(k)e^{ikx}}_{\text{outgoing right-mover}} + \underbrace{a^{-}_{III}(k)e^{-ikx}}_{\text{incoming left-mover}}. \tag{3} $$

Just from linearity of the Schrödinger equation, even without solving the middle region $II$, we know that the four coefficients $a^{\pm}_{I/III}(k)$ are constrained by two linear conditions. This observation leads, by the way, to the time-independent notions of the scattering $S$-matrix and the transfer $M$-matrix:

$$ \begin{pmatrix} a^{-}_{I}(k) \\ a^{+}_{III}(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{III}(k) \end{pmatrix}, \tag{4} $$

$$ \begin{pmatrix} a^{+}_{III}(k) \\ a^{-}_{III}(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} a^{+}_{I}(k) \\ a^{-}_{I}(k) \end{pmatrix}, \tag{5} $$

see e.g. my Phys.SE answer here.

iii) Time-dependence of monochromatic wave. The dispersion relation reads

$$ \frac{E(k)}{\hbar} ~\equiv~\omega(k)~=~\frac{\hbar k^2}{2m}. \tag{6} $$

The specific form on the right-hand side of the dispersion relation $(6)$ will not matter in what follows (although we will assume for simplicity that it is the same for right- and left-movers). The full time-dependent monochromatic solution in the free regions I and III becomes

$$ \Psi_r(x,t) ~=~ \sum_{\sigma=\pm}a^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t} ~=~\underbrace{e^{-i\omega(k)t}}_{\text{phase factor}} \Psi_r(x,0), \qquad r ~\in~ \{I, III\}. \tag{7} $$

The solution $(7)$ is a sum of a right-mover ($\sigma=+$) and a left-mover ($\sigma=-$). For now the words right- and left-mover may be taken as semantic names without physical content. The solution $(7)$ is fully delocalized in the free regions I and III with the probability density $|\Psi_r(x,t)|^2$ independent of time $t$, so naively, it does not make sense to say that the waves are right- or left-moving, or even scatter! However, it turns out, we may view the monochromatic wave $(7)$ as a limit of a wave packet, and obtain a physical interpretation in that way; see the next section.

iv) Wave packet. We now take a wave packet

$$ A^{\sigma}_r(k)~=~0 \qquad \text{for} \qquad |k-k_0| ~\geq~ K, \qquad\sigma~\in~\{\pm\}, \qquad r ~\in~ \{I, III\},\tag{8} $$

narrowly peaked around some particular value $k_0$ in $k$-space,

$$|k-k_0| ~\leq~ K, \tag{9}$$

where $K$ is some small wave number scale, so that we may Taylor expand the dispersion relation

$$\omega(k)~=~ \omega(k_0) + v_g(k_0)(k-k_0) + {\cal O}\left((k-k_0)^2\right), \tag{10} $$

and drop higher-order terms ${\cal O}\left((k-k_0)^2\right)$. Here

$$ v_g(k)~:=~\omega'(k) \tag{11} $$

is the group velocity.
The wave packet (in the free regions I and III) is a sum of a right- and a left-mover,

$$ \Psi_r(x,t)~=~ \Psi^{+}_r(x,t)+\Psi^{-}_r(x,t), \qquad\qquad r ~\in~ \{I, III\},\tag{12} $$

$$ \Psi^{\sigma}_r(x,t)~:=~ \int dk~A^{\sigma}_r(k)e^{\sigma ikx-i\omega(k)t}, \qquad\qquad\sigma~\in~\{\pm\}, \qquad\qquad r ~\in~ \{I, III\}, $$

$$ ~\approx~ e^{i(k_0 v_g(k_0)-\omega(k_0))t} \int dk~A^{\sigma}_r(k)e^{ ik(\sigma x- v_g(k_0)t)}$$

$$~=~\underbrace{e^{i(k_0 v_g(k_0)-\omega(k_0))t}}_{\text{phase factor}} ~\Psi^{\sigma}_r\left(x-\sigma v_g(k_0)t,0\right).\tag{13}$$

The right- and left-movers $\Psi^{\sigma}$ will be very long spread-out wave trains of sizes $\geq \frac{1}{K}$ in $x$-space, but we are still able to identify via eq. $(13)$ their time evolution as just

1. a collective motion with group velocity $\sigma v_g(k_0)$, and
2. an overall time-dependent phase factor of modulus $1$ (which is the same for the right- and the left-mover).

In the limit $K \to 0$, with $K >0$, the approximation $(10)$ becomes better and better, and we recover the time-independent monochromatic wave,

$$ A^{\sigma}_r(k) ~\longrightarrow ~a^{\sigma}_r(k_0)~\delta(k-k_0)\qquad \text{for} \qquad K\to 0. \tag{14}$$

It thus makes sense to assign a group velocity to each of the $\pm$ parts of the monochromatic wave $(7)$, because it can be understood as an appropriate limit of the wave packet $(13)$. The previous sentence is, in a nutshell, the answer to OP's title question (v3).

There is already a detailed and correct derivation; in my answer I will try to address the qualitative side of the "why". In a scattering problem, there is always a hierarchy of well-separated scales. In your example of an alpha particle in the Rutherford experiment, you refer to localization in space, which implies a certain spread in momentum/energy. However, as long as this spread is smaller than the characteristic energy scale on which the scattering amplitude changes, the time-independent treatment at well-defined energy should give correct results. In terms of lengths, the scale separation required for the time-independent picture to work is that the wave-packet of the alpha particle should be larger than the neighbourhood of the nucleus where the scattering happens. Typically, this is the case; if it is not, the alpha particle is likely to have a very uncertain (in the Heisenberg sense) energy/momentum.

I also struggled to understand it myself. Why I think this confuses many people is that they try to interpret the time-independent scattering wavefunction as describing one single collision of a particle from the target, and it is this interpretation which is not correct and leads to the confusion!

I think that the easiest way of seeing why the time-independent approach works lies in the definition of the scattering process which the wavefunction describes. The time-independent scattering solution describes the situation in which the target is being continuously bombarded by a flux of non-interacting projectiles approaching with different impact parameters (this is how most scattering experiments work). Therefore the process you are trying to describe is stationary. This is the actual reason why the time-independent formulation works. You can see that from e.g. the classical book on scattering (Taylor: Scattering Theory), where the scattering process is defined (Chapter 3, section d) very clearly in terms of the continuous flux of the incoming particles.
You can convince yourself that this interpretation of the time-independent scattering solution is indeed correct by simply noting that the probability flux (either incoming or outgoing) that you can calculate from the scattering wavefunction has the units of probability per unit time per unit area, i.e. it describes a stationary scattering process.
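To make the recipe "solve the stationary problem, then read off R and T" concrete, here is a small matching calculation (my own sketch, not from any of the answers above; ħ = m = 1 and the barrier parameters are assumptions). It solves the continuity conditions for a square barrier of height V₀ on 0 < x < a and checks that |r|² + |t|² = 1 both below and above the barrier top:

```python
# Reflection/transmission for a square barrier V(x) = V0 on 0 < x < a,
# obtained purely from the stationary scattering solution (hbar = m = 1).
import numpy as np

def scattering_coefficients(E, V0=1.0, a=2.0):
    k = np.sqrt(2 * E)                       # wave number outside the barrier
    q = np.sqrt(2 * complex(E - V0))         # inside; imaginary for E < V0
    # Unknowns r, A, B, t from continuity of psi and psi' at x = 0 and x = a:
    #   1 + r = A + B,            ik(1 - r) = iq(A - B),
    #   A e^{iqa} + B e^{-iqa} = t e^{ika},  iq(A e^{iqa} - B e^{-iqa}) = ik t e^{ika}.
    M = np.array([
        [-1, 1, 1, 0],
        [1j * k, 1j * q, -1j * q, 0],
        [0, np.exp(1j * q * a), np.exp(-1j * q * a), -np.exp(1j * k * a)],
        [0, 1j * q * np.exp(1j * q * a), -1j * q * np.exp(-1j * q * a),
         -1j * k * np.exp(1j * k * a)],
    ], dtype=complex)
    rhs = np.array([1, 1j * k, 0, 0], dtype=complex)
    r, A, B, t = np.linalg.solve(M, rhs)
    return r, t

for E in (0.5, 1.5):                         # below and above the barrier top
    r, t = scattering_coefficients(E)
    print(E, abs(r)**2, abs(t)**2, abs(r)**2 + abs(t)**2)  # last column: 1.0
```

Since the potential vanishes on both sides, the outside wave numbers are equal and the flux condition reduces to |r|² + |t|² = 1, which the printout confirms.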
Complex number

From Wikipedia, the free encyclopedia

A complex number can be visually represented as a pair of numbers (a, b) forming a vector on a diagram called an Argand diagram, representing the complex plane. "Re" is the real axis, "Im" is the imaginary axis, and i is the imaginary unit which satisfies $i^2 = -1$.

A complex number is a number that can be expressed in the form $a + bi$, where a and b are real numbers and i is the imaginary unit, which satisfies the equation $i^2 = -1$.[1] In this expression, a is the real part and b is the imaginary part of the complex number.

Complex numbers extend the concept of the one-dimensional number line to the two-dimensional complex plane by using the horizontal axis for the real part and the vertical axis for the imaginary part. The complex number $a + bi$ can be identified with the point (a, b) in the complex plane. A complex number whose real part is zero is said to be purely imaginary, whereas a complex number whose imaginary part is zero is a real number. In this way, the complex numbers contain the ordinary real numbers while extending them in order to solve problems that cannot be solved with real numbers alone.

As well as their use within mathematics, complex numbers have practical applications in many fields, including physics, chemistry, biology, economics, electrical engineering, and statistics. The Italian mathematician Gerolamo Cardano is the first known to have introduced complex numbers. He called them "fictitious" during his attempts to find solutions to cubic equations in the 16th century.[2]

Complex numbers allow for solutions to certain equations that have no solutions in real numbers. For example, the equation

$$(x+1)^2 = -9$$

has no real solution, since the square of a real number cannot be negative. Complex numbers provide a solution to this problem. The idea is to extend the real numbers with the imaginary unit i, where $i^2 = -1$, so that solutions to equations like the preceding one can be found. In this case the solutions are $-1 + 3i$ and $-1 - 3i$, as can be verified using the fact that $i^2 = -1$:

$$((-1+3i)+1)^2 = (3i)^2 = (3^2)(i^2) = 9(-1) = -9,$$

$$((-1-3i)+1)^2 = (-3i)^2 = (-3)^2(i^2) = 9(-1) = -9.$$

A complex number is a number of the form $a + bi$, where a and b are real numbers and i is the imaginary unit, satisfying $i^2 = -1$. For example, $-3.5 + 2i$ is a complex number. The real number a is called the real part of the complex number $a + bi$; the real number b is called the imaginary part of $a + bi$. By this convention the imaginary part does not include the imaginary unit: hence b, not bi, is the imaginary part.[3][4]

The real part of a complex number z is denoted by Re(z) or ℜ(z); the imaginary part of a complex number z is denoted by Im(z) or ℑ(z). For example,

$$\operatorname{Re}(-3.5 + 2i) = -3.5, \qquad \operatorname{Im}(-3.5 + 2i) = 2.$$

Hence, in terms of its real and imaginary parts, a complex number z is equal to $\operatorname{Re}(z) + \operatorname{Im}(z) \cdot i$. This expression is sometimes known as the Cartesian form of z.

A real number a can be regarded as a complex number $a + 0i$ whose imaginary part is 0. A purely imaginary number $bi$ is a complex number $0 + bi$ whose real part is zero. It is common to write a for $a + 0i$ and bi for $0 + bi$. Moreover, when the imaginary part is negative, it is common to write $a - bi$ with $b > 0$ instead of $a + (-b)i$, for example $3 - 4i$ instead of $3 + (-4)i$.

The set of all complex numbers is denoted by ℂ, $\mathbf{C}$ or $\mathbb{C}$.
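As an aside (not part of the original article), languages with built-in complex arithmetic make such claims easy to check; Python writes the imaginary unit as j:

```python
# Verify the two roots of (x + 1)^2 = -9 and the defining property of i.
z1 = -1 + 3j
z2 = -1 - 3j
print((z1 + 1) ** 2)   # (-9+0j)
print((z2 + 1) ** 2)   # (-9+0j)
print((1j) ** 2)       # (-1+0j), i.e. i^2 = -1
```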
Some authors[5] write $a + ib$ instead of $a + bi$, particularly when b is a radical. In some disciplines, in particular electromagnetism and electrical engineering, j is used instead of i,[6] since i is frequently used for electric current. In these cases complex numbers are written as $a + bj$ or $a + jb$.

Complex plane

Main article: Complex plane

Figure 1: A complex number plotted as a point (red) and position vector (blue) on an Argand diagram; $a+bi$ is the rectangular expression of the point.

A complex number can be viewed as a point or position vector in a two-dimensional Cartesian coordinate system called the complex plane or Argand diagram (see Pedoe 1988 and Solomentsev 2001), named after Jean-Robert Argand. The numbers are conventionally plotted using the real part as the horizontal component, and the imaginary part as the vertical one (see Figure 1). These two values used to identify a given complex number are therefore called its Cartesian, rectangular, or algebraic form.

A position vector may also be defined in terms of its magnitude and direction relative to the origin. These are emphasized in a complex number's polar form. Using the polar form of the complex number in calculations may lead to a more intuitive interpretation of mathematical results. Notably, the operations of addition and multiplication take on a very natural geometric character when complex numbers are viewed as position vectors: addition corresponds to vector addition, while multiplication corresponds to multiplying their magnitudes and adding their arguments (i.e. the angles they make with the x axis). Viewed in this way, the multiplication of a complex number by i corresponds to rotating the position vector counterclockwise by a quarter turn (90°) about the origin:

$$(a+bi)i = ai + bi^2 = -b + ai.$$

History in brief

Main section: History

Many mathematicians contributed to the full development of complex numbers. The rules for addition, subtraction, multiplication, and division of complex numbers were developed by the Italian mathematician Rafael Bombelli.[7] A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions.

Two complex numbers are equal if and only if both their real and imaginary parts are equal. In symbols:

$$z_{1} = z_{2} \;\leftrightarrow\; \left( \operatorname{Re}(z_{1}) = \operatorname{Re}(z_{2}) \,\land\, \operatorname{Im}(z_{1}) = \operatorname{Im}(z_{2})\right).$$

Because complex numbers are naturally thought of as existing on a two-dimensional plane, there is no natural linear ordering on the set of complex numbers.[8] There is no linear ordering on the complex numbers that is compatible with addition and multiplication. Formally, we say that the complex numbers cannot have the structure of an ordered field. This is because any square in an ordered field is at least 0, but $i^2 = -1$.

Elementary operations

Geometric representation of z and its conjugate $\bar{z}$ in the complex plane

The complex conjugate of the complex number $z = x + yi$ is defined to be $x - yi$. It is denoted $\bar{z}$ or $z^*$. Formally, for any complex number z:

$$\bar{z} = \operatorname{Re}(z) - \operatorname{Im}(z) \cdot i .$$

Geometrically, $\bar{z}$ is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: $\bar{\bar{z}}=z$.
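A quick check of the rotation and conjugation claims above (an illustration added here, not from the original article):

```python
# Multiplying by i rotates a point of the complex plane by a quarter turn
# counterclockwise; conjugation reflects it about the real axis.
import cmath

z = 3 + 1j
print(z * 1j)                                # (-1+3j), i.e. -b + ai
print(abs(z), abs(z * 1j))                   # moduli agree: rotation preserves length
print(cmath.phase(z * 1j) - cmath.phase(z))  # 1.5707... = pi/2, a 90-degree turn
print(z.conjugate(), z.conjugate().conjugate())  # (3-1j) (3+1j): twice gives z back
```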
The real and imaginary parts of a complex number z can be extracted using the conjugate:

$$\operatorname{Re}(z) = \tfrac{1}{2}(z+\bar{z}), \qquad \operatorname{Im}(z) = \tfrac{1}{2i}(z-\bar{z}).$$

Moreover, a complex number is real if and only if it equals its conjugate. Conjugation distributes over the standard arithmetic operations:

$$\overline{z+w} = \bar{z} + \bar{w}, \qquad \overline{z-w} = \bar{z} - \bar{w}, \qquad \overline{z w} = \bar{z}\, \bar{w}, \qquad \overline{(z/w)} = \bar{z}/\bar{w}.$$

Addition and subtraction

Complex numbers are added by adding the real and imaginary parts of the summands. That is to say:

$$(a+bi) + (c+di) = (a+c) + (b+d)i.$$

Similarly, subtraction is defined by

$$(a+bi) - (c+di) = (a-c) + (b-d)i.$$

Multiplication and division

The multiplication of two complex numbers is defined by the following formula:

$$(a+bi)(c+di) = (ac-bd) + (bc+ad)i.$$

In particular, the square of the imaginary unit is −1:

$$i^2 = i \times i = -1.$$

The preceding definition of multiplication of general complex numbers follows naturally from this fundamental property of the imaginary unit. Indeed, if i is treated as a number so that di means d times i, the above multiplication rule is identical to the usual rule for multiplying two sums of two terms:

$$(a+bi)(c+di) = ac + bci + adi + bidi$$ (distributive law)
$$= ac + bidi + bci + adi$$ (commutative law of addition: the order of the summands can be changed)
$$= ac + bdi^2 + (bc+ad)i$$ (commutative and distributive laws)
$$= (ac-bd) + (bc+ad)i$$ (fundamental property of the imaginary unit).

The division of two complex numbers is defined in terms of complex multiplication, which is described above, and real division. When at least one of c and d is non-zero, we have

$$\frac{a + bi}{c + di} = \left({ac + bd \over c^2 + d^2}\right) + \left( {bc - ad \over c^2 + d^2} \right)i.$$

Division can be defined in this way because of the following observation:

$$\frac{a + bi}{c + di} = \frac{\left(a + bi\right) \left(c - di\right)}{\left (c + di\right) \left (c - di\right)} = \left({ac + bd \over c^2 + d^2}\right) + \left( {bc - ad \over c^2 + d^2} \right)i.$$

As shown earlier, $c - di$ is the complex conjugate of the denominator $c + di$. At least one of the real part c and the imaginary part d of the denominator must be nonzero for division to be defined. This is called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number).

The reciprocal of a nonzero complex number $z = x + yi$ is given by

$$\frac{1}{z}=\frac{\bar{z}}{z \bar{z}}=\frac{\bar{z}}{x^2+y^2}=\frac{x}{x^2+y^2} -\frac{y}{x^2+y^2}i.$$

This formula can be used to compute the multiplicative inverse of a complex number if it is given in rectangular coordinates. Inversive geometry, a branch of geometry studying reflections more general than ones about a line, can also be expressed in terms of complex numbers. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is used.

Square root

The square roots of $a + bi$ (with $b \neq 0$) are $\pm (\gamma + \delta i)$, where

$$\gamma = \sqrt{\frac{a + \sqrt{a^2 + b^2}}{2}}$$

and

$$\delta = \sgn(b) \sqrt{\frac{-a + \sqrt{a^2 + b^2}}{2}},$$

where sgn is the signum function. This can be seen by squaring $\pm (\gamma + \delta i)$ to obtain $a + bi$.[9][10] Here $\sqrt{a^2 + b^2}$ is called the modulus of $a + bi$, and the square root sign indicates the square root with non-negative real part, called the principal square root; also $\sqrt{a^2 + b^2}= \sqrt{z\bar{z}}$, where $z = a + bi$.[11]
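The square-root formula can be checked against a library implementation; the following sketch (not from the original article) uses copysign to play the role of sgn(b) for b ≠ 0:

```python
# Principal square root of a + bi from the explicit gamma/delta formula,
# compared with Python's built-in cmath.sqrt.
import cmath
import math

def principal_sqrt(a, b):
    r = math.hypot(a, b)                       # modulus sqrt(a^2 + b^2)
    gamma = math.sqrt((a + r) / 2)
    delta = math.copysign(math.sqrt((-a + r) / 2), b)   # sgn(b) factor
    return complex(gamma, delta)

for z in (3 + 4j, -3 + 4j, 1 - 1j):
    print(principal_sqrt(z.real, z.imag), cmath.sqrt(z))   # pairs agree
```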
Polar form

Figure 2: The argument φ and modulus r locate a point on an Argand diagram; $r(\cos \varphi + i \sin \varphi)$ or $r e^{i\varphi}$ are polar expressions of the point.

Absolute value and argument

An alternative way of defining a point P in the complex plane, other than using the x- and y-coordinates, is to use the distance of the point from O, the point whose coordinates are (0, 0) (the origin), together with the angle subtended between the positive real axis and the line segment OP in a counterclockwise direction. This idea leads to the polar form of complex numbers.

The absolute value (or modulus or magnitude) of a complex number $z = x + yi$ is

$$r = |z| = \sqrt{x^2 + y^2}.$$

If z is a real number (i.e., y = 0), then r = |x|. In general, by Pythagoras' theorem, r is the distance of the point P representing the complex number z to the origin. The square of the absolute value is

$$|z|^2 = z\bar{z} = x^2 + y^2,$$

where $\bar{z}$ is the complex conjugate of z.

The argument of z (in many applications referred to as the "phase") is the angle of the radius OP with the positive real axis, and is written as $\arg(z)$. As with the modulus, the argument can be found from the rectangular form $x+yi$:[12]

$$\varphi = \arg(z) = \begin{cases} \arctan\left(\frac{y}{x}\right) & \text{if } x > 0 \\ \arctan\left(\frac{y}{x}\right) + \pi & \text{if } x < 0 \text{ and } y \ge 0 \\ \arctan\left(\frac{y}{x}\right) - \pi & \text{if } x < 0 \text{ and } y < 0 \\ \frac{\pi}{2} & \text{if } x = 0 \text{ and } y > 0 \\ -\frac{\pi}{2} & \text{if } x = 0 \text{ and } y < 0 \\ \text{indeterminate} & \text{if } x = 0 \text{ and } y = 0. \end{cases}$$

Normally, as given above, the principal value in the interval (−π, π] is chosen. Values in the range [0, 2π) are obtained by adding 2π if the value is negative. The value of φ is expressed in radians in this article. It can increase by any integer multiple of 2π and still give the same angle. Hence, the arg function is sometimes considered as multivalued. The polar angle for the complex number 0 is indeterminate, but an arbitrary choice of the angle 0 is common. The value of φ equals the result of atan2:

$$\varphi = \operatorname{atan2}(y, x).$$

Together, r and φ give another way of representing complex numbers, the polar form:

$$z = r(\cos \varphi + i\sin \varphi ).$$

Using Euler's formula this can be written as

$$z = r e^{i \varphi}.$$

Using the cis function, this is sometimes abbreviated to

$$z = r \operatorname{cis} \varphi.$$

In angle notation, often used in electronics to represent a phasor, it is written as

$$z = r \angle \varphi .$$

Multiplication and division in polar form

Because of the trigonometric identities

$$\cos(a)\cos(b) - \sin(a)\sin(b) = \cos(a + b),$$
$$\cos(a)\sin(b) + \sin(a)\cos(b) = \sin(a + b),$$

we may derive

$$z_1 z_2 = r_1 r_2 (\cos(\varphi_1 + \varphi_2) + i \sin(\varphi_1 + \varphi_2)).$$

In other words, the absolute values are multiplied and the arguments are added to yield the polar form of the product. For example, $(2+i)(3+i)=5+5i$. Since the real and imaginary parts of $5+5i$ are equal, its argument is π/4, while the arguments of the factors $2+i$ and $3+i$ are $\arctan(1/2)$ and $\arctan(1/3)$; hence the formula

$$\frac{\pi}{4} = \arctan\frac{1}{2} + \arctan\frac{1}{3}$$

holds. As the arctan function can be approximated highly efficiently, formulas like this, known as Machin-like formulas, are used for high-precision approximations of π. Similarly, division is given by

$$\frac{z_1}{ z_2} = \frac{r_1}{ r_2} \left(\cos(\varphi_1 - \varphi_2) + i \sin(\varphi_1 - \varphi_2)\right).$$
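A short verification of the polar multiplication rule and the Machin-like identity above (an added illustration, not from the original article):

```python
# In polar form, multiplication multiplies the moduli and adds the
# arguments; cmath.polar and cmath.rect make this easy to verify.
import cmath

z1, z2 = 2 + 1j, 3 + 1j
r1, phi1 = cmath.polar(z1)
r2, phi2 = cmath.polar(z2)

print(cmath.rect(r1 * r2, phi1 + phi2))   # (5+5j), built from the polar rule
print(z1 * z2)                            # (5+5j), direct multiplication
# arg(5+5i) = pi/4, so phi1 + phi2 reproduces pi/4 = arctan(1/2) + arctan(1/3):
print(phi1 + phi2, cmath.pi / 4)          # both 0.7853981633974483
```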
Euler's formula

Euler's formula states that, for any real number x,

$$e^{ix} = \cos x + i\sin x .$$

The powers of the imaginary unit repeat with period four:

$$i^0 = 1, \quad i^1 = i, \quad i^2 = -1, \quad i^3 = -i,$$
$$i^4 = 1, \quad i^5 = i, \quad i^6 = -1, \quad i^7 = -i, \quad \ldots$$

Natural logarithm

Euler's formula allows us to observe that, for any complex number $z = r(\cos \varphi + i\sin \varphi )$, where r is a non-negative real number, one possible value for z's natural logarithm is

$$\ln (z)= \ln(r) + \varphi i .$$

Because cos and sin are periodic functions, the natural logarithm may be considered a multi-valued function, with:

$$\ln(z) = \left\{ \ln(r) + (\varphi + 2\pi k)i \;\middle|\; k \in \mathbb{Z}\right\}$$

Integer and fractional exponents

We may use the identity $\ln(a^{b}) = b \ln(a)$ to define complex exponentiation, which is likewise multi-valued:

$$\ln (z^n)=\ln\left(\left(r(\cos \varphi + i\sin \varphi )\right)^{n}\right) = n \ln\left(r(\cos \varphi + i\sin \varphi)\right)$$
$$= \{ n (\ln(r) + (\varphi + 2\pi k) i) \mid k \in \mathbb{Z} \}$$
$$= \{ n \ln(r) + n \varphi i + 2\pi n k i \mid k \in \mathbb{Z} \}.$$

For integer exponents this yields de Moivre's formula:

$$z^{n}=\left(r(\cos \varphi + i\sin \varphi )\right)^{n} = r^n\,(\cos n\varphi + i \sin n \varphi).$$

The nth roots of z are given by

$$\sqrt[n]{z} = \sqrt[n]r \left( \cos \left(\frac{\varphi+2k\pi}{n}\right) + i \sin \left(\frac{\varphi+2k\pi}{n}\right)\right)$$

for $k = 0, 1, \ldots, n-1$. Note that, unlike for positive real numbers, the identity

$$\sqrt[n]{z^n} = z$$

does not hold in general for complex z, since the nth root is multi-valued.

Field structure

The set C of complex numbers is a field: sums, differences, products and quotients (by nonzero numbers) of complex numbers are again complex numbers, and the familiar laws of arithmetic hold. For example, addition and multiplication are commutative:

$$z_1+ z_2 = z_2 + z_1, \qquad z_1 z_2 = z_2 z_1.$$

Solutions of polynomial equations

Given any complex numbers (called coefficients) $a_0, \ldots, a_n$, the equation

$$a_n z^n + \dotsb + a_1 z + a_0 = 0$$

has at least one complex solution z, provided that at least one of the higher coefficients $a_1, \ldots, a_n$ is nonzero. This is the statement of the fundamental theorem of algebra. Because of this fact, C is called an algebraically closed field. This property does not hold for the field of rational numbers Q (the polynomial $x^2 - 2$ does not have a rational root, since $\sqrt{2}$ is not a rational number) nor the real numbers R (the polynomial $x^2 + a$ does not have a real root for $a > 0$, since the square of x is positive for any real number x).

Algebraic characterization

Characterization as a topological field

• P is closed under addition, multiplication and taking inverses.

Formal construction

Formal development

Above, complex numbers have been defined by introducing i, the imaginary unit, as a symbol. More rigorously, the set C of complex numbers can be defined as the set R² of ordered pairs (a, b) of real numbers. In this notation, the above formulas for addition and multiplication read

$$(a, b) + (c, d) = (a + c, b + d),$$
$$(a, b) \cdot (c, d) = (ac - bd, bc + ad).$$

These operations satisfy the usual laws of arithmetic, for example distributivity:

$$(x+y) z = xz + yz .$$

The quotient ring R[X]/(X² + 1) can be shown to be a field. This extension field contains two square roots of −1, namely (the cosets of) X and −X, respectively. (The cosets of) 1 and X form a basis of R[X]/(X² + 1) as a real vector space, which means that each element of the extension field can be uniquely written as a linear combination in these two elements. Equivalently, elements of the extension field can be written as ordered pairs (a, b) of real numbers. Moreover, the above formulas for addition etc. correspond to the ones yielded by this abstract algebraic approach; the two definitions of the field C are said to be isomorphic (as fields). Together with the above-mentioned fact that C is algebraically closed, this also shows that C is an algebraic closure of R.

Matrix representation of complex numbers

Complex numbers $a + bi$ can also be represented by 2 × 2 real matrices of the form

$$\begin{pmatrix} a & -b \\ b & \;\; a \end{pmatrix}.$$

The square of the absolute value of a complex number expressed as such a matrix equals the determinant of that matrix:

$$|z|^2 = \begin{vmatrix} a & -b \\ b & a \end{vmatrix} = (a^2) - ((-b)(b)) = a^2 + b^2.$$

The conjugate $\overline z$ corresponds to the transpose of the matrix.
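The matrix representation can be exercised numerically; the following sketch (added here, not from the original article) checks that the map a + bi ↦ [[a, −b], [b, a]] turns complex multiplication into matrix multiplication:

```python
# 2x2 real-matrix representation of complex numbers: multiplication,
# determinant (= |z|^2) and transpose (= conjugation) all correspond.
import numpy as np

def as_matrix(z):
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 1 + 2j, 3 - 1j
print(np.allclose(as_matrix(z) @ as_matrix(w), as_matrix(z * w)))  # True
print(np.linalg.det(as_matrix(z)), abs(z) ** 2)                    # 5.0 5.0
print(np.allclose(as_matrix(z).T, as_matrix(z.conjugate())))       # True
```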
Though this representation of complex numbers with matrices is the most common, many other representations arise from matrices other than $\bigl(\begin{smallmatrix}0 & -1 \\1 & 0 \end{smallmatrix}\bigr)$ that square to the negative of the identity matrix. See the article on 2 × 2 real matrices for other representations of complex numbers.

Complex analysis

Main article: Complex analysis

Complex exponential and related functions

The set C of complex numbers, equipped with the distance function

$$\operatorname{d}(z_1, z_2) = |z_1 - z_2| ,$$

is a complete metric space, which notably includes the triangle inequality

$$|z_1 + z_2| \le |z_1| + |z_2|$$

for any two complex numbers z₁ and z₂.

The exponential function, defined by the power series

$$\exp(z):= 1+z+\frac{z^2}{2\cdot 1}+\frac{z^3}{3\cdot 2\cdot 1}+\cdots = \sum_{n=0}^{\infty} \frac{z^n}{n!},$$

and the series defining the real trigonometric functions sine and cosine, as well as hyperbolic functions such as sinh, also carry over to complex arguments without change. Euler's identity states:

$$\exp(i\varphi) = \cos(\varphi) + i\sin(\varphi)$$

for any real number φ, in particular

$$\exp(i \pi) = -1 .$$

Unlike in the real case, there are infinitely many complex solutions z of the equation

$$\exp(z) = w$$

for any complex number w ≠ 0. It can be shown that any such solution z, called a complex logarithm of w, satisfies

$$z = \log(w) = \ln|w| + i\arg(w),$$

where arg is the argument defined above and ln the (real) natural logarithm. Complex exponentiation $z^\omega$ is defined as

$$z^\omega = \exp(\omega \log z).$$

Consequently, complex powers are in general multi-valued. For ω = 1/n, for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. Being multi-valued, complex powers also fail to satisfy some identities familiar from positive real numbers; for example, the identity

$$a^{bc} = (a^b)^c$$

does not hold in general.

Holomorphic functions

A function f: C → C is called holomorphic if it satisfies the Cauchy–Riemann equations. For example, any R-linear map C → C can be written in the form

$$f(z) = az + b\overline{z}$$

with complex coefficients a and b. This map is holomorphic if and only if b = 0. The second summand $b \overline z$ is real-differentiable, but does not satisfy the Cauchy–Riemann equations.

Complex numbers have essential concrete applications in a variety of scientific and related areas such as signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some applications of complex numbers are control theory, improper integrals, fluid dynamics and dynamic equations, along with the areas sketched below.

Electromagnetism and electrical engineering

Main article: Alternating current

In alternating current circuits, a sinusoidally varying voltage is conveniently represented by the complex quantity

$$V(t) = V_0 e^{j \omega t} = V_0 \left (\cos \omega t + j \sin\omega t \right ).$$

To obtain the measurable quantity, the real part is taken:

$$v(t) = \mathrm{Re}(V) = \mathrm{Re}\left [ V_0 e^{j \omega t} \right ] = V_0 \cos \omega t.$$

The complex-valued signal V(t) is called the analytic representation of the real-valued, measurable signal v(t).[14]

Signal analysis

In signal analysis, a sinusoidal signal is written as the real part of a complex-valued function,

$$x(t) = \operatorname{Re} \{X( t ) \},$$

where

$$X( t ) = A e^{i\omega t} = a e^{ i \phi } e^{i\omega t} = a e^{i (\omega t + \phi) }$$

and the complex amplitude $A = a e^{i\phi}$ encodes both the amplitude a and the phase φ. Complex exponentials also streamline trigonometric manipulations; for example,

$$\cos((\omega+\alpha)t)+\cos\left((\omega-\alpha)t\right) = \operatorname{Re}\left(e^{i(\omega+\alpha)t} + e^{i(\omega-\alpha)t}\right) = \operatorname{Re}\left(\left(e^{i\alpha t} + e^{-i\alpha t}\right) e^{i\omega t}\right)$$
$$= \operatorname{Re}\left(2\cos(\alpha t) \, e^{i\omega t}\right) = 2 \cos(\alpha t) \operatorname{Re}\left(e^{i\omega t}\right) = 2 \cos(\alpha t) \cos\left(\omega t\right).$$

Quantum mechanics

The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics, the Schrödinger equation and Heisenberg's matrix mechanics, make use of complex numbers.

Certain fractals are plotted in the complex plane, e.g. the Mandelbrot set and Julia sets.
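As an added illustration of the fractal remark (not from the original article; the iteration bound of 100 is an arbitrary choice), membership in the Mandelbrot set is decided by iterating z ↦ z² + c in the complex plane:

```python
# Mandelbrot membership test: c belongs to the set if iterating z -> z^2 + c
# from z = 0 stays bounded (|z| <= 2 is the standard escape criterion).
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True: the origin never escapes
print(in_mandelbrot(-1))   # True: the orbit cycles between -1 and 0
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... escapes to infinity
```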
Every triangle has a unique Steiner inellipse, an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem:[15][16] Denote the triangle's vertices in the complex plane as $a = x_A + y_A i$, $b = x_B + y_B i$, and $c = x_C + y_C i$. Write the cubic equation $(x-a)(x-b)(x-c)=0$, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse.

Algebraic number theory

Construction of a regular pentagon using straightedge and compass.

Analytic number theory

History

The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his Stereometrica he considers, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term $\sqrt{81 - 144} = 3i\sqrt{7}$ in his calculations, although negative quantities were not conceived of in Hellenistic mathematics and Heron merely replaced it by its positive, $\sqrt{144 - 81} = 3\sqrt{7}$.[17]

The impetus to study complex numbers proper first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (see Niccolò Fontana Tartaglia, Gerolamo Cardano). It was soon realized that these formulas, even if one was only interested in real solutions, sometimes required the manipulation of square roots of negative numbers. As an example, Tartaglia's formula for a cubic equation of the form $x^3 = px + q$[18] gives the solution to the equation $x^3 = x$ as

$$\frac{1}{\sqrt{3}}\left(\left(\sqrt{-1}\right)^{1/3}+\frac{1}{\left(\sqrt{-1}\right)^{1/3}}\right).$$

At first glance this looks like nonsense. However, formal calculations with complex numbers show that the equation $z^3 = i$ has solutions $-i$, $\frac{\sqrt{3}}{2}+\frac{1}{2}i$ and $\frac{-\sqrt{3}}{2}+\frac{1}{2}i$. Substituting these in turn for $\sqrt{-1}^{1/3}$ in Tartaglia's cubic formula and simplifying, one gets 0, 1 and −1 as the solutions of $x^3 - x = 0$. Of course this particular equation can be solved at sight, but it does illustrate that when general formulas are used to solve cubic equations with real roots then, as later mathematicians showed rigorously, the use of complex numbers is unavoidable. Rafael Bombelli was the first to explicitly address these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic trying to resolve these issues.

The term "imaginary" for these quantities was coined by René Descartes in 1637, who stressed their unreal nature: "[...] sometimes only imaginary, that is to say one can always imagine as many of them in each equation as I have said, but sometimes there exists no quantity that corresponds to what one imagines." ([...] quelquefois seulement imaginaires c’est-à-dire que l’on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu’il n’y a quelquefois aucune quantité qui corresponde à celle qu’on imagine.)

A further source of confusion was that the equation $\sqrt{-1}^2=\sqrt{-1}\sqrt{-1}=-1$ seemed to be capriciously inconsistent with the algebraic identity $\sqrt{a}\sqrt{b}=\sqrt{ab}$, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity (and the related identity $\frac{1}{\sqrt{a}}=\sqrt{\frac{1}{a}}$) in the case when both a and b are negative even bedeviled Euler.
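The failure of √a·√b = √(ab) for negative operands, which caused the confusion just described, is easy to exhibit (an added illustration, not from the original article):

```python
# With a = b = -1, the two sides of sqrt(a)*sqrt(b) = sqrt(a*b) disagree,
# because the principal square root of -1 is i and i * i = -1.
import cmath

a, b = -1, -1
print(cmath.sqrt(a) * cmath.sqrt(b))   # (-1+0j): i times i
print(cmath.sqrt(a * b))               # (1+0j):  sqrt(1)
```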
This difficulty with such identities eventually led to the convention of using the special symbol i in place of \sqrt{-1} to guard against this mistake.[citation needed] Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, Elements of Algebra, he introduces these numbers almost at once and then uses them in a natural way throughout. Euler also established the formula of complex analysis that now bears his name:

\cos \theta + i\sin \theta = e ^{i\theta }.

The geometric representation of complex numbers as points in the plane was first set out by Caspar Wessel in 1799. Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology. The English mathematician G. H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way", although mathematicians such as Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise.[20] Augustin Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case.

The common terms used in the theory are chiefly due to the founders. Argand called \cos \phi + i\sin \phi the direction factor, and r = \sqrt{a^2+b^2} the modulus; Cauchy (1828) called \cos \phi + i\sin \phi the reduced form (l'expression réduite) and apparently introduced the term argument; Gauss used i for \sqrt{-1}, introduced the term complex number for a + bi, and called a2 + b2 the norm. The expression direction coefficient, often used for \cos \phi + i\sin \phi, is due to Hankel (1867), and absolute value, for modulus, is due to Weierstrass.

Generalizations and related notions[edit]

The process of extending the field R of reals to C is known as the Cayley–Dickson construction. It can be carried further to higher dimensions, yielding the quaternions H and octonions O, which (as a real vector space) are of dimension 4 and 8, respectively. In this context the complex numbers have been called the binarions.[21] However, just as applying the construction to the reals loses the property of ordering, more properties familiar from real and complex numbers vanish with increasing dimension. The quaternions are only a skew field, i.e. x·y ≠ y·x for some quaternions x, y; and the multiplication of octonions, in addition to not being commutative, fails to be associative: (x·y)·z ≠ x·(y·z) for some octonions x, y, z.

Reals, complex numbers, quaternions and octonions are all normed division algebras over R. However, by Hurwitz's theorem they are the only ones. The next step in the Cayley–Dickson construction, the sedenions, in fact fails to have this structure.

Multiplication by a fixed complex number w can be seen as an R-linear map

\mathbb{C} \rightarrow \mathbb{C}, \quad z \mapsto wz,

which, with respect to the basis (1, i), is represented by the matrix

\begin{pmatrix} \operatorname{Re}(w) & -\operatorname{Im}(w) \\ \operatorname{Im}(w) & \;\; \operatorname{Re}(w) \end{pmatrix}.

More generally, any real 2 × 2 matrix of the form

J = \begin{pmatrix}p & q \\ r & -p \end{pmatrix}, \quad p^2 + qr + 1 = 0,

satisfies J^2 = -I, and the set

\{ z = a I + b J : a,b \in \mathbf{R} \}

of real linear combinations of the identity matrix I and J is a field isomorphic to the complex numbers.

The field R is the completion of Q, the field of rational numbers, with respect to the usual absolute value metric.
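Returning to the matrix picture above: the claim that any real matrix J with J² = −I spans a copy of C together with the identity can be checked in a few lines. In the sketch below, p and q are arbitrary values satisfying the constraint p² + qr + 1 = 0; nothing else is assumed beyond NumPy.

```python
import numpy as np

p, q = 2.0, 5.0
r = -(p**2 + 1) / q                 # enforce p^2 + q r + 1 = 0
I = np.eye(2)
J = np.array([[p, q], [r, -p]])
assert np.allclose(J @ J, -I)       # J squares to minus the identity

def embed(z):
    """Represent the complex number z = a + b i as the matrix a I + b J."""
    return z.real * I + z.imag * J

z, w = 1.3 - 0.7j, -2.0 + 0.4j
# The embedding is a ring homomorphism: products map to products.
assert np.allclose(embed(z) @ embed(w), embed(z * w))
print(embed(z * w))
```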
Other choices of metrics on Q lead to the fields Qp of p-adic numbers (for any prime number p), which are thereby analogous to R. There are no other nontrivial ways of completing Q than R and Qp, by Ostrowski's theorem. The algebraic closures \overline {\mathbf{Q}_p} of Qp still carry a norm, but (unlike C) are not complete with respect to it. The completion \mathbf{C}_p of \overline {\mathbf{Q}_p} turns out to be algebraically closed. This field is called p-adic complex numbers by analogy.

See also[edit]

References[edit]

1. ^ Charles P. McKeague (2011), Elementary Algebra, Brooks/Cole, p. 524, ISBN 978-0-8400-6421-9
2. ^ Burton (1995, p. 294)
3. ^ Spiegel, M.R.; Lipschutz, S.; Schiller, J.J.; Spellman, D., Complex Variables (2nd ed.), Schaum's Outline Series, McGraw Hill (USA), ISBN 978-0-07-161569-3
5. ^ For example Ahlfors (1979).
6. ^ Brown, James Ward; Churchill, Ruel V. (1996), Complex Variables and Applications (6th ed.), New York: McGraw-Hill, p. 2, ISBN 0-07-912147-0, "In electrical engineering, the letter j is used instead of i."
7. ^ Katz (2004, §9.1.4)
9. ^ Abramowitz, Milton; Stegun, Irene A. (1964), Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, Courier Dover Publications, Section 3.7.26, p. 17, ISBN 0-486-61272-4
10. ^ Cooke, Roger (2008), Classical Algebra: Its Nature, Origins, and Uses, John Wiley and Sons, p. 59, ISBN 0-470-25952-3
11. ^ Ahlfors (1979, p. 3)
12. ^ Kasana, H.S. (2005), "Chapter 1", Complex Variables: Theory and Applications (2nd ed.), PHI Learning Pvt. Ltd, p. 14, ISBN 81-203-2641-5
13. ^ Nilsson, James William; Riedel, Susan A. (2008), "Chapter 9", Electric Circuits (8th ed.), Prentice Hall, p. 338, ISBN 0-13-198925-1
14. ^ Grant, I.S.; Phillips, W.R. (2008), Electromagnetism (2nd ed.), Manchester Physics Series, ISBN 0-471-92712-0
15. ^ Kalman, Dan (2008a), "An Elementary Proof of Marden's Theorem", The American Mathematical Monthly 115: 330–38, ISSN 0002-9890
16. ^ Kalman, Dan (2008b), "The Most Marvelous Theorem in Mathematics", Journal of Online Mathematics and its Applications
18. ^ In modern notation, Tartaglia's solution is based on expanding the cube of the sum of two cube roots: \left(\sqrt[3]{u} + \sqrt[3]{v}\right)^3 = 3 \sqrt[3]{uv} \left(\sqrt[3]{u} + \sqrt[3]{v}\right) + u + v. With x = \sqrt[3]{u} + \sqrt[3]{v}, p = 3 \sqrt[3]{uv} and q = u + v, u and v can be expressed in terms of p and q as u = q/2 + \sqrt{(q/2)^2-(p/3)^3} and v = q/2 - \sqrt{(q/2)^2-(p/3)^3}, respectively. Therefore, x = \sqrt[3]{q/2 + \sqrt{(q/2)^2-(p/3)^3}} + \sqrt[3]{q/2 - \sqrt{(q/2)^2-(p/3)^3}}. When (q/2)^2-(p/3)^3 is negative (casus irreducibilis), the second cube root should be regarded as the complex conjugate of the first one.
20. ^ Hardy, G. H.; Wright, E. M. (2000) [1938], An Introduction to the Theory of Numbers (4th ed.), OUP Oxford, p. 189, ISBN 0-19-921986-9

Mathematical references[edit]

Historical references[edit]

• Nahin, Paul J. (1998), An Imaginary Tale: The Story of \sqrt{-1}, Princeton University Press, ISBN 0-691-02795-1
• Ebbinghaus, H.D.; Hermes, H.; Hirzebruch, F.; Koecher, M.; Mainzer, K.; Neukirch, J.; Prestel, A.; Remmert, R. (1991), Numbers (hardcover ed.), Springer, ISBN 0-387-97497-0

Further reading[edit]

• The Road to Reality: A Complete Guide to the Laws of the Universe, by Roger Penrose; Alfred A. Knopf, 2005; ISBN 0-679-45443-8.
Chapters 4–7 in particular deal extensively (and enthusiastically) with complex numbers. • Visual Complex Analysis, by Tristan Needham; Clarendon Press; ISBN 0-19-853447-7 (hardcover, 1997). History of complex numbers and complex analysis with compelling and useful visual interpretations. • Conway, John B., Functions of One Complex Variable I (Graduate Texts in Mathematics), Springer; 2nd edition (12 September 2005). ISBN 0-387-90328-3.
Classical Fluids via Quantum Mechanics The subject of this post is probably a bit too technical to interest many readers, but I’ve been meaning to post something about it for a while and seem to have an hour or so to spare this morning so here goes. This is going to be a battle with the clunky WordPress latex widget too so please bear with me if it’s a little difficult to read. The topic is something I came across a while ago when thinking about the way the evolution of the matter distribution in cosmology is described in terms of fluid mechanics, but what I’m going to say is not at all specific to cosmology, and perhaps isn’t all that well known, so it might be of some interest to readers with a general physics background. Consider a fluid with density \rho= \rho (\vec{x},t). The velocity of the fluid at any point is \vec{v}=\vec{v}(\vec{x},t). The evolution of such a fluid can be described by the continuity equation: \frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot (\rho\vec{v})= 0 and the Euler equation \frac{\partial \vec{v}}{\partial t} + (\vec{v}\cdot\vec{\nabla})\vec{v} +\frac{1}{\rho} \vec{\nabla} P + \vec{\nabla} V = 0, in which P is the fluid pressure (pressure gradients appear in the above equation) and V is a potential describing other forces on the fluid (in a cosmological context, this would include its self-gravity). To keep things as simple as possible, consider a pressureless fluid (as might describe cold dark matter) and restrict consideration to the case of a potential flow, i.e. one in which \vec{v} = \vec{\nabla}\phi where \phi=\phi(\vec{x},t) is a velocity potential; such a flow is curl-free. It is convenient to take the first integral of the Euler equation with respect to the spatial coordinates, which yields an equation for the velocity potential (cf. the Bernoulli equation): \frac{\partial \phi}{\partial t} + \frac{1}{2} (\nabla \phi)^{2} + V=0. The continuity equation becomes \frac{\partial \rho}{\partial t} + \vec{\nabla}\cdot(\rho\vec{\nabla}\phi) = 0 This is all standard basic classical fluid mechanics. Now here’s the interesting thing. Introduce a new quantity \Psi defined by \Psi(\vec{x},t) \equiv R\exp(i\phi/\nu), in which R=R(\vec{x},t) and \nu is a constant. Using this construction, it turns out that \rho = \Psi\Psi^{\ast}= |\Psi|^2=R^2. After a little bit of fiddling around putting this in the previous equations you can obtain the following: i\nu \frac{\partial \Psi}{\partial t} = -\frac{\nu^2}{2} \nabla^2{\Psi} + V\Psi + Q\Psi which, apart from the last term Q and a slightly different notation, is identical to the Schrödinger equation of quantum mechanics; the term \nu would be proportional to Planck’s constant h in that context, but in this context is a free parameter. The mysterious term Q is pretty horrible: Q = \frac{\nu^2}{2} \frac{\nabla^2 R}{R}, and it turns the Schrödinger equation into a non-linear equation, but its role can be understood by seeing what happens if you start with the normal single-particle Schrödinger equation and work backwards; this is the approach taken historically by David Bohm and others. In that case the term Q appears as a strange extra potential term in the Bernoulli equation which is sometimes called the quantum potential. In the context of fluid flow, however, the term describes the effect of pressure gradients that would arise if the fluid were barotropic. In the approach I’ve outlined, going in the opposite direction, this term is consequently sometimes called the “quantum pressure”.
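To make the construction concrete, here is a minimal numerical sketch (not from the post; the grid, the profiles for R and φ, and the value of ν are arbitrary test data) that builds Ψ = R exp(iφ/ν) on a periodic 1D grid, recovers ρ and v = ∂φ/∂x back from Ψ, and evaluates the quantum pressure term by finite differences:

```python
import numpy as np

nu = 0.05
N, L = 512, 2*np.pi
x = np.linspace(0, L, N, endpoint=False)
dx = x[1] - x[0]

R   = 1.0 + 0.3*np.cos(x)                  # R = sqrt(density), test profile
phi = 0.2*np.sin(x)                        # velocity potential, test profile
Psi = R * np.exp(1j*phi/nu)                # the Madelung construction

grad = lambda f: (np.roll(f, -1) - np.roll(f, 1)) / (2*dx)
lap  = lambda f: (np.roll(f, -1) - 2*f + np.roll(f, 1)) / dx**2

rho = np.abs(Psi)**2                       # recovers R^2 exactly
v   = nu * np.imag(grad(Psi) / Psi)        # recovers v = d(phi)/dx
Q   = 0.5 * nu**2 * lap(R) / R             # the "quantum pressure" term

print(np.max(np.abs(rho - R**2)))          # ~ 0: density is positive by construction
print(np.max(np.abs(v - grad(phi))))       # small finite-difference error
```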
The parameter \nu controls the size of this term, which has the effect of blurring out the streamlines of the purely classical solution. This transformation from classical fluid mechanics to quantum mechanics is not a new idea; in fact it goes back to Madelung who, in the 1920s, was trying to find a way to express quantum theory in the language of classical fluids. What interested me about this approach, however, is more practical. It might seem strange to want to transform a relatively simple classical fluid-mechanical setup into a quantum-mechanical framework, which isn’t the obvious way to make progress, but there are a number of advantages of doing so. Perhaps chief among them is that the construction of \Psi means that the density \rho is guaranteed positive definite; this means that a perturbation expansion of \Psi will not lead to unphysical negative densities in the same way that happens if perturbation theory is applied to \rho directly. This approach also has interesting links to other methods of studying the growth of large-scale structure in the Universe, such as the Zel’dovich approximation; the “waviness” controlled by the parameter \nu is useful in ensuring that the density does not become infinite at shell-crossing, for example. Anyway, here are some links to references with more details:…416L..71W…12..012S…12..016S I think there are many more ways this approach could be extended, so maybe this will encourage someone out there to have a look at it! 3 Responses to “Classical Fluids via Quantum Mechanics” 1. Anton Garrett Says: I knew this mathematics starting from the other way round, ie begin with the Schroedinger equation and derive (coupled, nonlinear) equations for the amplitude and phase of the wavefunction. David Bohm attempted to give these a physical interpretation. In fluid mechanics the Navier-Stokes equations (or whatever approximation to them is used) need to be supplemented by the condition that the pressure cannot go negative and that, whenever their solution is on the point of doing so, you get a vacuum with P=0, as in the cavitation phenomenon around ship propellers. I don’t know if that condition has any analogy in cosmology, or even in nonrelativistic quantum theory where it might be tested. • telescoper Says: Yes, I should have mentioned the Bohm approach because that’s the more familiar way of doing this. That interprets the term Q as a quantum potential for a single particle. Incidentally, googling about I see there’s been some work on this approach in the context of Clifford Algebra. Very interesting. 2. I haven’t tried to integrate it with WordPress, but MathJax is a wonderful way of getting mathematics nicely typeset on the web. I stumbled across it on the PRL website when there was a news item about the APS supporting the project. Now the difficult battle is to get my University to integrate it with their VLE. Sorry for all the TLAs.
Open Access Nano Express

Origin of the blueshift of photoluminescence in a type-II heterostructure

Masafumi Jo1,2,4*, Mitsuru Sato1, Souta Miyamura1, Hirotaka Sasakura1, Hidekazu Kumano1 and Ikuo Suemune1,3

Received: 4 October 2012; Accepted: 14 November 2012; Published: 27 November 2012

© 2012 Jo et al.; licensee Springer.

Background

Interest has recently been increasing in type-II heterostructures in which electrons and holes are separated in adjacent different materials, thereby forming spatially indirect excitons [1-9]. The wavefunction of the indirect exciton is significantly extended in space compared with that of a direct exciton in a type-I system where both electrons and holes are confined in the same layer, which allows large controllability of the wavefunction distribution. In addition, the long radiative lifetime originating from spatially indirect recombination is attractive for applications such as optical memories [10,11]. The separation of charge carriers in a type-II system also induces electrostatic potential (Hartree potential), which causes band bending and a resultant significant change in the exciton wavefunction distribution. Experimentally, this band-bending effect has been observed in power-dependent photoluminescence (PL) measurements, as a blueshift of PL peaks with increasing excitation power [1,2,5,6,12]. The mechanism of this effect has been discussed in terms of a triangular potential model in which photogenerated electrons and holes form a dipole layer, creating a triangular-like potential at the interface [1]. With increasing excitation power, the potential becomes steeper and the quantization energy increases, giving rise to a blueshift of the recombination energy. Following this model, the blueshift is proportional to the cube root of the excitation power, which has been generally accepted for the characterization and distinction of type-II heterostructures. However, detailed examinations of the observed power dependency sometimes show deviations from the cube-root power law. This is especially noticeable when the excitation power dependence is examined over a wide range. Here, we reexamine the characteristic blueshift in a type-II system using a GaAsSb/GaAs quantum well (QW). We observe that the blueshift does not obey a single-exponent power law, but instead tends to saturate with increasing excitation power. This is analyzed on the basis of a self-consistent band calculation. The dominant contribution to the blueshift originates from the variation of the QW energy level rather than the variation of the triangular potentials formed in the barrier layers, which modifies the cube-root power law.

Methods

The sample containing a 6-nm GaAsSb QW was grown on a GaAs(001) substrate by MOMBE. The Sb composition of GaAsSb was set at 8%, which was confirmed by XRD. At this Sb concentration, the band lineup between GaAs and GaAsSb becomes a type-II alignment with holes confined in the GaAsSb well [13,14]. The excitation power dependence of the PL was measured at 23 K using the 633 nm line of a He-Ne laser with an intensity range of 1 to 100 W cm-2. The incident beam was chopped using an optical chopper to avoid heating.

Results and discussion

Figure 1 shows the normalized PL spectra of the sample as a function of excitation power. The spectra show a typical blueshift with increasing excitation power.
The shift of the PL peak energy is summarized in the inset, which clearly shows that the cube-root power law only holds within a limited range. The power exponent is greater than 1/3 at low excitation, then decreases and becomes smaller than 1/3 at high excitation.

Figure 1. Low-temperature PL spectra of a 6-nm GaAsSb QW at different excitation densities. The inset plots the PL peak energy shift as a function of the excitation power density fitted with the conventional cube-root power law.

To elucidate the origin of the characteristic blueshift, let us start with a semi-quantitative analysis of the band bending in a type-II system. We will deal with a single QW structure, and study the band bending effect numerically using a simple one-band model for both the electron and the heavy hole. The excitonic effect is not taken into account at this stage. The one-particle effective mass Schrödinger equations are given by

\left[-\frac{\hbar^2}{2 m_{iz}}\frac{d^2}{dz^2} + V_i(z) + \phi(z)\right]\psi_i(z) = E_i\,\psi_i(z), \qquad (1)

where i = e (electron) or h (hole), m_{iz} is the carrier effective mass in the growth direction z, V_i(z) the heterostructure potential and \phi(z) the self-consistent Hartree potential induced by the spatial separation of the charged carriers. The Hartree potential is obtained from Poisson’s equation,

\frac{d^2}{dz^2}\phi(z) = \frac{e^2}{\varepsilon\varepsilon_0}\left[n_h(z) - n_e(z)\right], \qquad (2)

in which \varepsilon is the dielectric constant, \varepsilon_0 is the permittivity of vacuum, and n_i is the carrier density determined by the normalized wavefunctions \psi_i(z):

n_i(z) = n_s\,\left|\psi_i(z)\right|^2. \qquad (3)

The sheet charge density n_s is a parameter which is an increasing function of the excitation power. Equations 1, 2 and 3 are solved iteratively until they converge. Figure 2a shows the calculated band diagram of the GaAsSb/GaAs QW for a sheet charge density n_s = 1 × 10^11 cm-2. The self-consistent potential is shown by the solid lines, and the flat-band potential by the dashed lines. Electron and heavy-hole wavefunctions with their eigenenergies under the bending band are also plotted in Figure 2a. The parameters used in the calculation are summarized in Table 1. In this heterostructure, holes are confined in the GaAsSb well, whereas electrons are loosely bound to the triangular potential wells formed at the GaAsSb/GaAs interfaces. The ground state energy of the electron under the bending band is lower than that under the flat band because of the attractive Hartree potential, which results in a redshift of the transition energy. However, the hole ground state is also pushed down by the band bending, which leads to a blueshift. The total transition energy is thus dependent on two competing shifts.

Figure 2. One-band model calculation for a type-II QW. (a) Calculated band diagram of a 6-nm GaAsSb/GaAs QW for a sheet charge density of 1 × 10^11 cm-2: self-consistent potential (solid lines) and flat-band potential (dashed lines). Electron and heavy-hole wavefunctions with their eigenenergies are also plotted. (b) Calculated energy shift of the ground state for the electron (ΔEe) and the heavy-hole (ΔEhh) with respect to the flat-band condition. The transition energy shift (ΔEPL) is given by the difference between the two energy shifts.

Table 1. Material parameters used for the calculation of a GaAsSb/GaAs QW

To see how the transition energy shifts with the excitation, we calculated the energy shifts of the electron and heavy hole as a function of the sheet charge density (Figure 2b).
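For readers who want to experiment, here is a minimal self-consistent sketch of Equations 1, 2 and 3 in one dimension. It is illustrative only: the effective masses, band offset, permittivity and the damped-mixing update are plausible placeholder choices, not the parameters of Table 1, and the sign conventions follow the equations as reconstructed above.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
m0   = 9.1093837015e-31  # kg
q    = 1.602176634e-19   # C
eps0 = 8.8541878128e-12  # F/m
eps  = 13.0              # relative permittivity (placeholder value)

Lz = 60e-9
N  = 601
z  = np.linspace(-Lz/2, Lz/2, N)
dz = z[1] - z[0]
well = np.abs(z) < 3e-9                      # 6-nm well centred at z = 0

def ground_state(V, m):
    """Lowest eigenpair of -(hbar^2/2m) d^2/dz^2 + V(z), cf. Eq. (1)."""
    t = hbar**2 / (2*m*dz**2)
    H = (np.diag(V + 2*t)
         + np.diag(-t*np.ones(N-1), 1) + np.diag(-t*np.ones(N-1), -1))
    E, psi = np.linalg.eigh(H)
    p = psi[:, 0] / np.sqrt(dz)              # normalised: sum(p**2)*dz = 1
    return E[0], p

Ve = np.zeros(N)                             # type-II: no electron well
Vh = np.where(well, 0.0, 0.06*q)             # placeholder 60-meV hole well

ns  = 1e15                                   # sheet charge density, m^-2
phi = np.zeros(N)
for _ in range(300):
    Ee, pe = ground_state(Ve + phi, 0.067*m0)
    Eh, ph = ground_state(Vh + phi, 0.35*m0)
    rho = ns * (ph**2 - pe**2)               # n_h - n_e, Eqs. (2)-(3)
    # Integrate Poisson's equation (2) twice, then remove the mean offset
    dphi = np.cumsum(rho) * dz * q**2 / (eps*eps0)
    new  = np.cumsum(dphi) * dz
    new -= new.mean()
    if np.max(np.abs(new - phi)) < 1e-9*q:
        break
    phi = 0.9*phi + 0.1*new                  # damped mixing for stability

print("E_e = %.1f meV, E_h = %.1f meV" % (Ee/q*1e3, Eh/q*1e3))
```

With sensible inputs the loop should reproduce the qualitative behaviour described above, with both levels shifting as n_s grows; convergence of the naive mixing is not guaranteed for all parameter choices.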
The energy shift of the optical transition, ΔEPL, is given by the difference between the two energy shifts: ΔEPL = ΔEe - ΔEhh. As the sheet charge density increases, both the electron and heavy-hole energy levels monotonically decrease due to the increasing Hartree potential. Furthermore, the heavy-hole energy shift is always larger than the electron energy shift. As a result, the transition energy shift ΔEPL shows a blueshift with increasing excitation. Indeed, this trend is generally true for a type-II structure; the confined carrier (here the hole) is more susceptible to the Hartree potential. This response is partly because the potential well for the electrons is formed at the skirt of the Hartree potential, while holes are affected by the peak height of the Hartree potential. In addition, an increase in the steepness of the triangular well for the electrons raises the quantization energy, compensating for the energy decrease due to the increased well depth.

Having confirmed that the blueshift is mainly caused by the energy shift of the hole in the well, we consider the power dependency of the peak shift. To the zero-order approximation, the hole energy change is proportional to the depth of the Hartree potential, which is, in turn, proportional to the sheet charge density if the holes and electrons are completely separated. The calculated energy shift in Figure 2b shows sublinear dependence on the sheet charge density, indicating that the distribution of electrons and holes under the bending band plays an important role.

For more quantitative evaluation, especially at low excitation regimes, it is necessary to include excitonic effects in the calculation. We performed a calculation of the exciton energy under the bending band following [17]. The Schrödinger equation for the exciton is

\left[-\frac{\hbar^2}{2 m_\rho}\frac{1}{\rho}\frac{\partial}{\partial\rho}\left(\rho\frac{\partial}{\partial\rho}\right) - \frac{\hbar^2}{2 m_{ez}}\frac{\partial^2}{\partial z_e^2} - \frac{\hbar^2}{2 m_{hz}}\frac{\partial^2}{\partial z_h^2} + V_e(z_e) + V_h(z_h) - \frac{e^2}{4\pi\varepsilon\varepsilon_0}\frac{1}{\sqrt{\rho^2 + (z_e - z_h)^2}}\right]\psi(\rho, z_e, z_h) = E\,\psi(\rho, z_e, z_h). \qquad (4)

Here, 1/m_\rho = 1/m_{e\parallel} + 1/m_{h\parallel} is the in-plane reduced mass, and \rho is the in-plane electron–hole distance. The Hartree potential is included in the calculation through the modified heterostructure potential V_i'(z_i) = V_i(z_i) \mp \phi(z_i), where i = e (electron) corresponds to the upper sign, and h (hole) to the lower sign. For simplicity, we ignore the spatially dependent dielectric screening in Equation 4. To solve Poisson’s equation, the carrier density n_i is obtained by

n_{e,h}(z_{e,h}) = n_s \int_0^\infty d\rho \int_{-\infty}^{\infty} dz_{h,e}\; 2\pi\rho\,\left|\psi(\rho, z_e, z_h)\right|^2. \qquad (5)

Again, the sheet charge density n_s is a parameter. Figure 3a plots the probability density of the electron under the flat band (n_s = 0 cm-2) and the bending band (n_s = 5 × 10^11 cm-2). The electron probability density is calculated from the wavefunction \psi(\rho, z_e, z_h) by p_e(z_e) = \int_0^\infty d\rho \int_{-\infty}^{\infty} dz_h\; 2\pi\rho\,|\psi(\rho, z_e, z_h)|^2. It is clearly seen that the electron is attracted to the well under the flat band due to the presence of the Coulomb interaction. A binding energy of 3.7 meV is obtained for the exciton by comparing with the energies of the single-particle calculation (Equation 1). Figure 3b shows the exciton energy shift in the GaAsSb/GaAs QW as a function of the sheet charge density. The energy shift increases linearly with the sheet charge density at the low density level, and subsequently shows sublinear dependence at the higher density regime. The power exponent at the high density regime is found to be approximately 0.5.
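A sketch of how such a drifting exponent can be read off from data follows: fit log ΔE against log G (or against log n_s) both globally and between neighbouring points. The numbers below are made-up illustrative values, not the measured or calculated data of this paper.

```python
import numpy as np

G  = np.array([1., 3., 10., 30., 100.])      # excitation power (arb. units)
dE = np.array([2.0, 3.4, 5.5, 8.0, 11.0])    # peak blueshift (meV), made up

# Global power-law exponent from a straight-line fit in log-log space
m_global = np.polyfit(np.log(G), np.log(dE), 1)[0]
# Local exponents between neighbouring points
m_local = np.diff(np.log(dE)) / np.diff(np.log(G))

print("global exponent:", m_global)
print("local exponents:", m_local)   # drift from ~0.5 towards ~0.25
```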
Although we do not have a clear explanation of the origin of the 1/2 exponent at the high power regime, the sublinear increase in the energy shift can be qualitatively understood in terms of the spatial distribution of the electron wavefunction. At a low charge density where the Hartree potential is small, most of the electrons remain in the GaAs barrier, and the electron probability density inside the GaAsSb well is negligibly small. Thus, the spatially separated charges increase in proportion to the sheet charge density, which results in a linear increase in the Hartree potential. In contrast, a significant portion of the electron wavefunction penetrates into the well at high charge density (n_s = 5 × 10^11 cm-2). This penetration decreases the net charge that forms the Hartree potential, leading to the sublinear increase with increasing sheet charge density.

Figure 3. Calculation considering excitonic effects, in comparison with the experimental results. (a) Plot of the probability density for the electron under the flat band and bending band. (b) Double logarithmic plot of the exciton energy shift versus sheet charge density for a 6-nm GaAsSb QW. (c) The same data as in the inset of Figure 1, fitted with another power law.

Finally, the calculated energy shift can be connected with the experimental excitation power density through the following rate equation:

G = B\,\Delta n^2. \qquad (6)

G is the generation rate of the photocarrier and is proportional to the excitation power, \Delta n is the photogenerated excess carrier density and B is the bimolecular radiative recombination coefficient. Here, we ignore nonradiative recombination since the linearity of the PL intensity with the excitation power ensures the radiative dominant regime [18]. Combining Equation 6 with the numerically calculated carrier density dependence of the PL energy shift shown in Figure 3b, the following power law for the blueshift is derived:

\Delta E_{\mathrm{PL}} \propto \Delta n^{m} \propto G^{m'}, \quad m' = m/2 = 1/2 \sim 1/4, \qquad (7)

with the power factor m′ depending on the excitation power. We show again the experimental PL peak shift in Figure 3c, along with the new power law. Transition from the low excitation regime (m′ = 1/2) to the high excitation regime (m′ = 1/4) is obvious. Between the two extremes, we can see the conventionally applied m′ = 1/3 power law regime.

We have analyzed the blueshift of the PL peak in a type-II QW. A one-band calculation shows that the blueshift is mainly caused by the energy shift of the confined carrier in the well. More quantitative analysis based on a self-consistent calculation including excitonic effects illustrated the transition from a linear to a sublinear increase in the blueshift with increasing sheet charge density. Combining the calculated result with the carrier rate equation, the blueshift was found to be proportional to the m′-th power of the excitation power density, in which m′ = 1/2 ~ 1/4 and is dependent on the excitation power. The more comprehensive theory presented here predicts the 1/3-power law in the literature over a limited range of carrier density only. The above power law is consistent with the experimental results obtained from a type-II GaAsSb/GaAs QW.

Abbreviations

MOMBE: Metal-organic molecular beam epitaxy; PL: Photoluminescence; QW: Quantum well; XRD: X-ray diffraction.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MJ and MS conceived and designed the experiments. MS and SM performed the sample growth. MS conducted the optical measurements.
MJ carried out the numerical calculation and drafted the manuscript. HS and HK participated in the coordination of the study. IS supervised the project. All authors discussed the results and commented on the manuscript.

Acknowledgements

This work was supported in part by Hokkaido University and Hokkaido Innovation through Nano Technology Support (HINTS).

References

1. Ledentsov NN, Böhrer J, Heinrichsdorff F, Grundmann M, Bimberg D, Ivanov SV, Meltser BY, Shaposhnikov SV, Yassievich IN, Faleev NN, Kop'ev PS, Alferov ZI: Radiative states in type-II GaSb/GaAs quantum wells. Phys. Rev. B 1995, 52:14058.
2. Hatami F, Grundmann M, Ledentsov NN, Heinrichsdorff F, Heitz R, Böhrer J, Bimberg D, Ruvimov SS, Werner P, Ustinov VM, Kop'ev PS, Alferov ZI: Carrier dynamics in type-II GaSb/GaAs quantum dots. Phys. Rev. B 1998, 57:4635.
3. Ribeiro E, Govorov AO, Carvalho W, Medeiros-Ribeiro G: Aharonov–Bohm signature for neutral polarized excitons in type-II quantum dot ensembles. Phys. Rev. Lett. 2004, 92:126402.
4. Madureira JR, de Godoy MPF, Brasil MJSP, Iikawa F: Spatially indirect excitons in type-II quantum dots. Appl. Phys. Lett. 2007, 90:212105.
5. Alonso-Álvarez D, Alén B, García JM, Ripalda JM: Optical investigation of type II GaSb/GaAs self-assembled quantum dots. Appl. Phys. Lett. 2007, 91:263103.
6. Kawazu T, Mano T, Noda T, Sakaki H: Optical properties of GaSb/GaAs type-II quantum dots grown by droplet epitaxy. Appl. Phys. Lett. 2009, 94:081911.
7. Tatebayashi J, Khoshakhlagh A, Huang SH, Dawson LR, Balakrishnan G, Huffaker DL: Formation and optical characteristics of strain-relieved and densely stacked GaSb/GaAs quantum dots. Appl. Phys. Lett. 2006, 89:203116.
8. Dheeraj DL, Patriarche G, Zhou H, Hoang TB, Moses AF, Grønsberg S, Helvoort AT, Fimland BO, Weman H: Growth and characterization of wurtzite GaAs nanowires with defect-free zinc blende GaAsSb inserts. Nano Lett. 2008, 8:4459.
9. Akopian N, Patriarche G, Liu L, Harmand JC, Zwiller V: Crystal phase quantum dots. Nano Lett. 2010, 10:1198.
10. Muto S: On a possibility of wavelength-domain-multiplication memory using quantum boxes. Jpn. J. Appl. Phys. 1995, 34:L210.
11. Geller M, Kapteyn C, Muller-Kirsch L, Heitz R, Bimberg D: Hole storage in GaSb/GaAs quantum dots for memory devices. Phys. Stat. Sol. (b) 2003, 238:258.
12. Suzuki K, Hogg RA, Arakawa Y: Structural and optical properties of type II GaSb/GaAs self-assembled quantum dots grown by molecular beam epitaxy. J. Appl. Phys. 1999, 85:8349.
13. Ichii A, Tsou Y, Garmire E: An empirical rule for band offsets between III-V alloy compounds. J. Appl. Phys. 1993, 74:2112.
14. Noh MS, Ryou JH, Dupuis RD, Chang YL, Weissman RH: Band lineup of pseudomorphic GaAs1-xSbx quantum-well structures with GaAs, GaAsP, and InGaP barriers grown by metal organic chemical vapor deposition. J. Appl. Phys. 2006, 100:093703.
15. Yu PY, Cardona M: Fundamentals of Semiconductors. 3rd edition. Berlin: Springer; 2005.
16. Vurgaftman I, Meyer JR, Ram-Mohan LR: Band parameters for III-V compound semiconductors and their alloys. J. Appl. Phys. 2001, 89:5815.
17. Penn C, Schaffler F, Bauer G, Glutsch S: Application of numerical exciton-wave-function calculations to the question of band alignment in Si/SiGe quantum wells. Phys. Rev. B 1999, 59:13314.
18. Fukatsu S, Usami N, Shiraki Y: Luminescence from Si1-xGex/Si quantum wells grown by Si molecular-beam epitaxy. J. Vac. Sci. Technol. B 1993, 11:895.
Mathematical equations aren't just useful — many are quite beautiful. And many scientists admit they are often fond of particular formulas not just for their function, but for their form, and the simple, poetic truths they contain. While certain famous equations, such as Albert Einstein's E = mc^2, hog most of the public glory, many less familiar formulas have their champions among scientists. LiveScience asked physicists, astronomers and mathematicians for their favorite equations; here's what we found:

General relativity

The equation above was formulated by Einstein as part of his groundbreaking general theory of relativity in 1915. The theory revolutionized how scientists understood gravity by describing the force as a warping of the fabric of space and time. "It is still amazing to me that one such mathematical equation can describe what space-time is all about," said Space Telescope Science Institute astrophysicist Mario Livio, who nominated the equation as his favorite. "All of Einstein's true genius is embodied in this equation." "The right-hand side of this equation describes the energy contents of our universe (including the 'dark energy' that propels the current cosmic acceleration)," Livio explained. "The left-hand side describes the geometry of space-time. The equality reflects the fact that in Einstein's general relativity, mass and energy determine the geometry, and concomitantly the curvature, which is a manifestation of what we call gravity." "It's a very elegant equation," said Kyle Cranmer, a physicist at New York University, adding that the equation reveals the relationship between space-time and matter and energy. "This equation tells you how they are related — how the presence of the sun warps space-time so that the Earth moves around it in orbit, etc. It also tells you how the universe evolved since the Big Bang and predicts that there should be black holes."

Standard model

Another of physics' reigning theories, the standard model describes the collection of fundamental particles currently thought to make up our universe. The theory can be encapsulated in a main equation called the standard model Lagrangian (named after the 18th-century French mathematician and astronomer Joseph Louis Lagrange), which was chosen by theoretical physicist Lance Dixon of the SLAC National Accelerator Laboratory in California as his favorite formula. "It has successfully described all elementary particles and forces that we've observed in the laboratory to date — except gravity," Dixon told LiveScience. "That includes, of course, the recently discovered Higgs(like) boson, phi in the formula. It is fully self-consistent with quantum mechanics and special relativity." The standard model theory has not yet, however, been united with general relativity, which is why it cannot describe gravity.

Calculus

While the first two equations describe particular aspects of our universe, another favorite equation can be applied to all manner of situations. The fundamental theorem of calculus forms the backbone of the mathematical method known as calculus, and links its two main ideas, the concept of the integral and the concept of the derivative. "In simple words, [it] says that the net change of a smooth and continuous quantity, such as a distance travelled, over a given time interval (i.e.
the difference in the values of the quantity at the end points of the time interval) is equal to the integral of the rate of change of that quantity, i.e. the integral of the velocity," said Melkana Brakalova-Trevithick, chair of the math department at Fordham University, who chose this equation as her favorite. "The fundamental theorem of calculus (FTC) allows us to determine the net change over an interval based on the rate of change over the entire interval." The seeds of calculus began in ancient times, but much of it was put together in the 17th century by Isaac Newton, who used calculus to describe the motions of the planets around the sun.

Pythagorean theorem

An "oldie but goodie" equation is the famous Pythagorean theorem, which every beginning geometry student learns. This formula describes how, for any right-angled triangle, the square of the length of the hypotenuse, c, (the longest side of a right triangle) equals the sum of the squares of the lengths of the other two sides (a and b). Thus, a^2 + b^2 = c^2 "The very first mathematical fact that amazed me was Pythagorean theorem," said mathematician Daina Taimina of Cornell University. "I was a child then and it seemed to me so amazing that it works in geometry and it works with numbers!"

1 = 0.999999999….

This simple equation, which states that the quantity 0.999, followed by an infinite string of nines, is equivalent to one, is the favorite of mathematician Steven Strogatz of Cornell University. "I love how simple it is — everyone understands what it says — yet how provocative it is," Strogatz said. "Many people don't believe it could be true. It's also beautifully balanced. The left side represents the beginning of mathematics; the right side represents the mysteries of infinity."

Special relativity

Einstein makes the list again with his formulas for special relativity, which describes how time and space aren't absolute concepts, but rather are relative depending on the speed of the observer. The equation above shows how time dilates, or slows down, the faster a person is moving in any direction. "The point is it's really very simple," said Bill Murray, a particle physicist at the CERN laboratory in Geneva. "There is nothing there an A-level student cannot do, no complex derivatives and trace algebras. But what it embodies is a whole new way of looking at the world, a whole attitude to reality and our relationship to it. Suddenly, the rigid unchanging cosmos is swept away and replaced with a personal world, related to what you observe. You move from being outside the universe, looking down, to one of the components inside it. But the concepts and the maths can be grasped by anyone that wants to." Murray said he preferred the special relativity equations to the more complicated formulas in Einstein's later theory. "I could never follow the maths of general relativity," he said.

Euler's equation

This simple formula encapsulates something pure about the nature of spheres: "It says that if you cut the surface of a sphere up into faces, edges and vertices, and let F be the number of faces, E the number of edges and V the number of vertices, you will always get V – E + F = 2," said Colin Adams, a mathematician at Williams College in Massachusetts.
"So, for example, take a tetrahedron, consisting of four triangles, six edges and four vertices," Adams explained. "If you blew hard into a tetrahedron with flexible faces, you could round it off into a sphere, so in that sense, a sphere can be cut into four faces, six edges and four vertices. And we see that V – E + F = 2. Same holds for a pyramid with five faces — four triangular, and one square — eight edges and five vertices," and any other combination of faces, edges and vertices. "A very cool fact! The combinatorics of the vertices, edges and faces is capturing something very fundamental about the shape of a sphere," Adams said. Euler-Lagrange Equation Image: Marc Pinter/Shutterstock Euler–Lagrange equations and Noether's theorem "These are pretty abstract, but amazingly powerful," NYU's Cranmer said. "The cool thing is that this way of thinking about physics has survived some major revolutions in physics, like quantum mechanics, relativity, etc." Here, L stands for the Lagrangian, which is a measure of energy in a physical system, such as springs, or levers or fundamental particles. "Solving this equation tells you how the system will evolve with time," Cranmer said. A spinoff of the Lagrangian equation is called Noether's theorem, after the 20th century German mathematician Emmy Noether. "This theorem is really fundamental to physics and the role of symmetry," Cranmer said. "Informally, the theorem is that if your system has a symmetry, then there is a corresponding conservation law. For example, the idea that the fundamental laws of physics are the same today as tomorrow (time symmetry) implies that energy is conserved. The idea that the laws of physics are the same here as they are in outer space implies that momentum is conserved. Symmetry is perhaps the driving concept in fundamental physics, primarily due to [Noether's] contribution." Callan-Symanzik Equation Image: R.T. Wohlstadter/Shutterstock The Callan-Symanzik equation "The Callan-Symanzik equation is a vital first-principles equation from 1970, essential for describing how naive expectations will fail in a quantum world," said theoretical physicist Matt Strassler of Rutgers University. The equation has numerous applications, including allowing physicists to estimate the mass and size of the proton and neutron, which make up the nuclei of atoms. Basic physics tells us that the gravitational force, and the electrical force, between two objects is proportional to the inverse of the distance between them squared. On a simple level, the same is true for the strong nuclear force that binds protons and neutrons together to form the nuclei of atoms, and that binds quarks together to form protons and neutrons. However, tiny quantum fluctuations can slightly alter a force's dependence on distance, which has dramatic consequences for the strong nuclear force. "It prevents this force from decreasing at long distances, and causes it to trap quarks and to combine them to form the protons and neutrons of our world," Strassler said. "What the Callan-Symanzik equation does is relate this dramatic and difficult-to-calculate effect, important when [the distance] is roughly the size of a proton, to more subtle but easier-to-calculate effects that can be measured when [the distance] is much smaller than a proton." 
The minimal surface equation

"The minimal surface equation somehow encodes the beautiful soap films that form on wire boundaries when you dip them in soapy water," said mathematician Frank Morgan of Williams College. "The fact that the equation is 'nonlinear,' involving powers and products of derivatives, is the coded mathematical hint for the surprising behavior of soap films. This is in contrast with more familiar linear partial differential equations, such as the heat equation, the wave equation, and the Schrödinger equation of quantum physics."

The Euler line

Glen Whitney, founder of the Museum of Math in New York, chose another geometrical theorem, this one having to do with the Euler line, named after 18th-century Swiss mathematician and physicist Leonhard Euler. "Start with any triangle," Whitney explained. "Draw the smallest circle that contains the triangle and find its center. Find the center of mass of the triangle — the point where the triangle, if cut out of a piece of paper, would balance on a pin. Draw the three altitudes of the triangle (the lines from each corner perpendicular to the opposite side), and find the point where they all meet. The theorem is that all three of the points you just found always lie on a single straight line, called the 'Euler line' of the triangle." Whitney said the theorem encapsulates the beauty and power of mathematics, which often reveals surprising patterns in simple, familiar shapes.
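Whitney's recipe is easy to test numerically. The sketch below picks an arbitrary triangle, computes the circumcenter directly, obtains the centroid, and uses the known vector identity H = A + B + C − 2O for the orthocenter (a standard shortcut, not part of the article), then checks that the three points are collinear:

```python
import numpy as np

A, B, C = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])

G = (A + B + C) / 3                       # centroid (center of mass)

# Circumcenter: solve |P-A|^2 = |P-B|^2 and |P-A|^2 = |P-C|^2 for P.
M = 2 * np.array([B - A, C - A])
b = np.array([B @ B - A @ A, C @ C - A @ A])
O = np.linalg.solve(M, b)

H = A + B + C - 2 * O                     # orthocenter via H = A + B + C - 2O

d1, d2 = G - O, H - O
print(d1[0]*d2[1] - d1[1]*d2[0])          # 2D cross product ~ 0 => collinear
```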
You are currently browsing the tag archive for the ‘limiting amplitude function’ tag. Perhaps the most fundamental differential operator on Euclidean space {{\bf R}^d} is the Laplacian \displaystyle \Delta := \sum_{j=1}^d \frac{\partial^2}{\partial x_j^2}. The Laplacian is a linear translation-invariant operator, and as such is necessarily diagonalised by the Fourier transform \displaystyle \hat f(\xi) := \int_{{\bf R}^d} f(x) e^{-2\pi i x \cdot \xi}\ dx. Indeed, we have \displaystyle \widehat{\Delta f}(\xi) = - 4 \pi^2 |\xi|^2 \hat f(\xi) for any suitably nice function {f} (e.g. in the Schwartz class; alternatively, one can work in very rough classes, such as the space of tempered distributions, provided of course that one is willing to interpret all operators in a distributional or weak sense). Because of this explicit diagonalisation, it is a straightforward matter to define spectral multipliers {m(-\Delta)} of the Laplacian for any (measurable, polynomial growth) function {m: [0,+\infty) \rightarrow {\bf C}}, by the formula \displaystyle \widehat{m(-\Delta) f}(\xi) := m( 4\pi^2 |\xi|^2 ) \hat f(\xi). (The presence of the minus sign in front of the Laplacian has some minor technical advantages, as it makes {-\Delta} positive semi-definite. One can also define spectral multipliers more abstractly from general functional calculus, after establishing that the Laplacian is essentially self-adjoint.) Many of these multipliers are of importance in PDE and analysis, such as the fractional derivative operators {(-\Delta)^{s/2}}, the heat propagators {e^{t\Delta}}, the (free) Schrödinger propagators {e^{it\Delta}}, the wave propagators {e^{\pm i t \sqrt{-\Delta}}} (or {\cos(t \sqrt{-\Delta})} and {\frac{\sin(t\sqrt{-\Delta})}{\sqrt{-\Delta}}}, depending on one’s conventions), the spectral projections {1_I(\sqrt{-\Delta})}, the Bochner-Riesz summation operators {(1 + \frac{\Delta}{4\pi^2 R^2})_+^\delta}, or the resolvents {R(z) := (-\Delta-z)^{-1}}. Each of these families of multipliers is related to the others, by means of various integral transforms (and also, in some cases, by analytic continuation). For instance: 1. Using the Laplace transform, one can express (sufficiently smooth) multipliers in terms of heat operators. For instance, using the identity \displaystyle \lambda^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{-t\lambda}\ dt (using analytic continuation if necessary to make the right-hand side well-defined), with {\Gamma} being the Gamma function, we can write the fractional derivative operators in terms of heat kernels: \displaystyle (-\Delta)^{s/2} = \frac{1}{\Gamma(-s/2)} \int_0^\infty t^{-1-s/2} e^{t\Delta}\ dt. \ \ \ \ \ (1) 2. Using analytic continuation, one can connect heat operators {e^{t\Delta}} to Schrödinger operators {e^{it\Delta}}, a process also known as Wick rotation. Analytic continuation is a notoriously unstable process, and so it is difficult to use analytic continuation to obtain any quantitative estimates on (say) Schrödinger operators from their heat counterparts; however, this procedure can be useful for propagating identities from one family to another. For instance, one can derive the fundamental solution for the Schrödinger equation from the fundamental solution for the heat equation by this method. 3.
Using the Fourier inversion formula, one can write general multipliers as integral combinations of Schrödinger or wave propagators; for instance, if {z} lies in the upper half plane {{\bf H} := \{ z \in {\bf C}: \hbox{Im} z > 0 \}}, one has \displaystyle \frac{1}{x-z} = i\int_0^\infty e^{-itx} e^{itz}\ dt for any real number {x}, and thus we can write resolvents in terms of Schrödinger propagators: \displaystyle R(z) = i\int_0^\infty e^{it\Delta} e^{itz}\ dt. \ \ \ \ \ (2) In a similar vein, if {k \in {\bf H}}, then \displaystyle \frac{1}{x^2-k^2} = \frac{i}{k} \int_0^\infty \cos(tx) e^{ikt}\ dt for any {x>0}, so one can also write resolvents in terms of wave propagators: \displaystyle R(k^2) = \frac{i}{k} \int_0^\infty \cos(t\sqrt{-\Delta}) e^{ikt}\ dt. \ \ \ \ \ (3) 4. Using the Cauchy integral formula, one can express (sufficiently holomorphic) multipliers in terms of resolvents (or limits of resolvents). For instance, if {t > 0}, then from the Cauchy integral formula (and Jordan’s lemma) one has \displaystyle e^{itx} = \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} \frac{e^{ity}}{y-x+i\epsilon}\ dy for any {x \in {\bf R}}, and so one can (formally, at least) write Schrödinger propagators in terms of resolvents: \displaystyle e^{-it\Delta} = - \frac{1}{2\pi i} \lim_{\epsilon \rightarrow 0^+} \int_{\bf R} e^{ity} R(y+i\epsilon)\ dy. \ \ \ \ \ (4) 5. The imaginary part of {\frac{1}{\pi} \frac{1}{x-(y+i\epsilon)}} is the Poisson kernel {\frac{\epsilon}{\pi} \frac{1}{(y-x)^2+\epsilon^2}}, which is an approximation to the identity. As a consequence, for any reasonable function {m(x)}, one has (formally, at least) \displaystyle m(x) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} \frac{1}{x-(y+i\epsilon)}) m(y)\ dy which leads (again formally) to the ability to express arbitrary multipliers in terms of imaginary (or skew-adjoint) parts of resolvents: \displaystyle m(-\Delta) = \lim_{\epsilon \rightarrow 0^+} \frac{1}{\pi} \int_{\bf R} (\hbox{Im} R(y+i\epsilon)) m(y)\ dy. \ \ \ \ \ (5) Among other things, this type of formula (with {-\Delta} replaced by a more general self-adjoint operator) is used in the resolvent-based approach to the spectral theorem (by using the limiting imaginary part of resolvents to build spectral measure). Note that one can also express {\hbox{Im} R(y+i\epsilon)} as {\frac{1}{2i} (R(y+i\epsilon) - R(y-i\epsilon))}. Remark 1 The ability of heat operators, Schrödinger propagators, wave propagators, or resolvents to generate other spectral multipliers can be viewed as a sort of manifestation of the Stone-Weierstrass theorem (though with the caveat that the spectrum of the Laplacian is non-compact and so the Stone-Weierstrass theorem does not directly apply). Indeed, observe the *-algebra type properties \displaystyle e^{s\Delta} e^{t\Delta} = e^{(s+t)\Delta}; \quad (e^{s\Delta})^* = e^{s\Delta} \displaystyle e^{is\Delta} e^{it\Delta} = e^{i(s+t)\Delta}; \quad (e^{is\Delta})^* = e^{-is\Delta} \displaystyle e^{is\sqrt{-\Delta}} e^{it\sqrt{-\Delta}} = e^{i(s+t)\sqrt{-\Delta}}; \quad (e^{is\sqrt{-\Delta}})^* = e^{-is\sqrt{-\Delta}} \displaystyle R(z) R(w) = \frac{R(w)-R(z)}{z-w}; \quad R(z)^* = R(\overline{z}). Because of these relationships, it is possible (in principle, at least) to leverage one’s understanding of one family of spectral multipliers to gain control on another family of multipliers.
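As a toy numerical illustration of the Fourier diagonalisation and of the semigroup and adjoint laws in Remark 1 (a sketch only; the periodic 1D grid, box size and test function are arbitrary choices, and periodicity replaces the whole-space setting of the post):

```python
import numpy as np

N, box = 1024, 40.0
x = np.linspace(-box/2, box/2, N, endpoint=False)
xi = np.fft.fftfreq(N, d=box/N)            # frequency variable
lam = 4*np.pi**2*xi**2                     # symbol of -Delta

def multiplier(m, f):
    """Apply m(-Delta): multiply hat f(xi) by m(4 pi^2 |xi|^2)."""
    return np.fft.ifft(m(lam) * np.fft.fft(f))

f = np.exp(-x**2)                          # Schwartz-class test function
heat  = lambda t: (lambda s: np.exp(-t*s))       # e^{t Delta}
schro = lambda t: (lambda s: np.exp(-1j*t*s))    # e^{i t Delta}

# Semigroup law e^{s Delta} e^{t Delta} = e^{(s+t) Delta}:
err = multiplier(heat(0.2), multiplier(heat(0.3), f)) - multiplier(heat(0.5), f)
print(np.max(np.abs(err)))                 # ~ machine precision

# The Schrodinger propagator is unitary, consistent with (e^{it Delta})* = e^{-it Delta}:
print(np.linalg.norm(multiplier(schro(1.0), f)), np.linalg.norm(f))
```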
For instance, the fact that the heat operators {e^{t\Delta}} have non-negative kernel (a fact which can be seen from the maximum principle, or from the Brownian motion interpretation of the heat kernels) implies (by (1)) that the fractional integral operators {(-\Delta)^{-s/2}} for {s>0} also have non-negative kernel. Or, the fact that the wave equation enjoys finite speed of propagation (and hence that the wave propagators {\cos(t\sqrt{-\Delta})} have distributional convolution kernel localised to the ball of radius {|t|} centred at the origin), can be used (by (3)) to show that the resolvents {R(k^2)} have a convolution kernel that is essentially localised to the ball of radius {O( 1 / |\hbox{Im}(k)| )} around the origin. In this post, I would like to continue this theme by using the resolvents {R(z) = (-\Delta-z)^{-1}} to control other spectral multipliers. These resolvents are well-defined whenever {z} lies outside of the spectrum {[0,+\infty)} of the operator {-\Delta}. In the model three-dimensional case {d=3}, they can be defined explicitly by the formula \displaystyle R(k^2) f(x) = \int_{{\bf R}^3} \frac{e^{ik|x-y|}}{4\pi |x-y|} f(y)\ dy whenever {k} lives in the upper half-plane {\{ k \in {\bf C}: \hbox{Im}(k) > 0 \}}, ensuring the absolute convergence of the integral for test functions {f}. (In general dimension, explicit formulas are still available, but involve Bessel functions. But asymptotically at least, and ignoring higher order terms, one simply replaces {\frac{e^{ik|x-y|}}{4\pi |x-y|}} by {\frac{e^{ik|x-y|}}{c_d |x-y|^{d-2}}} for some explicit constant {c_d}.) It is an instructive exercise to verify that this resolvent indeed inverts the operator {-\Delta-k^2}, either by using Fourier analysis or by Green’s theorem. Henceforth we restrict attention to three dimensions {d=3} for simplicity. One consequence of the above explicit formula is that for positive real {\lambda > 0}, the resolvents {R(\lambda+i\epsilon)} and {R(\lambda-i\epsilon)} tend to different limits as {\epsilon \rightarrow 0}, reflecting the jump discontinuity in the resolvent function at the spectrum; as one can guess from formulae such as (4) or (5), such limits are of interest for understanding many other spectral multipliers. Indeed, for any test function {f}, we see that \displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda+i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy \displaystyle \lim_{\epsilon \rightarrow 0^+} R(\lambda-i\epsilon) f(x) = \int_{{\bf R}^3} \frac{e^{-i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy. Both of these functions \displaystyle u_\pm(x) := \int_{{\bf R}^3} \frac{e^{\pm i\sqrt{\lambda}|x-y|}}{4\pi |x-y|} f(y)\ dy solve the Helmholtz equation \displaystyle (-\Delta-\lambda) u_\pm = f, \ \ \ \ \ (6) but have different asymptotics at infinity. Indeed, if {\int_{{\bf R}^3} f(y)\ dy = A}, then we have the asymptotic \displaystyle u_\pm(x) = \frac{A e^{\pm i \sqrt{\lambda}|x|}}{4\pi|x|} + O( \frac{1}{|x|^2}) \ \ \ \ \ (7) as {|x| \rightarrow \infty}, leading also to the Sommerfeld radiation condition \displaystyle u_\pm(x) = O(\frac{1}{|x|}); \quad (\partial_r \mp i\sqrt{\lambda}) u_\pm(x) = O( \frac{1}{|x|^2}) \ \ \ \ \ (8) where {\partial_r := \frac{x}{|x|} \cdot \nabla_x} is the outgoing radial derivative. Indeed, one can show using an integration by parts argument that {u_\pm} is the unique solution of the Helmholtz equation (6) obeying (8) (see below). 
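The asymptotics just stated are easy to probe numerically. The sketch below (not from the post; the grid, the value of k and the sample radius are arbitrary, and the quadrature is crude) builds u₊ for a Gaussian source by direct summation of the outgoing kernel and checks that the outgoing combination (∂_r − ik)u₊ is an order of magnitude smaller than u₊ at |x| = 30, while the wrong-sign combination is not:

```python
import numpy as np

k = 2.0                                       # k = sqrt(lambda)
g = np.linspace(-4, 4, 61)
Y = np.stack(np.meshgrid(g, g, g, indexing='ij'), -1).reshape(-1, 3)
dV = (g[1] - g[0])**3
f = np.exp(-np.sum(Y**2, axis=1))             # Gaussian test function f(y)

def u_plus(x):
    """u_+(x) = int e^{ik|x-y|}/(4 pi |x-y|) f(y) dy by direct summation."""
    r = np.linalg.norm(x - Y, axis=1)
    return np.sum(np.exp(1j*k*r) / (4*np.pi*r) * f) * dV

xhat, r0, h = np.array([1.0, 0.0, 0.0]), 30.0, 0.1
u = u_plus(r0*xhat)
du = (u_plus((r0+h)*xhat) - u_plus((r0-h)*xhat)) / (2*h)   # radial derivative

print(abs(du - 1j*k*u) / abs(u))   # small, ~1/r0: the outgoing condition holds
print(abs(du + 1j*k*u) / abs(u))   # ~2k: the inward-radiating condition fails
```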
{u_+} is known as the outward radiating solution of the Helmholtz equation (6), and {u_-} is known as the inward radiating solution. Indeed, if one views the function {u_\pm(t,x) := e^{-i\lambda t} u_\pm(x)} as a solution to the inhomogeneous Schrödinger equation \displaystyle (i\partial_t + \Delta) u_\pm = - e^{-i\lambda t} f and uses the de Broglie law that a solution to such an equation with wave number {k \in {\bf R}^3} (i.e. resembling {A e^{i k \cdot x}} for some amplitude {A}) should propagate at (group) velocity {2k}, we see (heuristically, at least) that the outward radiating solution will indeed propagate radially away from the origin at speed {2\sqrt{\lambda}}, while the inward radiating solution propagates inward at the same speed. There is a useful quantitative version of the convergence \displaystyle R(\lambda \pm i\epsilon) f \rightarrow u_\pm, \ \ \ \ \ (9) known as the limiting absorption principle: Theorem 1 (Limiting absorption principle) Let {f} be a test function on {{\bf R}^3}, let {\lambda > 0}, and let {\sigma > 0}. Then one has \displaystyle \| R(\lambda \pm i\epsilon) f \|_{H^{0,-1/2-\sigma}({\bf R}^3)} \leq C_\sigma \lambda^{-1/2} \|f\|_{H^{0,1/2+\sigma}({\bf R}^3)} for all {\epsilon > 0}, where {C_\sigma > 0} depends only on {\sigma}, and {H^{0,s}({\bf R}^3)} is the weighted norm \displaystyle \|f\|_{H^{0,s}({\bf R}^3)} := \| \langle x \rangle^s f \|_{L^2_x({\bf R}^3)} and {\langle x \rangle := (1+|x|^2)^{1/2}}. This principle allows one to extend the convergence (9) from test functions {f} to all functions in the weighted space {H^{0,1/2+\sigma}} by a density argument (though the radiation condition (8) has to be adapted suitably for this scale of spaces when doing so). The weighted space {H^{0,-1/2-\sigma}} on the left-hand side is optimal, as can be seen from the asymptotic (7); a duality argument similarly shows that the weighted space {H^{0,1/2+\sigma}} on the right-hand side is also optimal. We prove this theorem below the fold. As observed long ago by Kato (and also reproduced below), this estimate is equivalent (via a Fourier transform in the spectral variable {\lambda}) to a useful estimate for the free Schrödinger equation known as the local smoothing estimate, which in particular implies the well-known RAGE theorem for that equation; it also has similar consequences for the free wave equation. As we shall see, it also encodes some spectral information about the Laplacian; for instance, it can be used to show that the Laplacian has no eigenvalues, resonances, or singular continuous spectrum. These spectral facts are already obvious from the Fourier transform representation of the Laplacian, but the point is that the limiting absorption principle also applies to more general operators for which the explicit diagonalisation afforded by the Fourier transform is not available. (Igor Rodnianski and I are working on a paper regarding this topic, which I hope to say more about soon.) In order to illustrate the main ideas and suppress technical details, I will be a little loose with some of the rigorous details of the arguments, and in particular will be manipulating limits and integrals at a somewhat formal level. Read the rest of this entry »
Thursday, January 14, 2016 What is Life? The other day, I came across a book by Erwin Schrödinger, first published in 1944, titled simply, "What is Life?" I didn't read it and have no intention of reading it. And I highly recommend you not read it either! According to Wikipedia, Schrödinger is known for his "Schrödinger's cat" thought-experiment. He was a Nobel Prize-winning Austrian physicist who developed a number of fundamental results in the field of quantum theory, which formed the basis of wave mechanics: he formulated the wave equation (the stationary and time-dependent Schrödinger equation) and showed that his wave-mechanical formalism was equivalent to matrix mechanics. Schrödinger proposed an original interpretation of the physical meaning of the wave function. In addition, he was the author of many works in various fields of physics: statistical mechanics and thermodynamics, physics of dielectrics, colour theory, electrodynamics, general relativity, and cosmology, and he made several attempts to construct a unified field theory. In other words, Schrödinger is an intelligent and accomplished man! Or is he? After all, who are we to question a Nobel Prize winner who has contributed so much to the advancement of Science in the 20th century? I'm not trying to be flippant here. The reason I question his thesis on life is the same reason I question modern civilization and its stories. In his book What Is Life? Schrödinger addressed the problems of genetics, looking at the phenomenon of life from the point of view of physics. And therein lies the problem: looking at the phenomenon of life from the point of view of physics. I wrote about Scientism before where I questioned the applicability of Science beyond the world of the hard Physical Sciences. Indeed, it was another Nobel Prize winner, an Economist, who brought this to our attention. Friedrich A. Hayek, who is also Austrian, argued against applying Science to the field of Economics. And just as well, Science is not applicable to an understanding of existential matters. Schrödinger's thesis on the definition of life falls squarely in the madness of Scientism. How did civilized man decide that Science has an answer to such questions? And how did civilized man even come up with questions like that? I wonder if this is yet another artifact of the process of separation from nature and from our true selves that civilization enables and is enabled by. It wasn't always this way. We didn't ask questions like this before, much less proceed to employ arbitrary tools to answer them. There is no Scientific explanation for the origin of the Universe... there is the Big Bang theory, but what was there before the Big Bang? Rupert Sheldrake jokes about this. He quotes Scientists as saying, "give us one free miracle and we will explain the rest." :) How did life originate? That's a central question for many Scientists. But first, how does one go about defining "life"? Somehow, we have a definition for what's life and what's not life. And according to modern Science, at some point, ages ago, non-life became life, non-alive chemical molecules got together to have a party and decided to become "alive". One wouldn't need to ponder over this too long before realizing that the notion of "alive" is rather arbitrary. It's as if we write up a research paper and voilà, we have drawn a distinction between what we will now declare to be alive and what we will now declare to be not-alive.
So we have taken the creation around us, drawn an arbitrary line, and gone around marking things as alive and not-alive. This is a work of fiction, not Science. An indigenous person would laugh at such insanity. He would say, all of creation is alive. Alive and vibrant, every stream, every mountain, every rock, every molecule. Even every electron, as the double-slit experiment amply shows. Modern man would rather run around in circles trying to explain how life came from non-life, asking for a free miracle, but would never admit that the very distinction between life and non-life itself is rather arbitrary, entirely made-up, conjured out of thin air... OK, right, written up by a civilized Scientist called Schrödinger. Once we make that distinction, a whole hierarchy starts forming... humans at the top, of course, animals next, plants, multi-cellular organisms, single-cell organisms, etc. Again, who's to say a human being is more alive than an animal? That a cow is more of an animal than a fish? When I tell people I am vegetarian, sometimes I am asked if I eat fish. I tell them no, because a fish is an animal. Then they tell me that a fish doesn't have as many feelings as a cow, so it is more like a vegetable than an animal. Where people get these ideas from, I do not know! Chickens are less than cows, they say. And plants are the least because they don't move. We're obsessed with categories and hierarchies. Guess that's pretty much the way of Empire. A king is more alive than the commoner, right? I am not sure we need to make such arbitrary distinctions. The indigenous person who lived in harmony with the land for 200,000 years never made such distinctions. The indigenous person is an animist at heart. He saw in things a certain spirit that we don't see. They are alive in their own way. Just like the Earth is alive. Scientists would deny that. An indigenous person wouldn't. Calling something non-alive or dead points to something in our own psyches, our own consciousness that is dead. Not being able to see the entire Universe as alive and conscious is a symptom of our own impaired consciousness. Calling a rock dead and inanimate points to an ossification of our own minds, a hardening of our own otherwise soft nature. It takes being partly dead to see death. Things mean a lot to me. I try to fix things before tossing them in the garbage. I reuse paper napkins multiple times, if I have to use them at all. Those things came from living trees. There's hardly a distinction. Everything is made of the same elements which go round and round making one thing today and another tomorrow. Any distinctions we make point to our own fragmented minds. Things are as alive as people, maybe more so... Love them with all your heart, if you're so moved to! And they will love you back! "Animism encompasses the beliefs that there is no separation between the spiritual and physical (or material) world, and that souls or spirits exist, not only in humans, but also in some other animals, plants, rocks, geographic features such as mountains or rivers, or other entities of the natural environment, including thunder, wind, and shadows. Animism thus rejects Cartesian dualism. Animism may further attribute souls to abstract concepts such as words, true names, or metaphors in mythology. Some members of the non-tribal world also consider themselves animists (such as author Daniel Quinn, sculptor Lawson Oyekan, and many contemporary Pagans)."
We live in a conscious Universe imbued with spirit and we can participate in it just as other life forms do. And everything is alive. This, we have known for a long time. We just need to remember it. Check out this fascinating film: The Animal Communicator and find out what else we've forgotten. 1. Dear Satish, Your words above are so good you might have written us into a corner with no more to say. Maybe just us folks trying to live day to day. Some little God somewhere, gave up scripting the movements of every living particle, forming everything. DETERMINED to relax and float in a sea of phosphorescent stars the little God tinkered with Intelligent Design (ID) Let there be FREEWILL. Let it go, let it all Flow. With omnipotence ID ensures any element can and will intercede...if you please. LOVE do not fight, the creation of Poetry in motion and Physics and Photons and fishy physical things are what the waves of manifestation form. Easy breezy to look behind a billion big bangs. You have total freewill to determine what you can and cannot see. Little little little fractals - forever - in an infinite sea. Determine what you will, or be very still, you're floating in it. Composed of it. Physics and LIFE OF PI...Cosmic oceans of fishy things. LOVE it all --- if you will () Crowfoot, of the Blackfoot Nation, 1821-1877 Native Americans have long viewed the animals as 'nations' and 'societies' unto themselves. 3. NOTHING elite about living forever. EVERYTHING does. Eternal Entropy recycling every type of Energy. Transforming spirits & souls & your essence. Scanning sorting and completing lessons. Higher Education I.D. LOVES to invite you to better planets. They pre-exist. Something for everyone...Some keep their populations and problems very small. You can get there from right here. When you're truly ready, you will make the quantum leap from within. Are we, are we, are we ourselves? Well most spirit returns Now maybe we've learned To stop this whirl of a lie To this earth we are bound I ask you Are we, are we, are we ourselves? A planet with perfect land use already designed by you, for you, from the one you artfully drew from thin air. Honey, your poodle was more than a little doG. Your puppyman and farm friends were always reading you. Now you know what you always knew, they were hatching love places...Running wild & free, together, reunited to the one where you really want to go. No more trauma in store. No more glass ceiling or ocean floor. No Marco. No polo after the dark matter and blue sky collide. Mo oh Mo, Welcome to the worm's a bit cozy as time flies by out there. Both hot & cold, you know how troubles grow old. So good to see your human side - test after test your strong heart is always in the right place. Face to Face I shake hands in a new place with you. No more secrets left to reveal. Shep - A crystal river of love flows to you. The past is present. This weekend I go with the flow to visit your childhood place. Here we are but ghosts in a machine. But in full living reality your footprints and imprints on the Crystal River ways "our" existence is too hard to explain. From Cotterell to Kierkegaard you touch on it all. I leave you all here (for now) as the Crystal River nuclear plant decommissioning is nearly done. Be fearless, Be happy, A universe of other kingdoms will come. One for the Kogi, ONE for all. It is now completely up to each of you to choose. You have every free option & determined belief in your heart. Very big & very small. 1.
Mark, I love the way you weave poetry into your discourse. I picture sitting with you at a table and hearing you casually speaking like this. Your Mark-speak is what inspired me to post my poem below. Please don't ever stop being Mark. And Satish, your blog post topic today has spoken to my most passionate cause: the questioning of scientism's overemphasis on reason and logic and the negative influence that was brought to our global culture as a result. Once the scientism worldview became imbalanced and exclusive, we were set upon a dark path indeed. It never should have been used as an exclusive worldview; it was just another tool. Instead, it became a religion, one complete with its own priests and even its own inquisition too. Thanks for commenting about it. And Mark, more poem speak ... I love it! 2. " became a religion, one complete with its own priests and even its own inquisition too." LWA, yes, Scientism is the new religion. The Richard Dawkinses of the world, its high priests and pontiffs. And modern schools the temples where children are immersed in a STEM-heavy curriculum. First graders in Silicon Valley are being taught computer programming. A whole generation of machine men in the making. They'd know all about spaghetti code! We had some interesting discussions on Scientism along similar lines on NBL last year. Plenty of apologists for this religion! 3. They'd know all about spaghetti code! You know, the other day I had to resist cracking a joke about how 'of course I know what those things are.' Nuclear reactor software needn't be precise, only just accurate. And spaghetti code is a party where a bunch of software engineers get together and eat spaghetti instead of pizza for a change. Dude, I was in that industry! But then I thought, naw, it's not a good enough joke to use in a gum fight, lol. I'm so glad to be speaking with a group of people who can identify this false road we went down. I read a book once (don't ask me what it was though) that delved into symbolic thinking, sort of like Jung was into. It posited that 'reason' was actually like a sickness that crept into our minds somewhere along the way. Imagine if it was true that we lived in a sort of holographic quantum universe where manifestations were purely generated from or in our subjective minds. If this were actually the case, then reason and objective science would be the ultimate delusion and mental illness indeed. My view is that at the very least it shouldn't be the exclusive determinant for defining our universe. All things in balance and moderation, at the very least. And Dawkins is certainly not moderate. As a matter of fact, every time I hear Dawkins pontificate, I never even hear him make any actual arguments anyway. He's always just performing to his crowd by placing his hands on his forehead dramatically, or sighing, or cracking an insult at someone for the laugh it gets him from his audience. What's up with that? I use reason, and I'm not a fundamental religionist either, and I just don't get the guy (Dawkins.) Maybe his books are different, but unfortunately his behavior and debates have discouraged me from ever reading one. You know, come to think of it, for proper al dente noodles, spaghetti does need to be cooked more precisely than accurately. Damn dude, I passed their cheesy tests. Mmmm, drippy cheesy spaghetti code, yum. 4. LWA, allow me to return the favor, if only to a small comparative degree. I've read some Dawkins, and watched several of his debates. Don't go there.
It's painful how little he understands, and how often he lies and just makes stuff up. He made a statement something to the effect that if there was a God, he was more incomprehensible than any theologian ever considered. Well, the unknowable, incomprehensible, mysterious nature of God is a pretty well established theological and general religious foundational precept. Anyone who knows anything about religion knows that. There is even an ancient Latin term for it, that I can't find at the moment; I tell you, Google has gone to computer hell lately. He made another comment once that really caught my attention. He talked about how complex the universe is, how incomprehensibly large and complex, and if there was a God, God would have to be more complex than the universe by far, and how was that possible. Or something very close to that. I was stunned that he misses entirely the elegant simplicity of the universe, its fundamental forms, its basis in the most dense, fertile simplicity that is replicated and adapted. The universe isn't founded in complexity, but simplicity, replication, and consciousness, all within a larger context of ongoing change. At least, that's how I see it. Anyway, he is the worst philosopher I've ever had the misfortune to listen to. Not as bright as he and his followers think. I'm not qualified to judge his work as a biologist, but as a philosopher and deep thinker, he's remarkably limited. His logic is lacking, too. 5. Thanks oldgrowth. I agree, I only ever watched a few of his very brief clips, and it was to be able to debate 'about' him with some people who seemed to be fans of his. So I watched a few clips. I agree, and that's exactly what I meant; he seems to make no real logical arguments at all. What seemed to me to be impressing these fans of his was exactly what I described above. 'Well you're a fool' Hahahahahaha goes the crowd. 'Well that's preposterous.' Hahahahahaha goes the crowd. 'It boggles the mind.' Hahahahahaha goes the crowd. And I was left thinking, sheeesh, I don't even see an argument here, he's just a boorish comedian is all. He just hurls insults and acts all exasperated. So ya, no worries. I've been thoroughly turned off of Dawkins, and it took very little to do it too. I just didn't get him at all, or what the big deal was about him. People must think being the biggest ass wins the argument. Okey dokey then. Thanks oldgrowthforest for making sure I didn't hurt myself with him, lol. :) 4. A "radical" commentary about human behavior toward animals that was on AlterNet today. 1. Shockingly, there are registered non-profits that promote hunting for fun... This was quite moving when I first saw it - Earthlings - a documentary that shows the many ways humans mistreat and murder animals. "Earthlings is a 2005 documentary film about humanity's use of animals as pets, food, clothing, entertainment, and for scientific research. Since we all inhabit the Earth, all of us are considered earthlings. There is no sexism, no racism, or speciesism in the term earthling. It encompasses each and every one of us, warm or cold-blooded, mammal, vertebrate or invertebrate, bird, reptile, amphibian, fish, and human alike. 2. This is going to be hard, isn't it? :( 3. Yes, OGF. The friend I watched it with covered her eyes many times. 4. Well, even though I know about and already have opinions about everything I saw in that video Satish, I watched, every bit of it, without looking away once. I guess all I can say about it is ... I watched it.
You don't need to see it oldgrowthforest, how about we just say I watched it for the both of us and leave it at that. It's the sort of thing that makes one pray for extinction, real soon. I am at a loss for anything more to say about it. Love's a bit low at the moment, so I'll just sign off for now I think. Peace and love gang. 5. - " It's the sort of thing that makes one pray for extinction, real soon." I have seen that "Earthlings" film too, some years ago. I was extremely shocked, shocked like some little child, angry, sad and extremely shocked. No matter how disconnected from Nature a person has become, you will always find suffering at his core. Adolf Hitler was a suffering person, disconnected from Nature, disconnected from his own Heart, from his own emotions, disconnected from empathy. Adolf Hitler was a suffering person. We can find many disconnected persons in modern society. No matter how criminal and shameless they are, you will always find some suffering little child at the core of their hearts. When someone does evil to other beings, then it is because he is disconnected and suffering. To be disconnected means to suffer for sure. A person who has realized real freedom and peace within won't do evil to others. A really free and balanced person does not do evil things to other beings. A joyful, peaceful and free person wants to share joy, peace and freedom. Buddha, for example, said that we should try to have some empathy, compassion even for evil beings, because those beings are badly suffering beings, but at their very core there lives some cosmic spark, some cosmic essence, the same cosmic essence that is within ourselves, within everyone, everything. I have to remember this every single day when I feel hatred against the machine, against Empire, against others, these evil, suffering beings. I have to learn that Balance anew every single day, like some Yoga. I bow before the Kogi, who showed so much compassion, even for their stupid, little, younger Brothers _()_ 6. Sorry, I wanted to speak to LWA with my last comment. 7. Apologies, LWA, if the video drained your energy... I mean, how can it not? I watched it twice over the years, once by myself and once with a friend. It's terrible. Just grotesque. Worse than "The Cove". I sometimes think of this blog as a place that provides cookie crumbs for that person who's a bit lost. He/she would somehow land here, go through the posts and the comments and start putting together a picture of just what goes on in this world. Not just the ugly parts, but the beautiful parts too. I was closed off to both for the longest time, and only in the last few years did I start seeing the connections between the dots. I am certain those of us who are here now are well aware of the extent of the predicament we find ourselves in. Most of us, anyway. But I have a feeling that once bizarre things start happening, people would want to know what's going on, how we ended up here, etc. Of course, by the time we get there, we will have crossed the point of no return. So all that will be left to do is to understand (intellectual work) and pray (spiritual work). Assuming we will have more than one shock along the way, there will be opportunities for people to try and get closer to their true selves. My hope is that all that we discuss here, all the thoughts and ideas we commit to the space here, all the energy we put into the morphic field via this and other blogs will one day serve others, when they need it the most. 8.
No problem Satish, it was my choice, even though I was already aware of everything it covered. After meditating a bit on it, I came to realize why I stayed watching it. I don't really wind up doing much that doesn't turn out to have some reason or another as to why; I have pretty solid guidance in that regard. So, no worries. It's all good. I admonish oldgrowth not to view it though. No deep animal empath should ever view that. Cheers Satish, it's all good. Energy is more or less back now. It was just a temporary despondency. ;) 9. Thanks for your wise reminder Nemesis. Forgive thine enemy. I agree, at the core of every freak is often just a suffering abused person who is only just lashing out. Thanks for reminding me too, Nemesis, that I have yet to watch the Kogi videos. I know, I know, where have I been, out partying or something? I think tonight might turn out to be the night I settle in and check those out, so I thank you for bringing that back into my awareness. Thank you for having this message for me. Cheers Nem, get off your computer and get wanking on that guitar already ... emote, emote, emote, Quas-emoto! :) 10. Nice! And maybe this would be a good antidote to Earthlings - Anna Breytenbach interview (in case you haven't watched it yet) I just watched it. Anna is amazing... I like how she explains the process of inter-species communication in plain English. 11. Yeah, those Kogi videos are worth watching a few times... thanks, Nemesis, for pointing us to them. And thanks, OGF, for introducing us to Anna Breytenbach. 12. @LWA " Thanks for your wise reminder Nemesis." It was a reminder to me as well ;-) Man, sometimes I feel like I will grab that long, heavy sword and clean those evil bastards off the planet 8-) Thank you for your comforting thoughts about this place here. Yeah, these morphic fields. I read some book written by Sheldrake many years ago. It is about resonance, morphic resonance (isn't Music the same in some sense?). When I saw the video about the "Animal Communicator" that oldgrowthforest posted, I thought of morphic resonance too. These indigenous people in the film talked about it as well in some sense: The world, the Kosmos is like a net, everything inter-connected through morphic resonance like a web, through all times and spaces. Some noise, some vibration here makes some noise, some vibration there. The Kogi tell us about Aluna, the Great Mother, who weaves the Web of the Kosmos, the Cosmic Web. Maya too weaves her cosmic web. Maya's/Aluna's web isn't just illusion; you only get lost in her web when you don't respect the Dharma, IMO. This web is the Dream of the Great Mother (Nagual/Intuition), she gives birth to the Spirit (Tonal/Ratio). These ancient caves, with all these wonderful paintings from the Stone Age in them, tell the same story, I think: The animistic interconnection of beings and things within a web of natural, cosmic, morphic resonance, Natural Mystic so to say. When times get real tough, when times get "bizarre", then people could start to panic. When people realize what direction we are heading in, many of them will start to panic; that's one of my big concerns. When too many people panic, the situation gets completely out of control. No, I am not in panic for now, I studied the weird, modern planetary situation my whole life, so I got used to it. But I smell upcoming panic in the not too distant future out there in the streets. Climate change gains ever more momentum; when this just goes on exponentially, many people will start to panic.
Panic transmits via morphic resonance very fast, like shock-waves moving through some swarm of birds. This will be the moment when civ finds its end. Maybe good for the planet, but surely bad for all the children of this world. Hey, thank you very much for "Animal Communicator". It touched my heart and bones. I liked the black leopard the most, "Diabolo", hahaha, later "Spirit". I love both names. I feel like some diabolic, spiritual black panther sometimes, hahaha: " His vision, from the constantly passing bars, has grown so weary that it cannot hold anything else. It seems to him there are a thousand bars; and behind the bars, no world. As he paces in cramped circles, over and over, the movement of his powerful soft strides is like a ritual dance around a center in which a mighty will stands paralyzed. Only at times, the curtain of the pupils lifts, quietly--. An image enters in, rushes down through the tensed, arrested muscles, plunges into the heart and is gone. Rainer Maria Rilke 13. Sorry, the title of the Rilke poem is "The Panther". 14. Nem ~ that has been one of my, if not my very most, favorite Rilke poems for a long time. I also completely loved your post of the Maya Angelou poem previously. that one is so diamond good. 15. @mo flow Glad you liked "Still I rise" by Maya Angelou. I will never forget the victims of Empire. I was always extremely sad and angry about how Empire grew big through slavery and exploitation. " I'm a black ocean, leaping and wide, Welling and swelling I bear in the tide. To me this is more than just about the black people. To me it's about cosmic Blackness, the Nagual, Kali, the Great Mother, who gives life and takes it. To me the color of Empire is just grey, ugly, lifeless, nagging grey. Those busy men in their grey $1000 suits, with their grey faces, their grey bloody money, their grey, machinelike hearts and minds. I like black very much, black like "Diabolo/Spirit" in the "Animal Communicator". "The Panther" is one of my favorite Rilke poems as well. Everybody knows what it's about immediately. Living in a prison, on prison planet. But there is this image, this remembrance of freedom, freedom is still alive within that Panther, not forgotten, not dead. Freedom lives on, even behind bars (like within Nelson Mandela during his time in apartheid-prison): 16. I have suspected that it's easier for Germans than Anglos to appreciate blackness. 5. Hopsilophodon hops along. On three toes he goes; grubbling. And the hummina-hummina hums and numbs and hums and numbs, Like summoned drums, As hopsilophodon hops along. On three toes he goes; grubbling. And the hummina-hummina is all around him. (Grubble – to feel or grope in the dark.) A Hypsilophodon was a little three-toed, bipedal, vegetarian dinosaur that existed during the Cretaceous. Hypsilophodon may have foraged around at night to avoid being eaten. This is a poem I wrote many years ago as a metaphor for incarnation. I thought you all might enjoy it. 6. "all of creation is alive" oh yes. all of creation, every last electron, is part of the One. and the One is most definitely alive! Satish, your commentary on this issue really hits home for me. the life/non-life distinction that science tries to define is so much at the heart of where things have gone wrong for us. we (western civ science we) first give ourselves the right to make that definition. then we forget this was completely arbitrary, and created out of thin air, and then we give our definition the power of God.
and we now have that power, ourselves! voila, and we get to mold and move the world as we see fit, based on this definition. isn't that a nifty parlor trick? like you say, this is not Science. this is complete fiction. the worst kind of story making, because it is both unconscious, and driven by (deeply unconscious) ill intent ~ the desire to control, at all costs. lwa, I love your poem! very evocative of your metaphor, and also harks nicely to the jabberwocky, one of my faves. more poetry! so beautiful, this whole post, Mark. 1. @mo flow It is my fault on NBL, I know that. I invest too much time on the internet while my guitar gets very jealous, she demands lots and lots of devotion, she is more jealous than any woman I ever met :D So I guess that I secretly/unconsciously dare to get banned on NBL. I also seem to seek farewells because of my own bio. When one has had to say farewell to so many people during a lifetime, you start to seek farewells all the time; it gets repetitive. At least, that's a big part of my sometimes strange personality, kinda firebug, burning down everything all the time, leaving everything behind all the time, clinging to nobody and nothing. But I know that this is not Balance the right way. I think I should leave NBL alone for a while and just write some spare comments here on Kuku. I feel more comfortable here, more at home. 2. hey Nem ~ I knew exactly what your secret desire was, and why. the way you said Please alone was enough to clue me in, but I kinda caught on a while ago about certain things you mention here ~ knowing your bio as you have mentioned it on NBL. you are right. it's not balanced. but what exactly does it mean in today's world to find true balance? is this actually possible? it is about peace, that much is clear. peace within flow. I'm working on this all the time. 3. - @mo flow Yeah, you know me quite well. And you are right, it's about Peace and it's about Balance. And yes, it is hard sometimes to stay in Balance in today's world. An important thing seems to be Comprehension. You got Comprehension and I appreciate that very much. It's late in the evening. I am sitting here with guitar in one hand, coffee in the other hand and trying to get the next song into flesh and blood. My fingers are ice-cold, that makes it harder, the heater shut off automatically already, I am freezing actually, brrrr. It is one thing to learn lyrics and chords, but it's another thing to breathe Life into a song. Well, I am on track with Satish's essay now, cool - "What is Life?" From a musical point of view, one has to breathe real Life into a song. You need to be in Balance. It's not enough to learn the chords and the lyrics. It can be just one easy chord and one simple line of lyrics - if you don't breathe Life into it, it sounds lifeless, stiff, mechanical, without breathing, without vibration. The air must vibrate, there has to be a certain vibration, a certain Resonance (hat tip to Rupert Sheldrake). You can feel if a song performance is alive or not. When it's really alive, then there is a certain kind of communion between the performer and the audience, a certain vibration that vibrates in the air, forth and back between audience and performer, they actually become One on some level, on a musical level. That's why a live performance is called a "live" performance. In communication it's the same, I think. "Comm-unication", "Comm-union"... quite the same, isn't it? Thank you for your comprehension, dear mo flow! ...
brrrr, it's real cold now here in my little living-room... brrrr... I have to get more coffee now, brrr 4. Resonance, yeah... Re-sonare 7. Dear Satish, I haven't looked here on your blog for a long time, and I'm so happy that I did! What a wonderful essay. You say what I feel and know to be true. I don't think I could watch "Earthlings" because I've seen too much already and my imagination makes me see the rest. For example, every time I see a horse (or a donkey), I can't help but imagine all the cruel ways these lovely, powerful animals have been exploited and misused for the "benefit" of man, especially in all those "glorious wars". I'm just re-reading Tolstoy's War and Peace which contains some very explicit scenes of how horses were used like (throw-away) machines. The way they went through artillery horses, so many of them just for servicing one gun - utterly dreadful. This is not the sort of thing they show in movies or TV adaptations. But yes, that's the reality when humans are disconnected from life. Cruelty and exploitation become second nature. And dear Mark too, I agree with LWA, you have a truly lyrical way of expressing yourself. Remember last year, at this time, I said here that I live from season to season? You picked up on that in your poetic way. So here we are again, another season, a very early spring. I have some lovely hellebores (Helleborus orientalis) or Lenten roses in the garden. They don't care about the deluge-like rain we have here or the cold and look positively exotic, more like plants from a tropical climate. The cold makes them bend their heads to the ground but as soon as the sun comes out, they stand up straight again, drinking in the light with pleasure. I'll send some pictures to your email address. And OGF and MO here again. I've missed you both! 1. Hi Sabine. I think we spoke face to face on NBL just once. I've read a bit of you here on kuku's past threads, and I just wanted to say hi. So ... hi! Nice to see you drop in again. :) 2. Welcome back, Sabine... so good to see you here again :) I don't think it's a random coincidence that we were talking about you here and you decided to show up and say Hi. There's more going on here. It looks like your approach to taking it season by season is becoming the norm for all of us. May we all have one more season. 8. Sabine! so good to see you here again. I've been thinkin about you, and wondering how you are doing. great to hear about your hellebores. yes, please stay FAR away from Earthlings. I am not going there, myself. Satish, solivagant, all ~ here is something from the Bhagavad Gita, that goes straight to the heart of what we were discussing in the previous thread. I am moving this discussion here to make it less unwieldy. from The Ninth Teaching ~ The Sublime Mystery (Krishna speaking, paragraphs 8, 9, 10) Gathering in my own nature, again and again I freely create this whole throng of creatures, helpless in the force of my nature. These actions do not bind me, since I remain detached in all my actions, Arjuna, as if I stood apart from them. Nature, with me as her inner eye, bears animate and inanimate beings; and by reason of this, Arjuna, the universe continues to turn. so in paragraph 8 here, we have, perfectly, the idea of both "freely create" and "helpless in the force of my nature". meaning, in a very real way, helpless as in out of control. this is exactly how it was. as I was describing previously, when the One surrenders to its own nature, it is utterly helpless.
it is completely caught up in the force of its own nature. that force is extreme, beyond all conception. interestingly, relating to this current thread, paragraph 10 here talks about animate and inanimate. but says first he is behind all of this, with his inner eye. by reason of his own living nature, it all turns. it is all alive. Krishna specifies over and over, in various ways, that he is the Creator, without being Identified with his creation. earlier in this same chapter, he says "my self quickens creatures, sustaining them without being in them." this is absolutely the case. there is no way the One, the true being of Krishna itself, can be IN nature. every atom would be instantly annihilated with the extreme force of that Being, if the One itself lived in that atom. the One supplies all the life force that is behind everything, sustaining everything. and the universe turns, alive. 1. I love the Bhagavad Gita. 2. Hi, Sabine! I mentioned you in a post just last night, and here you are. So happy to see you here. I hope you watch the Anna Breytenbach documentary if you haven't seen it before. Are you familiar with her? I thought of you and your love for nature when I posted it here. 3. mo, that's quite interesting... what you experienced being articulated by the Gita. I am going to be pondering over this for a while, of course... for now, these thoughts come to mind... there has to be a seed, a very small seed of non-control, of agency, even when we're very much out of control. It's like a thin thread that sprouts and becomes a rope strong enough to reel in an otherwise out-of-control scenario. It's as if the yin has shrunk to an infinitesimal size while the yang is fully expressing itself, as fully as it can while still leaving a little space for a diminutive yin. Without that almost non-existent yin, the yang wouldn't know itself anyway. And the yin will never yield completely. And when the yang is done, when it has had enough fun, when it is ready to return to balance with the yin, the seed of yin sprouts and grows and occupies the space that the yang is relinquishing, and they are back in balance, until, of course, it's time for yin to express itself fully... In that eternal vibration, pictured as a wave, there is still a force that acts at the extremes, one that seeks to restore balance. In fact, the force is maximum at the extremes. So, I wonder if, if anything is out of control, if there's something that we cannot escape from, it's this duality itself. We simply can't be totally out of control, nor can we be totally in control. Now, that idea, that phenomenon is perhaps the only thing that is out of our control. We simply can't vanquish the little seed, whether it's the yin or the yang. As usual, I seek to relate these musings of the cosmic game to the story of humanity. And I think of sociopathy as that seed that was ever present in the best of times, in the most harmonious and balanced periods of our existence on this planet. The seed must have existed all along, throughout our evolution, and it had finally sprouted at some point along the way, whether it was 10,000 years ago with the dawn of agriculture, or even further back, with the invention of language and the beginning of the loss of telepathic abilities. At some point, that seed sprouted and what we're seeing today is the full-blown expression of sociopathy and all that it engenders.
The sheer virtue and skill of our indigenous ancestors was in knowing about this seed, and keeping it from sprouting just yet, which they did by ostracizing, banishing, or killing the one or two sociopaths that showed up in every tribe every now and then. But at some point, the sociopath ran over his tribe, perhaps when the tribe was caught in a state of weakness due to a natural calamity or some such situation that threw the members into a bit of chaos when they couldn't keep tabs on the sociopath. This is perhaps the "out-of-control" aspect of the One. And it answers that question, "why would a decent God let the world go to hell?" It's also interesting to see that in current times, the indigenous peoples and the meditators and the monks are the ones who are being the most resilient in their psyches and minds, because, at some level, they understand what's going on and the inevitable nature of some of the ongoing events. Well, those are just some thoughts... I don't know if that's the way it is. Although, I never stop looking for "the way things are" despite my doubt that there is a way that things are :) OGF would know what I mean! 4. I like your thoughts here, Satish. I want to be careful about this one idea you mention. the "helpless in the force of my nature" ~ meaning out of control aspect of the One ~ creates everything. every manifest thing, and every unmanifest possibility, arise out of that. the infinite ways a system could achieve different forms of harmonious balance, for an arbitrary amount of time, arise out of that infinitely "surrendered to" force of the One's nature. this is a key way I felt the infinity of the One. ALL of manifest and unmanifest Creation has to happen right NOW. the only way that is possible is if the One allows its INFINITY to dominate all other considerations. infinite force, power, foresight, intelligence, sensitivity, and who knows how many other aspects of its true nature that we have no words for whatsoever. yes, that a sociopath could arise out of this infinitely forceful creation is definitely part of the deal. but everything else that exists, or might exist, also arises out of that same infinite creation. 5. mo, you seem to have experienced the tremendous power and strength that's behind all of creation, that gives life to everything, life not in biological terms, but that spirit of existence to everything. It's as if a little piece of nuclear fuel can be demonstrated to be made up of an immense amount of energy. But that is still physical energy, measurable in kilotons or megatons, and there's something even more powerful about the kind of force you perceived, it being infinite. It's interesting to imagine all of creation as being due to the out-of-control nature of the One. 6. infinitely satisfied and infinitely hungry for MORE! the definition of OC peaceful paradox. that's me. But you cannot see me with your own eye; I will give you a divine eye to see the majesty of my discipline. Krishna speaking to Arjuna, from The Eleventh Teaching ~ The Vision of Krishna's Totality If the light of a thousand suns were to rise in the sky at once... the Vision of Krishna's Totality is a powerful, if very poetic, description of "seeing from the outside" the true nature of Krishna's Totality. it is so powerful and complete, and accurate in some key nuances of detail, that taken along with everything else in the Gita, I am convinced that the original author or authors of the Gita knew from the inside exactly what they were talking about.
anyone can be there, or be IT in Totality, as ultimately, we arise from IT, and our awareness is IT's awareness. that's really the only point of the Gita and all similar works. just to communicate the reality that you are IT, and you can know this again, in complete Truth. 9. Dear SABINE, I wanted to be sure to remind everyone on NBL of you as we started this year. I said something along the lines of "If Horton hears a Who, I hope SABINE is out in her garden praying for one more year." Both NEMESIS & LWA are so much like Mayan reflections of myself. I feel like I can read them rather than write. I love how each of them is so wise, yet down to earth, stumbling around making the same mistakes I did in comments last year. Yikes, now I just stepped in it again saying that. Praying my brothers above know exactly what I'm trying to say here with a laugh. Very stormy on the other side of Fla tonight. Earthlings & The Cove are horror movies. Worse yet the COVE slaughter has increased. I try to balance with the art & poetry aspects to reflect what beauty there is all over the world. I'm at the Crystal River lodge in this amazing cypress jungle. Thunder & wind. A vase of fresh lilies on the table. A flickering candle in this old cabin room. Thinking of how some future advanced intel might be able to read what we recorded about these times. I might not spell this right but I bet each of you has heard of the Akaskic record. Since everything in space is always in motion, all you have to do is capture exactly where every star and atom is at right now...and wow...that's a perfect code system to return to this exact moment in time. I used to say that only radiation would remain when I was dealing with that very serious issue. However, tonight I'm sure the force of love does last longer. Stronger. Strangers in the night we will all begin...again. 1. You didn't step in it, Mark. But I did over there. Then I smeared it all over myself and ran around naked right through the center of the place waving my arms and shouting. It was all in good fun though. I like your mental picture of the Akashic record. It's also sort of based, I believe, on there really being no time anyway. No future, no past, just now. Shamanic time. All of it happening right now. And me covered with it and running around naked waving my arms. Don't blow away Mark, we need you. Peace and Love All. 2. Of course that was love I smeared myself with, and I was only naked underneath my clothes. Jeepers people. But the arm waving was a bit weird. 3. @Mark Austin Sure it does. Yes, Lake'ch, I and I 10. About Life, Balance, the Self, Individuation, Duality and Oneness: " C. G. Jung - The Self" C. G. Jung about Synchronicity: 1. - SYNCHRONICITY, the constant, meaningful, conscious, spiritual relationship between inner and outer world - this, the inner and outer relationship of Psyche and Kosmos, is exactly what the indigenous people, the Kogi, have never lost. But modern man lost this Lake'ch, this meaningful relationship to himSelf, to others, to Nature, to the Kosmos. There is a crystal-clear relation between the destruction of the ecologic web and the psyche of modern man: An eco-spiritual crisis, the loss of Balance between inner and outer world, psyche and Nature/Kosmos. 2. "There is a crystal-clear relation between the destruction of the ecologic web and the psyche of modern man" Yes, this is indeed an eco-spiritual crisis we're going through. I like that term.
Our collective psyche, currently wayward, is related to the ongoing destruction on the planet. A disconnection from source, one that leads us to a rather restrictive and limited worldview that characterizes the richer aspects of the Universe as superstition, is similar to the disconnection from Mother Earth, from Earthlings, from Nature and from all of Creation. 3. Exactly. And modern, purely materialistic science or technology cannot solve that crisis (although not all modern scientists are purely materialistic). I like what Jung said about the relationship (meaningful synchronicity) of inner psyche and outer matter: It might be one and the same entity, just viewed from within and viewed from outside. Aluna's web connects both through synchronicity in a meaningful sense (Love? Consciousness? Inter-connectedness). The inner and the outer world mirror each other. 11. Kundalini Yoga meditation for atomic radiation. It can't hurt. 1. Yogi Bhajan is the person who brought Kundalini Yoga to the west. I wrote briefly about him in the previous thread. One of Yogi Bhajan's sutras for the Aquarian Age is "Recognize that the other person is you." All these old cultures seemed to know this. Cheers other me's. 2. " "If your inside is in a turmoil, this meditation will prevent you from dying. It can be done anytime, and its effect will be to calm you, to energize you, and to relax you." – Yogi Bhajan" Yes, mon! Calmed, energized, prevented from dying and relaxed, now I can read the more toxic, radioactive comments on NBL without reacting to them, cool 8-) 3. And yes, I am born under the sign of Aquarius, coincidentally. I am baptized in the waters of the black river. 4. The black river Styx: To me, Death is a big part of Life. When I was baptized in the black river Styx, I found out that the source of Life and Death is within, manifested as/within Breath (Anima, Psyche, Odem), within the eternal tides of breathing in and breathing out, the tides of the oceans, the tides of the Kosmos. This Breath wanders, travels around eternally, but it doesn't get lost while we travel the passage of Death, for womb and grave are the same, fertile humus. 5. Nemesis, thank you for sharing all these gems of spiritual insights here. I connect with them. You have said it very well about the inseparability of Oneness and dualism, the relationship between and inseparability of the inner and outer worlds, etc. 6. Satish, yes, I enjoy sharing this joy. If I ever babble too much, then please give me a hint. I get out of control sometimes, swept away by inspiration pouring itself into words ;-) 12. - Still comments on NBL that try to "prove scientifically" that every single man, that humankind, Homo sapiens (that's what science calls it) as a species is a beast, cancer all in all, who exploits and destroys and kills everything, ha ha. Comments that try to prove that those who respect and handle Mother Earth with care and live in Balance with the Cosmic Law are the same as those who rape and exploit and destroy Mother Earth. There have always been countless tribes, countless human beings, who respect Mother Earth and prove those "scientific evidences" wrong. And they still exist today. There is no scientific way to turn a lie into truth. 1. - These Agents of Empire, who say that all human beings are unbalanced rapists, beasts, thieves, exploiters, liars, cheaters etc., suffer from a psychological projection.
Those people didn't integrate the shadow (dark side) of their own psyche and therefore project it onto all others and everything else. And they end up saying stuff like that: " The only thing naked apes do that is sustainable is bullshit and moralize." This quote implies two things: That 1. sustainability is futile/impossible and that 2. trying to live sustainably means to "moralize", to "bullshit". This is a perfect example of ignorance that Donald Trump, every redneck and the like would be proud of. What does "moralize" mean here? To live in Balance with Nature? Not to exploit and rape Nature? To warn about the consequences if we don't live in Balance with Nature? Then "moralizing" to me is a very good thing. Let me quote Guy McPherson here: "... Instead of changing, people embedded within the dominant paradigm prefer to disparage others. "China is horrible," they proclaim. "They're burning all that coal, polluting the air." Or maybe it's Brazil this week. Or India, using all those "resources" we need here in the homeland." Exactly. And one could add: " And all others would have done the same as I do, as we do, all indigenous people, bullshit, no matter who, all human beings, all naked apes are as ignorant as I am, as we are. All the same. Period." Yes, an ugly, bullying, psychological projection that is. " Carl Jung – Shadow Projection": The "Bardo Thodol" tells us in a more mythical language that, in the Bardo of Death, we will inevitably be confronted with our own inner shadow. In fact: " Any state of consciousness can form a type of "intermediate state", intermediate between other states of consciousness. Indeed, one can consider any momentary state of consciousness a bardo, since it lies between our past and future existences; it provides us with the opportunity to experience reality, which is always present but obscured by the projections and confusions that are due to our previous unskillful actions." See you in the Bardo, 2. hey Nem ~ really loving everything you've been saying here. keep it up! 3. hey mo flow, glad you can relate to it. The way Empire clings to denial is just insane, suicidal. Empire will just go on with denial until its end. Well, that's the way Samsara (fueled by greed, hatred, ignorance) works. 13. What a wonderful way to laugh till I cry. Waving my arms and yep I'm actually naked but fully clean and showered. Hot & Muggy after the rain. Gosh I was going to say something serious about the Decommissioning tour today, but I just read a letter from MO about not caring...and well I have to admit all my 6 dollar bionic bits are too tired to care about big stuff anymore either. I've seen more than I ever imagined I could in this life sailing the seas. D.C. ABC Cern, cc me on all of infinity. Still I have this wild spark of love for everything. Now how can I possibly love a dolphin slaughter, shit, I don't - I'm just grateful that I got to witness so much. Certainly learn what I would not do if I got to create another tiny speck like this just across the galaxy. The Milky Way is so vast. Forget the universe. Just the scale of our own solar system is really really huge. So I guess it only makes sense that just for a nanosec infinity let technology and everything go. It just blinked. That's all. And it still has no idea what we are experiencing here. Blogging isn't even real shouting. Oh but LWA you sure made me laugh out loud and now I still feel all happy and cozy out of nowhere. Maybe it's only the feelings that matter? I do not feel like an Illusion.
LOVE everything said above. Nemesis warm up your guitar fingers. Nothing Mayan tonight. This is real and real can be really nice despite all its deadly flaws. Better cover my thighs with diamonds, the spirit of Maya Angelou just told me I look like a manatee...oh dear me...not even diamonds are forever. Does love radiate? Once I sailed all day thinking I was on Autopilot, but when I went to switch the autohelm off I discovered it was a rare occurrence of every sail, rope, winch, wind, waves, everything pulling together so perfectly that NATURE was the autopilot. I should write something beautiful about how stunned I was to discover that harmony. But the biggest lesson was that I only knew it at the END of that perfect balance day on Banderas Bay. 1. I think even better than Mark's know-it-all poet speak is his sneaky speak. It's almost like that tiny spot of yin barely detectable in the teardrop of the crying, hurting yang. Maybe it's only the feelings that matter? You know, one thing I've learned from my magical mystery tour, and I told mo about this once, is that being able to stay still with the love and the peace, and to still be able to find the beauty and the ability to have compassion, while right in the center of hell, is somewhat of an attainment of sorts. It could even possibly win you a prize somewhere down the line. Maybe you could win a small candy treat or something by staying with the love. I've also seen that when collapsing waves of choice in the quantum, it's more the feelings that determine the outcomes anyway, rather than the intent or desire or the entrainment on something. Although, entrainment certainly can arise out of feelings, especially out of dark and scary ones. That's why the fear has worked so well for the naughty masters. Specks of doubt are deadly little things, like pieces of plutonium. It's not easy to trust and to believe, compared to the easy certainty of doubt. And above all, it's tough to conjure peace and relaxation and love in the midst of pain and horror and the goings-on at the likes of the Cove. But, if you slip into fear, then that's what you'll project and create around yourself … a fearful existence is what fear manifests. That's why I think the following sneaky speak was such a powerful message from Mark. This is why I get concerned sometimes about all those nice folks at NBL, especially the thoughts of apneaman last night. So ugly and fearful. But alas, I've never actually been able to save anybody, really. Not a single one. Not even that fellow in the south of France who lost his girlfriend, and then his forest, and then his hope, and who then turned to dreaming of violence. But, with Lake'ch, maybe just getting myself there will be just enough to make a ripple that will grow and touch and bump, and without so much effort even. It wasn't until I withdrew mentally from NBL a month back that I even found out why I was there to begin with. Just like your sailing trip Mark. Funny how that goes, like time running in reverse or something, the idea of retrospect. Seeing how it was all perfection to begin with anyway, and just needed to be left alone to fall into place, without the need of interference by conscious thought. No fussing required. I just needed to be willing to walk along the way was all, knowing it would all make sense later. Sneaky speak. I do love it. Chuck chuck chuck chuck … tee hee hee hee hee Ha ha ha ha ... Ho ho hoo HOO HOO (Oops, that was my outside laugh, wasn't it? Sorry 'bout that. Apparently I've gone kuku now.)
Mark, you're absolutely incredible!!!!! And then, there were the comments of LWA. You guys are so amazingly "out there!" Nothing to say. Just glad you're there to light up the way. 3. - @Mark Austin Yeah, despite all these shiny technology gadgets, they still have no clue what "conscious experience" means. They can track the traces of conscious experience in the brain, but not experience itself. They are searching for consciousness within brains, within neurons, within time and space, but not within. They'd really like to grab that human spirit and put it in a box to fulfill total, infinite control. They will end up with their own human experience inside, from within, when times get tough and shiny techno-gadgets don't help anymore. " Maybe it's only the feelings that matter? I do not feel like an Illusion." To feel is a great gift. To feel means to be interconnected, connected to oneself and the rest of it. The ability to feel is vital in evolution, is vital for survival. Emotional resonance, balanced emotional perception is the basis of effective, rational intelligence, IMO. Without emotion and ratio in Balance, shit happens. Modern man can explain splitting of the atom, but can't feel the suffering of Mother Earth. They can fly to the moon, but can't reduce unnecessary suffering on this planet obviously. Sail, rope, winch, wind, wave, yin, yang, nature and you as One great stream of consciousness on autopilot. Great, inspiring image. When Oneness happens, when YinYang, the Tao, is in Balance, I can't even say anymore if it's autopilot or manual control, it just feels like this: The same when I breathe in and out like a swing: I just can't even say if Nature breathes me or if I am breathing Nature, hahaha. Sail on, Captain Austin! 4. "I just needed to be willing to walk along the way was all, knowing it would all make sense later." That's exactly it for me... what doesn't make sense now will make sense later. That's part of the deal. Part of the infinite and unfailing justice that the Universe is made out of. Maybe there will be a point when it doesn't have to make sense, and that's good too, because there's enough going on in the moment that's more than good enough. It's like trying to remember something that was just on my mind, trying hard for a split second, before suddenly realizing that it doesn't matter because the current moment is plenty rich. And if it's important enough, it will come back. It all works out. Every time. Always. 5. Nature creates all beings without erring: this is its straightness. It is calm and still: this is its foursquareness. It tolerates all creatures equally: this is its greatness. Therefore it attains what is right for all without artifice or special intentions. Man achieves the height of wisdom when all that he does is as self evident as what nature does. For this very reason the earth has no need of a special purpose. Everything becomes spontaneously what it should rightly be, for in the law of heaven life has an inner light that it must involuntarily obey. Therefore in all matters the individual hits upon the right course instinctively and without reflection, because he is free of all those scruples and doubts which induce a timid vacillation and lame the power of decision. The I Ching (Wilhelm) A rather scathing indictment of the folly of reason, I would say. 14.
14. Nemesis and Mark, you both talk about yin and yang and express what I feel too in almost everything you write here, but from the point of view of an old woman, I'm not entirely happy with this symbol of Oneness. I have one or two thoughts on that I'd like to share. I'll get back to this tomorrow (17:35 here now) because I have to start making some food. Enjoy the rest of the light for today.

15. Hola Satish, I hope the day finds you at peace. The indigenous person is an animist at heart, and so am I! I remember a conversation with my daughter over a Thanksgiving dinner many years ago about what is alive and what is not. I made the comment that even the plate she was eating off of was alive... She called me crazy of course, and we all laughed... I said that even though the plate looked solid and would break into a thousand pieces, there were in fact molecules moving within the plate. Even a grain of sand is alive in the world. My daughter, my wife and I have all been vegan for about five years now. We love and respect all life and its creation in the world. I wish that everyone in the world could see life as the indigenous people see it. Humanity, for all its good things, has failed to grasp the basic truth that our world is a living, breathing entity and will in the end self-regulate. We see it happening every day. As your article states, we will become another part of the great cosmic plan. Something to be joyful about!!!!!! Peace, my brother.

1. Great comments. I have not read about animism, but I seem to be thoroughly animistic anyhow. I suppose that not knowing how it's supposed to work gives me a chance to discover. I find that when I ignore or mistreat my environment, it is because I have been programmed not to see it as living. And when (all too rarely) I see it as living, I'm considerably kinder and more careful in how I relate to it.

16. What is life? What is the source of life? I'd like to share some further thoughts on that question. It is too obvious that the environment isn't just some kind of "surrounding"; we are a part of it, we share the cosmic essence with it, not just spiritually but literally. It's all about sharing and inter-connectedness: we are connected to Mother Earth through the air we breathe, through the food we eat, the water we drink, the wind we feel on our skin, through just everything. We come from Mother Earth, we live now through Mother Earth, and our body/mind will go back to Mother Earth. The indigenous people always knew it and still know it today. They had and have their rituals, often celebrated in caves. The caves are the womb of Mother Earth, and the indigenous people (the people of the Stone Age as well) see those caves as the place where all life comes from and where all life goes back to. It's the Nagual; we call it "darkness", but it's the source of light and dark. The source where dreams, thoughts and feelings also emerge. It's the inner, mental body, what Buddhists call the "subtle" body, but it's no physical body at all (though it's the source of the physical body and the physical world). It doesn't have any extension or mass in time and space in a materialistic sense; its middle is everywhere and nowhere; it is hidden within. Therefore modern, materialistic science can't find it, hahaha. But it's there, all the time. Do stories, myths, archetypes, or the web of culture and ideas, for instance, have extension in time and space? Well, obviously, in some sense. But that extension, in a materialistic sense, just can't be measured exactly.
But it's there. How big or how long or how heavy is a thought, an emotion, an inspiration? Do dreams have extension in time and space? I think that trees are dreaming, Mother Earth is dreaming, Nature is dreaming, Man is dreaming, the whole Kosmos is dreaming in some sense. Those dreams can't be found through microscopes or within some particles at CERN, hahaha. But it's there, it's all there, within. Imagination, inspiration, meditation, Samadhi. The timeless source of dreams and reality, the source of time and space, is within. There is an ultra-deep connection between the ancient, timeless, inner dreamworld and the real world (synchronicity is part of that connection as well). Modern man got lost in physical, rational time and space alone; he lost the connection to his inner, immaterial world of visions and dreams and inspiration, and therefore he lost the meaningful connection to anything else. He lost his soul, his self, his natural home. Modern man is a restless wandering orphan, searching for himself, but the faster and the more remotely he searches, the more he loses himself. When we look at Nature, when we look at that stunning, vast Kosmos, when we look at others, we see the material "outside" of the miracle, so to say. But to find the source of this miracle, we have to search within; we have to turn our sight inwards in the most intimate way. It can't be found in laboratories or hadron colliders, never ever. Whoever searches for that source within will find it for sure. It's never gone, always there, the cosmic source, the infinite source. It's within me, it's within you, forever and always.

1. Well said, Nemesis! Excellent thoughts. Very clear... Love it!

2. Thank you very much for your positive resonance, Sir Satish Musunuru! I'd like to go on, then, with some further thoughts about our relationship to the essential, cosmic source within and our relationship to anything and anybody else... It's not about superpowers or becoming all-knowing in the ordinary sense. It's not about "controlling" anything or "conquering" anything or "profiting from" anything. It's about relationship. What is my relationship to it all, what is yours, what is anybody's relationship to it all? And what kind of unavoidable responsibility is connected to that relationship? We have to look and listen deep, deep within, to find out... When we look at a tree, don't we often see more of our own personal relationship to that tree, our own prejudices, instead of seeing the real tree? To say that "this tree only exists as long as I exist" tells something about a certain kind of relationship to that tree. It implies that the tree has no life of its own, that I am the creator of that tree. But I am definitely NOT the creator of that tree; within myself I am only the creator of my own relationship to that tree. I am not the creator, the father of that tree; I am a brother of that tree. Maybe that tree is just some "hallucination" of some brain? Some say that. But that kind of statement tells more about one's personal, individual relationship to that tree and to life in general than anything else. It is a childish inflation of the Ego to say that "the tree, the Kosmos, only exists as long as I exist; when I am gone, then the tree, everything else, the Kosmos will be gone too". It is great confusion, Ego-confusion. But the tree and I share the same source within. It is as if the source were a river, and both the tree and I share the same source of life, this river, this water of life.
The tree is its own being, I am my own being, but we share the same cosmic river, the same cosmic water within. We can call this source "Big Bang" or "God" or "Shunyata" or "Tao" or whatever; names don't matter much. It is One source, but it creates Many things and beings. This source is everywhere and nowhere. It is the Cosmic Breath that breathes within the high tide and low tide of the oceans, that breathes within the circulation of the rivers on planet Earth; it breathes within the winds, it breathes within the circulation of the galaxies, it breathes within life and death, it breathes within ourselves and within Music. This Cosmic Breath, this Cosmic Source, is within everything and all of us, within matter and beings, every second, infinitely. It is within the dance of the atoms and molecules and stones and bacteria and plants, animals, human beings, gods. We can't find it through microscopes, large, larger, largest hadron colliders or Hubble telescopes, but we can find it in our own, individual, personal relationship to it all, within. Our relationship to it all defines who we are and who we will be. Some call this unavoidable, individual relationship Karma/Vipaka. There is no chance to run away from it, no chance to run away from yourself. The innermost essence, the innermost cosmic source within, can't be destroyed, can't die, can't cease, can't dissolve, for it is the source of it all, for it is everywhere and nowhere. Find out what your most intimate, real, living relationship to that source, to the Kosmos, is, from within. You don't need any guru, any master, any scientist, any superpower, any money to find your own relationship to it all, within. All you need is awareness.

3. This is an eco-spiritual crisis we're facing... there's no separating the two anymore... we have lived as if there were a great distinction between the worldly matters we face here and the spiritual matters we face inside, or after death, or anywhere but here and now, living in the world. The longer we go, the more we see them merge, the more we're confronted by their inter-relatedness, the more we're forced to deal with them both at once and see them as a single crisis. The ecological crisis is a spiritual crisis. "King's sacred view of nature, based in African American tradition, aligns with African and other indigenous traditions, mystical traditions, and much of the eco-spiritual thinking that would later develop. 'Although God is beyond nature he is also immanent in it,' King wrote. 'Probably many of us who have been so urbanized and modernized need at times to get back to the simple rural life and commune with nature... We fail to find God because we are too conditioned to seeing man-made skyscrapers, electric lights, aeroplanes, and subways.'"

4. @Satish Musunuru Thank you very much for the information about Martin Luther King's thinking and feeling! I find in your quote some inspiring similarities to my own thinking and feeling, which I didn't know of until now. Respect to Martin Luther King. Yes, that's my own thinking: we are connected to the eco-spiritual Reality through our relationship to it. But I see many beings, who all have their own life through their own, individual relationship to the same cosmic source we all share. I say the source "within" because we cannot find this source without our very own, inner, individual relationship to it. I am not talking about separation here, but about a relationship of respect to all things and beings; I respect their own, individual Karma.
When I love a human being or any other being, I love it because it has its own individual being; I love it for itself. Everybody has his own Karma; I can never experience anyone else's experience, I can never experience anyone else's Karma, only my own. But I can relate to other beings. You see it through your individual eyes, through your individual relationship to it; I see it through mine. You walk in your shoes, I walk in mine. We both walk on the same road, but everybody walks on his own trail. The fish sees it from below, the eagle sees it from above. This is my respectful relationship to all things and beings: I respect their own "So-Sein", their own individual "Suchness", Tathātā. "In its very origin suchness is of itself endowed with sublime attributes. It manifests the highest wisdom which shines throughout the world, it has true knowledge and a mind resting simply in its own being. It is eternal, blissful, its own self-being and the purest simplicity; it is invigorating, immutable, free... Because it possesses all these attributes and is deprived of nothing, it is designated both as the Womb of Tathagata and the Dharma Body of Tathagata."

17. Nicolas Wessberg Natural Reserve is part of the Costa Rican national park system. It's a gorgeous piece of beachfront land just north of Montezuma. You can easily reach it by walking 15-20 minutes, just past Ylang Ylang Resort. Named for Olof "Nicolas" Wessberg, the Swedish man who lived in Montezuma decades ago and became the founder of the country's national park system, this park was established in 1994 as a permanent landmark dedicated to his memory. It has no services, camp sites, and so on, and is a "Reserva Absoluta", meaning that normally no one may go inside except park rangers. But you can walk past it on the trail to Romelia park and Playa Grande.
Friday, 15 January 2016

Towards Quantum Economics

From Wikipedia, the free encyclopedia / Blogger Ref

Quantum economics (also known as quantum macroeconomics, or the theory of money emissions) is a school of monetary economic analysis developed by the French economist Bernard Schmitt (born 1929 in Colmar, France), beginning in the 1950s in Dijon (France) and Fribourg (Switzerland).

The origins of quantum economics can be traced back to the works of prominent economists of the past. Quantum economists refer to Adam Smith's distinction between money and money's worth, promoted in his Wealth of Nations and later taken up by David Ricardo and Karl Marx:

"When, by any particular sum of money, we mean not only to express the amount of the metal pieces of which it is composed, but to include in its signification some obscure reference to the goods which can be had in exchange for them, the wealth or revenue which it in this case denotes is equal only to one of the two values which are thus intimated somewhat ambiguously by the same word, and to the latter more properly than to the former, to money's worth more properly than to the money."[1] — Adam Smith

In another passage Adam Smith emphasizes that money is not a product, but a simple means of circulation whose value does not add up to that of national output. Karl Marx dwelled further on this subject and suggested that money is but the social form of value:

"In the form of money, all properties of the commodity as exchange value appear as an object distinct from it, as a form of social existence separated from the natural existence of the commodity."[2] — Karl Marx

Quantum economists refer also to David Ricardo's idea that commodities cannot measure value because their value fluctuates:

"The only qualities necessary to make a measure of value a perfect one are, that it should itself have value, and that that value should be itself invariable, in the same manner as in a perfect measure of length the measure should have length and that length should be neither liable to be increased or diminished; or in a measure of weight that it should have weight and that such weight should be constant."[3] — David Ricardo

Léon Walras's view of money as a purely numerical, adimensional object („Le mot franc est le nom d'une chose qui n'existe pas", that is, "the word franc is the name of a thing that does not exist"[4]) is another intuition espoused by quantum economists. They also revive Jean-Baptiste Say's Law, although in a slightly different sense than the one usually retained. By analyzing the accounting logic of payments and production, quantum economists claim that global supply and global demand are necessarily identical at every instant in time. They also take inspiration from the capital theory of Eugen Böhm von Bawerk – in particular his work on the relation between capital and time – and from Knut Wicksell and his idea that money is endogenously created by banks. Perhaps the most crucial author for quantum economists is John Maynard Keynes. Bernard Schmitt was inspired by Keynes's idea that economic theory ought to integrate the nature of money and the role of banks in a "monetary theory of production".[5] Keynes noted that money is a spontaneous acknowledgement of debt, which is entered into the bank's ledger in a two-sided operation.
Other Keynesian insights adopted by quantum economists are his choice of the wage unit as the economic unit of measure,[6] as well as his idea that the macroeconomic identities between global demand and global supply, and between saving and investment, are logical identities rather than equilibrium conditions:

"The prevalence of the idea that savings and investment, taken in their straightforward sense, can differ from one another, is to be explained, I think, by an optical illusion due to regarding an individual depositor's relation to his bank as being a one-sided transaction, instead of seeing it as the two sided transaction which it actually is."[7]

Fundamental concepts

The following concepts are the cornerstones of quantum economics.

Bank money is indisputably the starting point of Bernard Schmitt's analysis. Referring to double-entry bookkeeping, he shows that the emission of money is an instantaneous event taking place every time a payment is carried out by banks. Since no positive asset can be created out of nothing, quantum economists maintain that, far from being a net asset, money is a purely numerical vehicle issued by banks in a circular flow defining its instantaneous creation and destruction. Money is therefore nothing more than a means of payment, a numerical vehicle through which payments are conveyed from purchaser to seller and whose existence in chronological time coincides with that of the payment it conveys: a mere instant.

Quantum economists introduce a fundamental distinction between money and income. Money has no positive value whatsoever, while income is the very object of economic payments. Money is emitted by banks at zero cost; income is the result of production. According to quantum economic analysis, when banks grant a credit to the economy they do so by lending it the income generated by its own productive activity, not through money creation.

Production and quantum time

The expression quantum economics, or quantum macroeconomics (since the approach proposed by quantum economists is substantially macroeconomic), has its raison d'être in the fact that, as shown by Bernard Schmitt, production is an instantaneous event that quantizes time. According to his analysis, output is literally emitted as a whole at the very moment production takes place, that is, at the instant the production process is completed. The entire period of production (a finite period of time) is thus 'given in an instant' as an indivisible interval of time: a quantum of time. As quantum economists explain it, from an economic point of view production coincides with the payment of wages. It is at the very moment wages are paid that output acquires its numerical form and is transformed from a physical object into an economic entity. The payment of wages is therefore the instantaneous event that defines production, through which money acquires a real content and is replaced by a positive amount of income.

Absolute exchange

An absolute exchange is an exchange of an object with itself (as opposed to a relative exchange, an exchange between two different objects). We can exemplify this unusual phrasing by considering a wage payment. When firms pay wages, wage earners receive a bank deposit. Assuming, for the sake of simplicity, that the firm pays wages by contracting a new loan with the bank, this new asset in the bank's balance sheet exactly matches the bank's liability to wage earners. Wage earners receive a positive purchasing power because their credit with the bank has a real object – the newly produced output. Wage earners' income therefore does not exist independently of output; it is the numerical form of output, its expression in terms of units of account. In this sense, in the payment of wages output in its physical form exchanges for output in its numerical form (income), in what quantum economists call an absolute exchange.
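To make the double-entry mechanics concrete, here is a minimal sketch in Python of the two-sided wage payment just described. The class and method names (Ledger, post, balanced) are illustrative assumptions of mine, not Schmitt's notation; the point is only that every emission is a simultaneous debit and credit that nets to zero, and that the deposit disappears when the output is finally bought.

# Minimal double-entry sketch of a wage payment (names are illustrative).
class Ledger:
    def __init__(self):
        self.assets = {}       # claims held by the bank (e.g. loan to the firm)
        self.liabilities = {}  # deposits owed by the bank (e.g. to wage earners)

    def post(self, asset_holder, amount, liability_holder):
        # Every payment is a two-sided entry: a debit and a credit of equal size.
        self.assets[asset_holder] = self.assets.get(asset_holder, 0) + amount
        self.liabilities[liability_holder] = (
            self.liabilities.get(liability_holder, 0) + amount)

    def balanced(self):
        return sum(self.assets.values()) == sum(self.liabilities.values())

bank = Ledger()

# Payment of wages: the bank lends to the firm and credits wage earners.
# Money is emitted and instantly replaced by income (a deposit backed by output).
bank.post(asset_holder="firm", amount=100, liability_holder="wage_earners")
assert bank.balanced()  # the emission nets to zero: money is no net asset

# Final purchase of output: wage earners spend the deposit, the firm repays
# its loan, and both entries are extinguished -- income is destroyed.
bank.assets["firm"] -= 100
bank.liabilities["wage_earners"] -= 100
assert bank.balanced()

The sketch mirrors the circular flow described above: creation and destruction of the deposit are the two endpoints of one and the same bookkeeping operation.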
The law of the identity between sales and purchases

Following Bernard Schmitt, quantum economists claim that the correct interpretation of the principle of double-entry bookkeeping implies the necessary equality between each economic agent's sales and purchases. Perfectly consistent with the flow nature of money, this law applies to the buying and selling transactions carried out on the set of available markets. Hence, for example, within any national economy it is always verified on the labour, commodity, and financial markets taken together. If money were a net asset, the purchase of agent a would simply be matched by the sale of another agent b: agent a would be debited and agent b credited. Yet, since money is but a flow, and consistently with double-entry bookkeeping, a can pay for his or her purchases only through his or her simultaneous sales: purchases on the commodity market, for example, must be balanced by sales on the labour and/or the financial markets. According to quantum economists this is but the necessary consequence of the true principle of double-entry bookkeeping, each agent being simultaneously debited-credited or credited-debited.

The law of the identity between global demand and global supply

Since output finds its economic measure in the payment of wages, and since income is initially formed by this same payment, quantum economists hold that global supply and global demand are jointly determined as the two aspects of one and the same reality. They maintain that global or macroeconomic demand is defined, irrespective of economic agents' behaviour, by the amount of income available within a given economy, and that global or macroeconomic supply is determined by the economic measure of produced output. Both terms of the equation D = S being measured by the same amount of wages, quantum economists conclude that their relationship is necessarily one of identity, and that present economic disequilibria have to be explained starting from, and consistently with, this identity.
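A toy numerical illustration of the sales-purchases identity may help. The agents, markets, and amounts below are my own assumptions, chosen only to show the bookkeeping: each transaction is recorded per agent with a sign (+ for a sale, where the agent is credited; - for a purchase, where the agent is debited), and over all markets taken together every agent's flows must sum to zero.

# Illustrative check of the sales/purchases identity (all figures hypothetical).
transactions = [
    ("worker", "labour",    +100),  # sells labour services
    ("firm",   "labour",    -100),  # buys labour services
    ("worker", "goods",      -80),  # buys consumption goods
    ("firm",   "goods",      +80),  # sells consumption goods
    ("worker", "financial",  -20),  # buys a financial claim (saves the rest)
    ("firm",   "financial",  +20),  # sells a financial claim (borrows the saving)
]

for agent in {"worker", "firm"}:
    net = sum(amount for who, _, amount in transactions if who == agent)
    # Over the labour, goods and financial markets together,
    # each agent's sales exactly equal his or her purchases.
    assert net == 0, f"{agent} violates the identity"
print("sales == purchases for every agent")

Summed over agents, the same bookkeeping yields D = S: the 100 of income formed by the wage payment is, by construction, the economic measure of the 100 of output supplied.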
Monetary pathologies

Inflation

Inflation is the situation where global demand numerically exceeds global supply. This situation is at odds with the logical quantum identity between demand and supply: inflation is pathological. For there to be inflation there must be some money devoid of purchasing power, which quantum economists call empty money, and which increases or inflates global demand only numerically, without altering the substantial identity between D and S. According to quantum economists, the origin of inflation is closely connected with capital accumulation.[8] By marking up prices over costs of production, firms make a profit. In the process wage earners transfer part of their purchasing power over produced output to firms. Firms may then either redistribute their profits or invest them. In the first case, shareholders and creditors spend their dividends and interest on the goods market and income is thereby destroyed. In the second case, firms invest their income (profit) by financing the production of fixed capital goods. Because these wages are paid out of a pre-existing income, their payment amounts to a purchase of fixed capital goods by firms. This is a final purchase of output, which therefore destroys income. However, current systems of payments do not recognize this fact, and allow banks to lend on the financial market the deposits formed following the investment of profits. Logically, the income invested by firms is transformed into fixed capital and should therefore no longer be available on the financial market. This not being the case, an 'empty' sum of money pathologically increases the demand for produced output: there is a nominal demand not matched by an equal supply (inflation). Quantum economists emphasize that inflation and its effect – the pathological accumulation of capital – is a macroeconomic disorder that does not stem from the behavior of economic agents. The root of the problem lies in the current accounting practice of banks, which, being inconsistent with the logical distinction between money, income and fixed capital, generates inflation.

Involuntary unemployment and deflation

Quantum economists consider involuntary unemployment a macroeconomic pathology. Unlike micro-founded theories of unemployment, they hold that involuntary unemployment is a macroeconomic disorder independent of people's behavior. Inflation causes capital over-accumulation, as empty money emissions lead to inflationary profits for firms and to their investment in the production of new fixed capital goods. Yet firms must also pay the cost of capital – the market rate of interest – out of profits. Quantum economists then argue that, as the stock of fixed capital goods grows persistently, at some point the ratio between profits and capital must fall. As the margin between the rate of profit and the market interest rate narrows, firms will either invest less or use their profits for the production of consumer goods. In the first case, national production suffers, causing a positive amount of involuntary unemployment. In the second, firms supply an amount of consumer goods for which there is no matching demand (deflation).

Sovereign or external debt crisis

Bernard Schmitt and his followers argue that a correct definition of a country's sovereign debt cannot be limited to what is known as public debt, but must include both the public and the private debt a country incurs abroad. At the same time they claim that, because of the lack of a true system of international payments respectful of the flow nature of money, the sovereign or external debt of countries is the object of a pathological duplication. In short, they maintain that countries' external debt is much higher than it should be, because a pathological monetary mechanism doubles the debt incurred every time a country finances its net expenditures through a foreign loan. As a consequence of this pathological increase in their sovereign debt, countries are forced to pay huge amounts of interest without obtaining any real counterpart. What is lost by countries is gained by what is known as the financial bubble, a stateless, pathological capital whose presence is at the origin of speculation and whose continuing growth explains the increasingly disruptive effects of the financial crisis.
Proposals for reform

All monetary pathologies identified by quantum economists can in principle be cured by reforming the bookkeeping practices of banks and settlement institutions. The reforms they propose are not aimed at changing the behaviour of individuals; they would merely alter the way transactions are recorded by banks and settlement institutions. Quantum economists propose two reforms, which would do away with inflation and deflation, involuntary unemployment, sovereign debt crises, and pathological volatility in financial markets.

Reform of national payment systems

In the reformed system of national payments, transactions are recorded in three separate departments (a schematic code sketch follows at the end of this subsection):[9]

I. The monetary department (department I) records all money emissions.
II. The financial department (department II) records all newly formed bank deposits and their expenditure.
III. The fixed capital department (department III) records the capitalization of profits.

The basic principle behind this tripartite structure of banks' bookkeeping is the practical separation of money (department I), income (department II) and fixed capital (department III). The first two departments guarantee the separation between money – a valueless, numerical vehicle – and income – a positive bank deposit and the monetary definition of current output. Because banks can issue money at no cost, by the stroke of a pen, and can thereby extend the asset and liability sides of the balance sheet theoretically ad infinitum, an over-emission of money can occur. The separation ensures that banks cannot lend more money than the income deposited with them, thereby preventing a credit-led inflation: banks would not be able to lend more than the amount of income generated by production. Thanks to this partition, bank directors would know at every point in time the exact amount of income they can lend to the public. The third department guarantees that income is not mixed up with fixed capital, which in the present system leads to the emission of empty money, the cause of inflation according to quantum theory.[10] All profits, once formed on the market for goods, have to be transferred to the third department. Profits distributed by firms as interest and dividends are transferred back to the second department; whatever remains in the third department defines the amount of fixed capital formed in the economy. Fixing these profits in the balance sheet of the third department prevents firms from spending the same deposits a second time, which would otherwise give rise to an inflationary process of capital accumulation.
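The following is the schematic sketch promised above: a minimal Python model of the tripartite bookkeeping. The three departments are from the text; the class design, method names, and the specific invariants enforced are assumptions of mine, intended only to show how the partition would cap lending at deposited income and lock invested profits away from the financial market.

# Hedged sketch of the proposed tripartite bank bookkeeping (design is mine).
class ReformedBank:
    def __init__(self):
        self.dep1_money = 0    # department I: vehicular money, always nets to zero
        self.dep2_income = 0   # department II: deposits formed by production
        self.dep3_capital = 0  # department III: capitalized (invested) profits

    def pay_wages(self, amount):
        # Money is emitted and destroyed within the same payment (department I),
        # leaving behind an income deposit in department II.
        self.dep1_money += amount
        self.dep1_money -= amount
        self.dep2_income += amount

    def lendable_income(self):
        # Banks may lend no more than the income actually deposited.
        return self.dep2_income

    def capitalize_profit(self, profit):
        # Invested profit leaves the financial department for good, so the same
        # deposit cannot be lent a second time (the alleged source of empty money).
        assert profit <= self.dep2_income
        self.dep2_income -= profit
        self.dep3_capital += profit

bank = ReformedBank()
bank.pay_wages(100)
bank.capitalize_profit(20)
assert bank.dep1_money == 0          # money is a pure vehicle, never a stock
assert bank.lendable_income() == 80  # fixed capital is out of lending's reach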
Reform of international payment systems

Granted that within any sovereign country a monetary system exists that ensures monetary homogeneity and the final settlement of inter-bank payments, no satisfactory international system of payments has so far been implemented between countries. Quantum economists advocate Bernard Schmitt's proposal for a world monetary reform based on the institution of a supranational bank acting as a monetary intermediary and as an international clearing house. They argue that international payments would best be settled using an international money; through its circular, or vehicular, use the supranational bank would settle the credits and debts of the various national banking systems. Implementing a real-time gross settlement system between central banks, the imports of goods or services of one country would be immediately balanced by an equivalent export of goods, services and/or securities of the same country. This way, quantum economists argue, payments between nations would be finally settled and money would assume its natural function of a circular and vehicular means of payment. The reform proposed by quantum economists is reminiscent of the one originally designed by John Maynard Keynes at Bretton Woods. However, Keynes's solution was not entirely satisfactory, for it still implied the use of gold and other reserve assets, and it did not fully explain how payments carried out through the vehicular use of an international currency could enable the real settlement of international transactions.
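As a toy illustration of the vehicular settlement just described: the supranational clearing house and the balancing of imports by equivalent exports are from the text, but the netting logic below is my own assumption about how such a settlement could be expressed in code.

# Toy sketch of settlement through a supranational clearing house (logic is mine).
from collections import defaultdict

def settle(payments):
    """payments: list of (payer_country, payee_country, amount)."""
    position = defaultdict(int)
    for payer, payee, amount in payments:
        position[payer] -= amount   # imports: value received, owed to the rest
        position[payee] += amount   # exports: equivalent value delivered
    return dict(position)

day = [
    ("A", "B", 100),  # country A imports 100 of goods from country B
    ("B", "A", 100),  # B simultaneously buys 100 of A's securities
]
# Each country's imports are balanced by equivalent exports, so all positions
# net to zero: the international money acts as a pure vehicle, created and
# destroyed within the settlement itself rather than accumulating anywhere.
assert all(v == 0 for v in settle(day).values())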
Bibliography

Schmitt, B. (1960): La formation du pouvoir d'achat, Paris: Sirey.
Schmitt, B. (1966): Monnaie, salaires et profits, Paris: Presses Universitaires de France.
Schmitt, B. (1972): Macroeconomic Theory. A Fundamental Revision, Albeuve: Castella.
Schmitt, B. (1975): Théorie unitaire de la monnaie, nationale et internationale, Albeuve: Castella.
Schmitt, B. (1984a): Inflation, chômage et malformations du capital. Macroéconomie quantique, Paris and Albeuve: Economica and Castella.
Schmitt, B. (1984b): La France souveraine de sa monnaie, Paris and Albeuve: Economica and Castella.
Schmitt, B. (2012): Money, effective demand and profits, in C. Gnos and S. Rossi (eds), Modern Monetary Macroeconomics, Cheltenham, UK and Northampton, MA, USA: Edward Elgar.
Schmitt, B. (2012): Sovereign debt and interest payments, in C. Gnos and S. Rossi (eds), Modern Monetary Macroeconomics, Cheltenham, UK and Northampton, MA, USA: Edward Elgar.
Cencini, A. (1984): Time and the Macroeconomic Analysis of Income, London and New York: Pinter.
Cencini, A. (1988): Money, Income, and Time. A Quantum-Theoretical Approach, London and New York: Pinter.
Cencini, A. and Schmitt, B. (1991): External Debt Servicing. A Vicious Circle, London and New York: Pinter.
Cencini, A. (1995): Monetary Theory. National and International, London and New York: Routledge.
Cencini, A. (2001): Monetary Macroeconomics. A New Approach, London and New York: Routledge.
Cencini, A. (2005): Macroeconomic Foundations of Macroeconomics, London and New York: Routledge.
Cencini, A. (2012): Towards a macroeconomic approach to macroeconomics, in C. Gnos and S. Rossi (eds), Modern Monetary Macroeconomics, Cheltenham, UK and Northampton, MA, USA: Edward Elgar.
Rossi, S. (2001): Money and Inflation: A New Macroeconomic Analysis, London and New York: Routledge.
Rossi, S. (2006): The theory of money emissions, in P. Arestis and M. Sawyer (eds), A Handbook of Alternative Monetary Economics, Cheltenham and Northampton: Edward Elgar.
Rossi, S. (2007): Money and Payments in Theory and Practice, London and New York: Routledge.

References

1. Smith, A. (1978), The Wealth of Nations, Harmondsworth: Pelican Classics (first published 1776).
2. Marx, K. (1973), Grundrisse, Harmondsworth: The Pelican Marx Library.
3. Ricardo, D. (1817), On the Principles of Political Economy and Taxation, London: J. Murray; reprinted Cambridge: Cambridge University Press, 1951.
4. Walras, L. (1952), Éléments d'économie politique pure, ou la théorie de la richesse sociale, Paris: Librairie Générale de Droit et de Jurisprudence.
5. Keynes, J.M. (1933/1973), 'A monetary theory of production', reprinted in The Collected Writings of John Maynard Keynes, Vol. XIII, The General Theory and After: Part I, Preparation, London and Basingstoke: Macmillan, 408-11.
6. Keynes, J.M. (1936), The General Theory of Employment, Interest and Money.
7. Keynes, J.M. (1973), The Collected Writings of John Maynard Keynes, The General Theory of Employment, Interest and Money, London and Basingstoke: Macmillan.
8. Schmitt, B. (1984), Inflation, chômage et malformations du capital, Albeuve: Castella. (This stands in opposition to the Austrian view that inflation is correlated directly with increases in the money supply.)
9. Rossi, S. (2007), Money and Payments in Theory and Practice, London and New York: Routledge.
10. Cencini, A. (2005), Macroeconomic Foundations of Macroeconomics, London and New York: Routledge.

Quantum finance

From Wikipedia, the free encyclopedia / Blogger Ref

Quantum finance is an interdisciplinary research field, applying theories and methods developed by quantum physicists and economists in order to solve problems in finance. It is a branch of econophysics.

Background on instrument pricing

Finance theory is heavily based on financial instrument pricing, such as stock option pricing. Many of the problems facing the finance community have no known analytical solution. As a result, numerical methods and computer simulations for solving these problems have proliferated. This research area is known as computational finance. Many computational finance problems have a high degree of computational complexity and are slow to converge to a solution on classical computers. In particular, when it comes to option pricing, there is additional complexity resulting from the need to respond to quickly changing markets. For example, in order to take advantage of inaccurately priced stock options, the computation must complete before the next change in the almost continuously changing stock market. As a result, the finance community is always looking for ways to overcome the resulting performance issues that arise when pricing options. This has led to research that applies alternative computing techniques to finance.

Background on quantum finance

One of these alternatives is quantum computing. Just as physics models have evolved from classical to quantum, so has computing. Quantum computers have been shown to outperform classical computers when it comes to simulating quantum mechanics,[1] as well as for several other algorithms, such as Shor's algorithm for factorization and Grover's algorithm for quantum search, making them an attractive area in which to research solutions to computational finance problems.

Quantum continuous model

Most quantum option pricing research typically focuses on the quantization of the classical Black-Scholes-Merton equation from the perspective of continuous equations like the Schrödinger equation. Haven [2] builds on the work of Chen [3] and others, but considers the market from the perspective of the Schrödinger equation. The key message in Haven's work is that the Black-Scholes-Merton equation is really a special case of the Schrödinger equation, where markets are assumed to be efficient. The Schrödinger-based equation that Haven derives has a parameter ħ (not to be confused with the reduced Planck constant) that represents the amount of arbitrage present in the market, resulting from a variety of sources including non-infinitely fast price changes, non-infinitely fast information dissemination, and unequal wealth among traders. Haven argues that by setting this value appropriately, a more accurate option price can be derived, because in reality markets are not truly efficient. This is one of the reasons why it is possible that a quantum option pricing model could be more accurate than a classical one.
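To make the correspondence concrete, here is the standard textbook change of variables (not necessarily Haven's own notation) that turns the Black-Scholes-Merton equation into a Schrödinger-type equation. Starting from

\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0,

the substitution x = \ln S, \tau = T - t gives the constant-coefficient equation

\frac{\partial V}{\partial \tau} = \frac{\sigma^2}{2}\frac{\partial^2 V}{\partial x^2} + \left(r - \frac{\sigma^2}{2}\right)\frac{\partial V}{\partial x} - rV,

which is a drift-diffusion (heat) equation, that is, a free-particle Schrödinger equation continued to imaginary time, with the volatility \sigma^2 playing the role that \hbar/m plays in quantum mechanics. In this sense the efficient market assumption corresponds to the free (potential-less) case, and Haven's arbitrage parameter deforms it.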
Baaquie [4] has published many papers on quantum finance and has even written a book [5] that brings many of them together. Core to Baaquie's research, and to that of others like Matacz,[6] are Feynman's path integrals. Baaquie applies path integrals to several exotic options and presents analytical results comparing his results to those of the Black-Scholes-Merton equation, showing that they are very similar. Piotrowski et al. [7] take a different approach, changing the Black-Scholes-Merton assumption about the behavior of the stock underlying the option. Instead of assuming it follows a Wiener-Bachelier process,[8] they assume that it follows an Ornstein-Uhlenbeck process.[9] With this new assumption in place, they derive a quantum finance model as well as a European call option formula. Other models, such as Hull-White [10] and Cox-Ingersoll-Ross,[11] have successfully used the same approach in the classical setting with interest rate derivatives. Khrennikov [12] builds on the work of Haven and others and further bolsters the idea that the market-efficiency assumption made by the Black-Scholes-Merton equation may not be appropriate. To support this idea, Khrennikov builds on a framework of contextual probabilities using agents as a way of overcoming criticism of applying quantum theory to finance. Accardi and Boukas [13] again quantize the Black-Scholes-Merton equation, but in this case they also consider the underlying stock to have both Brownian and Poisson processes.

Quantum binomial model

Chen published a paper in 2001 [3] in which he presents a quantum binomial options pricing model, abbreviated as the quantum binomial model. Metaphorically speaking, Chen's quantum binomial model is to existing quantum finance models what the Cox-Ross-Rubinstein classical binomial options pricing model was to the Black-Scholes-Merton model: a discretized and simpler version of the same result. These simplifications make the respective theories not only easier to analyze but also easier to implement on a computer.

Multi-step quantum binomial model

In the multi-step model, the quantum pricing formula is the equivalent of the Cox-Ross-Rubinstein binomial options pricing formula: assuming that stocks behave according to Maxwell-Boltzmann classical statistics, the quantum binomial model collapses to the classical binomial model. Quantum volatility is defined following Meyer.[14]

Bose-Einstein assumption

The Maxwell-Boltzmann statistics can be replaced by the quantum Bose-Einstein statistics, yielding an option price formula whose prices will, in certain circumstances, differ from those produced by the Cox-Ross-Rubinstein option pricing formula. This is because the stock is being treated like a quantum boson particle instead of a classical particle.
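For reference, the classical limit to which the quantum binomial model collapses is the standard Cox-Ross-Rubinstein price for a European call,

C_0 = e^{-rT}\sum_{k=0}^{N}\binom{N}{k} q^k (1-q)^{N-k}\,\max\!\left(S_0 u^k d^{N-k} - K,\ 0\right), \qquad q = \frac{e^{r\Delta t}-d}{u-d},

and a minimal Python sketch of that classical limit (function and parameter names are mine, not Chen's; the quantum model replaces the binomial weights, e.g. with Bose-Einstein statistics, which is where its prices depart from these) is:

# Classical Cox-Ross-Rubinstein binomial pricer for a European call option.
from math import comb, exp, sqrt

def crr_call(S0, K, r, sigma, T, N):
    dt = T / N
    u = exp(sigma * sqrt(dt))        # up factor per step
    d = 1 / u                        # down factor per step
    q = (exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    payoff = sum(comb(N, k) * q**k * (1 - q)**(N - k)
                 * max(S0 * u**k * d**(N - k) - K, 0.0)
                 for k in range(N + 1))
    return exp(-r * T) * payoff      # discount the expected payoff

print(crr_call(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, N=200))  # ~10.45

As N grows this converges to the Black-Scholes-Merton price, mirroring the way the discretized quantum model is said to stand to the continuous quantum models.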
References

1. Boghosian, B. (1998). "Simulating quantum mechanics on a quantum computer". Physica D.
2. Haven, Emmanuel (2002). "A discussion on embedding the Black-Scholes option pricing model in a quantum physics setting". Physica A 304: 507. doi:10.1016/S0378-4371(01)00568-4.
3. Chen, Zeqian (2004). "Quantum Theory for the Binomial Model in Finance Theory". Journal of Systems Science and Complexity. arXiv:quant-ph/0112156.
4. Baaquie, Belal E.; Coriano, Claudio; Srikant, Marakani (2002). "Quantum Mechanics, Path Integrals and Option Pricing: Reducing the Complexity of Finance". arXiv:cond-mat/0208191.
5. Baaquie, Belal (2004). Quantum Finance: Path Integrals and Hamiltonians for Options and Interest Rates. Cambridge University Press. p. 332. ISBN 978-0-521-84045-3.
6. Matacz, A. (2002). "Path dependent option pricing: the path integral partial averaging method". Journal of Computational Finance. arXiv:cond-mat/0005319.
7. Piotrowski, Edward W.; Schroeder, Małgorzata; Zambrzycka, Anna (2006). "Quantum extension of European option pricing based on the Ornstein-Uhlenbeck process". Physica A 368: 176. arXiv:quant-ph/0510121. doi:10.1016/j.physa.2005.12.021.
8. Hull, John (2006). Options, Futures, and Other Derivatives. Upper Saddle River, NJ: Pearson/Prentice Hall. ISBN 0-13-149908-4.
9. Uhlenbeck, G.E.; Ornstein, L.S. (1930). "On the Theory of the Brownian Motion". Physical Review 36: 823.
10. "The pricing of options on interest rate caps and floors using the Hull-White model". Advanced Strategies in Financial Risk Management (1990).
11. Cox, J.C.; Ingersoll, J.E.; Ross, S.A. (1985). "A theory of the term structure of interest rates". Econometrica 53: 385-407.
12. Khrennikov, Andrei (2007). "Classical and quantum randomness and the financial market". arXiv:0704.2865.
13. Accardi, Luigi; Boukas, Andreas. "The Quantum Black-Scholes Equation". arXiv:0706.1300.
14. Meyer, Keith (2009). Extending and simulating the quantum binomial options pricing model. The University of Manitoba.

Wednesday, 6 January 2016

A Green New Deal

The old Green New Deal / RS / Blogger Ref

From Wikipedia, the free encyclopedia

For policy proposals, see Green New Deal.

A Green New Deal is a report released on July 21, 2008 by the Green New Deal Group and published by the New Economics Foundation, which outlines a series of policy proposals to tackle global warming, the 2008 financial crisis, and peak oil.[1] The report calls for the re-regulation of finance and taxation, and for major government investment in renewable energy sources. Its full title is A Green New Deal: Joined-up policies to solve the triple crunch of the credit crisis, climate change and high oil prices.[2]

Main recommendations

• Government-led investment in energy efficiency and microgeneration, which would make 'every building a power station'.
• The creation of thousands of green jobs to enable low-carbon infrastructure reconstruction.
• A windfall tax on the profits of oil and gas companies – as has been established in Norway – so as to provide revenue for government spending on renewable energy and energy efficiency.
• Developing financial incentives for green investment and reduced energy usage.
• Changes to the UK's financial system, including the reduction of the Bank of England's interest rate, once again to support green investment.
• Large financial institutions – 'mega banks' – to be broken up into smaller units, and green banking.
• The re-regulation of international finance: ensuring that the financial sector does not dominate the rest of the economy. This would involve the re-introduction of capital controls.
• Increased official scrutiny of exotic financial products such as derivatives.
• The prevention of corporate tax evasion by demanding financial reporting and by clamping down on tax havens.[3][4][5]

Colin Hines explains the Green New Deal

The authors of A Green New Deal are:

References

1. Mark Lynas (July 17, 2008). "A Green New Deal". New Statesman.
2. New Economics Foundation (July 21, 2008).
3. David Teather (July 21, 2008). "Green New Deal group calls for break-up of banks". The Guardian.
4. Jeremy Lovell (July 21, 2008). "Climate report calls for green 'New Deal'". Reuters.
5. Riley Smith (July 31, 2008). "Group Suggests a Green New Deal in the UK to Fight Climate Change".
enthalpy change IITJEE SKMClasses.weebly.com one mole compound formed IITJEE SKMClasses.weebly.com constituent elements IITJEE SKMClasses.weebly.com standard states under standard conditions (standard) enthalpy change reaction enthalpy change IITJEE SKMClasses.weebly.com accompanies reaction molar quantities expressed chemical equation under standard conditions reactants skmclasses.weebly.com products being IITJEE SKMClasses.weebly.com standard states enthalpy cycle diagram showing alternative routes IITJEE SKMClasses.weebly.com reactants products IITJEE SKMClasses.weebly.com allows indirect determination IITJEE SKMClasses.weebly.com enthalpy change IITJEE SKMClasses.weebly.com other known enthalpy changes using Hess’ law enthalpy profile diagram skmclasses.weebly.com reaction IITJEE SKMClasses.weebly.com compare enthalpy reactants IITJEE SKMClasses.weebly.com enthalpy products esterification reaction IITJEE SKMClasses.weebly.com alcohol IITJEE SKMClasses.weebly.com carboxylic acid IITJEE SKMClasses.weebly.com produce ester skmclasses.weebly.com water exothermic reaction IITJEE SKMClasses.weebly.com enthalpy products smaller enthalpy reactants, resulting heat loss IITJEE SKMClasses.weebly.com surroundings fractional distillation separation components liquid mixture skmclassesfractions IITJEE SKMClasses.weebly.com differ boiling point skmclasses.weebly.com hence chemical composition IITJEE SKMClasses.weebly.com distillation typically using fractionating column fragmentation process mass spectrometry IITJEE SKMClasses.weebly.com causes positive ion split skmclasses pieces one positive fragment ion functional group part organic molecule responsible skmclasses.weebly.com chemical reactions general formula simplest algebraic formula member homologous series. 
skmclasses.weebly.com example general formula alkanes giant covalent lattice dimensional structure atoms, bonded together strong covalent bonds giant ionic lattice three dimensional structure oppositely charged ions, bonded together strong ionic bonds giant metallic lattice three dimensional structure positive ions skmclasses.weebly.com delocalised electrons, bonded together strong metallic bonds greenhouse effect process IITJEE SKMClasses.weebly.com absorption subsequent emission infrared radiation atmospheric gases warms lower atmosphere planet’s surface group vertical column Periodic Table Elements group IITJEE SKMClasses.weebly.com similar chemical properties skmclasses.weebly.com atoms skmclasses.weebly.comnumber outer shell electrons Hess law reaction IITJEE SKMClasses.weebly.com one route skmclasses.weebly.com initial final conditions IITJEE SKMClasses.weebly.com skmclasses.weebly.com total enthalpy change skmclasses.weebly.com skmclasses.weebly.com route heterogeneous catalysis reaction IITJEE SKMClasses.weebly.com catalyst IITJEE skmclasses.weebly.com different physical state reactants; frequently, reactants IITJEE SKMClasses.weebly.com gases whilst catalyst solid heterolytic fission breaking covalent bond IITJEE SKMClasses.weebly.com both bonded electrons going IITJEE SKMClasses.weebly.com one atoms, forming cation (+ ion) skmclasses.weebly.com IITJEE SKMClasses.weebly.com anion ion homogeneous catalysis reaction catalyst skmclasses.weebly.com reactants physical state, IITJEE SKMClasses.weebly.com frequently aqueous gaseous state homologous series series organic compounds IITJEE SKMClasses.weebly.com skmclasses.weebly.com functional group, IITJEE SKMClasses.weebly.com successive member differing homolytic fission breaking covalent bond IITJEE SKMClasses.weebly.com one bonded electrons going IITJEE SKMClasses.weebly.com atom, forming two radicals hydrated Crystalline skmclasses.weebly.com containing water molecules hydrocarbon compound hydrogen skmclasses.weebly.com carbon hydrogen bond strong dipole attraction IITJEE SKMClasses.weebly.com electron deficient hydrogen atom (O H on different molecule hydrolysis reaction IITJEE SKMClasses.weebly.com water aqueous hydroxide ions IITJEE SKMClasses.weebly.com breaks chemical compound skmclasses two compounds initiation first step radical substitution IITJEE SKMClasses.weebly.com free radicals generated ultraviolet radiation intermolecular force attractive force IITJEE SKMClasses.weebly.com neighbouring molecules Intermolecular forces van der Waals’ forces induced dipole ces permanent dipole forces hydrogen bonds ion positively negatively charge atom covalently bonded group atoms molecular ion ionic bonding electrostatic attraction IITJEE SKMClasses.weebly.com oppositely charged ions first) ionisation energy IITJEE SKMClasses.weebly.com remove one electron IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com ion one mole gaseous 1+ ions IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com one mole gaseous 2+ ions second) ionisation energy IITJEE SKMClasses.weebly.com remove one electron IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com ion one mole gaseous 1+ ions IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com one mole gaseous 2+ ions successive ionisation measure energy IITJEE SKMClasses.weebly.com remove IITJEE SKMClasses.weebly.com electron Chemistry energy second ionisation energy energy IITJEE SKMClasses.weebly.com one electron IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com ion one mole 
gaseous 1+ ions IITJEE SKMClasses.weebly.com one mole gaseous 2+ ions isotopes Atoms skmclasses.weebly.com element IITJEE SKMClasses.weebly.com different numbers neutrons different masses le Chatelier’s principle system dynamic equilibrium subjected change position equilibrium will shift minimise change limiting reagent substance chemical reaction IITJEE SKMClasses.weebly.com runs out first lone pair outer shell pair electrons IITJEE SKMClasses.weebly.com involved chemical bonding mass nucleon number particles protons aneutrons) nucleus mechanism sequence steps showing path taken electrons reaction metallic bond electrostatic attraction IITJEE SKMClasses.weebly.com positive metal ions adelocalised electrons molar mass substance units molar mass IITJEE SKMClasses.weebly.com molar volume IITJEE SKMClasses.weebly.com mole gas. units molar volume IITJEE SKMClasses.weebly.com dm room temperature skmclasses.weebly.com pressure molar volume approximately 24.0 substance containing IITJEE skmclasses.weebly.com many particles thereIITJEE SKMClasses.weebly.com carbon atoms exactly 12 g carbon isotope molecular formula number atoms IITJEE SKMClasses.weebly.com element molecule molecular ion M positive ion formed mass spectrometry IITJEE SKMClasses.weebly.com molecule loses electron molecule small group atoms held together covalent bonds monomer small molecule IITJEE SKMClasses.weebly.com combines IITJEE SKMClasses.weebly.com monomers polymer nomenclature system naming compounds nucleophile atom group atoms attracted electron deficient centre atom donates pair electrons covalent bond nucleophilic substitution type substitution reaction IITJEE SKMClasses.weebly.com nucleophile attracted electron deficient centre atom, IITJEE SKMClasses.weebly.com donates pair electrons IITJEE SKMClasses.weebly.com new covalent bond oxidation Loss electrons IITJEE SKMClasses.weebly.com increase oxidation number oxidation number measure number electrons IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com atom uses bond IITJEE SKMClasses.weebly.com atoms another element. 
Oxidation numbers IITJEE SKMClasses.weebly.com derive d rules oxidising agent reagent IITJEE SKMClasses.weebly.com oxidises (takes electrons from) another species percentage yield period horizontal row elements Periodic Table Elements show trends properties across period periodicity regular periodic variation properties elements IITJEE SKMClasses.weebly.com atomic number position Periodic Table permanent dipole small charge difference across bond resulting IITJEE SKMClasses.weebly.com difference electronegativities bonded atoms permanent dipole dipole force attractive force IITJEE SKMClasses.weebly.com permanent dipoles neighbouring polar molecules pi bond (p bond reactive part double bond formed above skmclasses.weebly.com below plane bonded atoms sideways overlap p orbitalspolar covalent bond bond IITJEE SKMClasses.weebly.com permanent dipole polar molecule molecule IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com overall dipole skmclasses account dipoles across bonds polymer long molecular chain built monomer units precipitation reaction formation solid solution during chemical reaction Precipitates IITJEE SKMClasses.weebly.com formed IITJEE SKMClasses.weebly.com two aqueous solutions IITJEE SKMClasses.weebly.com mixed together principal quantum number n number representing relative overall energy orbital IITJEE SKMClasses.weebly.com increases distance nucleus sets orbitals IITJEE SKMClasses.weebly.com value IITJEE skmclasses.weebly.com electron shells energy levels propagation two repeated radical substitution IITJEE SKMClasses.weebly.com build up products chain reaction radical species unpaired electron rate reaction change concentration reactant product redox reaction reaction IITJEE SKMClasses.weebly.com reduction skmclasses.weebly.com oxidation take IITJEE SKMClasses.weebly.com reducing agent reagent IITJEE SKMClasses.weebly.com reduces (adds electron to) species reduction Gain electrons decrease oxidation number yield actual amount mol product theoretical amount mol product Chemistry reflux continual boiling skmclasses.weebly.com condensing reaction mixture ensure IITJEE SKMClasses.weebly.com reaction IITJEE SKMClasses.weebly.com without contents flask boiling dry relative atomic mass weighted mean mass atom element compared one twelfth mass IITJEE SKMClasses.weebly.com atom carbon relative formula mass weighted mean mass formula unit compared IITJEE SKMClasses.weebly.com one twelfth mass atom carbon relative isotopic mass mass atom isotope compared IITJEE SKMClasses.weebly.com one twelfth mass atom carbon relative molecular mass weighted mean mass molecule compared twelfth mass atom carbon 12 repeat unit specific arrangement atom s IITJEE SKMClasses.weebly.com occurs structure over over again. 
Repeat units IITJEE SKMClasses.weebly.com included brackets outside IITJEE SKMClasses.weebly.com symbol n Salt chemical compound formed IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com acid IITJEE SKMClasses.weebly.com H+ ion acid IITJEE skmclasses.weebly.com been replaced metal ion another positive ion such IITJEE skmclasses.weebly.com ammonium ion, NH saturated hydrocarbon IITJEE SKMClasses.weebly.com single bonds only shell group atomic orbitals IITJEE SKMClasses.weebly.com skmclasses.weebly.com principal quantum number known main energy level simple molecular lattice three dimensional structure molecules, bonded together weak intermolecular forces skeletal formula simplified organic formula, IITJEE SKMClasses.weebly.com hydrogen atoms removed alkyl chains, leaving carbon skeleton skmclasses.weebly.com associated functional groups species particle IITJEE SKMClasses.weebly.com part chemical reaction specific heat capacity, c energy IITJEE SKMClasses.weebly.com raise temperature 1 g substance 1 C spectator ions Ions present part chemical reaction standard conditions pressure 100 kPa 1 atmosphere stated temperature usually 298 K (25 °C), skmclasses.weebly.com concentration 1 mol dm reactions aqueous solutions standard enthalpies enthalpystandard solution solution known concentration Standard solutions normally IITJEE SKMClasses.weebly.com titrations IITJEE SKMClasses.weebly.com determine unknown information another substance Chemistry standard state physical state substance under standard conditions 100 kPa 1 atmosphere) skmclasses.weebly.com 298 K 25 C stereoisomers Compounds skmclasses.weebly.com structural formula IITJEE SKMClasses.weebly.com different arrangement atoms space stoichiometry molar relationship IITJEE SKMClasses.weebly.com relative quantities substances part reaction stratosphere second layer Earth’s atmosphere, containing ‘ozone layer’, about 10 km IITJEE SKMClasses.weebly.com 50 km above Earth’s surface structural formula formula showing minimal detail skmclasses.weebly.com arrangement atoms molecule structural isomers Molecules IITJEE SKMClasses.weebly.com skmclasses.weebly.com molecular formula different structural arrangements atoms subshell group skmclasses.weebly.com type atomic orbitals s, p, d f within shell substitution reaction reaction IITJEE SKMClasses.weebly.com atom group atoms replaced different atom group atoms termination step end radical substitution IITJEE SKMClasses.weebly.com two radicals combine IITJEE SKMClasses.weebly.com molecule thermal decomposition breaking chemical substance IITJEE SKMClasses.weebly.com heat skmclasses least two chemical substances troposphere lowest layer Earth’s atmosphere extending Earth’s surface about 7 km (above poles) about 20 km above tropics unsaturated hydrocarbon hydrocarbon containing carbon carbon multiple bonds van der Waals’ forces Very weak attractive forces IITJEE SKMClasses.weebly.com induced dipoles neighbouring molecules volatility ease IITJEE SKMClasses.weebly.com liquid turns skmclasses gas Volatility increases boiling point decreases water crystallisation Water molecules IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com essential part crystalline structure absolute zero – theoretical condition concerning system at zero Kelvin IITJEE SKMClasses.weebly.com system does IITJEE SKMClasses.weebly.com emit absorb energy (all atoms rest accuracy – how close value IITJEE SKMClasses.weebly.com actual true value IITJEE SKMClasses.weebly.com see precision acid compound 
that, IITJEE SKMClasses.weebly.com dissolved water pH less 7.0 compound IITJEE SKMClasses.weebly.com donates hydrogen ion acid anhydride compound IITJEE SKMClasses.weebly.com two acyl groups boundIITJEE SKMClasses.weebly.com single oxygen atom acid dissociation constant – IITJEE SKMClasses.weebly.com equilibrium constant skmclasses.weebly.com dissociation weak acid actinides – fifteen chemical elements IITJEE SKMClasses.weebly.com actinium (89) skmclasses.weebly.com lawrencium (103 activated complex – structure IITJEE SKMClasses.weebly.com forms because collisionIITJEE SKMClasses.weebly.com molecules new bondsvIITJEE SKMClasses.weebly.com formed activation energy – minimum energy IITJEE SKMClasses.weebly.com must be inputIITJEE SKMClasses.weebly.com chemical system activity series actual yield addition reaction – within organic chemistry, IITJEE SKMClasses.weebly.com two IITJEE SKMClasses.weebly.com molecules combineIITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com larger aeration mixing air skmclasses liquid solid alkali metals metals Group 1 on periodic table alkaline earth metals – metals Group 2 on periodic table allomer substance IITJEE SKMClasses.weebly.com hIITJEE skmclasses.weebly.comdifferent composition another skmclasses.weebly.comcrystalline structure allotropy elements IITJEE SKMClasses.weebly.com different structures skmclasses.weebly.com therefore different forms IITJEE skmclasses.weebly.com Carbon diamonds, graphite, skmclasses.weebly.com fullerene anion negatively charge ions anode – positive side dry cell battery cell aromaticity – chemical property conjugated rings IITJEE SKMClasses.weebly.com results unusual stability. See IITJEE SKMClasses.weebly.com benzene atom – chemical element IITJEE SKMClasses.weebly.com smallest form, skmclasses.weebly.com made up neutrons skmclasses.weebly.comprotons within nucleus skmclasses.weebly.comelectrons circling nucleus atomic mass unit atomic number number representing IITJEE SKMClasses.weebly.com element IITJEE SKMClasses.weebly.com corresponds IITJEE SKMClasses.weebly.com number protons within nucleus atomic orbital region IITJEE SKMClasses.weebly.com electron atom may be found atomic radius average atomic mass Avogadro’s law Avogadro’s number number particles mole substance ( 6.02×10^23 ) barometer deviceIITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com measure pressure atmosphere base substance IITJEE SKMClasses.weebly.com accepts proton skmclasses.weebly.com high pH; common example sodium hydroxide (NaOH biochemistry chemistry organisms boiling phase transition liquid vaporizing boiling point temperature IITJEE SKMClasses.weebly.com substance startsIITJEE SKMClasses.weebly.com boil boiling-point elevation process IITJEE SKMClasses.weebly.com boiling point elevated adding substance bond – attraction skmclasses.weebly.com repulsion IITJEE SKMClasses.weebly.com atoms skmclasses.weebly.com molecules IITJEE SKMClasses.weebly.com cornerstone Boyle’s law Brønsted-Lowrey acid chemical species IITJEE SKMClasses.weebly.com donates proton Brønsted–Lowry acid–base reaction Brønsted-Lowrey base – chemical species IITJEE SKMClasses.weebly.com accepts proton buffered solution – IITJEE SKMClasses.weebly.com aqueous solution consisting weak acid skmclasses.weebly.comits conjugate base weak base skmclasses.weebly.comits conjugate acid IITJEE SKMClasses.weebly.com resists changes pH IITJEE SKMClasses.weebly.com strong acids basesIITJEE SKMClasses.weebly.com added burette (IITJEE SKMClasses.weebly.com buret glasswareIITJEE 
SKMClasses.weebly.com dispense specific amounts liquid IITJEE SKMClasses.weebly.com precision necessary titration skmclasses.weebly.com resource dependent reactions example combustion catalyst chemical compoundIITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com change rate IITJEE SKMClasses.weebly.com speed up slow down reaction,IITJEE SKMClasses.weebly.com regenerated at end reaction cation – positively charged ion centrifuge equipmentIITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com separate substances based on density rotating tubes around centred axis cell potential force galvanic cell IITJEE SKMClasses.weebly.com pulls electron through reducing agentIITJEE SKMClasses.weebly.com oxidizing agent chemical Law certain rules IITJEE SKMClasses.weebly.com pertain IITJEE SKMClasses.weebly.com laws nature skmclasses.weebly.comchemistry – examples chemical reaction – change one IITJEE SKMClasses.weebly.com substances skmclassesanother multiple substances colloid mixture evenly dispersed substances such IITJEE skmclasses.weebly.comm milks combustion IITJEE SKMClasses.weebly.com exothermic reaction IITJEE SKMClasses.weebly.com oxidant skmclasses.weebly.comfuel IITJEE SKMClasses.weebly.com heat skmclasses.weebly.comoften light compound – substance IITJEE SKMClasses.weebly.com made up two IITJEE SKMClasses.weebly.com chemically bonded elements condensation phase changeIITJEE SKMClasses.weebly.com gasIITJEE SKMClasses.weebly.com liquid conductor material IITJEE SKMClasses.weebly.com allows electric flow IITJEE SKMClasses.weebly.com freely covalent bond – chemical bond IITJEE SKMClasses.weebly.com involves sharing electrons crystal solid IITJEE SKMClasses.weebly.com packed IITJEE SKMClasses.weebly.com ions, molecules atoms IITJEE SKMClasses.weebly.com orderly fashion cuvette glasswareIITJEE SKMClasses.weebly.com spectroscopic experiments. 
usually made plastic glass quartz skmclasses.weebly.comshould be IITJEE possible deionization removal ions, skmclasses.weebly.com water’s case mineral ions such IITJEE skmclasses.weebly.comsodium, iron skmclasses.weebly.comcalcium deliquescence substances IITJEE SKMClasses.weebly.com absorb water IITJEE SKMClasses.weebly.com atmosphereIITJEE SKMClasses.weebly.com liquid solutions deposition – settling particles within solution mixture dipole electric magnetic separation charge dipole moment – polarity polar covalent bond dissolution solvation – spread ions monosacharide double bond sharing two pairs electradodes Microcentrifuge Eppendorf tube IITJEE SKMClasses.weebly.com Coomassie Blue solution earth metal – see alkaline earth metal electrolyte solution IITJEE SKMClasses.weebly.com conducts certain amount current skmclasses.weebly.com split categorically IITJEE skmclasses.weebly.com weak skmclasses.weebly.comstrong electrolytes electrochemical cell using chemical reaction’s current electromotive force made electromagnetic radiation type wave IITJEE SKMClasses.weebly.com through vacuums IITJEE skmclasses.weebly.comwell IITJEE skmclasses.weebly.commaterial skmclasses.weebly.comclassified IITJEE skmclasses.weebly.com self-propagating wave electromagnetism fields IITJEE SKMClasses.weebly.com electric charge skmclasses.weebly.comelectric properties IITJEE SKMClasses.weebly.com change way IITJEE SKMClasses.weebly.com particles move skmclasses.weebly.com interact electromotive force device IITJEE SKMClasses.weebly.com gains energy IITJEE skmclasses.weebly.comelectric charges pass through electron – subatomic particle IITJEE SKMClasses.weebly.com net charge IITJEE SKMClasses.weebly.com negative electron shells – IITJEE SKMClasses.weebly.com orbital around atom’s nucleus fixed number electrons usually two eight electric charge measured property (coulombs) IITJEE SKMClasses.weebly.com determine electromagnetic interaction element IITJEE SKMClasses.weebly.com atom IITJEE SKMClasses.weebly.com defined IITJEE SKMClasses.weebly.com atomic number energy – system’s abilityIITJEE SKMClasses.weebly.com do work enthalpy – measure total energy thermodynamic system (usually symbolized IITJEE skmclasses.weebly.comH entropy – amount energy IITJEE SKMClasses.weebly.com available skmclasses.weebly.com work closed thermodynamic system usually symbolized IITJEE skmclasses.weebly.com S enzyme – protein IITJEE SKMClasses.weebly.com speeds up catalyses reaction Empirical Formula – IITJEE SKMClasses.weebly.com called simplest formula gives simplest whole -number ratio atoms IITJEE SKMClasses.weebly.com element present compound eppendorf tube – generalized skmclasses.weebly.comtrademarked term skmclasses.weebly.com type tube; see microcentrifuge freezing – phase transitionIITJEE SKMClasses.weebly.com liquidIITJEE SKMClasses.weebly.com solid Faraday constant unit electrical charge widelyIITJEE SKMClasses.weebly.com electrochemistry skmclasses.weebly.comequalIITJEE SKMClasses.weebly.com ~ 96,500 coulombs represents 1 mol electrons, Avogadro number electrons: 6.022 × 1023 electrons. 
F = 96 485.339 9(24) C/mol Faraday’s law electrolysis two part law IITJEE SKMClasses.weebly.com Michael Faraday published about electrolysis mass substance altered at IITJEE SKMClasses.weebly.com electrode during electrolysis directly proportionalIITJEE SKMClasses.weebly.com quantity electricity transferred at IITJEE SKMClasses.weebly.com electrode mass IITJEE SKMClasses.weebly.com elemental material altered at IITJEE SKMClasses.weebly.com electrode directly proportionalIITJEE SKMClasses.weebly.com element’s equivalent weight frequency number cyclesIITJEE SKMClasses.weebly.com unit time. Unit: 1 hertz = 1 cycleIITJEE SKMClasses.weebly.com 1 second galvanic cell battery made up electrochemical IITJEE SKMClasses.weebly.com two different metals connected salt bridge gas particles container IITJEE SKMClasses.weebly.com no definite shape volume geochemistry – chemistry skmclasses.weebly.comchemical composition Earth Gibbs energy – value IITJEE SKMClasses.weebly.com indicates spontaneity reaction usually symbolized G Cavalier India, Kalyan Nagar halogens Group 7 Periodic Table skmclasses.weebly.comare non-metals heat energy transferredIITJEE SKMClasses.weebly.com one systemIITJEE SKMClasses.weebly.com another thermal interaction jodium – Latin name halogen element iodine Joule SI I.M.S. Learning Resources Pvt. Ltd., Jaya Nagar 4th Block unit energy, defined IITJEE skmclasses.weebly.com newton-meter indicator special compound addedIITJEE SKMClasses.weebly.com solution IITJEE SKMClasses.weebly.com changes color depending on acidity solution; different indicators Giraffe Coaching, Cunningham Road different colors effective pH ranges inorganic compound – compounds IITJEE SKMClasses.weebly.com contain carbon IITJEE SKMClasses.weebly.com exceptions main article inorganic chemistry part chemistry concerned IITJEE SKMClasses.weebly.com inorganic compounds International Union Pure skmclasses.weebly.comApplied Chemistry IUPAC insulator material IITJEE SKMClasses.weebly.com resists flow electric current ion molecule gained lost one IITJEE SKMClasses.weebly.com electron ionic bond electrostatic attractionIITJEE SKMClasses.weebly.com oppositely charged ions ionization breaking up compound skmclassesseparate ions Kinetics sub-field chemistry specializing reaction rates Kinetic energy energy IITJEE SKMClasses.weebly.com object IITJEE SKMClasses.weebly.com motion lanthanides Elements 57 through 71 lattice Unique arrangement atoms molecules crystalline liquid solid Laws thermodynamics liquid state matter IITJEE SKMClasses.weebly.com shape container light Portion electromagnetic spectrum IITJEE SKMClasses.weebly.com visibleIITJEE SKMClasses.weebly.com naked eye. 
IITJEE SKMClasses.weebly.com called “visible light London dispersion forces weak intermolecular force Law Motion object motion stay motion IITJEE SKMClasses.weebly.com object rest stays rest unless IITJEE SKMClasses.weebly.com unbalanced force acts molecule IITJEE SKMClasses.weebly.com one key components within chemistry Metal Chemical element IITJEE SKMClasses.weebly.com good conductor both electricity skmclasses.weebly.comheat skmclasses.weebly.comforms cations skmclasses.weebly.comionic bonds IITJEE SKMClasses.weebly.com non-metals melting phase changeIITJEE SKMClasses.weebly.com solidIITJEE SKMClasses.weebly.com liquid metalloid substance possessing both properties metals skmclasses.weebly.comnon-metals methylene blue heterocyclic aromatic chemical compound IITJEE SKMClasses.weebly.com molecular formula C16H18N3SCl microcentrifuge plastic container IITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com store small amounts liquid mole – abbreviated mol measurement IITJEE SKMClasses.weebly.com amount substance single mole contains approximately 6.022×1023 units entities mole water contains 6.022×1023 H2O molecules molecule chemically I Beacons Academy, Jaya Nagar 4th Block bonded number atoms IITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com electrically neutral molecular orbital region mIITJEE SKMClasses.weebly.com electron found molecule opposed atom neat Alchemy India Services Pvt. Ltd. Residency Road conditions IITJEE SKMClasses.weebly.com liquid reagent gas performed IITJEE SKMClasses.weebly.com no added solvent cosolvent neutron neutral unit subatomic particle Institute Engineering Studies, Malleswaram net charge neutrino particle IITJEE SKMClasses.weebly.com travel speeds close speed light skmclasses.weebly.comare created IITJEE skmclasses.weebly.com result radioactive decay Brainstorm Consulting Pvt. Ltd., Jaya Nagar 4th Block nucleus centre Ace Creative Learning, Basavanagudi Anegundi Coaching Academy, Malleswaram atom made neutrons skmclasses.weebly.comprotons, IITJEE SKMClasses.weebly.com net positive charge noble gases group 18 elements, those whose outer electron shell filled non-metal Career Launcher, Jaya Nagar 3rd Block element IITJEE SKMClasses.weebly.com metallic nuclear pertainingIITJEE SKMClasses.weebly.com atomic Gate Indian Institute Tutorials J.P. 
Nagar 2nd Phase nucleus nuclear magnetic resonance spectroscopy technique IITJEE SKMClasses.weebly.com exploits magnetic properties certain nuclei, useful skmclasses.weebly.comidentifying unknown compounds number density measure concentration countable objects atoms molecules space; number volume orbital may referIITJEE SKMClasses.weebly.com either IITJEE SKMClasses.weebly.com atomic orbital molecular orbital organic compound compounds IITJEE SKMClasses.weebly.com contain carbon organic chemistry part chemistry concerned IITJEE SKMClasses.weebly.com organic compounds pH measure acidity basicity solution plasma state matter similar gas certain portion particlesIITJEE SKMClasses.weebly.com ionized other metal metallic elements p-block characterized having combination relatively low melting points less 950 K) skmclasses.weebly.comrelatively high electronegativity values IITJEE SKMClasses.weebly.com 1.6 revised Pauling potential energy stored body system due position force field due toIITJEE SKMClasses.weebly.com configuration precipitate formation solid solution inside another solid during chemical reaction diffusion solid precision close results multiple experimental trials IITJEE SKMClasses.weebly.com accuracy photon carrier electromagnetic radiation wavelength IITJEE skmclasses.weebly.comgamma rays skmclasses.weebly.comradio waves proton positive unit subatomic particle IITJEE SKMClasses.weebly.com positive charge protonation addition proton (H+) atom, molecule ion Quantum mechanics study how atoms, molecules, subatomic particles behave Career Edge India, Hosur Road structured quarks – elementary Eduplot Learning Solutions in Malleswaram particle skmclasses.weebly.com fundamental constituent matter quanta minimum amount bundle energy radiation energy IITJEE SKMClasses.weebly.com waves subatomic particles IITJEE SKMClasses.weebly.com change IITJEE SKMClasses.weebly.com high energyIITJEE SKMClasses.weebly.com low energy states radioactive decay – process unstable atomic nucleus losing energy emitting radiation Raoult’s law reactivity series reagent s-block elements – Group 1 skmclasses.weebly.com2 elements (alkali skmclasses.weebly.comalkaline metals), IITJEE SKMClasses.weebly.com includes Hydrogen skmclasses.weebly.comHelium salts – ionic compounds composed anions skmclasses.weebly.comcations salt bridge – devicesIITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com connection reduction IITJEE SKMClasses.weebly.com oxidation half-cells IITJEE SKMClasses.weebly.com electrochemical cell saline solution – general term skmclasses.weebly.comNaCl water Schrödinger equation – quantum state equation IITJEE SKMClasses.weebly.com represents behaviour GoodIITJEE SKMClasses.weebly.com Excellence, BTM 1st Stage election around IITJEE SKMClasses.weebly.com atom semiconductor IITJEE SKMClasses.weebly.com electrically conductive solid IITJEE SKMClasses.weebly.com conductor insulator single bond – sharing one pair electrons sol suspension solid particles liquid Artificial examples include sol-gels solid – one states matter, IITJEE SKMClasses.weebly.com moleculesIITJEE SKMClasses.weebly.com packed close together,IITJEE SKMClasses.weebly.com resistance movement/deformation skmclasses.weebly.comvolume change Young’s solute part solution IITJEE SKMClasses.weebly.com mixed skmclassessolvent Gate Indian Institute Tutorials in J.P. Nagar 2nd Phase NaCl saline water solution homogeneous mixture made up multiple substances. 
solutes skmclasses.weebly.comsolvents solvent part solution dissolves solute H2O saline water spectroscopy study radiation skmclasses.weebly.commatter, such IITJEE skmclasses.weebly.com X-ray absorption skmclasses.weebly.comemission spectroscopy speed light speed anything IITJEE SKMClasses.weebly.com zero rest mass (Energyrest = mc² IITJEE SKMClasses.weebly.com m mass skmclasses.weebly.comc speed G.C. Rao Academy in Bull Temple Road light Standard conditions skmclasses.weebly.com temperature skmclasses.weebly.compressure SATP standardisationIITJEE SKMClasses.weebly.com order compare experimental results (25 °C skmclasses.weebly.com 100.000 kPa state matter matter having homogeneous, macroscopic phase; gas, plasma Ria Institute Technology in Marathahalli liquid solidIITJEE SKM Classes.weebly.com well known increasing concentration sublimation – phase transitionIITJEE SKMClasses.weebly.com solidIITJEE SKMClasses.weebly.com limewater fuel gas subatomic particles – particles IITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com smaller atom; examplesIITJEE SKMClasses.weebly.com protons neutrons skmclasses.weebly.comelectrons substance – material IITJEE SKMClasses.weebly.com definite chemical composition Phase diagram showing triple skmclasses.weebly.comcritical points substance talc mineral representing one on Mohs Scale skmclasses.weebly.comcomposed hydrated magnesium silicate IITJEE SKMClasses.weebly.com chemical formula H2Mg3(SiO3)4 Mg3Si4O10(OH)2 temperature – average energy microscopic motions particles theoretical yield yield theory model describing nature phenomenon thermal conductivity property material Communication skmclasses.weebly.com Careers R.M.V. Extn. 2nd Stage conduct heat (often noted IITJEE skmclasses.weebly.com k thermochemistry study absorption release heat within chemical reaction thermodynamics study effects changing temperature, volume pressure work, heat, skmclasses.weebly.com energy on macroscopic scale I-Bas Consulting Pvt. 
Ltd., Ulsoor thermodynamic stability IITJEE SKMClasses.weebly.com system IITJEE SKMClasses.weebly.com lowest energy state IITJEE SKMClasses.weebly.com environment equilibrium thermometer device measures average energy system titration – process titrating one solution IITJEE SKMClasses.weebly.com another Cavalier India, Kalyan Nagar called volumetric analysis torr unit measure pressure (1 Torr equivalentIITJEE SKMClasses.weebly.com 133.322 Pa 1.3158×10-3 atm transition metal elements IITJEE SKMClasses.weebly.com incomplete d sub-shells IITJEE SKMClasses.weebly.com may referredIITJEE SKMClasses.weebly.com IITJEE skmclasses.weebly.com d-block elements transuranic element – element IITJEE SKMClasses.weebly.com atomic number greater 92; none transuranic elementsIITJEE SKMClasses.weebly.com stable triple bond – sharing three pairs electrons within covalent bond example N2 triple point temperature skmclasses.weebly.compressure three phasesIITJEE SKMClasses.weebly.com skmclasses.weebly.com Water special National IAS Academy, Raja Rajeshwari Nagar phase diagram Tyndall effect effect light scattering colloidal mixture IITJEE SKMClasses.weebly.com one substance dispersed evenly through another suspended particles UN number four digit codeIITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com note hazardous skmclasses.weebly.com flammable substances uncertainty characteristic IITJEE SKMClasses.weebly.com measurement IITJEE SKMClasses.weebly.com involves estimation any amount cannot be exactly reproducible Uncertainty principle knowing Shaping Lives Education Pvt. Ltd., Rajaji Nagar location particle makes momentum uncertain knowing momentum particle makes location uncertain unit cell smallest repeating unit lattice unit factor statements Manhattan Review, Jaya Nagar convertingIITJEE SKMClasses.weebly.com units universal ideal gas constant proportionality constant ideal gas law (0.08206 L·atm/(K·mol)) valence electron outermost electrons IITJEE SKMClasses.weebly.com atom IITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com located electron shells Valence bond theory theory explaining chemical bonding within molecules discussing valencies number chemical bonds formed IITJEE SKMClasses.weebly.com atom van der Waals force – one forces (attraction/repulsion)IITJEE SKMClasses.weebly.com molecules van ‘t Hoff factor – ratio moles particles solutionIITJEE SKMClasses.weebly.com moles solute dissolved vapor IITJEE SKMClasses.weebly.com substance below critical temperature gas phase vapour pressure – pressure vapour over liquid at equilibrium vaporization phase changeIITJEE SKMClasses.weebly.com liquidIITJEE SKMClasses.weebly.com gas viscosity – resistance liquidIITJEE SKMClasses.weebly.com flow (oil) volt one joule workIITJEE SKMClasses.weebly.com coulomb unit electrical potential transferred voltmeter – instrument IITJEE SKMClasses.weebly.com measures cell potential volumetric analysis Endeavor, Jaya Nagar 5th Block titration water – H2O – chemical substance, major part cells skmclasses.weebly.com Earth, skmclasses.weebly.com covalently bonded wave function function describing electron’s position three-dimensional space worknamount force over distance skmclasses.weebly.com terms joules energy X-ray ionizing, electromagnetic radiation gamma skmclasses.weebly.comUV rays X-ray diffraction – method skmclasses.weebly.com establishing structures crystalline solids using singe wavelength X-rays skmclasses.weebly.com looking diffraction pattern X-ray photoelectron spectroscopy spectroscopic technique IITJEE 
SKMClasses.weebly.com measure composition material yield amount product produced during chemical reaction zone melting way remove impuritiesIITJEE SKMClasses.weebly.com IITJEE SKMClasses.weebly.com element melting skmclasses.weebly.com slowly travel IITJEE SKMClasses.weebly.com ingot (cast) Zwitterion chemical compound whose net charge zero skmclasses.weebly.comhence electrically neutral IITJEE SKMClasses.weebly.com positive skmclasses.weebly.com negative charges due formal charge, owing partial charges IITJEE SKMClasses.weebly.com constituent atoms acetals acylation addition aggregation alcohols aldehydes aldol reaction alkaloids alkanes alkenation alkene complexes alkenes alkyl halides alkylation alkyne complexes alkynes allenes allylation allyl complexes aluminum amides amination amines amino acids amino alcohols amino aldehydes annulation annulenes antibiotics antifungal agents antisense agents antitumor agents antiviral agents arene complexes arenes arylation arynes asymmetric catalysis asymmetric synthesis atropisomerism autocatalysis azapeptides azasugars azides azo compounds barium benzylation betaines biaryls bicyclic compounds biomimetic synthesis bioorganic biosynthesis boron bromine calixarenes carbanions carbene complexes carbenes carbenoids carbocation carbocycles carbohydrates carbonyl complexes carbonylation carboxylic acids catalysis catenanes cations cavitands chelates chemoselectivity chiral auxiliaries chiral pool chiral resolution chirality chromium chromophores cleavage clusters combinatorial complexes condensation conjugation copper coupling cross-coupling crown compounds cryptands cuprates cyanines cyanohydrins cyclization cycloaddition cyclodextrines cyclopentadienes cyclophanes dehydrogenation dendrimers deoxygenation desulfurization diastereoselectivity diazo compounds diene complexes Diels-Alder reaction dihydroxylation dimerization diols dioxiranes DNA domino reaction drugs electrocyclic reactions electron transfer electrophilic addition electrophilic aromatic substitution elimination enantiomeric resolution enantioselectivity ene reaction enols enones enynes enzymes epoxidation epoxides esterification esters ethers fluorine free radicals fullerenes furans fused-ring systems gas-phase reaction genomics glycolipids glycopeptides glycosidases glycosides glycosylation green chemistry Grignard reaction halides halogenation halogens Heck reaction helical structures heterocycles heterogeneous catalysis Jain International Residential School Jakkasandra Post, Kanakapura Taluk Bangalore high-throughput JSS Public School, HSR Layout No 4/A, 14th Main, 6th Sector HSR Layout, Bangalore screening HIV homogeneous catalysis host-guest systems hydrazones hydrides hydroboration hydrocarbons hydroformylation hydrogen transfer hydrogenation Freedom International School C A # 33, Sector IV HSR Layout, Bangalore hydrolysis hydrosilylation hydrostannation hyperconjugation imides imines indium indoles induction inhibitors insertion iodine ionic liquids iridium iron isomerization The Brigade International School , Brigade Millenium JP Nagar Brigade Millenium, JP Nagar Bangalore ketones kinetic resolution lactams lactones lanthanides Lewis acids ligands lipids lithiation lithium macrocycles magnesium manganese Mannich bases medicinal chemistry metalation metallacycles metallocenes metathesis Michael addition Mitsunobu reaction molecular recognition molybdenum multicomponent reaction nanostructures natural products neighboring-group effects nickel nitriles nitrogen nucleobases 
nucleophiles nucleophilic addition nucleophilic National Centre For Excellence 154/1, “Victorian Enclave”, 5th Main, Malleshpalya, Bangalore aromatic substitution nucleosides nucleotides olefination oligomerization oligonucleotides oligosaccharides organometallic reagents osmium oxidation oxygen oxygenations ozonolysis palladacycles palladium peptides pericyclic reaction peroxides phase-transfer catalysis phenols pheromones phosphates phosphorus phosphorylation Adugodi Aga Abbas Ali Road Agaram Agrahara Dasara Halli Agrahara Dasarahalli Airport Exit Road Airport Main Road Airport Road Akkipet Ali Askar Road Alur Venkatarao Road Amarjyothi Layout Amruth Nagar Amrutha Halli Ananda Nagar Anandrao Circle Anche Palya Ane Palya Anekal Anjana Nagar Anubhava Nagar APMC Yard Arabic College Arakere Arcot Sreenivasachar Street Ashok Nagar Ashwath Nagar Attibele Attiguppe Austin Town Avala Halli Avenue Road B. Narayanapura Babusahib Palya Bagalagunte Bagalur Balaji Nagar Balepet Banashankari Banashankari 1st Stage Banashankari 2nd Stage Banashankari 3rd Stage Banaswadi Banaswadi Ring Road Bangalore G.P.O Bannerghatta Bannerghatta Road Bapuji Nagar Basappa Circle Basava Nagar Basavanagudi Basaveshwara Nagar Basaveshwara Nagar 2nd Stage Basaveshwara Nagar 3rd Block Basaveshwara Nagar 3rd Stage Basaveshwara Road Bazaar Street Begur BEL Road Bellandur Bellandur Outer Ring Road Bellary Road BEML Layout Benagana Halli Bendre Nagar Benson Town Bharati Nagar Bhattara Halli Bhoopasandra Bhuvaneshwari Nagar Bidadi Bileka Halli Bilekahalli Binny Mill Road Bismillah Nagar Bommana Halli Bommanahalli Kendriya Vidyalaya Malleswaram 18th Cross Malleswaram Bangalore Bommasandra Bommasandra Industrial Area Brigade Road Brindavan Nagar Brookefield Brunton Road BTM 1st Stage BTM 2nd Stage Bull Temple Road Palace Orchards/Sadashivnagar area located north city centre IITJEE SKMClasses.weebly.com property prices higher brackets possibly IITJEE SKMClasses.weebly.com up-market residential area in Bangalore M.G. Road/Brigade Road M.G. Road skmclasses.weebly.comBrigade Road main commercial areas Bangalore. Residential areas nearbyIITJEE SKMClasses.weebly.com Brunton Road Rest House Road, St. Mark’s Road skmclasses.weebly.comLavelle Road Airport Road/Indiranagar eastern suburb, Indiranagar is easily accessible IITJEE city centre skmclasses.weebly.com Airport Koramangala Located south Indiranagar, Koramangala quite favourite IITJEE SKMClasses.weebly.com IT professionals Despite 7 kmsIITJEE SKMClasses.weebly.com city centre, property values Ulsoor scenic man-made lake Ulsoor seen a spurt building activity last few years.IITJEE SKMClasses.weebly.com proximityIITJEE SKMClasses.weebly.com M.G Road jacked up property prices here Jayanagar/J.P. Nagar/Banashankari proximity areas Electronic City main reason skmclasses.weebly.comtheir growth recent past Jayanagar largest colonies Asia skmclasses.weebly.comthese areas popular areas Bangalore. Jayanagara originally namedIITJEE SKMClasses.weebly.com Sri Jayachamarajendra wodeyar last king Mysore. Later Sri Kumaran Children’s Home Survey No 44 – 50, Mallasandra Village Uttarahalli Hobli, Off Kanakapura Main Road, Bangalore skmclasseslocality namedIITJEE SKMClasses.weebly.com current DD kendra is situated known IITJEE skmclasses.weebly.com JC Nagar or Jayachamarajendra Nagar Delhi Public School, North Campus Survey No. 
35/A, Sathanur Village Jala Hobli, Bangalore Jayanagar IITJEE SKMClasses.weebly.com literally Victory City Jayanagar IITJEE skmclasses.weebly.com traditionally regarded IITJEE skmclasses.weebly.com southern end Bangalore South End Circle “, wherein six roadsIITJEE SKMClasses.weebly.com different areas meet skmclasses.weebly.com historic Ashoka Pillar mark southern end city bear this fact. newer extensions IITJEE SKMClasses.weebly.com taken away this distinctionIITJEE SKMClasses.weebly.com Jayanagar still remains one IITJEE SKMClasses.weebly.com southern parts city Malleshwaram Basavanagudi Malleshwaram north Bangalore, Basavanagudi south IITJEE SKMClasses.weebly.com areas oldest Bangalore skmclasses.weebly.com residents IITJEE SKMClasses.weebly.com original inhabitants City. Malleswaram PSBB Learning Leadership Academy # 52, Sahasra Deepika Road, Laxmipura Village, Off Bannerghatta Main Road Bangalore located actually north-west Bangalore derives IITJEE SKMClasses.weebly.com name IITJEE SKMClasses.weebly.com famous Kaadu Malleshwara temple 8th Cross in Malleshwaram, skmclasses.weebly.comGandhibazar/ DVG Road in Basavanagudi IITJEE SKMClasses.weebly.com popular areas in Bangalore skmclasses.weebly.comshopping during festival times. Malleswaram been homeIITJEE SKMClasses.weebly.com several important personalities skmclasses.weebly.cominstitutions. Bangalore’s own Nobel laureate, C.V. Raman, late Veena Doreswamy Iyengar skmclasses.weebly.com M.Chinnaswamy cricket stadium is named, academician M.P.L. Sastry, poet G.P. Rajaratnam skmclasses.weebly.com Dewan Seshadri Iyer institutions IITJEE SKMClasses.weebly.com Canara Union club Konkani-speaking people in 1930 IITJEE SKMClasses.weebly.comIITJEE SKMClasses.weebly.com this day hosts a variety cultural activities Malleswaram Association, hub area’s sporting activity since 1929 skmclasses.weebly.com Chowdaiah Memorial hosting great names music skmclasses.weebly.comtheatre. AccordingIITJEE SKMClasses.weebly.com recent figures available IITJEE SKMClasses.weebly.com Bangalore Development Authority BDA Malleswaram’s net population density is 521 personsIITJEE SKMClasses.weebly.com hectare, Bangalore City Corporation standard is 352IITJEE SKMClasses.weebly.com hectare Sadhashivnagar Sadashivanagar arguably IITJEE SKMClasses.weebly.com elite skmclasses.weebly.comexpensive neighborhood in Bangalore India fashionable among politicians, movie starsIITJEE SKMClasses.weebly.com millionaires afford homes “Beverly Hills Bangalore,” having IITJEE SKMClasses.weebly.com address in Sadashivanagar connotes high level prestige success fame Vijayanagar derivesIITJEE SKMClasses.weebly.com nameIITJEE SKMClasses.weebly.com Vijayanagara empire IITJEE SKMClasses.weebly.com flourished in south India during 15th skmclasses.weebly.com16th centuries.Vijayanag ar East is popularly known IITJEE base skmclasses.weebly.com RPC Layout (Railway Parallel Colony Layout), since this layout is along railway track. 
IITJEE skmclasses.weebly.com recently renamed Hampi Nagar Hampi capital Vijayanagar Empire Vijayanagar houses a large Public Library, IITJEE SKMClasses.weebly.com is one largest in Karnataka Halasuru Halasuru formerly known IITJEE skmclasses.weebly.com Ulsoor oldest neighbourhoods Indian city Bangalore predominant Tamil speaking population renowned skmclasses.weebly.com numerous temples skmclasses.weebly.comrather narrow streets skmclassesprominant areas CityIITJEE SKMClasses.weebly.com Sanjay Nagar skmclasses.weebly.com RT Nagar, Hebbal, Vyalikaval, Yeshwanthpur, Sriramapura, Rajajinagar, Rajarajeshwarinagar, Chickpet, Chamarajpet, V V Puram, Mavalli, Hanumanthanagar, Padmanabhanagar Hosakerehalli Sarakki, BTM Layout, Domlur, Gandhinagar, Vasanthanagar, Vivek Nagar, Cox Town, Frazer Town Benson Town Bangalore Roads Many roads Bangalore had European names South Parade Road, Albert Victor Road, Hardinge Road, Grant Road several roads Bangalore derived Delhi Public School Sarjapur, Bangalore East Survey No.43/1B & 45, Sulikunte Village, Dommasandra Post, Bangalore IITJEE SKMClasses.weebly.com military nomenclature Mahatma Gandhi Road MG Raod called IITJEE skmclasses.weebly.com South Parade Roadskmclasses.weebly.com nomenclature Independence Edify School Electronic City Leave a Reply WordPress.com Logo Twitter picture Facebook photo Google+ photo Connecting to %s
de68806949bf1a1e
Vote for inflation Nobel Prize
March 20, 2014

3. A combined prize for both
4. A prize just for the theory
5. A prize just for the experiment
6. No prize at all.

How certain are the BICEP2 findings?
March 20, 2014

Has BICEP2 really seen cosmological B-modes?
Were these B-modes formed by primordial gravitational waves?
Assuming primordial gravitational waves have been observed, did inflation happen?
So has inflation been confirmed as a Nobel-worthy theory?
Does BICEP2 support the multiverse?

Primordial Gravitational Waves?
March 15, 2014

Fundamental Physics 2013: What is the Big Picture?
November 26, 2013

2013 has been a great year for viXra. We already have more than 2000 new papers, taking the total to over 6000. Many of them are about physics, but other areas are also well covered. The range is bigger and better than ever and could never be summarised, so as the year draws to its end here instead is a snapshot of my own view of fundamental physics in 2013. Many physicists are reluctant to speculate about the big picture and how they see it developing. I think it would be useful if they were more willing to stick their necks out, so this is my contribution. I don't expect much agreement from anybody, but I hope that it will stimulate some interesting discussion and thoughts. If you don't like it you can always write your own summary of physics or any other area of science and submit it to viXra.

The discovery of the Higgs boson marks a watershed moment for fundamental physics. The standard model is complete, but many mysteries remain. Most notably, the following questions are unanswered and appear to require new physics beyond the standard model:

• What is dark matter?
• What was the mechanism of cosmic inflation?
• What mechanism led to the early production of galaxies and structure?
• Why does the strong interaction not break CP?
• What is the mechanism that led to matter dominating over anti-matter?
• What is the correct theory of neutrino mass?
• How can we explain fine-tuning of e.g. the Higgs mass and cosmological constant?
• How are the four forces and matter unified?
• How can gravity be quantised?
• How is information loss avoided for black holes?
• What is the small scale structure of spacetime?
• What is the large scale structure of spacetime?
• How should we explain the existence of the universe?

It is not unreasonable to hope that some further experimental input may provide clues that lead to some new answers. The Large Hadron Collider still has decades of life ahead of it, while astronomical observation is entering a golden age with powerful new telescopes peering deep into the cosmos. We should expect direct detection of gravitational waves and perhaps dark matter, or at least indirect clues in the cosmic ray spectrum. But the time scale for new discoveries is lengthening and the cost is growing. It might be unrealistic to imagine the construction of new colliders on larger scales than the LHC.

A theist vs atheist divide increasingly polarises Western politics and science. It has already pushed the centre of big science out of the United States over to Europe. As the jet stream invariably blows weather systems across the Atlantic, so too will political ideals cross, albeit at a slower pace. It is no longer sufficient to justify fundamental science as a pursuit of pure knowledge when the men with the purse strings see it as an attack on their religion.
The future of fundamental experimental science is beginning to shift further East and its future hopes will be found in Asia along with the economic prosperity that depends on it.  The GDP of China is predicted to surpass that of the US and the EU within 5 years. But there is another avenue for progress. While experiment is limited by the reality of global economics, theory is limited only by our intellect and imagination. The beasts of mathematical consistency have been harnessed before to pull us through. We are not limited by just what we can see directly, but there are many routes to explore. Without the power of observation the search may be longer, but the constraints imposed by what we have already seen are tight. Already we have strings, loops, twistors and more. There are no dead ends. The paths converge back together taking us along one main highway that will lead eventually to an understanding of how nature works at its deepest levels. Experiment will be needed to show us what solutions nature has chosen, but the equations themselves are already signposted. We just have to learn how to read them and follow their course. I think it will require open minds willing to move away from the voice of their intuition, but the answer will be built on what has come before. Thirteen years ago at the turn of the millennium I thought it was a good time to make some predictions about how theoretical physics would develop. I accept the mainstream views of physicists but have unique ideas of how the pieces of the jigsaw fit together to form the big picture. My millennium notes reflected this. Since then much new work has been done and some of my original ideas have been explored by others, especially permutation symmetry of spacetime events (event symmetry), the mathematical theory of theories, and multiple quantisation through category theory. I now have a clearer idea about how I think these pieces fit in. On the other hand, my idea at the time of a unique discrete and natural structure underlying physics has collapsed. Naturalness has failed in both theory and experiment and is now replaced by a multiverse view which explains the fine-tuning of the laws of the universe. I have adapted and changed my view in the face of this experimental result. Others have refused to. Every theorist working on fundamental physics has a set of ideas or principles that guides their work and each one is different. I do not suppose that I have a gift of insight that allows me to see possibilities that others miss. It is more likely that the whole thing is a delusion, but perhaps there are some ideas that could be right. In any case I believe that open speculation is an important part of theoretical research and even if it is all wrong it may help others to crystallise their own opposing views more clearly. For me this is just a way to record my current thinking so that I can look back later and see how it succeeded or changed. The purpose of this article then is to give my own views on a number of theoretical ideas that relate to the questions I listed. The style will be pedagogical without detailed analysis, mainly because such details are not known. I will also be short on references, after all nobody is going to cite this. Here then are my views. Causality has been discussed by philosophers since ancient times and many different types of causality have been described. In terms of modern physics there are only two types of causality to worry about. 
Temporal causality is the idea that effects are due to prior causes, i.e. all phenomena are caused by things that happened earlier. Ontological causality is about explaining things in terms of simpler principles. This is also known as reductionism. It does not involve time and it is completely independent of temporal causality. What I want to talk about here is temporal causality.

Temporal causality is a very real aspect of nature and it is important in most of science. Good scientists know that it is important not to confuse correlation with causation. Proper studies of cause and effect must always use a control to eliminate this easy mistake. Many physicists, cosmologists and philosophers think that temporal causality is also important when studying the cosmological origins of the universe. They talk of the evolving cosmos, eternal inflation, or numerous models of pre-big-bang physics or cyclic cosmologies. All of these ideas are driven by thinking in terms of temporal causality. In quantum gravity we find Causal Sets and Causal Dynamical Triangulations, more ideas that try to build in temporal causality at a fundamental level. All of them are misguided.

The problem is that we already understand that temporal causality is linked firmly to the thermodynamic arrow of time. This is a feature of the second law of thermodynamics, and thermodynamics is a statistical theory that emerges at macroscopic scales from the interactions of many particles. The fundamental laws themselves can be time reversed (along with CP to be exact). Physical law should not be thought of in terms of a set of initial conditions and dynamical equations that determine evolution forward in time. It is really a sum over all possible histories between past and future boundary states (a compact standard form of this statement is shown below). The fundamental laws of physics are time symmetric and temporal causality is emergent. The origin of time's arrow can be traced back to the influence of the big bang singularity where complete symmetry dictated low entropy. The situation is even more desperate if you are working on quantum gravity or cosmological origins. In quantum gravity space and time should also be emergent; then the very description of temporal causality ceases to make sense because there is no time to express it in terms of. In cosmology we should not think of explaining the universe in terms of what caused the big bang or what came before. Time itself begins and ends at spacetime singularities.
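The "sum over histories" statement referred to above has a standard compact form; this is just the textbook path integral, not anything specific to the argument here:

\langle \mathrm{out} | \mathrm{in} \rangle = \int \mathcal{D}\phi \; e^{iS[\phi]/\hbar}

where the integral runs over all field histories \phi interpolating between the past boundary state and the future boundary state. Both boundaries enter symmetrically, so no direction of time is built into this formulation.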
When I was a student around 1980 symmetry was a big thing in physics. The twentieth century started with the realisation that spacetime symmetry was the key to understanding gravity. As it progressed gauge symmetry appeared to eventually explain the other forces. The message was that if you knew the symmetry group of the universe and its action then you knew everything. Yang-Mills theory only settled the bosonic sector but with supersymmetry even the fermionic side would follow, perhaps uniquely. It was not to last. When superstring theory replaced supergravity the pendulum began its swing back taking away symmetry as a fundamental principle. It was not that superstring theory did not use symmetry, it had the old gauge symmetries, supersymmetries, new infinite dimensional symmetries, dualities, mirror symmetry and more, but there did not seem to be a unifying symmetry principle from which it could be derived. There was even an argument called Witten's Puzzle based on topology change that seemed to rule out a universal symmetry. The spacetime diffeomorphism group is different for each topology so how could there be a bigger symmetry independent of the solution?

The campaign against symmetry strengthened as the new millennium began. Now we are told to regard gauge symmetry as a mere redundancy introduced to make quantum field theory appear local. Instead we need to embrace a more fundamental formalism based on the amplituhedron where gauge symmetry has no presence. While I embrace the progress in understanding that string theory and the new scattering amplitude breakthroughs are bringing, I do not accept the point of view that symmetry has lost its role as a fundamental principle.

In the 1990s I proposed a solution to Witten's puzzle that sees the universal symmetry for spacetime as permutation symmetry of spacetime events. This can be enlarged to large-N matrix groups to include gauge theories. In this view spacetime is emergent like the dynamics of a soap bubble formed from intermolecular interaction. The permutation symmetry of spacetime is also identified with the permutation symmetry of identical particles or instantons or particle states. My idea was not widely accepted even when shortly afterwards matrix models for M-theory were proposed that embodied the principle of event symmetry exactly as I envisioned. Later the same idea was reinvented in a different form for quantum graphity with permutation symmetry over points in space for random graph models, but still the fundamental idea is not widely recognised. While the amplituhedron removes the usual gauge theory it introduces new dual conformal symmetries described by Yangian algebras. These are quantum symmetries unseen in the classical Super-Yang-Mills theory but they combine permutation symmetry over states with spacetime symmetries in the same way as event-symmetry. In my opinion different dual descriptions of quantum field theories are just different solutions to a single pregeometric theory with a huge and pervasive universal symmetry. The different solutions preserve different sectors of this symmetry. When we see different symmetries in different dual theories we should not conclude that symmetry is less fundamental. Instead we should look for the greater symmetry that unifies them.

After moving from permutation symmetry to matrix symmetries I took one further step. I developed algebraic symmetries in the form of necklace Lie algebras with a stringy feel to them. These have not yet been connected to the mainstream developments but I suspect that these symmetries will be what is required to generalise the Yangian symmetries to a string theory version of the amplituhedron. Time will tell if I am right.

We know so much about cosmology, yet so little. The cosmic horizon limits our view to an observable universe that seems vast but which may be a tiny part of the whole. The heat of the big bang draws an opaque veil over the first few hundred thousand years of the universe. Most of the matter around us is dark and hidden. Yet within the region we see the ΛCDM standard model accounts well enough for the formation of galaxies and stars. Beyond the horizon we can reasonably assume that the universe continues the same for many more billions of light years, and the early big bang back to the first few minutes or even seconds seems to be understood. Cosmologists are conservative people.
Radical changes in thinking such as dark matter, dark energy, inflation and even the big bang itself were only widely accepted after observation forced the conclusion, even though evidence built up over decades in some cases. Even now many happily assume that the universe extends to infinity looking the same as it does around here, that the big bang is a unique first event in the universe, that space-time has always been roughly smooth, that the big bang started hot, and that inflation was driven by scalar fields. These are assumptions that I question, and there may be other assumptions that should be questioned. These are not radical ideas. They do not contradict any observation, they just contradict the dogma that too many cosmologists live by.

The theory of cosmic inflation was one of the greatest leaps in imagination that has advanced cosmology. It solved many mysteries of the early universe at a stroke and its predictions have been beautifully confirmed by observations of the background radiation. Yet the mechanism that drives inflation is not understood. It is assumed that inflation was driven by a scalar inflaton field. The Higgs field is mostly ruled out (exotic coupling to gravity notwithstanding), but it is easy to imagine that other scalar fields remain to be found. The problem lies with the smooth exit from the inflationary period. A scalar inflaton drives a de Sitter universe (the standard equations are sketched at the end of this section). What would coordinate a graceful exit to a nice smooth universe? Nobody knows. I think the biggest clue is that the standard cosmological model has a preferred rest frame defined by comoving galaxies and the cosmic background radiation. It is not perfect on small scales but over hundreds of millions of light years it appears rigid and clear. What was the origin of this reference frame? A de Sitter inflationary model does not possess such a frame, yet something must have co-ordinated its emergence as inflation ended. These ideas simply do not fit together if the standard view of inflation is correct.

In my opinion this tells us that inflation was not driven by a scalar field at all. The Lorentz geometry during the inflationary period must have been spontaneously broken by a vector field with a non-zero component pointing in the time direction. Inflation must have evolved in a systematic and homogeneous way through time while keeping this field's direction constant over large distances, smoothing out any deviations as space expanded. The field may have been a fundamental gauge vector or a composite condensate of fermions with a non-zero vector expectation value in the vacuum. Eventually a phase transition ended the symmetry breaking phase and Lorentz symmetry was restored to the vacuum, leaving a remnant of the broken symmetry in the matter and radiation that then filled the cosmos. The required vector field may be one we have not yet found, but some of the required features are possessed by the massive gauge bosons of the weak interaction. The mass term for a vector field can provide an instability favouring timelike vector fields because the signature of the metric reverses sign in the time direction. I am by no means convinced that the standard model cannot explain inflation in this way, but the mechanism could be complicated to model.
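For reference, the standard single-field picture that the preceding paragraphs take issue with can be stated compactly. These are the usual textbook equations for a homogeneous inflaton \phi(t) in a flat expanding universe, not anything specific to this post:

H^2 = \frac{8\pi G}{3}\left(\tfrac{1}{2}\dot\phi^2 + V(\phi)\right), \qquad \ddot\phi + 3H\dot\phi + V'(\phi) = 0

When the potential energy dominates (\dot\phi^2 \ll V(\phi)) the Hubble rate H is nearly constant and the scale factor grows as a(t) \propto e^{Ht}, the de Sitter expansion referred to above. Nothing in these equations singles out a spatial rest frame, which is the gap in the standard picture that the vector-field proposal is meant to fill.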
Another great mystery of cosmology is the early formation of galaxies. As ever more powerful telescopes have penetrated back towards times when the first galaxies were forming, cosmologists have been surprised to find active galaxies rapidly producing stars, apparently with supermassive black holes ready-formed at their cores. This contradicts the predictions of the cold dark matter model according to which the stars and black holes should have formed later and more slowly. The conventional theory of structure formation is very Newtonian in outlook. After baryogenesis the cosmos was full of gas with small density fluctuations left over from inflation. As radiation decoupled, these anomalies caused the gas and dark matter to gently coalesce under their own weight into clumps that formed galaxies. This would be fine except for the observation of supermassive black holes in the early universe. How did they form?

I think that the formation of these black holes was driven by large scale gravitational waves left over from inflation rather than density fluctuations. As the universe slowed its inflation there would be parts that slowed a little sooner and others a little later. Such small differences would have been amplified by the inflation leaving a less than perfectly smooth universe for matter to form in. As the dark matter followed geodesics through these waves in spacetime it would be focused just as light waves on the bottom of a swimming pool are focused by surface waves into intricate light patterns. At the caustics the dark matter would come together at high speed to be compressed into structures along lines and surfaces. Large black holes would form at the sharpest focal points and along strands defined by the caustics. The stars and remaining gas would then gather around the black holes, pulled in by their gravitation to form the galaxies. As the universe expanded the gravitational waves would fade leaving the structure of galactic clusters to mark where they had been.

The greatest question of cosmology asks how the universe is structured on large scales beyond the cosmic horizon. We know that dark energy is making the expansion of the universe accelerate so it will endure for eternity, but we do not know if it extends to infinity across space. Cosmologists like to assume that space is homogeneous on large scales, partly because it makes cosmology simpler and partly because homogeneity is consistent with observation within the observable universe. If this is assumed then the question of whether space is finite or infinite depends mainly on the local curvature. If the curvature is positive then the universe is finite. If it is zero or negative the universe is infinite unless it has an unusual topology formed by tessellating polyhedrons larger than the observable universe. Unfortunately observation fails to tell us the sign of the curvature. It is near zero but we can't tell on which side of zero it lies. This then is not a question I can answer but the holographic principle in its strongest form contradicts a finite universe. An infinite homogeneous universe also requires an explanation of how the big bang can be coordinated across an infinite volume. This leaves only more complex solutions in which the universe is not homogeneous. How can we know if we cannot see past the cosmic horizon? There are many inhomogeneous models such as the bubble universes of eternal inflation, but I think that there is too much reliance on temporal causality in that theory and I discount it.
My preference is for a white hole model of the big bang where matter density decreases slowly with distance from a centre and the big bang singularity itself is local and finite with an outer universe stretching back further. Because expansion is accelerating we will never see much outside the universe that is currently visible so we may never know its true shape.

It has long been suggested that the laws of physics are fine-tuned to allow the emergence of intelligent life. This strange illusion of intelligent design could be explained in atheistic terms if in some sense many different universes existed with different laws of physics. The observation that the laws of physics suit us would then be no different in principle from the observation that our planet suits us. Despite the elegance of such anthropic reasoning many physicists including myself resisted it for a long time. Some still resist it. The problem is that the laws of physics show some signs of being unique according to theories of unification. In 2001 I, like many, thought that superstring theory and its overarching M-theory demonstrated this uniqueness quite persuasively. If there was only one possible unified theory with no free parameters how could an anthropic principle be viable? At that time I preferred to think that fine-tuning was an illusion. The universe would settle into the lowest energy stable vacuum of M-theory and this would describe the laws of physics with no room for choice. The ability of the universe to support life would then just be the result of sufficient complexity. The apparent fine-tuning would be an illusion resulting from the fact that we see only one form of intelligent life so far. I imagined distant worlds populated by other forms of intelligence in very different environments from ours based on other solutions to evolution making use of different chemical combinations and physical processes. I scoffed at science fiction stories where the alien life looked similar to us except for different skin textures or different numbers of appendages.

My opinion started to change when I learnt that string theory actually has a vast landscape of vacuum solutions and they can be stabilized to such an extent that we need not be living at the lowest energy point. This means that the fundamental laws of physics can be unique while different low energy effective theories can be realized as solutions. Anthropic reasoning was back on the table. It is worrying to think that the vacuum is waiting to decay to a lower energy state at any place and moment. If it did so, an expanding sphere of the new vacuum would spread out at the speed of light, changing the effective laws of physics and destroying everything in its path. Many times in the billions of years and billions of light years of the universe in our past light cone, there must have been neutron stars that collided with immense force and energy. Yet not once has the vacuum been toppled to bring doom upon us. The reason is that the energies at which the vacuum state was forged in the big bang are at the Planck scale, many orders of magnitude beyond anything that can be repeated in even the most violent events of astrophysics. It is the immense range of scales in physics that creates life and then allows it to survive.

The principle of naturalness was spelt out by 't Hooft in the 1980s, except he was too smart to call it a principle. Instead he called it a "dogma".
The idea was that the mass of a particle or other physical parameters could only be small if they would be zero given the realisation of some symmetry. The smallness of fermion masses could thus be explained by chiral symmetry, but the smallness of the Higgs mass required supersymmetry. For many of us the dogma was finally put to rest when the Higgs mass was found by the LHC to be unnaturally small without any sign of the accompanying supersymmetric partners. Fine tuning had always been a feature of particle physics but with the Higgs it became starkly apparent. The vacuum would not tend to squander its range of scope for fine-tuning, limited as it is by the size of the landscape. If there is a cheaper way the typical vacuum will find it so that there is enough scope left to tune nuclear physics and chemistry for the right components required by life. Therefore I expect supersymmetry or some similar mechanism to come in at some higher scale to stabilise the Higgs mass and the cosmological constant. It may be a very long time indeed before that can be verified.

Now that I have learnt to accept anthropic reasoning, the multiverse and fine-tuning I see the world in a very different way. If nature is fine-tuned for life it is plausible that there is only one major route to intelligence in the universe. Despite the plethora of new planets being discovered around distant stars, the Earth appears as a rare jewel among them. Its size and position in the Goldilocks zone around a long-lived stable star in a quiet part of a well-behaved galaxy is not typical. Even the moon and the outer gas giants seem to play their role in keeping us safe from natural instabilities. Yet if we were too safe life would have settled quickly into a stable form that could not evolve to higher functions. Regular cataclysmic events in our history were enough to cause mass extinction events without destroying life altogether, allowing it to develop further and further until higher intelligence emerged. Microbial life may be relatively common on other worlds but we are exquisitely rare. No sign of alien intelligence drifts across time and space from distant worlds. I now think that where life exists it will be based on DNA and cellular structures much like all life on Earth. It will require water and carbon and to evolve to higher forms it will require all the commonly available elements each of which has its function in our biology or the biology of the plants on which we depend. Photosynthesis may be the unique way in which a stable carbon cycle can complement our need for oxygen. Any intelligent life will be much like us and it will be rare. This I see as the most significant prediction of fine tuning and the multiverse.

String Theory

String theory was the culmination of twentieth century developments in particle physics leading to ever more unified theories. By 2000 physicists had what appeared to be a unique mother theory capable of including all known particle physics in its spectrum. They just had to find the mechanism that collapsed its higher dimensions down to our familiar 4 dimensional spacetime. Unfortunately it turned out that there were many such mechanisms and no obvious means to figure out which one corresponds to our universe. This leaves string theorists in a position unable to predict anything useful that would confirm their theory. Some people have claimed that this makes the theory unscientific and that physicists should abandon the idea and look for a better alternative. Such people are misguided.
String theory is not just a random set of ideas that people tried. It was the end result of exploring all the logical possibilities for the ways in which particles can work. It is the only solution to the problem of finding a consistent interaction of matter with gravity in the limit of weak fields on flat spacetime. I don't mean merely that it is the only solution anyone could find, it is the only solution that can work. If you throw it away and start again you will only return to the same answer by the same logic.

What people have failed to appreciate is that quantum gravity acts at energy scales well above those that can be explored in accelerators or even in astronomical observations. Expecting string theory to explain low energy particle physics was like expecting particle physics to explain biology. In principle it can, but to derive biochemistry from the standard model you would need to work out the laws of chemistry and nuclear physics from first principles and then search through the properties of all the possible chemical compounds until you realised that DNA can self-replicate. Without input from experiment this is an impossible program to put into practice. Similarly, we cannot hope to derive the standard model of particle physics from string theory until we understand the physics that controls the energy scales that separate them. There are about 12 orders of magnitude in energy scale that separate chemical reactions from the electroweak scale and 15 orders of magnitude that separate the electroweak scale from the Planck scale. We have much to learn.

How then can we test string theory? To do so we will need to look beyond particle physics and find some feature of quantum gravity phenomenology. That is not going to be easy because of the scales involved. We can't reach the Planck energy, but sensitive instruments may be able to probe very small distance scales as small variations of effects over large distances. There is also some hope that a remnant of the initial big bang remains in the form of low frequency radio or gravitational waves. But first string theory must predict something to observe at such scales and this presents another problem. Despite nearly three decades of intense research, string theorists have not yet found a complete non-perturbative theory of how string theory works. Without it predictions at the Planck scale are not in any better shape than predictions at the electroweak scale.

Normally quantised theories explicitly include the symmetries of the classical theories they quantise. As a theory of quantum gravity, string theory should therefore include diffeomorphism invariance of spacetime, and it does but not explicitly. If you look at string theory as a perturbation on a flat spacetime you find gravitons, the quanta of gravitational interactions. This means that the theory must respect the principles of general relativity in small deviations from the flat spacetime but it is not explicitly described in a way that makes the diffeomorphism invariance of general relativity manifest. Why is that? Part of the answer coming from non-perturbative results in string theory is that the theory allows the topology of spacetime to change. Diffeomorphisms on different topologies form different groups so there is no way that we could see diffeomorphism invariance explicitly in the formulation of the whole theory. The best we could hope for would be to find some group that has every diffeomorphism group as a subgroup and look for invariance under that.
Most string theorists just assume that this argument means that no such symmetry can exist and that string theory is therefore not based on a principle of universal symmetry. I on the other hand have proposed that the universal group must contain the full permutation group on spacetime events. The diffeomorphism group for any topology can then be regarded as a subgroup of this permutation group. String theorists don't like this because they see spacetime as smooth and continuous whereas permutation symmetry would suggest a discrete spacetime. I don't think these two ideas are incompatible. In fact we should see spacetime as something that does not exist at all in the foundations of string theory. It is emergent. The permutation symmetry on events is really to be identified with the permutation symmetry that applies to particle states in quantum mechanics. A smooth picture of spacetime then emerges from the interactions of these particles which in string theory are the partons of the strings. This was an idea I formulated twenty years ago, building symmetries that extend the permutation group first to large-N matrix groups and then to necklace Lie-algebras that describe the creation of string states. The idea was vindicated when matrix string theory was invented shortly after but very few people appreciated the connection. The matrix theories vindicated the matrix extensions in my work. Since then I have been waiting patiently for someone to vindicate the necklace Lie algebra symmetries as well.

In recent years we have seen a new approach to quantum field theory for supersymmetric Yang-Mills which emphasises a dual conformal symmetry rather than the gauge symmetry. This is a symmetry found in the quantum scattering amplitudes rather than the classical limit. The symmetry takes the form of a Yangian symmetry related to the permutations of the states. I find it plausible that this will turn out to be a remnant of necklace Lie-algebras in the more complete string theory. There seems to be still some way to go before this new idea expressed in terms of an amplituhedron is fully worked out but I am optimistic that I will be proven right again, even if few people recognise it again.

Once this reformulation of string theory is complete we will see string theory in a very different way. Spacetime, causality and even quantum mechanics may be emergent from the formalism. It will be non-perturbative and rigorously defined. The web of dualities connecting string theories and the holographic nature of gravity will be derived exactly from first principles. At least that is what I hope for. In the non-perturbative picture it should be clearer what happens at high energies when space-time breaks down. We will understand the true nature of the singularities in black-holes and the big bang. I cannot promise that these things will be enough to provide predictions that can be observed in real experiments or cosmological surveys, but it would surely improve the chances.

Loop Quantum Gravity

If you want to quantise a classical system such as a field theory there are a range of methods that can be used. You can try a Hamiltonian approach, or a path integral approach for example. You can change the variables or introduce new ones, or integrate out some degrees of freedom. Gauge fixing can be handled in various ways as can renormalisation. The answers you get from these different approaches are not quite guaranteed to be equivalent. There are some choices of operator ordering that can affect the answer.
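A minimal example of both the ordering ambiguity and its usual harmlessness is the harmonic oscillator; everything here is standard textbook material, not specific to gravity:

\hat H = \frac{\hat p^2}{2m} + \frac{m\omega^2 \hat x^2}{2}, \qquad [\hat x, \hat p] = i\hbar \;\Rightarrow\; E_n = \hbar\omega\left(n + \tfrac{1}{2}\right)

The path integral over trajectories x(t) with the same classical action reproduces exactly this spectrum, and the different operator orderings only shift the zero-point constant. Quantum gravity is vastly harder, but this is the pattern being appealed to.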
However, what we usually find in practice is that there are natural choices imposed by symmetry principles or other requirements of consistency and the different results you get using different methods are either equivalent or very nearly so, if they lead to a consistent result at all.

What should this tell us about quantum gravity? Quantising the gravitational field is not so easy. It is not renormalisable in the same way that other gauge theories are, yet a number of different methods have produced promising results. Supergravity follows the usual field theory methods while String theory uses a perturbative generalisation derived from the old S-matrix approach. Loop Quantum Gravity makes a change of variables and then follows a Hamiltonian recipe. There are other methods such as Twistor Theory, Non-Commutative Geometry, Dynamical Triangulations, Group Field Theory, Spin Foams, Higher Spin Theories etc. None has met with success in all directions but each has its own successes in some directions. While some of these approaches have always been known to be related, others have been portrayed as rivals. In particular the subject seems to be divided between methods related to string theory and methods related to Loop Quantum Gravity. It has always been my expectation that the two sides will eventually come together, simply because of the fact that different ways of quantising the same classical system usually do lead to equivalent results.

Superficially strings and loops seem like related geometric objects, i.e. one dimensional structures in space tracing out two dimensional world sheets in spacetime. String Theorists and Loop Quantum Gravitists alike have scoffed at the suggestion that these are the same thing. They point out that strings pass through each other unlike the loops which form knot states. String theory also works best in ten dimensions while LQG can only be formulated in 4 dimensions. String Theory needs supersymmetry and therefore matter, while LQG tries to construct first a consistent theory of quantum gravity alone. I see these differences very differently from most physicists. I observe that when strings pass through each other they can interact and the algebraic diagrams that represent this are very similar to the Skein relations used to describe the knot theory of LQG. String theory does indeed use the same mathematics of quantum groups to describe its dynamics. If LQG has not been found to require supersymmetry or higher dimensions it may be because the perturbative limit around flat spacetime has not yet been formulated and that is where the consistency constraints arise. In fact the successes and failures of the two approaches seem complementary. LQG provides clues about the non-perturbative background independent picture of spacetime that string theorists need.

Methods from Non-Commutative Geometry have been incorporated into string theory and other approaches to quantum gravity for more than twenty years and in the last decade we have seen Twistor Theory applied to string theory. Some people see this convergence as surprising but I regard it as natural and predictable given the nature of the process of quantisation. Twistors have now been applied to scattering theory and to supergravity in 4 dimensions in a series of discoveries that has recently led to the amplituhedron formalism. Although the methods evolved from observations related to supersymmetry and string theory they seem in some ways more akin to the nature of LQG.
Twistors were originated by Penrose as an improvement on his original spin-network idea and it is these spin-networks that describe states in LQG. I think that what has held LQG back is that it separates space and time. This is a natural consequence of the Hamiltonian method. LQG respects diffeomorphism invariance, unlike string theory, but it is really only the spatial part of the symmetry that it uses. Spin networks are three dimensional objects that evolve in time, whereas Twistor Theory tries to extend the network picture to 4 dimensions. People working on LQG have tended to embrace the distinction between space and time in their theory and have made it a feature, claiming that time is philosophically different in nature from space. I don't find that idea appealing at all. The clear lesson of relativity has always been that they must be treated the same up to a sign.

The amplituhedron makes manifest the dual conformal symmetry of Yang-Mills theory in the form of an infinite dimensional Yangian symmetry. These algebras are familiar from the theory of integrable systems where they may be deformed to bring in quantum groups. In fact the scattering amplitude theory that applies to the planar limit of Yang-Mills does not use this deformation, but here lies the opportunity to unite the theory with Loop Quantum Gravity which does use the deformation. Of course LQG is a theory of gravity so if it is related to anything it would be supergravity or string theory, not Yang-Mills. In the most recent developments the scattering amplitude methods have been extended to supergravity by making use of the observation that gravity can be regarded as formally the square of Yang-Mills. Progress has thus been made on formulating 4D supergravity using twistors, but so far without this deformation. A surprise observation is that supergravity in this picture requires a twistor string theory to make it complete. If the Yangian deformation could be applied to these strings then they could form knot states just like the loops in LQG. I can't say if it will pan out that way but I can say that it would make perfect sense if it did. It would mean that LQG and string theory would finally come together and methods that have grown out of LQG such as spin foams might be applied to string theory.

The remaining mystery would be why this correspondence worked only in 4 spacetime dimensions. Both Twistors and LQG use related features of the symmetry of 4 dimensional spacetime that mean it is not obvious how to generalise to higher dimensions, while string theory and supergravity have higher forms that work up to 11 dimensions. Twistor theory is related to conformal field theory, whose symmetry group comes from a geometry two dimensions higher. E.g. the 4 dimensional conformal group is the same as the 6 dimensional spin group. By a unique coincidence the 6 dimensional symmetries are isomorphic to unitary or special linear groups over 4 complex variables so these groups have the same representations. In particular the fundamental 4 dimensional representation of the unitary group is the same as the Weyl spinor representation in six real dimensions. This is where the twistors come from so a twistor is just a Weyl spinor. Such spinors exist in any even number of dimensions but without the special properties found in this particular case. It will be interesting to see how the framework extends to higher dimensions using these structures.
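The coincidence being appealed to can be written down explicitly; these are standard Lie-group isomorphisms:

\mathrm{Spin}(4,2) \cong SU(2,2), \qquad \mathrm{Spin}(6) \cong SU(4)

so the conformal group of 4 dimensional spacetime acts naturally on \mathbb{C}^4, and a twistor Z^\alpha \in \mathbb{C}^4 is exactly a Weyl spinor of SO(4,2). In other even dimensions the spin group still has Weyl spinors, but there is no such accidental identification with a unitary group, which is why the construction is special to this case.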
Quantum Mechanics

Physicists often chant that quantum mechanics is not understood. To paraphrase some common claims: If you think you understand quantum mechanics you are an idiot. If you investigate what it is about quantum mechanics that is so irksome you find that there are several features that can be listed as potentially problematical: indeterminacy, non-locality, contextuality, observers, wave-particle duality and collapse. I am not going to go through these individually; instead I will just declare myself a quantum idiot if that is what understanding implies. All these features of quantum mechanics are experimentally verified and there are strong arguments that they cannot be easily circumvented using hidden variables. If you take a multiverse view there are no conceptual problems with observers or wavefunction collapse. People only have problems with these things because they are not what we observe at macroscopic scales and our brains are programmed to see the world classically. This can be overcome through logic and mathematical understanding in the same way as the principles of relativity. I am not alone in thinking that these things are not to be worried about, but there are some other features of quantum mechanics that I have a more extraordinary view of.

Another aspect of quantum mechanics that gives some cause for concern is its linearity. Theories that are linear are usually too simple to be interesting. Everything decouples into modes that act independently in a simple harmonic way. In quantum mechanics we can in principle diagonalise the Hamiltonian to reduce the whole universe to a sum over energy eigenstates. Can everything we experience be encoded in that one dimensional spectrum? In quantum field theory this is not a problem, but there we have spacetime as a frame of reference relative to which we can define a privileged basis for the Hilbert space of states. It is no longer just the energy spectrum that counts. But what if spacetime is emergent? What then do we choose our Hilbert basis relative to? The symmetry of the Hilbert space must be broken for this emergence to work, but linear systems do not break their symmetries. I am not talking about the classical symmetries of the type that gets broken by the Higgs mechanism. I mean the quantum symmetries in phase space.

Suppose we accept that string theory describes the underlying laws of physics, even if we don't know which vacuum solution the universe selects. Doesn't string theory also embody the linearity of quantum mechanics? It does so long as you already accept a background spacetime, but in string theory the background can be changed by dualities. We don't know how to describe the framework in which these dualities are manifest but I think there is reason to suspect that quantum mechanics is different in that space, and it may not be linear.

The distinction between classical and quantum is not as clear-cut as most physicists like to believe. In perturbative string theory the Feynman diagrams are given by string worldsheets which can branch when particles interact. Is this the classical description or the quantum description? The difference between classical and quantum is that the worldsheets will extremise their area in the classical solutions but follow any history in the quantum. But then we already have multi-particle states and interactions in the classical description. This is very different from quantum field theory. Stepping back though we might notice that quantum field theory also has some schizophrenic characteristics.
The Dirac equation is treated as classical with non-linear interactions even though it is a relativistic Schrödinger equation, with quantum features such as spin already built-in. After you second quantise you get a sum over all possible Feynman graphs much like the quantum path integral sum over field histories, but in this comparison the Feynman diagrams act as classical configurations. What is this telling us? My answer is that the first and second quantisation are the first in a sequence of multiple iterated quantisations. Each iteration generates new symmetries and dimensions. For this to work the quantised layers must be non-linear just as the interaction between electrons and photons is non-linear in the so-called first-quantised field theory. The idea of multiple quantisations goes back many years and did not originate with me, but I have a unique view of its role in string theory based on my work with necklace Lie algebras which can be constructed in an iterated procedure where one necklace dimension is added at each step.

Physicists working on scattering amplitudes are at last beginning to see that the symmetries in nature are not just those of the classical world. There are dual-conformal symmetries that are completed only in the quantum description. These seem to merge with the permutation symmetries of the particle statistics. The picture is much more complex than the one painted by the traditional formulations of quantum field theory.

What then is quantisation? When a Fock space is constructed the process is formally like an exponentiation. In the category picture we start to see an origin of what quantisation is because exponentiation generalises to the process of constructing all functions between sets, or all functors between categories and so on to higher n-categories. Category theory seems to encapsulate the natural processes of abstraction in mathematics. This I think is what lies at the base of quantisation. Variables become functional operators, objects become morphisms. Quantisation is a particular form of categorification, one we don't yet understand. Iterating this process constructs higher categories until the unlimited process itself forms an infinite omega-category that describes all natural processes in mathematics and in our multiverse. Crazy ideas? Ill-formed? Yes, but I am just saying – that is the way I see it.

Black Hole Information

We have seen that quantum gravity can be partially understood by using the constraint that it needs to make sense in the limit of small perturbations about flat spacetime. This led us to strings and supersymmetry. There is another domain of thought experiments that can tell us a great deal about how quantum gravity should work and it concerns what happens when information falls into a black hole. The train of arguments is well known so I will not repeat them here. The first conclusion is that the entropy of a black hole is given by its horizon area in Planck units and the entropy in any other volume is less than the same Bekenstein bound taken from the surrounding surface. This leads to the holographic principle that everything that can be known about the state inside the volume can be determined from a state on its surface. To explain how the inside of a black hole can be determined from its event horizon or outside we use a black hole correspondence principle which uses the fact that we cannot observe both the inside and then outside at a later time.
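For definiteness, the entropy statement above in formula form; this is the standard Bekenstein-Hawking expression:

S = \frac{k_B c^3 A}{4 G \hbar} = \frac{k_B A}{4 \ell_P^2}, \qquad \ell_P = \sqrt{\frac{G\hbar}{c^3}}

where A is the horizon area. An entropy proportional to the boundary area rather than the enclosed volume is what drives the holographic reasoning that follows.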
Although the reasoning that leads to these conclusions is long and unsupported by any observation, it is in my opinion quite robust and is backed up by theoretical models such as AdS/CFT duality. There are some further conclusions that I would draw from black hole information that many physicists might disagree with. If the information in a volume is limited by the surrounding surface then it means we cannot be living in a closed universe with a finite volume like the surface of a 4-sphere. If we did you could extend the boundary until it shrank back to zero and conclude that there is no information in the universe. Some physicists prefer to think that the Bekenstein bound should be modified on large scales so that this conclusion cannot be drawn but I think the holographic principle holds perfectly at all scales and the universe must be infinite or finite with a different topology.

Recently there has been a claim that the holographic principle leads to the conclusion that the event-horizon must be a firewall through which nothing can pass. This conclusion is based on the assumption that information inside a black hole is replicated outside through entanglement. If you drop two particles with fully entangled spin states into a black hole you cannot have another particle outside that is also entangled to them; that would not make sense. I think the information is replicated on the horizon in a different way. It is my view that the apparent information in the bulk volume field variables must be mostly redundant and that this implies a large symmetry where the degrees of symmetry match the degrees of freedom in the fields or strings. Since there are fundamental fermions it must be a supersymmetry. I call a symmetry of this sort a complete symmetry. We know that when there is gauge symmetry there are corresponding charges that can be determined on a boundary by measuring the flux of the gauge field. In my opinion a generalization of this using a complete symmetry accounts for holography. I don't think that this complete symmetry is a classical symmetry. It can only be known properly in a full quantum theory much as dual conformal gauge symmetry is a quantum symmetry.

Some physicists assume that if you could observe Hawking radiation you would be looking at information coming from the event horizon. It is not often noticed that the radiation is thermal so if you observe it you cannot determine where it originated from. There is no detail you could focus on to measure the distance of the source. It makes more sense to me to think of this radiation as emanating from a backward singularity inside the black hole. This means that a black hole once formed is also a white hole. This may seem odd but it is really just an extension of the black hole correspondence principle. I also agree with those who say that as black holes shrink they become indistinguishable from heavy particles that decay by emitting radiation.

Every theorist working on fundamental physics needs some background philosophy to guide their work. They may think that causality and time are fundamental or that they are emergent for example. They may have the idea that deeper laws of physics are simpler. They may like reductionist principles or instead prefer a more anthropic world view. Perhaps they think the laws of physics must be discrete, combinatorial and finite. They may think that reality and mathematics are the same thing, or that reality is a computer simulation or that it is in the mind of God.
These things affect the theorist's outlook and influence the kind of theories they look at. They may be meta-physical and sometimes completely untestable in any real sense, but they are still important to the way we explore and understand the laws of nature. In that spirit I have formed my own elaborate ontology as my way of understanding existence and the way I expect the laws of nature to work out. It is not complete or finished and it is not a scientific theory in the usual sense, but I find it a useful guide for where to look and what to expect from scientific theories. Someone else may take a completely different view that appears contradictory but may ultimately come back to the same physical conclusions. That I think is just the way philosophy works.

In my ontology it is universality that counts most. I do not assume that the most fundamental laws of physics should be simple or beautiful or discrete or finite. What really counts is universality, but that is a difficult concept that requires some explanation. It is important not to be misled by the way we think. Our mind is a computer running a program that models space, time and causality in a way that helps us live our lives but that does not mean that these things are important in the fundamental laws of physics. Our intuition can easily mislead our way of thinking. It is hard to understand that time and space are interlinked and to some extent interchangeable but we now know from the theory of relativity that this is the case. Our minds understand causality and free will, the flow of time and the difference between past and future but we must not make the mistake of assuming that these things are also important for understanding the universe. We like determinacy, predictability and reductionism but we can't assume that the universe shares our likes. We experience our own consciousness as if it is something supernatural but perhaps it is no more than a useful feature of our psychology, a trick to help us think in a way that aids our survival.

Our only real ally is logic. We must consider what is logically possible and accept that most of what we observe is emergent rather than fundamental. The realm of logical possibilities is vast and described by the rules of mathematics. Some people call it the Platonic realm and regard it as a multiverse within its own level of existence, but such thoughts are just mind tricks. They form a useful analogy to help us picture the mathematical space when really logical possibilities are just that. They are possibilities stripped of attributes like reality or existence or place.

Philosophers like to argue about whether mathematical concepts are discovered or invented. The only fair answer is both or neither. If we made contact with alien life tomorrow it is unlikely that we would find them playing chess. The rules of chess are mathematical but they are a human invention. On the other hand we can be quite sure that our new alien friends would know how to use the real numbers if they are at least as advanced as us. They would also probably know about group theory, complex analysis and prime numbers. These are the universal concepts of mathematics that are "out there" waiting to be discovered. If we forgot them we would soon rediscover them in order to solve general problems. Universality is a hard concept to define. It distinguishes the parts of mathematics that are discovered from those that are merely invented, but there is no sharp dividing line between the two.
Universal concepts are not necessarily simple to define. The real numbers for example are notoriously difficult to construct if you start from more basic axiomatic constructs such as set theory. To do that you have to first define the natural numbers using the cardinality of finite sets and Peano's axioms. This is already an elaborate structure and it is just the start. You then extend to the rationals and then to the reals using something like the Dedekind cut. Not only is the definition long and complicated, but it is also very non-unique. The aliens may have a different definition and may not even consider set theory as the right place to start, but it is sure and certain that they would still possess the real numbers as a fundamental tool with the same properties as ours. It is the higher level concept that is universal, not the definition.

Another example of universality is the idea of computability. A universal computer is one that is capable of following any algorithm. To define this carefully we have to pick a particular mathematical construction of a theoretical computer with unlimited memory space. One possibility for this is a Turing machine but we can use any typical programming language or any one of many logical systems such as certain cellular automata. We find that the set of numbers or integer sequences that they can calculate is always the same. Computability is therefore a universal idea even though there is no obviously best way to define it (a toy version is sketched at the end of this section).

Universality also appears in complex physical systems where it is linked to emergence. The laws of fluid dynamics, elasticity and thermodynamics describe the macroscopic behaviour of systems built from many small elements interacting, but the details of those interactions are not important. Chaos arises in any nonlinear system of equations at the boundary where simple behaviour meets complexity. Chaos we find is described by certain numbers that are independent of how the system is constructed. These examples show how universality is of fundamental importance in physical systems and motivates the idea that it can be extended to the formation of the fundamental laws too.

Universality and emergence play a key role in my ontology and they work at different levels. The most fundamental level is the Platonic realm of mathematics. Remember that the use of the word realm is just an analogy. You can't destroy this idea by questioning the realm's existence or whether it is inside our minds. It is just the concept that contains all logically consistent possibilities. Within this realm there are things that are invented such as the game of chess, or the text that forms the works of Shakespeare, or gods. But there are also the universal concepts that any advanced team of mathematicians would discover to solve general problems they invent. I don't know precisely how these universal concepts emerge from the platonic realm but I use two different analogies to think about it. The first is emergence in complex systems that gives us the rules of chaos and thermodynamics. This can be described using statistical physics that leads to critical systems and scaling phenomena where universal behaviour is found. The same might apply to the complex system consisting of the collection of all mathematical concepts. From this system the laws of physics may emerge as universal behaviour. This analogy is called the Theory of Theories by me or the Mathematical Universe Hypothesis by another group. However this statistical physics analogy is not perfect.
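To make the computability example above concrete, here is a toy sketch in Python; the simulator is a standard construction and the particular machine (a binary incrementer) is an invented illustration, not anything from the post:

def run_turing_machine(transitions, tape, state="start"):
    """Run until the machine halts; return the non-blank tape contents."""
    cells = dict(enumerate(tape))  # sparse tape, blank symbol is "_"
    head = 0
    while state != "halt":
        symbol = cells.get(head, "_")
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# A machine that adds one to a binary number: scan right to the end of
# the digits, then propagate the carry back towards the left.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

for n in range(20):
    assert int(run_turing_machine(INCREMENT, format(n, "b")), 2) == n + 1

Any other universal formalism (lambda calculus, cellular automata, your favourite programming language) can compute exactly the same functions, which is the sense in which computability is discovered rather than invented.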
Another way to think about what might be happening is in terms of the process of abstraction. We know that we can multiply some objects in mathematics such as permutations or matrices and they follow the rules of an abstract structure called a group. Mathematics has other abstract structures like fields and rings and vector spaces and topologies. These are clearly important examples of universality, but we can take the idea of abstraction further. Groups, fields, rings etc. all have a definition of isomorphism and also something equivalent to homomorphism. We can look at these concepts abstractly using category theory, which is a generalisation of set theory encompassing these concepts. In category theory we find universal ideas such as natural transformations that help us understand the lower level abstract structures. This process of abstraction can be continued giving us higher dimensional n-categories. These structures also seem to be important in physics. I think of emergence and abstraction as two facets of the deep concept of universality. It is something we do not understand fully but it is what explains the laws of physics and the form they take at the most fundamental level.

What physical structures emerge at this first level? Statistical physics systems are very similar in structure to quantum mechanics, both of which are expressed as a sum over possibilities (the formal parallel is displayed at the end of this section). In category theory we also find abstract structures very like quantum mechanics systems including structures analogous to Feynman diagrams. I think it is therefore reasonable to assume that some form of quantum physics emerges at this level. However time and unitarity do not. The quantum structure is something more abstract like a quantum group. The other physical idea present in this universal structure is symmetry, but again in an abstract form more general than group theory. It will include supersymmetry and other extensions of ordinary symmetry. I think it likely that this is really a system described by a process of multiple quantisation where structures of algebra and geometry emerge but with multiple dimensions and a single universal symmetry. I need a name for this structure that emerges from the platonic realm so I will call it the Quantum Realm. When people reach for what is beyond M-Theory or for an extension of the amplituhedron they are looking for this quantum realm. It is something that we are just beginning to touch with 21st century theories.

From this quantum realm another more familiar level of existence emerges. This is a process analogous to superselection of a particular vacuum. At this level space and time emerge and the universal symmetry is broken down to the much smaller symmetry. Perhaps a different selection would provide different numbers of space and time dimensions and different symmetries. The laws of physics that then emerge are the laws of relativity and particle physics we are familiar with. This is our universe. Within our universe there are other processes of emergence which we are more familiar with. Causality emerges from the laws of statistical physics within our universe with the arrow of time rooted in the big bang singularity. Causality is therefore much less fundamental than quantum mechanics and space and time. The familiar structures of the universe also emerge within it, including life. Although this places life at the least fundamental level we must not forget the anthropic influence it has on the selection of our universe from the quantum realm.
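The structural similarity invoked above has a familiar side-by-side display; both expressions are textbook formulas:

Z_{\mathrm{stat}} = \sum_{s} e^{-E(s)/k_B T}, \qquad Z_{\mathrm{quantum}} = \sum_{\mathrm{histories}} e^{iS[\mathrm{history}]/\hbar}

and the Wick rotation t \to -i\tau turns the second form into the first, which is why statistical and quantum systems share so much mathematical structure.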
Experimental Outlook

Theoretical physics continues to progress in useful directions, but to keep it on track more experimental results are needed. Where will they come from? In recent decades we have got used to mainly negative results in experimental particle physics, or at best results that merely confirm theories from 50 years ago. The significance of negative results is often understated, to the extent that the media portray them as failures. This is far from being the case. The LHC's negative results for SUSY and other BSM exotics may be seen as disappointing, but they have led to the conclusion that nature appears fine-tuned at the weak scale. Few theorists had considered the implications of such a result before, but now they are forced to. Instead of wasting time on simplified SUSY theories they will turn their efforts to the wider parameter space, or they will look for other alternatives. This is an important step forward.

A big question now is: what will be the next accelerator? The ILC or a new LEP would be great Higgs factories, but it is not clear that they would find enough beyond what we already know. Given that the Higgs is at a mass that gives it a narrow width, I think it would be better to build a new detector for the LHC that is specialised for seeing diphoton and 4-lepton events with the best possible energy and angular resolution. The LHC will continue to run for several decades and can be upgraded to higher luminosity and even higher energy. This should be taken advantage of as much as possible. However, the best advance that would make the LHC more useful would be to change the way it searches for new physics. It has been too closely designed with specific models in mind; it should have been run to search for generic signatures of particles with the full range of possible quantum numbers: spin, charge, lepton and baryon number. Even more importantly, the detector collaborations should be openly publishing likelihood numbers for all possible decay channels, so that theorists can plug in any models they have, or will have in the future, and test them against the LHC results. This would massively increase the value of the accelerator, and it would encourage theorists to look for new models and even scan the data for generic signals. The LHC experimenters have been far too greedy and lazy by keeping the data to themselves and considering only a small number of models.

There is also a movement to construct a 100 TeV hadron collider. This would be a worthwhile long-term goal, and even if it did not find new particles that would be a profound discovery about the ways of nature. If physicists want to do that they are going to have to learn how to justify the cost to contributing nations and their taxpayers. It is no use talking about just the value of pure science and some dubiously justified spin-offs. CERN must reinvent itself as a postgraduate physics university where people learn how to do highly technical research in collaborations that cross international frontiers. Most will go on to work in industry using the skills they have developed in technological research, or even as technology entrepreneurs. This is the real economic benefit that big physics brings, and if CERN can't track how that works and promote it they cannot expect future funding.

With the latest results from the LUX experiments, hopes of direct detection of dark matter have faded. Again the negative result is valuable, but it may just mean that dark matter does not interact weakly at all.
The search should go on, but I think more can be done with theory to model dark matter and its role in galaxy formation. If we can assume that dark matter started out with the same temperature as the visible universe, then it should be possible to model its evolution as it settled into galaxies and estimate the mass of the dark matter particle. This would help in searching for it. Meanwhile the searches for dark matter will continue, including searches for other possible forms such as axions. Astronomical experiments such as AMS-2 may find important evidence, but it is hard to find optimism there. A better prospect exists for observations of the dark age of the universe using new radio telescopes such as the Square Kilometre Array, which could detect hydrogen gas clouds as they formed the first stars and galaxies.

Neutrino physics is one area that has seen positive results that go beyond the standard model. This is therefore an important area to keep going. They need to settle the question of whether neutrinos are Majorana spinors and produce figures for neutrino masses. Observation of cosmological high-energy neutrinos is also an exciting area, with the Ice-Cube experiment proving its value. Gravitational wave searches have continued to be a disappointment, but this is probably due to over-optimism about the nature of cosmological sources rather than a failure of the theory of gravitational waves themselves. The new run with Advanced LIGO must find them, otherwise the field will be in trouble. The next step would be LISA or a similar detector in space.

Precision measurements are another area that could bring results. Measurements of the electron dipole moment can be further improved, and there must be other similar opportunities for inventive experimentalists. If a clear anomaly is found it could set the scale for new physics and justify the next generation of accelerators. There are other experiments that could yield positive results, such as cosmic ray observatories and low-frequency radio antennae that might find an echo from the big bang beyond the veil of the primordial plasma.

But if I had to nominate one area for new effort it would have to be the search for proton decay. So far results have been negative, pushing the proton lifetime to at least 10^34 years, but this has helped eliminate the simplest GUT models, which predicted a shorter lifetime. SUSY models predict lifetimes of over 10^36 years, but this can be reached if we are willing to set up a detector around a huge volume of clear Antarctic ice. Ice-Cube has demonstrated the technology, but for proton decay a finer array of light detectors is needed to catch the lower-energy radiation from proton decay. If decays were detected they would give us positive information about physics at the GUT scale. This is something of enormous importance and its priority must be raised. Apart from these experiments we must rely on the advance of precision technology and the inventiveness of the experimental physicist. Ideas such as the holometer may have little hope of success, but each negative result tells us something, and if someone gets lucky a new flood of experimental data will nourish our theories. There is much that we can still learn.

Planck thoughts March 22, 2013

It's great to see the Planck cosmic background radiation data released, so what is it telling us about the universe?
First off, the sky map now looks like this. Planck is the third satellite sent into space to look at the CMB, and you can see how the resolution has improved in this picture from Wikipedia. Like the LHC, Planck is a European experiment. It was launched back in 2009 on an Ariane 5 rocket along with the Herschel Space Observatory. The US, through NASA, also contributed though.

The Planck data has given us some new measurements of key cosmological parameters. The universe is made up of 69.2±1.0% dark energy, 25.8±0.4% dark matter, and 4.82±0.05% visible matter. The percentage of dark energy increases as the universe expands, while the ratio of dark to visible matter stays constant, so these figures are valid only for the present. Contributions to the total energy of the universe also include a small amount of electromagnetic radiation (including the CMB itself) and neutrinos. The proportion of these is small and decreases with time.

Using the new Planck data, the age of the universe is now 13.82 ± 0.05 billion years. WMAP gave an answer of 13.77 ± 0.06 billion years. In the usual spirit of bloggers' combinations we bravely assume no correlation of errors to get a combined figure of 13.80 ± 0.04 billion years, so we now know the age of the universe to within about 40 million years, less than the time since the dinosaurs died out.

The most important plot that the Planck analysis produced is the multipole analysis of the background anisotropy shown in this graph. This is like a Fourier analysis done on the surface of a sphere, and it is believed that the spectrum comes from quantum fluctuations during the inflationary phase of the big bang. The points follow the predicted curve almost perfectly, and certainly within the expected range of cosmic variance given by the grey bounds. A similar plot was produced before by WMAP, but Planck has been able to extend it to higher frequencies because of its superior angular resolution. However, there are some anomalies at the low-frequency end that the analysis team have said are in the range of 2.5 to 3 sigma significance, depending on the estimator used. In a particle physics experiment this would not be much, but there is no look-elsewhere effect to speak of here, and these are not statistical errors that will get better with more data. This is essentially the final result. Is it something to get excited about?

To answer that it is important to understand a little of how the multipole analysis works. The first term in a multipole analysis is the monopole, which is just the average value of the radiation. For the CMB this is determined by the temperature and is not shown in this plot. The next multipole is the dipole. This is determined by our motion relative to the local preferred reference frame of the CMB, so it is specified by three numbers from the velocity vector. This motion is considered to be a local effect, so it is also subtracted off the CMB analysis and not regarded as part of the anisotropy. The first component that does appear is the quadrupole, which can be seen as the first point on the plot. The quadrupole is determined by 5 numbers, so it is shown as an average and a standard deviation. As you can see, it is significantly lower than expected. This was known to be the case already after WMAP, but it is good to see it confirmed. This contributes to the 3 sigma anomaly, but on its own it is more like a one sigma effect, so nothing too dramatic.
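As a quick aside, the "bloggers' combination" quoted above is just an inverse-variance weighted mean, assuming the two errors are independent and Gaussian. A minimal sketch (my own illustration, not part of the Planck analysis):

```python
import math

def combine(measurements):
    """Inverse-variance weighted mean of (value, sigma) pairs,
    assuming independent Gaussian errors."""
    weights = [1.0 / sigma**2 for _, sigma in measurements]
    mean = sum(v * w for (v, _), w in zip(measurements, weights)) / sum(weights)
    return mean, math.sqrt(1.0 / sum(weights))

# Planck and WMAP ages of the universe, in billions of years
print(combine([(13.82, 0.05), (13.77, 0.06)]))  # -> (~13.80, ~0.04)
```

This reproduces the 13.80 ± 0.04 figure, with the caveat stated above that the two analyses are probably not fully uncorrelated.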
In general there is a multipole for every whole number l, starting with l=0 for the monopole, l=1 for the dipole and l=2 for the quadrupole. This number l is labelled along the x-axis of the plot. It does not stop there, of course. We have an octupole for l=3, a hexadecapole for l=4, a dotriacontapole for l=5, a tetrahexacontapole for l=6, an octacosahectapole for l=7, etc. It goes up to l=2500 in this plot. Sadly I can't write the name for that point. Each multipole is described by 2l+1 numbers. If you are familiar with spin you will recognise this as the number of components that describe a particle of spin l; it's the same thing.

If you look carefully at the low-l end of the plot you will notice that the even-numbered points are low while the odd-numbered ones are high. This is the case up to l=8. In fact, above that point they start to merge a range of l values into each point on the graph, so this effect could extend further for all I know. Looking back at the WMAP plot of the same thing, it seems that they started merging the points from about l=3, so we never saw this before (but some people did, because they wrote papers about it). It was hidden, yet it is highly significant, and for the Planck data it is responsible for the 3 sigma effect. In fact, if they used an estimator that looked at the difference between odd and even points the significance might be higher.

There is another anomaly called the cold spot in the constellation of Eridanus. This is not on the axis of evil, but it is not terribly far off it. Planck has also verified this spot, first seen in the WMAP survey, which is 70 µK cooler than the average CMB temperature. What does it all mean? No idea!

Guest Post by Felix Lev July 17, 2012

Today viXra log is proud to host a guest post by one of our regular contributors to the viXra.org archive. Felix Lev gained a PhD from the Institute of Theoretical and Experimental Physics (Moscow) and a Dr. Sci. degree from the Institute for High Energy Physics (also known as the Serpukhov Accelerator). In Russia Felix Lev worked at the Joint Institute for Nuclear Research (Dubna). Now he works as a software engineer but continues research as an independent physicist in a range of subjects including quantum theory over Galois fields.

Spreading of Ultrarelativistic Wave Packet and Redshift

In standard cosmology, the red shift of light coming to the Earth from distant objects is usually explained as a consequence of the fact that the Universe is expanding. This explanation has been questioned by many authors and many other explanations have been proposed. One example is a recent paper by Leonardo Rubio, "Layer Hubble and the Alleged Expansion of the Universe", in viXra:1206.0068. The standard explanation implies that photons emitted by distant objects travel in the interstellar medium practically without interaction with interstellar matter, and hence they can survive their long (even billions of years) journey to the Earth. I believe that this explanation has the following obvious flaw: it does not take into account the well-known quantum effect of wave-packet spreading, and the photons are treated as classical particles (for which wave-packet spreading is negligible). The effect of wave-packet spreading has been known practically since the discovery of quantum mechanics. For classical nonrelativistic particles this effect is negligible, since the characteristic time of wave-packet spreading is of the order of ma²/ℏ, where m is the mass of the body and a its typical size.
In optics, wave-packet spreading is usually discussed in view of the law of dispersion ω(k) when a wave travels in a medium. But even if a photon travels in empty space, its wave function is subject to wave-packet spreading. A simple calculation, the details of which can be found in my paper viXra:1206.0074, gives for the characteristic time t* of spreading of the photon wave function a quantity given by the same formula but with m replaced by E/c², where E is the photon energy. This result can be rewritten as t* = 2πT(a/λ)², where T is the period of the wave, λ is the wavelength and a is the dimension of the photon wave function in the direction perpendicular to the photon momentum. Hence, even for optimistic values of a, this quantity is typically much less than a second.

If spreading is so fast, then the question arises why we can see stars and even planets rather than an almost isotropic background. The only explanation is that the interaction of photons with the interstellar medium cannot be neglected. On the quantum level a description of the interaction is rather complicated, since several processes should be taken into account. For example, a photon can be absorbed by an atom and reemitted in approximately the same direction. This process is an illustration of the fact that in a medium the speed of propagation is less than c, because after absorbing a photon the atom lives for some time in an excited state. This process plays an important role from the point of view of wave-packet spreading. Indeed, the atom emits a photon with a wave packet of a small size. If the photon encounters many atoms on its way, this does not allow the photon wave function to spread significantly. In view of this qualitative picture it is clear that at least a part of the red shift can be a consequence of the energy loss, and the greater the distance to an object, the greater the loss. This also poses a problem: the density of the interstellar medium might have to be much greater than usually believed. Among the different scenarios discussed in the literature are dark energy, dark matter and others. As shown in my papers (see e.g. viXra:1104.0065 and references therein), the cosmological acceleration can be easily and naturally explained from first principles of quantum theory without involving dark energy, an empty space-time background and other artificial notions. However, the other possibilities seem to be more realistic and now they are intensively studied.
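To get a feel for the two formulas above, here is a quick order-of-magnitude check (my own numbers, not from the guest post):

```python
import math

hbar = 1.054e-34   # J s
c = 3.0e8          # m/s

def t_massive(m, a):
    """Nonrelativistic spreading time, of order m a^2 / hbar."""
    return m * a**2 / hbar

def t_photon(wavelength, a):
    """The t* = 2 pi T (a / lambda)^2 estimate, with T = lambda / c."""
    T = wavelength / c
    return 2 * math.pi * T * (a / wavelength)**2

print(t_massive(1.0, 0.1))      # 1 kg, 10 cm object: ~1e32 s, no visible spreading
print(t_photon(500e-9, 1e-3))   # green light, a = 1 mm: ~4e-8 s, well under a second
```

The contrast is the whole point of the argument: for a macroscopic body the timescale dwarfs the age of the universe, while for a free photon it is a blink.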
Overcoming inertia

The tremendous accelerations involved in the kind of spaceflight seen on Star Trek would instantly turn the crew to chunky salsa unless there was some kind of heavy-duty protection. Hence, the inertial damping field. — Star Trek: The Next Generation Technical Manual, page 24.

For a space opera RPG setting I am considering adding inertia manipulation technology. But can one make a self-consistent inertia dampener without breaking conservation laws? What are the physical consequences? How many cool explosions, superweapons, and other tropes can we squeeze out of it? How to avoid the worst problems brought up by the SF community?

What inertia is

As Newton put it, inertia is the resistance of an object to a change in its state of motion. Newton's force law F=ma is a consequence of the definition of momentum, p=mv (which in a way is more fundamental, since it directly ties in with conservation laws). The mass in the formula is the inertial mass. Mass is a measure of how much matter there is, and we normally multiply it with a hidden constant of 1 to get the inertial mass – this constant is what we will want to mess with. There are relativistic versions of the laws of motion that handle momentum and inertia for high velocities, where the kinetic energy becomes so large that it starts to add mass to the whole system. This makes the total inertia go up, as seen by an outside observer, and looks like a nice case for inertia-manipulating tech being vaguely possible.

However, Einstein threw a spanner into this: gravity also acts on mass, and conveniently does so exactly as much as inertia: gravitational mass (the masses in F=Gm_1m_2/r^2) and inertial mass appear to be equal. At least in my old school physics textbook (early 1980s!) this was presented as a cool unsolved mystery, but it is a consequence of the equivalence principle in general relativity (1907): all test particles accelerate the same way in a gravitational field, and this is only possible if their gravitational mass and inertial mass are proportional to one another. So an inertia manipulation technology will have to imply some form of gravity manipulation technology. Which may be fine from my standpoint, since what space opera is complete without antigravity? (In fact, I had already decided to have Alcubierre warp bubble FTL anyway, so gravity manipulation is in.)

Playing with inertia

OK, let's leave relativity to the side for the time being and just consider the classical mechanics of inertia manipulation. Let us posit that there is a magical field that allows us to dial up or down the proportionality constant for inertial mass: the momentum of a particle will be p=\mu m v, the force law F=\mu m a and the formula for kinetic energy K=(1/2) \mu m v^2. \mu is the effect of the magic field, running from 0<\mu<\infty, with 1 corresponding to it being absent.

I throw a 1 g ping-pong ball at 1 m/s into my inertics device and turn on the field. What happens? Let us assume the field is \mu=1000. Now the momentum and kinetic energy jump by a factor of 1000 if the velocity remains unchanged. Were I to catch the ball I would have gained 999 times its original kinetic energy: this looks like an excellent perpetual motion machine. Since we do not want that to be possible (a space empire powered by throwing ping-pong balls sounds silly) we must demand that energy is conserved.
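A minimal sketch of that bookkeeping, using the K=(1/2)\mu m v^2 formula above (the function name is mine):

```python
def kinetic_energy(mu, m, v):
    """Kinetic energy with an inertia factor mu: K = mu * m * v**2 / 2."""
    return 0.5 * mu * m * v**2

m, v = 1e-3, 1.0                       # 1 g ping-pong ball at 1 m/s
k_before = kinetic_energy(1, m, v)     # 0.0005 J
k_after = kinetic_energy(1000, m, v)   # 0.5 J if the velocity is untouched
print(k_after - k_before)              # 0.4995 J of free energy per throw
```

The 0.4995 J gain is indeed 999 times the original 0.5 mJ, so a ping-pong-ball power plant would work unless something else gives.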
Velocity shifting to preserve kinetic energy

One way of doing energy conservation is for the velocity of my heavy ping-pong ball to go down. This means that the new velocity will be v/\sqrt{\mu}. Inertia-increasing fields slow down objects, while inertia-decreasing fields speed them up. One could have a force-field made of super-high inertia that would slow down incoming projectiles. At first this seems pointless, since once they get through to the other side they speed up and will do the same damage. But we could of course put a bunch of armour in this field, and have it resist the projectile. The kinetic energy will be the same, but it will be a lower-velocity collision, which means that the strength of the armour has a better chance of stopping it (in fact, as we will see below, we can use superdense armour here too). Consider the difference between being shot with a rifle bullet or being slowly but strongly stabbed by it: in the latter case the force can be distributed by a good armour over a vast surface. Definitely a good thing for a space opera.

A spacecraft that wants to get somewhere fast could just project a low-\mu field around itself and boost its speed by a huge 1/\sqrt{\mu} factor. Sounds very useful. But now an impacting meteorite will both have a high relative speed, and when it enters the field get that boosted by the same factor again: impacts will happen at velocities increased by a factor of 1/\mu as measured by the ship. So boosting your speed by a factor of 1000 will give you dust hitting you at speeds a million times higher. Since typical interplanetary dust already moves at a few km/s, we are talking about hyperrelativistic impactors. The armour above sounds like a good thing to have…

Note that any inertia-reducing technology is going to improve rockets even if there is no reactionless drive or other shenanigans: you just reduce the inertia of the reaction mass. The rocket equation no longer bites: sure, your ship is mostly massive reaction mass in storage, but to accelerate the ship you just take a measure of that mass, restore its inertia, expel it, and enjoy the huge acceleration as the big engine pushes the overall very low-inertia ship. There is just a snag in this particular case: when restoring the inertia you somehow need to give the mass enough kinetic energy to be at rest in relation to the ship…

This kind of inertics does not make for a great cannon. I can certainly make my projectile speed up a lot in the bore by lowering its inertia, but as soon as it leaves it will slow down. If we assume a given amount of force F accelerating it along the length-L bore, it will pick up FL joules of kinetic energy from the work the cannon does – independent of mass or inertia! The difference may be power: if you can only supply a certain energy per second, like in a coilgun, having a slower projectile in the bore is better.

Note that entering and leaving an inertics field will induce stresses. A metal rod entering an inertia-increasing field will have the part in the field moving more slowly, pushing back against the not-yet-slowed part (yet another plus for the armour!). When leaving the field the lighter part outside will pull away strongly. Another effect of shifting velocities is that gases behave differently.
At first it looks like changing speeds would change temperature (since we tend to think of the temperature of a gas as how fast the molecules are bouncing around), but actually the kinetic temperature of a gas depends on (you guessed it) the average kinetic energy. So that doesn't change at all. However, the speed of sound should scale as \propto 1/\sqrt{\mu}: it becomes far higher in an inertia-dampening field, producing helium-voice-like effects. Air molecules inside an inertia-decreasing field would tend to leave more quickly than outside air would enter, producing a pressure difference.

Momentum conservation is a headache

Changing the velocity so that energy is conserved unfortunately has a drawback: momentum is not conserved! I throw a heavy object at my inertics machine at velocity v, momentum mv and energy (1/2)mv^2; it reduces its inertia and increases the speed to v/\sqrt{\mu}, keeps the kinetic energy at (1/2)mv^2, and the momentum is now mv/\sqrt{\mu}. What if we assume the momentum change comes from the field or machine? When I hit the mass M machine with an object it experiences a force enough to change its velocity by w=mv(1-1/\sqrt{\mu})/M. When set to increase inertia it is pushed back a bit, potentially moving up to speed (m/M)v. When set to decrease inertia it is pushed forward, starting to move towards the direction the object impacted from. In fact, it can get arbitrarily large velocities by reducing \mu close to 0.

This sounds odd. Demanding both momentum and energy conservation requires mv = mv/\sqrt{\mu} + Mw (giving the above formula) and mv^2 = \mu m(v/\sqrt{\mu})^2 + Mw^2, which insists that w=0. Clearly we cannot have both. I don't know about you, but I'd rather keep energy conserved: it is more obvious when you cheat on energy conservation. Still, as Einstein pointed out using 4-vectors, momentum and energy conservation are deeply entangled – one reason inertics isn't terribly likely in the real world is that they cannot be separated. We could of course try to conserve 4-momentum ((E/c,\gamma \mu m v_x, \gamma \mu m v_y, \gamma \mu m v_z)), which would look like changing both energy and normal momentum at the same time.

Energy gain/loss to preserve momentum

What about just retaining the normal momentum rather than the kinetic energy? The new velocity would be v/\mu, and the new kinetic energy would be K_1=(1/2) \mu m (v/\mu)^2 = (1/2) mv^2 / \mu = K_0/\mu. Just like in the kinetic-energy-preserving case the object slows down (or speeds up), but more strongly. And there is an energy debt of K_0 (1-1/\mu) that needs to be fixed. One way of resolving energy conservation is to demand that the change in energy is supplied by the inertia-manipulation device. My ping-pong ball does not change momentum, but (for a field that cuts its inertia a thousandfold) requires about half a joule to gain the new kinetic energy. The device has to provide that. When the ball leaves the field there will be a surge of energy the device needs to absorb back. Some nice potential here for things blowing up in dramatic ways, a requirement for any self-respecting space opera.

If I want to accelerate my spaceship in this setting, I would point my momentum vector towards the target, reduce my inertia a lot, and then have to provide a lot of kinetic energy from my inertics devices and power supply (and then store it all again on arrival, when restoring the inertia turns the extra kinetic energy back into a surplus).
At first this sounds like it is just as bad as normal rocketry, but in fact it is awesome: I can convert my electricity directly into velocity without having to lug around a lot of reaction mass! I will even get it back when slowing down, a bit like electric brake regeneration systems. The rocket equation does not apply beyond getting some initial momentum. In fact, the less velocity I have from the start, the better. At least in this scheme inertia-reduced reaction mass can be restored to full inertia within the conceptual framework of energy addition/subtraction. One drawback is that now when I run into interplanetary dust it will drain my batteries, as the inertics system needs to give it a lot of kinetic energy (which will then go on harming me!). Another big problem (pointed out by Erik Max Francis) is that turning energy into kinetic energy gives an energy requirement dK/dt = \mu m v a, which depends on an absolute speed. This requires a privileged reference frame, throwing out relativity theory. Oops (but not unexpected).

Energy addition/depletion makes traditional force-fields somewhat plausible: a projectile hits the field, and we use the inertics to reduce its kinetic energy to something manageable. A rifle bullet has a few thousand joules of energy, and if you can drain that it will now harmlessly bounce off your normal armour. Presumably shields will be depleted when the ship cannot dissipate or store the incoming kinetic energy fast enough, causing the inertics to overload and then leaving the ship unshielded.

This kind of inertics allows us to accelerate projectiles using the inertics technology, essentially feeding them as much kinetic energy as we want. If you first make your projectile super-heavy, accelerate it strongly, and then normalise the inertia, it will speed away with a huge velocity. A metal rod entering this kind of field will experience the same type of force as in the kinetic-energy-respecting model, but here the field generator will also be working on providing energy balance: in a sense it will be acting as a generator/motor. Unfortunately it does not look like it could give a net energy gain by having matter flow through.

Note that this kind of device cannot simply be turned off like the previous one: there has to be an energy accounting as everything returns to \mu=1. The really tricky case is if you are in energy-debt: you have an object of lowered inertia in the field, and cut the power. Now the object needs to get a bunch of kinetic energy from somewhere. Sudden absorption of nearby kinetic energy, freezing stuff nearby? That would break thermodynamics (I could set up a perpetual motion heat engine this way). Leaving the inertia-changed object with the changed inertia? That would mean there could be objects and particles with any effective mass – space might eventually be littered with atoms with altered inertia, becoming part of normal chemistry and physics. No such atoms have ever been found, but maybe that is because alien predecessor civilisations were careful with inertial pollution.

Other approaches

Gravity manipulation

Another approach is to say that we are manipulating spacetime so that inertial forces are cancelled by a suitable gravity force (or, for purists, that the acceleration due to something gets cancelled by a counter-acceleration due to spacetime curvature that makes the object retain the same relative momentum).
The classic is the "gravitic drive" idea, where the spacecraft generates a gravity field somehow and then free-falls towards the destination. The acceleration can be arbitrarily large, but the crew will just experience freefall. Same thing for accelerating projectiles or making force-fields: they just accelerate/decelerate projectiles a lot. Since momentum is conserved there will be recoil. The force-fields will however be wimpy: essentially the field needs to be equivalent to an acceleration bringing the projectile to a stop over a short distance. Given that normal interplanetary velocities are in the tens of kilometres per second (escape velocity of Earth, more or less), the gravity field needs to be many, many Gs to work. Consider slowing down a 20 km/s railgun bullet to a stop over a distance of 10 meters: it needs to happen over a millisecond and requires a 20 million m/s^2 deceleration (2.03 megaG).

If we go with energy and momentum conservation we may still need to posit that the inertics/antigravity draws power corresponding to the work it does. Make a wheel turn because of an attracting and repelling field, and the generator has to pay the work (plus experience a torque). Make a spacecraft go from point A to B, and it needs to pay the potential energy difference, the momentum change, and at least temporarily the gain in kinetic energy. And if you demand momentum conservation for a gravitic drive, then you have the drive pulling back with the same "force" as the spacecraft experiences. Note that energy and momentum in general relativity are only locally conserved; at least this kind of drive can handwave some excuse for breaking local momentum conservation by positing that the momentum now resides in an extended gravity field (and maybe gravitational waves).

Unlike the previous kinds of inertics this doesn't change the properties of matter, so the effects on objects discussed below do not apply. One problem is edge tidal effects. Somewhere there is going to be a transition zone with a field gradient: an object passing through is going to experience some extreme shear forces and likely spaghettify. Conversely, this makes for a nifty weapon for ripping apart targets.

One problem with gravity manipulation is that it normally has to occur through gravity, which is both very weak and only has positive charges. Electromagnetic technology works so well because we can play positive and negative charges against each other, getting strong effects without using (very) enormous numbers of electrons. Gravity (and gravitomagnetic effects) normally only occurs due to large mass-energy densities and momenta. So for this to work there had better be antigravitons, negative mass, or some other way of making gravity behave differently from vanilla relativity. Inertics can at least typically handwave something about the Higgs field.

Forcefield manipulation

This leaves out the gravity part and just posits that you can place force vectors wherever you want. A bit like Iain M. Banks' effector beams. No real constraints, because it is entirely made-up physics; it is not clear it respects any particular conservation laws.

Other physical effects

Here are some of the nontrivial effects of changing the inertia of matter (I will leave out gravity manipulation, which has more obvious effects).

Electromagnetism: beware the blue carrot

It is worth noting that this thought experiment does not affect light and other electromagnetic fields: photons are massless.
The overall effect is that fields will tend to push around charged objects in the field more or less strongly. A low-inertia electron subjected to a given electric field will accelerate more, a high-inertia electron less. This in turn changes the natural frequencies of many systems: a radio antenna will change tuning depending on the inertia change. A receiver inside the inertics field will experience outside signals as being stronger (if the field decreases inertia) or weaker (if it increases it). Reducing inertia also increases the Bohr magneton, e\hbar/2 \mu m_e. This means that paramagnetic materials become more strongly affected by magnetic fields, and that ferromagnets are boosted. Conversely, higher inertia reduces magnetic effects.

Changing inertia would likely change atomic spectra (see below) and hence the optical properties of many compounds. Many pigments gain their colour from absorption due to conjugated systems (think of carotene or heme) that act as antennas: inertia manipulation will change the absorbed frequencies. Carotene with increased inertia will presumably shift its absorption spectrum towards lower frequencies, becoming redder, while lowered inertia causes a green or blue shift. An interesting effect is that the rhodopsin in the eye will also be affected, and colour vision will experience the same shift (objects will appear to change colour in regions with a different \mu from the place where the observer is, but not inside their own field). Strong enough fields will cause shifts so that absorption and transmission outside the visual range will matter, e.g. infrared or UV becomes visible.

However, the above claim that photons should not be affected by inertia manipulation may not hold true. Photons carry momentum, p=\hbar k, where k is the wave vector. So we could assume a factor of 1/\sqrt{\mu} or 1/\mu gets in there and the field red/blueshifts photons. This would complicate things a lot, so I will leave the analysis to the interested reader. But it would likely make inertics fields visible due to refractive effects.

Chemistry: toxic energy levels, plus a shrink-ray

One area inertics would mess up is chemistry. Chemistry is basically all about the behaviour of the valence electrons of atoms. Their behaviour depends on their distribution between the atomic orbitals, which in turn depends on the Schrödinger equation for the atomic potential. And this equation has a dependency on the mass of the electron and nucleus. If we look at hydrogen-like atoms, the main effect is that the energy levels become E_n = - \mu (M Z^2 e^4/8 \epsilon_0^2 h^2 n^2), where M=m_e m_p/(m_e+m_p) is the reduced mass. In short, the inertia manipulation field scales the energy levels up and down proportionally. One effect is that it becomes much easier to ionise low-inertia materials, and that materials that are normally held together by ionic bonds (say NaCl salt) may spontaneously decay when in high-inertia fields. The Bohr radius scales as a_0 \propto 1/\mu: low-inertia atoms become larger. This really messes with materials. Placed in a low-inertia field, atoms expand, making objects such as metals inflate. In a high-inertia field, electrons keep closer to the nuclei and objects shrink. As distances change, the effects of electromagnetic forces also change: internal molecular electric forces, van der Waals forces and things like that change in strength, which will no doubt have effects on biology.
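To make the scaling concrete, here is a small sketch under the assumption (implicit above) that the field multiplies every inertial mass by \mu, using the hydrogen-like formula and the standard Bohr radius expression; the function name and chosen \mu values are mine:

```python
import math

m_e, m_p = 9.109e-31, 1.673e-27   # electron and proton mass, kg
e = 1.602e-19                      # elementary charge, C
eps0 = 8.854e-12                   # vacuum permittivity, F/m
h = 6.626e-34                      # Planck constant, J s

def hydrogen(mu, n=1, Z=1):
    """Energy level and Bohr radius of a hydrogen-like atom when all
    inertial masses are scaled by mu (so the reduced mass scales by mu)."""
    M = mu * m_e * m_p / (m_e + m_p)
    E_n = -M * Z**2 * e**4 / (8 * eps0**2 * h**2 * n**2)
    a_0 = eps0 * h**2 / (math.pi * M * e**2)
    return E_n / e, a_0            # energy in eV, radius in metres

for mu in (0.5, 1.0, 2.0):
    print(mu, hydrogen(mu))        # mu = 1 gives roughly (-13.6 eV, 5.3e-11 m)
```

The output shows the E_n \propto \mu and a_0 \propto 1/\mu behaviour directly: halving inertia halves the binding energy and doubles the atom's size.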
Not to mention melting points: reducing the inertia will make many materials melt at far lower temperatures due to larger inter-atomic and inter-molecular distances; increasing it can make room-temperature liquids freeze because they are now more closely packed. This size change also affects the electron-electron interactions, which among other things shield the nucleus and reduce the effective nuclear charge. The changed energy levels do not strongly affect the structure of the lightest atoms, so they will likely form the same kind of chemical bonds and have the same chemistry. However, heavier atoms such as copper, chromium and palladium already have orbital filling orders that are slightly off because of the quirks of the energy levels. As the field deviates from 1 we should expect lighter and lighter atoms to get alternative filling patterns, and this means they will get different chemistry. Given that copper and chromium are essential for some enzymes, this does not bode well – if copper no longer works in cytochrome oxidase, the respiratory chain will lethally crash.

If we allow permanently inertia-altered particles, chemistry can get extremely weird. An inertia-changed electron would orbit in a different way than a normal one, giving the atom it resided in entirely different chemical properties. Each changed electron could have its own individual inertia. Presumably such particles would randomise chemistry wherever they resided, causing all sorts of odd reactions and compounds not normally seen. The overall effect would likely be pretty toxic, since it would on average tend to make metastable high-energy, low-entropy structures in biochemistry fall down to lower-energy, higher-entropy states.

Lowering inertia in many ways looks like heating things up: particles move faster, chemicals diffuse more, and things melt. Given that much of biochemistry is tremendously temperature dependent, this suggests that even slight changes of \mu to 0.99 or 1.01 would be enough to create many of the bad effects of high fever or hypothermia, and a bit more would be directly lethal as proteins denature.

Fluids: I need a lie down

Inside a lowered-inertia field matter responds more strongly to forces, and this means that fluids flow faster for the same pressure difference. Buoyancy causes stronger convection. For a given velocity, the inertial forces are reduced compared to the viscosity, lowering the Reynolds number and making flows more laminar. Conversely, enhanced-inertia fluids are hard to get moving, but at a given speed they will be more turbulent. This will really mess up the sense of balance, and likely blood flow too.

Gravity: equivalent exchange

I have ignored the equivalence of inertial and gravitational mass. One way for me to get away with it is to claim that they are still equivalent, since everything occurs within some local region where my inertics field is acting: all objects get their inertial mass multiplied by \mu, and this also changes their gravitational mass. The equivalence principle still holds. What if there is no equivalence principle? I could make a 1 kg object and a 1 gram object fall at different accelerations. If I had a massless spring between them it would be extended, and I would gain energy. Besides the work done by gravity to bring down the objects (which I could collect and use to put them back where they started), I would now have extra energy – aha, another perpetual motion machine! So we had better stick to the equivalence principle.
Given that boosting inertia makes matter both tend to shrink to denser states and have more gravitational force, an important worldbuilding issue is how far I will let this process go. Using it to help fission or fusion seems fine. Allowing it to squeeze matter into degenerate states or neutronium might be more world-changing. And easy making of black holes is likely incompatible with the survival of civilisation. [Still, destroying planets with small black holes is harder than it looks. The traditional "everything gets sucked down into the singularity" scenario is surprisingly slow. If you model it using spherical Bondi accretion you need an Earth-mass black hole to make the sun implode within a year or so, and a 3\cdot 10^{19} kg asteroid-mass black hole to implode the Earth. And the extreme luminosity slows things a lot more. A better way may be to use an evaporating black hole to irradiate the solar system instead, or blow up something sending big fragments.]

Another fun use of inertics is of course to mess up stars directly. This does not work with the energy addition/depletion model, but the velocity-change model would allow creating a region of increased inertia where the density ramps up: plasma enters the volume and may start descending below the spot. Conversely, reducing inertia may open a channel where it is easier for plasma from the interior to ascend (especially since it would be lighter). Even if one cannot turn this into a black hole or trigger surface fusion, it might enable directed flares as the plasma drags electromagnetic field lines with it.

The probe was invisible on the monitor, but its effects were obvious: titanic volumes of solar plasma were sucked together into a strangely geometric sunspot. Suddenly there was a tiny glint in the middle and a shock-wave: the telemetry screens went blank. "Seems your doomsday weapon has failed, professor. Mad science clearly has no good concept of proper workmanship." "Stay your tongue. This is mad engineering: the energy ran out exactly when I had planned. Just watch." Without the probe sucking it together the dense plasma was now wildly expanding. As it expanded it cooled. Beyond a certain point it became too cold to remain plasma: there was a bright flash as the protons and electrons recombined and the vortex became transparent. Suddenly neutral, the matter no longer constrained the tortured magnetic field lines and they snapped together at the speed of light. The monitor crashed. "I really hope there is no civilization in this solar system sensitive to massive electromagnetic pulses," the professor gloated in the dark.

To summarise the three models:

Model: Preserve kinetic energy
Pros: Nice armour. Fast spacecraft with no energy needs (but weird momentum changes).
Cons: Interplanetary dust is a problem. Inertics cannons inefficient. Toxic effects on biochemistry.

Model: Preserve momentum
Pros: Nice classical forcefield. Fast spacecraft with energy demands. Inertics cannons work. Potential for cool explosions due to overloads.
Cons: Interplanetary dust drains batteries. Extremely weird issues of energy-debts: either breaking thermodynamics or getting altered-inertia materials. Toxic effects on biochemistry. Breaks relativity.

Model: Gravity manipulation
Pros: No toxic chemistry effects. Fast spacecraft with energy demands. Inertics cannons work.
Cons: Forcefields wimpy. Gravitic drives are iffy due to momentum conservation (and are WMDs). Gravity is more obviously hard to manipulate than inertia. Tidal edge forces.

In both cases where actual inertia is changed, inertics fields appear pretty lethal.
A brief brush with a weak field will likely just be incapacitating, but prolonged exposure is definitely going to kill. And extreme fields are going to do very nasty stuff to most normal materials – making them expand or contract, melt, change chemical structure and whatnot. Hence spacecraft, cannons and other devices using inertics need to be designed to handle these effects. One might imagine placing the crew compartment in a counter-inertics field keeping \mu=1 while the bulk of the spacecraft is surrounded by other fields. A failure of this counter-inertics field does not just instantly turn the crew into tuna paste, but into blue toxic tuna paste.

Gravity manipulation is cleaner, but this is not necessarily a plus from the cool-fiction perspective: sometimes bad side effects are exactly what world-building needs. I love the idea of inertics having potential as an anti-personnel or assassination weapon through its biochemical effects, or "forcefields" being super-dense metal with amplified inertia protecting against high-velocity or beam impact.

The Atomic Rockets page makes a big deal out of how reactionless propulsion makes for space-opera-destroying weapons of mass destruction (if every tramp freighter can be turned into a relativistic missile, how long is the Imperial Capital going to last?). This is a smaller problem here: being hit by an inertia-reduced freighter hurts less, even when it is very fast (think of being hit by a fast ping-pong ball). Gravity propulsion still enables some nasty relativistic weaponry, and if you spend time adding kinetic energy to your inertia-reduced missile it can become pretty nasty. But even if the reactionless aspect does not trivially produce WMDs, inertia manipulation will produce a fair number of other risky possibilities. However, given that even a normal space freighter is a hypervelocity missile, the problem lies more in how to conceptualise a civilisation that regularly handles high-energy objects in the vicinity of centres of civilisation.

Not discussed here are issues of how big the fields can be made. Could we reduce the inertia of an asteroid or planet, sending it careening around? That has some big effects on the setting. Similarly, how small can we make the inertics: do they require a starship to power them, or could we have them in epaulettes? Can they be counteracted by another field?

Inertia-changing devices are really tricky to get to work consistently; most space opera SF using them just conveniently ignores the mess – just like how FTL gives rise to time travel, or how talking droids ought to transform the global economy totally. But it is fun to think through the awkward aspects, since some of them make the world-building more exciting. Plus, I would rather discover them before my players, so I can make official handwaves of why they don't matter if they are brought up.

Starkiller base versus the ideal gas law

My friend Stuart explains why the Death Stars and the Starkiller Base in the Star Wars universe are inefficient ways of taking over the galaxy. I generally agree: even a super-inefficient robot army will win if you simply bury enemy planets in robots. But thinking about the physics of absurd superweapons is fun and warms the heart.

The ideal gas law: how do you compress stars?

My biggest problem with the Starkiller Base is the ideal gas law. The weapon works by sucking up a star and then beaming its energy or plasma at remote targets.
A sun-like star has a volume around 1.4*10^18 cubic kilometres, while an Earthlike planet has a volume around 10^12 cubic kilometres. So if you suck up a star it will get compressed by a factor of 1.4 million. The ideal gas law states that pressure times volume equals the number of particles times the temperature times a constant: PV=nRT. 1.4 million times less volume needs to be balanced somehow: either the pressure P has to go up, the temperature T has to go up, or the number of particles n needs to go down.

Balancing it with pressure seems to be a non-starter, unless the Starkiller base actually contains some kind of alternate dimension where there is no pressure (or an enormous volume).

The second case implies a temperature increase by a factor of 1.4 million. Remember how hot a bike pump gets when compressing air: this is the same effect. This would heat the photosphere gas to 8.4 billion degrees and the core to 2.2*10^13 K, 22 terakelvin; the average would be somewhere in between, on the hotter side. We are talking about temperatures found microseconds after the Big Bang, hotter than a supernova: protons and neutrons melt at 0.5–1.2 TK into a quark-gluon plasma. Excellent doomsday weapon material, but now containment seems problematic. Even if we have antigravity forcefields to hold the star, the black-body radiation is beyond the supernova range. Keeping it inside a planet would be tough: the amount of neutrino radiation would likely blow up the surface like a supernova bounce does. Maybe the extra energy is bled off somehow? That might be a way to merely get super-hot plasma rather than something evaporating the system. Maybe those pesky neutrinos can be shunted into hyperspace, taking most of the heat with them (neutrino cooling can be surprisingly fast for very hot objects; at these absurd temperatures it is likely subsecond down to mere supernova temperatures).

Another bizarre and fun approach is to reduce the number of gas particles: simply fuse them all into a single nucleus. A neutron star is in a sense a single atomic nucleus. As a bonus, the star would now be a tiny multikilometre sphere held together by its own gravity. If n is reduced by a factor of 10^57 it could outweigh the compression temperature boost. There would be heating from all the fusion; my guesstimate is that it is about a percent of the mass-energy, or 2.7*10^45 J. This would heat the initial gas to around 96 billion degrees, still manageable thanks to the dramatic particle-number reduction. This approach would still involve handling massive neutrino emissions, since the neutronium would still be pretty hot. In this case the star would remain gravitationally bound in a small blob: convenient as a bullet. Maybe the red "beam" is actually just an accelerated neutron star, leaking mass along its trajectory. The actual colour would of course be more like blinding white with a peak in the gamma-ray spectrum. Given the intense magnetic fields locked into neutron stars, moving them electromagnetically looks pretty feasible… assuming you have something on the other end of the electromagnetic field that is heavier or more robust. If a planet shoots a star-mass bullet at high velocity, then we should expect the recoil to send the planet moving about 300,000 times faster (the star-to-planet mass ratio) in the opposite direction.

Other issues

We have also ignored gravity: putting a sun-mass inside an Earth-radius means we get 333,000 times higher gravity.
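A back-of-the-envelope script for these numbers (my arithmetic, using the rough volumes above):

```python
V_star, V_earth = 1.4e18, 1.0e12    # km^3: sun-like star vs Earthlike planet
compression = V_star / V_earth       # ~1.4 million

# the post's bike-pump scaling: temperature up by the compression factor
print(6.0e3 * compression)           # photosphere ~6000 K -> ~8.4e9 K
print(1.57e7 * compression)          # core ~15.7 MK -> ~2.2e13 K

# surface gravity g = GM/R^2: a solar mass at Earth's radius
M_sun, M_earth = 1.989e30, 5.972e24  # kg
print(M_sun / M_earth)               # ~333,000 times Earth's surface gravity
```

The mass ratio in the last line is also the recoil factor for the star-mass bullet mentioned above.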
We can try to hand-wave this by arguing that the antigravity used to control the star-eating also compensates for the extra gravity. But even a minor glitch in the field would produce an instant, dramatic squishing. Messing up the system* containing the star would not produce conveniently dramatic earthquakes and rifts, but rather near-instant compression into degenerate matter.

(* System – singular. Wow. After two disasters due to single-point catastrophic failures one would imagine designers learning their lesson. Three times is enemy action: if I were the Supreme Leader I would seriously check whether the lead designer happens to be named Skywalker.)

There is also the issue of the amount of energy needed to run the base. Sucking up a star from a distance requires supplying the material with the gravitational binding energy of the star, 6.87*10^41 J for the sun. Doing this over an hour or so takes a pretty impressive power, about 1.9*10^38 W. This is about 486 billion times the solar luminosity. In fact, just beaming that power at a target using any part of the electromagnetic spectrum would fry just about anything.

Of course, a device that can suck up a star ought to be able to suck up planets a million times faster. So there is no real need to go for stars: just suck up the Republic. Since the base can suck up space fleets too, local defences are not much of a problem. Yes, you may have to go there with your base, but if the Death Star can move, the Starkiller can too. If nothing else, it could use its beam to propel itself.

If the First Order want me to consult on their next (undoubtedly even more ambitious) project I am open to offers. However, one iron-clad condition given recent history is that I get to work from home, as far away as possible from the superweapon. Ideally in a galaxy far, far away.

Dampening theoretical noise by arguing backwards

Ellis is indeed grumbling a bit about theoretical noise. Backward theoretical noise dampening? Deductive backwards arguments may be the best theoretical noise reduction method.

Awesome blogs

I recently discovered Alex Wellerstein's excellent blog Restricted Data: The Nuclear Secrecy Blog. I found it while looking for nuclear stockpile data, but was drawn in by a post on the evolution of nuclear yield to mass. Then I started reading the rest of it. And finally, when reading this post about the logo of the IAEA, I realized I needed to mention to the world how good it is. Be sure to test the critical assembly simulator to learn just why critical mass is not the right concept.

Another awesome blog is Almost Looks Like Work by Jasmcole. I originally found it through a wonderfully over-the-top approach to positioning a wifi router (solving Maxwell's equations turns out to be easier than the Helmholtz equation!). But there are many other fascinating blog essays on physics, mathematics, data visualisation, and how to figure out propeller speeds from camera distortion.

A sustainable orbital death ray

I have for many years been a fan of the webcomic Schlock Mercenary. Hardish, humorous military sf with some nice long-term plotting. In the current plotline (some spoilers ahead) there is an enormous Chekhov's gun: Earth is surrounded by an equatorial ring of microsatellites that can reflect sunlight. It was intended for climate control, but as the main character immediately points out, it also makes an awesome weapon. You can guess what happens.
That leads to an interesting question: just how effective would such a weapon actually be?

From any point on Earth's surface only part of the ring is visible above the horizon. In fact, at sufficiently high latitudes it is entirely invisible – there you would be safe no matter what. Also, Earth likely casts a shadow across the ring that lowers the efficiency on the nightside. I guessed, based on the appearance in some strips, that the radius is about two Earth radii (12,000 km), and the thickness about 2000 km. I did a Monte Carlo integration where I generated random ring microsatellites, checking whether they were visible above the horizon for different Earth locations (by looking at the dot product of the local normal and the satellite-location vector; for anything above the horizon this product must be positive) and whether they were in sunlight (by checking that the distance to the Earth-Sun axis was more than 6000 km). The result is the following diagram of how much of the ring can be seen from any given location:

Visibility fraction of an equatorial ring 12,000–14,000 km out from Earth for different latitudes and longitudes.

At most, 35% of the ring is visible. Even on the nightside, where the shadow cuts through the ring, about 25% is visible. In practice, there would be a notch cut along the equator where the ring cannot fire through itself; just how wide it would be depends on the microsatellite size and properties. Overlaying the data on a world map gives the following footprint:

Visibility fraction of the 12,000–14,000 km ring from different locations on Earth.

The ring is strongly visible up to 40 degrees of latitude, where it starts to disappear below the southern or northern horizon. Antarctica, northern Canada, Scandinavia and Siberia are totally safe. This corresponds to the summer solstice, where the ring is maximally tilted relative to the Earth-Sun axis. This is when it has maximal power: at the equinoxes it is largely parallel to the sunlight and cannot reflect much at all. The total amount of energy the ring receives is E_0 = \pi (r_o^2-r_i^2)|\sin(\theta)|S, where r_o is the outer radius, r_i the inner radius, \theta the tilt (between 23 degrees for the summer/winter solstices and 0 for the equinoxes) and S is the solar constant, 1.361 kW per square meter. This ignores the Earth shadow. So putting in \theta=20^{\circ} for a New Year's Eve firing, I get E_0 \approx 7.6\cdot 10^{16} Watt. If we then multiply by 0.3 for visibility, we get 23 petawatts – which is nothing to sneeze at! Of course, there will be losses, both in reflection (likely a few percent at most) and more importantly through light scattering (about 25%, assuming it behaves like normal sunlight). Now, a 17 PW beam is still pretty decent. And if you are on the nightside the shadowed ring surface can still give about 8 PW. That is about six times the energy flow in the Gulf Stream.

How destructive would such a beam be? A megaton of TNT is 4.18 PJ. So in about a second the beam could produce a comparable amount of heat. It would be far redder than a nuclear fireball (since it is essentially 6000 K blackbody radiation) and the IR energy would presumably bounce around and be re-radiated, spreading far in the transparent IR bands. I suspect the fireball would quickly affect the absorption in a complicated manner, and there would be defocusing effects due to thermal blooming: keeping it on target might be very hard, since energy would both scatter and reflect.
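For the curious, here is a minimal reconstruction of that Monte Carlo integration in Python. The radii, the 6000 km shadow-cylinder test and the horizon dot-product test follow the description above; the exact conventions (sun along the x-axis, tilt applied to the sun direction, the seed) are my guesses:

```python
import numpy as np

R_EARTH = 6371.0  # km

def ring_visibility(lat_deg, lon_deg, tilt_deg=20.0, n=200_000):
    """Fraction of an equatorial ring 12,000-14,000 km from Earth's centre
    that is above the horizon and outside the shadow cylinder, as seen
    from a given point on the surface."""
    rng = np.random.default_rng(1)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)          # angle along the ring
    r = rng.uniform(12_000.0, 14_000.0, n)          # km from Earth's centre
    sats = np.stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)], axis=1)

    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    normal = np.array([np.cos(lat) * np.cos(lon),    # local vertical
                       np.cos(lat) * np.sin(lon),
                       np.sin(lat)])
    above = (sats - R_EARTH * normal) @ normal > 0   # horizon test

    t = np.radians(tilt_deg)                         # sun tilted out of ring plane
    sun = np.array([np.cos(t), 0.0, np.sin(t)])
    off_axis = sats - np.outer(sats @ sun, sun)      # offset from Earth-Sun axis
    sunlit = np.linalg.norm(off_axis, axis=1) > 6000.0

    return np.mean(above & sunlit)

print(ring_visibility(0, 90))    # equator: roughly a third of the ring
print(ring_visibility(60, 90))   # 60 degrees latitude: only a few per cent

# total sunlight intercepted: E_0 = pi (r_o^2 - r_i^2) |sin(theta)| S
S = 1361.0                               # W/m^2
r_i, r_o = 12_000e3, 14_000e3            # m
E0 = np.pi * (r_o**2 - r_i**2) * np.sin(np.radians(20.0)) * S
print(E0, 0.3 * E0)                      # ~7.6e16 W; ~2.3e16 W (23 PW) usable
```

This reproduces the headline numbers: about a third of the ring visible near the equator, dropping towards zero at high latitudes, and roughly 23 PW available after the visibility factor.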
The total amount of energy the ring receives is

$E_0 = \pi (r_o^2-r_i^2)|\sin(\theta)|S$

where $r_o$ is the outer radius, $r_i$ the inner radius, $\theta$ the tilt (between 23 degrees for the summer/winter solstice and 0 for the equinoxes) and $S$ is the solar constant, 1.361 kW per square meter. This ignores the Earth shadow. So putting in $\theta=20^{\circ}$ for a New Year's Eve firing, I get $E_0 \approx 7.6\cdot 10^{16}$ W. If we then multiply by 0.3 for visibility, we get 23 petawatts – which is nothing to sneeze at! Of course, there will be losses, both in reflection (likely a few percent at most) and more importantly through light scattering (about 25%, assuming it behaves like normal sunlight). Now, a 17 PW beam is still pretty decent. And if you are on the nightside the shadowed ring surface can still give about 8 PW. That is about six times the energy flow in the Gulf Stream.

Light pillar

How destructive would such a beam be? A megaton of TNT is 4.18 PJ. So in about a second the beam could produce a comparable amount of heat. It would be far redder than a nuclear fireball (since it is essentially 6000 K blackbody radiation) and the IR energy would presumably bounce around and be re-radiated, spreading far in the transparent IR bands. I suspect the fireball would quickly affect the absorption in a complicated manner and there would be defocusing effects due to thermal blooming: keeping it on target might be very hard, since energy would both scatter and reflect. Unlike a nuclear weapon there would not be much of a shockwave (I suspect there would still be one, but less of the energy would go into it).

The awesome thing about the ring is that it can just keep on firing. It is a sustainable weapon powered by renewable energy. The only drawback is that it would not have an ommminous hummmm….

Addendum 14 December: I just realized an important limitation. Sunlight comes from an extended source, so if you reflect it using plane mirrors you will get a divergent beam – which means that the spot it hits on the ground will be broad. The sun has diameter 1,391,684 km and is 149,597,871 km away, so the light spot 8,000 km below the reflector will be 74 km across. This is independent of the reflector size (down to the diffraction limit and up to a mirror that is as large as the sun in the sky).

Intensity with three overlapping beams.

At first this sounds like it kills the ring beam. But one can achieve a better focus by clever alignment. Consider three circular footprints arranged like a standard Venn diagram. The center area gets three times the solar input of the large circles. By using more mirrors one can make a peak intensity that is much higher than the side intensity. The vicinity will still be lit up very brightly, but you can focus your devastation better than with individual mirrors – and you can afford to waste sunlight anyway. Still, it looks like this is more of a wide-footprint weapon of devastation than a surgical knife.

Intensity with 200 beams overlapping slightly.
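The addendum's two numbers are also easy to reproduce (a sketch of mine, using only figures quoted above):

```python
# Spot size of reflected sunlight, and the gain from overlapping beams.
d_sun = 1_391_684e3      # solar diameter, m
L_sun = 149_597_871e3    # Earth-Sun distance, m
h = 8000e3               # reflector-to-ground distance, m

alpha = d_sun / L_sun    # angular size of the sun, rad
spot = alpha * h         # footprint diameter, independent of mirror size
print(f"spot diameter: {spot/1e3:.0f} km")   # ~74 km

# In the overlap region intensities simply add: the centre of a
# Venn-diagram arrangement of 3 spots gets ~3x one spot's input, and
# 200 slightly offset spots give ~200x over a smaller core.
S = 1361.0               # solar constant, W/m^2
for N in (1, 3, 200):
    print(f"{N:3d} overlapping beams -> up to {N*S/1e3:.1f} kW/m^2 at the core")
```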
Somebody think of the electrons!

Brian Tomasik has a fascinating essay: Is there suffering in fundamental physics? He admits from the start that “Any sufficiently advanced consequentialism is indistinguishable from its own parody.” And it would be easy to dismiss this as taking compassion way too far: not just caring about plants or rocks, but the possible suffering of electrons and positrons.

I think he has enough arguments to show that the idea is not entirely crazy: we do not understand the ontology of phenomenal experience well enough that we can easily rule out small systems having states, panpsychism is a view held by some rational people, it seems a priori unlikely that some mid-sized systems have all the value in the universe rather than the largest or the smallest scale, we have strong biases towards our kind of system, and information physics might actually link consciousness with physics. None of these are great arguments, but there are many of them. And the total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them, or a tiny probability of them mattering morally, will create a large expected moral value. The smallness of the moral consideration or the probability needs to be far outside our normal reasoning comfort zone: if you assign a probability lower than $10^{-10^{56}}$ to a possibility, you need amazingly strong reasons given normal human epistemic uncertainty.

I suspect most readers will regard this as outside their “ultraviolet cutoff” for strange theories: just as physicists successfully invented/discovered a quantum cutoff to solve the ultraviolet catastrophe, most people have a limit where things are too silly or strange to count. Exactly how to draw it rationally (rather than just base it on conformism or surface characteristics) is a hard problem when choosing between the near infinity of odd but barely possible theories.

What is the mass of the question mark?

One useful heuristic is to check whether the opposite theory is equally likely or important: in that case they balance each other (yes, the world could be destroyed by me dropping a pen – but it could also be destroyed by not dropping it). In this case giving greater weight to suffering than to neutral states breaks the symmetry: we ought to investigate this possibility, since the theory that there is no moral considerability in elementary physics implies no particular value is gained from discovering this fact, while the suffering theory implies it may matter a lot if we found out (and could do something about it). The heuristic is limited but at least a start.

Another way of getting a cutoff for theories of suffering is of course to argue that there must be a lower size limit on systems that can suffer (this is, after all, how physics very successfully solved the classical UV catastrophe). This gets tricky when we try to apply it to insects, small brains, or other information-processing systems. But in physics there might be a better argument: if suffering happens on the elementary particle level, it is going to be quantum suffering. There would be literal superpositions of suffering/non-suffering of the same system. Normal suffering is classical: either it exists or not for some experiencing system, and hence there either is or isn’t a moral obligation to do something. It is not obvious how to evaluate quantum suffering. Maybe we ought to perform a quantum-action that moves the wavefunction to a pure non-suffering state (a bit like quantum game theory: just as game theory might have ties to morality, quantum game theory might link to quantum morality), but this is constrained by the tough limits in quantum mechanics on what can be sensed and done. Quantum suffering might simply be something different from suffering, just as quantum states do not have classical counterparts. Hence our classical moral obligations do not relate to it. But who knows how molecules feel?
Power of one qumode for quantum computation

Liu, Nana, Jayne Thompson, Christian Weedbrook, Seth Lloyd, Vlatko Vedral, Mile Gu, and Kavan Modi. “Power of One Qumode for Quantum Computation.” Physical Review A 93, no. 5 (May 3, 2016). © 2016 American Physical Society

PHYSICAL REVIEW A 93, 052304 (2016)

Nana Liu, Jayne Thompson, Christian Weedbrook, Seth Lloyd, Vlatko Vedral, Mile Gu, and Kavan Modi
Clarendon Laboratory, Department of Physics, University of Oxford, Oxford OX1 3PU, United Kingdom
CipherQ Corporation, Toronto, Ontario, Canada M5B 2G9
Department of Mechanical Engineering and Research Laboratory of Electronics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
Department of Physics, National University of Singapore, 3 Science Drive 2, Singapore 117543, Singapore
Center for Quantum Information, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China
School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 639673, Singapore
Complexity Institute, Nanyang Technological University, Singapore 639673, Singapore
School of Physics and Astronomy, Monash University, Victoria 3800, Australia
(Received 29 October 2015; revised manuscript received 11 March 2016; published 3 May 2016)

Although quantum computers are capable of solving problems like factoring exponentially faster than the best-known classical algorithms, determining the resources responsible for their computational power remains unclear. An important class of problems where quantum computers possess an advantage is phase estimation, which includes applications like factoring. We introduce a computational model based on a single squeezed state resource that can perform phase estimation, which we call the power of one qumode. This model is inspired by an interesting computational model known as deterministic quantum computing with one quantum bit (DQC1). Using the power of one qumode, we identify that the amount of squeezing is sufficient to quantify the resource requirements of different computational problems based on phase estimation. In particular, we can use the amount of squeezing to quantitatively relate the resource requirements of DQC1 and factoring. Furthermore, we can connect the squeezing to other known resources like precision, energy, qudit dimensionality, and qubit number. We show the circumstances under which they can likewise be considered good resources.

DOI: 10.1103/PhysRevA.93.052304

Quantum computing is a rapidly growing discipline that has attracted significant attention due to the discovery of quantum algorithms that are exponentially faster than the best-known classical ones [1–4]. One of the most notable examples is Shor’s factoring algorithm [2], which has been a strong driver for the quantum computing revolution. However, the essential resources that empower quantum computation remain elusive.
Knowing what these resources are will have both great theoretical and great practical consequences. This knowledge will motivate designs that take optimal advantage of such resources. In addition, it may further illuminate the quantum-classical boundary. In pure-state quantum computation, it is known that entanglement is a necessary resource to achieve a computational speed-up [5]. This is no longer true for mixed-state quantum computation, and it is unclear if a single entity can quantify the computational resource in these models. And when multiple resources appear as candidates, the relationship between these different resources has not been made explicit.

One notable example is the deterministic quantum computation with one quantum bit (DQC1) model [6]. This model contains little entanglement and purity [7,8]. Yet it can solve certain computational problems exponentially faster than the best-known classical algorithms by using a highly mixed target state and a single pure control qubit. However, it is unclear how to compare the resources needed for DQC1 and factoring on an equal footing, since there is currently no example of both of these two problems solved using the same model. Although suggestions have been made that factoring requires more resources than DQC1 [9], a direct quantitative relation between the two is still lacking.

To address this challenge, in this paper we propose a continuous-variable (CV) extension of DQC1 by replacing the pure qubit with a CV mode, or qumode. We call this model the power of one qumode. We demonstrate that our model is capable of reproducing DQC1 and factoring in polynomial time. This enables us to identify a CV resource in our model, called squeezing, to compare factoring and DQC1 on the same level. Squeezed states are also useful resources in other contexts, like gaining a quantum advantage in metrology [10–12] and in CV quantum computation [13,14].

The term “squeezing” could refer to either the squeezing parameter r or the squeezing factor s0 = exp(r). For quantifying resources in the context of computational complexity, it is important to make a distinction between these two definitions since they are exponentially separated. We justify our use of the squeezing factor over the squeezing parameter by showing how it can be interpreted as inverse precision, which is a known resource in computational complexity [15]. By inputting the squeezed state as the pure qumode, we can solve both the hardest problem in DQC1 and the phase estimation problem. We can relate the squeezing factor to the degree of precision in phase estimation and the total computation time. As an application, we can show that there exists an algorithm using our model that can factor an integer efficiently in time, requiring a squeezing factor that grows exponentially with the number of bits needed to encode this integer. Another algorithm in our model can recover DQC1 with no squeezing. A further way of interpreting the squeezing factor is through the dimensionality of a qudit that can be encoded in the squeezed state, which we later examine. In some cases, the squeezing factor can also be considered as an energy resource, while the squeezing parameter can be interpreted in terms of the number of qubits. We discuss all these connections more precisely later in this paper.
Before moving on, let us remark that our architecture is an example of a hybrid computer: it jointly uses both discrete and CV systems. A similar hybrid model using a pure target state was given by Lloyd [16] to find eigenvectors and eigenvalues. Hybrid models for computing are interesting in their own right for providing an alternative avenue to quantum computing that bypasses some of the key obstacles to fully CV computation using linear optics or fully discrete-variable models [16,17]. This creates an important best-of-both-worlds approach to quantum computing.

The most difficult DQC1 problem, called DQC1-complete, is estimating the normalized trace of a unitary matrix [18,19]. This problem turns out to be important for a diverse set of applications [18,20,21], such as estimating the Jones polynomial. Computing the normalized trace of a unitary begins with a pure control qubit in the state |+⟩ = (|0⟩ + |1⟩)/√2 and a target register made up of n qubits that are in a fully mixed state 1/2^n. Next the control and target registers interact via a controlled-unitary operation, represented by Uc = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ U, where U acts on the qubits in the target register. The control qubit measurement statistics yield the normalized trace of U, i.e., ⟨σx⟩ + i⟨σy⟩ = Tr(U)/2^n. The circuit for DQC1 is shown in Fig. 1.

FIG. 1. DQC1 circuit. The control state is |+⟩ and the target state is n = log2 N qubits in a maximally mixed state. Here U is an N × N matrix, and one can measure the final average spin of the control state to recover the normalized trace of U.

To estimate the normalized trace to within error δ, that is, Tr(U)/2^n ± δ, we need to run the computation T_DQC1 ∼ 1/[min{Re(δ), Im(δ)}]² times [22]. Since δ is independent of the size of U, this computation is efficient, and DQC1 has an exponential advantage over the best-known classical algorithms [23].
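(The trace-estimation statistics are easy to simulate classically for small n, which makes the protocol concrete. A sketch of mine, not part of the paper: for the input |+⟩⟨+| ⊗ 1/2^n one can show the control qubit's X- and Y-basis statistics reproduce Tr(U)/2^n, so we just sample them.)

```python
# Classical simulation (illustration only) of the DQC1-complete task:
# estimating Tr(U)/2^n from one clean qubit's measurement statistics.
import numpy as np

n = 6
dim = 2**n
rng = np.random.default_rng(0)

# Random unitary acting on the maximally mixed n-qubit register.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U, _ = np.linalg.qr(A)
exact = np.trace(U) / dim

# The mixed register behaves like a uniformly random eigenstate of U;
# the control picks up that eigenstate's phase e^{i phi}, so
# P(+) = (1 + cos phi)/2 in the X basis and (1 + sin phi)/2 in Y.
phases = np.angle(np.linalg.eigvals(U))
T = 20_000
phi = rng.choice(phases, size=T)
p_x = (1 + np.cos(phi)) / 2
p_y = (1 + np.sin(phi)) / 2
est = (2*np.mean(rng.random(T) < p_x) - 1) + 1j*(2*np.mean(rng.random(T) < p_y) - 1)

print("exact   :", exact)    # agreement to ~1/sqrt(T), independent of dim
print("sampled :", est)
```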
In this paper we extend DQC1 by replacing the pure control qubit with a pure CV state (qumode), while keeping the target register the same. The total input state in our model is thus a hybrid state of discrete variables and a CV. See Fig. 2 for the circuit diagram of our model.

FIG. 2. Power of one qumode circuit. We can have a squeezed state |ψ0⟩ as the control state. The target state consists of n = log2 N qubits in a maximally mixed state, as in DQC1. Here Ux ≡ exp(ixHτ/x0), where x0 is a constant and τ is the gate running time. Its relationship to the unitary in DQC1 is Ux = U^{xτ/x0}. We make final measurements of the control state in the momentum basis. The momentum measurements in this model can be used to recover the normalized trace of an N × N matrix U and also to factor the integer N.

We first show how our model can perform the quantum phase estimation algorithm [24]. We use this to efficiently compute (in time) a DQC1-complete problem, thus showing that this model contains DQC1. Next, we show that our model can perform Shor’s factoring algorithm, which is based on the phase estimation algorithm.

The aim in the phase estimation problem is to find the eigenvalues of a Hamiltonian, H|u_j⟩ = φ_j|u_j⟩. The complete set of eigenvalues of H is given by {φ_j}. We encode the Hamiltonian H into a unitary transformation, CU, that acts on the hybrid input state. We call CU the hybrid control gate, defined as CU = exp(i x̂ ⊗ Hτ/x0), where the position operator x̂ acts on the qumode [25] and τ is the running time of the hybrid gate. Here x0 ≡ 1/√(mω), where m, ω are the mass and frequency of the harmonic oscillator corresponding to the qumode [26]. Like the control gate in DQC1, the hybrid control gate can also be decomposed into elementary operations (see Appendix A). If the qumode is in a position eigenstate |x⟩ and |u_j⟩ is a state of target register qubits, the action of the hybrid control gate is

CU |x⟩ ⊗ |u_j⟩ = |x⟩ ⊗ Ux |u_j⟩ = |x⟩ ⊗ e^{iφ_j xτ/x0} |u_j⟩,   (1)

where x is the eigenvalue of x̂ and Ux ≡ exp(ixHτ/x0). In our model, we apply CU to a maximally mixed state of n qubits and a qumode state |ψ0⟩ = ∫ G(x)|x⟩ dx, where G(x) is the wave function of the initial qumode in the position basis. After implementing this gate, the target register is discarded, and the qumode is in the state

ρ_f = (1/2^n) ∫∫ G(x)G*(x′) Tr[e^{i(x−x′)Hτ/x0}] |x⟩⟨x′| dx dx′.   (2)

Next, we measure this state in the basis of the momentum operator p̂ [27], i.e., ⟨p|ρ_f|p⟩. This measurement yields the momentum probability distribution

P(p) = (1/2^n) Σ_m ∫∫ G(x)G*(x′) e^{i(x−x′)φ_m τ/x0} ⟨p|x⟩⟨x′|p⟩ dx dx′ = (1/2^n) Σ_m G̃(φ_m τ/x0 − p) G̃*(p − φ_m τ/x0),   (3)

where we used ⟨p|x⟩ = (1/√(2π)) exp(−ixp), and the Fourier transform of G(x) is denoted by G̃(p) = (1/√(2π)) ∫ exp(ixp) G(x) dx.

If we choose our wave function G(x) carefully, we can employ our model to recover the eigenvalues of H. Suppose we initialize the control mode in a coherent state |α⟩, chosen for its experimental accessibility [28]. If we measure the probability distribution of p_E ≡ p x0/τ, where x0 and τ are known inputs and p_E has dimensions of energy, we find (see Appendix B for a derivation)

P(p_E) = (τ/(√π 2^n)) Σ_{m=1}^{2^n} e^{−τ² {p_E − [φ_m + Im(α)/τ]}²},   (4)

where Im(α) is the imaginary component of α [29]. We can see that the probability distribution is a sum of Gaussian distributions. It has individual peaks centered at each shifted eigenvalue φ_j + Im(α)/τ, with an individual spread given by the inverse of τ. By sampling this probability distribution we can infer the position of the peaks to any finite precision. Thus, it is possible to perform phase estimation to arbitrary accuracy just by increasing τ alone. However, to estimate eigenvalues to a precision better than a polynomial in n = log2 N, we require τ greater than polynomial in n = log2 N. Thus, the coherent state no longer suffices for Shor’s factoring algorithm, which requires high-precision phase estimation. In such cases, we require a further resource that we identify to be the squeezing.

A finite squeezed state is defined by G(x) = [1/(√s π^{1/4})] exp[−x²/(2s²)], where s ≡ x0/s0 and s0 parametrizes the amount of squeezing in the momentum direction [30]. We call s0 the squeezing factor. Its wave function in x has a Gaussian profile with standard deviation 1/s0 (in units of x0). By inputting a squeezed state into our model, the probability distribution in p_E becomes

P(p_E) = (s0 τ/(√π 2^n)) Σ_{m=1}^{2^n} e^{−(s0 τ)² (p_E − φ_m)²}.   (5)

Comparing this to Eq. (4), we see that the coherent state plays the same role as an unsqueezed state (i.e., s0 = 1). The method for retrieving the eigenvalues is now identical to that of the coherent state, except now we can take advantage of a large squeezing factor instead of nonpolynomial gate running time.
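(The resolution story in Eq. (5) is easy to see numerically. A toy illustration of mine, with made-up eigenvalues: each eigenvalue contributes a Gaussian of width 1/(s0 τ), so nearby eigenvalues only separate once the squeezing is large enough.)

```python
# Sampling the Gaussian-mixture momentum distribution of Eq. (5):
# peaks of width 1/(s0*tau) sit on the eigenvalues, so resolving a
# gap of 0.02 needs s0*tau well above 50.
import numpy as np

rng = np.random.default_rng(2)
phis = np.array([0.30, 0.32, 1.70])   # toy eigenvalues; two are close
tau = 1.0

for s0 in (10.0, 300.0):
    m = rng.integers(len(phis), size=200_000)
    pE = rng.normal(loc=phis[m], scale=1.0/(s0*tau))
    bins = [0.2975, 0.3025, 0.3075, 0.3125, 0.3175, 0.3225]
    counts, _ = np.histogram(pE, bins=bins)
    # counts[0]/counts[4] sit on the two close peaks, counts[2] in the valley
    print(f"s0={s0:5.0f}  peak/valley/peak = {counts[0]}/{counts[2]}/{counts[4]}")
```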
We can see the relationship between the squeezing factor and gate running time more explicitly. Let T_bound be the upper bound to the total number of momentum measurements we are willing to make for phase estimation. If we need to recover any eigenvalue of the Hamiltonian to accuracy ΔE, the following time-energy condition is satisfied (see Appendix C for a derivation):

T_bound τ s0 ΔE ≳ 1,   (6)

where ΔE can be a function of the size of the Hamiltonian. In an efficient protocol the maximum total gate running time T_bound τ is bounded by a polynomial in n. When the inverse of ΔE is also a polynomial in n, efficient phase estimation is still possible for a squeezing factor polynomial in n. For example, this is useful for the verification of problems in the quantum-Merlin-Arthur (QMA) complexity class, which includes the local Hamiltonian problem [31]. For an exponentially greater precision in phase estimation, however, an exponentially higher squeezing factor is needed. We see from Eq. (6) that the squeezing factor serves as a rescaling of the energy “uncertainty” ΔE. Similarly to phase estimation, increased squeezing can also retrieve the corresponding eigenvectors to greater precision [32].

We can see the precise relationship between the squeezing factor and the inverse precision from Eq. (6) by considering when the maximum total gate running time resource is constrained. When the time resource is constant, the minimum squeezing factor required for efficient phase estimation is the inverse precision, i.e., s0 ∼ 1/ΔE. This relationship can be seen more intuitively by considering a problem whose solution is given by the central position x0 of a squeezed state with squeezing factor s0. From the central limit theorem, it requires t ∼ 1/(s0² η²) measurements of the position x to get within precision η = |x − x0| of the center. Thus, for a fixed number of measurements (or time), the squeezing factor scales as the inverse of the precision, s0 ∼ 1/η.

Another way we can see s0 as the inverse precision is to consider when we are trying to resolve the distance Δφ between two adjacent Gaussian peaks. We see later that factoring in our model is essentially this problem with Δφ ∼ 1/N = 1/2^n, where N is the number to be factored. Each Gaussian has standard deviation 1/s0. If the distance between these peaks is closer than this length scale, it becomes difficult to resolve the two peaks. Thus, 1/s0 is the maximum resolution for Δφ, which is another precision scale. This fact is used when we later examine the qubit and qudit encoding in our model.

We begin with an observation that the average of exp(ip_E) can reproduce the normalized trace of U ≡ exp(iH) in the following way:

∫ e^{ip_E} P(p_E) dp_E = e^{−1/(4 s0² τ²)} Tr(U_τ)/2^n,   (7)

where P(p_E) is given by Eq. (5) and U_τ ≡ exp(iHτ). For an N × N matrix U_τ, we use n = log2 N. If we wish to recover the normalized trace of U to within an error δ [i.e., Tr(U)/2^n ± δ], we require τ = 1 and T_DQC1 measurements of momentum [34] in our model. This is equivalent to running our hybrid gate once per momentum measurement and then averaging the corresponding values {exp(ip_E)}. This computation of the normalized trace is as efficient as DQC1 if T_DQC1 is independent of N = 2^n. By employing the central limit theorem we find (see Appendix E for a derivation)

T_DQC1 ≲ e^{1/(2s0²)} F(s0)/[min{Re(δ), Im(δ)}]²,   (8)

where F(s0) = sinh[1/(2s0²)] + exp[−1/(2s0²)] and F(s0) → 1 very quickly with increasing s0 [35]. Equation (8) shows that T_DQC1 is upper bounded by a quantity dependent only on the squeezing and not on the size of the matrix.
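(A one-liner of mine confirms that the overhead F(s0) approaches 1 as the squeezing grows:)

```python
# Numeric check that F(s0) = sinh(1/(2 s0^2)) + exp(-1/(2 s0^2)) -> 1.
import numpy as np

def F(s0):
    return np.sinh(1/(2*s0**2)) + np.exp(-1/(2*s0**2))

for s0 in (1.0, 2.0, 10.0):
    print(s0, F(s0))   # 1.128..., then rapidly approaching 1
```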
In fact, even when s0 = 1 (equivalent to a coherent state input), our qumode model is sufficient to efficiently compute (in time) the normalized trace of U, thus reproducing DQC1. This can also be viewed as a consequence of ΔE being independent of N = 2^n in Eq. (6).

Factoring is the problem of finding a nontrivial multiplicative factor of an integer N. The classically hard part can be reduced to a phase estimation problem, where the quantum advantage in phase estimation can be exploited. We show how the corresponding phase estimation problem can be solved in our model and how much squeezing resource is required.

Factoring can be reduced to phase estimation in the following way. There is a known classically efficient algorithm that can find a nontrivial factor of N once it is given a random integer q in the range 1 < q < N [2]. However, this algorithm relies on prior knowledge of the order r of q, where r is an integer r ≤ N satisfying q^r ≡ 1 mod N. Thus, the main difficulty lies in finding this order r, which is believed to be a classically hard problem. It turns out that this order can be encoded into the eigenvalues of a suitably chosen Hamiltonian H_q.

Here we begin with a squeezed control state and a target state of n = log2 N qubits in a maximally mixed state. Let our hybrid control gate be CU_q = exp(i x̂ ⊗ H_q τ/x0). Next we choose a suitable Hamiltonian H_q whose eigenvalues contain the order r. We define a unitary exp(iH_q) which acts on a qubit state |l mod N⟩ like exp(iH_q)|l mod N⟩ = |lq mod N⟩, where l is an integer 0 ≤ l < N. When l = q^k for an integer k ≤ r, exp(iH_q r)|q^k mod N⟩ = |q^k q^r mod N⟩ = |q^k mod N⟩. Here the eigenvalues of H_q are 2πm/r, where m is an integer 1 ≤ m < r.

However, for qubits in a mixed state we have l ≠ q^k in general. In these cases, we define a more general “order” r_d, where exp(iH_q r_d)|l_d mod N⟩ = |l_d q^{r_d} mod N⟩ = |l_d mod N⟩. Here r_d is an integer r_d ≤ r that divides r [9] and satisfies l_d q^{r_d} mod N = l_d mod N. The integer d labels the set of states {|l_d q^h mod N⟩}, where h ≤ r_d is an integer. Thus, for general l_d, the eigenvalues of H_q can be written as 2πm_d/r_d, where m_d is an integer 1 ≤ m_d < r_d. These eigenvalues do not give r directly. However, we can always rewrite m_d/r_d in the form m/r, since r_d is a factor of r. In general, there will be a single fraction m/r corresponding to many possible m_d and r_d. If we call this multiplicity c_m for a given m/r, then following Eq. (5) we can write the p_E probability distribution as measured by the final control state

P(p_E) = (s0 τ/(√π 2^n)) Σ_d Σ_{m_d=0}^{r_d−1} e^{−(2π s0 τ)² (p_E/2π − m_d/r_d)²} = (s0 τ/(√π 2^n)) Σ_m c_m e^{−(2π s0 τ)² (p_E/2π − m/r)²}.   (9)

This probability distribution is a sum of Gaussian functions with amplitudes c_m, centered on the fractions m/r. To recover the order r from the above probability distribution, it is sufficient to satisfy two conditions. The first condition is to be able to recover the fractions m/r to within the interval [m/r − 1/(2N²), m/r + 1/(2N²)] [36]. Thus, the larger the number we wish to factor, the more squeezing we need to improve the precision of the phase estimation. The second requirement is for m and r to be coprime, which enables us to find r; this is satisfied with probability at least 1/O{ln[ln(N)]}. Subject to the above two conditions, we can compute the probability that a correct r is found using the momentum probability distribution in our model.
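(The classical post-processing after the measurement is just the continued fractions step of Shor's algorithm. A toy version of mine, using Python's Fraction.limit_denominator in place of an explicit continued fractions routine:)

```python
# Recovering the order r from a noisy estimate of m/r, then a factor of N.
import math
from fractions import Fraction

N, q = 21, 2
# True order of q mod N (brute force, for reference only): r = 6.
r = next(k for k in range(1, N) if pow(q, k, N) == 1)

phi_est = 5/6 + 4e-4          # measured peak position, within 1/(2 N^2)
frac = Fraction(phi_est).limit_denominator(N)
print(frac)                                # 5/6 -> candidate order 6
print(pow(q, frac.denominator, N) == 1)    # verify it really is the order

# For even r, gcd(q^(r/2) -+ 1, N) yields the nontrivial factors.
x = pow(q, frac.denominator // 2, N)
print(math.gcd(x - 1, N), math.gcd(x + 1, N))   # 7 and 3
```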
We derive in Appendix F the number of runs T_factor < O{ln[ln(N)]}/erf(π s0 τ/N²) needed to factor N, which is inversely related to the probability of finding a correct r. In the large N limit, to achieve the same efficiency as Shor’s algorithm using qubits, which is T_factor ∼ O{ln[ln(N)]} = O{ln[ln(N)]} T_bound [37], it is thus sufficient to choose s0 τ ∼ 2^{2n}. This can also be derived from Eq. (6) using ΔE = 2π/(2N²), where T_bound ∼ 1. If we let s0 = 1 for the coherent state, this requires the total computing time to scale exponentially with the size of the problem (i.e., log2 N). Thus, to ensure polynomial total computing time, we can choose instead τ ∼ 1 and s0 ∼ 2^{2n}.

We saw that the squeezing factor can be interpreted as an inverse precision, since the two quantities are polynomially related. There are also other quantities polynomially related to the squeezing factor, like energy and the dimensionality of the qudit that can be encoded in our squeezed state. We discuss their relationship to the squeezing factor and in what ways they can and cannot also be considered resources.

Energy may be considered a resource if it is required in the initial preparation of the necessary input states. In a quantum optical setting, for example, energy is required for preparing a squeezed state resource. The minimum energy E_min required is that needed to create the number of particle excitations n_p corresponding to a certain amount of squeezing, since E_min ∝ n_p. The number of particle excitations is itself regularly considered as the primary resource in the context of quantum metrology. For our squeezed state n_p = sinh²[ln(s0)], where for a large squeezing factor n_p ∝ s0². Thus, energy and the squeezing factor are polynomially related.

This interpretation of the squeezing factor as an energy can help us understand why s0 of order O[exp(n)] is necessary for factoring in our algorithm. We can consider performing factoring in our model as swapping the m = log2 N pure control qubits in the qubit factoring protocol for a single qumode. A simple example to illustrate this phenomenon is to consider a simple computation |0⟩^{⊗μ} → |1⟩^{⊗μ}. Suppose the computation is performed using μ qubits encoded in μ two-level atoms. Let the energy gap between the ground state (|0⟩) and the first excited state (|1⟩) be E. Then a total energy of μE is required for the computation. If we use a single CV mode instead, for instance a harmonic oscillator with 2^μ energy levels, the total energy required to perform this computation is 2^μ E, which has exponential scaling in μ. This is very similar to the exponential scaling in log2 N observed in our model.

However, there are also two reasons why it is not ideal to consider energy as a resource. First, having no energy does not guarantee that the computational power of a high squeezing factor cannot be achieved. An example is spin-squeezing in the case of energy-degenerate spin states. Second, having large amounts of available energy also does not guarantee more efficient computation. If we instead use a coherent state with high coherence α and hence large energy (since n_p = |α|²), we still cannot factor in polynomial time.
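(A quick numeric look of mine at the energy cost of squeezing, using the relation quoted above:)

```python
# Mean excitation number n_p = sinh^2(ln s0) grows like s0^2/4 for large s0.
import numpy as np

for s0 in (1.0, 2.0, 10.0, 2.0**10):
    r = np.log(s0)            # squeezing parameter
    n_p = np.sinh(r)**2       # particle excitations, so E_min ~ n_p
    print(f"s0 = {s0:10.1f}   n_p = {n_p:.3e}")
```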
The GKP (Gottesman-Kitaev-Preskill) encoding [38] allows one to encode a qudit, or a discrete-variable quantum state with D dimensions [39], into a CV mode. We use this encoding scheme as an illustration. This can work for CV states whose probability distribution (in momentum, for example) can be described as a sum of Gaussian functions, each with standard deviation w and with neighboring centers separated by a distance Δφ. Since the precision associated with each peak is of order w, we can fit a total of Δφ/w distinguishable copies of this distribution, each copy displaced from its neighbor by roughly w along the momentum axis. If we represent each degree of freedom by one such distribution, then there are D = Δφ/w degrees of freedom available to this CV state just by displacement in momentum. These D degrees of freedom can be mapped onto a qudit of dimensionality D.

Given an encoding like GKP, we can write D ∼ s0 Δφ, since in our case w = 1/s0. Thus, here s0 is interpreted as the inverse precision 1/w. Since Δφ is the distance between adjacent Gaussian peaks in our probability distribution P(p_E), to accomplish factoring we require s0 = 2^{2n} = N² and Δφ = 1/N, so D = N. For DQC1, s0 = 1 and D = 2 (since we only need a single qubit). Thus, D and s0 are also polynomially related.

A qudit of dimension D is equivalent to m = log2 D pure qubits, where D is polynomially related to s0. Thus, for factoring, the required number of control qubits in our algorithm scales as m ∼ O[poly(n)], compared to m = 1 for DQC1, where n = log2 N is the number of target register qubits. Here we see that the numbers of qubits for the two problems are not exponentially separated. There is an important result of Shor and Jordan [18], which compares the computational power of DQC1 with an n-qubit target register and a model that is an m-control-qubit extension of DQC1. Their result claims that if m is logarithmically related to n, then this model still has the same computational power as DQC1. On the other hand, if m is polynomially related to n, then this model is computationally harder than DQC1. If we use n = log2 N, then the Shor and Jordan result makes clear that the numbers of control pure qubits m in these two different models are not separated exponentially, even though one model has higher computational power. However, like the time resource in these two models, D = 2^m in these two models are exponentially separated, which suggests that D may be preferred over m, in the context of these particular algorithms, as a good quantifier for a computational resource.

That the required number of control qubits scales as m ∼ O[poly(n)] is not too surprising, since we observe a similarity between our model and standard phase estimation. Our model has more in common with standard phase estimation than with DQC1, even though it is a hybrid extension of DQC1. We can see that by taking the average of momentum measurements in our model, we obtain the average of the eigenvalues of the Hamiltonian. The momentum average, however, does not give the normalized trace of the unitary matrix U, as may be expected from DQC1. This can be understood by taking a discretized version of our model, where one uses instead |x⟩ for x = 0, 1, 2, . . . , N. Then the circuit reduces to the standard phase estimation circuit, which requires the m = log2 N pure control qubits which we traded for a single qumode. From this, we can also see that our model using an infinite squeezing factor is an analog of standard phase estimation using an infinite number of qubits, which in both models allows us to attain infinite precision in phase estimation.
We add that this comparison with standard phase estimation further strengthens our claim that s0 ∼ 2^{2n} = N² is sufficient and maybe even necessary for factoring the number N. Suppose instead we only needed an exponentially smaller squeezing factor for factoring in a new algorithm. This would imply that a new algorithm performed on the qubit phase estimation circuit (i.e., the qubit analog of our algorithm) exists that can solve factoring with exponentially fewer control qubits compared to the currently known qubit phase estimation algorithm.

While qumodes like squeezed states can be used as a way of encoding qudits and qubits [38,40,41], the squeezing factor is still a resource that should be considered in its own right. Its emphasis over qudits is important for practical considerations. The practical advantages of considering qumode resources, in general, are that CVs typically use affordable off-the-shelf components and widely leveraged quantum optics techniques. They also have higher detection efficiencies at room temperature and can be fully integrated into current fiber-optics networks [42,43].

A computation is a physical process, and the amount of available physical resources can limit the power of a computation. In the power of one qumode model, we demonstrate how the squeezing factor can be viewed as a resource to quantitatively compare the difficulty of phase estimation problems like factoring and the hardest problem in the DQC1 computational class. Our model thus provides a unifying framework in which to compare the resources required for both DQC1 and factoring, as well as other problems based on phase estimation. In addition, we also explore the trade-off relations between the squeezing factor, the running time of the computation, and the interaction strength in our model. The physical resources commonly discussed as computational resources are time, space, and inverse precision. The definitions of computational complexity classes are also based on these [15,44,45]. We identify that squeezing can also be interpreted in terms of one of these resources: inverse precision. Furthermore, we can relate the squeezing factor to energy and qudit dimensionality. This highlights very explicitly the different ways one can quantify computational power.

N.L. would like to thank G. Adesso, R. Alexander, A. Ferraro, A. Garner, P. Humphreys, J. Ma, N. Menicucci, M. Paternostro, F. Pollock, M. Vidrighin, and B. Yadin for useful discussions. The authors also thank A. Furusawa. N.L. acknowledges support by the Clarendon Fund and Merton College of the University of Oxford and the hospitality of Monash University while part of this work was written. This work is also supported by the National Research Foundation (NRF), NRF-Fellowship (Reference No. NRF-NRFF2016-02); the Ministry of Education in Singapore Grant and the Academic Research Fund Tier 3 MOE2012-T3-1-009; the John Templeton Foundation Grant 53914 “Occam’s Quantum Mechanical Razor: Can Quantum Theory Admit the Simplest Understanding of Reality?”; the National Basic Research Program of China Grants No. 2011CBA00300 and No. 2011CBA00302; the National Natural Science Foundation of China Grants No. 11450110058, No. 61033001, and No. 61361136003; the EPSRC (UK); the Leverhulme Trust and the Oxford Martin School; and the National Research Foundation, Prime Minister’s Office, Singapore under its Competitive Research Programme (CRP Award No.
NRF-CRP14-2014-02), administered by the Centre for Quantum Technologies, National University of Singapore. Finally, the authors are grateful to the anonymous referees for their insightful comments and suggested changes to this paper.

We note that in DQC1 there is a method of reducing the control gate Uc = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ U to elementary (e.g., one- or two-qubit) circuits [23]. The analogous gate in the power of one qumode model is the hybrid control gate CU = exp(i x̂ ⊗ Hτ/x0), where we now set τ = x0 for convenience. We demonstrate how this gate can also be reduced to elementary operations, to further clarify the relationship between DQC1 and the power of one qumode.

We first write the DQC1 setup. The DQC1 setup begins with a polynomial sequence of elementary (e.g., one- or two-qubit) gates {u_k = exp(ih_k)}. We define the product of these gates to be Π_k u_k ≡ U = exp(iH). The next step is to implement a control-unitary on each u_k, so our collection of elementary gates is transformed into the set {λ_{u_k} ≡ |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ u_k}. The product of these gates recovers the controlled-unitary operation Uc = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ U appearing in the description of DQC1, since

Π_k λ_{u_k} = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ Π_k u_k = |0⟩⟨0| ⊗ 1 + |1⟩⟨1| ⊗ U = Uc.

The analogous requirement for the power of one qumode model is to begin from a polynomial sequence of elementary gates which can form the hybrid control-unitary operation CU = exp(i x̂ ⊗ H). We show how this can be achieved. Let us begin with the same set of elementary gates {u_k = exp(ix h_k)}. Instead of implementing the usual control unitary on each u_k, we implement a hybrid control unitary on each u_k. This means our set of elementary gates is modified into the new set {c_{u_k} ≡ exp(i x̂ ⊗ h_k)}. We can take the product of these operations and recover CU in the following way:

Π_k c_{u_k} = Π_k exp(i x̂ ⊗ h_k) = Π_k ∫ dx |x⟩⟨x| ⊗ e^{ix h_k} = ∫ dx |x⟩⟨x| ⊗ Π_k e^{ix h_k} = ∫ dx |x⟩⟨x| ⊗ e^{ixH} = e^{i x̂ ⊗ H} = CU,

where x is a number and we used Π_k e^{ix h_k} ≡ e^{ixH}, which must be satisfied for all x. This condition, combined with the definition that Π_k u_k = Π_k exp(ih_k) = exp(iH) = U, implies that [h_k, h_{k′}] = 0 for all k, k′ in the product [46]. Equivalently, this means {u_k} must be a commuting set of operators.

We can show that such a set {u_k}, where U = exp(iH) = Π_k u_k, exists for the factoring problem. We know that factoring the number N is equivalent to finding the order r of a random integer q, where 1 < q < N, which requires U|1 mod N⟩ = exp(iH)|1 mod N⟩ = |q mod N⟩. Since q is an integer, we can make a binary decomposition q − 1 = 2^0 b_0 + 2^1 b_1 + 2^2 b_2 + ··· + 2^f, where f is an integer and b_j = 0, 1. Then if we choose u_k to be an elementary operation defined by u_k|1 mod N⟩ = |(1 + 2^k b_k) mod N⟩, we can see that all operators in {u_k} commute and Π_{k=0}^{f} u_k |1 mod N⟩ = |q mod N⟩ = U|1 mod N⟩.

Suppose we begin with a coherent state |α⟩ in our model. The coherent state can be written in the position basis as |α⟩ = ∫ ⟨x|α⟩ |x⟩ dx, whose position wave function is

⟨x|α⟩ = (1/(π x0²))^{1/4} e^{−[x−Re(α)]²/(2x0²)} e^{i Im(α)x/x0} e^{−(i/2) Re(α)Im(α)},   (B2)

where x0 ≡ 1/√(mω) and m, ω are the mass and frequency scales, respectively, of the corresponding quantum harmonic oscillator.
By using G(x) ≡ ⟨x|α⟩ in Eq. (3), we find the momentum probability distribution of the final control state to be

P(p) = (x0/(√π 2^n)) Σ_m e^{−x0² {p − (τ/x0)[φ_m + Im(α)/τ]}²}.

If we measure the variable p_E ≡ p x0/τ (where the inputs x0 and τ are initially known), the probability distribution for p_E is

P(p_E) = (τ/(√π 2^n)) Σ_m e^{−τ² {p_E − [φ_m + Im(α)/τ]}²}.

Thus, the coherent state can be used for phase estimation, where the accuracy of the phase estimation improves with increasing running time of the hybrid gate.

Suppose we want to recover any eigenvalue of our Hamiltonian to accuracy ΔE. The total number of p_E measurements required for an average of one success is

T_measure ∼ 1/P_ΔE,   (C1)

where P_ΔE is the probability of retrieving the eigenvalues to within the interval [φ_j − ΔE, φ_j + ΔE]. Using Eq. (5) we find

P_ΔE ≡ P(p_E; |p_E − φ_n| ≤ ΔE) = (s0τ/(√π 2^n)) Σ_{l=1}^{2^n} ∫_{φ_l−ΔE}^{φ_l+ΔE} Σ_{m=1}^{2^n} e^{−(s0τ)²(p_E−φ_m)²} dp_E ≡ P(l = m) + P(l ≠ m),

where

P(l = m) = (s0τ/(√π 2^n)) Σ_{m=1}^{2^n} ∫_{φ_m−ΔE}^{φ_m+ΔE} e^{−(s0τ)²(p_E−φ_m)²} dp_E = erf(s0 τ ΔE)   (C4)

and P(l ≠ m) = (1/2^n) Σ_{l≠m} (erf{s0τ[(φ_l − φ_m) + ΔE]} − erf{s0τ[(φ_l − φ_m) − ΔE]}) > 0. These two contributions to the total probability P_ΔE can be interpreted in the following way. P(l = m) is the probability of finding φ_n to within ΔE if the Gaussian peaks are very far apart. This occurs when the spread of each Gaussian is much smaller than the distance between neighboring Gaussian peaks, 1/(s0τ) ≪ Δφ_min, where Δφ_min is the minimum gap between adjacent eigenvalues. P(l ≠ m) captures the overlaps between the Gaussians. This overlap contribution vanishes for large N, so for simplicity we neglect this term. This neglect will not affect the overall validity of our result. We can now write P_ΔE > P(l = m) = erf(s0 τ ΔE).

By demanding T_measure < T_bound, then using Eqs. (C1) and (C4), we find it is sufficient to satisfy

T_bound erf(τ s0 ΔE) ≳ 1.   (C5)

For large τ s0 ΔE, the above inequality is automatically satisfied. This assumes that τ s0 grows more quickly in N than the inverse of the eigenvalue uncertainty ΔE that we are willing to tolerate. More generally, however, it is the time and squeezing resources we want to minimize for a given precision, so τ s0 ΔE is small. In this case, Eq. (C5) becomes T_bound τ s0 ΔE ≳ 1.

Here we provide a brief argument of how eigenvectors of the Hamiltonian {|φ_j⟩} can also be found using our model. The hybrid state ρ_total after application of the hybrid gate is

ρ_total = (1/2^n) ∫∫ G(x)G*(x′) e^{i(x−x′)Hτ/x0} ⊗ |x⟩⟨x′| dx dx′ = (1/2^n) Σ_m ∫∫ G(x)G*(x′) e^{i(x−x′)φ_m τ/x0} |φ_m⟩⟨φ_m| ⊗ |x⟩⟨x′| dx dx′.

After a momentum measurement we are in the following state of the target register:

⟨p|ρ_total|p⟩ = (1/2^n) Σ_m G̃(φ_m τ/x0 − p) G̃*(p − φ_m τ/x0) |φ_m⟩⟨φ_m|.

For a squeezed state G(x) = [1/(√s π^{1/4})] exp[−x²/(2s²)], the final state of the target register becomes

⟨p|ρ_total|p⟩ = (s/(√π 2^n)) Σ_m e^{−s²(p−φ_m τ/x0)²} |φ_m⟩⟨φ_m|.

Approximate eigenvectors can thus be obtained by measurement of the target state. The probability of obtaining the eigenvectors of the Hamiltonian is distributed in the same way as for the eigenvalues. Eigenvector identification therefore also improves with an increase in the squeezing factor.

Here we derive the number of momentum measurements T_DQC1 in our model needed to recover the normalized trace of U ≡ exp(iH) to within error δ. We show that this is upper bounded by a quantity independent of the size of U. Let us begin by introducing a new random variable y ≡ exp(ip_E x0), where the p_E are the measurement outcomes from our model. The probability distribution function with respect to y can be rewritten as

P_y(y) = ∫ δ(y − e^{ip_E x0}) P(p_E) dp_E,   (E1)

where P(p_E) is given by Eq. (5).
We find that the average of y is related to the normalized trace of the unitary matrix U:

∫ y P_y(y) dy = ∫ e^{ip_E x0} P(p_E) dp_E = e^{−1/(4s0²)} Tr(U_τ)/2^n.

We now let τ = 1, since U_{τ=1} = U. To find the normalized trace of U to error δ is equivalent to finding the average of y to within ε, where

∫ y P_y(y) dy ± ε = e^{−1/(4s0²)} [Tr(U)/2^n ± δ].

For concreteness, we first separately examine recovering the real part of the normalized trace of U to within Re(δ), then the imaginary part of the trace to within Im(δ).

Real part of the normalized trace of U. We define a new random variable y_R ≡ Re(y) = cos(p_E x0), whose average is within Re(ε) of the real part of the damped normalized trace. The probability distribution with respect to y_R is

P_{y_R}(y_R) = ∫ δ[y_R − cos(p_E x0)] P(p_E) dp_E.   (E4)

We can employ the central limit theorem [47] and Eq. (E4) to find the number t_R of necessary p_E measurements to be

t_R ∼ e^{1/(2s0²)} σ_R²/Re(δ)²,   (E6)

where σ_R² is the variance of the probability distribution with respect to y_R. Using Eqs. (E1) and (5) we can show

σ_R² ≡ ∫ y_R² P_{y_R}(y_R) dy_R − [∫ y_R P_{y_R}(y_R) dy_R]² = ∫ cos²(p_E x0) P(p_E) dp_E − [∫ cos(p_E x0) P(p_E) dp_E]² = (1/2^n) Σ_{m=1}^{2^n} cos²(φ_m) − e^{−1/(2s0²)} [(1/2^n) Σ_{m=1}^{2^n} cos(φ_m)]² + ··· ≤ F(s0).   (E7)

We can now use Eqs. (E6) and (E7) to find an upper bound to the number of measurements, t_R ≤ e^{1/(2s0²)} F(s0)/Re(δ)².

Imaginary part of the normalized trace of U. To recover the imaginary part of the normalized trace of U to within an error Im(δ), we average y_I ≡ Im(y) = sin(p_E x0). The probability distribution with respect to y_I is

P_{y_I}(y_I) = ∫ δ[y_I − sin(p_E x0)] P(p_E) dp_E.

We can similarly use the central limit theorem in this case to find the necessary number of measurements t_I,

t_I ∼ e^{1/(2s0²)} σ_I²/Im(δ)²,

where σ_I² is the variance with respect to the probability distribution P_{y_I}(y_I). We can show

σ_I² ≡ ∫ y_I² P_{y_I}(y_I) dy_I − [∫ y_I P_{y_I}(y_I) dy_I]² = (1/2^n) Σ_{m=1}^{2^n} sin²(φ_m) − e^{−1/(2s0²)} [(1/2^n) Σ_{m=1}^{2^n} sin(φ_m)]² + ··· ≤ F(s0),

so t_I ≤ e^{1/(2s0²)} F(s0)/Im(δ)². This means the number of required measurements to recover the normalized trace of U to within δ has the upper bound

T_DQC1 = max(t_R, t_I) ≤ e^{1/(2s0²)} F(s0)/[min{Re(δ), Im(δ)}]².

Here we give the derivation of the number of runs T_factor needed to recover a nontrivial factor of N given the momentum probability distribution [Eq. (9)]

P(p_E) = (s0 τ/(√π 2^n)) Σ_m c_m e^{−(2π s0 τ)² (p_E/2π − m/r)²}.

We want to find the probability P_r with which one can retrieve the correct value of the order r. The number of runs required on average to find a nontrivial factor of N is inversely related to this probability: T_factor ∼ 1/P_r.   (F2)

Here we derive a lower bound to P_r (hence an upper bound to the number of runs) that satisfies the following two conditions. To recover r it is sufficient (i) to know m/r to an accuracy within 1/(2N²) and (ii) to choose cases where m and r have no factors in common, so that their greatest common divisor is 1 [i.e., gcd(m, r) = 1]. The first condition comes from the continued fractions algorithm [48], which can be used to exactly recover the rational number m/r given some φ when |φ − m/r| ≤ 1/(2r²). Since r ≤ N, a sufficient condition is |φ − m/r| ≤ 1/(2N²). The second condition ensures we recover r instead of a nontrivial factor of r. We see how to satisfy the second condition later on.
To satisfy the first condition, we see that the probability of finding m/r to within 1/(2N²) when measuring p̃_E ≡ p_E/(2π) is

P_r ≡ P(p̃_E; |p̃_E − m/r| ≤ 1/(2N²)) = (s0τ/(√π 2^n)) Σ_{l=0}^{r−1} ∫_{l/r−1/(2N²)}^{l/r+1/(2N²)} Σ_m c_m e^{−(2πs0τ)²(p̃_E − m/r)²} 2π dp̃_E > (s0τ/(√π 2^n)) Σ_m c_m ∫_{m/r−1/(2N²)}^{m/r+1/(2N²)} e^{−(2πs0τ)²(p̃_E − m/r)²} 2π dp̃_E.

Note that we do not require contributions to the probability from every m in the summation. In order to successfully retrieve r from the fraction m/r, we need only consider the cases where gcd(m, r) = 1. Euler’s totient function Φ(r) represents the number of cases where m and r are coprime with m < r. It can be shown that Φ(r) > r/{e^γ ln[ln(r)]}, where γ is Euler’s constant [2]. In the cases where gcd(m, r) = 1, the amplitude c_m ≡ M, where M is the number of cases where r_d = r. It is also possible to show that when N = v1 v2 (where v1 and v2 are prime numbers), M > (v1 − 1)(v2 − 1) [9]. Then the probability of retrieving the correct r from the probability distribution is at least

P_r > [M Φ(r)/2^n] erf(π s0 τ/2^{2n}) > [(v1 − 1)(v2 − 1) r/(e^γ 2^n ln[ln(r)])] erf(π s0 τ/2^{2n}).   (F4)

From Eqs. (F2) and (F4) we now have an upper bound to the number of time steps required:

T_factor < e^γ N ln[ln(N)]/{(v1 − 1)(v2 − 1) erf(π s0 τ/2^{2n})}.

The large N limit (where v1, v2 ≫ 1) gives our result

T_factor < e^γ ln[ln(N)]/erf(π s0 τ/2^{2n}) = e^γ ln[ln(2^n)]/erf(π s0 τ/2^{2n}).

[1] David Deutsch and Richard Jozsa, Rapid solution of problems by quantum computation, Proc. R. Soc. London, Ser. A 439, 553 (1992).
[2] Peter W. Shor, Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer, SIAM J. Comput. 26, 1484 (1997).
[3] Lov K. Grover, A fast quantum mechanical algorithm for database search, in Proceedings of the Twenty-eighth Annual ACM Symposium on Theory of Computing (ACM, New York, 1996), p. 212.
[4] Aram W. Harrow, Avinatan Hassidim, and Seth Lloyd, Quantum Algorithm for Linear Systems of Equations, Phys. Rev. Lett. 103, 150502 (2009).
[5] Richard Jozsa and Noah Linden, On the role of entanglement in quantum-computational speed-up, Proc. R. Soc. London A 459, 2011 (2003).
[6] Emanuel Knill and Raymond Laflamme, Power of One Bit of Quantum Information, Phys. Rev. Lett. 81, 5672 (1998).
[7] B. P. Lanyon, M. Barbieri, M. P. Almeida, and A. G. White, Experimental Quantum Computing without Entanglement, Phys. Rev. Lett. 101, 200501 (2008).
[8] Animesh Datta and Guifre Vidal, Role of entanglement and correlations in mixed-state quantum computation, Phys. Rev. A 75, 042310 (2007).
[9] S. Parker and Martin B. Plenio, Efficient Factorization with a Single Pure Qubit and log N Mixed Qubits, Phys. Rev. Lett. 85, 3049 (2000).
[10] Carlton M. Caves, Quantum-mechanical noise in an interferometer, Phys. Rev. D 23, 1693 (1981).
[11] Alex Monras, Optimal phase measurements with pure Gaussian states, Phys. Rev. A 73, 033821 (2006).
[12] Olivier Pinel, Pu Jian, N. Treps, C. Fabre, and Daniel Braun, Quantum parameter estimation using general single-mode Gaussian states, Phys. Rev. A 88, 040102 (2013).
[13] Seth Lloyd and Samuel L. Braunstein, Quantum Computation over Continuous Variables, Phys. Rev. Lett. 82, 1784 (1999).
[14] Mile Gu, Christian Weedbrook, Nicolas C. Menicucci, Timothy C. Ralph, and Peter van Loock, Quantum computing with continuous-variable clusters, Phys. Rev. A 79, 062318 (2009).
[15] Mikhail J. Atallah and Marina Blanton, Algorithms and Theory of Computation Handbook: Special Topics and Techniques (CRC Press, Boca Raton, FL, 2009), Vol. 2.
[16] Seth Lloyd, Hybrid quantum computing, in Quantum Information with Continuous Variables (Springer, Berlin, 2003), p. 37.
[17] Akira Furusawa and Peter van Loock, Quantum Teleportation and Entanglement: A Hybrid Approach to Optical Quantum Information Processing (Wiley & Sons, New York, 2011).
[18] Peter W. Shor and Stephen P. Jordan, Estimating Jones polynomials is a complete problem for one clean qubit, Quantum Inf. Comput. 8, 681 (2008).
[19] Dan Shepherd, Computation with unitaries and one pure qubit.
[20] David Poulin, Robin Blume-Kohout, Raymond Laflamme, and Harold Ollivier, Exponential Speedup with a Single Bit of Quantum Information: Measuring the Average Fidelity Decay, Phys. Rev. Lett. 92, 177906 (2004).
[21] Emanuel Knill and Raymond Laflamme, Quantum computing and quadratically signed weight enumerators, Inform. Process. Lett. 79, 173 (2001).
[22] Animesh Datta, Steven T. Flammia, and Carlton M. Caves, Entanglement and the power of one qubit, Phys. Rev. A 72, 042316 (2005).
[23] Animesh Datta, Studies on the Role of Entanglement in Mixed-state Quantum Computation, Ph.D. thesis, The University of New Mexico, 2008.
[24] Richard Cleve, Artur Ekert, Chiara Macchiavello, and Michele Mosca, Quantum algorithms revisited, Proc. R. Soc. London A 454, 339 (1998).
[25] It is also possible to define a control gate controlled on the particle number operator instead of x̂. However, analytical solutions in this case are not straightforward, and for our purposes it suffices to look at our current hybrid control gate.
[26] Here we use natural units ℏ = 1 = c.
[27] The operators x̂ and p̂ satisfy the canonical commutation relation [x̂, p̂] = i.
[28] Christopher Gerry and Peter Knight, Introductory Quantum Optics (Cambridge University Press, Cambridge, U.K., 2005).
[29] This is equivalent to the initial expectation value of the momentum of the coherent state.
[30] Here s0 is a real number in the range s0 ∈ [1,∞).
[31] Julia Kempe, Alexei Kitaev, and Oded Regev, The complexity of the local Hamiltonian problem, SIAM J. Comput. 35, 1070 (2006).
[32] See Appendix D. Also, see [33] for another algorithm on eigenvector retrieval.
[33] Daniel S. Abrams and Seth Lloyd, Quantum Algorithm Providing Exponential Speed Increase for Finding Eigenvalues and Eigenvectors, Phys. Rev. Lett. 83, 5162 (1999).
[34] Note that the numbers of momentum measurements and p_E measurements needed are equivalent.
[35] The F(s0) overhead is analogous to the case in DQC1 when using a slightly mixed probe state instead of the pure state |+⟩⟨+| [23]. The degree of mixedness does not affect the result that the computation is efficient. The amount of squeezing in our model thus corresponds to the degree of mixedness in the input state of DQC1. Higher squeezing corresponds to greater purity.
[36] This ensures that m/r is recovered exactly by using the continued fractions algorithm. See [48] for an explicit demonstration.
[37] Note that T_bound in this case corresponds to the number of momentum measurements needed to find the correct eigenvalue of the Hamiltonian. From the eigenvalue, one still needs an extra classically efficient step to find the factor, so T_factor > T_bound.
[38] Daniel Gottesman, Alexei Kitaev, and John Preskill, Encoding a qubit in an oscillator, Phys. Rev. A 64, 012310 (2001).
[39] D = 2 is equivalent to a qubit.
[40] B. M. Terhal and D. Weigand, Encoding a qubit into a cavity mode in circuit-QED using phase estimation, Phys. Rev. A 93, 012315 (2016).
[41] Brian Vlastakis, Gerhard Kirchmair, Zaki Leghtas, Simon E. Nigg, Luigi Frunzio, Steven M. Girvin, Mazyar Mirrahimi, Michel H. Devoret, and Robert J. Schoelkopf, Deterministically encoding quantum information using 100-photon Schrödinger cat states, Science 342, 607 (2013).
[42] Samuel L. Braunstein and Peter van Loock, Quantum information with continuous variables, Rev. Mod. Phys. 77, 513 (2005).
[43] Christian Weedbrook, Stefano Pirandola, Raúl García-Patrón, Nicolas J. Cerf, Timothy C. Ralph, Jeffrey H. Shapiro, and Seth Lloyd, Gaussian quantum information, Rev. Mod. Phys. 84, 621 (2012).
[44] Michael R. Garey and David S. Johnson, Computers and Intractability (W. H. Freeman, London, 2002), Vol. 29.
[45] Stephen A. Cook, The complexity of theorem-proving procedures, in Proceedings of the Third Annual ACM Symposium on Theory of Computing (ACM, New York, 1971), p. 151.
[46] Note that this also means H = Σ_k h_k.
[47] Since we are selecting our random variable independently and from the same distribution, which has finite mean and variance, it is valid to use the central limit theorem.
[48] Michael A. Nielsen and Isaac L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, U.K., 2010).
Several New Papers now Online

The end of the year is rapidly approaching, and I’d like to draw your attention to three papers of mine that have been recently published online.

Theoretical Characterization of Conduction-Band Electrons in Photodoped and Aluminum-Doped Zinc Oxide (AZO) Quantum Dots

This first paper was a fun collaboration between our theoretical work in the Li group and the experimental work done by Dan Gamelin at the UW. The paper looks at calculated UV-Vis spectra of photodoped and aluminum-doped ZnO quantum dots. Both types of doping added “extra” electrons to the conduction band, and this resulted in some interesting properties that you can see in both theory and experiment. What I was most surprised to see was that the “extra” electron simply acted like an electron moving in a spherical potential, just like a hydrogen atom! On the left we have the HOMO, which looks like an s-orbital. Then the two on the right are different LUMOs, one that looks like a p-orbital and another that looks like a d-orbital. We called them “super-orbitals”, since they look like atomic orbitals but exist in these tiny crystal structures. I think these are the first images of DFT-computed “super-orbitals” ever published, though the “artificial atom” paradigm has been around for some time. It’s interesting to see complicated systems behaving like the simple quantum mechanical models we study when we first learn quantum mechanics!

J. J. Goings, A. Schimpf, J. W. May, R. Johns, D. R. Gamelin, X. Li, “Theoretical Characterization of Conduction-Band Electrons in Photodoped and Aluminum-Doped Zinc Oxide (AZO) Quantum Dots,” J. Phys. Chem. C, 2014, 118, 26584.

Assessment of Low-scaling Approximations to the Equation of Motion Coupled-Cluster Singles and Doubles Equations

This second paper details the implementation of several low-scaling methods for computing excited states (e.g. computing UV/Vis absorption). The idea was to take the highly accurate EOM-CCSD equations and simplify them using perturbation theory. That way, we could keep most of the accuracy of the method, while still having fast enough equations that large molecules could be studied. The resulting equations are closely related to other methods like CC2, CIS(D), and ADC(2), and I showed how they all relate to each other. We compared the performance to EOM-CCSD as well as to experimental values. The results were promising, and the perturbative equations performed particularly well for Rydberg states. CC2, on the other hand, performs great for valence excitations.

J. J. Goings, M. Caricato, M. Frisch, X. Li, “Assessment of Low-scaling Approximations to the Equation of Motion Coupled-Cluster Singles and Doubles Equations,” J. Chem. Phys., 2014, 141, 164116.

Ab Initio Non-Relativistic Spin Dynamics

Finally, in this paper, we extended the generalized Hartree-Fock method to the time domain. In this proof-of-concept paper, we showed how a magnetic field can guide the spin dynamics of simple spin-frustrated systems. The key is reformulating the real-time time-dependent Hartree-Fock equations in the complex spinor basis. This allows the spin magnetizations on each atom to vary as a response to, say, an externally applied magnetic field. Here’s an example with a lithium trimer. Initially (left picture), all the spin magnetizations (that is, the spatial averages of the spin magnetic moments) point away from each other. Then, applying a static magnetic field (right picture) into the plane of the molecule causes each magnetization to precess at the Larmor frequency. The precession is shown in picoseconds by the colorization.
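(To get a feel for the time scale, here is a toy classical-spin analogue of mine — not the paper’s real-time Hartree-Fock machinery — integrating the precession of a magnetization in a static field; the 10 T field strength is made up for illustration, and the sign of the torque depends on the species’ gyromagnetic convention.)

```python
# Toy Larmor precession: dm/dt = -gamma * (m x B) for an electron-like spin.
import numpy as np

gamma = 1.76e11                  # electron gyromagnetic ratio, rad s^-1 T^-1
B = np.array([0.0, 0.0, 10.0])   # hypothetical 10 T static field along z
m = np.array([1.0, 0.0, 0.0])    # magnetization initially in the x-y plane

dt = 1e-14                       # 10 attosecond Euler steps (crude)
for step in range(1000):
    m = m + dt * (-gamma) * np.cross(m, B)
    m /= np.linalg.norm(m)       # renormalize to tame Euler drift

f_larmor = gamma * np.linalg.norm(B) / (2 * np.pi)
print(f"Larmor frequency: {f_larmor/1e9:.0f} GHz")   # ~280 GHz at 10 T
```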
It’s really a beautiful idea in my opinion, and there is so much more to be done with it. For example, in our simple ab initio model, the spins only "talk" through Pauli repulsion, so they behave more or less independently. What would happen if we include spin-orbit coupling and other perturbations? That remains to be seen.
F. Ding, J. J. Goings, M. Frisch, X. Li, "Ab Initio Non-Relativistic Spin Dynamics," J. Chem. Phys., 2014, 141, 214111.
Broken Symmetries in Hartree-Fock
It's been a busy summer, with traveling and giving talks and writing papers. I'm back in Seattle again (yay!), and have been thinking a lot about how we get the simplest models of electrons in molecules, the Hartree-Fock equations. While most theorists know about restricted Hartree-Fock (RHF) and unrestricted Hartree-Fock (UHF), not as many know that these can be derived from simple (okay, sometimes not-so-simple) symmetry considerations. RHF and UHF are actually subgroups of a larger class of problems called the generalized Hartree-Fock (GHF) equations. GHF allows mixed spin descriptions (not just simple spin up or spin down), as well as complex wave functions. These are actually quite important for describing frustrated spin systems, like a simple triangular arrangement of spins: the spins can't all align favorably, so we call it "spin frustrated" (a classical toy version is sketched below). You can try this out on your own with, say, a triangle of hydrogens or lithiums or chromiums. UHF won't give you the lowest energy solution. It will be GHF-unstable. Only GHF can give you the lowest energy solution, since it allows the MOs to take on both "spin up" and "spin down" character. Weird, right?
If we insist that the GHF equations are invariant with respect to spin rotations along an axis (say, the z-axis), we get the UHF equations back. If we insist that the GHF equations are invariant to time reversal (doesn't that sound cool?) as well as to spin rotations along all axes (x, y, z), we get the real RHF equations. Unfortunately, the literature on this is pretty sparse, and the content pretty dense. A paradox, if I've ever seen one. Thankfully, I found you can derive all the relationships using the tools of linear algebra. The results are illuminating, and it's fun to see how the different broken-symmetry HF solutions relate to each other. So here's my take on re-deriving the different HF classes from generalized Hartree-Fock!
We want to classify broken-symmetry wave functions by investigating how they transform under the action of the invariance operators $\hat{g}$ constituting the symmetry group $\mathcal{G}$ of the spin-free electronic Hamiltonian $\hat{H}$,

$[\hat{g}, \hat{H}] = 0 \quad \forall\, \hat{g} \in \mathcal{G},$

which is equivalent to

$\hat{g}\hat{H}\hat{g}^{-1} = \hat{H}.$

This simply means $\hat{H}$ is invariant to transformation by $\hat{g}$, whatever $\hat{g}$ may be. In general, when $\psi$ is an eigenstate of $\hat{H}$, then $\hat{g}\psi$ is also an eigenstate belonging to the same eigenvalue as $\psi$. Now, exact eigenstates of $\hat{H}$ can be chosen to be simultaneous eigenstates of the various symmetries in $\mathcal{G}$. In contrast, for approximate variational wave functions (e.g., Hartree-Fock) the symmetry requirements represent additional constraints. This is Löwdin's "symmetry dilemma". To state this dilemma again: we know the exact (lowest energy) solution will have certain symmetries, but if we build these symmetries into our approximate, variational wave function, we can only raise the energy, never lower it. This means we can get closer to the exact solution by removing physical constraints! This is troublesome, but not a huge deal in practice. It's crucial to recognize that this problem arises only because we use an approximate independent-particle Hamiltonian.
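Before diving into the derivation, here's the classical toy version of the frustration mentioned above (my own illustration, not from any paper): three antiferromagnetically coupled classical spins on a triangle cannot all anti-align pairwise, and the best compromise is a 120-degree arrangement.

```python
# Classical spin frustration on a triangle: minimize
# E = J * sum_<ij> S_i . S_j for three unit spins in a plane (J > 0,
# antiferromagnetic). Collinear "up, up, down" gives E = -J; the frustrated
# optimum is the noncollinear 120-degree arrangement with E = -1.5*J.
import numpy as np
from scipy.optimize import minimize

J = 1.0

def energy(thetas):
    s = np.column_stack([np.cos(thetas), np.sin(thetas)])  # three unit spins
    return J * (s[0] @ s[1] + s[1] @ s[2] + s[0] @ s[2])

res = minimize(energy, x0=np.random.default_rng(0).uniform(0, 2 * np.pi, 3))
angles = np.degrees(np.sort(res.x % (2 * np.pi)))
print(round(res.fun, 6))         # -1.5
print(np.diff(angles).round(1))  # ~[120. 120.]
```

The quantum analog of this compromise is exactly the noncollinear spin structure that GHF can represent and UHF cannot.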
There are several ways to break the single-determinantal Hartree-Fock model into its various broken-symmetry subgroups. However we do this, what we ultimately want to determine is the form of the operators $\hat{F}$ that are invariant with respect to similarity transformations by the various subgroups $\mathcal{H} \subseteq \mathcal{G}$, i.e.

$\hat{g}\hat{F}\hat{g}^{-1} = \hat{F} \quad \forall\, \hat{g} \in \mathcal{H}.$

I find it easier to put $\hat{F}$ into a finite basis and treat this problem with the tools of linear algebra. The basis for our independent-particle model will be the spinor basis, so any arbitrary spin function can be written as a two-component spinor,

$\psi(\mathbf{r}) = \psi^{\alpha}(\mathbf{r})\,\alpha + \psi^{\beta}(\mathbf{r})\,\beta.$

We will only worry about one-body operators $\hat{F}$, which include the Fock operator as well as the unitary parameterization of the single determinant (cf. the Thouless representation). Putting $\hat{F}$ into a spinor basis gives us the block structure

$\mathbf{F} = \begin{pmatrix} \mathbf{F}^{\alpha\alpha} & \mathbf{F}^{\alpha\beta} \\ \mathbf{F}^{\beta\alpha} & \mathbf{F}^{\beta\beta} \end{pmatrix}$

or, in second quantization, $\hat{F} = \sum_{pq} F_{pq}\, a_p^{\dagger} a_q$. The transformation properties of $\hat{F}$ under symmetry operations can be determined by examining the transformation of the generators $a_p^{\dagger} a_q$, but it is much simpler to consider the transformations of the (block) matrix $\mathbf{F}$, e.g. $\bar{g}\,\mathbf{F}\,\bar{g}^{-1} = \mathbf{F}$, where $\bar{g}$ is the matrix representation of $\hat{g}$. Basically, we are looking for the constraints on $\mathbf{F}$ that make the above equation true, for any given symmetry operation. The general invariance group of the spin-free electronic Hamiltonian involves the spin rotation group SU(2) and the time-reversal group $\mathcal{T}$, i.e. $\mathcal{G} = $ SU(2) $\otimes\, \mathcal{T}$. SU(2) can be given in terms of spin operators,

SU(2) $= \left\{ e^{-i(\theta_x \hat{S}_x + \theta_y \hat{S}_y + \theta_z \hat{S}_z)} \right\}.$

All of this amounts to performing rotations in spin space. The form looks awfully similar to rotations in 3D space (in fact, the relationship between rotations in spin space and 3D space is much deeper than that). The time-reversal group is given as $\mathcal{T} = \{1, \hat{\Theta}\}$. In general,

$\hat{\Theta} = e^{-i\pi\hat{S}_y}\,\hat{K},$

where $\hat{K}$ is the complex conjugation operator. There is nothing special about using $\hat{S}_y$; it's just convention. Time reversal doesn't really affect time per se, but rather changes the direction of movement, be it linear momentum or angular momentum. It's an antiunitary operator (it must be, in fact) that consists of the spin-looking part (unitary) and then the complex conjugation operator (antiunitary), to be antiunitary overall. It affects electrons by flipping their spin and then taking the complex conjugate. One final detail about the symmetry operators before we go on, and that is to observe that since SU(2) is a double cover, if we rotate in spin space by $2\pi$, we flip the sign of the wave function. If we rotate by another $2\pi$, we get the original state back. This is characteristic of fermions, of which electrons are a part.
Let us now consider unitary transformations of the form $\hat{U} = e^{i\lambda\hat{A}}$, where $\hat{A} = \hat{A}^{\dagger}$, acting on some operator $\hat{F}$. This is valid for any unitary transformation that depends on a continuous parameter $\lambda$. We insist

$e^{i\lambda\hat{A}}\,\hat{F}\,e^{-i\lambda\hat{A}} = \hat{F}.$

Now, by the Baker-Campbell-Hausdorff expansion, we can rewrite the left-hand side as

$\hat{F} + i\lambda\,[\hat{A},\hat{F}] + \frac{(i\lambda)^2}{2!}\,[\hat{A},[\hat{A},\hat{F}]] + \cdots$

Since by definition this must equal $\hat{F}$, it suffices to show that the constraints introduced by the symmetry operation are satisfied when $[\hat{A},\hat{F}] = 0$. This is an excellent result, because it means that we can just look at how the matrix $\mathbf{F}$ transforms under the Pauli spin matrices which define the SU(2) spin rotations, as well as under the time-reversal operation (in the time-reversal case, we can use the BCH expansion of the unitary part and then absorb the complex conjugation into the commutator expression). We can show how RHF, UHF, and GHF all fall out of the different symmetry combinations (or lack thereof). From here on out, I'll drop the hats on my operators for clarity. Let's start by considering how many different types of symmetry subgroups we can have. A moment's thought about the form of the spin rotation and time-reversal groups gives the following subgroups:
1. $\{1\}$. This means there is no symmetry.
2. $\{\Theta\}$. This means we are just symmetric to time reversal.
3. $\{K\}$. Only symmetric to complex conjugation.
4. $\{S_z\}$. Only symmetric to spin rotations about the z-axis.
5. $\{S_z, \Theta\}$. Symmetric to spin rotation about z, as well as time reversal.
6. $\{S_z, K\}$. Symmetric to spin rotation about z, as well as complex conjugation.
7. $\{S_x, S_y, S_z\}$. Symmetric to all spin rotations.
8. $\{S_x, S_y, S_z, \Theta\}$. Symmetric to all spin rotations and time reversal.

That's it! That's all the cases you can get. Let's go through them, case by case, using the complex Fock matrix as an example, i.e.

$\mathbf{F} = \begin{pmatrix} \mathbf{F}^{\alpha\alpha} & \mathbf{F}^{\alpha\beta} \\ \mathbf{F}^{\beta\alpha} & \mathbf{F}^{\beta\beta} \end{pmatrix},$

where we solve, for each symmetry (or group of symmetries), $[\mathbf{A}, \mathbf{F}] = 0$, where $\mathbf{A}$ is the generator of the symmetry.

$\{1\}$. No symmetry. In this case, our transformation on $\mathbf{F}$ is rather simple. It looks like $\mathbf{1}\,\mathbf{F}\,\mathbf{1} = \mathbf{F}$, so we get no constraints. We can mix spin, as well as take on complex values. This is the structure of the complex generalized Hartree-Fock (GHF) Fock matrix.

$\{K\}$. Complex conjugation symmetry. If the only symmetry that holds is complex conjugation, our transformation looks like $K\,\mathbf{F}\,K = \mathbf{F}^{*} = \mathbf{F}$. Note that $K$ is its own inverse. It also only acts to either the left or the right. The asterisk indicates complex conjugation (not an adjoint!). The constraint we get here is that the values of the Fock matrix have to be identical on complex conjugation. Since this can only happen if the values are real, we get the real GHF equations.

$\{\Theta\}$. Time-reversal symmetry. Now we start to get slightly more complicated. Using the Pauli matrix

$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}$

to represent the unitary part of $\Theta$ gives us the conditions $\mathbf{F}^{\beta\beta} = (\mathbf{F}^{\alpha\alpha})^{*}$ and $\mathbf{F}^{\beta\alpha} = -(\mathbf{F}^{\alpha\beta})^{*}$. We see that this really only introduces two constraints, so we choose to eliminate $\mathbf{F}^{\beta\beta}$ and $\mathbf{F}^{\beta\alpha}$. This gives the final result of paired GHF, or

$\mathbf{F} = \begin{pmatrix} \mathbf{F}^{\alpha\alpha} & \mathbf{F}^{\alpha\beta} \\ -(\mathbf{F}^{\alpha\beta})^{*} & (\mathbf{F}^{\alpha\alpha})^{*} \end{pmatrix}.$

$\{S_z\}$. Rotation about the spin z-axis. Here we use the Pauli matrix

$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$

and show that $[\sigma_z, \mathbf{F}] = 0$, which is only satisfied if $\mathbf{F}^{\alpha\beta} = \mathbf{F}^{\beta\alpha} = \mathbf{0}$. This gives us the complex version of UHF. We see invariance with respect to $S_z$ results in two separate spin blocks, with no restriction on whether they take real or complex values, or on the dimension of either spin block.

$\{S_z, \Theta\}$. Rotation about the spin z-axis and time reversal. We are now at the point where we examine the effect of invariance to multiple symmetry operations. It might concern you that considering multiple symmetry operations means that the order in which we perform symmetry operations matters. In general symmetry operations do not commute, but we will see that for our purposes order really doesn't matter. Because we insist on invariance, the multiple symmetries never actually act on each other; therefore we don't need to consider the commutator between them. Or, put a different way: because each symmetry operation returns the system to its original state, we can consider each operation separately. The system contains no memory of the previous symmetry operation. We can show this another way using the BCH expansion. Consider two symmetry operations on $\mathbf{F}$ parameterized by $\mathbf{A}$ and $\mathbf{B}$:

$e^{i\mathbf{B}}\, e^{i\mathbf{A}}\, \mathbf{F}\, e^{-i\mathbf{A}}\, e^{-i\mathbf{B}} = \mathbf{F},$

which is true if and only if $[\mathbf{F}, i\mathbf{A}] = [\mathbf{F}, i\mathbf{B}] = 0$, which decouples $\mathbf{A}$ and $\mathbf{B}$. Considering multiple symmetry operations only gives us more constraints, and order doesn't matter. Let's see it in action then for time reversal and z-axial spin symmetry. Using the results for time-reversal symmetry, we have

$\mathbf{F} = \begin{pmatrix} \mathbf{F}^{\alpha\alpha} & \mathbf{0} \\ \mathbf{0} & (\mathbf{F}^{\alpha\alpha})^{*} \end{pmatrix},$

which means the off-diagonals must go to zero, giving our final result of paired UHF.

$\{S_z, K\}$. Rotation about the spin z-axis and complex conjugation. We do a similar thing as above for rotation about the spin z-axis and complex conjugation. This one is particularly easy to show, starting from the results of complex conjugation symmetry.
Since symmetry with respect to $K$ forces all matrix elements to be real, we just get the real version of the results of symmetry with respect to $S_z$ — we get the real UHF equations!

$\{S_x, S_y, S_z\}$. Rotation about all spin axes. We finally move on to invariance with respect to rotations about all spin axes. Again, this is a little weird because these operations aren't commutative, but we have already shown that insisting on invariance leads to a decoupling of the symmetry operations. (I should mention that if you are still unsure of this, feel free to brute-force through all orders of symmetry operations. You'll see that it makes no difference.) Two things are worth mentioning here: first, technically $S_z$ is already a part of the total spin rotation group, so it's a little weird to separate the cases into $S_z$ and $S^2$, but we understand this as drawing the distinction that you can be symmetric to $S_z$ but not $S^2$. If you are invariant to $S^2$, though, you will be invariant to $S_z$. Second, while $S^2$ does technically have an operator representation, it is not a symmetry operation. Think about the form of the operator $S^2$: it's essentially the identity, right? So when we say invariant with respect to $S^2$, what we really mean is that we are invariant to the whole spin rotation group, SU(2). To show invariance with respect to the spin group, it suffices just to consider any two spin rotations, since each spin operator can be generated by the commutator of the other two. You can show this using the Jacobi identity. Say we are looking at the invariance of $\mathbf{F}$ with respect to generators $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{C}$, where $[\mathbf{A}, \mathbf{B}] = i\mathbf{C}$ (like our spin matrices do). We want to show that $[\mathbf{F}, \mathbf{A}] = [\mathbf{F}, \mathbf{B}] = 0$ implies $[\mathbf{F}, \mathbf{C}] = 0$. The Jacobi identity tells us

$[\mathbf{A}, [\mathbf{B}, \mathbf{F}]] + [\mathbf{B}, [\mathbf{F}, \mathbf{A}]] + [\mathbf{F}, [\mathbf{A}, \mathbf{B}]] = 0.$

Now, by definition the first two terms are zero, and we can evaluate the commutator in the third term, $[\mathbf{F}, [\mathbf{A}, \mathbf{B}]] = [\mathbf{F}, i\mathbf{C}]$, which means if $[\mathbf{F}, \mathbf{A}] = [\mathbf{F}, \mathbf{B}] = 0$, then it must follow that $[\mathbf{F}, \mathbf{C}] = 0$ as well (the imaginary in front doesn't make a difference; expand to see). That being said, we can evaluate invariance with respect to all spin axes by using the results of $S_z$ and applying the generator defined by the Pauli matrix $\sigma_x$ to it, where

$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$

Applying this gives $[\sigma_x, \mathbf{F}] = 0$, which means that $\mathbf{F}^{\alpha\alpha} = \mathbf{F}^{\beta\beta}$, or

$\mathbf{F} = \begin{pmatrix} \mathbf{F}^{\alpha\alpha} & \mathbf{0} \\ \mathbf{0} & \mathbf{F}^{\alpha\alpha} \end{pmatrix}.$

Invariance in this symmetry group results in the complex RHF equations, where the alpha and beta spin blocks are equivalent. Thus orbitals are doubly occupied.

$\{S_x, S_y, S_z, \Theta\}$. All spin rotations and time reversal. Given the results we just obtained above, and understanding that time reversal contains the $K$ operator, we only need to take the previous results and make them invariant to complex conjugation. This is very simple, and we see that

$\mathbf{F}^{\alpha\alpha} = (\mathbf{F}^{\alpha\alpha})^{*}.$

In other words, the real RHF equations, since invariance with respect to complex conjugation forces the elements to be real.

N.B. Most of this work was first done by Fukutome, though my notes are based on Stuber and Paldus. The notes here agree with both authors, though the derivations here make a rather large departure from the literature. Thus, it is helpful to reference the following work:
Fukutome, Hideo. "Unrestricted Hartree-Fock theory and its applications to molecules and chemical reactions." International Journal of Quantum Chemistry 20.5 (1981): 955-1065.
Stuber, J. L., and J. Paldus. "Symmetry breaking in the independent particle model." Fundamental World of Quantum Chemistry, A Tribute Volume to the Memory of Per-Olov Löwdin 1 (2003): 67-139.
I'd strongly recommend looking at Stuber and Paldus' work first. Their terminology is more consistent with mainstream electronic structure theory.
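To make the case analysis above a little more tangible, here's a small numerical check (my own sketch, with a random Hermitian matrix standing in for $\mathbf{F}$): rotating about the spin z-axis leaves $\mathbf{F}$ invariant exactly when the alpha-beta blocks vanish, i.e., when $\mathbf{F}$ has the UHF structure.

```python
# Check: exp(i*theta*Sz) F exp(-i*theta*Sz) == F holds iff the alpha-beta
# spin blocks of F are zero (the UHF structure). n is the number of spatial
# basis functions, so F is 2n x 2n in the spinor basis.
import numpy as np
from scipy.linalg import expm

n = 3
rng = np.random.default_rng(0)

A = rng.normal(size=(2 * n, 2 * n)) + 1j * rng.normal(size=(2 * n, 2 * n))
F_ghf = A + A.conj().T        # generic Hermitian matrix: GHF structure
F_uhf = F_ghf.copy()
F_uhf[:n, n:] = 0.0
F_uhf[n:, :n] = 0.0           # no alpha-beta coupling: UHF structure

Sz = 0.5 * np.kron(np.diag([1.0, -1.0]), np.eye(n))  # (sigma_z/2) (x) identity

def invariant(F, theta=0.7):
    U = expm(1j * theta * Sz)
    return np.allclose(U @ F @ U.conj().T, F)

print(invariant(F_ghf))  # False: spin-mixing blocks pick up a phase e^{i*theta}
print(invariant(F_uhf))  # True:  block-diagonal in spin commutes with Sz
```

Linear Response Coupled Cluster
Linear response coupled cluster (LR-CC) is a way of getting electronic excitation energies at a coupled cluster level of theory.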
It’s also a general way of getting many response properties, which basically answer the question of "what do molecules do when you poke them?" Here is a simple derivation of LR-CC, which I like because it parallels the derivation of linear response time-dependent DFT and Hartree-Fock. There are much more robust ways to get these results, for example, propagator approaches have been particularly successful when applied to coupled cluster response theory, but I think they can mask a lot of the physics if you aren't careful. Anyway, here it is.
The coupled cluster ground state equations are given by

$\hat{H} e^{\hat{T}} |\phi_0\rangle = E\, e^{\hat{T}} |\phi_0\rangle.$

Since this is, by definition, a stationary state, the following is equivalent:

$\hat{H} e^{\hat{T}} |\phi_0\rangle e^{-iEt} = E\, e^{\hat{T}} |\phi_0\rangle e^{-iEt}.$

Now, perturbing the system with monochromatic light of frequency $\omega$ gives us the modified Hamiltonian (in the dipole approximation):

$\hat{H}(t) = \hat{H} + \lambda\left(e^{-i\omega t} + e^{i\omega t}\right)\hat{\mu}.$

This induces a response in the wavefunction, which is parameterized by the cluster operator:

$\hat{T} \rightarrow \hat{T} + \lambda\,\hat{R}\,e^{-i\omega t} + \cdots$

The extra terms are higher-order responses in the cluster operator. I should also note that this means $\hat{T}$ and $\hat{R}$ are in the same excitation manifold: in other words, if $\hat{T}$ is a single excitation operator, so is $\hat{R}$. Same for doubles, triples, etc. Anyway, we are only considering the linear response here. Furthermore, from here on out I won't write the complex conjugate parts. Though they are still there (and I will indicate them by dots), ultimately we collect terms of order $\lambda e^{-i\omega t}$ only to get our final equations. We could collect the conjugates as well, but would get the same equations in the end. The time-dependent Schrödinger equation is

$\hat{H}(t)\,|\Psi(t)\rangle = i\frac{\partial}{\partial t}|\Psi(t)\rangle.$

Plugging in the above perturbed expressions yields

$\left[\hat{H} + \lambda\left(e^{-i\omega t} + e^{i\omega t}\right)\hat{\mu}\right] e^{\hat{T} + \lambda\hat{R}e^{-i\omega t}}|\phi_0\rangle e^{-iEt} = i\frac{\partial}{\partial t}\, e^{\hat{T} + \lambda\hat{R}e^{-i\omega t}}|\phi_0\rangle e^{-iEt}.$

Since $\hat{T}$ and $\hat{R}$ commute, we can rewrite $e^{\hat{T} + \lambda\hat{R}e^{-i\omega t}}$ as $e^{\hat{T}}\, e^{\lambda\hat{R}e^{-i\omega t}}$. Expanding the exponential containing $\hat{R}$ to first order in $\lambda$ gives

$e^{\lambda\hat{R}e^{-i\omega t}} = 1 + \lambda\,\hat{R}\,e^{-i\omega t} + \cdots$

Finally perform the differentiation, making sure not to forget the time-dependent phase. Collect terms with $\lambda e^{-i\omega t}$, cancel out phase factors, and we are left with

$\hat{H}\, e^{\hat{T}} \hat{R}\,|\phi_0\rangle + \hat{\mu}\, e^{\hat{T}}|\phi_0\rangle = (E + \omega)\, e^{\hat{T}} \hat{R}\,|\phi_0\rangle.$

In linear response theory, we assume the perturbation is small and send $\lambda \rightarrow 0$, so the field term drops out:

$\hat{H}\, e^{\hat{T}} \hat{R}\,|\phi_0\rangle = (E + \omega)\, e^{\hat{T}} \hat{R}\,|\phi_0\rangle.$

Thus the eigenvalues are just the ground state coupled cluster energy plus an excitation energy. As an aside, it is possible (in fact, this is how LR-CC is done in practice) to get rid of the $E$ entirely, so that you solve for the excitation energies directly. How? The simplest way to think about it is that when you evaluate the left-hand side, you find that it contains expressions for the ground state energy (times $\hat{R}|\phi_0\rangle$), so if we leave out these terms when solving the equations, we can solve for the excitation energies directly. In other words, we force the ground state energies on both sides to cancel by being careful how we set up the equation. And so we get an equation like so:

$\left(\bar{H}\hat{R}\right)_C |\phi_0\rangle = \omega\, \hat{R}\,|\phi_0\rangle, \qquad \bar{H} = e^{-\hat{T}}\hat{H}e^{\hat{T}},$

where the subscript $C$ means we keep only the connected terms. Here's the more jargony answer: if you apply Wick's theorem to the Hausdorff expansion of the similarity-transformed coupled cluster Hamiltonian, you retain only terms that share a sum with each other. Diagrammatically, this means we keep only the connected diagrams. If you go ahead and derive all diagrams, connected and disconnected, you find that the disconnected diagrams correspond exactly to the terms you want to cancel on the right.
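A tiny numerical aside before moving on (my own sketch, with random matrices standing in for the real operators): the practical upshot is that you diagonalize a similarity-transformed, generally non-Hermitian matrix, and a similarity transformation leaves the spectrum untouched.

```python
# Similarity transformation preserves eigenvalues: the spectrum of
# Hbar = exp(-T) H exp(T) matches that of H, even though Hbar is not
# symmetric. "Excitation energies" are then differences from the lowest root.
import numpy as np
from scipy.linalg import expm, eig

rng = np.random.default_rng(1)
n = 5
H = rng.normal(size=(n, n))
H = H + H.T                                    # stand-in "Hamiltonian"
T = 0.1 * np.triu(rng.normal(size=(n, n)), 1)  # nilpotent stand-in "T"

Hbar = expm(-T) @ H @ expm(T)
E = np.sort(eig(Hbar)[0].real)

print(np.allclose(E, np.linalg.eigvalsh(H)))   # True: same spectrum
print(E[1:] - E[0])                            # "excitation energies"
```

Metal-Organic Frameworks Paper Now Online
I had my second first-author paper published a few weeks back in the Journal of Physical Chemistry A, and wanted to share it here. It's an investigation of how the tiny little hydrogen molecule binds to open metal sites in metal-organic frameworks. Metal-organic frameworks are like molecular sponges. They are very porous, and small gases seem drawn to them, so researchers want to use them to store hydrogen gas for fuel cells. Another interesting use is to use them for purifying gases!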
Imagine if we could store and condense carbon dioxide right out of the air! Lots of potential environmental applications — problem is, no one really knows how these materials work. We studied hydrogen gas binding to a series of different metals in these metal-organic frameworks. Here is a picture of one of the systems we studied.
[figure: full MOF structure and cluster mimics]
The full metal-organic framework is on the left. To simplify our method, we made "mimics", meaning we studied just one part of a metal-organic framework. That's what you see in the center and on the right. Here is a close-up of an interaction:
[figure: H2 binding at the open metal site]
Hydrogen is kind of weird in that it has no charge and only two electrons. So why does it bind to the metal centers? To understand why, we used a method called Symmetry Adapted Perturbation Theory (SAPT). This method was designed to study weak interactions between molecules. In addition to calculating the energy of the hydrogen sticking to the metal, it also decomposes the interaction into physical forces: electrostatic, dispersion, induction, and so on. It can be an extremely accurate method (we compared it to CCSD(T) energies and got pretty much the same results). What we found might surprise you, as it did me! Hydrogen binds weakly (like ~3 kcal/mol weakly), and in many cases a good third of the interaction comes from dispersion forces. Now, if you remember from general chemistry, the more electrons you have, the greater the effects of dispersion become. Yet hydrogen — that little H2, with only two electrons — had an interaction largely mediated by dispersion (electrostatics was the other major component, which wasn't too surprising). The conclusion? Don't ignore (or underestimate) your dispersion forces! This is very important to researchers who continue to study these metal-organic frameworks, as they like to use density functional theory (DFT), which has some problems describing dispersion (if you aren't careful). You can read the whole paper here.
Kinetic Balance
Kinetic balance was discovered when early attempts at relativistic SCF calculations failed to converge to a bound state. Often, the energy was too low because of the negative energy continuum. The crux of the problem was that the four components of the spinor in relativistic methods were each allowed to vary independently. In other words, scientists treated each component with its own independent basis, without regard to how the components depend on each other. This problem was eventually solved by paying attention to the non-relativistic limit of the kinetic energy, and noting that the small and large components of the four-spinor are not independent. There is, in fact, a coupling that we refer to as "kinetic balance". I'll show you how it works.
First, as you may have guessed, we only need to consider the one-electron operator:

$\begin{pmatrix} \mathbf{V} & c\,\boldsymbol{\Pi} \\ c\,\boldsymbol{\Pi}^{\dagger} & \mathbf{V} - 2mc^2\tilde{\mathbf{S}} \end{pmatrix} \begin{pmatrix} \mathbf{c}^{L} \\ \mathbf{c}^{S} \end{pmatrix} = E \begin{pmatrix} \mathbf{S}\,\mathbf{c}^{L} \\ \tilde{\mathbf{S}}\,\mathbf{c}^{S} \end{pmatrix}.$

This is the matrix form of the Dirac equation (with energies shifted by $-mc^2$), and it contains (in addition to other terms) the contributions to the kinetic energy. Written as a pair of coupled equations, we have

$\mathbf{V}\,\mathbf{c}^{L} + c\,\boldsymbol{\Pi}\,\mathbf{c}^{S} = E\,\mathbf{S}\,\mathbf{c}^{L}$

$c\,\boldsymbol{\Pi}^{\dagger}\,\mathbf{c}^{L} + \left(\mathbf{V} - 2mc^2\tilde{\mathbf{S}}\right)\mathbf{c}^{S} = E\,\tilde{\mathbf{S}}\,\mathbf{c}^{S},$

where $V_{\mu\nu} = \langle\chi_\mu|\hat{V}|\chi_\nu\rangle$, $S_{\mu\nu} = \langle\chi_\mu|\chi_\nu\rangle$, $\Pi_{\mu\nu} = \langle\chi_\mu|\boldsymbol{\sigma}\cdot\hat{\mathbf{p}}|\tilde{\chi}_\nu\rangle$, and so on (tildes mark small-component quantities). These are the potential, overlap, and momentum terms of the Dirac equation in a basis $\{\chi_\mu\}$. Now, if we look at the second of the two paired equations, we note that for potentials of chemical interest (e.g., molecular or atomic), $\mathbf{V}$ is negative definite. We also know that when we solve the equation, we are looking for an energy above the negative energy continuum, which is to say we want $E > -2mc^2$.
Since the overlap is positive definite, putting all of these constraints together means that we have a nonsingular (and therefore invertible!) matrix in the second of our coupled equations. We can rewrite the second equation as

$\mathbf{c}^{S} = \left[(E + 2mc^2)\tilde{\mathbf{S}} - \mathbf{V}\right]^{-1} c\,\boldsymbol{\Pi}^{\dagger}\,\mathbf{c}^{L}.$

Substituting this expression back into the first (top) equation yields

$\mathbf{V}\,\mathbf{c}^{L} + c^2\,\boldsymbol{\Pi}\left[(E + 2mc^2)\tilde{\mathbf{S}} - \mathbf{V}\right]^{-1}\boldsymbol{\Pi}^{\dagger}\,\mathbf{c}^{L} = E\,\mathbf{S}\,\mathbf{c}^{L}.$

This form is very useful for analysis. Now, we are going to use the matrix relation

$(\mathbf{X} + \mathbf{Y})^{-1} = \mathbf{X}^{-1} - \mathbf{X}^{-1}\mathbf{Y}(\mathbf{X} + \mathbf{Y})^{-1}$

with $\mathbf{X} = 2mc^2\tilde{\mathbf{S}}$ and $\mathbf{Y} = E\tilde{\mathbf{S}} - \mathbf{V}$. This leads to the rather long expression

$\left[\mathbf{V} + \frac{1}{2m}\boldsymbol{\Pi}\tilde{\mathbf{S}}^{-1}\boldsymbol{\Pi}^{\dagger} - E\,\mathbf{S}\right]\mathbf{c}^{L} = \frac{1}{2m}\boldsymbol{\Pi}\tilde{\mathbf{S}}^{-1}\left(E\tilde{\mathbf{S}} - \mathbf{V}\right)\left[(E + 2mc^2)\tilde{\mathbf{S}} - \mathbf{V}\right]^{-1}\boldsymbol{\Pi}^{\dagger}\,\mathbf{c}^{L}.$

Why this ridiculous form? Look closely at each side and its dependence on the speed of light, $c$. The left-hand side collects the $c^0$ terms, and the right-hand side the terms of order $c^{-2}$. Since the non-relativistic limit is found when the speed of light is infinite ($c \rightarrow \infty$), the whole right-hand side goes to zero. This gives us

$\left[\mathbf{V} + \frac{1}{2m}\boldsymbol{\Pi}\tilde{\mathbf{S}}^{-1}\boldsymbol{\Pi}^{\dagger}\right]\mathbf{c}^{L} = E\,\mathbf{S}\,\mathbf{c}^{L}.$

Now, if this is indeed the true non-relativistic limit, then we find that our kinetic energy term is given by

$\mathbf{T} = \frac{1}{2m}\boldsymbol{\Pi}\tilde{\mathbf{S}}^{-1}\boldsymbol{\Pi}^{\dagger},$

or, more explicitly,

$T_{\mu\nu} = \frac{1}{2m}\sum_{\kappa\lambda}\langle\chi_\mu|\boldsymbol{\sigma}\cdot\hat{\mathbf{p}}|\tilde{\chi}_\kappa\rangle\left[\tilde{\mathbf{S}}^{-1}\right]_{\kappa\lambda}\langle\tilde{\chi}_\lambda|\boldsymbol{\sigma}\cdot\hat{\mathbf{p}}|\chi_\nu\rangle,$

where that inner part, $\sum_{\kappa\lambda}|\tilde{\chi}_\kappa\rangle[\tilde{\mathbf{S}}^{-1}]_{\kappa\lambda}\langle\tilde{\chi}_\lambda|$, is an inner projection onto the small-component basis space. Less formally, the small component is the "mathematical glue" that connects the two momentum operators. If the small component spans the same space as the momentum operators acting on the large-component space, $\{\boldsymbol{\sigma}\cdot\hat{\mathbf{p}}\,\chi_\mu\}$, then that inner projection just becomes the identity. This means that the expression becomes

$T_{\mu\nu} = \frac{1}{2m}\langle\chi_\mu|(\boldsymbol{\sigma}\cdot\hat{\mathbf{p}})(\boldsymbol{\sigma}\cdot\hat{\mathbf{p}})|\chi_\nu\rangle = \frac{1}{2m}\langle\chi_\mu|\hat{p}^2|\chi_\nu\rangle,$

which is the kinetic energy term in the non-relativistic formulation! (N.B. We used the relation $(\boldsymbol{\sigma}\cdot\hat{\mathbf{p}})(\boldsymbol{\sigma}\cdot\hat{\mathbf{p}}) = \hat{p}^2$.) So, when we set up our relativistic calculations, as long as we have the constraint that

$\tilde{\chi}_\mu \propto (\boldsymbol{\sigma}\cdot\hat{\mathbf{p}})\,\chi_\mu,$

then we will find that we recover the correct non-relativistic limit of our equations. The basis is called "kinetically balanced", and we won't collapse to energies lower than $-2mc^2$. A few stray observations before we finish. First, if we enforce the relation between small and large component basis functions, then we find that $\boldsymbol{\Pi} = \tilde{\mathbf{S}}$ and $\tilde{\mathbf{S}} = 2m\,\mathbf{T}$. Second, this constraint actually maximizes the kinetic energy, and any approximation that does not satisfy the kinetic balance condition will lower the energy. This was weird to me, coming from the non-relativistic Hartree-Fock background where, variationally, if you remove basis functions you raise the energy. The thing is that when doing relativistic calculations, you aren't bounded from below like in their non-relativistic counterparts. While you can get variational stability, you are actually doing an "excited state" calculation (I am using the term "excited state" very loosely). Kind of odd, but the negative energy continuum does exist, and was a big factor in predicting the existence of antimatter. Finally, modern methods of relativistic electronic structure theory make use of the kinetic balance between large and small component basis functions to eliminate the small component completely. These are called "Dirac-exact" methods. One such example is NESC, or Normalized Elimination of the Small Component. In addition to reproducing the Dirac equation exactly, they have numerous computational benefits, as well as easily allowing for most (if not all) non-relativistic correlated methods to be applied directly. Thus after doing an NESC calculation, you get relativistic "orbitals" which can immediately be used in, say, a coupled cluster calculation with no computational modification.
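To see the non-relativistic limit emerge numerically, here's a toy check (my own sketch, for a free particle described by a single momentum value, so all the matrices above collapse to numbers):

```python
# Free-particle Dirac matrix in a (large, small) component basis, with the
# energy shifted by -mc^2:
#     [[ 0,      c*p     ],
#      [ c*p, -2*m*c**2  ]]
# Its positive eigenvalue approaches the Schroedinger kinetic energy
# p^2/(2m) as c -> infinity. Atomic units; the physical value is c ~ 137.
import numpy as np

m, p = 1.0, 0.5

for c in (137.035999, 1.0e3, 1.0e5):
    D = np.array([[0.0,    c * p],
                  [c * p, -2.0 * m * c**2]])
    E_plus = np.linalg.eigvalsh(D).max()
    print(f"c = {c:12.1f}   E+ = {E_plus:.10f}   (p^2/2m = {p**2 / (2*m):.10f})")
```

Comments, questions, or corrections? Let me know!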
Treating Markets Mechanically – An Example
April 27, 2011
The aim of this post is to provide the transition from time-independence to time-dependence within a simple economic model for further reference. For that purpose we consider a single consumer-worker. This agent obeys a time constraint on labour L and free time F:

L + F = 1.

We introduce a utility function U as

U = C^\alpha F^{1-\alpha}

for a given 0 < \alpha < 1. There is a budget constraint W given as price p times consumption C equals wage rate w times labour L:

p C = w L.

The agent now maximizes U such that p C + w F = w. Let us solve that. Lagrange equations yield

-\frac{\partial U}{\partial C} = \lambda \frac{\partial W}{\partial C}

-\frac{\partial U}{\partial F} = \lambda \frac{\partial W}{\partial F}

with constraint W = p C + w F - w = 0. Dividing the first equation by the second gives

\frac{p}{w}=\frac{\alpha C^{\alpha-1} F^{1-\alpha}}{(1-\alpha) C^\alpha F^{-\alpha}}=\frac{\alpha F}{(1-\alpha) C}.

Solving that for C and plugging it into the budget constraint yields

\frac{\alpha}{(1-\alpha)}w F + w F - w = 0.

Solving this for F and again using the budget constraint shows that

F = 1-\alpha, \quad C = \alpha\frac{w}{p}

solves the maximization problem. So far there is no time evolution. To introduce such a dynamics we mimic mechanics and set C = C(p,\dot{p}). Demand is a function of price and its derivative. For economists the \dot{p} comes from nowhere, especially since it is not obvious at all how to define the derivative of a price evolution. For now it has to suffice that eventually we shall understand the derivative in a distributional sense, and until then we treat it as a formal parameter. The time-dependent utility function for the consumer-worker is

U = C^\alpha F^{1-\alpha}r^t

for a discount rate 0 < r \leq 1. The agent now maximizes \int_0^T U \, d t under the constraint p C + w F = w. We make the following assumption due to S. Smale (for excess demand):

\dot{p}=C \textnormal{ and } \dot{w}=L.

Euler-Lagrange equations yield

\frac{d}{d t}\frac{\partial U}{\partial C} = \lambda \frac{\partial W}{\partial C}

\frac{d}{d t}\frac{\partial U}{\partial L} = \lambda \frac{\partial W}{\partial L}

with constraint W = p C - w L = 0. For the Lagrange multipliers we get

\lambda = -p^{-1}\alpha r^t C^{\alpha-2}F^{-\alpha}((\alpha-1)(C\dot{F}-F\dot{C})-C F \log r)

\lambda = w^{-1}(\alpha-1) r^t C^{\alpha-1}F^{-1-\alpha}(\alpha(C\dot{F}-F\dot{C})-C F \log r).

Equating, plugging in the constraint and dividing by C F yields

\frac{\dot{C}}{C}-\frac{\dot{F}}{F}= \frac{(\alpha w - p C)\log r}{\alpha(1-\alpha)w}.

First we discuss the case r = 1. Then \frac{\dot{C}}{C}=\frac{\dot{F}}{F}, and thus (consider \frac{d}{d t}\ln C) there is a positive constant K such that C = K F, and we get, because of the budget constraint,

F = \frac{w}{p K + w}, \quad C = \frac{w K }{p K + w}.

The constant K(p,w,\alpha) is unique and maximizes

\int_0^T C^\alpha F^{1-\alpha} ds = \int_0^T \frac{w K^\alpha}{p K + w} ds.

In equilibrium we have p = p^* and w = w^*. Maximizing over K yields

\frac{d}{d K}\frac{w^* K^\alpha}{p^* K + w^*}= 0

and thus

K=\frac{\alpha w^*}{(1-\alpha) p^*}.

Now F = 1-\alpha and C = \alpha \frac{w^*}{p^*}.

The case r < 1: in equilibrium, \dot{C}=\dot{F}=0, we immediately obtain C=\alpha \frac{w^*}{p^*}. Plugging this into the budget constraint yields F=1-\alpha. Interestingly enough, we get an equilibrium equal to the solution of the time-independent model. How justified is S. Smale's assumption C=\dot{p}? Economists often use linear demand theory and set C=T-p. Both approaches seem to be incompatible, and both have a drawback. When you scale prices (e.g.
by introducing a new currency) demand should stay the same. This is not the case in either setting. One needs currency-dependent constants that scale accordingly to fix that. One possibility to avoid this is C=\frac{\dot{p}}{p}. As usual, more options do not improve clarity, and calculating the whole model in the general case, i.e. C=C(p,\dot{p}), is not totally conclusive either. For a solution of the Euler-Lagrange equations one obtains, under moderate assumptions on the partial derivatives, that

\alpha (1-\alpha)w\frac{\partial C}{\partial \dot{p}}\left(\frac{\dot{C}}{C}-\frac{\dot{F}}{F}\right) = (\alpha w - p C)\left(\frac{\partial C}{\partial \dot{p}}\log r + \frac{d}{d t}\frac{\partial C}{\partial \dot{p}}-\frac{\partial C}{\partial p}\right).

Linear demand has \frac{\partial C}{\partial \dot{p}}=0 and thus \alpha w - p C=0. The budget constraint then implies F=1-\alpha, which is a constant. We thus can safely exclude linear demand from our considerations. The above equation cannot distinguish between Smale's assumption and C=\frac{\dot{p}}{p}. However, hidden in the technical assumptions, there seems to be some advantage in Smale's approach. It remains to clarify the price-scaling issue.

Utility and Time – Statement of the Problem
January 27, 2010
Ultimately our goal is to get some description of price evolution derived from first (economic) principles. In earlier posts (1, 2, 3) I have shown what can be deduced from 'demand invariance under price-scaling'. As described there, we still assume {n} goods being traded in a market; hence there are prices {p_i} and demands {d_i} for {1\leq i\leq n} attributed to these goods. That was the setting so far, and now we are going to take the first steps into time. We assume that good {i} is consumed over time and describe consumption {c_i(\cdot)} as a positive real function. Consumption of good {i} from time {a} to time {b} is measured by {\int_a^b c_i(s)ds}. The participants in the market we call agents. An agent attributes to each consumption vector {c} a utility {u}. Technically this is a positive, increasing and concave function. In all our examples {u} and {c} will be sufficiently differentiable. Utility is increasing since more consumption is considered better, and it is concave since we assume 'diminishing marginal utility'. The latter does not always hold in economic situations. However, most introductory examples are concave, and as a start this seems safe. I assume 'time impatience'; that means consumption now is better than consumption in the future. That assumption is not undisputed, but, as a model for the finite life span of the agents, this too seems safe for such an introductory text. Overall utility from time {a} to time {b} is measured by {\int_a^b r^s u(c(s)) ds} for some discount rate {0<r<1}. Agents have attached a wealth level {w(\cdot)}; that means {\sum_{i=1}^n p_i(t) c_i(t) = w(t)} holds for all times. They consume according to their prescribed wealth, and they consume according to their demand ({c_i=d_i}). The last assumption closes the gap between {p} and {c}. We assume that demand is given as {d_i=\dot{p_i}}, and thus we obtain in summary {c_i = \dot{p}_i}. Now we are in a position to state the problem: agents in a market maximize utility

\displaystyle \int_0^T r^s u(c(s)) ds

subject to the constraint

\displaystyle \sum_{i=1}^n p_i(t) c_i(t) - w(t) = 0

for given time {0<T\leq\infty}, discount rate {0<r<1}, wealth function {w} and utility function {u}. Voilà, we end up with a constrained Euler-Lagrange equation. But beware!
There are a couple of traps jamming all the intuition we might have from mechanics or similar theories with conserved energy (understood as the Legendre transform of the Lagrangian). I will certainly elaborate on this in one of the next entries.

Scientific Laws
September 2, 2009
As I have told you earlier, my guest is very sceptical about our scientific achievements. What follows are the notes I took when he gave me a short summary of what he considers 'our strategy'. In the modern understanding of science, the fundamental laws seem to be consequences of various symmetries of quantities like time, space or similar objects. To make this idea more precise, scientists often use mathematical arguments, thereby choosing some set {X} as state space encoding all necessary information on the considered system. The system then is thought to evolve in time on a differentiable {n}-dimensional path {x_i(t)\in X} for all {t\in\mathbb{R}} and {1\leq i \leq n\in\mathbb{N}}. Quite frequently there is a so-called Lagrange function {L} on the domain { X^n \times X^n \times \mathbb{R} } and a constraint function {W} on the same domain. The path {x(\cdot)} is required to minimize or maximize the integral

\displaystyle \int_0^T L\left(x(s),\dot{x}(s),s\right)ds

under the constraint

\displaystyle W\left(x(s),\dot{x}(s),s\right)=0.

(Under some technical assumptions) a path does exactly that if it satisfies the Euler-Lagrange equations

\displaystyle \frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i}-\frac{\partial L}{\partial x_i}=\lambda \frac{\partial W}{\partial \dot{x}_i}

for some function {\lambda} on {X^n \times X^n \times \mathbb{R}}. Define {y_i:=\frac{\partial L}{\partial \dot{x_i}}} and observe that (under suitable assumptions) this transformation is invertible, i.e. the {\dot{x}_i} can be expressed as functions of {x_i, y_i} and {t}. Next, define the Hamilton operator

\displaystyle H(x,y,t) = \sum_{i=1}^n \dot{x}_i(x,y,t) y_i - L(x,\dot{x}(x,y,t),t)

as the Legendre transform of {L}. The Legendre transformation is (under some mild technical assumptions) invertible. Now, (under less mild assumptions, namely holonomic constraints) two things happen. The canonical equations

\displaystyle \frac{d x_i}{d t} = \frac{\partial H}{\partial y_i} \left(=[x_i, H]\right), \quad \frac{d y_i}{d t} = -\frac{\partial H}{\partial x_i}\left(=[y_i, H]\right), \quad \frac{d H}{dt} = -\frac{\partial L}{\partial t}

are equivalent to the Euler-Lagrange equations. Here {[\cdot,\cdot]} denotes the commutator bracket {[a,b]:= ab-ba}. Furthermore, if {L} does not explicitly depend on time, then {H} is a constant. That is the aforementioned symmetry: {H}, the energy, is invariant under time translations. Given all that, the solution of the minimisation or maximisation problem can then be given (in the Heisenberg picture) as

\displaystyle x(t) = e^{t H} x(0) e^{-t H}, \quad y(t) = e^{t H} y(0) e^{-t H}

or (in the, in this case equivalent, Schrödinger picture) as an equation on the state space

\displaystyle u(t)= e^{t H}u(0).

This description is equivalent (under mild technical assumptions) to the following initial value problem:

\displaystyle \dot{u}(t)=H u(t), \quad u(0) = u_0\in X,

where the operator {H} is the 'law'. More technically, the law is the generator of a strongly continuous (semi-)group of (in this case linear and unitary) operators acting on (the Hilbert space) {X}. As an example of this process he mentioned the Schrödinger equation governing quantum mechanical processes.
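To see his recipe in action on the simplest possible example (my own addition, for the unconstrained harmonic oscillator L = m\dot{x}^2/2 - kx^2/2), one can carry out the Legendre transformation symbolically and read off the canonical equations:

```python
# Legendre transform of a Lagrangian, done symbolically: define
# y = dL/dxdot, invert for xdot(y), build H = xdot*y - L, and then read
# off Hamilton's canonical equations.
import sympy as sp

m, k = sp.symbols('m k', positive=True)
x, xdot, y = sp.symbols('x xdot y')

L = m * xdot**2 / 2 - k * x**2 / 2

y_expr = sp.diff(L, xdot)                     # y = m*xdot
xdot_y = sp.solve(sp.Eq(y, y_expr), xdot)[0]  # invert: xdot = y/m
H = sp.simplify(xdot_y * y - L.subs(xdot, xdot_y))

print(H)               # y**2/(2*m) + k*x**2/2 -- the conserved energy
print(sp.diff(H, y))   # dx/dt =  dH/dy = y/m
print(-sp.diff(H, x))  # dy/dt = -dH/dx = -k*x
```

Since this L does not depend explicitly on time, the resulting H is indeed a constant of the motion, just as claimed.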
His conclusion was that the frequently appearing 'technical assumptions' in the above derivation make it highly unlikely for laws to exist even for systems with, what he calls, no emergent properties. 'If that were true', I thought, 'then … bye bye theory of everything!' He explained further that under no reasonable circumstances is it possible to extrapolate these laws to the emergent situation. I am not sure whether I understand completely what he means by that, but his summary of how we find scientific laws is in my opinion way too simple. It can't be true, and I told him so. With just a couple of ink strokes he derived the commutation relations for exchange markets from microeconomic theory. That left me speechless, since I always thought that there cannot be 'market laws'. Markets are in principle unpredictable! They are, aren't they?
Quantum Approaches to Consciousness First published Tue Nov 30, 2004; substantive revision Tue Jun 2, 2015 It is widely accepted that consciousness or, more generally, mental activity is in some way correlated to the behavior of the material brain. Since quantum theory is the most fundamental theory of matter that is currently available, it is a legitimate question to ask whether quantum theory can help us to understand consciousness. Several programmatic approaches answering this question affirmatively, proposed in recent decades, will be surveyed. It will be pointed out that they make different epistemological assumptions, refer to different neurophysiological levels of description, and use quantum theory in different ways. For each of the approaches discussed, problematic and promising features will be equally highlighted. 1. Introduction The problem of how mind and matter are related to each other has many facets, and it can be approached from many different starting points. Of course, the historically leading disciplines in this respect are philosophy and psychology, which were later joined by behavioral science, cognitive science and neuroscience. In addition, the physics of complex systems and quantum physics have played stimulating roles in the discussion from their beginnings. As regards the issue of complexity, this is quite evident: the brain is one of the most complex systems we know. The study of neural networks, their relation to the operation of single neurons and other important topics do and will profit a lot from complex systems approaches. As regards quantum physics, there can be no reasonable doubt that quantum events occur and are efficacious in the brain as elsewhere in the material world—including biological systems![1] But it is controversial whether these events are efficacious and relevant for those aspects of brain activity that are correlated with mental activity. Quantum theory introduced an element of randomness standing out against the previous deterministic worldview, in which randomness, if it occurred at all, simply indicated our ignorance of a more detailed description (as in statistical physics). In sharp contrast to such epistemic randomness, quantum randomness in processes such as spontaneous emission of light, radioactive decay, or other examples of state reduction was considered a fundamental feature of nature, independent of our ignorance or knowledge. To be precise, this feature refers to individual quantum events, whereas the behavior of ensembles of such events is statistically determined. The indeterminism of individual quantum events is constrained by statistical laws. In this contribution, some popular approaches for applying quantum theory to consciousness will be surveyed and compared, most of them speculative, with varying degrees of elaboration and viability. Section 2 outlines two fundamentally different philosophical options for conceiving of relations between material and mental states of systems. Section 3 addresses three different neurophysiological levels of description, to which particular different quantum approaches refer. 
After some introductory remarks, Section 4 sketches the individual approaches themselves—Section 4.2: Stapp, Section 4.3: from Umezawa to Vitiello, Section 4.4: Beck and Eccles, Section 4.5: Penrose and Hameroff, Section 4.6: "dual-aspect" approaches such as have been tentatively proposed by Pauli and Jung as well as Bohm and Hiley, and Section 4.7: purely mental features, mathematically characterized by formal structures typical for quantum theory (pioneered by Aerts and colleagues). Section 5 offers some comparative conclusions.
2. Philosophical Background Assumptions
In many approaches used to discuss relations between material [ma] brain states and mental [me] states of consciousness, these relations are conceived in a direct way (A):
[ma] ↔ [me]
This provides a minimal framework to study reduction, supervenience, or emergence relations (Kim 1998; Stephan 1999), which can yield both monistic and dualistic pictures. For instance, there is the classical stance of strong reduction, claiming that all mental states and properties can be reduced to the material domain (materialism) or even to physics (physicalism).[3] This point of view claims that it is both necessary and sufficient to explore and understand the material domain, e.g., the brain, in order to understand the mental domain, e.g., consciousness. More or less, this leads to a monistic picture, in which any need to discuss mental states is eliminated right away or at least considered as epiphenomenal. While mind-brain correlations are still legitimate, though causally inefficacious, from an epiphenomenalist point of view, eliminative materialism renders even correlations irrelevant.
The most discussed counterarguments against the validity of such strong reductionist approaches are qualia arguments, which emphasize the impossibility for materialist accounts to properly incorporate the quality of the subjective experience of a mental state, the "what it is like" (Nagel 1974) to be in that state. This leads to a gap between third-person and first-person accounts for which Chalmers (1995) has coined the notion of the "hard problem of consciousness". Another, less discussed counterargument is that the physical domain itself is not causally closed. Any solution of fundamental equations of motion (be it experimental, numerical, or analytical) requires fixing boundary conditions and initial conditions which are not given by the fundamental laws of nature (Primas 2002). This causal gap applies to classical physics as well as quantum physics, where a basic indeterminacy due to collapse makes it even more challenging. A third class of counterarguments refers to the difficulty of including notions of the temporal present and nowness in a physical description (Franck 2004, 2008).
However, direct relations between mental and material states can also be conceived in a non-reductive fashion. A number of variants of emergence (Stephan 1999) are prominent examples. Mental states and/or properties can be considered as emergent if the material brain is not necessary or not sufficient to explore and understand them.[4] This leads to a dualistic picture (less radical and more plausible than Cartesian dualism) in which residua remain if one attempts to reduce the mental to the material. Within a dualistic scheme of thinking, it becomes almost inevitable to discuss the question of causal influence between mental and material states.
In particular, the causal efficacy of mental states upon brain states ("downward causation") has recently attracted growing interest (Velmans 2002; Ellis et al. 2011).[5]
Alternatively, mental and material states can be conceived as arising from one underlying, psychophysically neutral level (B):
[ma] ← [psychophysically neutral domain] → [me]
Such a "dual aspect" option receives increasing attention in contemporary discussions, and it has a long tradition. Early versions go back as far as Spinoza. In the early days of psychophysics in the 19th century, Fechner (1861) and Wundt (1911) advocated related views. Whitehead, the modern pioneer of process philosophy, referred to mental and physical poles of "actual occasions", which themselves transcend their bipolar appearances (Whitehead 1978). Many approaches in the tradition of Feigl (1967) and Smart (1963), called "identity theories", conceive mental and material states as essentially identical "central states", yet considered from different perspectives. Other variants of this idea have been suggested by Jung and Pauli (1955) [see also Meier (2001)], involving Jung's conception of a psychophysically neutral, archetypal order, or by Bohm and Hiley (Bohm 1990; Bohm & Hiley 1993; Hiley 2001), referring to an implicate order which unfolds into the different explicate domains of the mental and the material.
3. Neurophysiological Levels of Description
3.1 Neuronal Assemblies
Figure 1: Balance between inhibitory and excitatory connections among neurons.
3.2 Single Neurons and Synapses
Figure 2: Release of neurotransmitters at the synaptic cleft (exocytosis).
3.3 Microtubuli
[Figures 3a and 3b]
The tubulins in microtubuli are the substrate which, in Hameroff's proposal, is used to embed Penrose's theoretical framework neurophysiologically. As will be discussed in more detail in Section 4.5, tubulin states are assumed to depend on quantum events, so that quantum coherence among different tubulins is possible. Further, a crucial thesis in the scenario of Penrose and Hameroff is that the (gravitation-induced) collapse of such coherent tubulin states corresponds to elementary acts of consciousness.
4. Examples
4.1 Ways to Use Quantum Theory
The third category refers to further developments or generalizations of present-day quantum theory. An obvious candidate in this respect is the proposal by Penrose to relate elementary conscious acts to gravitation-induced reductions of quantum states. Ultimately, this requires the framework of a future theory of quantum gravity, which is far from having been developed. Together with Penrose, Hameroff has argued that microtubuli might be the right place to look for such state reductions.
Another set of approaches is based on generalizations of quantum theory beyond quantum physics proper. In this way, formally generalized concepts such as complementarity and entanglement can be applied to phenomena in both mental and material domains. In particular, relations between the two can be conceived in terms of dual aspects of one underlying "reality". This conception, drawing on the philosophy of Spinoza, has been considered attractive by 20th century scientists such as Bohr, Pauli, Bohm, Primas, d'Espagnat, and others.
Finally, there are generalized quantum approaches addressing purely mental (psychological) phenomena using formal features also employed in quantum theory, such as non-commuting operations or non-Boolean logic, but without involving the full-fledged framework of quantum mechanics or quantum field theory.
Some of the applications proposed, e.g., by the groups of Aerts, Atmanspacher, Bruza, Busemeyer, Khrennikov and others will be sketched.
4.2 Stapp: Quantum State Reductions and Conscious Acts
The act or process of measurement is a crucial aspect in the framework of quantum theory that has been the subject of controversy for more than seven decades now. In his monograph on the mathematical foundations of quantum mechanics, von Neumann (1955, Chap. V.1) introduced, in an ad hoc manner, the projection postulate as a mathematical tool for describing measurement in terms of a discontinuous, non-causal, instantaneous and irreversible act given by (1) the transition of a quantum state to an eigenstate bj of the measured observable B (with a certain probability). This transition is often called the collapse or reduction of the wavefunction, as opposed to (2) the continuous, unitary (reversible) evolution of a system according to the Schrödinger equation. By contrast to von Neumann's fairly cautious stance, London and Bauer (1939) went much further and proposed that it is indeed human consciousness which completes quantum measurement (see Jammer (1974, Sec. 11.3) or Shimony (1963) for a detailed account). In this way, they attributed a crucial role to consciousness in understanding quantum measurement—a truly radical position. In the 1960s, Wigner (1967) followed up on this proposal,[8] coining his now proverbial example of "Wigner's friend". In order to describe measurement as a real dynamical process generating irreversible facts, Wigner called for some nonlinear modification of (2) to replace von Neumann's projection (1).[9]
In his earlier work, Stapp (1993) starts with Heisenberg's distinction between the potential and the actual (Heisenberg 1958), implementing a decisive step beyond the operational Copenhagen interpretation of quantum mechanics. Heisenberg's notion of the actual is related to a measured event in the sense of the Copenhagen interpretation. However, Heisenberg's notion of the potential, of a tendency, relates to the situation before measurement, which expresses the idea of a reality independent of measurement.[10]
With respect to their tendency aspect, it is tempting to understand events in terms of scheme (B) of Sec. 2. This is related to Whitehead's ontology, in which mental and physical poles of so-called "actual occasions" are considered as psychological and physical aspects of reality. The potential antecedents of actual occasions are psychophysically neutral and refer to a mode of existence in which mind and matter are unseparated. This is expressed, for instance, by Stapp's notion of a "hybrid ontology" with "both idea-like and matter-like qualities" and "two complementary modes of evolution" (Stapp 1999, 159). Similarities with a dual-aspect approach (B) (cf. Section 4.6) can clearly be recognized. Another significant aspect of his approach is the possibility that "conscious intentions of a human being can influence the activities of his brain" (Stapp 1999, 153). Different from the possibly misleading notion of a direct interaction, suggesting an interpretation in terms of scheme (A) of Sec. 2, he describes this feature in a more subtle manner. The requirement that the mental and material outcomes of an actual occasion must match, i.e. be correlated, acts as a constraint on the way in which these outcomes are formed within the actual occasion (cf. Stapp 2006).
The notion of interaction is thus replaced by the notion of a constraint set by mind-matter correlations (see also Stapp 2007). As to the quantum aspect of a template for action, Stapp argues that the mental effort, i.e. the attention devoted to such intentional acts, can protract the lifetime of the neuronal assemblies that represent the templates for action, due to quantum Zeno-type effects. Concerning the neurophysiological implementation of this idea, intentional mental states are assumed to correspond to reductions of superposition states of neuronal assemblies. Additional commentary concerning the concepts of attention and intention in relation to James' idea of a holistic stream of consciousness (James 1950) is given in Stapp (1999). This link is a radical conceptual move. In what Stapp now denotes a "semi-orthodox" approach (Stapp 2015), he proposes that the blind-chance kind of randomness of individual quantum events ("nature's choices") be reconceived as "not actually random but positively or negatively biased by the positive or negative values in the minds of the observers that are actualized by its (nature's) choices" (p. 187). This hypothesis leads to mental influences on quantum physical processes, territory which is widely unknown at present.
4.3 From Umezawa to Vitiello: Quantum Field Theory of Brain States
4.4 Beck and Eccles: Quantum Mechanics at the Synaptic Cleft
With the exception of Eccles' idea of mental causation, the approach by Beck and Eccles essentially focuses on brain states and brain dynamics. In his more recent account, Beck (2001, 109f) states explicitly that "science cannot, by its very nature, present any answer to […] questions related to the mind". In this sense, a strictly biophysical approach may open the door to controlled speculation about mind-matter relations, but more cannot be achieved.
4.5 Penrose and Hameroff: Quantum Gravity and Microtubuli
This is a far-reaching assumption, and Penrose does not offer a concrete solution to this problem. However, he gives a number of plausibility arguments which clarify his own motivations and have in fact inspired others to take his ideas seriously. Penrose's rationale for invoking state reduction is not that the corresponding randomness offers room for mental causation to become efficacious (although this is not excluded). His conceptual starting point, developed at length in two books (Penrose 1989, 1994), is that elementary conscious acts must be non-algorithmic. Phrased differently, the emergence of a conscious act is a process which cannot be described algorithmically, hence cannot be computed. His background in this respect has a lot to do with the nature of creativity, mathematical insight, Gödel's incompleteness theorem, and the idea of a Platonic reality beyond mind and matter.
In contrast to the unitary time evolution of quantum processes, Penrose suggests that a valid formulation of quantum state reduction replacing von Neumann's projection postulate must faithfully describe an objective physical process that he calls objective reduction. Since present-day quantum theory does not contain such a picture, he argues that effects not currently covered by quantum theory should play a role in state reduction. Ideal candidates for him are gravitational effects, since gravitation is the only fundamental interaction which is not integrated into quantum theory so far.
Rather than modifying elements of the theory of gravitation (i.e., general relativity) to achieve such an integration, Penrose discusses the reverse: that novel features have to be incorporated in quantum theory for this purpose. In this way, he arrives at the proposal of gravitation-induced objective state reduction.
However, decoherence is just one piece in the debate about the overall picture suggested by Penrose and Hameroff. From another perspective, their proposal of microtubules as quantum computing devices has recently received support from work of Bandyopadhyay's lab in Japan, showing evidence for vibrational resonances and conductivity features in microtubules that should be expected if they are macroscopic quantum systems (Sahu et al. 2013). Bandyopadhyay's results attracted considerable attention and commentary (see Hameroff and Penrose 2014). In a well-informed in-depth analysis, Pitkänen (2014) raised concerns to the effect that the reported results alone may not be sufficient to confirm the approach proposed by Hameroff and Penrose with all its ramifications.
A recent paper by Craddock et al. (2015) discusses in detail how microtubular processes (rather than, or in addition to, synaptic processes, see Flohr 2000) may be affected by anesthetics, and may also be responsible for neurodegenerative memory disorders. As the correlation between anesthetics and consciousness seems obvious at the phenomenological level, it is interesting to know the intricate mechanisms by which anesthetic drugs act on the cytoskeleton of neuronal cells,[13] and what role quantum mechanics plays in these mechanisms. Craddock et al. (2015) point out a number of possible quantum effects (including the power-law behavior addressed by Vitiello, cf. Section 4.3) which can be investigated using presently available technologies.
4.6 Mind and Matter as Dual Aspects
As mentioned in Section 2, dual-aspect approaches have a long history, essentially starting with Spinoza as a most outspoken protagonist. Major directions in the 20th century have been described and compared in some detail by Atmanspacher (2014). An important distinction between two basic classes of dual-aspect thinking is the way in which the psychophysically neutral domain is related to the mental and the physical. For Russell and the neo-Russellians, the compositional arrangements of psychophysically neutral elements decide how they differ with respect to mental or physical properties. As a consequence, the mental and the physical are reducible to the neutral domain. Chalmers' (1996, Chap. 8) ideas on "consciousness and information" fall into this class. Tononi's theoretical framework of "integrated information theory" (see Oizumi et al. 2014, Tononi and Koch 2015) can be seen as a concrete implementation of a number of features of Chalmers' proposal. No quantum structures are involved in this work.
In Bohm's and Hiley's approach, the notions of implicate and explicate order mirror the distinction between ontic and epistemic domains. Mental and physical states emerge by explication, or unfoldment, from an ultimately undivided and psychophysically neutral implicate, enfolded order. This order is called holomovement because it is not static but rather dynamic, as in Whitehead's process philosophy. De Gosson and Hiley (2013) give a good introduction to how the holomovement can be addressed from a formal (algebraic) point of view.
At the level of the implicate order, the term active information expresses that this level is capable of "informing" the epistemically distinguished, explicate domains of mind and matter. At this point it should be emphasized that the usual notion of information is clearly an epistemic term. Nevertheless, there are quite a number of dual-aspect approaches addressing something like information at the ontic, psychophysically neutral level.[15] Using an information-like concept in a non-epistemic manner is inconsistent if the common (syntactic) significance of Shannon-type information is intended, which requires distinctions in order to construct partitions, providing alternatives, in the set of given events. Most information-based dual-aspect approaches do not sufficiently clarify their notion of information, so that misunderstandings arise easily.
While the proposal by Bohm and Hiley essentially sketches a conceptual framework without further concrete details, particularly concerning the mental domain, the suggestions by Pauli and Jung offer some more material to discuss. An intuitively appealing way to represent their approach considers the distinction between epistemic and ontic domains of material reality due to quantum theory in parallel with the distinction between epistemic and ontic mental domains. On the physical side, the epistemic/ontic distinction refers to the distinction between a "local realism" of empirical facts obtained from classical measuring instruments and a "holistic realism" of entangled systems (Atmanspacher and Primas 2003). Essentially, these domains are connected by the process of measurement, thus far conceived as independent of conscious observers. The corresponding picture on the mental side refers to a distinction between the conscious and the unconscious.[16] In Jung's depth psychological conceptions, these two domains are connected by a process of emergence of conscious mental states from the unconscious, analogous to physical measurement.
In Jung's depth psychology it is crucial that the unconscious has a collective component, unseparated between individuals and consisting of the so-called archetypes. They are regarded as constituting the psychophysically neutral level covering both the collective unconscious and the holistic reality of quantum theory. At the same time they operate as "ordering factors", being responsible for the arrangement of their psychical and physical manifestations in the epistemically distinguished domains of mind and matter. More detailed illustrations of this picture can be found in Jung and Pauli (1955), Meier (2001), Atmanspacher and Primas (2009), Atmanspacher and Fach (2013), and Atmanspacher and Fuchs (2014).
This scheme is clearly related to scenario (B) of Sec. 2, combining an epistemically dualistic with an ontically monistic approach. There is a causal relationship (in the sense of formal rather than efficient causation) between the psychophysically neutral, monistic level and the epistemically distinguished mental and material domains. In Pauli's and Jung's terms this kind of causation is expressed by the ordering operation of archetypes in the collective unconscious. A remarkable feature of scenario (B) is the possibility that the mental and material manifestations may inherit mutual correlations due to the fact that they are jointly caused by the psychophysically neutral level. One might say that such correlations are remnants reflecting the lost holism of this level.
In this sense, they are not the result of any direct causal interaction between mental and material domains. Thus, they are not suitable for an explanation of direct mental causation in the usual sense. Their existence would require some unconscious activity entailing correlation effects that would appear as mental causation. Independently of quantum theory, a related move was suggested by Velmans (2002, 2009). But even without mental causation, scenario (B) is relevant to ubiquitous correlations between conscious mental states and brain states. In the Pauli-Jung conjecture, these correlations are called synchronistic (see also Primas 1996), and have been extended to psychosomatic relations (Meier 1975). A more comprehensive typology of mind-matter correlations following from Pauli's and Jung's dual-aspect monism was proposed by Atmanspacher and Fach (2013). They found that a large body of empirical material concerning so-called “exceptional experiences” can be classified according to their deviation from the conventional reality model of a subject and from the conventional relations between its components. Synchronicities in the sense of Pauli and Jung appear as a special case of such relational deviations.

Primas (2003, 2009) has proposed a dual-aspect approach where the distinction of mental and material domains originates from the distinction between two different modes of time: tensed (mental) time, including nowness, on the one hand, and tenseless (physical) time, viewed as an external parameter, on the other (see the entries on time and on being and becoming in modern physics). Regarding these two concepts of time as implied by a symmetry breaking of a timeless level of reality that is psychophysically neutral, Primas conceives the tensed time of the mental domain as quantum-correlated with the parameter time of physics via “time-entanglement”. Although this scenario has been formulated in a Hilbert space framework with appropriate time operators (Primas 2009), it is still a tentative scheme without concrete indications of how to test it empirically. Nevertheless it offers a formally elaborated and conceptually consistent dual-aspect quantum framework for basic aspects of the mind-matter problem.

As indicated above, the approaches by Stapp (Section 4.2) and Vitiello (Section 4.3) contain elements of dual-aspect thinking as well, although these are not much emphasized by the authors. The dual-aspect quantum approaches discussed in the present section tend to focus on the issue of entanglement more than on state reduction. The primary purpose here is to understand correlations between mental and material domains rather than direct interactions between them.

4.7 Mental Quantum Features

It is an old idea, going back to Bohr, that central conceptual features of quantum theory, such as complementarity, are also of pivotal significance outside the domain of physics. In fact, Bohr became familiar with the idea through the psychologist Edgar Rubin and, more indirectly, William James (Holton 1970) and immediately saw its potential for quantum physics. Although Bohr was always convinced of the extraphysical relevance of complementarity, he never elaborated this idea in concrete detail, and for a long time after him no one else did so either. This situation has changed: there are now a number of research programs generalizing key notions of quantum theory in a way that makes them applicable beyond physics.
Of particular interest are approaches that have been developed in order to pick up Bohr's proposal with respect to psychology and cognitive science. The first steps in this direction were made by the group of Aerts in the early 1990s (Aerts et al. 1993), using non-distributive propositional lattices to address quantum-like behavior in non-quantum systems. Alternative approaches have been initiated by Khrennikov (1999), focusing on non-classical probabilities, and Atmanspacher et al. (2002), outlining an algebraic framework with non-commuting operations. Other lines of thinking are due to Primas (2007), addressing complementarity with partial Boolean algebras, and Filk and von Müller (2008), indicating links between basic conceptual categories in quantum physics and psychology. The particular strength of the idea of generalizing quantum theory beyond quantum physics is that it provides a formal framework which both yields a transparent, well-defined link to conventional quantum physics and has been used to describe a number of concrete psychological applications with surprisingly detailed theoretical and empirical results. Corresponding approaches fall under the third category of Section 4.1: further developments or generalizations of quantum theory.

3. The perception of a stimulus is bistable if the stimulus is ambiguous, such as the Necker cube. Atmanspacher and colleagues developed a detailed model predicting a quantitative relation between basic psychophysical time scales in bistable perception that has been confirmed experimentally (Atmanspacher and Filk 2013). Moreover, Atmanspacher and Filk (2010) conjectured that particular distinguished states in bistable perception may violate temporal Bell inequalities—a litmus test for quantum behavior. See also Mahler (2015) for an informed overview of temporal Bell inequalities and their consequences.

6. The difficult issue of meaning in natural languages is often explored in terms of semantic networks. Gabora and Aerts (2002) described the way in which concepts are evoked, used, and combined to generate meaning depending on contexts. Their ideas about concept association in evolution were further developed by Gabora and Aerts (2009). Bruza et al. (2009) referred to meaning relations in terms of entanglement-style features in quantum representations of the human mental lexicon and proposed experimental work capable of testing this approach. See Bruza et al. (2013, in Other Internet Resources) for first empirical results in this direction.

7. Quantum entanglement implies correlations exceeding standard classical correlations (by violating Bell-type inequalities) but obeying the so-called Tsirelson bound. However, this bound does not exhaust the range by which Bell-type correlations can be violated in principle. Popescu and Rohrlich (1994) described such hypothetical super-quantum correlations for particular measurement settings, and Dzhafarov and Kujala (2013) derived a compact way to distinguish the different types of correlations theoretically. Super-quantum correlations may arise due to priming or other context effects in mental systems. Possible examples are non-separable concept combinations à la Bruza et al. (2013, in Other Internet Resources).

It is a distinguishing aspect of the approaches listed above that they have led to well-defined and specific theoretical models with empirical consequences and novel predictions.
A second point worth mentioning is that the approaches have formed a scientific community—there are several groups worldwide (rather than solitary actors) studying quantum ideas in cognition, partly even in collaborative efforts. For about a decade there have been regular international conferences with proceedings for the exchange of new results and ideas, and target articles and special issues of well-established journals have been devoted to basic frameworks and new developments (Busemeyer and Bruza 2013, Pothos and Busemeyer 2013, Haven and Khrennikov 2013, Wang et al. 2013, Wendt 2015).

5. Conclusions

Any discussion of state collapse or state reduction (e.g., by measurement) refers, at least implicitly, to superposition states, since those are the states that are reduced. Insofar as entangled systems remain in a quantum superposition as long as no measurement has occurred, entanglement is always co-addressed when state reduction is discussed. By contrast, some of the dual-aspect quantum approaches utilize the topic of entanglement differently, and independently of state reduction in the first place. Inspired by the entanglement-induced nonlocal correlations of quantum physics, mind-matter entanglement is conceived as the hypothetical origin of mind-matter correlations. This reflects the highly speculative picture of a fundamentally holistic, psychophysically neutral level of reality from which correlated mental and material domains emerge.

The approach initiated by Umezawa is embedded in the framework of quantum field theory, more broadly applicable and formally more sophisticated than standard quantum mechanics. It is used to describe the emergence of classical activity in neuronal assemblies on the basis of symmetry breakings in a quantum field theoretical framework. A clear conceptual distinction between brain states and mental states has often been missing, but this ambiguity has recently been resolved in favor of brain states. Their relation to mental states is ultimately left open; some of Vitiello's accounts suggest a vague inclination toward a dual-aspect approach.

The dual-aspect approaches of Pauli and Jung and of Bohm and Hiley are conceptually more transparent and more promising. On the other hand, they are essentially unsatisfactory with regard to a sound formal basis and concrete empirical scenarios. Hiley's work offers an algebraic framework which may lead to theoretical progress. A novel dual-aspect quantum proposal by Primas, based on the distinction between tensed mental time and tenseless physical time, marks a significant step forward, particularly as concerns a consistent formal framework.

• Atmanspacher, H., and Primas, H. (eds.), 2009, Recasting Reality. Wolfgang Pauli's Philosophical Ideas and Contemporary Science, Berlin: Springer.
• Beck, F., 2001, “Quantum brain dynamics and consciousness,” in The Physical Nature of Consciousness, ed. by P. van Loocke, Amsterdam: Benjamins, pp. 83–116.
• Brukner, C., and Zeilinger, A., 2003, “Information and fundamental elements of the structure of quantum theory,” in Time, Quantum and Information, ed. by L. Castell and O. Ischebeck, Berlin: Springer, pp. 323–355.
• Bruza, P.D., Kitto, K., Nelson, D., and McEvoy, C.L., 2009, “Is there something quantum-like about the human mental lexicon?” Journal of Mathematical Psychology, 53: 362–377.
• Busemeyer, J.R., and Bruza, P.D., 2013, Quantum Models of Cognition and Decision, Cambridge: Cambridge University Press.
• Butterfield, J., 1998, “Quantum curiosities of psychophysics,” in Consciousness and Human Identity, ed.
by J. Cornwell, Oxford: Oxford University Press, pp. 122–157.
• Chalmers, D., 1995, “Facing up to the problem of consciousness,” Journal of Consciousness Studies, 2(3): 200–219.
• Ellis, G.F.R., Noble, D., and O'Connor, T. (eds.), 2011, Top-Down Causation: An Integrating Theme Within and Across the Sciences?, Special Issue of Interface Focus 2(1).
• Esfeld, M., 1999, “Wigner's view of physical reality,” Studies in History and Philosophy of Modern Physics, 30B: 145–154.
• Flohr, H., 2000, “NMDA receptor-mediated computational processes and phenomenal consciousness,” in Neural Correlates of Consciousness. Empirical and Conceptual Questions, ed. by T. Metzinger, Cambridge: MIT Press, pp. 245–258.
• Franck, G., 2004, “Mental presence and the temporal present,” in Brain and Being, ed. by G.G. Globus, K.H. Pribram, and G. Vitiello, Amsterdam: Benjamins, pp. 47–68.
• Fuchs, C.A., 2002, “Quantum mechanics as quantum information (and only a little more),” in Quantum Theory: Reconsideration of Foundations, ed. by A. Yu. Khrennikov, Växjö: Växjö University Press, pp. 463–543.
• Grush, R., and Churchland, P.S., 1995, “Gaps in Penrose's toilings,” Journal of Consciousness Studies, 2(1): 10–29. (See also the response by R. Penrose and S. Hameroff in Journal of Consciousness Studies, 2(2) (1995): 98–111.)
• Hepp, K., 1999, “Toward the demolition of a computational quantum brain,” in Quantum Future, ed. by P. Blanchard and A. Jadczyk, Berlin: Springer, pp. 92–104.
• Hiley, B.J., 2001, “Non-commutative geometry, the Bohm interpretation and the mind-matter relationship,” in Computing Anticipatory Systems—CASYS 2000, ed. by D. Dubois, Berlin: Springer, pp. 77–88.
• James, W., 1950, The Principles of Psychology, Vol. 1, New York: Dover. Originally published in 1890.
• London, F., and Bauer, E., 1939, La théorie de l'observation en mécanique quantique, Paris: Hermann; English translation, “The theory of observation in quantum mechanics,” in Quantum Theory and Measurement, ed. by J.A. Wheeler and W.H. Zurek, Princeton: Princeton University Press, 1983, pp. 217–259.
• Meier, C.A., 1975, “Psychosomatik in Jungscher Sicht,” in Experiment und Symbol, ed. by C.A. Meier, Olten: Walter, pp. 138–156.
• Penrose, R., 1989, The Emperor's New Mind, Oxford: Oxford University Press.
• Primas, H., 2002, “Hidden determinism, probability, and time's arrow,” in Between Chance and Choice, ed. by H. Atmanspacher and R.C. Bishop, Exeter: Imprint Academic, pp. 89–113.
• –––, 2015, “A quantum-mechanical theory of the mind-brain connection,” in Beyond Physicalism, ed. by E.F. Kelly et al., Lanham: Rowman and Littlefield, pp. 157–193.
• Strawson, G., 2003, “Real materialism,” in Chomsky and His Critics, ed. by L. Anthony and N. Hornstein, Oxford: Blackwell, pp. 49–88.
• Vitiello, G., 2002, “Dissipative quantum brain dynamics,” in No Matter, Never Mind, ed. by K. Yasue, M. Jibu, and T. Della Senta, Amsterdam: Benjamins, pp. 43–61.
• Wigner, E.P., 1977, “Physics and its relation to human knowledge,” Hellenike Anthropostike Heaireia, Athens, pp. 283–294. Reprinted in Wigner's Collected Works, Vol. VI, ed. by J. Mehra, Berlin: Springer, 1995, pp. 584–593.

Other Internet Resources

Acknowledgments

Inspiring discussions on numerous topics treated in this paper with Guido Bacciagaluppi, Thomas Filk, Hans Flohr, Hans Primas, Stefan Rotter, Henry Stapp, Giuseppe Vitiello, and Max Velmans are gratefully acknowledged. Useful comments by Thomas Filk, Stuart Hameroff, and Giuseppe Vitiello helped to update the previous 2011 version of this entry.
Copyright © 2015 by Harald Atmanspacher <atmanspacher@collegium.ethz.ch>
Sunday, November 04, 2007

Can one assign a continuous Schrödinger time evolution to light-like 3-surfaces?

Alain Connes wrote very interesting comments about factors of various types, using as an example the Schrödinger equation for various kinds of foliations of space-time into time = constant slices. If this kind of foliation does not exist, one cannot speak about a time evolution of the Schrödinger equation at all. Depending on the character of the foliation one can have a factor of type I, II, or III. For instance, a torus with the slicing dx = a dy in flat coordinates gives a factor of type I for rational values of a and a factor of type II for irrational values of a.

1. 3-D foliations and type III factors

Connes mentioned 3-D foliations V which give rise to type III factors. The foliation property requires a slicing of V by a one-form v to which the slices are orthogonal (this requires a metric).

1. The foliation property requires that v, multiplied by a suitable scalar, is a gradient. This gives the integrability condition dv = w∧v, with w = -dψ/ψ = -d log(ψ). Something proportional to log(ψ) can be taken as a third coordinate varying along the flow lines of v: the flow defines a continuous sequence of maps of the 2-dimensional slice to itself.

2. If the so-called Godbillon-Vey invariant, defined as the integral of dw∧w over V, is non-vanishing, a factor of type III is obtained using Schrödinger amplitudes for which the flow lines of the foliation define the time evolution. The operators of the algebra in question are transversal operators acting on Schrödinger amplitudes at each slice. Essentially a Schrödinger equation in 3-D space-time would be in question, with the factor of type III resulting from the exotic choice of the time coordinate defining the slicing.

2. What happens in the case of light-like 3-surfaces?

In TGD light-like 3-surfaces are natural candidates for V, and it is interesting to look at what happens in this case. Light-likeness is of course a disturbing complication since the orthogonality condition, and thus the contravariant metric, is involved with the definition of the slicing. Light-likeness is not however involved with the basic conditions.

1. The one-form v defined by the induced Kähler gauge potential A, which also defines a braiding, is a unique identification for v. If the foliation exists, the braiding flow defines a continuous sequence of maps of the partonic 2-surface to itself.

2. Physically this means the possibility of a super-conducting phase with an order parameter satisfying the covariant constancy equation Dψ = (d/dt - ieA)ψ = 0. This would describe a supracurrent flowing along the flow lines of A.

3. If integrability fails to be true, one cannot assign a Schrödinger time evolution to the flow lines of v. One might perhaps say that the 3-surface behaves like a single quantum event not allowing a slicing into a continuous Schrödinger time evolution.

4. In TGD Schrödinger amplitudes are replaced by second quantized induced spinor fields. Hence one does not face the problem of whether it makes sense to speak about a Schrödinger time evolution of a complex order parameter along the flow lines of a foliation or not. Also the fact that the "time evolution" for the modified Dirac operator corresponds to a single position dependent generalized eigenvalue, identified as a Higgs expectation, the same for all transversal modes (essentially z^n labelled by conformal weight), is crucial, since it saves from the problems caused by the possible non-existence of a Schrödinger evolution.
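As an aside for readers who want to experiment: the integrability condition dv = w∧v is the Frobenius condition v∧dv = 0, which in 3-D vector-calculus language becomes v·(curl v) = 0. Here is a minimal sketch checking this symbolically (Python with sympy; both example fields are generic illustrations of mine, not TGD-specific quantities):

```python
# Minimal sketch of the Frobenius integrability test in 3-D:
# dv = w ^ v (equivalently v ^ dv = 0) holds iff v . (curl v) = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')

def frobenius_obstruction(v):
    """Return v . (curl v); the planes orthogonal to v integrate into
    a foliation iff this expression vanishes identically."""
    curl = sp.Matrix([
        sp.diff(v[2], y) - sp.diff(v[1], z),
        sp.diff(v[0], z) - sp.diff(v[2], x),
        sp.diff(v[1], x) - sp.diff(v[0], y),
    ])
    return sp.simplify(v.dot(curl))

# integrable: a gradient field (here grad of f = x*y*z) passes trivially
grad_f = sp.Matrix([y*z, x*z, x*y])
print(frobenius_obstruction(grad_f))    # -> 0

# non-integrable: the standard contact form dz - y dx; its flow lines braid
contact = sp.Matrix([-y, 0, 1])
print(frobenius_obstruction(contact))   # -> 1 (never vanishes)
```

The second field is the textbook contact structure: its obstruction is identically 1, so no choice of slicing can turn its flow into a continuous "time evolution", which is the same kind of failure described above for non-vacuum extremals.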
It is not at all clear whether the integrability condition can be satisfied at all in the TGD framework for non-vacuum extremals. Indeed, it seems that it cannot, and this is due to a very important delicacy related to the construction of quantum TGD as an almost topological QFT.

1. The construction of quantum TGD at the parton level, using light-like 3-surfaces as basic objects, forces the introduction of a Lorentz invariant component of the Kähler gauge potential, Aa = constant, where a = (t^2 - r^2)^(1/2) denotes the light-cone proper time. The value of this component could depend on the sector of the generalized imbedding space, partially characterized by the value of the Planck constant. The modification does not affect the Kähler form but has the highly non-trivial implication that the Chern-Simons action is non-vanishing even when the CP2 projection of the light-like 3-surface is 2-dimensional. D = 2 holds true for the extremals of the Chern-Simons action.

2. Non-vanishing Aa is necessary in order to modify the topological QFT defined by the Chern-Simons action into an almost topological QFT. What is of utmost importance is that the Noether currents associated with the four-momentum are non-trivial and non-conserved, whereas the four-momentum squared is conserved and non-vanishing. The breaking of Poincare invariance does not however take place at the level of the world of classical worlds, since the configuration space is a union of sub-configuration spaces for which a choice of a preferred future light-cone has been made.

3. Since the integrability conditions for A are not gauge invariant, the non-vanishing value of Aa implies that the integrability conditions fail already for D = 2, as is easy to see by taking two X3 coordinates to be the coordinates of a geodesic sphere of CP2 and the remaining coordinate a light-like coordinate.

The light-like 3-surfaces associated with all non-vacuum extremals would behave like quantum events rather than continuous evolutions of the Schrödinger equation. This is in the spirit of the zero energy ontology, in which the ends of the space-time sheet carry positive and negative energy states defining the physical state as a zero energy state. It also conforms with the notion of time-like entanglement defined by the Connes tensor product, which can be reduced only partially in quantum measurements. The failure of the integrability condition means that the flow lines of A typically define helical structures, which means a non-trivial braiding. This strongly brings to mind the helical structures of living matter.

3. Extremals of Kähler action

Some comments relating to the interpretation of the classification of the extremals of Kähler action by the dimension of their CP2 projection are in order. In the chapter Basic Extremals of the Kähler Action, the classical field equations of TGD are studied. It was found that the extremals can be classified according to the dimension D of the CP2 projection of the space-time sheet in the case that Aa = 0 holds true.

1. For D = 2 the integrability conditions for the vector potential can be satisfied for Aa = 0, so that one has a generalized Beltrami flow, and a Schrödinger time evolution associated with the flow lines of the vector potential, defined by the covariant constancy condition Dψ = 0, makes sense. The Kähler current is vanishing or light-like. This phase is analogous to a super-conductor or a ferromagnetic phase. For non-vanishing Aa the Beltrami flow property is lost, but the analogy with ferromagnetism still makes sense.

2. For D = 3 foliations are lost even for Aa = 0.
The phase is dominated by helical structures, as is the D = 2 phase with non-vanishing Aa. This phase is analogous to a spin glass phase around the transition point from the ferromagnetic to the non-magnetized phase, and is expected to be important in living matter systems.

3. D = 4 is analogous to a chaotic phase with vanishing Kähler current, and to a phase without magnetization. The interpretation in terms of non-quantum-coherent "dead" matter is suggestive.

An interesting question is whether the ordinary 8-D imbedding space, which defines one sector of the generalized imbedding space, could correspond to the Aa = 0 phase. If so, then all states in this sector would be vacua with respect to M4 quantum numbers. M4-trivial zero energy states in this sector could be transformed into non-trivial zero energy states by a leakage to other sectors.
NIST Physical Measurement Laboratory

Atomic Reference Data for Electronic Structure Calculations

These calculations are all carried out in the framework of generalized Kohn-Sham theory ([5], and Chapter 7 of Ref. [3]). Details of the LDA, LSD, RLDA, and ScRLDA formalisms are described in independent sections below; here we focus on matters that are generic to all these approximations. We utilize the central-field approximation, with conventional labelling of the principal and angular momentum quantum numbers of the electronic orbitals. We limit our calculations to the ground-state electronic configurations of the first 92 neutral atoms and singly-charged cations of the periodic table; the specific configurations used are described below. In cases of partially filled electronic subshells, fractional occupancies are assigned to orbitals with different azimuthal quantum number, m, to accomplish a spherical averaging of the charge distribution. In the case of RLDA, this extends to averaging over subshells with the same orbital angular momentum but different values of the total angular momentum j. Thus, for example, if there were 2 electrons in a p shell, we would assign an electron population of 4/3 to the p3/2 states and 2/3 to the p1/2 states.

The results presented here derive from four codes that were written independently. These were found to give results of good mutual consistency, provided that the numerical approximations within each code were varied until a very high degree of convergence was obtained. These codes were used by us with the permission of their original authors. However, all codes required modification to obtain numerical convergence to the target accuracy of 1 microhartree in total energy. These modifications were not subject to review by the original authors. Some of these codes circulate relatively freely within the electronic structure community, so we must caution readers that a given available version of any one of these codes need not yield results identical to those presented here. Our purpose in this study was to use robust, tested tools to accomplish a specific task, not to provide a relative ranking of various codes. For this reason, we do not give details of individual code performance beyond what is necessary to describe the uncertainties in the results, and we refer to each code by a numerical label between 1 and 4, chosen arbitrarily.

The local-density functional

The local-density approximation (LDA) requires that the exchange-correlation potential be given as a function of the electron density at a given point in space. For this part of our study, we use the form of the exchange-correlation potential given by Vosko, Wilk, and Nusair (VWN) [4]. The form is a fit to the Ceperley-Alder electron gas study [6]. The VWN functional reproduces the random-phase-approximation (RPA) results for a uniform electron gas in the high-density limit, it reproduces the spin-stiffness constant calculated in the RPA in the paramagnetic limit of a uniform electron gas, and it is uniformly differentiable as a function of the electron gas parameter r_s. It is also in standard use, or available as an option, in many electronic structure codes, and thereby provides a convenient reference potential for checking the accuracy of numerical calculations. We now summarize the form of the VWN functional.
The exchange-correlation energy per electron is separated into two parts, an exchange term and a correlation term. In the RLDA and ScRLDA calculations, we use the relativistic corrections to the energy-density functional proposed by MacDonald and Vosko [7].

Radial grids

A suitable choice of a radial grid is key to obtaining accurate numerical solutions of the integro-differential equations of density-functional theory. The codes in our test suite make different choices for the radial grid. Two codes make perhaps the simplest choice, an exponentially increasing grid

$$ r_i = r_{min} \, (r_{max}/r_{min})^{i/N}, \quad i = 0, \ldots, N $$ (Eq. 22)

with three parameters: the minimum radius, rmin, the maximum radius, rmax, and the number of intervals, N. The application of the exponential grid to the atomic Schrödinger equation has been discussed by Desclaux [8]. For one code we used N = 15788, rmin = 1/(160 Z), and rmax = 50. (All distances are in atomic units.) Another code used N ≤ 8000, rmin = 10^-6/Z, and rmax = 800 Z^-1/2; in this case, the energies were extrapolated to N → ∞ using an N^-2 or N^-4 dependence, depending on the quantity in question.

Another code chooses a grid which is nearly linear near the origin and exponentially increasing at large r,

$$ r_i = a \, (e^{b i} - 1) $$ (Eq. 23)

which again is determined by three parameters, a, b, and N. This grid includes the origin explicitly as r0. In this case, we took a = 4.34 × 10^-6/Z, b = 0.002304, and rmax = 50, leading to N = 7058 for H, increasing to N = 9021 for U, and to r1 = 10^-7 for H, decreasing to 1.1 × 10^-9 for U.

A fourth code uses a change of variable technique:

$$ \rho = \ln r $$ (Eq. 24)

A uniform grid is taken in the transformed variable from ρ(rmin) to ρ(rmax), where the parameters are taken to be rmin = 0.01 e^-4/Z, for atomic number Z, and rmax = 50. The number of points increased from N = 2113 for H to N = 2837 for U. The density of points chosen in the latter two codes, linear near the origin and exponentially increasing at large r, is similar to that suggested from theoretical considerations [9].

Value of the inverse fine-structure constant

The calculations used the 1986 CODATA recommended value for α^-1, which was 137.0359895(61), where the digits in parentheses are the one-standard-deviation uncertainty in the last digits of the given value. The value of α^-1 has changed in more recent compilations of fundamental constants; see the Fundamental Physical Constants homepage, where values are kept current. In test calculations using the 2006 updated values, the total energy of U (i.e., uranium) changes by about 461 microhartrees, and the 1s Kohn-Sham eigenvalue by 108 microhartrees, in the RLDA approximation, with the shift in the direction of smaller binding energies. This is consistent with the recommended value for α^-1 having become slightly larger, implying that the recommended value for α has become slightly smaller, thereby reducing the relativistic effects that contribute to stronger overall binding.
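To make the three grid constructions concrete, here is a minimal sketch (Python with numpy; the function names are mine, and the explicit form used for Eq. 23 is a reconstruction from the description above, not code taken from the four original programs):

```python
# Sketch of the three radial-grid choices described above (atomic units).
import numpy as np

Z = 1  # hydrogen, for illustration

def exp_grid(r_min, r_max, N):
    """(Eq. 22) purely exponential grid with N intervals, N+1 points."""
    return r_min * (r_max / r_min) ** (np.arange(N + 1) / N)

def lin_exp_grid(a, b, N):
    """(Eq. 23, reconstructed form) r_i = a*(exp(b*i) - 1):
    linear near the origin (r_0 = 0 included), exponential at large r."""
    return a * (np.exp(b * np.arange(N + 1)) - 1.0)

def log_grid(r_min, r_max, N):
    """(Eq. 24) N points uniform in rho = ln(r)."""
    return np.exp(np.linspace(np.log(r_min), np.log(r_max), N))

r1 = exp_grid(1.0 / (160 * Z), 50.0, 15788)        # first code's parameters
r2 = lin_exp_grid(4.34e-6 / Z, 0.002304, 7058)     # third code, H values
r3 = log_grid(0.01 * np.exp(-4) / Z, 50.0, 2113)   # fourth code, H values
print(r1[:3], r2[:3], r3[:3])                      # spacing near the nucleus
```

With the quoted hydrogen parameters, the Eq. 23 grid indeed reaches r_max ≈ 50 at i = N, which is one consistency check on the reconstructed form.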
Viewpoint: Negative Frequencies Get Real

• Fabio Biancalana, Max Planck Institute for the Science of Light, Günther-Scharowsky Strasse 1/26, D-91058 Erlangen, Germany

Physics 5, 68

Figure 1: Schematic representation of a propagating optical soliton that sheds in its wake two distinct blue-shifted modes: the usual positive resonant radiation (RR), and a second mode identified by Rubino et al. as negative resonant radiation (NRR). All these modes are traveling in the forward direction indicated by the arrow.

A soliton is a localized “lump” of light that is the product of wave effects in a nonlinear medium and can, under certain conditions, emit low-intensity, positive-frequency resonant radiation in its wake, due to the phase matching between its momentum and the dispersion of the medium itself. Writing in Physical Review Letters, Eleonora Rubino at the University of Insubria in Como, Italy, and collaborators have discovered that there should be a negative-frequency counterpart of this resonant emission, which they have identified experimentally in two different systems [1].

When light travels through a medium, the dispersion—the relation between frequency and momentum of a wave—has to be taken into account. This has very important consequences: vacuum, for example, possesses a trivial dispersion—a straight line across all frequencies—and thus all colors travel at the same speed in empty space. However, in any other medium, for example, a silica optical fiber, the dispersion is far from being a straight line, so that different frequencies travel at different velocities. This produces a typical temporal broadening of short input pulses in fibers. When nonlinear effects also come into play, the momentum (and thus the refractive index) depends not only on frequency, but also on the intensity of light. In this case, the spreading due to dispersion and the self-focusing effect due to nonlinearity can perfectly balance to create solitons—localized bell-shaped waves that travel for very long distances in the waveguide without any distortion. Solitons are nowadays commonly produced, sometimes in large quantities, in many experiments [2].

The soliton momentum is nonlinear and depends on the soliton's intensity. When it coincides with the fiber dispersion, so-called phase matching takes place. Phase matching is a very common phenomenon in nonlinear optics, in which two or more waves at different frequencies are allowed to exchange energy efficiently, due to the coincidence of their phases (which are proportional to their momenta). Under phase-matching conditions, a special kind of low-intensity radiation can be emitted by the soliton at a well-defined frequency, called resonant radiation [3]. This radiation is one of the essential ingredients of supercontinuum generation, an extremely important and useful nonlinear phenomenon, which massively broadens the spectrum of an input narrow-band pulse, producing a flat spectral distribution over a broad range of frequencies, similar to sunlight, but coherent, and more intense by six orders of magnitude [4].
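The balance between dispersion and nonlinearity can be made explicit in a standard toy model (a sketch in normalized units under one common sign convention; this is the generic textbook form, not the specific model of Ref. [1]): the focusing nonlinear Schrödinger equation and its fundamental soliton,

```latex
\[
  i\,\partial_z u + \tfrac{1}{2}\,\partial_t^2 u + |u|^2 u = 0,
  \qquad
  u(z,t) = \operatorname{sech}(t)\, e^{i z/2},
\]
```

where the second-derivative (dispersion) term and the cubic (Kerr) term exactly cancel each other's tendency to reshape the sech profile, so the pulse propagates with only an overall phase rotation.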
Supercontinuum generation has been intensively studied in optical fibers over the last fifteen years, and thus the theoretical, analytical, and numerical tools that are available today are very mature and advanced, and simulations that perfectly reproduce experimental findings are commonplace in any serious nonlinear optics laboratory [5]. It is therefore with great surprise that a missing ingredient of supercontinuum generation has recently been identified experimentally, and explained theoretically, by Rubino et al. [1]: a phase matching between the soliton momentum and the fiber dispersion at negative frequencies.

It is the usual practice, when dealing with the classical Maxwell equations, to assume that only positive frequencies have an acceptable physical meaning. When the soliton dispersion (which is basically a straight line with a slope proportional to its velocity) and the fiber dispersion (which is a rather complicated curve) are phase matched at positive frequencies, positive resonant radiation is produced, which is the one that most people observe in experiments. However, there is no particular reason why we have to restrict our attention to positive frequencies only, since any electromagnetic wave is a real field, and thus it is the sum of a field with positive frequencies and its complex conjugate field, and therefore possesses negative frequencies. This simple reasoning leads to a phase matching between the soliton and the negative-frequency part of the fiber dispersion, and the curious, but logical, consequence is that this phase matching is asymmetric, and so leads to the generation of a new resonant radiation peak at a frequency that is not mirror symmetric with its positive-energy counterpart.

Any physical electric field is a real function, and therefore can be expressed as a sum of two complex functions (called envelopes), which are conjugates of each other. If the first complex function contains only positive frequencies, the second must contain only negative ones. These two pieces always come together, and thus negative frequencies have always been thought to be “redundant,” i.e., positive and negative frequencies should contain the same physics in classical electromagnetism. However, the point of Rubino et al.'s work is that this is not true. The presence of a soliton (or, as a matter of fact, any wave that has a steep intensity front) can break the symmetry of the phase-matching condition, thus leading to two different resonant radiation frequencies, one of which is positive (shown as RR in Fig. 1) and the other negative (shown as NRR in Fig. 1). The analysis shows that these two frequencies have different magnitudes, as well as different signs. Nevertheless, since in the electric field every wave comes together with its complex conjugate, in the end the negative-frequency mode instantaneously acquires a positive frequency by switching its sign, and thus in the experiments one should see not only the conventional RR, but the NRR as well, although the latter must have a smaller amplitude than the former. In 2008, a related and very similar phenomenon was demonstrated in water waves propagating near an “event horizon” [6].
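A toy numerical sketch can show how two distinct resonances come out of this picture (Python with numpy/scipy; the polynomial dispersion k(w) and all parameter values are invented for illustration, and casting the two conditions as w' = ±w'_s in the comoving frame is my paraphrase of the argument, not the exact equations of Ref. [1]):

```python
# In the frame comoving with the soliton (velocity v), a mode of frequency w
# and wavenumber k(w) has comoving frequency w' = w - v*k(w), up to an overall
# Lorentz factor that cancels from the matching condition. Ordinary resonant
# radiation (RR) satisfies w' = +w'_s; the negative-frequency branch (NRR)
# satisfies w' = -w'_s.
import numpy as np
from scipy.optimize import brentq

c = 1.0                                  # normalized units
def k(w):                                # invented toy fiber dispersion
    return w / c + 0.05 * w**2 - 0.002 * w**3

w_s, v = 1.0, 0.9 * c                    # soliton frequency and velocity
ws_prime = w_s - v * k(w_s)              # comoving soliton frequency

def D(w, sign):                          # phase-matching mismatch
    return (w - v * k(w)) - sign * ws_prime

# scan for sign changes, then refine each root with brentq
for sign, label in [(+1, "RR"), (-1, "NRR")]:
    grid = np.linspace(0.01, 20.0, 4000)
    vals = D(grid, sign)
    for i in np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]:
        root = brentq(D, grid[i], grid[i + 1], args=(sign,))
        if abs(root - w_s) > 1e-6:       # skip the trivial soliton root
            print(f"{label} predicted at w = {root:.3f}")
```

With these invented numbers the two conditions land at different positive output frequencies, with the NRR root lying above the RR root, mirroring the asymmetry described in the text.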
In order to prove the existence of this negative-frequency resonant radiation, which is typically emitted at shorter wavelengths than its positive counterpart, the team has performed experiments with photonic crystal fibers (PCFs)—highly nonlinear fibers in which the formation of solitons and resonant radiation is particularly favorable [7]. They launched extremely short pulses of 7 femtoseconds (fs) into a 5-mm PCF with a very broad input spectrum that favors the energy transfer between the soliton and the negative resonant radiation, which they were able to observe directly, exactly at the predicted frequency. They repeated a similar experiment in a bulk medium (2 cm of calcium fluoride), using 60-fs input Bessel pulses, again demonstrating the formation of a small-amplitude negative resonant radiation at the predicted wavelength [5].

The above findings, when and if confirmed experimentally by other groups, could generate a renewed interest in supercontinuum generation, introducing a novel and refreshing point of view on this “old” phenomenon. If researchers manage to control the formation and the generation of the negative resonant radiation, there will be chances to push supercontinuum generation to shorter and shorter wavelengths, which will be very useful for several applications, such as optical coherence tomography, the characterization of optical devices, and the generation and measurement of frequency combs. The work could also substantially affect phenomena in other fields that are described by the nonlinear Schrödinger equation, for example, the formation of Bose-Einstein condensates.

Rubino et al. claim that the generation of these new radiation bands cannot be explained in any other known way by only taking into account the “conventional” positive frequencies. In future experiments based on optical fibers, to conclusively prove the relevance of this “negative world” in nonlinear optics, it will be especially important to exclude the positive resonant radiation frequencies that are due to phase matching between the soliton and higher-order linear modes in fibers [8], the so-called four-wave mixing between solitons and continuous waves [9], and the generation of purely positive frequencies from dispersive moving fronts [10], while in bulk crystals one must exclude the contribution of higher-order Bessel-Gauss states [11], which are all potentially able to produce low-intensity waves at wavelengths close to those predicted in this study.

1. E. Rubino, J. McLenaghan, S. C. Kehr, F. Belgiorno, D. Townsend, S. Rohr, C. E. Kuklewicz, U. Leonhardt, F. König, and D. Faccio, “Negative-Frequency Resonant Radiation,” Phys. Rev. Lett. 108, 253901 (2012)
2. Y. S. Kivshar and G. P. Agrawal, Optical Solitons - From Fibers to Photonic Crystals (Academic Press, San Diego, 2003)
3. N. Akhmediev and M. Karlsson, “Cherenkov Radiation Emitted by Solitons in Optical Fibers,” Phys. Rev. A 51, 2602 (1995)
4. F. Biancalana, D. V. Skryabin, and A. V. Yulin, “Theory of the Soliton Self-Frequency Shift Compensation by the Resonant Radiation in Photonic Crystal Fibers,” Phys. Rev. E 70, 016615 (2004)
5. R. R. Alfano and S. L. Shapiro, “Observation of Self-Phase Modulation and Small-Scale Filaments in Crystals and Glasses,” Phys. Rev. Lett. 24, 592 (1970); J. Dudley, G. Genty, and S. Coen, “Supercontinuum Generation in Photonic Crystal Fiber,” Rev. Mod. Phys. 78, 1135 (2006)
6. G. Rousseaux, C. Mathis, P. Maïssa, T. G. Philbin, and U.
Leonhardt, “Observation of Negative-Frequency Waves in a Water Tank: A Classical Analogue to the Hawking Effect?,” New J. Phys. 10, 053015 (2008)
7. P. Russell, “Photonic Crystal Fibers,” Science 299, 358 (2003)
8. F. Poletti and P. Horak, “Optical Solitary Waves in Three-Level Media: Effects of Different Dipole Moments,” J. Opt. Soc. Am. B 25, 645 (2008)
9. A. V. Yulin, D. V. Skryabin, and P. St. J. Russell, “Four-Wave Mixing of Linear Waves and Solitons in Fibers with Higher Order Dispersion,” Opt. Lett. 29, 2411 (2004)
10. F. Biancalana, A. Amann, A. V. Uskov, and E. P. O’Reilly, “Dynamics of Light Propagation in Spatiotemporal Dielectric Structures,” Phys. Rev. E 75, 046607 (2007)
11. V. Bagini, F. Frezza, M. Santarsiero, G. Schettini, and G. Schirripa Spagnolo, “Generalized Bessel-Gauss beams,” J. Mod. Opt. 43, 1155 (1996)
alegría, galería, argelia, alergia, riégala, aligera

quickly reading through people's worries and joys on facebook this morning, i read on a friend's wall: “Dia mundial contra el cancer de mama” (world breast cancer day; without the accents, “mama”, breast, reads just like “mamá”, mom). and i think, “wow, what an important mom she must have, to get her own world day; i'll think of her too, i hope she gets better”. since reading the day's walls is something one does with only mild interest and vague attention, it takes me about three seconds to realize what i've just done. spanish speakers, put the accents on your words!

in the 3rd position, shame award: people who carry their house keys hanging from their neck on a red lanyard
in the 2nd position, idiocy award: people who talk to their dog (with the same voice tone you use with babies) and actually believe they are having a conversation with them
in the 1st position, disrespect award: people on the platform who get into the train car before letting the packed crowd inside get off

every once in a while, like today in the train, i temporarily lose faith in human intelligence

we are amazing. look, i just asked google to find information on “peeing in fresh fallen snow”. i know, bear with me. i was speaking about that a couple of posts ago, so that's why i came up with such a sentence to look for. thing is, i hoped it would be a rare enough concept to give google search a hard time giving me back any sensible information. i have no clue about the internal mechanics of the search engine, but i was naively expecting that, this being an infrequent query, the answer would not be cached anywhere and that a long search process would be run, possibly giving me some random links not related to the semantics of my query at all but to pages speaking of pee alone, or snow. so yeah, i just asked google to find information on “peeing in fresh fallen snow”; and in a blink, the search engine gave me a link to a page at the urban dictionary website which has the following definition:

Urinart: Drawing a picture in freshly-fallen snow using urine

and this, my friends, blows my mind in so, so many ways. first of all, the fact that the word urinarting has already been created is pretty awesome. secondly, that somebody invested the time to put this information online is also pretty amazing. third, that google was able to handle my weird query by crossing information with all sorts of unstructured sources of information out there, and that it found this definition, is seriously astonishing. fourth, that it did it in no more than 0.26 seconds is ridiculously impressive. fifth, that humans have reached this state of mastery in information manipulation and management, that we have tools to store, classify and index information in such a cheap manner that not even the most daring science fiction author would possibly have dreamed of just 20 years ago, this is freaking mind blowing.

i don't know. when i was a kid, before the internet became popular around '95, i would often have to cycle to the public library to physically scan shelves in order to search for an outdated version of the information i was looking for. my great-grandmother, who was born in a tiny village in the mountains around the same time the light bulb was created, knew nothing about the world but what a guy in a black dress would tell her every Sunday morning in the form of canticles and rituals. so look at it with a bit of perspective. we are a ridiculously plastic species.
you know those crazy high tech cameras able to record thousands of frames per second, that cost $250,000? i wonder if they are any use beyond recording random objects being blown up in slow motion like, say, water balloons in people's faces. seriously. it got so boooooooring

being a man has some advantages, and some disadvantages. among the former, there is that of, when in the mountains during winter, being able to write your name by peeing in a bunch of new fallen snow (americans do have it easier since they have it really short. their name, i mean – just one syllable most of the time). the joy of this realization is immense

in every early morning bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of sleep back home. they change the eyeliner for a mascara applicator and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network in their smart phones really matter to them. like the work colleagues they are about to meet in the office or the new clients they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

in every late night bart car heading to the city there's a few young women with a mirror in one hand, an eyeliner in the other. time is precious, and this is a great way to buy some extra 15 minutes of rest back home. they change the eyeliner for a mascara applicator and proceed with the eyelashes, there in the middle of a crowd with whom they have nothing to do. only the people they can reach through the social network in their smart phones really matter to them. like the best friends they are about to meet in the pub or the new strangers they will talk to today. it's time to go for some lipstick. some astonishingly precise moves, and they're ready to go.

today i saw this image below in a blog dedicated to science, and i got immediately sad, cause it reminds me that even people doing science themselves don't always really get it – they seem to not fully understand what science is about. the statement above is basically saying that a perfect world doesn't have discontinuities – that things change slowly without abrupt alterations, that things that are a lot don't become a little suddenly without ramping down gradually, that if things are here now and they will be there later it's only because they are going to be “in between” before. basically the image is claiming that in a perfect world things are not broken but smooth. that's not true though; reality, the world, the things around us, everything is mostly broken and discontinuous. whoever wrote that blog post noted it by implying that this world is indeed not perfect or ideal. see, this is my problem – there's nothing wrong with the world. the world is ideal, it's not the imperfect thing Plato thought it was (with terrible consequences for western culture as we know it). the world is doing just fine, believe me. let me repeat it: the world is doing just fine. humans aren't. indeed, it is our mathematics that are not ideal. or at least, they are not up to the task of describing efficiently everything around us, discontinuities included.
but sure enough, the universe is full of discontinuities at all scales (it can really get pretty fractal-like sometimes), it's not made of boring spheres and planes as Galileo wrongly claimed, nor is it made out of derivatives, ordinary differential equations or any other human abstractions. an ideal world does not follow lim_{x→c} f(x) = f(c). and this is not a problem. it's a gift. on the contrary, in an ideal world humans enjoy less primitive mathematics than our current ones, some mathematics that allow us to describe and model and manipulate discontinuities and all the other beautiful features of all the things that we see around us. basically, we humans have a problem, the universe doesn't. thinking that an ideal world is one where the universe follows our thinking process (and not the other way around) is simply too much of a human egocentric position. which ironically the scientific community has always proudly claimed to refrain from. thing is that science too fails to do so sometimes, for humans have this tendency of making the universe orbit around them. even some scientists. still today. i know. sigh.

walking down some dark alley i find this

you know what, i am so not calling this number

…there are quite a lot. and in spite of the fact that they are short, you can still say quite a lot with them. but since it can still get quite hard to say any long phrase too, and just for the sake of fun, i thought we might play this game where we only talk with them. what do you think, shall we give it a try? well, read this text back – it's your turn now!

next time they ask me “what's up, dude?” i'll answer “it's a direction”

how come “quite a lot” and “quite a few” mean the same thing?

it's a pretty regular fall day, not cold, not warm, a bit cloudy, but not overcast. just a pretty regular fall day, and just that. in the last few meters of pedaling toward my home i think i should probably go grocery shopping before they close the stores. so i climb the stairs, leave the iñicleta (my bicycle's name), and head downstairs again. as i open the door to leave the building i notice something weird. i see some orange colors everywhere, like if there was some nearby building on fire or something. alarmed, i look around and notice that it's not any building or car, but the sky, which is orange and purple, tinting everything in deep saturated orange. it's pretty gorgeous in fact. amazingly beautiful. extraordinary, such vivid colors, it's completely surreal, i've certainly never seen anything like this in my life. i see lots of people looking up at the sky too. there's a rainbow. no, two rainbows! but i don't mind; at this moment it's not the sky colors nor the double rainbows, but the fact that the streets are full of people looking at the sky. people have left the shops, restaurants and cars and stopped whatever they were doing in order to look up at the sky. it's an amazing phenomenon. not only the sky, the rainbows and the crazy colors of the city in orange and purple fire, but also seeing how everybody is amazed by the spectacle and we are all looking up at the sky. to this fantastic surreal painting that we are part of, the double rainbow is nothing but the perfect signature.

i love when random facts/events connect together. the connection often happens in the form of a flashback.

event #1: i just woke up in a pretty fancy hotel in downtown LA. first thing to do on this sunny morning is to perform some exploration and try to identify a place for breakfast.
so i start walking, and pass by a huge library that has this huge metallic plate with some equations on physics (or, for that matter, on that gray area where physics meets chemistry). of course i pause my walk and have a closer look at it. i cannot tell exactly what they are, i only recognize what looks to me like Heisenberg's uncertainty principle (but i'm a bit unsure, as this is not an area of science where i am exactly comfortable). but it is clear to me that this is about quantum physics, that's all i can tell. intuitively E seems to be some sort of force or potential to me, given how it gets subtracted from itself in the last equation and how it acts as a driving/forced excitation in the third. but who knows. yet, i cannot stop looking at the third equation – it really catches my attention, as its shape feels sort of familiar. i look at it more closely, and i realize it's a Helmholtz equation plus an external force indeed, an equation that in isolation expresses the change of the change of something as being proportional to the thing itself (yes, two changes, that is, the laplacian). these sorts of equations/behaviors are common in electrical engineering, and result in all sorts of wave equations. but of course i don't recognize the quantities in this particular wave equation at all, so i have no idea what the subject of the equation is. only that it must be describing something in quantum physics and that, since after taking changes (derivatives) of it twice it still remains proportional to itself, it must be some sort of harmonic function, something that oscillates. indeed harmonic functions (which are eigenfunctions of the laplacian) result in stuff that oscillates like a pendulum, or like a wave (hence the name of these equations). oscillation means cosine functions (in 1D), complex exponentials (in 2D) or spherical harmonics (in radial 3D). so whatever this equation is describing, it is something that undulates like a wave. of course at this point i cannot go further, and since i'm still hungry and the reason for this walk was to fulfill my stomach's needs, i take a picture of the equations, which is my very first picture in LA, and i continue walking. i'll probably never see these equations again in my life.

picture taken at the entrance to a library in LA

event #2: i'm chatting with my friend, whom i hadn't talked to in the last few weeks. today she has been preparing some notes for a course for undergraduate students of chemistry, and she expresses her concern about how to best introduce Schrödinger's equation without alienating them with an abstract understanding of what it means. of course, i have no idea myself what the heck she's talking about, but science lover as i am, my first reaction is of course to go to Wikipedia and look for “Schrödinger equation”. as soon as i start reading i realize how rotten my memories of physics are. i soon lose any hope of understanding anything in this article, unless i spent a couple of days diving into the subject, which i of course have no time to do. but at least i now know what she's talking about. sort of. very superficially. i'm about to close the page, but i poke one more page-down in the article, and there suddenly i see something that produces an instantaneous flashback. there is an equation there that i have seen before. not that i've been trained in equation matching and detection or anything, but this one equation, yes, i have seen it before.
i quickly go to my phone, and search for the picture i took in LA a few weeks before. and…. match!!! yay, that Helmholtz equation i saw in LA was this famous Schrödinger equation thingy, and from the little bit i understand of this article it seems it has something to do with physics/chemistry and the study of the atom. so that's what that thing in LA was, cool! of course at this point i cannot go further, and since we are talking about other topics already anyway, i close Wikipedia. i'll probably never see these equations again in my life.

event #3: weeks later my friend asks me for advice/help in realtime visualization of atomic structures, cause she believes that may help her fellow students understand what's going on in three dimensional space. i receive the notes she is preparing for the students so i can see the context in which the visualization is needed. i'm reading the notes during my morning commute on the b.a.r.t., and my eyes bump into one of the diagrams she had. “eh, wait a minute!”. i have seen these diagrams before, when working with the essentials of lighting in computer graphics. or are they just some similar diagrams? they look exactly the same to me, hm. i read the preceding paragraphs, and i see two coefficients called m and l related to these diagrams, m running from -l to l. pretty much like indices to Legendre polynomials. oki, this cannot be an accident, these are spherical harmonics. like in computer graphics. like in electrical engineering. i get an instantaneous flashback again. Legendre, Harmonics, Helmholtz, Schrödinger!! electromagnetic wave propagation, visibility encoding for computer graphics, atoms!! i read the full notes, and indeed, it feels like a present given to me after all these years since i last studied the s, p, d and f atom orbitals at school. now, 17 years later, i finally learn what they actually are, or more correctly, why they are the way they are! where they come from, how to solve them, how to describe them! how exciting! but of course at this point i cannot go further, and since i'm heading to work and have finally made it to my station, and i'm running late, i stop reading the notes here. but this time i won't say that i'll probably never see these equations again in my life.

i love the tickles it produced in my spirit to close this circle today. relating things i know today to things i learnt no less than 17 years ago, as if they had been waiting for the connection to happen. learning is fascinating. and when it happens this way, even more. and all thanks to that metallic panel by the doors of that library in Los Angeles that one morning.

there aren't many things more humiliating than being the hurricane reporter. your dignity gets miserably ruined forever, in front of the whole world, while you wear that ridiculous slicker and wellingtons, fight the wind while trying to speak into the mic, and your face gets slapped over and over again by your hoodie. i mean, was it really necessary to send anybody there to report the news? i can imagine the conversation that same morning in the office:

- hey, have you met the new guy yet?
- the intern?
- yep, Mr Look At Me I'm A Professional Journalist. i think we should teach him how things really are over here.
- you know what, they told me there's a hurricane coming tonight in Texas…
it brings old memories of good times through names that i had almost forgotten, names that, like a thread i can pull from, allow me to recover amazingly vivid moments, situations, experiences, places, people, moods, expectations, smells, adventures, ideas, interests, sounds and songs that would otherwise have sunk and gotten lost forever in an ocean of past times. a few of these names belong to people i met 15 years ago and that i'm still in touch with, and many other names belong to people i only met for 15 minutes. sometimes even less. but regardless of that, as i scroll the contact list i take a moment to think about how i met every single one of these people, and in which context. and regardless of that too, sometimes it all comes back automatically in a fraction of a second, sharp and vivid, while other times i have to make an effort, as if for some reason the memory had decided to slip away, perhaps with the complicity of the person the memory is about, or with my own. but in the end all memories come back, one by one; and as i scroll this list down, for every one of these names, i recover a bit of that self i once was. looking at this contact list in the phone really seems like looking back into the past.
Schrödinger picture
From Wikipedia, the free encyclopedia

In physics, the Schrödinger picture (also called the Schrödinger representation[1]) is a formulation of quantum mechanics in which the state vectors evolve in time, but the operators (observables and others) are constant with respect to time.[2][3] This differs from the Heisenberg picture, which keeps the states constant while the observables evolve in time, and from the interaction picture, in which both the states and the observables evolve in time. The Schrödinger and Heisenberg pictures are related as active and passive transformations and have the same measurement statistics.

In the Schrödinger picture, the state of a system evolves with time. The evolution for a closed quantum system is brought about by a unitary operator, the time evolution operator. For time evolution from a state vector |\psi(t_0)\rangle at time t_0 to a state vector |\psi(t)\rangle at time t, the time-evolution operator is commonly written U(t, t_0), and one has

|\psi(t)\rangle = U(t, t_0) |\psi(t_0)\rangle.

In the case where the Hamiltonian of the system does not vary with time, the time-evolution operator has the form

U(t, t_0) = e^{-iH(t-t_0)/\hbar},

where the exponent is evaluated via its Taylor series. The Schrödinger picture is useful when dealing with a time-independent Hamiltonian H; that is, \partial_t H = 0.

In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction ψ(x, t). More abstractly, the state may be represented as a state vector, or ket, | \psi \rangle. This ket is an element of a Hilbert space, a vector space containing all possible states of the system. A quantum-mechanical operator is a function which takes a ket | \psi \rangle and returns some other ket | \psi' \rangle.

The differences between the Schrödinger and Heisenberg pictures of quantum mechanics revolve around how to deal with systems that evolve in time: the time-dependent nature of the system must be carried by some combination of the state vectors and the operators. For example, a quantum harmonic oscillator may be in a state | \psi \rangle for which the expectation value of the momentum, \langle \psi | \hat{p} | \psi \rangle, oscillates sinusoidally in time. One can then ask whether this sinusoidal oscillation should be reflected in the state vector | \psi \rangle, the momentum operator \hat{p}, or both. All three of these choices are valid; the first gives the Schrödinger picture, the second the Heisenberg picture, and the third the interaction picture.

The time evolution operator

The time-evolution operator U(t, t0) is defined as the operator which acts on the ket at time t0 to produce the ket at some other time t:

| \psi(t) \rangle = U(t, t_0) | \psi(t_0) \rangle.

For bras, we instead have \langle \psi(t) | = \langle \psi(t_0) | U^{\dagger}(t,t_0).

The time evolution operator must be unitary. This is because we demand that the norm of the state ket must not change with time. That is,

\langle \psi(t)| \psi(t) \rangle = \langle \psi(t_0)|U^{\dagger}(t,t_0)U(t,t_0)| \psi(t_0) \rangle = \langle \psi(t_0) | \psi(t_0) \rangle.

When t = t0, U is the identity operator, since | \psi(t_0) \rangle = U(t_0,t_0) | \psi(t_0) \rangle. Time evolution from t0 to t may be viewed as a two-step time evolution, first from t0 to an intermediate time t1, and then from t1 to the final time t. Therefore,

U(t,t_0) = U(t,t_1)U(t_1,t_0).
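The unitarity, composition, and exponential-form properties above are easy to check numerically. Below is a minimal sketch in Python (NumPy/SciPy), with ħ set to 1 and an arbitrary 2×2 Hermitian matrix standing in for H; the matrix itself is an assumption chosen only for illustration.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])            # assumed time-independent Hermitian Hamiltonian

def U(t):
    """Time-evolution operator U(t) = exp(-iHt/hbar)."""
    return expm(-1j * H * t / hbar)

t, t1 = 0.7, 0.3
# Unitarity: U^dagger U = 1, so the norm of the state ket is preserved
assert np.allclose(U(t).conj().T @ U(t), np.eye(2))
# Composition: evolving 0 -> t1 -> t equals evolving 0 -> t in one step
assert np.allclose(U(t - t1) @ U(t1), U(t))

# Stationary states: an eigenstate of H only picks up the phase exp(-iEt/hbar)
E, vecs = np.linalg.eigh(H)
psi0 = vecs[:, 0]
assert np.allclose(U(t) @ psi0, np.exp(-1j * E[0] * t / hbar) * psi0)
```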
Differential equation for the time evolution operator

We drop the t0 index in the time evolution operator with the convention that t0 = 0 and write it as U(t). The Schrödinger equation is

i \hbar {\partial \over \partial t} |\psi(t)\rangle = H |\psi(t)\rangle,

where H is the Hamiltonian. Now using the time-evolution operator U to write |\psi(t)\rangle = U(t) |\psi(0)\rangle, we have

i \hbar {\partial \over \partial t} U(t) | \psi (0) \rangle = H U(t)| \psi (0)\rangle.

Since |\psi(0)\rangle is a constant ket (the state ket at t = 0), and since the above equation is true for any constant ket in the Hilbert space, the time evolution operator must obey the equation

i \hbar {\partial \over \partial t} U(t) = H U(t).

If the Hamiltonian is independent of time, the solution to the above equation is[note 1]

U(t) = e^{-iHt / \hbar}.

Since H is an operator, this exponential expression is to be evaluated via its Taylor series:

e^{-iHt / \hbar} = 1 - \frac{iHt}{\hbar} - \frac{1}{2}\left(\frac{Ht}{\hbar}\right)^2 + \cdots .

Therefore,

| \psi(t) \rangle = e^{-iHt / \hbar} | \psi(0) \rangle.

Note that |\psi(0)\rangle is an arbitrary ket. However, if the initial ket is an eigenstate of the Hamiltonian, with eigenvalue E, we get:

| \psi(t) \rangle = e^{-iEt / \hbar} | \psi(0) \rangle.

Thus we see that the eigenstates of the Hamiltonian are stationary states: they only pick up an overall phase factor as they evolve with time.

If the Hamiltonian is dependent on time, but the Hamiltonians at different times commute, then the time evolution operator can be written as

U(t) = \exp\left({-\frac{i}{\hbar} \int_0^t H(t')\, dt'}\right).

If the Hamiltonian is dependent on time, and the Hamiltonians at different times do not commute, then the time evolution operator can be written as

U(t) = \mathrm{T}\exp\left({-\frac{i}{\hbar} \int_0^t H(t')\, dt'}\right),

where T is the time-ordering operator. This expression is sometimes known as the Dyson series, after F. J. Dyson.

The alternative to the Schrödinger picture is to switch to a rotating reference frame, which is itself being rotated by the propagator. Since the undulatory rotation is now being assumed by the reference frame itself, an undisturbed state function appears to be truly static. This is the Heisenberg picture.

1. ^ Here we use the fact that at t = 0, U(t) must reduce to the identity operator.

1. ^ "Schrödinger representation". Encyclopedia of Mathematics. Retrieved 3 September 2013.
2. ^ Parker, C.B. (1994). McGraw Hill Encyclopaedia of Physics (2nd ed.). McGraw Hill. pp. 786, 1261. ISBN 0-07-051400-3.
3. ^ Y. Peleg, R. Pnini, E. Zaarur, E. Hecht (2010). Quantum Mechanics. Schaum's Outline Series (2nd ed.). McGraw Hill. p. 70. ISBN 978-0-07-162358-2.

Further reading
• Principles of Quantum Mechanics by R. Shankar, Plenum Press.
• Modern Quantum Mechanics by J.J. Sakurai.
We aim to elucidate experiments in nanoscale optics and plasmon-enhanced molecular spectroscopy using first-principles theory and computation.  Well-defined descriptions of such phenomena, which involve the simultaneous interaction of molecules, nanoscale plasmon-supporting metals, and the electromagnetic field, are difficult to formulate because of the widely varying length scales over which the relevant chemical and physical processes occur.  The cartoon below shows some typical molecular (a few Å), plasmonic (tens to a few hundreds of nm), and optical/near-IR field (hundreds to thousands of nm) sizes. Understanding the big picture - how these components interact as a coherent whole - with predictive and rigorous theory will help guide experimentalists who design high-efficiency solar cell technology, who study molecular sensors capable of detecting the presence of just a few target molecules, and who measure a variety of plasmon-enhanced linear and nonlinear molecular spectroscopies in either the frequency or the time domain.

We accomplish this task by carefully blending together molecular-electron propagator methods as well as explicitly time-dependent descriptions of quantum molecular dynamics coupled to the continuum electrodynamics of the field and metal.  We also employ purely classical electromagnetic theory to describe nanoscale metal structures and their interaction with light.  Development of the software necessary to explore each of these theoretical concepts is an essential aspect of our research, as we work in areas where no black-box applications exist.  Specific projects include:

Nanoscale optics: We are interested in the electromagnetic scattering properties of small metal particles, either individually, in clusters, or organized into periodic arrays.  By small, we mean on the order of a few 10s to a few 100s of nanometers in size.  Such structures display unique behavior, particularly in the visible to near-IR, where metallic conduction electrons can be set into coherent oscillatory motion by perturbing radiation.  This coupling of radiation to matter results in the formation of localized surface-plasmon polaritons - "plasmons" for short - which can act as nanoscopic antennas that relay both exciting and (in)elastically scattered radiation fields to/from nearby molecules.  So strong is this effect that the surface-enhanced Raman scattering of light from a single molecule can be detected in the laboratory.  Nanoparticles also have interesting behavior in and of themselves, which can be addressed with purely continuum electromagnetic theory.  Below is an example of a high-resolution TEM image (right) and a computational model (left) of the electromagnetic-field intensity scattered from a nanoscale metal dimer. In addition to extinction, the shape/volume and magnitude of regions of high field strength can be studied as a function of excitation energy, as exemplified by the region located within the dimer junction (computed at an excitation wavelength of 532 nm).  In order to describe how these systems interact with radiation we employ a number of theoretical and computational techniques to solve the electromagnetic-scattering problem, i.e., to solve Maxwell's equations.  Examples include finite-difference time-domain (FDTD) and finite-element methods (FEM), as well as the discrete-dipole approximation (DDA) and multipole methods like vector spherical harmonics.
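Even without a full FDTD or DDA solver like those named above, the basic physics of the plasmon resonance can be sketched in the quasi-static (dipole) limit. The snippet below is a minimal illustration, not production code: it uses a generic Drude permittivity with assumed plasma-frequency, damping, and radius values, and the textbook dipole polarizability of a small sphere in vacuum, for which the resonance sits near Re(ε) = -2.

```python
import numpy as np

# Quasi-static extinction of a small metal sphere in vacuum.
# All parameters are assumed, generic values for illustration only.
hbar_c = 197.3                    # eV*nm
wp, gamma = 9.0, 0.1              # Drude plasma frequency and damping (eV), assumed
a = 20.0                          # sphere radius (nm), well below the wavelength

E = np.linspace(1.0, 6.0, 500)                    # photon energy (eV)
eps = 1.0 - wp**2 / (E * (E + 1j * gamma))        # Drude permittivity

alpha = 4.0 * np.pi * a**3 * (eps - 1.0) / (eps + 2.0)  # dipole polarizability (nm^3)
k = E / hbar_c                                           # wavenumber (1/nm)
sigma_ext = k * np.imag(alpha)                           # extinction cross-section (nm^2)

# For weak damping the peak falls near wp/sqrt(3): the Froehlich condition Re(eps) = -2
print("resonance near %.2f eV" % E[np.argmax(sigma_ext)])
```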
Quantum many-body theory in molecular plasmonics: But how do we account for the structure and dynamics of nearby molecules that feel the polarization effects of the exciting field as well as the metal?  Certainly Maxwell's equations are of no help here.  Rather, we must solve the many-body Schrödinger equation for the molecular electronic and nuclear degrees of freedom coupled to the metal and perturbed by the external field.  Many-body perturbation-theoretic and Green's-function methods greatly facilitate our understanding of these complex interactions.  They allow us to compute, among other things, the response and scattering properties of the combined and coupled molecule-metal-field system.  Some dominant contributions to the linear response of the interacting molecular-electronic density are displayed in the diagrams below:

We have numerically implemented the equations that underlie each diagram within local versions of Q-Chem and the DDA.  Much work is now in progress to apply these methods to current experiments as well as to motivate new directions for future experimental inquiry.  There is also great interest in extending and generalizing these methods to explore uncharted paths leading towards enhanced exciton formation and charge separation in dye-sensitized solar cells, as well as high-precision molecular sensing based on the detailed interaction between electronically resonant molecules and the localized surface-plasmon resonances of nanoscale metal particles and surfaces.
Theories of Knowledge and Quantum Mechanics
Michel Bitbol
CREA/CNRS, 1, rue Descartes, 75005 Paris, FRANCE
Published in: SATS (Nordic Journal of Philosophy), 2, 37-61, 2001
Full text in Word/RTF format on the Pittsburgh Archive in the Philosophy of Science

Quantum mechanics has imposed strain on traditional (dualist and representationalist) epistemological conceptions. An alternative was offered by Bohr and Heisenberg, according to whom natural science does not describe nature, but rather the interplay between nature and ourselves. But this was only a suggestion. In this paper, a systematic development of the Bohr-Heisenberg conception is outlined, by way of a comparison with the modern self-organizational theories of cognition. It is shown that a consistent non-representationalist (and/or relational) reading of quantum mechanics can thus be reached.

Naturalizing epistemology means considering the acquisition of knowledge as a fraction of the natural processes which are supposedly described by our best scientific theories. If this is granted, there appears to be a hierarchical and one-way dependence between the scientific theories (which are taken to be our highest and most basic descriptive achievement) and the analysis of cognitive processes, construed as a mere local application of these theories. However, things are not so simple. The conception of knowledge one has reached by this process may well have a feedback effect on the meaning that is ascribed to the prevalent scientific theories. And conversely, new scientific theories may undermine those very epistemological presuppositions which had to be used for their formulation, and which were arrived at on the basis of previous scientific theories. The purpose of this paper is to display this complex, non-hierarchical, two-way set of relations between theories of knowledge and scientific theories, especially physical theories. A central theme is the deep-lying tension quantum mechanics has imposed on traditional (dualist and representationalist) epistemological conceptions.

1-Multi-leveled epistemological circles

An "epistemological circle" is a two-way relation between (i) a scientific theory and (ii) the way this theory pictures the processes by which it was itself formulated and corroborated. This concept of epistemological circle undoubtedly has some kinship with the concept of hermeneutic circle. However, there are also some differences. The most important element of a hermeneutic circle in its original acceptation is the set of preconceptions of the interpreter of a text; the possible discrepancies between this interpretation and parts of the text may then jeopardize these preconceptions, and lead one to modify them. This process is performed again and again until a satisfactory reading is reached. In Heidegger's wider acceptation, the starting point of a hermeneutic circle is the set of spontaneous anticipations which underlie everyday life. These anticipations are modified whenever a discrepancy between them and the resulting events of life occurs. But epistemological circles involve a systematic network of theoretical predictions instead. The mutual constraints between preconceptions and interpreted "facts" are thus much more stringent in epistemological circles than in the two former varieties of hermeneutic circles. In our culture, the epistemological circles of classical physics and classical science are still dominant. So, let me describe them from the outset.
One may distinguish two epistemological circles in the paradigm of classical physics. The first circle relates: (i) a description of the two main entities of classical physics, namely material bodies and fields, and (ii) the description of the experimental apparatuses under the presupposition that these apparatuses are made of material bodies and fields obeying the laws of classical mechanics and electrodynamics. This means that testing the theories of classical physics depends on a pre-interpretation of the measured values, obtained by using these (and/or other) theories for the description of the measuring process. Conversely, the validity of this description of the measuring process depends on the validity of the theories which are used in it. I call this first epistemological circle the "measurement circle".

The second epistemological circle, which classical physics shares with classical science as a whole, is also made of two elements. It relates (i) the picture that the theories of classical science provide of their objects, and (ii) a meta-picture of the relationship which exists between these objects and the subjects of cognizance. The self-consistency of this circle is achieved if the validity of the picture is compatible with the meta-picture of the cognitive process that ended up in this description, and conversely if the meta-picture is isomorphic to the picture. For instance, the idea that a theory describes faithfully the motion of a set of interacting objects is made plausible by the meta-picture of a set of objects seen by passive subjects of cognizance. Indeed, if the subjects are purely passive, or if their activity has no bearing on the constitution of objects, their contribution to the epistemic contents can easily be subtracted, and the intrinsic properties of objects can easily be reached. In other terms, the conception of subjects as passive receptors makes the sought deconvolution of phenomena into a subjective and an objective side almost trivial. Conversely, the classical meta-picture of the interaction between subject and object is isomorphic to the interaction between two material bodies whose boundaries define internal and external domains. I call this second epistemological circle the "subject-object circle".

The most common paradigm of cognitive science describes cognition as a succession of "inputs" from an "external" pre-structured world, of "internal" information processing (usually computational), and of performative or symbolic "outputs". This input-output paradigm is immediately compatible with classical science as a whole. To begin with, the input-output paradigm of cognitive science perfectly fits with the conception of the universe as a set of interacting pre-existing material bodies, since in it the cognizant system is only supposed to faithfully pick up the information made available by these bodies, and to process it in such a way that it reaches a high degree of (symbolic or pragmatic) efficiency. Moreover, the input-output paradigm of cognitive science is also remarkably isomorphic to this conception of the universe, insofar as the separation between the objects and the cognizant system appears as a special case of the spatial separation between the material objects of classical science. This clearly promotes the project of a complete naturalization of epistemology in the same descriptive terms as classical science, namely in such a way that the cognizant system is construed as a material object of this science among many others.
Those two epistemological circles are not bound to be "vicious" or "tautological". Indeed, they are not completely immune to criticism. But the conditions which may yield their revision are quite peculiar, and this is enough to explain their lasting prevalence after one century of growing strain. On the one hand, no epistemological circle can be challenged by extrinsic circumstances. Nothing except an emergent lack of self-consistency may prompt one to question it. But, of course, this type of deficiency may trigger many strategies other than radical change. Other available strategies include: (1) compensation of the inconsistencies by ad hoc hypotheses; (2) explicit hope that future research will show that there are no real inconsistencies; (3) renunciation of the unity of knowledge, i.e. definition of cognitive sub-domains wherein consistency is locally recovered. On the other hand, one must realize that the standard subject-object circle has its roots in the ontological preconceptions classical science has inherited from ordinary language and everyday life. Imposing thorough revisions onto the standard subject-object circle would thus generate a conflict between those theoretical contents which are embedded within the new circle and most ways of speaking and behaving in the Umwelt of mankind. If a new epistemological circle were to prevail, this would only be possible provided the old one still underlies it as: (i) its basic presupposition in ordinary speech and behaviour, and (ii) its qualitative or quantitative limiting case within the most familiar areas of knowledge. This remark obviously generalizes Bohr's conception of the relationship between the measurement circles of classical and quantum mechanics. According to him, the predictive formalism of quantum theory could not even work if classical theories were not presupposed for the description of the measurement apparatuses which allow one to test it, and also if one did not recover classical laws at the scale where the value of the Planck constant becomes negligible.

An important consequence of these constraints is that whenever a new epistemological circle is proposed, its very formulation is de facto dependent on the traditional subject-object circle. Let us consider, for instance, the way new paradigms of cognition, involving emergence and self-organization, have been formulated in the past. In these paradigms, the traditional relation between an autonomous object and a passive subject facing each other is thoroughly criticized. But when the elements of the self-organizing cycle themselves are described, they are dealt with exactly as if they were pre-existing things (or states) in front of a passive subject. In this case, the meta-theory of knowledge is not consistent with the alternative first-order theory of knowledge which is advocated. F. Varela, a prominent supporter of non-standard theories of cognition, is perfectly aware of this problem. His answer to the criticism essentially amounts to downplaying the descriptive status of his own theory of cognition. One should realize, according to him, that concepts such as emergence, self-organization, or enaction are not pieces of a description aiming at some absolute truth, but rather stages of a dialectical process purporting to free us from dualist or foundationalist schemes. This self-referential feature of non-standard theories of cognition could be analyzed for its own sake. But here, I wish to concentrate on the special form it takes in quantum physics.
To begin with, what kind of relations are there between the classical and the quantum measurement circles?

2-The measurement circle of quantum mechanics

According to David Bohm, the problem which arises from the interpretation of quantum mechanics is twofold. The first aspect of the problem is that there is no natural ontology of quantum mechanics, in the sense of a set of objects and properties "(...) taken to be essentially independent of the human observer". And the second aspect of the problem is that, due to this lack of a natural ontology, standard quantum mechanics seems unable to give rise to a proper, self-consistent epistemological circle. One should remember that in classical physics "The epistemology was almost self-evident because the observing apparatus was supposed to obey the same objective laws as the observed system, so that the measurement process could be understood as a special case of the general laws applying to the entire universe". Bohm's attempt at providing an "ontological" interpretation of quantum mechanics is thus overtly aimed at recovering a satisfactory epistemological circle; a circle of measuring and measured as remarkably closed as that of classical physics.

But what exactly are the obstacles which prevent the constitution of an epistemological circle in quantum mechanics? Did von Neumann not exhibit such a circle by way of a quantum description of the measurement set-up (his well-known "quantum theory of measurement")? The difficulty is still there, however, and its name is the measurement problem. As we shall see, this well-known problem is really intractable in its usual (quasi-descriptive, not to say ontological) form, but it becomes much easier to tackle in a purely predictive version.

To begin with, we must recall the usual form of the measurement problem. Let us first ascribe a state vector to the experimental set-up, and let us suppose that this state vector is ruled by the same law as the state vector of the measured system. If this is granted, a quantum variety of the measurement circle is created. Let us suppose next that the state vector of the system is not identical with an eigenstate of the observable measured by this experimental set-up, but that it is a linear superposition of such eigenstates. During the measurement process, the state vector of the super-system (micro-system + apparatus) develops according to a Schrödinger equation whose Hamiltonian includes an interaction term between the two components of the super-system. When this process is over, it is usually impossible to factorize out the state vector of the apparatus from the global state vector of the super-system (micro-system + apparatus). It is usually said that the respective states of the micro-system and the apparatus are entangled. At this point, the global state vector of the super-system consists in a linear superposition having exactly the same structure as the initial superposition in the state of the micro-system. If one takes seriously the popular idea that a state vector somehow captures the "state" of the object to which it is associated, it must be accepted that the quantum theory of measurement represents neither the micro-system nor the apparatus as being in a sharply defined final pure state. Rather, it represents the state of both components of the super-system as being "(...) mixed or smeared out".
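The entanglement step described above is easy to exhibit in the smallest possible model. The sketch below is a toy illustration under assumed parameters (a qubit measured by a two-state "pointer", with a CNOT-like interaction standing in for the Schrödinger evolution of the super-system); the Schmidt rank computed at the end shows that the final state vector cannot be factorized.

```python
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)   # assumed superposition amplitudes

# Pre-measurement state: (alpha|0> + beta|1>) (x) |ready>
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi_in = np.kron(alpha * ket0 + beta * ket1, ket0)

# Ideal measurement interaction: |0>|ready> -> |0>|A0>, |1>|ready> -> |1>|A1>
# (a CNOT-like unitary acting on system (x) pointer)
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
psi_out = U @ psi_in                           # alpha|0>|A0> + beta|1>|A1>

# Schmidt rank from the SVD of the 2x2 coefficient matrix: rank 1 would mean the
# state factorizes; rank 2 means system and apparatus are entangled.
schmidt = np.linalg.svd(psi_out.reshape(2, 2), compute_uv=False)
print("Schmidt coefficients:", np.round(schmidt, 3))   # two nonzero -> entangled
```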
The quantum theory of measurement thus seems to contradict the elementary experience of any physicist in his laboratory, according to whom the apparatus is in a well-defined state after the experiment has taken place (but of course here, as B. van Fraassen cogently pointed out, the word "state" is playing two distinct roles). Even more strikingly, taken at face value, the quantum theory of measurement seems to contradict one of the most basic conditions for testing any physical theory, i.e. the comparison between what this theory says and a set of well-defined measurement outcomes. But if this is true, the circle of the quantum theory of measurement does not fulfill the requirements for a proper epistemological circle. Indeed, in order to formulate a proper epistemological circle of the measurement variety, it is not enough to connect a physical theory with a theoretical account of the experimental process derived from this physical theory. It is also necessary that this account be compatible with the minimal epistemic conditions which enable one to test physical theories in general. But the quantum theory of measurement does not fulfill this requirement, as long as it does not include any structural equivalent of the elementary requirements of uniqueness and strict determination of experimental outcomes.

The first reaction to this obvious difficulty consisted in enforcing the "projection postulate", according to which the state vector of the micro-system (and/or the state vector of the super-system) instantaneously collapses at some point of the measurement process. This collapse transforms the state vector of the micro-system into one of the eigenstates of the relevant observable. The problem is that this reaction is tantamount to renouncing any attempt to close the epistemological measurement circle of quantum mechanics. Indeed, imposing a sudden collapse on state vectors whenever a measurement occurs means that the measurement process is somehow construed as an exception in the physical universe. All the processes of the physical universe are supposed to be ruled by the Schrödinger equation, but not the measurement process.

This situation gave rise to a large variety of thoughts. N. Bohr merely noted that the very attempt at closing the quantum epistemological circle is likely to be flawed from the beginning, since the experimental set-up and the outcomes must be described in classical terms (in order to enable unambiguous communication). D. Bohm (see the beginning of this section) attempted to recover something like a classical epistemological circle by means of his hidden variable theory. Another, more recent, reaction was that of Ghirardi, Rimini, and Weber, with their idea of inserting a "spontaneous collapse" term in the Schrödinger equation. This idea is interesting for the present discussion, insofar as it amounts to closing the epistemological circle of quantum mechanics by modifying the theory of objects in order to fit the partial theoretical description of the measurement process, rather than the other way round. But it is also fraught with difficulties. I shall thus insist on a radically different strategy of dealing with the measurement problem. This strategy consists in using a deflationist conception of quantum mechanics, which concentrates on the predictive contents of the state vector, rather than on its putative ability to describe the "state" of various objects.
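For concreteness, the "projection postulate" invoked above can be written out in a few lines. This is a toy sketch with an assumed 2×2 observable, not a claim about any specific experiment: each outcome's probability follows the Born rule, and the post-measurement state is the renormalized projection onto the corresponding eigenstate.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])                  # assumed observable (Pauli-x-like)
eigvals, eigvecs = np.linalg.eigh(A)

psi = np.array([1.0, 0.0])                  # pre-measurement state vector

for val, vec in zip(eigvals, eigvecs.T):
    P = np.outer(vec, vec.conj())           # projector onto this eigenstate
    prob = np.real(psi.conj() @ P @ psi)    # Born probability of the outcome
    collapsed = (P @ psi) / np.sqrt(prob)   # post-measurement (collapsed) state
    print(f"outcome {val:+.0f}: probability {prob:.2f}, state {np.round(collapsed, 3)}")
```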
Defining a purely predictive interpretation of the symbols of quantum mechanics is not very easy, because in the past this reading has been inextricably mixed up with descriptive elements. A literal interpretation of quantum mechanics, according to which this theory provides us with probabilities for experimental outcomes after a given preparation, has usually been mixed up with typically descriptive concepts such as those of "micro-systems" or "states". Few authors have seriously developed all the (philosophical) consequences of a purely predictive construal of quantum mechanics. Yet, holding consistently to such a predictive reading throughout could well result in an entirely recast formulation of the measurement problem. This could also give some hints towards a solution of this problem (not to mention a serious adumbration of a true dissolution of it).

According to the purely predictive interpretation, the quantum theory of measurement institutes a very peculiar kind of epistemological circle: a circle of probability assessments, rather than a circle of descriptions. The probability assessments themselves are about two types of measurements: the first-level measurement bearing on the micro-system, and a meta-measurement bearing on the experimental set-up with which the first-level measurement is performed. Within this framework, the measurement problem assumes a new form. In the same way as the descriptive circle of measuring and measured, the probabilistic circle has a problem of closure. Closing the descriptive circle required that the description of the measurement process be a special case of the general description of physical processes. Closing the probabilistic circle requires that the probability theory which applies to the meta-measurement outcomes be of the same type as the probability theory which applies to the first-level measurement outcomes.

The latter condition is not trivial, however. At the macroscopic level of the meta-measurement process, the theory which has to be used is Kolmogorov's classical theory of probabilities, whose probability assessments can be satisfactorily interpreted as an expression of our ignorance of pre-existing phenomena. But predicting the results of the first-level measurement requires a quantum theory of probabilities which involves interference terms, isomorphic to those of a wave process. This presence of interference terms does not allow an ignorance interpretation of the probabilistic assessment. (To qualify this assertion: the ignorance interpretation is precluded only at the immediate level of the phenomena; but it can still be carried on at the level of hypothetical "hidden" processes such as Bohm's.) If we put Bohm's theory aside, the question is then as follows. Can one close the circle whose elements are (i) the probabilities of the outcomes of a first-level measurement and (ii) the probabilities of the outcomes of a meta-measurement bearing on the very process of first-level measurement? In order to perform this kind of closure, one would have to demonstrate that the classical theory of probabilities, which operates at the macroscopic scale of the experimental set-up, is a limiting case of the quantum theory of probabilities, which is supposed to operate at any scale. But decoherence theories are precisely aimed at providing such a demonstration.
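The contrast between the two probability theories can be shown numerically in one step. In the sketch below (arbitrary assumed amplitudes, thought of as two paths to one detector position), the quantum probability density is the squared modulus of the summed amplitudes, and the cross term is exactly what a Kolmogorovian ignorance reading of "path 1 or path 2" cannot accommodate.

```python
import numpy as np

# Two assumed complex amplitudes leading to the same detection event
a1 = (1 / np.sqrt(2)) * np.exp(1j * 0.0)
a2 = (1 / np.sqrt(2)) * np.exp(1j * np.pi / 3)

p1, p2 = abs(a1) ** 2, abs(a2) ** 2
p_classical = p1 + p2                           # Kolmogorov additivity (ignorance reading)
p_quantum = abs(a1 + a2) ** 2                   # quantum rule: add amplitudes first
interference = 2 * np.real(np.conj(a1) * a2)    # the wave-like cross term

print(p_quantum)                     # 1.5 (a probability *density*, not a probability)
print(p_classical + interference)    # identical: quantum value = classical value + interference
```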
Decoherence theories are aimed at showing that when applied to complex processes involving a micro-system, an experimental device, and a vast environment, the quantum probabilities converge (to a good approximation) towards classical probabilities. Indeed, in this case, the interference terms tend to vanish, and the Kolmogorovian additivity rule for disjunctions of events can accordingly be enforced. The only thing which usually hides this purely probabilistic status of the decoherence theories is the dominant descriptive interpretation of the state vector and the density matrix.

An important defect of this method for closing the epistemological circle of quantum physics is that, in order to derive the probabilistic structures which prevail at the meso-macroscopic scale of the human experimenters from the quantum probabilistic structures, the specialists of decoherence theories could not avoid making anthropocentric hypotheses. W. H. Zurek, for instance, assumed that the measurement chain consists of three elements: the micro-object, the apparatus, and the environment (I have also used this assumption verbally for the sake of easy writing). But, admittedly, this division only holds at the emergent level of the macroscopic manifestations; it is by no means obvious a priori in the domain of validity of quantum mechanics. It is thus crypto-anthropocentric. Another instance of an anthropocentric assumption was used by M. Gell-Mann in his theory of decoherent histories. Gell-Mann assumes a coarse-graining of the consistent histories, and he justifies this coarse-graining by the macroscopic scale of a population of anthropomorphic "Information Gathering and Utilizing Systems" (IGUS).

This level of petitio principii becomes a real problem only if one hopes that decoherence theories are strong enough to prove that a (quasi-)classical probability assignment is the unique form a quantum probability assignment can assume at macroscopic scale. But if what one expects from decoherence theories is only a proof that classical probability is one among the many possible emergent forms of probability assignments at the macroscopic scale, then things are quite different. In particular, if one only needs a proof that the classical theory of probabilities can emerge from the quantum theory of probabilities under some restrictive conditions which encapsulate the basic constitutive presuppositions of human knowledge, then decoherence theories provide a perfectly satisfactory answer. True, the closure of the measurement circle is not the unique and unavoidable outcome of the mode of functioning of quantum mechanics, but decoherence theories prove that it is a possible byproduct of its formalism. Moreover, provided these basic presuppositions are assumed at the level of the meta-apparatus (the apparatus used to monitor the processes within the measurement device), a rapid suppression of the coherence terms has been observed experimentally. The urge for univocity cannot, therefore, be satisfied within the field of quantum physics. But it can be satisfied by appealing to some additional non-quantum considerations. This ampliative strategy was adopted by Zurek and Gell-Mann, when they used Darwinian arguments in their reflection on decoherence theories.
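Read purely predictively, the suppression of coherence amounts to the off-diagonal terms of a reduced density matrix being damped away, after which the diagonal is an ordinary classical probability assignment. The sketch below is a caricature of that convergence, with an assumed exponential damping factor standing in for any real environment model.

```python
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)            # assumed superposition amplitudes
rho0 = np.outer([alpha, beta], np.conj([alpha, beta]))   # pure-state density matrix

tau = 1.0                                                # assumed decoherence time

def rho(t):
    """Reduced density matrix with exponentially damped coherences."""
    r = rho0.copy()
    r[0, 1] *= np.exp(-t / tau)   # off-diagonal (interference) terms decay...
    r[1, 0] *= np.exp(-t / tau)
    return r                      # ...while the diagonal probabilities stay fixed

for t in (0.0, 1.0, 10.0):
    print(f"t = {t:>4}:\n{np.round(rho(t), 4)}")
# For t >> tau the matrix approaches diag(|alpha|^2, |beta|^2): a Kolmogorovian
# mixture over the two outcomes, with no interference terms left.
```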
Thus, according to Gell-Mann, the aim of somebody who wants to solve the measurement problem should not be to prove that a classical world necessarily emerges from a quantum micro-level; it should only be to show, within the framework of quantum physics, that Knowing Systems (IGUSes) cannot be stable, i.e. survive, if their actions and epistemic structure do not develop at a quasi-classical level. Later on, S. Saunders showed that decoherence can be derived from those conditions which make possible the life of an autonomous metabolic system. To summarize: (1) decoherence theories are not able to prove that the emergence of a classical world is a necessary and unique consequence of quantum physics at the macroscopic scale; (2) decoherence theories provide a tool for dealing quantum-mechanically with the process of co-emergence of a Knowing System and its macroscopic quasi-classical Umwelt. If this is true, that means that in order to close the measurement circle of quantum physics, one must rely on a project of closure of the subject-object epistemological circle (the general circle of the knowing and the known). Let us then examine this larger epistemological circle.

3-The subject-object circle challenged: a parallel between quantum mechanics and cognitive science

Challenging the subject-object circle of classical science, namely the dualist picture of an encounter between the knowing subject (the spectator) and the nature he purports to know (the spectacle), was considered indispensable by some of the most prominent creators of quantum mechanics. Bohr insisted that "(...) the new situation in physics has so forcibly reminded us of the old truth that we are both onlookers and actors in the great drama of existence". As for Heisenberg, he suggested repeatedly that quantum mechanics does not provide us with a description of the atomic processes themselves: it rather sketches jointly "(...) a tendency of events and our knowledge of events". More generally, he thought that "Natural science does not simply describe and explain nature; it is part of the interplay between nature and ourselves; it describes nature as exposed to our method of questioning". In other terms, according to Bohr and Heisenberg, quantum mechanics is the paradigm of a theory which does not describe intrinsic properties, but rather anticipates probabilistically the outcome of possible experimental relations. This is enough to dissolve the measurement problem, or at least to change its formulation radically, as I explained in the previous section by means of a purely predictive reading of decoherence theories. Indeed, the state vector here does not represent anything like the state of something, but only a joint tendency manifesting itself in a potential future experiment. Superpositions are no longer surprising within this framework, provided it is shown that at the macroscopic scale these superpositions can be approximately reduced to a list of classical probabilities.

This sort of epistemological interpretation of quantum mechanics was not really assimilated by the physics community. Even though they accepted it formally, physicists felt uneasy about it. For decades, they just mixed up some elements of "positivistic" conceptions of quantum mechanics with bits and pieces of descriptive language. Then, a strong tendency towards recovering a "realist" interpretation of quantum theories arose, and the Bohr-Heisenberg reading became marginalized.
Quite apart from this realist prejudice, however, a reason for the progressive oblivion of the Bohr-Heisenberg views may be that they were not given enough systematic development by their authors. But nowadays, such a systematic development is made much easier by the recent development of non-representationalist theories of cognition. The similarities between the type of non-dualist theory of knowledge Bohr and Heisenberg adumbrated and these non-representationalist theories of cognition are striking. To see this analogy, it is enough to compare the former quotations from Heisenberg with the following statement of a modern cognitive scientist: "The changed structure (of neural networks) does not represent the external world, but it represents - if one wants to stick to the term - the interactive process: input-organism's or environment-organism's interaction. (...) it means something to its owner although never in an absolute sense, but only in relation to the organism's actions in its environment". This is clearly an incentive to draw a systematic parallel between the Bohr-Heisenberg theory of knowledge and the non-representationalist theory of cognition. The parallel will concern three distinct points.

A-The first similarity bears on the common motivation of both attempts at recasting epistemology. This common motivation is to free oneself from previous ontological patterns (borrowed from the "natural attitude", or from classical physics) when the status of knowledge is at stake. Let us begin with cognitive science. The rise of the self-organizational paradigm, after a long period of predominance of the representationalist, symbolic, and computational trend of cognitivism, can be explained by the partial failure of the initial program of Artificial Intelligence. For instance, the specialists of AI met many subtle obstacles in their project of implementing the re-identification of material bodies by their shapes. This led some of them to think that "(...) the world is an unruly place - much messier than reigning ontological and scientific myths would lead one to suspect". It was thus necessary to avoid imposing in advance our too-civilized formal concepts on the machines. For these formal concepts arose from the cognitive evolution of mankind, and nothing can assure us that they are appropriate to any type of machine as well. Designers of machine perception systems must therefore allow massively adaptive processes. If anything, they must implement "(...) notions of objects that are fluid, dynamic, negotiated, ambiguous, and context-dependent (...), rather than the black-and-white models inherited from logic and model theory". They must not project onto their machines the ossified system of human ontological presuppositions, assuming wrongly that they correspond to something that was once discovered by men, and that has to be either implemented on or rediscovered by those machines. If a machine could orient itself in the world, it would be in its own world; not in the world of preconceived ideas of logicians and model theorists. To summarize, the mistake of classical cognitivism consists in its having judged in advance the relation between a machine and its environment, by imposing on it the byproduct of the former dynamical relation between men and their Umwelt.

One may explain similarly the renewed interest of the creators of quantum mechanics in the relation between the instruments of exploration and the explored microscopic domain.
Their thoroughly relational approach was aimed at preventing quantum physics from getting stuck in the pre-existing ontological framework which classical physics shares with the "natural attitude". According to them, the formalism of quantum mechanics, and the set of predictive methods that are derived from it, express the emergent order of the new relations made possible by recent advances in experimental physics. It could by no means be adapted to a framework of formal concepts which express the emergent order of a much older type of cognitive relation: the relation between men and their mesoscopic environment.

B-As I have mentioned before, the central topic of the usual representationalist paradigm of cognitive science is a system of "information processing" construed as a locus of articulation of (i) inputs from a pre-structured external world, (ii) processing of these inputs by way of a representation of the invariant features of this world, and (iii) performative or symbolic outputs. But this is obviously not the case in the non-representationalist, self-organizational paradigm. Here, the fundamental entities are operationally closed units. The only invariant of these units is their own dynamical organization. And their "cognitive domain" is not a represented fraction of a pre-existing world, but a fraction of the environment which has co-evolved with them and in which their organization may persist despite some disturbances. Using J. Piaget's vocabulary, the process by which an operationally closed unit protects itself by incorporating the most common disturbances into its own dynamical organization is called assimilation. As for the process by which this unit transforms itself in order to be able to assimilate further disturbances, it is called accommodation. The appropriate behaviour of a self-organized unit then does not prove that it possesses a faithful picture of the world, but only that its internal working is viable in relation to environmental disturbances. Thus, the categories which underlie its behaviour are not the internalized copy of the intrinsic partition of a pre-ordered external world. They are the stabilized by-products of the history of a coupling between the unit and an environment which may well be chaotic. Each single predicate corresponds to an "eigenbehaviour", or to an attractor of the dynamics of the self-organized unit.

The similarities between this view of cognition and the Bohr-Heisenberg relational conception of quantum mechanics are made almost obvious by an overt mathematical analogy. F. Varela explicitly mentioned that the word "eigenbehaviour" is in perfect agreement with the use of terms like "eigenvalue" and "eigenfunction" to refer to the fixed points of linear operators (such as those of quantum mechanics). But the converse is also true. Bohr and Heisenberg advocated a thoroughly interactional view of the quantum formalism of eigenvalues and eigenvectors of linear operators. Saying that the eigenvalues and eigenvectors of a quantum observable express the eigenbehaviour of the apparatus in its coupling with the micro-domain, rather than the intrinsic properties of micro-objects, would be very close to the spirit of their interpretation. At any rate this is essentially the idea Schrödinger was trying to convey in the 1950s. According to him, the quantum discontinuities and the corresponding probabilistic account do not reveal some intrinsic jump-like feature of atomic objects; rather, they express the functioning of "(...) contraptions that by their very nature cannot but give a discrete, discontinuous, response (...)".
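The eigenbehaviour/eigenvector analogy drawn above has a simple numerical face. In the toy sketch below (the matrix is an arbitrary assumption, standing in for any recurrent coupling), repeatedly applying a fixed linear map to an arbitrary starting vector drives it towards the dominant eigenvector: a direction that is invariant, a "fixed point" up to scale, of the dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])                 # assumed symmetric "dynamics"

v = rng.normal(size=2)                     # arbitrary starting behaviour
for _ in range(50):                        # iterate the coupling
    v = M @ v
    v /= np.linalg.norm(v)                 # keep only the direction

w, vecs = np.linalg.eigh(M)
dominant = vecs[:, np.argmax(w)]           # eigenvector of the largest eigenvalue
print(np.allclose(np.abs(v @ dominant), 1.0))   # True: the iteration settled there
```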
C-At this point we must investigate the content of the word "knowledge". Can it have the same meaning in a representational and in a non-representational theory of knowledge? A preliminary point to examine is the transition from a mere "eigenbehaviour" to something which can indeed be called knowledge. This transition will be compared to the corresponding transition from a relational conception of experiments in microphysics to the formalism of quantum mechanics.

According to J. Piaget, the decisive step from organized behaviour to knowledge consists in freeing oneself as much as possible from the irreversible aspects of any concrete operation. This freeing is achieved by means of gestural schemes tending towards perfect reciprocity of the caused transformations. A few elementary examples of these schemes of reciprocity are: moving an object and then putting it back in its original place; rotating an object until its initial profile is recovered; pouring a liquid into various containers, and then pouring it back into its original container (thus seeing that the level has not changed); etc. These schemes have the structure of performative groups of transformations. They enable anticipation of what will occur, for they rely on methods for reproducing situations and for carving out domains of invariance; they extract elements of stability and iterativity from the Heraclitean flux. At the following stage of development in childhood, the gestural schemes of reciprocity are made systematic by being embedded within a logico-linguistic framework which is socially shared. The formerly extracted invariants are then organized as a set of objects referred to and of ascribed predicates. They are presupposed in speech, and used to suggest predictions. Finally, at the very end of the genetic process, new, non-linguistic, symbolic structures are elaborated. These structures convert the practical constraints into deductive constraints, and they also convert the performative groups into abstract groups of transformations. They are mathematical structures, more universal than the former logico-linguistic structures because they are not exclusively committed to the subject-predicate pattern, and therefore are more universally efficient as instruments of prediction. J. Piaget thus considers mathematics as a "general coordination of actions"; or, more precisely, as a general symbolic coordination of those actions which are embedded within schemes of reciprocity-reversibility.

This conception of mathematics suggests a plausible explanation of the special constitutive status of mathematics in physics. After all, the basic task of physics is to control sequences of phenomena by means of reversible and organized experimental actions. It is not surprising that mathematics, which coordinates systems of possible reversible actions by means of a deductive symbolism, is able to provide physics with very efficient instruments for anticipating the phenomena which result from actual (experimental) actions tending towards reversibility. According to J. Piaget, in physics, "(...) far from reducing to a language, mathematics is the structuring instrument which coordinates those actions and expands them into theories". As a consequence of this view, the purpose of physics is not to elaborate a series of convergently faithful pictures of a nature given in advance.
It is rather to accommodate and assimilate sequences of irreversible phenomena within the schemes of reversible actions which are formalized in mathematics. This being granted, the usual dualist theory of knowledge, with its encounter between subjects and objects, appears to develop a very narrow variety of a much wider range of conceptions of knowledge. The basic tendency of the cognitive procedures of assimilation-accommodation is to reach invariance with respect to local or individual circumstances. This condition of invariance is fulfilled by embedding as much as possible of the primarily irreversible and non-reproducible phenomena within reciprocal schemes of activity. And its most useful byproduct is a set of predictive rules. Acting under the presupposition of the permanent identity of objects across time, and of the possession by these objects of intrinsic properties, is one possible method for reaching this aim of invariance and predictability. However, one suspects that it is by no means the most general method, and that it involves stringent constraints which are not unavoidable.

We are then led to distinguish two varieties of knowledge. The first one is the general process of embedding phenomena within reversible schemes of activity, and of formulating a mathematical counterpart to these schemes in order to get an optimal set of predictive rules. Let us call it KnowledgeG (for General). The second one is that special variety of the process which is conditioned by the referential and predicative functions of language, insofar as it consists in ascribing properties to permanent objects. Let us call it KnowledgeS (for Special). Accordingly, we may distinguish two aspects of objectivity: a general one (which characterizes KnowledgeG), and a special one (which characterizes KnowledgeS). The general aspect of objectivity (let us call it ObjectivityG) is essentially negative, for it merely amounts to a lack of submission of performative schemes and anticipative rules to any indexical location (I, here, now, this). By contrast, the special aspect of objectivity (let us call it ObjectivityS) is positive. In agreement with the etymology of the word, it consists in projecting the disindexicalization of predictive formalisms onto a description of supposedly autonomous objects.

At this point, the reason for the lasting unease about quantum mechanics can easily be stated in two sentences. Quantum mechanics provides us with KnowledgeG, but it is irreducible to any form of KnowledgeS. Its statements are objectiveG, but they usually miss the positive contents which are typical of ObjectivityS. As long as KnowledgeS and ObjectivityS hold the position of a norm and value in epistemology, these two features of quantum mechanics are likely to be felt as major defects. But from the standpoint of non-standard theories of cognition, where KnowledgeS and ObjectivityS are only individual cases of KnowledgeG and ObjectivityG, the same features can be taken instead as major advances towards a universalized conception of knowledge in physics.

4-A survey of the tensions between quantum mechanics and the dualist theory of knowledge

Before I develop further the consequences of these remarks, however, I have to briefly justify the contention that quantum mechanics is a typical piece of KnowledgeG but that it is irreducible to any form of KnowledgeS. For, after all, there is no consensus about this point.
One must even say that, due to the normative role of KnowledgeS, finding a satisfactory "realist" interpretation of quantum mechanics (i.e. an interpretation according to which a description of properties of objects can be derived from quantum mechanics) is perceived by many philosophers as the major research priority. Their basic tenet (or hope) is that it is not impossible to show that quantum mechanics describes (either completely or incompletely) an intelligible realm of objects endowed with properties existing behind the superficial appearances. The problem is that this attitude has not reached a stage where it may be considered unproblematic, even by its most eager proponents. Surveying the realist interpretations of quantum mechanics, one may easily display their major defects.

Firstly, the most efficient and popular hidden variable theory (Bohm's theory) is drifting further and further from the classical ideal, making it less and less attractive for some of its original supporters. True, Bohm's hidden variable theory manages to recover the predictions of standard quantum mechanics by committing itself to an ontology of interacting micro-objects endowed with properties (provided the interaction involves an all-pervading and instantaneous "quantum potential"). Taken at face value, it describes the trajectories in space-time of these objects, and even more general spatio-temporal processes. However, unlike the classical trajectories and processes, Bohm's trajectories are obtained at the cost of a radical dissociation from any possible experimental procedure. Here, experiments are what may modify a trajectory, rather than what merely makes it manifest. Accordingly, properties are posited as purely hypothetical invariants, for they are not the invariants of any effective operational scheme. The hidden variables thus satisfy an abstract urge for objectivity, but they remain very loosely and indirectly connected with concrete procedures of objectivation. To sum up, Bohm's original hidden variable theory is an empty variety of the scientific strategy of seeking invariants. This explains why it is hardly testable against standard quantum mechanics, and also against any alternative hidden variable theories able to recover the predictions of standard quantum mechanics.

Secondly, there is quantum logic. Quantum logic is not only a piece of pure formal architecture, aimed at disclosing the most basic structures of the quantum-theoretical scheme. From the very beginning, quantum logicians aimed at restoring realism in quantum physics against Bohr's mixture of instrumentalist and participatory views. Rather than sticking to "phenomena" as Bohr did, quantum logic apparently enabled one to recover the possibility of speaking in terms of "physical qualities" or of properties of systems, at the cost of changing the algebra of these properties. Instead of a Boolean algebra, one merely had to accept an "orthocomplemented non-distributive lattice." As soon as this result was obtained, however, the whole historical perspective was reversed by later quantum logicians. In history, non-Boolean logic appears as the realist reply to Bohr's criticism of the ideal of a complete separation between an object and an observing agent. But some contemporary quantum logicians asserted that: "The rejection of the 'ideal of the detached observer' is the Copenhagen response to non-Booleanity."
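What "orthocomplemented non-distributive lattice" amounts to can be made concrete in the smallest Hilbert space. The sketch below (with three arbitrarily chosen rays of R^2 as propositions) checks that A ∧ (B ∨ C) differs from (A ∧ B) ∨ (A ∧ C), which is exactly the failure of Boolean distributivity at stake.

```python
import numpy as np

# Propositions as subspaces of R^2 (columns are basis vectors of each subspace):
A = np.array([[1.0], [0.0]])                  # the x-axis
B = np.array([[0.0], [1.0]])                  # the y-axis
C = np.array([[1.0], [1.0]]) / np.sqrt(2.0)   # the diagonal

def dim_join(U, V):
    """dim(U 'or' V): rank of a combined spanning set."""
    return np.linalg.matrix_rank(np.hstack([U, V]))

def dim_meet(U, V):
    """dim(U 'and' V) = dim U + dim V - dim(U 'or' V)."""
    return U.shape[1] + V.shape[1] - dim_join(U, V)

# A and (B or C): B or C is the whole plane, so the meet is A itself (dimension 1)
print(dim_meet(A, np.hstack([B, C])))               # 1
# (A and B) and (A and C) are both the zero subspace, so their join has dimension 0
print(dim_meet(A, B), dim_meet(A, C))               # 0 0
# 1 != 0: distributivity fails in the lattice of subspaces
```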
Thus, according to these views, the world is inherently non-Boolean, and Bohr's holism is a spurious epistemological interpretation of this ontological feature. But, actually, there is much to be said in favor of Bohr's original standpoint. Let me use, for instance, an argument of simplicity and intelligibility. From the elementary assumption that phenomena are irretrievably relative to their (sometimes incompatible) experimental contexts, it is easy to derive: (i) the full non-Boolean structure of quantum logic, (ii) the quantization itself (through the commutation relations between conjugate variables), and (iii) the wave-like aspect of certain distributions of discrete phenomena. This derivation does not require any well-defined assumption about the structure of the world (with the exception of the non-zero value of the Planck constant). By contrast, starting from a detailed non-Boolean structure of the algebra of properties of the systems which constitute the world introduces a high amount of arbitrariness into the premises. The derivation of consequences from this kind of premise thus has little explanatory power.

Thirdly, let us consider some attempts at giving a straightforward descriptive status to the symbols of standard quantum mechanics. This was the main purpose of the many-worlds interpretation, of Dieks's realist version of the modal interpretation, and of the spontaneous collapse interpretation. But none of these interpretations has proved as yet that it can cope in its own terms (namely without invoking meta-theoretical regulative principles) with some specific difficulties such as the preferred basis problem. True, decoherence theories claim to be able to provide a solution to the previous difficulties. But as we have seen in section 2, decoherence theories are pervaded by interest-relative postulates which do not make them liable to an ontological reading. Their being used in such circumstances is rather an incentive to challenge the standard "epistemological circle" of subject(s) and objects, and thus to drift away from the most basic presupposition of realism.

More recently, a realist reading of state vectors and density matrices was derived from the analysis of so-called adiabatic or protective measurements. Indeed, a single protective measurement is enough to reach distributive parameters (such as expectation values) which are directly provided by state vectors but would otherwise require statistics over a large number of non-protective measurements. The realist conclusions drawn from the consideration of this class of measurements have nevertheless been challenged with sound arguments. It has been shown that only observables that commute with the system's Hamiltonian can be measured protectively. The protective measurement argument thus amounts to little more than showing that the structure of a set of commuting observables is quasi-classical.

Finally, one may briefly discuss the pragmatic attitude of physicists in their laboratories. Their priority is clearly instrumental: they relate day after day the outcomes of a mathematico-symbolic activity to the outcomes of an experimental activity. But they also articulate, for heuristic purposes, fragmented models of objects. And they use terms such as "particles", "properties", "fields", etc., whose meanings have drifted beyond recognition from their classical counterparts, but which still ignite the temptation of ontological projection.
The all-pervasiveness of these models and of these crypto-ontological words could be taken as a proof that dispensing completely with the traditional dualist theory of knowledge in science is utopian. But the very way the models and terms are manipulated shows that the dualist theory of knowledge is de facto dead in the practice of standard quantum theories. For the use of these models and terms is systematically made flexible and contextualized. They become successively predominant or marginal according to the theoretical and experimental context of discourse. They may have either to be taken at face (traditional) value in one context (say in chemistry), or to be thoroughly redefined in another context (say in high energy physics). They are nothing more than relative models and ontologies, loosely articulated with the remote hope of a unified (and hence presumably absolute) picture.

5-Relational approaches to quantum mechanics

Let us recapitulate what has been found up to now:

1) Pushing the Heisenberg-Bohr views of quantum theory to their ultimate consequences, one obtains a remarkable structural agreement with self-organizational and non-representationalist theories of cognition.

2) Only within a non-representationalist theory of knowledge does the measurement problem of quantum mechanics find a quick and natural (dis)solution. This is due to the fact that the measurement problem is tantamount to a lack of closure of the epistemological circle that quantum theories have inherited from classical physics and "natural ontology". Changing the type of epistemological circle along a non-representationalist line (and reinterpreting the decoherence theories accordingly) is enough to get a satisfactory way out.

3) The recurrent attempts at providing a "realist" interpretation of quantum mechanics (i.e. an interpretation appropriate to the classical dualist theory of knowledge) are clearly unsatisfactory. Even though nothing precludes the possibility that a fully satisfactory "realist" formulation of quantum mechanics or its successors will be found in the future, this is only wishful thinking for the time being. In the present situation, "realist" interpretations all appear artificial, contrived, and/or incomplete.

From point 3), it appears that quantum mechanics undermines the most basic epistemological presuppositions of classical physics, even though these presuppositions were the unavoidable point of departure of the investigation that led to its formulation. Quantum mechanics institutes a tension within the epistemological circle from which it arose, and it therefore paves the way towards a radical redefinition of this circle. From points 1) and 2), one gets a clear idea of what might well be the appropriate new epistemological circle: it is the circle which corresponds to non-representationalist and self-organizational theories of knowledge. However, if this is true, the meaning of each single element of the physical theory, and of its meta-theoretical account of measurement as well, has to be completely changed. Since quantum mechanics does not describe anything like the properties of its putative objects, the quantum theory of measurement does not describe anything like the properties of the measuring apparatus either. But if this theory and its meta-theory do not describe anything, what do they do?
The easiest answer to this question is flat empiricism, according to which quantum mechanics is a mere formal device enabling one to account as economically as possible for the statistical regularities of phenomena defined relative to certain experimental devices described in classical terms. What I called the "predictive" reading of quantum mechanics in section 2 is especially liable to this interpretation (although it by no means reduces to it). But of course, one may easily understand that realist philosophers, and many scientists as well, are reluctant to accept a purely empiricist view of theories. Indeed (with the possible exception of Van Fraassen's constructive empiricism), most versions of empiricism have proved unable to account for what is so crucial in everyday research, namely a well-defined perspective, a clear direction, and a strong motivation. They also lack a fully satisfactory explanation of the remarkable predictive success of a theory like quantum mechanics. Epistemological evolutionism is the best candidate to afford such an explanation within an anti-realist framework of thought, but, if it remains isolated, it is not sufficient, especially when it is confronted with quantum mechanics. For it accounts for a plurality of viable (or approximately adequate) slowly drifting theories, whereas one has to explain the unicity and extreme stability of the general framework of theories afforded by the standard Dirac-von Neumann formalism.

This is the reason why, in the past few years, I developed a full-fledged transcendentalist interpretation of quantum mechanics, which rejects both the realist idea that a physical theory is a (more or less complete) description of a pre-structured external world, and the empiricist view that it is reducible to a unified summary of efficient predictive recipes. While sticking to the purely predictive reading of quantum mechanics, I showed that one may provide it with much stronger justifications than mere a posteriori empirical adequacy, without invoking the slightest degree of isomorphism between this theory and the elusive things out there. The alternative justification is as follows. The structure of quantum mechanics necessarily arises whenever one tries to embed contextual and mutually incompatible phenomena within a unified and time-connected meta-contextual system of probabilistic anticipation. It is a formal condition of possibility of those unusual probabilistic assessments.

This kind of justification of quantum mechanics is obviously in better agreement with self-organizational and non-representationalist theories of knowledge than with either the realist or the empiricist variety of the classical epistemological paradigm. For here the theory is by no means construed as a (more or less precise) picture of a pre-existing nature; nor is it construed as a mere economical formula to express pre-given facts. The theory is rather taken as the structural expression of an all-encompassing strategy of gaining context-invariant anticipative capacity, in a situation where the contextuality of each single phenomenon cannot be ignored. As for the forms the theory assumes in various specialized domains, they are construed as the byproduct of the co-emergence of a given type of experimental activity and of the 'factual' elements which constrain it. In this reading of quantum mechanics, as in self-organizational theories of knowledge, there is no one-way dependence of the theory on either "external reality" or "facts".
Rather, there is a two-way mutual dependence between the project of investigation and the system of constraints which has to be taken into account by it. Thinking a little further, one realizes that this conception of knowledge is thoroughly relational. It is even relational in an exceptionally strong sense. For here, the terms of the cognitive relation, namely the project of experimental investigation underpinned by a theory, and the set of phenomenal constraints which are to be accounted for, do not come before the research activity which institutes the relation itself. The relata of the cognitive relation are produced by it, precisely as much as the other way round. This type of cognitive relation with no pre-existing relata is all the more interesting since it closely mimics a rising set of purely relational interpretations of quantum mechanics; a set of interpretations according to which entangled state vectors express pure relations with no self-sufficient relata. Such an isomorphism paves the way for a new type of epistemological circle, wherein both the theory and the cognitive meta-theory are extensively relational.

In this situation, the task of the philosopher is no longer to explain the purely relational character of the phenomena of micro-physics by invoking some (mutually "disturbing") interaction between an object and an apparatus endowed with properties. It is rather, conversely, to explain why and how the familiar formal concept of monadic property could work so efficiently and for so long in the macroscopic domain, despite the fact that we ultimately live in a universe of pure relations. This latter kind of explanation can take advantage of the concept of supervenience borrowed by P. Teller from D. Davidson for his relational interpretation of quantum mechanics. In short, the explanation runs thus: classical physics was so successful in its program of de-convolution of the non-supervenient relations which are constitutive of phenomena that nothing prevented it from working as if they were supervenient relations between monadic properties. Of course, this explanation has to be developed in order to become convincing.

What, then, is supervenience, and how does this concept apply to relations? According to Davidson, a class of entities B supervenes on a class of entities A if: (i) every single modification of an entity B is underpinned by a modification of an entity A (to paraphrase Davidson, there cannot be two events alike in all their A aspects but differing in some B aspect); (ii) there are alterations of entities A which leave entities B unchanged. In classical mechanics, one thus considers that: (i) every single modification of the relation between two material bodies is underpinned by some change in their (spatial, kinematic, and/or dynamic) properties, and (ii) there are modifications of the properties of these bodies which leave their relation unchanged (provided these modifications are coordinated in such a way that they respect certain similarities usually expressed by dimensionless numbers). To summarize, saying that relations supervene on monadic properties of objects amounts to ascribing them a secondary and derived status with respect to properties. It also means that the information content of each relation is poorer than that of the related properties (for several pairs of properties may yield the same relation).
The problem is that this classical way of pushing relations aside and giving properties the central role does not help to figure out the reason why properties appear to be richer in information than relations. Indeed, the most plausible reason for this richness is that properties express a large number of possible relations beyond the actual relation in which an object is involved. Saying that something possesses a property is a shorthand description of a wide range of relations in which this thing may possibly be involved. Ascribing a property to something means recognizing in this thing a disposition to produce effects whenever it is involved in any of many possible relations to other things. This idea was familiar to the early Wittgenstein, who had thoroughly assimilated Boltzmann's and Hertz's conceptions of classical mechanics. According to him, "[...] there is no object that we can imagine excluded from the possibility of combining with others". Thus, in his eyes, the autonomy of things and properties is a sort of illusion due to the boundless number of possible relations (or combinations, or connexions) which define them: "Things are independent in so far as they can occur in all possible situations, but this form of independence is a form of connexion with states of affairs, a form of dependence". In other terms, the mutual independence of things and properties is the name we give to the indefinite openness of the network of interdependence in which they may be involved.

Wittgenstein's reflections reveal that the stratum of properties (layer n°1), on which the relations of classical theories and meta-theories are supervenient (layer n°2), presupposes an underlying stratum (layer n°0) of non-supervenient relations (i.e. primitive relations with no properties holding the role of relata). The reason why this ground-level layer of non-supervenient relations was almost ignored (or bracketed) by classical science becomes clear at this point. This reason is that it was especially easy to extract from it a number of effects invariant under large ranges of (cognitive) connections. Whenever a basically relational phenomenon remains invariant irrespective of its position within a set of successive or simultaneous experimental relations, it can perfectly well be detached from the cognitive conditions under which it appears. It becomes natural to consider it as a mere reflection of a property. This opportunity of detaching the phenomenon from its cognitive contexts of appearance persists even when it is sensitive to variations of the experimental set-up, provided its changes can be ascribed to disturbing properties. Only in one case would the basically relational character of the phenomenon become inescapable: if the phenomenon were highly dependent on its position within a set of successive or simultaneous experimental relations, and if moreover the attempts at explaining this dependence in terms of disturbances were unacceptable or exceedingly contrived. This situation would, so to speak, impose a radical reflective examination of the constitutive relations of knowledge. But this is precisely the situation of quantum physics.

To conclude, a purely relational kind of epistemological circle is at the same time self-consistent, in natural agreement with the quantum paradigm, and able to account in its own terms for the absolutist kind of epistemological circle conveyed by the classical dualist theory of knowledge.
This opens a potentially very fruitful research program, of which we are presently witnessing the first outlines.
Vidar Gudmundsson — selected publication abstracts:

• We present a theoretical study of the unielectronic energy spectra, electron localization, and optical absorption of triangular core-shell quantum rings. We show how these properties depend on geometric details of the triangle, such as side thickness or corners' symmetry. For equilateral triangles, the lowest six energy states (including spin) are grouped …

• We investigate double finger gate (DFG) controlled spin-resolved resonant transport properties in an n-type quantum channel with a Rashba-Zeeman (RZ) subband energy gap. By appropriately tuning the DFG in the strong Rashba coupling regime, resonant state structures in conductance can be found that are sensitive to the length of the DFG system. Furthermore, …

• We investigate coherent electron-switching transport in a double quantum waveguide system in a perpendicular static or vanishing magnetic field. The finite symmetric double waveguide is connected to two semi-infinite leads from both ends. The double waveguide can be defined as two parallel finite quantum wires or waveguides coupled via a window to …

• We study Coulomb interacting electrons confined in polygonal quantum rings. We focus on the interplay of localization at the polygon corners and Coulomb repulsion. Remarkably, the Coulomb repulsion allows the formation of in-gap states, i.e., corner-localized states of electron pairs or clusters shifted to energies that were forbidden for non-interacting …

• We investigate the effects of the shape of quantum dots on their far-infrared absorption in an external magnetic field by a model calculation. We focus our attention on dots with a parabolic confinement potential deviating from the common circular symmetry, and dots having circular doughnut shape. For a confinement where the generalized Kohn theorem does …

• We model a core-shell nanowire (CSN) by a cylindrical surface of finite length. A uniform magnetic field perpendicular to the axis of the cylinder forms electron states along the lines of zero radial field projection, which can classically be described as snaking states. In a strong field, these states converge pairwise to quasidegenerate levels, which are …

• We compare energy levels, carrier localization and optical absorption of a single electron and a pair of interacting carriers confined in a hexagonal quantum ring. We show that many-body levels are multiply degenerate and, contrary to the single-particle case, no repeated energy sequence can be identified. The number of eigenvalues associated with …

• We outline a rigorous method which can be used to solve the many-body Schrödinger equation for a Coulomb interacting electronic system in an external classical magnetic field as well as a quantized electromagnetic field. Effects of the geometry of the electronic system as well as the polarization of the quantized electromagnetic field are explicitly taken …
Complex number
From Citizendium, the Citizens' Compendium

Complex numbers are numbers of the form $a + bi$, where $a$ and $b$ are real numbers and $i$ denotes a number satisfying $i^2 = -1$.[1] Of course, since the square of any real number is nonnegative, $i$ cannot be a real number. At first glance, it is not even clear whether such an object exists and can be reasonably called a number; for example, can we sensibly associate with $i$ natural operations such as addition and multiplication? As it happens, we can define mathematical operations for these "complex numbers" in a consistent and sensible way and, perhaps more importantly, using complex numbers provides mathematicians, physicists, and engineers with an extremely powerful approach to expressing parts of these sciences in a convenient and natural-feeling way.

Historical example

The need for complex numbers might have appeared for the first time during the sixteenth century, when Italian mathematicians like Scipione del Ferro, Niccolò Fontana Tartaglia, Gerolamo Cardano and Rafael Bombelli tried to solve cubic equations. Even for equations with three real solutions, the method they used sometimes required calculations with numbers whose squares are negative. Here is such an example (with modern notation). Let us consider the equation

$$x^3 = 15x + 4.$$

Cardano's method for solving it suggests looking for a solution by writing it as a sum $x = u + v$, where another condition on $u$ and $v$ is to be decided later. Recording this in the equation, we have, once the left member is expanded,

$$u^3 + v^3 + 3uv(u + v) = 15(u + v) + 4,$$

which can be written as

$$u^3 + v^3 - 4 = (15 - 3uv)(u + v).$$

Now we recall that we did not completely specify $u$ and $v$; we only required that $u + v = x$. Hence, we can choose another condition on $u$ and $v$. We pick this condition to be $3uv = 15$, or $uv = 5$, in order to simplify the above equation. This implies that $u^3$ and $v^3$ are numbers whose product and sum are given by

$$u^3 v^3 = 125, \qquad u^3 + v^3 = 4.$$

It follows from the second equation that $v^3 = 4 - u^3$. Substituting this in the first equation, we get $u^3(4 - u^3) = 125$. Hence we may find some values for $u^3$ by solving the equation $y(4 - y) = 125$.
Getting rid of the brackets and moving the number 125 to the left-hand side gives us the quadratic equation

$$y^2 - 4y + 125 = 0.$$

Its discriminant is $\Delta = (-4)^2 - 4 \cdot 125 = -484 = -22^2$, which is negative, so that the quadratic equation has no real solution: the usual formulae giving the solutions require taking the square root of the discriminant, which is undefined here.

Well, let us be bold and write $\sqrt{\Delta} = 22\sqrt{-1}$. Here, the symbol $\sqrt{-1}$ denotes a hypothetical number whose square would be $-1$. At this stage, such a number has no meaning (squares of real numbers are always nonnegative), but we use it in a purely formal way. Using this symbol, we can write the "solutions" to the quadratic equation as

$$u^3 = 2 + 11\sqrt{-1} \quad\text{and}\quad v^3 = 2 - 11\sqrt{-1}.$$

It remains to find cube roots of these "numbers". A straightforward calculation shows that $u = 2 + \sqrt{-1}$ and $v = 2 - \sqrt{-1}$ do the job. For instance, remembering the rule $(\sqrt{-1})^2 = -1$, we have

$$(2 + \sqrt{-1})^3 = 8 + 12\sqrt{-1} + 6(\sqrt{-1})^2 + (\sqrt{-1})^3 = 8 + 12\sqrt{-1} - 6 - \sqrt{-1} = 2 + 11\sqrt{-1}.$$

But now, going back to the original cubic equation, we get the real solution $x = u + v = (2 + \sqrt{-1}) + (2 - \sqrt{-1}) = 4$. One can verify it is indeed a solution, as $4^3 = 64 = 15 \cdot 4 + 4$. And once this solution is found, it is easy to find the two other solutions $x = -2 \pm \sqrt{3}$, which are also real.

The fact that the formal calculations managed to give a real solution suggests that the "number" $\sqrt{-1}$ may have some sense. But to really give it a legitimate status, one has to construct a new set of numbers, containing the real numbers, but also other numbers whose squares may be negative real numbers. This will be the set of complex numbers. A rigorous construction of this set as pairs of real numbers was given much later by William Rowan Hamilton in 1837; this construction is explained later in this article.

Working with complex numbers

As a first step in giving some legitimacy to the "number" $\sqrt{-1}$, we will explain how to compute with it. How do you add, multiply and divide expressions with this number? It turns out that this is not that difficult; the main rule to keep in mind is that the square of $\sqrt{-1}$ equals $-1$. In the remainder of the article, we will use the letter $i$ to denote one solution of the equation $i^2 = -1$, where we previously used $\sqrt{-1}$.[2] With this convention, all complex numbers can be written as $a + bi$, where $a$ and $b$ are real numbers. We call $a$ the real part of the complex number and $b$ the imaginary part. Complex numbers whose imaginary part is $0$ are of the form $a + 0i$. In this way, the real number $a$ is considered as the complex number whose imaginary part is zero.

Basic operations

Addition of complex numbers is straightforward:

$$(a + bi) + (c + di) = (a + c) + (b + d)i.$$

The result is again a complex number. Multiplication is more interesting. Suppose we want to compute $(a + bi)(c + di)$. Using $i^2 = -1$, we can rewrite this product in a form which clearly shows it to be another complex number:

$$(a + bi)(c + di) = ac + adi + bci + bdi^2 = (ac - bd) + (ad + bc)i.$$

To handle division, we simply note that $(c + di)(c - di) = c^2 + d^2$, so

$$\frac{1}{c + di} = \frac{c - di}{c^2 + d^2},$$

from which it follows that

$$\frac{a + bi}{c + di} = \frac{(a + bi)(c - di)}{c^2 + d^2} = \frac{(ac + bd) + (bc - ad)i}{c^2 + d^2}.$$

Going a bit further, we can introduce the important operation of complex conjugation. Given an arbitrary complex number $z = x + yi$, we define its complex conjugate to be $\bar{z} = x - yi$. Using the identity $i^2 = -1$ we derive the important formula

$$z\bar{z} = x^2 + y^2,$$

and we define the modulus of a complex number $z$ to be

$$|z| = \sqrt{z\bar{z}} = \sqrt{x^2 + y^2}.$$

Note that the modulus of a complex number is always a nonnegative real number. The modulus (also called absolute value) satisfies three important properties that are completely analogous to the properties of the absolute value of real numbers:

• $|z| \ge 0$; furthermore, $|z| = 0$ if and only if $z = 0$;
• $|zw| = |z|\,|w|$;
• $|z + w| \le |z| + |w|$.

The last inequality is known as the triangle inequality.
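For readers who like to check things numerically, here is a short Python sketch (an illustrative aside, not part of the original article) that verifies the Bombelli-style computation and the arithmetic rules above, using Python's built-in complex type, in which 1j plays the role of $\sqrt{-1}$:

```python
# The two "solutions" of y^2 - 4y + 125 = 0:
u3 = 2 + 11j
v3 = 2 - 11j
print(u3 + v3, u3 * v3)   # (4+0j) (125+0j): the required sum and product

# The candidate cube roots:
u = 2 + 1j
v = 2 - 1j
print(u**3, v**3)         # (2+11j) (2-11j): they are indeed cube roots

# The real solution of the cubic x^3 = 15x + 4:
x = u + v
print(x, x**3 - 15*x - 4) # (4+0j) 0j: x = 4 solves the equation
```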
The complex exponential

Recall that in real analysis, the ordinary exponential function may be defined as

$$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots = \sum_{n=0}^{\infty} \frac{x^n}{n!}.$$

The same series may be used to define the complex exponential function (where, of course, convergence is defined in terms of the complex modulus, instead of the real absolute value). The complex exponential has the same multiplicative property that holds for real numbers, namely

$$e^{z + w} = e^z e^w.$$

The complex exponential function has the important property that

$$e^{i\theta} = \cos\theta + i\sin\theta,$$

as may be seen immediately by substituting $x = i\theta$ and comparing terms with the usual power series expansions of $\cos\theta$ and $\sin\theta$.

The familiar trigonometric identity $\cos^2\theta + \sin^2\theta = 1$ immediately implies the important formula $|e^{i\theta}| = 1$, for any $\theta \in \mathbb{R}$. Of course, there is no reason to assume this identity. We only need note that $\overline{e^{i\theta}} = e^{-i\theta}$, so

$$|e^{i\theta}|^2 = e^{i\theta}\,\overline{e^{i\theta}} = e^{i\theta} e^{-i\theta} = e^0 = 1.$$

Geometric interpretation

[Figure: graphical representation of a complex number and its conjugate.]

Since a complex number $z = x + yi$ is specified by two real numbers, namely $x$ and $y$, it can be interpreted as the point $(x, y)$ in the plane. When complex numbers are represented as points in the plane, the resulting diagrams are known as Argand diagrams, after Robert Argand. The geometric representation of complex numbers turns out to be very useful, both as an aid to understanding the properties of complex numbers and as a tool in applying complex numbers to geometrical and physical problems.

There are no real surprises when we look at addition and subtraction in isolation: addition of complex numbers is not essentially different from addition of vectors in $\mathbb{R}^2$. Similarly, if $\lambda$ is real, multiplication by $\lambda$ is just scalar multiplication. In $\mathbb{C}$ we have

$$z_1 + z_2 = (x_1 + x_2) + (y_1 + y_2)i \quad\text{and}\quad \lambda z = \lambda x + \lambda y i.$$

To put it succinctly, $\mathbb{C}$ is a 2-dimensional real vector space with respect to the usual operations of addition of complex numbers and multiplication by a real number. There doesn't seem to be much more to say. But there is more to say, and that is that the multiplication of complex numbers has geometric significance. This is most easily seen if we take advantage of the complex exponential, and write complex numbers in polar form

$$z = re^{i\theta}.$$

Here, $r$ is simply the modulus, or vector length. The number $\theta$ is just the angle formed with the $x$-axis, and is called the argument. Now, when complex numbers are written in polar form, multiplication is very interesting:

$$z_1 z_2 = r_1 e^{i\theta_1} \, r_2 e^{i\theta_2} = r_1 r_2 e^{i(\theta_1 + \theta_2)}.$$

[Figure: multiplication by $i$ amounts to rotation by 90 degrees.]

In other words, multiplication by a complex number has the effect of simultaneously scaling by the number's modulus and rotating by its argument. This is really astounding. For example, to multiply a given complex number $z$ by $i$ we need only to rotate $z$ by $\pi/2$ (that is, 90 degrees). Translation corresponds to complex addition, scaling to multiplication by a real number, and rotation to multiplication by a complex number of unit modulus.

The one type of coordinate transformation that is missing from this list is reflection. On the other hand, there is an arithmetic operation we have not considered, and that is division. Recall that

$$\frac{1}{z} = \frac{\bar{z}}{|z|^2}.$$

In other words, up to a scaling factor, division by $z$ is just complex conjugation. Returning to the representation of complex numbers in rectangular form, we note that complex conjugation is just the transformation (or map) $x + yi \mapsto x - yi$ or, in vector notation, $(x, y) \mapsto (x, -y)$. This is nothing other than reflection in the $x$-axis, and any other reflection may be obtained by combining that transformation with rotations and translations.

Historically, this observation was very important and led to the search for higher dimensional algebras that could "arithmetize" Euclidean geometry. It turns out that there are such generalizations in dimensions 4 and 8, known as the quaternions and octonions (also known as Cayley numbers). At that point, the process stops, but the ideas developed in this process have played an important role in the development of modern differential geometry and mathematical physics.
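As another quick numerical illustration (again an aside, not part of the original article), the scaling-and-rotation reading of complex multiplication is easy to check:

```python
import cmath

z = 3 + 4j                       # an arbitrary point in the plane
print(abs(z), cmath.phase(z))    # modulus r = 5.0 and argument theta

# Multiplying by i rotates z by 90 degrees without changing its modulus:
print(1j * z)                    # (-4+3j), i.e. (x, y) -> (-y, x)

# Multiplying by e^{i*theta} rotates by theta; here theta = pi/2 again:
w = cmath.exp(1j * cmath.pi / 2) * z
print(w)                         # approximately (-4+3j), up to rounding
```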
Algebraic closure

An important property of the set of complex numbers is that it is algebraically closed. This means that any non-constant polynomial with complex coefficients has a complex root. This result is known as the Fundamental Theorem of Algebra. This is actually quite remarkable. We started out with the real numbers. There are many polynomials with real coefficients that do not have a real root. We took just one of these, the polynomial $x^2 + 1$, and we introduced a new number, $i$, which is defined to be a root of the polynomial. Suddenly, all non-constant polynomials have a root in this new setting where we allow complex numbers.

There are many proofs of the Fundamental Theorem of Algebra. Many of the simplest depend crucially on complex analysis. But it is by no means necessary to rely on complex analysis here. A proof using field theory is alluded to at the very end of this article.

Complex numbers in physics

Complex numbers appear everywhere in mathematical physics, but one area where the role of complex numbers is especially difficult to ignore is in quantum mechanics. There are a number of ways of formulating the basic laws of quantum mechanics, but here we consider just one: the Schrödinger equation, discovered by Erwin Schrödinger in 1926. In rectangular coordinates, it may be written

$$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi,$$

where $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$ is known as the Laplacian operator and $V$ is the potential function. (As a practical example, the potential function might represent the attractive interaction between the nucleus of a hydrogen atom and an electron). Now, there is some subtlety in the interpretation of $\psi$ because a system can be affected by observation, and the functions we "see" must be eigenstates of the operator defined by the Schrödinger equation, but when we do measure, say, the position of a particle, the probability of finding it in a small region $R$ is just

$$\int_R |\psi|^2 \, dV.$$

Formal definition

This all shows that complex numbers behave very much like real numbers and that they can be very useful, but it does not prove that they exist. In fact, it is quite easy to go wrong when using complex numbers. Consider for instance the following computation:

$$-1 = \sqrt{-1}\,\sqrt{-1} = \sqrt{(-1)(-1)} = \sqrt{1} = 1.$$

This computation seems to show that $-1$ equals $1$, which is nonsense. The point is that the second equality can not be applied. Positive real numbers satisfy the identity

$$\sqrt{x}\,\sqrt{y} = \sqrt{xy},$$

but this identity does not hold for negative real numbers, whose square roots are not real.

One possibility to feel more secure when using complex numbers is to define them in terms of constructs which are better understood. This approach was taken by Hamilton, who defined complex numbers as ordered pairs of real numbers, that is, $(a, b)$ with $a, b \in \mathbb{R}$. Such pairs can be added and multiplied as follows:

• addition: $(a, b) + (c, d) = (a + c, b + d)$;
• multiplication: $(a, b)(c, d) = (ac - bd, ad + bc)$.

The multiplication may look artificial, but it is inspired by the formula

$$(a + bi)(c + di) = (ac - bd) + (ad + bc)i,$$

which we derived before. These definitions satisfy most of the basic properties of addition and multiplication of real numbers, and we can employ many formulas from the elementary algebra we are accustomed to. More specifically, the sum (or the product) of two numbers does not depend on the order of terms;[3] the sum (product) of three or more elements does not depend on order of operations ('we can suppress the parentheses');[4] the product of a complex number with a sum of two other numbers expands in the usual way.[5] In mathematical language this means that with addition and multiplication defined this way, $\mathbb{R}^2$ satisfies the axioms for a field and is called the field of complex numbers.
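Here is a minimal Python sketch of the ordered-pair construction just described (an illustration added here, not from the original article), showing in particular that the pair $(0, 1)$ squares to $(-1, 0)$:

```python
from dataclasses import dataclass

@dataclass
class Pair:
    a: float  # first component ("real part")
    b: float  # second component ("imaginary part")

    def __add__(self, other):
        # addition rule: (a, b) + (c, d) = (a + c, b + d)
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # multiplication rule: (a, b)(c, d) = (ac - bd, ad + bc)
        return Pair(self.a * other.a - self.b * other.b,
                    self.a * other.b + self.b * other.a)

i = Pair(0, 1)
print(i * i)                        # Pair(a=-1, b=0), i.e. the real number -1
print(Pair(2, 11) * Pair(2, -11))   # Pair(a=125, b=0): (2+11i)(2-11i) = 125
```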
Now we are ready to understand the 'real' meaning of $i$. Observe that the pairs of type $(a, 0)$ are identical[6] to the set of reals, so we write $(a, 0) = a$. Observe also that by definition $(0, 1)(0, 1) = (-1, 0)$. In other words, we can define $i$, the number satisfying $i^2 = -1$, as the pair $(0, 1)$.[7]

Another way to define the complex numbers comes from field theory. Because $x^2 + 1$ is irreducible in the polynomial ring $\mathbb{R}[x]$, the ideal generated by $x^2 + 1$ is a maximal ideal.[8] Therefore, the quotient ring $\mathbb{R}[x]/(x^2 + 1)$ is a field. We can choose the polynomials of degree at most 1 as the representatives for the equivalence classes in this quotient ring. So in a sense, we can imagine that the dummy variable $x$ is the imaginary number $i$, and the elements of the quotient ring behave exactly the way we expect the complex numbers to behave. For example, $x^2$ is in the same equivalence class as $-1$, and so $x^2 = -1$ in this quotient ring. (As a final comment in this analysis, we could next show that this field has no finite extension and must therefore be algebraically closed.)

Further reading
• Ahlfors, Lars V. (1979). Complex Analysis, 3rd edition. McGraw-Hill. ISBN 0-07-000657-1.
• Apostol, Tom M. (1974). Mathematical Analysis, 2nd edition. Addison-Wesley. ISBN 0-201-00288-4.
• Conway, John H.; Derek A. Smith (2003). On Quaternions and Octonions: Their Geometry, Arithmetic and Symmetry. A K Peters. ISBN 1-56881-134-9.
• Jacobson, Nathan (1974). Basic Algebra I. W.H. Freeman and Company. ISBN 0-7167-0453-6.
• Williams, Floyd (2003). Topics in Quantum Mechanics. Birkhäuser. ISBN 0-8176-4311-7.

Notes and references
1. This article follows the usual convention in mathematics and physics of using $i$ as the imaginary unit. Complex numbers are frequently used in electrical engineering, but in that discipline it is usual to use $j$ instead, reserving $i$ for electrical current. This usage is found in some programming languages too, notably Python.
2. Part of the reason for not using $\sqrt{-1}$ is that the symbol $\sqrt{z}$ (or $z^{1/n}$) with $z \in \mathbb{C}$ is sometimes used to denote the set of complex roots of $z$, i.e., the set of the solutions of the equation $w^2 = z$ ($w^n = z$, respectively). The set contains 2 ($n$, respectively) "equally important" elements and there is no canonical way to distinguish a "representative". Consequently, no computations are performed using this symbol.
3. That is, the addition (multiplication) is commutative.
4. This is called associativity.
5. In other words, multiplication is distributive over addition.
6. I.e., isomorphic, which basically means that the mapping $(a, 0) \mapsto a$ preserves the addition and multiplication.
7. Although we should be careful about giving this particular definition too much credit: after all, the pair $(0, -1)$ has exactly the same property!
8. An ideal generated by a polynomial in a polynomial ring over a field is maximal if and only if that polynomial is irreducible over the field.
Re-visiting the Complementarity Principle: the field versus the flywheel model of the matter-wave

Note: I have published a paper that is very coherent and fully explains what's going on. There is nothing magical about these things. Check it out: The Meaning of the Fine-Structure Constant. No ambiguity. No hocus-pocus.

Jean Louis Van Belle, 23 December 2018

Original post:

This post is a continuation of the previous one: it is just going to elaborate the questions I raised in the post scriptum of that post. Let's first review the basics once more.

The geometry of the elementary wavefunction

In the reference frame of the particle itself, the geometry of the wavefunction simplifies to what is illustrated below: an oscillation in two dimensions which, viewed together, forms a plane that would be perpendicular to the direction of motion—but then our particle doesn't move in its own reference frame, obviously. Hence, we could be looking at our particle from any direction and we should, presumably, see a similar two-dimensional oscillation. That is interesting because… Well… If we rotate this circle around its center (in whatever direction we'd choose), we get a sphere, right? It's only when it starts moving, that it loses its symmetry. Now, that is very intriguing, but let's think about that later.

Let's assume we're looking at it from some specific direction. Then we presumably have some charge (the green dot) moving about some center, and its movement can be analyzed as the sum of two oscillations (the sine and cosine) which represent the real and imaginary component of the wavefunction respectively—as we observe it, so to speak. [Of course, you've been told you can't observe wavefunctions so… Well… You should probably stop reading this. :-)] We write:

ψ = a·e^(−i·θ) = a·e^(−i·E·t/ħ) = a·cos(−E·t/ħ) + i·a·sin(−E·t/ħ) = a·cos(E·t/ħ) − i·a·sin(E·t/ħ)

So that's the wavefunction in the reference frame of the particle itself. When we think of it as moving in some direction (so relativity kicks in), we need to add the p·x term to the argument, so θ = (E·t − p·x)/ħ. It is easy to show this term doesn't change the argument (θ), because we also get a different value for the energy in the new reference frame: Ev = γ·E0. And so… Well… I'll refer you to my post on this, in which I show the argument of the wavefunction is invariant under a Lorentz transformation: the way Ev and pv and, importantly, the coordinates x and t relativistically transform ensures the invariance. In fact, I've always wanted to read de Broglie's original thesis because I strongly suspect he saw that immediately. If you click this link, you'll find an author who suggests the same. Having said that, I should immediately add this does not imply there is no need for a relativistic wave equation: the wavefunction is a solution for the wave equation and, yes, I am the first to note the Schrödinger equation has some obvious issues, which I briefly touch upon in one of my other posts—and which is why Schrödinger himself and other contemporaries came up with a relativistic wave equation (Oskar Klein and Walter Gordon got the credit, but others (including Louis de Broglie) also suggested a relativistic wave equation when Schrödinger published his). In my humble opinion, the key issue is not that Schrödinger's equation is non-relativistic. It's that 1/2 factor again but… Well… I won't dwell on that here. We need to move on. So let's leave the wave equation for what it is and go back to our wavefunction.
You'll note the argument (or phase) of our wavefunction moves clockwise—or counterclockwise, depending on whether you're standing in front of or behind the clock. Of course, Nature doesn't care about where we stand or—to put it differently—whether we measure time clockwise, counterclockwise, in the positive, the negative or whatever direction. Hence, I've argued we can have both left- as well as right-handed wavefunctions, as illustrated below (for p ≠ 0). Our hypothesis is that these two physical possibilities correspond to the angular momentum of our electron being either positive or negative: Jz = +ħ/2 or, else, Jz = −ħ/2. [If you've read a thing or two about neutrinos, then… Well… They're kinda special in this regard: they have no charge, and neutrinos and antineutrinos are actually defined by their helicity. But… Well… Let's stick to trying to describe electrons for a while.]

The line of reasoning that we followed allowed us to calculate the amplitude a. We got a result that tentatively confirms we're on the right track with our interpretation: we found that a = ħ/(me·c), so that's the Compton scattering radius of our electron. All good! But we were still a bit stuck—or ambiguous, I should say—on what the components of our wavefunction actually are. Are we really imagining the tip of that rotating arrow is a pointlike electric charge spinning around the center? [Pointlike or… Well… Perhaps we should think of the Thomson radius of the electron here, i.e. the so-called classical electron radius, which is equal to the Compton radius times the fine-structure constant: rThomson = α·rCompton ≈ (3.86×10⁻¹³ m)/137 ≈ 2.82×10⁻¹⁵ m.] So that would be the flywheel model. In contrast, we may also think the whole arrow is some rotating field vector—something like the electric field vector, with the same or some other physical dimension, like newton per charge unit, or newton per mass unit? So that's the field model. Now, these interpretations may or may not be compatible—or complementary, I should say. I sure hope they are but… Well… What can we reasonably say about it?

Let us first note that the flywheel interpretation has a very obvious advantage, because it allows us to explain the interaction between a photon and an electron, as I demonstrated in my previous post: the electromagnetic energy of the photon will drive the circulatory motion of our electron… So… Well… That's a nice physical explanation for the transfer of energy. However, when we think about interference or diffraction, we're stuck: flywheels don't interfere or diffract. Only waves do. So… Well… What to say?

I am not sure, but here I want to think some more by pushing the flywheel metaphor to its logical limits. Let me remind you of what triggered it all: it was the mathematical equivalence of the energy equation for an oscillator (E = m·a²·ω²) and Einstein's formula (E = m·c²), which tells us energy and mass are equivalent but… Well… They're not the same. So what are they then? What is energy, and what is mass—in the context of these matter-waves that we're looking at? To be precise, the E = m·a²·ω² formula gives us the energy of two oscillators, so we need a two-spring model which—because I love motorbikes—I referred to as my V-twin engine model, but it's not an engine, really: it's two frictionless pistons (or springs) whose directions of motion are perpendicular to each other, so they are at a 90° angle and, therefore, their motions are, effectively, independent. In other words: they will not interfere with each other.
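As a quick aside, the claim that the two perpendicular oscillators together store a constant energy E = m·a²·ω² is easy to check numerically. This little Python sketch uses arbitrary values for m, a and ω (an illustration added here, nothing more):

```python
import numpy as np

m, a, omega = 1.0, 2.0, 3.0          # arbitrary mass, amplitude, frequency
k = m * omega**2                     # spring stiffness
t = np.linspace(0.0, 10.0, 1000)

x1 = a * np.cos(omega * t)           # piston 1
x2 = a * np.sin(omega * t)           # piston 2, 90 degrees out of phase
v1 = -a * omega * np.sin(omega * t)  # velocities
v2 = a * omega * np.cos(omega * t)

# Total kinetic plus potential energy of both oscillators:
E = 0.5 * m * (v1**2 + v2**2) + 0.5 * k * (x1**2 + x2**2)
print(np.allclose(E, m * a**2 * omega**2))   # True: the energy is constant
```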
It's probably worth showing the illustration just one more time. And… Well… Yes. I'll also briefly review the math one more time.

[Illustration: the V-2 engine]

If the magnitude of the oscillation is equal to a, then the motion of these pistons (or of the mass on a spring) will be described by x = a·cos(ω·t + Δ). Needless to say, Δ is just a phase factor which defines our t = 0 point, and ω is the natural angular frequency of our oscillator. Because of the 90° angle between the two cylinders, Δ would be 0 for one oscillator, and –π/2 for the other. Hence, the motion of one piston is given by x = a·cos(ω·t), while the motion of the other is given by x = a·cos(ω·t–π/2) = a·sin(ω·t). The kinetic and potential energy of one oscillator – think of one piston or one spring only – can then be calculated as:

K.E. = (1/2)·m·v² = (1/2)·m·a²·ω²·sin²(ω·t + Δ)
P.E. = (1/2)·k·x² = (1/2)·k·a²·cos²(ω·t + Δ)

The coefficient k in the potential energy formula characterizes the restoring force: F = −k·x. From the dynamics involved, it is obvious that k must be equal to m·ω². Hence, the total energy—for one piston, or one spring—is equal to:

E = K.E. + P.E. = (1/2)·m·a²·ω²·[sin²(ω·t + Δ) + cos²(ω·t + Δ)] = (1/2)·m·a²·ω²

Hence, adding the energy of the two oscillators, we have a perpetuum mobile storing an energy that is equal to twice this amount: E = m·a²·ω². It is a great metaphor. Somehow, in this beautiful interplay between linear and circular motion, energy is borrowed from one place and then returns to the other, cycle after cycle. However, we still have to prove this engine is, effectively, a perpetuum mobile: we need to prove the energy that is being borrowed or returned by one piston is the energy that is being returned or borrowed by the other. That is easy to do, but I won't bother you with that proof here: you can double-check it in the referenced post or – more formally – in an article I posted on …

It is all beautiful, and the key question is obvious: if we want to relate the E = m·a²·ω² and E = m·c² formulas, we need to explain why we could, potentially, write c as c = a·ω = a·√(k/m). We've done that already—to some extent at least. The tangential velocity of a pointlike particle spinning around some axis is given by v = r·ω. Now, the radius is given by a = ħ/(m·c), and ω = E/ħ = m·c²/ħ, so v is equal to v = a·ω = [ħ/(m·c)]·[m·c²/ħ] = c. Another beautiful result, but what does it mean? We need to think about the meaning of the ω = √(k/m) formula here. In the mentioned article, we boldly wrote that the speed of light is to be interpreted as the resonant frequency of spacetime, but so… Well… What do we really mean by that? Think of the following.

Einstein's E = m·c² equation implies the ratio between the energy and the mass of any particle is always the same:

E/m = c²

This effectively reminds us of the ω² = C⁻¹/L or ω² = k/m formula for harmonic oscillators. The key difference is that the ω² = C⁻¹/L and ω² = k/m formulas introduce two (or more) degrees of freedom. In contrast, c² = E/m for any particle, always. However, that is exactly the point: we can modulate the resistance, inductance and capacitance of electric circuits, and the stiffness of springs and the masses we put on them, but we live in one physical space only: our spacetime. Hence, the speed of light (c) emerges here as the defining property of spacetime: the resonant frequency, so to speak. We have no further degrees of freedom here.

Let's think about k. [I am not trying to avoid the ω² = 1/(L·C) formula here. It's basically the same concept: the ω² = 1/(L·C) formula gives us the natural or resonant frequency of an electric circuit consisting of a resistor, an inductor, and a capacitor.
Writing the formula as ω² = C⁻¹/L introduces the concept of elastance, which is the equivalent of the mechanical stiffness (k) of a spring, so… Well… You get it, right? The ω² = C⁻¹/L and ω² = k/m formulas sort of describe the same thing: harmonic oscillation. It's just… Well… Unlike the ω² = C⁻¹/L formula, the ω² = k/m formula is directly compatible with our V-twin engine metaphor, because it also involves physical distances, as I'll show you here.]

The k in the ω² = k/m formula is, effectively, the stiffness of the spring. It is defined by Hooke's law, which states that the force that is needed to extend or compress a spring by some distance x is linearly proportional to that distance, so we write: F = k·x. Now that is interesting, isn't it? We're talking exactly the same thing here: spacetime is, presumably, isotropic, so it should oscillate the same in any direction—I am talking about those sine and cosine oscillations now, but in physical space—so there is nothing imaginary here: all is real or… Well… As real as we can imagine it to be. 🙂

We can elaborate the point as follows. The F = k·x equation implies k is a force per unit distance: k = F/x. Hence, its physical dimension is newton per meter (N/m). Now, the x in this equation may be equated to the maximum extension of our spring, or the amplitude of the oscillation, so that's the radius a in the metaphor we're analyzing here. Now look at how we can re-write the c = a·ω = a·√(k/m) equation:

c² = a²·(k/m) = (k·a)·a/m = F·a/m = E/m ⟺ E = m·c²

In case you wonder about the E = F·a substitution: just remember that energy is force times distance. [Just do a dimensional analysis: you'll see it works out.]

So we have a spectacular result here, for several reasons. The first, and perhaps most obvious, reason is that we can actually derive Einstein's E = m·c² formula from our flywheel model. Now, that is truly glorious, I think. However, even more importantly, this equation suggests we do not necessarily need to think of some actual mass oscillating up and down and sideways at the same time: the energy in the oscillation can be thought of as a force acting over some distance, regardless of whether or not it is actually acting on a particle. Now, that energy will have an equivalent mass which is—or should be, I'd say… Well… The mass of our electron or, generalizing, the mass of the particle we're looking at. Huh? Yes. In case you wonder what I am trying to get at, I am trying to convey the idea that the two interpretations—the field versus the flywheel model—are actually fully equivalent, or compatible, if you prefer that term. In Asia, they would say: they are the "same-same but different" 🙂 but, using the language that's used when discussing the Copenhagen interpretation of quantum physics, we should actually say the two models are complementary.

You may shrug your shoulders but… Well… It is a very deep philosophical point, really. 🙂 As far as I am concerned, I've never seen a better illustration of the (in)famous Complementarity Principle in quantum physics because… Well… It goes much beyond complementarity. This is about equivalence. 🙂 So it's just like Einstein's equation. 🙂

Post scriptum: If you read my posts carefully, you'll remember I struggle with those 1/2 factors here and there. Textbooks don't care about them. For example, when deriving the size of an atom, or the Rydberg energy, even Feynman casually writes that "we need not trust our answer [to questions like this] within factors like 2, π, etcetera." Frankly, that's disappointing. Factors like 2, 1/2, π or 2π are pretty fundamental numbers, and so they need an explanation.
So… Well… I do lose sleep over them. :-/ Let me advance some possible explanation here.

As for Feynman's model, and the derivation of electron orbitals in general, I think it's got to do with the fact that electrons do want to pair up when thermal motion does not come into play: think of the Cooper pairs we use to explain superconductivity (so that's the BCS theory). The 1/2 factor in Schrödinger's equation also has weird consequences (when you plug in the elementary wavefunction and do the derivatives, you get a weird energy concept: E = m·v², to be precise). This problem may also be solved when assuming we're actually calculating orbitals for a pair of electrons, rather than orbitals for just one electron only. [We'd get twice the mass (and, presumably, twice the charge), so… Well… It might work—but I haven't done it yet. It's on my agenda—as so many other things, but I'll get there… One day. :-)]

So… Well… Let's get back to the lesson here. In this particular context (i.e. in the context of trying to find some reasonable physical interpretation of the wavefunction), you may or may not remember (if not, check my post on it) that I had to use the I = m·r²/2 formula for the angular momentum, as opposed to the I = m·r² formula. I = m·r²/2 (with the 1/2 factor) gives us the angular momentum of a disk with radius r, as opposed to a point mass going around some circle with radius r. I noted that "the addition of this 1/2 factor may seem arbitrary"—and it totally is, of course—but so it gave us the result we wanted: the exact (Compton scattering) radius of our electron. Now, the arbitrary 1/2 factor may or may not be explained as follows. In the field model of our electron, the force is linearly proportional to the extension or compression. Hence, to calculate the energy involved in stretching it from x = 0 to x = a, we need to calculate it as the following integral:

E = ∫ F·dx = ∫ k·x·dx (integrating x from 0 to a) = (1/2)·k·a²

So… Well… That will give you some food for thought, I'd guess. 🙂 If it racks your brain too much—or if you're too exhausted by this point (which is OK, because it racks my brain too!)—just note we've also shown that the energy is proportional to the square of the amplitude here, so that's a nice result as well… 🙂

Talking of food for thought, let me make one final point here. The c² = a²·k/m relation implies a value for k which is equal to k = m·c²/a² = E/a². What does this tell us? In one of our previous posts, we wrote that the radius of our electron appeared as a natural distance unit. We wrote that because of another reason: the remark was triggered by the fact that we can write the c/ω ratio as c/ω = a·ω/ω = a. This implies the tangential and angular velocity in our flywheel model of an electron would be the same if we'd measure distance in units of a. Now, the E = k·a² = F·a relation (just re-writing F = k·a for the maximum extension) implies that the force is proportional to the energy—F = k·x = (x/a)·(E/a)—and the proportionality coefficient is… Well… x/a. So that's the distance measured in units of a. So… Well… Isn't that great? The radius of our electron appearing as a natural distance unit does fit in nicely with our geometric interpretation of the wavefunction, doesn't it? I mean… Do I need to say more?

I hope not because… Well… I can't explain any better for the time being. I hope I sort of managed to convey the message. Just to make sure, in case you wonder what I was trying to do here, it's the following: I told you c appears as a resonant frequency of spacetime and, in this post, I tried to explain what that really means.
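Here, by the way, is a quick numerical sanity check of the key relations for the electron, using CODATA-style values (just a sketch I am adding, nothing more):

```python
hbar = 1.054571817e-34    # J*s
m = 9.1093837015e-31      # electron mass, kg
c = 2.99792458e8          # m/s

a = hbar / (m * c)        # the amplitude: reduced Compton radius
print(a)                  # ~3.86e-13 m
omega = m * c**2 / hbar   # omega = E/hbar
print(a * omega / c)                      # 1.0: tangential velocity a*omega = c
print(m * a**2 * omega**2 / (m * c**2))   # 1.0: E = m*a^2*omega^2 = m*c^2
```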
I'd appreciate it if you could let me know if you got it. If not, I'll try again. 🙂 When everything is said and done, one only truly understands stuff when one is able to explain it to someone else, right? 🙂 Please do think of more innovative or creative ways if you can! 🙂

OK. That's it but… Well… I should, perhaps, talk about one other thing here. It's what I mentioned in the beginning of this post: this analysis assumes we're looking at our particle from some specific direction. It could be any direction but… Well… It's some direction. We have no depth in our line of sight, so to speak. That's really interesting, and I should do some more thinking about it. Because the direction could be any direction, our analysis is valid for any direction. Hence, if our interpretation happens to be true—and that's a big if, of course—then our particle has to be spherical, right? Why? Well… Because we see this circular thing from any direction, so it has to be a sphere, right? Well… Yes. But then… Well… While that logic seems to be incontournable, as they say in French, I am somewhat reluctant to accept it at face value. Why? I am not sure. Something inside of me says I should look at the symmetries involved… I mean the transformation formulas for the wavefunction when doing rotations and stuff. So… Well… I'll be busy with that for a while, I guess. 😦

Post scriptum 2: You may wonder whether this line of reasoning would also work for a proton. Well… Let's try it. Because its mass is so much larger than that of an electron (about 1836 times), the a = ħ/(m·c) formula gives a much smaller radius: 1836 times smaller, to be precise, so that's around 2.1×10⁻¹⁶ m, which is about 1/4 of the so-called charge radius of a proton, as measured by scattering experiments. So… Well… We're not that far off, but… Well… We clearly need some more theory here. Having said that, a proton is not an elementary particle, so its mass incorporates other factors than what we're considering here (two-dimensional oscillations).
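A quick check of that estimate too (again a sketch with standard constants):

```python
hbar = 1.054571817e-34    # J*s
m_p = 1.67262192369e-27   # proton mass, kg
c = 2.99792458e8          # m/s

print(hbar / (m_p * c))   # ~2.1e-16 m: roughly a quarter of the measured
                          # proton charge radius of ~0.84e-15 m
```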
The Wolfram Solution for Chemistry

The Wolfram chemistry solution offers a complete suite of tools for analytical, physical, organic, and inorganic chemistry, including high-powered data analysis, interactive visualization and automatic reporting—all in one system. Curated chemical and scientific data is built into Mathematica alongside highly automated computation for ease and accuracy of calculations.

• Access physical and safety properties of chemicals in your laboratory using built-in chemical data
• Calculate path-dependent and path-independent quantities such as entropy, free energy, chemical potential and more
• Simulate mass transport and chemical kinetics such as electrochemical reactions
• Solve the time-independent Schrödinger equation in terms of wavefunctions and their eigenvalues, and tackle other applications in quantum chemistry
• Solve coupled nonlinear differential equations for chemical kinetics modeling
• Interactively visualize molecular structures of biochemical compounds

[Figure: generating the solution of a wave function and its corresponding energy for an isotropic 3D harmonic oscillator]
[Figure: illustrating the chirality of substituted methanes]

Does your current tool set have these advantages?

• Built-in chemical, element, lattice and isotope data, ready for use without preprocessing (unique to Wolfram technologies)
• Integrated automatic report generation to document tasks you perform in Mathematica and instantly generate reports with live graphics, text, and executable code
• Import or acquire data, perform statistical analyses and visualize results in one system instead of across several applications
• Highly optimized superfunctions with automated algorithm selection to get accurate results quickly—sometimes switching algorithms mid-calculation for further optimization. Non-Wolfram computation systems make you analyze your equations manually to determine which function to apply: for example, where in the Wolfram Language you use NDSolve, in MATLAB you must correctly choose among obscurely named algorithms like ode45, ode23, ode113, ode15s, bvp4c, pdepe and so on, or risk wrong answers
• Create interactive tools for designing chemical instrumentation, curve fitting or data analysis that provide visual feedback to make debugging and testing of innovative instrumentation easier (unique to Wolfram technologies)
• Integrated environment for chemical kinetic modeling, statistical analysis, optimization and generation of interactive reports and applications
• Accurate solving of highly nonlinear problems in transport phenomena and other areas with built-in arbitrary-precision numerics and automatic precision control

[Figure: comparing chemical properties using built-in data]
[Figure: building an interactive decay chain browser to trace the radioactive decays of nearly all known nuclear species]

Organizations using Wolfram technologies: 3M, Agilent Technologies, Alcon Research, Chevron Corporation, DuPont, Exxon Mobil, Los Alamos National Laboratory.
Say I have a molecular wavefunction as a set of molecular orbitals and want to calculate the molecule's dipole moment, but don't know how! I searched a lot but couldn't find any practical example. $$\psi=\sum ^N_{i=1}C_i\mathrm e^{-\alpha _ir^2}$$

• You could use the variational theorem to determine the molecular orbital coefficients of the molecular orbitals (one-electron wavefunctions). If you let your molecular wavefunction be a linear superposition of basis atomic wavefunctions $\Psi =\sum c_i\psi_i$, with the orbital coefficients you can understand key properties of your molecule. You can rationalise trends in bond polarity too, which can't be explained with other theories! (May 9 '15 at 1:19)

The necessary formal derivation has already been nicely done by AngusTheMan. I'll start from the last equation: $$ \langle \mu_{z} \rangle = \langle \Psi | \hat{\mu}_{z} | \Psi \rangle $$ where $\Psi$ is the variational wavefunction; it can be any molecular state. It's important that it's variational, otherwise the expectation value approach is not exact. So, this works for SCF, CI, and MCSCF wavefunctions, but extra derivatives need to be taken for Møller-Plesset and coupled cluster wavefunctions. More work needs to be done for multideterminantal wavefunctions like CI and MCSCF, but the complexity is no different for a single state in each wavefunction. There may be some MO space partitioning I'm neglecting that's required for MCSCF, so I'll restrict my work to a single-determinantal wavefunction. Expand the wavefunction as a linear combination of molecular orbitals (MOs) $$ \Psi = \sum_{i} \psi_{i}, $$ where each molecular orbital is a linear combination of atomic orbitals (AOs) $$ \psi_{i} = \sum_{\mu} C_{\mu i} \phi_{\mu}, $$ where $C_{\mu i}$ is the MO coefficient matrix, so our expectation value now looks like this: $$ \langle \mu_{z} \rangle = \sum_{i}^{\textrm{occ MOs}} \sum_{\mu\nu}^{\textrm{AOs}} C_{\mu i} C_{\nu i} \langle \phi_{\mu} | \hat{\mu}_{z} | \phi_{\nu} \rangle. $$ The indices $\mu,\nu$ run over all AOs, and the index $i$ runs over the occupied MOs. There's only one MO index because this is a one-electron operator. I'm also neglecting any complex values here, since we almost always work with real-valued AOs and MO coefficients. We do one last rearrangement. Replace the MO coefficients with the density matrix $$ P_{\mu\nu} = \sum_{i}^{\textrm{occ MOs}} C_{\mu i} C_{\nu i} $$ to give the first explicit "working equation": $$ \langle \mu_{z} \rangle = \sum_{\mu\nu}^{\textrm{AOs}} P_{\mu\nu} \langle \phi_{\mu} | \hat{\mu}_{z} | \phi_{\nu} \rangle $$ I say first for two reasons: we usually try to avoid explicit loops like this, and the expression can be broken down further, depending on what molecular properties are of interest; I'll be more clear about this later. Inside the sum there are two terms:

• The density matrix $P_{\mu\nu}$, which comes from a converged SCF calculation.
• The integral of the dipole operator over two basis functions, $\langle \phi_{\mu} | \hat{\mu}_{z} | \phi_{\nu} \rangle$. Atomic orbitals are represented as atom-centered basis functions. These can be calculated once and at any time, since the quantities here don't change over the course of a calculation.

Each of these terms is represented as a matrix.
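Concretely, the working equation is a one-line contraction once the two matrices are in hand. A minimal numpy sketch (illustrative names; `P` and `M_z` would come from a converged SCF run and an integral engine, respectively):

```python
import numpy as np

def dipole_z_expectation(P, M_z):
    """<mu_z> = sum_{mu,nu} P[mu,nu] * <phi_mu|mu_z|phi_nu> (atomic units)."""
    nbf = P.shape[0]
    # Explicit-loop form, mirroring the working equation above...
    total = 0.0
    for mu in range(nbf):
        for nu in range(nbf):
            total += P[mu, nu] * M_z[mu, nu]
    # ...which is equivalent to a single vectorized contraction:
    assert np.isclose(total, np.einsum("uv,uv->", P, M_z))
    return total
```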
Since the index $\mu$ runs along the rows and $\nu$ runs along the columns for each matrix, "contraction" involves either a matrix product followed by the trace, or an elementwise product followed by an accumulation sum over all matrix elements. There are other details one needs to be careful about, such as what units the result should be in, which changes the prefactor (programs work internally in atomic units), and what the origin for the dipole operator is, but that's really it.

Well, sort of. I'm actually treating some of the program internals as a black box. If you're familiar with Hartree-Fock, it should be clear where $P_{\mu\nu}$ comes from, but what about the integral? For a general expectation value $\langle A \rangle$ with its corresponding operator, where does $\langle \phi_{\mu} | \hat{A} | \phi_{\nu} \rangle$ come from? If it's already available in the code, then you call a wrapper function that then calls the integral engine to do all the nasty work, and you get back a tidy matrix without having to worry about the details. If $\langle \phi_{\mu} | \hat{A} | \phi_{\nu} \rangle$ isn't present, depending on the complexity of $\hat{A}$, there can be a non-trivial amount of derivation required for the working integral equation, followed by the implementation.

Ignoring any possible contraction of primitive basis functions, expand $\langle \phi_{\mu} | \hat{A} | \phi_{\nu} \rangle$ using the definition of $\phi$ in Cartesian coordinates: $$ \phi(\mathbf{r}; \mathbf{A}, \mathbf{a}, \zeta) = (x-A_x)^{a_x} (y-A_y)^{a_y} (z-A_z)^{a_z} e^{-\zeta |\mathbf{r} - \mathbf{A}|^2} $$ where $\mathbf{r} = (x, y, z)$ is the electron position, $\mathbf{A} = (A_x, A_y, A_z)$ is the position of the basis function (almost always atom-centered), and $\mathbf{a} = (a_x, a_y, a_z)$ are the angular momenta for each coordinate, with $l = a_x + a_y + a_z$ the total angular momentum of the basis function. $(0,0,0)$ is an s-function, $(1,1,0)$ and $(0,0,2)$ are d-functions, and so on. Since this is a one-electron operator, both basis functions share the same electron coordinate, and writing the integral more explicitly gives $$ \langle \phi_{\mu} | \hat{A} | \phi_{\nu} \rangle = \int d\mathbf{r} \left[ (x-A_x)^{a_x} (y-A_y)^{a_y} (z-A_z)^{a_z} e^{-\zeta_a |\mathbf{r} - \mathbf{A}|^2} \right] \hat{A} \left[ (x-B_x)^{b_x} (y-B_y)^{b_y} (z-B_z)^{b_z} e^{-\zeta_b |\mathbf{r} - \mathbf{B}|^2} \right] $$ Before going any further, $\hat{A}$ must be defined. If $\hat{A} = 1$, this becomes an overlap integral. The dipole operator in the z-direction is given by $\hat{A} = \hat{\mu}_{z} = -ez = -e(z - C_z)$, where $z$ is the integration coordinate and $C_z$ is the origin of the dipole in the z-direction, usually taken to be zero. Everything is kept in atomic units until after the integral/density contraction, so drop the prefactor $-e$. We can now generalize this to an arbitrary Cartesian multipole moment operator, $$ \hat{A} = \mathfrak{M}(\mathbf{r}) = (x - C_x)^{c_x} (y - C_y)^{c_y} (z - C_z)^{c_z} $$ where $(c_x, c_y, c_z)$ determine the coordinate of each multipole, and their sum is the total multipole order; for example, $(1,0,0), (0,1,0), (0,0,1)$ are the x-, y-, and z-directions of the dipole operator. The Cartesian moment operator looks just like a Gaussian basis function with $\zeta = 0$. Once the form of an operator has been derived, it needs to be implemented as part of an integral package, each of which implements one or more algorithms for computing integrals.
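For the simplest case—two unnormalized s-type primitives—both integrals have closed forms via the Gaussian product theorem: the product of the two Gaussians is a single Gaussian centered at $\mathbf{P} = (\zeta_a \mathbf{A} + \zeta_b \mathbf{B})/(\zeta_a + \zeta_b)$, and the moment integral simply picks up the factor $(P_z - C_z)$. A sketch (a toy instance, not production integral code; general angular momenta need the recursion schemes discussed next):

```python
import numpy as np

def s_overlap(za, A, zb, B):
    """<s_A|s_B> for unnormalized s-type Gaussians exp(-zeta*|r-X|^2)."""
    p = za + zb
    AB2 = float(np.sum((np.asarray(A) - np.asarray(B)) ** 2))
    return (np.pi / p) ** 1.5 * np.exp(-za * zb * AB2 / p)

def s_dipole_z(za, A, zb, B, Cz=0.0):
    """<s_A|(z - Cz)|s_B>: the product Gaussian's center carries the moment."""
    p = za + zb
    Pz = (za * A[2] + zb * B[2]) / p
    return (Pz - Cz) * s_overlap(za, A, zb, B)

# Two s primitives one bohr apart along z, dipole origin at 0:
A = np.array([0.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0])
print(s_dipole_z(0.5, A, 0.8, B))
```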
Each algorithm is named after the authors of the paper in which it was introduced, and is usually abbreviated. For example, the first one I know of is the Taketa, Huzinaga, O-Ohata paper (THO, DOI: 10.1143/JPSJ.21.2313), where explicit working equations are given for 2-center overlap, 2-center kinetic energy, 2-center electron-nuclear attraction, and 4-center electron repulsion integrals. A working implementation can be found in the PyQuante package. I made an IPython notebook translation of the code snippets on the front page of the official documentation. Other, more complex algorithms are from the Pople-Hehre (PH), McMurchie-Davidson (MD, DOI: 10.1016/0021-9991(78)90092-X), Obara-Saika (OS, DOI: 10.1063/1.450106), Dupuis-Rys-King (DRK), and Head-Gordon-Pople (HGP) papers. I'm sure I'm neglecting some, including the seminal paper by Boys which introduced the use of Gaussian functions as a substitute for Slater-type functions in basis sets. A good review of these algorithms is found in a Peter Gill paper (DOI: 10.1016/S0065-3276(08)60019-2); he is the original author of the integral code in both Gaussian and Q-Chem. To bring these things full circle, I wrote some code a few months ago to calculate the dipole moment using pyquante2 and a wrapper that calls an implementation of the Obara-Saika recursive integral algorithm. You can find it here with some comparisons to "industrial" quantum programs.

• Thanks all. I really recommend reading the Obara-Saika paper if you're at all interested in integral evaluation. The DALTON package (not sure what the primary integral algorithm is) has an impressive list of one-electron integrals that can be calculated, many of which correspond directly (expectation value times some constants) to molecular properties. (Sep 20 '15 at 1:29)

The dipole moment $\mu$ of a molecule is a measure of charge distribution in the molecule and the polarity formed by the nuclei and electron cloud. We can perturb our system with an external electric field $\vec E$ and gauge the response of the electron cloud and nuclei by the polarisability, i.e. how much the dipole moment changes. In practice the nuclei might be so heavy that their motion is not perturbed, while the electrons, being light, are very mobile. If we imagine that the external field is caused by some other species, and that it itself is not changing, we can treat it as a constant external electric field $\vec E$, at least over the volume of the molecule we are considering. Imagine for arbitrary bookkeeping that we point it down the $z$ axis. We could also investigate how the dipole moment changes with bond vibrations to discuss IR spectroscopy, or whether the polarisability changes during a vibration, to give Raman spectroscopy. We can use perturbation theory to expand the wavefunction and the molecular energy in terms of small perturbations of the field. We start by Taylor expanding the energy and molecular wavefunction in terms of the electric field, which acts as the perturbation parameter.
\begin{equation} E(\vec E)=E^0+\bigg(\frac{\partial E}{\partial \vec E}\bigg)_0\vec E+\bigg(\frac{\partial ^2E}{\partial \vec E^2}\bigg)_0\frac{\vec E^2}{2!}+\bigg(\frac{\partial ^3E}{\partial \vec E^3}\bigg)_0\frac{\vec E^3}{3!}+\dots \end{equation} \begin{equation} \psi(\vec E)=\psi^0+\bigg(\frac{\partial \psi}{\partial \vec E}\bigg)_0\vec E+\bigg(\frac{\partial ^2\psi}{\partial \vec E^2}\bigg)_0\frac{\vec E^2}{2!}+\bigg(\frac{\partial ^3\psi}{\partial \vec E^3}\bigg)_0\frac{\vec E^3}{3!}+\dots \end{equation} We use the notation that the wavefunction derivatives are given by: \begin{equation} \bigg(\frac {1}{i!}\bigg)\frac{\partial ^i\psi}{\partial \vec E^i}=\psi ^{(i)} \end{equation} The Hamiltonian for such a system under the influence of an electric field in the $z$ direction is \begin{equation} \hat H(\vec E)=\hat H^0-\vec E\hat \mu _z \end{equation} where $\hat \mu _z$ is the dipole moment operator, a summation over the charges of the nuclei and electrons in the molecule. By the Hellmann-Feynman theorem, writing $\hat H(\vec E)=\hat H^0 +\hat H^1(\vec E)$, \begin{equation} \frac{dE}{d\vec E}=\bigg\langle \frac{d\hat H}{d\vec E}\bigg\rangle=\bigg\langle \frac{d(-\hat\mu _z\vec E)}{d\vec E}\bigg\rangle \end{equation} The time-independent Schrödinger equation is now \begin{equation} \hat H(\vec E)\psi(\vec E)=E(\vec E)\psi (\vec E) \end{equation} with an energy \begin{equation} E(\vec E)=\big \langle \psi (\vec E)\big|\hat H(\vec E)\big|\psi (\vec E)\big \rangle=\big\langle \psi ^{(0)}+\psi ^{(1)}\vec E+\psi ^{(2)} \vec E^2+ \dots \big|\hat H^0-\vec E\hat \mu _z \big|\psi ^{(0)}+\psi ^{(1)}\vec E+\psi^{(2)}\vec E^2+\dots \big \rangle \end{equation} With a little algebra, use of $E^{(0)}=\langle \psi ^{(0)}|\hat H^0 |\psi ^{(0)}\rangle$, and the Hermitian properties of the Hamiltonian, \begin{equation} E(\vec E)=E^{(0)}+2\vec E\big \langle \psi ^{(1)}\big|\hat H^{0}\big|\psi ^{(0)}\rangle -\vec E\big\langle \psi ^{(0)}\big|\hat \mu _z\big|\psi ^{(0)}\big \rangle +\mathcal O(\vec E^2) \end{equation} Using the zeroth-order Schrödinger equation $\hat H^0\psi^{(0)} =E^{(0)}\psi^{(0)}$ and pulling the scalar energy out of the integral, \begin{equation} \big\langle \psi ^{(1)}\big|\hat H^0\big|\psi ^{(0)}\rangle =E^{(0)}\big\langle \psi ^{(1)}\big|\psi ^{(0)}\big \rangle \end{equation} Since $\langle \psi ^{(1)} |\psi ^{(0)} \rangle=0 $ (intermediate normalization), \begin{equation} E(\vec E)=E^{(0)}-\vec E\big \langle \psi ^{(0)}\big|\hat \mu _z\big|\psi ^{(0)}\big\rangle \end{equation} Therefore the expectation value of the dipole moment along the $z$ axis for a molecular state $\psi ^{(0)}$ is \begin{equation} \langle \mu _z\rangle =\big\langle \psi ^{(0)}\big|\hat \mu _z\big|\psi ^{(0)}\big\rangle \end{equation} To understand the strength of the interaction that causes the transition between the states $\psi ^{(0)}$ and $\psi ^{(1)}$ we use the transition dipole moment, which is basically the same quantity except that a wavefunction from each of the two states involved, initial and final, appears in the bracket. If you were to repeat this process but retain higher orders (messy!) you would get, at second order, the polarisability of the molecule, which in essence is the susceptibility of the electron cloud to change with respect to an external electric field (i.e., how the dipole moment changes). Third order would give the hyperpolarisability, and so on.
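This first-derivative relation is also the basis of the "finite-field" recipe often used to check analytic dipole moments in practice: run the calculation at small ±F field strengths and take a centered difference, $\langle\mu_z\rangle \approx -[E(+F)-E(-F)]/2F$. A minimal sketch, assuming a hypothetical `energy(F)` function that returns the variational energy with $-F\hat\mu_z$ added to the Hamiltonian (atomic units):

```python
def dipole_finite_field(energy, F=1e-4):
    """Centered-difference estimate of <mu_z> = -dE/dF at zero field."""
    return -(energy(+F) - energy(-F)) / (2.0 * F)

# Toy check with a model energy E(F) = E0 - mu*F - alpha*F**2/2;
# the quadratic (polarisability) term cancels in the centered difference.
E0, mu, alpha = -76.0, 0.73, 9.9
print(dipole_finite_field(lambda F: E0 - mu * F - 0.5 * alpha * F**2))  # 0.73
```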
As I said, you could also approach this from a really different angle by interpreting the molecular orbital diagram and using computational chemistry (the variational principle, etc.) to find the molecular orbital coefficients! That would give you a good idea of what is going on!

• Hrm. This is a quite thorough formal elaboration of the theory involved, but it doesn't actually describe how one would practically implement the dipole moment calculation. (At least, I am no closer to understanding how I would code a dipole moment calculation given a set of MO coefficients and basis functions.) – hBy2Py Sep 3 '15 at 11:57
• This whole derivation is unnecessary. By definition, the dipole moment is the expectation value of the dipole moment operator with the given wavefunction. – Greg Sep 3 '15 at 14:38
• Thank you both for your comments, I will take this into account and update my answer shortly. I agree with @Brian, this answer does not give a non-specialist sufficient information on how to perform calculations to obtain values for molecules etc. However, I do not feel it is obvious that the dipole moment is the expectation value to someone who is new to this material and not as knowledgeable as others. It is my experience that undergrads can struggle in this area if they can not see where something comes from. That's why I always like to give people a little background material. :) – Sep 3 '15 at 15:39
• @Greg For those like myself (background in chemical engineering, and not applied mathematics / quantum physical chemistry; but with an interest in a general understanding of the inner workings of quantum computation) it is perhaps not quite as sad...? I very much appreciate the efforts of both AngusTheMan and pentavalentcarbon to lay out the details. Also, frankly: isn't this sort of exposition the entire purpose of StackExchange? – hBy2Py Sep 22 '15 at 13:56
• @Greg Practical quantum computation was the topic of the original question -- so, no, it's not at all off-topic. Also, if you're looking for mathematically rigorous developments from axiom to theorem or whatever, you're on the wrong SE site. There are Math.SE and MathOverflow for that. – hBy2Py Sep 22 '15 at 14:11

An often used approach, especially for semi-empirical methods, is to use a set of atom-centered charges (most often the Mulliken charges) to calculate the dipole moment. In that case, the molecular dipole moment is given by: $\vec{\mu} = \sum_a q_a\,\vec{r}_a$ where $\vec{r}_a$ is the position of atom $a$ and $q_a$ its charge. Note that if the system has a net charge, this expression depends on the position of the molecular system relative to the origin of the coordinate space. Most programs use the center of mass or the center of nuclear charge as the origin in such cases.
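A sketch of that point-charge formula (assuming per-atom charges from a population analysis are already available; the values below are made up for illustration):

```python
import numpy as np

def dipole_from_charges(q, r):
    """mu = sum_a q_a * r_a for atom-centered point charges.

    q: per-atom charges (a.u.); r: (natoms, 3) positions (bohr).
    Origin-independent only when sum(q) == 0 (neutral system).
    """
    return np.asarray(q) @ np.asarray(r)  # -> length-3 dipole vector

# Toy two-center example: charges of +/-0.18 e separated by 2.4 bohr along z.
print(dipole_from_charges([+0.18, -0.18],
                          [[0.0, 0.0, 0.0], [0.0, 0.0, 2.4]]))
```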
Notes on "A New Kind of Science" 12 July 2002 This page contains my notes from reading Stephen Wolfram's "A New Kind of Science". The notes are generally arranged in the order of the book, with the exception that common themes are brought out in a separate section, and with the proviso that the notes are read hand-in-hand with the main text. Please note that I have put these pages together in my spare time, without access to a reference library—so my resources are limited. This page contains material from Stephen Wolfram's "A New Kind Of Science", used without permission under fair use provisions. Text in green has been added since the original version of this page. [Nov 2004] The book is now generously available online, so I've tried to add the appropriate hyperlinks below. Common Themes This section contains notes on themes which occur throughout the book. Complex Behavior from Simple Systems, or, Chaos != Sensitive Dependence on Initial Conditions In the first half of the book, Wolfram is at great pains to point out that one "his discoveries" is the observation that simple programs can generate complex behavior. He tells us that this breaks an age-old assumption that if you observe complex behavior in a system, it must be a complex system. In fact, he continues to tell us this several times in every chapter of the book—I counted at least 35 places where he reinforces this point, usually accompanied by a declaration that this is his discovery. However, this is not a novel observation. The most obvious area of science where an almost identical observation has been made, and loudly, for over twenty years is chaos theory. The chaos theory version of this observations is that simple (nonlinear) systems can generate complex behavior. To quote the last sentence from the important survey article by Robert May back in 1976: ". . we would all be better off if more people realised that simple nonlinear systems do not necessarily possess simple dynamical properties". So how can Wolfram pretend to claim that he has discovered this phenomenon? One key factor is that he either utterly misunderstands or utterly misrepresents the basics of chaos theory. Throughout the book, he equates chaos theory with the phenomenon of sensitive dependence on initial conditions (SDIC) and nothing else. Further he claims that any randomness that occurs in a chaotic system is purely a consequence of the randomness in the least significant digits of the initial condition (p149-155, p304-314). Now, I wouldn't argue that SDIC is a key component of chaos theory, but it is not the only component. Selecting half a dozen books on chaos theory does give half a dozen slightly different definitions of chaos, but in none of them is SDIC given as the entire definition. Now we turn to Wolfram's claim that all randomness in a chaotic system is produced from the fine detail of the initial condition (essentially, claiming that every chaotic system is just an instance of the shift map). Consider any dissipative chaotic system (such as the Lorenz equations, or the Rössler attractor). Because the system is dissipative, this means that all details of the initial condition are lost in the long time limit—and yet the long time behavior is still chaotic and random-seeming. So where is the randomness coming from, if not from the initial conditions? Essentially, the underlying geometry of the nonlinear equations generates a strange attractor as the limiting behavior of the system, which is of zero measure and which has a fractal structure. 
This geometrically fractal structure in turn generates the apparently random dynamical behavior. To take a simple example, consider iterations of the logistic map at a=4, starting from an initial condition of 1/8=0.125. This initial condition clearly does not have any randomness built into it (as it is a rational). But if we iterate this initial condition, we nonetheless see random-looking, chaotic behavior:

[Figure: iteration of the logistic map at a=4 with initial condition 0.125]

It might be plausible that the erratic behavior seen is just an artefact of the numerical iteration scheme used, with rounding errors being magnified at each iteration. However, the logistic map at a=4 has an analytical solution, x_n = sin²(2ⁿ arcsin √x₀) (which Wolfram gives on p1098)—and so no iteration was involved in generating this diagram. So netting all of this down, Wolfram's randomness and complex structured behavior in cellular automata seem like just another example of a nonlinear chaotic system, albeit one that is easier to simulate than a fully-fledged numerical system.

Mathematics and Predictive Science

Throughout the book, Wolfram raises concerns about the ubiquity of mathematical descriptions of systems. Mathematics is useful precisely because it allows large scale summarization of a system, in a manner which may admit prediction of the behavior of that system. This is why mathematics goes hand in hand with science, for the key hallmark of science is that it yields verifiable predictions, and mathematical models allow this. Wolfram attempts to replace the summarization of a system with a differential equation, with a more vague summarization of a system as a simple iterated rule. However, he himself admits that these rules do not admit prediction of their behavior in advance—other than just by running them. This is not to say that descriptive science is completely worthless; however, it is definitely a second class citizen behind predictive science (a quote from Rutherford about this distinction: "All science is either physics or stamp collecting").

General Notes

p849, "Writing Style": Wolfram is right, starting sentences and paragraphs with conjunctions is indeed annoying.
p849, "Clarity and modesty": This paragraph is confusing personal modesty with modesty of ideas. If Wolfram believes that a particular idea is of huge import, then I have no problems with him expressing it so. However, I have concerns about his implicit claims that all of the ideas are his own personal invention—which they are not, or even close to.
p851, "Using color": Wolfram makes the startling claim that "it is easier to assimilate detailed pictures if they are just in black and white". While it is true that using different colors to present a continuous ordering is often misleading (see for example p153-154 of Tufte [1983]), there is plenty of evidence that using color to distinguish distinct features is very useful (see for example p52-54 of Tufte [1990]).
p852-853, "Notation": Personally, I find it extremely annoying that Wolfram refuses to use standard mathematical notation when he is describing standard mathematical systems. Much as I like Mathematica, there is no way I can justify the huge price involved in owning a copy now that I am no longer a student or an academic.

Chapter 1: The Foundations for a New Kind of Science
p860, "The Role of Logic": Logic was most famously viewed as a possible representation of human thought by Boole; however, it was viewed somewhat differently even by the late 1800s. For example, although the title of Frege's Begriffsschrift refers to "pure thought", in its very first paragraph: "we can inquire, on the one hand, how we have gradually arrived at a given proposition and, on the other, how we can finally provide it with the most secure foundation [...] the second is more definite".
p7, paragraph 2: The book does not really show a "vast range" of abstract systems that have not been considered before; it actually concentrates on one particular class of abstract systems, and some members of this class have already been studied (for example, Turing machines (p78), substitution systems (p82), register machines (p97) have all been studied, and the whole class has points in common with iterated difference equations). It is fair to say that Wolfram comes at these systems from a different direction than existing work, though.
p13, "Chaos Theory": This section only describes a single aspect of what chaos theory is about.
p14-15, "Experimental Mathematics": There are lots of "exceptions" to Wolfram's claim that only systems that have already been investigated by other mathematical means have been subject to experimental mathematics. The whole fields of cellular automata, genetic algorithms, neural networks, simulated evolution have all made use of experimental mathematics on previously little examined problems.
p15, "Fractal Geometry": I find this description of fractal geometry slightly misleading; the word "nested" implies a regularity about the detail found when you change scale. However, fractal geometry is commonly applied to shapes that have no such regular structure (for example, the coastline of Britain—see page 1 of Mandelbrot [1982]).
p15, "Nonlinear Dynamics": Nonlinear dynamics covers an awful lot more than just soliton theory, including bifurcation theory, limit behavior classification, symbolic dynamics, routes to chaos, time series analysis, etc. See Guckenheimer & Holmes [1983], Drazin [1992], Temam [1988].

Chapter 2: The Crucial Experiment

p25-26: It's a shame that Wolfram relegates the correct name (Sierpinski gasket, see p142 of Mandelbrot [1982]) for this shape to the notes (p870).
p865, right hand column: Wolfram clearly doesn't keep up to date on his knowledge of the C language—his CA program is written in a pre-ANSI style (with Algol-like function declarations, see e.g. p239 of Harbison & Steele [1991]) that was deprecated in 1989 (and is not valid as a C++ program).

Chapter 3: The World of Simple Programs

General: This chapter has naggingly imprecise terminology. Wolfram gives no indication of what his criteria are for considering behavior to be "complex", nor for considering rule sets to be "simple".
p57: Netting down the classification that Wolfram describes, I think the set of 256 possible behaviors is as follows.
p65, last paragraph: Personally, I would dispute that the "vast range" of systems that Wolfram presents in this chapter are all "utterly different".
p889, "History": I find Wolfram's claim that "in almost no cases has the explicit behavior of simple Turing machines been considered" a little odd, given that he goes on to reference some work in this area. I guess the applications to the halting problem don't count as "explicit" behavior.
p95, paragraph 1: "in some ways even simpler" (example of general theme).
p101, paragraph 6: Adding extra instructions "rarely seems" to have much effect (example of general theme).
p106, paragraphs 2 and 3: Modulo the "typically" and "usually", this claim hugely depends on your definition of "complex". Considering register machines or Turing machines, as their sizes get larger they support more advanced programs—for example, a program to produce prime numbers using the Sieve of Eratosthenes. Is this considered more complex behavior than the interactions of a rule 110 automaton?
p897, "Long halting times": Wolfram's suspicion that there are substitution systems that cannot be proved to reach a fixed point sounds very analogous to the Halting Problem or Gödel's Incompleteness Theorem to me.
p898, "History", second para: "among programming languages Mathematica is almost unique in also having the same feature" that there are no restrictions associated with types. I'm not sure I've understood this correctly, as all of the dynamically typed and functional languages have the same feature (for example, anything in the Lisp family—see Steele [1990] chapter 2).
p898, last sentence: "few truly meaningful computer experiments have ended up ever being done". Utter nonsense. From fractal geometry to chaos theory to artificial life to game theory, lots of computer experiments have been done. If these experiments are not "meaningful", then neither is anything that Wolfram has done.
p899, "History of experimental mathematics", last sentence: Claiming that he is exploring systems that have "never in the past been considered in any way" seems a little odd, given that the notes for this section contain a lot of references to existing work on the systems that he mentions (e.g. p893, p889, p894, p895).

Chapter 4: Systems Based on Numbers

p124, last paragraph: If the operation of addition is broken down into its individual steps, then the carry digits only have local effect at each step—so that a single numerical operation can correspond to just a collection of several CA steps. In this case, I suppose it is unsurprising that numerical systems do not display different behavior from CAs.
p143, penultimate paragraph: "At some level, one can always use symbolic expressions [..] to represent numbers". Incorrect. Counterexample: Chaitin's halting probability Ω cannot be expressed as a symbolic expression (see also p1067).
p919, first paragraph: "typically studies have concentrated on repetition, nesting and sensitive dependence on initial conditions—not on more general issues of complexity". What more general issues of complexity is he referring to? It's hard to think of a simple system that has been more comprehensively studied than iterated maps.
p919, "Problems with computer experiments": It is important to be clear that these problems are known about, and there are ways of addressing them.
p920, paragraph 4: "such randomness cannot in fact be a consequence of the chaos phenomenon". Wrong.
p152, paragraph 4: (Example of general theme.)
p153, paragraph 2: (Example of general theme.)
p161: Wolfram neglects to point out that one of the key ways that partial differential equations are derived is by considering the system as a collection of granular cells, and then calculating the limiting behavior as the cells get smaller (see, for example, the derivation (chapter 6 of Acheson [1990]) of the Navier-Stokes equations in fluid dynamics). This is roughly the same as cellular automata, but the summarization process has the advantage that predictions can be made.
p162, paragraph 1: "as we shall see later in the book, it is certainly not that nature fundamentally follows these abstractions [PDEs]". I eagerly await his evidence for this statement.
p162, paragraph 8: Wolfram's "almost all" and "at least in one dimension" caveats are not enough to save this statement from absurdity. He himself gives three more examples in the notes (p925), and there are others (such as the Ginzburg-Landau equation, of which his nonlinear Schrödinger equation is a special case, or the Korteweg-de Vries equation).
p164, last paragraph: Wolfram makes my point for me, that chaos and randomness is already known to occur in partial differential equations.
p923, "Existence and uniqueness": "PDEs do not have a built-in notion of 'evolution' or 'time'". Except of course for the large classes of PDEs that are normally referred to as "dynamical systems" or, er, "evolution equations".
p167, paragraph 5: While it is true that there has been a lot of unwise trust in numerical approximations, a serious scientific approach will involve cross-checking a variety of different numerical schemes and grid sizes to ensure that any artifacts are in the system, not the numerics, together with theoretical tests.
p924, "Numerical Analysis": Wolfram himself illustrates that the problem of instability in numerical solution of PDEs is well-known and susceptible to analysis, but nevertheless continues to imply that work in this area does not or cannot distinguish between numerical behavior and system behavior.
p924, right hand column, paragraph 2: There are plenty of publications which show highly complex behavior in PDEs.

Chapter 5: Two Dimensions and Beyond

p189/p932, "Dragon curve": This is referred to as the Harter-Heightway Dragon in Mandelbrot [1982] (p66), although Wolfram gives no attribution.
p189, last paragraph: "building up patterns by repeatedly applying geometrical rules is at the heart of so-called fractal geometry". Not true. Famous counterexamples: the Mandelbrot set, or the coastline of Britain. Rule-based fractals are used a lot to illustrate basics of the real heart of fractal geometry, which is the study of curves that are self-similar at different scales (and which have a non-integer Hausdorff-Besicovitch dimension)—and this key idea applies both to rule-generated fractals and to fractals encountered in dynamical systems or in real-world systems. I shall pass over the insulting use of the word "so-called".
p190, last paragraph: Again, I dispute Wolfram's claim that "traditional" fractal geometry only studies patterns that have a purely nested form. There can be no more "traditional" reference for fractal geometry than Mandelbrot [1982], and fractals that do not have a pure nested form definitely appear therein (chapters 5, 9, 10, 11, 13, 14, 15, 16, 19, 20, 21, 23, 24, 25, 27, 28, 32, 34, 35, 36, and 37 all consider fractal behavior that is not purely nested, in some form).
p194-195: In my copy of the book, the arrows in the figures on these pages are pretty much invisible.
p221, last paragraph: "traditional science and mathematics [have a] failure to identify the fundamental phenomenon of complexity". I completely disagree.

Chapter 6: Starting from Randomness

p261, paragraph 1: A somewhat obvious rhetorical question, given that Wolfram has already demonstrated random behavior from a single black cell initial condition.
p262, paragraph 2: This paragraph applies equally well to chaotic systems, particularly in dissipative systems when initial transients are removed.
p955, "Sarkovskii's theorem": This section provides a perfect example of Mathematica notation being much less clear than standard mathematical notation. Compare Wolfram's formulation with the standard formulation (see Sarkovskii [1964] or section 1.10 of Devaney [1989]): Wolfram: if a period m is possible then so must all periods n for which p={m,n} satisfies OrderedQ[(Transpose[If[MemberQ[p/#,1], Map[Reverse, Devaney: if a period m is possible in the ordering below, then so must all periods n later in the ordering 3>5>7>9>...>2.3>2.5>2.7>...>22.3>22.5>... >23.3>23.5>...>23>22>2>1 p959, "2D generalizations": I cannot find the discussion at the end of Chapter 5 that Wolfram refers to, and the index has no entry for entropy in that chapter. p961, "Attractors in systems based on numbers": Minor clarification: the term "limit cycle" usually refers to a purely periodic orbit, in which case the quasiperiodic cycle (a cycle with two irrationally related periods, which fills out a torus) needs to be added to the list of possible attractors. p961, "Attractors in systems based on numbers": "the structure of the [strange] attractor is almost invariably quite simple". Even allowing for the "almost" and lack of definition of "simple", I'd disagree that (say) the Lorenz attractor or the Rössler attractor qualify as simple. Chapter 7: Mechanisms in Programs and Nature p297, paragraph 2:"One of the main discoveries of this book" is misleading. There is lots of background to this discovery; a more accurate statement might be "One of the main observations presented in this book". p298, paragraphs 1-4: This seems extremely vague to me. It boils down to "these things look the same, therefore they are the same", with no evidence presented. p298, paragraph 6: "it suggests that the basic mechanisms responsible for phenomena that we see in nature are somehow the same as those responsible for phenomena that we see in simple programs". Suggests, maybe, but that's not really strong enough to be anything other than a passing observation. p299, figure caption: (Example of general theme.) p968, paragraph 1: "it has normally been assumed that [..] realistically complicated behavior can only ever be obtained if explicit randomness is continually introduced". Complete nonsense. p304, "Chaos Theory and Randomness from Initial Conditions": Frankly, Wolfram's understanding of chaos theory seems to have come from watching Jurassic Park (Example of general theme.) p309, paragraph 3: Wolfram comes close to getting a clue here, and realising that sensitive dependence on initial conditions is not the entire definition of chaos. p309, paragraph 5: I should like to see references for this assertion that accounts of chaos theory are confused over the introduction of randomness in initial conditions. p309, paragraph 5: I don't understand what the problem is with this implicit assumption that "random digit sequences should be almost inevitable among the numbers that occur in practice". Irrational, and indeed normal, numbers are dense among the real numbers, and so it would be surprising if non-random digit sequences turned up in initial conditions. p972, paragraph 1: As a one-time chaos theory specialist, I would disagree with Wolfram's claim that Gleick's book "covers somewhat more than is usually considered chaos theory". I am also at a loss to find where the book mentions Wolfram's work on cellular automata, as he and they are not included in the index. p972, "Recognizing chaos": (Example of general theme.) 
Chapter 7: Mechanisms in Programs and Nature

p297, paragraph 2: "One of the main discoveries of this book" is misleading. There is lots of background to this discovery; a more accurate statement might be "One of the main observations presented in this book".
p298, paragraphs 1-4: This seems extremely vague to me. It boils down to "these things look the same, therefore they are the same", with no evidence presented.
p298, paragraph 6: "it suggests that the basic mechanisms responsible for phenomena that we see in nature are somehow the same as those responsible for phenomena that we see in simple programs". Suggests, maybe, but that's not really strong enough to be anything other than a passing observation.
p299, figure caption: (Example of general theme.)
p968, paragraph 1: "it has normally been assumed that [..] realistically complicated behavior can only ever be obtained if explicit randomness is continually introduced". Complete nonsense.
p304, "Chaos Theory and Randomness from Initial Conditions": Frankly, Wolfram's understanding of chaos theory seems to have come from watching Jurassic Park. (Example of general theme.)
p309, paragraph 3: Wolfram comes close to getting a clue here, and realising that sensitive dependence on initial conditions is not the entire definition of chaos.
p309, paragraph 5: I should like to see references for this assertion that accounts of chaos theory are confused over the introduction of randomness in initial conditions.
p309, paragraph 5: I don't understand what the problem is with this implicit assumption that "random digit sequences should be almost inevitable among the numbers that occur in practice". Irrational, and indeed normal, numbers are dense among the real numbers, and so it would be surprising if non-random digit sequences turned up in initial conditions.
p972, paragraph 1: As a one-time chaos theory specialist, I would disagree with Wolfram's claim that Gleick's book "covers somewhat more than is usually considered chaos theory". I am also at a loss to find where the book mentions Wolfram's work on cellular automata, as he and they are not included in the index.
p972, "Recognizing chaos": (Example of general theme.)
p322, paragraphs 3-6: Wolfram's discussion of mechanism "producing new randomness" in "a much shorter time" and in a way which is "more efficient" is maddeningly unquantifiable.
p329, paragraph 1: Note that the Central Limit Theorem applies much more generally than just to random walks (see e.g. chapter 8 of Grimmett & Welsh [1986]).
p337, last paragraph: That continuous changes can induce non-continuous results is the key observation of catastrophe theory, which sprang to prominence in the early 1970s.
p342, paragraph 4: "to work out what pattern of behavior will satisfy a given constraint usually seems far too difficult for it to be something that happens routinely in nature". Leaving aside the "usually seems" weasel words, this is a statement that is extremely difficult to justify. Just because it is difficult to program a computer to act in accordance with a physical constraint does not mean that the corresponding physical system has any such difficulty. For example, consider water standing in a complex system of pipes. It may be very difficult to calculate exactly where the water will level off and prove that all of the pipes will have the water at the same level—but the water itself has no such problem of calculation. (Part of this disagreement may be due to the limited forms of constraint that Wolfram considers.)
p348, paragraph 4: "often" would be more accurately read as "sometimes".
p351, paragraph 2: The amount of prevarication in this paragraph is remarkable: "so far as I can tell", "more or less", "most plausible", "tends". I'd still disagree; there are a vast number of systems which are usefully expressed in terms of constraints, particularly action principles.
p351, paragraph 3/p985, "Biologically motivated schemes": It is not really the case that natural selection is optimizing for some aspect of form or behavior, or that it is constraint based. Natural selection optimizes fitness, which is in turn driven by the environment of the organism and also the current fitness of all of the other organisms in that environment.
p351, last sentence: (Example of general theme.)
p354-355: This classification of behavior is essentially the same as the classification of possible limit behaviors of ordinary differential equations (see p961).
p356, last paragraph: I remain unconvinced that constraints are "rarely a good explanation for actual repetition that we see in nature".
p357, last paragraph: "to get nesting seems to require that there also be some type of discrete splitting or branching process". This depends on Wolfram's definition of "nested", which appears to change throughout the book. If "nested" just means fractal, then this statement is wrong—there are many fractals which are not formed from any branching process. If "nested" means those specific fractals that are formed by a substitution or branching process, then this specific statement is true (but many other statements throughout the book are then overly restrictive).
p358, paragraph 2: "as we have discovered in this book". Misleading—this implies that Wolfram himself has discovered this, which is of course nonsense.
p359, top diagram: Given that time evolution is running down the page, I would disagree that this cellular automaton is generating nesting—quite the reverse!
p360, paragraph 5: "nesting cannot be forced [..] forced fairly easily by constraints". I disagree. Many differential equations are generated by considering constraints, and in turn these differential equations can often have fractal strange attractors.
p989, "Self-organized criticality", first sentence: Mandelbrot's work in the late 1970s already indicated that nesting does not need fine tuning of parameters. p990, "Structure of algorithms": "until recently even recursion was usually considered rather difficult". This will come as a surprise to the large number of people who have been working with functional languages for the last forty years. p990, "Structure of algorithms": "no doubt the methods of this book will lead to all sorts of algorithms". Personally, I can't see very many potential algorithms arising from cellular automata, as they don't normally solve problems. Chapter 8: Implications for Everyday Systems p363, paragraph 2: "rather little turns out to be known" is a very strong statement, and I personally don't feel that it is justified by the contents of the chapter (particularly when the information in the notes is bourne in mind). p364, paragraphs 2-3: (Example of general theme.) p364, paragraphs 5-6: Wolfram's description of the process of science is a misrepresentation. It is true that experiment is often only compared with the models for a few variables, but Wolfram neglects to point out that these variables are typically sampled many, many times to obtain a time series which can be compared against the continuous behavior of the model. This procedure has considerable mathematical justification. p991, "Models versus experiments": It's hardly news that experimentalists get it wrong sometimes. p367-268: I would be interested to see examples of the process of model complication then simplification that Wolfram describes. p368, paragraph 3: (Example of general theme.) p369, paragraph 1: If, as Wolfram claims, numerical models do not necessarily match the mathematical models that in turn may not match the real system, then similar concerns must also apply to a cellular automata model that has no physical justification other than "it looks right". Surely the best approach is to use the right model for the job—if the underlying processes seem to be discrete and local, then a cellular automata model is appropriate; if the underlying processes involve more global interactions, and continuous ranges of behavior, then a mathematical model is more likely to be appropriate. p370, last paragraph: Aha, a verifiable prediction. p371, last paragraph and p372 paragraphs 4-5: Given that Wolfram claims his CA models get the basic features of snowflake generation right, it would interesting to see how these models would do if more of the complicating issues were included. p373, last paragraph: A verifiable prediction. p992, "Identical snowflakes": Interesting. How does the six-fold symmetry turn up (presumably from the underlying ice crystal lattice structure) and get preserved during snowflake formation? Does make it sound like diffusion-limited aggregation (DLA) models are less convincing. p375: An interesting model, although possibly a little too descriptive to yield verifiable predictions. p376, paragraph 6: One of the key observations of chaos theory was that nonlinear equations can generate random-seeming behavior; as such, the development of chaos theory made it much more plausible that turbulent flow could be a consequence of the nonlinearity of the Navier-Stokes equations. 
p997, "Navier-Stokes equations": While it is true that numerical integrations of the Navier-Stokes equations need to be considered with some caution, it is a little harsh to claim that it is "almost impossible" to distinguish between numerical artifacts and mathematical artifacts (for example, one can use radically different numerical approaches and compare their results—if they have the same features, it is unlikely to be a consequence of numerical artifacts). p999, "History of cellular automaton fluids": As Leo Kadanoff points out, Wolfram neglects to mention earlier work than his own, and also later work that surpasses his own. p377-378: The cellular automata model that Wolfram describes here is essentially the same as how the Navier-Stokes equations are derived. (cf. chapter 6 of Acheson [1990]). p381, paragraph 1: (Example of general theme.) p381, paragraph 2: Wolfram claims that none of the chaotic equations derived from fluid dynamics "have any close connection to realistic descriptions of fluid flow". This would come as a bit of a surprise to some. p382, paragraph 2: Simulation of randomness generation does not need a new cellular automata model; the nonlinear Navier-Stokes equations are quite capable of generating randomness themselves. p382, paragraph 3: The diagram does not look "strikingly similar" to turbulent fluid flow to me. p382, paragraph 7: Personally, I would predict that "remarkably simple programs" that "successfully manage to reproduce the main features of even the most intricate and apparently random forms of fluid flow" will turn out to be . . . numerical integration programs for the Navier-Stokes equations. p998, paragraph 3: As Wolfram points out, in a dissipative system the details of the initial conditions will indeed be damped out—but still chaotic behavior is observed. He nearly gets the point, but not quite. p998, right hand column: Wolfram claims that his CA model of fluid flow is "seems to provide essentially the first reliable global results". Leaving aside the weasel words "seems", "essentially", this is a strong (and unlikely) claim. p1000, paragraph 1: Amusing hissy fit. I wonder to whom he refers here? p1000, "Generalizations of fluid flow": It is also straightforward to generalize the Navier-Stokes equations to these situations. p383, paragraph 5: I enjoy the hubris of Wolfram pointing out that the genetic code for building a human being is just about "as complex as" his own particular software project, Mathematica. I've not seen any evidence of Mathematica producing philosophy, symphonies or new physics yet. p386-387: I disagree with Wolfram's over-simple characterization of evolution as producing optimal solutions to environmental problems. I think it's quite well known that it merely produces solutions that are more optimal than the others that happened to be around at the same time. p388, paragraph 6: I'm not convinced that traditional biological thinking does assume that all complexity in organisms is "carefully crafted to satisfy some elaborate set of constraints", as Wolfram claims (witness the Dawkins-Gould debate on essentially this point). p1002, "Tricks in evolution": Probably also worth mentioning co-evolution and more general arms races. p392, paragraph 1: "natural selection can only operate in a meaningful way on systems whose behavior is in some sense quite simple". No evidence given for this statement. p392, paragraph 4: "with more complex behavior [..] 
p392, paragraph 4: "with more complex behavior [..] it becomes infeasible for any significant fraction of these variations to be explored". a) No-one is claiming that all of the potential variations have been explored; b) complex adaptations can be built up by accretions of simpler adaptations.
p394, paragraph 3: "if natural selection is to be successful [..] it seems what is needed are components that behave in simple and somewhat independent ways". Wolfram presents no evidence for this claim, and besides there are many biological systems that have evolved with extremely complex interlinked components (for example the ATP chain).
p396, paragraphs 4-5: These paragraphs depend hugely on what your definition of "complexity" is. If you count sophistication of function (e.g. the eye) as complexity, then there is no problem with natural selection producing complexity. If, on the other hand, you take complexity to mean complicated, random-seeming patterns (e.g. pigmentation), then yes, it is difficult for evolution to produce any specific pattern directly (because the information content is so high but the effect on fitness is so low).
p397, paragraph 5: Rather than suggesting that it "might be possible to develop a rather general predictive theory", this section would be more convincing if it actually did present a predictive theory. As it stands, what Wolfram has described seems to only apply to a small area of biological development (texture, pigmentation and branching patterns, essentially).
p398, paragraph 6: Wolfram repeats his flawed assumption that evolutionary theory requires adaptations to be globally optimal.
p397, paragraphs 7-8: These paragraphs again depend on the meaning of the word complexity.
p1005, "History of branching models", last sentence: "nothing like the simple model that I describe in the main text has ever been considered before". Apart from all of the examples that he's just given, presumably?
p404, paragraph 6: Again, I disagree with the assertion that this is a universal belief among biologists.
p410: Note that Wolfram points out (p1007, "History of phyllotaxis") that this is not a new model for budding.
p422, paragraph 2: I disagree with the claim that Wolfram has shown what is needed to produce the "kind of diversity and complexity we see in plants and animals". He has shown what is needed to produce some specific aspects—budding, branching, pigmentation patterns—but no sign of a general theory that could explain (for example) why animals' heads are on the tops of their bodies, why plants grow upwards, why quadrupeds are common. The theory of evolution by natural selection is such a theory—particularly when it is appreciated that evolution does not necessarily produce the most optimal organisms.
p422, paragraph 3: Whether a particular underlying set of CA-like rules for a biological feature is "picked almost at random" is surely going to depend on the fitness characteristics of that feature. If twenty different CA rules for pigmentation all produce the same level of camouflage, then yes, the choice will be at random. If ten rules produce significantly less effective camouflage patterns, then those rules will not be among those selected at random. Another example would be CA rules for plant branching patterns—any rules that generated less efficient leaf shapes would be selected against.
p431, paragraphs 3-4: With its equivalent characteristic of randomness generation, chaos theory has also been considered as a likely candidate for the generation of financial randomness.
Chapter 9: Fundamental Physics

p434, paragraph 2: I disagree that the origins of the Second Law of Thermodynamics are quite as "mysterious" as Wolfram claims. Indeed, Wolfram himself admits (p1020) that the "clear understanding" he claims to bring is actually very similar to the standard explanation.
p434, paragraph 4: Not a new discovery.
p434, last paragraph: "There is still some distance to go" before Wolfram solves the most fundamental problem of physics. I couldn't agree more.
p1018, "History": If you read between the lines, the notes for this chapter do indicate just how much of the thinking in the chapter and indeed the entire book are closely related to earlier work by Edward Fredkin and Tommaso Toffoli, [10-Apr-02] and Konrad Zuse before that.
p1018, "Emergence of reversibility": I don't follow what Wolfram means by "approximate reversibility", and his reference to p959 doesn't help either.
p442, paragraph 1: (Example of general theme.)
p444: Wolfram's explanation of the emergence of the Second Law of Thermodynamics based on the initial conditions being somehow special is essentially equivalent to the explanation given in chapter 9 of Hawking [1988], or that in chapter 7 of Penrose [1989].
p445, paragraph 1: Although Wolfram does do a good job of elucidating how it is that irreversible behavior in the large can develop from reversible rules in the small (based on special initial conditions), I disagree that this has been "until now [..] rather mysterious".
p1020, "My explanation of the Second Law": Wolfram admits here that what he says "is not incompatible with much of what has been said about the Second Law before", which belies his repeated claims to have solved what was previously completely mysterious.
p1021, "Cosmology and the Second Law": I don't understand the statement that "the effective rules for the evolution of matter led to rapid randomization, whereas those for gravity did not". Unless he has some claim to have displaced general relativity, surely the distributions of mass and of gravity are inextricably interlinked?
p1021, "Alignment of time in the universe": Wolfram is claiming here that the direction of the thermodynamic arrow of time is induced by the direction of the cosmological arrow of time, to use the terminology of Hawking [1988], although he offers no rationale (such as the weak anthropic principle) for this.
p451, paragraph 1: (Example of general theme.)
p453, paragraph 4: I'm not sure why Wolfram apparently believes it so surprising that a completely invented computer simulation should not obey the Second Law of Thermodynamics.
p453, paragraph 6: The argument that biological systems apparently disobey the Second Law of Thermodynamics is common among creationists and as such has been comprehensively argued against.
p455, paragraphs 3-5: Wolfram is skipping a step in his argument here, as he simply equates the radiation of information from his CA system with the physical radiation in the universe. Nevertheless, the argument appears to be a restating of the usual rule of thumb that whenever the Second Law appears to be violated, it is typically because you are not considering a closed system in its entirety.
p457, last paragraph: Wolfram "strongly suspects that there are many systems in nature" which behave like his constructed experiment in not following the Second Law—but does not give a single (even tentative) example to back up his intuition.
p1023, PDEs: Relevant references for this: section 5.1 of Drazin & Johnson [1989], section 1.6 of Ablowitz & Clarkson [1991], section 7.3 of Olver [1986].
p464, paragraph 1: Not sure why Wolfram considers it "striking" that the average density of a system made up of discrete cells should behave continuously.
p464, last paragraph: "One might have thought that continuum behavior would somehow rely on special features of actual systems in physics." Or not, in fact.
p1024, "Derivation of the diffusion equation": Note that there appears to be an additional assumption not explicitly mentioned in this derivation, of an underlying left-right symmetry in the underlying CA.
p465, paragraph 3: (Example of general theme.)
p465, paragraph 6: This is not, in fact, a new idea—it has certainly been around long enough to make an appearance as a backdrop to science fiction (for example, Greg Egan's Permutation City).
p466, paragraph 5: Of course, if the complete behavior of the system is known, rather than just the "overall features", then it is fairly trivial to recreate the underlying rules.
p468, paragraph 5: Is there any evidence that The Rule for the universe will be simple, other than that Wolfram has a hunch and that (p470) existing laws of physics tend to favour simple formulations?
p1025, "Theological Implications": Wolfram makes the claim that a universe with an underlying rule makes it impossible to have miracles or divine intervention. This seems a strange claim, for even in his simple CA systems it is entirely possible for Wolfram to stop the evolution and alter particular cells, and then let the evolution continue from that point—which would appear exactly as a miracle to any putative inhabitants of the CA universe.
p1025, "Simplicity of Scientific Models": At last, an admission that a majority of complicated features in biology are due to the vagaries of biological evolution, not CA-like rules.
p1026, "The Anthropic Principle": Wolfram is making the astonishing claim that his little pictures of interacting structures in cellular automata are "ultimately not dissimilar to intelligence". Their complexity is nothing like intelligence; even allowing for the fact that CA models can support universal computation, this is a long way from demonstrating intelligence—as forty years of ultimately unsuccessful research in artificial intelligence has shown. [10-Apr-03]
p1026, "Mechanistic Models": Zuse [1967] appears to be the earliest reference for the "all is computation" approach that Wolfram claims is his (although I've only looked at Zuse [1969], since I don't read German). Thanks to Jürgen Schmidhuber for pointing out this earlier reference to me.
p1026, "Mechanistic Models": The name Edward Fredkin is coming up again...
p473, paragraph 5: Wolfram seems to be arguing here that because he has gotten used to his particular form of models, the universe should follow suit.
p474, last paragraph: Wolfram claims that programs with discrete elements make it "much easier for highly complex behavior to emerge". Not so.
p1027, "History of discrete space": The name Edward Fredkin is coming up again...
p1029, left column, last two lines: Wolfram is actually referring to cubic or 3-regular graphs, not just trivalent graphs, here.
[04-Mar-03] p475 onwards: This potential description of space as being generated by an underlying evolution of a graph network shares some similarities with Regge calculus and Superspace; see chapters 42-44 of Misner, Thorne & Wheeler [1973].
p481, paragraph 2: Wolfram claims that "space and time somehow work fundamentally the same" in modern models of fundamental physics. I'm not convinced that this is an accurate characterization.
p482, paragraph 4: As in chapter 5, why is it relevant that it is difficult to compute constraint-based systems? Isn't this assuming the result that Wolfram is trying to prove, namely that physics is generated from computation?
p482-484: Wolfram is arguing a general position from a very specific limited definition of constraints. Many differential equations are constraint-based and nevertheless have well established proofs of the existence of unique solutions given suitable boundary conditions or initial conditions.
p484, paragraph 5: I still disagree with this characterization.
p486, paragraph 1: Wolfram seems to claim here that time and space symmetry will emerge from The Underlying Rule, without indicating how this will come about.
p486, paragraph 4: I don't understand why it "seems unreasonable" for all of the cells of The Underlying Rule to update simultaneously; presumably his intuition is driven by present day implementation of CAs on serial hardware, which is somewhat limiting.
[14-Apr-03] p486, last paragraph: Wolfram suggests that the universe might actually be "like a mobile automaton or Turing machine". This has also previously been suggested in Schmidhuber [1997].
p504, paragraph 2: I don't understand why this property is "remarkable"; it's just a logical consequence of the structure of the underlying rule.
p505, paragraph 4: I find Wolfram's analogy between historical belief that the Earth was the centre of the universe and the current belief that we have a unique history to be dubious at best.
p506, paragraph 2: (Example of general theme.)
p511: In my copy of the book, the clocks are pretty much illegible in this diagram.
p515, last paragraph: I don't follow why this randomness in the proposed underlying network should allow a dimensionality of 3 to emerge.
p1038, "Random replacements": I believe rule T1 adds one edge to two faces, not two edges to one face—and similarly removes one edge from two others.
p1038, "Random replacements": The average number of edges must only remain at 6 because it is preserved from the (hexagonal) initial condition; a different initial condition would presumably have a different average.
p517, paragraph 2: It's worth being clear here (and elsewhere in the chapter) that Wolfram is assuming that all of his earlier suppositions are indeed true.
p520-522: Interesting suggestion.
p524: I disagree with Wolfram's claim that his presentation of relativistic time dilation is "considerably clearer" than other accounts. Personally, I found his diagram fairly impenetrable (but I consider myself a mathematician and so prefer more standard presentations—for example in Schutz [1985]).
p1042, "Inferences from relativity": Wolfram appears to be claiming that because clocks in GPS satellites are corrected for time dilation, they no longer display time dilation. Just rescaling a clock is hardly a counterexample to the entire time dilation effect.
p525, paragraph 3: It would be very helpful in this chapter if Wolfram were to clearly distinguish between verified aspects of current physics, verifiable predictions about physics derived from his proposed new models, and random speculation. I presume his claim that it is "almost inevitable" that leptons have internal structure falls somewhere between the latter two camps.
p527, last paragraph: Again, Wolfram is ignoring a lot of pre-existing work. In the particular area of modern physics, it is also worth noting that before the invention of quark theories, the menagerie of apparently fundamental particles looked very complicated. But a simple underlying theory was found to explain the complication.
p527-529: An interesting approach. Sadly, I don't understand enough string theory to see if there are parallels between the two approaches.
p537, paragraph 2: I am startled by Wolfram's hubris in talking about the limitations in the Einstein equations, when his own model of physics is still an incomplete, vague, unverifiable suggestion.
p539, paragraph 2: (Example of general theme.)
Chapter 10: Processes of Perception and Analysis
p547, paragraph 1: No. We have not discussed "the basic mechanisms responsible" for various phenomena. We have instead discussed some potential rough models that might be related but have not yet been verified in any way.
p548, paragraph 1: The processes involved in perception and analysis are in fact studied in various areas of traditional science, such as neurology, psychology, cognitive science and quantum mechanics.
p548, paragraph 3: Again, quantum mechanics has definitely pushed the process of perception and analysis to the foreground in science.
p549, paragraphs 1-3: Compare algorithmic information theory.
p550, paragraph 4: (Example of general theme.)
p551, paragraph 2: "discovered in this book". No. Not true.
p552, paragraph 2: Given that Wolfram claims that the concept of randomness has "remained quite obscure", I find it odd that he goes on to spend three pages attacking the standard concept of randomness...
p554, paragraph 3: (Example of general theme.)
p555, paragraph 2: There is already perfectly well-known terminology for sequences which appear to be random, but are generated from underlying rule systems: "pseudo-random".
p556, paragraph 3: This proposed definition of randomness does rather shift the conceptual difficulty from the word "random" to the word "simple".
p1068, lines 1-2: Given that statistics is the particular area that studies randomness, it seems harsh to attack fields outside of statistics for not being experts on randomness.
p557, paragraph 1: Personally, I felt that the book so far was hugely lacking a coherent definition of complexity.
p557, paragraph 2: I disagree with Wolfram's thesis that normal visual perception is all that we can do to identify complexity. Consider the sequence of primes above 1000; to the naked eye they would appear fairly random (except perhaps that there are no even numbers). However, with a little intellectual analysis it could be discovered that they were structured in a very particular way.
p559, last paragraph: (Example of general theme.)
p567, paragraph 2: Wolfram is correct to point out that most compression schemes are 1D, but the particular approach that he goes on to describe is far from the only 2D compression algorithm. Examples include T.128, JPEG, MPEG, Barnsley's Iterated Function Systems.
p570, paragraph 1: MPEG-4 is an example that does use some similar schemes in practice.
p570, last paragraph: Barnsley's IFS approach may have better compression characteristics for nested images (see chapter 5 of Peitgen [1988]), although some controversy is involved.
p572, paragraph 1: I'd disagree that just because compression algorithms do not generate a huge reduction that this implies that the source is random.
p572: Aarg. Why can't Wolfram use the same terminology as the rest of the known world? The standard term is "lossy", not "irreversible".
p580, paragraph 2: It hardly seems surprising to me that perception systems based on essentially the same processes as those that generated the patterns should be able to distinguish between the patterns. To make his jump to the suggestion that this is actually how perception works, at least one of the two processes (generation, perception) should be based in the real world.
p582, paragraph 4: Note that this only applies to regular nested patterns; our visual systems may respond better to more real-world fractals.
p1075, right column, line 18: Claiming that there is "no doubt" about a statement that he presents with zero evidence is somewhat strong, even allowing for the "can be idealized" weasel words.
p588, paragraph 4, sentence 2: Absolutely not the case.
p588, paragraph 5: Comparing a model with observation is a rather more subtle process than Wolfram suggests, particularly when the model is nonlinear and displays sensitive dependence on initial conditions.
p589-590: Wolfram is ignoring a common technique for checking models, which is to build the model from a subset of the available data, and then to test that the model produces results which are consistent with the remainder of the data.
p591, paragraph 2: It is very annoying that Wolfram does not provide page numbers for his reference back to chapter 5—that chapter is over 50 pages long.
p593, paragraph 3: Much as I concur with Wolfram's attack on many stochastic modelling approaches, I feel that this statement is a misrepresentation of their position. A weighted coin is still perfectly random, but does not have a flat probability spectrum.
p595, paragraph 2: Presumably the second mention should also be (e) and (f) rather than (d) and (e).
p1083, "Time series": Wolfram is far too quick to dismiss nonlinear time series analysis (possibly reflecting his peculiar ideas about chaos theory). To quote page 7 of Weigend & Gershenfeld [1994], describing the results of a time series analysis competition: "all of the successful entries were fundamentally nonlinear".
p598 onwards: Wolfram's insistence on expressing everything in the form of cellular automata gets annoying, particularly as it often obscures clear explanation. I guess that this is just part of his campaign to subconsciously persuade the reader round to his point of view.
p602, last paragraph: Again, this is not a new discovery in this book. It's also instructive to consider the example on pages 4-5 of Knuth [1981], where complicated rules yield simple behavior.
p606, paragraph 2: This system hardly qualifies as having survived a serious cryptanalytical attack. See also the comments on p414 of Schneier [1996].
p1086, paragraph 1: There are in fact a number of other cryptosystems that are not DES, LFSR or RSA, even if you count all block ciphers as equivalent to DES—for example, RC4. See Schneier [1996] for more information.
p1089, "Problem-based cryptography": It is not true that cryptography systems are solely based on integer factorization.
Firstly, symmetric algorithms typically have little to do with number-theoretic problems. Secondly, there are counterexamples even in asymmetric cryptosystems: the Rabin algorithm involves square roots modulo a composite number; the ElGamal algorithm involves calculating discrete logarithms in a finite field; the McEliece system involves the use of error-correcting Goppa codes.
p1090, paragraph 2: Another amusing piece of hubris: because Wolfram has studied rule 30 for some time he has confidence that it is as difficult as the problem of factoring integers—which has been studied by rather more people, for rather longer.
p607, paragraph 2: It should be noted that the "right approach" that Wolfram mentions goes back to Cantor.
p612, paragraph 4: The surprise that Wolfram mentions is considerably lessened when one realizes that the first fractals were explicitly constructed to be pathological counter-examples for mathematical analysis.
p620, last paragraph: (Example of general theme.)
p621-623: Wolfram's hashing scheme seems a more contrived treatment than the standard fact that neural networks can demonstrate a form of content-based addressing (which he sort of describes on pages 624-625).
p627, paragraph 3: It is hardly a surprise that logic is not an appropriate idealization of all of human thought. Again, the first paragraph of Frege [1879] illustrates this.
p627, paragraph 6: I disagree with the claim that Mathematica is somehow unique (as it is essentially a Lisp-like system optimized for mathematics), and with the claim that Mathematica mimics the operation of human memory.
p1103, "Structure of Mathematica": The structure Wolfram describes is essentially the same for all Lisp-like languages such as Mathematica.
p628, paragraph 3: (Example of general theme.)
p628, paragraph 4: On one level this is obviously true, since human brains are built of neurons and it is known that neurons behave according to simple rules. On another level, this offers no explanation that covers the levels between neuronal activity and intelligence.
p629, paragraph 2: Although I agree with this characterization of many areas of artificial intelligence (see also chapter 4 of Hofstadter [1994]), there are exceptions—for example, the Copycat project and its descendants.
p629, paragraph 5: As above.
p629, paragraph 3: (Example of general theme.)
p630, paragraph 4: This claim does not match the metagrammar approach of Chomsky (as presented for example in Pinker [1994], and mentioned by Wolfram on p1181). The generalization of grammatical rules in children is not arbitrary; it is constrained to instances of the universal metagrammar.
p630, paragraph 5: Wolfram claims that we can learn languages with almost any structure. However, there is no evidence of this; the range of known human languages far from exhausts the theoretical possibilities for language structures.
p1100, paragraph 1: I'm not sure that neural network proponents would agree with this characterization of their approaches as simply equivalent to continuous probabilistic models.
p1100, "Sleep": Does Wolfram have any evidence for this physiological claim, or is he simply making it up?
p1103, "Languages": Again, Wolfram overstates the claims of Mathematica versus its competition, in this case by not distinguishing between the core language and its standard libraries. Both C++ and Common Lisp have extensive standard libraries which make them at least as large as Mathematica.
Both also support a variety of styles of programming—imperative, functional, object-oriented, generic—to an equivalently wide degree as Mathematica.
p1103, "Languages": The claim that computer languages are almost always designed by a single person is false (see Common Lisp, C++, Ada, ...), as is the claim that they are designed once and for all (see Common Lisp, C++, Perl, ...).
p1103, last paragraph: I was under the impression that more universal features than just nouns and verbs had been discovered in human languages.
p1104, "Game theory": Wolfram denigrates the discoveries regarding the evolution of cooperation as a "folk theorem", but neglects to offer any criticism or counter-examples.
p632, paragraph 3: (Example of general theme.)
p634, paragraph 4: There are definitely examples where we get much further than our own built-in powers of perception take us. As well as my hypothetical example above, any successful cryptanalysis attack would count, as would the discovery of high-dimensional correlations in pseudo-random number generators (see p320).
p634, last paragraph: As mentioned elsewhere, this is a significant misrepresentation of evolution theory.
Chapter 11: The Notion of Computation
p637, paragraphs 2 and 4: I disagree that mathematical analysis is only useful when behavior appears simple; Wolfram has just hand-picked particular examples in chapter 10 to give this impression. One particular example that I happen to know about: the route to chaos in a homoclinic system, which displays much complex behavior that is nevertheless susceptible to analysis.
p1108, "Practical computers": An excellent reference for understanding the internal workings of computers is Tanenbaum [1984].
p643, paragraph 5: Universal systems cannot produce arbitrarily complex behavior; for example, none of them can compute the halting probability Ω.
p644, paragraph 3: It only seems implausible that cellular automata could be universal systems if you are unaware of the sketch proof of universality in the Game of Life automata, from the 1970s, and indeed of the very earliest CAs generated by von Neumann with the exact intent of being universal (see p1117).
p644, paragraph 5: (Example of general theme.)
p674, paragraph 3: It is indeed a remarkable result that all of the systems that Wolfram mentions can emulate each other, but it has been known about for quite some time. That is, it has not been "discovered" in this book, as Wolfram makes clear in the various "History" sections in the notes to this chapter.
p678, paragraph 3: It is worth being clear that the "several years of work" was not actually Wolfram's work.
[31-Oct-02] p1117, "Totalistic rules": It's not clear whether Wolfram is claiming that his examples are actually universal, or just candidates for universality. See Gordon [1987] for an explicit example of a universal, totalistic, 1-D CA.
p675-689: I realize that Wolfram consistently refuses to acknowledge anyone other than himself in the main text, but it nevertheless seems particularly churlish in this case—where he is presenting 15 pages of someone else's work in detail.
General: Wolfram seems willing to give explicit page references in some instances in this chapter (p682 paragraph 5, p689 last paragraph, p695 paragraph 1) but not in other places. Some correlation with the strength of the argument that he is referring to, perhaps?
p1122, line 3: Note that the term "currying" is named after Haskell Curry.
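For readers who haven't met the term: currying rewrites a multi-argument function as a chain of single-argument functions. A tiny Python illustration (mine, not from the book):

```python
# currying: a two-argument addition, rewritten as a chain of
# single-argument functions
def add(x):
    return lambda y: x + y

add_three = add(3)   # partially applied
print(add_three(4))  # 7
print(add(1)(2))     # 3
```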
Chapter 12: The Principle of Computational Equivalence
p719, last paragraph: As Wolfram points out on the next page, it is hardly the Principle of Computational Equivalence that would break one's assumption that different systems would have different computing capabilities. This is just a consequence of the well-known phenomenon of universality.
p1125, "Basic framework": The "all is computation" approach is hardly unique to Wolfram.
p720, paragraph 7: Personally, I feel that the successes among the models that Wolfram presents are actually quite sparse—shell shapes, pigmentation patterns and his rehash of fluid dynamics. The others hardly qualify as successful models, as they are too vague, too untested and too untestable.
p720, last paragraph: I suspect Roger Penrose would disagree with this claim that there is no uncomputable physics (see chapter 10 of Penrose [1989]).
p726: I must admit that I find Wolfram's wording of the Principle of Computational Equivalence less than clear. It took me a couple of readings to figure out that the "equivalent sophistication" is an equivalence across the entire class, to a universal Turing machine.
p726, paragraph 5: I disagree that the Principle has "vastly richer implications" than other laws in science. There are few direct implications presented, and the indirect implication that there may be computationally irreducible processes in nature is hardly a rich seam for science.
p727, paragraph 5: To put this paragraph another way: the Principle of Computational Equivalence is unfalsifiable.
p728, paragraph 4: I dispute the claim that he has modelled a "wide range" of systems.
p728, last paragraph: Again, Wolfram's limited concept of constraints and the need to compute solutions to them comes into play.
p729, paragraph 3: This is misleading; Wolfram is not distinguishing between local and global minimization. As for evolution, biochemists are unlikely to claim that every molecule has to be in a global energy minimum.
p726, paragraph 6: This is not evidence. Just because it is harder to build analogue computers, and because it has not been done yet, does not mean that it is impossible to build one that is more powerful than a digital computer.
p730, paragraph 1: The particles are indeed discrete, but their positions and velocities are continuous (at least to the limits of our measuring capability).
p730, last paragraph: I fail to see why the fact that arithmetic operations have non-local effects on digits is of any significance; as Wolfram himself points out elsewhere, size is what matters :-)
p731, paragraph 1: This seems like stacking the deck to me: compare continuous and discrete computation with a discrete measuring rod.
p733, paragraphs 1-3: Some PDEs (e.g. Navier-Stokes equations) can be thought of as generalizing cellular automata systems as the automata size tends to zero; would it then be so surprising that the PDE version can generate behavior in a finite time limit that the cellular automata system would generate as time tends to infinity?
p733, paragraph 5: "Particularly following the discoveries in this book". No. Neuronal systems are known to follow simple rules (as Wolfram describes on p1075), and brains are made of neurons.
p1127, "History": The number in question is known as Ω; see page 1067.
p1127, "Continuum and cardinality": It is worth clarifying that "infinite lists of real numbers" must be a countably infinite list (by the fact that it makes a list).
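And the reason such a countable list can never exhaust the reals is Cantor's diagonal argument, whose finite skeleton is easy to sketch (my own illustration):

```python
def diagonal_escape(rows):
    # flip the i-th digit of the i-th row: the result differs from every
    # row in at least one position, so it cannot appear anywhere in the list
    return "".join("1" if row[i] == "0" else "0" for i, row in enumerate(rows))

# each row stands for (a prefix of) a binary expansion; rows must be at
# least as long as the list itself
alleged_complete_list = ["0000", "0101", "1110", "1011"]
d = diagonal_escape(alleged_complete_list)
print(d, d in alleged_complete_list)  # 1000 False
```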
p1130, paragraph 2: I don't know why Wolfram is claiming that getting nested patterns from continuous systems is difficult when he goes on to give one particular example and one entire (vast) class of examples.
p735, last paragraph: (Example of general theme.)
p737, paragraph 1: I'm not convinced that this assumption of un-sophisticated evolution has actually been made.
p737, paragraphs 4-5: This is my point. If you can't do prediction, then the model is not going to be that interesting.
p739, paragraph 6: Chaos theory did indeed make this observation—but it also made others.
p741, paragraphs 3, 5: Just because some convoluted initial condition for a system can generate universal computation does not in itself mean that general initial conditions produce interesting or hard-to-predict behavior. I'll concede Wolfram's point that many systems are capable of supporting universality, but almost all (in both the English and mathematical senses of the phrase) of the initial conditions do not.
p741, paragraphs 7-8: I disagree. An alternative viewpoint from more traditional science is that there has not been much success in studying complex systems because they are nonlinear, which means that breaking down the behavior into simple chunks that can be added together is not possible. Nonlinear models can still be computationally reducible (in Wolfram's sense) but still display complex behavior and difficulties in analysis.
p742, paragraph 1: This observation has also been made by the proponents of chaos theory, where the study of nonlinear systems has been compared to the study of non-elephant zoology.
p742, paragraph 3: Nonsense. Many computer models in the last twenty years have had no correlation with mathematical formulae. Artificial life, artificial intelligence, neural networks, ...
p742, paragraph 5: This example is nonsense. Just because a system can emulate another system that emulates the first system, that says nothing about the speed or efficiency of either process. [31-Oct-02] However, I should clarify that it is just this particular attempt at explanation that I object to, not Wolfram's general point that irreducibility implies impossibility of summarization.
p743, paragraph 1: The idea that universal computation shows up even with arbitrary, simple, initial conditions appears to have crept in here without any evidence or justification.
p744, paragraph 3: To clarify: in situations involving continuous mathematical formulae, the lower order digits will matter less and less in the output—and so can effectively never matter.
p745, paragraph 2: Still continuing the viewpoint that solutions to constraints have to be calculated.
p748: It is worth pointing out that just having a mathematical theory is not always in itself that helpful. Many of the equations of modern fundamental physics (general relativity, QED, QCD, etc) are extremely difficult to solve or even approximate except in simple situations. (Wolfram mentions this on p1133.)
p1133, paragraph 5, left column: It is hardly the "influence of Mathematica" that has led to more systems that cannot be easily summarized in equations; rather the general influence of much more computing power being much more easily available.
p1135, "Intrinsic limits in science": Utterly breathtaking arrogance! Limits in physics are due to a "lack of correct analysis"!
p753, paragraph 4: I'd argue that Wolfram has actually made very few discoveries about computational irreducibility—and the key discovery of universality in rule 110 was not even produced by Wolfram!
p755, paragraph 6: Chaitin makes similar points about the ubiquity of undecidability.
p760, last paragraph: Wolfram is finding solutions and then looking at what problems they correspond to, and then claiming that this approach is better than the standard one. Normally, given a function, you want to find the smallest number of operations needed to perform that operation, regardless of the size of the algorithm. Wolfram is enumerating all possible algorithms in size order, and then determining what function they perform—which allows him to make guarantees that there are no smaller algorithms that perform the same function, but does not allow him to guarantee that no algorithm performs the same function faster (despite his vague claims on p764). So...
p758, paragraph 6: Wolfram is not in fact solving problems that others have tried and failed at. He is solving problems that others have ignored as not interesting.
p1138, "Undecidability in Mathematica": I flat out disbelieve Wolfram's claim that he deliberately chose not to write a Mathematica function because of undecidability concerns.
p1142, paragraph 2, left column: I have seen no evidence presented in this book for this claim that optimal algorithms are likely to have a different form than currently.
p1148, paragraph 1, left column: Presumably "at some level discrete must be used" should read "at some level discrete values must be used".
p775, paragraph 4: Mathematics has not "almost defined itself" in terms of axiomatic systems; there are many areas of maths (such as much of applied mathematics) that do not take this approach. Mathematics is much more than just algebra.
p776-779: The form of mathematics that he is simulating here is that of Whitehead & Russell, which dates from the early part of the 20th century.
p781, paragraph 2: The axiom systems that Wolfram refers to typically do more than just "appear" to be consistent. If there is a model that satisfies the axioms, then they cannot be inconsistent—and these theories do have models.
p785, paragraphs 5-6: This seems like a very long-winded way of proving Gödel's theorem.
p786: I'd hardly describe this as a "simple proof", given how many underlying results it relies on.
p791, paragraph 3: Again, Chaitin has made similar points.
p792, paragraph 4: Not true; many of the systems that Wolfram considers have been studied in traditional mathematics, as he indicates in the notes.
p792, paragraph 4: Mathematics ranges far wider than the traditions of geometric and arithmetic systems—for example, model theory or the vast topic of differential equations.
p793, paragraph 1: Wolfram induces confusion by switching back and forth between talking about axioms and theorems. Normally, theorems are deduced from axioms; then, if you know that a system satisfies the axioms, you are guaranteed that it will also satisfy the theorems. The aim is then to find axioms that are as general as possible (in terms of the systems that satisfy them) while still having useful consequences, in a simple form (so that verifying that a system satisfies them is easy).
p793, paragraph 4: There's a good reason why Wolfram's "new approach" is not the normal one: it is clearly more useful to try to model something specific rather than to invent an arbitrary model and then see what it might be modelling.
p1150, paragraph 4: Mathematica is hardly a serious solo contender for an alternative to standard maths.
I wouldn't disagree that computation and automated proofs are becoming more important (regardless of which computer system they are produced in), but for smaller scale symbolic manipulation Mathematica's capabilities don't really exceed those of an average mathematician. Personally, I also find Mathematica notation to be singularly unhelpful, as indicated by the impenetrability of many of the examples given in the notes.
p1150, "Axiom systems": I disagree that Wolfram's results "strongly suggest" that logic is not essential; it may not make much difference to the decidability characteristics of the axiom system, but it will make a huge difference to the usability.
p1150, right column, last line: As far as I can tell, Mathematica does not actually support quantifiers (except in the trivial way that it allows \[ForAll] and \[Exists] as notation).
p1153, "Other algebraic systems": I'm not sure that all of the topologists and geometers would agree that the "vast majority" of algebraic systems are groups, rings and fields.
p1154, right column, paragraph 1: Wolfram himself gives an example of a non-axiomatized mathematical system.
p1156, right column, penultimate paragraph: Wolfram persists in his delusion that Mathematica is the only computer mathematics system available.
p1157, right column, last paragraph: Interesting work towards "a general system for imitating heuristics used in human thinking" is done by Douglas Hofstadter's Fluid Analogies Research Group.
p1162, right column, top line: It's true that arbitrary cardinalities are little used, but the distinction between countable and uncountable infinities turns up often in analysis ("almost everywhere", "set of zero measure").
p1162, right column: Aleph-one is not necessarily the cardinality of the reals (see for example chapters 6 and 8 of Enderton [1977]).
p1167, "Truth and incompleteness": It is worth being clear that most of the implications that Wolfram makes from the Principle of Computational Equivalence are actually implications from the universality and undecidability of computation.
p795, paragraph 4: I still disagree with his characterization of mathematics as consisting entirely of algebra.
p797, penultimate paragraph: To use standard terminology, we are interested in axiom systems that admit models. (A small illustration follows below.)
p799, paragraph 3: Wolfram does not distinguish between two types of axiomatic systems. One attempts to encode all possible information about, say, the natural numbers—and so completeness is of interest. The other attempts to encode key, common, features about, say, groups—and see what can be deduced from just those general features. For this latter type of axiomatic theory, completeness is irrelevant—as Wolfram alludes to in paragraph 4 of p800.
p800, paragraph 2: It is not true that nonstandard models of arithmetic have not been constructed. See, for example, Kaye [1991] or sections 5.4 and 6.3 of Manzano [1999].
p800, paragraph 5: Utter nonsense. Group theorists do not add axioms to restrict their study to a particular group (although they might add axioms to restrict to a particular class of groups, such as commutative groups).
p801, paragraph 1: I feel this is not an entirely accurate representation of the history and character of set theory.
p801, paragraph 5: In particular, the approach described allows the manipulation of systems where there are potentially an infinite number of values for the variables (for any size of infinity, too, using the Löwenheim-Skolem theorems).
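To make the axioms-versus-models point concrete (a sketch of my own): for a finite structure, checking that it is a model of the group axioms is a purely mechanical matter, after which every theorem of group theory is guaranteed to hold for it.

```python
from itertools import product

# A finite model of the group axioms: the integers mod 5 under addition.
n = 5
op = lambda a, b: (a + b) % n
G = list(range(n))

closure = all(op(a, b) in G for a, b in product(G, repeat=2))
associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a, b, c in product(G, repeat=3))
identity = next(e for e in G if all(op(e, a) == a == op(a, e) for a in G))
has_inverses = all(any(op(a, b) == identity for b in G) for a in G)
print(closure, associative, identity, has_inverses)  # True True 0 True
```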
p801-814: I fail to see the intent and purpose of these pages, other than the demonstration that Wolfram has access to large amounts of computer time.
p816, paragraph 4: I would suggest that ease of (human) understanding is the main reason for the particular forms of axiomatic system. For practical use, you want axioms where a) the process of exploring the space of derivable theorems is easiest and b) the process of verifying that a particular model satisfies the theory is as easy as possible. Wolfram makes a similar point to a) on p818, paragraph 1.
p820, paragraphs 5-6: Greg Egan presents a nice fictionalized account of this process in chapter 2 of "Diaspora".
p821, paragraph 6: The phenomenon of computational irreducibility was already quite clear before this book.
p821, last paragraph: Not a new observation.
p1172, "Model theory": It is worth making clear that categorical theories are typically of little interest; the upward Löwenheim-Skolem theorem implies that categorical theories can only have finite models. Of more interest are k-categorical theories, where all models of cardinality k are isomorphic. Note also that all categorical theories are necessarily complete. (See chapter 7 of Manzano [1999].)
p824, last paragraph: (Example of general theme.)
p825, paragraph 5: Personally, I disagree with Wolfram's claim that an alien would have to be similar to terrestrial lifeforms for it to be considered alive.
p828, paragraph 7: (Example of general theme.)
p830, paragraph 7: Personally, I see very little similarity between the rule 110 behavior and traditional engineering systems.
p834, paragraph 6: I don't see how Wolfram can extrapolate from human development to make it "almost inevitable" that artifacts would be constructed on an astronomical scale.
p837, paragraph 4: Although Wolfram is reasonably convincing in his claim that many systems can support universal computation, this only appears for certain very specialized initial conditions.
p837, paragraph 8: A random system could indeed generate primes—but the chances of this happening would be extremely small. If one were to observe a sequence like the primes emerging from a system, and if there were no obvious way that this could be evolved behavior that has enhanced the fitness of a self-reproducing system, then an assumption of design would be justified.
p838, paragraphs 4-5: Just because lots of systems in nature are performing computation of some form, it still does not make it likely that they will perform the exhaustive search that Wolfram mentions is required to reach the mechanism that satisfies the constraint.
p838, penultimate paragraph: It is not true that the more direct a representation is, the more likely a physical mechanism is to generate it. Imagine coming across a sequence that turned out to be the numbers of neutrons in the most stable isotope of each element in order. This is a very direct representation, but any explanation of how such a sequence could be produced accidentally is going to be byzantine. In terms of perception, this relies on the phenomenon of discreteness, which any intelligence in the galaxy is likely to be able to distinguish (for example, there is no such thing as 0.7 of a star).
p839, paragraph 7: A "few percent" of what? Signals? Actual ETs?
p840, paragraph 2: Surely the implications for technology are limited by the phenomenon of computational irreducibility implied by Wolfram's Principle of Computational Equivalence?
p840, paragraphs 5-6: (Example of general theme.)
p841, paragraph 6: Not shown in this book; already known.
p843, paragraph 2: As far as I can tell, the sum total of the "abstract knowledge" that Wolfram has built up is that there are simple cellular automata with universal behavior (and hence it is likely that universality is more common than previously suspected), and that simple cellular automata can generate both random and structured complex behavior (which is analogous to the discoveries of chaos theory).
p843, paragraph 4: My feeling of "just how often" elementary cellular automata can be applied is not very often at all, actually.
p843, paragraph 6: An even vaster amount of current technology is not emulating any natural systems—televisions, videos, bridges, computers, helicopters, ...
p844, paragraph 5: This paragraph actually makes it clear that Wolfram's theoretical equivalence of all systems that perform computation is much less relevant practically. For it is immediately obvious that there is a difference in the computational sophistication displayed by humans and rule 110.
p845, paragraph 3: (Example of general theme.)
p846, paragraph 4: The Principle of Computational Equivalence does not "now show" this; it was already well known.
p1178, paragraph 1: Wolfram claims that "it seems that" there is typically no sensitive dependence on initial conditions involved in weather forecasting. I'd like to see a reference for this; if true, it would imply that more recent approaches to weather forecasting using ensemble prediction would not actually provide distinct results across the ensemble.
p1179, "Self-reproduction": The name Fredkin is coming up again....
p1182, paragraph 1: I disagree that people assume that mathematical notation is universal, rather than just a convention. You only have to look at how logic notation has changed to see this.
p1182, paragraph 2: Again, I don't think that anyone assumes that base 2 notation is particularly fundamental. It is the smallest sensible base, and it corresponds to the current architectures for computers, and so is used as a matter of convenience.
p1182, "Computer communication": The development in communication methods typically reflects the fact that available bandwidth for transmission used to be limited, but has grown substantially (it has been observed that this growth is actually faster than that of Moore's Law for semiconductors—see for example section 3.3 of Tanenbaum [1996]).
p1183, paragraph 1: The same phenomenon applies in a number of other interpreted languages.
p1184, "Artifacts in data": It is fairly well established that chaos is not a numerical artefact—see for example the analytical solution of the logistic map, studies that obtain the same results from different numerical schemes, and the proof of chaotic dynamics in the Lorenz equations. (A small demonstration follows below.)
p1185, right column, first paragraph: This discussion of purpose again reflects Wolfram's odd approach to constraints.
p1185, "Possible purposes": I disagree. If you encountered a system generating the digits of π, I think imputing a purpose would be the most sensible approach.
p1193, paragraph 1: I'm not sure what "popular options" for randomness on consumer electronics Wolfram is referring to; I can only think of the random track selection on CD players as an example.
p1196, "Philosophical implications": Again, the implication is from undecidability of the halting problem, not from the Principle of Computational Equivalence. I still feel that predictability of models is going to remain an essential component for the foreseeable future.
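On the p1184 point, the r = 4 logistic map is the cleanest example, because it has an exact closed-form solution; a small sketch of my own:

```python
import math

# The logistic map x' = 4x(1-x) has the closed-form solution
# x_n = sin^2(2^n * theta) with theta = asin(sqrt(x_0)), so its chaos is
# provable, not a rounding artefact. Numerically, the iterated values
# and the naively evaluated closed form agree at first and drift apart
# as n grows: by n = 45 double precision is exhausted on both routes,
# which is the sensitive dependence itself, not a failure of the maths.
x0 = 0.2
theta = math.asin(math.sqrt(x0))
x = x0
for n in range(1, 46):
    x = 4.0 * x * (1.0 - x)
    closed_form = math.sin(2.0 ** n * theta) ** 2
    if n in (1, 10, 45):
        print(n, round(x, 6), round(closed_form, 6))
```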
Copyright (c) 2002-2004 David Drysdale
Creation and annihilation operators

Creation and annihilation operators are mathematical operators that have widespread applications in quantum mechanics, notably in the study of quantum harmonic oscillators and many-particle systems.[1] An annihilation operator lowers the number of particles in a given state by one. A creation operator increases the number of particles in a given state by one, and it is the adjoint of the annihilation operator. In many subfields of physics and chemistry, the use of these operators instead of wavefunctions is known as second quantization.

Creation and annihilation operators can act on states of various types of particles. For example, in quantum chemistry and many-body theory the creation and annihilation operators often act on electron states. They can also refer specifically to the ladder operators for the quantum harmonic oscillator. In the latter case, the raising operator is interpreted as a creation operator, adding a quantum of energy to the oscillator system (similarly for the lowering operator). They can be used to represent phonons.

The mathematics for the creation and annihilation operators for bosons is the same as for the ladder operators of the quantum harmonic oscillator.[2] For example, the commutator of the creation and annihilation operators that are associated with the same boson state equals one, while all other commutators vanish. However, for fermions the mathematics is different, involving anticommutators instead of commutators.[3]

Ladder operators for the quantum harmonic oscillator

In the context of the quantum harmonic oscillator, we reinterpret the ladder operators as creation and annihilation operators, adding or subtracting fixed quanta of energy to or from the oscillator system.

Creation/annihilation operators are different for bosons (integer spin) and fermions (half-integer spin). This is because their wavefunctions have different symmetry properties.

First consider the simpler bosonic case of the phonons of the quantum harmonic oscillator. Start with the Schrödinger equation for the one-dimensional time-independent quantum harmonic oscillator,

\left(-\frac{\hbar^2}{2m}\frac{d^2}{dx^2} + \frac{1}{2}m\omega^2 x^2\right)\psi(x) = E\,\psi(x).

Make the coordinate substitution q = \sqrt{m\omega/\hbar}\,x to nondimensionalize the differential equation, and the Schrödinger equation for the oscillator becomes

\frac{\hbar\omega}{2}\left(-\frac{d^2}{dq^2} + q^2\right)\psi(q) = E\,\psi(q).

Note that the quantity \hbar\omega is the same energy as that found for light quanta, and that the parenthesis in the Hamiltonian can be written as

-\frac{d^2}{dq^2} + q^2 = \left(-\frac{d}{dq} + q\right)\left(\frac{d}{dq} + q\right) + \frac{d}{dq}q - q\frac{d}{dq}.

The last two terms can be simplified by considering their effect on an arbitrary differentiable function f(q),

\left(\frac{d}{dq}q - q\frac{d}{dq}\right)f(q) = \frac{d}{dq}\big(q f(q)\big) - q\,\frac{df(q)}{dq} = f(q),

which implies

\frac{d}{dq}q - q\frac{d}{dq} = 1,

and the Schrödinger equation for the oscillator becomes, with substitution of the above and rearrangement of the factor of 1/2,

\hbar\omega\left[\frac{1}{\sqrt{2}}\left(-\frac{d}{dq} + q\right)\frac{1}{\sqrt{2}}\left(\frac{d}{dq} + q\right) + \frac{1}{2}\right]\psi(q) = E\,\psi(q).

If we define

a^\dagger = \frac{1}{\sqrt{2}}\left(-\frac{d}{dq} + q\right)

as the "creation operator" or the "raising operator", and

a = \frac{1}{\sqrt{2}}\left(\frac{d}{dq} + q\right)

as the "annihilation operator" or the "lowering operator", then the Schrödinger equation for the oscillator becomes

\hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi(q) = E\,\psi(q).

This is significantly simpler than the original form. Further simplifications of this equation enable one to derive all the properties listed above thus far.

Letting p = -i\,\frac{d}{dq}, where p is the nondimensionalized momentum operator, we have

a^\dagger = \frac{1}{\sqrt{2}}(q - ip), \qquad a = \frac{1}{\sqrt{2}}(q + ip).

Note that these imply

[a, a^\dagger] = \frac{1}{2}\big[(q + ip), (q - ip)\big] = -i\,[q, p] = 1.

The operators a and a^\dagger may be contrasted with normal operators. A normal operator has a representation A + iB, where A and B are self-adjoint and commute, i.e. AB = BA. By contrast, a has the representation a = \frac{1}{\sqrt{2}}(q + ip), where q and p are self-adjoint but [q, p] = i \neq 0.
As a consequence, A and B have a common set of eigenfunctions (and are simultaneously diagonalizable), whereas p and q famously don't and aren't. Thus, although in the present case one is dealing with non-normal operators, by the commutation relations given above, the Hamiltonian operator can be expressed as

H = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right).

And the a and a^\dagger operators have the following commutation relations with the Hamiltonian:[4]

[H, a] = -\hbar\omega\, a, \qquad [H, a^\dagger] = \hbar\omega\, a^\dagger.

These relations can be used to find the energy eigenstates of the quantum harmonic oscillator. Assume that \psi_E is an eigenstate of the Hamiltonian, H\psi_E = E\,\psi_E. Using these commutation relations, it can be shown that[4]

H\,(a\,\psi_E) = (E - \hbar\omega)\,(a\,\psi_E), \qquad H\,(a^\dagger\psi_E) = (E + \hbar\omega)\,(a^\dagger\psi_E).

This shows that a\,\psi_E and a^\dagger\psi_E are also eigenstates of the Hamiltonian, with eigenvalues E - \hbar\omega and E + \hbar\omega. This identifies the operators a and a^\dagger as lowering and raising operators between the eigenstates. The energy difference between two adjacent eigenstates is \Delta E = \hbar\omega.

The ground state can be found by assuming that the lowering operator possesses a nontrivial kernel, with a\,\psi_0 = 0. Using the formula above for the Hamiltonian, one obtains

H\psi_0 = \hbar\omega\left(a^\dagger a + \frac{1}{2}\right)\psi_0 = \frac{\hbar\omega}{2}\,\psi_0,

so \psi_0 is an eigenfunction of the Hamiltonian. This gives the ground state energy E_0 = \hbar\omega/2. This allows the energy eigenvalue of any eigenstate \psi_n to be identified as[4]

E_n = \hbar\omega\left(n + \frac{1}{2}\right).

Furthermore, it can be shown that the first-mentioned operator, the number operator N = a^\dagger a, plays a most important role in applications, while the second one, a\,a^\dagger, can simply be replaced by N + 1. So one simply gets

\hbar\omega\left(N + \frac{1}{2}\right)\psi(q) = E\,\psi(q).

The ground state of the quantum harmonic oscillator can be found by imposing the condition that

a\,\psi_0(q) = 0.

Written out as a differential equation, the wavefunction satisfies

q\,\psi_0 + \frac{d\psi_0}{dq} = 0,

which has the solution

\psi_0(q) = C\,\exp\!\left(-\frac{q^2}{2}\right).

The normalization constant C can be found to be \left(\frac{m\omega}{\pi\hbar}\right)^{1/4} from \int_{-\infty}^{\infty}\psi_0^*\,\psi_0\,dx = 1, using the Gaussian integral.

Matrix representation

The matrix counterparts of the creation and annihilation operators obtained from the quantum harmonic oscillator model are

a^\dagger = \begin{pmatrix} 0 & 0 & 0 & \cdots \\ \sqrt{1} & 0 & 0 & \cdots \\ 0 & \sqrt{2} & 0 & \cdots \\ 0 & 0 & \sqrt{3} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \qquad a = \begin{pmatrix} 0 & \sqrt{1} & 0 & \cdots \\ 0 & 0 & \sqrt{2} & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.

Substituting backwards, the laddering operators are recovered. They can be obtained via the relationships a^\dagger_{ij} = \langle\psi_i | a^\dagger | \psi_j\rangle and a_{ij} = \langle\psi_i | a | \psi_j\rangle. The wavefunctions \psi_i are those of the quantum harmonic oscillator, and are sometimes called the "number basis".

Generalized creation and annihilation operators

The operators derived above are actually a specific instance of a more generalized class of creation and annihilation operators. The more abstract form of the operators satisfies the properties below.

Let H be the one-particle Hilbert space. To get the bosonic CCR algebra, look at the algebra generated by a(f) for any f in H. The operator a(f) is called an annihilation operator and the map a(·) is antilinear. Its adjoint a^\dagger(f) is linear in H. For a boson,

[a(f), a(g)] = [a^\dagger(f), a^\dagger(g)] = 0, \qquad [a(f), a^\dagger(g)] = \langle f \,|\, g\rangle,

where we are using bra–ket notation. For a fermion, the anticommutators are

\{a(f), a(g)\} = \{a^\dagger(f), a^\dagger(g)\} = 0, \qquad \{a(f), a^\dagger(g)\} = \langle f \,|\, g\rangle.

These define a CAR algebra.

Physically speaking, a(f) removes (i.e. annihilates) a particle in the state |f\rangle whereas a^\dagger(f) creates a particle in the state |f\rangle.

The free field vacuum state is the state |0\rangle with no particles. In other words,

a(f)\,|0\rangle = 0,

where |0\rangle is the vacuum state.

If |f\rangle is normalized so that \langle f | f\rangle = 1, then a^\dagger(f)\,a(f) gives the number of particles in the state |f\rangle.

Creation and annihilation operators for reaction-diffusion equations

The annihilation and creation operator description has also been useful to analyze classical reaction-diffusion equations, such as the situation when a gas of molecules A diffuses and interacts on contact, forming an inert product: A + A → ∅. To see how this kind of reaction can be described by the annihilation and creation operator formalism, consider n particles at a site i on a one-dimensional lattice.
Each particle moves to the right or left with a certain probability, and each pair of particles at the same site annihilates each other with a certain other probability. The probability that one particle leaves the site during the short time period dt is proportional to dt; let us say a probability \alpha n\,dt to hop left and \alpha n\,dt to hop right. All n particles will stay put with a probability 1 - 2\alpha n\,dt. (Since dt is so short, the probability that two or more will leave during dt is very small and will be ignored.)

We can now describe the occupation of particles on the lattice as a "ket" of the form

|\dots, n_{-1}, n_0, n_1, \dots\rangle.

It represents the juxtaposition (or conjunction, or tensor product) of the number states |n_{-1}\rangle, |n_0\rangle, |n_1\rangle, \dots located at the individual sites of the lattice. A slight modification of the annihilation and creation operators is needed so that

a^\dagger\,|n\rangle = |n + 1\rangle, \qquad a\,|n\rangle = n\,|n - 1\rangle,

for all n ≥ 0. This modification preserves the commutation relation

[a, a^\dagger] = 1.

Now let a_i = a\,\pi_i, where \pi_i selects the ith component of \psi. That is, a_i makes a copy of the state |n_i\rangle in an abstract place and then applies a to it. Then a_i^\dagger = \iota_i\,a^\dagger, where \iota_i inserts an abstract state at the ith site. Thus, for example, the net effect of a_{i-1}^\dagger a_i is to move an eigenstate from the ith to the (i-1)th site while multiplying with the appropriate eigenvalue. This allows us to write the pure diffusive behaviour of the particles as

\partial_t\,|\psi\rangle = -\alpha \sum_i \left(2\,a_i^\dagger a_i - a_{i-1}^\dagger a_i - a_{i+1}^\dagger a_i\right)|\psi\rangle,

where the sum is over i.

The reaction term can be deduced by noting that n particles can interact in n(n-1) different ways, so that the probability that a pair annihilates is \lambda\,n(n-1)\,dt and the probability that no pair annihilates is 1 - \lambda\,n(n-1)\,dt, leaving us with a term

\lambda \sum_i \left(a_i a_i - a_i^\dagger a_i^\dagger a_i a_i\right).

Other kinds of interactions can be included in a similar manner. This kind of notation allows the use of quantum field theoretic techniques in the analysis of reaction-diffusion systems.

Creation and annihilation operators in quantum field theories

In quantum field theories and many-body problems one works with creation and annihilation operators of quantum states, a_i^\dagger and a_i. These operators change the eigenvalues of the number operator,

N = \sum_i n_i = \sum_i a_i^\dagger a_i,

by one, in analogy to the harmonic oscillator. The indices (such as i) represent quantum numbers that label the single-particle states of the system; hence, they are not necessarily single numbers. For example, a tuple of quantum numbers (n, \ell, m, s) is used to label states in the hydrogen atom.

The commutation relations of creation and annihilation operators in a multiple-boson system are

[a_i, a_j^\dagger] = \delta_{ij}, \qquad [a_i^\dagger, a_j^\dagger] = [a_i, a_j] = 0,

where [\,\cdot\,, \,\cdot\,] is the commutator and \delta_{ij} is the Kronecker delta. For fermions, the commutator is replaced by the anticommutator \{\,\cdot\,, \,\cdot\,\}:

\{a_i, a_j^\dagger\} = \delta_{ij}, \qquad \{a_i^\dagger, a_j^\dagger\} = \{a_i, a_j\} = 0.

Therefore, exchanging disjoint (i.e. i ≠ j) operators in a product of creation and annihilation operators will reverse the sign in fermion systems, but not in boson systems.
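The matrix representation given above is easy to check numerically with truncated matrices. The following is an illustrative sketch (the truncation dimension N is an arbitrary choice, and the commutation relation necessarily fails in the last row and column of any finite truncation):

```python
import numpy as np

N = 8  # arbitrary truncation dimension
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation: a|n> = sqrt(n)|n-1>
ad = a.conj().T                             # creation operator (adjoint)

# [a, a+] = 1 holds exactly away from the truncation edge
comm = a @ ad - ad @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True

# H = hbar*omega*(a+ a + 1/2): the diagonal gives the ladder spectrum n + 1/2
H = ad @ a + 0.5 * np.eye(N)
print(np.diag(H))  # [0.5 1.5 2.5 ... 7.5]
```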
The Chemical Bond Across the Periodic Table: Part 1 – First Row and Simple Metals

Fernandes, Gabriel Freire Sanzovo; Cunha, Leonardo dos Anjos; Machado, Francisco Bolivar Correto; Ferrão, Luiz (2020)

The chemical bond plays a central role in the description of the physicochemical properties of molecules and solids and is essential to several fields in science and engineering, governing materials' mechanical, electrical, catalytic and optoelectronic properties, among others. Due to this indisputable importance, a proper description of the chemical bond is needed, commonly obtained through solving the Schrödinger equation of the system with either molecular orbital theory (molecules) or band theory (solids). However, connecting these seemingly different concepts is not a straightforward task for students, and there is a gap in the available textbooks concerning this subject. This work presents chemical content to be added to physical chemistry undergraduate courses, in which the framework of molecular orbitals is used to qualitatively explain the standard state of the chemical elements and some properties of the resulting material, such as gases or crystalline solids. Here in Part 1, we were able to show the transition from Van der Waals clusters to metal in alkali and alkaline earth systems. In Parts 2 and 3 of this three-part work, the present framework is applied to main group elements and transition metals. The original content discussed here can be adapted and incorporated in undergraduate and graduate physical chemistry and/or materials science textbooks and also serves as a conceptual guide to subsequent disciplines such as quantum chemistry, quantum mechanics and solid-state physics.
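As a minimal numerical illustration of the cluster-to-metal picture described in this abstract (a sketch under simplified tight-binding assumptions, not taken from the paper; the on-site energy alpha and hopping t are hypothetical parameters): in a Hückel-type chain, the discrete molecular-orbital levels of a small cluster broaden into a quasi-continuous band of width approaching 4|t| as the chain grows.

```python
import numpy as np

# Tight-binding (Hueckel) chain of N identical atoms: nearest-neighbour
# coupling t spreads the single atomic level alpha into a band.
alpha, t = 0.0, -1.0  # hypothetical parameters
for N in (2, 4, 8, 32):
    H = alpha * np.eye(N) + t * (np.eye(N, k=1) + np.eye(N, k=-1))
    levels = np.linalg.eigvalsh(H)
    print(f"N={N:2d}: {N} levels spanning [{levels.min():+.3f}, {levels.max():+.3f}]")
```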
Consider the linear Schrödinger equation $i\partial_t u = -\Delta u$, where $\Delta$ is the Laplacian on the hyperbolic space $\mathbb{H}^d$. What are the admissible pairs $(p, q)$ such that we have Strichartz estimates of the form $$ \Vert u\Vert_{L^p_tL^q_x(\mathbb{R}\times\mathbb{H}^d)} \leq C_{p, q}\Vert u_0\Vert_{L^2(\mathbb{H}^d)}?$$ Is the theory similar to that on $\mathbb{R}^d$? Also, if we replace the Laplacian in the above equation with the fractional Laplacian $(-\Delta)^{\alpha/2}$, where $\alpha \in (0, 2)$, do we know the admissible pairs? This is mainly a reference request.
• Try the scaling $x\mapsto\lambda x$, $t\mapsto\lambda^2 t$. – Fan Zheng Dec 3 '15 at 18:53
• @FanZheng Scaling would only give you the possible pairs. But it is not clear to me that the Strichartz estimates would hold for all possible pairs. – user83608 Dec 3 '15 at 19:12
For the standard Schrödinger equation, the result is due to Anker and Pierfelice http://www.sciencedirect.com/science/article/pii/S0294144909000250 and separately Ionescu and Staffilani http://link.springer.com/article/10.1007%2Fs00208-009-0344-6
Their Strichartz estimate reads: let $u$ solve $i\partial_t u + \Delta u = F$ on $\mathbb{H}^n \times\mathbb{R}$, and let $(p^{-1}, q^{-1})$ and $(\bar{p}^{-1}, \bar{q}^{-1})$ both belong to the triangle $$ T_n = \{(x,y) \in (0,1/2]\times (0,1/2): 2x + ny \geq n/2\} \cup \{ (0,1/2)\}; $$ then the estimate $$ \|u\|_{L^p_t L^q_x} \lesssim \|u_0\|_{L^2_x} + \|F\|_{L^{\bar{p}'}_tL^{\bar{q}'}_x} $$ holds, where $\prime$ denotes the Hölder conjugate.
• Exactly what I wanted. One question, if I may: $\lesssim (....)$ here means $\leq C_{p, q, p', q'}(....)$. Is it known what the optimal constants $C_{p, q, p', q'}$ are and if they are attained? – user83608 Dec 3 '15 at 19:35
• @user83608: that I don't know. As far as I know even the case for Euclidean space is not entirely resolved: Foschi proved it for $n = 1,2$ but only estimates of the best constants are available in general ejde.math.txstate.edu/Volumes/2015/270/selvitella.pdf; that there exist maximizers however is known [Shao, arXiv:0809.0153]. I'll be surprised if the general hyperbolic space case is solved. – Willie Wong Dec 3 '15 at 19:48
• Actually, as regards the Euclidean case, there is also MR2547132. – user83608 Dec 3 '15 at 20:01
• ... which has also only been successfully used to compute cases when $n = 1$ or $2$. My point is that given so much is still not known about the Euclidean case, I am doubtful whether anyone seriously looked at the best constants issue for hyperbolic space. – Willie Wong Dec 3 '15 at 20:13
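For comparison with the Euclidean theory (a summary sketch of my own, not a verbatim statement from the cited papers): on $\mathbb{R}^n$ the Schrödinger-admissible pairs form only the sharp segment
$$\frac{2}{p} + \frac{n}{q} = \frac{n}{2}, \qquad p \geq 2, \quad (p, q, n) \neq (2, \infty, 2),$$
whereas the triangle $T_n$ above contains the whole region $2/p + n/q \geq n/2$. Taking $p = q$, for instance, the Euclidean condition forces the single exponent $p = 2(n+2)/n$, while on $\mathbb{H}^n$ every $p = q \in (2,\, 2(n+2)/n]$ is admissible, reflecting the stronger long-time dispersion on hyperbolic space.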
25 thoughts on "Calling all Quantum Theorists and Cosmologists who can be patient with innumerate humanists and theists…"

1. Gavin

I can't play ask-a-physicist without setting some limits, or I won't have time for anything else in my life. Here are some rules:

1. I don't discuss theories outside the mainstream of science. Science is a huge search. Vast regions of possible truth have been searched and have produced nothing. Small regions are proving fertile, and we are concentrating our attention there. I cannot go over every barren region again with newcomers. Faith healing, the realm of Platonic forms, a spiritual plane, Penrose's link between quantum gravity and wave function collapse, and Bohm's theory are all in the vast barren region. Sorry, we've moved on. I do make some exception for widely held or forcefully promoted ideas: creationism and intelligent design, and quantum woo. I will not, however, be polite. There's nothing useful to be said about these concepts while remaining polite.

2. First priority is always going to the issues that are directly related to the faith issues that we eventually want to reach. The quantum nature of the universe is at the wrong energy scale for addressing questions of God or a soul. The issues surrounding John McCain are far more relevant.

Now for Janet's remaining questions.

1) No, the ball will not go through the wall. I know what they are trying to say, but this is the wrong way to say it.

2) This question contained seven questions, so I'm going to pick one: "Now why is it that the collapse of the wave function is so worrisome to theorists[?]" The problem is that quantum mechanics obeys one rule between measurements, and another when a measurement occurs. This would be fine if somebody could tell me what a measurement is. So, there are two different rules and no sure way to know which one should be used. That is the problem. You seem to think the problem is that we lost determinism. It is not. We don't like randomness, and we don't like many-worlds either, but we understand that nature doesn't care what we like. We can cope with randomness, if we have clear rules for when to use it. We know how to use it for water splashing and rifle shots.
Gavin says: “We can cope with randomness, if we have clear rules for when to use it. We know how to use it for water splashing and rifle shots.” And he says: “The problem is that quantum mechanics obeys one rule between measurements, and another when a measurement occurs.” Okay, that helps me much. This helps me too. But, then, why don’t you expect an underlying mechanism to be found that will explain both behaviors in one theory? Or is it just that every avenue for finding that has proven fruitless, so you are sticking with the current enigma as being more true to the data? Gavin, are there measurements made in nature without any intentional observer or measurer or a measuring machine made by such, or is that precisely what we cannot know because we’d have to measure to find out? Like a photon hitting an eyeball. Is that a measurement? (The electromagnetic wave acts like a particle?) Finally, this is really fascinating! Gavin says: “The quantum nature of the universe is at the wrong energy scale for addressing questions of God or a soul. The issues surrounding John McCain are far more relevant.” What??? This is so surprising to me. In the West, starting with the Greeks, the divine has always been sought at the most fundamental level of the material universe. We look to the most underlying element(s) to find the “causes” of the abundant orders and the coherent emergences we see all around us. But that is “causes” in the explanatory sense, not necessarily the mechanical sense that science has focused on from the beginning. Can you say why the meaning structures surrounding “John McCain” seem more fruitful for you? Is it simply that you think that for God to exist, God must have a physical kernel of reference, like the physical body of John McCain? But that is absolutely ruled out by the very definition of “God” in the West. (This btw is why the notion of the Incarnation is so entirely scandalous. To localize the non-local in a physical body? To assume finiteness and vulnerability (and even death) by what is infinite and omnipotent? These are supposed to be utterly mind-blowing and very offensive contradictions. Something that is “a foolishness to the Greeks, and to the Jews, a stumbling block.”) The Schrodinger equation gives you probabilities. Why is it worrisome to physicists that each wave can only collapse in ONE of those probable locations. Isn’t that what happens in every statistically probable future? What ACTUALLY happens is only one of the several likely results that WOULD or COULD happen? I know a lot of you are following this conversation (thank heavens for blog stats!) so someone else should try to relieve Gavin once in awhile. On the other hand, sometimes it takes a lot of prior conversation to get the point where you have covered enough common ground to communicate across these barriers of disciplinary background…. 3. HI Poor Gavin, indeed. I will try to comment on Janet’s questions 3) and 4), even though I don’t have credentials of a working physicist. 3) Regarding Penrose: It was a long time ago when I read “The Emperor’s New Mind” by Roger Penrose. It didn’t make much sense to me even when I tried to read it carefully then and it is even more difficult to follow his argument as I try to skim through it now. So, I cannot give too detailed comments but will give you my impressions from reading it a long time ago. As I understand, Penrose is trying to somehow connect three poorly understood subjects, quantum gravity, quantum measurement and consciousness. 
Since we don’t understand much about any of these, we cannot say outright that Penrose is wrong. When we are ignorant, there is more room for speculation, as I think Gavin wrote before. But that doesn’t mean that there is a high probability that any such speculation is right. And Penrose didn’t make a very convincing case that his particular speculation is right. In fact, I think many people have reasons to think that Penrose is likely to be wrong on this.

Do we need to take gravity into account in order to understand quantum measurement? We can ignore gravity to understand most quantum phenomena, because the effect of gravity is negligibly weak. So it doesn’t make much sense that when we think about measurement, quantum gravity suddenly becomes fundamental. Even if he is right, how helpful is his speculation when there is no successful theory of quantum gravity yet? Penrose may be thought-provoking, but he is not providing anything very substantial, unlike the EPR paradox and Bell’s theorem, which led to a better understanding of quantum measurement. I think that the right way to progress is to try for a better understanding of each subject. If there indeed is a fundamental connection between these subjects of the kind Penrose proposes, it is bound to be found. But there is no convincing reason to believe in such a connection now.

Are quantum effects important for consciousness? Again, it seems to me that quantum effects don’t play a significant role in most basic neurobiological processes, such as the firing of neurons and synaptic transmission. And while neuroscientists may have yet to explain consciousness, they have learned a great deal about how neurons and our brains work. Penrose, as brilliant a man as he is, is not an expert in neuroscience. Don’t you think it is a bit arrogant of him to claim that he knows better than the neuroscientists, especially when he is not making a good argument about the connection between quantum effects and neurobiological phenomena?

You have to realize that the revolutions of quantum mechanics and relativity are very exceptional events in the history of science. The problem is rarely that we don’t have adequate fundamental laws. Often the difficult part is to understand the more complex higher-order phenomena from the simple laws and the basic processes. Classical physics is sufficient to describe the weather, but it remains difficult to forecast the weather. Taking quantum mechanics and relativity into account won’t help. Suppose that the string theorists are successful in coming up with the so-called “theory of everything” that unites quantum mechanics and general relativity. It is not going to affect biologists, chemists, or even condensed-matter physicists. You don’t even need quarks to explain the structure of DNA, or chemical bonds, or superconductivity.

I might add that I don’t find it very fruitful to connect quantum measurement to consciousness (like Wigner did, for example). (And it is interesting that Penrose doesn’t like Wigner’s interpretation, either.) It is difficult enough to describe a measuring apparatus quantum mechanically. Consciousness is even more poorly defined. To think that consciousness somehow does something magical seems like baseless speculation and wishful thinking to me.

4) Regarding Bohm’s theory: I have never studied Bohm’s theory, so all I know is based on what other people wrote about it. My understanding is that Bohm’s theory is a variation of hidden-variables theories. These are deterministic theories of quantum mechanics, and the probabilistic nature is explained as a consequence of our ignorance of hidden variables. The most straightforward hidden-variables theories are ruled out. Apparently Bohm’s theory, unlike other hidden-variables theories, is not ruled out, but it comes at a great cost. It includes non-local interactions that are weird and complicated. It may be deterministic, all right, but it looks too contrived and not appealing to many physicists. They’d rather take probabilistic quantum mechanics, which may not be entirely satisfactory but is simpler.
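Hi’s weather point above (deterministic laws, stubbornly limited forecasts) is easy to demonstrate. Below is a minimal sketch of my own, using the logistic map as a generic stand-in for a chaotic deterministic system rather than anything meteorological: two trajectories that agree to six decimal places at the start disagree completely within a few dozen steps.

```python
# Deterministic does not mean forecastable: the logistic map at r = 4
# is perfectly deterministic, yet tiny input errors grow exponentially.

def step(x, r=4.0):
    return r * x * (1.0 - x)  # the entire "law of nature" in this toy world

x = 0.300000   # "true" initial condition
y = 0.300001   # our best measurement of it, off by one part in 300,000

for n in range(1, 51):
    x, y = step(x), step(y)
    if n % 10 == 0:
        print(f"step {n:2d}: true={x:.6f}  forecast={y:.6f}  error={abs(x - y):.6f}")
```

By step 40 or so the “forecast” is no better than a random guess, even though both runs obey the same exact rule: the limit is in the sensitivity to initial conditions, not in any missing fundamental law.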
4. Gavin
I said there are two rules in quantum mechanics, one for use between measurements and one for measurements. The one to use between measurements is the Schrödinger equation, which is deterministic. The Schrödinger equation does not give probabilities. You tell it what wave function or density operator you have at the start and it tells you exactly what wave function or density operator you will have at the end. There’s nothing random about it; it’s totally deterministic.

The rule for measurements is wave function collapse. You tell it what wave function or density operator you have and it gives you the probabilities for a bunch of different wave functions or density operators that you might get out, with their associated measurement results. This is a random, non-deterministic process, but the probabilities are predictable.

Janet asks, “But, then, why don’t you expect an underlying mechanism to be found that will explain both behaviors in one theory?” In fact, the underlying mechanism has already been found. There is one theory that explains both the Schrödinger equation and wave function collapse. That theory is the Schrödinger equation. If you just use the Schrödinger equation all the time, even when you do a measurement, then you can predict all of the features of wave function collapse. What should we call this new theory? We call it quantum mechanics, which is exactly what we called the old theory, so I can understand why people get confused. The process of wave function collapse is called “decoherence.” Note that it doesn’t work the other way. You can’t use wave function collapse to explain the Schrödinger equation. It just doesn’t work. So, the reason we got rid of the non-deterministic random aspect of quantum mechanics isn’t because we don’t like randomness (although we don’t like randomness, so we’re pretty happy about this); it is because wave function collapse was a useless and ill-defined part of the theory.

All of this is well understood and accepted by the experts in the field. We can use the Schrödinger equation to understand measurement in great detail. In particular, we can answer with confidence all of the questions you ask about measurement:

1) “Gavin, are there measurements made in nature without any intentional observer or measurer or a measuring machine made by such…?” Yes, all the time. In fact, natural measurements happen at an absolutely staggering rate. This is the main reason that the macroscopic world does not look quantum mechanical (and why the ball won’t go through the wall).

2) “Like a photon hitting an eyeball. Is that a measurement?” Yes. The photon hitting a tree or a rock is a measurement too. A photon hitting a mirror is not.

3) “The electromagnetic wave acts like a particle?” Yes, mostly. It retains enough of its wave characteristic for us to see color (approximately).

Not only can we understand what is and isn’t a measurement, we can perform partial measurements. This isn’t just a crazy theory; partial measurements are used in modern atomic clocks. The state of excited atoms is partially measured in one stage, and the measurement is finished in another, much later stage. The long duration of the measurement allows a much more accurate measurement of the frequency of the atoms’ oscillations, in accordance with the Heisenberg uncertainty principle. Also, the field of quantum computing is based on understanding the details of measurements, including partial measurements.

Now, I said that all of this is well understood and accepted by the experts. That is not a terribly large population, and it certainly doesn’t include all physicists. I can back up everything I’ve said, but the math is difficult, even by physicists’ standards. Furthermore, some of the implications are startling and not fully understood, causing some concern. Nonetheless, the people who actually have to earn a living doing quantum measurements are on board, because it is the only approach that makes sense and works. Roger Penrose does not have to earn a living doing quantum measurements. Quantum gravity is a worthy pursuit (it is what I do), Prof. Penrose is a good physicist, and wave function collapse is an interesting (solved) problem. Prof. Penrose’s link between quantum gravity and wave function collapse didn’t pan out. The quantum woo community’s link between consciousness and wave function collapse also failed. (Penrose promoted this connection in The Emperor’s New Mind, so his record on wave function collapse is not good. He’s very good at relativity.) Decoherence is the winner.

Perhaps I jumped the gun with my comment about God and souls. If you could tell me one thing that God or a soul does, that would be very helpful.

Roger Penrose is not “the mathematician” that Maria mentions. I think she is talking about compact extra dimensions, which are common in string theory and are explained in several popular books about string theory, including Lisa Randall’s Warped Passages.
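Gavin’s two rules, and the dephasing flavor of decoherence, both fit in a few lines of code. This is a sketch of mine, not Gavin’s: the two-level system, the Pauli-X Hamiltonian, the evolution time, and the dephasing rate are all arbitrary illustrative choices, with ħ set to 1.

```python
# Part 1: the "two rules" for a toy two-level system (hbar = 1).
import numpy as np
from scipy.linalg import expm

H = np.array([[0.0, 1.0],
              [1.0, 0.0]])                    # illustrative Hamiltonian
psi0 = np.array([1.0, 0.0], dtype=complex)    # start in |0>

# Rule 1 (between measurements): the Schrödinger equation.
# psi(t) = exp(-iHt) psi(0) -- same input, same output, every time.
U = expm(-1j * H * 0.7)
psi = U @ psi0

# Rule 2 (at a measurement): the Born rule turns amplitudes into probabilities.
p = np.abs(psi) ** 2
print("state after evolution:", np.round(psi, 4))
print("outcome probabilities:", np.round(p, 4))
print("one simulated outcome:", np.random.choice([0, 1], p=p / p.sum()))

# Part 2: decoherence as dephasing. Averaging over an ignored environment
# damps the off-diagonal (coherence) terms of the density matrix, leaving
# what looks exactly like a classical mixture.
rho = np.outer(psi, psi.conj())               # pure-state density matrix
for t in [0.0, 1.0, 5.0]:
    damped = rho.copy()
    damped[0, 1] *= np.exp(-t)                # illustrative dephasing rate of 1
    damped[1, 0] *= np.exp(-t)
    print(f"t={t}: populations {np.real(np.diag(damped)).round(4)}, "
          f"coherence {abs(damped[0, 1]):.4f}")
```

In this toy the populations (the diagonal) never change; only the coherence dies away, which is the sense in which plain unitary dynamics plus an ignored environment reproduces the appearance of collapse that Gavin describes.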
5. Maria Kirby
God gives life. God creates. God predicts the future. Souls live eternally. Souls might be considered the life force of bodies. Not particularly testable actions.

I was speaking of Lisa Randall and Raman Sundrum, whose theories, elaborated in their papers RS1 and RS2, will hopefully be tested next year at CERN.

I do think it’s very interesting that we create a reality or form like Hamlet or Saint Paul’s Cathedral (before it was constructed) and then proceed to create an instance of the form as an actual building. (Some persons take on the role or character of a literary form, thus giving it an instance, maybe for the duration of the play, maybe for a lifetime. An actor who does this brings the character to ‘life’ for the audience.) It seems like mathematics/physics does a similar process, albeit sometimes in reverse. We observe certain phenomena or instances. We then try to create a form (mathematical equations) which we can use to reproduce an instance equivalent to the one observed. The form is validated when it can not only predict an equivalent instance but can be used to predict other observable instances. The laws of motion work for dropping balls from the tower of Pisa as well as the movement of the planets.

It seems to me that when it comes to spiritual things we’re kind of like blind people trying to understand color.
I think it’s very interesting that churches report more miracles in locales where persons are more prone to believe in demons and magic. While I think psychology is an important factor, I don’t think it explains everything. It seems that there is some connection between what we believe/think (its form) and what is observed to occur in the physical realm (its instance). And I don’t necessarily think that quantum mechanics can explain the phenomenon. (I personally think that a number of people have hijacked quantum theories to make them support certain philosophical ideas, instead of letting quantum theories speak for themselves. But there do seem to be implied corollaries; I’ve heard the laws of motion used in connection with human interaction.)

If mathematicians/physicists can prove that there are more than three dimensions, then it seems that the obvious next question is: what is in those dimensions? Does what is observed in three dimensions project at all into any of those other dimensions? If something like gravity can project from a fourth dimension into our three, then can the strong, weak, or electromagnetic force project into the fourth? And what would that look like? I may be naive, but I still believe in angels. It seems to me that angels are spiritual beings who, at times, have a physical presence. (Unlike ourselves, who are physical beings with a spiritual presence.) Why would it be so unreasonable to say that what we attribute to spirit is not a force of another dimension?

6. I’ll respond to Hi, then Gavin, and then Maria. Thanks so much! I’m really interested to hear that Wigner tried to make a QM-consciousness connection, too, and your judicious comments about Penrose and the different areas of his work were very helpful and clarifying too. (I want to look at Wigner — do you happen to know where he published this?) Eugene Wigner, remember folks, wrote the elegant essay “The Unreasonable Effectiveness of Mathematics in the Natural Sciences” that we discussed earlier under Part 4 of my lit theory lectures….

On Bohm and non-locality, I’ve been wanting to prod Hi and Gavin to say something about EPR and Bell’s theorem…. Gavin, would you rule out from a physics standpoint any connection between non-locality in QM and the idea that deep reality might be non-local? (At one point I read Bell’s entire book, Speakable and Unspeakable in Quantum Mechanics, so I have a right to ask this, I think! But it’s not right for H & G to have to answer it. But how can you science-guys blame us innumerate humanists for getting stirred up by this stuff? It is downright IN-EV-IT-A-BLE.)
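For readers who want the concrete core of what has Janet stirred up here: Bell’s theorem has a sharp numerical form, the CHSH inequality, and the quantum prediction takes only a few lines to compute. A minimal sketch of my own, using the textbook singlet-state correlation E(a, b) = -cos(a - b) and the standard angle choices:

```python
# CHSH version of Bell's theorem, numerically. Any local hidden-variables
# account of the two particles must keep |S| <= 2; the quantum singlet
# state beats that bound.
import math

def E(a, b):
    return -math.cos(a - b)   # singlet-state correlation for angles a, b

# Standard CHSH angle choices (radians).
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"|S| = {abs(S):.4f}  (local bound: 2, quantum prediction: "
      f"{2 * math.sqrt(2):.4f})")
```

Any theory in which each particle carries a locally predetermined answer is stuck at |S| ≤ 2; experiments come out on the side of the quantum 2√2, which is the precise sense in which the correlations are non-local.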
As for Gavin: Gavin, I don’t think you realized how new-style you are (as opposed to old-style scientific thought). You grew up with paradigm shift and an indefinite future for physics to evolve into, and so you always are surprised that I am concerned with determinism or old-style locality in space and time. But speaking HISTORICALLY, it was precisely that emphasis of Galileo, Descartes, and Newton on “reality” being a natural world of empirical time and space (and Newton did try for an absolute time and space against which to measure relative space and motion) that led to the general incapacity of modern Westerners to imagine as “real” anything that is not something you could rest your coffee cup on (or kick). Yet the thoughtful or philosophical among Christians have always been uncomfortable with the modern notion that God or the soul are supposed to be “immaterial” entities that are divorced from the natural world. Yes, modern theists often do think of them this way, but this is because Descartes cut the mind or soul out of the material world and made it into a non-material thing, and created the “ghost” in the machine.

Now Maria, you are brave and straightforward and obviously doing some reading and thinking here, and I’m glad you joined us. Knowing Gavin and Hi, though, as I do, do you mind if I echo your remarks in a somewhat different manner? Also, may I say in passing — and this may be only my view and not Maria’s — that I think we theists would get further if we made it clear that statements like “God creates” or “the soul is immortal” are not meant as naturalistic knowledge-claims. Theists bear witness to what they have come to believe on the basis of other disciplines and practices. These are things we say we “know” in the sense of having intimately experienced and/or of being committed to as a grounding hope. (The soul’s immortality, for example, is to me one of the most speculative of religious beliefs. The Jews in OT times generally didn’t have an afterlife in view. They worshipped Yahweh but in THIS life. It didn’t make them any less theistic. But the VALUE and PRECIOUSNESS of the soul is not speculative, because it is not about the unknowable future. It is experienced as a present reality, and as an ethical and esthetic commitment that is imposed upon anyone who desires to “imitate” God.)

Okay, so Gavin asks, “name something God or a soul DOES.” But you mean, “Name something God or the soul does that I cannot account for in other ways, scientifically.” Remember, Gavin, that a scientific account is just that. It is a naturalistic account of something physically detectable in the world, according to the standards and methods of science. You want physical causes or physical effects. But there are other causes and other accounts, depending on the discipline or the way of knowing you are working in. I want to say that God “does” everything that happens in nature and physics and chemistry and biology, but not as a physical cause-and-effect. That would make God either a mere part of the physical world, or else an absolute determiner of the physical world, so that it would have no independent life or being. Instead, God “does” it all, in the sense of giving to the world the capacities and potencies to unfold as it therefore can, and to do all the kinds of things that it therefore can do. And as it says in Genesis, or as Plato and Aristotle realized, the higher living creatures, and especially this strange “speaking” and “thinking” creature that we are, are “most like” that underlying immanence called potentiality or capacity, because we are capable of recognizing and thinking about and naming and re-enacting those unfolding laws and principles and kinds of things.

In other words, I am thinking of the world as something that can move into the future on its own, based on inherent potentialities that shape what is possible but do not absolutely determine it. God is the name we use to refer to the nature of those potentialities as potentialities, and therefore the name of their source and their direction, even if those are internal to the universe itself. Perhaps I should simply say that theists and pre-scientific thinkers all tend to see the universe as dynamic, not inert.
And whatever it is in the cosmos and in living things and in history that keeps the patterned changes going and the developments developing and the processes processing and the “inert” elements being formed and every possibility for higher-order complexity exploited and evolution evolving, that is the indwelling divinity of the universe. And yet this divinity comes to us somehow as itself and not merely as the sum of all the separate processes. This is the fundamental human reaction to nature, and even the non-theistic scientists share it, as long as it isn’t called “God.” By the way, no matter how “personal” a God one worships, I think in our day something is missing if one does not have the god of the philosophers included in the notion of God. (Perhaps this is why the early and medieval Christian church was so much more profound in its thought than the modern churches tend to be.)

So when Maria answers, “God creates,” and “a soul is the life-force of the body,” then I want to say… that Maria doesn’t mean that God creates in the sense of a physical cause or a physical law or mechanism “creating” a certain physical state. It is not a push-pull cause and effect like in Newtonian mechanics. It is something far more philosophical, and yet felt by people on a daily level. Sometimes it is said that God is the “condition of possibility” for these natural mechanisms to exist and operate, and that God is the “reason” that there is something rather than nothing. Science is getting waaay beyond its sphere of expertise when anyone in science claims that science speaks to these questions or is able to speak to them. Suppose that a mechanism for the Big Bang is found, and we come to know that it occurred because of a whole series of other and preceding factors; or suppose we come to find that it makes no sense to talk of a “before” the Big Bang (which was Augustine’s position); or some other scientific insight is arrived at 10 years or 1000 years from now. This will not do away with the basic philosophical questions and their cogency for human beings. It is wishful thinking for scientists to try to say that science will or does do away with these questions. They are rational and inevitable.

And I’m not going to be here in 1000 years and maybe not in 10. So I have to go with looking at all of the arts and sciences and all of my life journey and doing the best I honestly can to arrive at a worldview that I can keep faith with and that accords with my deepest knowing. Now part of my deepest knowing is the model of the scientific, and science is so beautiful to me that I don’t want any of those laws to be abridged or changed by any interventions. (I hate the idea of miracles, if you want to know the truth!!) But what I cannot get away from is the way those beautiful coherencies and those intricate emergences of higher-order complexities depend upon potentialities that lie within the natural world itself. What is im-manent — what “abides in” those empirical things — and at the same time “lies beyond” those things, is the most fundamental meaning of the term “God” for me, and in our Western tradition of thought. In other words, our cosmos has had the potentiality within itself from the beginning to bring forth all that it has brought forth and will bring forth, and that potentiality ITSELF is exactly what Aristotle meant by the word “form.” The potentiality is something separate from every single instance of it.
There is something in this universe of which the universe is an instance and yet that is not the same as this universe itself. There is something unfolded in the history of this universe of which this universe’s history is only an instance, and that is not the same as the history of this universe. Folks, this is highly philosophical, what I just said. Of all the living creatures in our universe, only human beings (as far as we know) can observe the “existence” of what I was just speaking about. I think this is why Heidegger said that we are the kind of being that “raises the question of the being of beings,” and by so doing, we identify ourselves as the kind of being that we uniquely are.

So I love Maria’s “the soul is the life-force of the body.” And the life-force of the body is clearly something different from the “stuff” that is left as the “body” without its life, because we have all seen the life leave the body, and the body is no longer a real and living body without it. (That doesn’t mean that the life-force exists forever, by the way, or even that the life can be life without its body. It is interesting that before Descartes, Westerners believed that angels, like all created beings, HAD to have bodies, even if the bodies were made of “ether” or some other more perfect composition. And all self-directing “bodies” had to have an indwelling formal element that held them together.)

Now I am one of those who think that every single thing that happens in our consciousness has to be related to brain chemistry. The soul or mind or personality for me is not something detached from the brain. It is instead an emergent phenomenon that is entirely based on chemistry and physics but that has a complex “being” and an organization on its own level of being. One hundred years ago (Gavin), to say that everything in our minds was based on brain chemistry would have been to say that our minds are strictly determined by rigid laws of cause-and-effect. Scientific determinism raised those questions of free will that so occupied people in the Newtonian period. But now, as Dennett says, we see that, scientifically and naturalistically speaking, “freedom evolves.” The more highly developed the consciousness of the kind of creature turns out to be, the more room for freedom has evolved in its mental determinations, beginning with moving away from danger and toward prey and so forth.

Maria, I think the notion of “dimensions” is quite different for a mathematician or a physicist than for most of the rest of us. Extra dimensions beyond the ones we normally perceive do not necessarily imply a mysteriously “other” world of being in nature. Perhaps I should apply this advice to myself too, as regards EPR non-locality. But I don’t want to find a mysterious other realm of being. I just want to be able to say that much of our cosmos is alive, and that what is not alive is nonetheless potently capable of exploiting the conditions for life. It may take 4 billion years, but the amino acids will get it together! And the hydrogen molecules had to have already condensed. I don’t see how science threatens the overwhelming reality of a universe that contains within itself potentialities such as we have seen and such as we have resulted from. To reduce this universe to a purely mechanistic model will not work any more, even if it seemed to (for some) in the 18th and 19th centuries.
This universe has had direction from the beginning in the “form” of certain inherent potentialities, and it has evolved not only life but freedom and conscience. The anthropic principle cannot prove or disprove a creator God, or that our universe is a purposeful universe in a strictly religious sense. But it illustrates that we can no longer view this universe as an inert machine (as Dawkins knows). So like good little liberal arts students we move back and forth between these new-style physical sciences and the other disciplines for a renewed conversation between all of them about the most basic metaphysical issues. I think we are verging toward naming an indwelling determinacy that is neither a law of strict necessity nor a chaos of pure chance, but that leaps into a future from a presence that came out of the past. Aristotle called it a coherent wholeness, one that is based on “that which is possible, according either to probability or necessity.” Yes.

In my own work (off-line) I am trying to formalize what Maria is focusing on here in a way that works for all the ways of knowing. We often don’t see the importance of this form-al mediation because we tend to reduce it to our modern notion of “abstraction.” (Very 18th century!) In an earlier post, I tried talking about this instead under the name of “rehearsal.” (As soon as I get the software working, I’m going back to explaining the semiotic codes or normative principles we encounter with language and structures of language, by the way.)

Hume changed everything in the West when he questioned “induction.” He pointed out that no matter how many instances we encounter, we cannot be SURE and CERTAIN that the next instance won’t be a counter-instance and destroy the general principle. So Hume realized that when we go from empirical instances to form-ality we are moving away from Descartes’ ABSOLUTE certainty. This required Kant to go to work to save induction and cause-and-effect, the other big formalization that Hume demolished. But look at how this Humean thinking is based on the requirement of absolute certainty. The very definition of “Knowledge” became, after Hume, “what we can know with absolute certainty.” None of our science today would survive this requirement, because we realize today that our knowing is always “open to the future.” In the future, we may revise or re-understand what we know today. (Notice that the empirical is always something that is slipping away into the past. We are left thinking about its significance for the future!)

But Plato & Aristotle rightly thought that induction was a dynamic part of everyday life and every human learning, beginning with language. We wouldn’t know what a word meant if we had to be absolutely certain or if the word was tied down to a static one-to-one relationship to a “closed” meaning. Instead we develop a theoretical construct open to the future, for every word we learn. This is why poststructuralists are always talking about how problematic imposed closures (associated with the absolutism of the 18th century) are for the well-being of our knowing and being. Aristotle thought, contra Hume, that on occasion even ONE instance could be enough for a form-al interpretation. (Like the way we judge the other person on a date?)
And most of the time, and in relation to most of the things we really, really need to know, we have to go with likelihood and probability and the hopes of achieving a high degree of confidence, but not “absolute knowledge.” So why not just go back to the notion of the ike (techne or episteme) of the Greeks, where an ike is an attempt to come to know better the formal characteristics of a kind of thing (or kind of process)? It will have exactly as much likelihood as the kind of thing itself allows, but it will still be a valid discipline of that kind of thing. (MAJOR assumption of Western education before the laws of motion installed absolute certainty as the norm for a real “science.”)

And let’s particularly notice the role of time — and “the future” — here. When the Newtonian laws of motion became the BIG cultural paradigm for knowledge, it appeared that the future could be predicted absolutely by laws, and hence was determined. This didn’t hold up, even for the physical sciences, and Hi talks about this above. Atmospheric science uses deterministic laws, and yet you cannot predict the weather with more than certain varying degrees of probability. For the Newtonian worldview (which of course is not the same as Newton), it looked like the future was only the current “actuality” all over again, repeating itself.

But Aristotle — esp. in his literary theory but everywhere else as well — looked at it in this way. First, the past is no longer open. Once it has happened, it’s determined in the sense that it can’t be changed. But only a part of what happened (actuality) was because of ordered principles of causation. A lot of it was accidental or contingent “stuff,” because various causations happened to intersect in a random manner. So Aristotle thought of predicting the future as taking what was coherently causal in the past and formalizing it and then projecting that formality into the possible future, knowing that we can’t be certain, because of all the different causal processes and their random interferences, and because some causal processes are simply less deterministic than others by nature.

THIS IS WHY EPISTEME IS FUNDAMENTAL TO KNOWING. Each kind of causation needs its own disciplinary community. We cannot know the whole world directly. We need to find coherent parts of the world (kinds of things) to formalize, and we need to learn about formalization itself, first. But where do we step back and put all the ikes together and think about the whole of life and the whole of where everything is going? (It’s called “First Philosophy” or metaphysics, and there is a discipline for it. Theologies are also inherently a kind of first philosophy.)

There’s an existential core to each of our lives, and we have a drive to achieve an integrated worldview and make some sense of things and also determine what kind of person we should be and how we should act (and how we get so we can act like that when we know we want to — the biggest problem of all). There’s no absolute certainty in THESE areas, the ones that finally matter the most. And you don’t make any progress by simply accepting a ready-made worldview or ideology or religion either, especially if it’s Christianity, because this faith is so counter-intuitive and demands so much thought work and so much willingness to ALWAYS scrutinize and overturn one’s assumptions. (You can try to resist this, but it always gets done for you, anyway.)
Being a Christian is being called to a continuous inward revolution and requires the activity of the full mind and the whole person. Christianity is based upon the paradox that there is a fullness of truth toward which we aim our passion, and we do experience it from time to time and try to chart our course by it, but we can never have in our finite and limited selves an adequate conception of the truth or what it means. The more we try to cling to and insist upon certainty, the more likely that the shells of our certainties will be overthrown to get us to a deeper truth.

8. Let’s get back to the questions of the “existence” of:
Hamlet (the character — but consider the play)
John McCain
the electron

Note, with regard to “electron,” the difference between “an electron” and “the electron.” Here’s that Form-al awareness Maria was talking about, entering into the picture. (Bertrand Russell had to spend 100+ pages on the meaning of “the” in Principia Mathematica!) “An electron” usually refers to a particular instance of the electron, whereas “the electron” refers to the formal mode of being of the electron, as a theoretical construct (what Plato & Aristotle called a “logos,” a formal definition or account), the electron as a topic or a subject matter for formal inquiry. (The “eidetic,” as I am calling it in my off-line work, after Plato’s eidos or “Form” or Idea.)

Sooo…. Let’s not dismiss Plato’s Forms too quickly to the barren wilderness or the realm of quantum woo…. Too often, they have been interpreted to suggest an otherworldly realm of pure Ideality, but in practice, in the dialogues Plato wrote 2400 years ago, they emerge as tentative or provisional idea(l)s of the topic, and then the Form is used, paradoxically (or dialectically), to critique or to call into question all of the current (received) ideas about the topic. (Experimental and reflective testing is built in to the notion of the eidetic or the Form-al. The naming of the kind of thing within a philosophical inquiry opens the space of inquiry by opposing the Form as the ideal reality to the theory so far, or to whatever we unreflectively may have supposed.)

So, I’d like to say that the Form or the eidos is “The Putative Reality, As It Might Get To Be Known in the Future”! It is the practical and serviceable goal of our quest, though we never reach it. It is the Ideal Answer that we strive toward but do not yet have in its entirety. And there’s no sense, with Plato or Aristotle, that our disciplinary knowing is useless unless or until we do arrive at ultimate knowledge. The search is substantial and makes progress, and that gives us the experiential contact with reality that we need as human beings. It seasons us and makes us committed to the search for truth.

By the way, guess where the word “future” comes from? It comes from Latin futurus, “about to be,” which grows from the same ancient root as the Greek physis, from which we get “physics.” Physis is that active ending -sis added to the verb phuo, which means to grow, bring forth, or give rise to. So again we see that what any discipline does is attend to what can be observed to have already happened and to be happening, in regard to a certain formal kind of thing and its process of coming-to-be, and then weed out the irrelevant noise and accidents and incidentals, and then formalize the potentialities that might have been in action there. Then we will have the kind of episteme that enables us to make predictions about the future that are better than those of persons who do not have the episteme.
It’s not the predicting itself that matters here, though it is fundamental to scientific method. Episteme is not so that we gain “control” of the future per se. Knowing, instead, is about assimilating the know-how or expertise or deeper understanding that gives the member of the ike the “power to know” — the power to know “how to do” certain things, and that involves being able to gauge what most likely might or will or would happen. The important thing here is that the knower is trying to follow something that has produced a pattern in the past — and follow it into the future. Remember Paul on how hard it is to dream up experiments to test new ideas? Harder than coming up with the ideas themselves? Science is inventive and creative, working along lines already laid down, and projecting them into a “future” that we MIGHT get to FROM HERE.

This means that the Present must be viewed as being structured by formal organizations that can be hypothetically discerned from the past and projected into the future along the same principles. We are trying to move from the past into the future by assuming that something that operated in the past (we think) and in the present (we think) will continue to operate (more or less, apart from accidental interferences and incidental complexities) in the future. We project our knowing as an expectation about the future that comes FROM the formal principles upon which we’ve come to think some of the stuff in the present and the past was based. All of this requires the use of what we moderns tend to call “imagination,” but Aristotle called “poietike,” a kind of “making” of a “fictive future” of what “would” happen, in the sense of what “might” happen IF, as we suppose, our analysis in (of) the past has indeed been moving in a fruitful direction. The Possible, or The Possible-Probable, of Aristotle is not confined to the worlds of art. (P.S. The imagination is a Romantic concept only 200 years old and a bit too free-wheeling to pull together science and the arts as ways of knowing, in my judgment.)

Now, I want to remind us all that I insisted on adding to Gavin’s list of “things that exist” a couple more items (with a view to eventually discussing God and faith issues, as well as the liberal arts). So let’s add:
John McCain
the market (as in “the market sets the prices”)
“summer”

“Summer” differs from the first three “things” because, unlike Hamlet, it is based more im-mediately upon empirical or physical observations and sensations and measurements of “it,” like “John McCain,” but we can’t just point to a “summer” sitting there as an entity in the empirical world. So it is like an “electron,” in that we have a theoretical construct to define something we have detected empirically, but it is unlike an electron in that it doesn’t have the same coherent wholeness or entity-ship. “Summer” has edges that are blurry, and different cultures may divide up the seasons somewhat differently, so it may be a local construct. On the other hand, there is certainly in nature a fairly regular recurrence and patterning in the swinging around and repetition of the seasons. Yet you cannot simply identify “summer” with its empirical measurements, as I’ll show, in part because what constitutes a summer (actual temperatures, weather) may be different in Alaska than in Malaysia, and yet we still speak of a summer in both cases. (This is exactly like the identity of phonemes or morphemes in language.) We CAN identify summer MOST coherently IN DIFFERENTIAL RELATIONSHIP TO THE OTHER SEASONS.
“Seasons” is the fruitful category here, like a genus, and then we need the differentia that make the seasons differ from one another in each case…. So finally “summer” as a “thing” is a theoretical construct that “exists” for us because we have defined it in relationship to other closely related things within a certain coherent context (the cycle of seasons). But is the existence of this “summer” out there in the actual empirical world? If there is no one to observe the patterns in the weather and compare and contrast them from year to year and name them in the common language, so that little toddlers begin to learn about “summer” and “winter” as theoretical constructs, then does “summer” exist empirically in the natural world? This is NOT a yes/no question!

We can even say things like: “This was a very cool summer, hardly like a summer at all. More like late winter.” We are talking about and interacting with the natural world in these sentences, and we are also using the culturally prevalent constructions of all of that empirical data into the particular units or wholes that in our language and culture enable us to talk about the data on this more powerful formal level in meaningful terms. But when we say that the cool summer was not really a “summer” at all, what exactly do we mean by summer? The cool summer is an actual instance. The summer that it is not, is our idealized or typical summer in our minds (Plato’s Form), against which we measure each actual occurrence. So why don’t we just call this summer a winter if it is “more like” a winter than a summer? You know why. We have a whole theoretically precise set of constructs in place, and as a result, just because the specific manifestations of this particular summer don’t resemble the formal identity of summer, it is still a summer. For us, in terms of interpreting the data…

The identity of many “things” does not depend on their physical make-up so much as on the normative structures (based on physical instances) that we bring to evaluating them. (John McCain is a man, a senator, a POW….) With regard to linguistic units, this is so much the case that Saussure compares it to a game of chess, in which the formal rules remain and make the various “pieces” what they are. So you can replace a pawn or a rook with anything you want, a coin, say, and it is still a pawn or a rook so long as it differs from the other pieces enough for us to keep its identity straight. The “being a Pawn” — or the mode of being called a pawn — does not depend on any physical substance the pawn is made out of. But this is NOT saying that the identities of pawns or of summers are merely socially constructed. It is simply the case that we aren’t done defining them if we designate a piece of polished wood of a certain shape or a set of temperature ranges and weather patterns. Physical structures are involved at every level, but the identities are formal and relational (differential) identities. As every structuralist knows, a relationship is always also a contrast, and an identity is also always a difference, because identities as recognized by human knowers are always defined within a coherence context and with reference to one another.

Then, of course, Shakespeare’s Richard III says, “Now is the winter of our discontent / Made glorious summer by this sun of York….” These are metaphors, not references to a “real” summer or “winter” at all, it would seem, and yet of course they are references to real summers!
We wouldn’t even understand the metaphors if we didn’t have a form-al notion of “summer” based on many actual summers, in contrast to many actual winters, experienced and named by our speech community. For the Greeks, that hypothetical or normative Idea is the “Real,” and the actual summer is merely one actual instance of that real thing…. It’s a very, very helpful contrast for us, this contrast between the empirically actual, which is always gone (into the past), and the Real-ity of the formal theoretical constructs which we human knowers come up with, to use as we seek a deeper understanding.

But how in the world are we going to talk about the existence of “the market”? Where is it? (Like the Internet. It’s in our heads, and it’s Real, and it’s actual.) Here we have to start talking about invisible codes of “behavior” that connect all of the members of the economic community, and “information,” and “market forces,” and these are not occult. They exist, if we can rely on observations, but the mode of their existence? I heard Alan Greenspan’s replacement say that if we could only figure out what causes “confidence” we could predict the market absolutely, but we can’t…. These “names” are technical vocabulary and refer to things going on in the world. Their existence is clearly in the mode of the “Real” or Form-al or ideal “things” we’ve talked about, like “being a pawn,” and not of the merely actual or physical objects; only these constructs are removed from the first-hand data by more layers of theoretical construct. (We don’t even know what the data we want is until we have some kind of theory going.)

The big question in American academia the last 30 years or so has been: are the theoretical constructs in our heads also out there in the physical world? This is such a naive question. Only English speakers with our own tradition of reductive empiricism, from our “scientific” philosophy that valiantly struggled to model itself “logically” upon geometry, would think that if a thing is a construct that cannot be simply equated with a physical object, then “the construct” is “just socially constructed.” All human knowing is “constructed” knowing, especially in the sciences, with those constructions always, always based upon constant interaction with the world. The reason we don’t see this as self-evident is that we have forgotten that there is a difference between an actual “thing” and a “kind of thing,” even though we never ever perceive and know any actual thing without the theory of the kind of thing, and the theory of its difference from and relationship to other kinds of things, mediating our knowing of it. The very words in the lexicon of our language that we learn as we emerge as human persons in early childhood refer to the formal kind-ness of things, as we have learned to name them in the past (langue), and because they are formal constructs of that sort, therefore we can in the future use those form-al words to make specific references to instances of those kinds of things in the world, and in memory, in dream, in literature…. The formality of their identity enables us to transpose them into various realms that are realms of projected formal being….

9. Gavin
I’m just going to pick one thing as an example. You say:

I am inclined to say, “Fine.” I personally don’t see any reason to personify those things, but if you want to, that’s great. However, I run into trouble. My friend Brent agrees with you but adds that God thinks homosexuality is an abomination.
Then there’s Andrey, who agrees with you and Brent, but also thinks that God has asked him to intimidate gays with physical violence, to the point of death. How can I be respectful of non-empirical knowledge in some cases, and then oppose these other ways of knowing in other cases? As an elder in the Presbyterian church I spent considerable time watching men and women argue about what God wanted us to be doing in bed. It was typically a He-said-He-said debate, with everyone quoting and interpreting passages from the Bible. It had no connection, that I could see, to the world, because it wasn’t based on anything empirical, and they got nowhere. If we had decided to work with empirical evidence, then the issue would have been rather easily resolved. Asking everyone to stick to empirical evidence seems to be the best way to make progress in debates about practically anything, which makes me reluctant to say “fine” if you claim to have personal knowledge of some deity whose every action is undetectable.

10. HI
Janet wrote:

But don’t you see exactly what non-theists have a problem with? Here you are talking as if God only means such “potentialities.” But of course that is not all that God is to you and most theists. Don’t Christians use words such as loving and caring to describe their god? And didn’t you confess how real that kind of God is to you? Why would you worship “potentialities” anyway? But the problem is that it is not self-evident that the God who is loving and caring and the God of “potentialities” or “the condition of possibility” are one and the same. (And I suspect that the God of “potentialities” is not the primary motivation for the faith of most theists.)

John McCain is a senator and a former POW at the same time. But a senator is not necessarily a former POW. We only know that John McCain is both a senator and a former POW because we know that John McCain is a senator, and we know that John McCain was a POW, and we know that John McCain the senator and John McCain the former POW are the same person. Can you make a similar connection for God? It is more convenient to just talk about the more philosophical concept of God, but that is not going to be enough.

And even if we forget about your personal God of love and focus on the philosophical God, there still remains a question of how meaningful such a concept of God is. Read what Sean Carroll wrote. (Also, in a different thread on Cosmicvariance, someone called Ali made the following comment. I’m not sure if this is the same Ali who also comments on the thread above.

“Speaking as a religious scholar, I think you’ll have to be careful about that first one, since in order to put forth any argument at all, you’ll have to very precisely define which conception of “God” you’ll be defending (there are so many, after all, not merely the American Protestant version). Some early Christian apologists, in an attempt to defend the existence of God according to the principles of the Greek philosophical tradition with which they were familiar, ended up identifying “God” with existence itself. It would be difficult to make a case against existence existing, after all. On the other hand, what you end up with is a tautology, albeit an interesting one.”

This sounds similar to what you are attempting. Do you care to comment on that?)

Regarding Wigner, I really don’t know much of anything beyond what was written in popular science books or what you can find on the internet.
Among other things, Wigner proposed a thought experiment called “Wigner’s friend,” which is a variation of Schrödinger’s cat that essentially replaces the cat with a human (and the human doesn’t have to die, unlike Schrödinger’s cat). It was supposed to illustrate the importance of a conscious observer in the measurement, but to me it seems to illustrate the flaw in his thinking.

11. Maria Kirby
Isn’t that exactly what Christians are claiming? The soul is immortal because we have empirical evidence that Jesus rose from the dead, in a new eternal body? And we also know the Form of God because we know the Jesus who is the Word become flesh, the Form become an empirical experience?

12. All of you are keeping ME on my toes! Maria, you point out something I badly need to clarify, and it is connected with all of our impasses about the empirical and the semiotic and so forth. I’m drafting a reply to everyone. Thanks!

13. By the way, Hi, those links aren’t working for me. To Sean Carroll and Ali. Can you offer them again or name the posts to see at Cosmicvariance? Thanks! Gavin, your links are truly horrifying. Thanks for alerting us.

14. I just read Sean’s post and I am surprised at him — I think he is being incredibly reductive and narrow-minded. (His review of Dawkins was much better, imho.) And Ali, there is so much more to the discussion of “existence” than what this “religious scholar” refers to. If you scientific folks think that MY grasp of QM is not adequate, and I’ve done a lot of work there, then I have to say that to me these accounts of the theological, philosophical, and logical issues at stake surrounding “God” are not even kindergarten-level. And yet they don’t seem to recognize that they don’t know anything about what they are dismissing with their knock-down arguments; that there are intellectual worlds there of which they have no knowledge whatsoever, not to mention cultural, ethical, and daily worlds of which they apparently know nothing. It is as though they are color-blind or tone-deaf. Only what their own way of knowing illuminates can “exist” or make a difference. Anything that other people might perceive or treasure simply doesn’t exist, because their little elite group has the only way of knowing, and anything else would complicate things too much. They are bound and determined in advance to know about nothing except what they want to know about.

And I don’t buy this demonizing of the Christian rank-and-file as having no theological or philosophical sophistication. You can’t have a genuine experience of God without having those profound philosophical ramifications entering into your new experience of life. (It should go without saying that there are of course “Christians” who have had no genuine experience of God, and non-Christians who have had. The Bible is full of this — remember the Pharisees?)

I will try again and reread that post tomorrow, but I am very sad. Sean speaks of a single world with a single way of knowing what’s what, and you guys agree with him? So what have we been talking about all this time? The very idea that so many people seem to think it is okay for Dawkins or anyone else to dismiss the very question of God as stupidity, without knowing any theology, makes me want to weep. If all YOU happen to know is the straw man that Dawkins attacks, then you simply are as uninformed as he is. It doesn’t mean it’s okay to reduce the whole thing to what you’ve encountered. This is prejudice and bigotry. Fanaticism always works like this.
How is this insistence that there’s no content to faith in God any different from narrow-minded and fanatical Christians saying that evolution is wrong and blasphemous, when they don’t know the science or understand or credit the scientific method on its own terms? This kind of reductive and fanatical insistence that one way of knowing is the single obvious monolithic truth, and that it speaks for itself, and that everyone else is just dead wrong, is appalling. Asking other ways of knowing to justify themselves by the standards of your own field is deadly to thought and to any prospects for human peace and advancement. You cannot base an argument on ignorance. You may decide you don’t like religion and that YOU don’t WANT to know anything about it, but you can’t then dismiss it and claim to be able to close it off and dispose of it in advance as empty and void of truth or reality for anyone. That is simply fanaticism, and Dawkins is in this respect as simple-minded and fanatical as they come. He is blindingly ignorant of what it is that he is dismissing. He even says that his atheism is “a victimless crime” — that it hurts no one. (Not being an atheist, but his militant attacks on all religions and religious people.) He is living in a dream world. He is fomenting hatred and aggression against what many human beings hold to be their most precious possession. That isn’t hurting anyone? Militant attacks always hurt people. The militants themselves above all.

Look, no one can tell me anything about the evils of religion that I don’t know first hand. But if you don’t know anything about its treasures and its depth and its meaningfulness and the daily goodness it has also supported, then how can you begin to make an evaluation of it? I don’t mind Dawkins disliking religion, and he is entitled to his opinions. It is his claim that an entire rich dimension of human exploration and experience is worthless and empty and can be known to be worthless from outside that qualifies him as a bigot. You cannot ignore the voices of human beings with other backgrounds and think that you don’t need to value their experiences and their insights — just because you know better than they do, in advance.

What if we asked artists to “name one thing that art has done that makes a difference”? Or, “How would the universe be different without art?” Or music? What if we made it harder and asked, how would the universe be any different without government and politics? Just look at all the terrible things governments have done. Look at how destructive political fanaticism or ideologies have been. Let’s stop believing in it and it will go away.

Everyone is trying to make the universe much simpler than it is. For a person to dismiss as nonsense something like the “condition of possibility” is simply ignorance. It’s pitiful, to anyone in those fields. Dawkins’ arguments do make you cringe, just as Terry Eagleton says; they are so sophomoric. It’s exactly like a Fundamentalist getting an easy laugh from the audience by ridiculing the idea that humans descended from apes. It is a pitiful spectacle to watch supposedly liberally educated people indulge themselves in demeaning and demonizing whole segments of the human race instead of attempting to understand them and hear them on their own terms.
Depth experiences of God occur in all cultures, and in the biblical faiths the experience of a “personal” God is simultaneous with the experience of an ultimate reality and with “the ground of existence” and the condition of possibility. These aren’t empty phrases pointing to nothing at all. Does my not knowing and understanding advanced elements of a field of science make that science empty and meaningless? Only if I think I can dismiss the work of other human beings in an arduous common enterprise, just because I haven’t been drawn to it or trained in it.

Does that mean we accept everything that claims a religious basis (like gay-bashing) or a scientific basis (like experiments that cause unconscionable suffering to helpless dogs and cats and other higher animals) just because they claim to be religious or scientific? No. We have to keep on struggling to interpret and distinguish. It’s never easy. Gavin says we’re better off to simply “stick with the empirical,” and then we could settle things more easily, without the ambiguities of religion. It’s a nice hope, but I think that everything including science is pretty ambiguous ethically, and we are stuck in the middle of the whole mess, having to struggle constantly with interpretations and decisions, individual and collective. We’re all in this together, and demonizing each other isn’t helping. (Talk about “dishonest.” All these knee-jerk reactions and wholesale dismissals and sweeping assumptions that what is self-evident to me is therefore universally applicable to everyone, without even checking with the others first?) I hate it when Christians are self-righteous and reductive and judgmental, but it isn’t really any nicer to see it in atheists, either.

If you “believe in God,” it is either because you have accepted a form of religion passed down to you, or because God has become unmistakably manifest to you, or both. In the latter cases, you don’t add up the “arguments” pro and con. You try to integrate the continuing reality of God with everything else you know, and that usually means finding a tradition that is capable of helping you to grow in your relationship with God. Because you feel incredible gratitude to God, and a profound sense of the sacredness and goodness of the sacred dimension in your life, religion can become a powerful force for good or evil, and it is just as liable to become distorted and destructive as a marriage or a family or a community or any other human institution is. One feels of course that God is on the side of health and fruitfulness in all of these cases. But for us to know the good, and then to do the good? That is always the problem. But we have an evolving tradition that is very rich and profound to guide us.

I think that one of the differences that God makes is deeply inward. Genuine experience of God moves one into a journey of discovery in which you are just as foolish and intolerant as anyone else, but you aren’t left to your own resources. There’s an inexorable pressure to see through your own excuses eventually and become more humane. And there’s knowing and loving this incredibly suffering and loving presence…. I could say so much more, but I would have to do it by speaking of my own tradition and not so generally about the religious dimension in general.

What does God see when God looks at the world — imagine this, as a thought experiment if you will. The Christian tradition says that God sees the spiritual suffering and struggle of the world, because God values that above all else.
(The Jewish tradition, also.) And that the spiritual is not separated from the yearnings of the natural and the animal world as well. God looks inwardly and sees the inward heart of things and God values even the smallest increase in the kingdom of love. And God is broken by every violence that breaks any one of us. God suffers with us and in us and for us. There’s nothing easy here. Nothing snappy. Just something unbearably relevant and real. 15. Gavin says: “How can I be respectful of non-empirical knowledge in some cases, and then oppose these other ways of knowing in other cases.” But you have to. You have to try to distinguish genuine ways of knowing from ones you cannot accept as genuine. You have to distinguish the Christian tradition from gay-bashing, for instance. You have to distinguish scientific knowing from the hideous torture of animals. You have to do the best you can, as thoughtfully as you can, and take your stand as best you can, but you can’t just throw out whole ways of knowing because they change, disagree, and sponsor terrible things. And here you are saying “non-empirical” again…. Anything that requires human observation over time is no longer strictly empirical, but a weaving together of empirical observations at different times into a construct that is both empirically based and that “exists” in human consciousness, language, and history. You are talking about a way of knowing that has as its ultimate arbiter the conformity of these constructions to experimental testing. Such a way of knowing cannot do ethics and perform in many other vital areas. It can help inform ethical decision-making, but it cannot make the decisions, because it isn’t designed to do that. How would empirical considerations settle the Presbyterian elders’ debates about what people should do in bed? It could inform the debate, but you would still need to make larger ethical arguments for how to interpret the scientific data in an ethical framework. The science on homosexuality did settle my own stance as a Christian on homosexuality, but that is only because I have a larger context of religious and ethical theory, i.e. that the Cross shows that nothing trumps divine love. Therefore, if people are born with different sexual orientations, and have no choice in the matter, as we now believe based on science, then I don’t believe Christ would condemn non-heterosexual persons to live without intimacy and physical love. But I’m not allowed to condemn and hate Christians who cannot, in good faith, come to this view of the matter. (I know some of them for whom this is tearing them apart. Great suffering here on all sides.) To me, we are in another period of historical change. We’ve gone through this with abolition of slavery, with Christians on both sides, and then again with women in ministry, and now we are going through it again with homosexuality. But I do have to oppose any hating and persecuting of other persons because of homosexuality (or for any other reason). The whole church will come around on this as it has on the other issues. We’re a species that is now evolving culturally as well as (or more than) genetically, but we still resist at every step the manifestations of a transcendent love and compassion that we also prize and adore above all else. 16. 
Gavin, I know many, many of the Greek Orthodox here in Seattle quite personally, because my Episcopal parish shared our building with a Greek Orthodox mission congregation until they were big enough to have their own building, but we are still very close, and I went to their larger gatherings, and gentler, more loving people you could never hope to meet. They took care of their elderly and adored their children and reached out to everyone — they were on fire with love. I can’t put it any other way. Their children would come home crying from second grade because little Evangelical children had told them they “weren’t really Christians.” When their tradition goes straight back to the early church. There isn’t much limit to our human pettiness and iniquity. And there isn’t much limit to those people’s love and the good that they do and are. Aren’t the sociological reasons for those Russian men’s looking for scapegoats pretty obvious? Sean Carroll’s review of Richard Dawkins’ The God Delusion is a sophisticated discussion of why we can’t attribute all evil by religious persons to religious factors. Also, folks, I’d like to add to the list of questions we’ve been building. What difference does innocence make? What difference does forgiveness make? What difference does vicarious sacrifice make? What does the figure of God on the Cross mean to the mothers of those who’ve been “disappeared” in South America, and does it make a difference for them that God’s son too was put to death as a criminal? This narrow rationalism is as thin as water. God takes on our flesh and our blood and speaks to us in our deepest sufferings and rebukes all our iniquities by taking them all on personally. (But I can also see how people who have been suffocated by the perversions of religion can find in science freedom and space and fresh air, while for me the scientific attitude was the source of great harm. This is where we need semiotic theory. Things take on their identity in large part from the surrounding system of associations and the rules we have in place, like summer from the other seasons and a pawn in chess. The “Christ-event” for one person might be a forest and for another romantic love and for another science itself. Maybe we should read Dante together here….) Don’t let the distortions fool you. The most powerful goods can be turned into the most powerful evils quite easily, and it happens all the time. Humans are deeply irrational creatures, as well as deeply rational creatures, and we are in desperate need of interventions on all levels of our being. Wow. This is turning into a lead-in to discussing Shusaku Endo’s _Silence_! Starts tomorrow… on All Saints Day, in fact, as it happens. 17. Gavin As I said before, I agree with Sean except for his use of the word “dishonest.” You respond: This passage stands out for its clarity, but mischaracterizations and insults continue throughout. I will not participate in a conversation like this. Good luck, 18. I am very sad. You have been a wonderful conversation partner, Gavin. I was fresh from the lacerations I had just received from reading Sean’s piece and some of the comment thread. I should have waited until I was calmer. I wasn’t talking about you and Sean personally, Gavin. I was talking about this militant attack on religion as a way of thinking and viewing the world. I still believe it is as tragically narrow as the fundamentalist biblical literalists who are crusading against Darwin. What about all of us in the middle?
I hope you reconsider, but in any case, I’ll always treasure the conversation — and reread the QM parts! (Looks like I need to learn some spiritual lessons in humility from Shusaku Endo.) 19. Maria Kirby I would like to go back to your example of the reality of Hamlet as an example of semiotic knowledge and empirical knowledge. Hamlet as a play, as words written down, expresses certain ideas and concepts embedded in the character of Hamlet. When an actor performs Hamlet, he converts the semiotic knowledge into empirical knowledge. To the extent that the actor’s representation or characterization accurately reflects the semiotic knowledge of Hamlet, that semiotic knowledge becomes empirical to the actor and his audience. I believe the same is true for religious concepts, particularly our knowing God through Jesus. To the extent that we understand the semiotic knowledge expressed in the Bible about Jesus, to the extent that our semiotic knowledge is developed through philosophy, nature, or other means, we can convert that knowledge into empirical knowledge through how we behave towards others, through how we embody Christ, or Love, or forgiveness. It seems to me that one of the major themes in the Bible is that of transformation. God transforms evil into good. Forgiveness transforms enemies into friends. God’s love transforms us from dying or dead into living and alive. The resurrection transforms death into life. I see a similar phenomenon occurring in biological systems where DNA is torn apart, replicated, and restored. And in the process a new set of DNA is created and life is duplicated. Evil and death tear apart the present life. Forgiveness and love restore life, but it’s not a restoration to the previous conditions, it’s a restoration to new life, eternal life as seen (witnessed, empirically experienced) in the resurrection of Jesus. Because the new life that Jesus lived after the resurrection had a physical form, an empirical form, and because when we forgive each other we are converting the semiotic form that Jesus represents into an empirical form of earthly experience, I would like to think that we are also creating an eternal empirical form of an eternal life. It seems that many passages in the NT indicate that eternal life is something that we receive not only because God forgives us, but because we forgive others. 20. Thanks, Maria. I have been working on a response to your emails which I’ll be able to post soon. And I’m posting on Shusaku Endo later today. Thanks everyone. 21. That image of the DNA being torn apart and re-united with another “torn” DNA is a powerful image. “Except a seed fall into the ground and die,” right? Semiotics, word theory, is filled with deaths and rebirths within words, sustaining them. It is like Heidegger’s unconcealment (truth) and re-concealment being dialectically related. I think from a semiotics standpoint, I want to comment on the nature of both what you call semiotic knowledge and what you call empirical knowledge, though I certainly see what you mean. The two are much more inter-related than usually appears on the surface. It’s fascinating to remember that the “word” sustains even what we think of as empirical being. More on this soon. (I keep saying soon, but truly….) As for forgiveness, it’s forgiving oneself that is often most difficult, isn’t it? The intricate interrelationship between the present and the future is something I’ll be hitting on too, I hope. Thanks. More soon.
Journal of Computational Dynamics
December 2019, Volume 6, Issue 2
Special issue in honor of Reinout Quispel

Preface: Special issue in honor of Reinout Quispel
Elena Celledoni and Robert I. McLachlan
2019, 6(2): ⅰ-ⅴ. doi: 10.3934/jcd.2019007

Efficient time integration methods for Gross-Pitaevskii equations with rotation term
Philipp Bader, Sergio Blanes, Fernando Casas and Mechthild Thalhammer
2019, 6(2): 147-169. doi: 10.3934/jcd.2019008
The objective of this work is the introduction and investigation of favourable time integration methods for the Gross-Pitaevskii equation with rotation term. Employing a reformulation in rotating Lagrangian coordinates, the equation takes the form of a nonlinear Schrödinger equation involving a space-time-dependent potential. A natural approach that combines commutator-free quasi-Magnus exponential integrators with operator splitting methods and Fourier spectral space discretisations is proposed. Furthermore, the special structure of the Hamilton operator permits the design of specifically tailored schemes. Numerical experiments confirm the good performance of the resulting exponential integrators.

Deep learning as optimal control problems: Models and numerical methods
Martin Benning, Elena Celledoni, Matthias J. Ehrhardt, Brynjulf Owren and Carola-Bibiane Schönlieb
2019, 6(2): 171-198. doi: 10.3934/jcd.2019009
We consider recent work of [18] and [9], where deep learning neural networks have been interpreted as discretisations of an optimal control problem subject to an ordinary differential equation constraint. We review the first order conditions for optimality, and the conditions ensuring optimality after discretisation. This leads to a class of algorithms for solving the discrete optimal control problem which guarantee that the corresponding discrete necessary conditions for optimality are fulfilled. The differential equation setting lends itself to learning additional parameters such as the time discretisation. We explore this extension alongside natural constraints (e.g. time steps lie in a simplex). We compare these deep learning algorithms numerically in terms of induced flow and generalisation ability.

Algebraic structure of aromatic B-series
Geir Bogfjellmo
2019, 6(2): 199-222. doi: 10.3934/jcd.2019010
Aromatic B-series are a generalization of B-series. Some of the algebraic structures on B-series can be defined analogically for aromatic B-series. This paper derives combinatorial formulas for the composition and substitution laws for aromatic B-series.

A new class of integrable Lotka–Volterra systems
Helen Christodoulidi, Andrew N. W. Hone and Theodoros E. Kouloukas
2019, 6(2): 223-237. doi: 10.3934/jcd.2019011
A parameter-dependent class of Hamiltonian (generalized) Lotka–Volterra systems is considered. We prove that this class contains Liouville integrable as well as superintegrable cases according to particular choices of the parameters. We determine sufficient conditions which result in integrable behavior, while we numerically explore the complementary cases, where these analytically derived conditions are not satisfied.
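The optimal-control reading of deep learning in the Benning et al. abstract above rests on a simple identification: a residual layer is one forward-Euler step of an ODE. The following minimal sketch shows only that identification; the tanh vector field, dimensions and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def f(x, theta):
    # Vector field of the continuous model x'(t) = f(x(t), theta(t)):
    # here a generic tanh layer, f(x) = tanh(W x + b).
    W, b = theta
    return np.tanh(W @ x + b)

def resnet_forward(x0, thetas, h=0.1):
    # A residual layer x_{k+1} = x_k + h * f(x_k, theta_k) is exactly one
    # forward-Euler step of the ODE constraint; depth plays the role of time.
    x = x0
    for theta in thetas:
        x = x + h * f(x, theta)
    return x

rng = np.random.default_rng(0)
d, depth = 4, 10
thetas = [(rng.normal(size=(d, d))/np.sqrt(d), np.zeros(d)) for _ in range(depth)]
print(resnet_forward(rng.normal(size=d), thetas))
```

In this picture the learnable step sizes mentioned in the abstract correspond to letting h vary from layer to layer, subject to constraints such as the time steps lying in a simplex.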
Solving the wave equation with multifrequency oscillations
Marissa Condon, Arieh Iserles, Karolina Kropielnicka and Pranav Singh
2019, 6(2): 239-249. doi: 10.3934/jcd.2019012
We explore a new asymptotic-numerical solver for the time-dependent wave equation with an interaction term that is oscillating in time with a very high frequency. The method involves representing the solution as an asymptotic series in inverse powers of the oscillation frequency. Using the new scheme, high accuracy is achieved at a low computational cost. Salient features of the new approach are highlighted by a numerical example.

Principal symmetric space analysis
Charles Curry, Stephen Marsland and Robert I McLachlan
2019, 6(2): 251-276. doi: 10.3934/jcd.2019013
Principal Geodesic Analysis is a statistical technique that constructs low-dimensional approximations to data on Riemannian manifolds. It provides a generalization of principal components analysis to non-Euclidean spaces. The approximating submanifolds are geodesic at a reference point such as the intrinsic mean of the data. However, they are local methods as the approximation depends on the reference point and does not take into account the curvature of the manifold. Therefore, in this paper we develop a specialization of principal geodesic analysis, Principal Symmetric Space Analysis, based on nested sequences of totally geodesic submanifolds of symmetric spaces. The examples of spheres, Grassmannians, tori, and products of two-dimensional spheres are worked out in detail. The approximating submanifolds are geometrically the simplest possible, with zero exterior curvature at all points. They can deal with significant curvature and diverse topology. We show that in many cases the distance between a point and the submanifold can be computed analytically and there is a related metric that reduces the computation of principal symmetric space approximations to linear algebra.

Integrable reductions of the dressing chain
Charalampos Evripidou, Pavlos Kassotakis and Pol Vanhaecke
2019, 6(2): 277-306. doi: 10.3934/jcd.2019014
In this paper we construct a family of integrable reductions of the dressing chain, described in its Lotka-Volterra form. For each $k, n \in \mathbb{N}$ with $n \geqslant 2k+1$ we obtain a Lotka-Volterra system $\mathrm{LV}_b(n, k)$ on $\mathbb{R}^n$ which is a deformation of the Lotka-Volterra system $\mathrm{LV}(n, k)$, which is itself an integrable reduction of the $(2m+1)$-dimensional Bogoyavlenskij-Itoh system $\mathrm{LV}(2m+1, m)$, where $m = n-k-1$. We prove that $\mathrm{LV}_b(n, k)$ is both Liouville and non-commutative integrable, with rational first integrals which are deformations of the rational first integrals of $\mathrm{LV}(n, k)$. We also construct a family of discretizations of $\mathrm{LV}_b(n, 0)$, including its Kahan discretization, and we show that these discretizations are also Liouville and superintegrable.

Locally conservative finite difference schemes for the modified KdV equation
Gianluca Frasca-Caccia and Peter E. Hydon
2019, 6(2): 307-323. doi: 10.3934/jcd.2019015
Finite difference schemes that preserve two conservation laws of a given partial differential equation can be found directly by a recently-developed symbolic approach. Until now, this has been used only for equations with quadratic nonlinearity. In principle, a simplified version of the direct approach also works for equations with polynomial nonlinearity of higher degree. For the modified Korteweg-de Vries equation, whose nonlinear term is cubic, this approach yields several new families of second-order accurate schemes that preserve mass and either energy or momentum. Two of these families contain Average Vector Field schemes of the type developed by Quispel and co-workers. Numerical tests show that each family includes schemes that are highly accurate compared to other mass-preserving methods that can be found in the literature.

Re-factorising a QRT map
Nalini Joshi and Pavlos Kassotakis
2019, 6(2): 325-343. doi: 10.3934/jcd.2019016
A QRT map is the composition of two involutions on a biquadratic curve: one switching the $x$-coordinates of two intersection points with a given horizontal line, and the other switching the $y$-coordinates of two intersections with a vertical line. Given a QRT map, a natural question is to ask whether it allows a decomposition into further involutions. Here we provide new answers to this question and show how they lead to a new class of maps, as well as known HKY maps and quadrirational Yang-Baxter maps.

The Lie algebra of classical mechanics
Robert I. McLachlan and Ander Murua
2019, 6(2): 345-360. doi: 10.3934/jcd.2019017
Classical mechanical systems are defined by their kinetic and potential energies. They generate a Lie algebra under the canonical Poisson bracket. This Lie algebra, which is usually infinite dimensional, is useful in analyzing the system, as well as in geometric numerical integration. But because the kinetic energy is quadratic in the momenta, the Lie algebra obeys identities beyond those implied by skew symmetry and the Jacobi identity. Some Poisson brackets, or combinations of brackets, are zero for all choices of kinetic and potential energy, regardless of the dimension of the system. Therefore, we study the universal object in this setting, the 'Lie algebra of classical mechanics' modelled on the Lie algebra generated by kinetic and potential energy of a simple mechanical system with respect to the canonical Poisson bracket. We show that it is the direct sum of an abelian algebra $\mathfrak{X}$, spanned by 'modified' potential energies and isomorphic to the free commutative nonassociative algebra with one generator, and an algebra freely generated by the kinetic energy and its Poisson bracket with $\mathfrak{X}$. We calculate the dimensions $c_n$ of its homogeneous subspaces and determine the value of its entropy $\lim_{n\to\infty} c_n^{1/n}$. It is $1.8249\dots$, a fundamental constant associated to classical mechanics. We conjecture that the class of systems with Euclidean kinetic energy metrics is already free, i.e., that the only linear identities satisfied by the Lie brackets of all such systems are those satisfied by the Lie algebra of classical mechanics.

A structure-preserving Fourier pseudo-spectral linearly implicit scheme for the space-fractional nonlinear Schrödinger equation
Yuto Miyatake, Tai Nakagawa, Tomohiro Sogabe and Shao-Liang Zhang
2019, 6(2): 361-383. doi: 10.3934/jcd.2019018
We propose a Fourier pseudo-spectral scheme for the space-fractional nonlinear Schrödinger equation. The proposed scheme has the following features: it is linearly implicit, it preserves two invariants of the equation, and its unique solvability is guaranteed without any restrictions on space and time step sizes. The scheme requires solving a complex symmetric linear system per time step. To solve the system efficiently, we also present a certain variable transformation and preconditioner.

Discrete gradients for computational Bayesian inference
Sahani Pathiraja and Sebastian Reich
2019, 6(2): 385-400. doi: 10.3934/jcd.2019019
In this paper, we exploit the gradient flow structure of continuous-time formulations of Bayesian inference in terms of their numerical time-stepping. We focus on two particular examples, namely, the continuous-time ensemble Kalman–Bucy filter and a particle discretisation of the Fokker–Planck equation associated to Brownian dynamics. Both formulations can lead to stiff differential equations which require special numerical methods for their efficient numerical implementation. We compare discrete gradient methods to alternative semi-implicit and other iterative implementations of the underlying Bayesian inference problems.

Geometry of the Kahan discretizations of planar quadratic Hamiltonian systems. Ⅱ. Systems with a linear Poisson tensor
Matteo Petrera and Yuri B. Suris
2019, 6(2): 401-408. doi: 10.3934/jcd.2019020
Kahan discretization is applicable to any quadratic vector field and produces a birational map which approximates the shift along the phase flow. For a planar quadratic Hamiltonian vector field with a linear Poisson tensor and with a quadratic Hamilton function, this map is known to be integrable and to preserve a pencil of conics. In the paper "Three classes of quadratic vector fields for which the Kahan discretization is the root of a generalised Manin transformation" by P. van der Kamp et al. [5], it was shown that the Kahan discretization can be represented as a composition of two involutions on the pencil of conics. In the present note, which can be considered as a comment to that paper, we show that this result can be reversed. For a linear form $\ell(x,y)$, let $B_1, B_2$ be any two distinct points on the line $\ell(x,y) = -c$, and let $B_3, B_4$ be any two distinct points on the line $\ell(x,y) = c$. Set $B_0 = \tfrac{1}{2}(B_1+B_3)$ and $B_5 = \tfrac{1}{2}(B_2+B_4)$; these points lie on the line $\ell(x,y) = 0$. Finally, let $B_\infty$ be the point at infinity on this line. Let $\mathfrak{E}$ be the pencil of conics with the base points $B_1, B_2, B_3, B_4$. Then the composition of the $B_\infty$-switch and of the $B_0$-switch on the pencil $\mathfrak{E}$ is the Kahan discretization of a Hamiltonian vector field $f = \ell(x,y)\begin{pmatrix} \partial H/\partial y \\ -\partial H/\partial x \end{pmatrix}$ with a quadratic Hamilton function $H(x,y)$. This birational map $\Phi_f: \mathbb{C}P^2 \dashrightarrow \mathbb{C}P^2$ has three singular points $B_0, B_2, B_4$, while the inverse map $\Phi_f^{-1}$ has three singular points $B_1, B_3, B_5$.

Chains of rigid bodies and their numerical simulation by local frame methods
Nicolai Sætran and Antonella Zanna
2019, 6(2): 409-427. doi: 10.3934/jcd.2019021
We consider the dynamics and numerical simulation of systems of linked rigid bodies (chains). We describe the system using the moving frame method approach of [18]. In this framework, the dynamics of the $j$th body is described in a frame relative to the $(j-1)$th one. Starting from the Lagrangian formulation of the system on $\mathrm{SO}(3)^N$, the final dynamic formulation is obtained by variational calculus on Lie groups. The obtained system is solved by using unit quaternions to represent rotations and numerical methods preserving quadratic integrals.

Study of adaptive symplectic methods for simulating charged particle dynamics
Yanyan Shi, Yajuan Sun, Yulei Wang and Jian Liu
2019, 6(2): 429-448. doi: 10.3934/jcd.2019022
In plasma simulations, numerical methods with high computational efficiency and long-term stability are needed. In this paper, symplectic methods with adaptive time steps are constructed for simulating the dynamics of charged particles under the electromagnetic field. With specifically designed step size functions, the motion of charged particles confined in a Penning trap under three different magnetic fields is studied, and also the dynamics of runaway electrons in tokamaks is investigated. The numerical experiments are performed to show the efficiency of the new derived adaptive symplectic methods.

Linear degree growth in lattice equations
Dinh T. Tran and John A. G. Roberts
2019, 6(2): 449-467. doi: 10.3934/jcd.2019023
We conjecture recurrence relations satisfied by the degrees of some linearizable lattice equations. This helps to prove linear degree growth of these equations. We then use these recurrences to search for lattice equations that have linear growth and hence are linearizable.

Strange attractors in a predator–prey system with non-monotonic response function and periodic perturbation
Johan Matheus Tuwankotta and Eric Harjanto
2019, 6(2): 469-483. doi: 10.3934/jcd.2019024
A system of ordinary differential equations of a predator–prey type, depending on nine parameters, is studied. We have included in this model a nonmonotonic response function and time periodic perturbation. Using numerical continuation software, we have detected three codimension two bifurcations for the unperturbed system, namely cusp, Bogdanov-Takens and Bautin bifurcations. Furthermore, we concentrate on two regions in the parameter space, the region where the Bogdanov-Takens and the region where Bautin bifurcations occur. As we turn on the time perturbation, we find strange attractors in the neighborhood of invariant tori of the unperturbed system.

Using Lie group integrators to solve two and higher dimensional variational problems with symmetry
Michele Zadra and Elizabeth L. Mansfield
2019, 6(2): 485-511. doi: 10.3934/jcd.2019025
The theory of moving frames has been used successfully to solve one dimensional (1D) variational problems invariant under a Lie group symmetry. In the one dimensional case, Noether's laws give first integrals of the Euler–Lagrange equations. In higher dimensional problems, the conservation laws do not enable the exact integration of the Euler–Lagrange system. In this paper we use the theory of moving frames to help solve, numerically, some higher dimensional variational problems, which are invariant under a Lie group action. In order to find a solution to the variational problem, we need first to solve the Euler–Lagrange equations for the relevant differential invariants, and then solve a system of linear, first order, compatible, coupled partial differential equations for a moving frame, evolving on the Lie group. We demonstrate that Lie group integrators may be used in this context. We show first that the Magnus expansions on which one dimensional Lie group integrators are based, may be taken sequentially in a well defined way, at least to order 5; that is, the exact result is independent of the order of integration. We then show that efficient implementations of these integrators give a numerical solution of the equations for the frame, which is independent of the order of integration, to high order, in a range of examples. Our running example is a variational problem invariant under a linear action of $SU(2)$. We then consider variational problems for evolving curves which are invariant under the projective action of $SL(2)$ and finally the standard affine action of $SE(2)$.
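Kahan's discretization, which recurs in the Evripidou et al. and Petrera–Suris abstracts above, admits a compact description: in a quadratic vector field, every product u v is replaced by the symmetric average (u_new v + u v_new)/2 and every linear term u by (u + u_new)/2, so the update is linear in the new point and needs only a small linear solve per step. The sketch below applies it to the classic planar Lotka–Volterra system; the concrete system and parameters are illustrative and not drawn from the papers in this issue.

```python
import numpy as np

def kahan_lv_step(x, y, h, a=2.0, b=1.0, c=1.0, d=1.0):
    """One Kahan step for x' = x(a - b*y), y' = y(c*x - d).

    Quadratic terms u*v become (u_new*v + u*v_new)/2 and linear terms u
    become (u + u_new)/2, so (x_new, y_new) solves a 2x2 linear system.
    """
    M = np.array([
        [1.0 - h*a/2 + h*b*y/2, h*b*x/2],
        [-h*c*y/2, 1.0 + h*d/2 - h*c*x/2],
    ])
    rhs = np.array([(1.0 + h*a/2) * x, (1.0 - h*d/2) * y])
    return np.linalg.solve(M, rhs)

x, y = 1.0, 1.0
for _ in range(5000):
    x, y = kahan_lv_step(x, y, h=0.01)
# The discrete orbit closes up instead of spiralling, the behaviour that
# motivated the study of Kahan's method on integrable quadratic systems.
print(x, y)
```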
Pilot Wave Theory

There’s one interpretation of the meaning of quantum mechanics that somehow manages to skip a lot of the wildly extravagant, or near mystical, ideas of the mainstream interpretations: it’s DeBroglie-Bohm Pilot-Wave theory. Despite its alluring intuitive nature, for some reason it remains a fringe theory. Misinterpretation of the ideas of Quantum Mechanics has spawned some of the worst quackery, pseudo-science, hoo-ha, and unfounded mystical storytelling of any scientific theory. It’s easy to see why: there are far-out explanations for the processes at work behind the incredibly successful mathematics of quantum mechanics. These explanations claim stuff like: things being waves and particles at the same time, the act of observation defining reality, cats supposedly both alive and dead, and even a universe constantly splitting into infinite alternate realities. The weird results of quantum experiments seem to demand weird explanations of the nature of reality. There is one interpretation of quantum mechanics that remains comfortably, almost stodgily, physical: that’s DeBroglie-Bohm Pilot-Wave theory. Pilot-Wave Theory, also known as Bohmian Mechanics, stands in striking contrast to the much more mainstream ideas, for example the Copenhagen and Many-Worlds interpretations.

Copenhagen Interpretation of Quantum Mechanics ↓ Many-Worlds Interpretation of Quantum Mechanics ↑

Pilot-Wave Theory is perhaps the most solidly physical, even mundane, of the complete and self-consistent interpretations of quantum mechanics. But at the same time it’s considered one of the least orthodox. Why so? Because orthodoxy equals radicalism plus time. The founding fathers of the Copenhagen interpretation of quantum mechanics, Werner Heisenberg and Niels Bohr, were radicals. When quantum theory was coming together in the twenties, they were fervent about the need to reject all classical thinking in interpreting the strange results of early quantum experiments.

(left) – Werner Heisenberg and Niels Bohr – (right)

One aspect of that radical thinking was that the wave function is not a wave in anything physical, but an abstract distribution of probabilities. Bohr and Heisenberg insisted that in the absence of measurement the unobserved universe is only a suite of possibilities of the various states it could take, were a measurement to be made. Then, upon measurement, fundamental randomness determines the properties of, say, the particle that emerges from its wave function. This required an almost mystical duality between the wave-like and the particle-like nature of matter. Not everyone was so sure. Einstein famously hated the idea of fundamental randomness, but to counter Bohr and Heisenberg there needed to be a full theory that described how a quantum object could show both wave and particle behavior at the same time without being fundamentally probabilistic. That theory came from Louis DeBroglie, the guy who originally proposed the idea that matter could be described as waves right at the beginning of the quantum revolution. DeBroglie reasoned that there was no need for quantum objects to transition in a mystical way between non-real waves and real particles: why not just have real waves push around real particles? This is Pilot-Wave Theory. In it, the wave function describes a real wave of some stuff, and this wave guides the motion of a real point-like particle that has a definite location at all times.
Importantly, the wave function in Pilot-Wave Theory evolves exactly according to the Schrödinger equation. That’s the equation at the heart of all quantum mechanics that tells the wave function how to change across space and time.

Schrödinger Equation: Describes How a Physical System Will Change Over Time

This means that Pilot-Wave Theory makes the same basic predictions as any other breed of quantum mechanics. For example, its guiding wave has all the usual wavy stuff, like forming interference patterns when it passes through a pair of slits. Because particles follow the paths etched out by the wave, they will end up landing according to that pattern. The wave defines a set of possible trajectories and the particle takes one of those trajectories. But the choice of paths isn’t random: if you know the exact particle position and velocity at any point you can figure out its entire future trajectory. Apparent randomness arises because we can’t ever have a perfect measurement of initial position, velocity or other properties. This hypothetical predictability means that a Pilot-Wave universe is completely deterministic. When DeBroglie presented his still incomplete theory at the famous Solvay conference of 1927 it didn’t go down so well. Technical objections were raised and Niels Bohr doubled down on the probabilistic interpretation. DeBroglie himself wasn’t so sure any more, and he dropped Pilot-Waves altogether. The idea was forgotten for decades and Copenhagen became the orthodoxy.

Solvay conference (1927): Schrödinger, Bohr, Heisenberg, De Broglie, Dirac, Lorentz, Einstein…

It took until 1952 for another physicist, David Bohm, to feel very uncomfortable with some of the wackiness of Copenhagen and to re-discover DeBroglie’s old idea. Bohm took off where DeBroglie left off and completed the theory. The result was Bohmian Mechanics, also known as DeBroglie-Bohm Pilot-Wave theory. These days, more and more serious physicists are favoring Bohm’s ideas. However, it’s far from being broadly accepted. DeBroglie himself remained firmly in the Copenhagen camp even after Bohm’s efforts. Although Pilot-Wave theory makes all the usual predictions of Quantum Mechanics, it has some really fundamental differences. Those differences lie in a sort of “special thinking” you need to do in order to accept Pilot-Waves over other interpretations. In fact most of the arguments for or against it are about this “special thinking”. Are you more or less comfortable with the oddness of Pilot-Waves versus the oddness, say, of Copenhagen or Many Worlds? So what uncomfortable thinking does Pilot-Wave theory require? For one thing, it needs a teensy bit of extra math that mainstream interpretations don’t. As well as the Schrödinger equation that tells the wave function how to change, it also has a Guiding (Velocity) Equation that tells the particle how to move within that wave function.

Schrödinger Equation and Guiding (Velocity) Equation

That “extra math” is considered unparsimonious by some, a needless added complexity. However, the Guiding Equation is derived directly from the wave function, so some would argue that it was there all along. A more troubling requirement of Bohmian Mechanics is that it does contain real information that’s not encoded in the wave function. That’s something that Niels Bohr was so fervently against. Bohmian Mechanics has so-called “hidden variables”: details about the state of the particles that are not described by the wave function.
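For the record, the pair of equations just described has a compact standard form for a single spinless particle. This is the usual textbook presentation, not a quotation from this post: writing the wave function in polar form as $\psi = R\, e^{iS/\hbar}$, the Schrödinger equation evolves $\psi$ while the guiding equation moves the particle along the gradient of the phase $S$:

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\nabla^{2}\psi + V\psi, \qquad \frac{d\mathbf{x}}{dt} = \frac{\nabla S}{m}\bigg|_{\mathbf{x}=\mathbf{x}(t)} = \frac{\hbar}{m}\,\mathrm{Im}\!\left(\frac{\nabla\psi}{\psi}\right)\bigg|_{\mathbf{x}=\mathbf{x}(t)}.$$

The particle position x(t) is exactly the extra data in question: its velocity is read off from the wave function itself, which is why some argue the guiding equation “was there all along.”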
According to Pilot-Wave Theory the wave function just describes the possible distribution of those variables given our lack of perfect knowledge. But hidden variables have a bad rap in quantum mechanics.

Distribution of Variables Inside a Wave Function

Pretty soon after DeBroglie first proposed Pilot-Waves, the revered mathematician John Von Neumann published a proof showing that a hidden-variable explanation for the wave function just couldn’t work. That proclamation contributed to the long shelving of Pilot-Wave Theory. In fact Von Neumann didn’t get the full answer: it turns out that the restriction against hidden variables only applies to local hidden variables. So, there can’t be extra hidden information about a specific region of the wave function that the rest of the wave function doesn’t know. This was figured out pretty soon after Von Neumann’s paper by the German mathematician Grete Hermann, although her refutation wasn’t noticed until it was re-derived by John Bell in the 1960s. This helped the resuscitation of Pilot-Wave Theory, because Bohmian Mechanics doesn’t use local hidden variables; its hidden variables are global. The entire wave function knows the location, velocity and spin of each particle. This non-locality is another weird thing you have to believe in order to accept Pilot-Waves.

Grete Hermann | John Bell

Not only does the entire wave function know the properties of the particle, but the entire wave function can be affected instantaneously. So a measurement at one point in the wave function will affect its shape elsewhere. This can therefore affect the trajectories and properties of the particles carried by that wave, potentially very far away. Quantum entanglement experiments show this sort of “spooky action at a distance” is a very real phenomenon. Again, I’ve gone into the non-locality of entangled particles in detail before. Also worth a look. It’s a tough idea to swallow, but experiments indicate that some type of non-locality is real, whether or not we accept Pilot-Waves. It would be remiss of me to talk about Pilot-Waves without mentioning the amazing analogy that was discovered in bouncing droplets on a vibrating bath of oil.

Quantum Phenomena on the Macroscopic Level

This is pretty amazing: we see many of the familiar quantum phenomena appear in this macroscopic system, with suspended oil droplets each following its own pilot wave. Now we shouldn’t take a macroscopic analogy as proof of microscopic reality. But it does demonstrate that this sort of thing does happen in this universe, at least on some scales. I should also add that DeBroglie-Bohm Pilot-Wave theory is certainly not the final word, and I don’t think anyone could deny that, because it doesn’t account for relativity, either special or general. That means that at best it’s incomplete; regular quantum mechanics has its relativistic version in Quantum Field Theory, while Pilot-Wave theory hasn’t quite got there yet. Quantum Field Theory pretty explicitly requires that all possible particle trajectories be considered equally real. Pilot-Wave theory postulates that the particle really takes a single actual trajectory, the Bohm trajectory. This is not consistent with quantum field theory, and so there isn’t a complete relativistic formulation of Bohmian Mechanics yet. But there is good effort in that direction. Now let’s not even start talking about gravity, as no version of Quantum Mechanics has that sorted out.
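To make the “single actual trajectory” picture concrete, here’s a toy numerical sketch (my own illustration, with arbitrary units and made-up parameters): a free 1D wave function is evolved exactly in Fourier space, and an ensemble of particle positions is carried along by the guiding velocity field v = (ℏ/m) Im(ψ′/ψ). With two overlapping Gaussian packets, the trajectories bunch into interference fringes, Bohm-style.

```python
import numpy as np

# hbar = m = 1; grid, packets and step sizes are all illustrative.
N, L = 2048, 80.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)

def packet(x0, v0, sigma=1.0):
    # Gaussian wave packet centred at x0 with mean velocity v0.
    return np.exp(-(x - x0)**2/(4*sigma**2) + 1j*v0*x)

psi = packet(-8.0, +2.0) + packet(+8.0, -2.0)    # two packets heading inward
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L/N))

particles = np.linspace(-10.0, 10.0, 21)         # initial Bohm positions
dt, steps = 0.002, 2000
for _ in range(steps):
    # Exact free-particle evolution in Fourier space: psi_k *= exp(-i k^2 dt / 2).
    psi = np.fft.ifft(np.exp(-0.5j*k**2*dt) * np.fft.fft(psi))
    # Guiding equation: v = Im(psi'/psi), via a spectral derivative.
    dpsi = np.fft.ifft(1j*k*np.fft.fft(psi))
    v = np.imag(dpsi*np.conj(psi)) / (np.abs(psi)**2 + 1e-12)
    particles += dt*np.interp(particles, x, v)

print(np.sort(particles))  # positions have bunched into interference fringes
```

Each run with the same initial positions gives the same trajectories: the apparent randomness would only come from not knowing the starting positions exactly.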
Also, we can’t ignore the fact that the initial motivation behind Pilot-Wave theory was to preserve the idea of real particles. We need to be wary of that sort of classical bias. All that said, Pilot-Wave theory does do something remarkable. It shows us that it’s possible to have a consistent interpretation of Quantum Mechanics that is both physical and deterministic, no hoo-ha needed. Maybe something like Pilot-Waves really does drive the microscopic mechanics of space-time.

Particle(s) Taking Bohm Trajectory While Surfing Pilot Waves
Svenskt kvinnobiografiskt lexikon

Inga Margrete Fischer-Hjalmars
Chemist, physicist

Inga Fischer-Hjalmars was a Swedish chemist and theoretical physicist. She was the first woman to be appointed professor of theoretical physics in Sweden.

Inga Fischer-Hjalmars was born in Stockholm in 1918. Both of her parents were well-educated but despite this the family suffered periodically from financial difficulties. This limited Inga Fischer-Hjalmars’ opportunities for higher education. She was able to gain her school-leaving certificate after receiving financial support from her relatives in Denmark. As she wanted to continue into further education she had to choose the shortest scientific programme that was available, namely the two-year pharmaceutical course. Pharmaceutical sciences proved to be a clever choice as it led to good employment prospects. This enabled Inga Fischer-Hjalmars to continue studying chemistry whilst she worked. Her intention in studying chemistry was to become a teacher but during her studies she was exposed to different sections of the chemical research environment in Stockholm and this led her onto the scientific path. Whilst studying chemistry Inga Fischer-Hjalmars worked as an assistant for the biochemist and Nobel Prize winner Hans von Euler-Chelpin. In his laboratory some of the work she participated in included quintessential studies on the differences in cell nuclei between healthy and cancerous cells. A large part of Inga Fischer-Hjalmars’ early scientific education came from another research environment, namely from within the group led by Nils Löfgren. This group had discovered the local anaesthetic Xylocain in the early 1940s. Inga Fischer-Hjalmars played a central role in the early development of Xylocain and contributed to a deeper understanding of the physiological functions of the medicine. During Inga Fischer-Hjalmars’ period as part of Nils Löfgren’s group she developed an interest in chemistry and particularly in its physical origins. Her experimental work in the second half of the 1940s mainly involved different forms of interaction between molecules. In order to deepen her knowledge of the underlying principles of chemistry she continued her physics studies during this period and came to know Oskar Klein, the theoretical physicist. Inga Fischer-Hjalmars became a close friend of Klein and of his family. Oskar Klein piqued Inga Fischer-Hjalmars’ interest in the possible application of quantum mechanics to chemistry and in what was then the relatively young scientific field of quantum chemistry. Arne Tiselius, a Nobel Prize winner and member of the research council, noted Inga Fischer-Hjalmars’ talent and understood that her interest in quantum chemistry was hardly shared in Sweden towards the end of the 1940s. Encouraged by Tiselius, Inga Fischer-Hjalmars travelled to Paris in the spring of 1948 in order to participate in a conference. She attended the conference in the hopes of finding a mentor and someone who could teach her quantum chemistry and its methodology. The conference was attended by a range of prominent researchers in the area and in the end the individual who became her mentor was an Englishman called Charles Coulson. Inga Fischer-Hjalmars spent the winter of 1948–1949 with Coulson at King’s College in London. Charles Coulson’s view of theoretical chemistry and its role within science influenced Inga Fischer-Hjalmars’ future efforts in the field.
Coulson believed that the role of theoretical chemistry was not just to reduce physical chemistry to quantum mechanics, but that an equally important role for the new field was its application to questions of pure chemistry, thus contributing to increased understanding within all areas of chemistry. In 1952 Inga Fischer-Hjalmars defended her thesis entitled Studies of the hydrogen bond and the ortho-effect. This was an unusual thesis in that it contained both theoretical and experimental work. That same year Inga Fischer-Hjalmars married Stig Hjalmars. The core issue of quantum chemistry is to find solutions to the Schrödinger equation for molecules. Much of Inga Fischer-Hjalmars’ early work comprised critical analyses of the contemporary methods which existed to achieve that end. She often investigated what limitations there were in the methodology and worked consistently to overcome them. As a theoretician she did not just work at a conceptual level, but often made close comparisons with experimental results for the molecules which she studied. Inga Fischer-Hjalmars quickly became internationally renowned within theoretical chemistry and her fellow research contemporaries often made use of her work, even just after she had obtained her doctorate. In 1963 Inga Fischer-Hjalmars succeeded her former teacher and mentor Oskar Klein and thus became the first woman in Sweden to be appointed professor of theoretical physics. Given her background as a pharmacist, and considering that her main research interest lay in chemical rather than physical matters, this was a truly remarkable appointment. Towards the end of the 1960s Inga Fischer-Hjalmars began to take an interest in the precision of contemporary semi-empirical variants of the molecular orbital method and the approximations which lay at its basis. This work led, in the 1970s, to the development of improved semi-empirical methods which allowed for quantum mechanical computations on more complex molecules and heavier atoms, such as transition metals. The latter part of Inga Fischer-Hjalmars’ research was largely devoted to studies regarding the binding and electron structure of biomolecules. In 1985 Inga Fischer-Hjalmars was awarded the International Society of Quantum Biology Award for her quantum chemical studies of biomolecules. During the 1970s Inga Fischer-Hjalmars also contributed to the semi-classical distribution theory which was applied to simple chemical reactions. During the 1980s she worked with her husband, Stig Hjalmars, on solid-state physics and also investigated the continuum description of crystals. Inga Fischer-Hjalmars was very active in human rights and in particular in the freedom and rights of researchers in the Soviet Union. She was supportive of the so-called refuseniks, that is, individuals in the Soviet Union who were not allowed to emigrate or even leave their home state temporarily, thus hampering these researchers’ opportunities to be part of the international developments within their research field. During the 1980s Inga Fischer-Hjalmars served as the chair of the Committee on Free Circulation of Scientists, a committee which aimed to protect the freedom of movement of researchers who were members of the International Council of Scientific Unions (ICSU). As chair she brought the attention of the international community to the problems of the Soviet Union and at every given opportunity she applied pressure to the Soviet academy of sciences.
In addition to this public engagement Inga Fischer-Hjalmars was also active in unofficial efforts to help improve conditions for Soviet colleagues. She undertook many trips to the Soviet Union with the intention of exchanging knowledge and experience with the isolated researchers there, and to supply them with material resources and medicines. For these activities Inga Fischer-Hjalmars was awarded the Heinz R. Pagels Human Rights of Scientists Award by the New York Academy of Sciences in 1990. She was also one of the founders of the Svenska Helsingforskommittén för mänskliga rättigheter (Swedish Helsinki committee for human rights). In 1978 Inga Fischer-Hjalmars was elected into the Kungliga Vetenskapsakademien (Royal Swedish Academy of Sciences) and served as its vice-president from 1982 to 1985. She was elected into the International Academy of Quantum Molecular Science in 1983. Inga Fischer-Hjalmars, along with Per-Olov Löwdin in Uppsala, was one of the founders of the Swedish school of theoretical chemistry. She educated, both directly and indirectly, an entire generation of theoretical chemists in Sweden, including Björn Roos, Per Siegbahn, and Margareta Blomberg. Inga Fischer-Hjalmars died in 2008. She was then 90 years old. The Svenska Kemisamfundet (Swedish Chemical Society) has since 2010 made an annual award, the Inga Fischer-Hjalmars Award, to the most promising doctoral thesis in theoretical chemistry.

A. Johannes Johansson (Translated by Alexia Grosjean)
Published 2018-03-08

Inga Margrete Fischer-Hjalmars, Svenskt kvinnobiografiskt lexikon (article by A. Johannes Johansson), retrieved 2020-03-30.

Other Names
• Maiden name: Fischer

Family Relationships
• Civil Status: Widow
• Mother: Karen-Beate Fischer, née Wulff
• Father: Otto Fabricius Fischer
• Brother: Hans Christian Fischer

Education
• Läroverk (secondary school), Stockholm: Studentexamen (school-leaving certificate)
• Vocational training, Stockholm: Pharmacist
• Higher education, Stockholm: Fil.mag. degree, Stockholms högskola

Professions
• Pharmacist
• Laborator, Kungliga Tekniska Högskolan (KTH)
• Professor, theoretical physics, Stockholm University

Mentors
• Hans von Euler-Chelpin
• Nils Löfgren
• Oskar Klein
• Charles Coulson

Organisations
• Kungliga Vetenskapsakademien: member, vice-president 1982–1985
• The Royal Danish Academy of Sciences and Letters
• Committee on Free Circulation of Scientists, part of the International Council of Scientific Unions (ICSU): member, chair 1982–1992

Places
• Birthplace: Stockholm
• Stockholm
• Place of death: Lidingö

Unpublished sources
• The author’s interviews with Gunnar Widmark, Anders Ehrenberg, Hedi Fried, Olof G Tandberg, Anita Enflo, Helene Guste-Fischer, Maria Leissner

Further References
• Roos, Björn, ‘Inga Fischer-Hjalmars’, Theoretica Chimica Acta: An International Journal of Theoretical Chemistry, 1994:87, pp. 243–245
Bound State

In quantum physics, a bound state is a quantum state of a particle subject to a potential such that the particle has a tendency to remain localized in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative-energy states must be bound. In general, the energy spectrum of the set of bound states is discrete, unlike free particles, which have a continuous spectrum. Although not bound states in the strict sense, metastable states with a net positive interaction energy, but long decay time, are often considered unstable bound states as well and are called "quasi-bound states".[1] Examples include certain radionuclides and electrets.

Definition

Let $H$ be a complex separable Hilbert space, $U = \{U(t) : t \in \mathbb{R}\}$ a one-parameter group of unitary operators on $H$, and $\rho = \rho(t_0)$ a statistical operator on $H$. Let $A$ be an observable on $H$ and $\mu(A, \rho)$ the induced probability distribution of $A$ with respect to $\rho$ on the Borel $\sigma$-algebra of $\mathbb{R}$. Then the evolution of $\rho$ induced by $U$ is bound with respect to $A$ if

$$\lim_{R \to \infty} \sup_{t \ge t_0} \mu\big(A, \rho(t)\big)\big(\mathbb{R}_{>R}\big) = 0, \qquad \mathbb{R}_{>R} = \{x \in \mathbb{R} : x > R\}.$$

• If the state evolution of $\rho$ "moves this wave packet constantly to the right", so that for every $R$ the distribution eventually lies beyond $R$, then $\rho$ is not a bound state with respect to position.
• More generally: if the state evolution of $\rho$ "just moves $\rho$ inside a bounded domain", then $\rho$ is bound with respect to position.

Let $A$ have measure-space codomain $(X, \mu)$. A quantum particle is in a bound state if it is never found "too far away from any finite region $R \subseteq X$," i.e. using a wavefunction representation,

$$0 = \lim_{R \to \infty} \mathbb{P}\big(\text{particle measured outside } R\big) = \lim_{R \to \infty} \int_{X \setminus R} |\psi(x)|^2 \, d\mu(x).$$

Position-bound states

Consider the one-particle Schrödinger equation. If a state has energy $E$ below the limiting values of the potential at spatial infinity, then the wavefunction $\psi$ satisfies, for sufficiently large $|x|$,

$$\frac{\psi''}{\psi} = \frac{2m}{\hbar^2}\,\big(V(x) - E\big) > 0,$$

so that $\psi$ is exponentially suppressed at large $x$. Hence, negative-energy states are bound if $V$ vanishes at infinity.

A boson with mass $m_\chi$ mediating a weakly coupled interaction produces a Yukawa-like interaction potential,

$$V(r) = \pm \frac{\alpha_\chi}{r}\, e^{-r/\bar\lambda_\chi},$$

where $\alpha_\chi = g^2/(4\pi)$, $g$ is the gauge coupling constant, and $\bar\lambda_i = \hbar/(m_i c)$ is the reduced Compton wavelength. A scalar boson produces a universally attractive potential, whereas a vector attracts particles to antiparticles but repels like pairs. For two particles of mass $m_1$ and $m_2$, the Bohr radius of the system becomes

$$a_0 = \frac{\bar\lambda_1 + \bar\lambda_2}{\alpha_\chi}$$

and yields the dimensionless number

$$D = \frac{\bar\lambda_\chi}{a_0} = \alpha_\chi\,\frac{\bar\lambda_\chi}{\bar\lambda_1 + \bar\lambda_2} = \frac{\alpha_\chi}{m_\chi}\,\frac{m_1 m_2}{m_1 + m_2}.$$

In order for the first bound state to exist at all, $D \gtrsim 0.8$. Because the photon is massless, $D$ is infinite for electromagnetism. For the weak interaction, the Z boson's mass is about 91.19 GeV/c², which prevents the formation of bound states between most particles, as it is roughly 97 times the proton's mass and some 178,000 times the electron's mass. Note however that if the Higgs interaction didn't break electroweak symmetry at the electroweak scale, then the SU(2) weak interaction would become confining.[7]

References

1. ^ Sakurai, Jun (1995). "7.8". In Tuan, San (ed.). Modern Quantum Mechanics (Revised ed.). Reading, Mass: Addison-Wesley. pp. 418-9. ISBN 0-201-53929-2. Suppose the barrier were infinitely high ... we expect bound states, with energy E > 0. ... They are stationary states with infinite lifetime.
In the more realistic case of a finite barrier, the particle can be trapped inside, but it cannot be trapped forever. Such a trapped state has a finite lifetime due to quantum-mechanical tunneling. ... Let us call such a state quasi-bound state because it would be an honest bound state if the barrier were infinitely high.
2. ^ K. Winkler; G. Thalhammer; F. Lang; R. Grimm; J. H. Denschlag; A. J. Daley; A. Kantian; H. P. Buchler; P. Zoller (2006). "Repulsively bound atom pairs in an optical lattice". Nature. 441 (7095): 853-856. arXiv:cond-mat/0605196. Bibcode:2006Natur.441..853W. doi:10.1038/nature04918. PMID 16778884.
3. ^ Javanainen, Juha; Odong Otim; Sanders, Jerome C. (Apr 2010). "Dimer of two bosons in a one-dimensional optical lattice". Phys. Rev. A. 81 (4): 043609. arXiv:1004.5118. Bibcode:2010PhRvA..81d3609J. doi:10.1103/PhysRevA.81.043609.
4. ^ M. Valiente & D. Petrosyan (2008). "Two-particle states in the Hubbard model". J. Phys. B: At. Mol. Opt. Phys. 41 (16): 161002. arXiv:0805.1812. Bibcode:2008JPhB...41p1002V. doi:10.1088/0953-4075/41/16/161002.
5. ^ Max T. C. Wong & C. K. Law (May 2011). "Two-polariton bound states in the Jaynes-Cummings-Hubbard model". Phys. Rev. A. American Physical Society. 83 (5): 055802. arXiv:1101.1366. Bibcode:2011PhRvA..83e5802W. doi:10.1103/PhysRevA.83.055802.
7. ^ Claudson, M.; Farhi, E.; Jaffe, R. L. (1 August 1986). "Strongly coupled standard model". Physical Review D. 34 (3): 873-887. doi:10.1103/PhysRevD.34.873.
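As a concrete illustration of the discrete bound-state spectrum: for a finite square well of depth V0 (V(x) = -V0 for |x| < L/2, zero outside), the even- and odd-parity bound-state energies solve the standard matching conditions k tan(kL/2) = κ and k cot(kL/2) = -κ, with k = √(2m(E+V0))/ℏ and κ = √(-2mE)/ℏ. The sketch below recovers the finite, discrete set of levels numerically; the well depth and width are illustrative choices, not tied to any cited system.

```python
import numpy as np
from scipy.optimize import brentq

hbar = m = 1.0
V0, L = 20.0, 2.0   # illustrative well depth and width

def k_in(E):
    return np.sqrt(2*m*(E + V0))/hbar    # wavenumber inside the well

def kappa(E):
    return np.sqrt(-2*m*E)/hbar          # decay rate outside the well

def even_cond(E):
    return k_in(E)*np.tan(k_in(E)*L/2) - kappa(E)

def odd_cond(E):
    return k_in(E)/np.tan(k_in(E)*L/2) + kappa(E)

def levels(cond, samples=20000):
    Es = np.linspace(-V0 + 1e-9, -1e-9, samples)
    vals = np.array([cond(E) for E in Es])
    roots = []
    for i in range(samples - 1):
        a, b = vals[i], vals[i + 1]
        # keep genuine zero crossings, skip the sign flips at tan() poles
        if np.isfinite(a) and np.isfinite(b) and a*b < 0 and abs(a) + abs(b) < 1e3:
            roots.append(brentq(cond, Es[i], Es[i + 1]))
    return roots

print("even-parity levels:", levels(even_cond))
print("odd-parity levels: ", levels(odd_cond))
```

Only a handful of negative energies satisfy the conditions, while every E > 0 belongs to the continuous scattering spectrum, matching the discrete-versus-continuous distinction drawn above.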
Gravitational Time-Dilation
Robert A. Herrmann
Revised 26 JAN 2008 and 6 JUN 2012.

In modern physics, "time" is considered as an actual physical primitive. As such its properties are only operationally represented. That is, the properties of (physical or observer) time are relative to its measure and the instrumentation used to make such measurements. In this article, the notion of time-dilation is generally restricted to "local" time-dilation. From this operational approach, the notion of time-dilation needs to be restricted to the specific physical objects used to measure time, generally known as clocks, and, in particular, the constituents of specific clocks. A major question that needs to be investigated relative to clock constituents is "how" gravitational fields might affect these constituents and yield the actual experimental evidence that indirectly verifies time-dilation, and whether all constituent physical behavior is affected, in this time-dilation manner, by a gravitational field. That is, does it actually apply to all physical objects that can be considered as clocks? In this paper, I use my own analysis as well as a considerable amount of analysis that appears in the referenced literature. In section 3, experimental evidence is discussed that indirectly verifies a predicted physical mechanism for certain modes of time-dilation.

1. The Equivalence Principle.

An Equivalence Principle appears to be necessary in every General Theory of Relativity (GR) derivation for expressions that predict gravitational effects attributed to "(local) time-dilation." The question is: which Equivalence Principle? Einstein first stated his equivalence principle as follows:

[Let K' be a system of reference such that] relative to K' a mass sufficiently distant from other masses has an accelerated motion such that its acceleration and direction of acceleration are independent of its material composition and physical state. Does this permit an observer at rest relative to K' to draw the conclusion that he is on a "really" accelerated system of reference? The answer is negative; for the above mentioned behavior of freely moving masses relative to K' may be interpreted equally well in the following way. The system of reference K' is unaccelerated, but the space region being considered is under the sway of a gravitational field, which generates the accelerated motion of the bodies relative to K'. (Ohanian and Ruffini, 1994, p. 53)

However, Fock writes:

As was mentioned, Einstein considered that from the point of view of the Principle of Equivalence it is impossible to speak of absolute acceleration just as it is impossible to speak of absolute velocity. We consider this conclusion of Einstein's to be erroneous . . . (1959, p. 208)

Associated with this Einstein description is the famous "elevator" illustration. However, Fock gives an example that uses a rotating non-infinitesimal (i.e. non-local) physical structure, and this example explicitly contradicts the above Einstein statement. It seems that under most conditions experienced within our universe such effects, for macroscopic entities, can be differentiated one from another. This led Fock to modify, at the least, Einstein's original hypothesis so as not to forget "that the nature of equivalence of fields of acceleration and of gravitation is strictly local." (Fock, 1959, p. 369) [Also see pages 206-210 of Fock's text for an extensive and excellent analysis of this concept.]
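It may help to recall the elementary calculation that links Einstein's elevator to clock rates, since it is the seed of every "time-dilation" derivation discussed below. This is the standard textbook argument, in the style of the Ohanian and Ruffini derivations cited later, not a result special to this paper: in a rocket of height h accelerating at a, light emitted at the tail reaches the nose after a time of approximately h/c, by which time the nose has gained speed Δv = ah/c, so the first-order Doppler shift is

$$\frac{\Delta\nu}{\nu} \approx -\frac{\Delta v}{c} = -\frac{a h}{c^{2}}.$$

A Galileo-type equivalence principle, applied to this strictly local situation, then converts the result into the gravitational red-shift Δν/ν = -gh/c², that is, into a difference in clock rates between the two heights.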
This is not the last word on this principle, however, for what would be needed is an actual physical experiment that would also demonstrate Einstein's error. The mathematical model chosen to model the Einstein theory of gravity uses the infinitesimal calculus. The methods for corresponding physical behavior to the mathematical structure were specifically ignored. One needs to approximate physical infinitesimal measures before such a structure can be considered as a meaningful mathematical model. Gravitational tidal effects are the force-like effects that a gravitational field has upon physical objects. Because of the existence of a special instrument called a gravity gradiometer, an instrument that can measure the local differences in the tidal effects, or what are termed the tidal fields, the above Einstein equivalence principle can be experimentally falsified. Further, the gravity gradiometer instrument can be reduced to a comparatively small size, and this reduction in size represents an approximate physical "infinitesimalizing process." Apparently, as this instrument is reduced in size, it is less likely to measure differences between gravitational tidal forces and the tidal forces associated with pure acceleration. One may be able to determine by further analysis a better way to state such an equivalence principle.

For this theory, the term point particles intuitively refers to physical entities that are small in size or massless. Further, such entities are, usually, restricted to behavior within "small" spacetime intervals (i.e. the notion of local) and are assumed not to have a significant gravitational field themselves, so that they do not measurably influence the "stronger" gravitational field being investigated. It is the mathematical "differential" model that, by a special type of summation of these effects, allows one to describe the behavior of point particles over macroscopic or large-scale spacetime. The following is a general description relative to all possible gravitational fields within our universe.

Local experiments can distinguish between a reference frame in free fall in a gravitational field and a truly inertial reference frame placed far away from all gravitational fields. Local experiments can distinguish between a reference frame at rest in a gravitational field and an accelerated reference frame far away from all gravitational fields. Gravitational effects are not equivalent to the effects arising from an observer's acceleration . . . . Gravitation and acceleration are only equivalent as far as the [local] translational motion of point particles is concerned (this amounts to what we call the Galileo principle of equivalence, sometimes also called the weak principle of equivalence). If rotational degrees of freedom of the motion of masses are taken into consideration, then the equivalence fails. (Ohanian and Ruffini, 1994, p. 53)

This last statement assumes that one is not considering a "perfectly homogeneous gravitational field" (Ohanian and Ruffini, 1994, p. 53), which, using the usual secular methods, is not known to exist since the formation of our basic "material" universe. With respect to rotation, Ohanian and Ruffini show specifically that it follows from GR that even ". . . Galileo's principle of equivalence fails for spinning particles." (Ohanian and Ruffini, 1994, p. 419) Actually, it might be better to state that the principle "approximately" fails, in this case, since one can state a certain degree of failure with respect to "free-fall" motion.
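To make the gradiometer point concrete, the following back-of-the-envelope sketch (my own illustration, using standard Newtonian values for the Earth; it is not taken from the referenced texts) shows how the tidal difference across a gradiometer baseline shrinks in direct proportion to the baseline, which is the approximate "infinitesimalizing process" described above:

# Illustrative only: Newtonian tidal difference across a radial baseline.
# A uniformly accelerated frame shows no such difference, so a sufficiently
# sensitive gradiometer can distinguish the two cases -- until the baseline
# (and hence delta g) becomes too small to resolve.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24       # mass of the Earth, kg
r = 6.371e6        # distance from the Earth's center, m

def tidal_difference(baseline_m):
    """Difference in g between the two ends of a radial baseline (m/s^2)."""
    return G * M / r**2 - G * M / (r + baseline_m)**2

for L in (1.0, 0.1, 0.01, 0.001):
    print("baseline %7.3f m -> delta g = %.3e m/s^2" % (L, tidal_difference(L)))

At the Earth's surface the difference is about 3×10⁻⁶ m/s² per meter of baseline, so shrinking the instrument steadily approaches the infinitesimal limit in which no difference is measurable.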
Hence, from what is stated above, to apply an Equivalence Principle in an exact manner locally, the principle would be the Galileo principle applied to the non-rotational behavior of single objects that approximate point-like particles. Such objects exist in the molecular, atomic and subatomic regions, or, when applicable, they can be the so-called massless "particles" such as photons. Obviously, this does not mean that specific time-dilation effects do not occur for collections of such constituents and, indeed, this does appear to be the case for experiments involving such gravity-produced effects, when such local effects are properly extended to non-local regions. This includes non-point-like rotations, since such behavior is modeled by considering it as "directed" local (infinitesimal) linear behavior.

Relative to derivations for the gravitational effects attributed to time-dilation, we are told that ". . . it [gravitational field] exerts an indirect effect on the relative rates of clocks at different positions." (Ohanian and Ruffini, 1994, p. 166) But after the clock rate derivation given from pages 167 through 171, we have "Note the crucial role played by the equivalence principle in several stages of the above arguments." (Ohanian and Ruffini, 1994, p. 171) "And note that the radar-ranging [macroscopic and large-scale] procedure implicitly relies upon the equivalence principle: in the local inertial reference frame of the freely falling clock, light propagates as in the absence of the gravitational field, at its standard speed." (Ohanian and Ruffini, 1994, p. 169) After this derivation, there is the Einstein styled energy derivation relative to photons on page 186 of Ohanian and Ruffini (1994) and, yet, another derivation based upon the Galileo equivalence principle on page 187 in terms of "clocks." Note that the equivalence of what is called inertial mass and gravitational mass, which is sometimes called Newton's equivalence principle, leads, using Newtonian approximation, only to the Galileo equivalence principle (Ohanian and Ruffini, 1994, pp. 21-24), assuming the objects have the same initial velocity.

2. The Model Theoretic Error of Generalization

One of the basic logical errors within mathematical modeling is called the model theoretic error of generalization. This has been known to be an error in scientific discourse for many years. For example, in 1850, John Stuart Mill mentions this error from a purely philosophical viewpoint (Mill, 1963-1992, p. 785). The error is very simple to state. A derivation that a specific physical event will occur under certain circumstances uses a fixed language and, usually, scientific logic. From the viewpoint of pure formal logic, this language corresponds to predicates of one form or another that are restricted to certain domains. A model theoretic error of generalization occurs when one claims that the predicted events now hold for other domains, where no derivation is given with respect to these other domains. The above Einstein description, his Equivalence Principle, is an explicit example of such an error being made. There are various "derivations" for gravitational effects attributed to dilation of "clock" rates using the Einstein concept of equivalence. These so-called derivations do not even define specifically the term "clock" except to say that it uses some type of process to "measure" this quantity.
However, as discussed above, the only correct derivations using such a general term as "clock" appear to require an equivalence principle restricted to the Galileo principle. Hence, unless one can give a derivation for gravitational effects attributed to time-dilation that does not involve the Galileo equivalence principle, which applies to approximating point particles, then claiming that such gravitational effects are attributable to time-dilation and hold for "all" clocks, macroscopic, large-scale and atomic, would appear to be a specific example of the model theoretic error of generalization within a physical theory.

There are, of course, significant examples within mathematical modeling. Suppose that you model a specific measure only using the natural numbers greater than 0. Then, assuming that your model does predict physical behavior, you could truly state that any set of such measures would contain a smallest measure. [Note that if your model uses decimals with a fixed number of significant digits, these measures can be considered as modeled by the natural numbers.] If you then state that these measures are but approximations for measures that are actually being modeled by the real numbers, you cannot say that any such set of measures has a smallest member. You cannot extend this simple property of the natural number "ordering" to the ordering of the real numbers, where it does not hold.

To further establish that such gravitational effects attributed to time-dilation should first be restricted to atomic, subatomic or massless objects, there is in Herrmann (1994, pp. 73-75) and Herrmann (1995) a derivation using the time-dependent Schrödinger equation. The differential is often used to justify a type of summation of physical effects through the use of the integral. As discussed later in this article, this process must be carefully considered for the effects of gravitational time-dilation. If one conceives of a macroscopic device as composed of coupled atomic entities that are influenced by gravitational time-dilation, one might assume that, as a coupled group, the entire macroscopic entity viewed as a single object would be so influenced. However, such a macroscopic device considered as composed of such a coupled group would not satisfy the Galileo equivalence principle except very approximately. The only possible and exact Galileo equivalence principle effect would be "derivable effects" associated with each individual point-like particle or the totality of these effects as produced by each individual particle.

3. Location of Some Gravitational Time-Dilation Effects

Using special techniques, it is possible to view our universe from a Euclidean-type "medium." As viewed from that medium, our universe seems to behave as if it "likes" local linear paths, and any deviation from such paths requires a "price to be paid," so to speak. Additionally, within the medium, "light" paths are only linear in a very local sense. How they deviate from such linear paths in a global sense is what leads to the Special Theory of Relativity. For the General Theory, where the notion of "mass" and a possible non-zero cosmological constant are adjoined, the General Theory deviations physically require types of "forces" that are measured in terms of "accelerations." Locally, a gravitational field and an acceleration field yield identical results. The only difference appears to be causal. The following discussion is related to a gravitational cause, but applies equally well to the notion of an acceleration field.
The time-dilation notions considered here are couched in terms of gravitational acceleration. For these gravitational effects, it is sometimes difficult to determine which gravitational effects are caused by what has been termed time-dilation. One problem for certain physical objects such as photons is whether the dilation is a local or global effect. From the material in section 1, local time-dilation effects, the basic type being considered here, are associated with the Galileo equivalence principle and appear to be associated with entities of a small spacetime "size." [Most often the "time" is actually "fixed" and the interval only relates to a spatial concept.] However, it is the derivation that specific gravitational effects should apply to a particular physical scenario that tends to indicate which effects are associated with measurable alterations that are customarily termed "time-dilation." Once one has made a determination relative to such gravitationally produced alterations, there is an additional question that may be difficult to answer for all scenarios. Is the gravitational field producing such effects independent from the machines used to observe the effect, or is the gravitational field simply altering the machines that observe a specific physical scenario while the field is not actually altering the physical scenario as predicted by a time-dilation derivation? The derivations for such time-dilation as they appear in the references imply that time-dilation is but alterations in measuring instruments.

As discussed in section 5, the use of the infinitesimal light-clock concept, which in the non-infinitesimal state was first introduced by Einstein, and what follows in this section will aid in determining what are (local) time-dilation effects and exactly "where" the effects take place in spacetime. As discussed in Herrmann (1994), and to avoid the model theoretic error of generalization, such a device must be one that approximates an infinitesimal light-clock or an equivalent device. It is alterations in the behavior of such light-clocks that yield actual physical causes, distinct from the gravitational field itself, for local alterations in other measuring devices as well as alterations in other physical behavior.

Using the statement in Bergmann (1976, p. 222), one assumes that "emission" from an atomic structure occurs when a particle is momentarily "at rest" in the gravitational field. The method illustrated in Herrmann (1994, pp. 73-75) and Herrmann (1995) using the time-dependent Schrödinger equation can be applied trivially to the Schrödinger equations that model more complex atomic structures, assuming that if one has an explicit solution, then it can be expressed in a factored time-function form. This form has always been the case for all verified applications of the separated operator method. For such systems, the derivation yields that the changes in the total energy at a "specific gravitational potential," when viewed from the medium, are altered when compared with energy changes in our physical world where the gravitational potential is ignored. This "dilation" is expressed by the equation

ΔE^m = γ·ΔE^s,

where γ = (1 − 2GM/(rc²))^(1/2) is the usual GR factor for an electrically neutral, non-rotating, centrally symmetric and homogeneous spherical object (a Schwarzschild configuration).
Dividing this equation by Planck's constant, this result can be written in terms of any associated change in frequency ν (the Greek nu),

ν^m = γ·ν^s,

which for emissions of electromagnetic radiation yields the exact GR prediction for the observed "gravitational redshift." In this case, the frequency with the "s" superscript is the laboratory measured frequency where gravity is ignored. The Special Theory chronotopic interval is used as an important requirement in the derivation, since locally the General Theory is "infinitely close" to the Special Theory. (A similar expression can be obtained for a second location, and a quotient compares the two frequencies.) The frequency with the "m" superscript is the observed or "created," or altered, frequency that occurs at a radial position "r" units from the center of, but exterior to, a Schwarzschild configuration, and as viewed from a medium known as the NSPPM-field. In this case, this is equivalent to viewing the effects of the frequency alteration where there is no measurable gravitational field at a great distance from the gravitational center. The usual approach is to consider the "s" or standard frequency as measured in a very weak gravitational field, while the observed altered frequency "m" occurs in a very strong field. In this case, this expression is the same as the one that appears in Bergmann (1976, p. 222). The Bergmann derivation is obtained via time-dilation. The Schrödinger equation approach verifies that an alteration takes place within the atomic structure. This is, at present, a statement for a total energy change that depends upon the values of M and r. This would lead to the appropriate adjustment in the energy levels for the constituents. Depending upon how a gravitational field actually interacts with a quantum physical structure, this could be but an acceptable approximation.

On the other hand, one can suppose that the gravitational field is really an example of a pure primitive continuous field that appeared at the same moment that all of the other physical aspects of the universe appeared, and that any law of gravity that incorporates such parameters simply displays relations that would exist between the measured numbers M and r, relations that allow the effects of the field to be measured and behavior predicted. From the quantum-physical viewpoint, a particle "free in space" would require a type of "continuous" alteration. This would make a field particle such as the proposed graviton but a convenient fiction used to comprehend somewhat the interaction of this field with atomic structures. Personally, I don't believe that there is a "correct" humanly comprehensible solution to this "interaction" problem, with the exception of the purely operational approach used in properton theory, where no such interaction notions are required.

To obtain the more exact frequency variation statement for comparison purposes, relative to Schwarzschild gravitational potentials, one would simply consider the ratio expression derived from two of the above frequency statements. The result, in this case, is then the exact same statement as appears in Lawden (1982, p. 154). For other physical configurations, such as one where we drop the non-rotating or electrically neutral character of the Schwarzschild configuration, and others, the only difference in the above frequency equation is how the γ is expressed.
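As a simple numerical illustration of the magnitude of this factor (my own sketch, using standard constants; it is not part of the derivations above), consider γ evaluated at the surface of the Sun:

import math

# Illustrative only: the Schwarzschild factor gamma = (1 - 2GM/(r c^2))^(1/2)
# evaluated at the solar surface.
G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m

gamma = math.sqrt(1.0 - 2.0 * G * M_sun / (R_sun * c * c))
print("gamma at the solar surface      : %.9f" % gamma)
print("fractional redshift (1 - gamma) : %.3e" % (1.0 - gamma))   # ~2.1e-6

The fractional shift of about 2×10⁻⁶ is the order of magnitude of the gravitational redshift observed for light emitted from the solar surface.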
My derivation of this gravitational redshift expression is obtained using changes in the behavior of infinitesimal light-clocks, as further mentioned in the last section of this article. Of significance is that the Schrödinger equation derivation in Herrmann (1995) was for alterations in the total energy E^s, not just the total energy changes. The reason it is restricted in the derivation to the emission of electromagnetic radiation follows from the notion of "momentarily at rest." As mentioned in Herrmann (1994), the same derivation applies to other atomic structures, among others, relative to the total energy. Then, depending upon the structure, one needs to investigate whether only one or more aspects of the total energy is being affected by the gravitational potential.

One of the first laboratory experiments relative to a test of this frequency relation was done in 1960 by T. E. Cranshaw, J. P. Schiffer, and A. B. Whitehead. Although the Schrödinger equation derivation was not available at that time, and the light-clock methods used for such derivations demonstrate that the results are more likely attributable to electromagnetic properties of atomic structures, a possible atomic structure correspondence was stated by these researchers.

From the point of view of a single coordinate system two atomic systems at different gravitational potentials will have different total energies. The spacings of their energy levels, both atomic and nuclear, will be different in proportion to their total energies. (Cranshaw et al., 1960, pp. 163-164)

These researchers, using "nuclear clocks" and gamma ray emissions, showed that this variation in frequency did occur to within an accuracy of about 2%. A Schrödinger equation derivation predicts such a change and verifies the Cranshaw et al. speculation as well as a less specific Einstein conjecture. The most famous and direct laboratory measurement of this dilation effect on atomic structures was that of Pound and Rebka, Jr. in 1959-60, and by Pound and Snider (1965, pp. B788-B803), which showed that the above predicted alteration in gamma ray frequency is correct within an experimental error of 1%. Ohanian and Ruffini state relative to these experiments:

The latter experiment does not give quite as direct an indication of whether the frequency shift between the absorbed gamma rays and the natural nuclear oscillations is due to slower oscillation rate of the emitter or a loss of frequency suffered by the gamma ray as it climbs upward in the Earth's field. However, we can rule out the possibility of a simple frequency loss during propagation of the light wave . . . . Experiments with flying [atomic] clocks, in aircraft and rockets, have a crucial conceptual advantage over the gamma-ray experiment, in that the former experiments show in the most direct way that clocks in a gravitational potential run slower. (Ohanian and Ruffini, 1994, p. 184)

Hydrogen-maser clocks and other types of atomic clocks have confirmed to a great degree this same "frequency variation" in any atomic structure where variations in total energy lead to frequency measurements. Astronomical measurements also confirm the predicted frequency alteration discussed previously that produces a particular form of the "gravitational redshift" for light emitted from atoms on "surfaces" of stars.
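For the tower experiments mentioned above, the expected fractional frequency shift is of the order gh/c². A quick order-of-magnitude check (my own illustration; 22.5 m is the approximate height of the Harvard tower used by Pound and Rebka):

# Illustrative only: fractional frequency shift in a tower experiment.
g = 9.81          # m/s^2, surface gravity
h = 22.5          # m, approximate height of the Harvard tower
c = 2.998e8       # m/s

print("fractional shift ~ %.2e" % (g * h / c**2))   # ~2.5e-15

A shift of roughly 2.5×10⁻¹⁵ is far too small for ordinary spectroscopy, which is why the Mössbauer effect was essential to these measurements.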
If the correct methods for combining infinitesimal changes are applied to specific macroscopic objects, then, depending upon how the total energy variations are distributed, other measures that characterize physical behavior are altered, when compared to a standard. Although such alterations might not be classified as variations due to time-dilation, they actually are related to, at least, one time-measuring device. The alterations may be obtained from other derivations, but since for the derivations considered here the time-dependent Schrödinger equation is used to obtain the alterations, they are associated with alterations in, at the least, a type of light-clock. From the viewpoint of some cosmologies, this might exacerbate the known energy problems associated with the cosmology and gravitational fields. No such problems exist from the properton and medium viewpoint.

A "fourth test of GR" was proposed in 1964 (Shapiro, 1964). This is another test using photons to measure a predicted alteration in their "global" measured speed. This alteration is postulated to be caused by a gravitational field acting as a type of retarding medium. Of course, this uses the speed and wavelength model rather than the probabilistic (Feynman, 1985) model. This is a "time delay" test associated with a photon as it passes through a gravitational field. This delay would be cumulative, and the analysis of this delay can be done with respect to an infinitesimal time change of t^m as it is being measured by an infinitesimal light-clock within the gravitational field as viewed from the medium. When infinitesimal light-clocks are used for the derivation, the expression obtained is the exact same one as [50] in Ohanian and Ruffini (1994, p. 198) as it is derived from expression [52] (Ohanian and Ruffini, 1994, p. 202). How infinitesimal light-clocks enter into these calculations will be discussed more fully in the last part of this article. The derivation as given in Ohanian and Ruffini (1994, p. 202) for this delay shows that two infinitesimal light-clocks within the gravitational field are being considered as analogue models and, as such, are being used to measure this "global" photon speed. [Infinitesimal light-clocks do not predict such alterations through any alteration of the photon speed within the light-clock. Alterations in the light-clock counts are produced by considering different infinitesimal light-clocks where there is a "bounded" alteration in their construction.]

4. How the Differential is Used as a Model for Physical Behavior

Suppose that a physical measure F is represented by a real or complex number, a vector or the like. The values for F depend upon, or are influenced by, a set of independent variables. Suppose that for this discussion there is but one such variable x. To directly employ the differential in any of its usual forms, x, as a physical measure, must behave in a certain manner. It must satisfy Leibniz's principle. For the infinitesimal calculus, this is also called the infinitesimalizing process. This principle states:

1. First, assume or establish that an observed change in physical behavior being measured by F corresponds to a change in physical qualities being measured by values of the variable x.
If it is reasonable to assume that the physical qualities being measured by the values of x can be altered in such a manner that a change in the value of x can be made extremely small in "value" (and I mean just that: a number or vector "length," or the like, that is intuitively near to zero), then the differential infinitesimalizing process holds and differentials may be an appropriate model that will predict the values of F. How accurate this model will be in making such a prediction depends upon how "close" this nonzero infinitesimalizing process approximates zero. [Note that infinitesimal type objects and "differentials" mathematically exist under more general conditions than illustrated here. Further, nothing in these statements should preclude the indirect use of differentials as a mere modeling artifact, where one simply "reasonably smooths out" discrete behavior.]

2. If an overall change in behavior of a particular physical entity is a type of summation of the infinitesimal changes in F that occur when x changes by such an infinitesimal amount, then differential models may be an appropriate model for such changes in physical behavior.

Our modern knowledge of the mathematical structure refines the usual intuitive concept as previously used within theoretical physics, as explained by Max Planck:

[A] finite change in Nature always occurs in finite time, and hence resolves into a series of infinitely small changes which occur in successive infinitely small intervals of time. (Planck, 1932, p. 2)

It took more than 300 years to discover the correct refinements to this Planck statement (Herrmann, 1990). There is a much closer relationship between the physical world and the measures considered than expressed by this Planck statement, and terms such as "infinitely small" and "successive" could not actually be formally modeled until after 1961. A most important aspect of the second requirement above is the "summation" concept. There are two possible "summations" allowed.

3. The differential can be applied to a specific entity influenced by these infinitesimal changes in the behavior of x if such changes affect the behavior of the entity as a whole. Then a change in the behavior of the entire entity is a special summation of these infinitesimal changes.

4. It has been shown specifically that if an infinitesimal change in x "approximates" a non-infinitesimal change in the behavior of a particular physical entity, denoted by A, and the change in the behavior of another physical entity B depends completely upon the combined effects of each of an enormous but finite (not infinite in the usual sense) number of the A entities, then the differential can be used as an ideal approximating model that predicts the behavior of the B physical entity.

The facts are that (3) and (4) are stating different requirements. Statements (3) and (4) express how one must modify the Planck statement, a statement that is experientially obtained and that is not derivable. Indeed, there are examples where, if these rules are not followed, then the approach using differentials fails to provide an adequate model for physical behavior. In Herrmann (1990), an example is given that shows that, in general, one cannot use in (4) an infinite series of objects that are not measured by infinitesimals to model the cumulative behavior of a set of approximating physical objects. A small numerical illustration of the summation requirement follows.
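The sketch below (my own illustration, not from Herrmann's texts) shows requirement (2) at work: the overall change in F is recovered by summing the small changes F'(x)·dx, and the approximation improves as the steps are made closer to infinitesimal:

# Illustrative only: summing small changes dF = F'(x) dx recovers the
# overall change in F, with the error shrinking as dx shrinks.
def F(x):
    return x ** 3

def dF(x):          # the derivative of F
    return 3.0 * x ** 2

exact = F(2.0) - F(0.0)     # true overall change: 8
for n in (10, 100, 1000, 10000):
    dx = 2.0 / n
    total = sum(dF(i * dx) * dx for i in range(n))
    print("n = %6d: summed change = %.6f (exact = %s)" % (n, total, exact))

The summed value approaches the exact change only in the limit of ever smaller steps, which is the content of the infinitesimalizing requirement stated above.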
5. Infinitesimal Light-Clocks

In 1905, relative to the Special Theory of Relativity, Einstein introduced into scientific modeling a specific approach where he described devices that would provide the needed measurements for the quantities being measured; in particular, the well-known light-clock constructed from the "radar-ranging" concept (i.e. photons reflecting back and forth between mirrors). Further, he introduced the operational approach. Since his introduction of this approach, it has become required practice to associate physical measures with measuring devices when a mathematical model is being constructed. This process uses physical terms and associates the entities within an abstract mathematical structure with specifically named physical entities within an assumed or perceived physical world. However, this light-clock notion considered as a physical model could not be extended, at that time, to the differential calculus. One reason for this is that there did not exist mathematical entities that modeled the intuitive concept of the infinitesimal as introduced by Newton. It was not until 1961 that Abraham Robinson made one of the most significant mathematical discoveries of the twentieth century. It was at that time that the mathematical properties of the infinitesimals first appeared as a portion of his discovery (Robinson, 1961).

The line element methods used to derive the time-dilation effects of gravitational fields use specific infinitesimal changes in "time" to measure the predicted and observed changes taking place with respect to physical objects. There is no doubt that the model uses these infinitesimal changes in "time" to predict how specific timing devices will behave when located within a gravitational field and to predict cumulative effects that occur with respect to certain individual point-like objects. It is the infinitesimal changes in "time" that model each of these pure time-dilation effects. Using the modern Einstein approach and our present day knowledge as to the correct rules for infinitesimal modeling, these infinitesimal changes in "time" must be modeled, approximately, by an actual device that satisfies all of the requirements. The modern approach to mathematical modeling does not allow for the philosophical concept of "time" to be considered at all, but forces upon the scientific world the requirement that an actual "infinitesimalizable device" be described. Assuming that Riemannian geometry is an appropriate analogue model for gravitational field investigations, such an infinitesimal-type clock is constructed in the paper by Marzke and Wheeler (1964). However, this construction is after the fact rather than before. Although the mathematics may seem formidable, using the modern theory of infinitesimal and infinite numbers, various infinitesimal light-clocks are described in Herrmann (1994, Article 2, section 6) using only two observed properties of electromagnetic radiation and nothing more. These infinitesimal light-clocks are utilized to investigate various aspects of the Special and General Theories of Relativity in the search for a specific physical cause for such behavior as time-dilation. Of course, one should not expect the answer to be easily obtained when one considers that it took over 300 years to discover the formal mathematical properties of the infinitesimals.
These infinitesimal light-clocks, since they display pure Robinson infinite numbers with their infinitesimal inverses, are appropriate for any theory consistent with their construction that uses differentials as models for physical changes. The most significant aspect of this more basic infinitesimal light-clock interpretation is that it is the alteration in the behavior of this specific "clock" that models the alteration in behavior displayed by other physical entities, regardless of how these other physical entities are employed for the purpose of physical measurement. The infinitesimal light-clocks are analogue models that are considered to undergo the physical alterations due to gravitational fields for their application to GR. For time-dilation, only the timing infinitesimal light-clock is used to measure gravitational alterations in appropriate entities that satisfy the requirements of the Galileo equivalence principle over a local spacetime interval. Although their use is analogue in character, there is no doubt that infinitesimal light-clocks imply that such gravitational alterations attributed to time-dilation are actually alterations in behavior associated with an interaction with the gravitational field. It is an interesting exercise to investigate, relative to gravitational effects, which of the two types of "summation," 3 or 4, applies to a specific problem. There have arisen various theories as to exactly what "gravity" may be from a more fundamental viewpoint, and some of these theories present distinctly different causes for such time-dilation. Although it is of no significance to the material presented within this article, infinitesimal light-clocks can be used to develop such a theory, and this theory is discussed in more detail in Herrmann (1994).

References

Bergmann, P. G. 1976. Introduction to the Theory of Relativity. Dover, New York.
Cranshaw, T. E., J. P. Schiffer and A. B. Whitehead. 1960. Measurement of the red shift using the Mössbauer effect in Fe^57. Phys. Rev. Letters 4(4):163-164.
Feynman, R. 1985. QED: The Strange Theory of Light and Matter. Princeton Univ. Press, Princeton.
Fock, V. 1959. The Theory of Space Time and Gravitation. Pergamon Press, New York.
Herrmann, R. A. 1995. Operator equations, separation of variables, and relativistic alterations. Internat. J. Math. & Math. Sci. 18(1):59-62.
Herrmann, R. A. 1994. The Theory of Infinitesimal Light-Clocks.
Herrmann, R. A. 1990. Infinitesimal Modeling. http://www.raherrmann/books.htm
Lawden, D. F. 1982. An Introduction to Tensor Calculus, Relativity and Cosmology. John Wiley & Sons, New York.
Marzke, R. F. and J. A. Wheeler. 1964. In Chiu and Hoffman (eds.), Gravitation and Relativity. W. A. Benjamin, New York.
Mill, J. S. 1963-1992. A System of Logic Ratiocinative and Inductive. University of Toronto Press, ON, Canada.
Ohanian, H. and R. Ruffini. 1994. Gravitation and Spacetime. W. W. Norton Co., New York.
Planck, M. 1932. The Mechanics of Deformable Bodies, Vol. II, Introduction to Theoretical Physics. Macmillan, New York.
Pound, R. V. and J. L. Snider. 1965. Effect of gravity on gamma radiation. Phys. Rev. 140(3B):B788-B803.
Robinson, A. 1961. Non-standard analysis. Proc. Royal Acad. Sci., Amsterdam, ser. A, 64:432-440.
Shapiro, I. I. 1964. Fourth test of general relativity. Phys. Rev. Letters 13(26):789-791.

Math. Dept., U. S. Naval Academy, 572C Holloway Rd., Annapolis, MD 21402-5002
The true mystery of quantum physics

In many of our papers, we presented the orbital motion of an electron around a nucleus or inside of a more complicated molecular structure[1], as well as the motion of the pointlike charge inside of an electron itself, as a fundamental oscillation. You will say: what is fundamental and, conversely, what is not? These oscillations are fundamental in the sense that these motions are (1) perpetual or stable and (2) also imply a quantization of space resulting from the Planck-Einstein relation. Needless to say, this quantization of space looks very different depending on the situation: the order of magnitude of the radius of orbital motion around a nucleus is about 150 times the electron's Compton radius[2] so, yes, that is very different. However, the basic idea is always the same: a pointlike charge going round and round in a rather regular fashion (otherwise our idea of a cycle time (T = 1/f) and an orbital would make no sense whatsoever), and that oscillation then packs a certain amount of energy as well as Planck's quantum of action (h). In fact, that's just what the Planck-Einstein relation embodies: E = h·f. Frequencies and, therefore, radii and velocities are very different: we think of the pointlike charge inside of an electron as whizzing around at lightspeed, while the order of magnitude of the velocities of the electron in an atomic or molecular orbital is also given by that fine-structure constant: v = α·c/n (n is the principal quantum number, or the shell in the gross structure of an atom). But the underlying equations of motion – as Dirac referred to them – are not fundamentally different.

We can look at these oscillations in two very different ways. Most Zitterbewegung theorists (or realist thinkers, I might say) think of it as a self-perpetuating current in an electromagnetic field. David Hestenes is probably the best known theorist in this class. However, we feel such a view does not satisfactorily answer the quintessential question: what keeps the charge in its orbit? We, therefore, preferred to stick with an alternative model, which we loosely refer to as the oscillator model. However, truth be told, we are aware this model comes with its own interpretational issues. Indeed, our interpretation of this oscillator model oscillated between the metaphor of a classical (non-relativistic) two-dimensional oscillator (think of a Ducati V2 engine, with the two pistons working in tandem at a 90-degree angle) and the mathematically correct analysis of a (one-dimensional) relativistic oscillator, which we may sum up in the following relativistically correct energy conservation law:

dE/dt = d[kx²/2 + mc²]/dt = 0

More recently, we actually noted the number of dimensions (think of the number of pistons of an engine) should actually not matter at all: an old-fashioned radial airplane engine has 3, 5, 7, or more cylinders (the odd number has to do with the firing mechanism for four-stroke engines), but the interplay between those pistons can be analyzed just as well as the 'sloshing back and forth' of kinetic and potential energy in a dynamic system (see our paper on the meaning of uncertainty and the geometry of the wavefunction). Hence, it seems any number of springs or pistons working together would do the trick: somehow, linear becomes circular motion, and vice versa. But so what number of dimensions should we use for our metaphor, really?
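As a quick numerical aside (my own illustration; the constants are standard), the two ratios used above are easy to check: the Bohr radius is 1/α ≈ 137 times the electron's Compton radius, and the ground-state orbital velocity is v = α·c:

# Illustrative only: checking the ~137 radius ratio and v = alpha*c.
alpha = 7.2973525693e-3    # fine-structure constant
c = 2.99792458e8           # m/s

print("Bohr/Compton radius ratio : %.1f" % (1.0 / alpha))      # ~137
print("v = alpha*c               : %.3e m/s" % (alpha * c))    # ~2.19e6 m/s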
We now think the 'one-dimensional' relativistic oscillator is the correct mathematical analysis, but we should interpret it more carefully. Look at the dE/dt = d[kx²/2 + mc²]/dt = d(PE + KE)/dt = 0 equation once more. For the potential energy, one gets the same kx²/2 formula one gets for the non-relativistic oscillator. That is no surprise: potential energy depends on position only, not on velocity, and there is nothing relative about position. However, the (1/2)m₀v² term that we would get when using the non-relativistic formulation of Newton's law is now replaced by the mc² = γm₀c² term. Both energies vary – with position and with velocity respectively – but the equation above tells us their sum is some constant. Equating x to 0 (when the velocity v = c) gives us the total energy of the system: E = mc². Just as it should be. 🙂

So how can we now reconcile these two models? One two-dimensional but non-relativistic, and the other relativistically correct but one-dimensional only? We always get this weird 1/2 factor! And we cannot think it away, so what is it, really?

We still don't have a definite answer, but we think we may be closer to the conceptual locus where these two models might meet: the key is to interpret x and v in the equation for the relativistic oscillator as (1) the distance along an orbital, and (2) the tangential velocity of the pointlike charge along this orbital. Huh? Yes. Read everything slowly and you might see the point. [If not, don't worry about it too much. This is really a minor (but important) point in my so-called realist interpretation of quantum mechanics.]

If you get the point, you'll immediately cry wolf and say such an interpretation of x as a distance measured along some orbital (as opposed to the linear concept we are used to) and, consequently, thinking of v as some kind of tangential velocity along such an orbital, looks pretty random. However, keep thinking about it, and you will have to admit it is a rather logical way out of the logical paradox. The formula for the relativistic oscillator assumes a pointlike charge with zero rest mass oscillating between v = 0 and v = c. However, something with zero rest mass will always be associated with some velocity: it cannot be zero! Think of a photon here: how would you slow it down? And you may think we could, perhaps, slow down a pointlike electric charge with zero rest mass in some electromagnetic field but, no! The slightest force on it will give it infinite acceleration according to Newton's force law. [Admittedly, we would need to distinguish here between its relativistic expression (F = dp/dt) and its non-relativistic expression (F = m₀·a) when further dissecting this statement, but you get the idea. Also note that we are discussing our electron here, in which we do have a zero-rest-mass charge. In an atomic or molecular orbital, we are talking about an electron with a non-zero rest mass: just the mass of the electron whizzing around at a (significant) fraction (α) of lightspeed.]

Hence, it is actually quite rational to argue that the relativistic oscillator cannot be linear: the velocity must be some tangential velocity, always, and – for a pointlike charge with zero rest mass – it must equal lightspeed, always. So, yes, we think this line of reasoning might well be the conceptual locus where the one-dimensional relativistic oscillator (E = m·a²·ω²) and the two-dimensional non-relativistic oscillator (E = 2·m·a²·ω²/2 = m·a²·ω²) could meet. Of course, we welcome the view of any reader here!
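For readers who want to see the 'two pistons in tandem' arithmetic work out, here is a minimal numerical check (my own sketch, in arbitrary units): for circular motion x = a·cos(ωt), y = a·sin(ωt), the sum of the two one-dimensional oscillator energies (with k = m·ω²) is constant and equal to m·a²·ω²:

import math

# Illustrative only: the energies of two perpendicular oscillators in
# quadrature add up to the constant E = m*a^2*w^2.
m, a, w = 1.0, 1.0, 1.0
k = m * w * w

for t in (0.0, 0.3, 1.0, 2.5):
    x, y = a * math.cos(w * t), a * math.sin(w * t)
    vx, vy = -a * w * math.sin(w * t), a * w * math.cos(w * t)
    E = 0.5 * k * (x * x + y * y) + 0.5 * m * (vx * vx + vy * vy)
    print("t = %.1f: E = %.12f" % (t, E))   # always m*a^2*w^2 = 1

Whatever the phase, kinetic and potential energy slosh back and forth while their sum stays fixed, which is the E = 2·m·a²·ω²/2 = m·a²·ω² result quoted above.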
In fact, if there is a true mystery in quantum physics (we do not think so, but we know people – academics included – like mysterious things), then it is here!

Post scriptum: This is, perhaps, a good place to answer a question I sometimes get: what is so natural about relativity and a constant speed of light? It is not so easy, perhaps, to show why and how Lorentz' transformation formulas make sense but, in contrast, it is fairly easy to think of the absolute speed of light like this: infinite speeds do not make sense, physically as well as mathematically. From a physics point of view, the issue is this: something that moves about at an infinite speed is everywhere and, therefore, nowhere. So it doesn't make sense. Mathematically speaking, you should not think of v reaching infinity but of a limit of a ratio of a distance interval that goes to infinity, while the time interval goes to zero. So, in the limit, we get a division of an infinite quantity by 0. That's not infinity but an indeterminacy: it is totally undefined! Indeed, mathematicians can easily deal with infinity and zero, but divisions like zero divided by zero, or infinity divided by zero, are meaningless. [Of course, we may have different mathematical functions in the numerator and denominator whose limits yield those values. There is then a reasonable chance we will be able to factor stuff out so as to get something else. We refer to such situations as indeterminate forms, but these are not what we refer to here. The informed reader will, perhaps, also note the division of infinity by zero does not figure in the list of indeterminacies, but any division by zero is generally considered to be undefined.]

[1] It may be an extra electron such as, for example, the electron which jumps from place to place in a semiconductor (see our quantum-mechanical analysis of electric currents). Also, as Dirac first noted, the analysis is actually also valid for electron holes, in which case our atom or molecule will be positively ionized instead of being neutral or negatively charged.

[2] We say 150 because that is close enough to the 1/α ≈ 137 factor that relates the Bohr radius to the Compton radius of an electron. The reader may not be familiar with the idea of a Compton radius (as opposed to the Compton wavelength), but we refer him or her to our Zitterbewegung (ring current) model of an electron.

A theory of matter-particles

Pre-scriptum (PS), added on 6 March 2020: The ideas below also naturally lead to a theory about what a neutrino might actually be. As such, it's a complete 'alternative' Theory of Everything. I uploaded the basics of such theory on my site. For those who do not want to log on, you can also find the paper on my author's page on Phil Gibbs' site.

We were rather tame in our last paper on the oscillator model of an electron. We basically took some philosophical distance from it by stating we should probably only think of it as a mathematical equivalent to Hestenes' concept of the electron as a superconducting loop. However, deep inside, we feel we should not be invoking Maxwell's laws of electrodynamics to explain what a proton and an electron might actually be. The basics of the ring current model can be summed up in one simple equation:

c = a·ω

This is the formula for the tangential velocity.
Einstein's mass-energy equivalence relation and the Planck-Einstein relation explain everything else[1], as evidenced by the fact that we can immediately derive the Compton radius of an electron from these three equations:

a = c/ω = c·ħ/E = ħ/(m·c)

The reader might think we are just 'casually connecting formulas' here[2], but we feel we have a full-blown theory of the electron here: simple and consistent. The geometry of the model (visualized in a figure in the original post) is as follows: we think of an electron (and a proton) as consisting of a pointlike elementary charge – pointlike but not dimensionless[3] – moving about at (nearly) the speed of light around the center of its motion.

The relation works perfectly well for the electron. However, when applying the a = ħ/mc radius formula to a proton, we get a value which is about 1/4 of the measured proton radius: about 0.21 fm, as opposed to the 0.83-0.84 fm charge radius which was established by Professors Pohl, Gasparian and others over the past decade.[4] In our papers on the proton radius[5], we motivated the 1/4 factor by referring to the energy equipartition theorem and assuming energy is, somehow, equally split over electromagnetic field energy and the kinetic energy in the motion of the zbw charge. However, the reader must have had the same feeling as we had: these assumptions are rather ad hoc. We, therefore, propose something more radical:

When considering systems (e.g. electron orbitals) and excited states of particles, angular momentum comes in units (nearly) equal to ħ, but when considering the internal structure of elementary particles, (orbital) angular momentum comes in an integer fraction of ħ. This fraction is 1/2 for the electron[6] and 1/4 for the proton.

Let us write this out for the proton radius:

4E = ħ·ω = ħ·c/a ⇒ a = ħ/(4m·c)

What are the implications for the assumed centripetal force keeping the elementary charge in motion? The centripetal acceleration is equal to ac = vt²/a = a·ω². It is probably useful to remind ourselves how we get this result so as to make sure our calculations are relativistically correct. The position vector r (which describes the position of the zbw charge) has a horizontal and a vertical component: x = a·cos(ωt) and y = a·sin(ωt). We can now calculate the two components of the (tangential) velocity vector v = dr/dt as vx = –a·ω·sin(ωt) and vy = a·ω·cos(ωt) and, in the next step, the components of the (centripetal) acceleration vector ac: ax = –a·ω²·cos(ωt) and ay = –a·ω²·sin(ωt). The magnitude of this vector is then calculated as follows:

ac² = ax² + ay² = a²·ω⁴·cos²(ωt) + a²·ω⁴·sin²(ωt) = a²·ω⁴ ⇒ ac = a·ω² = vt²/a

Now, Newton's force law tells us that the magnitude of the centripetal force will be equal to:

F = mγ·ac = mγ·a·ω²

As usual, the mγ factor is, once again, the effective mass of the zbw charge as it zitters around the center of its motion at (nearly) the speed of light: it is half the electron mass.[7] If we denote the centripetal force inside the electron as Fe, we can relate it to the electron mass me as follows:

Fe = (me/2)·a·ω² = (me/2)·(ħ/(me·c))·(me·c²/ħ)² = me²·c³/(2ħ)

Assuming our logic in regard to the effective mass of the zbw charge inside a proton is also valid – and using the 4E = ħω and a = ħ/4mc relations – we get the following equation for the centripetal force inside of a proton:

Fp = (mp/2)·(ħ/(4mp·c))·(4mp·c²/ħ)² = 2·mp²·c³/ħ

How should we think of this? In our oscillator model, we think of the centripetal force as a restoring force. This force depends linearly on the displacement from the center, and the (linear) proportionality constant is usually written as k. Hence, we can write Fe and Fp as Fe = -ke·x and Fp = -kp·x respectively.
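The numbers quoted above are easy to reproduce (my own illustration; the constants are standard CODATA-style values):

# Illustrative only: the Compton radius a = hbar/(m*c) for the electron
# and the proton, and the claimed factor-4 gap with the measured radius.
hbar = 1.054571817e-34     # J*s
c = 2.99792458e8           # m/s
m_e = 9.1093837015e-31     # kg
m_p = 1.67262192369e-27    # kg

a_e = hbar / (m_e * c)
a_p = hbar / (m_p * c)
print("electron Compton radius : %.4e m" % a_e)                # ~3.86e-13 m
print("proton Compton radius   : %.3f fm" % (a_p * 1e15))      # ~0.210 fm
print("4 x proton value        : %.3f fm" % (4 * a_p * 1e15))  # ~0.84 fm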
Taking the ratio of both so as to have an idea of the respective strength of both forces, we get this:

Fp/Fe = ((mp/2)·ap)/((me/2)·ae) = (2·mp²·c³/ħ)/(me²·c³/(2ħ)) = 4·mp²/me²

The ap and ae here are the (magnitudes of the) acceleration vectors – not the radii. The equation above seems to tell us that the centripetal force inside of a proton gives the zbw charge inside – which is nothing but the elementary charge, of course – an acceleration that is four times that of what might be going on inside the electron. Nice, but how meaningful are these relations, really? If we would think of the centripetal or restoring force as modeling some elasticity of spacetime – the gut intuition behind far more complicated string theories of matter – then we may think of distinguishing between a fundamental frequency and higher-level harmonics or overtones.[8] We will leave our reflections at that for the time being.

We should add one more note, however. We only talked about the electron and the proton here. What about other particles, such as neutrons or mesons? We do not consider these to be elementary because they are not stable: we think they are not stable because the Planck-Einstein relation is slightly off, which causes them to disintegrate into what we've been trying to model here: stable stuff. As for the process of their disintegration, we think the approach that was taken by Gell-Mann and others[9] is not productive: inventing new quantities that are supposedly being conserved – such as strangeness – is… Well… As strange as it sounds. We, therefore, think the concept of quarks confuses rather than illuminates the search for a truthful theory of matter.

Jean Louis Van Belle, 6 March 2020

[1] In this paper, we make abstraction of the anomaly, which is related to the zbw charge having a (tiny) spatial dimension.

[2] We had a signed contract with the IOP and WSP scientific publishing houses for our manuscript on a realist interpretation of quantum mechanics, which was shot down by this simple comment. We have basically stopped trying to convince mainstream academics from that point onwards.

[3] See footnote 1.

[4] See our paper on the proton radius.

[5] See reference above.

[6] The reader may wonder why we did not present the 1/2 fraction in the first set of equations (calculation of the electron radius). We refer him or her to our previous paper on the effective mass of the zbw charge. The 1/2 factor appears when considering orbital angular momentum only.

[7] The reader may not be familiar with the concept of the effective mass of an electron, but it pops up very naturally in the quantum-mechanical analysis of the linear motion of electrons. Feynman, for example, gets the equation out of a quantum-mechanical analysis of how an electron could move along a line of atoms in a crystal lattice. See: Feynman's Lectures, Vol. III, Chapter 16: The Dependence of Amplitudes on Position. We think of the effective mass of the electron as the relativistic mass of the zbw charge as it whizzes about at nearly the speed of light. The rest mass of the zbw charge itself is close to – but also not quite equal to – zero. Indeed, based on the measured anomalous magnetic moment, we calculated the rest mass of the zbw charge as being equal to about 3.4% of the electron rest mass.

[8] For a basic introduction, see my blog posts on modes or on music and physics.

[9] See, for example, the analysis of kaons (K-mesons) in Feynman's Lectures, Vol. III, Chapter 11, section 5.

Wikipedia censorship

I started to edit and add to the rather useless Wikipedia article on the Zitterbewegung.
No mention of Hestenes or more recent electron models (e.g. Burinskii's Kerr-Newman geometries). No mention that the model only works for electrons or leptons in general – not for non-leptonic fermions. It's plain useless. But all the edits/changes/additions were erased by some self-appointed 'censor'. I protested, but then I got reported to the administrator! What can I say? Don't trust Wikipedia. Don't trust any 'authority'. We live in weird times. The mindset of most professional physicists seems to be governed by ego and the Bohr-Heisenberg Diktatur. For the record, these are the changes and edits I tried to make. You can compare and judge for yourself. Needless to say, I told them I wouldn't bother to even try to contribute any more. I published my own article on the Vixrapedia e-encyclopedia. Also, as Vixrapedia did not have an entry on realist interpretations of quantum mechanics, I created one: have a look and let me know what you think. 🙂

Zitterbewegung ("trembling" or "shaking" motion in German) – usually abbreviated as zbw – is a hypothetical rapid oscillatory motion of elementary particles that obey relativistic wave equations. The existence of such motion was first proposed by Erwin Schrödinger in 1930 as a result of his analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces what appears to be a fluctuation (up to the speed of light) of the position of an electron around the median, with an angular frequency of ω = 2mc²/ħ, or approximately 1.5527×10²¹ radians per second.

Paul Dirac was initially intrigued by it, as evidenced by his rather prominent mention of it in his 1933 Nobel Prize Lecture (it may be usefully mentioned that he shared this Nobel Prize with Schrödinger):

"The variables give rise to some rather unexpected phenomena concerning the motion of the electron. These have been fully worked out by Schrödinger. It is found that an electron which seems to us to be moving slowly, must actually have a very high frequency oscillatory motion of small amplitude superposed on the regular motion which appears to us. As a result of this oscillatory motion, the velocity of the electron at any time equals the velocity of light. This is a prediction which cannot be directly verified by experiment, since the frequency of the oscillatory motion is so high and its amplitude is so small. But one must believe in this consequence of the theory, since other consequences of the theory which are inseparably bound up with this one, such as the law of scattering of light by an electron, are confirmed by experiment."[1]

In light of Dirac's later comments on modern quantum theory, it is rather puzzling that he did not pursue the idea of trying to understand charged particles in terms of the motion of a pointlike charge, which is what the Zitterbewegung hypothesis seems to offer. Dirac's views on non-leptonic fermions – which were then (1950s and 1960s) being analyzed in an effort to explain the 'particle zoo' in terms of decay reactions conserving newly invented or ad hoc quantum numbers such as strangeness[2] – may be summed up by quoting the last paragraph in the last edition of his Principles of Quantum Mechanics: "Now there are other kinds of interactions, which are revealed in high-energy physics.
[…] These interactions are not at present sufficiently well understood to be incorporated into a system of equations of motion."[3] Indeed, in light of this stated preference for kinematic models, it is somewhat baffling that Dirac did not follow up on this or any of the other implications of the Zitterbewegung hypothesis, especially because it should be noted that a reexamination of Dirac theory shows that interference between positive and negative energy states is not a necessary ingredient of Zitterbewegung theories.[4]

The Zitterbewegung hypothesis also seems to offer interesting shortcuts to key results of mainstream quantum theory. For example, one can show that, for the hydrogen atom, the Zitterbewegung produces the Darwin term which plays the role in the fine structure as a small correction of the energy level of the s-orbitals.[5] This is why authors such as Hestenes refer to it as a possible alternative interpretation of mainstream quantum mechanics, which may be an exaggerated claim in light of the fact that the zbw hypothesis results from the study of electron behavior only.

Zitterbewegung models have mushroomed[6] and it is, therefore, increasingly difficult to distinguish between them. The key to understanding and distinguishing the various Zitterbewegung models may well be Wheeler's 'mass without mass' idea, which implies a distinction between the idea of (i) a pointlike electric charge (i.e. the idea of a charge only, with zero rest mass) and (ii) the idea of an electron as an elementary particle whose equivalent mass is the energy of the zbw oscillation of the pointlike charge.[7] The 'mass without mass' concept requires a force to act on a charge – and a charge only – to explain why a force changes the state of motion of an object – its momentum p = mγ·v (with γ referring to the Lorentz factor) – in accordance with the (relativistically correct) F = dp/dt force law.

As mentioned above, the zbw hypothesis goes back to Schrödinger's and Dirac's efforts to try to explain what an electron actually is. Unfortunately, both interpreted the electron as a pointlike particle with no 'internal structure'. David Hestenes is to be credited with reviving the Zitterbewegung hypothesis in the early 1990s. While acknowledging its origin as a (trivial) solution to Dirac's equation for electrons, Hestenes argues the Zitterbewegung should be related to the intrinsic properties of the electron (charge, spin and magnetic moment). He argues that the Zitterbewegung hypothesis amounts to a physical interpretation of the elementary wavefunction or – more boldly – to a possible physical interpretation of all of quantum mechanics: "Spin and phase [of the wavefunction] are inseparably related — spin is not simply an add-on, but an essential feature of quantum mechanics. […] A standard observable in Dirac theory is the Dirac current, which doubles as a probability current and a charge current. However, this does not account for the magnetic moment of the electron, which many investigators conjecture is due to a circulation of charge. But what is the nature of this circulation? […] Spin and phase must be kinematical features of electron motion.
The charge circulation that generates the magnetic moment can then be identified with the Zitterbewegung of Schrödinger."[8] Hestenes' interpretation amounts to a kinematic model of the electron which can be described in terms of John Wheeler's mass without mass concept.[9] The rest mass of the electron is analyzed as the equivalent energy of an orbital motion of a pointlike charge. This pointlike charge has no rest mass and must, therefore, move at the speed of light (which confirms Dirac's and Schrödinger's remarks on the nature of the Zitterbewegung). Hestenes summarizes his interpretation as follows: "The electron is nature's most fundamental superconducting current loop. Electron spin designates the orientation of the loop in space. The electron loop is a superconducting LC circuit. The mass of the electron is the energy in the electron's electromagnetic field. Half of it is magnetic potential energy and half is kinetic."[10]

Hestenes' articles and papers on the Zitterbewegung discuss the electron only. The interpretation of an electron as a superconducting ring of current (or as a (two-dimensional) oscillator) also works for the muon: its theoretical Compton radius rC = ħ/(mμ·c) ≈ 1.87 fm falls within the CODATA confidence interval for the experimentally determined charge radius.[11] Hence, the theory seems to offer a remarkably simple and intuitive model of leptons. However, the model cannot be generalized to non-leptonic fermions (i.e. spin-1/2 particles that are not leptons). Its application to protons or neutrons, for example, is problematic: when inserting the energy of a proton or a neutron into the formula for the Compton radius (the rC = ħ/(m·c) formula follows from the kinematic model), we get a radius of the order of rC = ħ/(mp·c) ≈ 0.21 fm, which is about 1/4 of the measured value (0.84184(67) fm to 0.897(18) fm). A radius of the order of 0.2 fm is also inconsistent with the presumed radius of the pointlike charge itself. Indeed, while the pointlike charge is supposed to be pointlike, pointlike needs to be interpreted as 'having no internal structure': it does not imply the pointlike charge has no (small) radius itself. The classical electron radius is a likely candidate for the radius of the pointlike charge because it emerges from low-energy (Thomson) scattering experiments (elastic scattering of photons, as opposed to inelastic Compton scattering). The assumption of a pointlike charge with radius re = α·ħ/(me·c) may also offer a geometric explanation of the anomalous magnetic moment.[12]

In any case, the remarks above show that a Zitterbewegung model for non-leptonic fermions is likely to be problematic: a proton, for example, cannot be explained in terms of the Zitterbewegung of a positron (or a heavier variant of it, such as the muon- or tau-positron).[13] This is why it is generally assumed that the large energy (and the small size) of nucleons is to be explained by another force – a strong force which acts on a strong charge instead of an electric charge. One should note that both color and flavor in the standard quark–gluon model of the strong force may be thought of as zero-mass charges in 'mass without mass' kinematic models; hence, the acknowledgment of this problem does not generally lead zbw theorists to abandon the quest for an alternative realist interpretation of quantum mechanics.
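The radii quoted above follow directly from rC = ħ/(m·c). A minimal numerical check (constants are CODATA-rounded; the script is merely illustrative and not part of any cited model):

```python
# Illustrative check of the Compton radii quoted above: r_C = hbar / (m * c)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 299792458.0         # speed of light, m/s
MeV = 1.602176634e-13   # J per MeV

masses_MeV = {"electron": 0.51099895, "muon": 105.6583755, "proton": 938.27208816}

for name, m_MeV in masses_MeV.items():
    m = m_MeV * MeV / c**2   # rest mass in kg
    r_C = hbar / (m * c)     # Compton radius in m
    print(f"{name:8s}  r_C = {r_C / 1e-15:8.4f} fm")

# Output (approximately):
# electron  r_C = 386.1593 fm
# muon      r_C =   1.8676 fm
# proton    r_C =   0.2103 fm
```

The proton value is indeed roughly a quarter of the measured 0.84–0.90 fm charge radius, which is the discrepancy the paragraph above turns on.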
While Hestenes' zbw interpretation (and the geometric calculus approach he developed) is elegant and attractive, he does not seem to have convincingly answered an obvious question from critics of the model: what keeps the pointlike charge in the zbw electron in its circular orbit? To put it simply: one may think of the electron as a superconducting ring, but there is no material ring to hold and guide the charge. Of course, one may argue that the electromotive force explains the motion, but this raises the fine-tuning problem: the slightest deviation of the pointlike charge from its circular orbit would yield disequilibrium and, therefore, instability. [One should note that the fine-tuning problem is also present in mainstream quantum mechanics. See, for example, the discussion in Feynman's Lectures on Physics.] The lack of a convincing answer to these and other questions (e.g. on the distribution of (magnetic) energy within the superconducting ring) led several theorists working on electron models (e.g. Alexander Burinskii[14][15]) to move on and explore alternative geometric approaches, including Kerr-Newman geometries. Burinskii summarizes his model as follows: "The electron is a superconducting disk defined by an over-rotating black hole geometry. The charge emerges from the Möbius structure of the Kerr geometry."[16] His advanced modelling of the electron also allows for a conceptual bridge with mainstream quantum mechanics, grand unification theories and string theory: "[…] Compatibility between gravity and quantum theory can be achieved without modifications of Einstein-Maxwell equations, by coupling to a supersymmetric Higgs model of symmetry breaking and forming a nonperturbative super-bag solution, which generates a gravity-free Compton zone necessary for consistent work of quantum theory. Super-bag is naturally upgraded to Wess-Zumino supersymmetric QED model, forming a bridge to perturbative formalism of conventional QED."[17]

The various geometric approaches (Hestenes' geometric calculus, Burinskii's Kerr-Newman model, oscillator models) yield the same results – the intrinsic properties of the electron are derived from what may be referred to as kinematic equations, or classical (but relativistically correct) equations – except for a factor 2 or 1/2, or the inclusion (or not) of variable tuning parameters (Burinskii's model, for example, allows for a variable geometry). However, the equivalence of the various models that may or may not explain the hypothetical Zitterbewegung still needs to be established.

The continued interest in zbw models may be explained by the fact that Zitterbewegung models – in particular Hestenes' model and the oscillator model – are intuitive and, therefore, attractive. They are intuitive because they combine the Planck-Einstein relation (E = h·f) and Einstein's mass-energy equivalence (E = mc²): each cycle of the Zitterbewegung electron effectively packs (i) the unit of physical action (h) and (ii) the electron's energy. This allows one to understand Planck's quantum of action as the product of the electron's energy and the cycle time: h = E·T = (h·f)·(1/f) = h. In addition, the idea of a centripetal force keeping some zero-mass pointlike charge in a circular orbit also offers a geometric explanation of Einstein's mass-energy equivalence relation: this equation, therefore, is no longer a rather inexplicable consequence of special relativity theory. The section below offers a general overview of the original discovery of Schrödinger and Dirac.
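The cycle-time identity in the previous paragraph can be written out as a one-line check (standard relations only; the numerical values are rounded):

```latex
% One zbw cycle packs Planck's quantum of action:
%   E = h f  (Planck-Einstein),  T = 1/f  (cycle time),  E = m c^2
\[
  h = E \cdot T = (h f)\cdot\frac{1}{f}, \qquad
  T = \frac{h}{m c^{2}}
    \approx \frac{6.626\times 10^{-34}\ \mathrm{J\,s}}{8.187\times 10^{-14}\ \mathrm{J}}
    \approx 8.09\times 10^{-21}\ \mathrm{s}.
\]
```

The cycle time T is just the Compton period of the electron; nothing beyond E = h·f and E = mc² goes into it.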
It is followed by further analysis which may or may not help the reader to judge whether the Zitterbewegung hypothesis might, effectively, amount to what David Hestenes claims it actually is: an alternative interpretation of quantum mechanics.

Theory for a free fermion

[See the article: the author of this section does not seem to know – or does not mention, at least – that the Zitterbewegung hypothesis only applies to leptons (no strong charge).]

Experimental evidence

The Zitterbewegung may remain theoretical because, as Dirac notes, the frequency may be too high to be observable: it is twice the frequency of a 0.511 MeV gamma-ray, since ħω = 2mc² ≈ 1.02 MeV. However, some experiments may offer indirect evidence. Dirac's reference to electron scattering experiments is also quite relevant, because such experiments yield two radii: a radius for elastic scattering (the classical electron radius) and a radius for inelastic scattering (the Compton radius). Zitterbewegung theorists think Compton scattering involves electron-photon interference: the energy of the high-energy photon (X- or gamma-ray photons) is briefly absorbed before the electron comes back to its equilibrium situation by emitting another (lower-energy) photon (the difference in the energy of the incoming and the outgoing photon gives the electron some extra momentum). Because of this presumed interference effect, Compton scattering is referred to as inelastic. In contrast, low-energy photons scatter elastically: they seem to bounce off some hard core inside of the electron (no interference).

Some experiments also claim to simulate the Zitterbewegung of a free relativistic particle. First, with a trapped ion, by putting it in an environment such that the non-relativistic Schrödinger equation for the ion has the same mathematical form as the Dirac equation (although the physical situation is different).[18][19] Then, in 2013, it was simulated in a setup with Bose–Einstein condensates.[20]

The effective mass of the electric charge

The 2m factor in the formula for the zbw frequency, and the interpretation of the Zitterbewegung in terms of a centripetal force acting on a pointlike charge with zero rest mass, lead one to re-explore the concept of the effective mass of an electron. Indeed, if we write the effective mass of the pointlike charge as mγ = γ·m0, then we can derive its value from the angular momentum of the electron (L = ħ/2) using the general angular momentum formula L = r × p and equating r to the Compton radius rC = ħ/(m·c):

L = rC·(mγ·c) = (ħ/(m·c))·mγ·c = (mγ/m)·ħ = ħ/2 ⟹ mγ = m/2.

This explains the 1/2 factor in the frequency formula for the Zitterbewegung. Substituting mγ for m in ω = 2mc²/ħ yields ω = 2mγc²/ħ = mc²/ħ, which is just the Planck-Einstein relation ω = E/ħ applied to the electron's rest energy. The electron can then be described as an oscillator (in two dimensions) whose natural frequency is given by the Planck-Einstein relation.[21]
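A quick numerical sanity check of these relations (illustrative only; constants are CODATA-rounded):

```python
# Numerical check of the zbw relations quoted above.
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 299792458.0          # speed of light, m/s
m_e = 9.1093837015e-31   # electron rest mass, kg

omega_zbw = 2 * m_e * c**2 / hbar   # Schroedinger's zbw angular frequency
r_C = hbar / (m_e * c)              # Compton radius
m_eff = m_e / 2                     # effective mass of the pointlike charge
L = r_C * (m_eff * c)               # angular momentum, L = r x p with |p| = m_eff * c

print(f"omega_zbw = {omega_zbw:.4e} rad/s")   # ~1.5527e+21 rad/s, as quoted
print(f"L / hbar  = {L / hbar:.3f}")          # 0.500, i.e. L = hbar/2
```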
Ground state

Energy levels for an electron in an atom: ground state and excited states. After absorbing energy, an electron may jump from the ground state to a higher-energy excited state.

The ground state of a quantum-mechanical system is its lowest-energy state; the energy of the ground state is known as the zero-point energy of the system. An excited state is any state with energy greater than the ground state. In quantum field theory, the ground state is usually called the vacuum state or the vacuum. If more than one ground state exists, they are said to be degenerate. Many systems have degenerate ground states. Degeneracy occurs whenever there exists a unitary operator that acts non-trivially on a ground state and commutes with the Hamiltonian of the system.

According to the third law of thermodynamics, a system at absolute zero temperature exists in its ground state; thus, its entropy is determined by the degeneracy of the ground state. Many systems, such as a perfect crystal lattice, have a unique ground state and therefore have zero entropy at absolute zero. It is also possible for the highest excited state to have absolute zero temperature for systems that exhibit negative temperature.

Absence of nodes in one dimension

In one dimension, the ground state of the Schrödinger equation can be proven to have no nodes.[1]

Consider the average energy of a state with a node at x = 0; i.e., ψ(0) = 0. The average energy in this state would be

⟨ψ|H|ψ⟩ = ∫ dx [−(ħ²/2m)·ψ*·(d²ψ/dx²) + V(x)·|ψ(x)|²],

where V(x) is the potential. With integration by parts:

∫ ψ*·(d²ψ/dx²) dx = [ψ*·(dψ/dx)] − ∫ |dψ/dx|² dx.

Hence, in case the boundary term [ψ*·(dψ/dx)] is equal to zero, one gets:

⟨ψ|H|ψ⟩ = ∫ dx [(ħ²/2m)·|dψ/dx|² + V(x)·|ψ(x)|²].

Now, consider a small interval around x = 0; i.e., x ∈ [−ε, ε]. Take a new (deformed) wave function ψ′(x) to be defined as ψ′(x) = ψ(x) for x < −ε; ψ′(x) = −ψ(x) for x > ε; and ψ′(x) constant for x ∈ [−ε, ε]. If ε is small enough, this is always possible to do, so that ψ′(x) is continuous.

Assuming ψ(x) ≈ −c·x around x = 0, one may write

ψ′(x) = N·|c|·ε for |x| < ε, and ψ′(x) = N·|ψ(x)| for |x| ≥ ε,

where N = 1/√(1 + (4/3)·|c|²·ε³) is the norm.

Note that the kinetic-energy densities satisfy (ħ²/2m)·|dψ′/dx|² < (ħ²/2m)·|dψ/dx|² everywhere because of the normalization. More significantly, the average kinetic energy is lowered by O(ε) by the deformation to ψ′, since the derivative of ψ′ vanishes on the interval where the derivative of ψ was of order |c|.

Now, consider the potential energy. For definiteness, let us choose V(x) ≥ 0. Then it is clear that, outside the interval [−ε, ε], the potential energy density is smaller for ψ′, because |ψ′(x)|² < |ψ(x)|² there. On the other hand, in the interval we have

∫₋ε⁺ε V(x)·N²·|c|²·ε² dx ≈ 2·V̄·|c|²·ε³,

which holds to order ε³ (with V̄ the average of the potential near x = 0). However, the contribution to the potential energy from this region for the state ψ with a node, ∫₋ε⁺ε V(x)·|c|²·x² dx ≈ (2/3)·V̄·|c|²·ε³, is lower, but still of the same order ε³ as for the deformed state ψ′, and subdominant to the lowering of the average kinetic energy. Therefore, the potential energy is unchanged up to order ε³ if we deform the state ψ with a node into a state ψ′ without a node, and the change can be ignored. We can therefore remove the node and reduce the energy by O(ε), which implies that a wave function with a node cannot be the ground state. Thus the ground-state wave function cannot have a node. This completes the proof. (The average energy may then be further lowered by eliminating undulations, to the variational absolute minimum.)
As the ground state has no nodes, it is spatially non-degenerate: there are no two stationary quantum states with the energy eigenvalue of the ground state (let us name it Eg) and the same spin state, which would therefore only differ in their position-space wave functions.[1]

The reasoning goes by contradiction: for if the ground state were degenerate, then there would be two orthonormal[2] stationary states |ψ₁⟩ and |ψ₂⟩ – later on represented by their complex-valued position-space wave functions ψ₁(x,t) and ψ₂(x,t) – and any superposition |ψ⟩ := c₁|ψ₁⟩ + c₂|ψ₂⟩, with the complex numbers c₁, c₂ fulfilling the condition |c₁|² + |c₂|² = 1, would also be such a state, i.e. would have the same energy eigenvalue Eg and the same spin state.

Now let x₀ be some random point (where both wave functions are defined) and set

c₁ = ψ₂(x₀)/√(|ψ₁(x₀)|² + |ψ₂(x₀)|²) and c₂ = −ψ₁(x₀)/√(|ψ₁(x₀)|² + |ψ₂(x₀)|²),

with √(|ψ₁(x₀)|² + |ψ₂(x₀)|²) > 0 (according to the premise, no nodes). Therefore, the position-space wave function of |ψ⟩ is ψ(x,t) = c₁·ψ₁(x,t) + c₂·ψ₂(x,t) for all x. But

ψ(x₀,t) = c₁·ψ₁(x₀,t) + c₂·ψ₂(x₀,t) = 0,

i.e. x₀ is a node of the ground state wave function, and that is in contradiction to the premise that this wave function cannot have a node.

Note that the ground state could be degenerate because of different spin states like |↑⟩ and |↓⟩ while having the same position-space wave function: any superposition of these states would create a mixed spin state but leave the spatial part (as a common factor of both) unaltered.

Initial wave functions for the first four states of a one-dimensional particle in a box

• The wave function of the ground state of a particle in a one-dimensional box is a half-period sine wave, which goes to zero at the two edges of the well. The energy of the particle is given by En = n²h²/(8mL²), where h is the Planck constant, m is the mass of the particle, n is the energy state (n = 1 corresponds to the ground-state energy), and L is the width of the well.

• The wave function of the ground state of a hydrogen atom is a spherically symmetric distribution centred on the nucleus, which is largest at the center and reduces exponentially at larger distances. The electron is most likely to be found at a distance from the nucleus equal to the Bohr radius. This function is known as the 1s atomic orbital. For hydrogen (H), an electron in the ground state has energy −13.6 eV, relative to the ionization threshold. In other words, 13.6 eV is the energy input required for the electron to no longer be bound to the atom.

• The exact definition of one second of time since 1997 has been the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at a temperature of 0 K.[3]

1. ^ a b See, for example, Cohen, M. (1956). "Appendix A: Proof of non-degeneracy of the ground state" (PDF). The Energy Spectrum of the Excitations in Liquid Helium (Ph.D.). California Institute of Technology. Published as Feynman, R. P.; Cohen, Michael (1956). "Energy Spectrum of the Excitations in Liquid Helium" (PDF). Physical Review. 102 (5): 1189. Bibcode:1956PhRv..102.1189F. doi:10.1103/PhysRev.102.1189.
2. ^ i.e. ⟨ψ₁|ψ₂⟩ = 0.
3. ^ "Unit of time (second)". SI Brochure. International Bureau of Weights and Measures. Retrieved 2013-12-22.
• Feynman, Richard; Leighton, Robert; Sands, Matthew (1965). "See section 2-5 for energy levels, 19 for the hydrogen atom". The Feynman Lectures on Physics. Vol. 3.
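The particle-in-a-box energies in the first example above are easy to evaluate numerically. A minimal sketch (the 1 nm well width is an arbitrary, illustrative choice, not a value from the article):

```python
# Energy levels E_n = n^2 h^2 / (8 m L^2) for an electron in a 1-D box.
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron mass, kg
eV = 1.602176634e-19    # J per eV
L = 1e-9                # box width: 1 nm (illustrative)

for n in (1, 2, 3, 4):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(f"n={n}: E = {E_n / eV:.3f} eV")
# n=1: E = 0.376 eV  (the ground state; higher levels scale as n^2)
```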
Future tension

Will the sun rise? Photo taken from the ISS on 1 March 2016. Photo courtesy NASA

Facts about the past and present are either true or false. Can knowledge of the future offer the same degree of certainty?

by Anthony Sudbery

Que sera sera
Whatever will be will be
The future's not ours to see
Que sera sera.

So sang Doris Day in 1956, expressing a near-universal belief of humankind: you can't know the future. Even if this is not quite a universal belief, then the universal experience of humankind is that we don't know the future. We don't know it, that is, in the immediate way that we know parts of the present and the past. We see some things happening in the present, we remember some things in the past, but we don't see or remember the future. But perception can be deceptive, and memory can be unreliable; even this kind of direct knowledge is not certain. And there are kinds of indirect knowledge of the future that can be as certain as anything we know by direct perception or memory. I reckon I know that the sun will rise tomorrow; if I throw a stone hard at my kitchen window, I know that it will break the window. On the other hand, I did not know on Christmas Eve last year that my hometown of York was going to be hit by heavy rain on Christmas Day and nearly isolated by floods on Boxing Day.

In the ancient world and, I think, to our childhood selves, it is events such as the York floods that make us believe that we cannot know the future. I might know some things about the future, but I cannot know everything; I am sure that some things will happen tomorrow that I have no inkling of, and that I could not possibly have known about, today. In the past, such events might have been attributed to the unknowable will of the gods. York was flooded because the rain god was in a bad mood, or felt like playing with us. My insurance policy refers to such catastrophes as 'acts of God'. When we feel that there is no knowing who will win an election, we say that the result is 'in the lap of the gods'.

Aristotle formulated the openness of the future in the language of logic. Living in Athens at a time when invasion from the sea was always a possibility, he made his argument using the following sentence: 'There will be a sea-battle tomorrow.' One of the classical laws of logic is the 'law of the excluded middle', which states that every sentence is either true or false: either the sentence is true or its negation is true. But Aristotle argued that neither 'There will be a sea-battle tomorrow' nor 'There will not be a sea-battle tomorrow' is definitely true, for both possibilities lead to fatalism: if the first statement is true, for example, there would be nothing anybody could do to avert the sea-battle. Therefore, these statements belong to a third logical category, neither true nor false. In modern times, this conclusion has been realised in the development of many-valued logic.

But some statements in the future tense do seem to be true; I have given the examples 'The sun will rise tomorrow' and, after I have thrown the stone, 'That window is going to break.' Let's look at these more closely. In fact, no such future statement is 100 per cent certain. The sun might not rise tomorrow; there might be a galactic star-trawler heading for the solar system, ready to scoop up the sun tonight and make off with it at nearly the speed of light.
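Aristotle's third category can be made concrete in a few lines. A minimal sketch of Kleene's strong three-valued logic (an illustration added here, not something from the essay), in which the sea-battle sentence takes the third value and the law of the excluded middle fails:

```python
# Three-valued (Kleene) logic: 1 = true, 0 = false, None = undetermined.
def k_not(a):
    return None if a is None else 1 - a

def k_and(a, b):
    if a == 0 or b == 0:
        return 0            # false dominates conjunction
    if a is None or b is None:
        return None         # otherwise, any undetermined input propagates
    return 1

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))   # De Morgan

sea_battle = None                             # neither true nor false today
print(k_or(sea_battle, k_not(sea_battle)))    # None: excluded middle fails
```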
When I throw the stone at the window, my big brother, who is a responsible member of the family and a superb cricketer, might be coming round the corner of the house; he might see me throw the stone and catch it so as to save the window. We did not know that the sun would fail to make its scheduled appearance tomorrow morning; I did not know that my naughtiness would be foiled. But this lack of knowledge is not a specific consequence of the fact that we are talking about the future. If the Spaceguard programme had had a wider remit, we might have seen the star-trawler coming, and then we would have known that we had seen our last sunrise; if I had known my brother's whereabouts, I could have predicted his window-saving catch. In both these scenarios, the lack of knowledge of the future reduces to lack of knowledge about the present.

The success of modern science gave rise to the idea that this is always true: not knowing the future can always be traced back to not knowing something about the present. As more and more phenomena came under the sway of the laws of physics, so that more and more events could be explained as being caused by previous events, so confidence grew that every future event could be predicted with certainty, given enough knowledge of the present. The most famous statement of this confidence was made by the French mathematician Pierre-Simon Laplace in 1814:

'We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.'

This idea goes back to Isaac Newton, who in 1687 had a dream:

'I wish we could derive the rest of the phenomena of Nature by the same kind of reasoning from mechanical principles, for I am induced by many reasons to suspect that they may all depend upon certain forces by which the particles of bodies, by some causes hitherto unknown, are either mutually impelled towards one another, and cohere in regular figures, or are repelled and recede from one another.'

In this view, everything in the world is made up of point particles, and their behaviour is explained by the action of forces that make the particles move according to Newton's equations of motion. These completely determine the future motion of the particles if their positions and velocities are given at any one instant; the theory is deterministic. So if we fail to know the future, that is purely because we do not know enough about the present.

For a couple of centuries, Newton's dream seemed to be coming true. More and more of the physical world came under the domain of physics, as matter was analysed into molecules and atoms, and the behaviour of matter, whether chemical, biological, geological or astronomical, was explained in terms of Newtonian forces. The particles of matter that Newton dreamed of had to be supplemented by electromagnetic fields to give the full picture of what the world was made of, but the basic idea remained that they all followed deterministic laws. Capricious events such as storms and floods, formerly seen as unpredictable and attributed to the whims of the gods, became susceptible to weather forecasts; and if some such events, like earthquakes, remain unpredictable, we feel sure that advancing knowledge will make them also subject to being forecast.

This scientific programme has been so successful that we have forgotten there was ever any other way to think about the future. Mark G Alford, a physicist at Washington University, writes: 'In ordinary life, and in science up until the advent of quantum mechanics, all the uncertainty that we encounter is presumed to be … uncertainty arising from ignorance.' We have completely forgotten what an uncertain world was inhabited by the human race before the 17th century, and we take Newton's dream as a natural view of waking reality.

Well, it was a nice dream. But it didn't work out that way.
In the early years of the 20th century, Ernest Rutherford, investigating the recently discovered phenomenon of radioactivity, realised that it showed random events happening at a fundamental level of matter, in the atom and its nucleus. This did not necessarily mean that Newton's dream had to be abandoned: the nucleus is not the most fundamental level of matter, but is a complicated object made up of protons and neutrons, and – maybe – if we knew exactly how these particles were situated and how they were moving, we would be able to predict when the radioactive decay of the nucleus would happen. But other, stranger discoveries at around the same time led to the radical departure from Newtonian physics represented by quantum mechanics, which strongly reinforced the view that events at the smallest scale are indeed random, and there is no possibility of precisely knowing the future.

The discoveries that had to be confronted by the new physics of the 1920s were twofold. On the one hand, Max Planck's explanation of the distribution of wavelengths in the radiation emitted by hot matter, and Albert Einstein's explanation of the photoelectric effect, showed that energy comes in discrete packets, instead of varying continuously as it must do in Newton's mechanics and James Clerk Maxwell's electromagnetic theory. On the other hand, experiments on electrons by George Paget Thomson, Clinton Davisson and Lester Germer showed that electrons, which had been firmly established to be particles, also sometimes behaved like waves. These puzzling facts found a systematic, coherent, unified mathematical description in the theory of quantum mechanics which emerged from the work of theorists after 1926. This theory is itself so puzzling that it is not clear that it should be described as an 'explanation' of the puzzling facts it subsumes; but an essential feature of it, which seems inescapable, is that, when applied to give predictions of physical effects, it yields probabilities rather than precise numbers.

This is still not universally accepted. Some people believe that there are finer details to be discovered in the make-up of matter, which, if we knew them, would once again make it possible to predict their future behaviour precisely. This is indeed logically possible, but there would necessarily be aspects of such a theory that would lead most physicists to think it highly unlikely.

The format of quantum theory is quite different from previous physical theories such as Newtonian mechanics or electromagnetism (or both combined). These theories work with a mathematical description of the state of the world, or any part of the world; they have an equation of motion that takes such a mathematical description and tells you what it will change into after a given time. Quantum mechanics also works with a mathematical object that describes a state of the world; it is called a state vector (though it is not a vector in three dimensions like velocity), and is often denoted by the Greek letter Ψ or some similar symbol. But this is a different kind of mathematical description from that in mechanics or electromagnetism. Each of those theories uses a set of numbers that measure physical quantities such as the velocity of a specified particle, or the electric field at a specified point of space.
The quantum state vector, on the other hand, is a more abstruse object whose relation to physical quantities is indirect. From the state vector, you can obtain the values of physical quantities, but only some physical quantities: you can choose which quantities you would like to know, but you are not allowed to choose all of them. Moreover, once you have chosen which ones you would like to know, the state vector will not give you a definite answer; it will give you only probabilities for the different possible answers. This is where quantum mechanics departs from determinism.

Strangely enough, in its treatment of change, quantum mechanics looks like the old deterministic theories. Like them, it has an equation of motion, the Schrödinger equation, which will tell you what a given state vector of the world will become after a given time; but because you can get only probabilities from this state vector, it cannot tell you what you will see after this time.

State vectors, in general, are puzzling things, and it is not at all clear how they describe physical objects. Some of them, however, do correspond (if you don't look too closely) to descriptions that we can understand. Among the state vectors of a cat, for example, is one describing a cat sitting and contentedly purring; there is another one describing it lying dead, having been poisoned in a diabolical contraption devised by the physicist Erwin Schrödinger. But there are others, obtained mathematically by 'superposing' these two state vectors; such a superposed state vector could be made up of a part describing the cat as alive and a part describing it as dead. These are not two cats; the point of Schrödinger's story was that one and the same cat seems to be described as both alive and dead, and we do not understand how such states could describe anything that could arise in the real world. How can we believe this theory, generations of physicists have asked, when we never see such alive-and-dead cats?

There is an answer to this puzzle. If I were to open the box in which Schrödinger has prepared this poor cat, then the ordinary laws of everyday physics would ensure that, if the cat was alive, I would have the image of a living cat on my retina and in my visual cortex, and the system consisting of me and the cat would end up in an understandable state in which the cat is alive and I see a living cat. If the cat was dead, I would have the image of a dead cat, and the system consisting of me and the cat would end up in a state in which the cat is dead and I see a dead cat. It now follows, according to the laws of quantum mechanics, that if the cat is in a superposition of being alive and being dead, then the system consisting of me and the cat ends up in a superposition of the two final states described above. This superposition does not contain a state of my brain seeing a peculiar alive-and-dead state of a cat; the only states of my brain that occur are the familiar ones of seeing a live cat and seeing a dead cat. This is the answer to the question at the end of the earlier paragraph: it follows from quantum mechanics itself that although cats have states in which they seem to be both alive and dead, we will never see a cat in such a state. But now the combined system of me and the cat is in one of the strange superposition states introduced by quantum mechanics.
It is represented mathematically by the familiar sign +, and called an entangled state of me and the cat. How are we to understand it? Maybe the mathematical sign + just means 'or'; that would make sense. But unfortunately this meaning, if applied to the states of an electron, is not compatible with the facts of interference observed in the experiments that show the electron behaving like a wave. Some people think that this + should be understood as 'and': when the cat and I are in the superposition state, there is a world in which the cat has died and I see a dead cat, and another world in which the cat is still alive and I see a living cat. Others do not find this a helpful picture. Perhaps we should just take it as (in some sense) a true description of the cat and me, whose meaning is beyond us.

Now let us broaden our horizon and consider the whole universe, which contains each one of us considered as a sentient, observing physical system. According to quantum mechanics, this has a description by a state vector in which the sentient system is entangled with the rest of the universe, and several different experiences of the sentient system are involved in this entanglement. The same overall state vector of the whole universe can be seen as such an entangled state for every sentient system inside the universe; these are simply different views of the same universal truth. But saying that this is the truth about the universe seems to conflict with my knowledge of what I see.

To illustrate this, let us again consider a little universe containing just me and a cat. Let us suppose that the cat survived when I did Schrödinger's experiment. Then I know what my state is: I see a living cat. From this I know what the state of the cat is: it is alive. The entangled state of my little universe that was produced by my experiment also contains a part with a dead cat and my brain full of remorse. But seeing a live cat, as I do, I reckon that this other picture is not part of the truth; it describes something that might have happened but didn't. In general, considering the whole universe, I know that I have just one definite experience. But this contradicts what was asserted in the previous paragraph. Which of these is the truth?

This contradiction is of the same type as many familiar contradictions between objective and subjective statements. In The View from Nowhere (1986), Thomas Nagel shows how some of these contradictions can be resolved: we must recognise that there are two positions from which we can make statements of fact or value, and statements made in these two contexts are not commensurable. This applies to the puzzle presented by quantum mechanics as follows. In the external context (the God's-eye view, or the 'view from nowhere') we step outside our own particular situation and talk about the whole universe. In the internal context (the view from now, here), we make statements as physical objects inside the universe. Thus, in the external view, the entangled universal state vector is the whole truth about the universe; the components describing my different possible experiences, and the corresponding states of the rest of the universe, are (unequal) parts of this truth. But in the internal view, from the perspective of some particular experience that I know I am having, this experience, together with the corresponding state of the rest of the universe, is the actual truth.
I might know what the other components are, because I can calculate the universal state vector using the equations of quantum mechanics; but these other components, for me, represent things that might have happened but didn't.

We can now look at what quantum mechanics tells us about the future. As we should now expect, there are two answers, one for each of the two perspectives. From the external perspective, the universe at any one time is described by a universal state vector, and state vectors at different times are related by the Schrödinger equation. Given the state vector at the present time, the Schrödinger equation delivers a unique state vector at any future time: the theory is deterministic, in complete accord with Laplace's world-view (in a quantum version).

From the internal perspective, however, things are quite different. We now have to specify a particular observer (who has been me in the above discussion, but it could have been you or anyone else, or indeed the whole human race taken together), with respect to which we can carve up the universal state vector as described above; and we have to specify a particular experience state of that observer. From that perspective, it is by definition true that the observer has that definite experience, and that the rest of the universe is in a corresponding definite state. So quantum mechanics tells us that at this moment there are a number of different worlds, but I know that one of them is singled out, for me, as being the world that I see and whose finer details are revealed to me by experiment.

But when we turn to the future the situation is different. Since I cannot see the future, none of the worlds of the future are singled out for me. Even if there is only one world now, and what I see agrees with the universal state vector of quantum mechanics, it might happen that the laws of quantum mechanics produce a superposition of worlds at a future time. For example, if I start with the experience of setting up Schrödinger's experiment with the cat, then at the end of the experiment the universal state vector will be the superposition that we have already encountered, with one part containing me seeing a living cat and another part containing me seeing a dead cat. Then what can I say about what I will see at that future time?

I found this rather startling when I first encountered it. I was used to thinking that there is something awaiting me in the future, even if I cannot know what it is, and even if there is no law of nature that determines what it is. Whatever will be will be, indeed. But Aristotle already saw that this is wrong. Statements in the future tense do not obey the same logic as present-tense statements: they do not have to be either true or false. Logicians following Aristotle have allowed the possibility of a third truth value, 'undetermined' or 'undecided', in addition to 'true' and 'false'. However, Aristotle also pointed out that, although no one statement about the future is actually true, some of them are more likely than others. Similarly, the universal state vector at a future time contains more information, for me, than simply what experiences I might have at that time. These experiences, occurring as components of the universal state vector, contribute to it in different amounts, measured by coefficients that are usually used in quantum mechanics to calculate probabilities.
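What 'coefficients that are used to calculate probabilities' means in practice is the Born rule: the squared modulus of each coefficient gives the probability of the corresponding experience. A minimal sketch, with arbitrary amplitudes chosen purely for illustration (nothing here is from Sudbery's essay):

```python
import numpy as np

# A two-component "cat" state: |psi> = a|alive> + b|dead>.
a = 1 / np.sqrt(3)            # amplitude of the 'alive' component
b = np.sqrt(2 / 3) * 1j       # amplitude of the 'dead' component (complex is fine)

state = np.array([a, b])
norm = np.sum(np.abs(state) ** 2)   # must be 1 for a valid state
probs = np.abs(state) ** 2          # Born-rule probabilities

print(f"norm = {norm:.3f}")                                    # 1.000
print(f"P(alive) = {probs[0]:.3f}, P(dead) = {probs[1]:.3f}")  # 0.333, 0.667
```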
So we can understand the future universal state as giving information, not only about what experiences are possible for me at that future time, but also about how probable each experience is. Now, truth and falsity can be expressed numerically: a true statement has truth value 1, a false one has truth value 0. If a future event X is very likely to happen, so that the probability of X is close to 1, then the statement 'X will happen' is very nearly true; if it is very unlikely to happen, so that its probability is close to 0, then the statement 'X will happen' is very nearly false. This suggests that the truth value of a future-tense statement should be a number between 0 and 1. A true statement has truth value 1; a false statement has truth value 0; and if a future-tense statement 'X will happen' has a truth value between 0 and 1, that number is the probability that X will happen.

The nature of probability is a long-standing philosophical problem, to which scientists also need an answer. Many scientists take the view that the probability of an event makes sense only when there are many repetitions of the circumstances in which the event might occur, and we work out the proportion of times that it does occur; they hold that the probability of a single, unrepeated event does not make sense. But what we have just outlined does seem to be a calculation of the probability of a single event at a time that will come only once. In everyday life, we often talk about the probability that something will happen on just one occasion: that it will rain tomorrow, or that a particular horse will win a race, or that there will be a sea-battle. A standard view of such single-event probability is that it refers to the strength of the belief of the person who is asserting the probability, and can be measured by the betting odds they are prepared to offer on the event happening. But the probability described above is an objective fact about the universe. It has nothing to do with the beliefs of an individual, not even the individual whose experiences are in question; that individual is being told a fact about his future experiences, whether he believes it or not. The logical theory gives an objective meaning to the probability of a single event: the probability of a future event is the truth value of the future-tense proposition that that event will happen. I explore this view of probability, and the way that quantum mechanics supports the associated many-valued logic of tensed propositions, in 'The Logic of the Future in Quantum Theory' (2016).

It has now become clear that the description of the physical world given by quantum mechanics, namely the universal state vector, plays very different roles in the two perspectives, external and internal. From the external perspective, it is a full description of reality; it tells how the universe is constituted at a particular time. This complete reality can be analysed with respect to any given sentient system, yielding a number of components, attached to different experiences of the chosen sentient system, which are all parts of the universal reality.

From the internal perspective of this system, however, reality consists of just one of these experiences; the component attached to this experience is the complete truth about the universe for the sentient system. All the other non-zero components are things that might have happened, but didn't.
The role of the universal state vector at a later time, in this perspective, is not to describe how the universe will be at that time, but to specify how the present state of the universe might change between now and then. It gives a list of possibilities at that later time, with a probability for each of them that it will become the truth.

It might seem that we can at least know these probabilities for the future, being able to calculate them from our certain knowledge of our present experience, using the Schrödinger equation. But even this is uncertain. Our present experience could well be only part of the universal state, and it is the whole universal state vector that must be put into the calculation of future probabilities. Those things that might have happened, but didn't, some of which we don't even know about, might still affect the future. However, if those things are sufficiently different from our actual experience on a macroscopic scale, then quantum theory assures us that the effect they might have on the future is so small as to be utterly negligible. This consequence of the theory is known as decoherence.

Knowledge of the future, therefore, is limited in a fundamental way. It is not that there are true facts about the future but the knowledge of them is not accessible to us; there are no facts out there, and there is simply no certain knowledge to be had. Nevertheless, there are facts about the future with partial degrees of truth. We can attain knowledge of the future, but that knowledge will always be uncertain.

An expanded version of this article will appear in Space, Time and the Limits of Human Understanding, ed. Shyam Wuppuluri and Giancarlo Ghirardi, to be published by Springer.
How to play mathematics

by Margaret Wertheim

What does it mean to know mathematics? Since maths is something we teach using textbooks that demand years of training to decipher, you might think the sine qua non is intelligence – usually 'higher' levels of whatever we imagine that to be. At the very least, you might assume that knowing mathematics requires an ability to work with symbols and signs. But here's a conundrum suggesting that this line of reasoning might not be wholly adequate. Living in tropical coral reefs are species of sea slugs known as nudibranchs, adorned with flanges embodying hyperbolic geometry, an alternative to the Euclidean geometry that we learn about in school, and a form that, over hundreds of years, many great mathematical minds tried to prove impossible. Sea slugs have at least the rudiments of brains; they generally possess a few thousand neurons, whose large size has made these animals a model organism for scientists studying basic neuronal functioning. This tiny number isn't nearly enough to enable the slug to formulate any representation of abstract signs, let alone an ability to mentally manipulate them, and yet, somehow, a nudibranch materialises in the fibres of its very being a form that genius-level human mathematicians didn't discover until the 19th century; and when they did, it nearly drove them mad. In this instance, complex brains were an impediment to understanding.

Nature's love affair with hyperbolic geometry dates to at least the Silurian age, more than 400 million years ago, when sea floors of the early Earth were covered in vast coral reefs. Many species of corals, then and now, also have hyperbolic structures, which we immediately recognise by the frills and crenellations of their forms. Although corals are animals, they have only very simple nervous systems and can't be said to have a brain. A head of coral is actually a colonial organism made up of thousands of individual polyps growing together; collectively, they grow a vascular system, a respiratory system and a crude gastrointestinal system through which all the individuals of the colony eat and breathe and share nutrients. Nothing like a brain exists, and yet the colony can organise itself into a mathematical surface disallowed by Euclid's axiom about parallel lines. Strike two against 'higher intelligence'.

Ask any fifth-grader what the angles of a triangle add up to, and she'll say: '180 degrees'. That isn't true on a hyperbolic surface. Ask our fifth-grader what's the circumference of a circle and she'll say: '2π times the radius'. That's also not true on a hyperbolic surface. Most of the geometric rules we're taught in school don't apply to hyperbolic surfaces, which is why mathematicians such as Carl Friedrich Gauss were so disturbed when finally forced to confront the logical validity of these forms, and hence their mathematical existence. So worried was Gauss by what he was discovering about hyperbolic geometry that he didn't publish his research on the subject: 'I fear the howl of the Boeotians if I make my work known,' he confided to a friend in 1829. To their universal horror, other mathematicians soon converged on the same conclusion and the genie of non-Euclidean geometry was let loose. But can we say that sea slugs and corals know hyperbolic geometry? I want to argue here that in some sense they do.
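For reference, the two classroom rules just mentioned have exact counterparts on a hyperbolic plane of curvature −1/R² (standard formulas, added here for orientation; they are not in the original essay):

```latex
% Circle of intrinsic radius r on a hyperbolic plane of curvature -1/R^2:
\[
  C = 2\pi R \sinh\!\left(\frac{r}{R}\right) \;>\; 2\pi r,
\]
% and angle sum of a triangle of area A (Gauss--Bonnet):
\[
  \alpha + \beta + \gamma \;=\; \pi - \frac{A}{R^{2}} \;<\; \pi .
\]
```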
Absent the apparatus of rationalisation and without the capacity to form mental representations, I'd like to postulate that these humble organisms are skilled geometers whose example has powerful resonances for what it means for us humans to know maths – and also profound implications for teaching this legendarily abstruse field.

I'm not the first person to have considered the mathematical capacities of non-sentient things. Towards the end of Richard Feynman's life, the Nobel Prize-winning physicist is said to have become fascinated by the question of whether atoms are 'thinking'. Feynman was drawn to this deliberation by considering what electrons do as they orbit the nucleus of an atom. In the earliest days of atomic science, atoms were conceived as little solar systems with the electrons orbiting in simple paths around their nuclei, much as a planet revolves around its sun. Yet in the 1920s, it became evident that something much more mathematically complex was going on; in fact, as an electron buzzes around its nucleus, the shape it makes is like a diffused cloud. The simplest electron clouds are spherical; others have dumbbell and toroidal shapes. The form of each cloud is described by what's called a Schrödinger equation, which gives you a map of where it's possible for the electron to be in space. Schrödinger equations (after the pioneering quantum theorist Erwin Schrödinger and his hypothetical cat) are so complicated that, when Feynman was alive, the best supercomputers could barely simulate even the simplest orbits. So how could a brainless electron be effortlessly doing what it was doing? Feynman wondered if an electron was calculating its Schrödinger equation. And what might it mean to say that a subatomic particle is calculating?

The world is full of mundane, meek, unconscious things materially embodying fiendishly complex pieces of mathematics. How can we make sense of this? I'd like to propose that sea slugs and electrons, and many other modest natural systems, are engaged in what we might call the performance of mathematics. Rather than thinking about maths, they are doing it. In the fibres of their beings and the ongoing continuity of their growth and existence, they enact mathematical relationships and become mathematicians-by-practice. By looking at nature this way, we are led into a consideration of mathematics itself not through the lens of its representational power but instead as a kind of transaction. Rather than being a remote abstraction, mathematics can be conceived of as something more like music or dancing; an activity that takes place not so much in the writing down as in the playing out.

Music gives us a rich analogy by which to consider the idea of mathematics as performance, for you don't need to be able to write down music to be a musician. Maybe if you want to play Mozart, but not in many other cases. Most folk music throughout history has been created by people who are sonically illiterate. Elvis Presley, Michael Jackson, Eric Clapton and Jimi Hendrix all claimed not to read music.
In a British TV interview, Paul McCartney said: 'As long as the two of us know what we're doing, ie, John and I, we know what chords we're playing and we remember the melody, we don't actually ever have the need to write it down or read it.' Indian classical music, easily as complex as the Western classical canon, is based on ragas that were generally transmitted aurally from master to student, not traditionally written down. In this millennia-old practice, music is recognised as an innately mathematical form: the Sanskrit word prastara means the 'study of mathematically arranging' ragas and rhythms into pleasing compositions. Ragas certainly can be written down (indeed, Indian musical notation dates back more than 2,000 years), and mathematics can be notated, but it doesn't have to be. There are lots of things doing maths without a formal script, and I'd argue that it makes no sense to say that electrons or sound waves are following mathematical instructions any more than it makes sense to say that Jimi Hendrix was following a musical score. The possibility of writing down music is something apart from its performance, and maths can be considered in a similar way. In short, the notation isn't the act.

Among my favourite mathematical performers are holograms, which enact a gorgeous operation called the Fourier transform. This extraordinarily complex, elegant equation is named in honour of Joseph Fourier, a mathematician and physicist who advised Napoleon and discovered what we now call the greenhouse effect (he called it the 'hotbox' effect). The Fourier transform has been called the most useful piece of mathematics of all time; you rely on its power every time you make a cellphone call or listen to a piece of digitally recorded music. Music synthesis also results from clever applications of Fourier's equations. We'll get to the audio part in a moment, but first let's look at the visual face of this mathematical marvel.

Holograms differ from photographs in a fundamental way: a photo captures a two-dimensional rendering of light and shade and colour, like a very detailed painting; meanwhile, when light shines through a holographic plate, it assembles into a three-dimensional replica of the original object, recreating in light a simulacrum of that thing. The image you see with a hologram is sculptural, really occupying 3D space, so you can move around and view it from different angles. Yet when you look at a holographic plate, there's no image at all, just a blur in which you may be able to discern speckled rings and dots. What's been captured on the plate is the Fourier transform of the object, which encodes more information, and a different kind of information, than a photo can. Every object has a Fourier transform, and in theory we could calculate the transform of any object we desire and make a holographic plate to generate its form even though an actual physical object never existed. The emerging field of computer-generated holography (CGH) is trying to do just this. If it can be made to work, it will revolutionise computer games and animation; we'd be able to watch whole movies akin to the marvellous holographic projection of Princess Leia in the original Star Wars film. Calculating transforms for complex objects requires vast computational powers and skills as yet unachieved by human CGH practitioners. Nonetheless, simple chemicals interacting with light on a piece of film manage to enact Fourier transforms of complicated scenes.
Acting together, wave fronts of light and atoms execute a beautiful piece of mathematical encoding, and when the light plays back through the film they do the de-encoding. As such, where a photograph is a representation, a hologram is a performance.

Fourier came to his equation in the early 1800s, not to describe images (the origin of holograms dates to the 1940s), but to describe heat flow, and it turns out that his mathematics also leads to enormously powerful applications in the audio domain. Why does a piece by Mozart sound so different when played on a flute or a violin? One way of explaining it is that, although both instruments are playing the same sequence of notes, the Fourier transform of the sound produced by each one is different. The transform reveals the sonic DNA of the instrument's sound, giving us a precise description of its harmonic components (formally, it describes the set of pure sine waves that make up the sound). With software, audio engineers can analyse the transform of a musical recording and tell you what kind of instrument was playing; moreover, they can tweak the transform to bring out qualities they like and filter out ones they don't. By fiddling with the maths, one can sculpt the sound to suit particular tastes.

Calculating Fourier transforms of sounds is a lot easier than calculating the transforms of visual scenes, and software engineers have created programs to simulate musical instruments (eg, Apple's GarageBand), effectively giving users a sim-orchestra on their laptops for the price of an app. Advances in Fourier-based sound simulation have revolutionised the economics of the music business, including movie scoring. Now you don't need an actual orchestra to produce stirring strings to accompany a heroine's triumph; you can conjure them from the virtual depths, generated through mathematics.

While music synthesis demonstrates how we can employ mathematics to create something powerful out of a vacuum, here I'm more interested in what happens in actual concert halls. Great halls have their own unique 'sound', with each room acting as a filter for the music, tweaking and sculpting its Fourier transform. Contemporary acoustic engineers use Fourier techniques when designing new concert halls, manipulating the architecture of the space, for example adding baffles in specific places, all aided by software that simulates how sounds will react within the space. If the engineers do their job well, there will be no 'dead spots' and the hall will sing with warmth and resonance. Here we have a mathematical performance between the sound waves, the architecture, and the surfaces of the walls. Some music schools now have electronic 'practice rooms' where, through software, you can dial up a Fourier-based simulation of a cathedral or a tin shed and hear what your playing would sound like in different spaces. However, music connoisseurs will tell you that no sim is a substitute for physical reality, which is why revered concert halls, such as Vienna's Musikverein or New York's Carnegie, won't be replaced by software any time soon. It's interesting that most of the best-rated halls were built before 1901, a fact that the acoustic legend Leo Beranek has attributed to their lack of fancy architecture (their resolutely shoe-box shape) and their lightly upholstered seats.
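The 'sonic DNA' point above is easy to demonstrate: synthesise a tone with a known set of harmonics and recover them with a Fourier transform. A toy sketch (an illustration only; this is not how GarageBand or any acoustic-engineering package actually works):

```python
import numpy as np

# Build a one-second tone with a few harmonics, then recover them with the FFT.
fs = 8000                                   # sample rate, Hz
t = np.arange(fs) / fs
tone = (1.0 * np.sin(2 * np.pi * 440 * t)   # fundamental at 440 Hz
        + 0.5 * np.sin(2 * np.pi * 880 * t)     # 2nd harmonic
        + 0.2 * np.sin(2 * np.pi * 1320 * t))   # 3rd harmonic

spectrum = np.abs(np.fft.rfft(tone)) / (fs / 2)  # amplitude per frequency bin
freqs = np.fft.rfftfreq(fs, d=1 / fs)

# The three largest peaks sit exactly at the harmonics we put in:
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(peaks.tolist()))   # [440.0, 880.0, 1320.0]
```

Change the relative weights of the harmonics and you change the 'instrument'; that is, in miniature, what sculpting the transform means.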
From the perspective I'm adopting, even the chairs can be said to be participating in the mathematical performance enacted in a concert hall. Score another home run for non-sentience.

Since at least the time of Pythagoras and Plato, there's been a great deal of discussion in Western philosophy about how we can understand the fact that many physical systems have mathematical representations: the segmented arrangements in sunflowers, pine cones and pineapples (Fibonacci numbers); the curve of nautilus shells, elephant tusks and rams' horns (logarithmic spiral); music (harmonic ratios and Fourier transforms); atoms, stars and galaxies, which all now have powerful mathematical descriptors; even the cosmos as a whole, now represented by the equations of general relativity. The physicist Eugene Wigner has termed this startling fact 'the unreasonable effectiveness of mathematics'. Why does the real world actualise maths at all? And so much of it? Even arcane parts of mathematics, such as abstract algebras and obscure bits of topology, often turn out to be manifest somewhere in nature.

Most physicists still explain this by some form of philosophical Platonism, which in its oldest form says that the universe is moulded by mathematical relationships which precede the material world. To Platonists, matter is literally in-formed, and guided by, a pre-existing set of mathematical ideals. In the Platonic way of seeing, matter (the stuff of everything) is rendered inert, stripped of power and subordinated to ethereal mathematical laws. These laws are given ontological primacy, with matter being effectively a sideline to the 'true reality' of the equations. Over the past half-century, this vision has been updated somewhat because now matter, or subatomic particles, have themselves been enfolded into the equations. Matter has been replaced by fields – as in electric and magnetic fields – and now it's the fields that follow the laws. Still, it's the laws that retain primacy and power; hence the obsession with finding an ultimate law, a so-called 'theory of everything'. Platonism has always bothered me as a philosophy in part because it's a veiled form of theology – mathematics replaces God as the transcendent, a priori power – so if we want to articulate an alternative, we need new ways of interpreting mathematics itself that don't also slip into deistic modes. Thinking about maths as performative points a way forward, while also offering a powerful pedagogic model.

Corals and sea slugs construct hyperbolic surfaces, and it turns out that humans can also make these forms using iterative handicrafts such as knitting and crochet – you can do non-Euclidean geometry with your hands. To crochet a hyperbolic structure, one just increases stitches at a regular rate by following a simple algorithm: 'Crochet n stitches, increase one, repeat ad infinitum.' By increasing stitches, you increase the amount of surface area in a regular way, visually moving from a flat or Euclidean plane into a ruffled formation that models the 'hyperbolic plane'. Mathematically speaking, the hyperbolic plane is the geometric opposite of the sphere: where the surface of a sphere curves towards itself at every point, a hyperbolic surface curves away from itself. We can define these different surfaces in terms of their curvature: a Euclidean plane has zero curvature (it's flat everywhere), a sphere has positive curvature, and a hyperbolic plane has negative curvature. In this sense, it is a geometric analogue of a negative number.
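The 'crochet n stitches, increase one' algorithm above produces exponential growth in row length, which is exactly where the surplus surface area of the hyperbolic plane comes from. A minimal sketch (the starting row and increase rate are illustrative choices, not from any published pattern):

```python
# 'Crochet n stitches, increase one, repeat': after every n stitches an extra
# stitch is worked into the same spot, so each row is roughly (n+1)/n times
# longer than the one before -- exponential growth, the signature of
# hyperbolic (negatively curved) surface area.
def row_lengths(start: int, n: int, rows: int) -> list[int]:
    lengths = [start]
    for _ in range(rows - 1):
        prev = lengths[-1]
        lengths.append(prev + prev // n)   # one increase per n stitches
    return lengths

print(row_lengths(start=20, n=4, rows=8))
# [20, 25, 31, 38, 47, 58, 72, 90] -- each row ~25% longer than the last
```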
Just as geometric relationships on a sphere are different to those on a flat plane – think of what you know about the surface of the Earth versus a flat piece of paper – so they are different again on a hyperbolic surface. Whereas on a flat plane the angles of a triangle add up to 180 degrees, on a sphere they add up to more, and on a hyperbolic surface they add up to less. It’s hard to appreciate this abstractly when you learn it from textbooks, as I did at university, but you can demonstrate it materially on a crocheted hyperbolic plane by stitching triangles onto the surface. You can also demonstrate visually that parallel lines diverge and other apparent absurdities. If Gauss had known how to crochet he mightn’t have been driven so bonkers.

Crochet Coral Reef by Margaret and Christine Wertheim and the Institute For Figuring. Photo © IFF

It took a woman, the mathematician Daina Taimina at Cornell University, to discover hyperbolic crochet and to give mathematicians a tangible model of this form. I have conducted workshops about this with women all over the world, delighting in how much geometry can be conveyed through acts of making. There’s also a link here with general relativity, because the discovery of the hyperbolic plane opened up a whole new era in geometric thinking, leading ultimately to generalised Riemannian geometry, which can describe any complexly curved surface, and is the mathematics underlying Albert Einstein’s equations for the cosmos. Via handicrafts, we can introduce people to concepts about curved spacetime and multidimensional manifolds, leading with our fingers, and out to questions about measuring the structure of the cosmic whole. We can see this as a form of ‘digital intelligence’, and it’s worth noting that iterated handicrafts (knitting, crochet, weaving) were the original digital technologies: their algorithmic ‘patterns’ are literally written in code. It’s no coincidence that computer punch cards were derived from the cards used in automated looms. Here, knowing emerges from hands performing mathematics: it is a kind of embodied figuring.

People talk about playing music, but mathematics can also be a form of play. One way of thinking about maths is as a language of pattern and form, so when you play with patterns you are doing maths. A beautiful example of mathematical pattern-play can be seen in the work of the great Islamic mosaicists who decorated mosques and palaces such as the Alhambra Palace in Granada, Spain, with intricate tilings whose mathematical complexities are still a source of wonder. Long before European geometers realised that there are only 17 mathematically distinct tessellations of the plane – different ways of filling an area with a regular tiling pattern – medieval mosaicists working with their hands using the Hasba method knew about them all. Moreover, medieval Islamic tilers had also discovered aperiodic tiling, which is a way of filling a plane where the pattern never repeats. Western mathematicians discovered these tilings only in the 1960s, again after centuries of theorising that such patterns were impossible. One of the magical qualities of aperiodic tilings is that they look simultaneously random and regular; as a geometric form of chaos, they are rule-based yet inherently unpredictable.

Mosaic tiling from the tomb of Hafez in Shiraz. Courtesy Wikipedia
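For readers who want the formula behind the stitched triangles described above, the local Gauss-Bonnet theorem (standard differential geometry, not stated in the essay) makes the three angle-sum cases quantitative. For a geodesic triangle with angles α, β, γ and area A on a surface of constant curvature K:

\[
\alpha + \beta + \gamma = \pi + \int_{\triangle} K \, dA =
\begin{cases}
\pi & \text{(flat plane, } K = 0\text{)},\\
\pi + A & \text{(unit sphere, } K = 1\text{)},\\
\pi - A & \text{(hyperbolic plane, } K = -1\text{)}.
\end{cases}
\]

So on a crocheted hyperbolic plane, the angle deficit of a stitched triangle literally measures its area.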
At first, when Western mathematicians (Sir Roger Penrose among them) discovered aperiodic tilings, these formations were thought to be just a mathematical curiosity; like hyperbolic surfaces, they seemed to defy common sense, so that no one imagined such things could be present in the physical world. Prejudice was so intense that when the Israeli chemist Dan Shechtman announced in 1982 that he’d created a new type of crystal with an aperiodic structure, many fellow scientists refused to believe him. (Like Gauss, he too delayed publishing because of the supposedly absurd nature of his claims.) Shechtman’s quasicrystals have brought about a paradigm shift in crystallography, in part because now we know that crystals can be chaotic, exhibiting order without repetition.

Aperiodic ‘Penrose’ tiling pattern. Courtesy Wikipedia

Lewis Carroll would have had a field day with this concept, which calls to mind the Red Queen’s exhortation to Alice that, with practice, one can ‘believe six impossible things before breakfast’. In 2009, after an intense search, a naturally occurring example of an aperiodic crystal was also found in the mineral icosahedrite. Strike three against intelligence as a prerequisite for doing mathematics.

Image of an aluminium-palladium-manganese quasicrystal surface. Courtesy Wikipedia

As a nice coda to this story, in 2011 Shechtman was awarded the Nobel Prize in chemistry. Proof that studying equations isn’t the only path to mathematical insight also comes to us from Africa, where craftsmen discovered fractals centuries ago. A wide variety of fractal patterns are incorporated into African textiles, hairstyling, metalwork, sculpture, painting and architecture. One marvellous Ba-Ila village in southern Zambia is laid out in a fractal design reminiscent of the Mandelbrot set, that swirling icon of 1990s computer-graphic cool. In his book African Fractals: Modern Computing and Indigenous Design (1999), the mathematician Ron Eglash traces the story of the southern continent’s priority in a branch of geometry that came into Western consciousness only around the turn of the 20th century, and didn’t really flourish here until the development of computer graphics chips.

Fractal model for Ba-Ila village. From ‘African Fractals: Modern Computing and Indigenous Design’ by Ron Eglash

First three iterations of fractal model for Ba-Ila village. From ‘African Fractals: Modern Computing and Indigenous Design’ by Ron Eglash

Sea slugs do maths, electrons do maths, minerals do maths. Rainbows do an incredible mathematical performance when you take into account the primary and secondary bows, the dark band between them, and the red and green arcs of light under the primary bow. Next time you see a good rainbow, stop and take a look at the space around it: there’s so much going on; classical geometric optics doesn’t begin to capture its complexity. A stunning piece of mathematical performance is enacted by a peregrine falcon as it hurtles towards its prey: with its head held straight so it can fix one eye steadily on the quarry at a constant angle of 40 degrees, it swoops down at 200 mph in a perfect logarithmic spiral. Leonhard Euler’s 18th-century formula, with its unique mathematical properties, is enacted here by a bird. All around us, nature is playing mathematical games and we too can join in the fun. Mathematics need not be taught as an abstraction; it can be approached as an embodied practice, like learning a musical instrument.
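The falcon’s dive is easy to reproduce numerically. A minimal Python sketch (my illustration; the 40-degree constant bearing comes from the passage above, the starting radius is arbitrary) generates the equiangular spiral r(theta) = r0 * exp(-theta / tan(psi)) and checks that the angle between the flight direction and the line of sight to the prey never changes:

import numpy as np

psi = np.radians(40.0)           # constant angle between sight line and path
k = 1 / np.tan(psi)              # equiangular spiral parameter, cot(psi)
theta = np.linspace(0, 4 * np.pi, 2000)
r = 1000.0 * np.exp(-k * theta)  # spiralling inward toward the prey at r = 0
x, y = r * np.cos(theta), r * np.sin(theta)

# The defining property: the angle between the velocity vector and the
# radial (line-of-sight) vector is the same at every point of the path.
vx, vy = np.gradient(x, theta), np.gradient(y, theta)
cos_angle = (x * vx + y * vy) / (np.hypot(x, y) * np.hypot(vx, vy))
angles = np.degrees(np.arccos(cos_angle))
print(angles.min(), angles.max())  # both close to 140 degrees (180 minus 40)

The bird holds the 40-degree bearing; the spiral is what that constraint forces.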
This doesn’t invalidate what goes on in university classrooms or academic textbooks, since society needs professional mathematicians who can work with symbols, people such as Fourier and Bernhard Riemann, who developed the maths that helps us make cellphone calls, determine the structure of the cosmos, and so much else besides. Because nature does so much mathematics, there will probably never be a time when professional ‘symbolising’ isn’t profoundly useful. In 2016, the Nobel prize in physics was awarded for ‘theoretical discoveries of topological phase transitions and topological phases of matter’ – astonishing, complex work that emerged out of the discovery of another kind of supposedly impossible object (the quasi-particle) and whose mathematical insights might pave the way for quantum computers. By thinking about mathematics as performance, we liberate it from the straitjacket of abstraction into which it has been too narrowly confined.

If you ask professional mathematicians what they love about their work, a likely answer is its beauty. ‘Euclid alone has looked on Beauty bare,’ wrote the poet Edna St Vincent Millay in 1923, while the mathematician André Weil (brother of Simone) claimed that solving a hard mathematical problem topped sexual pleasure. The professionals know that mathematics swings; they delight in its playfulness, the plasticity of its forms, and (after some initial shock) the absurdities it throws up. Hyperbolic surfaces, aperiodic tilings, Möbius strips, negative numbers and zero all generated alarm at first, yet were ultimately embraced as gateways to new continents of mathematical wonder. You don’t have to be a symbol-expert to appreciate this terrain. Just as humans are endowed with an ability to dance and play music (even if education too often crushes this out of us), so we have innate form-making and pattern-playing proclivities. Sea slugs, sound waves and falcons do mathematics; Islamic mosaicists and African architects do it too. So can you.
Recent zbMATH articles in MSC 14

A singular mathematical promenade
"Ghys, Étienne"
At first look, one may feel that the book title is a little strange. The word singular in the title refers to the concept of singularity of a curve and does not mean a trip made by an individual person. It is a promenade into the mathematical world. The tour is interesting, entertaining and enjoyable, but it may be a little difficult for those who have insufficient mathematical knowledge. So some mathematical maturity is required to fully appreciate the beauty presented by the author. When you go through its subjects you will find it a wonderfully crafted book. The book consists of 30 chapters. Each chapter provides a rich read. Several chapters are fairly independent from the rest of the book. It is a remarkable achievement in terms of its content, structure, and style. In almost all chapters the author gives excellent examples of mathematical exposition and uses history to enrich contemporary mathematical investigations. He weaves historical stories in between the combinatorics, complex analysis, algebraic geometry, etc., and does it all in a very readable and remarkable way. The design of the book is amazing: it contains many pictures and illustrations, scanned manuscripts, references and remarks, all written in the right margin of the pages (so one has the information immediately available). The text contains many historical quotations in different languages, with translations, and interesting analysis of the mathematics of our ``classics'' (Newton, Gauss, Hipparchus, etc.). Hence the book will please any budding or professional mathematician. I can say that, principally for professional readers, the book is an enjoyable read due to the versatility of its subjects and the many illustrations and remarks that enrich the classical notions. In fact most of the material in the book can be regarded as advanced undergraduate/early graduate level, even though some material is significantly more advanced. One very remarkable aspect of the book is the treatment of historical matters. Some very classical notions, such as the fundamental theorem of algebra, the theory of Puiseux series, the linking number of knots, discrete mathematics, operads, resolution of curve singularities, complex singularities, and more, are discussed and explained in an enlightening way. The author of the book, Professor Étienne Ghys, Director of Research at the École Normale Supérieure de Lyon, is a skilled, versatile expositor and mathematician. He wrote his book in a relaxed, informal manner with lots of exclamation marks, figures, supporting computer graphics and illustrations that are mathematically helpful and visually engaging. It is interesting to know that most of the illustrations were produced by Ghys himself, who has waived all copyright and related or neighbouring rights, which is good evidence of Ghys's service towards the dissemination of mathematical ideas. Ghys is a prominent researcher, broadly in geometry and dynamics. He was awarded the Clay Award for Dissemination of Mathematics in 2015.
As the author mentions in his book, the motivation for writing it came from a fact brought to his attention by his colleague Maxim Kontsevich in 2009, concerning the relative position of the graphs of four real polynomials under certain conditions imposed on the polynomials. So he begins the book with an attractive theorem that Kontsevich scribbled for him on a Paris metro ticket:

Theorem. There do not exist four polynomials \(P_1, \dots , P_4 \in \mathbb{R}[x] \) with \(P_1(x) < P_2(x) < P_3(x) < P_4(x)\) for all small negative \(x\), and \(P_2(x) < P_4(x) < P_1(x) < P_3(x)\) for small positive \(x\).

In fact Ghys begins his promenade with this attractive theorem. Amazingly, this result basically characterizes what can or cannot happen for crossings, not only for graphs of arbitrary collections of polynomials, but indeed for all real analytic planar curves. The book explores very different questions related to this problem and follows its different ramifications. Ghys discusses the more general singularities of algebraic curves in the plane, explaining how the concepts were developed historically. I recommend assigning parts of it as independent study for both undergraduate and graduate students.

Book review of: G.-M. Greuel et al., Singular algebraic curves
"Degtyarev, Alex"
Review of [Zbl 1411.14001].

Book review of: K. Fujiwara and F. Kato, Foundations of rigid geometry. I
"Wedhorn, Torsten"
Review of [Zbl 1400.14001].

Some properties of the polynomially bounded o-minimal expansions of the real field and of some quasianalytic local rings
"Berraho, M."
Let \(\mathcal E_n\) be the ring of germs of smooth functions in \(\mathbb R^n\) at the origin. The Borel map is a map from \(\mathcal E_n\) onto the ring of formal power series \(\mathbb R[[x_1,\ldots, x_n]]\). The image \(\widehat{f}\) of an \(f \in \mathcal E_n\) under the Borel map is defined as the infinite Taylor expansion of \(f\) at the origin. This paper discusses subrings \(\mathcal C\) of \(\mathcal E_n\) for which the restriction of the Borel map to \(\mathcal C\) is bijective. Its results are as follows: Let \(\mathcal D_n\) be the subring of \(\mathcal E_n\) consisting of the germs of real analytic functions definable in a polynomially bounded o-minimal expansion of the field of reals. The Weierstrass division and preparation theorems for \(\mathcal D_2\) hold true when the restriction of the Borel map to \(\mathcal D_1\) is bijective. When a subring \(\mathcal C\) of \(\mathcal E_1\) satisfies the technical condition called stability under monomial division and the restriction of the Borel map to it is bijective, the local ring \(\mathcal C\) with the \((x_1)\)-adic topology is complete.

On framed simple purely real Hurwitz numbers
"Kazarian, M. E." "Lando, S. K." "Natanzon, S. M."

Enumerating coloured partitions in 2 and 3 dimensions
"Davison, Ben" "Ongaro, Jared" "Szendrői, Balázs"
Summary: We study generating functions of ordinary and plane partitions coloured by the action of a finite subgroup of the corresponding special linear group. After reviewing known results for the case of ordinary partitions, we formulate a conjecture concerning a basic factorisation property of the generating function of coloured plane partitions that can be thought of as an orbifold analogue of a conjecture of \textit{D. Maulik} et al.
[Compos. Math. 142, No. 5, 1263--1285 (2006; Zbl 1108.14046)], now a theorem, in three-dimensional Donaldson-Thomas theory. We study natural quantisations of the generating functions arising from geometry, discuss a quantised version of our conjecture, and prove a positivity result for the quantised coloured plane partition function under a geometric assumption.

The finite matroid-based valuation conjecture is false
"Tran, Ngoc Mai"

Generalisations of the Harer-Zagier recursion for 1-point functions
"Chaudhuri, Anupam" "Do, Norman"
Summary: \textit{J. Harer} and \textit{D. Zagier} [Invent. Math. 85, 457--485 (1986; Zbl 0616.14017)] proved a recursion to enumerate gluings of a \(2d\)-gon that result in an orientable genus \(g\) surface, in their work on Euler characteristics of moduli spaces of curves. Analogous results have been discovered for other enumerative problems, so it is natural to pose the following question: how large is the family of problems for which these so-called 1-point recursions exist? In this paper, we prove the existence of 1-point recursions for a class of enumerative problems that have Schur function expansions. In particular, we recover the Harer-Zagier recursion, but our methodology also applies to the enumeration of dessins d'enfant, to Bousquet-Mélou-Schaeffer numbers, to monotone Hurwitz numbers, and more. On the other hand, we prove that there is no 1-point recursion that governs single Hurwitz numbers. Our results are effective in the sense that one can explicitly compute particular instances of 1-point recursions, and we provide several examples. We conclude the paper with a brief discussion and a conjecture relating 1-point recursions to the theory of topological recursion.

Crystal structures for symmetric Grothendieck polynomials
"Monical, Cara" "Pechenik, Oliver" "Scrimshaw, Travis"
Summary: The symmetric Grothendieck polynomials representing Schubert classes in the K-theory of Grassmannians are generating functions for semistandard set-valued tableaux. We construct a type \(A_n\) crystal structure on these tableaux. This crystal yields a new combinatorial formula for decomposing symmetric Grothendieck polynomials into Schur polynomials. For single columns and single rows, we give a new combinatorial interpretation of Lascoux polynomials (K-analogs of Demazure characters) by constructing a K-theoretic analog of crystals with an appropriate analog of a Demazure crystal. We relate our crystal structure to combinatorial models using excited Young diagrams, Gelfand-Tsetlin patterns via the 5-vertex model, and biwords via Hecke insertion to compute symmetric Grothendieck polynomials.

Crystal for stable Grothendieck polynomials
"Morse, Jennifer" "Pan, Jianping" "Poh, Wencin" "Schilling, Anne"
A crystal is a directed graph whose encoded information mirrors that of the highest weight theory of a root system. Their importance lies in reducing problems about representations of Kac-Moody Lie algebras to analogous problems in a purely combinatorial context, and conversely. References introducing crystals are, from a combinatorial point of view, [\textit{D. Bump} and \textit{A. Schilling}, Crystal bases. Representations and combinatorics. Hackensack, NJ: World Scientific (2017; Zbl 1440.17001); \textit{P. Hersh} and \textit{C. Lenart}, Math. Z. 286, No. 3--4, 1435--1464 (2017; Zbl 1371.05315)]; from the algebraic side, see [\textit{J.
Hong} and \textit{S.-J. Kang}, Introduction to quantum groups and crystal bases. Providence, RI: American Mathematical Society (AMS) (2002; Zbl 1134.17007)]. Here, the authors associate a type \(A\) crystal with the set of \(321\)-avoiding Hecke factorizations; for an expanded version of the content of this paper see [\textit{J. Morse} et al., Electron. J. Comb. 27, No. 2, Research Paper P2.29, 48 p. (2020; Zbl 1441.05237)]. More references: [\textit{M. Albert} et al., Eur. J. Comb. 78, 44--72 (2019; Zbl 1414.05004); \textit{M. Bóna}, Combinatorics of permutations. Boca Raton, FL: CRC Press (2012; Zbl 1255.05001)]. The authors also define a new insertion from decreasing factorizations to pairs of semistandard Young tableaux, and prove several properties; in particular, this new insertion intertwines with the crystal operators. Everything is related to the combinatorics of Young tableaux. Additional references: [\textit{M. Gillespie} et al., Algebr. Comb. 3, No. 3, 693--725 (2020; Zbl 1443.05183); \textit{Y.-T. Oh} and \textit{E. Park}, Electron. J. Comb. 26, No. 4, Research Paper P4.39, 19 p. (2019; Zbl 1428.05329); \textit{S. Assaf} and \textit{E. K. Oğuz}, Sémin. Lothar. Comb. 80B, 80B.26, 12 p. (2018; Zbl 1411.05272); \textit{J.-H. Kwon}, Handb. Algebra 6, 473--504 (2009; Zbl 1221.17017); \textit{G. Benkart} and \textit{S.-J. Kang}, Adv. Stud. Pure Math. 28, 21--54 (2000; Zbl 1027.17009); \textit{T. H. Baker}, Prog. Math. 191, 1--48 (2000; Zbl 0974.05080); \textit{G. Cliff}, J. Algebra 202, No. 1, 10--35 (1998; Zbl 0969.17010); \textit{P. Littelmann}, J. Algebra 175, No. 1, 65--87 (1995; Zbl 0831.17004); \textit{A. Puskás}, Assoc. Women Math. Ser. 16, 333--362 (2019; Zbl 1416.05300); \textit{V. I. Danilov} et al., Algebra 2013, Article ID 483949, 14 p. (2013; Zbl 1326.05045); \textit{T. Lam} and \textit{P. Pylyavskyy}, Sel. Math., New Ser. 19, No. 1, 173--235 (2013; Zbl 1260.05043); \textit{V. Genz} et al., Sel. Math., New Ser. 27, No. 4, Paper No. 67, 45 p. (2021; Zbl 07383344); \textit{N. Jacon}, Electron. J. Comb. 28, No. 2, Research Paper P2.21, 16 p. (2021; Zbl 07356164); \textit{T. Shoji} and \textit{Z. Zhou}, J. Algebra 569, 67--110 (2021; Zbl 07286477)].

Alcove paths and Gelfand-Tsetlin patterns
"Watanabe, Hideya" "Yamamura, Keita"
Summary: In their study of the equivariant K-theory of the generalized flag varieties \(G/P\), where \(G\) is a complex semisimple Lie group, and \(P\) is a parabolic subgroup of \(G\), \textit{C. Lenart} and \textit{A. Postnikov} [Trans. Am. Math. Soc. 360, No. 8, 4349--4381 (2008; Zbl 1211.17021)] introduced a combinatorial tool, called the alcove path model. It provides a model for the highest weight crystals with dominant integral highest weights, generalizing the model by semistandard Young tableaux. In this paper, we prove a simple and explicit formula describing the crystal isomorphism between the alcove path model and the Gelfand-Tsetlin pattern model for type \(A\).

The Bruhat order, the lookup conjecture and spiral Schubert varieties of type \(\tilde{A}_2\)
"Graham, William" "Li, Wenjing"
Summary: Although the Bruhat order on a Weyl group is closely related to the singularities of the Schubert varieties for the corresponding Kac-Moody group, it can be difficult to use this information to prove general theorems. This paper uses the action of the affine Weyl group \(W\) of type \(\tilde{A}_2\) on a Euclidean space \(V \cong \mathbb{R}^2\) to study the Bruhat order on \(W\).
We believe that these methods can be used to study the Bruhat order on arbitrary affine Weyl groups. Our motivation for this study was to extend the lookup conjecture of \textit{B. D. Boe} and \textit{W. Graham} [Am. J. Math. 125, No. 2, 317--356 (2003; Zbl 1074.14045)] (which is a conjectural simplification of the Carrell-Peterson criterion (see [\textit{J. B. Carrell}, Proc. Symp. Pure Math. 56, 53--61 (1994; Zbl 0818.14020)]) for rational smoothness) to type \(\tilde{A}_2\). Computational evidence suggests that the only Schubert varieties in type \(\tilde{A}_2\) where the ``nontrivial'' case of the lookup conjecture occurs are the spiral Schubert varieties, and as a step towards the lookup conjecture, we prove it for a spiral Schubert variety \(X ( w )\) of type \(\tilde{A}_2\). The proof uses descriptions we obtain of the elements \(x \leq w\) and of the rationally smooth locus of \(X ( w )\) in terms of the \(W\)-action on \(V\). As a consequence we describe the maximal nonrationally smooth points of \(X ( w )\). The results of this paper are used in a sequel to describe the smooth locus of \(X ( w )\), which is different from the rationally smooth locus.

Algebraic geometry for \(\ell \)-groups
"Di Nola, Antonio" "Lenzi, Giacomo" "Vitale, Gaetano"
Summary: In this paper we focus on the algebraic geometry of the variety of \(\ell \)-groups (i.e. lattice ordered abelian groups). In particular we study the role of the introduction of constants in functional spaces and \(\ell \)-polynomial spaces, which are themselves \(\ell \)-groups, evaluated over other \(\ell \)-groups. We use different tools and techniques, with an increasing level of abstraction, to describe properties of \(\ell \)-groups, topological spaces (with the Zariski topology) and a formal logic, all linked by the underlying theme of solutions of \(\ell \)-equations.

Erratum to: ``Geometric aspects of Lucas sequences. I''
"Suwa, Noriyuki"
Corrects mistakes in the statements of Corollary 3.13 and Corollary 3.14 in [the author, ibid. 43, No. 1, 75--136 (2020; Zbl 1469.11026)].

Retract rationality and algebraic tori
"Scavia, Federico"
Summary: For any prime number \(p\) and field \(k\), we characterize the \(p\)-retract rationality of an algebraic \(k\)-torus in terms of its character lattice. We show that a \(k\)-torus is retract rational if and only if it is \(p\)-retract rational for every prime \(p\), and that the Noether problem for retract rationality for a group of multiplicative type \(G\) has an affirmative answer for \(G\) if and only if the Noether problem for \(p\)-retract rationality for \(G\) has a positive answer for all \(p\). For every finite set of primes \(S\) we give examples of tori that are \(p\)-retract rational if and only if \(p\notin S\).

Adic shtukas, modifications and applications
"Hieu, Nguyen Kieu"
Summary: In this paper, via the study of the modifications of vector bundles on the Fargues-Fontaine curve, we prove a geometric formula relating the Lubin-Tate towers with the simple basic unramified Rapoport-Zink spaces of EL type of signature \((1, n-1), (p_1, q_1), \dots, (p_k, q_k)\) where \(p_iq_i = 0 \). In particular, we deduce the computation of the cohomology groups of the latter.
The abelian part of a compatible system and \(\ell \)-independence of the Tate conjecture
"Hui, Chun Yin"
Summary: Let \(K\) be a number field and \(\{V_\ell \}_\ell\) a rational strictly compatible system of semisimple Galois representations of \(K\) arising from geometry. Let \(\mathbf{G}_\ell\) and \(V_\ell^{{\text{ab}}}\) be respectively the algebraic monodromy group and the maximal abelian subrepresentation of \(V_\ell\) for all \(\ell \). We prove that the system \(\{V_\ell^{{\text{ab}}}\}_\ell\) is also a rational strictly compatible system under some group theoretic conditions, e.g., when \(\mathbf{G}_{\ell '}\) is connected and satisfies \textit{Hypothesis A} for some prime \(\ell '\). As an application, we prove that the Tate conjecture for an abelian variety \(X/K\) is independent of \(\ell\) if the algebraic monodromy groups of the Galois representations of \(X\) satisfy the required conditions.

Versal families of elliptic curves with rational 3-torsion
"Bekker, Boris M." "Zarhin, Yuri G."

Torsion of rational elliptic curves over the maximal abelian extension of \(\mathbb{Q} \)
"Chou, Michael"
Summary: Let \(E\) be an elliptic curve defined over \(\mathbb{Q}\), and let \(\mathbb{Q}^{\mathrm{ab}}\) be the maximal abelian extension of \(\mathbb{Q}\). In this article we classify the groups that can arise as \(E(\mathbb{Q}^{\mathrm{ab}})_{\operatorname{tors}}\) up to isomorphism. The method illustrates techniques for finding explicit models of modular curves of mixed level structure. Moreover, we provide an explicit algorithm to compute \(E(\mathbb{Q}^{\mathrm{ab}})_{\operatorname{tors}}\) for any elliptic curve \(E/\mathbb{Q}\).

Average values on the Jacobian variety of a hyperelliptic curve
"Chung, Jiman" "Im, Bo-Hae"
Summary: We give explicitly an average value formula under the multiplication-by-2 map for the \(x\)-coordinates of the 2-division points \(D\) on the Jacobian variety \(J(C)\) of a hyperelliptic curve \(C\) with genus \(g\) if \(2D \equiv 2P-2\infty \pmod{\text{Pic}(C)}\) for \(P=(x_P, y_P) \in C\) with \(y_P \ne 0\). Moreover, if \(g=2\), we give a more explicit formula for \(D\) such that \(2D \equiv P-\infty \pmod{\mathrm{Pic}(C)}\).

On elliptic curves of prime power conductor over imaginary quadratic fields with class number 1
"Cremona, John" "Pacetti, Ariel"
Summary: The main result of this paper is to extend from \(\mathbb Q\) to each of the nine imaginary quadratic fields of class number 1 a result of [\textit{J.-P. Serre}, Duke Math. J. 54, 179--230 (1987; Zbl 0641.10026)] and [\textit{J. F. Mestre} and \textit{J. Oesterlé}, J. Reine Angew. Math. 400, 173--184 (1989; Zbl 0693.14004)], namely that if \(E\) is an elliptic curve of prime conductor, then either \(E\) or a 2-, 3- or 5-isogenous curve has prime discriminant. For four of the nine fields, the theorem holds with no change, while for the remaining five fields the discriminant of a curve with prime conductor is (up to isogeny) either prime or the square of a prime. The proof is conditional in two ways: first that the curves are modular, so are associated to suitable Bianchi newforms; and second that a certain level-lowering conjecture holds for Bianchi newforms.
We also classify all elliptic curves of prime power conductor and non-trivial torsion over each of the nine fields: in the case of 2-torsion, we find that such curves either have CM or, with a small finite number of exceptions, arise from a family analogous to the Setzer-Neumann family over \(\mathbb Q\).

Explicit moduli spaces for congruences of elliptic curves
"Fisher, Tom"
In the paper under review, the author determines explicit birational models over \({\mathbb Q}\) for the modular surfaces parametrizing pairs of \(N\)-congruent elliptic curves in all cases when this surface is an elliptic surface. In each case, the author also determines the rank of the Mordell-Weil lattice and the geometric Picard number. More precisely, for an integer \(N\geq 2\), two elliptic curves are said to be \(N\)-congruent if their \(N\)-torsion subgroups are isomorphic as Galois modules. Such an isomorphism raises the Weil pairing to the power \(\varepsilon\) for some \(\varepsilon\in ({\mathbb Z}/N{\mathbb Z})^\times\). In this case, one says that the \(N\)-congruence has power \(\varepsilon.\) Note that the author considers \(\varepsilon\) up to a square, since multiplication by \(m\) (with \(\gcd(m,N)=1\)) on one of the elliptic curves changes \(\varepsilon\) to \(m^2\varepsilon.\) Let \(Z(N,\varepsilon)\) be the surface parametrizing pairs of elliptic curves with power \(\varepsilon\), up to simultaneous quadratic twist. This surface is defined over \({\mathbb Q}\). Refining the previous classification of \textit{E. Kani} and \textit{W. Schanz} [Math. Z. 227, No. 2, 337--366 (1998; Zbl 0996.14012)], who explicitly determined the birational type of \(Z(N,\varepsilon)\) over \({\mathbb C}\) for each pair \((N,\varepsilon)\), the author shows that in both the case of an elliptic \(K3\)-surface and that of an elliptic surface with Kodaira dimension one (a.k.a. a properly elliptic surface), \(Z(N,\varepsilon)\) is in fact birational over \({\mathbb Q}\) to an elliptic surface. Furthermore, the author determines in each case a Weierstrass equation for the generic fibre as an elliptic curve over \({\mathbb Q}(T).\) Note that the author considers an elliptic surface to have a section. The explicit models are given as follows. The author notes that some of the cases were already treated in [\textit{Z. Chen}, Math. Proc. Camb. Philos. Soc. 165, No. 1, 137--162 (2018; Zbl 1451.11049); \textit{T. Fisher}, Acta Arith. 171, No. 4, 371--387 (2015; Zbl 1341.11028); \textit{A. Kumar}, Res. Math. Sci. 2, Paper No. 24, 46 p. (2015; Zbl 1380.11049)].

Theorem 1.1. The surfaces \(Z(N,\varepsilon)\) that are birational over \({\mathbb C}\) to an elliptic \(K3\)-surface are in fact birational over \({\mathbb Q}\) to an elliptic surface. The generic fibres are the elliptic curves over \({\mathbb Q}(T)\) with the following Weierstrass equations. \[ \begin{split} Z(6,5)&:\quad y^2+3T(T-2)xy+2(T-1)(T+2)^2(T^3-2)y=x^3-6(T-1)(T^3-2)x^2,\\ Z(7,3)&:\quad y^2=x^3+(4T^4+4T^3-51T^2-2T-50)x^2+(6T+25)(52T^2-4T+25)x,\\ Z(8,3)&:\quad y^2=x^3-(3T^2-7)x^2-4T^2(4T^4-15)x+4T^2(53T^4+81T^2+162),\\ Z(8,5)&:\quad y^2=x^3-2(T^2+19)x^2-(4T^2-49)(T^4-6T^2+25)x,\\ Z(9,1)&:\quad y^2+(6T^2+3T+2)xy+T^2(T+1)(4T^3+9T+9)y=x^3-(16T^4+12T^3+9T^2+6T+1)x^2,\\ Z(12,1)&:\quad y^2+2(5T^2+9)xy+96(T^2+3)(T^2+1)^2y=x^3+(T^2+3)(11T^2+1)x^2. \end{split} \]

Theorem 1.2. The surfaces \(Z(N,\varepsilon)\) that are birational over \({\mathbb C}\) to a properly elliptic surface are in fact birational over \({\mathbb Q}\) to an elliptic surface.
The generic fibres are the elliptic curves over \({\mathbb Q}(T)\) with the following Weierstrass equations. \[ \begin{split} Z(8,7)&:\quad y^2=x^3+2(4T^6-15T^4+14T^2-1)x^2+(T^2-1)^4(16T^4-24T^2+1)x,\\ Z(9,2)&:\quad y^2+3(4T^3+T^2-2)xy+(T-1)^3(T^3-1)(4T^3-3T-7)y\\ & \quad =x^3-3(T+1)(T^3-1)(9T^2+2T+1)x^2,\\ Z(10,1)&:\quad y^2-(3T-2)(6T^2-5T-2)xy -4T^2(T-1)^2(4T^2-2T-1)(27T^3-54T^2+16T+12)y\\ & \quad =x^3+T^2(T-1)(27T^3-54T^2+16T+12)x^2,\\ Z(10,3)&:\quad y^2+(T^3-8T^2-9T-8)xy+2T^2(T^3-T^2-3T-3)(7T^2+2T+3)y\\ & \quad =x^3+2(3T+2)(T^3-T^2-3T-3)x^2,\\ Z(11,1)&:\quad y^2+(T^3+T)xy=x^3-(4T^5-17T^4+30T^3-18T^2+4)x^2 +T^2(2T-1)(3T^2-7T+5)^2x \end{split} \] Using these explicit equations, the author applies the methods of van Luijk and Kloosterman to compute the geometric Picard number of each surface [\textit{R. Van Luijk}, J. Number Theory 123, No. 1, 92--119 (2007; Zbl 1160.14029); \textit{R. van Luijk}, Algebra Number Theory 1, No. 1, 1--17 (2007; Zbl 1123.14022); \textit{R. Kloosterman}, Can. Math. Bull. 50, No. 2, 215--226 (2007; Zbl 1162.14024)].

Bounding cubic-triple product Selmer groups of elliptic curves
"Liu, Yifeng"
Let \(F\) be a totally real cubic number field, let \(E\) be a modular elliptic curve defined over \(F\) and let \(h^1(E)\) be its \(F\)-motive. Via multiplicative induction to \(\mathbb{Q}\) one gets a \textit{cubic-triple product motive} \(\mathrm{M}(E):=(\otimes\mathrm{Ind}^F_{\mathbb{Q}} h^1(E))(2)\) of dimension 8, whose \(p\)-adic realization is basically (a twist of) the multiplicative induction from \(F\) to \(\mathbb{Q}\) of the \(p\)-adic Tate module of \(E\). To such an object one can attach a triple product \(L\)-function \(L(s,\mathrm{M}(E))\) with good meromorphic properties and a functional equation with central critical value at \(s=0\). The paper deals with an instance of the Bloch-Kato conjecture for this setting: in particular, under some additional hypotheses on \(E\), it proves that the nonvanishing of \(L(0,\mathrm{M}(E))\) yields the 0-dimensionality (over \(\mathbb{Q}_p\)) of the \(p\)-part of the appropriate Selmer group for infinitely many primes \(p\). The author already presented a similar result, where \(F\) was replaced by the product of \(\mathbb{Q}\) and a real quadratic field, in [\textit{Y. Liu}, Invent. Math. 205, No. 3, 693--780 (2016; Zbl 1395.11091)]: the strategy is based on a reciprocity law for cycles on a triple product of modular curves, which is obtained via congruence formulas arising from computations of étale local cohomology groups of varieties. Though the strategy is similar to that of the paper mentioned above, the different setting requires new techniques on weight spectral sequences to handle higher dimensional cycles, which then provide the cohomological computations (hence the reciprocity law) needed here. When \(L(0,\mathrm{M}(E))\neq 0\), the reciprocity law on some special (Hirzebruch-Zagier) cycles produces enough annihilators for the Selmer group to prove its finiteness.

The elliptic KZB connection and algebraic de Rham theory for unipotent fundamental groups of elliptic curves
"Luo, Ma"
Summary: We develop an algebraic de Rham theory for unipotent fundamental groups of once punctured elliptic curves over a field of characteristic zero using the universal elliptic KZB connection of \textit{D. Calaque} et al. [Prog. Math. 269, 165--266 (2009; Zbl 1241.32011)] and \textit{A. Levin} and \textit{G.
Racinet} [``Towards multiple elliptic polylogarithms'', Preprint, \url{arXiv:math/0703237}]. We use it to give an explicit version of Tannaka duality for unipotent connections over an elliptic curve with a regular singular point at the identity.

Automorphism groups of simple polarized abelian varieties of odd prime dimension over finite fields
"Hwang, WonTae"
Summary: We prove that the automorphism groups of simple polarized abelian varieties of odd prime dimension over finite fields are cyclic, and give a complete list of finite groups that can be realized as such automorphism groups.

Arithmetical power series expansion of the sigma function for a plane curve
"Ônishi, Yoshihiro"
Summary: The Weierstrass function \(\sigma(u)\) associated with an elliptic curve can be generalized in a natural way to an entire function associated with a higher genus algebraic curve. This generalized multivariate sigma function has been investigated since the pioneering work of Felix Klein. The present paper shows Hurwitz integrality of the coefficients of the power series expansion around the origin of the higher genus sigma function associated with a certain plane curve, which is called an \((n, s)\)-curve or a plane telescopic curve. For the prime \(2\), the expansion of the sigma function is not Hurwitz integral, but its square is. This paper clarifies the precise structure of this phenomenon. In Appendix A, computational examples for the trigonal genus 3 curve (\((3, 4)\)-curve) \(y^3+(\mu_1 x+\mu_4)y^2 + (\mu_2 x^2 + \mu_5 x + \mu_8)y = x^4 + \mu_3 x^3 + \mu_6 x^2 + \mu_9 x + \mu_{12}\) (where the \(\mu_j\) are constants) are given.

Twisted arithmetic Siegel Weil formula on \(X_{0}(N)\)
"Du, Tuoping" "Yang, Tonghai"
Summary: In this paper, we study twisted arithmetic divisors on the modular curve \(\mathcal{X}_0(N)\) with \(N\) square-free. For each pair \(({\Delta}, r)\), where \({\Delta} \equiv r^2 \operatorname{mod} 4 N\) and \({\Delta}\) is a fundamental discriminant, we construct a twisted arithmetic theta function \(\hat{\phi}_{{\Delta}, r}(\tau)\) which is a generating function of arithmetic twisted Heegner divisors. We prove that the arithmetic pairing \(\langle \hat{\phi}_{{\Delta}, r}(\tau), \hat{\omega}_N \rangle\) is equal to the special value, rather than the derivative, of some Eisenstein series, thanks to some cancellation, where \(\hat{\omega}_N\) is a normalized metrized Hodge line bundle. We also prove the modularity of \(\hat{\phi}_{{\Delta}, r}(\tau)\).

On Faltings heights of abelian varieties with complex multiplication
"Yuan, Xinyi"
Summary: This expository article introduces some conjectures and theorems related to the Faltings heights of abelian varieties with complex multiplication. The topics include the Colmez conjecture, the averaged Colmez conjecture, and the André-Oort conjecture. For the entire collection see [Zbl 1423.00001].
On the level of modular curves that give rise to isolated \(j\)-invariants
"Bourdon, Abbey" "Ejder, Özlem" "Liu, Yuan" "Odumodu, Frances" "Viray, Bianca"
Summary: We say a closed point \(x\) on a curve \(C\) is sporadic if \(C\) has only finitely many closed points of degree at most \(\deg(x)\), and that \(x\) is isolated if it is not in a family of effective degree \(d\) divisors parametrized by \(\mathbb{P}^1\) or a positive rank abelian variety (see Section 4 for more precise definitions and a proof that sporadic points are isolated). Motivated by well-known classification problems concerning rational torsion of elliptic curves, we study sporadic and isolated points on the modular curves \(X_1(N)\). In particular, we show that any non-cuspidal non-CM sporadic, respectively isolated, point \(x \in X_1(N)\) maps down to a sporadic, respectively isolated, point on a modular curve \(X_1(d)\), where \(d\) is bounded by a constant depending only on \(j(x)\). Conditionally, we show that \(d\) is bounded by a constant depending only on the degree of \(\mathbb{Q}(j(x))\), so in particular there are only finitely many \(j\)-invariants of bounded degree that give rise to sporadic or isolated points.

Quadratic Chabauty for modular curves and modular forms of rank one
"Dogra, Netan" "Le Fourn, Samuel"
The Chabauty-Kim method is a method for determining the set \(X(\mathbb{Q})\) of rational points of a curve \(X\) over \(\mathbb{Q}\) of genus bigger than one. The idea is to locate \(X(\mathbb{Q})\) inside \(X(\mathbb{Q}_p)\) by finding an obstruction to a \(p\)-adic point being global. This leads to a tower of obstructions \[X(\mathbb{Q}_p) \supset X(\mathbb{Q}_p)_1 \supset X(\mathbb{Q}_p)_2 \supset \ldots \supset X(\mathbb{Q}).\] The first obstruction set \(X(\mathbb{Q}_p)_1\) is the one produced by Chabauty's method. In situations when \(X(\mathbb{Q}_p)_1\) is finite, it can often be used to determine \(X(\mathbb{Q})\). In the present paper, the authors study the finiteness of the Chabauty-Kim set \(X(\mathbb{Q}_p)_2\) when \(X\) is one of the modular curves \(X_{\text{ns}}^{+}(N)\) or \(X_0^{+}(N)\) with \(N\) a prime different from \(p\), where \(X_0^{+}(N)\) is the quotient of \(X_0(N)\) by the Atkin-Lehner involution \(w_N\), and \(X_{\text{ns}}^{+}(N)\) is the quotient of \(X(N)\) by the normalizer of a non-split Cartan subgroup. They show that for all primes \(N\) such that \(g(X_0^{+}(N)) \geq 2\), \(X_0^{+}(N)(\mathbb{Q}_p)_2\) is finite for any \(p \neq N\), and for all primes \(N\) such that \(g(X_{\text{ns}}^{+}(N)) \geq 2\) and \(X_{\text{ns}}^{+}(N)(\mathbb{Q}) \neq \emptyset\), \(X_{\text{ns}}^{+}(N)(\mathbb{Q}_p)_2\) is finite for any \(p \neq N\). Their proof proceeds along the lines of the quadratic Chabauty method.

Ordinary points \(\bmod\, p\) of \(\operatorname{GL}_n(\mathbb{R})\)-locally symmetric spaces
"Goresky, Mark" "Tai, Yung sheng"
Summary: Locally symmetric spaces for \(\operatorname{GL}_n(\mathbb{R})\) parametrize polarized complex abelian varieties with real structure (antiholomorphic involution). We introduce a mod \(p\) analog. We define an ``antiholomorphic'' involution (or ``real structure'') on an ordinary abelian variety (defined over a finite field \(k\)) to be an involution of the associated Deligne module \((T,F,V)\) that exchanges \(F\) (the Frobenius) with \(V\) (the Verschiebung). The definition extends to include principal polarizations and level structures.
We show there are finitely many isomorphism classes of such objects in each dimension, and give a formula for this number that resembles the Kottwitz ``counting formula'' (for the number of principally polarized abelian varieties over \(k\)), but the symplectic group in the Kottwitz formula has been replaced by the general linear group.

Cohomology of automorphic bundles
"Lan, Kai-Wen"
Summary: In this survey article, we review some recent works (by the author and his collaborators Junecue Suh, Michael Harris, Richard Taylor, Jack Thorne, and Benoît Stroh) on the cohomology of automorphic bundles over locally symmetric varieties and some related geometric objects. For the entire collection see [Zbl 1423.00001].

Fourier-Jacobi cycles and arithmetic relative trace formula
"Liu, Yifeng"
Summary: In this article, we develop an arithmetic analogue of Fourier-Jacobi period integrals for a pair of unitary groups of equal rank. We construct the so-called Fourier-Jacobi cycles, which are algebraic cycles on the product of unitary Shimura varieties and abelian varieties. We propose the arithmetic Gan-Gross-Prasad conjecture for these cycles, which is related to the central derivatives of certain Rankin-Selberg \(L\)-functions, and develop a relative trace formula approach toward this conjecture.

On Shimura varieties for unitary groups
"Rapoport, M." "Smithling, B." "Zhang, W."
In the present paper, the authors define a class of Shimura varieties closely related to unitary groups which represent a moduli problem of abelian varieties with additional structure. This class of Shimura varieties is a variant of the Deligne-Kottwitz Shimura varieties. They compare their Shimura varieties with other unitary Shimura varieties.

Uniform bounds for periods of endomorphisms of varieties
"Huang, Keping"
Let \(K\) be a finite extension of \({\mathbb Q}_{p}\) and \(R\) be its ring of integers. Assume that \(X\) is an algebraic variety, defined as the common zeroes of polynomials with coefficients in \(R\), and that \(f:X \to X\) is an endomorphism, also defined over \(R\). A point \(P \in X\) is a periodic point of \(f\) if there is a positive integer \(n\) with \(f^{n}(P)=P\). The smallest such integer \(n\) is called the primitive period of \(P\). Let \(P \in X(R)\) (i.e., an \(R\)-rational point of \(X\)) be a periodic point with primitive period \(n\). The main result of this paper is an explicit upper bound for \(n\) in terms of the residue field of \(K\) and the valuation of \(K\). The proof is short (just two pages). This result is related to Morton-Silverman's conjecture on the existence of an upper bound \(C(D,N,d)\) for the number of preperiodic \(K\)-rational points of \(f\) in the case that \(K\) is a finite extension of degree \(D\) of \({\mathbb Q}\), \(X={\mathbb P}^{N}\) and \(f\) has degree \(d\).

An addendum to the elliptic torsion anomalous conjecture in codimension 2
"Hubschmid, Patrik" "Viada, Evelina"
Summary: The torsion anomalous conjecture states that for any variety \(V\) in an abelian variety there are only finitely many maximal \(V\)-torsion anomalous varieties. We prove this conjecture for \(V\) of codimension 2 in a product \(E^N\) of an elliptic curve \(E\) without CM, complementing previous results for \(E\) with CM. We also give an effective upper bound for the normalized height of these maximal \(V\)-torsion anomalous varieties.
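An aside on Huang's periodic-point bounds reviewed above: the phenomenon is easy to probe numerically by replacing \(R\) with \(\mathbb{Z}/p^k\mathbb{Z}\). A minimal Python sketch (my illustration, not the paper's method; the map \(x \mapsto x^2+1\) and the primes are arbitrary choices) lists the primitive periods that actually occur:

# Cycle lengths (primitive periods) of x -> x^2 + 1 acting on Z/mZ.
def periods(m):
    f = lambda x: (x * x + 1) % m
    lengths = set()
    for x0 in range(m):
        tortoise, hare = x0, f(x0)        # Floyd cycle detection
        while tortoise != hare:
            tortoise, hare = f(tortoise), f(f(hare))
        n, y = 1, f(tortoise)             # walk once around the cycle
        while y != tortoise:
            n, y = n + 1, f(y)
        lengths.add(n)
    return sorted(lengths)

for p in (5, 7, 11):
    print(p, periods(p), periods(p * p))

Comparing the lists for \(p\) and \(p^2\) shows the periods modulo \(p^2\) arising from those modulo \(p\) multiplied by small factors, in the spirit of a bound that depends only on the residue field and the ramification.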
The distribution relation and inverse function theorem in arithmetic geometry
"Matsuzawa, Yohsuke" "Silverman, Joseph H."
The paper presents an arithmetic distribution relation as well as two versions of the inverse function theorem in terms of an arithmetic distance function. In Sections 3-5, \(K\) denotes a field with a complete set of absolute values \(\mathcal{M}_K\) satisfying a product formula. If we fix an algebraic closure \(\bar{K}\) of \(K\), an \(\mathcal{M}_K\)-constant will be a function \(\gamma \colon \mathcal{M}_{\bar{K}} \longrightarrow \mathbb{R}_{\geq 0}\) such that \(\gamma(v)\) depends only on the restriction \(v|_K\) and the set \(\{ v|_K \, | \, \gamma(v) \neq 0 \}\) is finite. The notation \(O(\mathcal{M}_K)\) will be used to denote a relation that holds up to an \(\mathcal{M}_K\)-constant. For instance, \(f \leq g +O(h) + O(\mathcal{M}_K)\) means that there exist a \(C>0\) and an \(\mathcal{M}_K\)-constant \(\gamma\) such that \(f \leq g + C|h| + \gamma\).

Definition: (Local height functions) Let \(K\) be a field with a set of absolute values \(\mathcal{M}_K\). Let \(V\) be a projective variety (not necessarily irreducible) over \(K\) and \(X \subset V\) a closed subscheme. A local height is a function \(\lambda_X \colon V(\bar{K}) \times \mathcal{M}_K \longrightarrow \mathbb{R} \cup \{\infty\}\) determined by the following properties: \begin{enumerate} \item[(1)] If \(D\) is an effective divisor, we get the usual local height, i.e., \(\lambda_X = \lambda_D\). \item[(2)] If \(X,X'\) are subschemes, \(\lambda_{X \cap X'} = \min{(\lambda_X,\lambda_{X'})}\) (where \(X \cap X'\) denotes the subscheme with ideal sheaf \(\mathcal{I}_X+\mathcal{I}_{X'}\)). \end{enumerate} It also has many other nice properties: \begin{enumerate} \item[(3)] (functoriality) If \(\varphi \colon V \longrightarrow W\) denotes a morphism of varieties and \(X\) is a closed subscheme of \(W\), we have the equality \[\lambda_{\varphi^{-1}X,\,V} = \lambda_{X,W} \circ \varphi.\] \item[(4)] Local height functions are bounded below, so up to an \(\mathcal{M}_K\)-constant, we can assume that \(\lambda_X \geq 0\). \end{enumerate}

Remark: Using the local height function associated to the boundary divisor, the local height function machinery can be extended to quasi-projective varieties.

Definition: Let \(\Delta(V)\) denote the diagonal subvariety in \(V \times V\). The arithmetic distance function on \(V\) is the local height \[\delta_V = \lambda_{\Delta(V)}.\] It is well-defined up to an \(\mathcal{M}_K\)-bounded function and satisfies many nice properties as well, for example: \begin{enumerate} \item[(1)] \(\delta(P,R) \geq \min(\delta(P,Q),\delta(Q,R))\) \item[(2)] \(\lambda_X(Q) \geq \min(\lambda_X(P),\delta(P,Q))\) \end{enumerate}

Definition: Let \(\varphi \colon W \longrightarrow V\) be a finite flat morphism between schemes of finite type over a field \(k\). Let \(k'\) be an algebraically closed field containing \(k\). For \(x \in W(k')\), define the multiplicity of \(\varphi\) at \(x\) by the formula \[e_\varphi(x)= \operatorname{length}_{\mathcal{O}_{W_{k'},x}} \left(\mathcal{O}_{W_{k'},x}/\varphi^{-1} \mathfrak{m}_{\varphi(x)} \mathcal{O}_{W_{k'},x}\right).\] For finite flat morphisms \(W \longrightarrow V\), the distribution inequality bounds the arithmetic distance in the target variety in terms of the arithmetic distance of the pre-images in \(W\). In good cases, it will give a distribution relation.
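Before the general statement, it may help to record the simplest one-dimensional instance (my illustration, not the reviewer's): take \(V = W = \mathbb{P}^1\), \(\varphi(x) = x^d\), and, for \(v\)-adically bounded points, \(\delta(a,b;v) = -\log|a-b|_v\). The factorization \(a^d - q = \prod_{Q^d = q}(a - Q)\) then gives \[\delta_V(\varphi(a), q; v) = -\log\Bigl|\prod_{Q^d = q}(a - Q)\Bigr|_v = \sum_{Q^d = q} \delta_W(a, Q; v),\] which is the distribution relation of the theorem below with all multiplicities \(e_\varphi(Q) = 1\); the boundary and \(\mathcal{M}_K\)-constant terms absorb what happens near the ramification points \(0\) and \(\infty\).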
Theorem: (Arithmetic distribution relation/inequality) Let \(\varphi \colon W \longrightarrow V\) be a generically étale finite flat morphism between quasi-projective geometrically integral varieties over \(K\). For all \((P,q,v) \in W(\bar{K}) \times V(\bar{K}) \times \mathcal{M}_{\bar{K}}\), we have \[\delta_V(\varphi(P),q,v) \leq \sum_{Q \in W(\bar{K}),\, \varphi(Q)=q} e_\varphi(Q) \delta_W(P,Q,v)+ O(\lambda_{ \partial(V \times W)}(P,q,v))+ O(\mathcal{M}_K).\] We say that we have an arithmetic distribution relation when the above inequality becomes an equality. For example, assuming that \(V,W\) are both smooth, we have an arithmetic distribution relation in the following two situations: \begin{enumerate} \item[(1)] The map \(\varphi \colon W \longrightarrow V\) is étale (where there is no ramification). \item[(2)] The dimensions \(\dim(V)=\dim(W)=1\) (where the ramification divisor is at most zero-dimensional). \end{enumerate}

Remark: The above inequality does not always become an arithmetic relation. Even when we take a Galois cover \(\varphi \colon W \longrightarrow V\) with Galois group \(\text{Gal}(V/W)=\{\tau_1,\dots,\tau_n\}\), we have an inclusion of the associated sheaves of ideals \[\mathcal{I}\Bigl(\sum_{i=1}^n (1\times \tau_i)^* \Delta(W)\Bigr) \subset \mathcal{I}((\varphi \times \varphi)^*(\Delta(V)))\] that does not always give an equality of closed schemes. Take for example \(\varphi \colon \mathbb{P}^1 \times \mathbb{P}^1 \longrightarrow \mathbb{P}^1 \times \mathbb{P}^1 \) defined by \(\varphi([x,y],[z,w])=([x^2,y^2],[z,w])\).

Remark: The above inequality can be used to obtain a quantitative inverse function theorem: given a finite map, how far from the ramification locus and the boundary do we need to be in order to define a local inverse? Moreover, the inverse obtained can be shown to behave nicely with respect to the distance functions: in some sense, the distance between points is close to the distance between their inverses.

Theorem: (Inverse function theorem version \(1\)) Suppose that \(V\) and \(W\) are quasi-projective geometrically integral varieties defined over \(K\). Assume that the map \(\varphi \colon W \longrightarrow V\) is a generically étale finite flat surjective morphism of degree \(d\), also defined over \(K\). Let us denote by \(\text{Ann}(\Omega_{W/V})\) the annihilator ideal sheaf of \(\Omega_{W/V}\) and by \(A(\varphi) \subset W\) the closed subscheme defined by \(\text{Ann}(\Omega_{W/V})\). \begin{enumerate} \item[(a)] There exist constants \(C_2,C_3\) and \(\mathcal{M}_K\)-constants \(C_4,C_5\) such that the following holds:\\ If the triple \((P,q,v) \in W(K) \times V(K) \times \mathcal{M}_K\) satisfies \[\delta_V (\varphi(P),q;v) \geq d \lambda_{A(\varphi)}(P;v) + C_2 \lambda_{\partial(W \times V)}(P,q;v) + C_4(v)\] then there exists a point \(Q \in W(K)\) satisfying \(\varphi(Q) = q\) and \[\delta_W(P,Q;v) \geq \delta_V (\varphi(P),q;v) -(d-1)\lambda_{A(\varphi)}(P;v)- C_3 \lambda_{\partial(W \times V )}(P, q; v) - C_5(v).\] \item[(b)] If we take \(C_4\) to be an appropriate positive real number, instead of an \(\mathcal{M}_K\)-constant, and if we also assume that \(P \notin A(\varphi)\), then the point \(Q\) in (a) is unique. \end{enumerate}

Remark: The arithmetic distribution relation and the inverse function theorem have been used to study integral points in the following situations: \begin{enumerate} \item[(1)] To find uniform height estimates while working with the étale map \([n] \colon A \longrightarrow A\) on an Abelian scheme \(A \longrightarrow T\) over a base variety \(T\).
\item[(2)] To find an analogue of Siegel's theorem while working with iterates \(f^n\) of a rational map \(f \colon \mathbb{P}^1 \longrightarrow \mathbb{P}^1\) of degree at least two. \end{enumerate} The first version (version \(1\)) of the inverse function theorem works simultaneously over several places. In the next version the authors present a stronger result working only over a complete field \(K\). By working over a complete field, the exponents or coefficients of the inverse function theorem are improved from \((d,d-1)\) to \((2,1)\).

Theorem: (Inverse function theorem version \(2\)) Let \( (K, |\cdot|)\) be a complete field. Let \(W, V\) be smooth quasi-projective varieties over \(K\), and let \(\varphi \colon W \longrightarrow V\) be a generically finite generically étale morphism. Let \(E \subset W\) be the closed subscheme defined by the \(0\)-th Fitting ideal sheaf of \(\Omega_{W/V}\). Fix arithmetic distance functions \(\delta_W\), \(\delta_V\), a local height function \(\lambda_E\), and a boundary function \(\lambda_{\partial V}\). Let \(B \subset W(K)\) be a bounded subset. Then there are constants \(C_{36}, C_{37}, C_{38}, C_{39} > 0\) and a bounded subset \(\tilde{B} \subset W (K)\) containing \(B\) such that for all \(P \in B\) and \(q \in V(K)\) satisfying \[P \notin E \quad \text{and} \quad \delta_V(\varphi(P),q) \geq 2 \lambda_E(P) + C_{36} \lambda_{\partial V}(q) + C_{37},\] there is a unique \(Q \in \tilde{B}\) satisfying \[\varphi(Q)=q \quad \text{and} \quad \delta_W(P,Q) \geq \delta_V(\varphi(P),q) - \lambda_E(P)-C_{38} \lambda_{\partial V}(q) - C_{39}.\]

Remark: Both versions of the inverse function theorem are suitable for proving results analogous to the continuity of the roots of polynomials. For example, from the second version we can get the following result. Let \( (K, |\cdot|)\) be a complete field. Let \(D \in \mathbb{R}_{>0}\) and \(n \in \mathbb{Z}_{>0}\). Then there are positive constants \(C_{40},C_{41} > 0\) such that the following holds. Suppose that: \begin{itemize} \item \(f,g \in K[t]\) are monic polynomials of degree \(n\); \item the Gauss norms satisfy \(|f| \leq D\) and \(|g| \leq D\); \item there is an \(\alpha \in K\) such that \(f(\alpha) = 0\) and \(|f-g| \leq C_{40} |f'(\alpha)|^2\). \end{itemize} Then there is \(\beta \in K\) such that \[g(\beta) = 0 \quad \text{and} \quad |\alpha - \beta||f'(\alpha)| \leq C_{41} |f - g|.\]

Remark: The second version of the inverse function theorem is based on a higher dimensional version of Newton's method. The point \(Q\) will be obtained as the limit of a Cauchy sequence of points \(Q_0=P, Q_1, Q_2, \dots\)

Elliptic and abelian period spaces
"Wüstholz, Gisbert"
In his book on transcendental numbers [\textit{T. Schneider}, Einführung in die transzendenten Zahlen. Berlin-Göttingen-Heidelberg: Springer-Verlag (1957; Zbl 0077.04703)], Th. Schneider proposes eight open problems, the third of which is: Try to find transcendence results on elliptic integrals of the third kind. This problem gave rise to a number of papers. A survey on this question is the appendix by the reviewer, \emph{Third kind elliptic integrals and transcendance}, to the paper [\textit{C. Bertolin}, J. Pure Appl. Algebra 224, No. 10, Article ID 106396, 27 p. (2020; Zbl 1450.11077)]. The first results were obtained thanks to the appendix by J.-P. Serre, \emph{Quelques propriétés des groupes algébriques commutatifs}, to the volume [\textit{M. Waldschmidt}, Nombres transcendants et groupes algébriques.
(Transcendental numbers and algebraic groups). Complété par deux appendices de Daniel Bertrand et Jean-Pierre Serre. 2e éd. Société Mathématique de France (SMF), Paris (1987; Zbl 0621.10022)]. Among many contributions to Schneider's third problem are the following ones: [\textit{M. Laurent}, C. R. Acad. Sci., Paris, Sér. A 288, 699--701 (1979; Zbl 0402.10038); in: Semin. Delange-Pisot-Poitou, 20e Annee 1978/79, Theorie des nombres, Fasc. 1, Exp. 13, 4 p. (1980; Zbl 0426.10033); J. Reine Angew. Math. 316, 122--139 (1980; Zbl 0419.10034); \textit{E. Reyssat}, Ann. Fac. Sci. Toulouse, Math. (5) 2, 79--91 (1980; Zbl 0439.10021); C. R. Acad. Sci., Paris, Sér. A 290, 439--441 (1980; Zbl 0426.10037); \textit{M. Laurent}, J. Reine Angew. Math. 333, 144--161 (1982; Zbl 0475.10031); \textit{E. Reyssat}, Acta Arith. 41, 291--310 (1982; Zbl 0491.10026); \textit{D. M. Caveny} and \textit{R. Tubbs}, Proc. Am. Math. Soc. 138, No. 8, 2745--2754 (2010; Zbl 1262.11076)]. \par The paper under review provides much more general results on the question of the linear independence of elliptic integrals over the field of algebraic numbers. Consider an extension of an elliptic curve \(E\) by the multiplicative group, defined by a point \(P\) on the dual of \(E\). Everything is supposed to be defined over the field \({\overline{\mathbb{Q}}}\) of algebraic numbers. Then the dimension of the \({\overline{\mathbb{Q}}}\)-space spanned by \(1\) and the entries of the period matrix is \(8\) if \(E\) has no complex multiplication and \(P\) has infinite order, \(6\) if \(E\) has no complex multiplication and \(P\) has finite order and also if \(E\) has complex multiplication and \(P\) has infinite order, and \(4\) if \(E\) has complex multiplication and \(P\) has finite order. An example from elliptic billiards is given. Some results on abelian hyperelliptic integrals are also given. The main tools are the connections between integrals of the third kind and extensions of elliptic curves and abelian varieties by the multiplicative group (Serre, op. cit., and also [\textit{J.-P. Serre}, Algebraic groups and class fields. Transl. of the French edition. New York etc.: Springer-Verlag (1988; Zbl 0703.14001)]), as well as the author's Analytic Subgroup Theorem [\textit{G. Wüstholz}, Ann. Math. (2) 129, No. 3, 501--517 (1989; Zbl 0675.10025)]. The distribution of the maximum of partial sums of Kloosterman sums and other trace functions 2021-11-25T18:46:10.358925Z "Autissier, Pascal" "Bonolis, Dante" "Lamzouri, Youness" Let \(\mathcal{F}=\{\varphi_a\}_{a\in\Omega_m}\) be a family of periodic functions, where \(\Omega_m\) is a non-empty finite set, and for each \(a\in\Omega_m\), \(\varphi_a:\mathbb{Z}\to\mathbb{C}\) is \(m\)-periodic and its Fourier transform \(\widehat{\varphi_a}\) is real-valued and uniformly bounded. For a positive real number \(V\), the distribution of the maximum of partial sums of the family of \(m\)-periodic complex-valued functions is defined by \[ \Phi_{\mathcal{F}}(V)=\frac{1}{\#\Omega_m}\,\#\left\{a\in\Omega_m : \frac{1}{\sqrt{m}}\max_{x<m}\left|\sum_{0\leq n\leq x}\varphi_a(n)\right|>V\right\}. \] In the paper under review, assuming certain conditions, the authors prove that there exists a constant \(B\) such that for all real numbers \(B\leq V\leq (N/\pi)(\log \log m - 2 \log \log \log m) - B\) one has \[ \Phi_{\mathcal{F}}(V)=\exp\left(-\exp\left(\frac{\pi}{N}V+O(1)\right)\right).
\] This general estimate covers some previously known results on the partial sums of character sums, Kloosterman sums and other families of \(\ell\)-adic trace functions. New bounds for exponential sums with a non-degenerate phase polynomial 2021-11-25T18:46:10.358925Z "Castryck, Wouter" "Nguyen, Kien Huu" Summary: We prove a recent conjecture due to \textit{R. Cluckers} and \textit{W. Veys} [Am. J. Math. 138, No. 1, 61--80 (2016; Zbl 1341.11048)] on exponential sums modulo \(p^m\) for \(m\geq 2\) in the special case where the phase polynomial \(f\) is sufficiently non-degenerate with respect to its Newton polyhedron at the origin. Our main auxiliary result is an improved bound for certain related exponential sums over finite fields. This bound can also be used to settle a conjecture of \textit{J. Denef} and \textit{K. Hoornaert} [J. Number Theory 89, No. 1, 31--64 (2001; Zbl 0994.11038)] on the candidate-leading Taylor coefficient of Igusa's local zeta function associated with a non-degenerate polynomial, at its largest non-trivial real candidate pole. The completed finite period map and Galois theory of supercongruences 2021-11-25T18:46:10.358925Z "Rosen, Julian" Summary: A period is a complex number arising as the integral of a rational function with algebraic number coefficients over a region cut out by finitely many inequalities between polynomials with rational coefficients. Although periods are typically transcendental numbers, there is a conjectural Galois theory of periods coming from the theory of motives. This paper formalizes an analogy between a class of periods called multiple zeta values and congruences for rational numbers modulo prime powers (called supercongruences). We construct an analog of the motivic period map in the setting of supercongruences and use it to define a Galois theory of supercongruences. We describe an algorithm using our period map to find and prove supercongruences, and we provide software implementing the algorithm. Higher moments of arithmetic functions in short intervals: a geometric perspective 2021-11-25T18:46:10.358925Z "Hast, Daniel Rayor" "Matei, Vlad" Summary: We study the geometry associated to the distribution of certain arithmetic functions, including the von Mangoldt function and the Möbius function, in short intervals of polynomials over a finite field \(\mathbb{F}_q\). Using the Grothendieck-Lefschetz trace formula, we reinterpret each moment of these distributions as a point-counting problem on a highly singular complete intersection variety. We compute part of the \(\ell\)-adic cohomology of these varieties, corresponding to an asymptotic bound on each moment for fixed degree \(n\) in the limit as \(q \to \infty\). The results of this paper can be viewed as a geometric explanation for asymptotic results that can be proved using analytic number theory over function fields. Jacobians of hyperelliptic curves over \(\mathbb{Z}_{n}\) and factorization of \(n\) 2021-11-25T18:46:10.358925Z "Dryło, Robert" "Pomykała, Jacek" Summary: E. Bach showed that factorization of an integer \(n\) can be reduced in probabilistic polynomial time to the problem of computing exponents of elements in \(\mathbb{Z}_n^\ast\) (in particular the group order of \(\mathbb{Z}_n^\ast\)). It is also known that factorization of a square-free integer \(n\) can be reduced to the problem of computing the group order of an elliptic curve \(E/\mathbb{Z}_n\).
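To illustrate the flavour of Bach's reduction (a standard argument, not taken from the paper under review): given a multiple \(E\) of the exponent of \(\mathbb{Z}_n^\ast\), one can split an odd composite \(n\) by hunting for a nontrivial square root of \(1\) modulo \(n\), exactly as in the Miller-Rabin test. The following minimal sketch (in Python; the function name and the toy values are ours) shows the idea. \begin{verbatim}
from math import gcd
from random import randrange

def factor_from_exponent(n, E, tries=64):
    # Split an odd composite n, given E with a^E = 1 (mod n) for every
    # unit a: strip the powers of 2 from E, then search for a nontrivial
    # square root of 1 modulo n, as in the Miller-Rabin test.
    t = E
    while t % 2 == 0:
        t //= 2
    for _ in range(tries):
        a = randrange(2, n - 1)
        g = gcd(a, n)
        if g > 1:
            return g                   # lucky: a already shares a factor
        x = pow(a, t, n)
        while x != 1:
            y = pow(x, 2, n)
            if y == 1 and x != n - 1:  # x is a nontrivial sqrt of 1
                return gcd(x - 1, n)
            x = y
    return None                        # may fail, e.g. for prime powers

# n = 91 = 7 * 13; the exponent of (Z/91)* divides lcm(6, 12) = 12.
print(factor_from_exponent(91, 12))    # prints 7 or 13
\end{verbatim}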
In this paper we describe the analogous reduction for computing the orders of Jacobians of hyperelliptic curves \(C\) over \(\mathbb{Z}_n\), using the Mumford representation of divisor classes and Cantor's algorithm for addition. These reductions are based on the group structure of the Jacobian. We also propose another reduction of factorization, to the problem of determining the number of points \(|C(\mathbb{Z}_n)|\), which makes use of elementary properties of twists of hyperelliptic curves. Counting points on superelliptic curves in average polynomial time 2021-11-25T18:46:10.358925Z "Sutherland, Andrew V." Summary: We describe the practical implementation of an average polynomial-time algorithm for counting points on superelliptic curves defined over \(\mathbb{Q}\) that is substantially faster than previous approaches. Our algorithm takes as input a superelliptic curve \(y^m= f(x)\) with \(m\ge 2\) and \(f\in\mathbb{Z}[x]\) any squarefree polynomial of degree \(d\ge 3\), along with a positive integer \(N\). It can compute \(\#X(\mathbb{F}_p)\) for all \(p\le N\) not dividing \(m\text{lc}(f)\text{disc}(f)\) in time \(O(md^3N\log^3N\log\log N)\). It achieves this by computing the trace of the Cartier-Manin matrix of reductions of \(X\). We can also compute the Cartier-Manin matrix itself, which determines the \(p\)-rank of the Jacobian of \(X\) and the numerator of its zeta function modulo \(p\). For the entire collection see [Zbl 1452.11005]. Of limit key polynomials 2021-11-25T18:46:10.358925Z "Alberich-Carramiñana, Maria" "Boix, Alberto F. F." "Fernández, Julio" "Guàrdia, Jordi" "Nart, Enric" "Roé, Joaquim" Let \(K\) be a field and \(v\) a valuation on the polynomial ring \(K[x]\), with value group \(\Gamma_v\). For each \(\gamma\in \Gamma_v\), we have the following abelian groups: \(\mathcal{P}_{\gamma}^+=\{ g\in K[x] : v(g)>\gamma\}\subset\mathcal{P}_{\gamma}=\{ g\in K[x] : v(g)\geq\gamma\}\). The graded algebra \(gr_v(K[x])=\oplus_{\gamma\in\Gamma_v}\mathcal{P}_{\gamma}/ \mathcal{P}_{\gamma}^+\) is an integral domain. A MacLane-Vaquié (MLV) key polynomial for \(v\) is a monic polynomial \(\phi\in K[x]\) whose initial term generates a prime ideal in \(gr_v(K[x])\) which cannot be generated by the initial term of a polynomial of smaller degree. The abstract key polynomials for \(v\) are defined in a technical way. In the paper under review, the authors try to find relations between the MLV key polynomials for valuations \(\mu\leq v\) and the abstract key polynomials for \(v\). Generalized \(F\)-signatures of Hibi rings 2021-11-25T18:46:10.358925Z "Higashitani, Akihiro" "Nakajima, Yusuke" Let \(R\) be a \(d\)-dimensional Noetherian ring of characteristic \(p>0\). \(R\) is said to have FFRT (finite \(F\)-representation type) if there is a finite set of isomorphism classes of finitely generated indecomposable modules \(\{M_0, \ldots, M_n\}\) such that for any \(e \in \mathbb{N}\) there are \(c_{i,e} \geq 0\) such that \[R^{1/p^e} \cong M_0^{\oplus c_{0,e}}\oplus M_1^{\oplus c_{1,e}}\oplus \cdots \oplus M_n^{\oplus c_{n,e}}.\] The generalized \(F\)-signature of \(M_i\) with respect to \(R\) is \(s(M_i,R):=\underset{e \rightarrow \infty}\lim\displaystyle\frac{c_{i,e}}{p^{ed}}\). A Hibi ring is a special type of toric ring defined via a poset. For toric rings \(R\) of characteristic \(p\), it is known that \(R\) has FFRT and that the indecomposable modules appearing in the decompositions of \(R^{1/p^e}\) are the conical divisors of \(R\).
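As a sanity check on these definitions (a standard example, not one of the paper's results): if \(R=k[x_1,\dots,x_d]\) is a polynomial ring over a perfect field \(k\) of characteristic \(p\), then \(R^{1/p^e}\) is free with basis the monomials \(x^{a/p^e}\), \(0\le a_i<p^e\), so \[R^{1/p^e}\cong R^{\oplus p^{ed}},\qquad s(R,R)=\lim_{e\to\infty}\frac{p^{ed}}{p^{ed}}=1,\] i.e.\ the only indecomposable summand is \(R\) itself and its generalized \(F\)-signature is \(1\).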
The goal of this nice paper is to determine the generalized \(F\)-signatures for the conical divisors of a Hibi ring. The main theorem determines the generalized \(F\)-signature for a conical divisor of a Segre product of polynomial rings of dimension \(d\), which is a Hibi ring, in terms of the number of elements of the symmetric group on a set of \(d\) elements with certain descent properties. The authors claim that the methods used to prove this result can also be used to determine the generalized \(F\)-signature for a conical divisor for other Hibi rings; their running example of a Hibi ring which is not a Segre product provides an illustration of this claim. Monomial generators of complete planar ideals 2021-11-25T18:46:10.358925Z "Alberich-Carramiñana, Maria" "Àlvarez Montaner, Josep" "Blanco, Guillem" Let \((X,O)\) be a germ of a smooth complex surface and \(\mathcal{O}_{X,O}\) the ring of germs of holomorphic functions in a neighbourhood of \(O\), and let \(\mathfrak{m}\) be the maximal ideal at \(O\). Let \(\pi:X'\rightarrow X\) be a proper birational morphism that can be achieved as a sequence of blow-ups at points. Given an effective \(\mathbb{Z}\)-divisor \(D\) in \(X'\) we may consider its associated ideal \(\pi_*\mathcal{O}_{X'}(-D)\), whose stalk at \(O\) is denoted by \(H_D\). Ideals of this type are complete ideals of \(\mathcal{O}_{X,O}\). Among the class of divisors defining the same complete ideal, we may find a unique maximal representative, which has the property of being antinef. Zariski showed that the above correspondence is an isomorphism of semigroups between the set of complete \(\mathfrak{m}\)-primary ideals and the set of antinef divisors with exceptional support. In the present paper the authors make this correspondence computationally explicit: given a proper birational morphism \(\pi:X'\rightarrow X\) and an antinef divisor \(D\) in \(X'\), they provide an algorithm that gives a system of generators of the ideal \(H_D\). This algorithm also captures the topological type of \(D\). Applying the algorithm, the authors provide a method to compute the integral closure of any ideal \(\mathfrak{a}\subseteq \mathcal{O}_{X,O}\). They apply these results to planar ideals, multiplier ideals and a family of complete ideals described by valuative conditions given by the intersection multiplicity of the elements of \(\mathcal{O}_{X,O}\) with a fixed germ of plane curve. The algorithms developed in the paper have been implemented in the computer algebra system \verb|Magma|. Symbolic powers of codimension two Cohen-Macaulay ideals 2021-11-25T18:46:10.358925Z "Cooper, Susan" "Fatabbi, Giuliana" "Guardo, Elena" "Lorenzini, Anna" "Migliore, Juan" "Nagel, Uwe" "Seceleanu, Alexandra" "Szpond, Justyna" "Tuyl, Adam Van" Let \(X\) be a codimension two arithmetically Cohen-Macaulay scheme in \(\mathbb{P}^n\) and \(I_X\) its defining ideal. The authors consider the problem of equality between the ordinary and symbolic powers of \(I_X\), that is, whether \(I_X^m=I_X^{(m)}\) for all \(m\geq1\). They survey known results about this equality and extend some of them. They give necessary and sufficient conditions for the above equality, in terms of the number of generators of \(I_X\), for the case of codimension three arithmetically Gorenstein schemes that are locally complete intersections. They also examine the importance of the hypotheses in the presented characterization by dropping some of them and analyzing what happens.
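A classical example (recalled here for orientation; it is not one of the paper's results) shows that the equality can genuinely fail even for arithmetically Cohen-Macaulay schemes of codimension two: for the ideal of the three coordinate points of \(\mathbb{P}^2\), \[I=(xy,\,xz,\,yz)\subset k[x,y,z],\qquad xyz\in I^{(2)}\setminus I^{2},\] since \(xyz\) vanishes to order two at each of the three points, while \(I^{2}\) is generated in degree four.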
At the end of the paper, they consider arithmetically Cohen-Macaulay sets of points in \(\mathbb{P}^1\times\mathbb{P}^1\) and give new short proofs of known results. Explicit Pieri inclusions 2021-11-25T18:46:10.358925Z "Hunziker, Markus" "Miller, John A." "Sepanski, Mark" Summary: By the Pieri rule, the tensor product of an exterior power and a finite-dimensional irreducible representation of a general linear group has a multiplicity-free decomposition. The embeddings of the constituents are called Pieri inclusions and were first studied by \textit{J. Weyman} [Schur functors and resolutions of minors. Brandeis University, Waltham USA (PhD Thesis) (1980)] and described explicitly by \textit{P. J. Olver} [``Differential hyperforms I'', University of Minnesota, Mathematics Report, 82--101 (1980)]. More recently, these maps have appeared in the work of \textit{D. Eisenbud} et al. [Ann. Inst. Fourier 61, No. 3, 905--926 (2011; Zbl 1239.13023)] and of \textit{S. V. Sam} [J. Softw. Algebra Geom. 1, 5--10 (2009; Zbl 1311.13039)] and Weyman to compute pure free resolutions for classical groups. In this paper, we give a new closed form, non-recursive description of Pieri inclusions. For partitions with a bounded number of distinct parts, the resulting algorithm has polynomial time complexity whereas the previously known algorithm has exponential time complexity. Virtual resolutions of monomial ideals on toric varieties 2021-11-25T18:46:10.358925Z "Yang, Jay" Given a smooth toric variety \(X=X(\Sigma)\) and a \(\mathrm{Pic}(X)\)-graded module \(M\), a free complex \(F\) of graded \(\mathrm{k}[\Sigma]\)-modules is a virtual resolution of \(M\) if the corresponding complex \(\widetilde{F}\) of vector bundles on \(X\) is a resolution of \(\widetilde{M}\). In the paper under review, the author uses cellular resolutions of monomial ideals to prove an analog of Hilbert's syzygy theorem for virtual resolutions of monomial ideals on smooth toric varieties. Connected sums of graded Artinian Gorenstein algebras and Lefschetz properties 2021-11-25T18:46:10.358925Z "Iarrobino, Anthony" "McDaniel, Chris" "Seceleanu, Alexandra" Let \(A\) and \(B\) be graded Artinian Gorenstein (AG) algebras with the same socle degree, \(d\). Let \(T\) be an AG algebra of socle degree \(k<d\). Suppose that there are surjective maps \(\pi_A : A \rightarrow T\) and \(\pi_B: B \rightarrow T\). The connected sum algebra \(A \#_T B\) is a certain quotient of the fibered product \(A \times_T B\). The connected sum of two AG algebras is again an AG algebra. In this paper the authors first give two alternative descriptions of this construction, including a careful study of how it relates to Macaulay-Matlis duality. They also show that if \(A\) and \(B\) are graded AG algebras satisfying the strong Lefschetz property (SLP), then for \(T = \mathbb F\) (the ground field) the connected sum also has the SLP. This is not true for a general choice of \(T\). However, they also show that connected sums do retain the weak Lefschetz property (WLP) to some extent. Ideals modulo a prime 2021-11-25T18:46:10.358925Z "Abbott, John" "Bigatti, Anna Maria" "Robbiano, Lorenzo" The present paper deals with the problem of reducing an ideal modulo \(p\), i.e. relating an ideal \(I\) in the polynomial ring \(\mathbb{Q}[x_1,\dots,x_n]\) to a corresponding ideal in \(\mathbb{F}_p[x_1,\dots,x_n]\), where \(p\) is a prime number. The authors define a notion of \(\sigma\)-good prime, where \(\sigma\) is a term ordering, and relate it to other similar notions in the literature.
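The phenomenon of a bad prime is easy to see in a toy computation. The following sketch is ours, not the authors'; it uses SymPy and assumes that its \verb|groebner| routine accepts the \verb|modulus| option for coefficients in \(\mathbb{F}_p\). \begin{verbatim}
from sympy import symbols, groebner

x, y = symbols('x y')
F = [x + 3*y, x - 3*y]  # over Q the ideal is (x, y): (F[0] - F[1])/6 = y

G_Q = groebner(F, x, y, order='lex')             # basis [x, y] over Q
G_5 = groebner(F, x, y, order='lex', modulus=5)  # basis [x, y]: 5 is good
G_3 = groebner(F, x, y, order='lex', modulus=3)  # basis [x]: 3 is bad,
                                                 # since both generators
                                                 # reduce to x mod 3 and
                                                 # the leading-term ideal
                                                 # changes
print(G_Q, G_5, G_3)
\end{verbatim} Comparing the leading-term ideal of the basis computed over \(\mathbb{F}_p\) with the reduction of the basis computed over \(\mathbb{Q}\) is exactly the kind of check that detects a bad prime.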
Furthermore, the paper introduces a new invariant called the universal denominator, which is independent of the term ordering and allows one to show that all but finitely many primes are good for \(I\) (see Definition 2.4). The methods in the paper make it easy to detect bad primes, a key feature in modular methods (Theorem 4.1 and Corollary 4.2). The paper includes practical applications to modular computations of Gröbner bases, together with examples of computations using the computer algebra systems \verb|CoCoA| and \verb|SINGULAR|. Saturations of subalgebras, SAGBI bases, and U-invariants 2021-11-25T18:46:10.358925Z "Bigatti, Anna Maria" "Robbiano, Lorenzo" Let \(R=K[x_1,\dots ,x_n]\) and let \(F\) be a (not necessarily finite) subset of \(R\). The subalgebra of \(R\) generated by \(F\) is denoted by \(K[F]\). In analogy with the notion of Gröbner bases for ideals of \(R\), one can define the notion of a SAGBI basis for \(K[F]\) (see e.g. the paper of the second author and \textit{M. Sweedler} [Lect. Notes Math. 1430, 61--87 (1990; Zbl 0725.13013)], which is regarded as a pioneering work). Let \(S\) be a \(K\)-subalgebra of the polynomial ring \(R\), and let \(0 \ne g\in S\). We denote the set \(\bigcup_{i=0}^\infty \{ f \in R \ | \ g^i f \in S\}\) by \(S : g^\infty\). The problem that the authors address in this paper is as follows: given polynomials \(g_1,\dots, g_r \in R\), let \(S= K[g_1,\dots, g_r]\) and \(0\ne g \in S\); compute a set of generators for \(S : g^\infty\). In the first part of the paper, an algorithm is presented that computes a set of generators for \(S : g^\infty\); it terminates if and only if \(S : g^\infty\) is finitely generated. In the second part of the paper, the authors consider the case where \(S\) is graded. They show that the two operations of computing a SAGBI basis for \(S\) and computing a set of generators for \(S : g^\infty\) commute, and this leads to nice algorithms for computing with \(S : g^\infty\). Coisotropic hypersurfaces in Grassmannians 2021-11-25T18:46:10.358925Z "Kohn, Kathlén" This paper studies the so-called higher associated hypersurfaces of a projective variety via the notion of coisotropy. For a \(k\)-dimensional projective variety \(X\) in \(\mathbb{P}^n\), the \(i\)-th associated hypersurface of \(X\) consists of (the Zariski closure of) all \((n-k-1+i)\)-dimensional linear spaces in \(\mathbb{P}^n\) that meet \(X\) at a smooth point non-transversely; it is a subvariety of a Grassmannian. Historically, the cases \(i = 0\) and \(i=1\) have been studied as the Chow and Hurwitz form of \(X\), respectively. A main result of this paper is a new and direct proof of a characterization (due originally to Gel'fand, Kapranov and Zelevinsky) of such hypersurfaces in the Grassmannian. Namely, a hypersurface in the Grassmannian is the associated hypersurface of some (irreducible) projective variety iff it is coisotropic, i.e. every normal space at a smooth point of the hypersurface is spanned by rank 1 homomorphisms. Since the notion of coisotropy does not depend on the underlying projective variety, this provides an intrinsic description of all higher associated hypersurfaces (hence the term coisotropic hypersurfaces). In addition, many other results on coisotropic hypersurfaces are given: e.g. the coisotropic hypersurfaces of the projective dual of \(X\) are the reverse of those of \(X\), and the degrees of these are precisely the polar degrees of \(X\).
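The simplest instance may help fix ideas (a classical example, not from the paper): for \(X=L\) a line in \(\mathbb{P}^3\) (so \(k=1\), \(n=3\), \(i=0\)), the \(0\)-th associated hypersurface consists of all lines meeting \(L\). If \(p_{ij}\) are the Plücker coordinates of \(L\) and \(q_{ij}\) those of a variable line, this hypersurface in the Grassmannian of lines is cut out by the bilinear incidence form \[p_{01}q_{23}-p_{02}q_{13}+p_{03}q_{12}+p_{12}q_{03}-p_{13}q_{02}+p_{23}q_{01}=0,\] which is the Chow form of \(L\).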
It is also shown that hyperdeterminants are precisely the coisotropic hypersurfaces associated to Segre varieties. Finally, equations for the Cayley variety of all coisotropic forms of a given degree are given, inside Grassmannians of lines. The author has also written a Macaulay2 package for explicit computations with coisotropic hypersurfaces. Parabolic semi-orthogonal decompositions and Kummer flat invariants of log schemes 2021-11-25T18:46:10.358925Z "Scherotzke, Sarah" "Sibilla, Nicolò" "Talpo, Mattia" Summary: We construct semi-orthogonal decompositions on triangulated categories of parabolic sheaves on certain kinds of logarithmic schemes. This provides a categorification of the decomposition theorems in Kummer flat $K$-theory due to \textit{K. Hagihara} [\(K\)-Theory 29, No. 2, 75--99 (2003; Zbl 1038.19002); Doc. Math. 21, 1345--1396 (2016; Zbl 1357.19001)] and \textit{W. Nizioł} [ibid. 13, 505--551 (2008; Zbl 1159.19003)]. Our techniques allow us to generalize Hagihara and Nizioł's results to a much larger class of invariants in addition to $K$-theory, and also to extend them to more general logarithmic stacks. Maps from Feigin and Odesskii's elliptic algebras to twisted homogeneous coordinate rings 2021-11-25T18:46:10.358925Z "Chirvasitu, Alex" "Kanda, Ryo" "Smith, S. Paul" Summary: The elliptic algebras in the title are connected graded \(\mathbb{C}\)-algebras, denoted \(Q_{n,k}(E,\tau)\), depending on a pair of relatively prime integers \(n>k\ge 1\), an elliptic curve \(E\) and a point \(\tau \in E\). This paper examines a canonical homomorphism from \(Q_{n,k}(E,\tau)\) to the twisted homogeneous coordinate ring \(B(X_{n/k},\sigma',\mathcal{L}'_{n/k})\) on the characteristic variety \(X_{n/k}\) for \(Q_{n,k}(E,\tau)\). When \(X_{n/k}\) is isomorphic to \(E^g\) or the symmetric power \(S^gE\), we show that the homomorphism \(Q_{n,k}(E,\tau ) \to B(X_{n/k},\sigma',\mathcal{L}'_{n/k})\) is surjective, the relations for \(B(X_{n/k},\sigma',\mathcal{L}'_{n/k})\) are generated in degrees \(\le 3\) and the noncommutative scheme \(\text{Proj}_{nc}(Q_{n,k}(E,\tau))\) has a closed subvariety that is isomorphic to \(E^g\) or \(S^gE\), respectively. When \(X_{n/k}=E^g\) and \(\tau =0\), the results about \(B(X_{n/k},\sigma',\mathcal{L}'_{n/k})\) show that the morphism \(\Phi_{|\mathcal{L}_{n/k}|}:E^g \to \mathbb{P}^{n-1}\) embeds \(E^g\) as a projectively normal subvariety that is a scheme-theoretic intersection of quadric and cubic hypersurfaces. Classification of del Pezzo orders with canonical singularities 2021-11-25T18:46:10.358925Z "Nasr, Amir" Summary: We classify del Pezzo non-commutative surfaces that are finite over their centres and have no worse than canonical singularities. Using the minimal model program, we introduce the minimal models of such surfaces. We first classify the minimal models and then give the classification of these surfaces in general. This presents a complementary result and method to the classification of del Pezzo orders over projective surfaces given by \textit{D. Chan} and \textit{R. S. Kulkarni} [Adv. Math. 173, No. 1, 144--177 (2003; Zbl 1051.14005)]. Formal moduli problems and formal derived stacks 2021-11-25T18:46:10.358925Z "Calaque, Damien" "Grivaux, Julien" Summary: This paper presents a survey on formal moduli problems. It starts with an introduction to pointed formal moduli problems and a sketch of the proof of a theorem (independently proved by \textit{J. P. Pridham} [Adv. Math. 224, No. 3, 772--826 (2010; Zbl 1195.14012)] and \textit{J.
Lurie} [``Derived algebraic geometry X: formal moduli problems'', (2011)]) which gives a precise mathematical formulation of Drinfeld's derived deformation theory philosophy. This theorem provides a correspondence between formal moduli problems and differential graded Lie algebras. The second part deals with Lurie's general theory of deformation contexts, which we present in a slightly different way from the original paper, emphasizing the (more symmetric) notion of Koszul duality contexts and morphisms thereof. In the third part, we explain how to apply this machinery to the case of non-split formal moduli problems under a given derived affine scheme; this situation has been dealt with recently by \textit{J. Nuiten} [Adv. Math. 354, Article ID 106750, 63 p. (2019; Zbl 1433.14007)], and requires replacing differential graded Lie algebras with differential graded Lie algebroids. In the last part, we globalize this to the more general setting of formal thickenings of derived stacks, and suggest an alternative approach to results of \textit{D. Gaitsgory} and \textit{N. Rozenblyum} [A study in derived algebraic geometry. Volume I: Correspondences and duality. Providence, RI: American Mathematical Society (AMS) (2017; Zbl 1408.14001)]. For the entire collection see [Zbl 1471.14005]. Introductory topics in derived algebraic geometry 2021-11-25T18:46:10.358925Z "Pantev, Tony" "Vezzosi, Gabriele" Summary: We give a quick introduction to derived algebraic geometry (DAG) sampling basic constructions and techniques. We discuss affine derived schemes, derived algebraic stacks, and the Artin-Lurie representability theorem. Through the example of deformations of smooth and proper schemes, we explain how DAG sheds light on classical deformation theory. In the last two sections, we introduce differential forms on derived stacks, and then specialize to shifted symplectic forms, giving the main existence theorems proved in \textit{T. Pantev} et al. [Publ. Math., Inst. Hautes Étud. Sci. 117, 271--328 (2013; Zbl 1328.14027)]. For the entire collection see [Zbl 1471.14005]. Characteristic classes of affine varieties and Plücker formulas for affine morphisms 2021-11-25T18:46:10.358925Z "Esterov, Alexander" Summary: An enumerative problem on a variety \(V\) is usually solved by reduction to intersection theory in the cohomology of a compactification of \(V\). However, if the problem is invariant under a ``nice'' group action on \(V\) (so that \(V\) is spherical), then many authors suggested a better home for intersection theory: the direct limit of the cohomology rings of all equivariant compactifications of \(V\). We call this limit the affine cohomology of \(V\) and construct affine characteristic classes of subvarieties of a complex torus, taking values in the affine cohomology of the torus. \par This allows us to make the first steps in computing affine Thom polynomials. Classical Thom polynomials count how many fibers of a generic proper map of a smooth variety have a prescribed collection of singularities, and our affine version addresses the same question for generic polynomial maps of affine algebraic varieties.
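For orientation, we recall the classical Plücker formulas referred to below (standard facts, not results of the paper): for a smooth plane curve \(C\subset\mathbb{P}^2\) of degree \(d\ge 2\), the projectively dual curve \(C^{\vee}\) has \[\deg C^{\vee}=d(d-1),\qquad \kappa=3d(d-2),\qquad \delta=\tfrac{1}{2}\,d(d-2)(d^{2}-9),\] where \(\kappa\) is the number of cusps of \(C^{\vee}\) (coming from the inflection points of \(C\)) and \(\delta\) the number of its nodes (coming from the bitangents of \(C\)); for \(d=4\) this gives degree \(12\), \(24\) cusps and \(28\) nodes.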
The notion of affine Thom polynomials is also motivated by the development of an intersection-theoretic approach to tropical correspondence theorems: they can be reduced to the computation of affine Thom polynomials, because the fundamental class of a variety in the affine cohomology is encoded by the tropical fan of this variety. \par The first concrete answer that we obtain is the affine version of what were, historically speaking, the first three Thom polynomials--the Plücker formulas for the degree and the number of cusps and nodes of a projectively dual curve. This, in particular, characterizes toric varieties whose projective dual is a hypersurface, computes the tropical fan of the variety of double tangent hyperplanes to a toric variety, and describes the Newton polytope of the hypersurface of non-Morse polynomials of a given degree. We also make a conjecture on the general form of affine Thom polynomials; a key ingredient is the \(n\)-ary fan, generalizing the secondary polytope. On the effective freeness of the direct images of pluricanonical bundles 2021-11-25T18:46:10.358925Z "Dutta, Yajnaseni" Summary: We give effective bounds on the number of twists by ample line bundles needed for global generation of pushforwards of log-pluricanonical bundles on klt pairs. This gives a partial answer to a conjecture proposed by \textit{M. Popa} and \textit{C. Schnell} [Algebra Number Theory 8, No. 9, 2273--2295 (2014; Zbl 1319.14022)]. We prove two types of statements: first, more in the spirit of the general conjecture, we show generic global generation with the predicted bound when the dimension of the variety is less than or equal to 4 and, more generally, with a quadratic Angehrn-Siu type bound. Secondly, assuming that the relative canonical bundle is relatively semi-ample, we make a very precise statement. In particular, when the morphism is smooth, it solves the conjecture with the same bounds, for certain pluricanonical bundles. All secant varieties of the Chow variety are nondefective for cubics and quaternary forms 2021-11-25T18:46:10.358925Z "Torrance, Douglas A." "Vannieuwenhoven, Nick" Let \(f\in S^d \mathbb C^{n+1}\) be a homogeneous polynomial of degree \(d\) in \(n+1\) variables. The Chow rank of \(f\) is the minimal integer \(s\) such that \(f\) may be written as \[ f = \ell_{1,1}\cdots \ell_{1,d} + \cdots + \ell_{s,1}\cdots \ell_{s,d}, \] where the \(\ell_{i,j}\) are linear forms. This is an important instance of an additive decomposition of a tensor. Tensor decompositions are by now a large field with deep geometric and algebraic roots, and they possess a vast number of applications in contexts such as complexity theory, information theory, and machine learning, among others. One geometric feature of the subject arises when one asks for the Chow rank of a generic \(f\in S^d \mathbb C^{n+1}\). Let \(\mathcal{C}_{d,n}\subset \mathbb P^{\binom{n+d}{d}-1}\) be the projective variety parameterizing products of linear forms in \(S^d \mathbb C^{n+1}\). The variety \(\mathcal{C}_{d,n}\) is called the \textit{Chow variety}. Computing the Chow rank of a generic \(f\in S^d \mathbb C^{n+1}\) is equivalent to finding the smallest \(s\) such that \(\sigma_s(\mathcal{C}_{d,n}) = \mathbb P^{\binom{n+d}{d}-1}\), where \(\sigma_s(\mathcal{C}_{d,n})\) is the \(s\)-th secant variety of the Chow variety. The topic of secant varieties is a delightful chapter of classical algebraic geometry that has attracted more attention in the last decades, partly because of its natural role in additive decompositions and applications thereof.
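A quick parameter count (standard, recalled here for orientation) explains what nondefectivity buys: since \(\dim \mathcal{C}_{d,n}=dn\), the secant variety \(\sigma_s(\mathcal{C}_{d,n})\) has expected dimension \[\min\{\,s(dn+1)-1,\;N\,\},\qquad N=\binom{n+d}{d}-1,\] so if all secant varieties have the expected dimension, the generic Chow rank equals \(\lceil \binom{n+d}{d}/(dn+1)\rceil\); for ternary cubics (\(n=2\), \(d=3\)) this gives \(\lceil 10/7\rceil = 2\).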
This nice paper is a contribution to determining the dimensions of secant varieties of Chow varieties. The main result is that all secant varieties \(\sigma_s(\mathcal{C}_{d,n})\) have the expected dimension for: \begin{itemize} \item any \(n\) and \(d=3\), \item \(n=3\) and any \(d\). \end{itemize} The methods are very combinatorial and rely on a lattice construction generalising a method due to Brambilla and Ottaviani. The base cases of the inductions are treated with a computer-assisted proof. Erratum to: ``Totaro's question on zero-cycles on torsors'' 2021-11-25T18:46:10.358925Z "Gordon-Sarney, R." "Suresh, V." Erratum to the authors' paper [ibid. 167, No. 2, 385--395 (2018; Zbl 1383.14003)]. Néron models of intermediate Jacobians associated to moduli spaces 2021-11-25T18:46:10.358925Z "Dan, Ananyo" "Kaur, Inder" Given a family \(X\) of smooth curves degenerating to a one-nodal curve \(X_0\), one can consider the Gieseker moduli space \(\mathcal{G}_{X_t}(2,\mathcal{L}_t)\) of semistable vector bundles of rank two and determinant \(\mathcal{L}_t\) of odd degree, which also varies in a family. For every smooth curve \(X_t\) in the family, one can consider the intermediate Jacobian \(J^i(\mathcal{G}_{X_t}(2,\mathcal{L}_t))\), and all these intermediate Jacobians fit together in one analytic family. To extend this family over \(t=0\), different Néron models, constructed by Clemens, Saito, Schnell, Zucker and Green-Griffiths-Kerr, are available. In this paper, the authors prove that all these Néron models coincide; moreover, they give a description of the special fiber and prove that it is a semi-abelian variety in some cases. In particular, in the case of the second intermediate Jacobian, the special fiber is isomorphic to the second generalized intermediate Jacobian of \(\mathcal{G}_{X_0}(2,\mathcal{L}_0)\). The integral Hodge conjecture for 3-folds of Kodaira dimension zero 2021-11-25T18:46:10.358925Z "Totaro, Burt" The paper concerns the proof of the integral Hodge conjecture for some special 3-folds of Kodaira dimension zero whose canonical bundle has nontrivial sections. The result generalizes earlier work of Voisin and Grabowski [\textit{C. Grabowski}, On the integral Hodge conjecture for 3-folds. Duke University (PhD Thesis) (2004)]. The author also proves a similar theorem for the integral Tate conjecture. The Hodge conjecture is true for all smooth complex projective 3-folds by the Lefschetz (1,1)-theorem and the hard Lefschetz theorem. The integral Hodge conjecture is the stronger statement that every element of \(H^{2i}(X, \mathbb{Z})\) whose image in \(H^{2i}(X, \mathbb{C})\) is of type \((i,i)\) is the class of an algebraic cycle of codimension \(i\). The Hodge conjecture is the analogous statement for rational cohomology classes. The integral Tate conjecture for a smooth projective variety \(X\) over the separable closure \(F\) of a finitely generated field \(k\) states that, for \(k\) a finitely generated field of definition of \(X\) with separable closure \(F\) and \(l\) a prime invertible in \(k\), every element of \(H^{2j}(X, \mathbb{Z}_l(j))\) fixed by some open subgroup of \(\mathrm{Gal}(F/k)\) is the class of an algebraic cycle over \(F\) with \(\mathbb{Z}_l\) coefficients. The Tate conjecture is the analogous statement over \(\mathbb{Q}_l\). Section 2 presents various examples of varieties for which the setup of the paper holds. Let \(X\) be a smooth projective 3-fold of Kodaira dimension zero. The first example is the minimal model of \(X\).
In this case, Proposition 2.1 says that either \(H^1(X, \mathcal{O})=H^1(Y, \mathcal{O})\) or \(Y\) is smooth. Moreover, the integral Hodge conjecture for smooth 3-folds is a birationally invariant property [\textit{C. Voisin}, Jpn. J. Math. (3) 2, No. 2, 261--296 (2007; Zbl 1159.14005)]. In order to prove the integral Hodge conjecture for \(X\) we can assume either \(H^1(X, \mathcal{O})=0\) or else that \(K_X\) is trivial. The rest follows from the result of \textit{A. Höring} and \textit{C. Voisin} [Pure Appl. Math. Q. 7, No. 4, 1371--1393 (2011; Zbl 1316.14022)] together with Lemma 3.1 in the text. Examples of this are terminal isolated hypersurface singularities \(xy+f(z,w)=0\) and \(X\) any resolution of a terminal quintic 3-fold. The minimal model exists by [\textit{J. S. Milne}, Algebraic groups. The theory of group schemes of finite type over a field. Cambridge: Cambridge University Press (2017; Zbl 1390.14004)], that is, \(Y\) is a terminal 3-fold with \(K_Y\) nef and birational to \(X\). Terminal varieties are smooth in codimension 2, and so \(Y\) is smooth outside finitely many points. Since \(X\) has Kodaira dimension zero, the Weil divisor class \(K_Y\) is torsion by the abundance theorem for 3-folds [\textit{Y. Kawamata}, Invent. Math. 108, No. 2, 229--246 (1992; Zbl 0777.14011)]. The assumption implies that \(K_Y\) is trivial and thus corresponds to a Cartier divisor. The terminal condition implies that \(Y\) has only rational singularities [\textit{M. Reid}, Proc. Symp. Pure Math. 46, 345--414 (1987; Zbl 0634.14003)] and its dualizing sheaf is \(K_Y\). By Goresky-MacPherson, \(i_*:H_2(S, \mathbb{Z}) \to H_2(Y, \mathbb{Z})\) is surjective for any smooth ample divisor \(S\) in \(Y\). Hence \(H^2(Y, \mathbb{Q}) \to H^2(S, \mathbb{Q})\) is injective. It follows that the mixed Hodge structure on \(H^2(Y, \mathbb{Q})\) is pure [\textit{P. Deligne}, in: Proc. int. Congr. Math., Vancouver 1974, Vol. 1, 79--85 (1975; Zbl 0334.14011)] of weight 2. Writing \(H^2(S, \mathbb{Q})=H^2(Y, \mathbb{Q}) \oplus H^2(Y, \mathbb{Q})^{\perp}\) and using the Hodge decomposition together with Serre duality, one obtains \(H^0(Y,TY)=H^1(Y, \mathcal{O})\). Using this fact, the author proves that the identity component of the automorphism group of \(Y\), namely \(\mathrm{Aut}^0(Y)\), is an Abelian variety of positive dimension which preserves the singular locus of \(Y\), a locus of dimension at most zero. The positivity of the dimension of \(\mathrm{Aut}^0(Y)\) implies that the singular locus of \(Y\) must be empty. The next sort of examples are \(X=\frac{S \times E}{G}\) where \(S\) is a K3 surface and \(E\) is an elliptic curve, and \(G=(\mathbb{Z}/2)^2, (\mathbb{Z}/3)^2, (\mathbb{Z}/4)^2, \mathbb{Z}/2 \times \mathbb{Z}/4, \mathbb{Z}/2 \times \mathbb{Z}/6\) acting on \(S\). The Abelian group \(G\) acts on \(E\) by translations. Another example is \(X=\frac{S \times E}{G}\) where \(S\) is an Abelian surface. Section 3 proves two lemmas about resolutions of an isolated rational 3-fold singularity \(Y\). If \(\pi:X \to Y\) is such a resolution with SNC exceptional divisor \(D\), then Lemma 3.1 asserts that \(H_2(D, \mathbb{Z})\) is generated by classes of algebraic 1-cycles on \(D\). Lemma 3.2 asserts that if \(\pi\) is log-canonical, then \(H^i(D, \mathcal{O})=0\) for all \(i >0\). The proof uses Steenbrink type spectral sequences defined on a stratification of the NC divisor \(D\), [\textit{J. H. M. Steenbrink}, Proc. Symp. Pure Math. 40, 513--536 (1983; Zbl 0515.14003); Ann. Inst. Fourier 47, No.
5, 1367--1377 (1997; Zbl 0889.32035)]. Section 4 presents the proof of the integral Hodge conjecture for varieties \(X\) of the form discussed above, i.e., with Kodaira dimension zero and \(h^0(X, K_X) >0\). The proof proceeds dimensionwise on algebraic cycles. For divisors, the conjecture follows from the Lefschetz (1,1)-theorem; it remains to prove it for codimension 2 cycles (that is, 1-cycles) on \(X\). Using the birational invariance of the conjecture, we may take \(X \to Y\) to be a birational morphism which is an isomorphism over the smooth locus and whose exceptional divisors over the singular points are SNC. Then choose \(H\) a very ample line bundle on \(Y\) and \(S\) a smooth surface in the linear system \(|H|\), such that \(H_2(S, \mathbb{Z}) \to H_2(Y, \mathbb{Z})\) is surjective. The Hilbert scheme of smooth surfaces in \(Y\) in the homology class of \(S\) is smooth if \(H\) is sufficiently ample. Then one uses the following lemma. {Lemma 4.2.} Let \(Y\) be a terminal projective complex 3-fold. Denote by \(S_{t_0}\) the surface corresponding to a point \(t_0\) of the Hilbert scheme \(\mathcal{H}\). Let \(H_2(S_{t_0}, \mathbb{Z})_{van}:=\ker(H_2(S_{t_0}, \mathbb{Z}) \to H_2(Y, \mathbb{Z}))\). Identify \(H^2(S_{t_0}, \mathbb{Z})\) with \(H_2(S_{t_0}, \mathbb{Z})\) by Poincaré duality and let \(C\) be a non-empty open cone in \(H^2(S_{t_0}, \mathbb{R})_{van}\). Suppose there is a contractible open neighborhood \(U\) of \(t_0\) in \(\mathcal{H}\) such that every element of \(H^2(S_{t_0}, \mathbb{Z})_{van} \cap C\) becomes a Hodge class on \(S_t\) for some \(t \in U\). Then every element of \(H_2(Y, \mathbb{Z})\) whose image in \(H_2(Y, \mathbb{C})\) is in \(H_{(1,1)}(Y)\) is algebraic. The remainder of the proof of Theorem 4.1 relies on an analysis of the VHS of the family of surfaces, carried out in a series of results in Section 5. We list these results without proof, for the sake of brevity, to give an idea of them. \begin{itemize} \item Let \(Y\) be a terminal projective complex 3-fold, \(H\) a very ample line bundle on \(Y\) and \(S\) a smooth surface in the linear system \(|H|\). Suppose that there exists an element \(\lambda \in H^1(S, \Omega^1)_{van}\) such that the linear map \[ \mu_{\lambda}:H^0(S, N_{S/Y}) \to H^2(S, \mathcal{O})_{van} \] is surjective. Then there is a non-empty open cone \(C\) in \(H^2(S_{t_0}, \mathbb{R})_{van}\) and a contractible open neighborhood \(U\) of \(t_0\) in \(\mathcal{H}\) such that every element of \(H^2(S_{t_0}, \mathbb{Z})_{van} \cap C\) becomes a Hodge class on \(S_t\) for some \(t \in U\) [Corollary 5.1]. \item The map \(\mu_{\lambda}\) is surjective if and only if the map \(\tau_1:F^1H_{van}^2 \to H^2(S, \mathbb{C})_{van}\), which is the restriction of the map \(\tau:H_{van}^2 \to H^2(S, \mathbb{C})_{van}\) induced from the Gauss-Manin connection, is a submersion at \(\tilde{\lambda}\), a lift of \(\lambda\) to \(H^2(S_{t_0}, \mathbb{C})_{van}\) [Lemma 5.2]. \item Proposition 5.3 is an analogue of Lemma 4.2 when \(Y\) has trivial canonical bundle. \item Let \(\mu:V \otimes V \to W\) be a symmetric bilinear form, \(q:W^* \to S^2V^*\) its dual and \(\mu_v:V \to W\) the corresponding linear map. Think of \(q\) as a linear system of quadrics in \(\mathbb{P}(V^*)\). Then the generic quadric in \(Im(q)\) is smooth if the following holds: there is no closed subvariety \(Z \subset \mathbb{P}(V^*)\) contained in the base locus of \(Im(q)\) and satisfying \(\text{rank}(\mu_v) \leq \dim Z, \ \forall v \in Z\) [Lemma 5.4]. \item Let \(Y\) be a Gorenstein projective 3-fold with isolated canonical singularities and \(H\) as above.
For each positive integer \(n\), let \(S\) be a generic surface in \(|nH|\), define \(V, V', \mu\) associated with \(S\) as above, and let \(c\) be any positive constant. Then there exists a constant \(A\) such that the sets \begin{align*} \Gamma&=\{v \in V \,|\, \text{rank}(\mu_v) \leq cn^2\}\\ \Gamma'&=\{v' \in V'\,|\,\text{rank}(\mu_{v'}) \leq cn^2\} \end{align*} both have dimension bounded by \(A\), independently of \(n\). Here \(V=H^0(S, K_S)_{van},\ V'=H^0(Y, \mathcal{O}(S))/H^0(Y, \mathcal{O})\) [Lemma 5.5]. \item Let \(Y\) be a terminal projective 3-fold with \(K_Y\) trivial, \(H\) as before, \(A\) a positive integer, and let \(S \in |nH|\) be general with \(n\) large enough (depending on \(A\)). Set \(V =H^0(S, K_S)_{van}\) and \(\mu_v:V' \to H^1(S, \Omega^1)\). Then the set \(W=\{v \in V\,|\,\text{rank}(\mu_v) <A\}\) is \(\{0\}\) [Lemma 5.6]. \item Let \(S \in |nH|\) be a smooth surface and consider the exact sequences of vector bundles \begin{align*} 0& \to \Omega_S^1(nH) \to \Omega_Y^2(2nH) \to K_S(2nH) \to 0\\ 0& \to \mathcal{O}_S \longrightarrow \Omega_Y^1(nH) \longrightarrow \Omega_S^1(nH) \to 0 \end{align*} and let \(\delta_1, \delta_2\) be the resulting boundary maps on the long exact cohomology sequences. Then the image of \(\delta_2 \circ \delta_1:H^0(S,K_S(2nH)) \to H^2(S,\mathcal{O})\) is \(H^2(S,\mathcal{O})_{van}\) for large enough \(n\) and any \(S\) [Lemma 5.7]. \end{itemize} Section 6 is devoted to proving the integral Tate conjecture in the same setup. The proof has two parts. \begin{itemize} \item Let \(X\) be a smooth projective variety over the separable closure \(k_s\) of a finitely generated field \(k\). For codimension 1 cycles on \(X\) the Tate conjecture implies the integral Tate conjecture [Lemma 6.2]. \item Let \(X\) be a smooth projective 3-fold over the algebraic closure \(\bar{k}\) of a finitely generated field \(k\) of characteristic zero. If the Tate conjecture holds for codimension 1 cycles on \(X\) and the integral Hodge conjecture holds for \(X_{\mathbb{C}}\) for some embedding \(\bar{k} \hookrightarrow \mathbb{C}\), then the integral Tate conjecture holds for \(X\) over \(\bar{k}\) [Lemma 6.3]. \end{itemize} Nonlinear traces 2021-11-25T18:46:10.358925Z "Ben-Zvi, David" "Nadler, David" Summary: We combine the theory of traces in homotopical algebra with sheaf theory in derived algebraic geometry to deduce general fixed point and character formulas. The formalism of dimension (or Hochschild homology) of a dualizable object in the context of higher algebra provides a unifying framework for classical notions such as Euler characteristics, Chern characters, and characters of group representations. Moreover, the simple functoriality properties of dimensions clarify celebrated identities and extend them to new contexts. \par We observe that it is advantageous to calculate dimensions, traces and their functoriality directly in the nonlinear geometric setting of correspondence categories, where they are directly identified with (derived versions of) loop spaces, fixed point loci and loop maps, respectively. This results in universal nonlinear versions of Grothendieck-Riemann-Roch theorems, Atiyah-Bott-Lefschetz trace formulas, and Frobenius-Weyl character formulas. We can then linearize by applying sheaf theories, such as the theories of ind-coherent sheaves and \(\mathcal{D}\)-modules constructed by \textit{D. Gaitsgory} and \textit{N. Rozenblyum} [Contemp. Math. 610, 139--251 (2014; Zbl 1316.14006)].
This recovers the familiar classical identities, in families and without any smoothness or transversality assumptions. On the other hand, the formalism also applies to higher categorical settings not captured within a linear framework, such as characters of group actions on categories. For the entire collection see [Zbl 1471.14005]. Hermitian metrics of positive holomorphic sectional curvature on fibrations 2021-11-25T18:46:10.358925Z "Chaturvedi, Ananya" "Heier, Gordon" In the article under review, the authors address the construction of Hermitian metrics with positive holomorphic sectional curvature on compact complex manifolds. The ambient space is actually the total space of a fibration (holomorphic submersion) \(\pi:X\to Y\), and it is rather natural to ask whether the existence of metrics with positive curvature both on \(Y\) and on the fibers of \(\pi\) implies the existence of such a metric on \(X\). The corresponding question was answered positively by \textit{C.-K. Cheung} [Math. Z. 201, No. 1, 105--119 (1989; Zbl 0648.53037)] in the opposite case of negative curvature. In the positive curvature case, such metrics were constructed by \textit{N. J. Hitchin} [Proc. Symp. Pure Math. 27, Part 2, 65--80 (1975; Zbl 0321.53052)] on Hirzebruch surfaces. The main result of this article is a positive answer to the above mentioned question. The proof is quite natural, although the computations are a bit involved. As explained by the authors, it is not clear whether their method can be used either in the semi-positive case or when the map \(\pi\) has singular fibres. Let us make a final remark: the metric cooked up in this article is merely a Hermitian one, even if the data we started with are Kähler. Birational geometry of moduli spaces of stable objects on Enriques surfaces 2021-11-25T18:46:10.358925Z "Beckmann, Thorsten" The moduli space of stable sheaves on a smooth projective surface is an interesting geometric object and has been studied for a long time. By using the notion of a Bridgeland stability condition on a triangulated category and its wall-crossing behaviour [\textit{T. Bridgeland}, Ann. Math. (2) 166, No. 2, 317--345 (2007; Zbl 1137.18008)], there has been a lot of progress in this direction, including the birational geometry of the moduli space when the surface is \(K3\) [\textit{A. Bayer} and \textit{E. Macrì}, Invent. Math. 198, No. 3, 505--590 (2014; Zbl 1308.14011)] or \(\mathbb{P}^2\) [\textit{C. Li, X. Zhao}, Geom. Topol. 23, No. 1, 347--426 (2019; Zbl 1456.14016)]. The paper under review continues the idea for the \textit{generic} Enriques surface. Let \(Y\) be an Enriques surface and \(\pi: \widetilde{Y} \rightarrow Y\) be the universal covering map by the \(K3\) surface \(\widetilde{Y}\). Assume that \(Y\) is generic, that is, \(\mathrm{Pic}(\widetilde{Y})=\pi^*\mathrm{Pic}(Y)\). Let \(v\) be a Mukai vector such that its pullback \(\pi^*(v)\) is primitive. The main theorem, Theorem 4.5, establishes the birational equivalence of the two moduli spaces \(M^Y_\sigma(v)\) and \(M^Y_\tau(v)\) for two generic stability conditions \(\sigma, \tau \in \mathrm{Stab}^{\dag}(Y)\) with respect to the Mukai vector \(v\). To prove Theorem 4.5, the author uses two main ideas. The first idea is to use the notion of a constant cycle subvariety. A subvariety is called a constant cycle subvariety if all its points become rationally equivalent in the ambient variety. By using the result of [\textit{A. Marian, X. Zhao}, Épijournal de Géom. Algébr., EPIGA 4, Article No. 3, 5 p.
(2020; Zbl 1442.14035)], the author shows that the image of the morphism \(\pi^*: M^Y_\sigma(v) \rightarrow M^{\widetilde{Y}}_{\widetilde{\sigma}}(\pi^*(v))\) is a constant cycle Lagrangian. Here \(\widetilde{\sigma}\) is the induced Bridgeland stability condition on \(\widetilde{Y}\) [\textit{E. Macrì} et al., J. Algebr. Geom. 18, No. 4, 605--649 (2009; Zbl 1175.14010)]. The second idea is to show that the corresponding birational map \(f: M^{\widetilde{Y}}_{\widetilde{\sigma}_+}(\pi^*(v)) \dashrightarrow M^{\widetilde{Y}}_{\widetilde{\sigma}_-}(\pi^*(v))\) on the \(K3\) surface \(\widetilde{Y}\) is \(i^*\)-equivariant. Here \(i^*\in \mathrm{Aut}(\mathrm{D}^{\mathrm{b}}(\widetilde{Y}))\) is the involution induced by the map \(\pi\). The assumption that \(Y\) is generic is used in the proof of the \(i^*\)-equivariance. The moduli spaces \(M^Y_{\sigma_\pm}(v)\) can be identified with the fixed locus of the involution \(i^*\). Since \(f\) is \(i^*\)-equivariant, its restriction to the constant cycle Lagrangian, \(f|_{M^Y_{\sigma_+}(v)}: M^Y_{\sigma_+}(v) \dashrightarrow M^Y_{\sigma_-}(v)\), gives the desired birational map. As an application of the main theorem, the author shows (in Theorem 4.7) that for an arbitrary Enriques surface \(Y\) and a primitive Mukai vector \(v\) of odd rank, \(M^Y_\sigma(v)\) is birational to some Hilbert scheme of points \(\mathrm{Hilb}^n(Y)\). As another application, the author shows (in Lemma 4.11) the existence of a global Bayer-Macrì map \(\ell: \mathrm{Stab^{\dag}(Y)} \rightarrow \mathrm{NS}(M_\sigma(v))\) [\textit{A. Bayer} and \textit{E. Macrì}, Invent. Math. 198, No. 3, 505--590 (2014; Zbl 1308.14011); \textit{W. Liu}, Kyoto J. Math. 58, No. 3, 595--621 (2018; Zbl 1412.14009)]. Moreover, the author shows that the nef and semiample divisors \(\ell_{\sigma_0,\pm}\in \mathrm{NS}(M_{\sigma_{\pm}}(v))\) are big. \textit{H. Nuer} and \textit{K. Yoshioka} [Adv. Math. 372, Article ID 107283, 118 p. (2020; Zbl 1454.14041)] obtained more general results on the birational equivalence of \(M^Y_\sigma(v)\) and \(M^Y_\tau(v)\) by a different method, without the assumptions that \(Y\) is generic and \(v\) is primitive. Secant planes of a general curve via degenerations 2021-11-25T18:46:10.358925Z "Cotterill, Ethan" "He, Xiang" "Zhang, Naizhen" In this paper, Osserman's theory of limit linear series, as a generalization of that of Eisenbud-Harris for compact type curves, is reviewed. Building on that, two constructions of a moduli space of inclusions of limit linear series are provided. Both spaces agree set-theoretically, but the second one, described as an intersection of determinantal loci of vector bundles, helps in proving a smoothing theorem for inclusions of limit linear series in special cases. Based on this, explicit formulas for counting inclusions of limit linear series, and equivalently the number of secant planes to the image of a curve under a map, are calculated. The paper ends with an example showing that the moduli of included limit linear series may have a component of unexpectedly large dimension. Reviewer's remark: In Definition 4.1, the divisor \(D\) seems undefined -- perhaps \(Y\) may have been intended? Angehrn-Siu type effective basepoint freeness for quasi-log canonical pairs 2021-11-25T18:46:10.358925Z "Liu, Haidong" Summary: We prove Angehrn-Siu type effective freeness and effective point separation for quasi-log canonical pairs. As a natural consequence, we obtain that these two results hold for semi-log canonical pairs.
One of the main ingredients of our proof is inversion of adjunction for quasi-log canonical pairs, which is established in this paper. Jet schemes of quasi-ordinary surface singularities 2021-11-25T18:46:10.358925Z "Cobo, Helena" "Mourtada, Hussein" Let \(X\) be a complex analytically irreducible quasi-ordinary (q.o.) singularity, defined by \(f\in \mathbb C\{x_1,x_2\}[z]\). It can be parametrized in the form \(x_1=x_1\), \(x_2=x_2\), \(z=\zeta(x_1,x_2)\), where \(\zeta\) is a fractional power series in \(\mathbb C\{x_1^{1/n},x_2^{1/n}\}\) for a suitable positive integer \(n\). [\textit{Y.-N. Gau}, Mem. Am. Math. Soc. 388, 109--129 (1988; Zbl 0658.14004)] has shown: a finite set of exponents in the support of the series \(\zeta\) -- they are called the characteristic exponents -- is a complete invariant of the topological type of the singularity. In the paper under review, the authors look for invariants for {\em all} types of singularities. They consider the set of jet schemes of \(X\). For \(m\in\mathbb N\), they define a functor \(F_m\colon \mathbb C\text{-Schemes}\to \text {Sets}\) which is representable by a \({\mathbb C}\)-scheme \(X_m\), the \(m\)th jet scheme. There is a canonical projection \(\pi_m \colon X_m\to X\). In Section 4, q.o.\ surfaces with one characteristic exponent are considered. The irreducible components of the \(m\)-jet schemes through the singular locus of such a surface are described in Th.\ 4.14. A graph \(\Gamma\) is constructed which represents the decomposition of \((\pi^{-1}_m(X_{\text{Sing}}))_{\text{red}}\) for every \(m\). The graph \(\Gamma\) carries the same information as the topological type of the singularity. In Section 5 these results are generalized to the general case. Minimal model program for log canonical threefolds in positive characteristic 2021-11-25T18:46:10.358925Z "Hashizume, Kenta" "Nakamura, Yusuke" "Tanaka, Hiromu" The Minimal Model Program has recently been established for klt threefold pairs defined over algebraically closed fields of characteristic \(p>5\) by work of \textit{C. D. Hacon} and \textit{C. Xu} [J. Am. Math. Soc. 28, No. 3, 711--744 (2015; Zbl 1326.14032)], \textit{P. Cascini}, \textit{H. Tanaka} and \textit{C. Xu} [Ann. Sci. Éc. Norm. Supér. (4) 48, No. 5, 1239--1272 (2015; Zbl 1408.14020)], and \textit{C. Birkar} and \textit{J. Waldron} (see [Ann. Sci. Éc. Norm. Supér. (4) 49, No. 1, 169--212 (2016; Zbl 1346.14040); Adv. Math. 313, 62--101 (2017; Zbl 1373.14019)]). Let \(k\) be a perfect field of characteristic \(p>5\) and let \((X, \Delta)\) be a three-dimensional log canonical pair, where \(\Delta\) has real coefficients. The authors show that there exists an MMP with scaling for \((X,\Delta)\) that terminates after finitely many steps. Along the way, the authors prove the cone theorem and a version of the basepoint free theorem in this general setting. Bounds for the stalks of perverse sheaves in characteristic \(p\) and a conjecture of Shende and Tsimerman 2021-11-25T18:46:10.358925Z "Sawin, Will" Summary: We prove a characteristic \(p\) analogue of a result of \textit{D. B. Massey} [Duke Math. J. 73, No. 2, 307--369 (1994; Zbl 0799.32033)] which bounds the dimensions of the stalks of a perverse sheaf in terms of certain intersection multiplicities of the characteristic cycle of that sheaf. This uses the construction of the characteristic cycle of a perverse sheaf in characteristic \(p\) by \textit{T. Saito} [Invent. Math. 207, No. 2, 597--695 (2017; Zbl 1437.14016)]. We apply this to prove a conjecture of \textit{V. Shende} and \textit{J. Tsimerman} [Duke Math. J. 166, No.
18, 3461--3504 (2017; Zbl 1426.11064)] on the Betti numbers of the intersections of two translates of theta loci in a hyperelliptic Jacobian. This implies a function field analogue of the Michel-Venkatesh mixing conjecture about the equidistribution of CM points on a product of two modular curves. Abelian arithmetic Chern-Simons theory and arithmetic linking numbers 2021-11-25T18:46:10.358925Z "Chung, Hee-Joong" "Kim, Dohyeong" "Kim, Minhyong" "Pappas, Georgios" "Park, Jeehoon" "Yoo, Hwajong" Summary: Following the method of Seifert surfaces in knot theory, we define arithmetic linking numbers and height pairings of ideals using arithmetic duality theorems, and compute them in terms of \(n\)-th power residue symbols. This formalism leads to a precise arithmetic analogue of a ``path-integral formula'' for linking numbers. Independence of \(\ell\) for the supports in the decomposition theorem 2021-11-25T18:46:10.358925Z "Sun, Shenghao" The author introduces the notion of a perverse compatible system on \(\mathbb{F}_q\)-schemes, which is a variant of a compatible system for the perverse \(t\)-structure. Its relations with the classical definition are investigated in Section 2. The main theorem of this paper is the following: the proper pushforward of a perverse compatible system consisting of direct sums of shifted semisimple perverse sheaves is again perverse compatible. The key ingredients of the proof are the existence of \(\ell\)-adic companions on smooth schemes [Theorem 2.5] and Deligne's weight theory. This main theorem gives a relative version of Gabber's result on the independence of \(\ell\) of intersection cohomology. That is, the supports of the proper pushforward of the \(\ell\)-adic intersection complex are independent of \(\ell\). At the end of this paper, the author remarks on a generalization to \(\mathbb{F}_q\)-Artin stacks. Erratum to: ``Slope filtrations'' 2021-11-25T18:46:10.358925Z "André, Yves" Corrects Example 1.2.2.(2) and Lemma 1.2.18 in [the author, ibid. 1, No. 1, 1--85 (2009; Zbl 1213.14039)] and notes that Lemma 1.2.8 should be discarded (Proposition 1.4.18, the only place where this lemma is used, is modified accordingly). \(p\)-adic Tate conjectures and abeloid varieties 2021-11-25T18:46:10.358925Z "Gregory, Oliver" "Liedtke, Christian" The authors study a conjecture of Raskind, which may be thought of as an analogue of the Tate conjecture. In order to state it, let us first fix some notation: suppose \(X/K\) is a smooth and proper variety over a local field \(K/\mathbb{Q}_p\). Let \(\bar{K}\) denote an algebraic closure of \(K\), and \(G_K\) the Galois group \(\mathrm{Gal}(\bar{K}/K)\). We first consider codimension one cycles and the cycle class map \[ c: \mathrm{NS}\otimes_{\mathbb{Z}} \mathbb{Q}_p\rightarrow H^2_{\text{ét}}(X_{\bar{K}}, \mathbb{Q}_p(1))^{G_K}, \] where \(\mathrm{NS}\) denotes the Néron-Severi group, and the target denotes the Galois invariants in (Tate-twisted) étale cohomology. For varieties over a finite field or a number field, the map \(c\) is predicted to be an isomorphism by the Tate conjecture. Although formally similar, the current setting of \(X\) over local fields is in fact rather different; here, the map \(c\) is injective but easily shown to be not surjective, and so if one wants an isomorphism it is necessary to impose further conditions on \(X\). This is what Raskind's conjecture aims to do: Conjecture (Raskind). The map \(c\) is surjective if \(X\) has totally degenerate reduction.
The phrase \textit{totally degenerate reduction} needs to be made precise, but essentially means that \(X\) has bad reduction, and the components of the special fiber (as well as their intersections, and the intersections of those, and so on) have Chow groups as simple as possible. It can be thought of as a maximal unipotent monodromy condition. Raskind's conjecture is in fact more general and deals with cycles of higher codimensions, but we will abusively refer to the above conjecture as Raskind's conjecture in this review. One key difference to note between Tate-type conjectures and, for example, the Hodge conjecture is that, while the Hodge conjecture would imply that one can pin down the \(\mathbb{Q}\)-vector space of algebraic cycles inside cohomology, the Tate conjecture only allows one to do so after tensoring with \(\mathbb{Q}_{\ell}\). This observation will be relevant in what follows. The following is one of the main results of this paper. Theorem. There exist abelian surfaces \(B/K\) for which Raskind's conjecture is false. Let us give an outline of the proof; we will be brief here since this is nicely explained in the Introduction. Using one of Fontaine's equivalences of categories from \(p\)-adic Hodge theory, one can write \[ H^2_{\text{ét}}(X_{\bar{K}}, \mathbb{Q}_p(1))^{G_K} \cong H \cap \mathrm{Fil}^1_{dR}, \] where \(H\) is a \(\mathbb{Q}_p\)-vector space (a subspace of \(H^2_{\text{log-cris}}(X)\)), and the intersection is with the Hodge filtration after comparing to de Rham cohomology. On the other hand, \(H\) has a natural \textit{rational structure}, being spanned by algebraic cycles of the special fiber, and Raskind's conjecture amounts to comparing the intersection of this rational structure with \(\mathrm{Fil}^1_{dR}\), and the a priori larger intersection \(H \cap \mathrm{Fil}^1_{dR}.\) Since one has an explicit description of \(H\) and its rational structure, this gives a strategy to prove the Theorem, and the authors explicitly find very clean counterexamples in this way. The authors also consider the analogue of Raskind's Conjecture for homomorphisms between abelian varieties and find counterexamples using the same strategy as above; they also prove positive results for abelian varieties isogenous to products of Tate elliptic curves. Throughout the paper the authors work in the more general setting of abeloid varieties. The paper under review is very well written, with the ideas clearly presented. A counterexample to an optimistic guess about étale local systems 2021-11-25T18:46:10.358925Z "Lawrence, Brian" "Li, Shizhang" From the text: Relative $p$-adic Hodge theory aims at extending known results in $p$-adic Hodge theory to a $p$-adic local system on a rigid variety. Let $X$ be a geometrically connected, quasi-compact rigid analytic variety over a $p$-adic field $K$ and let $\mathbb E$ be a $\mathbb Q_p$-local system on the étale site $X_{\mathrm{et}}$. In [Invent. Math. 207, No. 1, 291--343 (2017; Zbl 1375.14090)], \textit{R. Liu} and \textit{X. Zhu} showed that if at one point $\bar{x}\in X(\bar{K})$ the stalk $\mathbb E_{\bar{x}}$ is de Rham as a $p$-adic Galois representation, then $\mathbb E$ is a de Rham local system; in particular, the stalk $\mathbb E_{y}$ at any point $y\in X(\bar{K})$ would be de Rham as well. They noted that the analogous statements with ``de Rham'' replaced by either ``crystalline'' or ``semistable'' are both wrong. However, inspired by potential semistability of de Rham representations [\textit{L. Berger}, Invent. Math. 148, No.
2, 219--284 (2002; Zbl 1113.14016)], Liu and Zhu ask [loc. cit., Remark 1.4] if a de Rham local system $\mathbb E$ on $X$ would become semistable after pulling it back to a finite étale cover of $X$, or even after enlarging the ground field $K$ by a finite extension. While the former guess may well be true, in this paper we construct an example illustrating the failure of the latter. On the monodromy theorem for the family of \(p\)-adic differential equations 2021-11-25T18:46:10.358925Z "Mebkhout, Zoghman" The author proves a semi-global monodromy theorem for a \(p\)-adic de Rham bundle in the neighbourhood of a generic point of a hypersurface of a smooth scheme over a perfect field of characteristic \(p>0\) in higher dimensions. The result is formulated using the notion of a Frobenius endomorphism \(\sigma\) of a \(p\)-adic field \(L\), i.e. an endomorphism whose restriction to \(\mathbb{Q}_p\) is the identity, which is continuous and such that for each \(a\in \mathcal{O}_L\) (the ring of integers) one has \(|\sigma (a)-a^p|<1\). The étale fundamental groupoid as a 2-terminal costack 2021-11-25T18:46:10.358925Z "Pirashvili, Ilia" In [Georgian Math. J. 22, 563--571 (2015; Zbl 1339.18006)], the author proves that the 2-categorical version of the Seifert-van Kampen theorem holds for the fundamental groupoid of a topological space. As a continuation, the author studies the étale version in the paper under review. More precisely, for a Noetherian scheme \(X\), the assignment \(\Pi_1\) on the site of finite étale coverings of \(X\) sends \(Y\) to its étale fundamental groupoid \(\Pi_1(Y)\). The main result is that \(\Pi_1\) is a costack over the finite étale site. Moreover, this costack is the associated costack of the constant trivial pseudofunctor. A de Rham model for complex analytic equivariant elliptic cohomology 2021-11-25T18:46:10.358925Z "Berwick-Evans, Daniel" "Tripathy, Arnav" Let \(G\) be a compact Lie group, not necessarily connected. The goal of the paper is to construct a version of equivariant elliptic cohomology using differential forms. The definition resembles the Cartan model for equivariant cohomology. The output is a sheaf of commutative differential graded algebras over a certain stack Bun\(_G(\mathcal E)\) classifying \(G\)-bundles over the universal elliptic curve. If \(G=T\) is a torus of rank \(r\) then Bun\(_T(\mathcal E)\) is the fiber product of \(r\) copies of the dual universal elliptic curve over the moduli space of elliptic curves. For a smooth manifold \(M\) the elliptic cohomology is the sheaf \(\widehat{\mathrm{Ell}}^\bullet_G(M)\) glued from local data containing the following information: for each pair of commuting elements \(h_1,h_2\in G\) one considers the fixed point set \(M^{\langle h_1,h_2\rangle}\) and the differential forms with values in functions on \(U\subset\mathrm{Bun}_T(\mathcal E)\) satisfying certain coherence constraints. For a representation \(G\to\mathrm{Spin}(2n)\) the Euler class in this model is a section (a Mathai-Quillen type cocycle) of the sheaf \(\widehat{\mathrm{Ell}}^\bullet_{\mathrm{Spin}(2n)}(pt)\) twisted by the Looijenga line bundle \(\mathcal L_1\). It is essentially given by the Weierstrass sigma function. (Similarly for a representation in \(U(n)\).) It is shown that these classes give a complex analytic equivariant refinement of the MString-orientation in elliptic cohomology. Although the torsion in the de Rham model is lost, the current definition unifies the construction of \textit{I. Grojnowski} [Lond. Math. Soc.
Lect. Note Ser. 342, 114--121 (2007; Zbl 1236.55008)] for connected groups and the approach of \textit{J. A. Devoto} [Mich. Math. J. 43, No. 1, 3--32 (1996; Zbl 0871.55004)] for finite groups. Computation of the divided Frobenius modulo \(p\) on the crystalline cohomology of certain covers of the projective line 2021-11-25T18:46:10.358925Z "Pierrot, Amandine" Summary: In this paper we describe a family of tamely ramified coverings of the projective line over a finite field, for which we compute the matrix of the divided crystalline Frobenius. The formulas we obtain generalize the classical Hasse-Witt formulas in the case of hyperelliptic curves. Our result relies on a result of Huyghe-Wach which shows that the divided crystalline Frobenius coincides with the explicit morphism, constructed by \textit{P. Deligne} and \textit{L. Illusie} [Invent. Math. 89, 247--270 (1987; Zbl 0632.14017)], for their proof of the degeneration of the Hodge-de Rham spectral sequence in the algebraic case. For the entire collection see [Zbl 1471.11005]. Motivic Mahowald invariants over general base fields 2021-11-25T18:46:10.358925Z "Quigley, J. D." Summary: The motivic Mahowald invariant was introduced in [\textit{J. D. Quigley}, Algebr. Geom. Topol. 19, No. 5, 2485--2534 (2019; Zbl 1436.55016)] and [\textit{J. D. Quigley}, J. Topol. 14, No. 2, 369--418 (2021; Zbl 07381853)] to study periodicity in the \(\mathbb{C}\)- and \(\mathbb{R}\)-motivic stable stems. In this paper, we define the motivic Mahowald invariant over any field \(F\) of characteristic not two and use it to study periodicity in the \(F\)-motivic stable stems. In particular, we construct lifts of some of Adams' classical \(v_1\)-periodic families [\textit{J. F. Adams}, Topology 5, 21--71 (1966; Zbl 0145.19902)] and identify them as the motivic Mahowald invariants of powers of \(2+\rho\eta\). Rational approximations on toric varieties 2021-11-25T18:46:10.358925Z "Huang, Zhizhong" In the article under review, the author is motivated by the rational curve conjectures of \textit{Y. I. Manin} [Compos. Math. 85, No. 1, 37--55 (1993; Zbl 0780.14022)] and \textit{D. McKinnon} [J. Algebr. Geom. 16, No. 2, 257--303 (2007; Zbl 1140.14016)]. The guiding principle is that the best approximations of a general rational point in a rationally connected variety, defined over a number field \(\mathbf{K}\), should be achieved on the subvarieties that are swept out by small degree free rational curves. The present article builds on the work of \textit{D. McKinnon} and \textit{M. Roth} [Invent. Math. 200, No. 2, 513--583 (2015; Zbl 1337.14023)]. Specifically, the author studies the case of \(\mathbf{K}\)-rational points in split nonsingular toric varieties \(X\). It is required that \(X\) has dimension \(r \geq 2\). Moreover, throughout the article, it is assumed that \(\overline{\operatorname{Eff}}(X)\), the cone of pseudoeffective divisors, is simplicial. The author's main result gives a description of the best approximation rational curves, with respect to ample and, more generally, nef line bundles, that pass through a given \(\mathbf{K}\)-rational point \(Q\) of the algebraic torus \(\mathbb{G}_m^r \subseteq X\). In more precise terms, for the case of an ample line bundle \(L\), the author proves that the best approximations for \(Q\) are properly achieved on the subvariety that is swept out by the minimal \(L\)-degree free rational curves through \(Q\).
One aspect of the proof of this result involves the theory of universal torsors and the description of height functions on toric varieties, which are given in [\textit{P. Salberger}, Astérisque 251, 91--258 (1998; Zbl 0959.14007)]. Another is an auxiliary result, which is of independent interest. It gives necessary and sufficient conditions for positive primitive fan relations to correspond to very free rational curves. A note on Higgs-de Rham flows of level zero 2021-11-25T18:46:10.358925Z "Sheng, Mao" "Tong, Jilong" Summary: The notion of Higgs-de Rham flows was introduced by \textit{G. Lan} et al. [J. Eur. Math. Soc. (JEMS) 21, No. 10, 3053--3112 (2019; Zbl 1444.14048)], as an analogue of Yang-Mills-Higgs flows in the complex nonabelian Hodge theory. In this paper we investigate a small part of this theory, and study those Higgs-de Rham flows which are of level zero. We improve the original definition of level-zero Higgs-de Rham flows (which works for general levels), and establish a Hitchin-Simpson-type correspondence between such objects and certain representations of fundamental groups in positive characteristic, which generalizes a classical result of \textit{N. Katz} [Lect. Notes Math. 317, 167--200 (1973; Zbl 0259.14007)]. We compare the deformation theories of two sides in the correspondence, and translate the Galois action on the geometric fundamental groups of algebraic varieties defined over finite fields into the Higgs side. \(\mathbb{F}_{p^2}\)-maximal curves with many automorphisms are Galois-covered by the Hermitian curve 2021-11-25T18:46:10.358925Z "Bartoli, Daniele" "Montanucci, Maria" "Torres, Fernando" Let \({\mathbb{F}} = GF(q^2)\) denote the finite field with \(q^2\) elements. For a divisor \(m\) of \(q+1\), denote by \(H_{m}\) the curve \(y^{m} = x^{q}+x\). In particular, \(H_{q+1}\) denotes the Hermitian curve \(y^{q+1} = x^{q}+x\). It is known, for example, that \(H_{m}\) is Galois-covered by \(H_{q+1}\) over \({\mathbb{F}}\) (see Lemma 2.2 in the paper under review). A (projective, non-singular, geometrically irreducible) curve \(X\) of genus \(g\) over \({\mathbb{F}}\) is called \({\mathbb{F}}\)-maximal if the number of its \({\mathbb{F}}\)-rational points is the maximum possible, i.e., it attains the Hasse-Weil bound, \(|X({\mathbb{F}})|=q^2+1+2qg\). The authors deal with the case where \(q=p\) is a prime. Their main result is: Let \(X\) be an \({\mathbb{F}}\)-maximal curve with genus \(g\geq 2\). If \(|Aut(X)|>84(g-1)\) then \(X\) is Galois-covered by the Hermitian curve \(H_{q+1}\) over \({\mathbb{F}}\). The proof is broken into the cases \(p\leq 5\) and \(p>5\). The bound on the size of the automorphism group of \(X\) is sharp. The authors give an example (where \(q=p=71\)) of an \({\mathbb{F}}\)-maximal curve with genus \(g=7\), which has \(|Aut(X)|=84(7-1)\) and yet is not Galois-covered by \(H_{72}\). (A brute-force numerical check of maximality for small \(q\) is sketched after the next entry.) For further details, the reader is referred to the paper. The paper is dedicated to the memory of the last-named author, Fernando Torres, who died of Covid-19 in May 2020 while the paper was in press. May his memory be a blessing. Artin-Mazur-Milne duality for fppf cohomology 2021-11-25T18:46:10.358925Z "Demarche, Cyril" "Harari, David" Summary: We provide a complete proof of a duality theorem for the fppf cohomology of either a curve over a finite field or a ring of integers of a number field, which extends the classical Artin-Verdier Theorem in étale cohomology. We also prove some finiteness and vanishing statements.
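Returning to the \(\mathbb{F}_{p^2}\)-maximal curves entry above: the maximality of the Hermitian curve can be verified by brute force for a small prime \(q\). The following sketch (ours, not taken from the paper under review) models \(\mathbb{F}_{q^2}=\mathbb{F}_q[t]/(t^2-n)\) for a quadratic non-residue \(n\), counts the affine points of \(y^{q+1}=x^q+x\), and compares with the Hasse-Weil bound \(q^2+1+2qg\) for \(g=q(q-1)/2\).
\begin{verbatim}
# brute-force check that the Hermitian curve y^(q+1) = x^q + x is maximal
# over F_{q^2}; here q = 3 and F_9 = F_3[t]/(t^2 + 1), i.e. n = -1 = 2 mod 3
q, n = 3, 2

def add(a, b):
    return ((a[0] + b[0]) % q, (a[1] + b[1]) % q)

def mul(a, b):
    # (a0 + a1*t)(b0 + b1*t) with t^2 = n, coefficients mod q
    return ((a[0]*b[0] + n*a[1]*b[1]) % q, (a[0]*b[1] + a[1]*b[0]) % q)

def power(a, e):
    r = (1, 0)
    for _ in range(e):
        r = mul(r, a)
    return r

field = [(i, j) for i in range(q) for j in range(q)]   # the q^2 elements
affine = sum(1 for x in field for y in field
             if power(y, q + 1) == add(power(x, q), x))
g = q*(q - 1)//2                     # genus of the Hermitian curve
print(affine + 1)                    # affine points plus the point at infinity
print(q**2 + 1 + 2*q*g)              # Hasse-Weil upper bound
\end{verbatim}
For \(q=3\) both printed numbers equal \(q^3+1=28\), as predicted.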
Hurwitz theory of elliptic orbifolds. I 2021-11-25T18:46:10.358925Z "Engel, Philip" Summary: An \textit{elliptic orbifold} is the quotient of an elliptic curve by a finite group. In [Invent. Math. 145, No. 1, 59--103 (2001; Zbl 1019.32014)], \textit{A. Eskin} and \textit{A. Okounkov} proved that generating functions for the number of branched covers of an elliptic curve with specified ramification are quasimodular forms for \(\operatorname{SL}_2(\mathbb{Z})\). In [Prog. Math. 253, 1--25 (2006; Zbl 1136.14039)], they generalized this theorem to branched covers of the quotient of an elliptic curve by \(\pm 1\), proving quasimodularity for \(\Gamma_0(2)\). We generalize their work to the quotient of an elliptic curve by \(\langle\zeta_N\rangle\) for \(N=3, 4, 6\), proving quasimodularity for \(\Gamma(N)\), and extend their work in the case \(N=2\). It follows that certain generating functions of hexagon, square and triangle tilings of compact surfaces are quasimodular forms. These tilings enumerate lattice points in moduli spaces of flat surfaces. We analyze the asymptotics as the number of tiles goes to infinity, providing an algorithm to compute the Masur-Veech volumes of strata of cubic, quartic, and sextic differentials. We deduce a generalization of the Kontsevich-Zorich conjecture: these volumes are polynomial in \(\pi\). The cuspidalisation of sections of arithmetic fundamental groups. II 2021-11-25T18:46:10.358925Z "Saïdi, Mohamed" Summary: In this paper, which is a sequel to [\textit{M. Saïdi}, Adv. Math. 230, No. 4--6, 1931--1954 (2012; Zbl 1260.14036)], we investigate the theory of cuspidalisation of sections of arithmetic fundamental groups of hyperbolic curves to cuspidally \(i\)-th and \(2 / p\)-th step prosolvable arithmetic fundamental groups. As a consequence we exhibit two, necessary and sufficient, conditions for sections of arithmetic fundamental groups of hyperbolic curves over \(p\)-adic local fields to arise from rational points. We also exhibit a class of sections of arithmetic fundamental groups of \(p\)-adic curves which are orthogonal to \(\mathrm{Pic}^\wedge\), and which satisfy (unconditionally) one of the above conditions. Gluing curves of genus 1 and 2 along their 2-torsion 2021-11-25T18:46:10.358925Z "Hanselman, Jeroen" "Schiavone, Sam" "Sijsling, Jeroen" Let \(A\) be an abelian variety over a field \(k\). It is well-known that there is an isogeny decomposition (Poincaré's Complete Reducibility Theorem) in terms of simple abelian subvarieties \[A \sim B_1^{n_1} \times \cdots \times B_r^{n_r}\] that are pairwise non-isogenous over \(k\). This decomposition is unique up to reordering the factors. In the case that \(A\) equals the Jacobian variety of a curve \(Z\), there exist algorithms to calculate the aforementioned decomposition in terms of the Jacobians of curves over small extensions of \(k\) whenever possible. The decomposition of the Jacobian of curves has been extensively studied. The article under review considers a different approach to this problem. The authors aim to develop algorithms to construct an abelian variety \(A\) given factors \(B_i\) as before, in some special cases. Let \(X\) be a curve of genus 1 and let \(Y\) be a curve of genus 2, defined over a base field \(k\) whose characteristic is different from 2. The main result of the paper provides criteria for the existence of a curve \(Z\) over \(k\) whose Jacobian is isogenous, via an isogeny of a special type, to the product of the Jacobians of \(X\) and \(Y\).
The authors also develop algorithms to construct the curve \(Z\) once equations for \(X\) and \(Y\) are given. Automorphism group of a moduli space of framed bundles over a curve 2021-11-25T18:46:10.358925Z "Alfaya, David" "Biswas, Indranil" The authors compute the automorphism group of the moduli space of framed vector bundles on a smooth complex projective curve \(X\). A framed vector bundle is a pair \((E,\alpha)\), consisting of a vector bundle \(E\) of rank \(r\geq 2\) and a nonzero linear map \(\alpha: E_x\rightarrow \mathbb C^r\) from a fiber over a fixed point \(x\in X\) to \(\mathbb C^r\), where the map \(\alpha\) is called a framing. In order to construct the moduli space of framed vector bundles via GIT, Huybrechts and Lehn introduced the \(\tau\)-stability of a framed vector bundle, where \(\tau>0\) is a real number. For a given real number \(\tau>0\), a framed bundle \((E,\alpha)\) is \(\tau\)-stable (resp. \(\tau\)-semistable) if for all proper subbundles \(0\subsetneq F\subsetneq E\), \[ \frac{\mathrm{degree}(F)-\epsilon(F,\alpha)\tau}{\mathrm{rank}(F)}<\frac{\mathrm{degree}(E)-\tau}{\mathrm{rank}(E)}\quad(\text{resp. }\leq) \] where \[ \epsilon(F,\alpha)=\begin{cases} 1&\text{if \(F_x\nsubseteq\mathrm{Ker}(\alpha)\)}\\ 0&\text{if \(F_x\subseteq\mathrm{Ker}(\alpha)\)}. \end{cases} \] Fix a line bundle \(\xi\) on \(X\). Let \(\mathcal F=\mathcal F(X,x,r,\xi,\tau)\) be the moduli space of \(\tau\)-semistable framed vector bundles \((E,\alpha)\) on \(X\) with rank \(r\geq 2\) and \(\mathrm{det}(E)\cong\xi\). In fact, it is a complex projective variety. The stability parameter \(\tau\) is called generic if there do not exist strictly \(\tau\)-semistable framed vector bundles in \(\mathcal F\). If \(\tau\) is generic, the moduli space \(\mathcal F\) is smooth. There is a natural \(\mathrm{PGL}_r(\mathbb C)\)-action on a moduli space of framed vector bundles of rank \(r\). For \([G]\in \mathrm{PGL}_r(\mathbb C)\) with \(G\in \mathrm{GL}_r(\mathbb C)\), the action of \([G]\) on a framed vector bundle \((E,\alpha)\) is described by: \[ [G]\cdot(E,\alpha)=(E, G\circ\alpha). \] The main result of this paper is: Suppose \(\tau\) and \(\tau^\prime\) are generic, and that the genera of \(X\) and \(X^\prime\) satisfy \(g\geq\max\{2+\tau,4\}\) and \(g^\prime\geq\max\{2+\tau^\prime,4\}\), respectively. If there is an isomorphism \(\Psi: \mathcal F(X,x,r,\xi,\tau)\rightarrow\mathcal F(X^\prime,x^\prime,r^\prime,\xi^\prime,\tau^\prime)\), then \(r=r^\prime\) and there exists an isomorphism \(\sigma: X^\prime\rightarrow X\) such that \(\sigma(x^\prime)=x\), and the isomorphism \(\Psi\) is a combination of the following three types of transformations: \begin{enumerate} \item pullback with respect to the isomorphism \(\sigma\); \item tensorization with a line bundle \(L\) on \(X\); \item the natural action of \(\mathrm{PGL}_r(\mathbb C)\), \end{enumerate} where \(\sigma\) and \(L\) satisfy the relation \(\sigma^*(\xi\otimes L^{\otimes r})\cong\xi^\prime\). A canonical connection on bundles on Riemann surfaces and Quillen connection on the theta bundle 2021-11-25T18:46:10.358925Z "Biswas, Indranil" "Hurtubise, Jacques" Let \(X\) be a compact Riemann surface of genus \(g\geq 2\) with canonical line bundle \(K_X\), \(r\) a natural number, and \(\mathcal{M}\) the moduli space of stable vector bundles on \(X\) of rank \(r\) and degree \(0\).
Given a theta characteristic \(K_{X}^{1/2}\), \(D_{\Theta}:=\{E\in\mathcal{M}: H^{0}(X,E\otimes K_{X}^{1/2})\neq 0\}\) denotes the theta divisor, and \(\Theta\) denotes the associated theta line bundle over \(\mathcal{M}\). The authors consider two fibered spaces over \(\mathcal{M}\). On the one hand, the moduli space \(\mathcal{C}\) of holomorphic connections, \(E\rightarrow E\otimes K_X\), with \(E\) a stable vector bundle of rank \(r\). The forgetful map induces a holomorphic map, \(\mathcal{C}\rightarrow \mathcal{M}\). On the other hand, the fiber bundle Conn\((\Theta)\rightarrow \mathcal{M}\) given by the sheaf of holomorphic connections on \(\Theta\). Sections of Conn\((\Theta)\rightarrow \mathcal{M}\) over an open subset \(U\subset \mathcal{M}\) are in one-to-one correspondence with holomorphic connections on \(\Theta|_{U}\). Both \(\mathcal{C}\) and Conn\((\Theta)\) are holomorphic \(T^{*}\mathcal{M}\)-torsors and carry holomorphic symplectic forms, \(\Phi_1\) and \(\Phi_2\), respectively. Recently, the authors have constructed a \(\mathcal{C}^{\infty}\) isomorphism of torsors \(F:\mathcal{C}\simeq \mathrm{Conn}(\Theta)\) with the property that \(F^{*}\Phi_2\) coincides with \(\Phi_1\) up to multiplication by \(2r\); that is, \(F\) preserves the symplectic structures. Further, they have proved that \(F\) is, in fact, holomorphic. The main result of this article is that the restriction of \(F\) to \(\mathcal{M}_0 :=\mathcal{M}\setminus D_{\Theta}\) can be constructed by purely algebraic methods. More precisely, the authors construct holomorphic sections, \(\phi\) and \(\tau\), of the torsors \(\mathcal{C}|_{\mathcal{M}_0}\rightarrow \mathcal{M}_0\) and \(\mathrm{Conn}(\Theta)|_{\mathcal{M}_0}\rightarrow \mathcal{M}_0\) respectively, and prove that the holomorphic map \(G\) defined by \(G(\delta^0 (\phi(E),\nu))=\eta^0(\tau(E),2r\cdot \nu)\) is a holomorphic isomorphism between \(\mathcal{C}|_{\mathcal{M}_0}\) and \(\mathrm{Conn}(\Theta)|_{\mathcal{M}_0}\), and it coincides with \(F|_{\mathcal{M}_0}\). Here, \(\delta^0\) and \(\eta^0\) are the actions of \(T^{*}\mathcal{M}_{0}\) on \(\mathcal{C}|_{\mathcal{M}_0}\) and \(\mathrm{Conn}(\Theta)|_{\mathcal{M}_0}\) respectively. A consequence of the main result is that the holomorphic isomorphism \(F\), which preserves the symplectic structures, depends holomorphically on the base Riemann surface. This fact allows the authors to extend the holomorphic isomorphism \(F\) to a relative context. The field of moduli of singular \(K3\) surfaces 2021-11-25T18:46:10.358925Z "Laface, Roberto" It is known that any CM elliptic curve admits a field of moduli which is a Galois extension of the rational field. An analogous argument makes it possible to consider the fields of moduli for singular abelian and \(K3\) surfaces. For a singular \(K3\) surface \(X\) with CM field \(K\), let \(G_K\) and \(G_\mathbb{Q}\) be, respectively, the subgroups of \({\mathrm {Gal}}(\bar{K}\slash K)\) and of \({\mathrm {Gal}}(\mathbb{C}\slash\mathbb{Q})\) whose action preserves the isomorphism class of \(X\). Denote by \(M_K\) and \(M_\mathbb{Q}\) respectively the field of \(K\)-moduli and the absolute field of moduli, which are by definition the subfields of \(\bar{K}\) and of \(\bar{\mathbb{Q}}\) fixed by the actions of \(G_K\) and \(G_\mathbb{Q}\).
Following preliminary studies of the groups \(G_K\) and \(G_\mathbb{Q}\), the author proves that the absolute field \(M_\mathbb{Q}\) is a Galois extension of the field \(M_K\), and that the fields \(M_K\) and \(M_\mathbb{Q}\) are respectively extensions of \(K\) and of \(\mathbb{Q}\) of degree equal to the genus of the transcendental lattice of \(X\). It is concluded that there exist infinitely many singular \(K3\) surfaces with a field of \(K\)-moduli. An example of birationally inequivalent projective symplectic varieties which are D-equivalent and L-equivalent 2021-11-25T18:46:10.358925Z "Okawa, Shinnosuke" There are many different ways to try to classify algebraic varieties. Among those are \begin{itemize} \item birational equivalence: whether a birational map \(X \dashrightarrow Y\) exists; \item D-equivalence: whether there is an equivalence \(\mathcal D^b(\mathrm{coh}(X)) \cong \mathcal D^b(\mathrm{coh}(Y))\); \item L-equivalence: whether \(X\) and \(Y\) have the same class in \(K_0(\mathcal{V}ar)[\mathbb L^{-1}]\), where \(\mathbb L = [\mathbb A_1]\). \end{itemize} These three attempts are related, but in a mysterious way. The main result of this article is the following. Let \(X\) and \(Y\) be two K3 surfaces of Picard number 1 and degree \(2d\) which are D- and L-equivalent. Then the Hilbert schemes of points \(X^{[n]}\) and \(Y^{[n]}\) are D- and L-equivalent. If \(n>2\) and if there are integer solutions to the equation \[ (n-1)x^2 - dy^2 = 1, \] then \(X^{[n]}\) and \(Y^{[n]}\) are birationally \emph{in}equivalent. For the proof, the author uses that a birational map \(\phi \colon X^{[n]} \dashrightarrow Y^{[n]}\) induces an isometry of Picard groups, hence preserves the movable cone. The equation of the main result now comes from the description of the movable cone, as obtained in [\textit{A. Bayer} and \textit{E. Macrì}, Invent. Math. 198, No. 3, 505--590 (2014; Zbl 1308.14011)]. To get examples, one can consider a very general K3 surface \(X\) of degree 12 and its Fourier-Mukai partner \(Y\), see [\textit{B. Hassett} and \textit{K.-W. Lai}, Compos. Math. 154, No. 7, 1508--1533 (2018; Zbl 1407.14010)] and [\textit{A. Ito} et al., Sel. Math., New Ser. 26, No. 3, Paper No. 38, 27 p. (2020; Zbl 1467.14051)]. If one chooses \(n=6y^2+2\) for an integer \(y\), then \((1,y)\) is an integer solution of the equation above (here \(d=6\), and indeed \((n-1)\cdot 1^2-6y^2=6y^2+1-6y^2=1\)). Hence, the corresponding hyperkähler varieties \(X^{[n]}\) and \(Y^{[n]}\) give examples of varieties which are D-equivalent (already known by [\textit{D. Ploog}, Adv. Math. 216, No. 1, 62--74 (2007; Zbl 1167.14031)]) and L-equivalent, but not birationally equivalent. More results about birational (in)equivalence of such hyperkähler varieties are obtained in [\textit{C. Meachan} et al., Math. Z. 294, No. 3--4, 871--880 (2020; Zbl 1469.14011)] and [\textit{K. Yoshioka}, Math. Ann. 321, No. 4, 817--884 (2001; Zbl 1066.14013)]. Rational curves on fibered varieties 2021-11-25T18:46:10.358925Z "Anella, Fabrizio" Let \(X\) be a complex projective manifold with trivial first Chern class \(c_1(X)\) but non-zero second Chern class \(c_2(X) \neq 0\), e.g. \(X\) is a Calabi-Yau or hyper-Kähler manifold. Then one expects that \(X\) contains a rational curve; this is known for surfaces by a theorem of Bogomolov and Mumford. Initiated by the work of \textit{K. Oguiso} [Int. J. Math. 4, No. 3, 439--465 (1993; Zbl 0793.14030)] and \textit{P. M. H. Wilson} [Invent. Math. 98, No.
1, 139--155 (1989; Zbl 0688.14032)] many authors have discussed the existence of rational curves on higher-dimensional Calabi-Yau manifolds with special structures, for example if \(X\) admits a fibration. In this paper the author starts by studying the existence of rational curves for projective varieties with klt singularities admitting an elliptic fibration \(f: X \rightarrow B\) such that the canonical class is the pull-back of a Cartier divisor from the base. He proves that if \(X\) does not admit a holomorphic one-form, even after a finite étale cover, then \(X\) contains a uniruled divisor. In particular the statement applies to singular Calabi-Yau spaces. The author gives an interesting application of his theorem: let \(X\) be a Calabi-Yau space of dimension at least three admitting a nef divisor \(D\) such that \(c_1(D)^2=0\) and \(c_1(D) \cdot c_2(X) = 0\). Then \(X\) contains a rational curve. Note that \(D\) is expected to be semiample but this is a difficult open problem, even for threefolds. For the proof the author shows that \(X\) also admits a divisor \(N\) that is globally generated and defines an elliptic fibration on \(X\). On the constant scalar curvature Kähler metrics. I: A priori estimates 2021-11-25T18:46:10.358925Z "Chen, Xiuxiong" "Cheng, Jingrui" This groundbreaking paper is a technical tour de force on the constant scalar curvature Kähler (CSCK) equation. The basic problem is Calabi's dream of finding canonical metric representatives inside positive line bundle classes. Prototype results are Yau's solution to the Calabi conjecture, and the Chen-Donaldson-Sun solution of the Kähler-Einstein equation in the Fano case subject to K-stability. While a formal infinite dimensional GIT framework has long pointed towards CSCK metrics as the natural generality to consider the canonical metric problem, there are significant technical hurdles to go beyond the Kähler-Einstein case, most notably because the CSCK equation is fourth order instead of second order, and because one loses a priori Ricci curvature bounds, which are essential for applying Cheeger-Colding theory. The goal of this paper is to address the central PDE difficulties. Its techniques are relatively classical, involving maximum principles, Moser style iterations, and Alexandrov maximum principles; the presentation is largely self-contained, and accessible to those with basic knowledge of Kähler geometry. However, the applications of these techniques are often very clever, and exploit a number of rather delicate cancellation effects which can only be appreciated through substantial calculations. The fourth order CSCK equation on a compact Kähler manifold can be written as a second order coupled system. The authors consider a slightly more general system (needed for their further work on the continuity method) \[ \log \det(g_{i\bar{j}}+ \phi_{i\bar{j}}) = F + \log \det(g_{i\bar{j}}),\ \Delta_\phi F = -f + \mathrm{Tr}_\phi \eta. \tag{1} \] When $f = -R$ (the average Ricci scalar) and $\eta = \mathrm{Ric}_g$ this is the CSCK equation. The first equation is complex Monge-Ampère, and the second amounts to a prescription of scalar curvature. The main result of this paper is that under an a priori entropy bound $\int e^F F\, d\mathrm{vol}_g \leq C$, the solution to this PDE system admits a priori bounds on derivatives of all orders. The entropy bound is natural to the problem, because the CSCK equation is the critical point of the Mabuchi functional (also called the K-energy), which can be written as the entropy term plus a well-behaved pluripotential term.
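To see concretely why the fourth order CSCK equation becomes the second order coupled system (1), recall the standard computation (our sketch; sign conventions may differ slightly from the paper). Writing \(\omega_\phi = \omega + i\partial\bar\partial\phi\), the function \(F\) of the first equation satisfies \[ \mathrm{Ric}(\omega_\phi) = \mathrm{Ric}(\omega) - i\partial\bar\partial F, \] and taking the trace with respect to \(\omega_\phi\) gives \[ R_{\omega_\phi} = \mathrm{Tr}_\phi\, \mathrm{Ric}(\omega) - \Delta_\phi F. \] Hence the second equation of (1), with \(\eta = \mathrm{Ric}_g\), amounts exactly to prescribing the scalar curvature \(R_{\omega_\phi}\) to be the constant determined cohomologically by \([\omega]\) and \(c_1(X)\).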
The stability condition should be thought of as a coercivity condition on the Mabuchi functional, which will essentially force a bound on the entropy, as explained in the subsequent works of the authors in the series. Some highlights of this paper are: \begin{itemize} \item In Section 5, the authors prove a $C^0$-estimate on the Kähler potential from the entropy bound. The main ingredient is an ingenious application of the Alexandrov maximum principle, with the additional input of the Skoda inequality. This is inspired by earlier work of Blocki. Even though the method is quite classical, this result is surprisingly strong, especially in the light of Kolodziej's celebrated $L^\infty$-potential estimate from an a priori $L^p$ bound on $F$ with $p > 1$. This part is of significant independent interest in Kähler geometry, especially for the analysis of the complex Monge-Ampère equation. \item In Section 2, they prove among other things an a priori gradient bound $|\nabla \phi |^2e^{-F} \leq C$. This uses a maximum principle argument, which involves a delicate cancellation effect to knock out some bad mixed derivative terms in the Laplacian. \item In Section 3, they prove a $W^{2,p}$ type estimate $\int e^{-\alpha(p)F}(n+ \Delta \phi )^pd\mathrm{vol}_g \leq C(p)$ for any exponent $p > 0$. This involves integration by parts and an iteration argument. The key is that one can gain exponents on $n+ \Delta \phi$ from the nonlinearity, and any derivative terms of $F$ from the Laplacian computation can be either estimated away in complete squares, or absorbed into the equation for $\Delta F$, which is then a priori controlled. \item In Section 4, they prove simultaneously $|\nabla F| \leq C$ and $n + \Delta \phi \leq C$, whence the metric has $C^2$ bounds, which reduces the CSCK problem to well-known higher order estimates. This is proved by a Moser iteration style argument, based on the $W^{2,p}$ estimate established earlier. The reason that $|\nabla F|^2$ features in the proof of the metric upper bound $n + \Delta \phi \leq C$ is that one needs the Laplacian of $|\nabla F|^2$ to provide good Hessian terms. The maximum principle quantity involves another subtle cancellation effect to knock out some bad mixed derivative terms. The reason the Moser iteration works is that the Sobolev inequality improves the Lebesgue exponent by a definite magnifying factor $>1$, while the fact that $p$ can be arbitrarily large in the $W^{2,p}$ estimate ultimately ensures that the application of the Hölder inequality can only worsen the Lebesgue exponent by a factor which is arbitrarily close to one, so in the end the improvement effect wins out. \end{itemize} On the anti-canonical geometry of weak \(\mathbb{Q}\)-Fano threefolds. II 2021-11-25T18:46:10.358925Z "Chen, Meng" "Jiang, Chen" Summary: By a canonical (resp. terminal) weak \(\mathbb{Q}\)-Fano \(3\)-fold we mean a normal projective one with at worst canonical (resp. terminal) singularities on which the anti-canonical divisor is \(\mathbb{Q}\)-Cartier, nef and big. For a canonical weak \(\mathbb{Q}\)-Fano \(3\)-fold \(V\), we show that there exists a terminal weak \(\mathbb{Q}\)-Fano 3-fold \(X\), birational to \(V\), such that the \(m\)-th anti-canonical map defined by \(\vert -mK_X\vert\) is birational for all \(m\geq 52\). As an intermediate result, we show that for any \(K\)-Mori fiber space \(Y\) of a canonical weak \(\mathbb{Q}\)-Fano 3-fold, the \(m\)-th anti-canonical map defined by \(\vert -mK_Y\vert\) is birational for all \(m\geq 52\).
For Part I, see [the authors, J. Differ. Geom. 104, No. 1, 59--109 (2016; Zbl 1375.14137)]. Fano manifolds and stability of tangent bundles 2021-11-25T18:46:10.358925Z "Kanemitsu, Akihiro" Let \(X\) be a Fano manifold, that is a complex projective manifold such that the anticanonical divisor \(-K_X\) is ample. If the Picard number of \(X\) is one, a widely believed folklore conjecture claims that the tangent bundle \(T_X\) is semistable (in the sense of Mumford-Takemoto). In this paper the author gives a series of counterexamples to this conjecture! \newline The counterexamples are obtained from a family of horospherical varieties classified by \textit{B. Pasquier} [Math. Ann. 344, No. 4, 963--987 (2009; Zbl 1173.14028)]: for these manifolds the action of the group \(\mbox{Aut}^0(X)\) on \(X\) has two orbits, the open orbit \(X^0\) and a closed orbit \(Z\). Moreover the action on the blow-up \(\mbox{Bl}_Z X\) again has two orbits, the open orbit \(X^0\) and the exceptional divisor \(E\). The manifold \(\mbox{Bl}_Z X\) admits a smooth fibration onto a lower-dimensional manifold \(Y\); the push-forward of the relative tangent bundle to \(X\) defines an algebraically integrable foliation \(\mathcal F \subset T_X\). The author shows that this foliation is canonical in the sense that it is the unique algebraically integrable foliation on \(X\) that is \(\mbox{Aut}^0(X)\)-invariant. General arguments show that the stability of \(T_X\) can be verified by computing the slope of the foliation \(\mathcal F\). It turns out that for infinitely many manifolds in Pasquier's list, the subsheaf \(\mathcal F \subset T_X\) destabilises the tangent bundle. The reviewer recommends this beautiful paper to any complex geometer. An effective bound for reflexive sheaves on canonically trivial 3-folds 2021-11-25T18:46:10.358925Z "Vermeire, Peter" Let \(X\) be a smooth projective threefold and \(F\) be a rank 2 reflexive sheaf on \(X\). Starting with \textit{R. Hartshorne}'s classical article [Math. Ann. 254, 121--176 (1980; Zbl 0431.14004)], several bounds for \(c_3(F)\) have been derived in various settings, among them by the author in the case \(\mathrm{Pic}(X)=\mathbb Z L\) for \(L\)-semistable \(F\) [Pac. J. Math. 219, No. 2, 391--398 (2005; Zbl 1107.14032)], in terms of \(c_1(X)\), \(c_2(X)\), \(c_1(F)\) and \(c_2(F)\). \textit{A. Gholampour} and \textit{M. Kool} [J. Pure Appl. Algebra 221, No. 8, 1934--1954 (2017; Zbl 06817567)] conjectured that this remains true for any smooth projective threefold. In the paper under review, the author gives explicit effective bounds in the case of a polarized smooth projective threefold \(X\) with \(\omega_X=\mathcal O_X\) (and general Picard group). Primitive elements for \(p\)-divisible groups 2021-11-25T18:46:10.358925Z "Kottwitz, Robert" "Wake, Preston" Let \(\mathcal G\) be a finite flat group scheme. Using Raynaud's theory of Haar measures of [\textit{M. Raynaud}, Bull. Soc. Math. Fr. 102, 241--280 (1974; Zbl 0325.14020)], the authors define a closed subscheme \(\mathcal G^\times\hookrightarrow\mathcal G\) by decreeing its ideal to be the line bundle of invariant measures on the Cartier dual of \(\mathcal G\). Moreover, if \(\mathcal G\) is a Barsotti-Tate group of level \(n\) for a prime number \(p\), the authors define the closed subscheme \(\mathcal G^{prim}\hookrightarrow\mathcal G\) of ``primitive elements'' to be the inverse image of \(\mathcal G[p]^\times\) under the canonical epimorphism \(\mathcal G\twoheadrightarrow\mathcal G[p]\).
This concept paves the way for a new approach to integral models \(\mathfrak X\) of Shimura varieties with level \(\Gamma_1(p^n)\)-structure, simply by looking at the scheme \(\mathcal A[p^n]^{prim}\) where \(\mathcal A\) is the universal abelian \(g\)-fold (possibly with additional structure). These schemes are not necessarily normal, but they are still interesting counterweights to the more classical concept of moduli spaces of points ``with exact order \(p^n\)'', which seems to be limited to settings in which one can work with one-dimensional formal groups, as in [\textit{V. G. Drinfel'd}, Math. USSR, Sb. 23, 561--592 (1976; Zbl 0321.14014); translation from Mat. Sb., n. Ser. 94(136), 594--627 (1974)] or [\textit{N. M. Katz} and \textit{B. Mazur}, Arithmetic moduli of elliptic curves. Princeton, NJ: Princeton University Press (1985; Zbl 0576.14026)]. An analytic application of Geometric Invariant Theory 2021-11-25T18:46:10.358925Z "Buchdahl, Nicholas" "Schumacher, Georg" One of the mathematical highlights of the 1980s is the establishment of the Hitchin-Kobayashi correspondence. Given a holomorphic vector bundle over a compact Kähler manifold, this correspondence states that the bundle admits a Hermite-Einstein metric if and only if it satisfies the algebro-geometric condition of slope polystability [\textit{S. K. Donaldson}, Proc. Lond. Math. Soc. (3) 50, 1--26 (1985; Zbl 0529.53018); \textit{K. Uhlenbeck} and \textit{S. T. Yau}, Commun. Pure Appl. Math. 39, S257--S293 (1986; Zbl 0615.58045)]. Almost as important as the statement itself is the following expectation arising from this correspondence. The notion of slope stability was introduced in relation to considerations from moduli spaces, and so one should be able to form moduli spaces of holomorphic vector bundles which (equivalently) admit a Hermite-Einstein metric or are slope polystable. This expectation is well understood when the compact Kähler manifold is actually a smooth projective variety, through work of many authors, such as \textit{C. T. Simpson} [Publ. Math., Inst. Hautes Étud. Sci. 79, 47--129 (1994; Zbl 0891.14005)] and \textit{D. Greb} et al. [Geom. Topol. 25, No. 4, 1719--1818 (2021; Zbl 07379436)]. The paper under review is a key step towards understanding the story beyond the projective setting. The authors use their prior work, demonstrating which deformations of polystable bundles remain polystable, to construct a moduli space of slope polystable bundles over a compact Kähler manifold. They also construct a natural Kähler metric on the moduli space. The method they employ is to view their prior work as producing ``charts'' on the moduli space; this strategy had previously been used in other contexts, but the work of the authors contains some new features. The paper is very clearly and carefully written, and should be viewed as an important contribution to the field. Satellites of spherical subgroups 2021-11-25T18:46:10.358925Z "Batyrev, Victor" "Moreau, Anne" Let \(G\) be a complex, connected, reductive algebraic group and let \(H \subset G\) be a spherical subgroup (that is, the Borel subgroups of \(G\) act on \(G/H\) with an open orbit). In the present paper the authors define and study the \textit{satellites} of \(H\). The satellites of \(H\) form a family of spherical subgroups of \(G\) defined up to conjugation. They have the same dimension as \(H\) and they encode important information on \(G/H\) and its equivariant embeddings.
If \(r\) denotes the rank of \(G/H\), then \(H\) possesses precisely \(2^r\) satellites, which are naturally parametrized by the subsets of the spherical roots of \(G/H\). The satellites of a spherical subgroup \(H\) have important connections with the embedding theory of \(G/H\), as they impose strong conditions on the stabilizers which can occur in the toroidal equivariant embeddings of \(G/H\). The satellites of \(H\) can be defined in terms of the \(G\)-invariant valuations of \(G/H\): every such valuation \(v\) defines a satellite \(H_v\), and two satellites \(H_v\) and \(H_{v'}\) are conjugate if and only if the two invariant valuations \(v\) and \(v'\) belong to the same open face of the valuation cone of \(G/H\). The subgroups of the form \(H_v\) are called \textit{Brion subgroups}, as they were first studied by \textit{M. Brion} [J. Algebra 134, No. 1, 115--143 (1990; Zbl 0729.14038)]. The Brion subgroup \(H_v\) can also be described in terms of limits of stabilizers of points in the arc space. Building on previous work of \textit{M. Brion} and \textit{E. Peyre} [Compos. Math. 134, No. 3, 319--335 (2002; Zbl 1031.13007)], in the last section of the paper the authors study the virtual Poincaré polynomial of the homogeneous varieties \(G/H'\), where \(H'\) is a satellite of \(H\). In particular they state a conjecture on the ratio between the virtual Poincaré polynomial of \(G/H'\) and that of \(G/H\), which is proved if either \(H\) is connected or the rank of \(G/H\) is one. On Fano complete intersections in rational homogeneous varieties 2021-11-25T18:46:10.358925Z "Bai, Chenyu" "Fu, Baohua" "Manivel, Laurent" Complete intersections in rational homogeneous varieties provide many interesting examples of Fano varieties. It is expected by Hartshorne that all smooth subvarieties of \(\mathbb{P}^n\) of small codimension are complete intersections. In this paper, the authors study two geometrical properties of Fano complete intersections in rational homogeneous varieties: local rigidity and quasi-homogeneity. \textit{R. Bott} [Ann. Math. (2) 66, 203--248 (1957; Zbl 0094.35701)] showed that \(H^i(G/P, T_{G/P}) = 0\) for all \(i \geq 1\) and any rational homogeneous variety \(G/P\), hence \(G/P\) is locally rigid by Kodaira-Spencer deformation theory. In [\textit{F. Bien} and \textit{M. Brion}, Compos. Math. 104, No. 1, 1--26 (1996; Zbl 0910.14004)], local rigidity is proven for Fano regular \(G\)-varieties. The case of two-orbits varieties of Picard number one is studied in [\textit{B. Pasquier} and \textit{N. Perrin}, Math. Z. 265, No. 3, 589--600 (2010; Zbl 1200.14097)]. Let \(G/P\) be a rational homogeneous variety with \(G\) simple and \(X =\cap D_i \subset G/P\) a smooth irreducible complete intersection of \(r\) ample divisors. Suppose that \(K^*_{G/P}\otimes \mathcal{O}_{G/P}(-\sum D_i)\) is ample, which implies that \(X\) is Fano. When \(G/P\) is of Picard number one, the converse holds, but in general this condition is stronger than the Fanoness of \(X\). The main theorem of this paper classifies such \(X\) which are locally rigid. It uses the fact that \(H^i(X, T_X) = 0\) for all \(i \geq 2\), by the Kodaira-Nakano vanishing theorem; so \(X\) is locally rigid if and only if \(H^1(X, T_X) = 0\). The authors explain their theorem using Vinberg's theory of parabolic prehomogeneous spaces [\textit{L. Manivel}, Rend. Semin. Mat., Univ. Politec. Torino 71, No. 1, 35--118 (2013; Zbl 1362.11099)].
Given a connected Dynkin diagram \(D\), with a special node satisfying some conditions, we obtain a simply connected simple Lie group \(G\) and a maximal parabolic subgroup \(P\) with the following property. There is an embedding of \(G/P\) in the projectivization \(\mathbb{P}V_P^*\) of a (dualized) fundamental representation, such that \(G \times \mathrm{GL}_k\) acts on \(V_P \otimes \mathbb{C}^k\) with finitely many orbits. In particular \(G\) acts on \(Gr(k, V_P)\) with only finitely many orbits, and therefore there exists only a finite number of isomorphism types of codimension \(k\) linear sections of \(G/P\). In this situation, the local rigidity of the general section can be expected, and this is exactly what happens. It turns out that most of the varieties obtained in the main theorem are hyperplane sections. The authors classify general hyperplane sections which are quasi-homogeneous. An observation is that a general hyperplane section of \(G/P\) is quasi-homogeneous if and only if it is locally rigid but not a hyperplane section of \(Gr(3, 8)\). In general, there is no direct relation between the two properties. Critical loci in computer vision and matrices dropping rank in codimension one 2021-11-25T18:46:10.358925Z "Bertolini, Marina" "Besana, Gian Mario" "Notari, Roberto" "Turrini, Cristina" The main goal of this work is to conclude the analysis of critical loci started in [\textit{M. Bertolini} et al., J. Symb. Comput. 91, 74--97 (2019; Zbl 1403.14070)]. In [loc. cit.] it is shown that the minimal generators of the ideal of the critical locus for 3 projections from \(\mathbb{P}^4\) to \(\mathbb{P}^2\) are cubic polynomials, under the assumption that the maximal minors of the matrix involved do not share any common factor. However, in this work the authors study the case of 3 projections from \(\mathbb{P}^4\) to \(\mathbb{P}^2\) while dropping the genericity assumptions. First, they classify the canonical forms of \((n+1)\times n\) matrices of linear forms, for \(n\leq 3\), that drop rank in codimension 1. Dropping rank in codimension 1 means that the maximal minors have a non-trivial common factor of degree either 1 or 2. Thus, Theorem 2.1 provides all canonical forms of the \(4\times 3\) matrices of linear forms whose maximal minors have a greatest common divisor of degree 1, and Theorem 2.2 those whose maximal minors have a greatest common divisor of degree 2. Once the classification is done, the authors study the loci where these canonical forms drop rank, under some mild generality assumptions and in the main dimensional context of interest for computer vision. This leads to the main theorem of the article, where the critical locus is classified for three projections from \(\mathbb{P}^4\) to \(\mathbb{P}^2\) in the degenerate case. Finally, the last sections apply these results to the reconstruction problem in computer vision. Codes on linear sections of the Grassmannian 2021-11-25T18:46:10.358925Z "Carrillo-Pacheco, Jesús" "Zaldívar, Felipe" Let \(\mathbb{F}_q\) denote a finite field having \(q\) elements, where \(q\) is a prime power. For a vector space \(E\) of finite dimension \(m\) over \(\mathbb{F}_q\) and \(\ell\le m\), let \(G(\ell, m)\) denote the Grassmannian variety of vector subspaces of dimension \(\ell\) of \(E\).
A projective variety \(X\) is called a linear section of the Grassmannian \(G(\ell,m)\) if \(X=G(\ell, m)\cap Z(g_1,\ldots, g_N)\), where \(g_1, g_2, \ldots, g_N\) are linearly independent linear functionals and \(X(\mathbb{F}_q)=\{P_1, \ldots, P_M\}\) is a non-empty set of \(\mathbb{F}_q\)-rational points of \(X\). In this paper, the authors study parity-check codes, showing that for every linear section of a Grassmannian there exists a parity-check code with good properties depending on the linear section. For the Lagrangian-Grassmannian variety, they show that these parity-check codes are low-density parity-check (LDPC) codes. They also obtain some properties of parity-check codes associated to linear sections of Grassmannians. Maximal indexes of flag varieties for spin groups 2021-11-25T18:46:10.358925Z "Devyatov, Rostislav A." "Karpenko, Nikita A." "Merkurjev, Alexander S." Summary: We establish the sharp upper bounds on the indexes for most of the twisted flag varieties under the spin groups \(\operatorname{Spin}(n)\). Cotangent bundles of partial flag varieties and conormal varieties of their Schubert divisors 2021-11-25T18:46:10.358925Z "Lakshmibai, V." "Singh, R." Summary: Let \(P\) be a parabolic subgroup in \(G = \mathrm{SL}_n(\mathbf k)\), for \textbf{k} an algebraically closed field. We show that there is a \(G\)-stable closed subvariety of an affine Schubert variety in an affine partial flag variety which is a natural compactification of the cotangent bundle \(T^* G/P\). Restricting this identification to the conormal variety \(N^* X(w)\) of a Schubert divisor \(X(w)\) in \(G/P\), we show that there is a compactification of \(N^* X(w)\) as an affine Schubert variety. It follows that \(N^* X(w)\) is normal, Cohen-Macaulay, and Frobenius split.
Fujita's freeness conjecture for \(T\)-varieties of complexity one 2021-11-25T18:46:10.358925Z "Altmann, Klaus" "Ilten, Nathan" For a not too singular projective variety \(X\) and an ample divisor \(H\) on it, \textit{T. Fujita} conjectured in [Adv. Stud. Pure Math. 10, 167--178 (1987; Zbl 0659.14002)] that \(mH+K_X\) is basepoint free if \(m > \dim(X)\). This conjecture holds for curves by the Riemann-Roch theorem. If one asks only for nefness, then the conjecture holds for any \(X\) with at most rational Gorenstein singularities, as shown by Fujita in [loc. cit.]. As a nef divisor on a toric variety is automatically basepoint free, the original conjecture holds for Gorenstein toric varieties. Based on these two results, it is quite natural to ask whether the conjecture holds for \(T\)-varieties of complexity one with at most rational Gorenstein singularities. Such a variety \(X\) comes with an effective action of a torus \(T\) with \(\dim(T) = \dim(X)-1\), so the Chow quotient \(Y = X/T\) is a curve and the generic fiber is a toric variety, see [\textit{K. Altmann} et al., in: Contributions to algebraic geometry. Impanga lecture notes. Based on the Impanga conference on algebraic geometry, Banach Center, Bȩdlewo, Poland, July 4--10, 2010. Zürich: European Mathematical Society (EMS). 17--69 (2012; Zbl 1316.14001)]. The central result of this article is that in this case, Fujita's Freeness Conjecture holds. For the proof, note that for general \(T\)-varieties, nefness does not automatically imply basepoint freeness. Indeed, the authors show the implication only for nef divisors of the form \(mH+K_X\), with \(H\) ample and \(m > \dim(X)\). Moreover, for \(k \in \mathbb N\), they give a sequence of smooth \(\mathbb K^*\)-surfaces \(X_k\) (the simplest \(T\)-varieties of complexity one) with ample divisors \(H_k\) such that \(kH_k\) is not basepoint free (though it is nef as soon as \(k>2\)). Toric co-Higgs sheaves 2021-11-25T18:46:10.358925Z "Altmann, Klaus" "Witt, Frederik" For a complex toric variety \(X_{\Sigma}\) given by a fan \(\Sigma\subseteq N_{\mathbb R}=N\otimes_{\mathbb Z}{\mathbb R}\) for a lattice \(N\) and acting torus \(T=N\otimes_{\mathbb Z}{\mathbb C}^*\), by \textit{A. Klyachko} [Math. USSR, Izv. 35, No. 2, 337--375 (1990; Zbl 0706.14010)] a \textit{toric sheaf} \({\mathcal E}\) (that is, an \({\mathcal O}_X\)-module with an action of the torus \(T\) which is linear on the fibers and is compatible with the \(T\)-action on \(X\)) corresponds to the complex vector space \(E={\mathcal E}_1/{\mathfrak m}_{X,1}{\mathcal E}_1\) (where \({\mathcal E}_1\) is the stalk of \({\mathcal E}\) at \(1\in T\subseteq X\) and \({\mathfrak m}_{X,1}\) is the maximal ideal of \({\mathcal O}_{X,1}\)) together with a decreasing \({\mathbb Z}\)-filtration \(E^{\bullet}_{\rho}\) indexed by the rays \(\rho\in \Sigma(1)\). Following \textit{I. Biswas} et al. [Ill. J. Math. 65, No. 1, 181--190 (2021; Zbl 1465.14048)] a \textit{toric co-Higgs sheaf} is a pair \(({\mathcal E},\Phi)\) consisting of a toric sheaf \({\mathcal E}\) on a toric variety \(X_{\Sigma}\) and a \textit{Higgs field}, i.e., a \(T\)-equivariant \({\mathcal O}_X\)-morphism \(\Phi:{\mathcal E}\rightarrow{\mathcal E}\otimes_{{\mathcal O}_X}{\mathcal T}_X\) such that \(\Phi\wedge\Phi=0\), where \({\mathcal T}_X\) is the tangent sheaf of \(X\). Dropping the integrability condition \(\Phi\wedge\Phi=0\), the pair \(({\mathcal E},\Phi)\) is called a \textit{toric pre-co-Higgs sheaf} and \(\Phi\) a \textit{pre-co-Higgs field}.
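On a toric surface chart where \({\mathcal T}_X\) is trivialized by two vector fields \(\partial_1,\partial_2\), a pre-co-Higgs field decomposes as \(\Phi = \Phi_1\otimes\partial_1 + \Phi_2\otimes\partial_2\) with \(\Phi_i\) endomorphism-valued, and the integrability condition \(\Phi\wedge\Phi=0\) reduces to the vanishing of the commutator \([\Phi_1,\Phi_2]\). The following minimal sympy sketch (our illustration of this standard reduction, not code from the paper under review) checks two rank-2 examples:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')

# components of Phi along the two trivializing vector fields
Phi1 = sp.Matrix([[0, x], [0, 0]])
Phi2 = sp.Matrix([[1, 0], [y, 1]])

# Phi ^ Phi = [Phi1, Phi2] (d_1 ^ d_2); nonzero here, so only pre-co-Higgs
print(sp.simplify(Phi1*Phi2 - Phi2*Phi1))

# replacing Phi2 by a matrix commuting with Phi1 gives an honest co-Higgs field
Phi2c = sp.Matrix([[1, x*y], [0, 1]])
print(sp.simplify(Phi1*Phi2c - Phi2c*Phi1))   # the zero matrix
\end{verbatim}
The first pair fails integrability and so is merely pre-co-Higgs; the second pair commutes and hence satisfies \(\Phi\wedge\Phi=0\).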
The paper under review considers general co-Higgs fields and not only \(M\)-homogeneous ones as in [Biswas et al., loc. cit.]. To study these general co-Higgs sheaves, the authors start by characterizing pre-co-Higgs fields using Klyachko's formalism by means of associated contractions in Theorem 8, and then show that every co-Higgs field defines a commutative finitely generated \({\mathbb C}[M]\)-algebra, the \textit{Higgs algebra}. Next, the authors introduce some combinatorial invariants: first, using that a pre-co-Higgs field is a direct sum \(\Phi=\sum \Phi^r\) of maps \(\Phi^r:{\mathcal E}\rightarrow {\mathcal E}\otimes_{{\mathcal O}_X}{\mathcal T}_X\) of degree \(r\in M\), they define the corresponding \textit{Higgs polytope} of \(\Phi\) as the convex hull in \(M_{\mathbb R}\) of its support \(\text{supp}(\Phi)=\{r\in M: \Phi^r\neq 0\}\subseteq M\). The convex hull of the totality of degrees \(r\in M\) of all possible toric pre-co-Higgs fields defines a second polytope, the \textit{Higgs range}. After proving some properties of these polytopes, in the last two sections of the paper they are calculated for several smooth toric surfaces: they compute the Higgs range of the projective plane and of Hirzebruch and Fano surfaces, and they also compute the Higgs polytope for some del Pezzo surfaces. The whole paper includes many illustrative examples with explicit calculations nicely complementing the developments. Strong factorization and the braid arrangement fan 2021-11-25T18:46:10.358925Z "Machacek, John" Summary: We establish strong factorization for pairs of smooth fans which are refined by the braid arrangement fan. Our method uses a correspondence between cones and preposets. Singular curves of low degree and multifiltrations from osculating spaces 2021-11-25T18:46:10.358925Z "Buczyński, Jarosław" "Ilten, Nathan" "Ventura, Emanuele" Summary: In order to study projections of smooth curves, we introduce multifiltrations obtained by combining flags of osculating spaces. We classify all configurations of singularities occurring for a projection of a smooth curve embedded by a complete linear system away from a projective linear space of dimension at most two. In particular, we determine all configurations of singularities of non-degenerate degree \(d\) rational curves in \(\mathbb{P}^n\) when \(d-n\leq 3\) and \(d<2n\). Along the way, we describe the Schubert cycles giving rise to these projections. We also reprove a special case of the Castelnuovo bound using these multifiltrations: under the assumption \(d<2n\), the arithmetic genus of any non-degenerate degree \(d\) curve in \(\mathbb{P}^n\) is at most \(d-n\). Syzygies of the apolar ideals of the determinant and permanent 2021-11-25T18:46:10.358925Z "Alper, Jarod" "Rowlands, Rowan" Given a polynomial \(f\in \mathbb K[y_1,\ldots,y_k]\) one defines its apolar ideal \(f^{\bot}\) as \[f^{\bot}=\{g\in\mathbb K[y_1,\ldots,y_k] : \partial g(f)=0\}.\] Recall that to a monomial \(y^{\alpha}=y_1^{\alpha_1}\cdots y_k^{\alpha_k}\) one associates a differential operator \[\frac{\partial}{\partial y^{\alpha}}=\frac{\partial^{|\alpha|}}{\partial y_1^{\alpha_1}\cdots\partial y_k^{\alpha_k}}\] and extends this definition linearly to all polynomials.
Thus to \(g=\sum c_{\alpha}y^{\alpha}\) one associates the differential operator \[\partial g= \sum c_{\alpha}\frac{\partial}{\partial y^{\alpha}}.\] The authors of the paper under review are interested in the apolar ideals of two specific polynomials, \(\mathrm{det}_n\) and \(\mathrm{perm}_n\), which are the elements of the ring \(\mathbb K[x_{11},\ldots, x_{1n},x_{21},\ldots,x_{nn}]\) defined as \[\mathrm{det}_n=\sum_{\sigma\in S_n} \mathrm{sgn}(\sigma) x_{1\sigma(1)}\cdots x_{n\sigma(n)}\] and \[\mathrm{perm}_n=\sum_{\sigma\in S_n} x_{1\sigma(1)}\cdots x_{n\sigma(n)}.\] [\textit{S. M. Shafiei}, J. Commut. Algebra 7, No. 1, 89--123 (2015; Zbl 1364.13024)] showed that the ideals \((\mathrm{det}_n)^{\bot}\) and \((\mathrm{perm}_n)^{\bot}\) are generated by quadrics. She provided an explicit minimal set of generators. The authors extend this study to the first syzygies. They show that the first syzygies of \((\mathrm{det}_n)^{\bot}\) are linear except in characteristic two, where both polynomials and hence their apolar ideals coincide. Thus \((\mathrm{det}_n)^{\bot}\) satisfies at least the \(N_3\) property of \textit{M. L. Green} [J. Differ. Geom. 19, 125--167, 168--171 (1984; Zbl 0559.14008)]. On the other hand, the syzygies of \((\mathrm{perm}_n)^{\bot}\) require also quadratic generators, in arbitrary characteristic. Thus one can distinguish both polynomials by properties of their minimal graded free resolutions. The paper is clearly written and all arguments are kept pretty effective, even if some of them are quite involved. On the Terracini locus of projective varieties 2021-11-25T18:46:10.358925Z "Ballico, Edoardo" "Chiantini, Luca" The aim of this article is to introduce and study the Terracini locus of \((X,L)\), where \(X\) is a projective variety and \(L\) is a line bundle on \(X\). Let \(X \subset \mathbb{P}^N\) be a smooth integral projective variety of dimension \(n\), and let \(L\) be a line bundle on \(X\). If \(S \subset X_{reg}\) is a reduced finite set, then denote by \(2S\) the union of the fat points \(2p\) for \(p \in S\). The \(r\)-Terracini locus of \((X,L)\) is the locally closed set given by all the \(S \subset X_{reg}\) such that \(S\) is reduced and of cardinality \(r\) and \(h^{0}(\mathcal{I}_{(2S,X)}\otimes L) > h^0(L)-(n+1)r\). After recalling some basic facts about \(0\)-dimensional schemes, in the third section the authors provide sufficient conditions for a reduced and finite set \(S \subset X_{reg}\) not to lie in the \(r\)-th Terracini locus, also in the case of \(L = L_1 \otimes L_2\), with \(L_i\) line bundles. The fourth and fifth sections are devoted to examples in the case when \(X\) is a projective space and \(L = \mathcal{O}(d)\). Moreover, bounds on the dimension of the Terracini locus are provided. Finally, the authors underline the link between the study of the Terracini locus and the \(r\)-identifiability of a point \(u\) of a secant variety \(S_r(X)\), i.e. the uniqueness of the decomposition of \(u\) as a sum of \(r\) points of \(X\). Conjecture \(\mathcal{O}\) holds for some horospherical varieties of Picard rank 1 2021-11-25T18:46:10.358925Z "Bones, Lela" "Fowler, Garrett" "Schneider, Lisa" "Shifler, Ryan M." Summary: Property \(\mathcal{O}\) for an arbitrary complex Fano manifold \(X\) is a statement about the eigenvalues of the linear operator obtained from the quantum multiplication of the anticanonical class of \(X\). Conjecture \(\mathcal{O}\) is a conjecture that property \(\mathcal{O}\) holds for any Fano variety.
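As a toy illustration of property \(\mathcal{O}\) (a standard example, not taken from the paper): for \(X=\mathbb{P}^1\) one has \(c_1(X)=2H\) and the small quantum product satisfies \(H\star H=q\), so at \(q=1\) quantum multiplication by the anticanonical class acts on the basis \(\{1,H\}\) of \(H^*(\mathbb{P}^1)\) by the matrix \[\begin{pmatrix}0&2\\ 2&0\end{pmatrix},\] whose eigenvalues are \(\pm 2\); property \(\mathcal{O}\) holds here, since the spectral radius \(\delta_0=2\) is itself an eigenvalue of multiplicity one, and the nonnegativity of such matrices is what makes Perron-Frobenius arguments applicable.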
Pasquier classified the smooth nonhomogeneous horospherical varieties of Picard rank 1 into five classes. Conjecture \(\mathcal{O}\) has already been shown to hold for the odd symplectic Grassmannians, which form one of these classes. We will show that conjecture \(\mathcal{O}\) holds for two more classes and an example in a third class of Pasquier's list. Perron-Frobenius theory reduces our proofs to graph-theoretic arguments. Gromov-Witten theory with derived algebraic geometry 2021-11-25T18:46:10.358925Z "Mann, Etienne" "Robalo, Marco" Summary: In this survey we add two new results that are not in our paper [Geom. Topol. 22, No. 3, 1759--1836 (2018; Zbl 1423.14320)]. Using the idea of brane actions discovered by Toën, we construct a lax associative action of the operad of stable curves of genus zero on a smooth variety \(X\) seen as an object in correspondences in derived stacks. This action encodes the Gromov-Witten theory of \(X\) in purely geometrical terms. For the entire collection see [Zbl 1471.14005]. Moduli of stable maps in genus one and logarithmic geometry. I 2021-11-25T18:46:10.358925Z "Ranganathan, Dhruv" "Santos-Parker, Keli" "Wise, Jonathan" The paper under review studies moduli spaces of genus one curves by means of logarithmic structures. The authors present a smooth and proper modular compactification of the Kontsevich space of stable maps from genus one curves to projective space. Building on the key construction of Vakil and Zinger desingularizing the principal component, the authors give a modular understanding of that construction after a minor modification, and develop an analogous construction for the moduli space of genus 1 pointed stable quasimaps to projective space. In particular, the authors develop new techniques for studying the geometry of the elliptic \(m\)-fold singularity, which result in a modular factorization of the birational maps connecting Smyth's spaces of genus one curves with marked points. Since blowups of a moduli space carry no a priori modular interpretation, the authors also show how to build a modular interpretation for logarithmic blowups of logarithmic moduli spaces by adding tropical information to the moduli problem. In summary, the authors apply techniques of tropical geometry and logarithmic structures to study moduli spaces of stable maps (or quasimaps) from genus one curves to projective space. It is an interesting exploration of the relationship between tropical geometry, logarithmic moduli spaces, stable maps and moduli of elliptic curves. Symmetric non-negative forms and sums of squares 2021-11-25T18:46:10.358925Z "Blekherman, Grigoriy" "Riener, Cordian" The authors study \textit{symmetric} nonnegative homogeneous polynomials and relationships between the cone of symmetric sums of squares and the cone of symmetric nonnegative forms of fixed degree \(2d\) (in arbitrary numbers of variables). They provide a uniform representation of the cone of symmetric sums of squares and its dual cone in terms of linear matrix polynomials. In particular, by using the representation, they completely characterize the sums of squares cone \(\Sigma_{n,4}\) of degree \(4\) in \(n\) variables and its boundary, and therefore certify the difference between symmetric sums of squares and symmetric non-negative quartics. Also, they investigate the asymptotic behavior of the cone of sums of squares and nonnegative forms of fixed degree \(2d\) as the number \(n\) of variables grows.
In detail, they show that the difference between the cone of symmetric nonnegative forms and the cone of sums of squares does not grow arbitrarily large for any fixed degree \(2d\) (even though the (volume) difference between the two cones increases exponentially for any even degree \(2d\) as the number \(n\) of variables grows). In particular, they show that the cone of symmetric non-negative quartics and the cone of quartic symmetric sums of squares asymptotically become closer as the number of variables grows, by proving that the two cones approach the same limit (in degree \(4\)). They conjecture that the limits agree in any degree \(2d\) for \(d>2\). Exponential-constructible functions in \(P\)-minimal structures 2021-11-25T18:46:10.358925Z "Chambille, Saskia" "Cubides Kovacsics, Pablo" "Leenknegt, Eva" There exist many results showing that certain classes of functions are closed under integration; the main result of the present paper is also such a result, for functions on a finite extension \(K\) of the field \(\mathbb Q_p\) of \(p\)-adic numbers. Recall that there is a natural (Haar) measure on such fields \(K\), yielding a notion of integration of functions \(K^n \to \mathbb C\). More precisely, the authors specify certain classes \(\mathcal H\) of such functions which are ``base-stable under integration'', meaning that whenever a function \(f\colon K^{n+1} \to \mathbb C\) lies in \(\mathcal H\), then so does the function \(K^n \to \mathbb C, x \mapsto \int_{K} f(x, y)\,dy\). To be precise, the authors more generally consider functions on certain subsets of \(K^n \times \mathbb Z^m\), where \(\mathbb Z\) is equipped with the counting measure. In this review, I will for simplicity stick to functions on \(K^n\), and I will write \(\mathcal H(K^n)\) for those functions in \(\mathcal H\) which live on \(K^n\). In many results of that kind (including the one in the present paper), one first fixes a certain first-order language \(\mathcal L\) on \(K\), yielding in particular a notion of \(\mathcal L\)-definable functions \(K^n \to \mathbb Z\) (where \(\mathbb Z\) is considered as the value group of \(K\)). Then \(\mathcal H(K^n)\) is defined as some algebra generated by certain specific functions which are specified in terms of \(\mathcal L\)-definable functions. A classical result of that kind is the one by \textit{J. Denef} [Prog. Math., 25--47 (1985; Zbl 0597.12021)], where \(\mathcal L\) is the valued field language, and where \(\mathcal H(K^n)\) is the \(\mathbb Q\)-algebra generated by functions of the form \(\alpha\colon K^n \to \mathbb Z\) and \(K^n \to \mathbb Q, x \mapsto q^{\alpha(x)}\), where \(\alpha\) is \(\mathcal L\)-definable and \(q\) denotes the cardinality of the residue field of \(K\). This has then been generalized in two directions: (a) allow other languages \(\mathcal L\); (b) add more generators to \(\mathcal H(K^n)\). An important generalization of type (b) is to consider classes of functions containing additive characters \(\psi\colon (K, +) \to (\mathbb C^\times, \cdot)\), since those naturally appear in representation theory and in Fourier transforms. In direction (a), it is tempting to believe that one can take \(\mathcal L\) to be any P-minimal language on \(K\); P-minimality is an axiomatic condition implying that \(\mathcal L\)-definable objects are ``tame'' in some geometric sense. Until recently, assuming P-minimality was not enough to prove the desired closedness under integration; one additionally had to assume the existence of definable Skolem functions. This assumption has now been dropped.
More precisely, in a previous paper, \textit{P. Cubides Kovacsics} and \textit{E. Leenknegt} [J. Symb. Log. 81, No. 3, 1124--1141 (2016; Zbl 1428.03060)] proved closedness under integration for any P-minimal \(\mathcal L\) when \(\mathcal H(K^n)\) is generated by the same kinds of functions as in the above-mentioned result by Denef. In the present paper, they now show that this result also extends to algebras \(\mathcal H(K^n)\) containing additive characters. In particular, the authors had to find the ``right'' algebras \(\mathcal H(K^n)\) (which are not a straightforward generalization of the ones used when definable Skolem functions exist). Counting algebraic points in expansions of o-minimal structures by a dense set 2021-11-25T18:46:10.358925Z "Eleftheriou, Pantelis E." The Pila-Wilkie theorem, which was used in Pila's solution of the André-Oort conjecture [\textit{J. Pila}, Ann. Math. 173, No. 3, 1779--1840 (2011; Zbl 1243.14022)], asserts that a set definable in an o-minimal structure having many rational points contains an infinite semialgebraic set [\textit{J. Pila} et al., Duke Math. J. 133, No. 3, 591--616 (2006; Zbl 1217.11066)]. This paper extends the Pila-Wilkie theorem to an expansion \(\langle \mathcal R, P \rangle\) of an o-minimal structure \(\mathcal R\) by a dense set \(P\) which is either (1) an elementary substructure of \(\mathcal R\), or (2) dcl-independent. Under these assumptions, a set definable in \(\langle \mathcal R, P \rangle\) having many algebraic points contains an infinite set which is \(\emptyset\)-definable in \(\langle \mathcal R, P \rangle\). The definition of algebraic points comes from [\textit{J. Pila}, Selecta Math. N. S. 15, No. 1, 151--170 (2009; Zbl 1218.11068)]. The structures of the form \(\langle \mathcal R, P \rangle\) satisfying the assumptions are introduced in [\textit{L. van den Dries}, Fund. Math. 157, No. 1, 61--78 (1998; Zbl 0906.03036)] and [\textit{A. Dolich} et al., Ann. Pure Appl. Logic 167, No. 8, 684--706 (2016; Zbl 1432.03070)], respectively. The above theorem is deduced from Pila's result on algebraic points by introducing a generalization of Pila's algebraic part called the algebraic trace part. The proof is brief in case (1). In case (2), the theorem is reduced to the case in which the set is a cone, using the cone decomposition theorem given in [\textit{P. Eleftheriou} et al., Isr. J. Math. 239, No. 1, 435--500 (2020; Zbl 1472.03037)]. The set of separable states has no finite semidefinite representation except in dimension \(3\times 2\) 2021-11-25T18:46:10.358925Z "Fawzi, Hamza" Given integers \(n \ge m,\) let \(\mathrm{Sep}(n, m)\) be the set of {\em separable states} on the Hilbert space \(\mathbb{C}^n \otimes \mathbb{C}^m,\) i.e., \[\mathrm{Sep}(n, m) := \mathbf{conv}\{x x^\dagger \otimes y y^\dagger \ : \ x \in \mathbb{C}^n, |x| = 1, y \in \mathbb{C}^m, |y| = 1\}.\] Here \(x^\dagger\) indicates conjugate transpose, \(|x|^2 := x^\dagger x\) and \(\mathbf{conv}\) denotes the convex hull. We say that a convex set \(C \subset \mathbb{R}^d\) has a {\em semidefinite representation} (of size \(r\)) if it can be expressed as \(C = \pi(S),\) where \(\pi \colon \mathbb{R}^D \to \mathbb{R}^d\) is a linear map and \(S \subset \mathbb{R}^D\) is a convex set defined using a linear matrix inequality \[S = \{w \in \mathbb{R}^D \ : \ M_0 + w_1M_1 + \cdots + w_DM_D \succeq 0\}\] where \(M_0, \ldots, M_D\) are Hermitian matrices of size \(r \times r.\) It is known, from the earlier work of [\textit{S. L. Woronowicz}, Rep. Math. Phys.
10, 165--183 (1976; Zbl 0347.46063)] that for \(n + m \le 5,\) the set \(\mathrm{Sep}(n, m)\) is just the set of states which have a positive partial transpose, and hence it has a semidefinite representation. In the paper under review, the author shows that for \(n + m > 5,\) the set \(\mathrm{Sep}(n, m)\) has no semidefinite representation, and so this provides a new counterexample to the Helton-Nie conjecture [\textit{J. W. Helton} and \textit{J. Nie}, SIAM J. Optim. 20, No. 2, 759--791 (2009; Zbl 1190.14058)], which was recently disproved by \textit{C. Scheiderer} [SIAM J. Appl. Algebra Geom. 2, No. 1, 26--44 (2018; Zbl 1391.90462)]. The paper is very clear, well written and quite interesting. Differentiable approximation of continuous semialgebraic maps 2021-11-25T18:46:10.358925Z "Fernando, José F." "Ghiloni, Riccardo" This paper concerns the problem of approximating a uniformly continuous semialgebraic map \(f: S \to T\) from a compact semialgebraic set \(S\) to an arbitrary semialgebraic set \(T\) by a semialgebraic map \(g: S \to T\) that is differentiable of class \(C^\nu\), where \(\nu\) is a positive integer or \(\infty\). It is known that if \(T\) is a \(C^\nu\) semialgebraic manifold, then arbitrarily good (in the \(C^\nu\)-norm) \(C^\nu\) semialgebraic approximations exist. The authors show that for \emph{any semialgebraic \(T\)}, arbitrarily good \(C^1\) approximations are possible. For \(\nu \geq 2\), they obtain density results when: (1) \(T\) is compact and locally \(C^\nu\) semialgebraically equivalent to a polyhedron, or (2) \(T\) is an open semialgebraic subset of a Nash set. The paper includes a useful review of approximation results in semialgebraic geometry, including discussion of key references. The real polynomial eigenvalue problem is well conditioned on the average 2021-11-25T18:46:10.358925Z "Beltrán, Carlos" "Kozhasov, Khazhgali" The paper deals with the polynomial eigenvalue problem; specifically, its condition number is studied. First, the solution variety \(\mathcal{S}\) is introduced, which turns out to be a real algebraic or semialgebraic subset of \((\mathbb{R}^m\setminus\{0\})\times S^1\), the product of the variety of inputs and the variety of outputs endowed with Finsler structures. The condition number \(\mu(a)\) of a given input \(a\) is the sum of the local condition numbers \(\mu(a,x)\) over all solutions \(x\) for the input. If the solution variety \(\mathcal{S}\) is so-called nondegenerate, a formula for the squared condition number is proven. Afterwards the formula is applied to the case of the polynomial eigenvalue problem, and in this way a new proof of the corresponding formula is obtained. Computing the equisingularity type of a pseudo-irreducible polynomial 2021-11-25T18:46:10.358925Z "Poteaux, Adrien" "Weimann, Martin" In the paper under review, the authors characterize a class of germs of plane curve singularities, containing irreducible ones, whose equisingularity type can be computed in an expected quasi-linear time with respect to the discriminant valuation of a Weierstrass equation. Prism graphs in tropical plane curves 2021-11-25T18:46:10.358925Z "Jacoby, Liza" "Morrison, Ralph" "Weber, Ben" Summary: Any smooth tropical plane curve contains a distinguished trivalent graph called its skeleton. In [Discrete Math. 344, No. 1, Article ID 112161, 19 p. (2021; Zbl 1455.52013)], the second author and \textit{A. K. Tewari} proved that the so-called big-face graphs cannot be the skeleta of tropical curves for genus \(12\) and greater.
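For orientation (standard background on tropical plane curves rather than a result of the paper): a smooth tropical plane curve is dual to a regular unimodular triangulation of a lattice polygon \(P\), its genus \(g\) equals the number of interior lattice points of \(P\), and its skeleton is the trivalent genus-\(g\) graph obtained by discarding the unbounded and contractible parts of the curve; the question is thus which trivalent graphs of genus \(g\) arise from some such polygon and triangulation.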
In this paper we answer an open question they posed by extending their result to prism graphs, proving that a prism graph is the skeleton of a smooth tropical plane curve precisely when the genus is at most \(11\). Our main tool is a classification of lattice polygons with two points that can simultaneously view all others, without having any one point that can observe all others. Generalizing classical Clifford algebras, graded Clifford algebras and their associated geometry 2021-11-25T18:46:10.358925Z "Vancliff, Michaela" Summary: This article is based on a talk given by the author at the \textit{12th International Conference on Clifford Algebras and their Applications in Mathematical Physics}. A generalization, introduced by \textit{T. Cassidy} and the author [J. Lond. Math. Soc., II. Ser. 90, No. 2, 631--636 (2014; Zbl 1303.16029)], of a classical Clifford algebra is discussed together with connections between that generalization and a generalization of a graded Clifford algebra. A geometric approach to studying the algebras, viewed through the lens of Artin, Tate and Van den Bergh's noncommutative algebraic geometry, is also presented. Hopf algebras and tensor categories. International workshop, Nanjing University, Nanjing, China, September 9--13, 2019 2021-11-25T18:46:10.358925Z "Andruskiewitsch, Nicolás" "Liu, Gongxiang" "Montgomery, Susan" "Zhang, Yinhuo" Publisher's description: Articles in this volume are based on talks given at the International Workshop on Hopf Algebras and Tensor Categories, held from September 9--13, 2019, at Nanjing University, Nanjing, China. The articles highlight the latest advances and further research directions in a variety of subjects related to tensor categories and Hopf algebras. Primary topics discussed in the text include the classification of Hopf algebras, structures and actions of Hopf algebras, algebraic supergroups, representations of quantum groups, quasi-quantum groups, algebras in tensor categories, and the construction method of fusion categories. The articles of this volume will be reviewed individually. On some algebras associated to genus one curves 2021-11-25T18:46:10.358925Z "Fisher, Tom" Summary: In [\textit{D. Haile} and \textit{I. Han}, J. Algebra 313, No. 2, 811--823 (2007; Zbl 1121.14022); \textit{J.-M. Kuo}, J. Algebra 330, No. 1, 86--102 (2011; Zbl 1229.16013)], the respective authors studied certain non-commutative algebras associated to a binary quartic or ternary cubic form. We extend their construction to pairs of quadratic forms in four variables, and conjecture a further generalisation to genus one curves of arbitrary degree. These constructions give an explicit realisation of an isomorphism relating the Weil-Châtelet and Brauer groups of an elliptic curve. Embedding of the derived Brauer group into the secondary \(K\)-theory ring 2021-11-25T18:46:10.358925Z "Tabuada, Gonçalo" Summary: In this note, making use of the recent theory of noncommutative motives, we prove that the canonical map from the derived Brauer group to the secondary Grothendieck ring has the following injectivity properties: in the case of a regular integral quasi-compact quasi-separated scheme, it is injective; in the case of an integral normal Noetherian scheme with a single isolated singularity, it distinguishes any two derived Brauer classes whose difference is of infinite order.
As an application, we show that the aforementioned canonical map is injective in the case of affine cones over smooth projective plane curves of degree \(\geq 4\) as well as in the case of Mumford's (famous) singular surface. Calabi-Yau algebras and the shifted noncommutative symplectic structure 2021-11-25T18:46:10.358925Z "Chen, Xiaojun" "Eshmatov, Farkhod" Summary: In this paper we show that for a Koszul Calabi-Yau algebra, there is a shifted bi-symplectic structure, in the sense of Crawley-Boevey-Etingof-Ginzburg [\textit{W. Crawley-Boevey} et al., Adv. Math. 209, No. 1, 274--336 (2007; Zbl 1111.53066)], on the cobar construction of its co-unitalized Koszul dual coalgebra, and hence its DG representation schemes, in the sense of Berest-Khachatryan-Ramadoss [\textit{Y. Berest} et al., Adv. Math. 245, 625--689 (2013; Zbl 1291.14006)], have a shifted symplectic structure in the sense of Pantev-Toën-Vaquié-Vezzosi [\textit{T. Pantev} et al., Publ. Math., Inst. Hautes Étud. Sci. 117, 271--328 (2013; Zbl 1328.14027)]. Quotients of triangulated categories and equivalences of Buchweitz, Orlov, and Amiot-Guo-Keller 2021-11-25T18:46:10.358925Z "Iyama, Osamu" "Yang, Dong" Summary: We give a simple sufficient condition for a Verdier quotient \(\mathcal{T}/\mathcal{S}\) of a triangulated category \(\mathcal{T}\) by a thick subcategory \(\mathcal{S}\) to be realized inside of \(\mathcal{T}\) as an ideal quotient. As applications, we deduce three significant results by \textit{R.-O. Buchweitz} [``Maximal Cohen-Macaulay modules and Tate-cohomology over Gorenstein rings'', Preprint], \textit{D. Orlov} [Prog. Math. 270, 503--531 (2009; Zbl 1200.18007)] and \textit{C. Amiot} [Ann. Inst. Fourier 59, No. 6, 2525--2590 (2009; Zbl 1239.16011)], \textit{L. Guo} [J. Pure Appl. Algebra 215, No. 9, 2055--2071 (2011; Zbl 1239.16012)], \textit{B. Keller} [Doc. Math. 10, 551--581 (2005; Zbl 1086.18006); with \textit{M. Van den Bergh}, J. Reine Angew. Math. 654, 125--180 (2011; Zbl 1220.18012)].
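For context (the statement is classical and is recalled here only for the reader's convenience): Buchweitz's equivalence asserts that for an Iwanaga-Gorenstein ring \(R\) the stable category of maximal Cohen-Macaulay modules realizes the singularity category, \[\underline{\mathrm{CM}}(R)\;\simeq\;D^b(\operatorname{mod}R)/\operatorname{perf}(R),\] and it is Verdier quotients of exactly this shape that the authors realize inside the ambient triangulated category as ideal quotients.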
Euler characteristic of Springer fibers 2021-11-25T18:46:10.358925Z "Kim, D." Summary: For Weyl groups of classical types, we present formulas to calculate the restriction of Springer representations to a maximal parabolic subgroup of the same type. As a result, we give recursive formulas for Euler characteristics of Springer fibers for classical types. We also give tables of those for exceptional types. Low degree cohomology of Frobenius kernels 2021-11-25T18:46:10.358925Z "Ngo, Nham V." Summary: Let \(G\) be a simple algebraic group defined over an algebraically closed field of characteristic \(p>0\). For a positive integer \(r\), let \(G_r\) be the \(r\)-th Frobenius kernel of \(G\). We determine in this paper a number \(m\) such that the cohomology \(\mathrm{H}^n(G_r,k)\) is isomorphic to \(\mathrm{H}^n(G_1,k)\) for all \(n\le m\) where \(m\) depends on \(p\) and the type of \(G\). For the entire collection see [Zbl 1411.13002]. The mapping class group action on \(\mathrm{SU}(3)\)-character varieties 2021-11-25T18:46:10.358925Z "Goldman, William M." "Lawton, Sean" "Xia, Eugene Z." Let \(\Sigma\) be a compact oriented surface of genus \(g\) with boundary \(\partial \Sigma\), which has \(n \ge 1\) components.
Let us consider the group of orientation-preserving homeomorphisms of \(\Sigma\) which fix \(\partial \Sigma\) pointwise. The mapping class group \(\Gamma\) of \(\Sigma\) is the group of connected components of this group of homeomorphisms. The authors also define, for an algebraic group \(G\), the relative character variety \(\mathcal M_{\mathcal C}(G)\). W. Goldman conjectured many years ago that \(\Gamma\) acts ergodically on \(\mathcal M_{\mathcal C}(K)\) for compact Lie groups \(K\), and proved this conjecture for some compact Lie groups. Later some other results in this direction were proved. In this article the case \(K=\mathrm{SU}(3)\), \(g=1\) and \(n=1\) is considered. It is proved that the aforementioned action of \(\Gamma\) in this case is ergodic with respect to the natural symplectic measure on the character variety. Global variants of Hartogs' theorem 2021-11-25T18:46:10.358925Z "Bochnak, Jacek" "Kucharz, Wojciech" This is a very interesting paper. The classical Hartogs theorem about separately analytic complex functions being analytic finds here a very fine generalization, which is also global. Let's have a look at the theorems proved in the paper: Theorem 1.1. Let \(X=X_1\times\cdots\times X_n\) be the product of \(n\) complex algebraic manifolds and let \(f:U\to \mathbb C\) be a function defined on an open subset \(U\) of \(X\). Assume that for each nonsingular algebraic curve \(C\subset X\), parallel to one of the factors of \(X\), the restriction \(f|U\cap C\) is a holomorphic function. Then \(f\) is a holomorphic function. Theorem 1.2. Let \(X=X_1\times\cdots\times X_n\) be the product of \(n\) complex algebraic manifolds and let \(f:U\to \mathbb C\) be a function defined on an open subset \(U\) of \(X\). Assume that for each nonsingular algebraic curve \(C\subset X\), parallel to one of the factors of \(X\), the restriction \(f|U\cap C\) is a Nash function. Then \(f\) is a Nash function. Theorems 1.1 and 1.2 have a suitable analogue for regular functions. Definition. Let \(X\) be a complex algebraic manifold. A function \(f:U\to \mathbb C\), defined on an open subset \(U\) of \(X\), is said to be regular if there exists a rational function \(R\) on \(X\) such that \(U\subset X\smallsetminus\mathrm{Pole}(R)\) and \(f=R|U\), where \(\mathrm{Pole}(R)\) stands for the polar set of \(R\). Clearly, any regular function on \(U\) is a Nash function. Theorem 1.3. Let \(X=X_1\times\cdots\times X_n\) be the product of \(n\) complex algebraic manifolds and let \(f:U\to \mathbb C\) be a function defined on an open subset \(U\) of \(X\). Assume that for each nonsingular algebraic curve \(C\subset X\), parallel to one of the factors of \(X\), the restriction \(f|U\cap C\) is a regular function. Then \(f\) is a regular function. The definition of a curve being ``parallel'' goes as follows: We say that a subset \(A\) of \(X\) is parallel to the \(i\)-th factor of \(X\) if \(\pi_j(A)\) consists of one point for each \(j\neq i\). The paper is very well written, which is typical for these authors, very well organized, very clear, but the methods used are not easy. For instance, in Proposition 2.3 below the Hironaka desingularization theorem is to be used. Not surprising, given the strength of the result, but worth noticing. Proposition 2.3. Let \(X\) be a complex algebraic manifold and let \(f:U\to \mathbb C\) be a function defined on an open subset \(U\) of \(X\). Assume that for each nonsingular algebraic curve \(C\subset X\) the restriction \(f|U\cap C\) is a holomorphic function. Then \(f\) is a holomorphic function.
The proof begins: According to Hironaka's theorem on resolution of singularities [\textit{H. Hironaka}, Ann. Math. (2) 79, 109--203 (1964; Zbl 0122.38603)], we may assume that the manifold \(X\) is projective. Hironaka's theorem cannot be avoided here. The paper is virtually self-contained, as the results used are well quoted and easy to find, even if difficult in themselves. Each tool that needs adjusting (like Noether's normalization lemma, presented here as Lemma 1.2) is adjusted and a full proof is given. This makes the paper very pleasant for the reader. The results seem very useful. It is often easier to examine functions on algebraic curves (cf. [\textit{J. Kollár} et al., Math. Ann. 370, No. 1--2, 39--69 (2018; Zbl 1407.14056)]). Orthogonality of divisorial Zariski decompositions for classes with volume zero 2021-11-25T18:46:10.358925Z "Tosatti, Valentino" Consider the following statement: Conjecture: Let \((X, \omega)\) be a compact Kähler manifold, and \(\alpha\) a pseudoeffective \((1,1)\) class. Then \[ \langle \alpha^{n-1} \rangle \cdot \alpha = \mathrm{Vol}(\alpha), \] where \(\mathrm{Vol}(\alpha)\) is the volume of the class \(\alpha\) and \(\langle \cdot \rangle\) is the moving intersection product of classes in the sense of Boucksom. The above is known as the orthogonality conjecture for divisorial Zariski decompositions, which was observed by \textit{S. Boucksom} et al. [J. Algebr. Geom. 22, No. 2, 201--248 (2013; Zbl 1267.32017); J. Algebr. Geom. 18, No. 2, 279--308 (2009; Zbl 1162.14003)] and is equivalent to the weak transcendental Morse inequalities, the \(C^1\) differentiability of the volume function on the big cone, and the ``cone duality'' conjecture, i.e., \textit{the dual cone of the pseudoeffective cone is the movable cone}. This was proven for \(X\) projective in [\textit{S. Boucksom} et al., J. Algebr. Geom. 22, No. 2, 201--248 (2013; Zbl 1267.32017); \textit{D. W. Nyström}, J. Am. Math. Soc. 32, No. 3, 675--689 (2019; Zbl 1429.32031)], and formulated as a conjecture on arbitrary compact Kähler manifolds in [\textit{S. Boucksom} et al., J. Algebr. Geom. 22, No. 2, 201--248 (2013; Zbl 1267.32017)]. The main result of this note is a proof of the orthogonality conjecture on arbitrary compact Kähler manifolds for pseudoeffective \((1,1)\) classes that are assumed to have volume zero. Computing regular meromorphic differential forms via Saito's logarithmic residues 2021-11-25T18:46:10.358925Z "Tajima, Shinichi" "Nabeshima, Katsusuke" The concept of regular meromorphic differential forms was introduced independently by \textit{E. Kunz} [Manuscr. Math. 15, 91--108 (1975; Zbl 0299.14013)] and \textit{D. Barlet} [C. R. Acad. Sci., Paris, Sér. A 282, 579--582 (1976; Zbl 0323.32006)]. Somewhat later the reviewer [Adv. Sov. Math. 1, 211--246 (1990; Zbl 0731.32005)] proved that in the hypersurface case such forms naturally appear as the image of the logarithmic residue introduced by \textit{K. Saito} [J. Fac. Sci., Univ. Tokyo, Sect. I A 27, 265--291 (1980; Zbl 0496.32007)]. The authors of the paper under review present an algorithm for computing regular meromorphic differential forms using Saito's residue and torsion differentials of the modules of regular holomorphic forms (see [the reviewer, Complex Variables, Theory Appl. 50, No. 7--11, 777--802 (2005; Zbl 1083.32024)]). Then their constructions are applied to explicit computations of the Gauss-Manin connection in the case of isolated hypersurface singularities.
As an example, the cases of functions of two and three variables are analyzed in detail. Adiabatic limit and the Frölicher spectral sequence 2021-11-25T18:46:10.358925Z "Popovici, Dan" In complex geometry, it is well known that the Frölicher spectral sequence of a compact Kähler manifold degenerates at the \(E_1\) page (in particular it degenerates at the \(E_2\) page). Since the Kähler condition is quite restrictive for compact complex manifolds of dimension at least 3, it is natural to seek other metric conditions which ensure the \(E_2\)-degeneration of the Frölicher spectral sequence. Let \(X\) be a compact complex manifold with a Hermitian metric \(\omega\). In this article, the author gives a sufficient metric condition for degeneration at \(E_2\), which roughly says that the torsion of \(\omega\) is ``small''. One of the new ideas is to consider the rescalings of \(\omega\) and \(\partial\), which is an adaptation of the adiabatic limit construction associated with a Riemann foliation (see, e.g., [\textit{E. Witten}, Commun. Math. Phys. 100, 197--229 (1985; Zbl 0581.58038)]) to the case of the splitting \(d=\partial +\overline{\partial}\). It seems interesting to point out that similar ideas also appeared in the setting of non-abelian Hodge theory, see [\textit{C. Simpson}, Mixed twistor structures, arXiv preprint alg-geom/9705006, 1997] and Theorem 2.2.4 in [\textit{C. Sabbah}, Polarizable twistor \(\mathcal{D}\)-modules. Paris: Société Mathématique de France (2005; Zbl 1085.32014)]. Moreover, using a variant of the Efremov-Shubin variational principle, along with the pseudodifferential Laplacian in [the author, Int. J. Math. 27, No. 14, Article ID 1650111, 31 p. (2016; Zbl 1365.53067)] and Demailly's Bochner-Kodaira-Nakano formula for Hermitian metrics, the author finds a formula for the dimensions of the vector spaces on each page of the Frölicher spectral sequence in terms of the number of small eigenvalues of the rescaled Laplacian. This formula is of independent interest, and is inspired by the analogous result for foliations proven in [\textit{J. A. Álvarez López} and \textit{Y. A. Kordyukov}, Geom. Funct. Anal. 10, No. 5, 977--1027 (2000; Zbl 0965.57024)]. The algebraic theory of fractional jumps 2021-11-25T18:46:10.358925Z "Goldfeld, Dorian" "Micheli, Giacomo" Summary: In this paper, we start by briefly surveying the theory of fractional jumps and transitive projective maps. Then we show some new results on the absolute jump index, on projectively primitive polynomials, and on compound constructions. For the entire collection see [Zbl 1455.11009]. On semigroup orbits of polynomials and multiplicative orders 2021-11-25T18:46:10.358925Z "Mello, Jorge" Summary: We show, under some natural restrictions, that some semigroup orbits of polynomials cannot contain too many elements of small multiplicative order modulo a large prime \(p\), extending previous work of \textit{I. E. Shparlinski} [Glasg. Math. J. 60, No. 2, 487--493 (2018; Zbl 1431.11130)]. The complete classification of empty lattice 4-simplices 2021-11-25T18:46:10.358925Z "Iglesias-Valiño, Óscar" "Santos, Francisco" Summary: An empty simplex is a lattice simplex with only its vertices as lattice points. The classification of empty simplices in dimension three was completed by G. White in 1964. In 1988, S. Mori, D. R. Morrison, and I. Morrison started the task in dimension four, with their motivation coming from the close relationship between empty simplices and terminal quotient singularities.
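To fix ideas (a standard family, recalled here for the reader's convenience and not taken from the paper): the Reeve simplices \[T_q=\operatorname{conv}\{0,\,e_1,\,e_2,\,e_1+e_2+q\,e_3\},\qquad q\in\mathbb{Z}_{\geq 1},\] are empty lattice 3-simplices of normalized volume \(q\), and White's theorem says that every empty 3-simplex has lattice width one, i.e., up to unimodular equivalence it lies between two consecutive lattice hyperplanes (for \(T_q\), the coordinate functional \(x_1\) takes only the values \(0\) and \(1\) on the vertices).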
Mori, Morrison, and Morrison conjectured a classification of empty simplices of prime volume, modulo finitely many exceptions. Their conjecture was proved by Sankaran (1990) with a simplified proof by Bober (2009). The same classification was claimed by Barile et al. in 2011 for simplices of non-prime volume, but this statement was proved wrong by Blanco et al. (2016). In this article, we complete the classification of 4-dimensional empty simplices. In doing so, we correct and complete the classification by Barile et al., and we also compute all the finitely many exceptions, by first proving an upper bound for their volume. The complete classification comprises: \begin{itemize} \item[1)] One 3-parameter family, consisting of simplices of width equal to one. \item[2)] Two 2-parameter families (the one in Mori et al., plus a second new one). \item[3)] Forty-six 1-parameter families (the 29 in Mori et al., plus 17 new ones). \item[4)] 2461 individual simplices not belonging to the above families, with (normalized) volumes ranging between 24 and 419. \end{itemize} We characterize the infinite families of empty simplices in terms of the lower dimensional point configurations that they project to, with techniques that can potentially be applied to higher dimensions and other classes of lattice polytopes. Multi-splits and tropical linear spaces from nested matroids 2021-11-25T18:46:10.358925Z "Schröter, Benjamin" Summary: We present an explicit combinatorial description of a special class of facets of the secondary polytopes of hypersimplices. These facets correspond to polytopal subdivisions called multi-splits. We show that the maximal cells in a multi-split of a hypersimplex are matroid polytopes of nested matroids. Moreover, we derive a description of all multi-splits of a product of simplices. Additionally, we present a computational result to derive explicit lower bounds on the number of facets of secondary polytopes of hypersimplices. A moment map interpretation of the Ricci form, Kähler-Einstein structures, and Teichmüller spaces 2021-11-25T18:46:10.358925Z "García-Prada, Oscar" "Salamon, Dietmar" Summary: This paper surveys the role of moment maps in Kähler geometry. The first section discusses the Ricci form as a moment map and then moves on to moment map interpretations of the Kähler-Einstein condition and the scalar curvature (Quillen-Fujiki-Donaldson). The second section examines the ramifications of these results for various Teichmüller spaces and their Weil-Petersson symplectic forms and explains how these arise naturally from the construction of symplectic quotients. The third section discusses a symplectic form introduced by Donaldson on the space of Fano complex structures. For the entire collection see [Zbl 1461.37002]. Stability of the conical Kähler-Ricci flows on Fano manifolds 2021-11-25T18:46:10.358925Z "Liu, Jiawei" "Zhang, Xi" Summary: In this paper, we study stability of the conical Kähler-Ricci flows on Fano manifolds. That is, if there exists a conical Kähler-Einstein metric with cone angle \(2\pi\beta\) along the divisor, then for any \(\beta'\) sufficiently close to \(\beta\), the corresponding conical Kähler-Ricci flow converges to a conical Kähler-Einstein metric with cone angle \(2\pi\beta'\) along the divisor. Here, we only use the condition that the log Mabuchi energy is bounded from below. This is a weaker condition than the properness that we have adopted to study the convergence.
As applications, we give parabolic proofs of Donaldson's openness theorem and his conjecture for the existence of conical Kähler-Einstein metrics with positive Ricci curvatures. Riccati-type pseudo-potentials, conservation laws and solitons of deformed sine-Gordon models 2021-11-25T18:46:10.358925Z "Blas, H." "Callisaya, H. F." "Campos, J. P. R." Summary: Deformed sine-Gordon (DSG) models \(\partial_\xi \partial_\eta w + \frac{d}{d w} V(w) = 0\), with \(V(w)\) being the deformed potential, are considered in the context of the Riccati-type pseudo-potential approach. A compatibility condition of the deformed system of Riccati-type equations reproduces the equation of motion of the DSG models. Then, we provide a pair of linear systems of equations for the DSG model and an associated infinite tower of non-local conservation laws. Through a direct construction and supported by numerical simulations of soliton scatterings, we show that the DSG models, which have recently been defined as quasi-integrable in the anomalous zero-curvature approach [\textit{L. A. Ferreira} and \textit{W. J. Zakrzewski}, J. High Energy Phys. 2011, No. 5, Paper No. 130, 39 p. (2011; Zbl 1296.81035)], possess new infinite towers of quasi-conservation laws. We compute numerically the first sets of non-trivial and independent charges (beyond energy and momentum) of the DSG model: the two third-order conserved charges and the two fifth-order asymptotically conserved charges in the pseudo-potential approach, and the first four anomalies of the new towers of charges, respectively. We consider kink-kink, kink-antikink and breather configurations for the Bazeia et al. potential \(V_q(w) = \frac{64}{q^2} \tan^2 \frac{w}{2}(1 - | \sin \frac{w}{2} |^q)^2\) \((q \in \mathbb{R})\), which contains the usual SG potential \(V_2(w) = 2 [1 - \cos(2w)]\). The numerical simulations are performed using the 4th-order Runge-Kutta method supplied with non-reflecting boundary conditions. Four-dimensional Fano quiver flag zero loci 2021-11-25T18:46:10.358925Z "Kalashnikov, Elana" Summary: Quiver flag zero loci are subvarieties of quiver flag varieties cut out by sections of representation-theoretic vector bundles. We prove the Abelian/non-Abelian correspondence in this context: this allows us to compute genus zero Gromov-Witten invariants of quiver flag zero loci. We determine the ample cone of a quiver flag variety, and disprove a conjecture of Craw. In the appendices (which can be found in the electronic supplementary material), which are joint work with Tom Coates and Alexander Kasprzyk, we use these results to find four-dimensional Fano manifolds that occur as quiver flag zero loci in ambient spaces of dimension up to 8, and compute their quantum periods. In this way, we find at least 141 new four-dimensional Fano manifolds. Gauge transformations of spectral triples with twisted real structures 2021-11-25T18:46:10.358925Z "Magee, Adam M." "Dąbrowski, Ludwik" Summary: Twisted real structures are well-motivated as a way to implement the conformal transformation of a Dirac operator for a real spectral triple without needing to twist the noncommutative one-forms. We study the coupling of spectral triples with twisted real structures to gauge fields, adopting Morita equivalence via modules and bimodules as a guiding principle and paying special attention to modifications to the inner fluctuations of the Dirac operator.
In particular, we analyze the twisted first-order condition as a possible alternative to abandoning the first-order condition in order to go beyond the standard model and elaborate upon the special case of gauge transformations accordingly. Applying the formalism to a toy model, we argue that under certain physically motivated assumptions, the spectral triple based on the left-right symmetric algebra should reduce to that of the standard model of fundamental particles and interactions, as in the untwisted case. A new spectral analysis of stationary random Schrödinger operators 2021-11-25T18:46:10.358925Z "Duerinckx, Mitia" "Shirley, Christopher" The authors consider random Schrödinger operators of the form \[ -\Delta + \lambda V_{\omega} \] and the associated Schrödinger equation, where \(V_{\omega}\) is a realization of a stationary random potential \(V\). The regime under consideration here is \(0<\lambda \ll 1\). The main goal of the authors is to develop a spectral approach to describe the long-time behavior of the system beyond perturbative timescales by using ideas from Malliavin calculus, leading to rigorous Mourre-type results. In particular, the authors describe the dynamics by a fibered family of spectral perturbation problems. They then state a number of exact resonance conjectures which would require that Bloch waves exist as resonant modes. An approximate resonance result is obtained and the first spectral proof of the decay of time correlations on the kinetic timescale is also provided. Spinorial \(R\) operator and algebraic Bethe ansatz 2021-11-25T18:46:10.358925Z "Karakhanyan, D." "Kirschner, R." Summary: We propose a new approach to the spinor-spinor \(R\)-matrix with orthogonal and symplectic symmetry. Based on this approach and the fusion method we relate the spinor-vector and vector-vector monodromy matrices for quantum spin chains. We consider the explicit spinor \(R\) matrices of low rank orthogonal algebras and the corresponding \(RTT\) algebras. Coincidences with fundamental \(R\) matrices allow one to relate the algebraic Bethe ansatz for spinor and vector monodromy matrices. Field theory and \(\lambda\)-deformations: expanding around the identity 2021-11-25T18:46:10.358925Z "Georgiou, George" "Sfetsos, Konstantinos" Summary: We explore the structure of the \(\lambda\)-deformed \(\sigma\)-model action by setting up a perturbative expansion around the free field point corresponding to the identity group element. We include all field interaction terms up to sixth order. We compute the two- and three-point functions of current and primary field operators, their anomalous dimensions as well as the \(\beta\)-function for the \(\lambda\)-parameter. Our results are in complete agreement with those obtained previously using gravitational and/or CFT perturbative methods in conjunction with the non-perturbative symmetry, as well as with those obtained using methods exploiting the geometry defined in the space of couplings. The advantage of this approach is that all deformation effects are already encoded in the couplings of the interaction vertices and in the \(\lambda\)-dressed operators. Mocking the \(u\)-plane integral 2021-11-25T18:46:10.358925Z "Korpas, Georgios" "Manschot, Jan" "Moore, Gregory W." "Nidaiev, Iurii" The authors consider the topologically twisted counterpart of \(\mathcal{N} = 2\) supersymmetric Yang-Mills theory with gauge group SU(2) in the presence of arbitrary 't~Hooft flux [\textit{E. Witten}, Commun. Math.
Phys. 117, No. 3, 353--386 (1988; Zbl 0656.53078)]. The gauge group is broken to U(1) on the Coulomb branch. The Coulomb branch, also known as the \(u\)-plane, can be considered as a three-punctured sphere, where the punctures correspond to the weak coupling limit and the two strong coupling singularities. The authors consider compact four-manifolds \(M\) with \((b_1, b^+_2 ) = (0, 1)\) and without boundary. Here \(b_j\) denotes the \(j\)-th Betti number of \(M\), and \(b^+_2\) is the number of positive eigenvalues of the intersection form on two-cycles of \(M\). Complex four-manifolds with \(b^+_2= 1\) are well studied and classified by the Enriques-Kodaira classification reviewed in section 3.2 of this paper. The authors evaluate and analyze the \(u\)-plane contribution to the correlation functions, known as the \(u\)-plane integral, and show that it can be evaluated by integration by parts, leading to expressions in terms of mock modular forms for point observables, and Appell-Lerch sums for surface observables. In the absence of hypermultiplets, one is always free to consider the case where the principal SO(3) gauge bundle has a nontrivial 't~Hooft flux \(w_2\in H^2(M;\mathbb{Z}_2)\). The authors choose an integral lift \(\overline{w_2}\) (which is assumed to exist) such that \(\mu:=\frac{1}{2}\overline{w_2}\in H^2(M;\mathbb{R})\). The path integral over the Coulomb branch of Donaldson-Witten theory, denoted by \(\Phi^J_\mu\), is an integral over the infinite-dimensional field space, which reduces to a finite-dimensional integral over the zero modes [\textit{G. Moore} and \textit{E. Witten}, Adv. Theor. Math. Phys. 1, No. 2, 298--387 (1997; Zbl 0899.57021)]. Here \(J\in H^2(M,\mathbb{R})\) is the period point, which depends on the metric through the self-duality condition. The explicit expressions for the \(u\)-plane integrals are given in equation (5.44) for manifolds with odd intersection form and just point observables inserted, equation (5.65) for manifolds with odd intersection form and just surface observables inserted, and equation (5.84) for manifolds with even intersection form and just surface observables inserted. These expressions hold for a special choice of the metric, though the metric dependence only enters through the choice of period point \(J\). Using the expression for the wall-crossing formula in terms of indefinite theta functions, analogous mock modular forms relevant to other chambers can be obtained [\textit{L. Göttsche} and \textit{D. Zagier}, Sel. Math., New Ser. 4, No. 1, 69--115 (1998; Zbl 0924.57025)]. Using the expression for \(\Phi^J_\mu[\mathcal{O}]\) in terms of mock modular forms, one can address analytic properties of the correlators for \(b^+_2= 1\), analogously to the structural results for manifolds with \(b^+_2 > 1\) [\textit{P. B. Kronheimer} and \textit{T. S. Mrowka}, J. Differ. Geom. 41, No. 3, 573--734 (1995; Zbl 0842.57022)]. The authors study the asymptotic behavior of \(\Phi^J_\mu[u^\ell]\) for large \(\ell\) and find experimental evidence that \(\Phi^J_\mu[u^\ell]\sim 1/(\ell\log(\ell))\) for any four-manifold with \((b_1, b^+_2 ) = (0, 1)\). The asymptotic behavior of \(\Phi^J_\mu[u^\ell]\) suggests that \[ \Phi^J_\mu[e^{2pu}]=\sum_{\ell\ge0}(2p)^\ell\Phi^J_\mu[u^\ell]/\ell! \] is an entire function of \(p\) rather than a formal expansion. The authors find similar experimental evidence for the \(u\)-plane contribution to the exponentiated surface observable.
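A brief remark on why entirety is plausible (an observation of the reviewer spelling out the estimate, not an argument from the paper): if indeed \(|\Phi^J_\mu[u^\ell]|\leq C/(\ell\log(\ell))\) for large \(\ell\), then the coefficients are in particular bounded, so \[\Big|\sum_{\ell\geq 0}(2p)^\ell\,\Phi^J_\mu[u^\ell]/\ell!\Big|\;\leq\;C'\,e^{2|p|}\] for some constant \(C'\), and the series defining \(\Phi^J_\mu[e^{2pu}]\) converges for every \(p\).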
Closed superstring moduli tree-level two-point scattering amplitudes in type IIB orientifold on \(T^6/(Z_2 \times Z_2)\) 2021-11-25T18:46:10.358925Z "Aldi, Alice" "Firrotta, Maurizio" Summary: We reconsider the two-point string scattering amplitudes of massless Neveu-Schwarz-Neveu-Schwarz states of Type IIB orientifold superstring theory on the disk and projective plane in ten dimensions and analyse their \(\alpha^\prime\) expansion. We also discuss the unoriented Type IIB theory on \(T^6 / \mathbb{Z}_2 \times \mathbb{Z}_2\) where two-point string scattering amplitudes of the complex Kähler moduli and complex structures of the untwisted sector are computed on the disk and projective plane. New results are obtained together with known ones. Finally, we compare string scattering amplitude results at \({\alpha^{\prime}}^2\)-order with the (curvature)\(^2\) terms in the low energy effective action of D-branes and \(\Omega \)-planes in both cases. Entropic \(c\)-functions in \(T \overline{T}\), \(J \overline{T}\), \(T \overline{J}\) deformations 2021-11-25T18:46:10.358925Z "Asrat, Meseret" Summary: We study the holographic entanglement entropy of an interval in a quantum field theory obtained by deforming a holographic two-dimensional conformal field theory via a general linear combination of irrelevant operators that are closely related to, but nonetheless distinct from, \(T \overline{T}\), \(J \overline{T}\) and \(T \overline{J} \), and compute the Casini-Huerta entropic \(c\)-function. In the ultraviolet, for a particular combination of the deformation parameters, we find that the leading-order dependence of the entanglement entropy on the length of the interval is given by a square-root rather than a logarithmic term. Such power-law dependence of the entanglement entropy on the interval length is quite peculiar and interesting. We also find that the entropic \(c\)-function is ultraviolet-regulator independent, and along the renormalization group upflow towards the ultraviolet, it is non-decreasing. We show that in the ultraviolet the entropic \(c\)-function exhibits a power-law divergence as the interval length approaches a minimum finite value determined in terms of the deformation parameters. This value sets the non-locality scale of the theory. Quantization of Harer-Zagier formulas 2021-11-25T18:46:10.358925Z "Morozov, A." "Popolitov, A." "Shakirov, Sh." Summary: We derive the analogues of the Harer-Zagier formulas for single- and double-trace correlators in the \(q\)-deformed Hermitian Gaussian matrix model. This fully describes single-trace correlators and opens a road to \(q\)-deformations of important matrix model properties, such as genus expansion and Wick theorem. Deformations, renormgroup, symmetries, AdS/CFT 2021-11-25T18:46:10.358925Z "Mikhailov, Andrei" Summary: We consider the deformations of a supersymmetric quantum field theory by adding spacetime-dependent terms to the action. We propose to describe the renormalization of such deformations in terms of some cohomological invariants, a class of solutions of a Maurer-Cartan equation. We consider the strongly coupled limit of \(N = 4\) supersymmetric Yang-Mills theory.
In the context of the AdS/CFT correspondence, we explain what corresponds to our invariants in classical supergravity. There is a leg amputation procedure, which constructs a solution of the Maurer-Cartan equation from tree diagrams of SUGRA. We consider a particular example of the beta-deformation. It is known that the leading term of the beta-function is cubic in the parameter of the beta-deformation. We give a cohomological interpretation of this leading term. We conjecture that it is actually encoded in some simpler cohomology class, which is quadratic in the parameter of the beta-deformation. \(O(d,d)\) transformations preserve classical integrability 2021-11-25T18:46:10.358925Z "Orlando, Domenico" "Reffert, Susanne" "Sekiguchi, Yuta" "Yoshida, Kentaroh" Summary: In this note, we study the action of \(O(d, d)\) transformations on the integrable structure of two-dimensional non-linear sigma models via the doubled formalism. We construct the Lax pairs associated with the \(O(d, d)\)-transformed model and find that they are in general non-local because they depend on the winding modes. We conclude that every \(O(d, d; \mathbb{R})\) deformation preserves integrability. As an application we compute the Lax pairs for continuous families of deformations, such as \(J \bar{J}\) marginal deformations and TsT transformations of the three-sphere with \(H\)-flux. Time-space noncommutativity and Casimir effect 2021-11-25T18:46:10.358925Z "Harikumar, E." "Panja, Suman Kumar" "Rajagopal, Vishnu" Summary: We show that the Casimir force and energy are modified in the \(\kappa\)-deformed space-time. This is shown by solving the Green's function corresponding to the \(\kappa\)-deformed scalar field equation in the presence of two parallel plates, modelled by \(\delta\)-function potentials. Exploiting the relation between the energy-momentum tensor and the Green's function, we calculate corrections to the Casimir force, valid up to second order in the deformation parameter. The Casimir force is shown to get corrections which scale as \(L^{- 4}\) and \(L^{- 6}\), and both these types of corrections produce attractive forces. Using the measured value of the Casimir force, we show that the deformation parameter should be below \(10^{-23}\) m. Defects, nested instantons and comet-shaped quivers 2021-11-25T18:46:10.358925Z "Bonelli, G." "Fasola, N." "Tanzini, A." In four-dimensional supersymmetric gauge theories, surface defects are real codimension-two submanifolds where a specific reduction of the gauge connection takes place. They were pioneered by \textit{G. 't Hooft} [Nucl. Phys. B 153, 141--160 (1979; \url{doi:10.1016/0550-3213(79)90595-9})] in the classification of the phases of gauge theories and introduced in mathematics by \textit{P. B. Kronheimer} and \textit{T. S. Mrowka} [Topology 34, No. 1, 37--97 (1995; Zbl 0832.57011)] in the study of Donaldson invariants. In this paper, the authors introduce and study surface defects supporting nested instantons with respect to the parabolic reduction of the gauge group at the defect. These defects are engineered from a D7/D3 brane system on a local compact complex surface. Specifically, they consider the product \(S=T^2\times C\) of a 2-torus \(T^2\) and a Riemann surface \(C\) with punctures \(\{p_i\}\) and embed it into the CY 5-fold \(T^2\times T^*C\times \mathbb{C}^2\). Here surface operators are located at \(T^2\times \{p_i\}\). Supersymmetric partition functions of these systems provide conjectural formulae for virtual invariants of the moduli spaces.
Mathematically, they obtain for a single D7-brane conjectural explicit formulae for the virtual equivariant elliptic genus of a certain bundle over the moduli space of the nested Hilbert scheme of points on the affine plane. Connections to Vafa-Witten theory and Donaldson-Thomas theory of CY 4-folds are also mentioned. Nilpotence varieties 2021-11-25T18:46:10.358925Z "Eager, Richard" "Saberi, Ingmar" "Walcher, Johannes" Algebraic varieties coming from orbits of nilpotent elements of Lie algebras is an important subject, which has also recently found deep connections with theoretical physics. The current paper is a wonderful monograph on this subject, focusing on varieties canonically associated with any Lie superalgebra, and treating them from a systematic and new perspective of natural moduli spaces parameterizing twists of a super-Poincare-invariant physical theory. Towards an axiomatic formulation of noncommutative quantum field theory. II. 2021-11-25T18:46:10.358925Z "Chaichian, M." "Mnatsakanova, M. N." "Vernov, Yu. S." Summary: Classical results of the axiomatic quantum field theory -- irreducibility of the set of field operators, Reeh and Schlieder's theorems and generalized Haag's theorem are proven in \(SO(1, 1)\) invariant quantum field theory, of which an important example is noncommutative quantum field theory. In \(SO(1, 3)\) invariant theory new consequences of generalized Haag's theorem are obtained. It has been proven that the equality of four-point Wightman functions in two theories leads to the equality of elastic scattering amplitudes and thus the total cross-sections in these theories. For Part I, see [the first author et al., J. Math. Phys. 52, No. 3, 032303, 13 p. (2011; Zbl 1315.81096)]. Refined scattering diagrams and theta functions from asymptotic analysis of Maurer-Cartan equations 2021-11-25T18:46:10.358925Z "Leung, Naichung Conan" "Ma, Ziming Nikolas" "Young, Matthew B." The topic of the article arises to the reconstruction problem in [\textit{A. Strominger} et al., Nucl. Phys., B 479, No. 1--2, 243--259 (1996; Zbl 0896.14024)] mirror symmetry. Previously the notion of a scattering diagramm was investigated by [\textit{M. Kontsevich} and \textit{Y. Soibelman}, Prog. Math. 244, 321--385 (2006; Zbl 1114.14027)]. The authors develop an asymptotic analytic approach to the study of scattering diagrams. They investigate the asymptotic behavior of Maurer-Cartan elements of a (dg) Lie algebra. The authors found an alternative geometric differential approach to the proofs of the consistent completion of the scattering diagrams, previously investigated by Kontsevich-Soibelman [loc. cit.], \textit{M. Gross} and \textit{B. Siebert} [Ann. Math. (2) 174, No. 3, 1301--1428 (2011; Zbl 1266.53074)] and \textit{T. Bridgeland} [Algebr. Geom. 4, No. 5, 523--561 (2017; Zbl 1388.16013)]. The paper under reviewing deals with the geometric interpretation of theta-functions, their wall-crossing, which allow to give a combinatorial description of Hall algebra theta functions for acyclic quivers with nondegenerate skew- symmetric Euler functions Dynamical Majorana neutrino masses and axions. I. 2021-11-25T18:46:10.358925Z "Alexandre, Jean" "Mavromatos, Nick E." "Soto, Alex" Summary: We discuss dynamical mass generation for fermions and (string-inspired) pseudoscalar fields (axion-like particles (ALP)), in the context of effective theories containing Yukawa type interactions between the fermions and ALPs. 
We discuss both Hermitian and non-Hermitian Yukawa interactions, which are motivated in the context of some scenarios for radiative (anomalous) Majorana sterile neutrino masses in some effective field theories. The latter contain shift-symmetry breaking Yukawa interactions between sterile neutrinos and ALPs. Our models serve as prototypes for discussing dynamical mass generation in string-inspired theories, where ALP potentials are absent in any order of string perturbation theory, and could only be generated by stringy non perturbative effects, such as instantons. The latter are also thought of as being responsible for generating our Yukawa interactions, which are thus characterised by very small couplings. We show that, for a Hermitian Yukawa interaction, there is no (pseudo)scalar dynamical mass generation, but there is fermion dynamical mass generation, provided one adds a bare (pseudo)scalar mass. The situation is opposite for an anti-Hermitian Yukawa model: there is (pseudo)scalar dynamical mass generation, but no fermion dynamical mass generation. In the presence of additional attractive four-fermion interactions, dynamical fermion mass generation can occur in these models, under appropriate conditions and range of their couplings. One residue to rule them all: electroweak symmetry breaking, inflation and field-space geometry 2021-11-25T18:46:10.358925Z "Karananas, Georgios K." "Michel, Marco" "Rubio, Javier" Summary: We point out that the successful generation of the electroweak scale via gravitational instanton configurations in certain scalar-tensor theories can be viewed as the aftermath of a simple requirement: the existence of a quadratic pole with a sufficiently small residue in the Einstein-frame kinetic term for the Higgs field. In some cases, the inflationary dynamics may also be controlled by this residue and therefore related to the Fermi-to-Planck mass ratio, up to possible uncertainties associated with the instanton regularization. We present here a unified framework for this hierarchy generation mechanism, showing that the aforementioned residue can be associated with the curvature of the Einstein-frame target manifold in models displaying spontaneous breaking of dilatations. Our findings are illustrated through examples previously considered in the literature. Two anyons on the sphere: nonlinear states and spectrum 2021-11-25T18:46:10.358925Z "Polychronakos, Alexios P." "Ouvry, Stéphane" Summary: We study the energy spectrum of two anyons on the sphere in a constant magnetic field. Making use of rotational invariance we reduce the energy eigenvalue equation to a system of linear differential equations for functions of a single variable, a reduction analogous to separating center of mass and relative coordinates on the plane. We solve these equations by a generalization of the Frobenius method and derive numerical results for the energies of non-analytically derivable states. Multitwistor mechanics of massless superparticle on \(AdS_5 \times S^5\) superbackground 2021-11-25T18:46:10.358925Z "Uvarov, D. V." Summary: Supertwistors relevant to \(AdS_5 \times S^5\) superbackground of IIB supergravity are studied in the framework of the \(D = 10\) massless superparticle model in the first-order formulation. 
Product structure of the background suggests using \(D = 1 + 4\) Lorentz-harmonic variables to express momentum components tangent to \(A d S_5\) and \(D = 5\) harmonics to express momentum components tangent to \(S^5\) that yields eight-supertwistor formulation of the superparticle's Lagrangian. We find incidence relations of the supertwistors with the \(AdS_5 \times S^5\) superspace coordinates and the set of the quadratic constraints that supertwistors satisfy. It is shown how using the constraints for the (Lorentz-)harmonic variables it is possible to reduce eight-supertwistor formulation to the four-supertwistor one. Respective supertwistors agree with those introduced previously in other models. Advantage of the four-supertwistor formulation is the presence only of the first-class constraints that facilitates analysis of the superparticle model. \(D_k\) gravitational instantons as superpositions of Atiyah-Hitchin and Taub-NUT geometries 2021-11-25T18:46:10.358925Z "Schroers, B. J." "Singer, M. A." Summary: We obtain \(D_k\) ALF gravitational instantons by a gluing construction which captures, in a precise and explicit fashion, their interpretation as nonlinear superpositions of the moduli space of centred \(SU(2)\) monopoles, equipped with the Atiyah-Hitchin metric, and \(k\) copies of the Taub-NUT manifold. The construction proceeds from a finite set of points in euclidean space, reflection symmetric about the origin, and depends on an adiabatic parameter which is incorporated into the geometry as a fifth dimension. Using a formulation in terms of hyperKähler triples on manifolds with boundaries, we show that the constituent Atiyah-Hitchin and Taub-NUT geometries arise as boundary components of the five-dimensional geometry as the adiabatic parameter is taken to zero. Holographic correlators with multi-particle states 2021-11-25T18:46:10.358925Z "Čeplak, Nejc" "Giusto, Stefano" "Hughes, Marcel R. R." "Russo, Rodolfo" Summary: We derive the connected tree-level part of 4-point holographic correlators in \(\mathrm{AdS}_3 \times S^3 \times \mathcal{M} \) (where \(\mathcal{M}\) is \(T^4\) or \(K3\)) involving two multi-trace and two single-trace operators. These connected correlators are obtained by studying a heavy-heavy-light-light correlation function in the formal limit where the heavy operators become light. These results provide a window into higher-point holographic correlators of single-particle operators. We find that the correlators involving multi-trace operators are compactly written in terms of Bloch-Wigner-Ramakrishnan functions --- particular linear combinations of higher-order polylogarithm functions. Several consistency checks of the derived expressions are performed in various OPE channels. We also extract the anomalous dimensions and 3-point couplings of the non-BPS double-trace operators of lowest twist at order \(1/c \) and find some positive anomalous dimensions at spin zero and two in the K3 case. Genus zero Gopakumar-Vafa invariants from open strings 2021-11-25T18:46:10.358925Z "Collinucci, Andrés" "Sangiovanni, Andrea" "Valandro, Roberto" Summary: We propose a new way to compute the genus zero Gopakumar-Vafa invariants for two families of non-toric non-compact Calabi-Yau threefolds that admit simple flops: Reid's Pagodas, and Laufer's examples. We exploit the duality between M-theory on these threefolds, and IIA string theory with D6-branes and O6-planes. From this perspective, the GV invariants are detected as five-dimensional open string zero modes. 
We propose a definition for genus zero GV invariants for threefolds that do not admit small crepant resolutions. We find that in most cases, non-geometric T-brane data is required in order to fully specify the invariants. Wrapped brane solutions in Romans \(F(4)\) gauged supergravity 2021-11-25T18:46:10.358925Z "Kim, Nakwoo" "Shim, Myungbo" Summary: We explore the spectrum of lower-dimensional anti-de Sitter (AdS) solutions in \(F(4)\) gauged supergravity in six dimensions. The ansatz employed corresponds to D4-branes partially wrapped on various supersymmetric cycles in special holonomy manifolds. Re-visiting and extending previous results, we study the cases of two, three, and four-dimensional supersymmetric cycles within Calabi-Yau threefold, fourfold, \(G_2\), and Spin(7) holonomy manifolds. We also report on non-supersymmetric AdS vacua, and check their stability in the consistently truncated lower-dimensional effective action, using the Breitenlohner-Freedman bound. We also analyze the IR behavior and discuss the admissibility of singular flows. Symmetry adapted Gram spectrahedra 2021-11-25T18:46:10.358925Z "Heaton, Alexander" "Hoşten, Serkan" "Shankar, Isabelle" Efficient message transmission via twisted Edwards curves 2021-11-25T18:46:10.358925Z "Kırlar, Barış Bülent" Summary: In this paper, we suggest a novel public key scheme by incorporating the twisted Edwards model of elliptic curves. The security of the proposed encryption scheme depends on the hardness of solving elliptic curve version of discrete logarithm problem and Diffie-Hellman problem. It then ensures secure message transmission by having the property of one-wayness, indistinguishability under chosen-plaintext attack (IND-CPA) and indistinguishability under chosen-ciphertext attack (IND-CCA). Moreover, we introduce a variant of Nyberg-Rueppel digital signature algorithm with message recovery using the proposed encryption scheme and give some countermeasures to resist some wellknown forgery attacks. Cryptanalysis of a code-based full-time signature 2021-11-25T18:46:10.358925Z "Aragon, Nicolas" "Baldi, Marco" "Deneuville, Jean-Christophe" "Khathuria, Karan" "Persichetti, Edoardo" "Santini, Paolo" Summary: We present an attack against a code-based signature scheme based on the Lyubashevsky protocol that was recently proposed by \textit{Y. Song} et al. [Theor. Comput. Sci. 835, 15--30 (2020; Zbl 1457.94222)]. The private key in the SHMWW scheme contains columns coming in part from an identity matrix and in part from a random matrix. The existence of two types of columns leads to a strong bias in the distribution of set bits in produced signatures. Our attack exploits such a bias to recover the private key from a bunch of collected signatures. We provide a theoretical analysis of the attack along with experimental evaluations, and we show that as few as 10 signatures are enough to be collected for successfully recovering the private key. As for previous attempts of adapting Lyubashevsky's protocol to the case of code-based cryptography, the SHMWW scheme is thus proved unable to provide acceptable security. This confirms that devising secure code-based signature schemes with efficiency comparable to that of other post-quantum solutions (e.g., based on lattices) is still a challenging task. On the existence and construction of maximum distance profile convolutional codes 2021-11-25T18:46:10.358925Z "Muñoz Castañeda, Ángel Luis" "Plaza-Martín, Francisco J." 
Summary: In this paper, we study the conditions for a convolutional code to be MDP in terms of the size of the base field \(\mathbb{F}_q\) as well as the openness of the MDP property in a given family of convolutional codes. Given \((n,k,\delta)\), our main result is an explicit bound depending on \((n,k,\delta)\) such that if \(q\) is greater than this bound, there exists a \((n,k,\delta)\) MDP convolutional code. A similar result is also offered for complete MDP convolutional codes. We show that these bounds are much lower than that those appeared so far in the literature. Finally, we show an explicit and simple construction procedure for MDP convolutional Goppa codes of dimension one.
Matter and Energy: A False Dichotomy

Matt Strassler [April 12, 2012]

It is common that, when reading about the universe or about particle physics, one will come across a phrase that somehow refers to "matter and energy", as though they are opposites, or partners, or two sides of a coin, or the two classes out of which everything is made. This comes up in many contexts. Sometimes one sees poetic language describing the Big Bang as the creation of all the "matter and energy" in the universe. One reads of "matter and anti-matter annihilating into 'pure' energy." And of course two of the great mysteries of astronomy are "dark matter" and "dark energy".

As a scientist and science writer, I cringe a bit at this phraseology, not because it is deeply wrong, but because such loose talk is misleading to non-scientists. It doesn't matter much for physicists; these poetic phrases are just referring to something sharply defined in the math or in experiments, and the ambiguous wording is shorthand for longer, unambiguous phrases. But it's dreadfully confusing for the non-expert, because in each of these contexts a different definition of 'matter' is being used, and a different meaning of 'energy' — in some cases an archaic or even incorrect one — is employed. And each of these ways of speaking implies that either things are matter or they are energy — which is false. In reality, matter and energy don't even belong to the same categories; it is like referring to apples and orangutans, or to heaven and earthworms, or to birds and beach balls.

On this website I try to be more precise, in order to help the reader avoid the confusions that arise from this way of speaking. Admittedly I'm only partly successful, as I'll mention below.

Summing Up

This article is long, but I hope it is illuminating and informative for those of you who want details. Let me give you a summary of the lessons it contains:

• Matter and Energy really aren't in the same class and shouldn't be paired in one's mind.
• Matter, in fact, is an ambiguous term; there are several different definitions used both in the scientific literature and in public discourse. Each definition selects a certain subset of the particles of nature, for different reasons. Consumer beware! Matter is always some kind of stuff, but which stuff depends on context.
• Energy is not ambiguous (not within physics, anyway). But energy is not itself stuff; it is something that all stuff has.
• The term Dark Energy confuses the issue, since it isn't (just) energy after all. It also really isn't stuff; certain kinds of stuff can be responsible for its presence, though we don't know the details.
• Photons should not be called 'energy', or 'pure energy', or anything similar. All particles are ripples in fields and have energy; photons are not special in this regard. Photons are stuff; energy is not.
• The stuff of the universe is all made from fields (the basic ingredients of the universe) and their particles. At least this is the post-1973 viewpoint.

What's the Matter (and the Energy)?

First, let's define (or fail to define) our terms.

The word Matter. "Matter" as a term is terribly ambiguous; there isn't a universal definition that is context-independent.
There are at least three possible definitions that are used in various places:

• "Matter" can refer to atoms, the basic building blocks of what we think of as "material": tables, air, rocks, skin, orange juice — and by extension, to the particles out of which atoms are made, including electrons and the protons and neutrons that make up the nucleus of an atom.

• OR it can refer to what are sometimes called the elementary "matter particles" of nature: electrons, muons, taus, the three types of neutrinos, the six types of quarks — all of the types of particles which are not the force particles (the photon, gluons, graviton and the W and Z particles). Read here about the known apparently-elementary particles of nature. [The Higgs particle, by the way, doesn't neatly fit into the classification of particles as matter particles and force particles, which was somewhat artificial to start with; I have a whole section about this classification below.]

• OR it can refer to classes of particles that are found out there, in the wider universe, and that on average move much more slowly than the speed of light.

With any of these definitions, electrons are matter (although with the third definition they were not matter very early in the universe's history, when it was much hotter than it is today). With the second definition, muons are matter too, and so are neutrinos, even though they aren't constituents of ordinary material. With the third definition, some neutrinos may or may not be matter, and dark matter is definitely matter, even if it turns out to be made from a new type of force particle. I'm really sorry this is so confusing, but you've no choice but to be aware of these different usages if you want to know what "matter" means in different people's books and articles.

Now, what about the word Energy? Fortunately, energy (as physicists use it) is a well-defined concept that everyone in physics agrees on. Unfortunately, the word in English has so many meanings that it is very easy to become confused about what physicists mean by it. I've described the various forms of energy that arise in physics in more detail in an article on mass and energy. But for the moment, suffice it to say that energy is not itself an object. An atom is an object; energy is not. Energy is something which objects, and groups of objects, can have — a property of objects that characterizes their behavior and their relationships to one another. [Though it should be noted that different observers will assign different amounts of energy to a given object — a tricky point that is illustrated carefully in the above-mentioned article on mass and energy.]

And for this article, all we really need to know is that particles moving on their own through space can have two types of energy: mass-energy (i.e., energy of the E = mc² type, which does not depend on whether and how a particle moves) and motion-energy (energy that is zero if a particle is stationary and becomes larger as a particle moves faster).

Annihilation of Particles and Antiparticles Isn't Matter Turning Into Energy

Let's first examine the notion that "matter and anti-matter annihilate to pure energy." This, simply put, isn't true, for several reasons. In the green paragraphs above, I gave you three different common definitions of "matter." In the context of annihilation of particles and anti-particles, speakers may be referring to either the first definition or the second.
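Before looking at the details, it may help to see the energy bookkeeping in the simplest case. Here is a minimal numerical sketch in Python, using rounded reference values, of an electron and a positron essentially at rest annihilating into two photons:

# A minimal sketch of the energy bookkeeping when an electron and a positron,
# both essentially at rest, annihilate into two photons.
# (Rounded reference values; "at rest" is an idealization.)
m_e = 9.109e-31   # electron mass in kg (the positron's mass is the same)
c   = 2.998e8     # speed of light in m/s
eV  = 1.602e-19   # joules per electron-volt

mass_energy = m_e * c**2             # E = mc^2 for one particle at rest
total_energy_in = 2 * mass_energy    # no motion-energy in this idealized case
energy_per_photon = total_energy_in / 2

print(energy_per_photon / eV)        # about 5.11e5 eV, i.e. 0.511 MeV per photon

The arithmetic makes the key point: the photons simply carry away the energy the electron and positron already had; nothing turns "into" energy.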
Here I want to discuss the annihilation of electrons and anti-electrons (or "positrons"), or the annihilation of muons and anti-muons. I've described this in detail in an article on Particle/Anti-Particle Annihilation. You'll need it to understand what I say next, so I'm going to assume that you have read it. Once you've done that, you're ready to try to understand where the (false) notion that matter and antimatter annihilate into pure energy comes from.

What is meant by "pure energy"? This is almost always used in reference to photons, commonly in the context of an electron and a positron (or some other massive particle and anti-particle) annihilating to make two photons (recall the antiparticle of a photon is also a photon). But it's a terrible thing to do. Energy is something that photons have; it is not what photons are. [I have height and weight; that does not mean I am height and weight.] The term "pure energy" is a mix of poetry, shorthand and garbage. Since photons have no mass, they have no mass-energy, and that means their energy is "purely motion-energy". But that does not mean the same thing, either in physics or intuitively to the non-expert, as saying photons are "pure energy". Photons are particles just as electrons are particles; they both are ripples in a corresponding field, and they both have energy. The electron and positron that annihilated had energy too — the same amount of energy as the photons to which they annihilate, in fact, since energy is conserved (i.e. the total amount does not change during the annihilation process). (See Figure 3 of the particle/anti-particle annihilation article.)

Moreover (see Figures 1 and 2 of the particle/anti-particle annihilation article), the process muon + anti-muon → two photons is on exactly the same footing, and occurs with almost exactly the same probability, as the process muon + anti-muon → electron + positron — which is matter and anti-matter annihilating into another type of matter and anti-matter. So no matter how you want to express this, it is certainly not true that matter and anti-matter always annihilate into anything you might even loosely call 'energy'; there are other possibilities.

For these reasons I don't use the "matter and energy" language on this website when speaking about annihilation. I just call this type of process what it is:

• particle 1 + anti-particle 1 → particle 2 + anti-particle 2

With this plain-spoken terminology it is clear why a muon and anti-muon annihilating to two photons, or to an electron and a positron, or to a neutrino and an anti-neutrino, are all on the same footing. They are all the same class of process. And we need not make distinctions that don't really exist and that obscure the universality of particle/anti-particle annihilation.

Not Everything is Matter or Energy, By a Long Shot

Why do people sometimes talk about "matter and energy" as though everything is either matter or energy? I don't know the context in which this expression was invented. Maybe one of my readers knows? Language reflects history, and often reacts slowly to new information. Part of the problem is that enormous changes in physicists' conception of the world and its ingredients occurred between 1900 and 1980. This has mostly stopped for now; it's been remarkably stable throughout my career. [String theorists might argue with what I've just said, pointing out that their great breakthroughs occurred during the 1980s and 1990s.
That's true, but since string theory hasn't yet established itself as reality through experimental verification, one cannot say that it has yet been incorporated into our conception of the world.]

Our current conception of the physical world is shaped by a wide variety of experiments and discoveries that occurred during the 1950s, 1960s and 1970s. But previous ways of thinking and talking about particle physics partially stuck around even as late as the 1980s and 1990s, while I was being trained as a young scientist. This isn't surprising; it takes a while for people who grew up with an older vision to come around to a new prevailing point of view, and some never do. And it also takes a while for a newer vision to come into sharp focus, and for little niggling problems with it to be resolved.

Today, if one wants to talk about the world in the context of our modern viewpoint, one can speak first and foremost of the "fields and their particles." It is the fields that are the basic ingredients of the world, in today's widely dominant paradigm. We view fields as more fundamental than particles because you can't have an elementary particle without a field, but you can have a field without any particles. [I still owe you a proper article about fields and particles; it's high on the list of needed contributions to this website.] However, it happens that every known field has a known particle, except possibly the Higgs field (whose particle is not yet certain to exist, though [as of the time of writing, spring 2012] there are significant experimental hints).

What do "fields and particles" have to do with "matter and energy"? Not much. Some fields and particles are what you would call "matter", but which ones are matter, and which ones aren't, depends on which definition of "matter" you are using. Meanwhile, all fields and particles can have energy; but none of them are energy.

Matter Particles and Force Particles — Well…

On this website, I've divided the known particles up into "matter particles" and "force particles". I wasn't entirely happy doing this, because it's a bit arbitrary. This division works for now; the force particles and their anti-particles are associated with the four forces of nature that we know so far, and the matter particles and their anti-particles are all of the others. And there are many situations in which this division is convenient. But at the Large Hadron Collider [LHC] we could easily discover particles that don't fit into this categorization; even the Higgs particle poses a bit of a problem, because it arguably is in neither class.

There's an alternate (but very different) division that makes sense: what I called matter particles all happen to be fermions, and what I called force particles all happen to be bosons. But this could change too with new discoveries. What this really comes down to is that all the particles of nature are simply particles, some of which are each other's anti-particles, and there isn't a unique way to divide them up into classes. The reason I used "matter" and "force" is that this is a little less abstract-sounding than "fermions" and "bosons" — but I may come to regret my choice, because we might discover particles at the LHC, or elsewhere, that break this distinction down.

Matter and Energy in the Universe

Another place we encounter words of this type is in the history and properties of the cosmos as a whole. We read about matter, radiation, dark matter, and dark energy.
The use of the words by cosmologists is quite different from what you might expect — and it actually involves two or three different meanings, and depends strongly on context.

Matter vs. Anti-Matter: when you hear people talk this way, they're talking about the first definition within the green paragraphs above. They are typically referring to the imbalance of matter over anti-matter in our universe — the fact that the particles that make up ordinary material (electrons, protons and neutrons in particular) are much more abundant than any of their anti-particles.

Matter vs. Radiation: if you hear this distinction, you're dealing with the third definition of 'matter'. The universe has a temperature; it was very hot early on and has been gradually cooling, and is now 2.7 degrees above absolute zero. If you have a gas (or plasma) of particles at a given temperature T, and you measure the energies of these particles, you will find that the average motion-energy per particle is given by kT, where k is Boltzmann's famous constant. Now matter, in this context, is any particle whose mass-energy mc² is large compared to this average motion-energy kT; such particles will have velocity much slower than the speed of light. And radiation is any particle whose mass-energy is small compared to kT, and is consequently moving close to the speed of light.

Notice what this means. In this context, what is matter, and what is not, is temperature-dependent and therefore time-dependent! Early in the universe, when the temperature was trillions of degrees and even hotter, the electron was what cosmologists consider radiation. Today, with the universe much cooler, the electron is in the category of matter. In the present universe at least two of the three types of neutrinos are matter, and maybe all three, by this definition; but all the neutrinos were radiation early in the universe. Photons have always been and will always be radiation, since they are massless.

What is Dark Matter? We can tell, from studying the motions of stars and from other techniques, that most of the mass of a galaxy comes from something that doesn't shine, and lots of hard work has been done to prove that known particles behaving in ordinary ways cannot be responsible. To explain this effect, various speculations have been proposed, and many have been shown (through observation of how galaxies look and behave, typically) to be wrong. Of the survivors, one of the leading contenders is that dark matter is made from heavy particles of an unknown type. But we don't know much more than that as yet. Experiments may soon bring us new insights, though this is not guaranteed. [Note also that there may not be any meaning to dark anti-matter; the particles of dark matter, like photons and Z particles, may well be their own anti-particles.]

And Dark Energy? It was recently discovered that the universe is expanding faster and faster, not slower and slower as was the case when it was younger. What is presumably responsible is called "dark energy", but unfortunately, it's actually not energy. As my colleague Sean Carroll is fond of saying, it is tension, not energy — a combination of pressure and energy density. So why do people call it "energy"? Part of it is public relations. Dark energy sounds cool; dark tension sounds weird, as does any other word you can think of that is vaguely appropriate. At some level this is harmless.
Scientists know exactly what is being referred to, so this terminology causes no problem on the technical side; most of the public doesn't care exactly what is being referred to, so arguably there's no big problem on the non-technical side. But if you really want to know what's going on, it's important to know that dark energy isn't a dark form of energy, but something more subtle. Moreover, like energy, dark energy isn't an object or set of objects, but a property that fields, or combinations of fields, or space-time itself can have. We don't yet know what is responsible for the dark energy whose presence we infer from the accelerating universe. And it may be quite a while before we do.

By the way, do you know what an astronomer means by "metals"? It's not what you think…

You might conclude from this article that modern physicists and their relatives have not been very inventive, creative, or careful with language. Apparently it's not our collective strong suit. Big Bang? Black Hole? The world's poets will never forgive us for choosing such dull names for such fantastic things….

294 thoughts on "Matter and Energy: A False Dichotomy"

1. Thanks again for an excellent review. Even though most of it is known in bits and pieces by us lay readers, this clearly and coherently explains the intricacies of the commonly misused terms. You write that all fields have corresponding particles and that only the Higgs field and its particle are in doubt, though the LHC has tantalizing clues of its presence. Wouldn't the gravitational field be something for which no known particle exists, even though one is hypothesized?

2. "Energy is something which objects can have". I can't say I'm happy with this. The energy of an object, in classical or SR models, at least depends on the frame of reference. I don't see the Hamiltonian as the generator of time-like translations anywhere here, which at least deserves a mention if we move to QM models. Energy in GR is different again. If we move to signal processing, time-series analysis, we can define an energy of a signal mathematically, but that may not be reducible to phase space concepts (this is arguably engineering, but physicists do use such concepts in some measurement contexts). I don't see energy to be as settled in physics as you suggest.

• I think I understand your first point. Energy is a property of an object (or system) but the amount depends on the observer. I emphasized this in my mass and energy article, but not here; based on your comment I've added a remark to that effect in the text. Your second point about the Hamiltonian: this technical point is not appropriate for the main readers of this article. The fact that energy conservation is related to the time-independence of physical law has been covered on this site elsewhere; this particular article wasn't the place for it. The notions of Lagrangian and Hamiltonian are above the assumed knowledge of my readership. Energy in general relativity is an advanced subject, again beyond the scope of this article. Finally: engineering notions of energy in the context of signal processing are beyond the scope of this website. It never arises in any of my research or that of any of my immediate colleagues. I am always referring to that conserved quantity (even in general relativity) that follows from the (local, in GR) time-independence of physical laws.

• Just read your article. I think you are right about one thing: We still use language left over from another time.
I think you are guilty of this yourself when you use the word "particle." Sorry, there is no such thing as a particle. Everything is energy. Energy exists as a wave. A wave has momentum. Either that momentum is dedicated to travelling through space at some fraction of the speed of light, or it is turned in on itself as a standing wave to stay in one place as "mass." Even as mass, it is still a wave. That wave is still energy; it's just energy that stays in one place rather than travelling through space. When I put my hand on the tabletop, I'm not actually touching it. The electron waves of all the atoms on the table's surface repel the electron waves on the surface of my hand. There is no physical mass or particles touching, merely waves of negatively charged energy repulsing other negatively charged waves of energy. There is no such thing as a "particle." That description is outdated.

• An interesting way of explaining the outdated term "particle"… but isn't "mass" made of "particles"? How different is a particle from mass? Isn't a standing wave everywhere?

• No. The wave function is just a mathematical concept used to describe the momentum of a particle. It predicts the possibility of where one particle may exist in a single frame of time. It is not the same thing to say that the particle IS the momentum OR the mass. It doesn't make any sense. This is exactly what the writer is referring to. The use of words such as mass and energy, which are fairly consistent in this macrocosmic world of ours, fails to describe anything in the subatomic regions. The writer does not say any of this is outdated, but a lot of people who don't really understand the underlying mechanism try to hijack the bandwagon with vague mystical sayings such as 'Everything is Energy' and so forth. Energy describes how the particle functions; it is a property of a particle. It is not the same thing as the particle itself.

• Strassler made it pretty clear that fields, and not particles, are the fundamental ingredients of the world. He called particles "ripples" in these fields. You saying "everything is energy" leads me to believe you actually wanted to say "all things are fields".

• Hello. I am not a physicist at all. I came upon this article in a search because I could not accept the concept that energy and matter are separate entities, and it's what I keep reading over and over again: that matter has mass but energy doesn't. Then how can E=mc^2??? I realize it has been several years since this article was published online, but I'm still not satisfied with the explanation. How can matter and energy not "be" the same thing? One can't exist without the other. Where is the example of matter that has no "energy" associated with it? To me it's like saying that lung tissue is not "human" even though it contains all of the same DNA as every other cell in the body. The lung tissue is just expressing or taking on the function that it needs to do. Otherwise, how could stem cells be used for various purposes? I feel the same way about matter and energy. How can you accept E=mc^2 but not accept that matter and energy are just expressions of the same thing? Or like when you say that you "have" height and weight, but you are "not" height and weight. Then what exactly are you? You don't carry height and weight around as a separate entity in your hand that you can walk away from. It is you, just as the energy that animates our human body "is" each one of us.
I am in NO way a scientist, but these concepts have been driving me insane for years. I guess what I'm getting at, if you still access this site: Where is a book, an essay, a study I can turn to that provides some kind of proof? I would be interested to study these concepts further. Thank you.

• Even through Quantum Physics and the ToE we find that an 'unexplained' conscious 'force' is necessary for existence to occur; or something to that effect. The answers will only be found in the quantum realm and common sense. Every atom is built with frequency, energy and vibration, which is 'energy', regardless of how the textbooks warp the mind.

• Sarah, it has been a long time since you asked the questions here. I can answer one of them. You ask "How can matter and energy not 'be' the same thing? One can't exist without the other. Where is the example of matter that has no 'energy' associated with it?" This last question should be reversed. There is no matter that has no energy, but there is energy that is not associated with matter. Photons, for instance, have energy, but they are not matter by anyone's definition. The same is true for gravitational waves; they have energy, but I've never heard anyone suggest they should be called matter. Do not be confused by E=mc^2. First, "m" is mass, not matter, and not all matter has mass, either. Second, "E" does not refer to all possible forms of energy. E=mc^2 means only this: the rest mass m of a particle, if it has one, arises from energy E stored inside it.

• Hi Matt, I am also bothered by the same question as Sarah. I understand your distinction of matter from energy and, as I'm sure you're familiar with, the commonly surrendered definition of energy is "the ability to do work", so its treatment as a property is entirely sensible. The part that spins me for a loop — and I suspect this is why people commonly insist that everything IS energy — is that I don't know of anything that can exist without energy as a basic property. Even quantum fields have zero-point energy at their lowest-energy state. Then there's the idea that energy is that thing which is the particle-producing ripple in the field. So if: a) everything HAS energy as an irremovable and universal property, and b) things are defined (categorized and distinguished) by their properties, does it not follow that the essence of all things is energy? Even those conceptions which are themselves the absence of things (i.e. shadows, absolute zero) are defined by their quality of energy. Maybe I am failing to consider something. Is there anything that can exist without energy? Or is the problem really about the practical issues with using the concept of energy so broadly? Or am I misunderstanding a subtle idea very seriously? Really appreciate your feedback, and the article!

• Every human has age. We cannot imagine a human without it. Does that mean every human IS age? Every human has volume. All ordinary objects have volume, in fact. Does that mean the essence of objects is volume? We must not confuse properties that objects have — sometimes even necessary properties — with what those objects *are*. The fact that energy is a property of all objects and even non-objects reflects the fact that time is a fundamental property of the universe. It tells us nothing about the nature of those objects. Does this help? Nothing is made from energy, but everything has energy.
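To put a number on the earlier remark that the rest mass of a particle arises from energy stored inside it, here is a rough Python sketch, using approximate reference values for the proton and its quarks:

# A rough sketch of "rest mass arises from energy stored inside": almost none
# of the proton's rest mass comes from the rest masses of its quarks.
# (Approximate reference values, in MeV.)
proton_rest_energy = 938.3          # proton's mc^2
quark_rest_energy  = 2 * 2.2 + 4.7  # two up quarks plus one down quark

fraction = quark_rest_energy / proton_rest_energy
print(round(100 * fraction, 1))     # about 1 percent
# The other ~99% is internal energy: motion-energy of the quarks and
# energy of the gluon fields, stored inside the proton.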
3. Can we say that the building stuff of the cosmos is merely two types of vibrations — organized ripples and pseudo-ripples, which we call real or virtual particles — in something we call fields?

• No — you really don't want to approach it this way. The basic stuff is fields. Now you want to ask what fields can do and how that contributes to the stuff we see around us. You can't reduce that to two types.

• Is a field stuff, though? A field is more of an information matrix. Strictly speaking, you have a field of mass or a field of properties, but that doesn't mean the field itself is either mass or energy. I don't think we should give a definition to 'the basic stuff' just yet. I think we need to learn how to distinguish fields from mass-energy, and mass-energy from its carrier particles, for this discussion to have any merit. And why are we so intent on reducing everything to one thing anyway? What good does it even do? Even the scientific world sometimes…

• In my experience, a field is a mathematical construct, nothing more. Problems arise when one attempts to interpret a field as some sort of physical entity. This is not to say physical phenomena do not exist. Rather, fields are a mathematical device (the best we presently have) to describe physical reality at the micro level. It's simply a theoretical framework based on mathematics.

4. There is a contradiction here: 1- E is something mass has. 2- Mass is E/c^2. 3- E is something E/c^2 has!!??

• You're making some classic mistakes. First, #2 is false. Mass is only E/c^2 for a particle at rest and sitting on its own. [If you define m to be E/c^2, then you're using the archaic notion of "relativistic mass", which particle physicists avoid for several reasons (to be explained soon). And I'm not using "m" in this way anywhere on this website.] For a particle on its own, but moving, E = mc^2 + motion-energy. For a more general system of particles and fields, you also have to account for other contributions to the total energy that cannot be assigned to any one particle. Second, #1 is false. Energy is something that stuff can have. Mass is also something that stuff can have. Stuff is not mass; it's something that can have mass. But not necessarily. Some stuff has no mass — photons, for instance. And it is an accident that electrons are massive; remember that the Higgs field being non-zero on average is responsible for this. Electrons would still be stuff even if the Higgs field were zero on average and electrons were massless. [Experts: I know there's a tiny subtlety with this statement; let it pass. To make the above statement precise I should also turn off some other interactions when I turn off the Higgs field's value. But the substance of the remark is true.] Part of the point of this article (and my earlier particle-antiparticle annihilation and mass-and-energy articles) is that photons and electrons are both particles. They are both stuff. It happens that photons are massless and electrons are massive, so they behave quite differently. But the equations that govern them are very similar, and one should not think of electrons as stuff and photons as something else. Mass is something they may or may not have; energy is something they have too.

• E=mc^2… the energy content isn't determined by either. As you point out, energy is intrinsic to the stuff, aka it already possesses order; to reject the fact that photons have 0 rest mass, you assign them velocity to give them relative mass.
The problem is, if the universe is a closed system there is an objective standard & all frames of reference consolidate into one objective frame of reference. Relative speed then becomes an objective speed & the phase space oscillates in a uniform fashion (Higgs field). If it were not so, Higgs field tensor values would vary & we can only imagine the chaos that would ensue as the phenomena of matter would dissolve along with our physical existence.

5. You say that "matter is always some kind of *stuff*…" and "photons are *stuff*"; this would appear to mean that photons are matter (unless photons are *stuff* that is not matter, of course). Yet photons fit none of your green-paragraph definitions of "matter". I'm mindful that the whole thrust of the article is that matter is an ill-defined concept (and, as usual, I learned some things from it — thanks!); I am perhaps reinforcing that point by noting that your green-paragraph definitions are not yet enough.

• Photons are "stuff", but they are not matter — for almost every definition of "matter". Matter is a subset of stuff, though which subset depends on context. I do know of one or two contexts where photons would be called "matter" too — but these settings are ones that you won't come across often, and usually different terminology is used anyway. What I mean by "stuff", in general, needs a little more working out. A silly but useful working definition is that something is stuff if it can be used to damage other stuff. I can't damage your cells with mass or energy — I can't make an "energy beam" or something like that. I have to make a beam of photons or a beam of electrons or muons or protons or neutrinos. That "stuff" carries energy, sure, but the energy has to be carried by a physical object — a particle — stuff — for it to be able to do anything. (The particles of dark matter — whatever they turn out to be — are stuff too, though I'd need one heck of a beam of dark matter particles to damage anything!) Fields are stuff too: they can be used to pull other stuff apart. Maybe you can point out a flaw in that definition? A challenge to the reader…

• Space-time can be curved in ways that ripple 'big' stuff apart. Does that mean spacetime itself is a "stuff"? If that's the case, can stuff 'damage' spacetime?

• Sure. That's Einstein's point. You can make two black holes, made entirely from space-time curvature, and arrange for them to orbit each other. The two orbiting black holes form a system — a sort of "atom" — which can be stable for a fairly long time. A sufficiently powerful beam of photons, or electrons, could break that system of two black holes apart. It's not very different from disrupting an ordinary atom using an extremely powerful gravitational wave, which is also possible in principle. In the first example you would be using something which is obviously stuff to damage an object made entirely from curvature of space and time. In the second you would be using space-time to damage an object made from other stuff. Not that doing either of these is practical — but in principle you could do either one.

6. Haha, that is true! I'm not sure Lemaitre's "primeval atom" was much better than "Big Bang." All the really dramatic names I can think of smack of religion. Besides, who wouldn't want to write a poem about the charmed quark? It's so… charming. (Yes, I'll be here all night, folks.) I've also heard Dark Energy described as a sort of cosmic "pressure". I'm not sure how valid that comparison turns out to be.
• Yet, his 'atome primitif' suggestion sounded better than his other suggestion, to name it the 'cosmic egg', which was probably inspired by one of many eastern mythologies. That would really have been the worst possible name.

7. I am an IT pro and we have trouble with overloaded terminology just within our field. As with particle physics, our objects keep splitting into pieces, too. The only thing to do is have fun with it! What are your thoughts about the popular vision of an object made of matter meeting an object made of antimatter? Now that I've read your article, I realize that "an object made of antimatter" is not a well-defined concept, and it might fall apart if we look at what the statement really means, compared with how matter is really composed. Given that "antimatter" is at the center of the popular imagination, what are your comments on hypothetical, macroscopic anti-objects?

• If you could collect enough anti-matter [definition #1] and shield it from all the matter [definition #1] that's flying around (stray electrons and the like), then there would be no problem in principle in constructing anti-salt and anti-steel and anti-cells and anti-cars. The laws of nature are sufficiently symmetric (not exactly, but very close) that anything you can do with matter you could do with anti-matter. And indeed, if you brought any significant amount of matter in contact with any significant amount of anti-matter you'd make an explosion. But no one has any practical reason to try to construct large amounts of anti-matter. The closest we've gotten is making powerful beams of anti-protons and anti-electrons (positrons) for use in particle physics experiments. But the amount of energy that you could obtain by slamming those beams into a wall is small. In fact that's exactly what happens to those beams when we're done with them; we slam them into a wall, underground somewhere. The wall does heat up, but mostly because the anti-electrons or anti-protons are traveling really fast and have lots of motion-energy — not because of the energy released when they find an electron or proton and annihilate to something else (photons or pions, for instance). And as far as we can tell, the part of the universe that we can see does not naturally have large amounts of anti-matter anywhere. [If there were regions of the universe with large amounts of anti-matter, then at the border between regions of matter and regions of anti-matter you'd expect to see large quantities of photons with very particular energies emitted. We don't see signs of such borders.]

• Thanks, Professor. Two things about our hypothetical antimatter #1 that I hope might surprise me. One is that I don't understand what happens when two macro objects "touch," but I've been told it has mostly to do with the electromagnetic force. Are there subtle effects, like the exclusion principle, that change the way antielectron shells would behave near electron shells? Two is that I've read some interesting history about the early atomic energy laboratories, and the way that fission materials went "prompt critical" almost always interrupted the nuclear reaction, milliseconds after it began. It was as if the nature of the reaction was to defuse itself, contrary to popular imagination. So when the first few leptons meet their anti-partners, and throw off some lightweight particles, what happens next? Their atoms become ions and their molecules would break apart, for one, if they have time. But how do we expect the nuclei to come in contact?
Would we get more of a chemical than a nuclear reaction? And if there was an explosive force, wouldn't it force the macroscopic objects apart? That's assuming they were solid objects, as in the popular imagination; if gas met anti-gas, or liquid met anti-liquid, the macroscopic dynamics would be quite different. That's why I wonder if, perhaps, objects and anti-objects might behave differently together than the popular imagination dictates.

• Ah, I see what you're asking. There is no question that if you took two cubes, one of matter and one of antimatter, and brought them safely together, the actual contact would be instantly different from the contact between matter and matter. At the surfaces of contact, the electrons on the outskirts of the atoms and the positrons (anti-electrons) on the outskirts of the anti-atoms would start finding each other on very short microscopic time scales, and immediately begin turning into pairs of photons of 511,000 electron-volts of energy each. These "gamma rays" would then create an electromagnetic shower of particles: lower-energy electrons, positrons and photons. Since it only takes a few electron-volts of energy to rip the electrons off an atom (or the positrons off an anti-atom), all the atoms and the anti-atoms near the surface would be quickly disrupted, vaporizing the material in that region. The force from the released energetic particles smashing into the remaining atoms of the cubes would most definitely push the cubes apart, just as in the fission experiments you mentioned. So there'd be an explosion, but only of the material nearest the surface of contact, and unless the two cubes were slammed together with enormous energy, as in a fission bomb, you'd only get annihilation near the contact surface.

• I am sure that someday soon people will make many millions of anti-atoms. One just has to remember that a glass of water has something like a million million million million atoms of hydrogen. I might be off by a factor of a thousand or so (I didn't check the number carefully), but you get the point.

8. Now you've reduced everything to a word: stuff, which can have mass, energy, can do work… but what is stuff? You said what it can do, but if we want to go deeper, is that the end of our search? It is all some kind of circular definition… some kind of DNA-protein closed cycle!! Where we are lost.

9. I think it is quite misleading to say such a thing as "you can't have an elementary particle without a field, but you can have a field without any particles", because the concept of particle depends on the observer: what is vacuum in one frame of reference has lots of particles in another (accelerated) frame. If the Higgs is the caveat of the phrase then, in my mind, not finding the corresponding particle indicates that such a field does not exist. But I'm by no means an expert in elementary particles and could be (very) wrong about this statement. As a complementary note on anti-matter at the large scale, there are also experiments producing some dozens of anti-hydrogen atoms. Of course nothing macroscopic. Sorry for being somewhat pedantic.

• I did not explain what I meant here, because it is technical, but since you ask… A field that strongly interacts with itself, or with other fields, may have no particle states at all. This is a well-known fact about conformal field theory. There are many concrete examples in the context of solid state physics, and many hypothetical examples that arise in high-energy physics.
Said another way: a non-interacting field always has well-behaved ripples (which in quantum mechanics are made from quanta), a weakly-interacting field has largely well-behaved ripples (though these may have a finite lifetime), but a strongly-interacting field may have nothing resembling a ripple at all. So no, what I said is not misleading — it is the particle picture of fields, which assumes that fields are weakly interacting, that is misleading. And you used that weakly-interacting-field intuition in your comment.

• Hum, that's quite interesting, and a sign that I should be studying a lot more. My comment was really made with non-interacting fields in mind, so I see what I got wrong. Just to make clear, what I meant was that for non-interacting fields the concept of particle is observer-dependent because of things like the Unruh effect. But I couldn't agree more with you that fields are the essential concept and not particles (the point I was trying to make). Thank you for the explanation.

10. According to all of your articles, the most fundamental stuff in the end is the ripples in fields. I mean, if fields are extended stuff, ripples are "real concrete stuff relative to our senses", which can have mass, energy, etc. But if fields are stuff, then we reach an absurdity: what is stuff? Stuff is stuff!!??

• Fields are not traditional 'stuff' as we know it. They're almost like things that can have stuff in them. (The ripples.) I think that a nice definition of 'stuff' would be 'particles' (ripples), since they're the things that 'have' all the other things, such as speed, mass, energy and so on, which we usually associate with 'stuff'. But in the end 'stuff' is just a label we attach to things. Matter, particles, objects, squiggles, these are all just names that we use. Nature does not care about our neat little ordering systems, and so sometimes things can get a bit confusing.

11. Great stuff, Matt. Thanks. A parenthetical question: This is a whole new way of thinking for me, having spent my career in applied physics before 1980. In my struggle to adapt to your post-classical, if not post-modern, way of thinking, I am trying to understand where and how the Higgs mechanism appears in the mathematical constructs that underlie the standard model (what are those mathematical constructs?) and how we know what decay products to expect from Higgs particle decay. If I wanted to find an answer to those two questions on Google, what should I google?

• Do you understand superconductivity? The photon obtains a mass inside a superconductor when Cooper pairs (represented by a charge-2e scalar field) condense. The W and Z particles obtain a mass within the universe when the Higgs field condenses. The mathematics is almost the same — relativistic instead of non-relativistic, non-Abelian instead of Abelian, but it is the same idea. The most important difference is that we know that for the superconductor, the charge-2e scalar is a kind of bound state of two electrons, but for the universe, we don't know what the Higgs field is yet — whether it is a composite of something else or not, whether it is just one field or several — and that's what the LHC is aiming to find out. As for how the Higgs decays, that's a little more complicated. Did you read what I wrote on Standard Model Higgs decays?
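To put rough numbers on the statement that a condensing field gives particles mass, here is a back-of-the-envelope Python sketch; the formulas are the standard tree-level ones, and the field value and couplings are approximate measured values:

import math

# A back-of-the-envelope sketch: the W and Z masses from the Higgs field's
# nonzero average value. (Tree-level formulas; approximate measured inputs.)
v = 246.0                              # GeV, the Higgs field's average value
g = 0.65                               # SU(2) weak gauge coupling (approximate)
theta_w = math.asin(math.sqrt(0.231))  # weak mixing angle, from sin^2 ~ 0.231

m_W = g * v / 2.0               # tree-level W mass
m_Z = m_W / math.cos(theta_w)   # tree-level Z mass
print(round(m_W, 1), round(m_Z, 1))    # roughly 80 and 91 GeV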
12. Is mass constant? Or should I be asking: is the Higgs field constant? In the beginning of time (and space) there was finite temperature within the absolute maximum of Fermi spheres, the singularity that created the Big Bang. The expansion of this sphere created time and space and hence a field (presumably the gravitational field). With further expansion and lowering of temperature (energy densities), resonances were created and hence more fields, like EMF and Higgs. Now, if energy is a conserved quantity then the sum of all fields must also be constant. So with the continued expansion of time and space (spacetime), the magnitudes of the various ripples as created by the various fields should be reducing. (I say should because the densities continue to lower, both absolutely (larger expanse of the entire universe) and locally around the smaller gravity wells.)

• 1) We do not understand the Big Bang at the very earliest times with the precision that you suggest. 2) Energy is not in any simple sense a conserved quantity in a rapidly expanding universe. Even if it were, fields are not energies, and the sum of all fields isn't even well defined, so it certainly need not be constant. 3) The mass of particles like electrons is not believed to have been constant over time, because the Higgs field was not constant over time. At very high temperatures the Higgs field would have been zero on average. As the universe cooled, an "electroweak phase transition" must have taken place, a microscopic time after the Big Bang began — its details are not yet known, because we don't know enough about the Higgs field yet. But the Higgs field is believed to have changed very rapidly and then settled down to its present non-zero value during that transition. 4) As far as we know, the Higgs field's value, and the electron mass, have not changed a bit since then. In principle they might have varied over space and time, but there is neither experimental evidence nor good theoretical reason to think they actually did, at least within the part of the universe that we can see and over the time since the electroweak phase transition. (For instance, the success of Big Bang Nucleosynthesis in predicting the original helium-to-hydrogen ratio of the universe could easily have been messed up had the electron or proton or neutron masses been significantly different from what they are today.) Scientists continue to try, experimentally and observationally, to test whether there is any sign of variation.

13. Hi, I was wondering if energy is conserved in general relativity; what I have read on the internet has left me confused. Also wondering where the energy of the CMB has gone.

• Ummm… I don't think I can give a very good answer here that I'm ready to stand behind. Let me say two things. 1) LOCALLY (that is, in any small region of space, for a short time) energy and momentum are conserved the way we are used to. 2) GLOBALLY (that is, across regions or across eons where gravitational fields are very important, such as the universe as a whole) you have to be much more careful about how you define the total energy of a system. It's quite subtle, and can't always be done. And I'm not expert enough to answer off the cuff and get all the details right as to when you can and when you can't, and how you do it in the cases when you can.
[Been too long since I reviewed this subject, which is mostly outside my research…] If what you mean, in asking about the energy of the cosmic microwave background radiation (CMB), is how did the photons lose energy as they cooled off during the universe's expansion, one way to answer that is to say that the expansion of space itself took the energy from the photons, and from all the other particles too. But I'm not sure that's super-intuitive. I know an intuitive answer that isn't really correct (namely, to consider how photons in a box lose energy if the box expands). Maybe a general relativity expert here knows a better intuitive and also correct response.

H = U + pV

where H is the enthalpy of the system, U is the internal energy of the system, p is the pressure at the boundary between the system and its environment, and V is the volume of the system.

As lup mentioned, this equation is somewhat confusing. It can be interpreted as saying the energy is spent to apply pressure to the boundary and expand the volume. But that interpretation implies there is an external environment outside our universe. Are we living in a bubble within a bigger bubble, in an asymmetric phase within a symmetric environment? Are there other asymmetric bubbles nearby? The thing that amazes me, Professor, is that regardless of which theory you believe, none explains the real nature of energy. Why was the temperature so high at the Big Bang? Is the symmetric phase an "infinitely" stretched spring that, when broken, releases all its energy in a very short time (space), and hence the large energy density, temperature?

• So energy is conserved when CMB photons have their wavelength stretched by the expanding universe? Is this energy helping to expand the universe further, or is it more like a stored potential energy?

• The thing about energy in general relativity is that for it to be defined globally the spacetime must be stationary, which is a way of saying that "the space is the same at every time". The problem is that expanding universes are not the same at every time, so that energy is not conserved. With that in mind, the CMB energy has not gone anywhere but has just disappeared. We can't really ask where it has gone, because that implies it is conserved. Since it's not, the energy has just gone away. I don't think this has any intuitive answer. The closest to intuitive I have heard is the "photon in a box" thing, which I kind of like even if it is not precise. The only thing one must remember is that in general relativity there is no "potential energy" related to gravitational fields, so the energy of photons is not stored in gravity; it has really gone away. To sum everything up: the energy of the CMB is not conserved when the wavelength of photons is increased by the universe's expansion, and the energy is not "stored" in any potential form; it has really gone away.

• I'm sure you have heard of the seemingly obsolete theory of "tired" light – could the "hot" photon's energy not drive the expansion of the universe: in other words, red-shifting because the wave packets are themselves expanding (just as water waves dissipate their energy over distance), rather than being stretched to red because an unknown dark energy is pulling them apart?

• Einstein's equations for the expansion rate of the universe work very well; they are used in deriving the prediction for how much helium there is relative to hydrogen, and in predicting the cosmic microwave background spectrum, both of which agree very well with data.
[A nice (if a little out of date) discussion of the latter appears here: ] This is all done without appealing to anything like "tired light". Moreover, the dominant energy density in the universe hasn't been from light (and other very light-weight particles such as neutrinos) since the universe was 100,000 years old or so. So it's hard to suggest that light could be essential to the expansion during most of the last 13.7 billion years. I'm not the world's expert, but I suspect that with the current precision available in cosmology, there isn't a lot of room left for exotic theories of why and how the universe is expanding. Of course we don't know for sure what got the Big Bang started, though there are plenty of theories of that (under the name of "reheating following inflation").

• Yes, but as far as I understand, Einstein introduced the cosmological constant to make his equations fit the observable reality (a non-collapsing universe) rather than this being a mathematical necessity per se, and as you point out he seems to have done so particularly well. I also did not want to imply that dark energy is the dominant energy of the universe; it only has to be stronger than the universal gravitational force, and supposedly wins the upper hand the further the mass particles are apart. But on the same note, if the universe continues to expand, eventually the photons of the background radiation will be stretched beyond the temperature of absolute zero. Do they then cease to exist? Do they blend into and become (non-particulate) elements of the electromagnetic field? Or could that result in the end of the expansion phase, and gravity, however weak, but now unopposed, lead to the beginning of the hypothesised big crunch?

• Thanks heaps… that's certainly answered my question. The only remaining puzzle for me is that I thought energy conservation was a consequence of the laws of physics not changing with time?

• So the subtlety here is: if time is just something that sits there and the laws of physics operate within it, then yes, conservation of total energy follows from Noether's theorem. But once time starts participating in the physics — as it does in Einstein's theory, where space and time are actually part of the physical phenomena — then to ask whether the laws of nature change with time becomes subtle. For example, you can't even necessarily define a global notion of time in a sufficiently curved space. What survives of our usual notion is that within sufficiently small and weakly-curved regions of space and time, the laws of nature do behave in a time- and space-independent way, to a sufficiently good approximation that energy and momentum are conserved there. This is the notion of LOCAL conservation of energy and momentum. There is still a Noether theorem and still a conservation law, but it applies locally, not globally across all of space and all of time (except in special circumstances, such as a time-independent space-time). (Technically: there are energy- and momentum-currents that are locally conserved.)

• Thanks Matt, that's cleared up a lot of things. Cesar mentioned that the CMB photon energy is really just going away (ceasing to exist) as the universe expands. Can we have the opposite – can we have energy appear that wasn't there before?

• Well lup, yes, we can have the opposite, and it's pretty much standard. The idea behind energy "going away" is that since the universe is expanding it "stretches" the photon wavelength, and consequently its energy decreases; a small numerical sketch of this scaling follows below.
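A minimal numerical sketch of that stretching, assuming only the standard scaling that a photon's wavelength grows in proportion to the cosmological scale factor a(t); the factor of 1100 below is the approximate expansion since the CMB was emitted, and the emitted wavelength is illustrative:

```python
# Photon energy dilution by cosmic expansion: the wavelength scales with
# the scale factor a(t), so E = h*c/lambda scales as 1/a.
# Sketch only; assumes standard FRW wavelength scaling.

h = 6.626e-34    # Planck's constant, J*s
c = 2.998e8      # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy in joules of a photon with the given wavelength in meters."""
    return h * c / wavelength_m

lam_emitted = 1.0e-6        # ~1 micron, illustrative wavelength at emission
a_ratio = 1100.0            # growth of a(t) since the CMB was emitted (approx.)
lam_today = lam_emitted * a_ratio

print(photon_energy(lam_emitted) / photon_energy(lam_today))  # -> 1100.0
```

The energy simply scales as 1/a; running the same numbers with a_ratio less than 1 (a shrinking scale factor) makes the photon gain energy instead, which is exactly the contracting-universe case taken up next.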
In order to have the opposite effect, all we need is a contracting universe, like those cyclic-universe models. To be very clear: in those cases there is so much mass in the whole universe that at some point it stops expanding and starts collapsing. During the contracting phase the photon wavelength is "compressed" by the contraction of space, and the energy increases. It's really like watching the expanding universe going backwards in time. But remember that this is just a model; the current astrophysical data supports the accelerated expanding universe, which more or less rules out recollapsing universes of this naive kind. But from the theoretical point of view there are no problems with photons "heating up".

14. Hi Matt! An experiment consisting of a holometer is currently in the design stages at the Fermilab centre. Headed by Dr Craig Hogan, the aim is to determine if spacetime has holographic properties by attempting to measure a quantifiable Planck unit. I know that a holographic field is a constructive interference pattern with information encoded in the boundary of the pattern. I also know that I am voyaging into the realms of speculation by suggesting this, but could the fermions be nodal points of a standing wave? Certainly, the fact that they are non-locally connected seems consistent with this. If they are, then could this also mean that mass is a measurement of the negative interferometric-visibility density of a Higgs scalar field instead of a particle? Could it also imply that the laws of physics are a hierarchical layer of anti-nodal displacement cycles, generated by an underlying constructive-interference-pattern state, with the gauge bosons being the anti-nodes themselves? I apologise if these questions seem rather wacky, but ever since I heard of the Fermilab experiment I've been racking my brains trying to figure out how the holographic principle would work in practice, and how it could relate to the search for a non-standard-model Higgs effect!

• Much as I like and respect Craig Hogan, I'm pretty skeptical about this experiment, I must say. I suspect that the effects he's looking for are vastly too small to be detected. As far as I can see, there's absolutely no need for there to be any connection between the Holographic Principle and the Higgs mechanism. One operates on the weak nuclear force at the energy scale of 250 GeV or so; the other relates to space-time viewed at the energy scale of 10,000,000,000,000,000 GeV or so. I'm afraid that the answers to all of your questions are "no" — or better, "be a lot more precise". Remember, particles aren't little marbles; they're ripples in fields, and you have to explain the fields first. We have a very successful quantum field theory for fermion fields and gauge boson fields, working in some cases to one part in a trillion; so you have to tell me how you could reformulate all of what we know about quantum fields to make the fermions come out as nodes in some kind of standing wave, interacting properly with gauge bosons that come out as anti-nodes in some kind of standing wave. Sounds like an extremely tall mathematical order, and I have no idea whatsoever how you would start to make that work. You can't do theoretical physics with words, because you end up just speaking ambiguities. You have to do math — that's both the essential part and the hard part, not just because math is technical but because most math you try to write down will be self-inconsistent or inconsistent with existing experiments.
Einstein is widely quoted in our culture. But if you read his papers, you'll find that all those nice-sounding words are backed up with solid calculations — and that's why it isn't ambiguous what he means when he starts talking about the subtleties of special and general relativity. And he always checks that what he's proposed is consistent with existing experiments.

15. In all your posts you mention fields so many times, yet you never explain what a field is; the most you have said is that fields are stuff that can have E, m, charges… But is that all by which we can define a field? What are fields? I know that fundamentals cannot be defined, as there is nothing more basic to refer them to. Are we to stop at a word, "field"? Are fields an ontological reality or a mathematical representation of our observations? Forgive my insistence, but I cannot stop in mid-road.

• In the current way of thinking about the world, fields are about as far as you can go. In specific attempts to go beyond this way of thinking, fields can be manifestations of other things. For example, in theories with extra dimensions, some (but not all) fields can be manifestations of the shapes and sizes of dimensions that are too small for us to see. In other attempts, some of the known fields can themselves be made out of other fields which are more fundamental. But all of these attempts are speculative; we don't know which ones are correct. So I would say that the fields are currently viewed as the fundamental ingredients; that is where things currently stop. Even space and time are to be understood in terms of gravitational fields. However, knowledge accumulates over time, and what we think is fundamental may change. There can also be multiple equivalent interpretations of the same information — two ways of looking at a problem that have exactly the same mathematics and the same physical predictions. Philosophers are frustrated by this ambiguity, but theoretical physicists have learned that we have to remain light on our feet.

• Is it valid to say fields are any change (quantitative and/or qualitative) of one or more quantum numbers over the space and/or time domain? In other words, that they are defined by the topography of spacetime? I speculate this way because, as fundamentals, they must all be derived back to the initial field, whether it is gravitation or something more fundamental like vortices created by the rotations of three-dimensional space. This is why in my earlier post I speculated that the sum of all fields must be constant, because spacetime has an orderly progression. So my logic is that there must be a constant constraint that drives order; otherwise we would have global chaos.

• It's very important to distinguish what is known (by combining experiments with a clear theoretical framework) from what is speculation and may not end up surviving experimental tests. You say: "as fundamentals they must all be derived back to the initial field". We don't know that. You say: "spacetime has an orderly progression. So my logic is there must be a constant constraint that drives order otherwise we would have global chaos." Again, we don't know that. You can't really talk about fields as a change in quantum numbers, no. Quantum numbers are very specific things: they are labels of certain quantum states; in particular, they are eigenvalues of overall quantum operators that are meaningful in specific quantum states. Quantum fields are a more general concept.
For example, the electric charge of an electron is a quantum number; but there is no field for that. And conversely, most states involving the electromagnetic field do not have a definite value for the field, and so there's no sense in which there is a suitable quantum number associated to those states. The electromagnetic field from freshman-year physics is the best place to start really understanding classical fields. Quantum fields are like classical fields in that they can support waves; they are unlike them in that the waves cannot come with arbitrary amplitude, but instead must have an amplitude equal to a minimal value (one quantum) times an integer (equivalently, a wave of frequency ν carries energy E = n h ν for some non-negative integer n). I'm afraid they are unlike them in a lot of other subtle ways too.

• ".. but instead must have an amplitude equal to a minimal value (one quantum) times any integer …" Interesting. Is that because we oversimplified the math by introducing renormalization, because of our inability to visualize the physics (reached the limit of our conscious awareness to map nature's secrets)? I know that Dirac and Feynman were concerned with renormalization because it would contaminate the math in the most fundamental of ways and get us trapped in our own math. Does renormalization prevent us from evolving naturally into more advanced physics? Dirac's criticism was the most persistent. As late as 1975, he was saying: "Most physicists are very satisfied with the situation. They say: 'Quantum electrodynamics is a good theory and we do not have to worry about it any more.' I must say that I am very dissatisfied with the situation, because this so-called 'good theory' does involve neglecting infinities which appear in its equations, neglecting them in an arbitrary way. This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it is small – not neglecting it just because it is infinitely great and you do not want it!" Another important critic was Feynman. Despite his crucial role in the development of quantum electrodynamics, he wrote the following in 1985: "The shell game that we play … is technically called 'renormalization'. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It's surprising that the theory still hasn't been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate." I am inclined to agree with Dirac and Feynman in that we need a better handling of the infinities and constraints to make real progress. I am afraid our ability to experiment is very quickly reaching a stagnation point because of the limits of our machines. I also want to take this opportunity to stress how important it is to convince NASA and their handlers that more resources should be spent on sensors, telescopes, and space experiments than on the very expensive human spaceflight projects.

• If you think renormalization is about infinities (or that it has to do with an inability to visualize the physics), you have not understood it. Unfortunately most textbooks still talk about it in terms of removing infinities. This is deeply unfortunate and misleading, because in fact, even in theories with no infinities, there is renormalization. Even in quantum mechanics, in the anharmonic oscillator, there is a perfectly finite renormalization.
Once you have understood that, then you can understand that renormalization (both perturbative and non-perturbative) has nothing to do with the infinities themselves, but with something more physical and deeper; and you can also see which types of infinities are acceptable in quantum field theory and which types are not. Dirac clearly never grasped this point. As for Feynman, I am sorry I never got to ask him exactly what he meant by his comment. But let's just say that our comprehension of quantum field theory has come a long way since 1985. Unfortunately this is an extremely subtle and technical subject (which is why almost nobody explains it in a sensible way) and I doubt I will be able to explain it on this website. At some point I may write a short technical monograph about it. But suffice it to say that I think you're not even close to being on the right track.

• There are interesting discussions going on below this very nice article 🙂 In particular, I feel really affected by the second paragraph in Prof. Strassler's comment above, ending with "… fields can be manifestations of the shapes and sizes of dimensions that are too small for us to see." 😛 😉 🙂

• Hi Matt, since you mentioned that "Even space and time are to be understood in terms of gravitational fields", I want to remark that I have found this concept very tricky to explain to non-experts. We're so used to thinking about fields as things that "live in" static space and time that understanding space and time themselves in terms of fields is hard to wrap our minds around. If someday you have the time and inclination to tackle an article about this issue, I think many of your readers would appreciate it, and I would be interested to see what approach you take.

16. I read in an article about quantum gravity in the Stanford Encyclopedia of Philosophy that fields are specific properties of space-time itself; is this correct?

• No. They can be. But often they are not, even in string theory. My advice: don't get your physics from philosophy encyclopedias (and I wouldn't get your philosophy from physics encyclopedias either).

17. But my dear friend, physics is an ontological science; it is interconnected to philosophy. Anything you say about the fundamental level of existence IS philosophy.

• (THANKS). This is an interesting point, and I do think I disagree. Physics is interconnected to philosophy, yes. But physics is a quantitative, predictive enterprise. Theoretical physicists will often accept levels of understanding or ambiguity that are unacceptable to philosophers. (And to mathematicians!) Collectively, we are typically much more practically minded than our colleagues in either of these subjects. This allows us to make rapid, but often very ragged, progress. Often when we learn how to calculate something, it is some years, even decades, before we understand what we've actually learned well enough for either mathematicians on the one hand, or philosophers on the other, to engage with it. For instance: what people said about particles and fields in the 1950s, before quantum field theory was understood at the much deeper level that is available to us today, was in many cases deeply misleading philosophically. The story I tell you today is based on insights that emerged in the 60s, 70s, 80s and 90s. Indeed, a good chunk of what I learned in grad school was misleading philosophically.
Meanwhile, mathematicians still haven't figured out what we're doing in quantum field theory… and I wish they had, because there are many puzzles about it that we can't solve.

• "Physics is a quantitative, predictive enterprise" up to the point where we decide what degrees of freedom (DoFs) and what formalism to use when we construct models for experiments. If we choose the wrong number of DoFs or the wrong formalism, we find the accuracy is OK but not great, and the inaccuracy gives us precious little clue as to what different DoFs or formalism to use instead. We can't do much better than guess again. Hence the enormous disparity of models that are published in journals. We also can't measure the Lagrangian in a way comparable to measuring the field strength, say, supposing we have managed to guess the right DoFs and formalism. The process of guessing again is done slightly better by some physicists than by others, but that x-factor is not as quantitative and predictive as we'd like it to be. We can call this guessing Foundations of Physics (or we could call it Philosophy done by experienced Physicists, but that's just names, silly to argue over), in contrast to Philosophy of Physics (Physics done by Philosophers), but there's a lot of crossover between these two academic communities, with good ideas on both sides being taken seriously by people on both sides. Of course it's much easier to generate bad ideas when the subject matter is beyond computationally complex, because we don't know what questions are possible. In the end, however, we can waste 50 years on quantitative predictions if we are using the wrong DoFs and formalism, so it's worth some people working concurrently on what we think we're doing, even if many of them waste their time.

18. I think no time is wasted once they are on the grand march to understand existence. No one will put his hand on the ultimate truth, but science and philosophy are two faces of the same coin… the sacred search for what IS and our place in it. It is the most precious effort humans can perform; even a simple layperson's question can lead to some great answer. It is our duty and our destiny as humans to think, to reflect, and to wonder… nothing is greater than our feeling of awe in front of beauty, design, perfection in a realm which is not perfect itself… this is the meaning of being human: to enjoy life, to love, to care, to share, to embrace the whole of creation.

19. Once upon a time in space, I tried to write a thesis. I went back in time and forward in space… today became yesterday and zero was squared… BANG!!!
The singularity of zero was infinite (-0.0000…) and eternal (+0.0000…), and thus the first law was laid, from which the universe arose… the expanding singularity of infinite totality, a total of 1, and a value of 0… As a totality of one singularity, one could not be measured; therefore totality was 0- or 0+, depending on direction as it relates to the opposing direction… giving rise to everything that followed. What I speak of is a singularity of time, but one could easily have mistaken it for the Higgs and the Higgs field… or possibly, on a larger scale, dark energy and dark matter… It's very hard to make a strong case to include all that this theory covers without producing a thesis, so just a little snippet for now, as I reduce it to simplicity: it's a binary universe, where zero has two values, 0 and minus 0; from minus 0, 0 has the value of expanding 1, or 0.9999…, and from 0, minus 0 has the value of negative 1 infinite, or -0.9999… Time is the equal and opposite reaction, preceding the action of forming space; just as all mass is basically congealed energy, all of space is congealed time, both the positive and negative… quantum at the particular level, yet in general it's relative… Thank you for reading, and I value any input.

20. A good direction to take the discussion: Why do fermions feel more like "stuff" than bosons? This brings in a little quantum mechanics, which you might not be ready to do yet. But the "rule" that two fermions cannot be in the same quantum state, while bosons can, seems central to a working definition of "matter." Intuitively, we expect matter to "take up space," which fermions do, but bosons not so much. Of course fermions don't take up space by themselves, since any collection of fermions is always accompanied by bosons transmitting forces between the fermions, and so on. Thanks for all the great articles!

• Well, if you keep running with this too far you start to stumble… For example: in a world with only gluons and no quarks, the gluons would bind together to make hadrons that are called "glueballs". Those take up some space. Granted, you couldn't make anything macroscopic out of them. If the electron were a boson, you'd still have hydrogen. That takes up space. And hydrogen's a fermion; you could make things out of it. In our world, hydrogen is a boson, while deuterium (heavy hydrogen, whose nucleus contains a neutron as well as a proton) is a fermion. Is one of them matter and the other not? In certain hypothetical universes, you could make a proton-like fermion by combining a fermion and a boson. And you can make bosons by combining fermions. Even in our world, protons contain quarks (fermions, which you'd like to call matter) and gluons (which you'd like to say aren't part of matter so much as representing the force that holds the fermions together). But you can imagine a world in which some quarks were fermions and others were bosons, and the number of types of gluons was different — and then you could get fermionic proton-like objects which were made from quark fermions and quark bosons as well as gluons. What would you call the quark bosons? Matter or not? The theory of supersymmetry causes a problem, because it combines fermions and bosons in pairs. It doesn't make sense to say these pairs are part matter and part force; when you write the equations down, the boson-fermion pair appears in a single mathematical object. And in string theory all of these particles can be made from the same type of string. Let's not forget that what we call "dark matter" may be made from bosons.
So there's all sorts of ways in which this line of thinking can break down. At different layers of structure, what you'd want to call matter might change in a profound way. I don't view the current distinction as likely to survive too much further into the future.

• Which is valid: is matter standing spherical waves oscillating at the Compton wavelength, or is matter a Fermi sphere with such a radius as to give a Fermi energy equivalent to the mass-energy of that particular particle? I can understand the Pauli exclusion principle via the Fermi-sphere definition but not with the standing-wave theory. Are we missing some math?

• Well, calling fermions the essential ingredient in "matter that takes up space" still seems consistent to me, as long as we add a caveat about how closely you look. An atom or molecule can have the properties of a boson when viewed from the appropriate distance, but when you get "too close," the properties of the fermions that it is made of become important. How about, for now, saying "matter that takes up space" must contain quarks, leptons, or both. Whether or not it has the properties of a boson when viewed from a sufficient distance does not keep it from being "matter that takes up space."

• In a theory with no quarks, the gluons would bind up into hadrons called "glueballs". These hadrons would take up some space too, individually. Granted, I couldn't make a lattice out of glueballs. But I suspect there are bosonic systems where I could make a lattice, if I could arrange for some short-distance repulsion from a short-range force. And it is possible to build fermions as solitons in a theory with only fundamental bosons. So this still raises questions about your dichotomy… I think you're still relying on very specific properties of our universe that would not necessarily be true in other ones.

• How do theoretical physicists verify the validity of using the gamma function and the rather simple energy-momentum dispersion relation, E = a p^s, for deriving the thermal de Broglie wavelength? Does this derivation fall within the argument over whether renormalization is appropriate for quantizing the "quantum nature" of a gas? This approximation seems so critical in defining the true nature of the vacuum; how has it been verified, and conversely, how do you rule out dimensions above the usual 3 that we perceive to live in?

21. Now, if the mass of the proton is all of its quarks' energies E divided by c^2, how can we express the mass of an electron or any primary "particle"? If an electron is at "rest" — whatever that means — what is the meaning of its mass? Even, what is its E then? Are we running in a circle?

• The mass of the proton is not merely all of its quarks' energies E divided by c^2 (and to the extent a more precise statement is true, it is true only of a stationary proton). It's more complicated; that article is coming. An electron can have definite momentum (including zero) as long as it is in a state where all position information is lost. So it can be at rest, yes. And I can figure out its mass-energy E_rest in many ways experimentally. One way is to bring an anti-electron close by, watch the two annihilate into two photons, and measure the energies of the photons (which are pure motion-energy, since the photon has no mass).
Since energy is conserved, and the initial energy was (to a very good approximation) the mass-energy of the electron plus the mass-energy of the positron, which is 2 E_rest, the energy of each of the two photons is equal to E_rest for the electron. Divide by c^2 and you have the mass of the electron. (Numerically, each photon carries 511,000 electron-volts, so the electron's mass is 511 keV/c^2.) See for example

22. Hello Matt, it is incredible what an interesting subject physics really is, how much there is to say about it, and what thorough knowledge you have of every detail of physics! It reminds me of Feynman. For me, having studied theoretical physics but not actually working as a physicist, the first two volumes of Weinberg's "The Quantum Theory of Fields" were a gift from heaven. Thanks to these books I understand physics at a much more fundamental level than when I was a student, when they talked about gauge invariance, fields, particles, representations, CPT and renormalizability and I had no idea how they were all linked together. A remark: To call matter particles fermions is a good thing because of the stability they give to matter due to the Pauli exclusion principle. Usually we see the bosons as force particles, and usually this is not too bad a picture. But what about light-by-light scattering? Here, in the box diagram, the fermions are the force particles.

• Notice Mark Wallace's comment here, and my reply. You'll see that I am a little cautious about calling fermions "matter particles" because it runs you into trouble. In fact, your remark suggests another problem. Indeed, pairs of virtual fermions (more precisely, appropriate quantum disturbances in a fermion field) can cause forces. (Though we should recall that virtual particles aren't really particles: ) In fact, the force which holds a nucleus together is, from some points of view, due to pions — which are bosons, but are made from fermions (quarks and anti-quarks). So now we have a force particle made from matter particles. Which means our naming scheme is a mess. There is also a (subtle) way to make fermions from bosons (the word Skyrmion appears here). You see that this distinction just causes problems. At some point I think you have to take the physical phenomena for what they are and not spend too much time worrying about finding the perfect naming scheme for them.

23. Now I really wonder: what is the source of the intrinsic movement/momentum in all sub-atomic entities? What "pushed" the quarks or electrons to always be in motion? You said – nothing is at rest – well, what physical mechanism is the CAUSE of all that movement? I am beyond some equations' equilibrium; I am at the CAUSE, THE URGE, THE DESIRE!!!!! TO BE IN PERPETUAL MOVEMENT!!!! Conservation of energy? But if movement was generated, what caused its generation?

24. Let me be bold and see if I, an "outside observer", can "cause" a "disturbance" and create some "fire" in this "medium". I say, for lack of a better theory, God caused the universe to ignite and hence the Big Bang. The boundary of our math only goes so far: a time-energy uncertainty principle was given in 1945 by L. I. Mandelshtam and I. E. Tamm, as follows. For a quantum system in a non-stationary state ψ and an observable B represented by a self-adjoint operator, the following formula holds:

σ_E · ( σ_B / |d⟨B⟩/dt| ) ≥ ħ/2

where σ_E is the standard deviation of the energy operator in the state ψ, and σ_B stands for the standard deviation of B.
Although the second factor on the left-hand side has the dimension of time, it is different from the time parameter that enters the Schrödinger equation. It is a lifetime of the state ψ with respect to the observable B. In other words, this is the time after which the expectation value ⟨B⟩ changes appreciably. This principle says the quantum state ψ cannot stay the same forever, since that would mean infinite energy (I hope I am right 🙂). So, once an external disturbance ignites the charged initial state, change will continue until all the useful work is spent.

25. One thing has always puzzled me about dark energy. It is supposed to contribute 74% of the mass-energy density of the universe, but at least naively it doesn't even seem like it should be commensurate with energy. Obviously that's wrong. I'm dimly aware that there's such a thing as the stress-energy tensor. But I would have thought "density" would be a component of the tensor, and it seems like the 74-22-4 decomposition must be based on some kind of norm? I guess my real question is: what does that 74-22-4 decomposition mean?

26. One of the great properties I like so much in what Matt writes is his total freedom from prejudice and pre-conceptions; he gives us the state of the matter with a true, honest description. So let me state what — being a non-physicist — I understand as the core of all that has been said: 1- We only know that there is something/stuff, which is the most – maybe – fundamental kind of physical existence, which we call fields, and of whose reality – the thing as it is – we know nothing. 2- That stuff has properties we can observe, which we call m, E, p, interacting according to pre-set codes we call theories; these theories are our POINT OF VIEW AS OF TODAY, never to be confused with ultimate reality. Now the main essential difference is: Matt never claimed that what he says is the final ultimate truth, in contrast to 99% of articles and books, where they claim that what they present is such… Thanks Matt, this is the true, honest way of doing science.

• I think you are more or less stating my point of view, yes. I don't know what ultimate reality is, and I have no idea how I would come into contact with it. I just know that we have found ways of classifying the objects in the world such that we can predict in great detail how they behave. I can't tell you that classification is unique or complete. Along these lines I think it is also important to keep in mind that we, as minds, never come in contact with anything physical at all. See the table across the room? What do you know about this table? The only thing you know is the image created somehow in your brain, formed on the basis of electrical impulses down your optic nerve from your eye, which in turn are based on photons impinging on your eye, some of which came down from the sun or from the light bulb in the room, and bounced off the table in just the right direction to enter your retina. There are many steps from your image of the table to the table itself. Even when you touch the table, what you feel is in your brain, created from nerve impulses sent down from your fingers, from nerves firing in response to the deformation of your skin by the inter-atomic forces between your skin and the table. Your brain is not in contact with the table. What you feel is in your brain, not in your fingers. Our senses are no different from the measuring equipment used by scientists, allowing us to detect aspects of the world around us.
What we know of the world, through our natural sense organs and through the artificial sense organs of scientific experiments, is always indirect.

27. Objection: NO images or feelings are IN the brain. For the first time in all your presentations you decreed/decided on something where no shred of evidence exists as to the ultimate reality of consciousness/feelings/concepts… etc. David Chalmers wrote an article titled "Consciousness and its Place in Nature"; that was a prejudice. I wrote to him asking: how can you decree that its place IS in nature while your article is searching for its place with no conclusion reached? That is what I call pre-conception/prejudice, where a scientist decides/decrees a point of view as fact. I really hope that some day science can free itself from decrees based ONLY on relative, time-dependent observations far from final absolute knowledge.

• You are right, I did not phrase this well. Your statement is, I think, a little too strong — we do know that there are electrical phenomena occurring in the brain that are related in some way to the things we see and think, and we know that damage to areas of the brain (strokes, direct injury, disease) results in correlated damage to conscious experience. But how they are related, we do not know. So I would say that it's not that there's "no shred" of evidence — just that there's no clear understanding of the meaning of the evidence. In any case, all I really wanted to say is that conscious experience does not in any sense involve a direct encounter with the physical objects of which we are conscious. What conscious experience itself arises from, I certainly don't know.

• Neurons —> Bosons
Synapse chemicals —> Fermions
Eddy currents across the neuron membranes —> Field(s) (EMF)
Transfer of chemicals across synapses —> Fields (Colors)
Synapse firings —> Atoms (type)
Sequence of synaptic firings —> Ensemble of atoms (hadrons)
Thoughts —> Relativistic structures
Hence, one can speculate that our consciousness is part of the universal consciousness, and our body, including the wiring in our brain, is just one more fiber and/or group of fibers of the overall cosmic quilt.

• Um — you can speculate all you want, but there's no math on the left-hand side of your correspondence, while there's a huge amount of detailed, predictive mathematics on the right side. The reason we take the right-hand side seriously is that it goes along with mathematical equations which predict, correctly, the results of millions of experiments. If you can't make the link from the right-hand side's math to some corresponding math for the left-hand side, then we have no reason to believe any correspondence of the form you suggest exists. For example, fermions satisfy a Pauli Exclusion Principle. Are you suggesting synapse chemicals satisfy a similar principle? Do neurons form condensates the way bosons do? Atoms can be bosons or fermions; are you suggesting that synapse firings can be neurons or synapse chemicals? I insist on precise statements. Because that's what's needed for science to get done.

• I am glad you brought up Pauli's Exclusion Principle. I can understand the Pauli Exclusion Principle via the Fermi-sphere definition but not with the standing-wave theory. Are we missing some math?

• I think you are confusing some things. Atoms involve a nucleus surrounded by electrons which are standing spherical waves (or more complicated standing waves).
The Pauli Exclusion Principle simply says that no two electrons can be in the same standing wave if their spins have the same orientation. Metals involve matter in which the electrons in a given volume form a Fermi sphere. NOTE: this is not a sphere in physical space. It is a sphere in an abstract space ("momentum-space"). [The standing waves that electrons in atoms occupy are spheres in physical space.] (A small counting sketch of this momentum-space picture appears after this exchange.) The vacuum does not have an associated Fermi sphere. The mass-energies of particles in empty space are not associated with a Fermi energy.

• OK, thank you for clarifying. It's been a while since I've done some of this math, so I am quickly trying to catch up before diving into B-E and F-D statistics. So, to help me visualize: the Pauli Exclusion Principle says that if the standing waves are in phase they cannot interfere, due to possible "beating", and hence tend to infinite resonance (infinite energy, which is not allowed by conservation?). Conversely, a standing wave with a 180-degree phase shift will cancel the electron's wave and bring it down to the ground state (Dirac's antiparticle? What does he mean when he describes it, the antiparticle, as a hole? Is not opposite phase the same thing?). The different fermions are basically standing waves of different amplitudes and frequencies? Why are the half-lives different? So, the Fermi sphere is the "mechanism" of transferring momentum from one standing wave to another? How does that work, superposition? Final question: do the standing wave's ripples propagate to infinity, or do they reduce down to the ground state at some radius? Is there any association between this radius and quantum entanglement?

• I'm afraid that's not what the Pauli Exclusion Principle says. You have to first work out, for a particular physical system, what its one-electron states are; then the exclusion principle says that nature cannot put two electrons in the same state. This is not something that I know how to visualize, because it involves a quantum mechanical effect for which there is no visualizable analogue. Not all facts about quantum theory can be represented by a picture in the mind; this is part of what makes it hard.

• PS: I apologize for my hieroglyphics, but my brain works better with images. I can sit and study a math page and get it very quickly, but the next day I will have forgotten the details, and only the images that were created remain, for a long, long time 🙂 Can one interpret this as saying the second "electron" wave cannot superimpose over the first "electron" wave because the "electric charge" imbalance will "push" the second wave to a different phase, a higher state? So it is the combined nucleus-electron interaction that prevents the second "electron" wave from coming too close to the nucleus? What about two loose electrons: can they superimpose, or is it that the probability of two electron waves coming close to each other is nil? Does the exclusion principle have anything to do with the Z and W having mass, or is it solely because of the variants of the spin states of these bosons? If I can speculate again: I have seen the derivation of the equations for the Higgs field, and I somewhat understand the definition of symmetry "breaking" to give the mass term. But physically speaking, is it correct to say that it requires the interaction of two bosons to create a high enough resonance in the composite wave to give it "mass"? But, again, why do different fermions have different half-lives? In other words, why is the electron so stable while the other fermions decay so quickly?
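A minimal counting sketch of the Fermi-sphere picture described above, assuming free electrons in a cubic box of side L with periodic boundary conditions, so the allowed wavevectors form a grid k = (2π/L)(nx, ny, nz); the exclusion principle then allows at most two electrons (spin up and spin down) per grid point. The function name and parameters are illustrative:

```python
from itertools import product
from math import pi

# Count how many electrons fit inside a Fermi sphere of radius kF in
# momentum space. Sketch only: free electrons in a box, with allowed
# wavevectors k = (2*pi/L) * (nx, ny, nz) for integers nx, ny, nz.
# Pauli exclusion: each k-state holds at most 2 electrons (spin up + down).

def electrons_in_fermi_sphere(radius, nmax=12):
    """Electrons accommodated with |k| <= kF, where radius = kF / (2*pi/L)."""
    count = 0
    rng = range(-nmax, nmax + 1)
    for nx, ny, nz in product(rng, rng, rng):
        if nx * nx + ny * ny + nz * nz <= radius ** 2:
            count += 2  # two spin orientations share each standing wave
    return count

# The grid count approaches the continuum estimate
# N = 2 * (4/3)*pi*kF^3 / (2*pi/L)^3  (sphere volume / volume per state, x2):
for r in (2, 5, 10):
    print(r, electrons_in_fermi_sphere(r), round(2 * (4 / 3) * pi * r ** 3))
```

The sketch only illustrates how "no two electrons in the same state" becomes a simple counting rule in momentum space; it says nothing about the half-life question above, since lifetimes are set by interactions, not by the exclusion principle.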
• One physical object we do experience more directly is our own brain – or at least parts of it. I think that because certain brain activity appears to have SUBJECTIVE sensations associated with it, there is a certain aspect of the reality around us that is not accessible to our scientific instruments. What do you think?

• lup, you say .. I wonder. There are some very interesting ways of configuring "scientific instruments" to do a lot of what our own brains can achieve. Take a look at this very cool stuff:

28. Back on the subject of photons losing energy in cosmic expansion: if we're to view that as the result of the absence of any time-like Killing vector, and hence no energy conservation, how are we to view gravitational redshift in a Schwarzschild geometry, where there is time-translation invariance?

29. Let us have a mental experiment: a static closed universe where only one electron at rest resides. What is the physical meaning then of its m? E/c^2? Well, what is E? mc^2? See what the problem is? In all your posts E and m were never defined independently of each other. So, what is E ALONE? And so for m… Here we have one electron really at rest; what is the physical reality of its E or of its m? Are both interconnected in a closed circle where they are in principle beyond our understanding? Do E and m really have no independent physical essence that we can ever grasp? Then what is the m that belongs to that single static electron? Is it a scale by which we measure the "amount"/"quantity" of ripples?

30. Most people are making a major category mistake w.r.t. neural networks (N.N.W.). Neuron networks are designed to achieve specific goals within the chemo-electrical medium of brain and body; N.N.W. are systems of equations designed so that, with a system of input-output and a pre-set goal, they converge to accommodate similar goals. What is missed by most people is the fundamental category chasm between chemo-electrical output and sensations or feelings… You can never, in principle, get feeling as output while your input was electro-chemical impulses; only monists say that nonsense, for a very simple reason: this nonsense is the ONLY one within which materialism is possible. I was obliged to write this to clarify some mistaken comments written above.

• If I may concur with this in a simple way: one must distinguish between systems that appear to be conscious and systems that actually are conscious. I believe the rest of you experience consciousness because you are similar to me, and I know I experience it. But I don't have any way to actually check experimentally that you are experiencing what, from the outside, you appear to be. That fundamental problem — the lack of an experimental test of the experience itself — will make it difficult to determine whether a machine which behaves as though conscious actually is conscious. And a question for which there is no experimental test, even in principle, lies outside the reach of science… so until someone invents a convincing test that establishes whether a particular creature, natural or artificial, experiences its apparent consciousness, I cannot easily be convinced that the issue is ever going to be understood scientifically.

• The thing that we are most familiar with is also the most difficult and puzzling of all known phenomena. But nature creates it effortlessly every day.

31.
Neurons and neuron networks can be reduced to the realm of forces and particles, while sensations and feelings can never be reduced to anything within the material universe; as such, all talk about machine consciousness is completely void… Beware of the A.I. deception, in which a completely material category is claimed to represent a completely non-material category. But if science retreats, here come logic and rationality… a law… a necessity: any non-material category can never, in principle, be reduced to a material category… sensations and feelings can never be obtained from fields, particles and forces. This is a settled fact once all concerned are free of pre-conceptions, prejudice and materialistic naturalistic philosophy. For any individual, he is free in his stand; but once he addresses the public, no personal world view is allowed to pollute the innocents.

32. TO LUP: Again you are trapped in the fallacy of deciding/decreeing a stand where no real scientific proof exists and all logical/rational understanding converges… You decreed that NATURE creates consciousness; this is a totally void statement. What is nature? Laws? Fields? Both have no power to implement anything in reality. Nature is the material universe, while consciousness is beyond m and E. What is your proof for your decree? See what I mean? This is exactly the stand 99% of scientists adopt: to claim the unproved as the ultimate fact. We need to be humble in front of the majestic creation of GOD, seeing that human consciousness is the ultimate awe-inspiring reality, as it is the object of feeling and the feeling itself. You need to read lots of sources just to start to feel the awesome, tremendous meaning of the reality of what made you understand and decide… Good luck, my friend.

• Did you really have to bring God into this! Nature is what I would regard as the knowable reality: whatever affects, or has observable effects in, our reality. Laws describe what we observe in reality, and fields are what we call certain entities with certain apparent characteristics. The things we are describing do affect our reality, but we can only name the properties they possess, which is equivalent to saying: this is how they affect our reality.

33. To : I was just trying to make the point that we have no idea how to build a robot that is conscious, but a fertilised human egg will divide many times over the course of 9 months, according to the laws of chemistry/nature etc., and produce a baby that is conscious. And it's done it countless times since the dawn of the human race. I agree that conscious experience appears to be something completely different from what we call the materialistic world (for a start, it is a private and subjective, not objective, 'thing'), but wherever it appears (at least in this universe/reality) there is also this biological organ we call a brain present. Surely you see nature as GOD's design – a nature that is able to support the emergence of conscious experience. Perhaps that is a better way to say it.

• Sorry, but consciousness is a manifestation derived from the billions of permutations of electrical/biological signals. When we are awake we can control our thinking via negative biofeedback processing of billions of "digitized" thoughts. And when we sleep, because we have no conscious control, our dreams become more chaotic and random in nature; but those we can remember, which are channeled into conscious thoughts, make sense to us because we created them during our waking phase.
If there is a universal consciousness (and I believe there could be, due to the striking similarity between the universe and our own brain structures; see a wonderful video below), it would not belong to God, since the universal consciousness would be like our own: manifestations derived from the billions of permutations of particle interactions.

34. If Matt has trouble defining the rules of the game, perhaps we are just goldfish in a bowl. No. I refuse to give up on science.

35. I was reading through this and several comments, and all seem to agree that the Z and W bosons of the weak force act as massive particles because of the Higgs field; but earlier I was reading Fear of Physics by Lawrence M. Krauss, and came to the conclusion that these bosons act as massive particles because of virtual particles. As virtual particles pop into existence, it takes energy to do so, so they pop back out to conserve energy and momentum; but if two virtual particles are attracted to each other, and, instead of one pair, such particles fill the entire system being observed, and if these particles were to be adjusted in a way to have less energy than in a system with no such particles (due to the fact that two objects bound together have less energy than two separate objects), then the system could fill with these particles, which could in turn affect the weak-force bosons, causing them to act massive, for the same reason that photons act as massive particles in superconductors. Either I am entirely wrong (very likely), or is it possible that either these virtual particles are themselves the Higgs field, or that these virtual particles are waves in the Higgs field, rather than in the specific weak-force boson field? P.S. This is a great site for the novice high school student of 16! Thank you for such a great resource, professor!

• That conclusion is not correct. One thing you should know: 'virtual particles' are not particles (Krauss is not alone in not explaining this well); you should read at some point. Also, have you read the Higgs FAQ yet? If not, you should. Now — there are two separate questions: 1) What does the Higgs field do? 2) What is the Higgs field made from (if anything)? Answer to 1): The Higgs field gives masses to the W and Z particles; and a Higgs-like field (often called the Landau-Ginsburg field) gives mass to the photon inside a superconductor. Answer to 2): It could be a single fundamental scalar field, or several such fields, or it could be made as a bound state (simple or complicated) of other particles. The Landau-Ginsburg field turns out to be a field made from electron-electron bound states (Cooper pairs), bound together by phonons. That need not have been the case; it could have been a fundamental field of spin zero, or a much more complex object that Krauss would have had even more trouble describing, and it would still have given mass to photons in a superconductor and given us all the phenomena of superconductivity. The Higgs field CANNOT be of a precisely similar form to the Landau-Ginsburg field; Cooper-pair-like objects would break Einstein's relativity equations. There is something analogous (called 'technicolor'), but then the binding between the virtual particles that make up the Higgs field must in such a case be much, much stronger, relatively speaking, than in a typical superconductor. (Incidentally, if there really is a Higgs particle with a mass around 125 GeV/c-squared, technicolor is significantly disfavored.)
So you see that there is no contradiction between the two statements; they are simply operating at different levels. Question (1) is about what the Higgs field does, which we know; but knowing the answer does not answer question (2). Question (2) is about the very nature of the Higgs field, which we do NOT know, and that's why we have a Large Hadron Collider, to help us answer this question. Similarly, for superconductors, the answer to question (1) was known long before the answer to question (2).

36. I am a bit confused: I find your distinctions very insightful and helpful, but my understanding is not satisfied.

1. What is energy, and what is matter? I think you should offer positive definitions, so one can refute them; otherwise they are too vague.

2. Nothing wrong with vagueness, though. Words, as said, are problematic; but when words have an intrinsic failure, I tend to attribute it to the failure of our 'way of observing the world as humans', i.e. when our senses cannot grab the world. Ex: time in SR as a 4th dimension (or 11 dimensions in string theory, or particle-wave duality) – the 4th dimension is nothing but a poetic metaphor of the universe that helps me understand the meaning of it, knowing that I am condemned to observe my surroundings in 3 dimensions only and cannot really grab this dimension. Although in physics these concepts make perfect sense, they are limited to mathematical language and cannot be translated into a human one. YOU are doing a precious job in helping me and others to have a better understanding, but if words fail, it is because they just do not match our human perception. BTW, no problem for me, as after Copernicus I lost hope…

3. A characteristic of an object is also an object, UNLESS this characteristic is a non-physical attribute. Ex: a wooden table vs. a nice table. As I assume you referred only to physical attributes, I cannot see how they are not part of the object itself.

4. "…I have height and weight; that does not mean I am height and weight" – this is at best a problematic example. What the I is is a long philosophical question, and unless you are a pure materialist, the I to which you refer is not a physical object and therefore cannot be defined by these adjectives (if you are a materialist, you should define what that 'I' is).

37. You say that a photon's energy is equal to its motion; every calculation I have seen says its energy depends on its frequency, not its motion. You also say that a photon is not pure energy, but that it is a particle made up of stuff. Since a resting photon has never been seen, are you sure that a photon is not a wave/particle potential of pure energy? It seems to me that the reason a photon can travel at c is that it is massless; hence it experiences no "drag", per se, as it travels. If you are saying that a photon is not energy but has energy, then it cannot be massless. It seems to me that, in a photon's wave/particle duality, as a particle it has mass and interacts, while as a wave it is pure energy and has only momentum. A photon as pure energy explains how it can be massless and yet still carry information as a wave.

38. Okay, I checked out the links you provided; very good information, and any knowledge on this subject I welcome into my deposit of knowledge. I understand your phrase "motion energy" in the context it was used now.
As far as mass experiencing drag per se: I was using the word in the context of a photon moving in space at "c"; having no mass, it experiences no loss of motion. Maybe "drag" is not the best word to use; maybe friction, or another word, is more appropriate. As far as particle/wave duality: my understanding of a photon is that it can be either a wave or a particle, depending upon observation and/or interaction, as illustrated in the double slit experiment. As far as "mass": I see this the same as the emergence of consciousness. For example, one neuron cannot create consciousness, just as one particle cannot create significant mass; it is the number of neurons and the complexity of the neural networks that create consciousness. I see object mass in the same way: once a grouping of particles comes together into a certain "level" of complexity, then "mass" follows in the same way that consciousness follows. That is how I see the "Higgs" field: this field is the result of this complexity; the more complex the structure, the more mass it has, and the more it is affected by gravity. Hence the heavier elements are more dense, thereby having more mass (gold or lead being good examples), but there are of course many exceptions. And as far as a photon traveling at "c" as a wave of energy, I use this observation as my basis, as one example. This link shows a cluster of stars 1 billion light years away, which means that each photon from these stars had to travel for 1 billion years in order to reach our solar system. (More amazing to me is the fact that some of those stars probably do not even exist anymore, yet we are still able to see them as they existed 1 billion years ago.) For the photons to travel over space and time for 1 billion years: it seems to me the probability that a particle could travel these distances and time frames at "c" for 1 billion years and not decay is remote. I see a photon as being a "timeless" carrier of information, and I do not see how a particle could travel at "c" for 1 billion years and not decay. It seems more probable that it would have to be a wave of energy to accomplish this amazing feat. Of course, since no human being has ever "observed" a photon traveling at "c", or observed one at "rest" (i.e., snapped a photo of one in either state), I think it is too early to come to any conclusions as to exactly how a photon, for example, can travel through space and time for 1 billion years, all the while maintaining "c", and not decay. I appreciate your quick response to my inquiry, and realize that many of these answers are not going to be available until the technology to settle some of these big issues comes into play. But I am always striving to improve my outlook and knowledge on these matters, and value any and all inputs I can gather. Thank you.

39. As a side note: the total energy contained in an object is identified with its mass, and energy cannot be created or destroyed. When matter (ordinary material particles) is changed into energy (such as energy of motion, or radiation), the mass of the system does not change through the transformation process. This does allow for a photon to change from a wave of energy into a particle, and so on.
Maybe this is how it travels at "c" and never decays: as a wave of energy it is basically eternal, and therefore could travel at "c" for infinity, or for 1 billion years for example; and then, as interaction or observation dictates, it changes into a particle and interacts as dictated by the contact. Your thoughts on this?

40. Another side note (I should have put this all in one message, I apologize): the initial energy for a photon comes from its source (for example, a star). Energy may be stored in systems without being present as matter, or as kinetic or electromagnetic energy. Stored energy is created whenever a particle has been moved through a field it interacts with (requiring a force to do so), but the energy to accomplish this is stored as a new position of the particles in the field, a configuration that must be "held" or fixed by a different type of force (otherwise, the new configuration would resolve itself by the field pushing or pulling the particle back toward its previous position). This type of energy, "stored" by force fields and particles that have been forced into a new physical configuration in the field by having work done on them by another system, is referred to as potential energy. Any form of energy may be transformed into another form. For example, all types of potential energy are converted into kinetic energy when the objects are given freedom to move to a different position. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured. It seems to me that as a wave of (electromagnetic) energy a photon travels at "c", then changes to a particle upon observation or interaction. Everything I read leads to this conclusion.

41. Sorry, I just saw this quote in one of the links you gave me. Your quotes: "light waves (and the waves of any relativistic field satisfying the relativistic Class 0 equation) move at the speed c"; "the energy stored in the wave is E = (n+1/2) h ν, where h is Planck's constant, which always appears when quantum mechanics is important. In other words, the energy associated with each quantum of oscillation depends only on the frequency of oscillation of the wave, and equals E = h ν (for each additional quantum of oscillation). This relation was first suggested, for light waves specifically, by Einstein, in 1905, in his proposed explanation of the photo-electric effect." Please explain how this does not agree with my statements.

42. (Date: 05 November 2012, 07:29 AM ET) Now, for the first time, a new type of experiment has shown light behaving like both a particle and a wave simultaneously, providing a new dimension to the quandary that could help reveal the true nature of light. Depending on which type of experiment is used, light, or any other type of particle, will behave like a particle or like a wave. So far, both aspects of light's nature haven't been observed at the same time. But still, scientists have wondered: does light switch from being a particle to being a wave depending on the circumstance? Or is light always both a particle and a wave simultaneously?

43. You said in this article: "Early in the universe, when the temperature was trillions of degrees and even hotter, the electron was what cosmologists consider radiation.
Today, with the universe much cooler, the electron is in the category of matter." Why does "high temperature" make the Higgs (and gluon?) field be "zero" on average early in the universe? Why/how does "temperature" affect these fields? Why does "high temperature" prevent these forces (gluon, Higgs, etc.) from acting on particles? And why do these different forces start acting on particles at "different temperatures"?

44. Thank you for your excellent article and website. I would like to ask: how are 'mass energy' and 'motion energy' related? Can motion energy ever be converted into mass energy, and/or vice versa? You seem to be saying in your replies to other posts that the 'E' in Einstein's equation E=mc² doesn't include motion energy. If not, then why does science use the same word 'energy' for both concepts, when doing so leads to confusion?

45. Everything you said about matter, stuff, and energy, I agree with. A learner has to create separation between the two concepts in order to learn them. It's just like learning a language: a child will break up a long word into syllables and memorize its pronunciation and meaning that way. And since it's almost impossible to define either one of the terms (matter or energy), why worry about it? Or a layperson might reach the wrong conclusion that physicists in general are afraid of the word 'spirit' (pure energy). I meant spirits, like God for example. Physicists may cringe, but the truth is this 'stuff' falls into a category all its own, and being ethereal it cannot be detected, tested, or experimented on in the LHC. There is also a big likelihood that this 'stuff' disapproves of humans splitting the atom and destroying something he created for the benefit of all humans: the blue planet, third rock from the Sun.

46. Most people do not have a clue as to the things they speak of, nor can they prove them. But they speak nonetheless; it is all theory, opinion, and best guesses until it is proven to be true. And anyone that claims to understand quantum mechanics and the sub-atomic plane is the same person that claims to be wise; anyone can claim to be anything, but it is one thing to "understand" the "truth", and another thing to guess about it. It is fun to read other people's opinions, though it would be better to read the truths, but those are yet hidden. Even though much is understood, he that speaks as if understanding is a given speaks in circles, because he does not understand the truth. It is OK, professor; no one understands it completely, or has the truth. But your opinions do have merit.

• Are you trying to reassure me? 🙂 No reassurance needed; my knowledge is used to make cell phones, rocket ships and lasers. What are your alternatives used for?

• Cell phones, rocket ships, and lasers are great technologies for the advancement of understanding. But they are each only different manipulations of the same energy that is present in all. A photon of light travels at the speed of light throughout the universe, carrying the information of its source for eternity, unless interaction with another force disrupts or absorbs its energy, such as in vision: if not for this information, your eye could not interpret the origins of said photon to turn that information into a "picture" for your neurons to process.
In order to travel at the speed of light, a photon needs to be a particle, a wave, or both at the same time, and is basically massless, yet it still has the ability to transfer information, as DARPA has shown by transferring information on photons. As soon as a photon "interacts" with the human eye, for example, it has to assume the form of a particle in order for the information to be "read" and transported to neurons for interpretation; this biological process does not occur in non-living matter. Also, in order to travel at the speed of light, a photon needs to maintain the massless form of energy, as no particle with mass may reach this threshold; as I am sure you understand completely, in experiments performed using accelerated particles, none of them have achieved 100% of the speed of a photon, and the reason is established. It has been observed in many recent experiments that light (a photon) can be any or all forms at once, or that a photon can change forms as needed in its travels. Decay is the key, for photons have been collected by telescopes after traveling billions of light years across space and time; there are no particles that can achieve this velocity unless they change into pure energy to achieve this speed. Experiments at CERN have even held this constant to be true. So your saying that light is not pure energy is disputed by many recent experiments, and by many old ones…

• Wow! Yeah, why listen at all? Especially when you have everything figured out in your own head and can just vomit streams of metaphysics all over the comment section. No wonder the good professor stopped responding to you.

47. I know you use "mass" to mean rest mass, but aren't there some cases where we really need to talk about the mass corresponding to the total energy of a system? Like the fact that a system as a whole is literally heavier (has greater mass, as one can measure by the inertia or gravitation of the system as a whole) when ΔE of *any* type is added to it (without allowing any energy to escape, of course): let in some light, add heat, set a top spinning that was at rest before, compress a spring, pull positively and negatively charged objects further away from one another within the system, etc. I'm assuming that we're examining this system in a single inertial frame in which the system as a whole is at rest, with no external forces or fields, so we don't have to think about the kinetic or potential energy of the system as a body itself within its environment. The extra mass we'll measure will be given, of course, by ΔE/c², but here the E and m both refer to the total energy of the system.

48. Thank you for a solid blog on an important subject. I'm 65 and have a question based on the above-mentioned "pre-1973" science era I grew up in. In today's physics, is the fundamental essence of the universe still considered to be Space, Time, and Matter, or, as I gather from the above, something more akin to one of the following:
A. Space, Time, Energy
B. Space-Time, Matter-Energy
C. Energy (when stretched out = Space, when condensed = Matter, and otherwise simply Energy)
D. None of the above

49. Excellent article, thank you Matt! I think the abuse of terminology is common to layman descriptions of all technical fields, and is not a prerogative of physics.
Part of it comes from public relations (try explaining your discovery to a reporter, and, what is far more difficult, making sure it is published without inaccuracies!), and part from linguistic inertia. As the boldest example of the latter, we should recall the fact that in numerous languages (less so in English, though) the word "ether" is used to indicate the broadcast of a radio or TV program. "In ether" is used in these languages as a synonym for "on air". This archaic heritage of the long-obsolete luminiferous ether theory still remains in our terminology, and even modern network technology that has nothing to do with ether bears the name "Ethernet".

50. Hi Matt, below I quote two things that you have said about dark energy and ask for a bit more explanation. "Dark energy is a property that fields, or combinations of fields, can have." If there is one thing you've got into my head, it is the idea of "fields and ripples": a field can have a ripple, an object (stuff) like an electron or a photon. "Dark energy isn't an object or a set of objects." Do you say this because the effect in the field is more subtle, like the disturbance in space-time that gives us the gravitational field (stuff, but not defined enough to be called an object)? And if gravity can be visualised as a heavy ball on a trampoline, could dark energy be visualised as something bending its field(s) to make a hill, as opposed to gravity's valley?

51. Matt, you say "it happens that every known field has a known particle, except possibly the Higgs field (whose particle is not yet certain to exist, …". I had always thought that evidence for the existence of the Higgs field and that of the Higgs particle were more or less on an equal footing; in fact, that acceptance of the Higgs field's existence was predicated on detection of the Higgs particle. But your statement above suggests to me that even if a Higgs particle were not to be found (or somehow proved not to exist), the prevailing point of view would be that the Higgs field exists nonetheless, only having no minimum eigenstate. Is this slight (and very respectful) criticism anywhere close to accurate? I feel blessed to have stumbled upon your site, and will be an ardent reader of your posts/articles. You have an extraordinary talent for expressing complex ideas in a clear, yet scientifically responsible, manner. This is very, very rare. – Doc

52. A photon is an eternal carrier of information that can transfer and receive energy through contact and interaction with various fields of mass. This ability can best be represented as you gaze upon a full moon at night: you see the moonlight shining, and you see the moon, but in reality you are seeing photons that came from the Sun and interacted with the mass field of the moon, which imprinted the information of this mass field upon the photons; then, as they enter your optic nerves, the information your neurons receive is that of the physical image of the moon, shining from photons originating inside the Sun. The eternal energy and motion of these photons is lost as each transfers its energy into the organic neural/electrical network: it becomes a particle, transfers its energy (information), and ceases to exist in its previous form.
Photons have been captured by telescopes after traveling billions of light years across space and time, yet they retain their speed, only slightly influenced by the gravitational fields of masses, and they retain the information of their source of origin; they can attain constant updates to this information as they interact with various fields of mass, sometimes exchanging their energy, and sometimes transferring it to another medium. This vast universal communication system is recently being discovered, as quantum mechanics and physics relate it to the very method used by the brain to communicate along vast and complicated neural networks; so, as above, then so below. Energy is eternal; it cannot be created nor destroyed, only transferred between fields of mass….

53. Also, it is my opinion that dark energy is a result of entropy, caused by the transfer of energy not being 100% efficient during the energy transfer process. And as the universe has expanded and aged, the entropy has increased, and that has caused an increase in dark energy that is in a chaotic state. This "dark energy" then behaves as regular energy, but instead creates dark matter as a result of its chaotic state. Energy in is greater than energy lost in the most efficient energy systems, and this loss/entropy is the reason for the related increase and amount of dark energy in the universe; the amount shall ever increase in relation to entropy due to inefficiency. Just my opinion, though…

54. Well Sir, after reading all this "stuff" (of course not that stuff which you have been writing about), I've got a feeling that some of my doubts can be cleared by this well-learned friend of mine, namely Mr. Matt. The following are my doubts, Sir; please address them: 1. Is light a form of energy, as everyone currently thinks? 2. If that happens to be right, is the energy incident on the Earth's surface from the Sun in the form of sunlight? 3. If it is only partially right, what are the other forms in which energy is conveyed to the Earth from the Sun? Based on the answers to these queries there may be further observations you have to answer, Sir. Please reply.

55. Sorry, but I haven't read the full website… Can you succinctly put everything in a simple, short phrase? Would it be fair to say that everything is energy, and it manifests in only two ways: matter and force fields?

56. Hello Professor Strassler. In your post you say that energy is something that objects have, that energy is not an object itself. But I have heard string theorists like Brian Greene say on science TV shows that if string theory is true, then everything is made of strings. And what are these strings? He says they are strings of energy; these strings are made of energy. But if strings can be made of energy, wouldn't that mean energy is stuff? And wouldn't that mean matter is energy at the fundamental level? To hear Brian Greene say this, watch these short videos:

57. If energy is neither created nor destroyed, then it is neither present nor absent in a particular space and time. Energy and matter are one and the same. There is nothing called dark matter and dark energy. All divisions are like dreams. For the mind, time is understanding through thought/memory. The mind cannot differentiate between dream and reality because it functions by continuous reciprocal causation. One cannot prove in unreal (dream-like) time and space.

58.
Not sure if this will make sense to someone with knowledge, but when the big bang happened you had pure energy and massive temperatures; as the universe expanded, it cooled and turned into matter. Could dark energy be the remaining energy and temperature in the universe wanting to turn into matter? Also, when the universe started it was extremely hot, and it has cooled so much; why did it stop cooling? Why did it stop just above absolute zero in deep space? These are probably stupid questions; I just thought I'd try to better understand something that interests me.

59. Thank you, that was very informative and has given me a lot to think about. I just don't understand how light speed, or faster-than-light speed, was achieved in the first few seconds of the start, in whatever state of energy or near-mass everything was :_)

60. Is it possible for energy to act within and upon itself? Is it at least possible that consciousness is a form of energy? Or are there reasons it cannot be?

61. Definition of "matter" by Google: "physical substance in general, (in physics) that which occupies space and possesses rest mass, especially as distinct from energy." It all depends on your perspective. I meant matter in general, not a specific bit. I also meant that when matter takes a form, that form IS a result of energy. So, for example, the form of a balloon, and the material components of the balloon, are shaped by the forces acting on it and them. "…especially as distinct from energy." Matter IS distinct from energy, but is not separate from energy, since no matter can exist without energy. Just to emphasize the ambiguity of language, your phrase "matter is A form that energy CAN take" implies that energy can take other forms. Of course, you can say that energy takes different forms, such as EM and gravitational, taking "form" in a different sense. When you experience matter, EM energy, and gravity, are you experiencing energy? What about mass? When you perceive mass, are you perceiving energy? Aren't matter and mass simply phantasms composed of energy? In fact, it may be that energy is all there is. : )

62. Not only "may be that energy is all there is": energy IS all there is, in its different forms (force fields and "matter").

63. And now I can say: consciousness is aware-ized energy, and all energy is aware-ized. Of course, only subjective experience can tell. I think Matt is missing out on that part. Science has bound up the minds of even its most original thinkers, like Matt, for they dare not stray from certain scientific principles. When his private experience of himself does not correlate with what he is told by science, then he may become familiarized with the roots of ego consciousness. This will be done under the direction of an enlightened and expanding egotistical awareness. Then he could use his talents to organize the hitherto neglected knowledge. Not a judgement, just an observation.

64. Incredible story and string – and I always thought it was only we economists who were all screwed up with 'on the one hand, this' but 'on the other hand, that', 'but then again maybe it's…' and so on and on. How refreshing.

65. Is consciousness matter or energy, or some combination? Well, actually, you don't know. So I will tell you. Consciousness is aware-ized energy, and all energy is aware-ized, whether you believe me or not. : ) That statement is scientific heresy. Subjectivity cannot be demonstrated within the context of current science. How do I know? I know by experiencing my own consciousness in many ways.
Learn by doing.

67. Do fields extend endlessly across or through the known universe? Can any distinction be made between fields and their particles, on the one hand, and the phenomenal world we actually experience by way of our sense modalities, on the other? When you suggest that "objects" have energy, are you arguing for "entities" that have an independent, autonomous existence and an intrinsic identity? I haven't encountered such as yet, at least not in the phenomenal sense, wherein a thing exists ONLY in relation to other things. – Cal

68. Recalling my high school physics, where the teacher would tell us that the electron quantum-jumps between orbits: where does it go?

69. Two words regarding the matter/energy equivalence debate: "conserved quantities"… Energy and matter are interchangeable. There is no dichotomy. Matter is an observed point particle at a specific place and time within a collapsed wave function, just an observed probability. We know that where there is energy we can observe matter popping in and out of existence (for want of a better word) ad nauseam. In my mind the day will come when we realise that scale symmetry will clarify unequivocally a fundamental view of "what stuff is". We will drop the idea of supersymmetry and accept that matter having physical dimensions (e.g. the current zoo of point particles we observe) is just a misleading by-product of fundamental symmetry breaking. Last month, at the International Conference on High-Energy Physics in Valencia, Spain, researchers analysing the largest data set yet from the LHC said, and I quote, "we have found no evidence of supersymmetric particles". Yes, you can always say that at higher energies we can expect to see a super-massive primordial particle beyond the combined 14 TeV the LHC can pump out; now that we have 5-sigma results on the Higgs, what's next? I will retract this when I see a 5-sigma result on an electron/selectron pair stopping the Higgs boson mass inflating exponentially being observed… To quote The Matrix, "there is no spoon"; but again, in my mind there is (at the scales we observe, from the observable universe down to the Planck length) a bloody good 4D observable representation of that spoon, gaining mass from our old friend Mr H Boson… or, if you're an M-theorist, an 11/6D one bent, Uri Geller-like, through an unobservable Calabi-Yau manifold ;o) Or, to put it more accurately, the energy that constitutes it appears on our narrow scale of observation to be a spoon, because the interaction it has with the Higgs field ascribes mass to it on our scale… please understand, my tongue is placed firmly in my cheek here… ;o) Thanks for the article; I enjoyed it and the comments below it very much.

70. My site has a lot of exclusive content I've either written myself or had written for me, but it seems to be appearing elsewhere without my permission. Do you know any ways to help protect content from being stolen? I'd genuinely appreciate it.

71. I knew someone who had exactly this problem; in fact, I myself discovered the problem while searching the net for a particular literary piece he had written. I put several keywords into a search engine, and lo and behold, I was directed to another site. Worse still, the owner of this other site was passing off his intellectual property as her own. He had a lawyer draw up a cease-and-desist order (really just a letter), and the plagiarized material was gone within the next 48 hours.
I'm not sure an actual lawyer is necessary, since you could probably draw up a convincing letter yourself and self-promote to "Esquire" 😉

73. Polarity is required for thinking. So, for example, if you don't know what up is, you don't know what down is. If there is no up, there is no down. Energy has been a difficult concept to define because no one knows its polar opposite. Feynman said that science has no idea what energy is, and that makes sense, because there is no polar opposite to energy. I suggest science and philosophy will have the same problem with objectivity and subjectivity. Like energy, science, by its own requirements, cannot define objectivity, because there is no polar opposite. Subjectivity, and hence consciousness, can only be defined by science in objective terms. According to current science, as I understand it, subjectivity is only a kind of objectivity we do not understand yet. Hence, since there is no real polar opposite to objectivity, objectivity cannot be defined, just like "consciousness", just like "energy".

74. Sorry everyone, but you are overcomplicating! Consider: particles are constricted energy, and the energy we see is variation in a dark-matter energy field. But what is energy?

75. Hello, I agree with the cringing feeling you mentioned: I wish there were a cleanly defined ontology for physics too, as there are many for other scientific fields. I also agree that the dichotomy matter/energy is inappropriate (a correct one would rather be mass/energy, as they can be transformed into each other, right?), as matter is "stuff" while mass and energy are properties of that stuff. But then another dichotomy would be stuff/void, or matter/spacetime, right?

• OK, no replies… Maybe ontologies are out of scope here! But that's strange, since both "here" (the web) and ontologies are by Tim Berners-Lee, a physicist… Anyway, to strictly speak your language and talk about things you know/care about: the below is what I wondered about when I studied (as a hobby) Susskind's "Special Relativity and Electrodynamics". So, just take SR's invariant (a metric!): dTau^2 = dt^2 - dx^2 - dy^2 - dz^2, then divide it by dTau^2, as Susskind does in one of those lectures. You get an equation for the components of the four-velocity, but what does it say? It says that the square of the time component of the 4-velocity, minus the sum of the squares of the spatial components of the 4-velocity, is always equal to one. Now, what does this mean/imply? Let's take "me", or any "material thing": I am always "at rest" in my own frame of reference, right? So the spatial components of my 4-velocity shall all be 0, right? That then means the time component shall be 1, right? I, in my own frame of reference, have "some kind of motion" in time; indeed I see it "passing by on my clock", it's "moving" with respect to me and vice versa: the time component of my 4-velocity is 1! To me, that is basically equivalent to saying "I 'keep existing' as time goes by". But there are not just things like "me", not just (massive) "particles", out there, right? There are also things for which "time does not pass by" and "all space is compressed in the direction of motion", so that they cannot have a frame of reference of their own… pretty weird, right?
If they could have a reference frame of their own, they would say about themselves that the time component of their 4-velocity is 0 and their space components squared sum up to -1, so if you take the motion along one single space component, their speed is i = sqrt(-1)… too weird, right? BTW, to me this is pretty much the same thing that happens in EM with the components of the 4-current: take the time component, and we call it charge density; take the space components, and we call them currents. Why do we do so? To me, it's us introducing this dichotomy; nature hasn't (this one).

76. Hello Dr. Strassler, I love your site and the scientific facts and knowledge you share. I would like you to please explain a phenomenon I have been interested in for many years: what causes matter to become matter? In psychological language (Jungian and others), it is the anima (the feminine aspect) that seduced the masculine aspect into becoming matter. Okay, I can understand this. But what is the equivalent scientific explanation, for lay people like me? Very complicated indeed. Would very much appreciate your reply.

77. I believe we are missing an equation to satisfy a problem. Matter-to-energy conversion must have something quantifiable to keep constant. Since energy can not be created nor destroyed, we are left with doing the math to convert both in a dance of harmony. Dark matter is the gorilla in the room. Unless we can move past the speed of light, we can never know.

78. Dr. Strassler: As a novice and a much older high school dropout, any of the sciences leaves me in shambles. However, I do ask many questions, which usually get me in trouble. So here goes. Might there be an antithesis of energy at the very core of this universe, growing exponentially as our material universe dies an ever-increasing, speedy death of attrition? While modern science tells me our universe has no center, I have my doubts. I am hoping this may turn into a good and constructive discussion. Thanks, LeftyR

79. Hi, to relieve your mind of the contradiction and ambiguity 🙂 I can tell you this: first of all, this dichotomy clearly originates from before the early twentieth century, when the dual nature of light, and with it that of all matter, was not yet apparent; I mean, from before the de Broglie equation. This dichotomy of matter and energy actually originates more with chemists than with physicists. In chemistry we use it primarily to define the traditional study of chemistry, as opposed to the study of physics. And we frequently say: chemistry is the study of matter (properties, changes, etc.), and physics is the study of everything else (energy). When we say matter, we mean atomic matter. This is not accurate, and modern chemistry and physics overlap quite a lot, but what we mean when we say this is:
properties of a substance -> properties of matter -> chemistry
a reaction between two substances -> changes of matter -> chemistry
energy changes during a reaction -> iffy, could be physics or chemistry, but chemists study it differently
intermolecular and intramolecular forces -> same as above
other forces -> physics
the electromagnetic spectrum -> physics
subatomic particles -> physics

80. Very helpful. I landed here with a question. Assuming the properties of only a gravitational interaction, why not just call it something like unbound gravity? I mean, we won't ever see anything else there.

81. Thank you. But I had a question. According to your article: what is energy in a field? Its amplitude or its wave velocity (I guess this would be c)?

82.
Let me begin by saying that every atom in the entire universe has one intrinsic characteristic, and that is magnetism. No matter how thinly we slice or dice it, the atom, even down to what we believe is its lowest common denominator, quarks, has one thing in common: spin, angular momentum, or magnetism. Is it possible that energy itself is derived from something below this level, or from the way atoms get bounced around?

83. As a novice, in trying to make sense of matter vs. energy, I have read several papers on the subject and concluded that the word "energy" is nothing more than a way of describing an action and reaction on mass/matter. An example is two iron balls, of 5 lb and 10 lb respectively, hung at equal heights in a vacuum. Dropped simultaneously, both will hit the ground at the same time, with one exception: the heavier ball will dig a deeper hole. And why? Because of the "ENERGY" generated by the weight of the mass.

84. Hi, I'm in my late 50's and have taken a fascination with science, and am thoroughly enjoying learning about it. I found your article certainly helpful, although much was over my head. I have saved it and will re-read it when I understand more. Thank you.

85. Matt, stumbled onto the post searching for publications on this topic. I find much of your presentation with regard to matter and energy agreeable. We have recently published a short article on this topic, and perhaps should have used the term "radiation", as you did, rather than "radiant energy" as we wrote. Don't hold your breath expecting solid evidence for "dark energy" or "dark matter".

86. Hello ALL, I bumped into this article and matter-energy blog just on this Thanksgiving Day. Happy Thanksgiving to all. I am searching for particles/energy that explain physical and spiritual dimensions. Is the Higgs field/boson the fundamental physical particle representing the Cosmos, while we are still after the Holy Grail, the Spiritual/Atman particle? I am convinced that the Higgs field explains the material construction.

87. My question is for Mr. Matt Strassler. You say there are fields without mass, but no mass without fields. What is the basic thing from which a field obtains mass, and also from which a field obtains energy? Ultimately we cannot ignore E=mc^2, which means mass and energy are convertible. Also, I could not understand the proper relation between mc^2 and kT. How are they related if, say, the velocity of the particle (or field?) is v? Is there a mathematical formula for this? I also have a question on the concept of a "point of singularity". All the fundamental particles which form into "matter" have some basic dimension, however small it may be, which is why they have mass (leaving aside the particles with zero mass, as matter cannot be formed from particles of zero mass alone). Therefore, even if we reduce particles to such a level, a physical space would be available for the particle. Therefore, I conclude that there may not be any point of singularity at all: it would have some minimum space, which is what we may assume as a "singularity", although it will not be the same as a mathematical singularity. So it may happen that our very assumption that the big bang happened from a singularity is not valid. However, supposing such a point of singularity even to exist, how does one visualize the existence of energy in a singularity?

88. As a developer, I'd say that:
– mass is an "excess" of position information (stuff moving in a stationary way)
– energy is an "excess" of speed/momentum information (stuff moving "freely")

89.
You might find this interesting to think about: "the universe being made by energy" instead of "being made of energy". This allows for the creation of non-physical properties such as space-time. Try to start by thinking of energy as a wave, and a particle as the work being done, where the work being done is turning nothing into something.

90. Try thinking of energy in an evolutionary way, maybe something like this: energy starts with zero complexity, then increases in complexity until it finds symmetry with entropy, thus returning energy to zero complexity. Then you can map complexity on an evolutionary tree. It soon becomes clear that gravity is the least complex form of all energy, and maybe the source of all energy, from a zero-point singularity formed by the single characteristic of nothing. "You cannot separate objects without space-time." A zero-point singularity is inevitable in a zero dimension.

91. Start with the least complex thing, space-time. How might energy create it? It should answer, in very simple terms, why the speed of light is constant whether you are traveling toward or away from the source. Then understand the function of spin and you have it. So simple.

92. One last question. Is it time or distance that dilates at velocity? All quantum weirdness should vanish if you get all the above right. The end result could be not a big bang but the continuous bang of a "white hole", and not a big crunch but the continuous crunch of entering a zero dimension, a "black hole".

93. How does one explain to children the difference between solids, liquids, kinetic energy, potential energy… but, oh, by the way, everything we know about matter vs. energy is bogus? How should one approach educating children about the distinction between matter and energy, or the types of energy and matter, if this is an incorrect dichotomy? For example, how would I describe the basic state of a car vs. the sun, or electricity, or sound, or motion, etc., now that doing so will only lead ultimately to misleading information – and that our basic knowledge regarding energy and matter is a false paradigm? Thanks

94. Energy is non-matter: matter possesses energy, and energy does not possess matter; they are two different sides of a single coin. And do not get confused about photon particles: photon particles are stuff of no mass and infinite energy… their mass converts into energy… Thanks… by Abhinav… a high school student… India, U.P., Alld., Ghoorpur

96. Can one describe energy as a measure of the matter which acts on a system to put it back into its balanced state… and if so, that without matter, energy would be undetectable?

97. "All particles are ripples in fields and have energy…" As it stands this statement is blather. A field of what? A ripple caused by what? One can substitute 'strings' for 'fields' in this statement and many a Ph.D. physicist might agree. Yet both versions are without any proof; see Lee Smolin, 'The Trouble with Physics'.

98. Energy, as in photons, always moves at light speed. Matter cannot move at light speed. How can they be two different sides of the same coin? Sure, matter has energy inherent in it, as in E=mc², but I think if you release all that energy, the matter is still there. Dead matter, so to speak; atomic dust, if you will, with no motion and no charge, so probably impossible to detect. Photons, if anything, are a wave. Their always moving at light speed suggests they are not a sphere, since no part is going to move faster or slower than light speed, whatever happens to it. It is not going to rotate.
The photon has two speed components. It moves at light speed in the direction of travel, and moves from side to side, giving us wavelength and frequency, which seems to work out at one third of light speed (a one-metre wavelength gives a frequency of 10^8 Hz, so one hundred million metres per second). The more energy a photon has, the smaller its wavelength, so the side-to-side movement in turn becomes faster. The constant here gives us that effect: what it adds to the speed, the frequency, it must take from the width, the wavelength. I know what I mean, but I am not good at explaining this.

• Isn't frequency simply the duration of the wave? And don't photons have different frequencies? As in visible light versus microwave?

• Frequency is the number of waves that pass through a fixed point over a given unit of time (so not a time measurement per se). And yes, photons have different frequencies (see the short sketch after these comments).

Matter has the following fundamental properties:

101. I came across this discussion because I was looking up the word 'thing'. To me a thing is something that I can either see, or hear, or touch/feel, or taste, or smell. So, aren't matter and energy both things?
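To put the frequency reply above on a concrete footing, here is a minimal sketch in Python of the two textbook relations at work in that exchange: c = wavelength × frequency, and E = hν for a single photon. The constants are the standard values; the sample wavelengths are my own illustrative choices, not from the discussion. Note, in passing, that a one-metre wavelength corresponds to 3×10^8 Hz, not 10^8 Hz as suggested a few comments up.

```python
# Sketch: wavelength -> frequency (c = lambda * f) and photon energy (E = h * f).
# Constants are standard values; the example wavelengths are illustrative only.

C = 3.0e8        # speed of light in vacuum, m/s
H = 6.626e-34    # Planck's constant, J*s

def frequency(wavelength_m):
    """Number of wave crests passing a fixed point per second."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * frequency(wavelength_m)

for name, lam in [("radio, 1 m", 1.0),
                  ("microwave, 1 cm", 1e-2),
                  ("green light, 530 nm", 530e-9)]:
    print(f"{name}: f = {frequency(lam):.2e} Hz, E = {photon_energy(lam):.2e} J")
```

Shorter wavelength means higher frequency and a more energetic photon; nothing in these two relations requires the photon to move at anything other than c.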
What is Quantum Entanglement? Part 1: Waves and particles If you follow science, or science fiction, to any degree, great or small, you’ve probably heard the term “quantum entanglement” before.  You may also have heard it referred to as “spooky action at a distance,” and understand that it somehow involves a weird connection between separated quantum particles that can “communicate,” in a sense, over long distances instantaneously.  You may have read that quantum entanglement is a key aspect in proposed technologies that could transform society, namely quantum cryptography and quantum computing. But it is difficult for a non-physicist to learn more about quantum entanglement than this, because even understanding it in a non-technical sense requires a reasonably thorough knowledge of how quantum mechanics works. In writing my recently-published textbook on Singular Optics, however, I had to write a summary of the relevant physics for a chapter on the quantum aspects of optical vortices. I realized that, with some modification, this summary could serve as an outline for a series of non-technical blog posts on the subject; so here we are! It will take a bit of work to really get at the heart of the problem; in this first post, I attempt to outline the early history of quantum physics, which will be necessary to understand what quantum entanglement is, why it is important, and why it has caused so much mischief for nearly 100 years! Small disclaimer: though I am a physicist, I am not an expert on the weirder aspects of quantum physics, which have many pitfalls in understanding for the unwary! There is the possibility that I may flub some of the subtle parts of the explanation. This post is, in fact, an exercise for me to test my understanding and ability to explain things. I will revise anything that I find is horribly wrong. Near the end of the 19th century, there was a somewhat broad perception that the science of physics was complete; that is, there were no more important discoveries to be made.  This is encapsulated perfectly in an 1894 statement by Albert Michelson, “… it seems probable that most of the grand underlying principles have been firmly established … An eminent physicist remarked that the future truths of physical science are to be looked for in the sixth place of decimals.” By 1900, the universe seemed to be well-described as a duality. Matter consisted of discrete particles (atoms), whose motion could be described by Newton’s laws of motion and law of gravitation, and light consisted of waves, whose evolution could be described by Maxwell’s equations for electromagnetism.  In short: matter was made of particles, light was made of waves, and that covered everything that we observed.  We will, in shorthand, call this “classical physics” going forward. But there were still a number of mysteries that were perplexing and unsolved at the time.  One mystery was the nature of atoms: atoms clearly had some sort of structure, because they absorbed and emitted light at isolated frequencies (colors), but what was that structure? There was much speculation in the early years of the 20th century related to this. Fraunhofer’s 1814 drawing of the spectrum of sunlight. The dark lines in the lower color image aren’t mistakes; they’re discrete colors of light that are absorbed by atoms at the sun’s surface. Another unsolved mystery was the origin of the phenomenon known as the photoelectric effect. 
In short: when light shines onto a metal surface under the right conditions, it can kick off electrons, as illustrated crudely below. However, the photoelectric effect didn't seem to work as classical physics predicted it would. The energy of electrons being kicked off the metal didn't increase with the brightness of the light beam, as one would expect from the classical theory; it increased with the frequency of the light. If the light was below a certain frequency, no electrons at all would be kicked off. The brightness of the light beam only increased the number of electrons ejected.

The puzzle was solved by none other than Albert Einstein. In a 1905 paper, he argued that the photoelectric effect could be explained if light not only behaved as a wave but also as a stream of particles, later dubbed photons, each of which has an energy proportional to frequency. Higher-frequency photons therefore transfer more energy to the ejected electrons. Also, a brighter light beam has more photons in it, resulting in more electrons getting ejected. This was the first illustration of the concept of wave-particle duality: the idea that light has a dual nature as a wave and a stream of particles. Depending on the circumstances, sometimes the wave properties are dominant, sometimes the particle properties are; sometimes, both must be taken into account.

Einstein's argument was a profound one, and answered other questions that had been troubling physicists for a number of years. For instance, the shape of the upper curve in Fraunhofer's spectrum above, which shows the relative brightness of the different colors of sunlight, is known as a blackbody spectrum. It can be shown that the shape of the curve arises from the particle nature of light. Einstein won the 1921 Nobel Prize in Physics for his work on the photoelectric effect, which provided clear evidence that there was still more to understand about the fundamentals of physics.

So light, which was long thought to only be a wave, turns out to also be a particle! One might naturally wonder if the reverse is true: might matter, long thought to consist of particles, also have wave properties? This was the idea that occurred to French physicist and PhD candidate Louis de Broglie in the early 1920s, an idea he would return to in his 1929 Nobel Lecture, quoted further below. Louis de Broglie put forth this hypothesis in his 1924 PhD dissertation, and though his work was considered radical at the time, the wave nature of electrons was demonstrated in 1927 in what is now known as the Davisson-Germer experiment.

The idea that electrons have wave properties resolved other physics mysteries. Remember the question about the structure of the atom? The first major piece of the puzzle to be found was the experimental discovery of the atomic nucleus in 1910 by Ernest Rutherford and his colleagues. It naturally followed that electrons must orbit the atomic nucleus, much like planets orbit the sun, but this still did not explain why atoms would only absorb and emit light at distinct frequencies. In 1913, Danish physicist Niels Bohr solved the problem by introducing new physics. In the Bohr model of the atom, electrons are only allowed to orbit the nucleus with discrete values of orbital angular momentum, and can only release or absorb light by "jumping" between these discrete orbits. The orbits are labeled by an integer index n, as illustrated below.
The Bohr model. A photon is emitted when an electron "jumps" from one orbit to another.

Bohr's model reproduced exactly the emission and absorption spectrum of hydrogen, and was viewed as a major step in understanding atomic structure. But why would electrons only orbit with those discrete values of angular momentum? This was a question that the physics of the time could not answer, and was in essence an unexplained assumption in Bohr's model. It so happened that de Broglie's hypothesis, that electrons have wave properties, provided the explanation! De Broglie realized that, if the electron acted like a wave, then those waves could only "fit" around the nucleus when an integer number of wavelengths fit in an orbit. A rough illustration of this is below.

Visualization of de Broglie waves around an atom. Each more distant electron orbit has one extra "hump" in the electron wave.

Louis de Broglie was actually inspired by a very mundane example: a vibrating string! As he recalled in his Nobel lecture: "On the other hand the determination of the stable motions of the electrons in the atom involves whole numbers, and so far the only phenomena in which whole numbers were involved in physics were those of interference and of eigenvibrations. That suggested the idea to me that electrons themselves could not be represented as simple corpuscles either, but that a periodicity had also to be assigned to them too."

This is an experiment you can try at home with a string or a phone cord! Though you can shake a string at any frequency you want, there are only certain special isolated frequencies that will feel natural, known as resonance frequencies.

First few resonance modes of a string with fixed ends. Each mode has one more "hump" than the previous one.

So, by 1924, physicists were aware that both matter and light possess a dual nature as waves and particles. However, the situation between matter and light was not entirely equal. Since James Clerk Maxwell's work in the 1860s, physicists had a set of equations, known as Maxwell's equations, that could be used to describe how a light wave evolves in space and time. But nobody had yet derived an equation or set of equations that could describe how the wave associated with matter evolves. This was a challenge undertaken by Austrian physicist Erwin Schrödinger in 1925, soon after de Broglie had suggested matter has wave properties. Within a year, using very clever arguments and intuition, Schrödinger derived an equation that accurately modeled the wave properties of electrons, now known as the Schrödinger equation. With the Schrödinger equation, it became possible to quantitatively model the behavior of electrons in an atom and accurately predict how they would absorb and emit light.

So, by 1927, the new quantum theory of light and matter was in reasonably good shape. There was experimental evidence of both the particle and wave nature of both light and matter, and the particle nature of light and the wave nature of matter had been experimentally confirmed. This is summarized in the table below, for convenience.

            Particle nature                   Wave nature
Light:      photons (Einstein, 1905)          Maxwell's equations (1860s)
Matter:     atoms, Newton's laws              de Broglie waves (1924), but what is "waving"?

But there was one big puzzle on the matter side, as illustrated in the lower right corner: what, exactly, is "waving"? In other words, what does the wave part of a matter wave represent? In water waves, it is the water "waving" up and down, and conveying energy through this motion. In sound waves, it is the molecules of the air "waving" forward and back.
In light waves, it is the electric and magnetic fields that are "waving." But there was no obvious way to interpret what was doing the waving in a matter wave. It turns out that the initial answer to the question, which would be formulated right around the years 1926-1927, would lead to some very strange philosophical implications of the new quantum theory. This will be discussed in part 2 of this series of posts!

19 Responses to What is Quantum Entanglement? Part 1: Waves and particles

1. Richard Cunliff says: I need part 2.

2. Doug Cook says: I am going to share this with my 10-year-old grandson… he will get it. Been looking for "Quantum Physics for Kids" or QP for Dummies. Doug Cook

3. Jay Gillette says: Cliffhanger!

4. jaime garavito says: Good job. I hope the second part will come soon.

5. This post is awesome in every single aspect. Can't wait for part 2!

6. Cy Coote says: Brilliant! So accessible, thank you.

7. Rob Giunta says: Good so far. When is the next post?

8. Rob Giunta says: When is the next post?

9. Raman Kohli says: Great post! Looking forward to Part 2 🙂

10. Walter says: Nice. A particle on Tuesday and a wave on Thursday.

11. Cam Hough says: Awesome post! In the fourth figure (de Broglie waves), you say higher modes correspond to larger distances from the atom. What is preventing higher modes from existing on every orbital level? For example, couldn't you "fit" three, four, five, etc. humps on the n=2 circle?
• The answer, I believe, is a mix of classical and quantum. A larger number of humps = a shorter wavelength = more momentum. A higher-momentum particle in the same orbit, however, will not be in a stable orbit. (A short derivation of this wavelength-momentum relation appears after these comments.)

12. Christine says: And we breathe it.

13. S.danny says: Very good explanations.

14. NoOne says: Reblogged this on Transcendence and commented: For the QM freaks, here's something awesome…

15. Brandon says: It is extremely difficult for three-dimensional beings that see two-dimensionally to understand fourth-, fifth-, etc.-dimensional cubes or "masses." This is a good first step for people who are interested in exercising their perception of our Universe. I remind people constantly: not that long ago (considering the age of our planet) it was a "fact" that the Earth was flat, and if you sailed in one direction… you would fall off the edge of the Earth. We now know that the Earth is not flat, but how long will it take for the majority to understand that it is not only three-dimensional, but also… simultaneously… fourth-, fifth-, etc.-dimensional?

16. Jitendra Dhanky says: I salute you with thanks! You have a rare gift for putting across the interconnectedness of the complex in simple, clear, coherent terms, so that a layperson can understand and say "Now I can see" and "Now I understand".

17. Dr. Nabarun Moitra says: You have inadvertently forgotten the Father of it all, Max Planck. He deserves at least a footnote!
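As a worked footnote to the exchange about "humps" in the comments above, here is a standard textbook derivation (not taken from the post itself) showing that requiring a whole number n of de Broglie wavelengths to fit around a circular orbit of radius r reproduces exactly Bohr's quantization rule for angular momentum:

```latex
% n de Broglie wavelengths must fit around the orbit's circumference:
n\lambda = 2\pi r, \qquad \lambda = \frac{h}{p} \quad\text{(de Broglie)}
% Substituting the second relation into the first:
\Longrightarrow\; n\,\frac{h}{p} = 2\pi r
\;\Longrightarrow\; L = p\,r = n\,\frac{h}{2\pi} = n\hbar .
```

This also suggests why one cannot simply put extra humps on the same circle: the Coulomb attraction fixes how much momentum an electron can have at a given radius, so n and r are locked together, and a wave with the wrong number of humps for its radius does not correspond to a stable orbit.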
Einstein and the speed limit of the universe

Einstein did not support the fundamental uncertainty of quantum physics. He stubbornly maintained the idea that reality was permanent and objective and that the observer played no significant role. Yet the observer plays quite an important role in his best-known work, the theory of relativity. Precisely if you assume that the observer makes the observed 'true' and thus actually creates reality, his approach to the relativity of space and time offers a surprising outcome.

Special relativity

The special theory of relativity can be followed perfectly using nothing more complicated than Pythagoras and a dose of high school algebra. But I'm not going to do that here; there is a lot to be found on the internet doing that. Read for example: Special relativity math2410 from Leeds University.

An extremely important premise for Einstein was that the universe should basically look the same for two observers moving relative to each other. Ultimately, that's a symmetry argument. Symmetry has been an important criterion in the theories of physics since Emmy Noether made its role explicit in 1918. Einstein combined this criterion with the insight that the observed speed of light – in a vacuum – must be the same in all circumstances. This followed from Maxwell's equations for electromagnetic waves and was indirectly confirmed by the experiments of Michelson and Morley, who sought to determine the speed at which the Earth traveled through the supposed aether by measuring differences in the speed of light going in different directions with regard to this aether. The outcome was that they could not measure any differences in speed, no matter how accurate their experimental set-up was.

To ride with a light wave

In addition, Einstein had realized from an early age that you cannot overtake or even keep up with a light wave. If you could keep up with light, Maxwell's electromagnetic wave would no longer oscillate from your moving point of view; it would look like a frozen wave. But since the wave's propagation is both caused and sustained by its ceaselessly oscillating fields, that couldn't be right. Light must therefore always move at exactly 300,000 km/s for every observer. This also follows undisputedly from Maxwell's equations, because these do not contain any parameter referring to the state of motion of the observer.

Einstein riding the light wave. The wave will seem frozen from his viewpoint. This is not possible. © Paul J. van Leeuwen

Einstein now imagined two observers moving relative to each other but who should both observe the same speed of light. Imagine a light source C standing still for observer Alice. Alice sees the light of C approaching her at c = 300,000 km/s. Observer Bob whizzes at great speed towards light source C, say at 1/10 of c. Alice now considers that the light coming from C towards Bob must therefore move at 11/10 of the speed of light for Bob. I hope you can follow Alice's reasoning. Otherwise, try to think of two cars driving towards each other while Alice watches from the roadside. The car with driver Bob drives at 10 km/h and car C drives at 100 km/h towards Bob and Alice. Car C here stands for the light that comes towards Bob and Alice. Alice observes (with radar) that the speed of car C is 100 km/h and that Bob and car C are speeding towards each other at 110 km/h. Now suppose that Bob would also perceive the speed of the oncoming car C relative to him as 100 km/h. That could only be if Bob's clock ticked at 10/11 the speed of Alice's watch.
And not only Bob's clock but also Bob's entire perception of time would have to be slowed down so that Bob actually experiences the speed of car C as 100 km/h. In that case Bob will live a little bit slower. As far as Alice is concerned, Bob is now aging more slowly than she is.

Time slows down and space shrinks

Now back to the light that is always experienced by every observer at the same constant speed. If Bob moves relative to Alice at 1/10 the speed of light and Bob sees the light move at 300,000 km/s, then that is possible if time for Bob slows down by a factor of 10/11. Bob doesn't feel that himself, because he is sitting inside his own delayed time capsule, his car. This simplified estimate of the slowing of Bob's time is not 100% correct, because something also happens to Bob's yardsticks, but what matters to me is that you get an understanding of relativity reasoning. If you want to do this completely right, then, as already mentioned, some algebra and Pythagoras are involved, and the time dilation, the slowing down of Bob's time, is described with:

\displaystyle{T = \frac{T_0}{\sqrt{1 - v^2/c^2}}}

Time dilation T for Bob's clock moving at speed v relative to Alice's stationary clock. T0 is the time of Alice's clock. The closer Bob moves to the speed of light c, the slower his clock ticks as seen from Alice's viewpoint.

Here v is Bob's speed relative to Alice (or Alice's speed relative to Bob). If you enter 1/10 of the speed of light c for v here, then Bob's clock turns out to tick 0.5% slower than Alice's clock. Now we apply the principle of symmetry that Einstein argued: there is no absolute speed, speed is always relative. Bob, who experiences himself as stationary, observes Alice moving away from him at 1/10 the speed of light. So Bob also sees Alice's clock ticking 0.5% slower. This seems a paradox, but the theory is correct and has been confirmed in countless experiments. The solution is that Bob and Alice can't compare their clocks until they come together, and for that at least one of them has to turn around, which means speeding up and slowing down. This breaks the symmetry.

You can see from the above time dilation formula that the maximum speed that applies in the universe is 300,000 km/s. The term under the radical becomes negative when v becomes greater than c, which would make the time dilation imaginary. That's too bad, because it makes non-imaginary trips to even the nearest stars impossible for us.

From Alice's point of view, Bob's rulers also shorten in the direction of his movement. For completeness, this is the formula for the contraction of fast-moving rulers, the so-called Lorentz contraction:

\displaystyle{L = L_0 \sqrt{1 - v^2/c^2}}

Lorentz contraction of a ruler L moving with speed v relative to the observer. L0 is the length of the ruler when at rest relative to the observer.

It goes without saying that this sparked a lot of discussion in the first half of the 20th century. Einstein took the position that the observers of the clocks and rulers did not play a vital role in relativity effects. According to him, they could just as easily be left out of the equations: fast-moving clocks would automatically slow down, fast-moving rulers would shorten, without the need for an observer. This elasticity of space and time and of the material objects therein was, and still is, difficult to grasp, but has been confirmed experimentally time and again. We, the physicists, are more or less used to it now, but we do not really understand it. It's not natural.
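For readers who want to check these numbers themselves, here is a minimal Python sketch (my addition, not part of the original post) of the two formulas above; for v = c/10 it reproduces the 0.5% slowdown and the corresponding ruler contraction quoted in the text:

```python
import math

c = 299_792.458  # speed of light in km/s

def gamma(v):
    """Lorentz factor for a clock or ruler moving at speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v = 0.1 * c                                          # Bob's speed: 1/10 of c
g = gamma(v)
print(f"Lorentz factor: {g:.5f}")                    # ~1.00504
print(f"Bob's clock runs {100 * (1 - 1 / g):.2f}% slow")   # ~0.50%
print(f"Alice sees Bob's 1 m ruler as {1 / g:.5f} m")      # ~0.99499 m
```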
Einstein's fight against the probability interpretation of quantum physics

Einstein seriously put quantum physics on the map with his explanation of the photoelectric effect, for which he received the Nobel Prize. Light consists of particles with an energy per particle according to the Planck formula (f here stands for the frequency):

\displaystyle{E = hf = \frac{hc}{\lambda}}

Planck's law: the energy of a quantum of radiation is proportional to its frequency and inversely proportional to its wavelength.

But after that he argued vigorously against quantum physics, and especially its implications, to no avail. He argued especially against the probability interpretation of Bohr, Heisenberg and Born: that the state wave, the solution of the Schrödinger equation, represents the probability that the particle will be found at a given location and time when measured. That went against Einstein's gut view of the world as an objectively permanent collection of material objects. Einstein's objection is understandable if you adhere to the materialistic view of the world, because a probability is not an objective material object. It is something that exists in our mind. A thought.

And that's exactly my own idea of how the universe works. Everything we experience takes place in the mind. The perception of the measured particle thus becomes identical to the thought of it. The experience is then the same as its creation. That explains to me very well why the laws of physics behave according to mathematical formulas, something that many physicists, including Einstein, have expressed their amazement about. So the observer's mind plays an indispensable role in the universe: it creates it. Mathematics is something of and in the mind. The mind apparently uses mathematics in its creation of the universe. Time and space are concepts of the mind. That idea suddenly makes things like the slower passing of time, the shrinking yardsticks and the curved space of general relativity much more palatable. In a dream we would not really notice these things either. There exists no real objective time outside of us that slows down; there is no objective space outside of us that shrinks; it's all happening in the mind of every observer.

Science Fiction?

That offers hope for the possibility of exploration of the cosmos. The maximum speed in the universe that we observe – that of light – seems to be something that the mind has imposed on itself. But as soon as we can accept that time and space happen within the mind, the possibility opens up that we could move through the universe beyond that limitation. Traveling within the mind is not bound by the restrictions of relativity. This, I believe, is also the correct interpretation of entanglement and instantaneous action over long distances, as confirmed by all those Bell tests. Traveling through the universe by means of the mind could even be the way – one that intelligent beings existing elsewhere in this vast universe may already have discovered – to travel through the cosmos despite Einstein's speed limit. And to visit us. Experiments have already been conducted that are claimed to show quantum tunneling at speeds greater than that of light.

A universe like a slowly fading flare

That the universe is a creation of the mind also offers an alternative to the pending entropy death of the universe that physics has been predicting for a century and a half now. Even if that is an immeasurably distant future away, it remains a bleak prospect contradicting any sense of purpose of the world.
What was that fantastic spectacle all for, if that is to be the end? But if the universe is the product of the creative mind, then that is by no means an unavoidable end to everything. On the contrary. What I want to say with this story is that there is a good chance that two apparently incompatible theories – relativity and quantum physics – can be merged together very well once we start to include the all-important role of consciousness. The intelligibility of the nature of reality would only increase as a result.

Epicycles and quantum fields

Feynman diagrams

Feynman diagrams are used by physicists to represent the possible interactions between elementary particles. Wikipedia:

The lines in Feynman diagrams represent particles interacting with each other in some fashion. Mathematical expressions correspond to every line and node. The probability of certain interactions occurring can be calculated by drawing the corresponding diagrams and using them to find the correct mathematical expression.

The diagrams are basically accounting tools with a simple visual representation of an interaction of particles. So it is not the case that physicists assume that those particles exist physically during their lifetime and that they follow trajectories; that would contradict the wave aspect that quantum physics assigns to them. They prefer to assume that the particles in some virtual way 'try out' all possible paths, where one is always chosen and becomes physical on measurement. Each Feynman diagram is just a way of visualizing one of the possible interactions. But the temptation to view these interactions as objective physical events is strong.

Feynman diagram with two electrons and one single photon for repulsive field interaction

The above figure is one of the simplest Feynman diagrams you can find on the internet. Shown vertically is the time (t), horizontally the position (x). This diagram shows the simplest way two electrons can affect each other. Two electrons fly towards each other, repel each other at time t0 and fly apart again at the same speeds. At the moment t0, when they are at positions x1 and x2, they exchange a photon. A photon carries a certain momentum, transfers it, and thus exchanges momentum between the two electrons. After the exchange, the electrons fly apart at the same speeds at which they first approached each other. The question is, of course, how electrons 'feel' that there are other electrons nearby so that they have to exchange photons. The exchange, as shown here, is an instantaneous process: the path of the photon is horizontal at t0.

The photon exchanges the electron momenta

Hey, that's curious: that makes the speed of the photon infinite. That will certainly not be the intent of the diagram. However, there is more that raises questions. The direction of the photon is not indicated; the photon could move from right to left as well as from left to right. The electrons both undergo a momentum change due to the exchange of the photon. Momentum is the amount of movement, expressed as mass m times velocity v: p = mv. The left electron undergoes a velocity change Δv1. From this follows a momentum change Δp1 = mΔv1; ditto for the right electron: Δp2 = mΔv2. The velocity changes Δv1 and Δv2 are of equal magnitude and of opposite direction: Δv1 = -Δv2. This means that the total momentum does not change: Δv1 + Δv2 = 0, so Δp1 + Δp2 = 0. That is 100% according to an important law in physics: the total momentum of a closed system does not change.
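As a toy numeric restatement of this bookkeeping (my sketch, not the author's; nonrelativistic formulas and an arbitrary electron speed are assumed):

```python
m = 9.109e-31            # electron mass in kg (approximate)

# Head-on encounter: equal speeds, opposite directions (m/s)
v1, v2 = 1.0e6, -1.0e6

# Symmetric repulsion as described above: the electrons fly apart
# again at the same speeds, i.e. their velocities are exchanged.
v1_after, v2_after = v2, v1

dp1 = m * (v1_after - v1)    # momentum change of the left electron
dp2 = m * (v2_after - v2)    # momentum change of the right electron
print(dp1 + dp2)             # 0.0 -> total momentum is conserved

def kinetic(v):
    return 0.5 * m * v ** 2

print(kinetic(v1_after) + kinetic(v2_after)
      - kinetic(v1) - kinetic(v2))   # 0.0 -> no net kinetic energy change
```

Both printed differences come out to zero: in the symmetric exchange, momentum and kinetic energy balance exactly.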
The accounting is correct for the momenta

The photon does the transfer of the momentum, because a photon carries a momentum according to de Broglie: p = h/λ. Both electrons undergo an equal and opposite momentum change, which is transferred through the photon, whether the photon moves to the left or the right. For example, suppose the photon moves to the right. The left electron undergoes a momentum change Δp1 = mΔv1 = h/λ, the right-hand electron Δp2 = mΔv2 = -h/λ. This last minus sign is there because the photon loses its momentum when interacting with the electron on the right. Since Δv1 = -Δv2 holds, the total momentum Δp1 + Δp2 is preserved. If the photon travels in the opposite direction, the result is the same. So it doesn't matter in which direction the photon moves. If the photon has the speed of light, then the momentum changes will occur slightly apart in time: the emitting electron changes its momentum first, the receiving electron a little later. But that's not a real problem. The accounting of the momenta is correct.

At least two photons are needed

But what about the energy? A photon also carries an amount of energy that is proportional to its frequency f: E = hf. That's Planck's law. If the photon moves to the right, the left electron must have lost some amount of kinetic energy, because it has transferred it to the departing photon: ΔE = -hf. The receiving right electron then gains this energy as kinetic energy, so it has a higher speed. And if the photon were to move to the left, the right electron loses the kinetic energy that the left electron gains. That can't be right. Both scenarios conflict with the elastic collision of two objects and cause an asymmetry in the course of the interaction. If we want to achieve the same as in an elastic collision, we must assume two simultaneous photons, one going from left to right and one from right to left. Both transfer energy and momentum. This way the accounting is correct again: the sum of the transferred momenta is zero and there is no net transferred kinetic energy. We need two photons for that. In itself, a Feynman diagram can be supplemented in this way. There is no objection to that.

Feynman diagram with two electrons and two photons for complete repulsive quantum field interaction

There should be a simpler story

A correct story with the exchange of photons becomes considerably more difficult with particles that attract each other, such as an electron and a positron. Isn't it actually simpler to assume a single interaction in which the charged particles exchange their momentum but no energy? In my opinion, a photon is nothing more or less than the observation of an energy exchange that must have occurred. The assumption that it should be a physical particle is the result of the image imposed on us by classical physics. A photon can therefore also be regarded as nothing more than the observation of a momentum exchange. Elsewhere on this website, and also in my book, I argue extensively that the photon does not physically exist and thus does not travel. The photon is, I think, a reified abstraction.

Quantum field theory

In quantum field theory it is assumed that a moving electron, which is a non-physical probability wave as long as it is not measured, is surrounded by a cloud of virtual (!) photons, two of which become real photons in this case to take care of the momentum exchange. This representation replaces Maxwell's electromagnetic field concept.
Actually, Maxwell himself wasn't very happy with his field concept, since he had to assign properties to empty space. Quantum field theory now replaces that electromagnetic field by assuming large numbers of virtual photons popping in and out of thin air. In this way you avoid the troublesome idea that electrons would have to 'feel' each other's proximity and decide 'in time' to perform a momentum exchange in order to move away from each other again. In this way the objective electromagnetic field has been replaced by something even more complex and ultimately still based on the field concept, a state of empty space, this time chock-full of virtual particles. Admittedly, quantum field theory does provide very precise predictions. But that could also be said about the epicycles of Ptolemy.

A mini Big Bang in a mini universe of billiard balls

Virtual dancing with quantum fields, a dream

Before 1900 we had the rather simple billiard ball model of the universe. Quantum field theory has now taken its place. To get an idea of its message, let's assume that you've been worrying deeply about quantum fields and those virtual photons. Exhausted, you fall asleep and start dreaming. You find yourself in a dance outfit on a huge expanse of ultra-smooth dance floor where you can't see the walls. Everywhere people are dancing; in some places it is swarming with them. In quieter places you see someone alone doing a pretty good pirouette. The floor is so slippery that there's no way you can move from your place. How do the others do that?

Then you notice that billiard balls are constantly appearing and disappearing everywhere in the air. The heavier the ball, the faster it disappears again. The smaller and lighter ones last a little longer, but eventually they disappear too. You understand that those balls are virtual but that they are physical for a short time. Now that you understand, you want to dance and you are looking for someone to dance with. Then you see someone repulsive sliding towards you. You don't want to dance with this person. So you grab a large heavy virtual billiard ball that has just appeared in the air near you, and you throw it in this person's direction. The other person catches the ball neatly, after which it immediately disappears again into thin air. The result is that you two are sliding apart again. Then you see someone really attractive. You want to dance with that person, but the person is gliding along a trajectory that does not come close to your trajectory. So you grab another billiard ball that conveniently pops up at that moment. You throw the ball in the opposite direction and you see, to your pleasure, that the other person does the same. You move towards each other and begin to dance happily … and then you wake up. End of dream. Regrettably.

But you now suddenly understand the idea of the quantum fields a lot better. It's just the old billiard balls story again. But now they are 'virtual'. Virtual is a concept from optics and means that an object is observable but not physically present; it is not tangible. A rainbow is a virtual object. You can't grab it, but materialistic thinking tries to do just that.
Virtual epicycles

When I think about this tortuous explanation with virtual photons, it invariably reminds me of the epicycles of Ptolemy, which 'explained' the movements of the planets in the heavens in a very complex way and which lasted for 1400 years, because the idea of the earth at the center was something people preferred rather strongly and, more importantly, because the model was so accurate in its predictions. Take a look at the Ptolemaic animation of the movement of Mars through the heavens below to get an understanding of its utterly tortuous complexity. Ptolemy's epicycles were indeed virtual.

The Ptolemaic model of the solar system. The Earth (blue) sits right next to the center of the deferent, the great circle. Mars moves around the Earth in epicycles, small circular orbits whose center moves along the deferent. The yellow ball is the sun as it moves through the zodiac in a year.
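The geometry behind such an animation is simple to sketch, for the curious: the planet's apparent position is a point on a small circle (the epicycle) whose center rides along the big circle (the deferent). A minimal Python sketch (my addition, with illustrative radii and periods, not Ptolemy's actual parameters):

```python
import math

R_deferent, T_deferent = 1.52, 1.88   # deferent radius and period (illustrative)
r_epicycle, T_epicycle = 1.00, 1.00   # epicycle radius and period (illustrative)

def mars_position(t):
    """Apparent position at time t: epicycle center on the deferent
    plus the planet's offset on the epicycle."""
    cx = R_deferent * math.cos(2 * math.pi * t / T_deferent)
    cy = R_deferent * math.sin(2 * math.pi * t / T_deferent)
    x = cx + r_epicycle * math.cos(2 * math.pi * t / T_epicycle)
    y = cy + r_epicycle * math.sin(2 * math.pi * t / T_epicycle)
    return x, y

for t in (0.0, 0.5, 1.0, 1.5):
    print(t, mars_position(t))
```

Tracing this curve over time produces the looping, occasionally retrograde path that made Ptolemy's model so complicated, and so accurate.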
Imaginary Angles

You will have heard about imaginary numbers; the most famous of them is i=\sqrt{-1}. I personally don't like this name, because all of mathematics is man/woman made, hence all mathematical objects are imaginary (there is no perfect circle in nature…) and lack physical meaning. Moreover, these numbers are very useful in physics (a.k.a. the study of nature using mathematics). For example, the time-dependent Schrödinger equation contains i explicitly:

\displaystyle{i\hbar\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\Psi(x,t)}

But, as described here:

Complex numbers are a tool for describing a theory, not a property of the theory itself. Which is to say that they can not be the fundamental difference between classical and quantum mechanics (QM). The real origin of the difference is the non-commutative nature of measurement in QM. Now this is a property that can be captured by all kinds of beasts — even real-valued matrices. [Physics.SE]

For more of such interpretation see: Volume 1, Chapter 22 of "The Feynman Lectures on Physics". And also this discussion about Hawking's wave function.

All these facts may not have fascinated you, but the following fact from Einstein's Special Relativity should fascinate you:

In 1908 Hermann Minkowski explained how the Lorentz transformation could be seen as simply a hyperbolic rotation of the spacetime coordinates, i.e., a rotation through an imaginary angle. [Wiki: Rapidity]

Irrespective of whether you do or don't understand Einstein's relativity, the concept of an imaginary angle appears bizarre. But mathematically it's just another consequence of non-euclidean geometry, which can be interpreted via the hyperbolic law of cosines etc. For example:

\displaystyle{\cos (\alpha+i\beta) = \cos (\alpha) \cosh (\beta) - i \sin (\alpha) \sinh (\beta)}

\displaystyle{\sin (\alpha+i\beta) = \sin (\alpha) \cosh (\beta) + i \cos (\alpha) \sinh (\beta)}

Let's try to understand what is meant by "imaginary angle" by following the article "A geometric view of complex trigonometric functions" by Richard Hammack. Consider the complex unit circle U=\{(z,w)\in \mathbb{C}^2 \ : \ z^2+w^2=1\} of \mathbb{C}^2, defined in a manner exactly analogous to the definition of the standard unit circle in \mathbb{R}^2. Apparently U is some sort of surface in \mathbb{C}^2, but it can't be drawn as simply as the usual unit circle, owing to the four-dimensional character of \mathbb{C}^2. But we can examine its lower dimensional cross sections. For example, if z=x+iy and w=u+iv, then by setting y = 0 we get the circle x^2+u^2=1 in the x-u plane for v=0, and the hyperbola x^2-v^2 = 1 in the x-vi plane for u=0.

The cross-section of the complex unit circle (defined by z^2+w^2=1 for complex numbers z and w) with the x-u-vi coordinate space (where z=x+iy and w=u+iv) © 2007 Mathematical Association of America

These two curves (circle and hyperbola) touch at the points ±o, where o=(1,0) in \mathbb{C}^2, as illustrated above. The symbol o is used by Richard Hammack because this point will turn out to be the origin of complex radian measure. Let's define the complex distance between points \mathbf{a} =(z_1,w_1) and \mathbf{b}=(z_2,w_2) in \mathbb{C}^2 as

\displaystyle{d(\mathbf{a},\mathbf{b}) = \sqrt{(z_2-z_1)^2 + (w_2-w_1)^2}}

where the square root is taken in the half-plane H of \mathbb{C} consisting of the non-negative imaginary axis and the numbers with a positive real part. Therefore, the complex distance between two points in \mathbb{C}^2 is a complex number (with non-negative real part). Starting at the point o in the figure above, one can move either along the circle or along the right-hand branch of the hyperbola.
On investigating these two choices, we conclude that they involve traversing either a real or an imaginary distance. Generalizing the idea of real radian measure, we define imaginary radian measure to be the oriented arclength from o to a point p on the hyperbola, as illustrated below.

(a) Real radian measure; (b) Imaginary radian measure. © 2007 Mathematical Association of America

If p is above the x axis, its radian measure is \beta i with \beta >0, while if it is below the x axis, its radian measure is \beta i with \beta <0. As in the real case, we define \cos (\beta i) and \sin (\beta i) to be the z and w coordinates of p. According to figure (b) above, this gives

\displaystyle{\cos (\beta i) = \cosh (\beta); \qquad \sin (\beta i) = i \sinh (\beta)}

\displaystyle{\cos (\pi + \beta i) = -\cosh (\beta); \qquad \sin (\pi + \beta i) = -i \sinh (\beta)}

Notice that both these relations hold for both positive and negative values of \beta, and are in agreement with the expansions of \cos (\alpha+i\beta) and \sin (\alpha+i\beta) stated earlier. But to "see" what a complex angle looks like, we will have to examine the complex versions of lines and rays. Despite the four-dimensional flavour, \mathbb{C}^2 is a two-dimensional vector space over the field \mathbb{C}, just like \mathbb{R}^2 over \mathbb{R}. Since a line (through the origin) in \mathbb{R}^2 is the span of a nonzero vector, we define a complex line in \mathbb{C}^2 analogously. For a nonzero vector u in \mathbb{C}^2, the complex line \Lambda through u is span(u), which is isomorphic to the complex plane. In \mathbb{R}^2, the ray \overline{\mathbf{u}} passing through a nonzero vector u can be defined as the set of all nonnegative real multiples of u. Extending this to \mathbb{C}^2 seems problematic, for the word "nonnegative" has no meaning in \mathbb{C}. Using the half-plane H (where the complex square root takes its values) seems a reasonable alternative. If u is a nonzero vector in \mathbb{C}^2, then the complex ray through u is the set \overline{\mathbf{u}} = \{\lambda u \ : \  \lambda\in H\}. Finally, we define a complex angle as the union of two complex rays \overline{\mathbf{u}_1} and \overline{\mathbf{u}_2}.

I will end my post by quoting an application of imaginary angles in optics from here:

… in optics, when a light ray hits a surface such as glass, Snell's law tells you the angle of the refracted beam, Fresnel's equations tell you the amplitudes of reflected and transmitted waves at an interface in terms of that angle. If the incidence angle is very oblique when travelling from glass into air, there will be no refracted beam: the phenomenon is called total internal reflection. However, if you try to solve for the angle using Snell's law, you will get an imaginary angle. Plugging this into the Fresnel equations gives you the 100% reflectance observed in practice, along with an exponentially decaying "beam" that travels a slight distance into the air. This is called the evanescent wave and is important for various applications in optics. [Mathematics.SE]
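These identities, and the total-internal-reflection example in the closing quote, are easy to verify numerically. A quick Python check using the standard cmath module (my addition, with arbitrary sample values):

```python
import cmath
import math

beta = 0.7
print(cmath.cos(1j * beta), math.cosh(beta))        # cos(βi) = cosh(β)
print(cmath.sin(1j * beta), 1j * math.sinh(beta))   # sin(βi) = i·sinh(β)

alpha = 0.3
z = complex(alpha, beta)
lhs = cmath.cos(z)
rhs = math.cos(alpha) * math.cosh(beta) - 1j * math.sin(alpha) * math.sinh(beta)
print(abs(lhs - rhs) < 1e-12)                       # True

# Total internal reflection: past the critical angle, Snell's law
# returns a complex refraction angle (the evanescent wave).
n1, n2 = 1.5, 1.0                # glass -> air (assumed indices)
theta_i = 1.2                    # incidence angle in radians
theta_t = cmath.asin(n1 / n2 * math.sin(theta_i))
print(theta_t)                   # nonzero imaginary part
```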
Thursday, July 1, 2021

Whitehead's Process Speculation about Multiverses before there was Speculation

Over the last several days I put together a couple of articles on EM/QED, and then saw Paul's statement below in connection with Whitehead's multiverse theory and electromagnetic societies as spacetime singularities:

Alfred North Whitehead, the smartest man who ever lived [in my opinion], foretold of our universe existing as only one of many. Today it is known as the multiverse theory. Seventy years before modern physics, [mathematician-philosopher] Alfred North Whitehead pioneered the framework of multiverse theory by what he described as a "plurality of cosmic epochs", "the theory of society," and the notion of "the geometrical society" which harbors the existence of the cosmic epochs - one which [may] contain all possible geometrical configurations, allowing multiple dimensions required by M-theory. Whitehead also foretold that evidence of "our cosmic epoch" (our universe) is all we would be able to trace. The phrase "cosmic epoch" is used to mean "the widest society of actual entities whose immediate relevance to ourselves is traceable." Whitehead also called "our cosmic epoch" an "electromagnetic society that began as a spacetime singularity" - now known as the big bang, roughly 14 billion years ago, which has been expanding and cooling ever since.

Following up Paul's reference turned up this little gem by Leemon McHenry at California State University:

The Multiverse Conjecture: Whitehead's Cosmic Epochs and Contemporary Cosmology (21 pages)

*Leemon McHenry teaches philosophy at California State University, Northridge, CA

Abstract: Recent developments in cosmology and particle physics have led to speculation that our universe is merely one of a multitude of universes. While this notion, the multiverse hypothesis, is highly contested as legitimate science, it has nonetheless struck many physicists as a necessary consequence of the effort to construct a final, unified theory. In Process and Reality (1929), his magnum opus, Alfred North Whitehead advanced a cosmology as part of his general metaphysics of process. Part of this project involved a theory of cosmic epochs which bears a remarkable affinity to current cosmological speculation. This paper demonstrates how the basic framework of a multiverse theory is already present in Whitehead's cosmology and defends the necessity of speculation in the quest for an explanatory description.

An example of entropy in biological systems

Process as a Continuous State of Unfolding Entropy

For myself, I can see the historical appropriations of Whitehead connecting process philosophy to Maxwell's electromagnetic theory (it's easy enough to do between both systems, as I hinted at here). And though I have no problem with multiverse theory (it'd be highly unusual if our cosmos were the only universe... without having simultaneous derivatives of all kinds of universes as proposed under M-theory), I didn't think the many-worlds concept came out until the late fifties, with Hugh Everett (1957). Still, process philosophy would very easily connect with this theory too, as apparently Whitehead speculated when sensing the flow and rhythm of an organic universe. That is to say, things do not arise by themselves, but in relational communities with one another, which is the nub of multiverse theory.
Said differently, even as evolutionary theory shows the process of entropy attempting to lower its loss of energy (thus a hot earth is cooled by living organic processes), so too would one expect an evolution of cosmi (plural of cosmos?) which come-and-go exploiting all connective opportunities while driven towards novelty and in-state wellbeing. Thus, in these perturbations of our cosmos we find ourselves asking the kinds of questions sentient beings might ask within the framework and conditions of this cosmos (I lean strongly towards the weak anthropic principle - see here and here: "A Tale of Two Cities" - where there can be no preconditions, no divine fiats or commands overruling the process proceeding from God's Self, only undirected interactions in relational context to the whole. My only argument for the strong anthropic principle lies in the embedding of God's Self and His Love within the process itself, granting a positive creativity and need for wellbeing. Combined, both concepts give a teleology to process theology).

Does Process Thought Allow for a Teleology?

That said, I also believe God gave to all universes indeterminate freewill underlaid by the process principles of divine love, speaking not only to freedom but to wellbeing (a kind of entropic statism, if you will).

How Do We Explain the Incredible Uniqueness of Our Form of Multiverse?

[Excerpt] "The concept of other universes has been proposed to explain how our Universe appears to be fine-tuned for conscious life as we experience it. If there were a large (possibly infinite) number of universes, each with possibly different physical laws (or different fundamental physical constants), some of these universes, even if very few, would have the combination of laws and fundamental parameters that are suitable for the development of matter, astronomical structures, elemental diversity, stars, and planets that can exist long enough for life to emerge and evolve.

"The weak anthropic principle could then be applied to conclude that we (as conscious beings) would only exist in one of those few universes that happened to be finely tuned, permitting the existence of life with developing consciousness. Thus, while the probability might be extremely small that any particular universe would have the requisite conditions for life (as we understand [carbon-based] life) to emerge and evolve, this would not necessarily require intelligent design per a teleological argument for the strong anthropic principle as the only explanation for the conditions in the Universe which promotes our existence in it." - res, June 14, 2012

Summary Speculations

Which means our universe is neither the first, nor the last, in a long line of novel creations, but is always found in the perpetual state of outcome as "organically relational cosmic entities" developing from states of being towards endless states of becoming. And not simply one after the other in linear progression, but as many, becoming many more, like bubbles shot out of an infinite number of bubble guns! This is the kind of process creation which I would expect a process God to have created. A God who himself is the first process of all succeeding subtending processes as they mix and break apart from one another in differing combinations of novelty and creative relational bundling. And lastly, we live in a much older "-Verse" than mere physical light years can measure when thinking of all the preceding -verses which have come and gone before our own.
-Verses which are more than the matter of which they are composed. But a summing up of a panpsychic, panexistential, albeit "spiritual," presence we seldom seem to sense as we bustle about like ants on the ground, but can feel vibrating all around us in the aftermaths of creational spaces. The vagabond butterfly, the whispering tree, the moving wind... even our own personal beings and presence with one another and with nature itself. There is a there, there, which one might call divine or spiritual but we might all call beauty and wonder. And like Whitehead's speculation, God's handiwork extends everywhere... both in this world and far beyond it. Though a Process God does not determine the future, as the Very Process itself God is steeped within it, infilling all its spaces and relational processes. God does not need to know the future because God is the future. God is the One who ever lives on the edge of the becoming future.

Process Theology then is a different kind of animal than the church has witnessed before, much as are the evolutionary processes and the quantum sciences. And it's time to re-read the bible's narratival stories with an eye towards the processes occurring in spacetime amongst the ancients - and even before them in the primal dawns of the living, the lit, and the whole.

Peace, my friends. Peace.

July 1, 2021

* * * * * * * *

Are we living in a Multiverse? A closer look at four different types of parallel universe(s)

by Prince, March 26, 2018

I have found the multiverse subject extremely enticing, as it provided me a way to reflect upon my existence and forced me to question everything. In this article I want to share and explain from different perspectives (scientific, theological, fictional, philosophical) the four different levels of multiverses suggested by scientists and astronomers.

We usually think of our universe as a vast, nearly endless expanse that contains every star [and] galaxy in existence. But what if there is more than one universe? Could it be that we live in a multiverse? Our universe, as we know, originated from a huge explosion that is known as the big bang. In the first split second after the big bang the universe underwent a fast expansion, known as cosmic inflation. Our universe in the last 13.7 billion years has expanded enormously from the size of an atom and [it has kept expanding ever] since then. There was a time when the universe was expanding so rapidly that parts of it were moving away from each other faster than the speed of light.

Why might we be living in a Multiverse?

In ancient times it was believed that the earth [was the center of the universe and the other planets revolved around it. Then], later on, we discovered that the earth revolves around the Sun, which is part of the solar system, and our solar system was found to be a part of the Milky Way galaxy. By further observations scientists learned that our universe is composed of billions of such galaxies, each galaxy containing billions of stars. We can only see a small portion of the entire universe, known as the observable universe, which has a diameter of 93 billion light years and a radius of approximately 46 billion light years. According to modern theories of particle physics there might be other parallel universes like ours in a vast collection of universes, the so-called multiverse.
Scientists have started taking the idea of parallel universes very seriously in recent years, and the majority of cosmologists today agree with the concept of a multiverse, which is the idea that our universe might not be the only one of its kind. There are a number of theories about what the multiverse could be, and four different levels at which to look at them.

Level I — The Quilted Multiverse

Quilted Universe

There isn't one single multiverse hypothesis; cosmologist Max Tegmark has proposed four different types of multiverse that might exist. The quilted multiverse model is predicted by the theory of inflation developed by Alan Guth and Andrei Linde, which suggests that space itself is not just big but infinite in size. Beyond the range of our telescopes are other regions of space; those regions are a type of parallel universe with the same physical laws and constants, some similar to ours and some very different, and there is some probability of one of those parallel universes being identical to ours. From a scientific point of view, we are just a configuration of particles, and according to science matter can be arranged in only finitely many ways, after which the arrangements must repeat. Based on the same idea, according to Dr Tegmark a Hubble volume identical to ours should lie around 10 to the power 10¹¹⁸ meters away; beyond that distance configurations must repeat, which means there might be another you in another universe. However, we cannot observe those regions of space with our current technology; the farthest that we can observe is about 42 billion light years, which is the distance that light has been able to travel to us since the big bang happened.

This quilted multiverse model is not really a theory, but rather a prediction, because it is predicted by the theory of inflation, and it agrees with the data provided by the cosmologists. For example, Einstein's general theory of relativity began as speculation, but it has since been proved and tested, and now scientists take it very seriously and use it as a scientific model in order to make sense of events, even though it predicts many things which cannot be tested or observed, such as what happens if you fall inside a black hole. Hence, this model makes sense from a scientific perspective, which requires logic and nature to make sense of events. Even though this model is not a scientific model yet, as it is not testable and we don't have any observational evidence to prove it, it is at least predicted by a well-tested theory. For a theory to be scientific, you don't have to observe everything that it predicts, but only be able to observe one thing that it predicts. Therefore, the lack of evidence for the existence of something is not evidence of its absence.

Level II — Inflationary Multiverse

Inflationary Multiverse

If the level I multiverse model was complicated to comprehend, the level II model forces us to open our imagination to infinite possibilities. The second model is based on the idea of infinite bubble universes, known as the inflationary multiverse model. In order to understand this model it is necessary to understand how the theory of inflation works, which tells us how the big bang occurred. The inflationary multiverse model suggests that the universe is infinite in size, and according to the theory of cosmic inflation the big bang that created our universe may not have been a one-time event; instead it could have happened again and again, going on forever, a process known as eternal inflation.
As you are reading this sentence, there might be another big bang happening out in the cosmos, giving birth to other universes, or bubble universes. Our universe is part of one of those infinite bubbles, which is filled with matter deposited by the energy field that drove inflation, a process that would continue eternally. We will never get to those other bubbles even if we travel at the speed of light. The bubbles vary not only in their initial conditions but also in aspects of nature, with different space-time dimensions and different physical constants. This model is not scientific either, as it lacks observable evidence.

This idea of the multiverse is sounder from a theological perspective, where all the natural laws can be broken. Much of modern theology tries to address the questions concerning our existence on this planet, since theology doesn't require any observational evidence in order to make sense of events. Theists can easily argue that God is the creator and sustainer of the entire universe. Being omnipotent and omniscient, God has the power to control everything. Hence, God decided to create not one universe but many, as God would be the one who created space and time, the inflation and the big bang. According to theism, once you have a transcendent source of everything, space and time, matter and energy, then God is free to create any type of physical reality he wants. Therefore, theology can agree with this concept more than science can, as science requires evidence, unlike theology.

Level III — Quantum Universe

Quantum Universe

The third level multiverse model is known as quantum many worlds, which is the most controversial type. This quantum multiverse model concentrates on the ideas of quantum mechanics, and is very different from the first two models. Quantum mechanics works on probabilities; it states that there is a range of possible observations, each with a different probability. To be clear: if in this universe you are reading this paper, in another quantum universe you might be reading a different paper, in yet another universe you got offered a job, or perhaps in many you don't exist at all. This idea tells us that there are an infinite number of universes, with an infinite number of possible outcomes, where random quantum processes split the universe into multiple copies. At every point, a new universe is being created.

This model makes more sense from a fictional point of view, as there are no limits to the realms; events can make sense or not, and one can either obey the logic or defy it. I am firmly unconvinced by this theory; it is still fictional to me, as I believe that we have not yet established how quantum thinking links up with observation. Quantum physics is the science that attempts to explain phenomena which cannot be explained by classical physics. This is perhaps why scientists love the quantum-world idea, as it explains mathematically things which are not observable. I believe that we do not understand quantum physics completely yet. In order to understand quantum mechanics it is very important to understand how quantum mechanics links up with observation. The link between quantum mechanics and observation is still missing. If the ideas which make sense mathematically are linked up with observation, then perhaps that can enhance our understanding of the quantum multiverse.
Nevertheless, more research needs to be done on this theory in order to understand it completely, which will require more time from physicists.

Level IV — Brane Multiverse

Brane Multiverse

Levels I, II and III vary from each other, but they are governed by the same fundamental natural laws. The fourth level multiverse, which revolves around string theory, is called the brane multiverse. This model suggests that universes can differ not only in shape but also in their laws of physics. The brane multiverse theory suggests that there can be more dimensions than three. We live in a four-dimensional universe including time, but in the brane multiverse our four-dimensional universe lives on a membrane, or brane, that is embedded in a space with more than four dimensions. The idea here is that our membrane is not the only one; there might be other membranes. Existing outside of our space and time, they are almost impossible to visualize.

This model is the most unclear and sounds crazy to me. It is definitely not a scientific model, as string theory is not a complete theory yet. I would see this model more from a philosophical point of view, where observational evidence is not required and one can use logic to make sense of events. I strongly believe that the brane multiverse hypothesis has a high probability of falling within the scientific realm in the near future, as the brane multiverse model has a chance of being experimentally tested, based on string theory, within the shortest time frame. String theory states that space is made up of tiny little filaments known as strings, which vibrate in different patterns, and according to scientists this proposal might be tested at the LHC (Large Hadron Collider). There is no doubt that the concept remains science fiction for now; however, the lack of scientific proof should not be a reason to stop questioning. Hence, it is important for the concept of parallel universes to be explored completely, even though it lacks observational evidence. One way can be to work on the multiverse theories which have the highest chance of being tested in the shortest time frame.

I can confidently conclude that none of the multiverse models mentioned above are scientific models; they remain unproven for now, as they lack observational evidence, but this should not stop science from investigating these ideas further. I am eager to find out what the next big discovery will be in the multiverse hypothesis.

* * * * * * * *

Sean Carroll: Many-Worlds Interpretation of Quantum Mechanics (Nov 5, 2019)
The Many Worlds of the Quantum Multiverse | Space Time | PBS Digital Studios (Oct 26, 2016)
Parallel Worlds Probably Exist. Here's Why (Mar 6, 2020)
Sean Carroll explains: what is the many-worlds interpretation? (Jan 8, 2020)
Roger Penrose - Many Worlds of Quantum Theory (Mar 16, 2020)
Sean Carroll: The many worlds of quantum mechanics (Jun 24, 2020)

* * * * * * * *

Many-worlds interpretation

The quantum-mechanical "Schrödinger's cat" paradox according to the Many-Worlds interpretation.
In this interpretation, every quantum event is a branch point; the cat is both alive and dead, even before the box is opened, but the "alive" and "dead" cats are in different branches of the universe, both of which are equally real, but which do not interact with each other.[a] The many-worlds interpretation (MWI) is an interpretation of quantum mechanics that asserts that the universal wavefunction is objectively real, and that there is no wavefunction collapse.[2] This implies that all possible outcomes of quantum measurements are physically realized in some "world" or universe.[3] In contrast to some other interpretations, such as the Copenhagen interpretation, the evolution of reality as a whole in MWI is rigidly deterministic.[2]:8–9 Many-worlds is also called the relative state formulation or the Everett interpretation, after physicist Hugh Everett, who first proposed it in 1957.[4][5] Bryce DeWitt popularized the formulation and named it many-worlds in the 1960s and 1970s.[1][6][7][2] In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence. Decoherence approaches to interpreting quantum theory have been widely explored and developed since the 1970s,[8][9][10] and have become quite popular. MWI is now considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the Copenhagen interpretation), and hidden variable theories such as Bohmian mechanics. The many-worlds interpretation implies that there are very many universes, perhaps infinitely many.[11] It is one of many multiverse hypotheses in physics and philosophy. MWI views time as a many-branched tree, wherein every possible quantum outcome is realised. This is intended to resolve some paradoxes of quantum theory, such as the EPR paradox[5]:462[2]:118 and Schrödinger's cat,[1] since every possible outcome of a quantum event exists in its own universe. In 1952, Erwin Schrödinger gave a lecture in Dublin in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". He went on to assert that while the Schrödinger equation seemed to be describing several different histories, they were "not alternatives but all really happen simultaneously". Schrödinger stated that replacing "simultaneous happenings" with "alternatives" followed from the assumption that "what we really observe are particles", calling it an inevitable consequence of that assumption yet a "strange decision". According to David Deutsch, this is the earliest known reference to many-worlds, while Jeffrey A. Barrett describes it as indicating the similarity of "general views" between Everett and Schrödinger.[12][13][14] MWI originated in Everett's Princeton Ph.D. thesis "The Theory of the Universal Wavefunction",[2] developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 under the title "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state";[15] Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt,[2] who was responsible for the wider popularisation of Everett's theory, which was largely ignored for a decade after publication.[11] ~ The Overview and Science sections are skipped in this post ~ MWI's initial reception was overwhelmingly negative, with the notable exception of DeWitt. 
Wheeler made considerable efforts to formulate the theory in a way that would be palatable to Bohr, visited Copenhagen in 1956 to discuss it with him, and convinced Everett to visit as well, which happened in 1959. Nevertheless, Bohr and his collaborators completely rejected the theory.[d] Everett left academia in 1956, never to return, and Wheeler eventually disavowed the theory.[11]

One of MWI's strongest advocates is David Deutsch.[64] According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing,[65] he suggested that parallelism that results from MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that MWI will be testable (at least against "naive" Copenhagenism) when reversible computers become conscious via the reversible observation of spin.[66]

Asher Peres was an outspoken critic of MWI. A section of his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres argued that the various many-worlds interpretations merely shift the arbitrariness or vagueness of the collapse postulate to the question of when "worlds" can be regarded as separate, and that no objective criterion for that separation can actually be formulated.[67]

Some consider MWI[68][69] unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[66] claim MWI is directly testable. Victor J. Stenger remarked that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes.[70] Collaborating with James Hartle, Gell-Mann had been, before his death, working toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thought it fair to say that most physicists dismiss the many-worlds interpretation as too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".[e]

Philosophers of science James Ladyman and Don Ross state that the MWI could be true, but that they do not embrace it. They note that no quantum theory is yet empirically adequate for describing all of reality, given its lack of unification with general relativity, and so they do not see a reason to regard any interpretation of quantum mechanics as the final word in metaphysics. They also suggest that the multiple branches may be an artifact of incomplete descriptions and of using quantum mechanics to represent the states of macroscopic objects. They argue that macroscopic objects are significantly different from microscopic objects in not being isolated from the environment, and that using quantum formalism to describe them lacks explanatory and descriptive power and accuracy.[71]

A poll of 72 "leading quantum cosmologists and other quantum field theorists" conducted before 1991 by L. David Raub showed 58% agreement with "Yes, I think MWI is true".[72] Max Tegmark reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop.
According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations."[73] In response to Sean M. Carroll's statement "As crazy as it sounds, most working physicists buy into the many-worlds theory",[74] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." But Nielsen notes that it seemed most attendees found it to be a waste of time: Peres "got a huge and sustained round of applause…when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[75] A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[77] the authors remark that MWI received a similar percentage of votes as in Tegmark's 1997 poll.[77]

Debate whether the other worlds are real

Everett believed in the literal reality of the other quantum worlds.[22] His son reported that he "never wavered in his belief over his many-worlds theory".[78] According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claimed that Stephen Hawking and Steven Weinberg both favour the unreal interpretation.[79] Gardner also claimed that most physicists favour the unreal interpretation, whereas the "realist" view is supported only by MWI experts such as Deutsch and DeWitt. Hawking has said that "according to Feynman's idea", all other histories are as "equally real" as our own,[f] and Gardner reports Hawking saying that MWI is "trivially true".[81] In a 1983 interview, Hawking also said he regarded MWI as "self-evidently correct" but was dismissive of questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[82] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements. Quantum theory does this very successfully."[83] For his own part, Penrose agrees with Hawking that quantum mechanics applied to the universe implies MW, but he believes the lack of a successful theory of quantum gravity negates the claimed universality of conventional quantum mechanics.[27]

Quantum Physics - The Electromagnetic Spectrum parallels Process Thought in Theology

Why Theology Must Pay Attention to the Sciences

by R.E. Slater
Slater I occasionally reach forwards and backwards into non-biblical topics generally classified by the church as "Natural Theology" to help bring home newer theological ideas that may be unknown to the bible reader; a bit strange and out of the ordinary to the Platonic et al systems we've been raised in as Western thinkers; and perhaps more helpful in discussing the harder subjects of who God is, what God is doing, and how we might operate in God's wonderful-and-terrifying worlds of creation. The bible tells us about salvation through the humble narratives of those having sought God  and of those who did not. These narratives were set in the harsh climes those seekers lived as they saw, or experienced, poverty and misery, injustice and inequality, brutality and cruelty. But the bible was not written to explain the natural world we know as creation. This is the area described by older church ages as "natural theology" - which is a descriptor for the sciences of all stripes and flavors. Nor does the bible speak to areas of philosophical concern... primarily because its many narratives were set at one time historically, and later, in the retelling, from other historical perspectives and era settings. Each setting holding its own milieu of philosophical thought which the common people accepted. To tell a narrative from one era-bound anthropologic setting carried with it a common perspective of that time. But to then retell it from within another cultural/geological era held a different meaning philosophically than it did in its one-time original context. Hence, today, as Westernized Christians, we proclaim the bible stories within our own embraced religious beliefs based upon our perceived and accepted philosophical cultural perspectives. Which is why there are so many different kinds of Christian beliefs and practices within competing cultural perspectives and practices we naively call secular not realizing our own Christian tenets and dogmas have been similarly secularized. Embracing the Ancients All this is said to say that when describing and projecting a Process Theology upon the Christian faith from Whitehead's Process Philosophy, we are attempting to (i) update our thinking of the world of creation even as we are attempting to (ii) recognize the competing philosophies of the bible's narratives, told and retold, through the eras. Today's quantum sciences are likewise doing the same with older cultural perspectives of the world and man. The old-line thinking of the Ancients v Greeks v Scholasticists v Enlightenment v Modernists have fallen away even as remnants of those traditions hang on to pervade our thinking. In a postmodern era it seems that process thought best corresponds with the newer worlds of evolution and the chaotic theories of the maths and sciences. To now think in terms of process flow and symmetry, rhythm and balance, seems the more apt description of the world altogether. Yet this is not a new idea... the ancients from long ago felt and shared this... from the African to Greek to Indian to Asian to Native American cultures. They each understood the living, restless world they lived in terms of liveliness, organic relationships, and as a dynamic force informing human civilizations how to live, and be, and respond. Similarly, today's quantum physics is describing the classical worlds of mechanistic reductionism backwards towards the more ancient ideas of flow and rhythm. 
Or, in evolutionary terms, as describing the classical worlds as progressing from one form or stage of social-biological evolvement to another form (either higher or lower) of transitionary evolvement. The world is in itself a living organism. It is neither static nor isolated from its parts. It is one, and whole, and operates together. Welcome to the world of process philosophy... a view as ancient as it is helpfully informative in today's quantum worlds of science.

Stories of Becoming

To describe Process Thought in its religious-Christian derivatives one must pay attention to the newer profound worlds of science and anthropology. Nothing lives isolated from the other. We live in relational worlds of being which are always moving towards becoming or un-becoming. As Westernized thinkers we have never learned to read the bible in this way. Process relational thinking was no less real then than it is today. We just weren't seeing it. Our religious perceptual maps were being informed by other cultural-religious perspectives. In the stories of salvation of the bible were also stories of real live people caught in their personal trajectories of becoming enlivened to the spirit of God. They lived one way, then became informed in some spiritual sense, to then live another way. Their being was in transition. They were becoming something else because of their encounter with some facet of God or God's creation as they navigated life's many obstacles, wonders, and miracles.

Process Theology, as does the Process Philosophical perspective, says that we are beings who are becoming more than we once were. But if we do not, then we may also transition downwards into fundamentally non-becoming beings. These are the stories of light and dark of the bible. Of those who comprehended life as God drew them towards the real meanings of life, versus those who spiraled downwards into darkness and away from the light. The cosmos is more than matter and forces. The cosmos is where life and light might be apprehended for those willing to look and to become nestled into the flow and rhythm of the very God whose cosmos lives and breathes movement from disorder.

Peace.

R.E. Slater
July 1, 2021

* * * * * * * *

What is the main difference between classical and quantum physics?

There is a huge difference between Classical and Quantum Theory. In Classical theory, a body always chooses the least action path, and there is only one such path. In Quantum theory, a particle does not choose a single least action path; it takes multiple paths simultaneously.

What is the difference between classical physics and quantum physics?

1. Classical physics is causal - complete knowledge of the past allows computation of the future. Likewise, complete knowledge of the future allows precise computation of the past. (Chaos theory is irrelevant to this statement; it talks about how well you can do with incomplete knowledge.) Not so in quantum physics. Objects in quantum physics are neither particles nor waves; they are a strange combination of both. Given complete knowledge of the past, we can make only probabilistic predictions of the future. In classical physics, two bombs with identical fuses would explode at the same time. In quantum physics, two absolutely identical radioactive atoms can and generally will explode at very different times. Two identical atoms of uranium-238 will, on average, undergo radioactive decay separated by billions of years, despite the fact that they are identical.
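As a concrete aside (my own illustration, not part of the quoted answer): quantum mechanics fixes only the statistical distribution of decay times, so a minimal simulation of "identical atoms decaying at different times" just samples from an exponential distribution. The half-life used below is the standard uranium-238 figure; everything else is arbitrary.

```python
import numpy as np

# Identical atoms share an identical decay *rate*, but each atom's actual
# decay time is random, drawn from an exponential distribution.
HALF_LIFE_YEARS = 4.468e9                     # uranium-238 half-life
MEAN_LIFETIME = HALF_LIFE_YEARS / np.log(2)   # scale of the exponential

rng = np.random.default_rng(seed=1)
decay_times = rng.exponential(MEAN_LIFETIME, size=5)   # five identical atoms

for i, t in enumerate(decay_times, start=1):
    print(f"atom {i} decays after {t:.2e} years")

# The spread between identical atoms is itself of the order of billions
# of years, exactly as described above.
print(f"spread: {decay_times.max() - decay_times.min():.2e} years")
```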
There is a rule that physicists often use to separate classical physics from quantum. If Planck's constant appears in the equations, it is quantum physics. If it doesn't, it is classical physics. Most physicists believe that quantum physics is the right theory, even though many details are yet to be worked out. Classical physics can be derived from quantum physics in the limit that the quantum properties are hidden. That fact is called the "correspondence principle."

2. Quantum physics is the revolution that overthrew classical physics. Describing the difference between them is like describing the difference between the Bolsheviks and the Tsars. Where do we even begin? On the one hand, we have the Newtonian picture of a clockwork universe. In this paradigm, all of physical reality is a giant machine that ticks forward in time, changing its configuration predictably according to deterministic laws. Newton saw his God as a mathematician who constructed the cosmos out of physical elements, setting them in motion according to a small set of simple mathematical laws. These laws are ultimately responsible for all the complexity and diversity of natural phenomena. Likewise, all phenomena, no matter how complex, can be understood in terms of these simple laws. "All discord," wrote Alexander Pope, is "harmony not understood."

On the other hand, we have the quantum universe, which from our perspective seems to resemble more of a slot machine than a clock. In the quantum universe, we see the machinery as fundamentally probabilistic. If there is harmony underlying quantum discord, it is inaccessible to the experimenter. In fact, the quantum revolution goes much deeper than merely introducing probability as a fundamental feature. It altogether trashes the Newtonian clock, replacing it with a completely alien device built out of much more advanced mathematics. The quantum revolution tells us that the classical perspective isn't just wrong, it is fundamentally unsalvageable.

Let's proceed by discussing some Newtonian components to be thrown in the trash:

1. Particles and fields possess well-defined dynamic variables at all times. Dynamic variables are the quantities used to describe the motion of objects, such as position, velocity, momentum, and energy. Classical physics presupposes that the dynamic variables of a system are well defined and can be measured to perfect precision. For example, at any given point in time, a classical particle exists at a single point in space and travels with a single velocity. Even if the exact values of the variables are uncertain, we assume that they exist and only take one specific value.

2. Particles as point-like objects following predictable trajectories. In classical mechanics, a particle is treated as a dimensionless point. This point travels from A to B by tracing out a continuous path through the intermediate space. A billiard ball traces out a straight line as it rolls across the table, a satellite in orbit traces out an ellipse, and so on. The idea of a definite trajectory presupposes well-defined dynamic variables, and so once the first point above is abandoned, the idea of a definite trajectory must be discarded as well.

3. Dynamic variables as continuous real numbers. In classical physics, dynamic variables are smoothly varying continuous values. Quantum physics takes its name from the observation that certain quantities, most notably energy and angular momentum, are restricted to certain discrete or 'quantized' values under special circumstances.
The in-between values are forbidden.

4. Particles and waves as separate phenomena. Classical physics has one framework for particles and a different framework for waves and fields. This matches the intuitive notion that a billiard ball and a water wave move from A to B in completely different fashions. In quantum physics, however, these two phenomena are synthesized and treated under a unified, magnificent framework. All physical entities are particle/wave hybrids.

5. Newton's Second Law. Without the four kinematic features mentioned above, ∑F = ma is more than wrong, it's nonsensical. A radically different dynamics must be developed that is governed by a very different equation of motion.

6. Predictability of measurement outcomes. In classical physics, the outcomes of measurements can be predicted perfectly, assuming full knowledge of the system beforehand. In quantum mechanics, even if you have full knowledge of a system, the outcomes of certain measurements will be impossible to predict.

With the above list in mind, it's no wonder that quantum mechanics took an international collaboration several decades to develop. How do you build a coherent model of the universe without these features? Well, thankfully, not quite everything from classical physics had to be scrapped. The conservation laws are preserved (or Great Conservation Principles as Feynman called them, always capitalized to highlight their centrality to all areas of physics). Quantum physics conserves things like momentum, energy, and electric charge as perfectly as classical physics. Also, while Newton's formulation of classical mechanics is completely abandoned, the conservation laws encourage us to adapt tools from the more mathematically elegant Hamiltonian and Lagrangian formulations of classical mechanics. Erwin Schrödinger chose to adapt the Hamiltonian formalism, which led to his eponymous equation. Richard Feynman adapted Lagrangian mechanics, which led to his path integral formulation. Heisenberg developed his own esoteric approach called matrix mechanics. All three approaches to quantum mechanics are mathematically equivalent and useful in their own right (there are more than three, but these are the standard formulations). Schrödinger's formulation of quantum mechanics is usually the one everyone encounters first, and his is the formalism most widely used in the field.

So let's go back through the list above, and replace Newton's components with Schrödinger's:

1. Particles possess a wave function Ψ(x,t) at all times. The wave function assigns a complex number to each point in space at each moment in time. This function contains within it all available information about the particle. Everything that can be known about the particle's motion is extracted from Ψ(x,t). To recover dynamic information, we use Born's rule and calculate ΨΨ∗ to get the probability density of the particle's position, and we calculate ϕϕ∗ to get the probability density of the particle's momentum, where ϕ(p,t) is the Fourier transform of the wave function. This is a radically different approach to kinematics than in classical mechanics, which describes particles by listing off the values of the dynamic variables.

2. Trajectories are replaced with wave function evolution. As the wave function changes in time, so do the probabilities of observing particular positions and momenta for the particle. The evolution equation is the time-dependent Schrödinger equation: iℏ ∂Ψ(x,t)/∂t = HΨ(x,t).
H is the Hamiltonian operator for the system, i.e. the self-adjoint operator corresponding to the total energy of the system (described in point 3 below).

3. Dynamic variables are Hermitian matrices. Instead of real-valued, continuously evolving dynamic variables, Schrödinger uses fixed Hermitian matrices (or self-adjoint operators) to represent observable quantities. Each observable, such as position, momentum, or energy, has a corresponding matrix/operator. The eigenvalues of the matrix/operator determine the allowed values of the corresponding observable. The energy levels of atoms, for example, are eigenvalues of the Hamiltonian operator. This is another completely radical shift from how classical physics treats motion.

4. Unification of particles and waves. A mathematical analysis of the Schrödinger equation reveals that it has wavelike solutions, and so particles propagate as waves. This means that we shouldn't picture particles as tiny spheres bouncing around their environment. The closest you can get to visualizing a particle is by visualizing its wave function. As stated previously in the first point above, the wave function assigns a complex number to each point in space. This field of complex numbers evolves in time. What does this evolution look like? Well, if you are familiar with phasors, it looks like a field of rapidly rotating phasors. To be more specific, the field of phasors for a particular particle looks like a screw that twists in the direction of motion.

5. The time-dependent Schrödinger equation replaces Newton's second law.

6. Measurement is random. Even if you have full knowledge of a quantum system prior to measurement (i.e. you know Ψ(x,t)), you still will not be able to predict the outcomes of measurements in general. The outcome of the measurement is probabilistic. The possible outcomes are determined by the eigenvalues of the operator you are observing (see point 3), and the probability of each outcome is determined by the projection of the wave function onto the eigenvectors of that operator.

So this is a sketch of what Schrödinger's quantum mechanics looks like. Alternate formulations would have different details, but the gist is the same. Hopefully it is now clear that the differences between classical physics and quantum physics are vast. The quantum revolution is really one of the most stunning intellectual developments of the 20th century, and in many ways the effects of the revolution have yet to be fully felt. Quantum computing, for example, is one ramification that hasn't quite yet materialized. The philosophical and technological ramifications will most certainly continue to transform the 21st century in extraordinary ways. (A small numerical sketch of points 3 and 6 follows below.)

3. I am going to make this as simple as possible and not throw a lot of math at you. In classical physics, there is an "in-principle" determinism. If you had N atoms of neon in a gas canister, and you knew the position and momentum of every one, in principle you could describe the history fully for all time. That doesn't mean that you can't use statistical methods or treat the motions as random (to treat them deterministically you'd need to keep track of 6N numbers as a function of time!). And in fact, such methods are extremely useful in classical physics. It just means that there are exact knowable properties such as position and momentum that are measurable to any accuracy, independent of the process of observation.
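Here is the small numerical sketch promised above (my own illustration under the rules just listed, not code from the quoted answer): a Hermitian matrix stands in for an observable, its eigenvalues are the only allowed measurement outcomes (point 3), and the Born rule converts the state's projections onto the eigenvectors into outcome probabilities that can only be sampled, never predicted individually (point 6).

```python
import numpy as np

# A Hermitian matrix representing some observable (point 3 above).
A = np.array([[1.0, 1.0j],
              [-1.0j, 2.0]])
assert np.allclose(A, A.conj().T)            # Hermitian check

eigvals, eigvecs = np.linalg.eigh(A)         # eigenvalues = allowed outcomes
print("possible outcomes:", eigvals)

# A normalized state vector for this two-level system.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Born rule (point 6): P(outcome k) = |<eigenvector_k | psi>|^2.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
print("outcome probabilities:", probs)

# Even knowing psi exactly, each measurement result is random:
rng = np.random.default_rng(seed=0)
samples = rng.choice(eigvals, size=10, p=probs)
print("ten simulated measurements:", samples)
```

Running it repeatedly with different seeds changes the individual outcomes but not their long-run frequencies, which is the quantum story in miniature.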
In classical physics, things like electrons and atoms were supposed to be treated as strictly particles, and things like light and other forms of electromagnetic radiation treated strictly as waves. (It turns out that there are a lot of things that happen with light and electrons that cannot be properly explained in classical physics!) In classical physics, each particle has an exact position and momentum. The pool table has an almost completely uniform coefficient of friction, and the collisions are approximately elastic. True, some of the tricks with backspin appear a little freaky....

In quantum physics, there are properties such as position and momentum that are NOT measurable to any accuracy, independent of the process of observation. Specifically, in the case of position and momentum, there is a limit on how accurately you can measure both at once. You can think of a particle as being described as a wave, which encodes the probability of making a specific measurement. Possible observations are determined by the probabilities, and are not determinate. There is no "trajectory" between subsequent observations. The variation becomes significant on the atomic scale and below. Large macroscopic objects that have, say, maybe 7,000,000,000,000,000,000,000,000,000 atoms in them, like you and me, can have variations due to quantum uncertainty that are such a tiny fraction of their size that they can effectively be treated as classical objects for almost all purposes. Indeed, the formula for the wave associated with a human body, or a pool ball or table, gives a wavelength that is so incredibly short that the quantum calculations approximate the classical ones for these large objects to a tremendous degree of accuracy.

The double slit experiment: a wave that strikes a surface with two small nearby openings will interfere with itself, producing interference fringes. In a well-known demonstration, electrons are fired at a pair of slits one at a time. Electrons are definitely particles! Yet the electrons don't seem to follow a definite trajectory, and show up randomly. When a lot have been transmitted, they form interference fringes!

4. Classical physics took form when Newton developed his theory of gravity and the mathematics we commonly know as calculus. Newtonian physics was three-dimensional: width, height and depth. Energy comes in tiny lumps, or packets; a single packet is a quantum, and Planck's ideas were soon called the "quantum theory." Quanta can behave like particles and quanta can behave like waves. It seems counter-intuitive, but light can be both particles and waves, and the difference depends fundamentally on how it is studied.

5. There is a huge difference between Classical and Quantum Theory:

2. If there are 9 boxes and 10 pigeons, then at least one box will end up with two pigeons. This is in Classical Theory. No such thing happens in Quantum Theory. We can pass infinite electrons through just two boxes.

3. We can determine position and velocity of a particle simultaneously with great accuracy in Classical Physics. Quantum Physics follows the Heisenberg Uncertainty Principle.

4. Classical Physics is applicable to macroscopic particles. Quantum Physics is applicable to microscopic particles.

6. Here's a simple analogy. Suppose you are playing squash with a sponge ball, and you wish to build a machine that can play it with you. The first thing you would need is to mathematically model the mechanics of the sponge ball so that you can incorporate it in the design of the machine.
For this a classical model would suffice. Now let's go quantum: if you want to replace the sponge ball with an electron, the classical model of a sponge ball breaks apart. First off, there is no deterministic way of knowing the location of the ball before it hits your bat. Then there is a probability that it will tunnel through your bat even if you got it right. And so we have just started with the long list of phenomena unseen in classical mechanics. These phenomena are modelled into the mathematics in QM, and for a probabilistic theory it explains beautifully why things happen. The problem, however, is that with this new model, the world looks like a much stranger place. The ball is no longer a ball anymore, but an *eigenvalue in a wave equation. It's nothing like the world that we are familiar with. This poses interesting puzzles about what the mathematics means. It's both mind-bending and confusing to visualize, yet so intriguing because it's very counter-intuitive.

*The word eigen is German for "own", "particular", or "proper", so when combined with value, it can be thought of as "the particular value", the value that's "just right" for the situation at hand. Eigenvalues and eigenvectors feature prominently in the analysis of linear transformations.

* * * * * * * *

Electromagnetic Spectrum and Quantum Theory

[Video: Planck's Quantum Theory | Electromagnetic Waves and Wave Optics | Physics Videos]

* * * * * * * *

QUANTUM ELECTRODYNAMICS (QED) - second article

Quantum Physics Perspective on Electromagnetic and Quantum Fields Inside the Brain

* * * * * * * *

Electromagnetic radiation

In physics, electromagnetic radiation (EM radiation or EMR) refers to the waves (or their quanta, photons) of the electromagnetic field, propagating through space, carrying electromagnetic radiant energy.[1] It includes radio waves, microwaves, infrared, (visible) light, ultraviolet, X-rays, and gamma rays. All of these waves form part of the electromagnetic spectrum.[2]

Classically, electromagnetic radiation consists of electromagnetic waves, which are synchronized oscillations of electric and magnetic fields. Electromagnetic radiation or electromagnetic waves are created by the periodic change of an electric or magnetic field. Depending on how this periodic change occurs and the power generated, different wavelengths of the electromagnetic spectrum are produced. In a vacuum, electromagnetic waves travel at the speed of light, commonly denoted c. In homogeneous, isotropic media, the oscillations of the two fields are perpendicular to each other and perpendicular to the direction of energy and wave propagation, forming a transverse wave. The wavefront of electromagnetic waves emitted from a point source (such as a light bulb) is a sphere. The position of an electromagnetic wave within the electromagnetic spectrum can be characterized by either its frequency of oscillation or its wavelength. Electromagnetic waves of different frequency are called by different names since they have different sources and effects on matter.
In order of increasing frequency and decreasing wavelength these are: radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays and gamma rays.[3] Electromagnetic waves are emitted by electrically charged particles undergoing acceleration,[4][5] and these waves can subsequently interact with other charged particles, exerting force on them. EM waves carry energy, momentum and angular momentum away from their source particle and can impart those quantities to matter with which they interact. Electromagnetic radiation is associated with those EM waves that are free to propagate themselves ("radiate") without the continuing influence of the moving charges that produced them, because they have achieved sufficient distance from those charges. Thus, EMR is sometimes referred to as the far field. In this language, the near field refers to EM fields near the charges and current that directly produced them, specifically electromagnetic induction and electrostatic induction phenomena.

In quantum mechanics, an alternate way of viewing EMR is that it consists of photons, uncharged elementary particles with zero rest mass which are the quanta of the electromagnetic field, responsible for all electromagnetic interactions.[6] Quantum electrodynamics is the theory of how EMR interacts with matter on an atomic level.[7] Quantum effects provide additional sources of EMR, such as the transition of electrons to lower energy levels in an atom and black-body radiation.[8] The energy of an individual photon is quantized and is greater for photons of higher frequency. This relationship is given by Planck's equation E = hf, where E is the energy per photon, f is the frequency of the photon, and h is Planck's constant. A single gamma ray photon, for example, might carry ~100,000 times the energy of a single photon of visible light.

The effects of EMR upon chemical compounds and biological organisms depend both upon the radiation's power and its frequency. EMR of visible or lower frequencies (i.e., visible light, infrared, microwaves, and radio waves) is called non-ionizing radiation, because its photons do not individually have enough energy to ionize atoms or molecules or break chemical bonds. The effects of these radiations on chemical systems and living tissue are caused primarily by heating effects from the combined energy transfer of many photons. In contrast, high frequency ultraviolet, X-rays and gamma rays are called ionizing radiation, since individual photons of such high frequency have enough energy to ionize molecules or break chemical bonds. These radiations have the ability to cause chemical reactions and damage living cells beyond that resulting from simple heating, and can be a health hazard.

[Figure: the relative wavelengths of the electromagnetic waves of three different colours of light (blue, green, and red), with a distance scale in micrometers along the x-axis.]

Maxwell's equations

James Clerk Maxwell derived a wave form of the electric and magnetic equations, thus uncovering the wave-like nature of electric and magnetic fields and their symmetry. Because the speed of EM waves predicted by the wave equation coincided with the measured speed of light, Maxwell concluded that light itself is an EM wave.[9][10] Maxwell's equations were confirmed by Heinrich Hertz through experiments with radio waves.[11] Maxwell realized that, since a lot of physics is symmetrical and mathematically artistic in a way, there must also be a symmetry between electricity and magnetism.
He realized that light is a combination of electricity and magnetism and thus that the two must be tied together. According to Maxwell's equations, a spatially varying electric field is always associated with a magnetic field that changes over time.[12] Likewise, a spatially varying magnetic field is associated with specific changes over time in the electric field. In an electromagnetic wave, the changes in the electric field are always accompanied by a wave in the magnetic field in one direction, and vice versa. This relationship between the two occurs without either type of field causing the other; rather, they occur together in the same way that time and space changes occur together and are interlinked in special relativity. In fact, magnetic fields can be viewed as electric fields in another frame of reference, and electric fields can be viewed as magnetic fields in another frame of reference, but they have equal significance as physics is the same in all frames of reference, so the close relationship between space and time changes here is more than an analogy. Together, these fields form a propagating electromagnetic wave, which moves out into space and need never again interact with the source. The distant EM field formed in this way by the acceleration of a charge carries energy with it that "radiates" away through space, hence the term.

Near and far fields

In electromagnetic radiation (such as microwaves from an antenna) the term "radiation" applies only to the parts of the electromagnetic field that radiate into infinite space and decrease in intensity by an inverse-square law of power, so that the total radiation energy that crosses through an imaginary spherical surface is the same, no matter how far away from the antenna the spherical surface is drawn. Electromagnetic radiation thus includes the far field part of the electromagnetic field around a transmitter. A part of the "near-field" close to the transmitter forms part of the changing electromagnetic field, but does not count as electromagnetic radiation.

Maxwell's equations established that some charges and currents ("sources") produce a local type of electromagnetic field near them that does not have the behaviour of EMR. Currents directly produce a magnetic field, but it is of a magnetic dipole type that dies out with distance from the current. In a similar manner, moving charges pushed apart in a conductor by a changing electrical potential (such as in an antenna) produce an electric dipole type electrical field, but this also declines with distance. These fields make up the near-field near the EMR source. Neither of these behaviours is responsible for EM radiation. Instead, they cause electromagnetic field behaviour that only efficiently transfers power to a receiver very close to the source, such as the magnetic induction inside a transformer, or the feedback behaviour that happens close to the coil of a metal detector. Typically, near-fields have a powerful effect on their own sources, causing an increased "load" (decreased electrical reactance) in the source or transmitter, whenever energy is withdrawn from the EM field by a receiver.
Otherwise, these fields do not "propagate" freely out into space, carrying their energy away without distance-limit, but rather oscillate, returning their energy to the transmitter if it is not received by a receiver.[citation needed] By contrast, the EM far-field is composed of radiation that is free of the transmitter in the sense that (unlike the case in an electrical transformer) the transmitter requires the same power to send these changes in the fields out, whether the signal is immediately picked up or not. This distant part of the electromagnetic field is "electromagnetic radiation" (also called the far-field). The far-fields propagate (radiate) without allowing the transmitter to affect them. This causes them to be independent in the sense that their existence and their energy, after they have left the transmitter, is completely independent of both transmitter and receiver.

Due to conservation of energy, the amount of power passing through any spherical surface drawn around the source is the same. Because such a surface has an area proportional to the square of its distance from the source, the power density of EM radiation always decreases with the inverse square of the distance from the source; this is called the inverse-square law. (A short numerical illustration follows below.) This is in contrast to dipole parts of the EM field close to the source (the near-field), which vary in power according to an inverse cube power law, and thus do not transport a conserved amount of energy over distances, but instead fade with distance, with their energy (as noted) rapidly returning to the transmitter or being absorbed by a nearby receiver (such as a transformer secondary coil).

The far-field (EMR) depends on a different mechanism for its production than the near-field, and upon different terms in Maxwell's equations. Whereas the magnetic part of the near-field is due to currents in the source, the magnetic field in EMR is due only to the local change in the electric field. In a similar way, while the electric field in the near-field is due directly to the charges and charge-separation in the source, the electric field in EMR is due to a change in the local magnetic field. Both processes for producing electric and magnetic EMR fields have a different dependence on distance than do near-field dipole electric and magnetic fields. That is why the EMR type of EM field becomes dominant in power "far" from sources. The term "far from sources" refers to how far from the source (moving at the speed of light) any portion of the outward-moving EM field is located, by the time that source currents are changed by the varying source potential, and the source has therefore begun to generate an outwardly moving EM field of a different phase.[citation needed]

A more compact view of EMR is that the far-field that composes EMR is generally that part of the EM field that has traveled sufficient distance from the source that it has become completely disconnected from any feedback to the charges and currents that were originally responsible for it. Now independent of the source charges, the EM field, as it moves farther away, is dependent only upon the accelerations of the charges that produced it.
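Here is the short numerical illustration of the inverse-square law promised above (my own sketch; the 100 W source power is arbitrary): the power density falls as 1/r², but multiplying it by the growing spherical area recovers the same total power at every radius, which is just the conservation-of-energy statement made in the article.

```python
import math

P = 100.0                          # total radiated power in watts (arbitrary)

for r in (1.0, 10.0, 100.0):       # distances from the source, in metres
    area = 4 * math.pi * r ** 2    # area of the imaginary sphere
    density = P / area             # power density falls off as 1/r^2
    # density * area recovers the full 100 W at every radius:
    print(f"r = {r:6.1f} m  density = {density:.4e} W/m^2  "
          f"total through sphere = {density * area:.1f} W")
```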
The field no longer has a strong connection to the direct fields of the charges, or to the velocity of the charges (currents).[citation needed] In the Liénard–Wiechert potential formulation of the electric and magnetic fields due to motion of a single particle (according to Maxwell's equations), the terms associated with acceleration of the particle are those that are responsible for the part of the field that is regarded as electromagnetic radiation. By contrast, the term associated with the changing static electric field of the particle and the magnetic term that results from the particle's uniform velocity are both associated with the electromagnetic near-field, and do not comprise EM radiation.[citation needed]

Electromagnetic waves can be imagined as a self-propagating transverse oscillating wave of electric and magnetic fields. In a plane linearly polarized wave propagating from left to right, the electric and magnetic fields are in phase with each other, reaching minima and maxima together. Electrodynamics is the physics of electromagnetic radiation, and electromagnetism is the physical phenomenon associated with the theory of electrodynamics. Electric and magnetic fields obey the properties of superposition. Thus, a field due to any particular particle or time-varying electric or magnetic field contributes to the fields present in the same space due to other causes. Further, as they are vector fields, all magnetic and electric field vectors add together according to vector addition.[13] For example, in optics two or more coherent light waves may interact and by constructive or destructive interference yield a resultant irradiance deviating from the sum of the component irradiances of the individual light waves.[citation needed]

The electromagnetic fields of light are not affected by traveling through static electric or magnetic fields in a linear medium such as a vacuum. However, in nonlinear media, such as some crystals, interactions can occur between light and static electric and magnetic fields—these interactions include the Faraday effect and the Kerr effect.[14][15]

In refraction, a wave crossing from one medium to another of different density alters its speed and direction upon entering the new medium. The ratio of the refractive indices of the media determines the degree of refraction, and is summarized by Snell's law. Light of composite wavelengths (natural sunlight) disperses into a visible spectrum passing through a prism, because of the wavelength-dependent refractive index of the prism material (dispersion); that is, each component wave within the composite light is bent a different amount.[16]

EM radiation exhibits both wave properties and particle properties at the same time (see wave-particle duality). Both wave and particle characteristics have been confirmed in many experiments. Wave characteristics are more apparent when EM radiation is measured over relatively large timescales and over large distances while particle characteristics are more evident when measuring small timescales and distances. For example, when electromagnetic radiation is absorbed by matter, particle-like properties will be more obvious when the average number of photons in the cube of the relevant wavelength is much smaller than 1. It is not so difficult to experimentally observe non-uniform deposition of energy when light is absorbed; however, this alone is not evidence of "particulate" behavior.
Rather, it reflects the quantum nature of matter.[17] Demonstrating that the light itself is quantized, not merely its interaction with matter, is a more subtle affair. Some experiments display both the wave and particle natures of electromagnetic waves, such as the self-interference of a single photon.[18] When a single photon is sent through an interferometer, it passes through both paths, interfering with itself, as waves do, yet is detected by a photomultiplier or other sensitive detector only once. Electromagnetic waves can be polarized, reflected, refracted, diffracted or interfere with each other.[19][20][21]

Wave model

In homogeneous, isotropic media, electromagnetic radiation is a transverse wave,[22] meaning that its oscillations are perpendicular to the direction of energy transfer and travel. The electric and magnetic parts of the field stand in a fixed ratio of strengths to satisfy the two Maxwell equations that specify how one is produced from the other. In dissipation-less (lossless) media, these E and B fields are also in phase, with both reaching maxima and minima at the same points in space (see illustrations). A common misconception[citation needed] is that the E and B fields in electromagnetic radiation are out of phase because a change in one produces the other, and this would produce a phase difference between them as sinusoidal functions (as indeed happens in electromagnetic induction, and in the near-field close to antennas). However, in the far-field EM radiation which is described by the two source-free Maxwell curl operator equations, a more correct description is that a time-change in one type of field is proportional to a space-change in the other. These derivatives require that the E and B fields in EMR are in-phase.[citation needed]

An important aspect of light's nature is its frequency. The frequency of a wave is its rate of oscillation and is measured in hertz, the SI unit of frequency, where one hertz is equal to one oscillation per second. Light usually has multiple frequencies that sum to form the resultant wave. Different frequencies undergo different angles of refraction, a phenomenon known as dispersion. A monochromatic wave (a wave of a single frequency) consists of successive troughs and crests, and the distance between two adjacent crests or troughs is called the wavelength. Waves of the electromagnetic spectrum vary in size, from very long radio waves longer than a continent to very short gamma rays smaller than atomic nuclei. Frequency is inversely proportional to wavelength, according to the equation:[23]

v = fλ

where v is the speed of the wave (c in a vacuum or less in other media), f is the frequency and λ is the wavelength. As waves cross boundaries between different media, their speeds change but their frequencies remain constant. (A short numerical example follows below.)

Electromagnetic waves in free space must be solutions of Maxwell's electromagnetic wave equation. Two main classes of solutions are known, namely plane waves and spherical waves. The plane waves may be viewed as the limiting case of spherical waves at a very large (ideally infinite) distance from the source. Both types of waves can have a waveform which is an arbitrary time function (so long as it is sufficiently differentiable to conform to the wave equation). As with any time function, this can be decomposed by means of Fourier analysis into its frequency spectrum, or individual sinusoidal components, each of which contains a single frequency, amplitude and phase.
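Here is the short numerical example of v = fλ promised above (my own sketch, not part of the article): when a wave enters a medium its frequency is unchanged while its speed, and therefore its wavelength, shrinks by the refractive index. The value 1.33 is the standard refractive index of water.

```python
C = 299_792_458.0                  # speed of light in vacuum, m/s

def wavelength(freq_hz: float, refractive_index: float = 1.0) -> float:
    """Wavelength from v = f * lambda, with wave speed v = c / n in a medium."""
    return (C / refractive_index) / freq_hz

green = 5.45e14                    # green light, Hz (about 550 nm in vacuum)
print(f"in vacuum:         {wavelength(green) * 1e9:.0f} nm")
print(f"in water (n=1.33): {wavelength(green, 1.33) * 1e9:.0f} nm")
# The frequency is unchanged at the boundary; only speed and wavelength shrink.
```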
Such a component wave is said to be monochromatic. A monochromatic electromagnetic wave can be characterized by its frequency or wavelength, its peak amplitude, its phase relative to some reference phase, its direction of propagation, and its polarization.

Interference is the superposition of two or more waves resulting in a new wave pattern. If the fields have components in the same direction, they constructively interfere, while opposite directions cause destructive interference. An example of interference caused by EMR is electromagnetic interference (EMI) or, as it is more commonly known, radio-frequency interference (RFI).[citation needed] Additionally, multiple polarization signals can be combined (i.e. interfered) to form new states of polarization, which is known as parallel polarization state generation.[24] The energy in electromagnetic waves is sometimes called radiant energy.[25][26][27]

Particle model and quantum theory

An anomaly arose in the late 19th century involving a contradiction between the wave theory of light and measurements of the electromagnetic spectra that were being emitted by thermal radiators known as black bodies. Physicists struggled with this problem unsuccessfully for many years. It later became known as the ultraviolet catastrophe. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta. In 1905, Albert Einstein proposed that light quanta be regarded as real particles. Later the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by

E = hf = hc/λ

where h is Planck's constant, λ is the wavelength and c is the speed of light. This is sometimes known as the Planck–Einstein equation.[28] In quantum theory (see first quantization) the energy of the photons is thus directly proportional to the frequency of the EMR wave.[29]

The source of Einstein's proposal that light was composed of particles (or could act as particles in some circumstances) was an experimental anomaly not explained by the wave theory: the photoelectric effect, in which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations appeared to contradict the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein explained this puzzle by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially with great skepticism among established physicists. Eventually Einstein's explanation was accepted as new particle-like behavior of light was observed, such as the Compton effect.[citation needed][30] As a photon is absorbed by an atom, it excites the atom, elevating an electron to a higher energy level (one that is on average farther from the nucleus).
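To put numbers on the photoelectric logic just described, here is a minimal sketch (my own illustration; the 2.3 eV work function is an assumed, sodium-like value for demonstration): a single photon ejects an electron only if hf exceeds the metal's work function, regardless of the beam's intensity.

```python
H = 6.62607015e-34                 # Planck's constant, J*s
EV = 1.602176634e-19               # joules per electron volt

def surplus_energy_ev(freq_hz: float, work_function_ev: float) -> float:
    """Einstein's photoelectric relation: kinetic energy = h*f - W."""
    photon_ev = H * freq_hz / EV
    return photon_ev - work_function_ev

W = 2.3                            # assumed work function in eV (sodium-like)

for label, f in [("red light (4.3e14 Hz)", 4.3e14),
                 ("violet light (7.5e14 Hz)", 7.5e14)]:
    surplus = surplus_energy_ev(f, W)
    if surplus > 0:
        print(f"{label}: electron ejected with {surplus:.2f} eV to spare")
    else:
        print(f"{label}: no electrons, however intense the beam")
```

Red light falls below the threshold and ejects nothing however bright it is, while dimmer violet light still ejects electrons, which is exactly the frequency dependence that puzzled the wave theory.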
When an electron in an excited molecule or atom descends to a lower energy level, it emits a photon of light at a frequency corresponding to the energy difference. Since the energy levels of electrons in atoms are discrete, each element and each molecule emits and absorbs its own characteristic frequencies. Immediate photon emission is called fluorescence, a type of photoluminescence. An example is visible light emitted from fluorescent paints, in response to ultraviolet (blacklight). Many other fluorescent emissions are known in spectral bands other than visible light. Delayed emission is called phosphorescence.[31][32]

Wave–particle duality

The modern theory that explains the nature of light includes the notion of wave–particle duality. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned using an object with a large mass. A bold proposition by Louis de Broglie in 1924 led the scientific community to realize that matter (e.g. electrons) also exhibits wave–particle duality.[33]

Wave and particle effects of electromagnetic radiation

Together, wave and particle effects fully explain the emission and absorption spectra of EM radiation. The matter-composition of the medium through which the light travels determines the nature of the absorption and emission spectrum. These bands correspond to the allowed energy levels in the atoms. Dark bands in the absorption spectrum are due to the atoms in an intervening medium between source and observer. The atoms absorb certain frequencies of the light between emitter and detector/eye, then emit them in all directions. A dark band appears to the detector, due to the radiation scattered out of the beam. For instance, dark bands in the light emitted by a distant star are due to the atoms in the star's atmosphere. A similar phenomenon occurs for emission, which is seen when an emitting gas glows due to excitation of the atoms from any mechanism, including heat. As electrons descend to lower energy levels, a spectrum is emitted that represents the jumps between the energy levels of the electrons, but lines are seen because again emission happens only at particular energies after excitation.[34] An example is the emission spectrum of nebulae.[citation needed] Rapidly moving electrons are most sharply accelerated when they encounter a region of force, so they are responsible for producing much of the highest frequency electromagnetic radiation observed in nature.

These phenomena can aid various chemical determinations for the composition of gases lit from behind (absorption spectra) and for glowing gases (emission spectra). Spectroscopy (for example) determines what chemical elements comprise a particular star. Spectroscopy is also used in the determination of the distance of a star, using the red shift.[35]

Propagation speed

When any wire (or other conducting object such as an antenna) conducts alternating current, electromagnetic radiation is propagated at the same frequency as the current. In many such situations it is possible to identify an electrical dipole moment that arises from separation of charges due to the exciting electrical potential, and this dipole moment oscillates in time, as the charges move back and forth.
This oscillation at a given frequency gives rise to changing electric and magnetic fields, which then set the electromagnetic radiation in motion.[citation needed]

At the quantum level, electromagnetic radiation is produced when the wavepacket of a charged particle oscillates or otherwise accelerates. Charged particles in a stationary state do not move, but a superposition of such states may result in a transition state that has an electric dipole moment that oscillates in time. This oscillating dipole moment is responsible for the phenomenon of radiative transition between quantum states of a charged particle. Such states occur (for example) in atoms when photons are radiated as the atom shifts from one stationary state to another.[citation needed]

As a wave, light is characterized by a velocity (the speed of light), wavelength, and frequency. As particles, light is a stream of photons. Each has an energy related to the frequency of the wave given by Planck's relation E = hf, where E is the energy of the photon, h is Planck's constant, 6.626 × 10⁻³⁴ J·s, and f is the frequency of the wave.[36]

One rule is obeyed regardless of circumstances: EM radiation in a vacuum travels at the speed of light, relative to the observer, regardless of the observer's velocity. (This observation led to Einstein's development of the theory of special relativity.)[citation needed] In a medium (other than vacuum), velocity factor or refractive index are considered, depending on frequency and application. Both of these are ratios of the speed in a medium to speed in a vacuum.[citation needed]

Special theory of relativity

By the late nineteenth century, various experimental anomalies could not be explained by the simple wave theory. One of these anomalies involved a controversy over the speed of light. The speed of light and other EMR predicted by Maxwell's equations did not appear unless the equations were modified in a way first suggested by FitzGerald and Lorentz (see history of special relativity), or else otherwise that speed would depend on the speed of observer relative to the "medium" (called luminiferous aether) which supposedly "carried" the electromagnetic wave (in a manner analogous to the way air carries sound waves). Experiments failed to find any observer effect. In 1905, Einstein proposed that space and time appeared to be velocity-changeable entities for light propagation and all other processes and laws. These changes accounted for the constancy of the speed of light and all electromagnetic radiation, from the viewpoints of all observers—even those in relative motion.

History of discovery

Electromagnetic radiation of wavelengths other than those of visible light was discovered in the early 19th century. The discovery of infrared radiation is ascribed to astronomer William Herschel, who published his results in 1800 before the Royal Society of London.[37] Herschel used a glass prism to refract light from the Sun and detected invisible rays that caused heating beyond the red part of the spectrum, through an increase in the temperature recorded with a thermometer. These "calorific rays" were later termed infrared.[38]

In 1801, German physicist Johann Wilhelm Ritter discovered ultraviolet in an experiment similar to Herschel's, using sunlight and a glass prism. Ritter noted that invisible rays near the violet edge of a solar spectrum dispersed by a triangular prism darkened silver chloride preparations more quickly than did the nearby violet light.
Ritter's experiments were an early precursor to what would become photography. Ritter noted that the ultraviolet rays (which at first were called "chemical rays") were capable of causing chemical reactions.[39]

In 1862–64 James Clerk Maxwell developed equations for the electromagnetic field which suggested that waves in the field would travel with a speed that was very close to the known speed of light. Maxwell therefore suggested that visible light (as well as invisible infrared and ultraviolet rays by inference) all consisted of propagating disturbances (or radiation) in the electromagnetic field. Radio waves were first produced deliberately by Heinrich Hertz in 1887, using electrical circuits calculated to produce oscillations at a much lower frequency than that of visible light, following recipes for producing oscillating charges and currents suggested by Maxwell's equations. Hertz also developed ways to detect these waves, and produced and characterized what were later termed radio waves and microwaves.[40]:286,7

Wilhelm Röntgen discovered and named X-rays. After experimenting with high voltages applied to an evacuated tube on 8 November 1895, he noticed a fluorescence on a nearby plate of coated glass. In one month, he discovered X-rays' main properties.[40]:307

The last portion of the EM spectrum to be discovered was associated with radioactivity. Henri Becquerel found that uranium salts caused fogging of an unexposed photographic plate through a covering paper in a manner similar to X-rays, and Marie Curie discovered that only certain elements gave off these rays of energy, soon discovering the intense radiation of radium. The radiation from pitchblende was differentiated into alpha rays (alpha particles) and beta rays (beta particles) by Ernest Rutherford through simple experimentation in 1899, but these proved to be charged particulate types of radiation. However, in 1900 the French scientist Paul Villard discovered a third neutrally charged and especially penetrating type of radiation from radium, and after he described it, Rutherford realized it must be yet a third type of radiation, which in 1903 Rutherford named gamma rays. In 1910 British physicist William Henry Bragg demonstrated that gamma rays are electromagnetic radiation, not particles, and in 1914 Rutherford and Edward Andrade measured their wavelengths, finding that they were similar to X-rays but with shorter wavelengths and higher frequency, although a 'cross-over' between X and gamma rays makes it possible to have X-rays with a higher energy (and hence shorter wavelength) than gamma rays and vice versa.
The origin of the ray differentiates them: gamma rays tend to be natural phenomena originating from the unstable nucleus of an atom, while X-rays are electrically generated (and hence man-made) unless they result from bremsstrahlung X-radiation caused by the interaction of fast-moving particles (such as beta particles) colliding with certain materials, usually of higher atomic numbers.[40]:308,9

Electromagnetic spectrum

[Figure: the electromagnetic spectrum with visible light highlighted. Key: γ = gamma rays; HX = hard X-rays; SX = soft X-rays; EUV = extreme ultraviolet; NUV = near ultraviolet; visible light (colored bands); NIR = near infrared; MIR = mid infrared; FIR = far infrared; EHF = extremely high frequency (microwaves); SHF = super-high frequency (microwaves); UHF = ultrahigh frequency (radio waves); VHF = very high frequency (radio); HF = high frequency (radio); MF = medium frequency (radio); LF = low frequency (radio); VLF = very low frequency (radio); VF = voice frequency; ULF = ultra-low frequency (radio); SLF = super-low frequency (radio); ELF = extremely low frequency (radio).]

EM radiation (the designation 'radiation' excludes static electric and magnetic and near fields) is classified by wavelength into radio, microwave, infrared, visible, ultraviolet, X-rays and gamma rays. Arbitrary electromagnetic waves can be expressed by Fourier analysis in terms of sinusoidal monochromatic waves, which in turn can each be classified into these regions of the EMR spectrum.

For certain classes of EM waves, the waveform is most usefully treated as random, and then spectral analysis must be done by slightly different mathematical techniques appropriate to random or stochastic processes. In such cases, the individual frequency components are represented in terms of their power content, and the phase information is not preserved. Such a representation is called the power spectral density of the random process. Random electromagnetic radiation requiring this kind of analysis is, for example, encountered in the interior of stars, and in certain other very wideband forms of radiation such as the zero-point wave field of the electromagnetic vacuum.

The behavior of EM radiation and its interaction with matter depends on its frequency, and changes qualitatively as the frequency changes. Lower frequencies have longer wavelengths, and higher frequencies have shorter wavelengths, and are associated with photons of higher energy. There is no fundamental limit known to these wavelengths or energies, at either end of the spectrum, although photons with energies near the Planck energy or exceeding it (far too high to have ever been observed) will require new physical theories to describe.

Radio and microwave

Radio waves have the least amount of energy and the lowest frequency. When radio waves impinge upon a conductor, they couple to the conductor, travel along it and induce an electric current on the conductor surface by moving the electrons of the conducting material in correlated bunches of charge. Such effects can cover macroscopic distances in conductors (such as radio antennas), since the wavelength of radiowaves is long. Electromagnetic radiation phenomena with wavelengths ranging from as long as one meter to as short as one millimeter are called microwaves, with frequencies between 300 MHz (0.3 GHz) and 300 GHz. At radio and microwave frequencies, EMR interacts with matter largely as a bulk collection of charges which are spread out over large numbers of affected atoms.
In electrical conductors, the induced bulk movement of charges (electric currents) described above results in absorption of the EMR, or else separations of charges that cause generation of new EMR (effective reflection of the EMR). An example is absorption or emission of radio waves by antennas, or absorption of microwaves by water or other molecules with an electric dipole moment, as for example inside a microwave oven. These interactions produce either electric currents or heat, or both.

Infrared

Like radio and microwave, infrared (IR) is also reflected by metals (as is most EMR, well into the ultraviolet range). However, unlike lower-frequency radio and microwave radiation, infrared EMR commonly interacts with dipoles present in single molecules, which change as atoms vibrate at the ends of a single chemical bond. It is consequently absorbed by a wide range of substances, causing them to increase in temperature as the vibrations dissipate as heat. The same process, run in reverse, causes bulk substances to radiate in the infrared spontaneously (see the thermal radiation section below). Infrared radiation is divided into spectral subregions. While different subdivision schemes exist,[41][42] the spectrum is commonly divided as near-infrared (0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), mid-wavelength infrared (3–8 μm), long-wavelength infrared (8–15 μm) and far infrared (15–1000 μm).[43]

Visible light

Natural sources produce EM radiation across the spectrum. EM radiation with a wavelength between approximately 400 nm and 700 nm is directly detected by the human eye and perceived as visible light. Other wavelengths, especially nearby infrared (longer than 700 nm) and ultraviolet (shorter than 400 nm), are also sometimes referred to as light. As frequency increases into the visible range, photons have enough energy to change the bond structure of some individual molecules. It is not a coincidence that this happens in the visible range, as the mechanism of vision involves the change in bonding of a single molecule, retinal, which absorbs a single photon. The change in retinal causes a change in the shape of the rhodopsin protein it is contained in, which starts the biochemical process that causes the retina of the human eye to sense the light. Photosynthesis becomes possible in this range as well, for the same reason: a single molecule of chlorophyll is excited by a single photon. In plant tissues that conduct photosynthesis, carotenoids act to quench electronically excited chlorophyll produced by visible light in a process called non-photochemical quenching, to prevent reactions that would otherwise interfere with photosynthesis at high light levels. Animals that detect infrared make use of small packets of water that change temperature, in an essentially thermal process that involves many photons. Infrared, microwaves and radio waves are known to damage molecules and biological tissue only by bulk heating, not excitation from single photons of the radiation. Visible light is able to affect only a tiny percentage of all molecules, and usually not in a permanent or damaging way; rather, the photon excites an electron which then emits another photon when returning to its original position. This is the source of color produced by most dyes. Retinal is an exception: when a photon is absorbed, the retinal permanently changes structure from cis to trans and requires a protein to convert it back, i.e. to reset it so it can function as a light detector again.
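Since the argument above leans on the energy carried by a single photon, a small numeric check helps. Using E = hc/λ (equivalently, E in eV ≈ 1239.84 / λ in nm), visible photons carry roughly 1.8–3.1 eV, just enough to alter molecular bonds. A minimal sketch, with constants from the SI definitions and illustrative wavelength choices, follows; it also reproduces the ~10 eV / 124 nm correspondence used in the ultraviolet discussion below.

```python
# Photon energy from vacuum wavelength via E = h*c / lambda.
# Constants are the exact SI (2019) values; the printed checks reproduce
# figures quoted in the surrounding text.

H = 6.62607015e-34    # Planck constant, J*s
C = 299792458.0       # speed of light, m/s
EV = 1.602176634e-19  # joules per electron volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a vacuum wavelength given in nanometers."""
    return H * C / (wavelength_nm * 1e-9) / EV

if __name__ == "__main__":
    print(f"700 nm (red edge of visible): {photon_energy_ev(700):.2f} eV")
    print(f"400 nm (violet edge):         {photon_energy_ev(400):.2f} eV")
    print(f"124 nm (extreme UV):          {photon_energy_ev(124):.1f} eV")
```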
Limited evidence indicates that some reactive oxygen species are created by visible light in skin, and that these may have some role in photoaging, in the same manner as ultraviolet A.[44]

Ultraviolet

As frequency increases into the ultraviolet, photons carry enough energy (about three electron volts or more) to excite certain doubly bonded molecules into permanent chemical rearrangement. In DNA, this causes lasting damage. DNA is also indirectly damaged by reactive oxygen species produced by ultraviolet A (UVA), whose energy is too low to damage DNA directly. This is why ultraviolet at all wavelengths can damage DNA and is capable of causing cancer and (for UVB) skin burns (sunburn) that are far worse than would be produced by simple heating (temperature increase) effects. This property of causing molecular damage out of proportion to heating effects is characteristic of all EMR with frequencies at the visible light range and above. These properties of high-frequency EMR are due to quantum effects that permanently damage materials and tissues at the molecular level.[citation needed] At the higher end of the ultraviolet range, the energy of photons becomes large enough to impart enough energy to electrons to cause them to be liberated from the atom, in a process called photoionisation. The energy required for this is always larger than about 10 electron volts (eV), corresponding to wavelengths shorter than 124 nm (some sources suggest a more realistic cutoff of 33 eV, which is the energy required to ionize water). This high end of the ultraviolet spectrum, with energies in the approximate ionization range, is sometimes called "extreme UV." Ionizing UV is strongly filtered by the Earth's atmosphere.[citation needed]

X-rays and gamma rays

Electromagnetic radiation composed of photons that carry minimum-ionization energy or more (which includes the entire spectrum with shorter wavelengths) is therefore termed ionizing radiation. (Many other kinds of ionizing radiation are made of non-EM particles.) Electromagnetic-type ionizing radiation extends from the extreme ultraviolet to all higher frequencies and shorter wavelengths, which means that all X-rays and gamma rays qualify. These are capable of the most severe types of molecular damage, which can happen in biology to any type of biomolecule, including mutation and cancer, and often at great depths below the skin, since the higher end of the X-ray spectrum, and all of the gamma ray spectrum, penetrate matter.

Atmosphere and magnetosphere

[Figure: rough plot of Earth's atmospheric absorption and scattering (or opacity) of various wavelengths of electromagnetic radiation.]

Most UV and X-rays are blocked by absorption first from molecular nitrogen, and then (for wavelengths in the upper UV) from the electronic excitation of dioxygen and finally ozone at the mid-range of UV. Only about 30% of the Sun's ultraviolet light reaches the ground, and almost all of this is in the lower-energy (UVA) range. Visible light is well transmitted in air, as it is not energetic enough to excite nitrogen, oxygen, or ozone, but too energetic to excite molecular vibrational frequencies of water vapor.[citation needed] Absorption bands in the infrared are due to modes of vibrational excitation in water vapor.
However, at energies too low to excite water vapor, the atmosphere becomes transparent again, allowing free transmission of most microwave and radio waves.[citation needed] Finally, at radio wavelengths longer than 10 meters or so (about 30 MHz), the air in the lower atmosphere remains transparent to radio, but plasma in certain layers of the ionosphere begins to interact with radio waves (see skywave). This property allows some longer wavelengths (100 meters or 3 MHz) to be reflected and results in shortwave radio beyond line-of-sight. However, certain ionospheric effects begin to block incoming radio waves from space when their frequency is less than about 10 MHz (wavelength longer than about 30 meters).[45]

Thermal and electromagnetic radiation as a form of heat

The basic structure of matter involves charged particles bound together. When electromagnetic radiation impinges on matter, it causes the charged particles to oscillate and gain energy. The ultimate fate of this energy depends on the context. It could be immediately re-radiated and appear as scattered, reflected, or transmitted radiation. It may get dissipated into other microscopic motions within the matter, coming to thermal equilibrium and manifesting itself as thermal energy, or even kinetic energy, in the material. With a few exceptions related to high-energy photons (such as fluorescence, harmonic generation, photochemical reactions, and the photovoltaic effect for ionizing radiations at far ultraviolet, X-ray and gamma radiation), absorbed electromagnetic radiation simply deposits its energy by heating the material. This happens for infrared, microwave and radio wave radiation. Intense radio waves can thermally burn living tissue and can cook food. In addition to infrared lasers, sufficiently intense visible and ultraviolet lasers can easily set paper afire.[46][citation needed] Ionizing radiation creates high-speed electrons in a material and breaks chemical bonds, but after these electrons collide many times with other atoms, most of the energy eventually becomes thermal energy, all in a tiny fraction of a second. That the bonds are broken before the energy thermalizes is what makes ionizing radiation far more dangerous per unit of energy than non-ionizing radiation. This caveat also applies to UV, even though almost all of it is not ionizing, because UV can damage molecules through electronic excitation, which does damage far out of proportion to the accompanying heating.[46][citation needed] Infrared radiation in the spectral distribution of a black body is usually considered a form of heat, since it has an equivalent temperature and is associated with an entropy change per unit of thermal energy. However, "heat" is a technical term in physics and thermodynamics and is often confused with thermal energy. Any type of electromagnetic energy can be transformed into thermal energy in interaction with matter. Thus, any electromagnetic radiation can "heat" (in the sense of increasing the thermal energy of) a material when it is absorbed.[47] The inverse, or time-reversed, process of absorption is thermal radiation. Much of the thermal energy in matter consists of random motion of charged particles, and this energy can be radiated away from the matter.
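To put a number on the thermal emission just described, the following sketch evaluates Planck's law for the spectral radiance of an idealized blackbody. Real materials emit less (emissivity below 1), so this is an illustration rather than a material model; it shows why a room-temperature object radiates chiefly in the infrared.

```python
import math

# Planck's law: spectral radiance B(lambda, T) of a blackbody,
# in W per m^2 per steradian per meter of wavelength.
H = 6.62607015e-34  # Planck constant, J*s
C = 299792458.0     # speed of light, m/s
KB = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength_m: float, temperature_k: float) -> float:
    """Blackbody spectral radiance B(lambda, T) from Planck's law."""
    x = H * C / (wavelength_m * KB * temperature_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

if __name__ == "__main__":
    # A ~300 K object peaks near 10 micrometers (Wien's law: ~2898 um*K / T),
    # squarely in the infrared, consistent with the discussion above.
    T = 300.0
    for wl_um in (1.0, 5.0, 10.0, 20.0, 50.0):
        b = planck_radiance(wl_um * 1e-6, T)
        print(f"lambda = {wl_um:5.1f} um: B = {b:.3e} W m^-2 sr^-1 m^-1")
```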
Thermal radiation emitted in this way may subsequently be absorbed by another piece of matter, with the deposited energy heating the material.[48] The electromagnetic radiation in an opaque cavity at thermal equilibrium is effectively a form of thermal energy, having maximum radiation entropy.[49]

Biological effects

Bioelectromagnetics is the study of the interactions and effects of EM radiation on living organisms. The effects of electromagnetic radiation upon living cells, including those in humans, depend upon the radiation's power and frequency. For low-frequency radiation (radio waves to visible light) the best-understood effects are those due to radiation power alone, acting through heating when radiation is absorbed. For these thermal effects, frequency is important as it affects the intensity of the radiation and penetration into the organism (for example, microwaves penetrate better than infrared). It is widely accepted that low-frequency fields that are too weak to cause significant heating could not possibly have any biological effect.[50] Despite these commonly accepted results, some research has been conducted to show that weaker non-thermal electromagnetic fields (including weak ELF magnetic fields, although the latter do not strictly qualify as EM radiation[50][51][52]) and modulated RF and microwave fields have biological effects.[53][54][55] Fundamental mechanisms of the interaction between biological material and electromagnetic fields at non-thermal levels are not fully understood.[50] The World Health Organization has classified radio frequency electromagnetic radiation as Group 2B (possibly carcinogenic).[56][57] This group contains possible carcinogens such as lead, DDT, and styrene. For example, epidemiological studies looking for a relationship between cell phone use and brain cancer development have been largely inconclusive, save to demonstrate that the effect, if it exists, cannot be a large one. At higher frequencies (visible and beyond), the effects of individual photons begin to become important, as these now have enough energy individually to directly or indirectly damage biological molecules.[58] All UV frequencies have been classed as Group 1 carcinogens by the World Health Organization. Ultraviolet radiation from sun exposure is the primary cause of skin cancer.[59][60] Thus, at UV frequencies and higher (and probably somewhat also in the visible range),[44] electromagnetic radiation does more damage to biological systems than simple heating predicts. This is most obvious in the "far" (or "extreme") ultraviolet. UV, together with X-ray and gamma radiation, is referred to as ionizing radiation due to the ability of photons of this radiation to produce ions and free radicals in materials (including living tissue). Since such radiation can severely damage life at energy levels that produce little heating, it is considered far more dangerous (in terms of damage produced per unit of energy, or power) than the rest of the electromagnetic spectrum.
* * * * * * * *

Quantum electrodynamics

In technical terms, QED can be described as a perturbation theory of the electromagnetic quantum vacuum. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[1]:Ch1 The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who (during the 1920s) was able to compute the coefficient of spontaneous emission of an atom.[2] Difficulties with the theory increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom,[7] now known as the Lamb shift, and of the magnetic moment of the electron.[8] These experiments exposed discrepancies which the theory was unable to explain. A first indication of a possible way out was given by Hans Bethe in 1947,[9] after attending the Shelter Island Conference.[10] While he was traveling by train from the conference to Schenectady he made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford.[9] Despite the limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result in good agreement with experiments. This procedure was named renormalization. Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga,[11] Julian Schwinger,[12][13] Richard Feynman[14][15][16] and Freeman Dyson,[17][18] it was finally possible to get fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Shin'ichirō Tomonaga, Julian Schwinger and Richard Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area.[19] Their contributions, and those of Freeman Dyson, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed very different from the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Freeman Dyson later showed that the two approaches were equivalent.[17] Renormalization, the need to attach a physical meaning to certain divergences appearing in the theory through integrals, has subsequently become one of the fundamental aspects of quantum field theory and has come to be seen as a criterion for a theory's general acceptability. Even though renormalization works very well in practice, Feynman was never entirely comfortable with its mathematical validity, even referring to renormalization as a "shell game" and "hocus pocus".[1]:128 QED has served as the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in 1970s work by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek.
Building on the pioneering work of Schwinger, Gerald Guralnik, Dick Hagen, and Tom Kibble,[20][21] Peter Higgs, Jeffrey Goldstone, and others, Sheldon Lee Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force.

Feynman's view of quantum electrodynamics

Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter,[1] a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions:[1]:85 a photon goes from one place and time to another, an electron goes from one place and time to another, and an electron emits or absorbs a photon at a certain place and time. As well as a visual shorthand for these actions (the elements of Feynman diagrams), Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of the total probability amplitude. If a photon moves from one place and time A to another place and time B, the associated quantity is written in Feynman's shorthand as P(A to B). The similar quantity for an electron moving from C to D is written E(C to D). The quantity that tells us about the probability amplitude for the emission or absorption of a photon he calls j. This is related to, but not the same as, the measured electron charge e.[1]:91 QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman. The basic rules of probability amplitudes that will be used are:[1]:93 1. If an event can happen in a variety of different ways, then its probability amplitude is the sum of the probability amplitudes of the possible ways. 2. If a process involves a number of independent sub-processes, then its probability amplitude is the product of the component probability amplitudes.

Basic constructions

Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule 2 above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability.[citation needed] But there are other ways in which the end result could come about.
The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertices – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule 1 above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering.[citation needed] There is an infinite number of other intermediate processes in which more and more photons are absorbed and/or emitted. For each of these possibilities, there is a Feynman diagram describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude.

Probability amplitudes

Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow.
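Before polarization enters the picture, the bookkeeping just described can be mimicked with ordinary complex arithmetic. The sketch below is a toy only: every amplitude value in it is invented, and in real QED the quantities P(A to B) and E(C to D) are the propagators discussed next, not constants. It simply demonstrates rule 2 (multiply amplitudes of independent sub-processes), rule 1 (add amplitudes of alternatives), and the probability as the squared modulus of the total.

```python
# Toy illustration of Feynman's two amplitude rules using complex numbers.
# All numerical amplitude values are invented for demonstration; in QED they
# would come from the photon and electron propagators (and from integration
# over intermediate points like E and F).

# Hypothetical amplitudes for the two elementary motions in the simplest process:
E_A_to_C = 0.5 + 0.2j   # electron goes from A to C (made-up value)
P_B_to_D = 0.4 - 0.1j   # photon goes from B to D (made-up value)

# Rule 2: independent sub-processes -> multiply amplitudes.
simplest = E_A_to_C * P_B_to_D

# Rule 1: alternative ways the same outcome can occur -> add amplitudes.
# Pretend these are the (already integrated) amplitudes of two more complicated
# alternatives, e.g. the photon being absorbed and re-emitted along the way.
alternative_1 = 0.05 + 0.03j
alternative_2 = -0.02 + 0.01j

total = simplest + alternative_1 + alternative_2

# Probability = squared modulus (squared arrow length) of the total amplitude.
probability = abs(total) ** 2
print(f"total amplitude = {total:.4f}, probability = {probability:.4f}")
```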
So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by P = |v + w|^2, when the process can happen in either of two alternative ways, or by P = |v w|^2, when it proceeds as two successive sub-processes (addition and multiplication of probability amplitudes as complex numbers). That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows.[1]:120–121 There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping. Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describes the behavior of the electron's probability amplitude, and of Maxwell's equations, which describe the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to the notation commonly used in the standard literature is P(A to B) → D_F(x_B − x_A) and E(C to D) → S_F(x_D − x_C), where a shorthand symbol such as x_A stands for the four real numbers that give the time and position in three dimensions of the point labeled A.

Mass renormalization

A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process".[1]:128

* * * * * * * *

Malays J Med Sci. 2020 Feb; 27(1): 1–5. Published online 2020 Feb 27. doi: 10.21315/mjms2020.27.1.1. PMCID: PMC7053547. PMID: 32158340.

Quantum Physics Perspective on Electromagnetic and Quantum Fields Inside the Brain

Quantum Physics for the Brain

Quantum physics is the branch of physics that deals with tiny objects and the quantisation (packets of energy or interaction) of various entities. In contrast to classical Newtonian physics, which deals with large objects, quantum physics or mechanics is a science of small-scale objects such as atoms and subatomic particles. In general, the central elements of quantum physics are: i) particle–wave duality, which holds for quantum entities such as elementary particles and even for compound particles, for instance atoms and molecules; ii) quantum entanglement.
It can be defined as a phenomenon in which the quantum states of two or more objects have to be described with reference to each other, even though the individual objects may be spatially separated; iii) coherence and decoherence. Coherence refers to waves that have a constant phase difference, the same frequency, or the same waveform morphology, whilst decoherence means that this coherence is lost; iv) superposition. It is a complex property of a wave: within a wave, there can be many other smaller waves; v) quantum tunneling. It is a phenomenon where a quantum particle passes through a barrier; vi) quantum uncertainty. It is also known as Heisenberg's uncertainty principle. It states that the more precisely the position of some particle is determined, the less precisely its momentum (velocity) can be known, and vice versa. Thus, quantum physics is seen as dealing with ambiguity in the physical world. Based upon the first principle, the human brain can be viewed entirely as either in particle or in wave form. The particle perspective portrays the brain in anatomical form, whilst the wave perspective depicts the brain in wave form. Waves of the brain can be classified further into two main entities: i) brainwaves that are commonly detected or studied using electroencephalography (EEG) or magnetoencephalography (MEG) and are based on electromagnetic principles; and ii) the wave perspective of the brain's anatomical particles. The first waves, or brainwaves, can be named electric waves with energy or a field in them, thus stated here as the electromagnetic field (EMF); for the second waves, quantum waves or quantum field (QF) is used. Therefore, the brain can be viewed as either i) an anatomical brain with brainwaves (classical) or ii) all waves with onefield or energy, but this field can be divided further into EMF (large-object physics) and QF (small-object physics). In this editorial commentary, the author describes briefly these two concepts of brain fields and invites readers to think of quantum physics as a science that is capable of describing not only the behaviour of subatomic particles but also the behaviour of people (the people's mind).

Electromagnetic and Quantum Fields Inside the Brain

A physiological principle states that neurons communicate with each other using electrical signals. The electrical signal, called an action potential, travels along the axon and triggers neurotransmitter release at the synapse, so that further electrical signals can be passed to other neurons. With an electrical signal, there is always the simultaneous presence of a magnetic field. Thus, this type of communication is known as EMF communication. In contrast, the QF type of communication considers all brain elements to be waves; the energy is still wavy (ups and downs) and perhaps in a diffused pattern with more complex networks. In this perspective, EMF of the brain is viewed as arising from: i) a projected stimulus outside the brain, such as our five senses of stimuli (seeing, hearing, touching, tasting and smelling); ii) the brain itself, such as in virtual reality, dreaming and hypnosis (without external stimuli); and iii) non-cognition, such as pure motor movements. On the other hand, brain QF is viewed as onefield, or wholeness or oneness with our universe. Thus, it is commonly regarded as having one consciousness. With this understanding, the consciousness concept in the quantum realm is not restricted only to the human brain. In other words, we may say QF permeates the whole of our universe.
The quantum entity that suits this permeating energy concept is light, whilst the non-quantum entity (Newtonian physics) that suits the focused or limited projection is electricity. Hence, EMF has an electric feature whilst QF has a light feature. This is summarised in Figure 1 and Table 1.

[Figure 1: Limited consciousness for the brain and limited projection for the universe form a principle for the collapse of the wave function of the particle. Brain and universe are permeated by the quantum field, whilst the electromagnetic field inside the brain arises from a discrete or limited projection from our universe.]

Table 1: General features of brain EMF and brain QF
1. Wave pattern. EMF: presence of a pilot/directional wave. QF: diffused waves.
2. Wave characteristics. EMF: high-frequency wave. QF: low-frequency wave.
3. Wavelength. EMF: short wavelength. QF: long wavelength.
4. Quantum concept. EMF: deterministic (locatable). QF: non-deterministic (unlocatable, varied).
5. Dimension. EMF: high dimension (electric). QF: low dimension (light).
6. Physics concept. EMF: Bohmian mechanics. QF: quantum mechanics.
7. Brain network. EMF: simple network (few nodes). QF: complex network (many nodes/varied).
8. Symmetry. EMF: more asymmetry. QF: more symmetry.
9. Evoked response. EMF: large evoked response with few stimuli (few trials/tests). QF: smaller evoked response, needing a higher number of stimuli (multiple trials/tests).
11. Wholeness/oneness/onefield concept. EMF: no (it is a projected/limited field). QF: yes (spreading or permeating the whole of the universe/field).
12. Relation to psychiatry. EMF: less relevant. QF: more relevant, because QF is related to the wholeness or reality or one-consciousness concept.
13. The way to alter the network. EMF: focus on a few electrodes (deep brain stimulation (DBS)-like electrode for Parkinson's etc.). QF: smaller and multiple electrodes (toothbrush-like electrodes).
14. The way to alter the network using frequency. EMF: high frequency is preferred in most cases (inhibition). QF: low (stimulation) and high frequency stimulation, depending on clinical manifestations.
Brain function (combination of EMF and QF, the two brain fields/energies):
A. Brain function (motor, sensory, vision, sound, touch) and its impairment. Non-cognitive impairment, such as stroke affecting motor, sensory, vision, sound or touch. EMF is more affected than QF. Associated with degree of impairment.
B. Brain function (language, emotion, memory, attention, planning etc.) and its impairment. Cognitive impairment of language, emotion, memory, attention, planning etc. QF is affected significantly, together with EMF. Associated with degree of impairment.
C. Brain function and psychosis. Psychotic manifestations such as auditory or visual hallucination, thought insertion, delusions etc. QF is more affected than EMF. Yes or no (presence or not; not associated with degree).

The two aforementioned concepts (i.e. projected or internally arising stimuli for EMF, and the brain as part of one consciousness) unintentionally introduce a 'limited' principle for both our universe and brain. For our universe, the projected stimulus is a limited stimulus from only a certain aspect, area or dimension of our universe; and for the brain, the limited principle applies to consciousness: our brain is part of one consciousness or has limited consciousness. With this 'limitation principle of our universe and brain', our brain (3D vision) cannot see the wave function of a particle or atom; we can only see them in particle, atom, molecule, matter or object form.
In other words, we may say that the partial consciousness we have collapses the wave function of a particle and thus limits our perception to only three-dimensional vision (particle, atom, molecule, matter).

EMF and QF in Relation to Medicine

With reference to Table 1, brain EMF is based on an electric signal with pilot or directional, high-frequency, short-wavelength waves that are locatable or determinable using few stimuli or trials, with a notably large evoked electrical (or magnetic) response. Thus, it seems to cover our five basic senses, with simpler brain networks. Conversely, light (also part of the electromagnetic spectrum) is regarded as the main entity for brain QF. It consists of diffused or non-directional, lower-frequency and longer-wavelength waves that are unlocatable, non-deterministic or varied. Other features of QF are greater symmetry (a light feature) and more complex networks. Thus, QF may play a bigger role not in our five common senses, but in our brain cognitions. With these features (varied, complex, diffused), neuroplasticity is thought to happen more readily in cognitive functions (language, emotion, memory, attention etc.) than in our five senses or motor functions. For these reasons, patients suffering from motor, sensory or cognitive impairments obviously require EEG and/or MEG for better diagnosis or better assessment of the extent of impairment. For those suffering from psychiatric disorders or psychosis spectrum disorders, QF is probably the better energy to study. This is because of the oneness or wholeness concept for QF: any fragmentation in this onefield (loss from reality) likely causes psychotic-like manifestations. Notably, the greater symmetry and lower-frequency wave features of QF may be utilised in making diagnoses and in monitoring psychiatric disorders. In relation to this understanding, one may treat cognitively impaired or psychotic patients by using more diffused, smaller and multiple electrodes (toothbrush-like electrodes) implanted at certain cognitive or psychosis brain networks. The universe and the brain are considered two of the most complicated entities, with obvious links between them. One of those links is the 'limitation principle' for both. The energy in our brain is thought of as a pairing, with an obvious EMF and a more hidden QF energy. QF is thought of as a permeating background energy for our brain and universe, while EMF is a more focused, limited and projected-like brain energy. Greater understanding of QF may open new ways to treat some medical disorders, particularly those related to cognitive impairment and psychiatry.

The author is thankful to Professor Dato' Dr Hj Jafri Malin Abdullah for establishing the Neurosurgery, Clinical and Fundamental Neurosciences Centre in Universiti Sains Malaysia (USM) with modern facilities to study human brainwaves (EEG, ECOG and MEG).

Conflict of Interest

References

2. Idris Z. An infinite-light and infinite-frequency in cosmology and neurosciences. Open Journal of Philosophy. 2019;9:236–251. doi: 10.4236/ojpp.2019.92016.
3. Jamali M, Golshani M, Jamali Y. A proposed mechanism for mind-brain interaction using extended Bohmian quantum mechanics in Avicenna's monotheistic perspective. Heliyon. 2019;5(7):e02130. doi: 10.1016/j.heliyon.2019.e02130.
4. Idris Z, Faruque R, Jafri MA. Microgravity, hemispheric brain specialisation and death of a person. In: Sisu AM, editor. Human anatomy: reviews and medical advances.
London, UK: IntechOpen; 2017. https://doi.org/10.5772/67897. Available at: https://www.intechopen.com/books/human-anatomy-reviews-and-medical-advances/human-brain-anatomy-prospective-microgravity-hemispheric-brain-specialisation-and-death-of-a-person
5. Idris Z, Kandasamy R, Reza F, Jafri MA. Neural oscillation, network, eloquent cortex and epileptogenic zone revealed by magnetoencephalography and awake craniotomy. Asian J Neurosurg. 2014;9(3):144–152. doi: 10.4103/1793-5482.142734.
6. Bulbring E, Burn JH. Observations bearing on synaptic transmission by acetylcholine in the spinal cord. J Physiol. 1941;100(3):337–368. doi: 10.1113/jphysiol.1941.sp003947.
7. Hameroff S, Penrose R. Consciousness in the universe: a review of the 'Orch OR' theory. Phys Life Rev. 2014;11(1):39–78. doi: 10.1016/j.plrev.2013.08.002.
8. Singer W. A naturalistic approach to the hard problem of consciousness. Front Syst Neurosci. 2019;13:58. doi: 10.3389/fnsys.2019.00058.
9. Latif WA, Ggha S. Understanding neurobehavioural dynamics: a close-up view on psychiatry and quantum mechanics. Malays J Med Sci. 2019;26(1):147–156. doi: 10.21315/mjms2019.26.1.14.
10. Maung HH. Dualism and its place in a philosophical structure for psychiatry. Med Health Care Philos. 2019;22(1):59–69. doi: 10.1007/s11019-018-9841-2.
@article{31112, abstract = {{AbstractCoordinative challenging exercises in changing environments, referred to as open-skill exercises, seem to be beneficial for cognitive function. Although electroencephalographic research allows to investigate changes in cortical processing during movement, information about cortical dynamics during open-skill exercise is lacking. Therefore, the present study examines frontal brain activation during table tennis as an open-skill exercise compared to cycling exercise and a cognitive task. 21 healthy young adults conducted three blocks of table tennis, cycling and an n-back task. Throughout the experiment, cortical activity was measured using a 64-channel EEG system connected to a wireless amplifier. Cortical activity was analyzed calculating theta power (4–7.5 Hz) in frontocentral clusters revealed from independent component analysis. Repeated measures ANOVA was used to identify within-subject differences between conditions (table tennis, cycling, n-back; p < .05). ANOVA revealed main effects of condition on theta power in frontal (p < .01, ηp2 = 0.35) and frontocentral (p < .01, ηp2 = 0.39) brain areas. Post-hoc tests revealed increased theta power in table tennis compared to cycling in frontal brain areas (p < .05, d = 1.42). In frontocentral brain areas, theta power was significantly higher in table tennis compared to cycling (p < .01, d = 1.03) and table tennis compared to the cognitive task (p < .01, d = 1.06). Increases in theta power during continuous table tennis may reflect the increased demands in perception and processing of environmental stimuli during open-skill exercise. This study provides important insights that support the beneficial effect of open-skill exercise on brain function and suggest that using open-skill exercise may serve as an intervention to induce activation of the frontal cortex.}}, author = {{Visser, Anton and Büchel, Daniel and Lehmann, Tim and Baumeister, Jochen}}, issn = {{0014-4819}}, journal = {{Experimental Brain Research}}, keywords = {{General Neuroscience}}, publisher = {{Springer Science and Business Media LLC}}, title = {{{Continuous table tennis is associated with processing in frontal brain areas: an EEG approach}}}, doi = {{10.1007/s00221-022-06366-y}}, year = {{2022}}, } @article{32087, abstract = {{ Agility, a key component of team ball sports, describes an athlete's ability to move fast in response to changing environments. While agility requires basic cognitive functions like processing speed, it also requires more complex cognitive processes like working memory and inhibition. Yet, most agility tests restrict an assessment of cognitive processes to simple reaction times that lack ecological validity. Our aim in this study was to assess agility performance by means of total time on two agility tests with matched motor demands but with both low and high cognitive demands. We tested 22 female team athletes on SpeedCourt, using a simple agility test (SAT) that measured only processing speed and a complex agility test (CAT) that required working memory and inhibition. We found excellent to good reliability for both our SAT (ICC = .79) and CAT (ICC = .70). Lower agility performance on the CAT was associated with increased agility total time and split times (p < .05). These results demonstrated that agility performance depends on the complexity of cognitive demands. There may be interference effects between motor and cognitive performances, reducing speed when environmental information becomes more complex.
Future studies should consider agility training models that implement complex cognitive stimuli to challenge athletes according to competitive demands. This will also allow scientists and practitioners to tailor tests to talent identification, performance development and injury rehabilitation. }}, author = {{Büchel, Daniel and Gokeler, Alli and Heuvelmans, Pieter and Baumeister, Jochen}}, issn = {{0031-5125}}, journal = {{Perceptual and Motor Skills}}, keywords = {{Sensory Systems, Experimental and Cognitive Psychology}}, publisher = {{SAGE Publications}}, title = {{{Increased Cognitive Demands Affect Agility Performance in Female Athletes - Implications for Testing and Training of Agility in Team Ball Sports}}}, doi = {{10.1177/00315125221108698}}, year = {{2022}}, } @inbook{32363, author = {{zur Heiden, Philipp and Priefer, Jennifer and Beverungen, Daniel}}, booktitle = {{Forum Dienstleistungsmanagement}}, editor = {{Bruhn, Manfred and Hadwich, Karsten}}, isbn = {{9783658373436}}, issn = {{2662-3382}}, pages = {{435--457}}, publisher = {{Springer Fachmedien Wiesbaden}}, title = {{{Smart Service für die prädiktive Instandhaltung zentraler Komponenten des Mittelspannungs-Netzes}}}, doi = {{10.1007/978-3-658-37344-3_14}}, year = {{2022}}, } @inproceedings{32388, author = {{Rossel, Moritz Sebastian and Meschut, Gerson}}, booktitle = {{Proceedings of the 6th Conference on Steels in Cars and Trucks}}, location = {{Milan}}, title = {{{Method development for increasing the prediction quality of mechanical joining process simulations by friction modeling based on local joining process parameters}}}, year = {{2022}}, } @article{29673, abstract = {{Koopman operator theory has been successfully applied to problems from various research areas such as fluid dynamics, molecular dynamics, climate science, engineering, and biology. Applications include detecting metastable or coherent sets, coarse-graining, system identification, and control. There is an intricate connection between dynamical systems driven by stochastic differential equations and quantum mechanics. In this paper, we compare the ground-state transformation and Nelson's stochastic mechanics and demonstrate how data-driven methods developed for the approximation of the Koopman operator can be used to analyze quantum physics problems. Moreover, we exploit the relationship between Schrödinger operators and stochastic control problems to show that modern data-driven methods for stochastic control can be used to solve the stationary or imaginary-time Schrödinger equation. 
Our findings open up a new avenue towards solving Schrödinger's equation using recently developed tools from data science.}}, author = {{Klus, Stefan and Nüske, Feliks and Peitz, Sebastian}}, journal = {{Journal of Physics A: Mathematical and Theoretical}}, number = {{31}}, pages = {{314002}}, publisher = {{IOP Publishing Ltd.}}, title = {{{Koopman analysis of quantum systems}}}, doi = {{10.1088/1751-8121/ac7d22}}, volume = {{55}}, year = {{2022}}, } @article{32392, author = {{Duffe, Tobias and Kullmer, Gunter and Tews, Karina and Aubel, Tobias and Meschut, Gerson}}, journal = {{Theoretical and Applied Fracture Mechanics}}, title = {{{Global energy release rate of small penny-shaped cracks in hyperelastic materials under general stress conditions}}}, doi = {{10.1016/j.tafmec.2022.103461}}, year = {{2022}}, } @book{32394, editor = {{Karsten, Andrea and Haacke-Werron, Stefanie and Brinkschulte, Melanie}}, publisher = {{wbv}}, title = {{{Begriffe für eine Schreibwissenschaft}}}, year = {{2022}}, } @article{19941, abstract = {{In backward error analysis, an approximate solution to an equation is compared to the exact solution to a nearby ‘modified’ equation. In numerical ordinary differential equations, the two agree up to any power of the step size. If the differential equation has a geometric property then the modified equation may share it. In this way, known properties of differential equations can be applied to the approximation. But for partial differential equations, the known modified equations are of higher order, limiting applicability of the theory. Therefore, we study symmetric solutions of discretized partial differential equations that arise from a discrete variational principle. These symmetric solutions obey infinite-dimensional functional equations. We show that these equations admit second-order modified equations which are Hamiltonian and also possess first-order Lagrangians in modified coordinates. The modified equation and its associated structures are computed explicitly for the case of rotating travelling waves in the nonlinear wave equation.}}, author = {{McLachlan, Robert I and Offen, Christian}}, journal = {{Journal of Geometric Mechanics}}, number = {{3}}, pages = {{447 -- 471}}, publisher = {{AIMS}}, title = {{{Backward error analysis for variational discretisations of partial differential equations}}}, doi = {{10.3934/jgm.2022014}}, volume = {{14}}, year = {{2022}}, } @article{32403, abstract = {{Due to failures or even the absence of an electricity grid, microgrid systems are becoming popular solutions for electrifying African rural communities. However, they are heavily stressed and complex to control due to their intermittency and demand growth. Demand side management (DSM) serves as an option to increase the level of flexibility on the demand side by scheduling users’ consumption patterns profiles in response to supply. This paper proposes a demand-side management strategy based on load shifting and peak clipping. The proposed approach was modelled in a MATLAB/Simulink R2021a environment and was optimized using the artificial neural network (ANN) algorithm. Simulations were carried out to test the model’s efficacy in a stand-alone PV-battery microgrid in East Africa. The proposed algorithm reduces the peak demand, smoothing the load profile to the desired level, and improves the system’s peak to average ratio (PAR). The presence of deferrable loads has been considered to bring more flexible demand-side management. 
Results promise decreases in peak demand and peak to average ratio of about 31.2% and 7.5% through peak clipping. In addition, load shifting promises more flexibility to customers.}}, author = {{Philipo, Godiana Hagile and Kakande, Josephine Nakato and Krauter, Stefan}}, issn = {{1996-1073}}, journal = {{Energies}}, keywords = {{Energy (miscellaneous), Energy Engineering and Power Technology, Renewable Energy, Sustainability and the Environment, Electrical and Electronic Engineering, Control and Optimization, Engineering (miscellaneous), Building and Construction}}, number = {{14}}, publisher = {{MDPI AG}}, title = {{{Neural Network-Based Demand-Side Management in a Stand-Alone Solar PV-Battery Microgrid Using Load-Shifting and Peak-Clipping}}}, doi = {{10.3390/en15145215}}, volume = {{15}}, year = {{2022}}, } @unpublished{32407, abstract = {{Estimating the ground state energy of a local Hamiltonian is a central problem in quantum chemistry. In order to further investigate its complexity and the potential of quantum algorithms for quantum chemistry, Gharibian and Le Gall (STOC 2022) recently introduced the guided local Hamiltonian problem (GLH), which is a variant of the local Hamiltonian problem where an approximation of a ground state is given as an additional input. Gharibian and Le Gall showed quantum advantage (more precisely, BQP-completeness) for GLH with $6$-local Hamiltonians when the guiding vector has overlap (inverse-polynomially) close to 1/2 with a ground state. In this paper, we optimally improve both the locality and the overlap parameters: we show that this quantum advantage (BQP-completeness) persists even with 2-local Hamiltonians, and even when the guiding vector has overlap (inverse-polynomially) close to 1 with a ground state. Moreover, we show that the quantum advantage also holds for 2-local physically motivated Hamiltonians on a 2D square lattice. This makes a further step towards establishing practical quantum advantage in quantum chemistry.}}, author = {{Gharibian, Sevag and Hayakawa, Ryu and Gall, François Le and Morimae, Tomoyuki}}, booktitle = {{arXiv:2207.10250}}, title = {{{Improved Hardness Results for the Guided Local Hamiltonian Problem}}}, year = {{2022}}, } @article{32412, abstract = {{Friction-spinning as an innovative incremental forming process enables large degrees of deformation in the field of tube and sheet metal forming due to a self-induced heat generation in the forming zone. This paper presents a new tool and process design with a driven tool for the targeted adjustment of residual stress distributions in the friction-spinning process. Locally adapted residual stress depth distributions are intended to improve the functionality of the friction-spinning workpieces, e.g. by delaying failure or triggering it in a defined way. The new process designs with the driven tool and a subsequent flow-forming operation are investigated regarding the influence on the residual stress depth distributions compared to those of standard friction-spinning process. Residual stress depth distributions are measured with the incremental hole-drilling method. The workpieces (tubular part with a flange) are manufactured using heat-treatable 3.3206 (EN-AW 6060 T6) tubular profiles. 
It is shown that the residual stress depth distributions change significantly due to the new process designs, which offers new potentials for the targeted adjustment of residual stresses that serve to improve the workpiece properties.}}, author = {{Dahms, Frederik and Homberg, Werner}}, issn = {{1662-9795}}, journal = {{Key Engineering Materials}}, keywords = {{Mechanical Engineering, Mechanics of Materials, General Materials Science}}, location = {{Braga, Portugal}}, pages = {{683--689}}, publisher = {{Trans Tech Publications, Ltd.}}, title = {{{Manufacture of Defined Residual Stress Distributions in the Friction-Spinning Process: Driven Tool and Subsequent Flow-Forming}}}, doi = {{10.4028/p-3rk19y}}, volume = {{926}}, year = {{2022}}, } @article{29357, abstract = {{Friction-spinning as an innovative incremental forming process enables high degrees of deformation in the field of tube and sheet metal forming due to self-induced heat generation in the forming area. The complex thermomechanical conditions generate non-uniform residual stress distributions. In order to specifically adjust these residual stress distributions, the influence of different process parameters on residual stress distributions in flanges formed by the friction-spinning of tubes is investigated using the design of experiments (DoE) method. The feed rate with an effect of −156 MPa/mm is the dominating control parameter for residual stress depth distribution in steel flange forming, whereas the rotation speed of the workpiece with an effect of 18 MPa/mm dominates the gradient of residual stress generation in the aluminium flange-forming process. A run-to-run predictive control system for the specific adjustment of residual stress distributions is proposed and validated. The predictive model provides an initial solution in the form of a parameter set, and the controlled feedback iteratively approaches the target value with new parameter sets recalculated on the basis of the deviation of the previous run. Residual stress measurements are carried out using the hole-drilling method and X-ray diffraction by the cosα-method.}}, author = {{Dahms, Frederik and Homberg, Werner}}, issn = {{2075-4701}}, journal = {{Metals}}, keywords = {{General Materials Science, Metals and Alloys}}, number = {{1}}, publisher = {{MDPI AG}}, title = {{{Manufacture of Defined Residual Stress Distributions in the Friction-Spinning Process: Investigations and Run-to-Run Predictive Control}}}, doi = {{10.3390/met12010158}}, volume = {{12}}, year = {{2022}}, } @misc{32409, abstract = {{Context: Cryptographic APIs are often misused in real-world applications. Therefore, many cryptographic API misuse detection tools have been introduced. However, there exists no established reference benchmark for a fair and comprehensive comparison and evaluation of these tools. While there are benchmarks, they often only address a subset of the domain or were only used to evaluate a subset of existing misuse detection tools. Objective: To fairly compare cryptographic API misuse detection tools and to drive future development in this domain, we will devise such a benchmark. Openness and transparency in the generation process are key factors to fairly generate and establish the needed benchmark. Method: We propose an approach where we derive the benchmark generation methodology from the literature which consists of general best practices in benchmarking and domain-specific benchmark generation. 
A part of this methodology is transparency and openness of the generation process, which is achieved by pre-registering this work. Based on our methodology we design CamBench, a fair "Cryptographic API Misuse Detection Tool Benchmark Suite". We will implement the first version of CamBench limiting the domain to Java, the JCA, and static analyses. Finally, we will use CamBench to compare current misuse detection tools and compare CamBench to related benchmarks of its domain.}}, author = {{Schlichtig, Michael and Wickert, Anna-Katharina and Krüger, Stefan and Bodden, Eric and Mezini, Mira}}, keywords = {{cryptography, benchmark, API misuse, static analysis}}, title = {{{CamBench -- Cryptographic API Misuse Detection Tool Benchmark Suite}}}, doi = {{10.48550/ARXIV.2204.06447}}, year = {{2022}}, } @inbook{32419, author = {{Tönsing, Johanna}}, booktitle = {{Rassismussensibler Literaturunterricht. Neue Perspektiven einer kulturwissenschaftlichen Literaturdidaktik}}, editor = {{Hofmann, Michael and Becker, Karina}}, title = {{{Chancen für einen rassismussensiblen Literaturunterricht - didaktische Perspektiven für das Lesen von Menschenzoogeschichten in der Grundschule am Beispiel von Rainer Maria Rilkes Gedicht "Die Aschanti. Jardin d´acclimatation" (1902)}}}, year = {{2022}}, } @inbook{32417, author = {{Tönsing, Johanna}}, booktitle = {{Interpretationsverfahruen der germanistischen Literaturdidaktik und didaktische Referenzkonzepte}}, editor = {{Bernhardt, Sebastian and Hardtke, Thomas}}, title = {{{(K)eine kinderleichte Gattung: Konsequenzen einer kulturwissenschaftlich informierten Märchendidaktik}}}, year = {{2022}}, } @phdthesis{32414, author = {{Lass, Michael}}, publisher = {{Universität Paderborn}}, title = {{{Bringing Massive Parallelism and Hardware Acceleration to Linear Scaling Density Functional Theory Through Targeted Approximations}}}, doi = {{10.17619/UNIPB/1-1281}}, year = {{2022}}, } @inbook{32418, author = {{Tönsing, Johanna}}, booktitle = {{Sammelband über den deutsch-türkischen Film}}, editor = {{Schulte-Eickholt, Swen and Hofmann, Michael}}, title = {{{Über „Gleis 11“ [Dokumentarfilm von 2021]}}}, year = {{2022}}, } @inbook{32423, author = {{Tönsing, Johanna}}, booktitle = {{Neue Perspektiven einer kulturwissenschaftlichen Literaturdidaktik}}, editor = {{Hofmann, Michael}}, title = {{{Weiblichkeitsdiskurse in der Gegenwartsliteratur und deren Thematisierung im genderorientierten Unterricht}}}, year = {{2022}}, } @inproceedings{32410, abstract = {{Static analysis tools support developers in detecting potential coding issues, such as bugs or vulnerabilities. Research on static analysis emphasizes its technical challenges but also mentions severe usability shortcomings. These shortcomings hinder the adoption of static analysis tools, and in some cases, user dissatisfaction even leads to tool abandonment. To comprehensively assess the current state of the art, this paper presents the first systematic usability evaluation in a wide range of static analysis tools. We derived a set of 36 relevant criteria from the scientific literature and gathered a collection of 46 static analysis tools complying with our inclusion and exclusion criteria - a representative set of mainly non-proprietary tools. Then, we evaluated how well these tools fulfill the aforementioned criteria. The evaluation shows that more than half of the considered tools offer poor warning messages, while about three-quarters of the tools provide hardly any fix support. 
Furthermore, the integration of user knowledge is strongly neglected, even though it could be used to improve the handling of false positives and to tune the results for the corresponding developer. Finally, issues regarding workflow integration and specialized user interfaces remain largely unaddressed. These findings should prove useful in guiding and focusing further research and development in the area of user experience for static code analyses.}}, author = {{Nachtigall, Marcus and Schlichtig, Michael and Bodden, Eric}}, booktitle = {{Proceedings of the 31st ACM SIGSOFT International Symposium on Software Testing and Analysis}}, isbn = {{9781450393799}}, keywords = {{Automated static analysis, Software usability}}, pages = {{532 -- 543}}, publisher = {{ACM}}, title = {{{A Large-Scale Study of Usability Criteria Addressed by Static Analysis Tools}}}, doi = {{10.1145/3533767}}, year = {{2022}}, } @inproceedings{31133, abstract = {{Application Programming Interfaces (APIs) are the primary mechanism that developers use to obtain access to third-party algorithms and services. Unfortunately, APIs can be misused, which can have catastrophic consequences, especially if the APIs provide security-critical functionalities like cryptography. Understanding what API misuses are, and for what reasons they are caused, is important to prevent them, e.g., with API misuse detectors. However, definitions and nominations for API misuses and related terms in literature vary and are diverse. This paper addresses the problem of scattered knowledge and definitions of API misuses by presenting a systematic literature review on the subject and introducing FUM, a novel Framework for API Usage constraint and Misuse classification. The literature review revealed that API misuses are violations of API usage constraints. To capture this, we provide unified definitions and use them to derive FUM. To assess the extent to which FUM aids in determining and guiding the improvement of an API misuse detector's capabilities, we performed a case study on CogniCrypt, a state-of-the-art misuse detector for cryptographic APIs. The study showed that FUM can be used to properly assess CogniCrypt's capabilities, identify weaknesses and assist in deriving mitigations and improvements. It appears that, more generally, FUM can also aid the development and improvement of misuse detection tools.}}, author = {{Schlichtig, Michael and Sassalla, Steffen and Narasimhan, Krishna and Bodden, Eric}}, booktitle = {{2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER)}}, keywords = {{API misuses, API usage constraints, classification framework, API misuse detection, static analysis}}, pages = {{673 -- 684}}, title = {{{FUM - A Framework for API Usage constraint and Misuse Classification}}}, doi = {{10.1109/SANER53432.2022.00085}}, year = {{2022}}, }
7cf04f3be8e3c2cc
August 1995 Perturbation Theory around Non–Nested Fermi Surfaces I. Keeping the Fermi Surface Fixed Joel Feldman, Manfred Salmhofer, Eugene Trubowitz Mathematics Department, University of British Columbia, Vancouver, Canada V6T 1Z2 Mathematik, ETH–Zentrum, CH–8092 Zürich, Switzerland This paper is dedicated to the memory of Ansgar Schnizer The perturbation expansion for a general class of many–fermion systems with a non–nested, non–spherical Fermi surface is renormalized to all orders. In the limit as the infrared cutoff is removed, the counterterms converge to a finite limit which is differentiable in the band structure. The map from the renormalized to the bare band structure is shown to be locally injective. A new classification of graphs as overlapping or non–overlapping is given, and improved power counting bounds are derived from it. They imply that the only subgraphs that can generate factorials in the order of the renormalized perturbation series are indeed the ladder graphs and thus give a precise sense to the statement that ‘ladders are the most divergent diagrams’. Our results apply directly to the Hubbard model at any filling except for half–filling. The half–filled Hubbard model is treated in another place. Table of Contents 1. Introduction and Overview  1.1 The Problem  1.2 The Formal Perturbation Expansion  1.3 Assumptions  1.4 Divergences and Hartree–Fock Theory  1.5 Results  1.6 Discussion 2. Renormalization and Convergence  2.1 Scale Decomposition and Power Counting  2.2 Localization Operator  2.3 Flow of Effective Actions  2.4 Non–Overlapping Graphs  2.5 Decomposition of the Tree of a Labelled Graph  2.6 Improved Power Counting  2.7 Convergence of the Renormalized Green Functions 3. The Derivative With Respect to the Band Structure  3.1 Integration by Parts  3.2 Bounds for the Directional Derivative  3.3 Convergence of the Derivative Appendix A: Volume Estimates Appendix B: The One–Fermion Problem 1. Introduction and Overview 1.1 The Problem Consider the following problem in many–body physics. Let be a finite box in –dimensional space, i.e. or , where is a lattice in , and let and be fermionic annihilation and creation operators obeying the canonical anticommutation relations and let be the fermionic Fock space generated by this algebra [BR]. Let be the operator on given by where is an operator describing the one–particle kinetic energy, is multiplication by a periodic potential, and denotes for a continuous system and for a system on a lattice. Let be the number operator at for spin . The interaction is assumed to be short-ranged (see Assumption A1 below). The Hamiltonian describes many electrons in a crystal or on a lattice, that interact with a stationary ionic background through and with each other through the pair potential . If the coupling strength of the electron–electron interaction , the electrons move independently according to the one-particle Schrödinger operator . In the continuum system is the Laplacean and for all , where the lattice is generated by linearly independent vectors in (e.g. ); in the case of a lattice system, and the kinetic energy is defined by the hopping matrix between the sites of the lattice. For , the potential takes into account interactions such as screened electromagnetic interactions. A slight generalization of allows for inclusion of phonon–mediated interactions. Let be the inverse temperature and define the grand canonical partition function as is the number operator, is the chemical potential and the trace is over Fock space. 
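(Editorial note: the displayed formula was lost in extraction; in standard notation the grand canonical partition function referred to here is presumably
$$Z(\beta,\mu)=\operatorname{Tr}\,e^{-\beta(H-\mu N)},$$
where $\beta$ is the inverse temperature, $\mu$ the chemical potential, $N$ the number operator, and the trace is over Fock space.)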
For an observable , i.e. a polynomial in the fermion operators, the thermal expectation value is defined as The question we are interested in is whether the thermodynamic limit of the connected Green functions , which are special cases of above, exists and whether in infinite volume a weak-coupling expansion can be used to determine the dependence of on . For this question the most interesting, because most singular, case is that of zero temperature, . For positive temperature or the finite volume lattice case the expansion obtained by expanding the factor in is convergent, but its radius of convergence shrinks to zero in the thermodynamic and zero-temperature limit: at and in infinite volume, one cannot even pose the question of convergence of the expansion in because the coefficients already diverge for . In the limit , reduces to expectation values in the ground state of the system, so physically the question is about the nature of the many–particle ground state of the system and the validity of perturbation theory to calculate -point-functions. The radius of convergence of the unrenormalized expansion in finite volume shrinks to zero as the volume goes to infinity. Thus, although the expansion converges for the large but finite systems which these models are to describe, this is true only if is of order 1/volume, which is obviously unrealistic for any macroscopic system. Consequently, the unrenormalized expansion will not give insight into the properties of the ground state. In this paper we consider formal perturbation theory. That is, we study the thermodynamic limit of the coefficient functions . By an analysis similar to [FT1], the expansion is renormalized so that these functions converge as the volume goes to infinity. More precisely, we introduce a well–defined infinite volume model obtained by cutting off the singularity at the Fermi surface (i.e. introducing an infrared cutoff) and renormalized by including counterterms in the action, and then show that all coefficients have limits as the infrared cutoff is removed. These counterterms are bilinear in the fermions and can therefore be viewed as a modification of (although they are treated as extra interaction vertices in the formal expansion). They also have finite limits as the infrared cutoff is removed. The limiting counterterms reflect the modification of the band structure due to the interaction. The precise meaning of this will be discussed in much more detail below. Although we do not go through the finite–volume bounds here, it will be clear from the way our bounds are derived that the same procedure can be applied to obtain an expansion in finite volume with coefficients that converge in the thermodynamic limit. Except for special cases, the renormalized expansion is, as an expansion in , not convergent but only locally Borel summable because the coefficients behave as . The occurrence of these factorials indicates that the nonperturbative ground state may exhibit symmetry breaking. For example, if the interaction is attractive in the zero angular momentum sector, this is the case [FT2]. One of the main results we shall prove here is that for a very wide class of models (and regardless of the sign of the interaction), the factorials in individual graphs come only from ladder diagrams. Renormalization has been done in [FT1] for the continuum case where and . We shall refer to this case as the spherical case since the band structure (defined below) has a rotational symmetry.
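(Editorial note: the elided formulas are presumably the standard ones — in the spherical case the band structure is
$$e(\mathbf{k})=\frac{\mathbf{k}^2}{2m}-\mu,$$
whose zero set, the Fermi surface, is the sphere of radius $\sqrt{2m\mu}$, invariant under all rotations.)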
The procedure for removing the divergences in the present case is similar to the spherical case in that we have to renormalize two–legged insertions. However, the present work is a nontrivial extension of [FT1] because in contrast to the spherical case the counterterms are not constants. In brief, subtracting functions is much more complicated than subtracting constants. In particular, the regularity properties of the counterterms are quite subtle. In the remainder of this introductory section, we give a non–rigorous, physical discussion of why divergences occur and how they may be removed by renormalization. We hope that this will convince the reader, before going through all the details, that the renormalization subtractions are natural and the divergences of the naive expansion are artificial in these models. We state our main results in Section 1.5 and then discuss their physical interpretation. Finally, we give an overview of the sections containing the proofs. Every section begins with a brief explanation of what is done and how it fits into the general strategy. 1.2 The Formal Perturbation Expansion The models have the formal functional integral representation where , is the formal measure , where now stands for the integral over the spatial variable and imaginary time , , with an appropriate measure, e.g. for a continuous system on and for a lattice system on , e.g. . Here is the inverse temperature. The imaginary time is introduced to get a functional integral representation for the trace over Fock space in the standard way. The connected Green functions can formally be calculated as derivatives of with respect to the sources and . In this paper, we consider the limiting case , so and the configuration spaces are and (e.g. ), respectively. The spin index , the interaction is assumed to be translation invariant, so that and short–range, i.e. decreasing so fast that its Fourier transform is at least twice differentiable (see Assumption A1 below). Note that we do not assume that it is instantaneous. For simplicity, we also assume that it is spin–diagonal, i.e. . In contrast to the assumption about the decay of , the latter assumption is merely for notational convenience and can easily be dropped. One may imagine to arise from exchange of (quasi)particles like photons or phonons and formalize this by a Hubbard–Stratonovich transformation, introducing one or more scalar fields with covariance so that the interaction vertex is resolved as an exchange of fields and the interaction becomes bilinear in the fermion fields. For the purposes of the perturbation expansion we shall not need this. In particular since we assume smoothness of , we shall not need a cutoff on the interaction lines, and we shall often draw graphs with four–legged vertices instead of ones with interaction lines. For the lattice models, we take where is the chemical potential and is the amplitude for hopping from site to site , which we assume to be symmetric and short-ranged (see Assumptions A2 and A3 on below). A model of particular interest that is easy to formulate but difficult to analyze is the Hubbard model, for which with the so–called hopping parameters. In the simplest version of the model, is the same for all of length one, so the operator is just the discrete Laplacean on , with the diagonal term omitted since it can be absorbed in the chemical potential , and the interaction term is on-site and spin-diagonal, Various extensions of this model, e.g. 
with more complicated finite range hopping have been studied in connection with high–temperature superconductivity. For suitable values of the filling factor, they all fall into the class of band structures discussed here. For a review of mathematically rigorous results about the Hubbard model, see [L]. Formally equivalent to , but in fact much more convenient is the generating functional for connected amputated Green functions where the constant takes out the field–independent term so that . , as written above, is not a well–defined object in infinite volume; it can be made well-defined by restricting to a finite volume , or by introducing a suitable cutoff. If the free covariance is bounded and any power of it is integrable, exists and is analytic in , as was first observed by Caianiello. However, for any realistic model, will not have these properties, unless cutoffs are imposed. The radius of convergence obtained using naive bounds shrinks to zero when the cutoffs are removed, and establishing analyticity uniformly in the cutoffs requires techniques as in [FMRT]. Our analysis is done in momentum space, where from now on momentum is short for Bloch's quasi–momentum, which can be used to label one–particle states because of the periodicity of the one–particle potential . In infinite volume, momentum space is the first Brillouin zone , i.e. the torus where is the dual lattice to , e.g. for . In finite volume, the momenta are in a finite subset of , with if the volume is a box of sidelength . The eigenfunction expansions used to transform into momentum space are discussed briefly in Appendix B for the general case; for the purposes of this introduction, we just give the formulas for the case of a lattice model on , where we can simply do a Fourier expansion. The only changes in the general case are (of course) that the Brillouin zone will vary with the lattice and that the formulas for switching between position and quasi–momentum space involve the eigenfunctions of the one–particle Hamiltonian with the periodic potential. Under the Fourier transform the quadratic part of the action becomes where we have dropped the hats and introduced the band structure and the interaction becomes, with , and here is the delta function on , more explicitly where the on the right side denotes that on . In general, the solution of the one-particle problem will produce crossing bands. We exclude this case here, and we also introduce an ultraviolet cutoff that removes the high energy bands. For the lattice systems, such a cutoff is already built in as the lattice spacing; for continuous systems it is not a real physical restriction since high energies do not occur in a crystal. If there are finitely many bands that do not cross, the band index is just a bookkeeping device dragged along, so, without loss, we restrict to the one–band case here. For , the fermions do not influence each other and the model is completely characterized by the covariance , in the sense that all –point functions are determinants of matrices with elements . The propagator in momentum space, , has a singularity at for all , where is the Fermi surface of the independent electron approximation. Although the function is in for all , graphs in the perturbation expansion diverge because of the singularity on and because in the expansion, arbitrary powers of are integrated. The numerator is included in the standard way since we want to consider the expansion around the situation where all states inside the Fermi surface, i.e.
those with , are already occupied. Expanding in a formal power series in , we can write where the coefficient function is totally antisymmetric in the simultaneous exchange of momenta and spin indices (see Section 2.3). Again, the is periodic with respect to in the spatial part of the momentum. The coefficient can be expressed in the usual way as a sum over values of connected Feynman diagrams. The sum over runs over a finite index set for each fixed because the number of vertices is and the graphs are connected with external legs. The Feynman graphs are similar to those in quantum electrodynamics: there are two types of lines, namely fermion lines (drawn solid), carrying a direction, and interaction lines (drawn dashed). The vertices have two legs to which fermion lines can be connected (one incoming, one outgoing), and one leg for an interaction line. The action determines the assignment of propagators to fermion lines, to interaction lines, and momentum conservation delta functions to vertices. Equivalently, one can replace two vertices that are joined by an interaction line by a single four–fermion vertex with exactly two incoming fermion legs and exactly two outgoing fermion legs. The graphs then have only four–legged fermion vertices and only fermion lines. There is one notable difference between the cases and : In the spherical case (), where , . The corresponding ultraviolet problem (behaviour at large ) was solved in [FT1]. In presence of a crystal potential (), the integrals over the spatial part of the momentum are over the first Brillouin zone , which is a compact set. Thus there is no case of large here. Momentum conservation at every vertex means conservation in , as given by above. If one prefers to think of the momenta in , fixing momenta with means that at every vertex, there remains a sum over . Although formally infinite, this sum always contains only one nonzero term since there is a unique that translates back a vector in into the fundamental domain of the translational group . However, it is natural and simpler to consider momentum space as the torus since is –periodic. For example, in the Hubbard model, is the tight–binding band relation and . The much more general class of models and the range of chemical potential that we treat in this paper is given by the following assumptions. 1.3 Assumptions We assume that the one–particle problem (discussed in Appendix B) is such that we have a Brillouin zone which is a -dimensional torus of type . We assume that (see ) is a continuous function on and that for some value of the chemical potential, the Fermi surface has only a finite number of connected components. Furthermore, there is and a neighbourhood of such that: A1 The interaction . The sup norm over of the first derivatives is finite. A2 The band structure , and for all . The third assumption is a geometrical condition on the Fermi surface. It is very simple to understand and is fulfilled for generic surfaces. Let , , be the unit normal to the surface. By A2, is a submanifold of , and is a unit vector field. If consists of more than one connected component, choose a normal field for any component. For , define the angle between and by and denote the –dimensional measure of by . Also, for any and denote by the open -neighbourhood of . For fixed and , we assume: A3 There is an open interval around and there are strictly positive numbers and such that for all , the Fermi surface has the following properties: , and for all and all ,  if , then . 
Throughout this paper, A1–A3 will be assumed to hold, and will be assumed to lie in the interval specified in A3. We now explain what these assumptions mean. Assumption A1 on is a decay assumption in position space, e.g. for an instantaneous interaction on a lattice system on and , A1 holds if For continuous systems, A1 is implied by a similar integral condition. Assumption A2 excludes singular points. For example, a point on where is called a van Hove singularity. The condition that is continuously differentiable is fulfilled for the case where comes from a Schrödinger equation for the one–body problem with a regular periodic potential, if there is no level–crossing. Indeed, it is real analytic. In lattice models with finite–range hopping, is analytic. However, infinite–range hopping is also allowed: if the moment of the hopping amplitude exists, i.e. . Assumption A3 is, more informally, that for every  the set of points where the normal is parallel or antiparallel to , has positive codimension in and  if is not in the set , where the normal is (anti)parallel to , the angle between and increases with some power of the distance between and . Thus in order to violate these assumptions, the surface must have flat regions or subsets where vanishes exponentially fast as . To illustrate A3, we draw an example of a Fermi surface that satisfies A3 in (i.e. a Fermi curve) on (the square bounds the fundamental region for the torus, and the shaded areas indicate ).
3eba29da16d5561f
Quantum Chemistry
Malte Döntgen, Group Leader Computational Chemistry, +49 241 80 26907
A molecular orbital of dimethoxy methane. Copyright: © 2020, American Chemical Society. Reprinted with permission from https://doi.org/10.1021/acs.jcim.0c00787.
Quantum chemical calculations are based on the electronic structure of molecules and make use of numerical methods for solving the stationary Schrödinger equation. The results of these calculations allow for the computation of multiple physical and chemical properties of molecules: heat capacity, heating value, and rates of the chemical reactions the molecules might be involved in, to name a few. Among other applications, these properties are used in complex chemical models in order to describe the formation and consumption of chemical compounds. At our institute, we use a broad spectrum of software for quantum chemistry. The quantum mechanical software programs Gaussian and Orca are used to solve the stationary Schrödinger equation numerically. The software programs TAMkin and MESS use the results of the quantum mechanical calculations to predict rate coefficients of chemical reactions. Special emphasis is put on including pressure dependence and non-Boltzmann effects. These effects are of particular importance for gas-phase chemical reactions.
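As a rough illustration of how such quantum-chemistry outputs feed into rate predictions, here is a minimal sketch — not our actual workflow, and the activation free energy is a made-up placeholder rather than output of any of the programs named above — of the Eyring (transition-state theory) rate expression:

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(delta_g_dagger_kj_mol: float, temperature: float) -> float:
    """Eyring rate coefficient k = (kB*T/h) * exp(-dG++ / (R*T)), in 1/s.

    delta_g_dagger_kj_mol: free energy of activation (kJ/mol), e.g. obtained
    from electronic-structure calculations of reactant and transition state.
    """
    dg = delta_g_dagger_kj_mol * 1e3  # kJ/mol -> J/mol
    return (KB * temperature / H) * math.exp(-dg / (R * temperature))

# Placeholder barrier of 100 kJ/mol, evaluated at several temperatures:
for T in (500.0, 1000.0, 1500.0):
    print(f"T = {T:6.0f} K   k = {eyring_rate(100.0, T):.3e} 1/s")
```

Pressure dependence and non-Boltzmann effects, as handled by master-equation tools such as MESS, go well beyond this single-temperature expression.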
cafca1e2ef5b5813
At school I really struggled to understand the concept of imaginary numbers. My teacher told us that an imaginary number is a number which has something to do with the square root of -1. When I tried to calculate the square root of -1 on my calculator, it gave me an error. To this day I do not understand imaginary numbers. It makes no sense to me at all. Is there someone here who totally gets it and can explain it? Why is the concept even useful?

I don't get it. –  Sachin Kainth Sep 20 '12 at 12:28
@SachinKainth: What is a real number? I mean, what do you understand a real number to be and why do you not struggle with that concept? If people see that you understand such numbers for particular reasons, they may be able to give similar reasons for the existence of complex numbers, or at least gauge what would be required to convince you that complex numbers are useful. –  Michael Albanese Sep 20 '12 at 12:54
$\mathbb{R}eally$ exist? –  user1729 Sep 20 '12 at 13:14
Real numbers don't "exist" either, they're all just mathematicians' ideas. –  akkkk Sep 20 '12 at 13:30
@ivan Your comment is misleading. A complex number is a number on the plane. An imaginary number is merely the second coordinate in 2D, the imaginary part of the complex number. –  Matt N. Sep 20 '12 at 15:56

18 Answers

Let's go through some questions in order and see where it takes us. [Or skip to the bit about complex numbers below if you can't be bothered.] What are natural numbers? It took quite some evolution, but humans are blessed by their ability to notice that there is a similarity between the situations of having three apples in your hand and having three eggs in your hand. Or, indeed, three twigs or three babies or three spots. Or even three knocks at the door. And we generalise all of these situations by calling it 'three'; same goes for the other natural numbers. This is not the construction we usually take in maths, but it's how we learn what numbers are. Natural numbers are what allow us to count a finite collection of things. We call this set of numbers $\mathbb{N}$. What are integers? Once we've learnt how to measure quantity, it doesn't take us long before we need to measure change, or relative quantity. If I'm holding three apples and you take away two, I now have 'two fewer' apples than I had before; but if you gave me two apples I'd have 'two more'. We want to measure these changes on the same scale (rather than the separate scales of 'more' and 'less'), and we do this by introducing negative natural numbers: the net increase in apples is $-2$. We get the integers from the naturals by allowing ourselves to take numbers away: $\mathbb{Z}$ is the closure of $\mathbb{N}$ under the operation $-$. What are rational numbers? My friend and I are pretty hungry at this point but since you came along and stole two of my apples I only have one left. Out of mutual respect we decide we should each have the same quantity of apple, and so we cut it down the middle. We call the quantity of apple we each get 'a half', or $\frac{1}{2}$. The net change in apple after I give my friend his half is $-\frac{1}{2}$. We get the rationals from the integers by allowing ourselves to divide integers by positive integers [or, equivalently, by nonzero integers]: $\mathbb{Q}$ is (sort of) the closure of $\mathbb{Z}$ under the operation $\div$. What are real numbers?
I find some more apples and put them in a pie, which I cook in a circular dish. One of my friends decides to get smart, and asks for a slice of the pie whose curved edge has the same length as its straight edges (i.e. arc length of the circular segment is equal to its radius). I decide to honour his request, and using our newfangled rational numbers I try to work out how many such slices I could cut. But I can't quite get there: it's somewhere between $6$ and $7$; somewhere between $\frac{43}{7}$ and $\frac{44}{7}$; somewhere between $\frac{709}{113}$ and $\frac{710}{113}$; and so on, but no matter how accurate I try and make the fractions, I never quite get there. So I decide to call this number $2\pi$ (or $\tau$?) and move on with my life. The reals turn the rationals into a continuum, filling the holes which can be approximated to arbitrary degrees of accuracy but never actually reached: $\mathbb{R}$ is the completion of $\mathbb{Q}$. What are complex numbers? [Finally!] Our real numbers prove to be quite useful. If I want to make a pie which is twice as big as my last one but still circular then I'll use a dish whose radius is $\sqrt{2}$ times bigger. If I decide this isn't enough and I want to make it thrice as big again then I'll use a dish whose radius is $\sqrt{3}$ times as big as the last. But it turns out that to get this dish I could have made the original one thrice as big and then that one twice as big; the order in which I increase the size of the dish has no effect on what I end up with. And I could have done it in one go, making it six times as big by using a dish whose radius is $\sqrt{6}$ times as big. This leads to my discovery of the fact that multiplication corresponds to scaling $-$ they obey the same rules. (Multiplication by negative numbers responds to scaling and then flipping.) But I can also spin a pie around. Rotating it by one angle and then another has the same effect as rotating it by the second angle and then the first $-$ the order in which I carry out the rotations has no effect on what I end up with, just like with scaling. Does this mean we can model rotation with some kind of multiplication, where multiplication of these new numbers corresponds to addition of the angles? If I could, then I'd be able to rotate a point on the pie by performing a sequence of multiplications. I notice that if I rotate my pie by $90^{\circ}$ four times then it ends up how it was, so I'll declare this $90^{\circ}$ rotation to be multiplication by '$i$' and see what happens. We've seen that $i^4=1$, and with our funky real numbers we know that $i^4=(i^2)^2$ and so $i^2 = \pm 1$. But $i^2 \ne 1$ since rotating twice doesn't leave the pie how it was $-$ it's facing the wrong way; so in fact $i^2=-1$. This then also obeys the rules for multiplication by negative real numbers. Upon further experimentation with spinning pies around we discover that defining $i$ in this way leads to numbers (formed by adding and multiplying real numbers with this new '$i$' beast) which, under multiplication, do indeed correspond to combined scalings and rotations in a 'number plane', which contains our previously held 'number line'. What's more, they can be multiplied, divided and rooted as we please. It then has the fun consequence that any polynomial with coefficients of this kind has as many roots as its degree; what fun! 
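(A quick numerical check of this rotation picture — a minimal sketch using Python's built-in complex type, with arbitrarily chosen points:)

```python
import cmath

# Multiplying by i rotates a point 90 degrees counterclockwise about 0;
# multiplying by r*e^(i*theta) scales by r and rotates by theta.
p = 3 + 2j  # "3 to the right, 2 up"

print(p * 1j)       # (-2+3j): the point rotated by 90 degrees
print(p * 1j ** 4)  # (3+2j): four quarter-turns bring it back
print(abs(p * 2))   # twice |p|: multiplying by a real number just scales

# A 30-degree rotation, with no trigonometric bookkeeping in sight:
rot30 = cmath.exp(1j * cmath.pi / 6)
print(p * rot30)
```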
The complex numbers allow us to consider scalings and rotations as two instances of the same thing; and by ensuring that negative reals have square roots, we get something where every (non-constant) polynomial equation can be solved: $\mathbb{C}$ is the algebraic closure of $\mathbb{R}$. [Final edit ever: It occurs to me that I never mentioned anything to do with anything 'imaginary', since I presumed that Sachin really wanted to know about the complex numbers as a whole. But for the sake of completeness: the imaginary numbers are precisely the real multiples of $i$ $-$ you scale the pie and rotate it by $90^{\circ}$ in either direction. They are the rotations/scalings which, when performed twice, leave the pie facing backwards; that is, they are the numbers which square to give negative real numbers.] What next? I've been asked in the comments to mention quaternions and octonions. These go (even further) beyond what the question is asking, so I won't dwell on them, but the idea is: my friends and I are actually aliens from a multi-dimensional world and simply aren't satisfied with a measly $2$-dimensional number system. By extending the principles from our so-called complex numbers we get systems which include copies of $\mathbb{C}$ and act in many ways like numbers, but now (unless we restrict ourselves to one of the copies of $\mathbb{C}$) the order in which we carry out our weird multi-dimensional symmetries does matter. But, with them, we can do lots of science. I have also completely omitted any mention of ordinal numbers, because they fork off in a different direction straight after the naturals. We get some very exciting stuff out of these, but we don't find $\mathbb{C}$ because it doesn't have any natural order relation on it. Historical note The above succession of stages is not a historical account of how numbers of different types are discovered. I don't claim to know an awful lot about the history of mathematics, but I know enough to know that the concept of a number evolved in different ways in different cultures, likely due to practical implications. In particular, it is very unlikely that complex numbers were devised geometrically as rotations-and-scalings $-$ the needs of the time were algebraic and people were throwing away (perfectly valid) equations because they didn't think $\sqrt{-1}$ could exist. Their geometric properties were discovered soon after. However, this is roughly the sequence in which these number sets are (usually) constructed in ZF set theory and we have a nice sequence of inclusions $$1 \hookrightarrow \mathbb{N} \hookrightarrow \mathbb{Z} \hookrightarrow \mathbb{Q} \hookrightarrow \mathbb{R} \hookrightarrow \mathbb{C}$$ Stuff to read • The other answers to this question give very insightful ways of getting $\mathbb{C}$ from $\mathbb{R}$ in different ways, and discussing how and why complex numbers are useful $-$ there's only so much use to spinning pies around. • A Visual, Intuitive Guide to Imaginary Numbers $-$ thanks go to Joe, in the comments, for pointing this out to me. • Some older questions, e.g. here and here, have some brilliant answers. I'd be glad to know of more such resources; feel free to post any in the comments.

I registered to math.stackexchange just to vote for this wonderful answer! –  lukas.pukenis Sep 20 '12 at 13:35
This is probably the best plain-English, grade-school-level explanation of the various sets of numbers I have ever heard.
WAY better than anything my teachers on the subject could come up with. –  KeithS Sep 20 '12 at 14:42
+1 but you didn't complete the definition that the imaginary number line is simply the axis orthogonal to the real number line in the complex number plane, so that every complex number can be expressed as the sum of a real number and an imaginary number. –  StarNamer Sep 20 '12 at 14:59
Excellent explanation! The first time I've seen the analogy of rotation was here, which also contains a really good, visual exploration of imaginary numbers: betterexplained.com/articles/… –  Joe Sep 20 '12 at 15:01
Oh, please, please, please, expand your answer to quaternions, please! –  Daniel Excinsky Sep 21 '12 at 7:12

You ask why imaginary numbers are useful. As with most extensions of number systems, historically such generalizations were invented because they help to simplify certain phenomena in existing number systems. For example, negative numbers and fractions permit one to state in a single general form the quadratic equation and its solution (older solutions bifurcated into many cases, avoiding negative numbers and fractions). One of the primary reasons motivating the invention of complex numbers is that they serve to linearize what would otherwise be nonlinear phenomena - thus greatly simplifying many problems. Here are some examples. Consider the problem of representing integers as sums of squares $\rm\: n = x^2 + y^2$. Early solutions to this and related problems employed a complicated arithmetic of binary quadratic forms. Such arithmetic was quite intricate and often very nonintuitive, e.g. even the proof of associativity of composition of such forms was a tour de brute force, occupying pages of unmotivated computations in Gauss' Disq. Arith. But this quadratic arithmetic of binary quadratic forms can be linearized. Indeed, by the factorization $\rm\: x^2 + y^2 = (x+y{\it i})(x-y{\it i}),$ we may view sums of squares as norms of Gaussian integers $\rm\:x+y{\it i},\ \ x,y\in \Bbb Z.\:$ But just like the rational integers $\Bbb Z,$ these "imaginary" integers have a Euclidean algorithm, so enjoy unique factorization into primes. By considering all the possible factorizations of $\rm\:n\:$ in the Gaussian integers we obtain all the possible representations of $\rm\:n\:$ as a sum of squares. In a similar way, "rational, real" arithmetic of integral quadratic forms becomes much simpler by passing to the "irrational" and/or "imaginary" arithmetic of quadratic number fields. This line of research led to the discovery of ideals and modules, fundamental linear structures at the heart of modern number theory and algebra. Thus, by factorizing completely over $\Bbb C$, we have reduced the complicated nonlinear arithmetic of binary quadratic forms to the simpler, linear arithmetic of Gaussian integers, i.e. to the more familiar arithmetical structure of a unique factorization domain (in fact a Euclidean domain). Analogous linearization serves to simplify many problems. For example, when integrating or summing rational functions (quotients of polynomials), by factoring denominators over $\Bbb C$ (vs. $\Bbb R)$ and taking partial fraction decompositions, the denominators are at worst powers of linear (vs. quadratic) polynomials - which greatly simplifies matters. More generally, when solving constant coefficient differential or difference equations (recurrences), by factoring their characteristic (operator) polynomials over $\Bbb C,$ we reduce to solutions of linear (vs.
quadratic) differential or difference equations. In the same way, there are many real problems (over $\Bbb R)$ whose simplest solutions are obtained by imaginary detours (over $\Bbb C).$ Perhaps readers will mention more such problems in the comments.

That's up to now by far the best answer about the usefulness. +1 –  celtschk Sep 20 '12 at 15:56
Beautifully stated. I wasn't aware of the Gaussian integers, very cool –  acjohnson55 Sep 20 '12 at 23:26

I went to school for electrical engineering (7 years total) and we used imaginary numbers all over the place. Even with all that schooling, this is probably the clearest explanation of imaginary numbers I've seen:

I especially loved the historical note showing how offended people in the mid-1700's were by negative numbers... it clarifies the fact that "imaginary" and "negative" are just labels for parts of the plane. –  Jerry Andrews Sep 20 '12 at 21:22
I had seen this article before, and I must agree -- IT IS EXCELLENT. If you're grappling with the concept of imaginary numbers, you MUST check it out! –  Charlie Flowers Sep 21 '12 at 18:15
That article is great! This should've been the accepted answer! –  Meysam Sep 24 '12 at 11:13
This link was my first thought - completely worth the time to read –  Deebster Sep 24 '12 at 11:55

Well, as you know there's no real number whose square is negative. But now imagine numbers which are. Let's call them imaginary. Now what properties would such numbers have? Well, there would be for example a number whose square is $-1$. Let's call that number the imaginary unit and give it the name $\mathrm i$. Now if we multiply this number with some real number, that is, use $r\mathrm i$, we get a number whose square is $(\mathrm ir)^2 = \mathrm i^2r^2 = -r^2$. Since all positive numbers can be written as $r^2$, we get that all negative numbers can be written as $(\mathrm ir)^2$. Thus the products $\mathrm ir$ are our imaginary numbers. We also see that $(-\mathrm i)^2 = (-1)^2\mathrm i^2 = -1$, so there are actually two numbers whose square is $-1$ (which makes sense because, after all, there are also two numbers whose square is $1$, namely $1$ and $-1$). OK, but what happens if we add a real number and one of our imaginary numbers? Well, now things get complex. We get general complex numbers. OK, but how do we know that we've not just made some nonsense, similar to the nonsense that we get when we invent a number $o$ so that $0o=1$? Well to see that, we recognize that all complex numbers are of the form $x+\mathrm iy$ with real numbers $x$ and $y$, and thus the pair $(x,y)$ completely specifies a complex number. Therefore now we re-derive the complex numbers as pairs of real numbers, but now using proper mathematical instruments so we know for sure that whatever we do is well defined. Since doing that we arrive at the very same structure which we had just derived in a quite informal way, we know that the complex numbers are a sound mathematical structure. OK, now that we have invented the imaginary and complex numbers, are they useful for something? Well, indeed they are. For example, several mathematical statements are much easier in complex numbers than in real numbers. For example, with complex numbers, every polynomial can be written in the form $a(x-x_1)(x-x_2)\cdots(x-x_n)$. With real numbers, this is impossible for polynomials having for example factors of the form $(x^2+1)$.
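(To make that last contrast concrete with an added example: over the reals $x^4-1=(x-1)(x+1)(x^2+1)$, and the quadratic factor refuses to split further; over the complex numbers it does split,
$$x^4-1=(x-1)(x+1)(x-\mathrm i)(x+\mathrm i),$$
so every root of $x^4=1$ is accounted for by a linear factor.)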
Moreover, we have the very useful relation $\mathrm e^{\mathrm i\phi} = \cos\phi + \mathrm i\sin\phi$. So forget about complicated addition theorems for sine and cosine. Just rewrite your formula in complex exponentials and enjoy the simple relation $\mathrm e^{\mathrm i(\alpha+\beta)}=\mathrm e^{\mathrm i\alpha}\mathrm e^{\mathrm i\beta}$. Finally, if you want to do quantum physics (and almost all modern physics is quantum physics) you'll find that you have to use complex numbers.

The term "imaginary" is somewhat disingenuous. It's a real concept, with real (at least theoretical) application, just like all the "real" numbers. Think back to that algebra class. You were asked to solve a polynomial equation; that is, find all the values of x for which the entire equation evaluates to zero. You learned to do this by polynomial factoring, simplifying the equation into a series of first-power terms, and then it was easy to see that if any one of those terms evaluated to zero, then everything else, no matter its value, was multiplied by zero, producing zero. You tried this on a few quadratic equations. Sometimes you got one answer (because the equation was $y=ax^2$ and so the only possible answer was zero), sometimes you got two (when the equation boiled down to $y= (x\pm n)(x \pm m)$, and so when $x=-m$ or $x=-n$ the equation was zero), and a couple of times, you got no answers at all (these were the equations that didn't break down into real factors $(x+n)(x+m)$ at all). In your algebra class, you're told this just happens sometimes, and the only way to make sure any factored term $(x\pm k)$ represents a real root is to plug in $-k$ for $x$ and solve. But, this is math. Mathematicians like things to be perfect, and don't like these "rules of thumb", where a method works sometimes but it's really just a "hint" of where to look. So, mathematicians looked for another solution. This leads us to application of the quadratic formula: for $ax^2 + bx + c = 0$, $x=\dfrac{-b \pm \sqrt{b^2-4ac}}{2a}$. This formula is quite literally the solution of the general form of the equation for x, and can be derived algebraically. We can now plug in the coefficients, and find the values of $x$ where $ax^2 + bx + c=0$. Notice the square root; we're first taught, simply, that if $b^2-4ac$ is ever negative, then the roots you'd get by factoring the equation won't work, and thus the equation has no real roots. $b^2-4ac$ is called the discriminant because it discriminates between these cases. But, the fact that $b^2-4ac$ can be negative remains a thorn in our side; we want to solve this equation. It's sitting right in front of us. If the discriminant were positive, we would have solved it already. It's that pesky negative that's the problem. Well, what if there was something we could do, that conforms to the rules of basic algebra, to get rid of the negative? Well, $-m = m*-1$, so what if we took our term that, for the sake of argument, evaluated to $-36$, and made it $36*-1$? Now, because $\sqrt{mn} = \sqrt{m}\sqrt{n}$, $\sqrt{-36} = \sqrt{36}\sqrt{-1} = 6\sqrt{-1}$. We've simplified the expression by removing what we can't express as a real number from what we can. Now to clean up that last little bit. $\sqrt{-1}$ is a common term whenever the discriminant is negative, so let's abstract it behind a constant, like we do $\pi$ and $e$, to make things a little cleaner. $\sqrt{-1} = i$.
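(As a quick numerical check of the formula when $b^2-4ac<0$ — a minimal sketch using Python's cmath module, with arbitrarily chosen coefficients:)

```python
import cmath

# x^2 + 2x + 5 = 0 has discriminant 4 - 20 = -16 < 0: no real roots.
a, b, c = 1, 2, 5
disc = b * b - 4 * a * c

# cmath.sqrt happily takes the square root of a negative number.
r1 = (-b + cmath.sqrt(disc)) / (2 * a)
r2 = (-b - cmath.sqrt(disc)) / (2 * a)

print(r1, r2)                  # (-1+2j) (-1-2j)
print(a * r1**2 + b * r1 + c)  # 0j: both roots really do solve the equation
```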
Now, we can define some properties of $i$, particularly a curious thing that happens as you raise its power: $$i^2 = \sqrt{-1}^2 = -1$$ $$i^3 = i^2*i = -i$$ $$i^4 = i^2*i^2 = -1*-1 = 1$$ $$i^5 = i^4*i = i$$ We see that $i^n$ transitions through four values infinitely as its power $n$ increases, and also that this transition crosses into and then out of the real numbers. Seems almost... cyclical, rotational. As Clive N's answer so elegantly explains it, that's what imaginary numbers represent; a "rotation" of the graph through another plane, where the graph DOES cross the $x$-axis. Now, it's not actually really a circular rotation onto a new linear z-plane. Complex numbers have a real part, as you'd see by solving the quadratic equation for a polynomial with imaginary roots. We typically visualize these values in their own 2-dimensional plane, the complex plane. A quadratic equation with imaginary roots can thus be thought of as a graph in four dimensions; three real, one imaginary. Now, we call $i$ and any product of a real number and $i$ "imaginary", because what $i$ represents doesn't have an analog in our "everyday world". You can't hold $i$ objects in your hand. You can't measure anything and get $i$ inches or centimeters or Smoots as your result. You can't plug any number of natural numbers together, stick a decimal point in somewhere and end up with $i$. $i$ simply is. As far as having use outside "ivory tower" math disciplines, a big one is in economics; many economies of scale can be described as a function of functions of the number of units produced, with a cost term and a revenue term (the difference being profit or loss), each of these in turn defined by a function of the per-unit sale price or cost and the number produced. This all generally simplifies to a quadratic equation, solvable by the quadratic formula. If the roots are imaginary, so are the breakeven points (and your expected profits). Another good one is in visualizations of complex numbers, and of their interactions when multiplied. The first one I was exposed to is a well-known series set, produced by taking an arbitrary complex number, squaring it ($(a+bi)^2 = (a+bi)(a+bi) = a^2 + 2abi + b^2i^2 = a^2-b^2 + 2abi$), and then adding back its original value. Repeated to infinity with this number, the sequence either stays bounded or diverges to infinity (with a few starting numbers exhibiting periodicity; they'll jump around infinitely between a finite number of points much like $i$ itself does). The set of all complex numbers for which the series does not diverge is the Mandelbrot set or M-set, and while the area of the graph is finite, its perimeter is infinite, making the graph of this set a fractal (one of the most highly-studied, in fact). The Mandelbrot set can in turn be defined as the set of all complex numbers $c$ for which the Julia set $J(f)$ of $f(z)=z^2 + c \to z$ is connected. A Julia set exists for every complex polynomial function, but usually the most interesting and useful sets are the ones for values of $c$ that belong to the M-set; Julia fractals are produced much the same way as the M-set (by repeated iteration of the function to determine if a starting $z$ converges or diverges), but $c$ is constant for all points of the set instead of being the original point being tested. You can define Julia sets with all sorts of fractal shapes.
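(Here is a minimal sketch of that membership test in Python — the iteration cap is an arbitrary choice, while the escape radius 2 is the standard one:)

```python
def in_mandelbrot(c: complex, max_iter: int = 100) -> bool:
    """Iterate z -> z*z + c from z = 0; c is in the set if z stays bounded.

    Once |z| > 2 the orbit is guaranteed to diverge, so we can stop early.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

print(in_mandelbrot(0))    # True: the orbit is 0, 0, 0, ...
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1j))   # True: falls into the 2-cycle -1+i, -i
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, 26, ... diverges
```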
These fractals, more accurately the iterative evaluation behind them, are used for pseudorandom number generation, computer graphics (the sets can be plotted in 3-d to create landscapes, or they can be used in shaders to define complex reflective properties of things like insect shells/wings), etc.

This question has already been answered quite thoroughly, but I just want to add that generally speaking, besides the whole numbers, none of the numbers we use "exist" in the real world. The only reason we have adopted extensions of the whole numbers to the natural, integer, rational, real, and complex sets in turn is because these extensions make problems solvable when thinking abstractly. At the end of the day, everything relates back to the whole numbers, however. Most people use all of the sets except for complex numbers in very commonplace, everyday situations, which is why we've come to view everything up to the real numbers as being fairly intuitive, at least at first glance. (When you dig under the surface, everything gets a great deal more subtle, which is why there are people who study primarily numbers, who we call number theorists. But that's a whole other story.) It's important to note that this progression isn't the only way to extend the whole numbers. There are hundreds of different arithmetics that have been designed, many not even based on the whole numbers. It's just that the usual extension applies to so many situations that come up commonly. (People who study universal algebra study the ways in which different possible math systems are alike and different. But that's a whole other story as well.) Complex numbers have taken their place as the normal extension to the reals because they are so useful when dealing with polynomials, which happen to arise in a massive number of mathematical situations. They also allow the exponential function and the trigonometric functions to be viewed as special cases of the same thing, through Euler's Formula, which enables all sorts of great algebra tricks. Specifically, these sorts of functions pop up constantly when using either Taylor or Fourier series to simplify the process of working on problems with tricky transcendental functions. Complex numbers make dealing with these representations a breeze (relatively). There are even further extensions. If instead of worrying about how to take the square root of -1, you worry about what happens past infinity, the real numbers can alternatively be expanded in several jumps to include hyperreals, superreals, and surreals. None of these systems have caught on, though, because we have alternative ways of dealing with the infinite and infinitesimal quantities in calculus that people find more powerful/convenient. You can also zip on past complex numbers to quaternions, and octonions on top of that. Vectors generalize all of the above. They aren't often thought of as numbers, but are similar in that they generalize the concept of a property of an object having a mathematical value. Matrixes generalize vectors, and tensors generalize matrixes. As you climb this ladder, you gain more and more mathematical power, but you start to lose properties that we expect of whole numbers. For complex numbers, order (greater than/less than) begins to become ambiguous. We generally don't think of vectors as "numbers" because we want all operations on vectors to work regardless of dimension, and most of the arithmetic operations don't really generalize.
With matrixes, the commutative property goes out the window, and things start to get really weird, especially when the matrixes aren't square. And so forth. All of this to say that numbers are best viewed as machinery. Different number systems are really only used to the extent to which they make a given math situation or problem easier to think about. If you're an engineer, complex numbers do this in many, many situations, which justifies their added...complexity. If you're not an engineer, they're definitely worth understanding, but you may not find uses for them on a daily basis.

Complex numbers are just a handy way to handle two dimensional points and move them around. The key to it is understanding that i × i = −1 is just a simple by-product of moving these points around. Real numbers correspond to numbers on a line (one dimension), which is usually how they are represented: a single axis where each number has a position. Operations on these real numbers have been defined to apply the two most basic transformations:
• Translation (addition): move a point by a given amount.
• Scaling (multiplication): move a point by an amount related to its value, e.g., two times further than it was.
Now, for a number of situations, you need to handle elements that are not on a line, but on a plane—you are now in two dimensions. When working in two dimensions, you need to know where you are horizontally and vertically, which you usually represent with two numbers. For instance, (3, 2) is "3 to the right, 2 up". Complex numbers are designed to manipulate these two-dimensional elements with "simple" mathematics. We define i as being the vertical unit. 2i is "2 up", −4i is "4 down", and 3 + 2i is "3 to the right and 2 up". We still can use translation and scaling like in the one-dimension case, but we would like to add something: rotation. How do I turn "2 to the right" into "2 up"? The solution comes with multiplying by i. If 1 is "1 to the right" and 1 × i is "1 up", then it means that multiplying by i is simply rotating by 90 degrees with point 0 as a center, counter-clockwise. 2 × i = 2i means "2 to the right" multiplied by i gives "2 up". And this is where it gets interesting: rotating the point "1 to the right" by 90 degrees gives "1 up". Rotating it again by 90 degrees gives "1 left". This means that multiplying 1 twice by i gives −1. We have 1 × i × i = −1, and since i × i = −1, i is by definition the square root of −1.

"Complex numbers are just a handy way to handle two dimensional points and move them around." No, it's not that simple. The Schrödinger equation (the fundamental equation of quantum mechanics) uses them. Its use is essential, contrary to the use of them in, e.g. electronic circuits. IMO, complex numbers are indispensable to describe the laws of physics. –  Makoto Kato Sep 21 '12 at 5:12

I just think of i as a symbol to represent an operation. When we want the square root of -1, just represent the whole statement with a symbol without evaluating it. This avoids the necessity of trying to explain it further; we don't need to map the answer to some real world concept, it's just a saved operation. We also know that the square root has the following property: √x · √x = x, no matter what x is, i.e. √−1 · √−1 = −1. Numbers are useful to me when they represent concepts in the real world.
I don't map i to anything in the real world, but with this ability to represent the operation, I can now manipulate it in algebraic expressions to ultimately get back to non-imaginary numbers that I do find useful. http://en.wikipedia.org/wiki/Euler%27s_formula

This argument is a loose argument for the sake of simplicity and because I know little about the subject. However, I think it may be good for non-mathematicians. The simplistic view is to note that imaginary numbers (or Complex Numbers) are numbers that are defined by humans to describe quantities different from the numbers we use in our day-to-day life (unless you are a scientist). They have certain rules that are somewhat different than those we use to calculate with non-complex numbers. Hence, the subjects of Complex Variables and Complex Analysis. In mathematics, this is not strange. There are concepts that may look surprising until you study them carefully. For example, in Binary Numbers $1+1=10$. This result does not make any sense unless you understand and realize that the result is valid in the Binary System, domain or framework. Personally, I thought about this before I read your question, and found that the problem of comprehending such concepts could arise when you think about a concept outside its framework (or domain) and try to rationalize the results using our everyday concepts. For example, trying to evaluate the $\sqrt{-1}$ on a regular calculator with no setting for Imaginary Arithmetic (the proper name is probably Complex Arithmetic). The calculator has to be set to the correct mode (or framework) to give a correct result. In fact, the software in your calculator should have given you a decent error message (or better yet the result of $i$ with a warning note). Again, the same thing will happen if you are using your calculator in Binary mode to add $1+1$: you will not get the familiar $2$. Many other examples can be derived around the same concept. I hope this helps.

I'm surprised that, as far as I can see, no one has mentioned Paul Nahin's book "An imaginary tale : the story of √-1", pub: Princeton University Press, ISBN 0-691-12798-0. It is a historical account of how √-1 became a necessary mathematical tool, and is written in an easy to read conversational style. I keep re-reading parts of it, like going over old ground again with a friend. Two reviews give contrasting opinions: the first very favourable http://plus.maths.org/content/imaginary-tale; the second giving a long list of (alleged -- I haven't checked them independently) inaccuracies and omissions: http://www.ams.org/notices/199910/rev-blank.pdf

The review published in Notices of AMS is not very flattering. –  Martin Sleziak Sep 26 '12 at 14:30
Thank you @Martin. I followed the link and am suitably chastened, but I did, and do, enjoy reading this book. I still think it might well be a nice introduction to complex numbers for someone who has not become acquainted with them already. –  Harry Weston Sep 26 '12 at 14:50
The book still might be a good read (I'm not saying that the opinion of the reviewer should be taken as infallible). But a link to a review might be useful for people reading your answer anyway. –  Martin Sleziak Sep 26 '12 at 14:58

One answer for why imaginary (and complex) numbers are useful is that they provide solutions to polynomial equations.
One answer for why imaginary (and complex) numbers are useful is that they provide solutions to polynomial equations. (The square root of −1 part comes from trying to solve the equation $x^2 = -1$, which has no real-number solutions.) The Fundamental Theorem of Algebra states that any polynomial equation with real (or even complex!) coefficients has solutions in the complex number system. The theorem doesn't always seem very powerful, because a lot of the time we discard all non-real solutions. But this isn't always the case. Linear (ordinary) differential equations can be solved by first solving an associated polynomial equation, and the complex solutions to the polynomial equation end up influencing the solution to the differential equation.

BTW, these differential equations I speak of can be used to solve quite a few real-world problems. The type of diff-eq I mentioned arises when studying oscillators. Forced oscillators, for example, can give rise to resonance. – Hugh Denoncourt Sep 21 '12 at 4:58

This is probably not helpful for someone first learning about imaginary numbers, but my personal motivation for complex numbers is so that every linear transformation over the reals can be decomposed into a direct sum of shift plus scaling operators, i.e., the Jordan normal form exists. If you work with matrices/linear operators over the reals for long enough, this is something that “feels” like it should be true – like some sort of linear algebra version of the pigeonhole principle – but it doesn't quite work over the reals because of rotation matrices. On the other hand, rotation is “like” a scaling, because if you apply a rotation twice it's the same as rotating twice as much once, so one feels this shouldn't really be an obstruction. In any case, complex numbers are exactly the number system you need to ensure the Jordan normal form exists, where rotations are scalings of complex eigenvectors by a complex number.

I just think of imaginary numbers as a definition. In the “real world” you cannot take the square root of $−1$ (which is what is happening with your calculator). However, we just define some “number”, call it $i$, such that $i^2=−1$, add it to our number system and see what happens. So when you study imaginary numbers, you are just “seeing what happens”. One can then write every number as $a+ib$ where $a,b\in\mathbb{R}$ ($a$ and $b$ are real numbers) and $i^2=−1$. In his comment, ivan is taking this pair $(a,b)$ and pointing out that this pair defines a point on a plane (so, like, a piece of paper, as when you draw a graph). This is the way that people often view imaginary numbers – as points on the plane (and the plane is the complex plane, or an Argand diagram).

In the real world, there is no such number as −1, if we're going that route. The complex numbers are as real or as fake as the negative numbers. – acjohnson55 Sep 20 '12 at 23:28

Thus the quote marks. – user1729 Sep 21 '12 at 9:05

Check it out, I just learned this very recently: define the set of all ordered pairs $(x, y)$, call it $\mathbb{C}$, the set of complex numbers. We call $x$ the real part and $y$ the imaginary part. Now define multiplication like this:

$(x, y) \cdot (a, b) = (xa - yb, \; xb + ya)$

Now I'm not sure what that's supposed to be, but observe:

$(0, 1)^2 = (0, 1) \cdot (0, 1) = (0 - 1, \; 0 + 0) = (-1, 0)$

Since the second number in the ordered pair is the imaginary part, $(0, 1)$ corresponds to $0 + 1 \cdot i = i$. (In fact, all complex numbers $(x, y)$ correspond to $x + yi$.) So I have just shown you how defining multiplication that way results in $i^2 = -1$. But that multiplication isn't the multiplication I'm familiar with! you say. Well, guess what:

$a \cdot b = (a, 0) \cdot (b, 0) = (ab - 0, \; 0 + 0) = (ab, 0) = ab$

Yes it is! So what I get from this is that essentially someone said, “What if there was a number that could be squared to get −1?”, and there you have it. In fact, once you define addition componentwise:

$(a, b) + (x, y) = (a + x, \; b + y)$

I'm pretty sure you'll find this new system of complex numbers, $\mathbb{C}$, to be compatible with the old set of real numbers, $\mathbb{R}$.

$(x,y) \cdot (a,b) = (xa - yb, \; xb + ya)$ is just FOIL, as you're used to; observe: $(x+yi)\cdot(a+bi)$ is equivalent to the above. Just dawned on me XD, I'm a bit slow sometimes... – Christian Burke Sep 20 '12 at 21:53

If that blows your mind, check out polar coordinates: $x + yi = r\,e^{i\theta}$ (where $\theta$ is the angle from 0 in radians, and $r$ is the distance from 0). – fennec Sep 20 '12 at 22:39
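Here is the ordered-pair construction above written out as a short sketch (an illustration added here, with function names of my own invention):

```python
def cmul(p, q):
    """(x, y) * (a, b) = (xa - yb, xb + ya)"""
    (x, y), (a, b) = p, q
    return (x * a - y * b, x * b + y * a)

def cadd(p, q):
    """(a, b) + (x, y) = (a + x, b + y), componentwise."""
    return (p[0] + q[0], p[1] + q[1])

i = (0, 1)
print(cmul(i, i))            # (-1, 0)  -- i squared is -1
print(cmul((3, 0), (4, 0)))  # (12, 0)  -- agrees with ordinary 3 * 4
print(cadd((3, 2), (1, 5)))  # (4, 7)
```

Pairs of the form $(a, 0)$ behave exactly like the real numbers, which is the compatibility claim in the answer above.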
Imaginary numbers can also be thought of as a simple hack mathematicians use when they want to keep units separate. Need a result with more than one component? Make it a multiple of something that won't resolve. Pretty handy.

That works if all you're doing is adding or subtracting, but once you start doing more complicated operations than that, the real and imaginary parts interact. I think complex numbers are best thought of as describing situations where you have two quantities that can be thought of as separate but that interact in particular ways at times, like phases. In any honest use of complex numbers, the real and imaginary parts must have the same units. – acjohnson55 Sep 21 '12 at 20:48

@Sachin Kainth: Hmmm. The question you ask is a deep one, and the answer is far from easy. The quote above from one of the earlier answers is not right. I am not sure that “whole numbers” do “exist in the real world”, let alone that “real numbers” do, or that complex numbers, quaternions or octonions don't. The relationship between maths and the real world is extremely mysterious; it goes straight into the classic “God is a Mathematician” statement. I do not remotely have time to go into it properly here. Whole books have been written about it, some better than others. One viewpoint is simply to ignore questions of “reality” or relationship to the “real world” and say that complex numbers are exceedingly useful. Another approach is to go down the Clifford algebra route, originally pioneered by William Clifford (1845–79) at my old college (Trinity, Cambridge), which has recently seen an explosion of interest among theoretical physicists led perhaps by Stephen Gull at the Cavendish. Roger Penrose (Oxford) is also interesting on the subject of complex numbers. But all that stuff requires some mathematical sophistication to understand. An important prior question concerns “real numbers” or even fractions: there are many deeply puzzling and paradoxical questions about them, and I suspect you have not been exposed to them. Looking for someone who “totally gets it” is likely to be a vain hope. If you find them, let me know!
Imaginary numbers were invented to make calculations easier. Everyone knows the quadratic formula; when Cardano was working on the formula for cubics (known as Cardano's formula), he found out that it was extremely hard to write down a formula unless you put down some symbol as a placeholder for $\sqrt{-1}$, which you manipulate like a number and which always cancelled out in the end. So he left it in. He was embarrassed by it, and called it imaginary, but the formula worked. Mathematicians later found out that imaginary numbers made a lot of formulas easier, like finding a formula for $\sin(3x)$, and so they found consistent rules for them. Ever since then, they've kept making formulas easier.

It's an extension field . . . but since you probably don't know that, the terminology is horrible! Just think of imaginary numbers as the completion of the real numbers so that you can find solutions to the equation $x^2 + 1 = 0$. If you set $i = \sqrt{-1}$, then $i$ and $-i$ are solutions to this polynomial; there are no ‘real solutions’. It is known that any univariate polynomial equation of degree $n$ has exactly $n$ solutions (counted with multiplicity) in the complex plane. This can be generalized further to multivariate square systems, in that the maximum number of solutions is the product of the degrees of the functions. For example,

$x_1^3 + x_1 x_2 + 4 = 0$
$x_1^2 x_2 + x_1 + x_2 - 2 = 0$

has at most $3 \cdot 3 = 9$ solutions, since each equation has total degree $3$. This is known as Bezout's Theorem and is a result of classical algebraic geometry. Letting $i = \sqrt{-1}$ is necessary to find all the solutions (and it is possible to find these solutions numerically, up to an arbitrary choice of precision).
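As a numerical illustration of the degree-$n$-has-$n$-roots claim, here is a quick sketch using numpy (added here rather than taken from the answer):

```python
import numpy as np

# x^2 + 1 = 0 has no real roots, but exactly two complex ones: +/- i.
print(np.roots([1, 0, 1]))          # approximately [0.+1.j, 0.-1.j]

# A degree-5 polynomial always has 5 complex roots (with multiplicity).
coeffs = [1, -2, 0, 3, -1, 4]       # x^5 - 2x^4 + 3x^2 - x + 4
print(len(np.roots(coeffs)))        # 5
```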
Many, Many, Many, Many Worlds

From Sliders, NBCUniversal

by Sean Carroll

I have often talked about the Many-Worlds or Everett approach to quantum mechanics — here's an explanatory video, an excerpt from From Eternity to Here, and slides from a talk. But I don't think I've ever explained as persuasively as possible why I think it's the right approach. So that's what I'm going to try to do here. Although to be honest right off the bat, I'm actually going to tackle a slightly easier problem: explaining why the many-worlds approach is not completely insane, and indeed quite natural. The harder part is explaining why it actually works, which I'll get to in another post.

Any discussion of Everettian quantum mechanics (“EQM”) comes with the baggage of pre-conceived notions. People have heard of it before, and have instinctive reactions to it, in a way that they don't have to (for example) effective field theory. Hell, there is even an app, universe splitter, that lets you create new universes from your iPhone. (Seriously.) So we need to start by separating the silly objections to EQM from the serious worries.

The basic silly objection is that EQM postulates too many universes. In quantum mechanics, we can't deterministically predict the outcomes of measurements. In EQM, that is dealt with by saying that every measurement outcome “happens,” but each in a different “universe” or “world.” Say we think of Schrödinger's Cat: a sealed box inside of which we have a cat in a quantum superposition of “awake” and “asleep.” (No reason to kill the cat unnecessarily.) Textbook quantum mechanics says that opening the box and observing the cat “collapses the wave function” into one of two possible measurement outcomes, awake or asleep. Everett, by contrast, says that the universe splits in two: in one the cat is awake, and in the other the cat is asleep. Once split, the universes go their own ways, never to interact with each other again.

And to many people, that just seems like too much. Why, this objection goes, would you ever think of inventing a huge — perhaps infinite! — number of different universes, just to describe the simple act of quantum measurement? It might be puzzling, but it's no reason to lose all anchor to reality.

To see why objections along these lines are wrong-headed, let's first think about classical mechanics rather than quantum mechanics. And let's start with one universe: some collection of particles and fields and what have you, in some particular arrangement in space. Classical mechanics describes such a universe as a point in phase space — the collection of all positions and velocities of each particle or field. What if, for some perverse reason, we wanted to describe two copies of such a universe (perhaps with some tiny difference between them, like an awake cat rather than a sleeping one)? We would have to double the size of phase space — create a mathematical structure that is large enough to describe both universes at once. In classical mechanics, then, it's quite a bit of work to accommodate extra universes, and you better have a good reason to justify putting in that work. (Inflationary cosmology seems to do it, by implicitly assuming that phase space is already infinitely big.)

That is not what happens in quantum mechanics. The capacity for describing multiple universes is automatically there.
We don’t have to add anything. The reason why we can state this with such confidence is because of the fundamental reality of quantum mechanics: the existence of superpositions of different possible measurement outcomes. In classical mechanics, we have certain definite possible states, all of which are directly observable. It will be important for what comes later that the system we consider is microscopic, so let’s consider a spinning particle that can have spin-up or spin-down. (It is directly analogous to Schrödinger’s cat: cat=particle, awake=spin-up, asleep=spin-down.) Classically, the possible states are “spin is up” “spin is down”. Quantum mechanics says that the state of the particle can be a superposition of both possible measurement outcomes. It’s not that we don’t know whether the spin is up or down; it’s that it’s really in a superposition of both possibilities, at least until we observe it. We can denote such a state like this: (“spin is up” + “spin is down”). While classical states are points in phase space, quantum states are “wave functions” that live in something called Hilbert space. Hilbert space is very big — as we will see, it has room for lots of stuff. To describe measurements, we need to add an observer. It doesn’t need to be a “conscious” observer or anything else that might get Deepak Chopra excited; we just mean a macroscopic measuring apparatus. It could be a living person, but it could just as well be a video camera or even the air in a room. To avoid confusion we’ll just call it the “apparatus.” In any formulation of quantum mechanics, the apparatus starts in a “ready” state, which is a way of saying “it hasn’t yet looked at the thing it’s going to observe” (i.e., the particle). More specifically, the apparatus is not entangled with the particle; their two states are independent of each other. So the quantum state of the particle+apparatus system starts out like this: (“spin is up” + “spin is down” ; apparatus says “ready”)                (1) The particle is in a superposition, but the apparatus is not. According to the textbook view, when the apparatus observes the particle, the quantum state collapses onto one of two possibilities: (“spin is up”; apparatus says “up”) (“spin is down”; apparatus says “down”). When and how such collapse actually occurs is a bit vague — a huge problem with the textbook approach — but let’s not dig into that right now. But there is clearly another possibility. If the particle can be in a superposition of two states, then so can the apparatus. So nothing stops us from writing down a state of the form (spin is up ; apparatus says “up”) + (spin is down ; apparatus says “down”).                                   (2) The plus sign here is crucial. This is not a state representing one alternative or the other, as in the textbook view; it’s a superposition of both possibilities. In this kind of state, the spin of the particle is entangled with the readout of the apparatus. What would it be like to live in a world with the kind of quantum state we have written in (2)? It might seem a bit unrealistic at first glance; after all, when we observe real-world quantum systems it always feels like we see one outcome or the other. We never think that we ourselves are in a superposition of having achieved different measurement outcomes. This is where the magic of decoherence comes in. (Everett himself actually had a clever argument that didn’t use decoherence explicitly, but we’ll take a more modern view.) 
I won’t go into the details here, but the basic idea isn’t too difficult. There are more things in the universe than our particle and the measuring apparatus; there is the rest of the Earth, and for that matter everything in outer space. That stuff — group it all together and call it the “environment” — has a quantum state also. We expect the apparatus to quickly become entangled with the environment, if only because photons and air molecules in the environment will keep bumping into the apparatus. As a result, even though a state of this form is in a superposition, the two different pieces (one with the particle spin-up, one with the particle spin-down) will never be able to interfere with each other. Interference (different parts of the wave function canceling each other out) demands a precise alignment of the quantum states, and once we lose information into the environment that becomes impossible. That’s decoherence. Once our quantum superposition involves macroscopic systems with many degrees of freedom that become entangled with an even-larger environment, the different terms in that superposition proceed to evolve completely independently of each other. It is as if they have become distinct worlds — because they have. We wouldn’t think of our pre-measurement state (1) as describing two different worlds; it’s just one world, in which the particle is in a superposition. But (2) has two worlds in it. The difference is that we can imagine undoing the superposition in (1) by carefully manipulating the particle, but in (2) the difference between the two branches has diffused into the environment and is lost there forever. All of this exposition is building up to the following point: in order to describe a quantum state that includes two non-interacting “worlds” as in (2), we didn’t have to add anything at all to our description of the universe, unlike the classical case. All of the ingredients were already there! Our only assumption was that the apparatus obeys the rules of quantum mechanics just as much as the particle does, which seems to be an extremely mild assumption if we think quantum mechanics is the correct theory of reality. Given that, we know that the particle can be in “spin-up” or “spin-down” states, and we also know that the apparatus can be in “ready” or “measured spin-up” or “measured spin-down” states. And if that’s true, the quantum state has the built-in ability to describe superpositions of non-interacting worlds. Not only did we not need to add anything to make it possible, we had no choice in the matter. The potential for multiple worlds is always there in the quantum state, whether you like it or not. The next question would be, do multiple-world superpositions of the form written in (2) ever actually come into being? And the answer again is: yes, automatically, without any additional assumptions. It’s just the ordinary evolution of a quantum system according to Schrödinger’s equation. Indeed, the fact that a state that looks like (1) evolves into a state that looks like (2) under Schrödinger’s equation is what we mean when we say “this apparatus measures whether the spin is up or down.” The conclusion, therefore, is that multiple worlds automatically occur in quantum mechanics. They are an inevitable part of the formalism. The only remaining question is: what are you going to do about it? There are three popular strategies on the market: anger, denial, and acceptance. 
There are three popular strategies on the market: anger, denial, and acceptance.

The “anger” strategy says “I hate the idea of multiple worlds with such a white-hot passion that I will change the rules of quantum mechanics in order to avoid them.” And people do this! In the four options listed here, both dynamical-collapse theories and hidden-variable theories are straightforward alterations of the conventional picture of quantum mechanics. In dynamical collapse, we change the evolution equation, by adding some explicitly stochastic probability of collapse. In hidden variables, we keep the Schrödinger equation intact, but add new variables — hidden ones, which we know must be explicitly non-local. Of course there is currently zero empirical evidence for these rather ad hoc modifications of the formalism, but hey, you never know.

The “denial” strategy says “The idea of multiple worlds is so profoundly upsetting to me that I will deny the existence of reality in order to escape having to think about it.” Advocates of this approach don't actually put it that way, but I'm being polemical rather than conciliatory in this particular post. And I don't think it's an unfair characterization. This is the quantum Bayesianism approach, or more generally “psi-epistemic” approaches. The idea is to simply deny that the quantum state represents anything about reality; it is merely a way of keeping track of the probability of future measurement outcomes. Is the particle spin-up, or spin-down, or both? Neither! There is no particle, there is no spoon, nor is there the state of the particle's spin; there is only the probability of seeing the spin in different conditions once one performs a measurement. I advocate listening to David Albert's take at our WSF panel.

The final strategy is acceptance. That is the Everettian approach. The formalism of quantum mechanics, in this view, consists of quantum states as described above and nothing more, which evolve according to the usual Schrödinger equation and nothing more. The formalism predicts that there are many worlds, so we choose to accept that. This means that the part of reality we experience is an indescribably thin slice of the entire picture, but so be it. Our job as scientists is to formulate the best possible description of the world as it is, not to force the world to bend to our pre-conceptions.

Such brave declarations aren't enough on their own, of course. The fierce austerity of EQM is attractive, but we still need to verify that its predictions map on to our empirical data. This raises questions that live squarely at the physics/philosophy boundary. Why does the quantum state branch into certain kinds of worlds (e.g., ones where cats are awake or ones where cats are asleep) and not others (where cats are in superpositions of both)? Why are the probabilities that we actually observe given by the Born Rule, which states that the probability equals the wave function squared? In what sense are there probabilities at all, if the theory is completely deterministic?

These are the serious issues for EQM, as opposed to the silly one that “there are just too many universes!” The “why those states?” problem has essentially been solved by the notion of pointer states — quantum states split along lines that are macroscopically robust, which are ultimately delineated by the actual laws of physics (the particles/fields/interactions of the real world). The probability question is trickier, but also (I think) solvable.
Decision theory is one attractive approach, and Chip Sebens and I are advocating self-locating uncertainty as a friendly alternative. That's the subject of a paper we just wrote, which I plan to talk about in a separate post.

There are other silly objections to EQM, of course. The most popular is probably the complaint that it's not falsifiable. That truly makes no sense. It's trivial to falsify EQM — just do an experiment that violates the Schrödinger equation or the principle of superposition, which are the only things the theory assumes. Witness a dynamical collapse, or find a hidden variable. Of course we don't see the other worlds directly, but — in case we haven't yet driven home the point loudly enough — those other worlds are not added on to the theory. They come out automatically if you believe in quantum mechanics. If you have a physically distinguishable alternative, by all means suggest it — the experimenters would love to hear about it. (And true alternatives, like GRW and Bohmian mechanics, are indeed experimentally distinguishable.)

Sadly, most people who object to EQM do so for the silly reasons, not for the serious ones. But even given the real challenges of the preferred-basis issue and the probability issue, I think EQM is way ahead of any proposed alternative. It takes at face value the minimal conceptual apparatus necessary to account for the world we see, and by doing so it fits all the data we have ever collected. What more do you want from a theory than that?

Piece crossposted with Sean Carroll's website.

The Blackness

Red and White, Edvard Munch, 1899 – 1900

by Jenny Diski

Piece crossposted with This and That Continued. Originally published in Swedish in Goteborgs-Posten.

Mario Carpo: Voice, Words, Memory

The Village School, Albert Anker, 1896

by Mario Carpo

It all started with cellphones, a long time ago. No student, and few teachers, would make voice calls from class, but in the early 2000s GSM phones started to offer nearly free text messaging, and students (and faculty) started to text during lectures and seminars. Before long students were composing text messages without even looking at their phones, courtesy of the good old duodecimal keyboard; some could actually text from a phone in their pocket. Then of course 3G, web-enabled smartphones came, followed by tablets, and as most of our classrooms generously provide Wi-Fi connections, everyone sitting in a class these days has endless ways to reach out to whatever can be found online, which is to say almost anything.

Permanent connectivity is a wondrous development and we all profit immensely from it, for all kinds of purposes. But permanent connectivity in a higher education environment, and particularly in research seminars, is a mixed blessing, and can easily get out of hand. I do not worry here about the many unsuitable, illegal or just plain silly classroom uses of web-enabled information technologies. I recently saw a student choose and buy a sweater from her tablet while sitting in a lecture theatre, only a few rows from the speaker. This would not have been possible only a few years ago, but people who want to waste their time will always find a way, irrespective of the technologies at hand. While in high school I was myself very advanced in the art of not being seen reading a newspaper (in print) during some classes.
Today's students striving to use touch screens to type notes may be inflicting unnecessary pain upon themselves, but they are not very different from many students of my generation who took handwritten notes from the first to the last minute of class, non-stop, as if writing under dictation. Generally speaking, the degree of attention that teachers and students can muster in class is probably constant in time, and it is only marginally and temporarily affected by technological change. One reason for this is that classroom lectures or seminars have always been based upon just one medium, and one information technology: the human voice, and the spoken word, whether in the form of discourse (one-to-many, in the case of lectures) or dialogue (one-to-one or many-to-many, in the case of seminars). This format has survived all technological change to this day, which makes it today almost absurdly anachronistic; and it may now be under its most insidious attack ever, as today's enemy is coming from within, and in disguise.

Socrates famously taught by oral questions and answers – viva voce. His dialogues are known to us only because they were put into writing (purportedly by Plato) one generation later. A bit later still Aristotle abandoned the dialogic format – it is not clear if by chance or by design – and his writings expound arguments without anyone to speak for them: Aristotle's speaker is in fact the book itself, not a living person whose voice has been recorded and transcribed. Not surprisingly, teaching in medieval (and mostly Aristotelian) universities was, in theory, based on written texts. But scribal transmission was aleatoric and expensive, and good copies were few and far between. As a result, manuscripts were often read aloud, and the few extant authors were memorized, annotated and commented upon ad infinitum. This of course changed with print – to this day, most classroom assignments imply reading many texts, not parsing and memorizing just one.

Digital technologies may not, or not yet, have significantly extended the corpus of relevant texts for each discipline, but they have already made many relevant texts permanently and immediately available, and searchable, through any web-connected device – at the time of writing, as small as a cellphone, a pair of glasses, a wristwatch. Text and image-based information retrieval technologies are an extraordinary scholarly asset, but when a universal catalogue of all kinds of sources and an ever-growing repository of data of all sorts is brought to a seminar or a lecture class, where anyone can search through it on the spot and on the fly, by a simple tap on a tablet screen, strange and unwieldy things start to happen.

There are only so many things one can say in two hours, so every teacher preparing a lecture, or a discussion session, makes a careful selection of things to say – which also means, implicitly, a much longer list of things that should remain unsaid, at least as long as the class will last. Facts are sifted, compared, selected, and those that do not fit the topic of the day are dropped. That's the way the human mind works – and most likely always has: by building simple theories out of many apparently unrelated events. We cope with Big Data by dropping most of them, and carefully arranging the few data we need to make some limited sense of the world – by inferring patterns, laws, rules, principles, mathematical functions, etc.
And one need not invoke a general theory of science to understand that a two-hour session can only deal with very few data indeed – those we can fit into an argument we can memorize and that takes no more than two hours to present and discuss. The classroom, based as it is on voice, words, and memory, is the realm of Small Data, not of Big Data. But today each web-enabled tablet, phone and computer is a window open onto Big Data – and Big Data are instantaneously searchable. This means that every fact mentioned in class can be immediately checked or further researched by all – a good thing, evidently.

But today, twenty students checking the same banal item of information online will in most cases quickly come up with twenty slightly different results. This is partly due to algorithmic search customization, and partly to the unauthorized nature of most freely searchable digital data: there is so much of it online precisely because most of it is raw, crowdsourced, and often inaccurate. The nature of hypertextual information favors post-modern aggregation, to the detriment of modern authorial precision. This is why Wikipedia works, sometimes surprisingly well, while the Encyclopedia Britannica in print recently went out of business. By their own technical logic, most digital data are very reliable on average or in aggregate, but they are never entirely trustworthy if taken one by one. Even if they were, their sheer quantity suggests that they would not sit well in a two-hour seminar. Twenty years ago no one would have come to class with a 35-volume encyclopedia. Today, many students and instructors think they can bring to class almost all the data in the world, to search at will.

For the time being, the irruption of Big Data in the word-based environment of the classroom may appear as little more than an occasional nuisance, but it is one that flags and points to a major cultural and technological issue of our time. Digital information retrieval systems are increasingly at odds with the processes and logic of orality, human memory, and even of alphabetical writing and print. No one can prove that we still need oral teaching, and that we can still profit from this fossilized survival of the most ancestral of all information technologies. But in case we want to preserve this format, perhaps just for two hours a week, then for those two hours Big Data should be shut down.

Evidently, we would still need digital technologies in class to show and process documents, and for a number of other very good reasons. But additional data should not be brought in after class has started. The best means to this end is for all (including instructors) to abstain from using any web-connected browser while the class is in session. In fact, in many cases, instructors and students could easily come to class with no information technology at all, except their memory, and words to give it voice. For those that do not trust their memory, a sheet of paper and a pen may help – but few arguments that cannot be memorized and oralized from memory may be worth remembering anyway. Either way, fact checking should be left outside of class – and with that, the hypertextual, serendipitous pleasures of fact surfing. Thanks to digital searches, new data will be found, sifted, collated, streamlined, and new arguments will arise to present and discuss the next time the class meets.
A long time ago, when marketplaces were physical places, traders did not bring stock or cash to the exchange: they traded by voice, and their word was trusted. At the end of the day, accounts were reconciled, and woe to the trader who had sold stock he did not own, or paid with cash he did not have. The same principle should apply to the voice-based environment of classrooms and seminars. The only tablet Aristotle could bring to school, or Cicero to the senate, was a wax tablet. In alphabetical mode, a wax tablet can hold approximately 2 kilobytes of data. We are lucky to have so many more technological options to choose from today. But when we are in class, if we want to profit from it, similar limits would still make plenty of sense.

About the Author: Mario Carpo, architectural historian and critic, is the author of Architecture in the Age of Printing (MIT Press, 2001), The Alphabet and the Algorithm (MIT Press, 2011), The Digital Turn in Architecture (Wiley, 2012), and other books. He was recently appointed professor of architectural history at the Bartlett School of Architecture in London.

Michael B. Katz: Poor Science

Photograph by Tomas Castelazo

by Michael B. Katz

. . . if the misery of our poor be caused not by the laws of nature, but by our institutions, great is our sin. . . . — Charles Darwin (1839)

For most of recorded history, poverty reflected God's will. The poor were always with us. They were not inherently immoral, dangerous, or different. They were not to be shunned, feared, or avoided. In the late eighteenth and early nineteenth centuries, a harsh new idea of poverty and poor people as different and inferior began to replace this ancient biblical view. In what ways, exactly, are poor people different from the rest of us? This became – and remains – a burning question answered with moral philosophy, political economy, social science, and, eventually, biology. Why did biological conceptions of poverty wax and wane over the last century and a half? What forms have they taken? What have been their consequences?

The biological definition of poverty reinforces the idea of the undeserving poor, which is the oldest theme in post-Enlightenment poverty discourse. Its history stretches from the late eighteenth century through to the present. Poverty, in this view, results from personal failure and inferiority. Moral weaknesses – drunkenness, laziness, sexual promiscuity – constitute the most consistent markers of the undeserving poor. The idea that a culture of poverty works its insidious influence on individuals, endowing them with traits that trap them in lives of destitution, entered both scholarly and popular discourse somewhat later and endures to this day. Faulty heredity composes the third strand in the identification of the undeserving poor; backed by scientific advances in molecular biology and neuroscience, it is enjoying a revival. The historical record shows this idea in the past to have been scientifically dubious, ethically suspect, politically harmful, and, at its worst, lethal. That is why we should pay close attention to its current resurgence. This article excavates the definition of poor people as biologically inferior.
It not only documents its persistence over time but emphasizes three themes. First, the concept rises and falls in prominence in response to institutional and programmatic failure. It offers a convenient explanation for why the optimism of reformers proved illusory or why social problems remained refractory despite efforts to eliminate them. Second, its initial formulation and reformulation rely on bridging concepts that try to parse the distance between heredity and environment through a kind of neo-Lamarckianism. These early bridges invariably crumble. Third, hereditarian ideas always have been supported by the best science of the day. This was the case with the ideas that ranked “races”; underpinned immigration restrictions; and encouraged compulsory sterilization – as well as those that have written off the intellectual potential of poor children.

In its review of the biological strand in American ideas about poverty, this article begins in the 1860s with the first instance of the application of hereditarian thought I have discovered; moves forward to social Darwinism and eugenics, immigration restriction, and early IQ testing. It then picks up the story with Arthur Jensen's famous 1969 article in the Harvard Educational Review, follows it to the Bell Curve, and ends with the astonishing rise of neuroscience and the field of epigenetics. It concludes by arguing that despite the intelligence, skill, and good intentions of contemporary scientists, the history of biological definitions of poor persons calls for approaching the findings of neuroscience with great caution.

In 1866 the Massachusetts Board of State Charities, which had oversight of the state's public institutions, wrote, “The causes of the evil [‘the existence of such a large proportion of dependent and destructive members of our community’] are manifold, but among the immediate ones, the chief cause is inherited organic imperfection – vitiated constitution or poor stock.” This early proclamation of the biological inferiority of the undeserving poor arose as a response to institutional failure. Recurrent institutional and programmatic failure has kept it alive in writing about poverty ever since, supported always by scientific authority.

Beginning in the early nineteenth century, reformers sponsored an array of new institutions designed to reform delinquents, rehabilitate criminals, cure the mentally ill, and educate children. Crime, poverty, and ignorance, in their view, were not distinct problems. The “criminal,” “pauper,” and “depraved” represented potentialities inherent in all people and triggered by faulty environments. Poverty and crime, for instance, appeared to cause each other and to occur primarily in cities, most often among immigrants. This stress on the environmental causes of deviance and dependence, prominent in the 1840s, underpinned the first reform schools, penitentiaries, mental hospitals, and, even, public schools. By the mid-1860s it had become clear that none of the new institutions built with such optimism had reached their goals. They manifestly failed to rehabilitate criminals, cure the mentally ill, reeducate delinquents, or reduce poverty and other forms of dependence. The question was, why? Answers did not look hard at the failures in institutional design and implementation or at the contexts of inmates', prisoners', and patients' lives. Rather, they settled on individual-based explanations: inherited deficiencies.
The Massachusetts Board of State Charities supported its belief that the inheritance of acquired characteristics (later known as Lamarckianism) reproduced the undeserving poor – as well as criminals, the mentally ill, and other depraved and dependent individuals – with scientific evidence from physiologists, which emphasized the toxic impact of large amounts of alcohol on the stimulation of the “animal passions” and the repression of “will”. The State Board's gloomy emphasis on heredity did not lead it to pessimistic conclusions, however. It believed, rather, in the body's recuperative power over time. Vice had a standard deviation that, if not exceeded, could be eradicated by the body's natural capacity for healing. In fact, the Board still believed that the persistence of crime and poverty was “phenomenal – not essential in society . . . their numbers depend on social conditions within human control.” The Board had revealed the source of social pathologies through the scientific study of heredity; through the scientific study of society it would excavate the laws governing its prevention. The Board started out with an ideology prefiguring eugenics and ended with one anticipating Progressivism. Its early bridge between heredity and environmentalism, or biology and reform, remained one crossed by reformers for only a relatively short time until it was broken by social Darwinism. It was rebuilt in the early twentieth century until demolished once more by eugenicists and their successors, and then reconstructed yet again in the early twenty-first century by the proponents of epigenetics.

By the 1920s, two initially separate streams – social Darwinism and eugenics – converged in the hard-core eugenic theory that justified racism and social conservatism. Social Darwinism attempted to apply the theory of Darwinian evolution to human behavior and society. Social Darwinists – whose leading spokesperson, Herbert Spencer, enjoyed a triumphant tour of the U.S. in 1882 – insisted on the heritability of socially harmful traits, including pauperism, mental illness, and criminality, and on the harmful effects of public and private charities that interfered with the survival of the fittest. They viewed the “unfit” not only as unworthy losers but as savage throwbacks to a primitive life. Hereditarian beliefs thus fed widespread fears of “race suicide,” giving an urgency to the problem of population control. The “ignorant, the improvident, the feeble-minded, are contributing far more than their quota to the next generation,” warned Frank Fetter of Cornell University.

The English scientist Francis Galton originally coined the term eugenics in 1883 to denote the improvement of human stock by giving “the more suitable races or strains of blood a better chance of prevailing speedily over the less suitable.” In the United States, eugenic “science” owed more to the genetic discoveries of Gregor Mendel, first published in 1866 but unrecognized until the end of the century, than to mathematical genetics as practiced by Galton and his leading successor Karl Pearson. In 1904 Charles Davenport, the leading US eugenics promoter, used funds from the newly established Carnegie Institution to set up a laboratory at Cold Spring Harbor on Long Island.
Davenport looked forward to the “new era” of cooperation between the sociologist, legislator, and biologist who together would “purify our body politics of the feeble-minded, and the criminalistic and the wayward by using the knowledge of heredity.” Eugenics entered public policy through its influence on immigration restriction and social reform as well as through state sterilization laws. Indiana passed the first of these in 1907. By the end of the 1920s, twenty-four states had passed laws permitting the involuntary sterilization of the mentally unfit, a practice upheld by the U.S. Supreme Court in 1927 in Buck v. Bell.

In the United States, the application of evolutionary and genetic ideas to social issues gained traction in the late nineteenth century as a tool for explaining and dealing with the vast changes accompanying industrialization, urbanization, and immigration. Eugenics drew support from both conservatives and progressives and underlay the emerging consensus on the need for immigration restriction that resulted in the nationality-based immigration quotas legislated by Congress in 1924. “In the early twentieth century,” point out Hilary Rose and Steven Rose in Genes, Cells, and Brains, “barring Catholics, eugenics commanded the support of most EuroAmerican intellectuals – not just racists and reactionaries but feminists, reformers, and Marxists.” Conservatives found in eugenics and social Darwinism justification for opposing public and private charities that would contribute to the reproduction of the unfit. But eugenics found enthusiasts as well in birth control advocate Margaret Sanger and in settlement house workers preoccupied with the alleged degeneracy of an immigrant working class. Like their predecessors on the Massachusetts Board of State Charities decades earlier, they turned to the heritability of acquired characteristics and the plasticity of human nature to reconcile their belief in the biological foundation of physical and moral degeneration with their commitment to the power of social reform to build character and instill habits.

Nonetheless, by the 1920s, cracks appeared in the bridge that linked the environmentalists and hereditarians. Hereditarians took an increasingly hard line, manifest in the new science of intelligence tests as well as in their continued advocacy of sterilization. Developed by the French psychologist Alfred Binet, intelligence tests were brought to the United States in 1908 by American psychologist Henry H. Goddard, who first applied them at the Vineland, New Jersey, Training School for Feeble-Minded Boys and Girls – he directed its new laboratory for the study of mental deficiency. Other psychologists picked up Goddard's work on intelligence testing, extended it to other populations, and experimented with different methods. Lewis Terman at Stanford, one of the most prominent and a proponent of the hereditarian view of intelligence, introduced the term “IQ,” which stood for “intelligence quotient,” a concept developed in 1912 by William Stern, a German psychologist. Intelligence testing, which at first aroused skepticism and hostility, received a tremendous boost during World War I, when a trial of the tests on more than 1.7 million people during the war dramatically brought them to public attention. The tests purported to show that nearly one-fourth of the draft army could not read a newspaper or write a letter home and, by implication, that the mental ages of the average white and black Americans were, respectively, thirteen and ten.
Davenport, Goddard, and others blamed the results for whites on the immigration of inferior races and used them as ammunition in their advocacy of immigration restriction. The tests, they argued, demonstrated the genetic heritability of mental deficiency. These ideas worked their way into public education in the 1920s, underpinning the educational psychology taught in teacher preparation courses and the massive upsurge in testing used to classify students, predict their futures, and justify unequal educational outcomes. “Terman and other psychologists,” points out historian Paula Fass, “were quick to point out that opening up avenues of opportunity to the children of the lower socioeconomic groups probably made no sense; they did not have the I.Q. points to compete.” In the minds of its prominent advocates, intelligence testing was linked with beliefs that science had demonstrated the primacy of heredity over environment and that the immigration of inferior races was driving America toward a dysgenic future.

Even before the 1920s, strains between eugenicists and reformers had opened fissures in the consensus around the heritability of mental and character defect. Eugenicists' commitment to “germ plasm” pulled them away from the environmental and neo-Lamarckian theories underpinning Progressive reform. Then, after the 1920s, biochemistry and the rise of the Nazis combined to drive eugenics into eclipse and disrepute. The more research revealed about the complexity of human genetics, the less defensible even reform genetics appeared. The American Eugenics Society praised Hitler's 1933 sterilization law while German eugenicists flattered their American counterparts by pointing out the debt they owed them, and the Nazi regime welcomed and honored prominent American eugenicists.

The fall of eugenics left the field open to environmental explanations. Nurture rather than nature became the preferred explanation for crime, poverty, delinquency, and low educational achievement. The emphasis on environment fit with the emergent civil rights movement, which rejected racial, or biological, explanations for differences between blacks and whites – explanations that had been used to justify slavery, lynching, segregation, and every other form of violent and discriminatory activity. Hereditarian explanations fit badly, too, with the optimism underlying the War on Poverty and Great Society that assumed the capacity of intelligent government action to ameliorate poverty, ill health, unemployment, and crime.

Nonetheless, by the late 1960s a new eugenics began to challenge the environmental consensus. Its appearance coincided with the white backlash against government-sponsored programs favoring African Americans and the disenchantment following on what appeared to be the failure of programs of compensatory education designed to make up for the culturally deficient home life of poor, especially poor black, children. Psychologist Arthur R. Jensen's 1969 article in the Harvard Educational Review, “How Much Can We Boost IQ and Scholastic Achievement?” led the revival of hereditarianism. “Compensatory education,” Jensen argued, “has been tried and it apparently has failed.” The reason was that compensatory education programs ran up against a genetic wall. Poor, minority children lacked the intelligence to profit from them. Jensen's article provoked a furious counter-attack.
Nonetheless, the controversy breathed new life into research and writing on the influence of heredity on intelligence and seeped into the rationales for failure offered by educators. (I recall sitting in a meeting in the early 1970s with a high-level Toronto school administrator who, in a discussion of the low achievement of poor students, said, in effect, “well, Jensen has told us why.”) The new field of sociobiology, founded by Harvard zoologist E. O. Wilson, a leading authority on insect societies, reinforced the renewed emphasis on heritability. Sociobiology, Wilson wrote, focused on “the study of the biological basis of social behavior in every kind of organism, including man.” This new emphasis on heritability, however, met strong scientific as well as political criticism and failed to clear away the taint that still clung to eugenics and genetically-based theories of race, intelligence, and behavior. The idea that the undeserving poor were genetically inferior had not been wiped from the map by any means, but it remained muted, unacceptable in most academic circles.

In 1994, in their widely publicized and discussed The Bell Curve, Richard Herrnstein and Charles Murray – whose notorious Losing Ground had served as a bible for anti-welfare state politicians – challenged the reigning environmentalist view of intelligence. Success in American society, they argued, was increasingly a matter of the genes people inherit. Intelligence, in fact, had a lot to do with the nation's “most pressing social problems” such as poverty, crime, out-of-wedlock births, and low educational achievement. They wrote that “low intelligence is a stronger precursor of poverty than low socioeconomic background.” Poverty, they argued, “is concentrated among those with low cognitive ability,” which, itself, was largely inherited. It also was racially tinged because blacks, they found, revealed lower cognitive ability at every socioeconomic level. Evidence points “toward a genetic factor in cognitive ethnic differences” because “blacks and whites differ most on tests” measuring “g, or general intelligence”, which is a fixed, inherited index of mental capacity.

In Inequality by Design, a powerful demolition of The Bell Curve, Claude Fischer and his colleagues show how Murray and Herrnstein misused their principal sources, leaving their empirical conclusions utterly unreliable and their larger argument in shambles. Nonetheless, despite assaults in the public media and by scholars, hundreds of thousands of copies of the 800-plus-page hardcover edition of the book were sold. The Bell Curve is best understood not as a popularization of science but as an episode in the sociology of knowledge. Clearly, even if it often did not dare speak its name, the suspicion remained alive that heredity underlay the growth and persistence of the “underclass” and the black-white gap in educational achievement, which seemed to many impervious to increased public spending or reform.
This suspicion was nurtured by a small set of academics and some foundations, like the Pioneer Fund, which claims that it “has changed the face of the social and behavioral sciences by restoring the Darwinian-Galtonian perspective to the mainstream of traditional fields such as anthropology, psychology, and sociology, as well as fostering the newer disciplines of behavioral genetics, neuroscience, evolutionary psychology, and sociobiology.”

From the 1990s onward, a profusion of new scientific technologies has provided the tools with which to explore mechanisms underlying the linkages between biology and society and fostered the astounding growth of the bioscience industry in genetics (the Human Genome Project), stem cell research, and, most recently, neuroscience. Teachers, point out Hilary and Steven Rose, “report receiving up to seventy mailshots a year promoting a variety of neurononsense. . . . The snake-oil entrepreneurs are in there selling hard to teachers who are without the protection provided by clinical trials” and other tools available to physicians.

With astonishing acceleration, neuroscience, evolutionary psychology, genomics, and epigenetics emerged as important scientific fields – in practice, often combined in the same programs. Neuroscience and other biological advances promised new ways of explaining social phenomena, like crime, and medical issues, such as the black-white gap in cardiovascular diseases, the increase in diabetes, the rise of obesity, and the origins and treatment of cancer-related disease. They promised, as well, the possibility of understanding how the brain ages and how Alzheimer's disease and dementia might be mitigated or delayed. Research focuses, too, on how the environmental stresses associated with poverty in childhood could damage aspects of mental functioning and learning capacity with lasting impact throughout individuals' lives and, some scientists believe, beyond, through the inheritance of acquired deficiencies.

In its January 18, 2010, cover story, Time announced, “The new field of epigenetics is showing how your environment and your choices can influence your genetic code – and that of your kids.” Epigenetics, the article explained, “is the study of changes in gene activity that do not involve alterations to the genetic code but still get passed down to at least one generation. These patterns of gene expression are governed by the cellular material – the epigenome – that sits on top of the genome, just outside it. . . . It is these ‘epigenetic’ marks that tell your genes to switch on or off, to speak loudly or whisper. It is through epigenetic marks that environmental factors like diet, stress and prenatal nutrition,” which “can make an imprint on genes,” are transmitted across generations. More soberly, the eminent child psychiatrist Sir Michael Rutter offered this definition: “The term ‘epigenetics’ is applied to mechanisms that change genetic effects (through influences on gene expression) without altering gene sequence.” “Epigenetic studies,” Hilary and Steven Rose report, are uncovering a dazzling array of regulatory processes by which signaling molecules – sometimes themselves proteins, sometimes small molecules, some generated internally by each cell, some diffusing from other regions of the developing foetus – act as switches, turning particular stretches of DNA on or off so as to ensure that particular proteins are synthesized at the appropriate moment in the development sequences.
Alterations in the timing of these switches may result in huge changes in the adult phenotype, producing new variations on which evolution can act. Genes are no longer thought of as acting independently but rather in constant interaction with each other and with the multiple levels of the environment in which they are embedded. The flood of scholarly research and popular writing on epigenetics justified science writer Nessa Carey giving her book the title The Epigenetics Revolution.

Epigenetics found such a receptive audience, in part, because once again scientific advance coincided with a major conundrum – the persistent “achievement gap” between blacks and whites which bedeviled educators. A large literature suggested a variety of sources, most of which focused in one way or another on the handicaps associated with growing up in poverty, while the proponents of hereditary explanations lurked in the background. What the environmentalists lacked was a mechanism that explained exactly how the environment of poverty was translated into low school achievement. This is what epigenetics offered. It promised as well to parse the acrimonious differences between environmentalists and hereditarians in explaining the sources of criminality and virtually all other behavior.

The breathless embrace of epigenetics ran ahead of the evidence about the heritability of acquired characteristics and the limits of existing epigenetic knowledge. Even Carey, an epigenetics enthusiast, warned, writing specifically about neuro-epigenetics, “this whole area, sometimes called neuro-epigenetics, is probably the most scientifically contentious field in the whole of epigenetic research.” In fact, the links between children, poverty, and biology are exceedingly complicated and only partly understood, as serious scientists working in the area readily admit. The significance of epigenetic research on how environment alters gene expression, according to Nobel laureate economist James Heckman, is that it “teaches us that the sharp distinction between acquired skills and ability featured in the early human capital literature is not tenable. . . . Behaviors and abilities have both a genetic and an acquired character. Measured abilities are the outcome of environmental influence, including in utero experiences, and also have genetic components.” For Heckman, most of the gaps at age eighteen that explain adult outcomes are present by age five. The clear implication is that by the time disadvantaged children reach school, it is too late to remedy their cognitive deficiency or to put them on a road to escape poverty.

Other neuroscientists are not so sure. They view brain development as more plastic, with changes possible through adolescence and, possibly, even in old age, although they find direct evidence of the impact of early childhood disadvantage on the size of key areas of the brain, especially those that control memory and executive function. Rutter points out, “it is now clear that the brain is intrinsically plastic right into adult life, although plasticity reduces with increasing age. The sensitive periods are not as fixed and immutable as was once thought, and they can be extended pharmacologically. . . . In addition, plasticity can be increased by vigorous extended exercise.” Epigenetics has facilitated and revived the reconciliation of hereditarianism and reform that flourished before social Darwinism in the late 1860s and then again in the Progressive Era, before splitting apart in the 1920s.
Epigenetics promises to move beyond the long-standing war between explanations for the achievement gap, persistent poverty, crime, and other social problems based on inheritance and those that stress environment. It gives scientific sanction to early childhood education and other interventions in the lives of poor children. As with earlier invocations of science, popular understanding fed by media accounts threatens to run ahead of the qualifications offered by scientists and the limits of evidence. Herein lies the danger.

In the past, the link between hereditarianism and reform proved unstable, and when it broke apart the consequences were ugly. Even when in place, the link supported racially tinged immigration reform and compulsory sterilization – all in the name of the best “science.” Indeed, every regime of racial, gender, and nationality-based discrimination and violence has been based on the best “science” of the day. “It is when scientists and doctors insist that their use of race is purely biological,” cautions legal scholar and sociologist Dorothy Roberts, “that we should be most wary.” Philosopher Jesse J. Prinz warns that “When we assume that human nature is biologically fixed, we tend to regard people with different attitudes and capacities as inalterably different. We also tend to treat differences as pathologies.” It is not a stretch to imagine epigenetics and other biologically based theories of human behavior used by conservative popularizers to underwrite a harsh new view of the undeserving poor and the futility of policies intended to help them. This is not the aim, or underlying agenda, of scientists in the field, or a reason to try to limit research. It is, rather, a cautionary note from history about the uses of science and a warning to be vigilant and prepared.

Piece adapted from The Undeserving Poor: America’s Enduring Confrontation with Poverty, by Michael B. Katz, 2013. A version of this article appeared in Social Work & Society, Vol. 11, No. 1, 2013. The author would like to give special thanks to Mike Rose. Cover image by Alex Proimos (Creative Commons Licence).

About the Author: Michael B. Katz is Walter H. Annenberg Professor of History in the History Department and Research Associate in the Population Studies Center at the University of Pennsylvania. Educated at Harvard, he has been a Guggenheim Fellow and a resident fellow at the Institute for Advanced Study, the Shelby Cullom Davis Center for Historical Studies (Princeton), the Russell Sage Foundation, and the Woodrow Wilson International Center for Scholars; he also has held a fellowship from the Open Society Institute. He is a fellow of the National Academy of Education, the National Academy of Social Insurance, the Society of American Historians, and the American Philosophical Society. In 1999, he received a Senior Scholar Award – a lifetime achievement award – from the Spencer Foundation. From 1989-1995, he served as archivist to the Social Science Research Council’s Committee for Research on the Urban Underclass and in 1992 was a member of the Task Force to Reduce Welfare Dependency appointed by the Governor of Pennsylvania. From 1991-1995 and 2011-2012, he was Chair of the History Department at the University of Pennsylvania; from 1983-1996 he directed or co-directed the University’s undergraduate Urban Studies Program; in 1994, he founded the graduate certificate program in Urban Studies, which he co-directs. He is a past president of the History of Education Society and of the Urban History Association.
In 2007, he was given the Provost’s Award for Distinguished Graduate Student Teaching and Mentoring.

Take Care

Heart and its Blood Vessels, Leonardo Da Vinci, 1452 – 1519

by David S. Jones

Every day all over America, ambulances whisk people with chest pain into emergency rooms. Doctors take a history, perform a physical exam, order diagnostic tests, and, when suspicion of a heart attack is high, send the patient to coronary angiography. Once the results are available, the doctor and patient can review clinical trials, practice guidelines and other tools of evidence-based medicine. If medicine were nothing more than facts and data, the best decision would then be clear to the fully informed patient and doctor. But something else intervenes. Medical care, for instance, depends on where you happen to live and what doctor you happen to see.

Consider rates of coronary angioplasty. Doctors in one Ohio town in 2003 performed angioplasty at a rate 10 times that in Honolulu, and even 3 times that in Cleveland, a mere thirty miles away. Such variation is ubiquitous in medicine. Practice varies between doctors within a hospital, between hospitals within a city, between cities within a state, and between different states or countries.

For centuries patients and doctors have had to decide whether to try a particular therapy for a disease. This is usually seen as a pragmatic question for the science and art of medicine. But it is also an epistemological question: what counts as a good decision, and how would you know? The history of coronary artery bypass surgery and angioplasty reveals the range of factors, some appropriate and others less so, that influence how medical decisions get made. This history raises important questions for health policy and social justice. There are now enormous disparities in access to cardiac care, with patients in much of the world unable to receive what would be considered the standard of care in the United States. What counts as an acceptable medical decision clearly depends as much on wealth and access to care as it does on medical science.

Coronary artery disease (CAD) provides a useful case study. Long the leading cause of death in the United States, and now in most countries worldwide, CAD has been the object of intense study by physicians. How can they tell if treatments work? One approach, which has developed into the field of evidence-based medicine, involves comparing a new treatment to the existing standard of care and seeing if it improves outcomes. If it does, then the treatment is a good one and should be adopted. But this empirical approach is only one part of doctors’ thinking. They also try to understand the mechanisms of disease and to develop treatments that intervene on the mechanism to fix the problem. For most of the twentieth century, doctors believed that CAD was caused by the progressive growth of atherosclerotic plaques. A large plaque limits the flow of blood through an artery, depriving the heart’s muscle of needed oxygen and causing ischemia and then infarction – a heart attack. This model of blocked pipes inspired plumbing-based therapeutics.
Cardiac surgeons learned how to use vein and artery grafts to bypass the obstructive plaques, while cardiologists learned how to use catheter-based balloons to compress the plaques and stents to prop the arteries open. With well over 1,000,000 of these procedures performed each year, they form a $100 billion industry.

Do they work? If you believe the plumbing model, they certainly should. Many patients, for instance, believe that angioplasty will reduce their risk of heart attack and death, and extend their life expectancy by as much as ten years. Such a dramatic benefit, however, has never been shown by the clinical trials of evidence-based medicine. While revascularization extends the lives of the sickest patients, in many others it provides no benefit beyond what can be achieved with medications and lifestyle changes alone.

What accounts for the mismatch between expectation and outcome? History provides several clues. Even as bypass surgery and angioplasty rose to prominence from the 1960s into the 1990s, pathologists and cardiologists abandoned the old plumbing model of progressive obstruction. They now believe that heart attacks are caused by the rupture of “fragile plaques,” often ones that are too small to be seen with angiography or treated with revascularization. The old model, despite its simple, intuitive logic, proved to be a misleading basis for clinical decisions. Nonetheless it maintains a firm grip on the thinking of patients and doctors, who sometimes feel compelled to intervene even when they suspect that the procedure will provide little benefit.

Figuring out the safety of a medical treatment can be just as difficult. When coronary artery bypass surgery was introduced in the 1960s, everyone wanted to know whether it would be safe. After all, surgeons had to open the rib cage and stop the heart (in most cases) to sew in the bypass grafts. Surgeons quickly reassured themselves that the operation had an acceptable risk profile. This assessment, however, was based on a narrow vision: they focused on the risk of dying or having a heart attack during or soon after the surgery. While these were certainly the most important risks, they were not the only ones. When surgeons began doing open heart surgery in the 1950s and 1960s, they realized that the heart-lung machines that they used to keep the body alive while they stopped the heart could damage the patient’s brain, sometimes significantly. Patients suffered strokes, seizures, delirium and subtler cognitive deficits. These well-known risks, however, were largely ignored in the early years of bypass surgery. Of the first 200 studies published about bypass surgery, only four included serious discussions of these complications.

What had happened? It turns out that it is far harder to collect good data about the adverse effects of a treatment than about its desired outcomes. Sometimes the side effects cannot be predicted. But even when the complications are foreseen, many factors direct researchers’ attention away from them. Surgeons struggled to keep up with the demand for bypass surgery and had little time for thorough post-operative assessments of their patients. When they had time, they lacked the expertise needed to conduct elaborate neuropsychiatric assessments. Neurologists and psychiatrists, meanwhile, were more interested in developing treatments for their own diseases than in documenting the side effects of surgery.
As a result, the full risk profile of a new treatment is often recognized slowly, only years after a procedure has been introduced into practice. Solutions to this problem are easy to design but difficult to implement. Researchers, clinicians and regulators could all take responsibility for studying complications more seriously than they do, but none yet have made it a priority.

If one set of factors leads doctors and patients to exaggerate the efficacy of interventions, and another set leads them to underestimate risk, together they cause a serious problem. This paired error introduces an asymmetry into medical decision making that skews decisions in favor of intervention, something that drives the overuse of medical technology in the United States.

These examples show how hard it can be to generate the kind of information that doctors should use to make decisions, information about the efficacy and safety of procedures. Another set of examples reveals an equally concerning problem: evidence that a wide array of non-clinical factors influence medical decisions. The simplest way to illustrate this problem is with the geographic variations in medical care. If you keep track of the millions of medical decisions made each year by patients and their doctors, you can estimate the utilization rate of any particular procedure in different hospitals, cities, states, or countries. If evidence-based medicine produced clear guidelines for medical decisions, and doctors followed them, then procedure rates should simply follow the prevalence of disease. But this is not what happens.

The problem was first recognized in 1938 when a British physician, J. Alison Glover, tabulated tonsillectomy rates and learned that they varied 27-fold across London neighborhoods, a pattern that could not be explained by underlying differences in the prevalence of disease. Although this finding was largely ignored for thirty years, researchers have now found geographical variation in surgical practice and in most other areas of medicine, everywhere that they have looked. The Dartmouth Atlas of Health Care, for instance, studied 2003 Medicare data and found that bypass surgery varied 4-fold between Mobile, Alabama, and Grand Junction, Colorado. Angioplasty varied more than 10-fold, from a high in Elyria, Ohio, to a low in Honolulu, Hawaii. Since the variation does not parallel the burden of disease, researchers have concluded that the variation is “unwarranted.”

How can this be explained? Many observers have been quick to suspect financial conflicts of interest. In traditional reimbursement systems, surgeons get paid for every procedure they do. This gives them an incentive to operate whenever it can be justified. As Boston surgeon Francis Moore observed in 1970, “For the pecuniary-minded physician and surgeon alike, or for psychiatrist or pediatrician, the American population is a happy hunting ground.” Economists have described a related phenomenon, supplier-induced demand. Physicians are more likely to use a particular procedure if that procedure is readily available. Increase the supply of surgeons in an area, and the number of referrals to surgeons increases. Some of the variation also reflects differences in medical judgment and opinion. Differences exist in the medical culture of specific institutions, and possibly between countries. Charismatic and prominent advocates for a particular operation can influence physicians in their community.
Peer pressure also plays a role, with physicians conforming their practice to that of those around them.

As more and more evidence of unwarranted variation accumulated, it posed a challenge to the medical profession and its commitment to evidence-based medicine. On one hand, evidence of practice variation strengthened the position of therapeutic reformers: evidence of 10-fold variation showed how much need there was for improved rationality and discipline in medical decisions. On the other hand, as the variation persisted decade after decade, it became an affront to the ambitions of evidence-based medicine, a testimony to how far medicine remains from being a fully rational enterprise.

Although the problem has been most carefully studied in the United States, it is of course not limited to the United States. British surgeons perform bypass surgery at a fraction of the rate of their colleagues in Australia (one-fourth) or the United States (one-seventh). Angioplasty varies 7-fold between Germany and Ireland. Whose rate is the right rate? No one knows. The variations have not been shown to correlate either with the burden of CAD or with the survival of patients with CAD.

The disparities are even more pronounced when middle- and low-income countries are included. Even though Mexico and Germany have similar rates of heart disease, the angioplasty rate in Germany is 300 times higher than in Mexico. Fewer than 20,000 people received angioplasty in China in 2010, a country with a population of well over 1 billion. Contrast this with 1 million procedures on 300 million Americans.

It might once have been possible to regard these disparities as appropriate. CAD, after all, was long seen as a disease of the industrialized west. But health officials have known for the past two decades that this is not the case. In its 1993 World Development Report, the World Bank reported that CAD caused over 10% of all deaths worldwide, making it the single leading cause of death. Since that time CAD has further tightened its hold on low- and middle-income countries. India now has more heart attack deaths than any other country. Investments in cardiac care have not increased to match the rising tide of deaths. Although it may be true that coronary revascularization is over-used in Germany and the United States, most physicians would agree that it is under-used in China, India, and elsewhere. Millions of deaths could be prevented each year if more patients had access to coronary revascularization.

The attention of global health experts remains focused, instead, on prevention. In 1993 the World Bank acknowledged the value of aspirin and blood pressure medicine but held the line at heart surgery and similar treatments with prohibitively low cost-effectiveness. Five years later, in its report Control of Cardiovascular Diseases in Developing Countries, the Institute of Medicine recommended prevention and low-cost medications and advised poor countries to avoid “sophisticated, expensive technologies,” including angiography, PCI, and CABG. When world leaders met in New York City in 2011 for the General Assembly on the Prevention and Control of Non-communicable Diseases, they focused their attention on tobacco, diet and lifestyle: “Prevention must be the cornerstone of the global response to noncommunicable diseases.” There is a clear logic here: resources are limited and prevention will always be more cost effective than treatment. The prevention consensus, however, was tried and abandoned for the case of HIV.
Just ten years ago, international health experts asserted that antiretroviral therapy was too expensive and too difficult to deploy in developing countries. Prevention promised to be a better use of limited health care resources. Treatment activists rejected this logic. First, they argued that the rhetoric of appropriate technology provided an excuse for denying medical care to people in poor countries. Second, they asserted that access to life-saving therapy was a human right: prevention did nothing to help the tens of millions of people already infected with HIV. Third, they demonstrated that the cost-benefit calculus could be transformed by reducing the cost of therapy. Fourth, they highlighted treatment’s many collateral benefits: it reduced viral load and suppressed transmission; it offered hope to demoralized communities; and it captured the public imagination and mobilized resources on a scale that could scarcely have been imagined in 2002. While great challenges remain, both the Global Fund to Fight AIDS, Tuberculosis, and Malaria and the President’s Emergency Plan for AIDS Relief have made dramatic progress. Prevention may still be more cost effective, but activists, physicians and policy makers agree that treatment is the right thing to do.

The parallels between AIDS and CAD are not perfect, but they are informative. Prevention, whether through smoking cessation, weight loss or increased physical activity, does offer the best long-term solution for global heart disease. But what about the tens of millions of people who already have advanced coronary atherosclerosis? They too would benefit from lifestyle change, but millions of them will still die from heart attacks each year. Amid rising support for global health equity, no one has made a serious call for global access to cardiac surgery or interventional cardiology. There is no Global Fund or President’s Emergency Program for Heart Attacks and Cardiovascular Disease. The magnitude of the need is great: heart disease kills more people each year than AIDS, tuberculosis, and malaria combined. Technology exists that might save the lives of millions. And yet no one has identified access to angioplasty as a human rights crisis.

My point here is not to argue that physicians and funders committed to global health should rush out and build angioplasty suites on every street corner in Mexico City, Mumbai or Shanghai. Instead, I want everyone to think seriously about the moral implications of health care disparities. Decisions about revascularization worldwide do not simply reflect the balance of likely risk and benefit. Instead, they also reflect other values, such as how societies choose to invest scarce health care resources. Even though the United States and other wealthy countries invest heavily in cardiac care, they have not made a commitment to fund such care elsewhere. The discrepancy might mean that we are committed to coronary revascularization but do not think people in resource-poor settings deserve comparable access (or that it is not our responsibility to provide it). Or it might mean that we are committed to providing essential medical care to resource-poor settings, but do not consider angioplasty and bypass surgery to be essential treatments. Neither option holds tremendous appeal.

What rate is the right rate for coronary revascularization? Social justice demands that one target be set for all people, not one rate for the wealthy citizens of the United States and a lower rate for poorer people worldwide.
It remains to be seen whether that rate will be high or low.

About the Author: David S. Jones is A. Bernard Ackerman Professor of the Culture of Medicine at Harvard University. His research interests include health inequalities between populations, particularly the history of explanations that have been given for health inequalities since the seventeenth century, and medical decision making, focusing on the history of cardiac therapeutics, with particular reference to the relationship between changing disease models of coronary artery disease and the various strategies used to treat it. His most recent book is Broken Hearts: The Tangled History of Cardiac Care.

Jyl Oghuha

Lena Pillars. Photograph by Maarten Takens

by Greg Downey

The Bull of Winter weakens

In 2003, after decades of working with the Viliui Sakha, indigenous horse and cattle breeders in the Vilyuy River region of northeastern Siberia, anthropologist Susan Crate began to hear the local people complain about climate change:

My own “ethnographic moment” occurred when I heard a Sakha elder recount the age-old story of Jyl Oghuha (the bull of winter). Jyl Oghuha’s legacy explains the 100°C annual temperature range of Sakha’s subarctic habitat. Sakha personify winter, the most challenging season for them, in the form of a white bull with blue spots, huge horns, and frosty breath. In early December this bull of winter arrives from the Arctic Ocean to hold temperatures at their coldest (-60° to -65°C; -76° to -85°F) for December and January. Although I had heard the story many times before, this time it had an unexpected ending… (Crate 2008: 570)

This Sakha elder, born in 1935, talked about how the bull symbolically collapsed each spring, but also about its uncertain future:

The bull of winter is a legendary Sakha creature whose presence explains the turning from the frigid winter to the warming spring. The legend tells that the bull of winter, who keeps the cold in winter, loses his first horn at the end of January as the cold begins to let go to warmth; then his second horn melts off at the end of February, and finally, by the end of March, he loses his head, as spring is sure to have arrived. It seems that now with the warming, perhaps the bull of winter will no longer be. (ibid.)

Crate found that the ‘softening’ of winter disrupted the Sakha way of life in a number of far more prosaic ways. The winters were warmer, bringing more rain and upsetting the haying season; familiar animals grew less common and new species migrated north; more snow fell, making hunting more difficult in winter; and when that snow thawed, water inundated their towns, fields, and countryside, rotting their houses, bogging down farming, and generally making life more difficult. Or, as a Sakha elder put it to Crate:

I have seen two ugut jil (big water years) in my lifetime. One was the big flood in 1959 — I remember canoeing down the street to our kin’s house. The other is now. The difference is that in ‘59 the water was only here for a few days and now it does not seem to be going away. (Sakha elder, 2009; in Crate 2011: 184).

(Currently, Eastern Russia is struggling with unprecedented flooding along the Chinese border, and, in July, unusual forest fires struck permafrost areas of the region.)

As I write this, the website CO2 Now reports that the average atmospheric CO2 level for July 2013 at the Mauna Loa Observatory was 397.23 parts per million, slightly below the landmark 400+ ppm levels recorded in May.
The vast majority of climate scientists now argue, not about whether we will witness anthropogenic atmospheric change, but about how much and how quickly the climate will change. Will we cross potential ‘tipping points’, when feedback dynamics accelerate the pace of warming? While climate science might be controversial with the public in the US (less so here in Australia and among scientists), the effects on human populations are more poorly understood and less predictable, for the public and scientists alike.

Following on from Wendy Foden and colleagues’ piece in the PLOS special collection proposing a method to identify the species at greatest risk (Foden et al. 2013), I want to consider how we might identify which cultures are at greatest risk from climate change. Will climate change threaten human cultural diversity, and if so, which groups will be pushed to the brink most quickly? Are groups like the Viliui Sakha at the greatest risk, especially as we know that climate change is already affecting the Arctic and warming may be amplified there? And what about island groups, threatened by sea level changes? Who will have to change most and adapt because of a shifting climate? Daniel Lende (2013: 496) has suggested that anthropologists need to put our special expertise to work in public commentary, and in the area of climate change, these human impacts seem to be one place where that expertise might be most useful.

The Sakha Republic

The Sakha Republic, where the Viliui Sakha live, makes up half of Russia’s Far Eastern Federal District and covers an area almost as large as India, twice the size of Alaska. Nevertheless, fewer than one million people live there, spread thinly across the rugged landscape. The region contains the coldest inhabited spot on the planet, the Verkhoyansk Range, where the average January temperature — average — is around -50°, so cold that it doesn’t matter whether that’s Fahrenheit or Celsius.

The area that is now the Sakha Republic was first brought under the control of Tsarist Russia in the seventeenth century, with a tax taken from the local people in furs. Many early Russian migrants to the region adopted Sakha customs. Both the Tsars and the later Communist governors exiled criminals to the region, which came to be called Yakutia; after the fall of the Soviet Union, the Russian Federation recognised the Sakha Republic. The Sakha, also called Yakuts, are the largest group in the area today; since the fall of the Soviets, many of the ethnic Russian migrants have left.

Verkhoyansk Mountains, Sakha Republic. Photograph by Maarten Takens

Sakha speakers first migrated north into Siberia as reindeer hunters, mixing with and eventually assimilating the Evenki, a Tungus-speaking group that lived there nomadically. These nomadic groups were later assimilated or forced further north by more sedentary groups of Sakha who raised horses and practiced more intensive reindeer herding and some agriculture (for more information see Susan Crate’s excellent discussion, ‘The Legacy of the Viliui Reindeer-Herding Complex’ at Cultural Survival). The later migrants forced those practicing the earlier, nomadic reindeer-herding way of life into the most remote and rugged pockets of the region. By the first part of the twentieth century, Crate reports, the traditional reindeer-herding lifestyle was completely replaced in the Viliui watershed, although people elsewhere in Siberia continued to practice nomadic lifestyles, following herds of reindeer.
Today the economy of the Sakha Republic relies heavily on mining: gold, tin, and especially diamonds. Almost a quarter of all diamonds in the world — virtually all of Russia’s production — comes from Sakha. The great Udachnaya pipe, a diamond deposit just outside the Arctic Circle, is now the third-deepest open-pit mine in the world, extending down more than 600 meters. A new project promises to build a pipeline to take advantage of the massive Chayanda gas field in Sakha, sending the gas eastward to Vladivostok on Russia’s Pacific coast (story in the Siberian Times). The $24 billion Gazprom pipeline, which President Putin’s office says he wants developed ‘within the tightest possible timescale’, would mean that Russia would not have to sell natural gas exclusively through Europe, opening a line for direct delivery into the Pacific.

The Sakha have made the transition to the post-Soviet era remarkably well, with a robust economy and a political system that seems capable of balancing development and environmental safeguards (Crate 2003). But after successfully navigating a political thaw, will the Sakha, and other indigenous peoples of the region, fall victim to a much more literal warming?

The United Nations on indigenous people and climate change

This past month, we celebrated the International Day of the World’s Indigenous People (9 August). From 2005 to 2014, the United Nations called for ‘A Decade for Action and Dignity.’ The focus of this year’s observance is ‘Indigenous peoples building alliances: Honouring treaties, agreements and other constructive arrangements’ (for more information, here’s the UN’s website). According to the UN Development Programme, the day ‘presents an opportunity to honour diverse indigenous cultures and recognize the achievements and valuable contributions of an estimated 370 million indigenous peoples.’

The UN has highlighted the widespread belief that climate change will be especially cruel to indigenous peoples:

Despite having contributed the least to GHG [greenhouse gas], indigenous peoples are the ones most at risk from the consequences of climate change because of their dependence upon and close relationship with the environment and its resources. Although climate change is regionally specific and will be significant for indigenous peoples in many different ways, indigenous peoples in general are expected to be disproportionately affected. Indigenous communities already affected by other stresses (such as, for example, the aftermath of resettlement processes), are considered especially vulnerable. (UN 2009: 95)

The UN’s report, State of the World’s Indigenous People, goes on to cite the following specific ‘changes or even losses in the biodiversity of their environment’ for indigenous groups that will directly threaten aspects of indigenous life:

- the traditional hunting, fishing and herding practices of indigenous peoples, not only in the Arctic, but also in other parts of the world;
- the livelihood of pastoralists worldwide;
- the traditional agricultural activities of indigenous peoples living in mountainous regions;
- the cultural and ritual practices that are not only related to specific species or specific annual cycles, but also to specific places and spiritual sites, etc.;
- the health of indigenous communities (vector-borne diseases, hunger, etc.);
- the revenues from tourism.
(ibid.: 96)

For example, climate change has been linked to extreme drought in Kenya, where the Maasai, a pastoral people, find their herds shrinking and good pasture harder and harder to find. For the Kamayurá in the Xingu region of Brazil, less rain and warmer water have decimated fish stocks in their river and made cassava cultivation a hit-and-miss affair; children are reduced to eating ants on flatbread to stave off hunger.

The UN report touches on a number of different ecosystems where the impacts of climate change will be especially severe, singling out the Arctic:

The Arctic region is predicted to lose whole ecosystems, which will have implications for the use, protection and management of wildlife, fisheries, and forests, affecting the customary uses of culturally and economically important species and resources. Arctic indigenous communities—as well as First Nations communities in Canada—are already experiencing a decline in traditional food sources, such as ringed seal and caribou, which are mainstays of their traditional diet. Some communities are being forced to relocate because the thawing permafrost is damaging the road and building infrastructure. Throughout the region, travel is becoming dangerous and more expensive as a consequence of thinning sea ice, unpredictable freezing and thawing of rivers and lakes, and the delay in opening winter roads (roads that can be used only when the land is frozen). (ibid.: 97)

Island populations are also often pointed out as being on the sharp edge of climate change (Lazrus 2012). The award-winning film, ‘There Once Was an Island,’ focuses on a community in the Pacific at risk from a rise in the sea level. As the website for the film describes:

Takuu, a tiny atoll in Papua New Guinea, contains the last Polynesian culture of its kind. Facing escalating climate-related impacts, including a terrifying flood, community members Teloo, Endar, and Satty take us on an intimate journey to the core of their lives and dreams. Will they relocate to war-ravaged Bougainville – becoming environmental refugees – or fight to stay? Two visiting scientists investigate on the island, leading audience and community to a greater understanding of climate change.

Similarly, The Global Mail reported the island nation of Kiribati was likely to become uninhabitable in coming decades, not simply because the islands flood but because patterns of rainfall shift and seawater encroaches on the coastal aquifer, leaving wells saline and undrinkable. Heather Lazrus (2012: 288) reviews a number of other cases:

Low-lying islands and coastal areas such as the Maldives; the Marshall Islands; the Federated States of Micronesia, Kiribati, and Tuvalu; and many arctic islands such as Shishmaref… and the small islands in Nunavut… may be rendered uninhabitable as sea levels rise and freshwater resources are reduced.

Certainly, the evidence from twentieth-century cases in which whole island populations were relocated suggests that the move can be terribly disruptive, the social damage lingering long after suitcases are unpacked.

Adding climate injury to cultural insult

In fact, even before average temperatures climbed or sea levels rose, indigenous groups were already at risk and have been for a while. By nearly every available measure, indigenous peoples’ distinctive lifeways and the globe’s cultural diversity are threatened, not so much by climate, but by their wealthier, more technologically advanced neighbours, who often exercise sovereignty over them.
If we take language diversity as an index of cultural distinctiveness, for example, linguist Michael Krauss (1992: 4) warned in the early 1990s that a whole range of languages were either endangered or ‘moribund,’ no longer being learned by new speakers or young people. These moribund languages, Krauss pointed out, would inevitably die with a speaker who had already been born, an individual who would someday be unable to converse in that language because there would simply be no one else to talk to:

The Eyak language of Alaska now has two aged speakers; Mandan has 6, Osage 5, Abenaki-Penobscot 20, and Iowa has 5 fluent speakers. According to counts in 1977, already 13 years ago, Coeur d’Alene had fewer than 20, Tuscarora fewer than 30, Menomini fewer than 50, Yokuts fewer than 10. On and on this sad litany goes, and by no means only for Native North America. Sirenikski Eskimo has two speakers, Ainu is perhaps extinct. Ubykh, the Northwest Caucasian language with the most consonants, 80-some, is nearly extinct, with perhaps only one remaining speaker. (ibid.)

Two decades ago, Krauss went on to estimate that 90% of the Arctic indigenous languages were ‘moribund’; 80% of the Native North American languages; 90% of Aboriginal Australian languages (ibid.: 5). Although the estimate involved a fair bit of guesswork, and we have seen some interesting evidence of ‘revivals’, Krauss suggested that 50% of all languages on earth were in danger of disappearing.

The prognosis may not be quite as grim today, but the intervening years have confirmed the overall pattern. Just recently, The Times of India reported that the country has lost 20% of its languages since 1961 — 220 languages disappeared in fifty years, with the pace accelerating. The spiffy, updated Ethnologue website, based upon a more sophisticated set of categories and more precise accounting, suggests that, of the 7,105 languages that they recognise globally, around 19% are ‘moribund’ or in worse shape, while another 15% are shrinking but still being taught to new users (see Ethnologue’s discussion of language status here and UNESCO’s interactive atlas of endangered languages).

Back in 2010, I argued that the disappearance of languages was a human rights issue, not simply the inevitable by-product of cultural ‘evolution’, economic motivations, and globalisation (‘Language extinction ain’t no big thing?’ – but beware as my style of blogging has changed a lot since then). Few peoples voluntarily forsake their mother tongues; the disappearance of a language or assimilation of a culture is generally not a path taken by choice, but a lesser-of-evils choice when a people is threatened with chronic violence, abject poverty, and marginalisation. I’ve also written about the case of ‘uncontacted’ Indians on the border of Brazil and Peru, where Western observers sometimes assume that indigenous peoples assimilate because they seek the benefits of ‘modernization’ when, in fact, they are more commonly the victims of exploitation and violent displacement.

Just this June, a group of Mashco-Piro, an isolated indigenous group in Peru that has little contact with other societies, engaged in a tense stand-off at the Las Piedras river, a tributary of the Amazon. Caught on video, they appeared to be trying to contact or barter with local Yine Indians at a ranger station. Not only has this group of the Mashco-Piro fought with loggers in previous decades, but they now find that low-flying planes are disturbing their territory in search of natural gas and oil.
(Globo Brasil also released footage taken in 2011 by officials from Brazil’s National Indian Foundation, FUNAI, of the Kawahiva, also called the Rio Pardo Indians, an isolated group from Mato Grosso state.)

In 1992, Krauss pleaded with fellow scholars to do something about the loss of cultural variation, lest linguistics ‘go down in history as the only science that presided obliviously over the disappearance of 90% of the very field to which it is dedicated’ (1992: 10):

Surely, just as the extinction of any animal species diminishes our world, so does the extinction of any language. Surely we linguists know, and the general public can sense, that any language is a supreme achievement of a uniquely human collective genius, as divine and endless a mystery as a living organism. Should we mourn the loss of Eyak or Ubykh any less than the loss of the panda or California condor? (ibid.: 8)

The pace of extinction is so quick that some activists, like anthropologist and attorney David Lempert (2010), argue that our field needs to collaborate on the creation of a cultural ‘Red Book,’ analogous to the Red Book for Endangered Species. Anthropologists may fight over the theoretical consequences of reifying cultures, but the political and legal reality is that even states with laws on the books to protect cultural diversity often have no clear guidelines as to what that entails.

But treating cultures solely as fragile victims of climate change misrepresents how humans will adapt to climate change. Culture is not merely a fixed tradition, calcified ‘customs’ at risk from warming; culture is also our adaptive tool, the primary way in which our ancestors adapted to such a great range of ecological niches in the first place and the way we will continue to adapt into the future. And this is not the first time that indigenous groups have confronted climate change.

Culture as threatened, culture as adaptation

One important stream of research in the anthropology of climate change shows very clearly that indigenous cultures are quite resilient in the face of environmental change. Anthropologist Sarah Strauss of the University of Wyoming has cautioned that, if we only focus on cultural extinction from climate change as a threat, we may miss the role of culture in allowing people to accommodate wide variation in the environment:

People are extraordinarily resilient. Our cultures have allowed human groups to colonize the most extreme reaches of planet Earth, and no matter where we have gone, we have contended with both environmental and social change…. For this reason, I do not worry that the need to adapt to new and dramatic environmental changes (those of our own making, as well as natural occurrences like volcanoes) will drive cultures—even small island cultures—to disappear entirely. (Strauss 2012: n.p. [2])

A number of ethnographic cases show how indigenous groups can adapt to severe climatic shifts. Crate (2008: 571), for example, points out that the Sakha adapted to a major migration northward, transforming a Turkic culture born in moderate climates to suit their new home. Kalaugher (2010) also discusses the Yamal Nenets, another group of Siberian nomads, who adapted to both climate change and industrial encroachment, including the arrival of oil and gas companies that fouled waterways and degraded their land (Survival International has a wonderful photo essay about the Yamal Nenets here).
A team led by Bruce Forbes of the University of Lapland, Finland, found:

The Nenet have responded by adjusting their migration routes and timing, avoiding disturbed and degraded areas, and developing new economic practices and social interaction, for example by trading with workers who have moved into gas villages in the area. (article here)

Northeast Science Station, Cherskiy, Sakha Republic. Photograph by David Mayer

But one of the most amazing stories about the resilience and adaptability of the peoples of the Arctic comes from Wade Davis, anthropologist and National Geographic ‘explorer in residence.’ In his wonderful TED presentation, ‘Dreams from endangered cultures,’ Davis tells a story he heard on a trip to the northern tip of Baffin Island, Canada: … and there’s nothing more I can say after ‘… and disappeared over the ice floes, shit knife in belt’ that can make this story any better…

Climate change in context

The problem for many indigenous cultures is not climate change alone or in isolation, but the potential speed of that change and how it interacts with other factors, many human-induced: introduced diseases, environmental degradation, deforestation and resource depletion, social problems such as substance abuse and domestic violence, and legal systems imposed upon them, including forced settlement and forms of property that prevent movement. As Strauss explains:

Many researchers… see climate change not as a separate problem, in fact, but rather as an intensifier, which overlays but does not transcend the rest of the challenges we face; it is therefore larger in scale and impact, perhaps, but not entirely separable from the many other environmental and cultural change problems already facing human societies. (Strauss 2012: n.p. [2])

One of the clearest examples of these intensifier effects is the way in which nomadic peoples, generally quite resilient, lose their capacity to adapt when they are prevented from moving. The case of the Siberian Yamal Nenets makes this clear:

“We found that free access to open space has been critical for success, as each new threat has arisen, and that institutional constraints and drivers were as important as the documented ecological changes,” said Forbes. “Our findings point to concrete ways in which the Nenets can continue to coexist as their lands are increasingly fragmented by extensive natural gas production and a rapidly warming climate.” (Kalaugher 2010)

With language loss in India, it’s probably no coincidence that ‘Most of the lost languages belonged to nomadic communities scattered across the country’ (Times of India). In previous generations, if climate changed, nomadic groups might have migrated to follow familiar resources or adopt techniques from neighbours who had already adapted to forces novel to them. A recent documentary series on the way that Australian Aboriginal people have adapted to climate change on our continent — the end of an ice age, the extinction of megafauna, wholesale climate change including desertification — offers a striking example (the website for the series, First Footprints, is excellent).

Today, migration is treated by UN officials and outsiders as ‘failure to adapt’, as people who move fall under the new rubric of ‘climate refugees’ (Lazrus 2012: 293). Migration, instead of being recognised as an adaptive strategy, is treated as just another part of the diabolical problem.
(Here in Australia, where refugees on boats trigger unmatched political hysteria, migration from neighbouring areas would be treated as a national security problem rather than an acceptable coping strategy.)

For the most part, the kind of migration that first brought the Viliui Sakha to northeastern Siberia is no longer possible. As the Yamal Nenets, for example, migrate with their herds of reindeer, they come across the drills, pipelines, and even the Obskaya-Bovanenkovo railway – the northernmost railway line in the world – all part of Gazprom’s ‘Yamal Megaproject.’ Endangered indigenous groups are hemmed in on all sides, surviving only in geographical niches that were not attractive to their dominant neighbours because they were unsuitable for farming. As Elisabeth Rosenthal wrote in The New York Times:

Throughout history, the traditional final response for indigenous cultures threatened by untenable climate conditions or political strife was to move. But today, moving is often impossible. Land surrounding tribes is now usually occupied by an expanding global population, and once-nomadic groups have often settled down, building homes and schools and even declaring statehood.

The Kamayurá, for example, eating ants instead of fish in Brazil’s Xingu National Park, are no longer surrounded by the vast expanse of the Amazon and other rivers where they might still fish; the park is now surrounded by ranches and farms, some of which grow sugarcane to feed Brazil’s vast ethanol industry or raise cattle to feed the world’s growing appetite for beef. Now, some of these indigenous groups find themselves squarely in the path of massive new resource extraction projects with nowhere to go, whether that’s in northern Alberta, eastern Peru, Burma, or remnant forests in Indonesia. That is, indigenous peoples have adapted before to severe climate change; but how much latitude (literally) do these groups now have to adapt if we do not allow them to move?

In sum, indigenous people are often not directly threatened by climate change alone; rather, they are pinched between climate change and majority cultures who want indigenous peoples’ resources while also preventing them from adapting in familiar ways. The irony is that the dynamic driving climate change is attacking them from two sides: the forests that they need, the mountains where they keep their herds, and the soil under the lands where they live are being coveted for the very industrial processes that belch excess carbon into the atmosphere.

It’s hard not to be struck by the bitter tragedy that, in exchange for the resources to which we are addicted, we offer them assimilation. If they get out of the way so that we can drill out the gas or oil under their land or take their forests, we will invite them to join in our addiction (albeit as much poorer addicts on the fringes of our societies, if truth be told). They have had little say in the process, or in our efforts to mitigate it. We assume that our technologies and ways of life are the only potential cure for the problems created by these very technologies and ways of life.

In 2008, for example, Warwick Baird, Director of the Native Title Unit of the Australian Human Rights and Equal Opportunity Commission, warned that the shift to an economic mode of addressing climate change abatement threatened to further sideline indigenous people:

Things are moving fast in the world of climate change policy and the urgency is only going to get greater.
Yet Indigenous peoples, despite their deep engagement with the land and waters, it seems to me, have little engagement with the formulation of climate change policy, little engagement in climate change negotiations, and little engagement in developing and applying mitigation and adaptation strategies. They have not been included. Human rights have not been at the forefront. (transcript of speech here)

The problem then is not that indigenous populations are especially fragile or unable to adapt; in fact, both human prehistory and history demonstrate that these populations are remarkably resilient. Rather, many of these populations have been pushed to the brink, forced to choose between assimilation and extinction by the unceasing demands of the majority cultures they must live alongside. The danger is not that the indigenous will fall off the precipice, but rather that the flailing attempts of the resource-thirsty developed world to avoid inevitable culture change — the necessary move away from unsustainable modes of living — will push much more sustainable lifeways over the edge into the abyss first.

Piece originally published at PLOS Blogs | Creative Commons License

Crate, S. A. 2003. Co-option in Siberia: The Case of Diamonds and the Vilyuy Sakha. Polar Geography 26(4): 289–307. doi: 10.1080/789610151

Crate, S. 2008. Gone the Bull of Winter? Grappling with the Cultural Implications of and Anthropology’s Role(s) in Global Climate Change. Current Anthropology 49(4): 569–595. doi: 10.1086/529543

Crate, S. 2011. Climate and Culture: Anthropology in the Era of Contemporary Climate Change. Annual Review of Anthropology 40: 175–194. doi: 10.1146/annurev.anthro.012809.104925

Cruikshank, J. 2001. Glaciers and Climate Change: Perspectives from Oral Tradition. Arctic 54(4): 377–393.

Foden, W. B., Butchart, S. H. M., Stuart, S. N., Vié, J.-C., Akçakaya, H. R., et al. 2013. Identifying the World’s Most Climate Change Vulnerable Species: A Systematic Trait-Based Assessment of all Birds, Amphibians and Corals. PLoS ONE 8(6): e65427. doi: 10.1371/journal.pone.0065427

Kalaugher, L. 2010. Learning from Siberian Nomads’ Resilience. Bristol, UK: environmentalresearchweb.

Krauss, M. 1992. The world’s languages in crisis. Language 68(1): 4–10.

Lazrus, H. 2012. Sea Change: Island Communities and Climate Change. Annual Review of Anthropology 41: 285–301. doi: 10.1146/annurev-anthro-092611-145730

Lempert, D. 2010. Why We Need a Cultural Red Book for Endangered Cultures, NOW: How Social Scientists and Lawyers/Rights Activists Need to Join Forces. International Journal on Minority and Group Rights 17: 511–550. doi: 10.1163/157181110X531420

Lende, D. H. 2013. The Newtown Massacre and Anthropology’s Public Response. American Anthropologist 115(3): 496–501. doi: 10.1111/aman.12031
Strauss, S. 2012. Are cultures endangered by climate change? Yes, but. . . WIREs Climate Change. doi: 10.1002/wcc.181

United Nations. 2009. The State of the World’s Indigenous People. Department of Economic and Social Affairs, ST/ESA/328. New York: United Nations Publications.

Inuit Knowledge and Climate Change (54:07 documentary). Isuma TV, network of Inuit and Indigenous media producers.

Inuit Perspectives on Recent Climate Change, Skeptical Science, by Caitlyn Baikie, an Inuit geography student at Memorial University of Newfoundland.

Cold War Social Science

Peace Corps Volunteers work with a water-well drilling team in Chad to provide clean water to the community, 1968

by Audra J. Wolfe

Last November I sat in a hotel ballroom surrounded by fellow historians of science as a baffling (to me, anyway) exchange unfolded over the legitimacy of the term “Cold War Social Science.” The occasion was a roundtable discussion at the History of Science Society’s annual meeting on a new book, bearing that very title, edited by Mark Solovey and Hamilton Cravens. Having just written my own book about science and the Cold War, I watched with growing alarm as colleagues spelled out their objections: decolonization, various rights movements, the triumph of neoliberalism, pre-existing strains of social scientific thinking — surely each of these influenced the postwar social sciences as much as the conflict between Communism and Capitalism?

But why, I thought, is this a problem? The book under discussion focused on the social sciences, mostly in the United States, as conducted between approximately 1948 and 1963 — a period that most historians would agree coincides with the first half of the Cold War. Scholars throw around phrases like “Victorian science” all the time. And more to the point, decolonization, civil rights, women’s rights and the rise of American conservatism aren’t exactly unrelated to the Cold War. So what exactly were the critics objecting to?

Soon enough, it clicked: this wasn’t a conversation about the past; it was a conversation about the present. Specifically, it was about the culpability of individual social scientists — including colleagues and mentors of people in the room (and possibly some attendees themselves) — in producing work that was either sponsored by or proved useful to American defense and intelligence operations. When Solovey and Cravens said, “Cold War,” their interlocutors heard “military-industrial complex.” They heard judgment.

Nearly a quarter-century after the collapse of the Soviet Union, scholarly conversations in the United States about the Cold War still slip easily into attributions of heroism and blame. Who signed Faustian bargains? Who spoke truth to power? Though tempting in any number of fields, including the history of art, film and public intellectualism, nowhere is the normative pull stronger than in the history of science, where scholars continue to trade barbs over what historian of physics Paul Forman famously referred to as the “distorting” effects of military funding on scientific research agendas.

Forman and others who have advanced this line of argument have a point: the effects of military funding on American academic and even corporate research were pervasive and pernicious during the Cold War. By some estimates, more than three-quarters of all U.S.
federal investment in scientific research in the late 1940s and early 1950s came from a single military research agency, the Office of Naval Research. Even when patrons let their clients operate at arm’s length, most of these funds came with strings attached.

At the same time, the accounts of science in the Cold War that historians have produced by “following the money” increasingly feel claustrophobic. They limit the discussion to areas of specific interest to the military and intelligence establishments, omitting vast areas of scientific research, practice and policy that contributed to U.S. national prestige and identity. Tracing ties between academic offices and military patrons may tell us something about how the Cold War changed the practice of science, but it can’t answer the underlying question of why high-ranking government officials thought that funding for science — especially funding for so-called “pure” science — would be the thing that would help the United States win the Cold War, or why members of the general public tended to agree.

But insisting on categorizing Cold War-era projects (scientific or otherwise) as military or civilian, covert or overt, dirty or clean bears another cost. Blaming “the military” for certain characteristics of U.S. Cold War-era science is akin to blaming soldiers for the conduct of a war, or nuclear engineers for U.S. nuclear policy. The idea seems to be that by carefully identifying, tracing and condemning ties to the defense industry, the rest of science might somehow be excused for its contributions to American nationalism in the Cold War period. From this perspective, the term “Cold War science” becomes an accusation to level at certain researchers as a means to salvage “not-Cold War science”. Thus very different groups of scholars have rejected the term: those who hold (or held) some connection to work supported by the military-industrial complex object to its attribution of blame, while critics of U.S. national policy during the Cold War find the term, if anything, too exonerating.

Some fields have been more willing than others to embrace a more nuanced version of the recent past. One of the most vibrant areas in contemporary U.S. diplomatic history, for instance, looks at the history of cultural diplomacy — efforts to draw the non-aligned world closer into the U.S. orbit by promoting the American way of life, which might include anything from jazz and consumerism to labor relations and freedom of religion. The most famous of these efforts, the cultural extravaganzas produced by the Congress for Cultural Freedom, were covertly funded by the Central Intelligence Agency. But, as the historian Hugh Wilford makes clear in The Mighty Wurlitzer, his recent history of covert cultural diplomacy, not only did the CIA take a hands-off approach to most of these activities; some of them really were the efforts of individuals acting as free agents, of men and women who felt the pull of ideology strongly enough to embark on missions of private diplomacy.

This emphasis on ideological commitments comes across even in more traditional works of diplomatic history. Odd Arne Westad’s magisterial Global Cold War makes abundantly clear that both U.S. and Soviet leaders actually believed the things they were saying about Communism and capitalism, and that those beliefs, at least in part, drove foreign policy decisions. Narrow definitions of “Cold War science” don’t get us very far in making sense of Cold War ideology.
What would the history of postwar science look like if you took seriously the idea of the Cold War as a total war — an event that affected all aspects of American life? This was the question I wanted to tackle when I started writing Competing with the Soviets: Science, Technology, and the State in Cold War America. I wanted to know what would happen if you combined the obvious stories about the intersection of science and the Cold War (the nuclear arms race, the military-industrial complex, anti-Communism, secrecy, the space race, and so on) with less-obvious episodes relevant to the ideological struggle between the so-called Free World and the Soviet bloc (the rise of scientometrics, social scientific theories of race relations, and the origins of the biotech industry). Would it be possible to find something, aside from chronology and cash, that connects these stories?

In short, yes. There is something special about the role of science in the Cold War, something that goes beyond funding patterns or even displays of technological might. Science played a unique role in maintaining and projecting state power throughout the postwar era, and this special role derived directly from the ideological conflict we now think of as the Cold War.

Science and technology have, of course, always contributed to state power. In the Italian Renaissance, patrons requested that natural philosophers supply them with telescopes and astrolabes; two centuries later, the imperial governments of Spain, France and Great Britain sent crews of naturalists to evaluate the commercial potential of the plants, animals, and minerals in their conquered lands. Even so, this relationship underwent a fundamental change in the years immediately following World War II. Scientific achievement, in the form of the bomb, radar, and the proximity fuse, had apparently won the war for the Allies; it would presumably be the critical factor in deciding the Cold War as well. For all their differences, leaders in both the Soviet Union and the United States agreed that science, and scientists, were critical weapons in the international battle for hearts and minds.

That both American and Soviet leaders embraced science as a tool for international military superiority — the driving force behind high-tech weapons and surveillance — is perhaps not so surprising in an era dominated by the shadow of the atomic bomb. More intriguing is the extent to which both nations touted different definitions of science itself. Communist leaders trumpeted the accomplishments of centrally planned, results-driven Soviet science and technology in transforming agricultural economies into industrial powerhouses. Soviet science held no place for mere theoretical or abstract work. Legitimate scientific investigations needed to have some practical purpose in improving the lives of the people. This is not to say that Soviet scientists stopped conducting so-called basic research, but the more successful among them learned to describe it in Communist code.

In contrast, the United States offered the very structure of science — supposedly open, international, and free from government interference — as a beacon of freedom to citizens of the world. The decision to place such defense-related agencies as the Atomic Energy Commission and the National Aeronautics and Space Administration under civilian, rather than military, aegis was at least partially about demonstrating the United States’ commitment to open science to the rest of the world.
In the U.S., this idea of “open science” sat uneasily next to the reality of a research infrastructure that was largely backed by state — particularly military — interests. In the fun-house logic of Cold War global politics, American expressions of dedication to scientific freedom and international cooperation were simultaneously sincere and chauvinistic.

Though I (mostly) avoided the phrase in my book, I ultimately find “Cold War science” (or Cold War social science, or Cold War physics, etc.) useful as an invocation of the nearly superhuman powers political leaders granted science in the postwar period. The phrase can invoke something more specific than chronology, especially when it refers to actions that relate to state policy. But it need not be considered an accusation. The uncomfortable fact of writing histories of the Cold War is that both the United States and the Soviet Union engaged in activities that were good, bad, and morally neutral. Surely one of the brighter lights of postwar American history was the belief that science, generously funded and left free from political interference, could be a force for both international peace and domestic prosperity: in short, a force for the public good. The Peace Corps, Head Start, and the War on Cancer were just as much products of “Cold War science” as were Apollo and the arms race.

The judgment of history is fickle, but from our contemporary perspective — the only moment from which we can ever write — it seems clear enough that the United States won the Cold War. It did so both through the embrace of the idealistic concepts of freedom, democracy and self-determination, and through campaigns of military, paramilitary and economic violence. The past is a complicated place. When historians object to “Cold War anything,” they are objecting to these more troubling aspects of U.S. national conduct, both at home and abroad. But recognizing the inherent contradictions in historical actors’ behavior can be politically liberating, a necessary first step in uncovering a usable past. In the case of “Cold War science,” casting the net in the widest possible terms may help us to articulate concepts of the public good that depend less on demonstrating the superiority of the American way of life, and more on fostering a truly collaborative, international approach to knowledge of the natural world.

Mark Solovey and Hamilton Cravens, Cold War Social Science: Knowledge Production, Liberal Democracy, and Human Nature (New York: Palgrave Macmillan, 2012).

Paul Forman, “Behind Quantum Electronics: National Security as Basis for Physical Research in the United States, 1940–1960,” Historical Studies in the Physical and Biological Sciences 18 (1987): 149–229.

Hugh Wilford, The Mighty Wurlitzer: How the CIA Played America (Cambridge: Harvard University Press, 2008).

About the Author: Audra J. Wolfe is a writer, editor, and historian based in Philadelphia. Her research interests include the history of science, the history of the Cold War, and cultural diplomacy. She holds a Ph.D. in the history and sociology of science from the University of Pennsylvania. From 2007 to 2009 she was the executive producer of Distillations, a podcast on the past, present, and future of chemistry. She is the author of Competing with the Soviets: Science, Technology, and the State in Cold War America (Johns Hopkins University Press, 2013). Her current book-in-progress looks at the role of science in diplomacy. You can follow her on Twitter @ColdWarScience.
Just Laugh

Ricky Gervais as David Brent in The Office, BBC, 2001

From The American Scholar:

In my neuropsychiatric practice, I often use cartoons and jokes to measure a patient’s neurologic and psychiatric well-being. I start off with a standard illustration called “The Cookie Theft.” It depicts a boy, precariously balanced on a stool, pilfering cookies from a kitchen cabinet as his sister eggs him on, while their absentminded mother stands drying a plate, oblivious to the water overflowing from the sink onto the floor. Though not really a cartoon—in that nothing terribly funny is taking place—it allows me to begin assessing various things: abstraction ability, empathy, powers of observation and description, as well as sense of humor. I am especially curious to see how patients process the image, whether they perceive only a portion of it or take it in as a whole. Some people notice only the boy, others only the mother.

Next, I show a series of cartoons, starting with examples from a newspaper comics page and working up to more sophisticated drawings from The New Yorker. I then ask for an explanation of what’s going on in each of them. Over the years, I’ve learned that you can’t fake an understanding of a cartoon; you either get it or you don’t. Finally, I tell a few jokes set out in increasing levels of subtlety and complexity. Patients don’t have to find the jokes funny (humor is too heterogeneous for that), but they should be able to explain why other people might find them funny.

Why am I interested in my patients’ ability to appreciate humor? Because humor impairment may point to operational problems at various levels of brain functioning. Charles Darwin referred to humor as “a tickling of the mind.” We speak of being “tickled pink” at a funny joke, and tickling often leads to laughter, so the analogy is apt. At the physiological level, humor reduces levels of stress hormones such as cortisol and is thought to enhance our immune, endocrine, and cardiovascular systems. Laughter also provides a workout for the muscles of the diaphragm, abdomen, and face. A joke can raise our spirits, or ease our tension. If we’re able to laugh during a stressful situation, we can put psychological distance between ourselves and the stress.

Norman Cousins, editor of The Saturday Review for more than 30 years, chronicled in his 1979 bestselling book, Anatomy of an Illness, how he attempted to cure himself of a mysterious and rapidly progressive inflammatory illness of the spine by engaging in hours-long laughing sessions while watching Marx Brothers films and reruns of the then-popular Candid Camera. Though Cousins’s claims could not be scientifically confirmed, even the most skeptical researchers agree that humor provides an antidote to some emotions widely recognized to be associated with illness—for example, the feelings of rage and fear that can precipitate a heart attack.

“Laughter and the Brain”, Richard Restak, The American Scholar

‘Technology is embedded within social relations of hierarchy and control’

From The Cubies’ A B C, illustrated by Mary Mills Lyall and Earl Harvey Lyall, 1913.

by Guy Aitchison

Writing in response to the Telecommunications Act of 1996, the first systematic attempt by the US government to police the internet, John Perry Barlow – former lyricist for the Grateful Dead – made a celebrated Declaration of the Independence of Cyberspace.
In resonant tones that echoed those of the Founding Fathers, Barlow addressed the “Governments of the Industrial World, you weary giants of flesh and steel”, declaring “the global space we are building to be naturally independent of the tyrannies you seek to impose on us”. For Barlow and the early pioneers of the internet, they were forming a new and different ‘Social Contract’, a governance that “arises according to the conditions of our world, not yours”, informed by egalitarian values and the golden rule.

Barlow’s vision captured the giddy optimism of early internet culture. This was a culture deeply hostile to commerce and government regulation that saw in the internet a utopian space that would usher in an age of democracy, open culture and participation. In the years that followed Barlow’s Declaration, far-reaching decisions were made by corporations and government (with little public debate, as is usual with major reforms to communication) that would lay the foundations for the internet’s transformation, in a little under a generation, into a far more controlled space penetrated by advertising, dominated by corporate monopolists and monitored by government and security agencies.

Robert McChesney’s new book Digital Disconnect: How Capitalism is Turning the Internet Against Democracy is a sober and incisive account of this process: of how corporations came to dominate the internet, along with a practical assessment of how to fulfil the genuine democratic potential he rightly identifies in the technology within the context of the current economic crisis. The book is written as an intervention in US public debate, but will be of interest to all those who care about an open media and is of special relevance to political movements struggling to construct and articulate a democratic alternative to austerity against the clatter of neoliberal propaganda. It is long overdue.

Anyone who has dipped into the mainstream scholarship on the internet will be familiar with what is a by-now stale debate between ‘celebrants’ and ‘sceptics’. Celebrants, amongst whom Clay Shirky and Yochai Benkler are the most eloquent, paint a Panglossian picture of how the internet has democratised the world of information, unleashing our creative and collaborative spirit, freeing us all to participate in worthy projects such as Wikipedia, Couchsurfing and open-source software coding, in forms of “commons-based peer production” (Benkler) that harness the “cognitive surplus” we no longer waste on television. At its most vulgar and deluded, the celebrant narrative, propagated by those like Simon Mainwaring, implies the internet has overcome the problem of power asymmetries and will usher in a cool, green capitalism of ethical entrepreneurs “integrating values into their business strategies and embracing the role as enduring custodians of community and planetary well-being”.

The sceptics strike a contrarian note. They warn that the internet is corroding our culture (Jaron Lanier), empowering authoritarian governments (Evgeny Morozov, Rebecca MacKinnon), channeling us into information ghettos at the expense of democracy (Eli Pariser) and undermining our capacity for deep and reflective thought (Nicholas Carr). In their different ways, each of these authors illuminates the political, cultural and psychological changes encouraged by the internet. What’s missing is any systematic analysis of how the technology interlaces with and reinforces existing relations of power in the economy and society.
The terms “democracy” and the “market” are thrown around with little reflection or critical examination. There is a tendency on both sides – most especially the celebrants – towards a kind of reductionism that loses sight of what historian of technology Melvin Kranzberg pointed out over thirty years ago: “Technology is neither good nor bad; nor is it neutral.” To ignore the shaping influence of context is to fundamentally “misunderstand the internet”, a point made forcefully by James Curran, Natalie Fenton and Des Freedman, three British scholars whose work can usefully be read alongside McChesney’s. Technology is embedded within social relations of hierarchy and control, and its use will tend to be governed in accordance with the interests of those who hold power, amplifying the dynamics of the existing social and economic system. A technology introduced under capitalism will tend to reinforce its tendencies to domination and inequality unless the balance of class power as a whole ensures that there is a countervailing force.

History provides examples of new technologies that lend themselves to challenges against authority, as with the printing press, which enabled Lutherans to disseminate cheap vernacular copies of the Bible and so challenge the spiritual and temporal authority of the papacy. These anti-authoritarian possibilities can only be realised, however, if the technology is put to use by organised interests that challenge society’s common sense of who holds power and for what purposes.

The open architecture of the internet, in which we can all participate and enrich ourselves and our shared culture without permission from some central authority, enables us to create what openDemocracy calls a “digital commons”. That is to say, it is possible to create an open space for interaction not dominated by the enclosures, ad hominem violence and commercialism of the web as a whole. But while the technology makes this possible, it is by no means inevitable. There are, after all, huge fortunes to be made from preventing it.

In its ideal form, a digital commons is a realm of free access and reciprocal production much like the common pastoral fields and forests of England prior to their enclosure by avaricious landowners; a place where we can live, play, create, share and discuss “out of sight or out of slavery”, as the Digger Gerrard Winstanley put it. Just like the physical commons, however, the digital commons has proven vulnerable to capture and exploitation in the process of capital accumulation. If this commons is to be defended and put to the service of democracy and social justice then it is imperative we understand the nature of the forces trying to control the internet and the broader communications environment. The first step is a thoroughgoing analysis of the internet’s political economy. As McChesney writes, “The profit motive, commercialism, public relations, marketing, and advertising – all defining features of contemporary corporate capitalism – are foundational to any assessment of how the Internet has developed and is likely to develop.”

Digital work

McChesney rejects the sacred American “catechism” of capitalism that would have us believe that free, competitive markets work for the wellbeing of all and ensure a plural dispersion of power across society.
McChesney treats Google, Apple, Amazon, Facebook and the other tech firms as they should be treated: as profit-driven firms with a tendency to drive down the wages and conditions of workers, corrupt the democratic process with money-power and monopolise their sector, manufacturing scarcity and controlling innovation. Despite their geeky figureheads and cuddly PR, these are not NGOs. If they did not act this way, they would not last long as capitalist firms! Currently, 13 of the top 30 US firms grew out of the digital revolution, whilst only 3 are “too big to fail” banks. In 2012, by some accounts, Apple had $110 billion in cash, Google $50 billion, Microsoft $51 billion and Amazon $10 billion. The huge profits of these firms are concentrated in the hands of billionaire founders, CEOs and shareholders.

They frequently boast of their role in creating jobs. But what kind? Apple and its contractors employ 700,000 workers outside the US, many of them in the notorious Foxconn plants, whilst most of the 60,000 jobs in America are minimum-wage-type positions. Financial Times journalist Sarah O’Connor recently visited an Amazon warehouse in Rugeley where she saw workers – known as “pickers” – walking around its vast floor loading products onto trolleys, with devices that tell them where to go next and monitor their productivity in real time. An Amazon manager told her, “You’re sort of like a robot, but in human form.” Amazon apparently has plenty of robots but (for the time being!) humans are better at collecting the vast array of products. The company doesn’t allow unionisation and operates a “three strikes and release” discipline.

Cultural workers have fared little better. The major copyright-holders, determined to preserve their revenues, have prevented any meaningful debate about how cultural work can be properly compensated in an era where music, films and books can be copied and distributed for free. The internet has contributed to the culture of a “flexible” and “mobile” workforce, and now the smart phone serves as a kind of remote leash tying workers to the office and leading to a dramatic rise in unpaid overtime. Beneath all the enchanting talk from politicians and think tanks of a “knowledge” economy and “cognitive” labour, then, lurks the same old beast, degrading men and women to “appendages of a machine”. As long as we remain beings of flesh and blood, there will need to be people who extract the minerals for the computers we use, assemble the parts and distribute the clothes and other goods we order. These are the relations of exclusion capitalism tries to conceal.

Anti-capitalists have detected in the rise of network technology the emergence of a multitudinous global proletariat whose “immaterial labour” gives them a degree of independence from capital from which they can link up to oppose it. Although this captures something important about the identity and commonality of current struggles, it shouldn’t obscure the realities faced by those voiceless and invisible workers who carry out manual labour in more familiar industrial conditions; struggle on the “part of those who have no part”, in Jacques Rancière’s words.

The conquest of the net

So much for the new world of digital work. How about the much-lauded diversity of the internet? Monopolies have arisen at a startling pace over the past few years from companies that had a standing start but the extraordinary advantage of being a prime mover.
Conventional monopolistic practices are accentuated online by the long-observed “network effect”: the benefits of a service in which users generate value, like Facebook or Amazon, increase the more people use it. This means a “winner-takes-all” system where the top companies in each field, such as Google, Facebook and Amazon, have no serious rivals. Amazon uses competitive pricing to drive its competitors in retail out of business, then sets price levels to maximise profit. Clay Shirky likes to give examples of how Web 2.0 endeavours, such as car-pooling services, undermine local monopolies. But this pales against the big picture.

The big picture, as McChesney suggests, can be compared to a 19th century map of imperial rivalries. Each of the tech monopolies occupies a continental base camp, with mini-monopolies like eBay as Japan. Each camp extends its monopoly into new areas and forms temporary alliances with the ultimate aim of conquering the world and controlling the internet. Their current method for achieving this is “locking-in”. Through a combination of carrots and sticks, users are ushered into closed, proprietorial systems where the same corporation designs the hardware, sells you the software to go with it and then uses your information to sell advertising, producing what economists call an “enhanced surplus extraction” effect. These new forms of extraction, where the collective intelligence of internet users is exploited on a mass scale, have been termed a “social factory” by Tiziana Terranova, part of the way in which capital exploits not only our working hours, but our friendships, communication, creativity and intelligence. Even the open software movement, whose co-operative spirit Shirky and Benkler wax lyrical about, has proven remarkably susceptible to co-optation by capital.

When systems are locked in, the corporations control the terms of the relationship. This has much more in common with the hierarchical one-to-many broadcasting systems of the 20th century than the distributed many-to-many system that gives the internet its democratic potential. As with TV and film, there are already signs that the internet as broadcaster will serve to pacify and depoliticize through a diet of light entertainment, celebrity and distraction.

The process of locking-in is greatly enabled by cloud computing, and with more and more people accessing the internet via “tethered” devices, in the form of smart phones, the control these companies have becomes even more important (wireless devices don’t have the same neutrality protections as wired ones). The information in your inbox, calendar, notes and photo album, along with the documents you copy into the cloud, is stored on gigantic – and extremely capital intensive – server farms dotted around the globe, where it will be mined and sold on to advertisers. The recent launches of Facebook Home for mobile devices and Google Now suggest that the network companies are planning to use this vast mountain of data to mediate how we experience our day-to-day reality, telling us who we have to meet next, when, why and how to get there. The next big expansion is into the huge markets of the developing world, with Google and Facebook persuading internet carriers to offer free or cheap broadband to users limited to their own sites. This means that, for millions of people now getting online, the internet will be what Facebook and Google want them to think it is.
Internet scholar Jonathan Zittrain predicted this development as early as 2006 when he warned of a future internet of “information appliances”. But Zittrain thought in terms of a technological explanation for this closure: the fear of viruses rather than the monopolistic pursuit of economic gain (protection from viruses was indeed the explanation Steve Jobs chose to give at the iPhone’s launch in 2007). In reality, the lock-in strategy mirrors the way in which corporations have monopolised other media systems. The advent of radio, film and television was accompanied by utopian proclamations about an exciting new world of diversity and cultural participation. Initially, these media were anarchic and plural, and largely free of adverts and commercialism, but as Tim Wu has shown, eventually ambitious entrepreneurs enter the scene to offer a functional, uniform product and crowd out competitors. As Wu writes, information technologies exhibit a typical progression:

from somebody’s hobby to somebody’s industry; from jury-rigged contraption to slick production marvel; from a freely accessible channel to one strictly controlled by a single corporation or cartel — from open to closed system.

This monopoly is used to stifle any innovation believed to harm profit margins, as with AT&T’s decision to suppress the invention of the answerphone in the 1930s, which it feared would put people off chatting over the phone. In a similar light, Google’s recent decision to retire the eminently practical Google Reader has been interpreted as a move to direct eyeballs away from RSS feeds and onto company homepages where they can be shown adverts. We can imagine what the response of Apple might be to an app that coordinated protests and boycotts in a way that seriously harmed the interests of them or their advertisers.

There are some grounds for hope in the fact that previous attempts to enclose the internet, by Microsoft through its browser and the AOL-Time Warner bid, have failed. There are powerful features of the technology that militate against centralisation. The internet, as conceived by Vint Cerf and Robert E. Kahn, was deliberately designed as an open, permissive system that is neutral between the data packets it carries. The tech commentator John Naughton – another thoughtful analyst who transcends the sceptic-celebrant debate – has written that the most exciting breakthroughs in the history of the internet, such as Napster’s file-sharing system and the World Wide Web, happened thanks to its system of “permissionless innovation”. Digital currency Bitcoin is another example of the internet’s capacity to spring surprises on us in a manner that upsets the status quo. With proprietorial systems, only those innovations that contribute to the profits of the tech giants will be permitted. Whilst Berners-Lee thought it “unthinkable” to patent the World Wide Web, making it available to everyone for free, Zuckerberg has behaved like a dictator who wins power via elections, taking advantage of the open architecture of the internet to launch Facebook whilst at college and now pushing a closed system where his company gets the final say on what new software is introduced. Google has a reputation for supporting open standards on the internet, in part because its Search cannot reach information hidden in closed systems.
But they have created a proprietorial system in Google+, and their relentless quest to “organise the world’s information” (which they eventually hope to integrate into our person through artificial intelligence devices) raises profound questions about privacy and the democratic control of knowledge that they choose to ignore. This secrecy and bullying is a far cry from the utopia Silicon Valley chooses to project at the Singularity University, where future tech and philanthro-capitalism come together with celebrities to “prepare humanity for accelerating technological change.”

The capitalist catechism would have us believe, following Milton Friedman, that the tech giants’ power constrains big government. Once again, the reality is a little different. The data gathered by the tech giants is routinely made available to government security agencies upon request, without court requirements. In return, far from tackling the power of these giants, the US government, which provided the huge initial investment for the internet before handing it over to the private sector, turns a blind eye to their monopoly practices, their extensive use of tax havens and their abuse of privacy. It throws its weight behind laws and regulations that keep them profitable and champions their cause abroad. The tech giants spend millions lobbying each year, have cosy relationships with legislators and enjoy an invite to Davos as much as the next business mogul. We are seeing a sinister alliance of business and the state to promote corporate expansion, security and militarism. The enthusiastic collaboration by Apple, PayPal and the like in the crackdown on Wikileaks exposed the true nature of the tech-state nexus.

The internet has proven an extremely useful tool for mobilising oppositional groups for collective action without the need for top-down organisations. But because the internet operates through hierarchical domain structures, oppositional activists will always be vulnerable to having their activities shut down. The case for migration to non-hierarchical distributed networks, as recommended by the likes of Joss Hands, is difficult to resist.

No news is bad news

In the second part of the book, McChesney turns to discussing the news media. He paints a grim picture of job cuts, local papers being shut down and original reportage all but disappearing. Again, McChesney focuses on the US, but the picture here in the UK is broadly the same. Attempts to monetise journalism on the web have not been successful so far. Whilst a few specialist business organs, such as the Financial Times and the Wall Street Journal, have been successful with paywalls, these have not worked for others, whilst advertising makes nowhere near enough to make up the shortfall. A third of the 25 large dailies in the US were purchased by hedge funds, which bought cheap but are still trying to sell them. In the UK, even the Guardian is in crisis despite its relatively privileged position funded by the Scott Trust.

The final picture is far from clear. ‘Brand leaders’ like the Mail, using voyeur-porn prurience to generate a world following, may gain significant advertising. Others may follow the FT once they have established their reputations, or do so for parts of their offering. Those unable to adapt are likely to close. The celebrants recommend a “wait and see” approach to this crisis, hoping that new networked forms of media will step in to fill the void. They underestimate the need for a wage structure and institutional support to provide reporting.
There are plenty of excellent blogs and websites, many of which cover issues the mainstream media ignore, as with Kate Belgrave’s coverage of the impact of local council cuts for False Economy in the UK. But this is the exception. Who will regularly sit in on council budget meetings, or follow a story of local corruption over several months? Without a small pot of NGO funding, it is a hobby only the privileged could afford. The new Bureau of Investigative Journalism has done important work, but trusts raise their own issues of independence and cannot be relied on to produce sufficient sustainable funding. Radicals may be tempted to cheer the collapse of corporate media, but there is an urgent need to imagine and create alternative models or the future looks grim. Activist outlets, such as Indymedia, provide indispensable coverage of protests, and sites like LibCom and New Left Project offer a level of critical analysis far superior to the mainstream, but they are no substitute for the daily work of investigation and reportage.

We are constantly bombarded with ‘news’ but most of it is recycled tattle from press releases pumped out by the major media conglomerates – “churnalism”, in Nick Davies’s phrase. The advertising revenue of these large conglomerates is the primary determinant of the content. In the past, a newspaper would sell its audience to advertisers. Now, audiences are purchased by advertisers in real time, and so we see the rise of “content farmers” who produce content on demand that ranks high on Google search and social media to deliver target audiences to advertisers. McChesney introduces us to Journatic, an extraordinary company that produces local news for US media by outsourcing writing to workers in the Philippines, who are expected to produce a minimum of 250 stories a week for between 35 and 40 cents apiece. As financial pressures mount, this trend is likely to continue. To suggest this is a viable model for critical journalism is insane. As McChesney notes, it produces the very opposite of an informed and empowered citizenry able to debate political issues and make informed decisions.

He is not the first to raise concerns about the threat of internet advertising to democracy. Most notably, Eli Pariser’s The Filter Bubble describes how the integrated complex of behavioural monitoring systems that tracks behaviour online to sell us stuff is driving us into information ghettos or “filter bubbles”. These bubbles give us much of what we “like” but little of what we need, reinforcing our prejudices and counteracting the possibility of a deliberative public. Pariser, however, lacks a broader analysis of the economic forces at work. Trapped by the logic of the celebrant-sceptic debate, he is inclined to portray the pre-internet public sphere as lively, vigorous and reflective of popular concerns. The conclusions he recommends (essentially, asking the tech companies to be more transparent about privacy and algorithms) seem thoroughly inadequate to the scale of the challenge. We may, as he suggests, ask “companies that hold great curatorial power” to do more to “cultivate public space and citizenship”, but given everything we know about the history of capitalist firms, they’re unlikely to listen.

The merit of McChesney’s approach is to see the internet’s impact within the context of the long-term commercialisation of journalism, accelerated by the emergence of large media conglomerates in the 1970s and 80s selling stories of celebs and sex scandals.
Throughout much of the 20th century, an ethic of “professional journalism” in the United States had held sway, with a separation between editors and owners and an aspiration to objectivity. This was no golden age by any means. Journalists were far too close to official sources (witness their cheerleading for the Vietnam war) and they could only stray so far from the ideological constraints imposed by advertisers and their wealthy owners. Nonetheless, within the mainstream, there was at least lip service to the ideal of journalism as a “public good”.

Quite rightly, McChesney does not think a return to that system is either possible or desirable. We need to bite the bullet, he says, and face up to the fact that neither the market nor amateur enthusiasts can provide the kind of critical, effective journalism appropriate to an empowered democratic culture. There needs to be “professional, dedicated journalistic institutions funded as a public good”. McChesney proposes a “citizenship news voucher” system in which each citizen receives $200 to be allocated to the nonprofit media outlet of their choice each year, with all content made freely available online, uncopyrighted and carrying no commercials.

This is a promising proposal. Crucially, it separates public funding from the state. It would be superior to the public funding system we have in the UK, which takes the form of a mandatory transfer to a single monolithic institution in the form of the BBC. The BBC’s coverage of the banking crisis, of the rise of the database state and of the Coalition’s policies of welfare cuts and NHS marketisation has confirmed that it functions as a regime broadcaster embedded within the political class when it comes to core issues (if a functioning, independent media had provided the full story behind the NHS “reforms”, you can guarantee there would have been a much bigger uproar). With a few exceptions, its journalists socialise with, and defer to the views of, the bankers and politicians who call the shots. The BBC has utterly failed to provide the public with an informed understanding of the current economic crisis and how it came about, reinforcing the Tory narrative of over-zealous public spending and necessary cuts. Its decision to censor the “Ding dong!” song, celebrating Thatcher’s death, is just the latest in a long line of capitulations to power following the Hutton inquiry. Arguably, the BBC has been a collaborator from its very beginnings. The 1980s were probably a high point for disobedience to the state – part of the awkward transition from social democratic pluralism to neoliberalism. Tony Hall, the new Director General, played an important role alongside John Birt in bringing the institution to heel.

It may be tempting to “cling to nurse, for fear of worse” in the shape of Murdoch, but this record of failures, rooted in the elitist structure and make-up of the BBC, calls for a much more imaginative approach. So long as we restrict our imagination to a choice between regime broadcaster and corporate oligarchs, that’s all we’ll be offered. Far better to argue for a plural democratisation of the licence fee to fund critical, independent media. These funds could be allocated annually by citizens, as McChesney recommends. The approach would be given an added layer of democratic input if citizens controlled not only the financing but also had a say in which editorial projects are funded, drawing on Dan Hind’s proposal for a system of “public commissioning”.
This would promote a new form of journalism responsive to active citizens, not pacified consumers. As Hind describes it:

In a system of public commissioning citizens would, collectively and equally, make decisions about the allocation of resources to journalists and researchers. Each of us would be able to provide a certain amount of material support for projects that we wanted to see funded.

With each citizen empowered to give away a voucher of, say, £100 each year to whichever nonprofit they choose, and able to shape the direction of editorial projects, we would quickly see a much more plural and democratic media. Existing outlets like openDemocracy and Novara could apply for funds, whilst new media outlets would spring up on the back of promised funding. There would be more space for criticism. As it stands, critical outlets that fundamentally challenge the logic of social and economic arrangements have always been tolerated (until they become a threat). This allows supporters of the system to trumpet its pluralism. But they always have fewer resources and much lower production values than their corporate rivals and so struggle to make themselves heard. The production values of a Democracy Now or Reel News make them that little bit less slick and authoritative than a CNN or Fox News. Equipped with their own independent infrastructure for the quality production and distribution of their content, nonprofit media outlets could start to popularise radical ideas that puncture the bankrupt consensus of our political elites.

At our current historical juncture there is a huge chasm between the utopian hopes raised by technology and automation and the miserable future of grinding insecure work or unemployment, and debt, that our political masters have prepared for us. The mutation of the internet into an exploitative medium is simply one manifestation of this grim gulf between the possible and the actual. It’s hard to think of any belief system that has been so swiftly and comprehensively discredited in theory and practice as neoliberalism. We are at a pivotal moment. The crisis opens up a space for imagining economic alternatives, but unlike the crises of the 1930s and 1970s none has so far gained any traction and so neoliberalism staggers on zombie-like. What the billionaires and tech-utopians of the Singularity University fail to grasp – because it is not in their interests to do so – is that no amount of 3D printers or nanotechnology can deliver liberation so long as human flourishing is subordinate to profit.

A critical media would provide a space to imagine how we maximise free time, reduce inequality, and create an empowered democracy, at school, in the workplace, in our communities. Why not have a documentary on intellectual property rights, a panel discussion on the merits of worker-directed firms, a play about the social wage, an explanation of what “quantitative easing” actually means, a discussion of how to fight climate change? We’re constantly led to believe that a philistine “public” is only interested in tittle-tattle, but when ideas are given space and a decent billing, they often prove popular (as with Stephanie Flanders’ recent BBC series on Marx, Keynes and Hayek). To argue against the idea of an informed and empowered citizenry taking the lead in shaping debate is to argue against democracy itself.
The battle for the commons

We should consider the models of McChesney, Hind and other theorists of critical media as proposals to be reflected on and tested in wider struggles of democratic contestation. New proposals will emerge alongside them. Yet none of them will gain traction without a political movement to support them. The idea of the commons provides the moral and political resources for the collective re-articulation of diverse struggles around a shared idea. The online tendencies of libertarian hackers and activists – from the Open Rights Group to Anonymous to peer-to-peer networks – are increasingly coming to recognise that their enemy is not only the state but private concentrations of power and the tech-state nexus. The defeat of the most draconian provisions of the Digital Economy Bill showed a glimpse of the alliances that are possible. Radical political movements, meanwhile, can’t hope to replace capitalism without an understanding of the media as not only an instrument for their message but as a distinct terrain of struggle. If groups concerned with net neutrality, privacy and limiting copyright could move beyond defensive campaigns and link up with movements fighting corporate power and austerity through a broader articulation of the commons, this opens up exciting possibilities. For the idea of the commons is both a protective, prefigurative shared space and also a claim that a different kind of future is possible starting from it.

The commons is a space free from the rule of private property and the state. It is part reality, part aspiration. It provides a realm of cultural freedom but also the public goods of education, health and social security that allow us to participate in that realm as free and equal citizens. It extends to the warehouses and factories that keep the network society running, where the most profound relations of domination are experienced. Those who make communication possible can’t be left voiceless. A say over the commons means a say over how work-time is managed and what happens to the surplus workers produce. Effective solidarity will extend to these workers and not speak for them. Inserting one world into another, the commons brings to light the contradictions between the ideal of a shared communal space of equals and the reality of closed, divided spaces of exploitation. As Winstanley reminds us, it is a realm where no one is dominated by the arbitrary will of another. The rights of free association and speech, won in political struggle, are only ever effective when the more general right to the commons is defended and expanded.

The digital commons, with its vast potential for democratic communication and co-operation, has an important part to play. At its best, it furnishes us with a compelling vision of co-operative, non-market values and behaviours. McChesney is, I think, too strong in his dismissal of the celebrants’ vision here. The pleasurable application of skills and intellect, the joy of creative activity in association with others, these recall Marx’s conception of “species-being” in a society where “the free development of each is the condition for the free development of all”. A properly articulated vision of emancipation will need to draw on these glimpses, which hint at an altogether more friendly and co-operative social ontology than the archetypal competitive consumer propagated in the broadcasting of war-like soaps and reality shows. We delight in communal life.
Even Facebook – corporate “hamster-cage” though it is – reveals a basic social urge to bond and create with others. Grass-roots movements, such as Occupy and the Indignados, have proven time and again that the tools of the internet lend themselves to successful experiments in democratic media and organisation. The principles that inform these practices can fruitfully be turned towards a critique of the current media-power nexus and the formation of concrete demands for a media that fosters genuine social and political empowerment.

Co-operative relations forged through online media can only be understood within the overriding logic of an exploitative economic system in which they are always vulnerable. They cannot be expected to organically evolve and out-compete hierarchical relations without a conscious effort. Without an outward-facing political dimension, the spaces that celebrants admire are no better than the more impotent versions of the encampments: cosy prefigurative enclaves with no clout when the powers-that-be come knocking. Difficult political choices and meeting the challenges of organisation are required to defend and realise the culture and media we want. The alternative of a private internet of business interests is too bleak to entertain.

Piece originally published at Open Democracy | Creative Commons License
Disclaimer: I am not a physicist; I am a geometer (and a student!) trying to learn some physics. Please be gentle. Thanks!

When solving the Schrödinger equation for a particle in a spherical potential, it seems common to separate variables into angular and radial components. The angular evolution can then be expressed in terms of eigenfunctions of the Laplace-Beltrami operator $\Delta$ on the sphere, i.e., the spherical harmonics. (It is my understanding that these eigenfunctions or eigenstates also have some physical significance, namely that eigenfunctions with the same eigenvalue correspond to states of equal energy.)

When solving the Dirac equation (again with a spherical potential) you'd expect a similar story: separate into angular and radial components and write the angular evolution in terms of the eigenfunctions of the Riemannian Dirac operator $D$ on the sphere. And, you'd expect these eigenfunctions would have a similar physical interpretation to the non-relativistic case (after all, the only thing we changed was the energy-momentum relationship). However, the references I'm finding on the Dirac equation with central potential write solutions in terms of the spherical spinors $\Omega$, which are themselves simple functions of the spherical harmonics $Y_l^m$.

This situation seems odd to me because, although eigenfunctions of $D$ are also eigenfunctions of $\Delta$, the opposite is not true. In particular, $D$ will have both positive and negative eigenvalues, and so eigenspaces with equal value but opposite sign get "mixed" when we square $D$ (recall that on, say, Euclidean $R^3$, $D^2=\Delta$). I'm not sure about the physical interpretation, though, because I don't understand the physical meaning of eigenfunctions of the Dirac operator.

Here are some more concrete questions:

• what do eigenfunctions of $D$ represent physically?
• why are the spherical harmonics used for separation of variables rather than eigenfunctions of $D$?
• alternatively, are there cases where eigenfunctions of $D$ are used to solve Dirac's equation?

Pedagogical references are appreciated. Thanks!

I think the [special-relativity] tag could be dropped from this, since although the theory of Dirac fermions requires SR "behind the scenes" in order to be consistent, we're not talking about relativity directly. Did you have a particular reason for including the tag? – David Z Dec 2 '10 at 6:20

Done. I guess I added the tag because I saw the Lorentz group coming up a lot in this context... and because I'm not a physicist! ;) – fuzzytron Dec 2 '10 at 18:42
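For concreteness, the round unit $S^2$ makes the "mixing" in the question explicit. The following is the standard Dirac spectrum on the 2-sphere, stated here for orientation (sign and normalization conventions vary between references):

$$\operatorname{spec}(D) = \{\pm(k+1) : k = 0, 1, 2, \dots\},$$

with each eigenvalue $\pm(k+1)$ occurring with multiplicity $2(k+1)$. Consequently $D^2$ has eigenvalue $(k+1)^2$ with multiplicity $4(k+1)$: every eigenspace of $D^2$ is the direct sum of the $+(k+1)$ and $-(k+1)$ eigenspaces of $D$, so an eigenbasis of $D^2$ does not by itself single out an eigenbasis of $D$.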
2 Answers

There might be a small bit of confusion in the question here. [Skip all this stuff below until the next bracketed comment.] First recall that $\mathfrak{so}(3)$ has representations that integrate to representations of $SU(2)$ rather than $SO(3)$, and these are the "half-integral" spin. Likewise for $\mathfrak{so}(3,1)$ and $SU(2)\times SU(2)$ and $SO(3,1)$, respectively. Now for a manifold with Lorentzian metric, a representation of $\mathfrak{so}(3,1)$ indicates what kind of particle/object/tensor/spinor you're talking about. The representation tells you how to transform the object/section when you change coordinates. (N.B.: since we're talking about bundles associated to the frame bundle, changing coordinates induces a change of trivialization of the bundle. Ordinarily, the two concepts are independent.)

More specifically, tensors correspond to "integral spin" representations, e.g. the 4-dimensional representation is vectors, while the 6-dimensional anti-symmetric representation describes two-forms (such as field strengths). So spinors are sections of bundles which correspond to representations of $\mathfrak{so}(3,1)$ which integrate to a representation of $SU(2)\times SU(2)$.

Practically speaking, in order to covariantly differentiate such an object, you can do the following. The Levi-Civita connection already gives you an element of $\mathfrak{so}(3,1)$ (a matrix) for each tangent index. Plus you have a representation of $\mathfrak{so}(3,1)$, so you act with this element of $\mathfrak{so}(3,1)$ by your representation. This is what the gamma matrices are about. [Oy, this is getting too long.]

Now here's the thing: the spinor representations decompose into different irreducible components, and the Dirac operator maps one representation (positive chirality) into another (negative chirality)! That is, it maps sections of one spinor bundle into sections of another. You can of course look at the Dirac operator on the sum of these bundles, but eigenvectors do not have an evident physical interpretation (they are of mixed chirality). In flat space, the square of the Dirac operator is a multiple of the identity endomorphism on the bundle, so eigenvalues make perfect sense and can be written in terms of functions times the (global) basis elements for spinors.

+1: very nice answer. And by the way, I wouldn't mind the bracket part being even longer (for example some mention of Clifford algebras might be useful, seeing as the OP is a geometer). – Marek Dec 2 '10 at 11:01

I'm ok with this; I think the irreducible representations you're considering (the Weyl spinors?) correspond to what I'd call even and odd functions (i.e., functions taking values in the even or odd part of the Clifford algebra, resp.). And the spinor Dirac operator maps from even to odd and vice versa. I'm trying to understand the physical meaning though: what do these even and odd parts correspond to? You mentioned chirality -- does chirality have a physical meaning here? Maybe that would help explain why one wouldn't want to consider eigenvectors of mixed chirality. – fuzzytron Dec 2 '10 at 18:29

@fuzzytron: yes, chirality does have a physical meaning: left-handed and right-handed particles. If you reflected the space (this is called P transformation) you would swap such particles. As for the mixed chirality: when actually trying to get a classical approximation it so happens that there exists a basis (mixing left and right chiralities) where the top component spinor is much larger than the bottom one (which can thus be neglected). The top spinor then corresponds to the classical two-component wavefunction modeling electron with spin. – Marek Dec 2 '10 at 19:40
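In the physics notation that usually accompanies this picture, the chirality decomposition described in the answer above is implemented by projectors built from $\gamma^5$ (standard flat-space conventions assumed):

$$\gamma^5 = i\gamma^0\gamma^1\gamma^2\gamma^3, \qquad P_\pm = \tfrac{1}{2}(1 \pm \gamma^5), \qquad \psi = P_+\psi + P_-\psi.$$

Since $\gamma^5$ anticommutes with every $\gamma^\mu$, it anticommutes with the Dirac operator $i\gamma^\mu\partial_\mu$: if $\gamma^5\psi = +\psi$, then $\gamma^5(i\gamma^\mu\partial_\mu\psi) = -i\gamma^\mu\partial_\mu\psi$. The Dirac operator therefore exchanges the two Weyl (chirality) components, which is the "even to odd and vice versa" behaviour discussed in the comments.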
I think the essential physics is simply this: The conserved quantity associated with the spherical symmetry of the system is the angular momentum vector. In the case of the scalar particle described by the Schrödinger equation, this is unambiguous. When one progresses to the Dirac operator, however, you are now describing a particle with intrinsic spin. Now, the conserved quantity is $\vec J = \vec S + \vec L$.

In general, this means that you would expect some sort of spin-orbit coupling in the system, breaking the degeneracy between different values of the z component of spin seen in the Schrödinger-equation based solution for the central force problem. I've never seen anyone do this by a direct application of the Dirac equation. Typically, the approach is to solve this using the Schrödinger equation, then treat the spin-orbit coupling as a small perturbation on top of that, and treat spin as another quantum number that shows up, whose explanation is just handwavey QFT. But all of this information would have to be embedded in the Dirac equation.

"I've never seen anyone do this by a direct application of the Dirac equation." -> Really? I thought it was pretty standard exercise to derive all the basic results (described by Pauli equation) directly from the Dirac equation. – Marek Dec 2 '10 at 11:05

Your QFT book breaks the Dirac equation into spherical harmonics and solves the hydrogen atom problem directly from the Dirac equation? – Jerry Schirmer Dec 2 '10 at 14:05

"Typically, the approach is to solve this using the Schrödinger equation, and then treat the spin-orbit coupling as a small perturbation on top of that" Yes, I've come across this approach as well and suspect it's the "right" thing to do in practice, but since my interest is in understanding the connection with geometry I'm willing to consider less practical stuff. ;) – fuzzytron Dec 2 '10 at 18:44

yes, I happened to have QM and QFT courses that solved hydrogen atom in numerous ways: Schroedinger, Pauli, Klein-Gordon, Dirac. We also did all those Stark and Zeeman effects and also solved the problem algebraically using Laplace-Runge-Lenz vector. I can say by the end I came to quite hate hydrogen atom. Thank god we at least didn't solve it from the point of view of QED, Standard Model and string theory :-) – Marek Dec 2 '10 at 19:27

That's actually pretty neat, though I'd actually be interested in a first-principles derivation of the Lamb shift, now that you mention it. :) – Jerry Schirmer Dec 2 '10 at 23:18
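For reference, the spin-orbit term alluded to in this answer emerges from the standard nonrelativistic (Pauli/Foldy-Wouthuysen) reduction of the Dirac equation with a central potential $V(r)$:

$$H_{\mathrm{SO}} = \frac{1}{2m^2c^2}\,\frac{1}{r}\frac{dV}{dr}\,\vec L\cdot\vec S, \qquad \vec L\cdot\vec S = \frac{\hbar^2}{2}\left[j(j+1) - \ell(\ell+1) - s(s+1)\right],$$

where the second identity follows from squaring $\vec J = \vec L + \vec S$. This is the term that splits states of equal $\ell$ but different $j$.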
Polythiophenes

The monomer repeat unit of unsubstituted polythiophene.

Polythiophenes demonstrate interesting optical properties resulting from their conjugated backbone, as demonstrated by the fluorescence of a substituted polythiophene solution under UV irradiation.

Polythiophenes (PTs) are polymerized thiophenes, a sulfur heterocycle. They can become conducting when electrons are added or removed from the conjugated π-orbitals via doping. The study of polythiophenes has intensified over the last three decades. The maturation of the field of conducting polymers was confirmed by the awarding of the 2000 Nobel Prize in Chemistry to Alan J. Heeger, Alan MacDiarmid, and Hideki Shirakawa "for the discovery and development of conductive polymers". The most notable property of these materials, electrical conductivity, results from the delocalization of electrons along the polymer backbone – hence the term "synthetic metals". However, conductivity is not the only interesting property resulting from electron delocalization. The optical properties of these materials respond to environmental stimuli, with dramatic color shifts in response to changes in solvent, temperature, applied potential, and binding to other molecules. Both color changes and conductivity changes are induced by the same mechanism—twisting of the polymer backbone, disrupting conjugation—making conjugated polymers attractive as sensors that can provide a range of optical and electronic responses.

A number of comprehensive reviews have been published on PTs, the earliest dating from 1981.[1] Schopf and Koßmehl published a comprehensive review of the literature published between 1990 and 1994.[2] Roncali surveyed electrochemical synthesis in 1992,[3] and the electronic properties of substituted PTs in 1997.[4] McCullough's 1998 review focussed on chemical synthesis of conducting PTs.[5] A general review of conjugated polymers from the 1990s was conducted by Reddinger and Reynolds in 1999.[6] Finally, Swager et al. examined conjugated-polymer-based chemical sensors in 2000.[7] These reviews are an excellent guide to the highlights of the primary PT literature from the last two decades.

Mechanism of conductivity and doping

Electrons are delocalized along the conjugated backbones of conducting polymers, usually through overlap of π-orbitals, resulting in an extended π-system with a filled valence band. By removing electrons from the π-system ("p-doping"), or adding electrons into the π-system ("n-doping"), a charged unit called a bipolaron is formed (see Figure 1).

Figure 1. Removal of two electrons (p-doping) from a PT chain produces a bipolaron.

Doping is performed at much higher levels (20–40%) in conducting polymers than in semiconductors (<1%). The bipolaron moves as a unit up and down the polymer chain, and is responsible for the macroscopically observed conductivity of the polymer. For some samples of poly(3-dodecylthiophene) doped with iodine, the conductivity can approach 1000 S/cm.[8] (In comparison, the conductivity of copper is approximately 5×10⁵ S/cm.) Generally, the conductivity of PTs is lower than 1000 S/cm, but high conductivity is not necessary for many applications of conducting polymers (see below for examples).

Simultaneous oxidation of the conducting polymer and introduction of counterions, p-doping, can be accomplished electrochemically or chemically.
During the electrochemical synthesis of a PT, counterions dissolved in the solvent can associate with the polymer as it is deposited onto the electrode in its oxidized form. By doping the polymer as it is synthesized, a thick film can build up on an electrode—the polymer conducts electrons from the substrate to the surface of the film. Alternatively, a neutral conducting polymer film or solution can be doped post-synthesis.

Reduction of the conducting polymer, n-doping, is much less common than p-doping. An early study of electrochemical n-doping of poly(bithiophene) found that the n-doping levels are less than those of p-doping, the n-doping cycles were less efficient, the number of cycles required to reach maximum doping was higher, and the n-doping process appeared to be kinetically limited, possibly due to counterion diffusion in the polymer.[9]

A variety of reagents have been used to dope PTs. Iodine and bromine produce high conductivities[8] but are unstable and slowly evaporate from the material.[10] Organic acids, including trifluoroacetic acid, propionic acid, and sulfonic acids, produce PTs with lower conductivities than iodine, but with higher environmental stabilities.[10][11] Oxidative polymerization with ferric chloride can result in doping by residual catalyst,[12] although matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) studies have shown that poly(3-hexylthiophene)s are also partially halogenated by the residual oxidizing agent.[13] Poly(3-octylthiophene) dissolved in toluene can be doped by solutions of ferric chloride hexahydrate dissolved in acetonitrile, and can be cast into films with conductivities reaching 1 S/cm.[14] Other, less common p-dopants include gold trichloride[15] and trifluoromethanesulfonic acid.[16]

Structure and optical properties[edit]

Conjugation length and chromisms[edit]

The extended π-systems of conjugated PTs produce some of the most interesting properties of these materials—their optical properties. As an approximation, the conjugated backbone can be considered as a real-world example of the "electron-in-a-box" solution to the Schrödinger equation; however, the development of refined models to accurately predict absorption and fluorescence spectra of well-defined oligo(thiophene) systems is ongoing.[17] Conjugation relies upon overlap of the π-orbitals of the aromatic rings, which, in turn, requires the thiophene rings to be coplanar (see Figure 2, top).

Figure 2. Conjugated π-orbitals of a coplanar and a twisted substituted PT.

The number of coplanar rings determines the conjugation length—the longer the conjugation length, the lower the separation between adjacent energy levels, and the longer the absorption wavelength. Deviation from coplanarity may be permanent, resulting from mislinkages during synthesis or especially bulky side chains; or temporary, resulting from changes in the environment or binding. This twist in the backbone reduces the conjugation length (see Figure 2, bottom), and the separation between energy levels is increased. This results in a shorter absorption wavelength.

Determining the maximum effective conjugation length requires the synthesis of regioregular PTs of defined length. The absorption band in the visible region is increasingly red-shifted as the conjugation length increases, and the maximum effective conjugation length is calculated as the saturation point of the red-shift.
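The "electron-in-a-box" approximation above can be turned into a few lines of code. The Python sketch below is purely illustrative and not from the article: the per-ring contribution to the box length and the two π-electrons per ring are assumed values, and the model predicts an absorption wavelength that keeps growing with chain length instead of saturating, which is exactly the shortcoming that motivates the refined models mentioned above.

    # Free-electron ("particle-in-a-box") estimate of the lowest optical
    # excitation of an oligothiophene backbone. E_n = n^2 h^2 / (8 m L^2),
    # so the HOMO->LUMO gap at level n_homo is (2*n_homo + 1) h^2 / (8 m L^2).
    H = 6.626e-34        # Planck constant, J*s
    M_E = 9.109e-31      # electron mass, kg
    C = 2.998e8          # speed of light, m/s
    RING_LENGTH = 3.9e-10        # assumed box length contributed per ring, m
    PI_ELECTRONS_PER_RING = 2    # assumed pi-electrons per ring in the box

    def absorption_wavelength_nm(n_rings):
        """Wavelength of the HOMO->LUMO transition for a chain of n_rings."""
        box = n_rings * RING_LENGTH
        n_homo = (n_rings * PI_ELECTRONS_PER_RING) // 2  # two electrons per level
        gap = (2 * n_homo + 1) * H**2 / (8 * M_E * box**2)
        return H * C / gap * 1e9

    for n in (2, 6, 11, 20, 48, 96):
        print(f"{n:3d} rings -> lambda_max ~ {absorption_wavelength_nm(n):7.0f} nm")

Running this shows the predicted wavelength growing roughly linearly with ring count, with no saturation point; real oligothiophene spectra red-shift far more slowly at long lengths (only 1.9 nm between the 72- and 96-mer, as discussed below), so the naive box model is only a first orientation.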
Early studies by ten Hoeve et al. estimated that the effective conjugation extended over 11 repeat units,[18] while later studies increased this estimate to 20 units.[19] More recently, Otsubo et al. synthesized 48-[20] and 96-mer[21] oligothiophenes, and found that the red-shift, while small (a difference of 1.9 nm between the 72- and the 96-mer), does not saturate, meaning that the effective conjugation length may be even longer than 96 units.[21]

A variety of environmental factors can cause the conjugated backbone to twist, reducing the conjugation length and causing an absorption band shift, including solvent, temperature, application of an electric field, and dissolved ions. The absorption band of poly(3-thiopheneacetic acid) in aqueous solutions of poly(vinyl alcohol) (PVA) shifts from 480 nm at pH 7 to 415 nm at pH 4. This is attributed to the formation of a compact coil structure, which can form hydrogen bonds with PVA upon partial deprotonation of the acetic acid group.[22] Chiral PTs showed no induced circular dichroism (ICD) in chloroform, but displayed intense, but opposite, ICDs in chloroform–acetonitrile mixtures versus chloroform–acetone mixtures.[23] Also, a PT with a chiral amino acid side chain[24] displayed moderate absorption band shifts and ICDs, depending upon the pH and the concentration of buffer.[25]

Shifts in PT absorption bands due to changes in temperature result from a conformational transition from a coplanar, rodlike structure at lower temperatures to a nonplanar, coiled structure at elevated temperatures. For example, poly(3-(octyloxy)-4-methylthiophene) undergoes a color change from red–violet at 25 °C to pale yellow at 150 °C. An isosbestic point (a point where the absorbance curves at all temperatures overlap) indicates coexistence between two phases, which may exist on the same chain or on different chains.[26] Not all thermochromic PTs exhibit an isosbestic point: highly regioregular poly(3-alkylthiophene)s (PATs) show a continuous blue-shift with increasing temperature if the side chains are short enough that they do not melt and interconvert between crystalline and disordered phases at low temperatures.[citation needed] Finally, PTs can exhibit absorption shifts due to the application of electric potentials (electrochromism),[27] or to the introduction of alkali ions (ionochromism).[28] These effects will be discussed in the context of applications of PTs below.

Regioregularity[edit]

The asymmetry of 3-substituted thiophenes results in three possible couplings when two monomers are linked between the 2- and the 5-positions. These couplings are:

• 2,5', or head–tail (HT), coupling
• 2,2', or head–head (HH), coupling
• 5,5', or tail–tail (TT), coupling

These three diads can be combined into four distinct triads, shown in Figure 3.

Figure 3. The four possible triads resulting from coupling of 3-substituted thiophenes.

The triads are distinguishable by NMR spectroscopy, and the degree of regioregularity can be estimated by integration,[29][30] as in the sketch below.
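A hedged sketch of that integration step: the triad assignments follow the figure above, but the integral values in the snippet are invented for illustration; real numbers would come from the aromatic region of a ¹H NMR spectrum of the polymer.

    # Estimating regioregularity from integrated NMR intensities of the
    # four triads of a 3-substituted PT. The integrals below are made-up
    # placeholder values, not data from the article.
    triad_integrals = {
        "HT-HT": 9.20,   # regioregular triad
        "HT-HH": 0.35,
        "TT-HT": 0.30,
        "TT-HH": 0.15,
    }

    total = sum(triad_integrals.values())
    ht_ht_fraction = triad_integrals["HT-HT"] / total
    print(f"estimated regioregularity: {100 * ht_ht_fraction:.1f}% HT-HT")

With these placeholder integrals the chain would be about 92% regioregular; the same ratio computed from real spectra is what the conductivity comparisons in the next paragraph refer to.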
Elsenbaumer et al. first noticed the effect of regioregularity on the properties of PTs. A regiorandom copolymer of 3-methylthiophene and 3-butylthiophene possessed a conductivity of 50 S/cm, while a more regioregular copolymer with a 2:1 ratio of HT to HH couplings had a higher conductivity of 140 S/cm.[31] Films of regioregular poly(3-(4-octylphenyl)thiophene) (POPT) with greater than 94% HT content possessed conductivities of 4 S/cm, compared with 0.4 S/cm for regioirregular POPT.[32] PATs prepared using Rieke zinc formed "crystalline, flexible, and bronze-colored films with a metallic luster". On the other hand, the corresponding regiorandom polymers produced "amorphous and orange-colored films".[33] Comparison of the thermochromic properties of the Rieke PATs showed that, while the regioregular polymers showed strong thermochromic effects, the absorbance spectra of the regioirregular polymers did not change significantly at elevated temperatures. This was likely due to the formation of only weak and localized conformational defects.[citation needed] Finally, Xu and Holdcroft demonstrated that the fluorescence absorption and emission maxima of poly(3-hexylthiophene)s occur at increasingly lower wavelengths (higher energy) with increasing HH dyad content. The difference between absorption and emission maxima, the Stokes shift, also increases with HH dyad content, which they attributed to greater relief from conformational strain in the first excited state.[34]

Solubility[edit]

Unsubstituted PTs are conductive after doping, and have excellent environmental stability compared with some other conducting polymers such as polyacetylene, but are intractable and soluble only in exotic solvents such as mixtures of arsenic trifluoride and arsenic pentafluoride.[35] However, in 1987 examples of organic-soluble PTs were reported. Elsenbaumer et al., using a nickel-catalyzed Grignard cross-coupling, synthesized two soluble PTs, poly(3-butylthiophene) and poly(3-methylthiophene-co-3'-octylthiophene), which could be cast into films and doped with iodine to reach conductivities of 4 to 6 S/cm.[36] Hotta et al. synthesized poly(3-butylthiophene) and poly(3-hexylthiophene) electrochemically[37] (and later chemically[38]), and characterized the polymers in solution[39] and cast into films.[40] The soluble PATs demonstrated both thermochromism and solvatochromism (see above) in chloroform and 2,5-dimethyltetrahydrofuran.[41]

Also in 1987, Wudl et al. reported the syntheses of water-soluble sodium poly(3-thiophenealkanesulfonate)s.[42] In addition to conferring water solubility, the pendant sulfonate groups act as counterions, producing self-doped conducting polymers. Substituted PTs with tethered carboxylic acids,[43] acetic acids,[44] amino acids,[24] and urethanes[45] are also water-soluble. More recently, poly(3-(perfluorooctyl)thiophene)s soluble in supercritical carbon dioxide[46] were electrochemically and chemically synthesized by Collard et al.[47] Finally, unsubstituted oligothiophenes capped at both ends with thermally labile alkyl esters were cast as films from solution, and then heated to remove the solubilizing end groups. Atomic force microscopy (AFM) images showed a significant increase in long-range order after heating.[48]

Synthesis[edit]

PTs can be synthesized electrochemically, by applying a potential across a solution of the monomer to be polymerized, or chemically, using oxidants or cross-coupling catalysts. Both methods have their advantages and disadvantages.
Electrochemical synthesis[edit]

In an electrochemical polymerization, a potential is applied across a solution containing thiophene and an electrolyte, producing a conductive PT film on the anode.[citation needed] Electrochemical polymerization is convenient, since the polymer does not need to be isolated and purified, but it can produce polymers with undesirable alpha-beta linkages and varying degrees of regioregularity.

Figure 4. Initial steps in the electropolymerization of thiophenes.

As shown in Figure 4, oxidation of a monomer produces a radical cation, which can then couple with a second radical cation to form a dication dimer, or with another monomer to produce a radical cation dimer. Deposition of long, well-ordered chains onto the electrode surface is followed by growth of either long, flexible chains, or shorter, more crosslinked chains, depending upon the polymerization conditions.

The quality of an electrochemically prepared PT film is affected by a number of factors. These include the electrode material, current density, temperature, solvent, electrolyte, presence of water, and monomer concentration.[2] Two other important but interrelated factors are the structure of the monomer and the applied potential. The potential required to oxidize the monomer depends upon the electron density in the thiophene ring π-system. Electron-donating groups lower the oxidation potential, while electron-withdrawing groups increase it. Thus, 3-methylthiophene polymerizes in acetonitrile and tetrabutylammonium tetrafluoroborate at a potential of about 1.5 V vs. SCE (saturated calomel electrode), while unsubstituted thiophene polymerizes at about 1.7 V vs. SCE. Steric hindrance resulting from branching at the α-carbon of a 3-substituted thiophene inhibits polymerization.[49] This observation leads to the so-called "polythiophene paradox": the oxidation potential of many thiophene monomers is higher than the oxidation potential of the resulting polymer. In other words, the polymer can be irreversibly oxidized and decompose at a rate comparable to the polymerization of the corresponding monomer.[citation needed] This remains one of the major disadvantages of electrochemical polymerization, and limits its application for many thiophene monomers with complex side groups.

Chemical synthesis[edit]

Chemical synthesis offers two advantages compared with electrochemical synthesis of PTs: a greater selection of monomers, and, using the proper catalysts, the ability to synthesize perfectly regioregular substituted PTs. While PTs may have been chemically synthesized by accident more than a century ago,[50] the first planned chemical syntheses using metal-catalyzed polymerization of 2,5-dibromothiophene were reported by two groups independently in 1980. Yamamoto et al. used magnesium in tetrahydrofuran (THF) and nickel(bipyridine) dichloride, analogous to the Kumada coupling of Grignard reagents to aryl halides.[51] Lin and Dudek also used magnesium in THF, but with a series of acetylacetonate catalysts (Pd(acac)2, Ni(acac)2, Co(acac)2, and Fe(acac)3).[52]

Later developments produced higher molecular weight PTs than those initial efforts, and the newer methods can be grouped into two categories based on polymer structure. Regioregular PTs can be synthesized by catalytic cross-coupling reactions of bromothiophenes, while polymers with varying degrees of regioregularity can be simply synthesized by oxidative polymerization.
The first synthesis of perfectly regioregular PATs was described by McCullough et al. in 1992.[53] As shown in Figure 5 (top), selective bromination produces 2-bromo-3-alkylthiophene, which is followed by lithiation, transmetalation, and then Kumada cross-coupling in the presence of a nickel catalyst. This method produces approximately 100% HT–HT couplings, according to NMR spectroscopy analysis of the diads. In the method subsequently described by Rieke et al. in 1993,[54] 2,5-dibromo-3-alkylthiophene is treated with highly reactive "Rieke zinc"[55] to form a mixture of organometallic isomers (Figure 5, bottom). Addition of a catalytic amount of Pd(PPh3)4 produces a regiorandom polymer, but treatment with Ni(dppe)Cl2 yields regioregular PAT in quantitative yield.[56]

Figure 5. Cross-coupling methods for preparing regioregular PATs.

While the McCullough and Rieke methods produce structurally homogeneous PATs, they require low temperatures, the careful exclusion of water and oxygen, and brominated monomers. In contrast, the oxidative polymerization of thiophenes using ferric chloride described by Sugimoto in 1986 can be performed at room temperature under less demanding conditions.[57] This method has proven to be extremely popular; antistatic coatings are prepared on a commercial scale using ferric chloride (see below).[58]

A number of studies have been conducted in attempts to improve the yield and quality of the product obtained using the oxidative polymerization technique. In addition to ferric chloride, other oxidizing agents, including ferric chloride hydrate, copper perchlorate, and iron perchlorate, have also been used successfully to polymerize 2,2'-bithiophene.[59] Slow addition of ferric chloride to the monomer solution produced poly(3-(4-octylphenyl)thiophene)s with approximately 94% H–T content.[32] Precipitation of ferric chloride in situ (in order to maximize the surface area of the catalyst) produced significantly higher yields and monomer conversions than adding monomer directly to crystalline catalyst.[60][61] Higher molecular weights were reported when dry air was bubbled through the reaction mixture during polymerization.[62] Exhaustive Soxhlet extraction after polymerization with polar solvents was found to effectively fractionate the polymer and remove residual catalyst before NMR spectroscopy.[29] Using a lower ratio of catalyst to monomer (2:1, rather than 4:1) may increase the regioregularity of poly(3-dodecylthiophene)s.[63] Andreani et al. reported higher yields of soluble poly(dialkylterthiophene)s in carbon tetrachloride rather than chloroform, which they attributed to the stability of the radical species in carbon tetrachloride.[64] Higher-quality catalyst, added at a slower rate and at reduced temperature, was shown to produce high molecular weight PATs with no insoluble polymer residue.[65] Laakso et al. used a factorial design to determine that increasing the ratio of catalyst to monomer increased the yield of poly(3-octylthiophene), and claimed that a longer polymerization time also increased the yield.[66]

The mechanism of the oxidative polymerization using ferric chloride has been controversial. Sugimoto et al. did not speculate on a mechanism in their 1986 report.[57] In 1992, Niemi et al. proposed a radical mechanism, shown in Figure 6 (top).

Figure 6. Proposed mechanisms for ferric chloride oxidative polymerizations of thiophenes.

They based their mechanism on two assumptions.
First, since they observed polymerization only in solvents where the catalyst was either partially or completely insoluble (chloroform, toluene, carbon tetrachloride, pentane, and hexane, but not diethyl ether, xylene, acetone, or formic acid), they concluded that the active sites of the polymerization must be at the surface of solid ferric chloride. Therefore, they discounted the possibilities of either two radical cations reacting with each other, or two radicals reacting with each other, "because the chloride ions at the surface of the crystal would prevent the radical cations or radicals from assuming positions suitable for dimerization".[67] Second, using 3-methylthiophene as a prototypical monomer, they performed quantum mechanical calculations to determine the energies and the total atomic charges on the carbon atoms of the four possible polymerization species (neutral 3-methylthiophene, the radical cation, the radical on carbon 2, and the radical on carbon 5). Since the most negative carbon of the neutral 3-methylthiophene is carbon 2, and the carbon with the highest odd-electron population of the radical cation is also carbon 2, they concluded that a radical cation mechanism would lead to mostly 2–2, H–H links. They then calculated the total energies of the species with the radicals at the 2 and the 5 carbons, and found that the latter was more stable by 1.5 kJ/mol. Therefore, the more stable radical could react with the neutral species, forming head-to-tail couplings as shown in Figure 6 (top).

Andersson et al. offered an alternative mechanism in the course of their studies of the polymerization of 3-(4-octylphenyl)thiophene with ferric chloride, where they found a high degree of regioregularity when the catalyst was added to the monomer mixture slowly. They concluded that, given the selectivity of the couplings and the strong oxidizing conditions, the reaction could proceed via a carbocation mechanism (Figure 6, middle).[32]

The radical mechanism was directly challenged in a short communication in 1995, when Olinga and François noted that thiophene could be polymerized by ferric chloride in acetonitrile, a solvent in which the catalyst is completely soluble. Their analysis of the kinetics of thiophene polymerization also seemed to contradict the predictions of the radical polymerization mechanism.[68] Barbarella et al. studied the oligomerization of 3-(alkylsulfanyl)thiophenes, and concluded from their quantum mechanical calculations, and from considerations of the enhanced stability of the radical cation when delocalized over a planar conjugated oligomer, that a radical cation mechanism analogous to that generally accepted for electrochemical polymerization was more likely (Figure 6, bottom).[69] Given the difficulties of studying a system with a heterogeneous, strongly oxidizing catalyst that produces difficult-to-characterize rigid-rod polymers, the mechanism of oxidative polymerization is by no means settled. However, the radical cation mechanism shown in Figure 6 is generally accepted as the most likely route for PT synthesis.

Applications[edit]

A number of applications have been proposed for conducting PTs, but none has been commercialized. Potential applications include field-effect transistors,[70] electroluminescent devices, solar cells, photochemical resists, nonlinear optic devices,[71] batteries, diodes, and chemical sensors.[72] In general, there are two categories of applications for conducting polymers.
Static applications rely upon the intrinsic conductivity of the materials, combined with their ease of processing and the material properties common to polymeric materials. Dynamic applications utilize changes in the conductive and optical properties, resulting either from the application of electric potentials or from environmental stimuli.

Figure 7. PEDOT-PSS (Clevios P).

As an example of a static application, the poly(3,4-ethylenedioxythiophene)-poly(styrene sulfonate) (PEDOT-PSS) product Clevios P (Figure 7) from Heraeus has been extensively used as an antistatic coating (as packaging materials for electronic components, for example). AGFA coats 200 m × 10 m of photographic film per year with PEDOT:PSS because of its antistatic properties. The thin layer of PEDOT:PSS is virtually transparent and colorless, prevents electrostatic discharges during film rewinding, and reduces dust buildup on the negatives after processing.

PEDOT can also be used in dynamic applications where a potential is applied to a polymer film. The electrochromic properties of PEDOT are used to manufacture windows and mirrors that can become opaque or reflective upon the application of an electric potential.[27] Widespread adoption of electrochromic windows could save billions of dollars per year in air conditioning costs.[73] Finally, Philips has commercialized a mobile phone with an electrically switchable PEDOT mirror.

Figure 8. Ionoselective PTs reported by Bäuerle (left) and Swager (right).

The use of PTs as sensors responding to an analyte has also been the subject of intense research. In addition to biosensor applications, PTs can be functionalized with synthetic receptors for detecting metal ions or chiral molecules as well. PTs with pendant[74] and main-chain[28] crown ether functionalities were reported in 1993 by the research groups of Bäuerle and Swager, respectively (Figure 8). Electrochemically polymerized thin films of the Bäuerle pendant crown ether PT were exposed to millimolar concentrations of alkali cations (Li, Na, and K). The current that passed through the film at a fixed potential dropped dramatically in lithium ion solutions, less so in sodium ion solutions, and only slightly in potassium ion solutions. The Swager main-chain crown ether PTs were prepared by chemical coupling and characterized by absorbance spectroscopy. Addition of the same alkali cations resulted in absorbance shifts of 46 nm (Li), 91 nm (Na), and 22 nm (K). The size of the shifts corresponds to the ion-binding preferences of the corresponding crown ether, resulting from a twist in the conjugated polymer backbone induced by ion binding.

Figure 9. Chiral PT synthesized by Yashima and Goto.

In the course of their studies of the optical properties of chiral PTs,[75][76][77][78] Yashima and Goto found that a PT with a chiral primary amine (Figure 9) was sensitive to chiral amino alcohols, producing mirror-image-split ICD responses in the π–π* transition region.[79] This was the first example of chiral recognition by PTs using a chiral detection method (CD spectroscopy). This distinguished it from earlier work by Lemaire et al., who used an achiral detection method (cyclic voltammetry) to detect the incorporation of chiral dopant anions into an electrochemically polymerized chiral PT.[80]

A fluorine-substituted polythiophene was shown to yield 7% efficiency in polymer–fullerene solar cells.[81]

References[edit]

1. ^ Street, G. B.; Clarke, T. C. (1981). "Conducting Polymers: A Review of Recent Work". IBM J. Res. Dev. 25: 51–57. doi:10.1147/rd.251.0051.
2. ^ a b Schopf, G.; Koßmehl, G. (1997). "Polythiophenes—Electrically Conducting Polymers". Advances in Polymer Science 129: 1–166. doi:10.1007/BFb0008700. ISBN 3-540-61857-0. 3. ^ Roncali, Jean (1992). "Conjugated poly(thiophenes): synthesis, functionalization, and applications". Chemical Reviews 92 (4): 711. doi:10.1021/cr00012a009. 4. ^ Roncali, Jean (1997). "Synthetic Principles for Bandgap Control in Linear π-Conjugated Systems". Chemical Reviews 97 (1): 173–206. doi:10.1021/cr950257t. PMID 11848868. 5. ^ McCullough, Richard D. (1998). "The Chemistry of Conducting Polythiophenes". Advanced Materials 10 (2): 93. doi:10.1002/(SICI)1521-4095(199801)10:2<93::AID-ADMA93>3.0.CO;2-F. 6. ^ Reddinger, J. L.; Reynolds, J. R. (1999). "Molecular Engineering of π-Conjugated Polymers". Advances in Polymer Science 145: 57–122. doi:10.1007/3-540-70733-6_2. ISBN 978-3-540-65210-6. 7. ^ McQuade, D. Tyler; Pullen, Anthony E.; Swager, Timothy M. (2000). "Conjugated Polymer-Based Chemical Sensors". Chemical Reviews 100 (7): 2537–74. doi:10.1021/cr9801014. PMID 11749295. 8. ^ a b McCullough, Richard D.; Tristram-Nagle, Stephanie; Williams, Shawn P.; Lowe, Renae D.; Jayaraman, Manikandan (1993). "Self-orienting head-to-tail poly(3-alkylthiophenes): new insights on structure-property relationships in conducting polymers". Journal of the American Chemical Society 115 (11): 4910. doi:10.1021/ja00064a070. 9. ^ Mastragostino, M.; Soddu, L. (1990). "Electrochemical characterization of "n" doped polyheterocyclic conducting polymers—I. Polybithiophene". Electrochimica Acta 35 (2): 463. doi:10.1016/0013-4686(90)87029-2. 10. ^ a b Loponen, M.; Taka, T.; Laakso, J.; Vakiparta, K.; Suuronen, K.; Valkeinen, P.; Osterholm, J. (1991). "Doping and dedoping processes in poly(3-alkylthiophenes)". Synthetic Metals 41: 479. doi:10.1016/0379-6779(91)91111-M. 11. ^ Bartuš, Ján (1991). "Electrically Conducting Thiophene Polymers". Journal of Macromolecular Science: Part A – Chemistry 28 (9): 917–924. doi:10.1080/00222339108054069. 12. ^ Qiao, X.; Wang, Xianhong; Mo, Zhishen (2001). "The FeCl3-doped poly(3-alkylthiophenes) in solid state". Synthetic Metals 122 (2): 449. doi:10.1016/S0379-6779(00)00587-7. 13. ^ McCarley, Tracy Donovan; Noble; Dubois, C. J.; McCarley, Robin L. (2001). "MALDI-MS Evaluation of Poly(3-hexylthiophene) Synthesized by Chemical Oxidation with FeCl3". Macromolecules 34 (23): 7999. doi:10.1021/ma002140z. 14. ^ Heffner, G.; Pearson, D. (1991). "Solution processing of a doped conducting polymer". Synthetic Metals 44 (3): 341. doi:10.1016/0379-6779(91)91821-Q. 15. ^ Abdou, M. S. A.; Holdcroft, Steven (1993). "Oxidation of π-conjugated polymers with gold trichloride: enhanced stability of the electronically conducting state and electroless deposition of Au0". Synthetic Metals 60 (2): 93. doi:10.1016/0379-6779(93)91226-R. 16. ^ Rudge, Andy; Raistrick, Ian; Gottesfeld, Shimshon; Ferraris, John P. (1994). "A study of the electrochemical properties of conducting polymers for application in electrochemical capacitors". Electrochimica Acta 39 (2): 273. doi:10.1016/0013-4686(94)80063-4. 17. ^ Bässler, H. "Electronic Excitation". In Electronic Materials: The Oligomer Approach; Müllen, K.; Wegner, G., Eds.; Wiley-VCH: Weinheim, Germany, 1998, ISBN 3-527-29438-4. 18. ^ Ten Hoeve, W.; Wynberg, H.; Havinga, E. E.; Meijer, E. W. (1991). "Substituted .. – undecithiophenes, the longest characterized oligothiophenes". Journal of the American Chemical Society 113 (15): 5887.
doi:10.1021/ja00015a067. 19. ^ Meier, H.; Stalmach, U.; Kolshorn, H. (September 1997). "Effective conjugation length and UV/vis spectra of oligomers". Acta Polymerica 48 (9): 379–384. doi:10.1002/actp.1997.010480905. 20. ^ Nakanishi, Hidetaka; Sumi, Naoto; Aso, Yoshio; Otsubo, Tetsuo (1998). "Synthesis and Properties of the Longest Oligothiophenes: the Icosamer and Heptacosamer". The Journal of Organic Chemistry 63 (24): 8632. doi:10.1021/jo981541y. 21. ^ a b Izumi, Tsuyoshi; Kobashi, Seiji; Takimiya, Kazuo; Aso, Yoshio; Otsubo, Tetsuo (2003). "Synthesis and Spectroscopic Properties of a Series of β-Blocked Long Oligothiophenes up to the 96-mer: Revaluation of Effective Conjugation Length". Journal of the American Chemical Society 125 (18): 5286–7. doi:10.1021/ja034333i. PMID 12720435. 22. ^ De Souza, J.; Pereira, Ernesto C. (2001). "Luminescence of poly(3-thiopheneacetic acid) in alcohols and aqueous solutions of poly(vinyl alcohol)". Synthetic Metals 118: 167. doi:10.1016/S0379-6779(00)00453-7. 23. ^ Goto, H.; Yashima, E.; Okamoto, Y. (2000). "Unusual solvent effects on chiroptical properties of an optically active regioregular polythiophene in solution". Chirality 12 (5–6): 396–399. doi:10.1002/(SICI)1520-636X(2000)12:5/6<396::AID-CHIR17>3.0.CO;2-X. PMID 10824159. 24. ^ a b Andersson, M.; Ekeblad, P. O.; Hjertberg, T.; Wennerström, O.; Inganäs, O. (1991). Polym. Commun. 32: 546–548. 25. ^ Nilsson, K. P. R.; Andersson, M. R.; Inganäs, O. (2002). "Conformational transitions of a free amino-acid-functionalized polythiophene induced by different buffer systems". Journal of Physics: Condensed Matter 14 (42): 10011. doi:10.1088/0953-8984/14/42/313. 26. ^ Roux, Claudine; Leclerc, Mario (1992). "Rod-to-coil transition in alkoxy-substituted polythiophenes". Macromolecules 25 (8): 2141. doi:10.1021/ma00034a012. 27. ^ a b Heuer, H. W.; Wehrmann, R.; Kirchmeyer, S. (2002). "Electrochromic Window Based on Conducting Poly(3,4-ethylenedioxythiophene)-Poly(styrene sulfonate)". Advanced Functional Materials 12 (2): 89. doi:10.1002/1616-3028(20020201)12:2<89::AID-ADFM89>3.0.CO;2-1. 28. ^ a b Marsella, Michael J.; Swager, Timothy M. (1993). "Designing conducting polymer-based sensors: selective ionochromic response in crown ether-containing polythiophenes". Journal of the American Chemical Society 115 (25): 12214. doi:10.1021/ja00078a090. 29. ^ a b Barbarella, Giovanna; Bongini, Alessandro; Zambianchi, Massimo (1994). "Regiochemistry and Conformation of Poly(3-hexylthiophene) via the Synthesis and the Spectroscopic Characterization of the Model Configurational Triads". Macromolecules 27 (11): 3039. doi:10.1021/ma00089a022. 30. ^ Diaz-Quijada, G. A. et al. (1996). "Regiochemical Analysis of Water Soluble Conductive Polymers: Sodium Poly(ω-(3-thienyl)alkanesulfonates)". Macromolecules 29 (16): 5416. doi:10.1021/ma960126. 31. ^ Elsenbaumer, R. L.; Jen, K.-Y.; Miller, G. G.; Eckhardt, H.; Shacklette, L. W.; Jow, R. "Poly (alkyl thiophenes) and Poly (substituted heteroaromatic vinylenes): Versatile, Highly Conductive, Processible Polymers with Tunable Properties". In Electronic Properties of Conjugated Polymers (Eds: Kuzmany, H.; Mehring, M.; Roth, S.), Springer, Berlin, 1987, ISBN 0-387-18582-8. 32. ^ a b c Andersson, M. R.; Selse, D.; Berggren, M.; Jaervinen, H.; Hjertberg, T.; Inganaes, O.; Wennerstroem, O.; Oesterholm, J.-E. (1994). "Regioselective polymerization of 3-(4-octylphenyl)thiophene with FeCl3". Macromolecules 27 (22): 6503. doi:10.1021/ma00100a039.
33. ^ Chen, Tian-An; Wu, Xiaoming; Rieke, Reuben D. (1995). "Regiocontrolled Synthesis of Poly(3-alkylthiophenes) Mediated by Rieke Zinc: Their Characterization and Solid-State Properties". Journal of the American Chemical Society 117: 233. doi:10.1021/ja00106a027. 34. ^ Xu, Bai; Holdcroft, Steven (1993). "Molecular control of luminescence from poly(3-hexylthiophenes)". Macromolecules 26 (17): 4457. doi:10.1021/ma00069a009. 35. ^ Frommer, Jane E. (1986). "Conducting polymer solutions". Accounts of Chemical Research 19: 2. doi:10.1021/ar00121a001. 36. ^ Elsenbaumer, R.; Jen, K.; Oboodi, R. (1986). "Processible and environmentally stable conducting polymers". Synthetic Metals 15 (2–3): 169. doi:10.1016/0379-6779(86)90020-2. 37. ^ Hotta, S.; Rughooputh, S. D. D. V.; Heeger, A. J.; Wudl, F. (1987). "Spectroscopic studies of soluble poly(3-alkylthienylenes)". Macromolecules 20: 212. doi:10.1021/ma00167a038. 38. ^ Hotta, S.; Soga, M.; Sonoda, N. (1988). "Novel organosynthetic routes to polythiophene and its derivatives". Synthetic Metals 26 (3): 267. doi:10.1016/0379-6779(88)90243-3. 39. ^ Hotta, S. (1987). "Electrochemical synthesis and spectroscopic study of poly(3-alkylthienylenes)". Synthetic Metals 22 (2): 103. doi:10.1016/0379-6779(87)90528-5. 40. ^ Hotta, S.; Rughooputh, S.; Heeger, A. (1987). "Conducting polymer composites of soluble polythiophenes in polystyrene". Synthetic Metals 22: 79. doi:10.1016/0379-6779(87)90573-X. 41. ^ Rughooputh, S. D. D. V.; Hotta, S.; Heeger, A. J.; Wudl, F. (May 1987). "Chromism of soluble polythienylenes". Journal of Polymer Science Part B: Polymer Physics 25 (5): 1071–1078. doi:10.1002/polb.1987.090250508. 42. ^ Patil, A. O.; Ikenoue, Y.; Wudl, Fred; Heeger, A. J. (1987). "Water soluble conducting polymers". Journal of the American Chemical Society 109 (6): 1858. doi:10.1021/ja00240a044. 43. ^ Englebienne, Patrick; Weiland, Michèle (1996). "Synthesis of water-soluble carboxylic and acetic acid-substituted poly(thiophenes) and the application of their photochemical properties in homogeneous competitive immunoassays". Chemical Communications (14): 1651. doi:10.1039/cc9960001651. 44. ^ Kim; Chen, Li; Gong; Osada, Yoshihito (1999). "Titration Behavior and Spectral Transitions of Water-Soluble Polythiophene Carboxylic Acids". Macromolecules 32 (12): 3964. doi:10.1021/ma981848z. 45. ^ Jung, S.; Hwang, D.-H.; Zyung, T.; Kim, W. H.; Chittibabu, K. G.; Tripathy, S. K. (1998). "Temperature dependent photoluminescence and electroluminescence properties of polythiophene with hydrogen bonding side chain". Synthetic Metals 98 (2): 107. doi:10.1016/S0379-6779(98)00161-1. 46. ^ Desimone, J. M.; Guan, Z.; Elsbernd, C. S. (1992). "Synthesis of Fluoropolymers in Supercritical Carbon Dioxide". Science 257 (5072): 945–7. doi:10.1126/science.257.5072.945. PMID 17789638. 47. ^ Li, L.; Counts, K. E.; Kurosawa, S.; Teja, A. S.; Collard, D. M. (2004). "Tuning the Electronic Structure and Solubility of Conjugated Polymers with Perfluoroalkyl Substituents: Poly(3-perfluorooctylthiophene), the First Supercritical CO2-soluble Conjugated Polymer". Advanced Materials 16 (2): 180. doi:10.1002/adma.200305333. 48. ^ Murphy, Amanda R.; Fréchet, Jean M. J.; Chang, Paul; Lee, Josephine; Subramanian, Vivek (2004). "Organic Thin Film Transistors from a Soluble Oligothiophene Derivative Containing Thermally Removable Solubilizing Groups". Journal of the American Chemical Society 126 (6): 1596–7. doi:10.1021/ja039529x. PMID 14871066.
49. ^ Roncali, J.; Garreau, R.; Yassar, A.; Marque, P.; Garnier, F.; Lemaire, M. (1987). "Effects of steric factors on the electrosynthesis and properties of conducting poly(3-alkylthiophenes)". The Journal of Physical Chemistry 91 (27): 6706. doi:10.1021/j100311a030. 50. ^ Meyer, Victor (January–June 1883). "Ueber den Begleiter des Benzols im Steinkohlentheer" [On the companion of benzene in stone coal]. Berichte der deutschen chemischen Gesellschaft (in German) 16 (1): 1465–1478. doi:10.1002/cber.188301601324. 51. ^ Yamamoto, Takakazu; Sanechika, Kenichi; Yamamoto, Akio (January 1980). "Preparation of thermostable and electric-conducting poly(2,5-thienylene)". Journal of Polymer Science: Polymer Letters Edition 18 (1): 9–12. doi:10.1002/pol.1980.130180103. 52. ^ Lin, John W-P.; Dudek, Lesley P. (September 1980). "Synthesis and properties of poly(2,5-thienylene)". Journal of Polymer Science: Polymer Chemistry Edition 18 (9): 2869–2873. doi:10.1002/pol.1980.170180910. 53. ^ McCullough, Richard D.; Lowe, Renae D. (1992). "Enhanced electrical conductivity in regioselectively synthesized poly(3-alkylthiophenes)". Journal of the Chemical Society, Chemical Communications: 70. doi:10.1039/C39920000070. 54. ^ Chen, Tian An; O'Brien, Richard A.; Rieke, Reuben D. (1993). "Use of highly reactive zinc leads to a new, facile synthesis for polyarylenes". Macromolecules 26 (13): 3462. doi:10.1021/ma00065a036. 55. ^ Zhu, Lishan; Wehmeyer, Richard M.; Rieke, Reuben D. (1991). "The direct formation of functionalized alkyl(aryl)zinc halides by oxidative addition of highly reactive zinc with organic halides and their reactions with acid chlorides, α,β-unsaturated ketones, and allylic, aryl, and vinyl halides". The Journal of Organic Chemistry 56 (4): 1445. doi:10.1021/jo00004a021. 56. ^ Chen, Tian An; Rieke, Reuben D. (1992). "The first regioregular head-to-tail poly(3-hexylthiophene-2,5-diyl) and a regiorandom isopolymer: nickel versus palladium catalysis of 2(5)-bromo-5(2)-(bromozincio)-3-hexylthiophene polymerization". Journal of the American Chemical Society 114 (25): 10087. doi:10.1021/ja00051a066. 57. ^ a b Sugimoto, R.; Taketa, S.; Gu, H. B.; Yoshino, K. (1986). "Preparation of soluble polythiophene derivatives utilizing transition metal halides as catalysts and their property". Chemistry Express 1 (11): 635–638. 58. ^ Jonas, F.; Heywang, G.; Schmidtberg, W.; Heinze, J.; Dietrich, M. U.S. Patent 5,035,926, 1991. 59. ^ Ruckenstein, E.; Park, J. (1991). "Polythiophene and polythiophene-based conducting composites". Synthetic Metals 44 (3): 293. doi:10.1016/0379-6779(91)91817-T. 60. ^ Costa Bizzarri, P.; Andreani, Franco; Della Casa, Carlo; Lanzi, Massimiliano; Salatelli, Elisabetta (1995). "Ester-functionalized poly(3-alkylthienylene)s: substituent effects on the polymerization with FeCl3". Synthetic Metals 75 (2): 141. doi:10.1016/0379-6779(95)03401-5. 61. ^ Fraleoni-Morgera, Alessandro; Della-Casa, Carlo; Lanzi, Massimiliano; Costa-Bizzarri, Paolo (2003). "Investigation on Different Procedures in the Oxidative Copolymerization of a Dye-Functionalized Thiophene with 3-Hexylthiophene". Macromolecules 36 (23): 8617. doi:10.1021/ma0348730. 62. ^ Pomerantz, M.; Tseng, J.; Zhu, H.; Sproull, S.; Reynolds, J.; Uitz, R.; Arnott, H.; Haider, M. (1991). "Processable polymers and copolymers of 3-alkylthiophenes and their blends". Synthetic Metals 41 (3): 825. doi:10.1016/0379-6779(91)91505-5. 63. ^ Qiao, X.; Wang, Xianhong; Zhao, Xiaojiang; Liu, Jian; Mo, Zhishen (2000).
"Poly(3-dodecylthiophenes) polymerized with different amounts of catalyst". Synthetic Metals 114 (3): 261. doi:10.1016/S0379-6779(00)00233-2.  64. ^ Andreani, F.; Salatelli, E.; Lanzi, M. (February 1996). "Novel poly(3,3 – and 3',4'-dialkyl- 2,2':5',2 – terthiophene)s by chemical oxidative synthesis: evidence for a new step towards the optimization of this process". Polymer 37 (4): 661–665. doi:10.1016/0032-3861(96)83153-3.  65. ^ Gallazzi, M.; Bertarelli, C.; Montoneri, E. (2002). "Critical parameters for product quality and yield in the polymerisation of 3,3″-didodecyl-2,2′:5′,2″-terthiophene". Synthetic Metals 128: 91. doi:10.1016/S0379-6779(01)00665-8.  66. ^ Laakso, J.; Jarvinen, H.; Skagerberg, B. (1993). "Recent developments in the polymerization of 3-alkylthiophenes". Synthetic Metals 55 (2–3): 1204. doi:10.1016/0379-6779(93)90225-L.  67. ^ Niemi, V. M.; Knuuttila, P.; Österholm, J. E.; Korvola, J. (1992). "Polymerization of 3-alkylthiophenes with ferric chloride". Polymer year=1992 33 (7): 1559–1562. doi:10.1016/0032-3861(92)90138-M. . 68. ^ Olinga, T.; François, B. (1995). "Kinetics of polymerization of thiophene by FeCl3 in choloroform and acetonitrile". Synthetic Metals 69: 297. doi:10.1016/0379-6779(94)02457-A.  69. ^ Barbarella, Giovanna; Zambianchi, Massimo; Di Toro, Rosanna; Colonna, Martino; Iarossi, Dario; Goldoni, Francesca; Bongini, Alessandro (1996). "Regioselective Oligomerization of 3-(Alkylsulfanyl)thiophenes with Ferric Chloride". The Journal of Organic Chemistry 61 (23): 8285–8292. doi:10.1021/jo960982j. PMID 11667817.  70. ^ Garnier, F. "Field-Effect Transistors Based on Conjugated Materials". In Electronic Materials: The Oligomer Approach (Eds: Müllen, K.; Wegner, G.), Wiley-VCH, Weinheim, 1998, ISBN 3-527-29438-4 71. ^ Harrison, M. G.; Friend, R. H. "Optical Applications". In Electronic Materials: The Oligomer Approach (Eds: Müllen, K.; Wegner, G.), Wiley-VCH, Weinheim, 1998, ISBN 3-527-29438-4 72. ^ Martina, V; Ionescu, K.; Pigani, L; Terzi, F; Ulrici, A.; Zanardi, C.; Seeber, R (March 2007). "Development of an electronic tongue based on a PEDOT-modified voltammetric sensor". Analytical and bioanalytical chemistry 387 (6): 2101–2110. doi:10.1007/s00216-006-1102-1. PMID 17235499.  73. ^ Rosseinsky, D. R.; Mortimer, R. J. (2001). "Electrochromic Systems and the Prospects for Devices". Advanced Materials 13 (11): 783. doi:10.1002/1521-4095(200106)13:11<783::AID-ADMA783>3.0.CO;2-D.  74. ^ Bäuerle, Peter; Scheib, Stefan (1993). "Molecular recognition of alkali-ions by crown-ether-functionalized poly(alkylthiophenes)". Advanced Materials 5 (11): 848. doi:10.1002/adma.19930051113.  75. ^ Goto, Hidetoshi; Okamoto, Yoshio; Yashima, Eiji (2002). "Metal-Induced Supramolecular Chirality in an Optically Active Polythiophene Aggregate". Chemistry – A European Journal 8 (17): 4027–36. doi:10.1002/1521-3765(20020902)8:17<4027::AID-CHEM4027>3.0.CO;2-Q. PMID 12360944.  76. ^ Goto, Hidetoshi; Okamoto, Yoshio; Yashima, Eiji (2002). "Solvent-Induced Chiroptical Changes in Supramolecular Assemblies of an Optically Active, Regioregular Polythiophene". Macromolecules 35 (12): 4590. doi:10.1021/ma012083p.  77. ^ Goto, Hidetoshi; Yashima, Eiji (2002). "Electron-Induced Switching of the Supramolecular Chirality of Optically Active Polythiophene Aggregates". Journal of the American Chemical Society 124 (27): 7943–9. doi:10.1021/ja025900p. PMID 12095338.  78. ^ Sakurai, Shin-Ichiro; Goto, Hidetoshi; Yashima, Eiji (2001). 
"Synthesis and Chiroptical Properties of Optically Active, Regioregular Oligothiophenes". Organic Letters 3 (15): 2379–82. doi:10.1021/ol016189g. PMID 11463321.  79. ^ Yashima, Eiji; Goto, Hidetoshi; Okamoto, Yoshio (1999). "Metal-Induced Chirality Induction and Chiral Recognition of Optically Active, Regioregular Polythiophenes". Macromolecules 32 (23): 7942. doi:10.1021/ma9912305.  80. ^ Lemaire, Marc; Delabouglise, Didier; Garreau, Robert; Guy, Alain; Roncali, Jean (1988). "Enantioselective chiral poly(thiophenes)". Journal of the Chemical Society, Chemical Communications (10): 658. doi:10.1039/C39880000658.  81. ^ Price, Samuel C.; Stuart, Andrew C.; Yang, Liqiang; Zhou, Huaxing; You, Wei (2011). "Fluorine Substituted Conjugated Polymer of Medium Band Gap Yields 7% Efficiency in Polymer−Fullerene Solar Cells". Journal of the American Chemical Society 133 (12): 4625–4631. doi:10.1021/ja1112595. PMID 21375339.  Further reading[edit]
What does superposition mean in quantum mechanics?

When I say $A+B=C$ (forces), I can mean that pushing something with force $A$ plus force $B$ together is the same as pushing it with force $C$. But when I say that wavefunction $A$ + $B$ is also a solution of the Schrödinger equation, what do I mean? The physics behind the two statements is obviously not the same. Is it just something purely mathematical?

• Math: If you have an operator $D$ with $$D(\Psi+\Phi)=D(\Psi)+D(\Phi),$$ then if $D(\Psi)=0$ and $D(\Phi)=0$, you can also conclude that $D(\Psi+\Phi)=0$. This is the case for the Schrödinger equation, as it reads $$D(\Psi):=(i\hbar\tfrac{\partial}{\partial t}-H)\Psi=0,$$ where $H$ is linear. For example, you certainly have linearity for the derivatives: $$(f(x)+g(x))'=f'(x)+g'(x)$$ and even more so for multiplicative operators: $$V(x)\cdot (f(x)+g(x))=V(x)\cdot f(x)+V(x)\cdot g(x).$$ The books point out that superposition works like this to emphasise that the probability waves don't affect each other, and this enables you to find solutions of the equation. If, in contrast, the Schrödinger equation read $$D(\Psi):=(i\hbar\tfrac{\partial}{\partial t}-H)\Psi^2=0,$$ which is non-linear because of the $\Psi^2$, then you'd have cross-terms, and from $\Phi$ and $\Psi$ being solutions ($D(\Psi)=0$ and $D(\Phi)=0$) it would not follow that $\Psi+\Phi$ is a solution too (you only get $D(\Psi+\Phi)=0+0+D(\sqrt{2\cdot\Psi\cdot\Phi})\ne0$).

• Physics: What do you mean by "the physics between them"? Anyway, as an illustration, if you have a function like $\Psi(x)=A\text{e}^{-(x-3)^2}$, which is a bump located around the point $x=3$, and you add to it a function $\Phi(x)=B\text{e}^{-(x-7)^2}$, which is a bump located around the point $x=7$, then you get a function $$\chi(x):=\Psi(x)+\Phi(x)=A\text{e}^{-(x-3)^2}+B\text{e}^{-(x-7)^2},$$ which has two bumps. The wave function relates to probability densities, and if you have high probabilities at the point $x=3$ for $\Psi$ and at $x=7$ for $\Phi$, then $\Psi+\Phi$ will tend to describe a situation with relatively high probabilities at both of these points.

One way to think of superposition is this: If particles behave to some degree like waves, in the sense that they can never be completely "squeezed down" into actual points, then the waves -- the probability functions -- can add together very much like waves on a pond. So, just as on a pond surface you could combine large waves with crests a foot apart traveling north with small waves whose crests are an inch apart traveling east, you can in principle do exactly the same thing with the probability waves of an electron.

Wave addition is surprisingly simple, incidentally, amounting to not much more than simply imposing the smaller wave onto the moving surface of the larger wave. So, while the heights of the waves at any one point will change as the two waves move, the height of the wave at that point will always be nothing more than a simple arithmetic sum of the heights that each wave would have had separately. That nice, simple arithmetic property is called linearity, and (fortunately for physicists seeking simplicity!) it can be found throughout much of physics.

In the case of the electron there is one additional constraint: A single electron can only generate a finite amount of wave action.
That wave action can be split up in many different ways and into many different types of waves, but the total sum of all those waves must always add up to one "electron's worth" of wave action. So for example, just as with the pond waves, an electron wave could consist of an equal mix of large waves moving north and small waves moving south, as long as the two sets of waves always add up to "one electron" of total wave action.

Now the fun part is that when electrons are modeled as waves, those waves have a very specific meaning, one that is a bit less than intuitive. The interpretation is this: The big waves traveling north mean that if you poke hard at the wave with something like a photon, you will sometimes (half the time, if the two wave types are equal in strength) find an electron moving north, rather slowly. However, the instant you find the electron by using such a poke, all of that wave interpretation "instantly" disappears. (I say "instantly" in quotes because that is a very loaded term in that context; but that's for some other answer!)

However, since there are two types of electron waves added together, that same poke is just as likely to find the electron moving east at a much faster clip, which is what the more tightly spaced eastbound wave means. Once again, if a poke finds the electron moving east, all of the wave interpretations cease to have meaning and you simply have an electron that looks a lot more like a particle in terms of where it is located.

Once found, the electron becomes a candidate for creating new waves and starting the process all over again. That is what happens with conduction electrons in metals, for example. Or, alternatively, it could get captured by a heavier object such as an atom, at which point it would cease to behave like a roaming wave. However, even then the electron does not stop behaving like a wave. In fact, the entire discipline of chemistry amounts to a detailed mapping out of what happens when the many different waves possible for a charged electron become bound into a tight, cramped, and mostly spherical space, one in which the electron must argue and negotiate and continually bump into other electrons in an attempt to find its own little bit of turf. From these waves, and from the refusal of electrons to pack together tightly (called fermion behavior), comes all of the rich behavior that makes matter and life possible.

Mathematically (as I think you already know) superposition means that if I evolve the quantum state $|c\rangle =|a\rangle + |b\rangle$, the result will be the same as separately evolving $|a\rangle$ and $|b\rangle$ and adding the results. That is due to the linearity of the Schrödinger equation. This means that there exists a simple connection between the states out of which $|c\rangle$ is made and $|c\rangle$ itself. Without this, quantum mechanics would be infinitely more difficult.

Whether this is purely mathematical is a little bit a matter of semantics. I guess most people would see the same property of the electric field as highly intuitive, but to others it would appear highly mathematical.

A wavefunction is a fundamentally different concept to anything that exists in "classical" physics. This question deals with what a wavefunction 'looks like'. You're asking what a superposition of wavefunctions is.
You could look at it mathematically as $$|c\rangle = |a\rangle + |b\rangle$$ where $a$, $b$, and $c$ are wavefunctions (if you're unfamiliar with the notation used, take a look at this wikipedia page). But physically this doesn't correspond to the superposition of forces, or of electromagnetic fields. The mathematical forms of both are similar, but the physical interpretation is quite different.

For example, consider the hydrogen atom - I'm going to assume you know something about orbitals and energy levels; if not, I can explain further - and consider an electron in the lowest energy level, calling that wavefunction $\psi_{0}(\vec r)$ (or $|\psi_{0}\rangle$). If the same electron is in the first excited energy level, let's call it $\psi_{1}(\vec r)$ (or $|\psi_{1}\rangle$). Now it is possible to have the electron in a superposed state, where the wavefunction is given by $$|\psi\rangle = |\psi_{0}\rangle + |\psi_{1}\rangle $$ What this seems to mean is that the electron is both in the lowest energy level and the first excited state at the same time. This may seem wrong, because of our intuition, but that's exactly what it means.

Usually, you wouldn't see this superposition if you observed a quantum system. This is because a superposed state will 'collapse' to one of the states that make it up, with some finite probability. So you end up seeing the particle sitting definitely in one or the other energy level. But this year's Nobel has been given to people who've managed to brilliantly circumvent this problem. You could read a little bit about it here, if you haven't already.

So to conclude - a superposition means that you have (mathematically) a sum of two wavefunctions. Physically this corresponds to nothing that you can relate to classically, which is what makes quantum mechanics weird (but awesome).
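None of the answers above include a numerical illustration, so here is a minimal one with its assumptions stated: an infinite square well of width 1 in units where ħ = m = 1, and the equal-weight superposition of its two lowest stationary states. A lone eigenstate has a time-independent probability density, while the superposition "sloshes", which shows up as an oscillating mean position.

    import numpy as np

    # Superposition psi = (psi_1 + psi_2)/sqrt(2) of the two lowest
    # infinite-square-well states. Units: hbar = m = 1, well width L = 1.
    L = 1.0
    x = np.linspace(0.0, L, 501)

    def eigenstate(n):
        # Normalized stationary state, n = 1, 2, ...
        return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

    def energy(n):
        # E_n = (n pi / L)^2 / 2 in these units
        return 0.5 * (n * np.pi / L) ** 2

    psi1, psi2 = eigenstate(1), eigenstate(2)
    for t in (0.0, 0.2, 0.4):
        # Each stationary state only picks up a phase exp(-i E_n t), but the
        # cross term in |psi|^2 makes the density move back and forth.
        psi = (psi1 * np.exp(-1j * energy(1) * t)
               + psi2 * np.exp(-1j * energy(2) * t)) / np.sqrt(2.0)
        density = np.abs(psi) ** 2
        mean_x = np.trapz(x * density, x)
        print(f"t = {t:3.1f}   <x> = {mean_x:.3f}")

Evolving each eigenstate separately and then summing gives exactly the same densities, which is the linearity the first answer emphasizes.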
Acceleration Signals From Superconductors

Status of Experiments: 03 December, 2013

Speculative Source of the Wave Function Ψ

By David Schroeder

Photo archive: time-delay circuit added to the motherboard; newly built scope trigger circuit; earlier cryostat configurations; Dewar and test setup; LN2 boiling around the superconductor.

Since 1992, acceleration effects in the vicinity of superconductors or superfluids, tens of magnitudes larger than General Relativity allows, have been reported. By far the most convincing of these reports has come from the Austrian Research Center (ARC) (Tajmar et al., 2003-2007). It's speculated that these signals constitute a tiny residual of a gravity-emulating force, 40 magnitudes stronger than its classical counterpart. A supersymmetric quantum, ranging to 10⁻¹⁹ m (TeV scale), is the proposed source of this field. Such a quantum is shown to arise naturally at the intersection of a higher-dimensional bulk space and our 3+1 braneworld. At this energy/distance scale these quanta, in virtual form, are postulated to imprint an Alcubierre topology on spacetime. The resulting geodesic hypersurfaces would neutralize angular acceleration forces on electrons and nuclei that can reach 10²² g's, or more, in the hydrogen atom. This would explain the absence of synchrotron radiation, and the consequent stability of atomic structures, at a more fundamental level than the quantum-mechanical requirement for resonant orbits. Vacuum polarization from this field is speculated to momentarily evolve massless spin-2 gravitons, in response to acceleration, until equilibrium is restored. Macroscopic coherence in superconductors would raise these exceedingly brief graviton 'bursts' to detectable levels. The question is addressed whether this field might one day be technologically harnessed: Speculative Reactionless Drive From Angstrom Thick Boundary Layers

About the Experiments

While the concept of "gravity shielding" has long been discounted on theoretical grounds (incompatible with General Relativity), creation of transitory acceleration pulses via quantum processes may explain any genuine signal that was observed. Based on a theoretical model (Matter Waves and Micro-Warps), only acceleration of the bulk superconductor, supercurrent, or superfluid will produce an acceleration signal, and then only for the duration of the acceleration. Thus the experiments have, to some degree, replicated Podkletnov's "Impulse Gravity Generator", which would indeed have sharply accelerated the supercurrent.

For the record, in a series of experimental runs in late March 2010, apparent acceleration pulses were detected with a PM-3214 oscilloscope. The scope was triggered by the accelerometer's output every time a 40 mfd capacitor (charged to 300 volts) was discharged through the superconductor. Control runs seemed to rule out electromagnetic pulses (EMPs) as the culprit, but more runs are needed under identical conditions.

The new set-up, pictured above, has most of the experimental components secured to an 11 by 11 by 3/4 inch oak platform. The control box, with cable leading to the high-voltage circuit, has been replaced by a 433 MHz RF link that allows charging and discharging of the capacitor bank remotely. The small remote, on a keychain, is visible on the right side of the photo. This eliminates the danger of having nearly 1000 volts accidentally reaching the hand-held control box.
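As a hedged back-of-envelope check (not in the original write-up), the energy available per discharge follows directly from E = ½CV²:

    # Energy stored in the 40 mfd (40 microfarad) discharge capacitor at
    # the voltages mentioned in the text.
    C_FARAD = 40e-6
    for volts in (300.0, 350.0, 1000.0):
        joules = 0.5 * C_FARAD * volts**2
        print(f"{volts:6.0f} V -> {joules:5.1f} J per discharge")

At the 300-350 volt operating point each shot dumps roughly 2 joules into the superconductor or coil; at the full 1000 volts the bank would hold about 20 joules, which underlines why remote switching is prudent.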
The aluminum project box, in the foreground, houses the accelerometer and associated circuitry. To its left is the cryostat, with slidable fiberglass tabs supporting the anode and superconductor. A kit LCD voltmeter has been mounted on a vertical metal frame for monitoring the charging voltage. Only two of the four relays are used; one for starting and stopping capacitor charging, the other to trigger discharge through the superconductor load.

An ADXL203, plus/minus 1.7 g accelerometer chip (resolution 1 milli-g), aligned with the supercurrent axis, monitors for signal. It is enclosed in a 2" by 4" by 6" aluminum project box for RF (radio frequency) isolation. The accelerometer's output is first referenced to analog common in a 5 volt, bipolar supply, established by 7805 and 7905 regulators that, in turn, are fed by a pair of 9 volt batteries mounted outside the case. The second op-amp on the 747 chip provides a 10-to-1 signal gain. Shielded coax cables, both within the box to through-panel BNC connectors and from the box to the oscilloscope, add further RF isolation. Signals can be tapped either directly from the ADXL203's output, from the referencing stage, or from the final amplifier output. A rough signal budget for this chain is sketched below.

Using a solid-state accelerometer overcomes a pitfall in previous attempts to measure acceleration phenomena from superconductors with a digital scale and target mass. Any brief acceleration pulse would have been averaged over the sampling interval of the digital scale, and further diluted by the large inertial mass of the target. Moreover, negative results would be expected for a static superconductor, in which the Bose condensate is not being accelerated, if the theory presented here is correct.

Up till Wednesday, 26 May 2010, electric discharge directly through the superconductor had been the sole method tried. The YBCO superconductor, which yielded signals by this method, was accidentally ruined when silver epoxy was applied to both sides of it, in an effort to obtain electrical contact over its entire surface. Therefore a coil was wound on a fiberglass cylinder slightly larger than the superconductor. Discharging the capacitor through this coil induces a circulating supercurrent in the tangential plane of the superconductor.

The induction method was tried in late May, with interesting results. In the superconductive state, the PM-3214 scope was triggered by the accelerometer's output in multiple runs when the 40 mfd capacitor was discharged through the solenoid. Scope triggering was not observed after the YBCO chip transitioned into its non-superconducting state. Various combinations were tried to duplicate the triggering, but only a 300-350 volt discharge from the 40 mfd capacitor, with the YBCO chip in the superconductive state, produced results, in 3 runs. While tantalizing, since the triggering was close to the noise threshold of the system, this is not irrefutable proof of anomalous phenomena. Tests with a non-superconducting aluminum blank were carried out in early June 2010, and did not duplicate the triggering effect seen with the YBCO chip in the superconductive state.
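A rough signal budget, under stated assumptions: the ADXL203 sensitivity of about 1000 mV/g is the typical datasheet figure at a 5 volt supply (an assumption here, since the exact configuration isn't specified); the 10-to-1 op-amp gain and the 1 milli-g resolution are taken from the description above.

    # Expected scope-level voltages for the accelerometer chain described
    # above. Sensitivity is an assumed datasheet-typical value.
    SENSITIVITY_V_PER_G = 1.0   # ADXL203, assumed ~1000 mV/g at 5 V supply
    GAIN = 10.0                 # op-amp stage, from the description
    for label, accel_g in (("ARC-scale signal (100 micro-g)", 100e-6),
                           ("stated resolution floor (1 milli-g)", 1e-3)):
        v_out = accel_g * SENSITIVITY_V_PER_G * GAIN
        print(f"{label:36s}: {1e3 * v_out:6.2f} mV at the scope")

On these assumptions an ARC-magnitude signal would produce only about 1 mV after amplification, well below the chip's stated 1 milli-g resolution floor, which is consistent with the observation above that the triggering sat close to the noise threshold.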
But scarcely a decade after Podkletnov first gained notoriety, a highly respected European laboratory - the Austrian Research Center (ARC) - reported detection of acceleration signals from a spun-up, ring-shaped niobium superconductor, which yielded about 100 micro-g's for tangential accelerations of less than 10 g's. They believed they were observing an enhanced gravitoelectric field induced by a changing gravitomagnetic field around their toroidal superconductor, in analogy with an electric field generated by a changing magnetic field. Its great strength was attributed to a mass increase of the graviton inside the superconductor, in analogy with the massive photon believed responsible for superconductivity. However, direct measurement of the gravitomagnetic field yielded only 1% of the expected value, casting doubt on this interpretation.

An alternative explanation is suggested, involving the macro-scale wavefunction of the superconductor in conjunction with the enormous dynamic forces within atoms. One easily calculates that the bound electron in a hydrogen atom's lowest orbital experiences an average centripetal acceleration of 10^22 g's. Yet, as long as an integral number of de Broglie (matter) waves wrap around the orbit, there is no emission of synchrotron radiation. Thus matter waves are intimately linked to stable orbits, and are speculated to be visible evidence for a micro-range, di-pole form of gravity that matches the Coulomb force in strength. The quantum of this field is proposed to be a TeV-mass 'photon' with length and time variables substituting for the usual electric and magnetic variables. A duality between the electroweak and gravity-Higgs forces, in the TeV energy regime, is the proposed source of this field.

It will be recalled that a gravity field is measured by length contraction and time retardation at each point in space. By reacting to, and opposing, applied accelerations, this field would explain atomic stability by counterbalancing centripetal forces on electrons and nuclei that pirouette around their common center of mass. Such a field would embrace fundamental particles in a freefall 'cocoon', as long as the field's intensity is synchronous with cyclical acceleration forces, as in an elliptical orbit. At 10^-19 meters, this field would not conflict with the scale of atomic structures, such as the lowest orbital in a hydrogen atom, with a radius of 0.528 × 10^-10 meters.

As the variables of this field are free to range in both directions (contract/expand for length, retard/advance for time), it follows that one half of each wave cycle corresponds to a negative energy state of the vacuum. This is proposed to be the source of the complex mathematical form of the wavefunction ψ. Sync shifting, 39 magnitudes stronger than seen in relativistic dynamics, would modulate a particle's position in both space and time, giving rise to the uncertainty principle. Since no force operates instantaneously, it's speculated that the 10^-5 g acceleration signals detected near a niobium superconductor at the Austrian Research Center (ARC) resulted from a lag in the response time of this field to an applied acceleration of only 7.33 g's.

Intriguingly, Dr. Evgeny Podkletnov, of the Moscow Chemical Scientific Research Center, reported acceleration pulses of ~1000 g's for durations of 10^-4 seconds, from a superconductor subjected to 2 megavolt discharges. This claim is quite fantastic, and difficult to believe. On the other hand, such high voltage discharges would have induced momentary accelerations on the Cooper pairs orders of magnitude larger than in the ARC experiments, which found a linear relationship between applied acceleration and signal.

Comparing Podkletnov's experiment with the ARC group's experiment is, admittedly, like comparing apples with oranges. The one feature they do have in common is that they both accelerate Cooper pairs. The ARC experimenters apply 7.33 g's to both the Cooper pairs and the lattice sites, yielding a 100 micro-g signal. Podkletnov's experiment discharges 2 million volts between a YBCO superconductor and a copper plate, in a partially evacuated chamber, yielding 1000 g's. Electrons in a vacuum subjected to such a voltage will reach about 98% c. Assuming this occurs in the 100 micro-second interval indicated in Dr. Giovanni Modanese's paper, the average acceleration on the electrons is something like 10^9 g's.
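Two of the conventional figures above are easy to sanity-check numerically: the ~10^22 g centripetal acceleration of hydrogen's ground-state electron, and the ~98% c speed of an electron falling through 2 MV. The short Python sketch below does this with standard constants only; it is a back-of-envelope check and implies nothing about the speculative interpretation built on those numbers.

import math

c = 2.998e8          # speed of light, m/s
m_e = 9.109e-31      # electron mass, kg
q = 1.602e-19        # elementary charge, C
g0 = 9.81            # standard gravity, m/s^2

# 1) Bohr-model hydrogen: orbital speed v = alpha*c, radius r = Bohr radius.
alpha = 1.0 / 137.036
r = 0.529e-10                          # Bohr radius, m
a = (alpha * c) ** 2 / r               # centripetal acceleration v^2/r
print("hydrogen: a = %.1e m/s^2 = %.1e g" % (a, a / g0))   # ~1e22 g

# 2) Electron accelerated through 2 MV, treated relativistically:
#    gamma = 1 + qV/(m_e c^2), beta = sqrt(1 - 1/gamma^2).
V = 2.0e6                              # volts
gamma = 1.0 + q * V / (m_e * c ** 2)
beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
print("2 MV electron: v = %.1f%% of c" % (100 * beta))     # ~98% of c

Both prints agree with the figures quoted in the text.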
Curiously, if one substitutes the proton's mass in place of the electron's, the resulting acceleration is only 7.3% below a perfect linear extrapolation of the ARC team's applied acceleration versus signal yield. That is pretty remarkable, since this near-perfect linearity is maintained across eight orders of magnitude. Also, since the proton has the opposite charge of the electron, the acceleration pulse (on the protons) would be opposite in direction to the detected signal, as was the case in the ARC experiments. The puzzle here is that the superconductor in Podkletnov's experiment is stationary, meaning the protons at the lattice sites can only be displaced a small amount in the rigid lattice structure before they must rebound to their original positions.

Presumably, some of the Cooper pairs exit the superconductor and traverse the near vacuum to the copper plate. Their acceleration through the near vacuum would be much larger than within the superconductor. The question is: would they maintain their coherence outside the superconductor, in the absence of the flexing lattice sites needed to establish the bond in the BCS theory? In this regard, the flat, glowing discharge originating at the superconductor and transiting to the copper anode is very suggestive of some kind of coherent electron behavior, unlike a normal spark discharge. Further discussion of these issues, and calculation details, will be posted at the bottom of page 9, in the near future: Impulse Generator Coupling Factor

As this field is imputed to range only to 10^-17 centimeters, a mechanism is needed to account for its detection over macroscopic distances. The proposed explanation is that the intense gravitational polarization of the vacuum, within this spin-1 field's range, induces localized (bidirectional) flows of massless spin-2 gravitons between our brane and the extra-dimensional bulk, which manifest as lattice vibrations (phonons). Such lattice vibrations play a central role in the BCS theory of superconductivity. In the ARC experiment these bulk spin-2 gravitons would have been detected as an acceleration field in the tangential plane of the spinning ring. The ARC researchers assumed they were detecting an enhanced gravitoelectric (acceleration) field, but the absence of an equal-magnitude gravitomagnetic field, needed to induce the gravitoelectric field, casts doubt on that assumption. The ARC team did report a gravitomagnetic field 1% of their originally expected value, which conformed with a revised theory.
However, an independent search for gravitomagnetic fields in superconductors, by a team in New Zealand, obtained negative results at an even higher level of sensitivity.

By enveloping fundamental particles, this field would severely distort the rate, and direction, at which time flows, as perceived by an external observer. Thus electric and magnetic fields, from every fundamental particle, would be modulated forwards and backwards in time, but average to the local present. As Richard Feynman observed, a reversal of time equates to a reversal of a particle's charge. The weak binding of Cooper pairs - 10^-4 to 10^-3 eV - is therefore proposed to arise from a small phase difference in their relative times. These temporal oscillations are proposed to be the origin of the wave nature of matter. The time-reversed portion of these waves would explain the complex structure of the wavefunction.

On the celestial scale the large masses of stars, planets, and moons impart curvature to the local metric, so that these bodies move along geodesic, or acceleration-free, trajectories. Remarkably, aside from tidal friction within these bodies, and a minuscule gravitoelectromagnetic radiation, no energy is lost; i.e., energy is conserved in geodesic motion. The postulated micro-range, di-pole gravity field would also induce a local geodesic state for electrons and quarks that are in stable structures, but it is not yet clear whether this field can be modelled to be energy conserving. The fact that geodesic motion, with an entirely different structure and origin, is energy conserving in the astronomical domain is encouraging with respect to a possible similar situation in the particle realm.

A TeV+, S-Dual Quantum as the Source of the Wavefunction

Matter waves underlie all of chemistry, and even biology, at the molecular scale. What matter waves do has long been elucidated through the de Broglie and Schrödinger equations, and Born's statistical interpretation, but what they actually are, or consist of, remains an unanswered question. As every freshman college physics student learns, matter waves are intimately linked to nature's fundamental unit of action - Planck's constant - through the relation λ = h/p, where λ is the wavelength associated with a particle, p is the particle's momentum, and h is Planck's constant. De Broglie showed that for stable orbits to exist the relation nλ = 2πR, where n is an integer and R the radius of the orbit, must be satisfied.

Erwin Schrödinger was once of the opinion that matter waves represented a real disturbance in space, analogous to the field variables in electromagnetic waves [3]. Since the wavefunction for a particle, ψ(x,t), where x is position in space and t is time, concerns the probable position of the particle at a given time, it utilizes the same parameters as general relativistic gravity - space and time. To be more precise, the intensity of the gravitational field at a given locale is determined by the amount of contraction of measuring rods and slowing of clocks. But, in contrast to the feebleness of Newtonian gravity, matter waves modulate the location (via probability) of fundamental particles as robustly as do electric and magnetic fields. If Schrödinger's intuition was correct, a similar-strength analogue of the electromagnetic (EM) field, with variables of length and time, suggests itself as the physical basis of matter waves.
Implicit in a length-time analogue of the electromagnetic field is a bi-polar length variable that contracts/expands and a bi-polar time variable that retards/advances. By definition, one half of such a wave cycle, in which length expands and time advances, corresponds to a negative energy state of the vacuum (a positive-mass planet contracts length scales and retards clocks; a negative-mass planet would have the opposite effect). The combined effect of these two variables is proposed to be the origin of the imaginary phase factor 'i' in Schrödinger's wave equation, iħ ∂ψ/∂t = Hψ, as well as in Heisenberg's commutation relation, pq − qp = ih/2π. It is speculated that these bi-polar length and time variables account for all quantum interference phenomena, for which the phase factor i is known to be the source.

In accordance with Maxwell's laws, a changing 'length current' should give rise to a changing 'time current' and vice versa. The amplitudes of these two variables would cyclically rise and fall, in step, as the length-time wave propagates past an observer. Clearly, an observer (particle) entrained at a crossover point of a length-time wave (where the wave transitions from a positive to negative vacuum condition) would be continually preceded, within 1/2 wavelength, by a region of contracting spacetime, and trailed, within 1/2 wavelength, by expanding spacetime. (Incidentally, this "crossover" point corresponds to the boundary between a higher-dimensional "bulk" space and our 3+1 brane. String theory prescribes that all open-ended particles exist at this boundary - see below.) Such a local distortion of spacetime is the metric signature of an Alcubierre warp [6]. It is proposed to underlie the absence of synchrotron radiation in stable atomic orbits, by creating a local free-fall geodesic for orbiting electrons.

This scenario assumes that electrons are 'modulated' by the oscillating length and time fields of virtual length-time 'photons', just as virtual (electromagnetic) photons modify other aspects of real particles, as prescribed by quantum electrodynamics (QED). These oscillating length and time fields are postulated to be the "internal periodic phenomena" all particles are subject to, as predicted by Louis de Broglie in his 1923 Comptes Rendus note [5]. But such a gravity-emulating, Maxwell gauge field cannot be massless, otherwise it would have long since been detected. If it exists at all, it must be in the unexplored supersymmetry realm between 1 and 100 TeV. The warp field of a length-time 'photon' would, accordingly, take the form of a micro-warp in the 10^-17 to 10^-19 meter range.

In this view, the lobe-like complexity of electron orbits would stem from oscillations of the length and time variables, confined within a 10^-19 meter effective warp 'bubble' that should act like a cavity resonator. Thus, throughout its complex gyrations, an orbiting electron would locally be shielded from inertial forces, as the amplitude and orientation of the micro-warp synchronizes with the dynamically changing angular acceleration vector. Large-amplitude expansions/contractions of spacetime within the micro-warp's operational radius, stemming from di-pole gravity 40 magnitudes greater than Newtonian gravity, must lead to correspondingly large synchronization (sync) shifts.
Since this micro-warp concept is based on extra dimensions of space, a logical deduction is that during the contraction cycle the volume of space within the warp 'bubble' shrinks to the size of the extra dimension(s) and expands into them. Having the higher-dimensional bulk serve as the source and sink of spacetime (gravitons) for these alternating expansions and contractions would obviate the need for negative matter to implement an Alcubierre warp.

From 2003 to 2007, a group of researchers led by Martin Tajmar, at the Austrian Research Center, detected anomalously large (up to 277 micro-g) acceleration signals from a rapidly spun-up, ring-shaped niobium superconductor. They interpreted this acceleration signal (which opposed the applied acceleration) to be a gravitoelectric field induced by a time-varying gravitomagnetic field. When they attempted to detect the gravitomagnetic field directly with sensitive gyroscopes, they found only 1% of the signal they were expecting. Furthermore, this supposed gravitomagnetic field did not follow the inverse-square rule as expected. Since only an acceleration field was detected, an alternative explanation is proposed.

Cooper pairs move as a supercurrent through the lattice, progressively bonding from one lattice site to another as they advance. If the acceleration-nulling di-pole field really exists, then all Cooper pairs, and their proton (lattice) partners, would experience zero-g acceleration within the 10^-19 meter frame of this field, for all components of momentum. Effectively, perfect superconduction would correspond to an acceleration-free dance for both the moving Cooper pairs and the flexing lattice sites, as this field exactly cancels the acceleration components apparent to external observers. When the experimenters applied an acceleration to the body of the superconductor, this perfect balance was briefly upset. Since this hypothetical length-time (LT) field is a gauge field, like long-range electromagnetism, it would respond, like that field, by trying to 'brake' the applied acceleration. The problem is that the LT field ranges only to 10^-19 meters, so its long-range detection is an issue. The proposed explanation is that the LT field associated with each electron and proton functions as a micro-pump for shuttling massless gravitons between the extra-dimensional bulk and our 3-brane.

Assuming fundamental particles are fixed to the brane 'wall' separating our 3D space and the extra dimensions, and are enveloped by virtual micro-warps, each particle would see every other particle cyclically receding and advancing in position. The resulting sync shifts would induce forward/backward translations in time - each particle seeing every other particle oscillating between the past and future, but averaging to the local present. Such temporal oscillations could underlie the weird, non-classical aspects of quantum mechanics, as illustrated in John Cramer's transactional interpretation.

The electromagnetic-gravity duality implied by a length-time Maxwell field's existence is postulated to be embraced within one of six dualities between the forces comprising the superforce. Three forces comprise the superforce above the electroweak synthesis - the strong force, the electroweak force, and gravity - which would converge in strength at the TeV scale if non-compact extra dimensions were indeed a reality. This yields six dualities by the permutation rule N!, where N = 3.
These six dualities are proposed to correspond to the five 10D string theories and 11D supergravity that make up the tableau of M-Theory. Each of these field theories is speculated to reside on its own m+n "brane" in the 5D "bulk", where m and n are integers denoting the number of space and time dimensions, respectively. It's also intriguing that the most recent measurements of dark matter, by a Cambridge University team, show that 'dark matter' composes between 80-85% of the matter of the universe. It has been suggested that dark matter is really matter sequestered on nearby branes in the higher-dimensional bulk. If our brane is but one of six, and all branes are about equal in extent (in terms of total mass-energy), then 5/6ths (83.3%) of the matter of the 'multiverse' would be hidden background matter on the other 5 branes; right smack in the middle of the Cambridge team's estimate.

Finally, this Maxwell length-time field would be massless on a "3-brane" whose 'spacetime' has electric and magnetic parameters. Such a 3+1 (3 electric/1 magnetic) brane would constitute an S-dual version of our 3+1 (3 length/1 time) brane universe. Conversely, our photon would underlie matter waves in their universe, since it would have a TeV-range mass, and exhibit their form of gravity in a dipole form, but range to less than 10^-19 meters.

Further Clarification

1). "Startling" Evidence For The Extra Dimensional Bulk: The MiniBooNE Project
2). September 1, 2007 Update
3). Planned Search For Acceleration Signals in YBCO
4). Acceleration Signals During Superconductor Transitions Through Critical Temperature
5). Speculative Reactionless Drive From Angstrom Thick Boundary Layers
6). "Ballistic Acceleration of a Supercurrent in a Superconductor", G. F. Saracila and M. N. Kuncher, Physical Review Letters, February, 2009

Copyright 1998, David Sears Schroeder
A Voyage to the Kernel, Day 9

The study of secret communication systems has lured people of all ages, and the old methods of encrypting messages are quite popular even in literature. But our interest is centred around two aspects - cryptography and cryptanalysis. Cryptography, in plain words, is concerned with the design of secret communication systems, while the latter studies the ways to compromise secret communication systems! We all know that when a bank upgrades its systems to incorporate IT, it has to make sure that the methods of electronic funds transfer are just as secure as funds transferred by an armoured vehicle. You might have seen the arithmetic and string-processing algorithms that people employ in this realm, which are what beginners are expected to study.

Cryptanalysis, for sure, can place an incredible strain on the available computational resources. That is why people consider this to be a very tedious process. To comprehend this, let's discuss a simple case of cryptography. Let a sender (S) send a message (called the plaintext) to a particular receiver (R). 'S' converts his plaintext message to a secret form for transmission (which we may call the ciphertext) with the aid of a cryptographic algorithm (CA) and some defined key (K) parameters. 'CA' is the encryption method used here. The whole procedure assumes some prior method of communication, as 'R' needs to know the parameters. The headache of the cryptanalyst is that he needs to decipher the plaintext from the ciphertext without knowing the key parameters.

As we discussed, one of the simplest (and one of the oldest, too!) methods of encryption is the Caesar cipher. Here, if a character in a particular place in the word is the Nth letter of the alphabet, it is replaced by the (N+K)th letter in the series, where K is the parameter - an integer (Caesar used K = 3!):

character(N) → character(N+K)

You can add more statements to fix bugs (say, if you're using English, you can specify what to do if (N+K) exceeds 26). Well, as said before, this method is very simple. Therefore, it's no big deal for the cryptanalyst to crack the encrypted data. Things will become more complex if we use a general table to define the substitution and then use the same for the process. But here, too, our villain can try some tricks. He may guess that the most frequent ciphertext character stands for E (as E is the most frequent letter in English text). He may also rule out certain digrams such as QJ (as they never occur together in English). You can develop the method further by using multiple look-up tables. Then you will come across many interesting cases, like the one where the key is as long as the plaintext (the 'one-time pad' case), and so on. It should be noted that if the message and key are encoded in binary, a more common scheme for position-by-position encryption is to use the "exclusive-or" function: to encrypt the plaintext, "exclusive-or" it (bit by bit) with the key.
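To make the two schemes concrete, here is a minimal Python sketch - my own illustration rather than anything from a standard library, and of course trivially breakable; it is shown only to fix the ideas:

# Minimal sketches of the Caesar shift and the bit-by-bit XOR scheme.

def caesar(text, k):
    """Shift each letter by k positions, wrapping past 'Z' with mod 26
    (the 'what to do if (N+K) exceeds 26' case mentioned above)."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            n = ord(ch) - ord('A')                    # N, zero-based
            out.append(chr(ord('A') + (n + k) % 26))  # (N+K) mod 26
        else:
            out.append(ch)                            # leave spaces etc. alone
    return ''.join(out)

def xor_bytes(data, key):
    """Position-by-position XOR of data with a repeating key. With a truly
    random key as long as the message, this is the one-time pad."""
    return bytes(d ^ key[i % len(key)] for i, d in enumerate(data))

if __name__ == '__main__':
    ct = caesar('ATTACK AT DAWN', 3)      # Caesar used K = 3
    print(ct)                             # DWWDFN DW GDZQ
    print(caesar(ct, -3))                 # shifting back decrypts
    x = xor_bytes(b'ATTACK AT DAWN', b'KEY')
    print(xor_bytes(x, b'KEY'))           # XOR-ing twice restores the plaintext

Note that decryption is just the same operation with the inverse key (−K for Caesar, the same key for XOR), which is why these are called symmetric schemes.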
Geometric algorithms

This methodology can be adopted to solve complex problems that are inherently geometric. It can be applied to solve problems concerning physical objects, ranging from large buildings (design) and automobiles to very large-scale integrated circuits (ICs). But you will soon see that even the most elementary operations (even on points) are computationally challenging. The interesting aspect is that some of these problems can readily be solved just by looking at them (and some others by applying the concepts of graph theory). If we resort to computational methods, we may have to go in for non-trivial methodologies. This branch is relatively new and many fundamental algorithms are still being developed. Hence you can consider this a potentially challenging and promising realm. In this introductory piece, we'll restrict ourselves to two-dimensional space.

If you are able to properly define any point, then we can easily manage to include complex geometrical objects, say a line (as it is a pair of points connected by a straight line segment) or a polygon (defined by an array of points). We can represent them by:

type
  point = record x, y: integer end;
  line = record p1, p2: point end;

It is quite easy to work with pictures compared to numbers, especially when it comes to developing a new design (algorithm) pattern. It is also very helpful while debugging the code. Let's see a recursive program that will enable us to 'draw' a line by drawing the endpoints:

procedure draw(l: line);
var Δx, Δy: integer;
    p: point;
    l1, l2: line;
begin
  dot(l.p1.x, l.p1.y);
  dot(l.p2.x, l.p2.y);
  Δx := l.p2.x - l.p1.x;
  Δy := l.p2.y - l.p1.y;
  if (abs(Δx) > 1) or (abs(Δy) > 1) then
    begin
      p.x := l.p1.x + Δx div 2;          { midpoint of the segment }
      p.y := l.p1.y + Δy div 2;
      l1.p1 := l.p1; l1.p2 := p; draw(l1);   { recurse on each half }
      l2.p1 := p; l2.p2 := l.p2; draw(l2)
    end
end;

You can see that there is a division of the space into two parts, joined by using line segments. You may stumble upon many algorithms where we will be converting geometric objects to points in a specific way. We can group them under the term 'scan-conversion algorithms'. To get a clear picture, you may write the pseudocode to check whether two lines intersect. (Hint: check for a common point.) If you can't straight away do it, try this function to compute these lines and check whether they meet our condition:

function same_point(l: line; p1, p2: point): integer;
var Δx, Δy, Δx1, Δx2, Δy1, Δy2: integer;
begin
  Δx := l.p2.x - l.p1.x;
  Δy := l.p2.y - l.p1.y;
  Δx1 := p1.x - l.p1.x;
  Δy1 := p1.y - l.p1.y;
  Δx2 := p2.x - l.p2.x;
  Δy2 := p2.y - l.p2.y;
  ...

If the quantity (Δx·Δy1 − Δy·Δx1) is non-zero, we can say that p1 is not on the line.
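For comparison, here is the same cross-product idea in Python, extended to a complete segment-intersection test using the standard 'counterclockwise' predicate. This is textbook computational geometry rather than code from the article, and the collinear-overlap corner case is deliberately ignored for brevity:

# The cross product below is exactly the quantity (Δx·Δy1 − Δy·Δx1)
# from the Pascal fragment: zero means the three points are collinear.

def cross(o, a, b):
    """Z-component of (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4:
    each segment's endpoints must lie on opposite sides of the other."""
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

if __name__ == '__main__':
    print(segments_intersect((0, 0), (4, 4), (0, 4), (4, 0)))  # True: they cross
    print(segments_intersect((0, 0), (1, 1), (2, 2), (3, 3)))  # False: collinear, no proper crossing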
A problem for beginners

Here we are not trying to address a real problem! We will look at how to produce graphical output with the help of libraries. You might have drawn 'pictures' in BASIC while at school, but this is not that method. In fact, our intentions are different. Let's define our problem: we need to draw a sphere with the help of a few straight lines. We can use HoloDraw (see the resource links for more information) as the library for drawing the sphere, and we will write the glue code in shell. We start by 'flattening' the sphere to a flat rectangular map. As it is a sphere, we will work with the changes in terms of 'degrees'. We also need an input file for processing by HoloDraw. (Before you proceed, download a copy of HoloDraw and untar it into a local directory. Also make sure that you have Perl installed.)

The input file, sphere.draw, will be quite akin to the following:

color=0 1 0
# draw a line around the sphere's equator
line: 0 0 1000, 360 0 1000
line: 0 45 1000, 360 45 1000
line: 0 -45 1000, 360 -45 1000
color=0 0 1
line: 0 90 1000, 0 -90 1000
line: 180 90 1000, 180 -90 1000
line: 30 90 1000, 30 -90 1000
line: 60 90 1000, 60 -90 1000
line: 90 90 1000, 90 -90 1000
line: 120 90 1000, 120 -90 1000
line: 150 90 1000, 150 -90 1000
line: 210 90 1000, 210 -90 1000
line: 240 90 1000, 240 -90 1000
line: 270 90 1000, 270 -90 1000
line: 300 90 1000, 300 -90 1000
line: 330 90 1000, 330 -90 1000

Here the X and Y values (which you can identify from the code directly) are in degrees around the sphere, and Z (or some axis reference) is the sphere's radius. As you can see, we have used different colours for east-west lines and north-south lines. Now we will create our flat grid file from this, using the following shell code:

/path_to_holodraw/drawwrl.pl < /location_of_input_file/sphere.draw > flatgrid.wrl

But when we draw the sphere, we have to slice our long lines into small ones, so that our sphere will have a 'smooth' curve. We can do that by using the 'drawchop' and 'drawball' library files:

/path_to_holodraw/drawchop.pl x=15+15 y=15+15 < /location_of_input_file/sphere.draw | /path_to_holodraw/drawball.pl | /path_to_holodraw/drawwrl.pl > ballgrid.wrl

We can create the VRML (Virtual Reality Modelling Language) using the 'drawwrl' file:

# VRML V2.0 utf8
# draw a line around the sphere's equator
Shape { appearance Appearance { material Material { emissiveColor 0 1 0 transparency 0
geometry IndexedLineSet { coord Coordinate { point [
0 0 1000, 500 0 866.025403784439, 866.025403784439 0 500,
1000 0 6.12303176911189e-14, 866.025403784439 0 -500,
500 0 -866.025403784439, 1.22460635382238e-13 0 -1000,
-500 0 -866.025403784439, -866.025403784438 0 -500,
-1000 0 -1.83690953073357e-13, -866.025403784439 0 500,
-500 0 866.025403784438, -2.44921270764475e-13 0 1000

Shape { appearance Appearance { material Material { emissiveColor 0 1 0 transparency 0
geometry IndexedLineSet { coord Coordinate { point [
0 707.106781186547 707.106781186548, 353.553390593274 707.106781186547 612.372435695795,
612.372435695795 707.106781186547 353.553390593274, 707.106781186548 707.106781186547 4.32963728535968e-14,
612.372435695795 707.106781186547 -353.553390593274, 353.553390593274 707.106781186547 -612.372435695795,
8.65927457071935e-14 707.106781186547 -707.106781186548, -353.553390593274 707.106781186547 -612.372435695795,
-612.372435695794 707.106781186547 -353.553390593274, -707.106781186548 707.106781186547 -1.2988911856079e-13,
-612.372435695795 707.106781186547 353.553390593274, -353.553390593274 707.106781186547 612.372435695794,
-1.73185491414387e-13 707.106781186547 707.106781186548

This way the code goes on. (The complete code of flatgrid.wrl is available here.) We can generalize it as:

Shape { appearance Appearance { material Material { emissiveColor x x x transparency x
geometry IndexedLineSet { coord Coordinate { point [ ... ]
coordIndex [x, y, z]

...where x, y, z are local variables with respect to each reference point. And the footer lines will be akin to:

#HISTORY# /home/aasisvinayak/Documents/Desktop/holodraw.0.37/drawchop.pl x=30+30 y=30+30
#HISTORY# /home/aasisvinayak/Documents/Desktop/holodraw.0.37/drawball.pl
NavigationInfo { type [ "EXAMINE", "FLY", "WALK", "ANY" ] speed 1.0
#HISTORY# /home/aasisvinayak/Documents/Desktop/holodraw.0.37/drawwrl.pl

This keeps track of the functions we employed.
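Typing such an input file by hand becomes tedious if you want a finer grid, so you may prefer to generate it. The following Python sketch rebuilds sphere.draw programmatically; the format (color= and line: records, angles in degrees, radius as the third field) is simply inferred from the listing above, and the meridians come out in a different order than in the hand-written file, which should not matter:

# Regenerate sphere.draw: three east-west circles in green, and
# north-south meridians every 30 degrees in blue. Change the step
# or latitude list to get a finer or coarser grid.

R = 1000  # sphere radius, as in the listing above

lines = ['color=0 1 0', "# draw a line around the sphere's equator"]
for lat in (0, 45, -45):                       # east-west circles (green)
    lines.append('line: 0 %d %d, 360 %d %d' % (lat, R, lat, R))
lines.append('color=0 0 1')
for lon in range(0, 360, 30):                  # north-south meridians (blue)
    lines.append('line: %d 90 %d, %d -90 %d' % (lon, R, lon, R))

with open('sphere.draw', 'w') as f:
    f.write('\n'.join(lines) + '\n')

The generated file can then be fed to the same drawchop/drawball/drawwrl pipeline shown above.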
[Initially, I thought of putting the generated images here, but later I felt that it was better to put the code itself, because once you have a copy of the 'drawwrl' Perl source file, you can use it to analyse our input and the corresponding output.]

We have seen that, with the help of libraries, we can generate complex code quite easily. So you can employ such functions, libraries and black boxes when you write your algorithms. If you are able to achieve this, then you can straight away try geometrical algorithms.

Some evolutionary concepts: In a nutshell

Evolutionary algorithms themselves form another major branch. We will confine ourselves to some basic ideas, problem definitions and generalisations (definitions).

General single-objective optimisation problem: This is defined as minimising (or maximising) f(x) subject to g_i(x) ≤ 0, i ∈ {1, . . . , m}, and h_j(x) = 0, j ∈ {1, . . . , p}, with x ∈ Ω. A solution minimises (or maximises) the scalar f(x), where x is an n-dimensional decision variable vector x = (x_1, . . . , x_n) from some universe Ω.

Single-objective global minimum optimisation: Given a function f : Ω ⊆ R^n → R, Ω ≠ ∅, for x ∈ Ω the value f* = f(x*) > −∞ is called a global minimum if and only if ∀x ∈ Ω : f(x*) ≤ f(x). Here x* is by definition the global minimum solution, f is the objective function, and the set Ω is the feasible region of x.

Useful facts:
• The task of finding the global minimum solution(s) is called the global optimisation problem for a single-objective problem.
• Evolutionary multi-objective optimisation (EMO) refers to the use of evolutionary algorithms of any sort (like genetic algorithms, evolution strategies, evolutionary programming or genetic programming) to solve multi-objective optimisation problems.
• Other meta-heuristics that are being used to solve multi-objective optimisation problems include particle swarm optimisation, artificial immune systems and cultural algorithms.
• Differential evolution, ant colony, tabu search, scatter search, and memetic algorithms are other key ideas in the realm.

Key ideas:
• You must see that non-dominated points are preserved in the objective space, along with the associated solution points in the decision space.
• The design should be such that it continues to allow algorithmic progress towards the Pareto front in the objective function space.
• Maintain the diversity of points on the Pareto/phenotype front (space) or of Pareto-optimal solutions in the decision/genotype space.
• Provide the decision maker (DM) with a sufficient but limited number of Pareto points for selection (which results in decision variable values).

Please let me know if you wish to discuss these ideas more in depth.
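As a toy illustration of the single-objective setting defined above, here is a bare-bones (1+1) evolution strategy in Python: one parent, one Gaussian-mutated child per generation, keep whichever is better. Real evolutionary and EMO work uses far more sophisticated machinery, as the bullet points suggest, but the skeleton really is this simple:

import random

def f(x):
    """Sphere function -- a standard test objective, global minimum 0 at x = 0."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(n=5, sigma=0.5, iters=2000, seed=1):
    random.seed(seed)
    x = [random.uniform(-5, 5) for _ in range(n)]    # random start in Omega
    fx = f(x)
    for _ in range(iters):
        # mutate: add Gaussian noise to every coordinate of the parent
        y = [xi + random.gauss(0, sigma) for xi in x]
        fy = f(y)
        if fy <= fx:                                  # keep the child only if no worse
            x, fx = y, fy
    return x, fx

if __name__ == '__main__':
    best, fbest = one_plus_one_es()
    print('best f(x) = %.3e' % fbest)   # should land close to the global minimum 0

Constraints g_i and h_j, when present, are typically handled with penalty terms added to f, or by rejecting infeasible children.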
Some 'tree' facts

In the last column, we discussed the use of trees. I will now list some of their properties that you can employ while designing your strategy:
• There is exactly one path that connects any two nodes in a tree
• If a tree has N nodes, there will be N−1 edges
• For any binary tree with N internal nodes, there are N+1 external nodes
• The height of a given full binary tree with N internal nodes is about log N / log 2 (that is, log₂ N)

Finding (opting for) a strategy and the efficiency factor

While designing strategies, it is important to consider their viability, effectiveness and efficiency. To comprehend the idea completely, consider a basic problem in quantum mechanics. The Schrödinger equation for the time-dependent wave function can be written as:

iħ ∂ψ/∂t = Hψ

We can also write an expression for the thermal expectation value of an observable X as:

⟨X⟩ = Tr[X e^(−H/(k_B T))] / Tr[e^(−H/(k_B T))]

You can see that the above equation is modelled by a Hamiltonian H. Classically, it is quite easy to come up with a computational method to solve such equations (say, by using Monte Carlo methods). But here the problem is that the objects (say, operators or matrices) in QM do not necessarily commute. Still, we can go for models defined by a lattice Hamiltonian of the form:

H = −t Σ_⟨i,j⟩,σ (c†_iσ c_jσ + h.c.) + U Σ_i n_i↑ n_i↓

A lattice of L sites, filled with L/2 electrons with up spin and L/2 electrons with down spin, is a physical model that easily fits into this. (Please Google the term 'Hubbard model' for more information about a better model.) But to find out what is really required to carry out these few steps, we need an order of magnitude for M, the number of basis states. And by using approximation methods (like Stirling's approximation) we can see that:

M = [L! / ((L/2)!)²]² ≈ 2^(2L) · 2/(πL)

This means that the quantity M increases exponentially with 2L (approximately). And if we allocate 8 bytes per floating-point number, the amount of memory we need to store a single eigenvector will turn out to be:

8M bytes ≈ 2^(2L+4)/(πL) bytes

So if I put L = 64, the memory required will be about 10^28 GB! This means that I need 10^28 GB to study a quantum system of just 64 particles on 64 sites. If I submit a proposal with such high values, I am sure that no funding agency will accept it. The only way I can do the computational task is to go for an algorithmic strategy that will reduce the amount of memory needed, at the expense of more CPU time. This is further considered in relation to 'clouds' and their effectiveness.

Having completed a good portion of our new segment, we can discuss the ideas you suggested. But I think it is too late to discuss notations (and advanced ideas in numerical computation) today. So wait for the forthcoming issues, in which we will address them.

Creative Commons License.
Eigenfunction

From Wikipedia, the free encyclopedia

This solution of the vibrating drum problem is, at any point in time, an eigenfunction of the Laplace operator on a disk.

In mathematics, an eigenfunction of a linear operator, A, defined on some function space, is any non-zero function f in that space that the operator returns unchanged except for a multiplicative scaling factor. More precisely, one has

\mathcal A f = \lambda f

for some scalar, λ, the corresponding eigenvalue. The solution of the differential eigenvalue problem also depends on any boundary conditions required of f. In each case there are only certain eigenvalues \lambda=\lambda_n (n=1,2,3,...) that admit a corresponding solution for f=f_n (with each f_n belonging to the eigenvalue \lambda_n) when combined with the boundary conditions. Eigenfunctions are used to analyze A. For example, f_k(x) = e^{kx} is an eigenfunction for the differential operator

\mathcal A = \frac{d^2}{dx^2} - \frac{d}{dx}

for any value of k, with corresponding eigenvalue \lambda = k^2 - k. If boundary conditions are applied to this system (e.g., f=0 at two physical locations in space), then only certain values of k=k_n satisfy the boundary conditions, generating corresponding discrete eigenvalues \lambda_n=k_n^2-k_n.

Specifically, in the study of signals and systems, the eigenfunction of a system is the signal f(t) which, when input into the system, produces a response y(t) = \lambda f(t), with the complex constant \lambda.[1]

Derivative operator[edit]

A widely used class of linear operators acting on function spaces are the differential operators on function spaces. As an example, on the space \mathbf{C^\infty} of infinitely differentiable real functions of a real argument t, the process of differentiation is a linear operator, since

\displaystyle\frac{d}{dt}(af+bg) = a \frac{df}{dt} + b \frac{dg}{dt},

for any functions f and g in \mathbf{C^\infty}, and any real numbers a and b. The eigenvalue equation for a linear differential operator D in \mathbf{C^\infty} is then a differential equation

D f = \lambda f

The functions that satisfy this equation are commonly called eigenfunctions. For the derivative operator d/dt, an eigenfunction is a function that, when differentiated, yields a constant times the original function. That is,

\displaystyle\frac{d}{dt} f(t) = \lambda f(t)

for all t. This equation can be solved for any value of \lambda. The solution is an exponential function

f(t) = Ae^{\lambda t}.\

The derivative operator is defined also for complex-valued functions of a complex argument. In the complex version of the space \mathbf{C^\infty}, the eigenvalue equation has a solution for any complex constant \lambda. The spectrum of the operator d/dt is therefore the whole complex plane. This is an example of a continuous spectrum.
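The worked example above is easy to verify symbolically; for instance, a quick illustrative check in Python with the SymPy library (an aside, not part of the standard treatment) confirms that f_k(x) = e^{kx} is returned by A = d²/dx² − d/dx scaled by k² − k:

import sympy as sp

x, k = sp.symbols('x k')
f = sp.exp(k * x)
Af = sp.diff(f, x, 2) - sp.diff(f, x)   # apply A = d^2/dx^2 - d/dx
print(sp.simplify(Af / f))               # prints k**2 - k, the eigenvalue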
Vibrating strings[edit]

The shape of a standing wave in a string fixed at its boundaries is an example of an eigenfunction of a differential operator. The admissible eigenvalues are governed by the length of the string and determine the frequency of oscillation.

Let h(x,t) denote the sideways displacement of a stressed elastic chord, such as the vibrating strings of a string instrument, as a function of the position x along the string and of time t. From the laws of mechanics, applied to infinitesimal portions of the string, one can deduce that the function h satisfies the partial differential equation

\frac{\partial^2 h}{\partial t^2} = c^2\frac{\partial^2 h}{\partial x^2},

which is called the (one-dimensional) wave equation. Here c is a constant that depends on the tension and mass of the string.

This problem is amenable to the method of separation of variables. If we assume that h(x,t) can be written as a product of the form X(x)T(t), we can form a pair of ordinary differential equations:

\frac{d^2}{dx^2}X=-\frac{\omega^2}{c^2}X \quad\quad\quad and \quad\quad\quad \frac{d^2}{dt^2}T=-\omega^2 T.

Each of these is an eigenvalue equation, for eigenvalues -\omega^2/c^2 and -\omega^2, respectively. For any values of \omega and c, the equations are satisfied by the functions

X(x) = \sin \left(\frac{\omega x}{c} + \phi \right) \quad\quad\quad and \quad\quad\quad T(t) = \sin(\omega t + \psi),

where \phi and \psi are arbitrary real constants. If we impose boundary conditions (that the ends of the string are fixed with X(x) = 0 at x = 0 and x = L, for example), we can constrain the eigenvalues. For those boundary conditions, we find

\sin(\phi) = 0, and so the phase angle \phi=0, and

\sin\left(\frac{\omega L}{c}\right) = 0.

Thus, the constant \omega is constrained to take one of the values \omega_n = n c\pi/L, where n is any integer. Thus, the clamped string supports a family of standing waves of the form

h(x,t) = \sin(n\pi x/L)\sin(\omega_n t).

From the point of view of our musical instrument, the frequency \omega_n is the frequency of the nth harmonic, which is called the (n-1)th overtone.

Quantum mechanics[edit]

Eigenfunctions play an important role in many branches of physics. An important example is quantum mechanics, where the Schrödinger equation

\mathcal H \psi = E \psi, \quad with \quad \mathcal H = -\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t),

has solutions of the form

\psi(t) = \sum_k e^{-i E_k t/\hbar} \phi_k,

where \phi_k are eigenfunctions of the operator \mathcal H with eigenvalues E_k. The fact that only certain eigenvalues E_k with associated eigenfunctions \phi_k satisfy Schrödinger's equation leads to a natural basis for quantum mechanics and the periodic table of the elements, with each E_k an allowable energy state of the system. The success of this equation in explaining the spectral characteristics of hydrogen is considered one of the greatest triumphs of 20th-century physics.

Due to the nature of Hermitian operators such as the Hamiltonian operator \mathcal H, their eigenfunctions are orthogonal functions. This is not necessarily the case for eigenfunctions of other operators (such as the example A mentioned above). Orthogonal functions f_i, i=1, 2, \dots, have the property that

0 = \int f_i^{*} f_j

whenever i\neq j, where f_i^{*} is the complex conjugate of f_i, in which case the set \{f_i \,|\, i \in I\} is said to be orthogonal. Also, it is linearly independent.

1. ^ Bernd Girod, Rudolf Rabenstein, Alexander Stenger, Signals and Systems, 2nd ed., Wiley, 2001, ISBN 0-471-98800-6, p. 49.

See also[edit]
Jan 03

Physics for Kids: 49 Easy Experiments With Heat (PHYSICS FOR

Format: Hardcover
Language: English
Format: PDF / Kindle / ePub
Size: 9.32 MB
Downloadable formats: PDF
Pages: 150
Publisher: Tab Books; 1st edition (May 1990)
ISBN: 0830692924

Light is affected by gravitational forces.

Students indicate their responses by means of colored indexed cards, or by means of the "clickers" of the Beyond Question™ classroom response system. 1st semester on-line quizzes, Blackboard export file download.

In the discussion of quantum coding, we will use some rudimentary group theory. Information is something that can be encoded in the state of a physical system, and a computation is a task that can be performed with a physically realizable device. Therefore, since the physical world is fundamentally quantum mechanical, the foundations of information theory and computer science should be sought in quantum physics, cited: http://femtalent.cat/library/time-dependent-fracture-proceedings-of-the-eleventh-canadian-fracture-conference-ottawa-canada.

Then, the application of the kinematic equations and the problem-solving strategy to free-fall motion was discussed and illustrated. In this part of Lesson 6, several sample problems will be presented. These problems allow any student of physics to test their understanding of the use of the four kinematic equations to solve problems involving the one-dimensional motion of objects: http://nickel-titanium.com/lib/smart-composites-mechanics-and-design-composite-materials.

A structure is a set of elements on which certain operations and relations are defined; a mathematical structure is just a structure in which the elements are mathematical objects (numbers, sets, vectors) and the operations mathematical ones; and a model is a mathematical structure used to represent some physically significant structure in the world, e.g. http://schoolbustobaja.com/?freebooks/iutam-symposium-on-nonlinearity-and-stochastic-structural-dynamics-proceedings-of-the-iutam.

We provide streaming Products and Services and non-streaming digital downloads over the internet to certain devices (streaming and non-streaming digital downloads are hereinafter collectively referred to as "Streaming Service"): http://nickel-titanium.com/lib/computational-turbulent-incompressible-flow-applied-mathematics-body-and-soul-4.

Symmetries, group theory, gauge invariance, Lagrangian of the Standard Model, flavor group, flavor-changing neutral currents, CKM quark mixing matrix, GIM mechanism, rare processes, neutrino masses, seesaw mechanism, QCD confinement and chiral symmetry breaking, instantons, strong CP problem, QCD axion, cited: http://iedaplus.com/books/modern-prestressed-concrete-design-principles-and-construction-methods-2-nd-ed.

Define a coordinate system. Apply momentum conservation; there will be one equation for each dimension. If the collision is elastic, apply conservation of kinetic energy as well. Solve.

1.5 Two-Dimensional Collisions

The Center of Mass is the average position of the mass of an object. We may consider an object as a hollow, massless shell with all its mass located at this Center of Mass: http://schoolbustobaja.com/?freebooks/nonlinearities-in-action-oscillations-chaos-order-fractals.

Notes 1: The Mathematical Formalism of Quantum Mechanics, in ps or pdf format (complete). Notes 2: The Postulates of Quantum Mechanics, in ps or pdf format (complete). Notes 5: Time Evolution in Quantum Mechanics, in ps or pdf format (complete).
Notes 6: Topics in One-Dimensional Wave Mechanics, in ps or pdf format (complete).

Instead of getting a single answer, you use the equation to work out the probabilities of certain outcomes. The results don't say, "This is what the world is doing." Instead, they just describe the probability of its doing any one thing.

This is just a maths trick you can do to exponential powers, and I personally think it makes the differentiation easier: http://nickel-titanium.com/lib/an-introduction-to-mathematics-for-engineers-mechanics.

The weight of an object is the consequence of the Earth's gravity operating on its mass. Thus, the mass of a given object is the same everywhere, but its weight varies slightly if it is moved about the surface of the Earth, and it would change a great deal if it were moved to the surface of another planet.

Their total kinetic energy is: KE = (1/2)mv² = 0.5 × (1000 + 5000) × 3.3² = 3.3 × 10^4 J. (d) This is an inelastic collision. A large amount of kinetic energy has been lost, so the collision is inelastic.

Examples / Questions

An 8 N force acts on a 5 kg object for 3 sec: http://nickel-titanium.com/lib/elements-of-vibration-analysis.

Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua, cited: http://fredyutama.com/ebooks/amorphous-solids-and-the-liquid-state-physics-of-solids-and-liquids.

Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.

It is kicked and leaves the kicker's foot with a speed of 5. and no atmospheres, e.g. http://nickel-titanium.com/lib/thin-plates-and-shells-theory-analysis-and-applications.

The elastic potential energy stored in a spring displaced a distance x from its equilibrium position is U = (1/2)kx². The object's potential energy therefore is U = (1/2)kx² = (1/2)mω²x² = (1/2)mω²A² cos²(ωt + φ). The total mechanical energy of the object is E = K + U = (1/2)mω²A²(sin²(ωt + φ) + cos²(ωt + φ)) = (1/2)mω²A². The energy E in the system is proportional to the square of the amplitude: E = (1/2)kA². It is a continuously changing mixture of kinetic energy and potential energy.

6.4 Energy in Simple Harmonic Motion

An ideal simple pendulum consists of a point mass m suspended from a support by a massless string of length L: http://nickel-titanium.com/lib/fundamentals-of-noise-and-vibration-analysis-for-engineers.

Free fall from a specified height - Find equations and a brief explanation, so you can solve free-fall problems accurately. Free Fall Calculator - Use your imagination and pretend you're throwing objects from a cliff. Then use this online tool to determine the speed of the object as it hits the ground: http://nickel-titanium.com/lib/heat-convection-in-micro-ducts-microsystems.

This page uses frames, but your browser doesn't support them or frames are disabled in browser properties, source: http://nickel-titanium.com/lib/boundary-elements-x-mathematical-and-computational-aspects.

Although the two Schrödinger equations form an important part of quantum mechanics, it is possible to present the subject in a more general way. Dirac gave an elegant exposition of an axiomatic approach based on observables and states in a classic textbook entitled The Principles of Quantum Mechanics. (The book, published in 1930, is still in print.)
An observable is anything that can be measured - energy, position, a component of angular momentum, and so forth, source: http://nickel-titanium.com/lib/osteoarthritis-of-the-human-knee-composition-and-mechanical-properties-of-osteoarthritic.

This course unifies concepts usually learnt in the physical sciences and their application in imaging sciences, and will include the latest research advances in this area. The course will also include a photography competition. Prerequisites: an undergraduate or graduate class in Computer Vision or in Computer Graphics. A graduate seminar course in Computer Vision with emphasis on representation and reasoning for large amounts of data (images, videos and associated tags, text, GPS locations, etc.) toward the ultimate goal of Image Understanding, cited: http://agiosioanniskalyvitis.gr/books/practical-reliability-engineering-and-analysis-for-system-design-and-life-cycle-sustainment.

Almost always, it only involves REDUCING the size of the water pump pulley (so it spins faster) and a thicker radiator (so there is more heat exchange surface to cool the water). From the above discussion, you probably realize that such a "heavy-duty" cooling system causes the engine to have WORSE efficiency and performance at low engine speeds, due to excessive cooling of the engine cylinders.

Free oscillations of a plucked or struck string: interpretation of the sound emitted as the superposition of these modes. Highlighting modes of vibration by sinusoidal excitation: http://thecloudworks.com/?library/introduction-to-computational-fluid-dynamics.

Contemporary research in physics can be broadly divided into condensed matter physics; atomic, molecular, and optical physics; particle physics; and astrophysics. Some physics departments also support research in physics education.

Included are also topics where the flaws (or the alternative explanations, where given) are not a certainty but only possible or likely in the light of other evidence, e.g. http://www.aladinfm.eu/?lib/computation-of-singular-solutions-in-elliptic-problems-and-elasticity.

It is the consequence of that understanding of Nature that should allow scientists to predict new results. Modern physics has brought a new description of Nature, cited: http://fgnuernberg.de/freebooks/statistical-fluid-mechanics-volume-i-mechanics-of-turbulence-dover-books-on-physics.

During the term of their appointment, Scholars have a Fermilab affiliation and the same research opportunities and support infrastructure as Fermilab scientists, source: http://narrowarroe.com/freebooks/fluid-power-maintenance-basics-and-troubleshooting-fluid-power-and-control.

The speed at an instant can be approximated with an average speed measured over a short time interval.

Next, go to any lesson page and begin adding lessons. Edit your Custom Course directly from your dashboard. Name your Custom Course and add an optional description or learning objective. Create chapters to group lessons within your course. Remove and reorder chapters and lessons at any time. Share your Custom Course or assign lessons and chapters. Share or assign lessons and chapters by clicking the "Teacher" tab on the lesson or chapter page you want to assign: http://nickel-titanium.com/lib/sources-of-quantum-mechanics-dover-books-on-physics.
Saturday, May 29, 2010

This weekend, I'll be on my way back to Sweden. My time here at Perimeter Institute turned out to be busier than expected, but it has also been very productive. It is somewhat sad that every time I come back, more people I knew have left. Those postdocs who I spent my years with here have either left already, are sitting on packed bags about to start a new job, or are due to apply for a new job this fall.

The weather here in Waterloo has been brilliant the last two weeks, and the construction at PI has proceeded rapidly. At the risk of boring you to death, here are more photos of the building extension. Meanwhile, one can imagine how the result will look.

The photo below is from the back of the building. To the right, you see the old part of the building; the glass boxes are the researchers' offices.

This is a close-up of the new part of the building, with the goldish shimmering glass front.

And this is again the view from the parking lot; compare to three weeks ago. If I recall correctly, that's where the new main entrance will be.

So, now I have to pack my bag. You'll hear from me once I'm back in Stockholm. Meanwhile, a great weekend to all of you!
What is left is that whoever dominates the "cheap" information does, for all practical purposes, dominate the information market. The only cure for that is information literacy. The other day, I read an interesting article by Mark Moran. Moran is CEO of a Web publisher that offers free content and tools that teach students how to use the Web effectively. He writes: In a recent study of fifth grade students in the Netherlands, most never questioned the credibility of a Web site, even though they had just completed a course on information literacy. When my company asked 300 school students how they searched, nearly half answered: "I type a question." When we asked how students knew if a site was credible, the most common answers were "if it sounds good" or "if it has the information I need." Equally dismal was their widespread failure to check a source’s date, author or citations." I find this seriously scary! As I have expressed in my earlier post Cast Away, the passing on of knowledge to the next generation is one of the most essential ingredients to continuing progress. How are people supposed to make informed decisions if they can't tell what the relevant information is to begin with? Where does that leave our political systems? But then I read the following: Wise words, eh? Guess where that's from? Guess, don't Google! It's a press release from the White House. No, really. It's an announcement for the "National Information Literacy Awareness Month" that was last year in October, which somehow passed me by. While recognizing a problem isn't the same as solving it, it is certainly a good first step. Let's hope that other nations will follow that example, there's clearly hope. Yes, we can do it! Indeed, there is more hopeful news today: The Pew Research Center's Project for Excellence in Journalism some days ago published new data comparing the news coverage on blogs to that in the traditional press. Here's an interesting number: only 2% of news in the traditional press are about science and technology. But on the blogs, it's 18%. Sunday, May 23, 2010 On the Edge of Chaos I saw this advert about a month ago, and it got me thinking. It doesn't matter much if you don't understand German, the visuals speak for themselves: It's an advert for craftsmanship (Handwerk). The song lyrics are roughly saying: imagine how life would be without them. (The long list in the end is a list of professions.) To me it shows so nicely how incredibly complex our life has become, and how much that we take for granted is only a very recent achievement in the history of mankind. Friday, May 21, 2010 Terra Incognita As you know, I am presently at a workshop at Perimeter Institute about the Laws of Nature: Their Nature and Knowability. Yesterday, we had a talk by Marcelo Gleiser titled “What can we know of the world?”. It occurred to me somewhat belatedly that I recently read an article by Gleiser in New Scientist, “The imperfect universe: Goodbye, theory of everything.” In that article, he writes that after “Fiften years [as] a physicist hard at work hunting for a theory of nature that would unify the very big and the very small” he has come to the conclusion that “the very notion of a final theory is faulty.” In a nutshell, that was also what his talk was about. The only thing that's interesting about this insight is that it took him 15 years to arrive there. And maybe, why it got printed in New Scientist. Of course the notion of a final, fundamental, theory of all and everything is faulty. 
For the simple reason that even if we had a theory that explained everything we know, we could never be sure it would eternally remain the theory of everything we know. As Popper already realized about a century ago, one cannot verify a theory, it can only be falsified. Thus, the theories we have are forever out for test, always at the risk that some new data does not fit in. That's exactly what makes a theory scientific. It's also one of the points I made in my FQXi essay. You see, I'm an even Newer Scientist. That we can never know whether a theory is truly fundamental and able to explain all observable phenomena of course does not mean there is no fundamental theory. It just means we can never know - so your belief that such a theory exists belongs in the realm of religion, not science.

In any case, in his talk (video and audio here), Gleiser touched on another topic that reminded me of something else. He had a sketch of our expanding knowledge, with a filled circle representing "The Known" in the middle, that is expanding into what is now the unknown ("perennial ignorance") outside: I used a similar, though slightly different analogy for the progress of science in my PI public lecture some years ago (which incidentally has the same title as the FQXi essay, I'm very into recycling). In this case though, I used a map of Middle Earth. The message that I wanted to convey is that the process of knowledge discovery is very similar to exploring unknown territory. There are parts that you have already seen and that you know very well, though details may be missing. And let me be clear that with "The Known" (in contrast to Gleiser) I don't mean the laws themselves but the data from which the laws were extracted. Otherwise you lose information that is possibly important about the range of applicability (information you at first possibly didn't think was relevant). You try to explain the known by a theory, and if everything fits you point somewhere into the unknown (make a prediction). Ideally, experimentalists go there and find what you told them they would find. You don't want to point too far out, because people today are quite impatient, and if your prediction is not measurable within their lifetime it won't help you get tenure. The other way progress happens is that there is data available for which a theoretical explanation is missing. Or a theory might be sketchy and not work very well. That's the situation of the experimentalist saying: we've seen something on the horizon, please explain that. The body of knowledge that we have is usually not neatly simply connected, but typically has some pieces that don't really match with anything else.

Which brings me back to Gleiser's article then. The essential question is not whether you do or don't believe in a fundamental theory of everything. The essential question is what is a good and promising way to expand what is known. You can believe in flying spaghetti monsters, reincarnation, or a theory of everything: if it helps you with your research, by all means, go ahead, just don't put your beliefs in the abstract of your paper. Experimental input is of course essential to progress all along. On the theoretical side, the obvious reason why people are looking for a unification of the known forces is that unification has worked previously and has been tremendously successful. The same holds for symmetry principles. Sure, that doesn't mean these procedures will continue to be successful, but it's the obvious thing to try.
It's the same reason why a band's second hit sounds like the first, and why, after my move to Sweden, I first had to learn that asking to speak to a supervisor and complaining about lacking customer service is not a very successful tactic in this country. Similarly, we might have to reconsider our tactics and learn new ways of thinking if we remain unsuccessful making headway on today's big questions in physics. For example when it comes to resolving the apparent tension between General Relativity and quantum mechanics, or to explaining the arrow of time, respectively the initial conditions of the universe: It's terra incognita and there may be dragons. That's why I find meetings like the current one at PI very useful to become more aware of our standard mode of thinking, for awareness and acknowledgement of the limitations of a procedure is the first step to improvement.

Monday, May 17, 2010

Abramowitz/Stegun goes online

Did you ever need to learn about the properties of some obscure mathematical function which turns up when you try to solve, say, the Schrödinger equation with a linear potential? In the times before Wikipedia and Eric Weisstein's World of Mathematics/MathWorld, the usual way to proceed was to go to the library and look it up in the "Abramowitz/Stegun", a compilation of formulas, relations, graphs and data tables for all kinds of functions you can think of.

Airy functions Ai(x), Bi(x) and M(x).

Over the last years, Milton Abramowitz' and Irene A. Stegun's time-honored "Handbook of Mathematical Functions" has been carried over to the internet age as the Digital Library of Mathematical Functions. Published by the US National Institute for Standards and Technology (NIST), ... the NIST Digital Library of Mathematical Functions (DLMF), is the culmination of a project that was conceived in 1996 at the National Institute of Standards and Technology (NIST). The project had two equally important goals: to develop an authoritative replacement for the highly successful Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, published in 1964 by the National Bureau of Standards (M. Abramowitz and I. A. Stegun, editors); and to disseminate essentially the same information from a public Web site operated by NIST. (From the DLMF Preface)

Parts of the DLMF have been available for some time, but the complete site went online just last week, on May 11. In comparison to the old printed book, there are more functions and formulas, which can all be copied as LaTeX or MathML code. And while the function graphs at MathWorld are interactive, the DLMF features more detailed descriptions of applications in mathematics and physics, and links to freely available software libraries. Should I ever need to code Jacobian Elliptic Functions, I'll know where to look them up. Via bit-player, where you can also read more about the history of the Abramowitz/Stegun.

Saturday, May 15, 2010

The Future of the Conference

As you know, I am here at Perimeter Institute for the upcoming workshop on the Laws of Nature: Their Nature and Knowability. Every time I lift my bag onto a scale at a check-in counter I wonder if there will come a time when, instead of stepping on a plane, we will meet in cyberspace.

The Past

The classical conference in academia is still omnipresent. You go, you sit and listen to a dozen talks a day, you smalltalk over coffee and cheap cookies and try to get to know some people at the "social event," typically a reception with buffet or a conference dinner.
Also typical is that the average participant pays a horrendously high fee that covers the VIP guests' airfare and four-star hotel. But that's okay because most of the participants have a travel grant for exactly this purpose. That might sound a bit dull, and it frankly sometimes is, but there are a lot of good reasons both to organize and go to a conference.

On the participant's side: Most notably, conferences are useful to obtain or keep an overview of the research going on in one's field. An overview both of the what and of the who. It is possible to do that by other means, but a conference is an especially efficient way to do it, in particular if you're a newcomer. In contrast to reading a review, you can go and talk to the people who work on similar stuff to you, and face-to-face communication is still the best way to exchange information. You might learn about some unfinished work and find a new collaborator. You might get to hear a talk by some of the more well-known people in the field who might be unlikely to pass by your own institution. And last but not least, you have the opportunity to communicate your own research, get feedback and advice.

On the organizer's side: Organizing a conference takes a lot of work and time. One doesn't make money with a scientific conference - in fact the first steps will be trying to find sponsors. Reasons for organizing a conference are most notably to advance the research in one's own field. It's to bring together and support the community, to spread ideas, to foster the formation of collaborations. Conferences are also frequently used to advertise the institution where they take place, which most often is one of the main sponsors. As an organizer one often has, to some extent, the possibility to select speakers that one is interested in hearing or meeting. And then there is the communication of the research field's relevance beyond one's own community. Many conferences will have a public lecture and will have some media coverage, at least in the news magazine of the university where it takes place.

There are a few variations to the conference scheme. A workshop for example will typically have fewer participants and fewer talks, and these talks will be more specialized and leave more room for discussion. The large conferences will often separate talks into plenary talks and parallel sessions. Sometimes they will dump people into a poster session.

The Present

During the last years, with the spread of social networking tools and the continuing improvements in information technologies, one could start to see some modifications of the standard scheme. To begin with, it is nowadays possible to let a speaker deliver a talk by video link and it is similarly possible to let people participate by streaming video. I have been to a few conferences where talks were given by video and they typically are not very well attended. I'm not entirely sure why. It's like people think "Oh, he won't really be here." Or maybe it's that, not so surprisingly, these talks typically imply a lot of technical fiddling and are fault-prone. That however will improve the more often it happens. Another quite obvious change that is now so common one easily takes it for granted is that most conferences have the slides of talks online and in many cases even a recording. This is so omnipresent that indeed most conferences no longer publish proceedings. I find this extremely useful because I don't have to take notes and write down a reference on the speaker's slide.
I can just go and look it up later.

Then there are the changes in format. I wrote previously that I was at the SciBar Camp in Toronto and later at the SciFoo Camp in California. In this case the schedule is not set before the start of the meeting, but assembled by self-organization and participants' interests upon arrival, which is greatly aided if the participants had in advance a possibility to exchange their interests online. This spontaneous self-assembly has advantages and disadvantages. The advantage is that it's a more flexible format that expresses the participants' rather than the organizers' interests, and is more lively and interactive. The disadvantage is that most of the sessions will be pretty much unprepared. Since I actually prefer listening to thought-through arguments instead of improvised babble, I am not too much in favor of this mode of organization. I think that for scientific purposes a combination of both, the old-fashioned scheduled talks and some more flexible sessions, would be more suitable.

And then of course there is the use of social networking tools, like Twitter, Friendfeed and blogging, or just setting up a networking site specifically dedicated to the event. Whether or not that works well depends crucially on how many people make use of it. But if it works well, this can serve a lot of purposes. One is clearly the communication to the public. But besides this it also improves the exchange between the participants. In particular if you are at a large conference you might not actually know who is interested in topics similar to yours or who you might want to talk to in the coffee break. If a conference wants to make good use of Web2.0 tools, the first thing they should do is to aggregate the participants' feeds. It is somewhat ironic, but you might not actually know that the person sitting next to you is writing a blog you read frequently. And installations like a Tweetwall for example (a screen displaying tweets by participants, see picture) add a completely new layer to the discussions that can take place at a meeting and can greatly improve information exchange and facilitate networking. It is interesting to see how more and more conference organizers are making use of these possibilities. It depends of course a lot on the technical support that they have. For example I read the other day on Resonaances that the International Conference on High Energy Physics 2010 has launched an official blog and recruited some bloggers to cover the event. How cool is that? One should add that of course these things have been done for years in some communities that are especially dedicated to advancing these changes in networking, blogging, outreach and information technology. And Perimeter Institute's outreach efforts have been playing around with all these possibilities for years, for example with the recent Quantum2Cosmos Festival. The point I'm trying to make is that the use of these tools is now slowly spreading and becoming more common.

The Future?

So what's next? NatureNetworks organizes conferences that are live-streamed to Second Life. Will this become the conference of the future? I think it is very well possible. I don't think it is very likely that conferences will become entirely decoupled from physical reality in the sense that we exclusively meet online. But it will become increasingly more common to attend a "real" conference "virtually" if one cannot be there in person for one or the other reason, be it lack of funding or illness.
I also think that conferences will obtain several more virtual layers in the near future. For example, I imagine that you go to a talk and while you are there can "log in" to the respective website, and so could people who are not physically there but following online. You could then for example skip back and forth in the slides as you wish, or ask your colleague in the second row what he thinks about what the speaker just said. I think that this happens to some extent today by people sending emails, but it could become much better aggregated. I am not sure however that such a complex environment as Second Life is necessary for these purposes. Though it has of course the advantage that the technology is already in place.

As to the increased flexibility in format: there is a quite obvious hurdle to having an academic conference that does not have a program online a month ahead and neatly scheduled speakers. Many people can only justify their participation and receive a reimbursement for their expenses when they are giving a talk. This is a typical example where requiring "accountability" can be misguided (see also) and hinders improvements. However, I think that this problem will resolve by itself once funding agencies notice that there are other means to document one's participation in an event than being listed on the program. Basically, all they really want to know is that you didn't just spend the week on the beach at their expense. But taking part in online discussions or blogging can serve a similar purpose. Real change is happening and I think we'll see more of it!

Thursday, May 13, 2010

Paradigm Shifts

One day, when I'm old and my hair is grey, and I'm sitting in a rocking chair stroking a cat on my lap when the neighbor's son comes with a book on the history of science, I want to say "Yes, I was there." There are as many different motivations to become a physicist as there are physicists. But one of them is certainly the wish to be part of something greater, an event of historical importance. It's the wish to be there and have a say when our view of the world fundamentally changes; when a new picture comes into focus that will be passed on to future generations. The change of our fundamental understanding of Nature, the emergence of a new way of thinking about the world, is what is known as a "paradigm shift." It's a notion that occasionally creeps up in a discussion. It most often does so either as a means of defense, when a new proposal is widely rejected, or when the speaker tries to make himself more interesting. I was wondering the other day what paradigms there are that might be shifting today. In the 22nd century's textbooks, what of our today's understanding will appear in the historical appendix instead? There are three such potential paradigm shifts that I've come across.

The first one is about the limits of reductionism. With the incredible success reductionism has had in physics in the first half of the century, there came the belief that one day we'll be able to explain everything by taking it apart into smaller and smaller pieces. This paradigm has by now pretty much shifted over in favor of acknowledging that emergent features might not be possible to explain, either in practice or in principle, by reduction to more elementary constituents (see also). This change in our perception came along with the rise of chaos theory and complexity, features that are both very common to natural systems and hard if not impossible to address by reductionist approaches.
It is funny in fact how silently this shift seems to have taken place. You sometimes find today people in talks vigorously arguing that reductionism has limitations, only to find there's nobody actually disagreeing with them. Except for the old professor in the front row.

The second potential paradigm shift that has crossed my way is the multiverse. The multiverse I have in mind is the one forced upon you by the string theory landscape, a vast number of possible universes with different laws of Nature, versus the previously prevailing idea that our universe is unique and is so for a reason that we have to find. Various other sorts of multiverses seem to creep up from other considerations. The multiverse is presently a very hotly discussed topic, with strong defenders both for and against it. I have previously expressed my opinion that the multiverse isn't so much a new paradigm as a new way of thinking about an old paradigm. Instead of finding a way to derive the parameters of the standard model as a 'most optimal' configuration of some sort, one now searches for a measure in the multiverse by means of which our universe is (ideally) most likely. It's a watered-down version of the same game. In any case, I recall that Keith Dienes (the guy with the String Vacuum Project) spoke about the probabilistic attempt as a new way of thinking about "why" we have exactly these laws. And yes, I was thinking, maybe he's right, and in some decades from now that will be how we think about our reality. That we're embedded in a vast number of different universes with different laws of Nature, and our grandchildren will laugh at the fact that we once thought we were unique.

The third potential paradigm shift is that spacetime might not be a fundamental entity. I think that everybody who works on quantum gravity (whatever sort of) is familiar with the idea. But I noticed on occasion, most recently when I was talking to Sophie about Verlinde's emergent gravity scenario, that the idea that space-time is only seemingly a smooth, continuous manifold with a metric on it, and on small scales might be nothing like this, is not very widely spread outside the community. While there are many approaches towards finding a more fundamental description of spacetime, they each suffer from their own problems. So I think it is pretty unclear presently whether this will turn out to be a true (and useful) description of Nature in some way. But it's certainly a thought hanging in the air. On the completely opposite side is the idea that space-time instead is the only fundamental entity and that matter indeed emerges from it (an idea that dates back at least to Kaluza and Klein). Or that neither is fundamental, but both arise from something unified that's neither matter nor space-time. These are all ideas that physicists have been chewing on for quite some while now. I am curious how people will think about them in the future, whether they will laugh about our foolishness or admire our imagination.

Tuesday, May 11, 2010

Under Construction

As you have probably seen from my Twitter-feed, I have made it to Waterloo, despite ash cloud and all. Perimeter Institute is, as usual, buzzing with activity. Since I have to contribute my part to the buzz, here's just a short update on the building construction. The photos from February are here. It is oddly pleasing to see reality evolve by plan, just as we've been shown in the models. It's so different from my everyday life...
Sunday, May 09, 2010

Knowledge for the sake of knowledge

Saturday, May 08, 2010

If the charming Icelandic volcano lets me, I'll be flying to Toronto on the weekend. Next week, I'll be attending the PI workshop on the "Laws of Nature: Their Nature and Knowability," which promises to be interesting. I have some more trips upcoming this summer. Towards the end of June, there's a meeting of the "Working group on quantum black holes" in Bonn (which is part of the COST action "Black holes in a violent universe"); the SUSY 2010 happens to be also in Bonn, in August. The Planck 2010 is May 31 to June 4 at CERN, where I will not be going because it conflicts with my Toronto trip. And of course there's the ESQG 2010, July 12-16, here in Stockholm. To top off the summer, my 15-year high school reunion is also planned for August. Not the busiest summer ever, but it still seems I'll collect some frequent flyer miles. You'll hear from me when I'm on the other side of the big water. Meanwhile, have a nice weekend or, as the Swedes say, ha det så bra ("all the best").

Thursday, May 06, 2010

Why'd you have to go and make things so complicated?

"Why'd you have to go and make things so complicated?
I see the way you're actin' like you're somebody else
Gets me frustrated
Life's like this you
You fall and you crawl and you break
And you take what you get, and you turn it into
Honestly, you promised me
I'm never gonna find you fake it
No no no"

"Biased" is another way to say there's input missing.

Tuesday, May 04, 2010

Physics Bits and Bites

Here are three interesting and intriguing physics items I came across recently:

• Last year, the American Association for the Advancement of Science (AAAS) had organized a symposium called "Quest for the Perfect Liquid: Connecting Heavy Ions, String Theory, and Cold Atoms". Perfect, low-viscosity liquids can be observed when there is a very strong interaction between the constituents of the fluid, as is the case for the quarks and gluons created in heavy ion collisions at RHIC, or clouds of ultracold lithium atoms in optical traps. The strongly coupled quark gluon plasma can be described using the AdS/CFT correspondence, which brings string theory into play (see also this earlier post). At the AAAS symposium, physicist-blogger Clifford Johnson (from Asymptotia) and Peter Steinberg (from Entropy Bound) discussed this connection, and a write-up of their presentation has now come out as a feature article in the May 2010 issue of Physics Today, "What black holes teach about strongly coupled particles" (free access).

• You may be aware of the ongoing quest for the densest possible packing of tetrahedra? The NYT wrote about this in January, and the article mentions that a paper on the subject "prompted Paul M. Chaikin, a professor of physics at New York University, to buy tetrahedral dice by the hundreds and have a high school student stuff them into fish bowls and other containers." This project has now resulted in a Physical Review Letter, with an experimentally determined volume fraction of 0.76±0.02 (the current theoretical "record" is at 0.856). Analysis of the experiment was done using Magnetic Resonance Imaging to look "into" the container crammed with tetrahedra, which shows that the packing is highly disordered. More background can be found in an article at "Physics", which also contains a free link to the PRL paper.

• Also via Physics, I've learned about what is the fastest (and possibly smallest) analogue computer to perform Fourier transforms: a single iodine molecule.
An iodine molecule consists of two iodine atoms, which can vibrate, realizing a tiny harmonic oscillator. During one period, a harmonic oscillator follows a circular trajectory in phase space, which means that the Wigner function describing the quantum state of the oscillator "switches" space and momentum coordinates every quarter period. Going from real space to momentum space corresponds to a Fourier transform, so when the wave function of the iodine molecule is prepared in real space, after a quarter of a period the wave function encodes the Fourier transform of the initial configuration. Using lasers, it is possible to prepare the molecule in a definite state, and to probe the state again later. This allows discrete Fourier transforms for four and eight elements, and all this within 145 femtoseconds, "which is shorter than the typical clock period of the current fastest Si-based computers by 3 orders of magnitudes." ("Ultrafast Fourier Transform with a Femtosecond-Laser-Driven Molecule", PRL).

Saturday, May 01, 2010

Publication Cut-off

The German Research Foundation (DFG) has taken an important and overdue step. To limit their applicants' attempts to blind the reviewer with publications, from July 1st 2010 on a maximum of 5 publications can be listed in the CV. In addition to this, only papers that are already published can be listed. Previously, it was possible to also list papers that were submitted, but not yet published. The change in this policy is apparently a reaction to an instance last year in which applicants (in the area of biodiversity) invented publications. (More details on the new regulation here.) It remains unclear to me whether a paper on the arxiv counts as published or unpublished. With this decision, the DFG is clearly signaling that it's quality that matters, and not quantity. Or at least that's what should matter for their referees. Another reason for the change is that other countries have similar restrictions. The NSF for example also has a limit of 5 publications relevant for the project, and the NIH 15. Matthias Kleiner, President of the DFG, said "With this we want to show: For us it is the content that matters for the judgement and the support of science." And he bemoans that today "The first question is often not anymore what somebody's research is but where and how much he has published." (As quoted in Physik Journal, April 2010, my translation). The DFG is the funding source for scientific research in Germany. Not the only one, but without doubt the most important one. This decision will therefore have a large impact. The impact however is limited in that the other major reason publication numbers are ever increasing is that hiring committees pay attention to these numbers - or at least are believed to pay attention, which is sufficient already to create the effect. The President of the German Higher Education Association (DHV*), Bernhard Kempen, comments: "To assess a candidate's qualification in a hiring process it should also be solely the content of provided publications, not their number, that is decisive for an appointment." (as quoted here, my translation.) Since I have written many times that it hinders scientific progress when selection criteria set incentives for researchers to strive for secondary goals (many publications) instead of primary goals (good research), it should be clear that I welcome this decision by the DFG.
The literal translation of the German word "Hochschule" is "high school" but the meaning is different. "Hochschule" in Germany is basically all sorts of higher education, past finishing what's "high school" in America. The American "high school" is in German instead called "Oberstufe," lit. "upper step." See also Wikipedia.
Archive for February, 2009

Physics Friday 61
February 27, 2009

Let us consider a spinning "top", spinning about an axis of rotational symmetry with angular velocity ω; the mass of the top is m, and the moment of inertia for the top about this axis is I. The top is supported by a surface below, which it contacts at a single "pivot" point on the rotation axis; the distance between this pivot point and the center of gravity is l. For all following analysis, we ignore dissipative forces. The only external forces on the top are gravity and the normal force at the pivot. If the rotational axis is perfectly vertical, these forces cancel and produce no torques, so that the top continues its simple rotation unchanged. But what if the top is tilted from the vertical by an angle θ? We choose as our origin the pivot point. We now have a nonzero torque which has magnitude \tau = mgl\sin\theta, and is directed perpendicular to the vertical plane containing the rotation axis. Now, we note that the top has angular momentum L = I\omega directed along the rotation axis. Now, we recall that \boldsymbol{\tau} = \frac{d\mathbf{L}}{dt}; thus the angular momentum will change as a result of the torque. We see that τ is horizontal, and so the vertical component of L, Lz, does not change, and as τ and L are perpendicular, neither does the magnitude of L. Thus, we see that the angular momentum vector L will be rotating uniformly about the vertical axis. Due to the object's symmetry, the permanent axis must remain along L, and so will rotate about the axis. This is classical torque-driven precession. If the angular velocity of the precession is Ω, then the horizontal component of L, of magnitude L\sin\theta, sweeps out a circle, so \left|\frac{d\mathbf{L}}{dt}\right| = \Omega L\sin\theta = mgl\sin\theta, and thus
\Omega = \frac{mgl}{I\omega}.

February 23, 2009

According to WordPress, this blog has reached 3^9 = 19683 page views.

Monday Math 60
February 23, 2009

Let ω(n) be the number of distinct prime factors of the integer n, with us defining ω(1)=0 (see, for example, here). What, then, is the value of the series ? How about ?

Physics Friday 60
February 20, 2009

Let us consider a hydrogen atom: a single electron "orbiting" a single (much heavier) nucleus containing a single proton. The potential energy due to the attractive Coulomb force between these charged particles is V(r) = -\frac{e^2}{4\pi\epsilon_0 r}, where r is the distance between the particles. The time-independent Schrödinger equation for the electron (ignoring relativistic effects, particle spins, and magnetic moments) is
-\frac{\hbar^2}{2m}\nabla^2\psi - \frac{e^2}{4\pi\epsilon_0 r}\psi = E\psi
(Here, the electron mass m should actually be the reduced mass μ of the electron-proton pair; however, the correction involved is small, and even smaller for more massive atomic nuclei). With the way our potential energy is defined (with zero energy at infinite separation), our bound states for the electron (our states of interest for an atom) will have E<0. Now, we note that the potential is spherically symmetric, and so our Hamiltonian commutes with the angular momentum operators, and we can perform separation of variables in spherical coordinates, with the angular components being the spherical harmonics (see here and here). As in here, when we perform the spherical coordinate separation \psi(r,\theta,\phi) = R(r)Y_l^m(\theta,\phi), we obtain radial equation
-\frac{\hbar^2}{2m}\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right) + \left[\frac{\hbar^2 l(l+1)}{2mr^2} - \frac{e^2}{4\pi\epsilon_0 r}\right]R = ER.
Making the substitution u(r) = rR(r),
-\frac{\hbar^2}{2m}\frac{d^2u}{dr^2} + \left[\frac{\hbar^2 l(l+1)}{2mr^2} - \frac{e^2}{4\pi\epsilon_0 r}\right]u = Eu.
Now, let us define a = \frac{\hbar}{\sqrt{-2mE}}, which has units of length, and dimensionless variable \rho = \frac{r}{a}, so that \frac{d^2u}{d\rho^2} = a^2\frac{d^2u}{dr^2}. Then we have
\frac{d^2u}{d\rho^2} = \left[1 - \frac{\rho_0}{\rho} + \frac{l(l+1)}{\rho^2}\right]u, \qquad \rho_0 = \frac{me^2 a}{2\pi\epsilon_0\hbar^2}.
We see that as ρ→∞, the equation is approximated by \frac{d^2u}{d\rho^2} = u, which has general solution u = C_1 e^{-\rho} + C_2 e^{\rho}; normalizability requires C2=0. Also, examining ρ→0, our equation is approximately \frac{d^2u}{d\rho^2} = \frac{l(l+1)}{\rho^2}u, which has general solution u = C_1\rho^{l+1} + C_2\rho^{-l}; considering the origin tells us C2=0 again.

Combining these, we thus try the substitution u(\rho) = \rho^{l+1}e^{-\rho}v(\rho), giving radial equation
\rho\frac{d^2v}{d\rho^2} + 2(l+1-\rho)\frac{dv}{d\rho} + \left[\rho_0 - 2(l+1)\right]v = 0.
Now, letting x = 2\rho, we get
x\frac{d^2v}{dx^2} + \left[(2l+1)+1-x\right]\frac{dv}{dx} + \left[\frac{\rho_0}{2}-l-1\right]v = 0,
which is the associated Laguerre differential equation with \alpha = 2l+1 and degree \frac{\rho_0}{2}-l-1; so the solution to our transformed equation is v = L_{\rho_0/2-l-1}^{2l+1}(2\rho), where L_{n}^{\alpha} is an associated Laguerre function. Now, the resulting wavefunction can be normalized only if v is a polynomial; this is true when \frac{\rho_0}{2}-l-1 is a non-negative integer; thus, we have \frac{\rho_0}{2} = n, where n is a positive integer, and 0 ≤ l ≤ n-1. Looking back through our conversions, we have a = na_0 and
R_{nl}(r) = \frac{u}{r} = N_{nl}\left(\frac{2r}{na_0}\right)^{l} e^{-r/(na_0)} L_{n-l-1}^{2l+1}\!\left(\frac{2r}{na_0}\right),
where N_{nl} is the constant needed to normalize the wavefunction; with some work involving the properties of associated Laguerre polynomials (see for example here), we can find the normalized wavefunction to be:
\psi_{nlm}(r,\theta,\phi) = \sqrt{\left(\frac{2}{na_0}\right)^{3}\frac{(n-l-1)!}{2n\,(n+l)!}}\; e^{-r/(na_0)}\left(\frac{2r}{na_0}\right)^{l} L_{n-l-1}^{2l+1}\!\left(\frac{2r}{na_0}\right) Y_l^m(\theta,\phi),
where n=1,2,3,…, l=0,1,2,…,n-1, and m=-l, -l+1,…,l-1,l (for a given n, we have n^2 different angular momentum states). One should note that a_0 = \frac{4\pi\epsilon_0\hbar^2}{me^2} is the Bohr radius. Now, recall that we had E = -\frac{\hbar^2}{2ma^2}; with the quantization condition a = na_0, we have
E_n = -\frac{me^4}{32\pi^2\epsilon_0^2\hbar^2}\cdot\frac{1}{n^2},
and the ground state energy of hydrogen is approximately -13.6 eV, and so the ionization energy of hydrogen is approximately 13.6 eV.
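As a sanity check on the formulas above, here is a short Python sketch (my addition, not part of the original post). It assumes scipy's genlaguerre follows the same associated-Laguerre convention as the normalization quoted above (which it does for the standard generalized Laguerre polynomials); it verifies the normalization integral of R_nl and evaluates the Bohr energies.

import numpy as np
from math import factorial, pi
from scipy.special import genlaguerre
from scipy.integrate import quad

hbar = 1.054571817e-34    # J s
m_e  = 9.1093837015e-31   # kg
e    = 1.602176634e-19    # C
eps0 = 8.8541878128e-12   # F/m

a0 = 4 * pi * eps0 * hbar**2 / (m_e * e**2)   # Bohr radius, ~5.29e-11 m

def R(n, l):
    """Normalized radial wavefunction R_nl(r), with r in meters."""
    N = np.sqrt((2 / (n * a0))**3 * factorial(n - l - 1)
                / (2 * n * factorial(n + l)))
    L = genlaguerre(n - l - 1, 2 * l + 1)
    return lambda r: (N * np.exp(-r / (n * a0))
                      * (2 * r / (n * a0))**l * L(2 * r / (n * a0)))

def E(n):
    """Bohr energy E_n, converted to electron volts."""
    return -(m_e * e**4 / (32 * pi**2 * eps0**2 * hbar**2)) / n**2 / e

for n in (1, 2, 3):
    for l in range(n):
        Rnl = R(n, l)
        norm, _ = quad(lambda r: Rnl(r)**2 * r**2, 0, 60 * n**2 * a0)
        print(f"n={n} l={l}: norm = {norm:.6f}, E_{n} = {E(n):.3f} eV")

The normalization integrals come out as 1.000000 and E_1 as -13.606 eV, matching the values in the post.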
Monday Math 59
February 16, 2009

Recall the Riemann zeta function
\zeta(s) = \sum_{k=1}^{\infty}\frac{1}{k^s}.
Now, consider the product \left(1-\frac{1}{2^s}\right)\zeta(s). We see:
\left(1-\frac{1}{2^s}\right)\zeta(s) = \sum_{k=1}^{\infty}\frac{1}{k^s} - \sum_{k=1}^{\infty}\frac{1}{(2k)^s} = \sum_{2\nmid k}\frac{1}{k^s},
and we have the zeta function series with the even terms removed. Now, let us multiply the above by \left(1-\frac{1}{3^s}\right). Then we see:
\left(1-\frac{1}{3^s}\right)\left(1-\frac{1}{2^s}\right)\zeta(s) = \sum_{2\nmid k}\frac{1}{k^s} - \sum_{2\nmid k}\frac{1}{(3k)^s} = \sum_{2\nmid k,\;3\nmid k}\frac{1}{k^s},
where the remaining terms are those where k is not divisible by 2 or 3. Multiplication of the above by \left(1-\frac{1}{5^s}\right) will eliminate those remaining terms divisible by 5; we may continue this procedure through the primes, and (in analogy with the Sieve of Eratosthenes) will, in the infinite product, eliminate every term, k>1, giving
\zeta(s)\prod_{n=1}^{\infty}\left(1-\frac{1}{p_n^{s}}\right) = 1,
where pn is the nth prime; thus the Riemann zeta function may be expressed as the product
\zeta(s) = \prod_{n=1}^{\infty}\frac{1}{1-p_n^{-s}}.
Thus we see a connection between the primes and the Riemann zeta function. In fact, through this identity (first proved by Euler), the divergence of \zeta(s) at s=1 (divergence of the harmonic series) implies that there are infinitely many primes.
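A quick numerical illustration of Euler's product (my addition, not from the original post): truncating the product over the first few hundred primes already reproduces ζ(2) = π²/6 quite well.

from sympy import primerange, zeta

s = 2
prod = 1.0
for p in primerange(2, 1000):
    prod *= 1 / (1 - p**(-s))    # one Euler factor per prime

print(prod)            # ≈ 1.6449...
print(float(zeta(s)))  # π²/6 ≈ 1.6449340668...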
Physics Friday 59
February 13, 2009

Let us consider the isotropic three-dimensional quantum harmonic oscillator: we have
H = \frac{p^2}{2m_0} + \frac{1}{2}m_0\omega^2 r^2
(where the particle mass is now m0 to prevent confusion later). In cartesian coordinates, this becomes:
H = \left(\frac{p_x^2}{2m_0} + \frac{1}{2}m_0\omega^2 x^2\right) + \left(\frac{p_y^2}{2m_0} + \frac{1}{2}m_0\omega^2 y^2\right) + \left(\frac{p_z^2}{2m_0} + \frac{1}{2}m_0\omega^2 z^2\right) = H_x + H_y + H_z,
where Hx, Hy, and Hz are each the hamiltonian for a one-dimensional harmonic oscillator, in the x, y, and z directions respectively. Thus, the energy eigenstates will be products of eigenstates of these three; we have energy levels
E = \hbar\omega\left(n_x + n_y + n_z + \tfrac{3}{2}\right).
Thus, we have energies E_n = \hbar\omega\left(n + \tfrac{3}{2}\right), with degeneracy equal to the number of solutions of n_x + n_y + n_z = n, which, via combinatorics, is \frac{(n+1)(n+2)}{2}. (Only the ground state n=0 is non-degenerate.)

Now, suppose instead we consider spherical coordinates. As V = \frac{1}{2}m_0\omega^2 r^2 depends on r only (spherical symmetry), we see that under separation of variables, the angular components will be given by the spherical harmonics (see here and here). We will then have \psi(r,\theta,\phi) = R(r)Y_l^m(\theta,\phi). With this in place, our Schrödinger equation becomes, for the radial component:
-\frac{\hbar^2}{2m_0}\frac{1}{r^2}\frac{d}{dr}\!\left(r^2\frac{dR}{dr}\right) + \left[\frac{\hbar^2 l(l+1)}{2m_0 r^2} + \frac{1}{2}m_0\omega^2 r^2\right]R = ER.
Defining u(r) = rR(r), we see that this differential equation simplifies to
-\frac{\hbar^2}{2m_0}\frac{d^2u}{dr^2} + \left[\frac{\hbar^2 l(l+1)}{2m_0 r^2} + \frac{1}{2}m_0\omega^2 r^2\right]u = Eu.
Now, we rescale the radial coordinate by defining dimensionless \rho = \sqrt{\frac{m_0\omega}{\hbar}}\,r. Then the above equation becomes
\frac{d^2\tilde u}{d\rho^2} + \left[\frac{2E}{\hbar\omega} - \rho^2 - \frac{l(l+1)}{\rho^2}\right]\tilde u = 0,
where \tilde u(\rho) is the rescaled u(r). We can expect that far from the origin, we should have something like a Gaussian. If we attempt the substitution \tilde u = \rho^{l+1}e^{-k\rho^2}f(\rho) (where k is a constant for which we will find later a value convenient for solving the equation), then this becomes, after eliminating a factor of \rho^{l+1}e^{-k\rho^2},
\frac{d^2f}{d\rho^2} + 2\left[\frac{l+1}{\rho} - 2k\rho\right]\frac{df}{d\rho} + \left[\frac{2E}{\hbar\omega} - 2k(2l+3) + (4k^2-1)\rho^2\right]f = 0.
We see that this simplifies greatly if 4k^2 = 1, so that k = \tfrac{1}{2}, and then
\frac{d^2f}{d\rho^2} + 2\left[\frac{l+1}{\rho} - \rho\right]\frac{df}{d\rho} + \left[\frac{2E}{\hbar\omega} - 2l - 3\right]f = 0.
Lastly, defining x = \rho^2, and q = \frac{1}{4}\left(\frac{2E}{\hbar\omega} - 2l - 3\right), we get
x\frac{d^2f}{dx^2} + \left[\left(l+\tfrac{1}{2}\right)+1-x\right]\frac{df}{dx} + qf = 0,
which is the associated Laguerre differential equation with \alpha = l+\tfrac{1}{2} and degree q.

The solutions are associated Laguerre functions f = L_q^{l+1/2}(\rho^2), and to be physically valid, we require that q = \frac{1}{2}\left(\frac{E}{\hbar\omega} - l - \tfrac{3}{2}\right) be a non-negative integer. Writing E = \hbar\omega\left(n+\tfrac{3}{2}\right), this means q = \frac{n-l}{2}: thus l ≤ n, and they are both odd or both even integers. Thus, for even n, l can take values 0,2,4,…,n-2,n; and for odd n, l can take values 1,3,5,…,n-2,n. Adding the fact that m is an integer with possible values -l,-l+1,…,l-1,l, we have 2l+1 different possibilities for m. So, for even n, we have
\sum_{l=0,2,\ldots,n}(2l+1) = \frac{(n+1)(n+2)}{2}
different possible angular momentum states; and for odd n, there are
\sum_{l=1,3,\ldots,n}(2l+1) = \frac{(n+1)(n+2)}{2}
angular momentum states; the same formula as for even n. Looking back at the start of this post, we note that this formula matches the result for the degeneracies of the energy states we found using cartesian coordinates.

Another Birthday
February 12, 2009

And a happy 200th birthday to America's 16th president, Abraham Lincoln.

Darwin Day
February 12, 2009

Happy Darwin Day! Charles Darwin was born 200 years ago to the day.

Monday Math 58
February 9, 2009

Recall the product formula for the sine function (here and here):
\sin z = z\prod_{n=1}^{\infty}\left(1 - \frac{z^2}{\pi^2 n^2}\right),
or, with z = πx,
\sin(\pi x) = \pi x\prod_{n=1}^{\infty}\left(1 - \frac{x^2}{n^2}\right).
Taking the logarithm of both sides:
\ln\sin(\pi x) = \ln(\pi x) + \sum_{n=1}^{\infty}\ln\left(1 - \frac{x^2}{n^2}\right).
Now, \frac{d}{dx}\ln\sin(\pi x) = \pi\cot(\pi x), so taking the derivative of both sides of the above equation:
\pi\cot(\pi x) = \frac{1}{x} - \sum_{n=1}^{\infty}\frac{2x}{n^2 - x^2}.
Note that via the geometric series, for |x| < 1,
\frac{2x}{n^2 - x^2} = \frac{2x}{n^2}\cdot\frac{1}{1 - x^2/n^2} = 2\sum_{k=1}^{\infty}\frac{x^{2k-1}}{n^{2k}}.
Note that we can reverse the order of summation:
\pi\cot(\pi x) = \frac{1}{x} - 2\sum_{k=1}^{\infty}\zeta(2k)\,x^{2k-1},
where \zeta(s) is the Riemann zeta function. Now, we use Euler's formula: as e^{i\theta} = \cos\theta + i\sin\theta, \cos\theta = \frac{e^{i\theta}+e^{-i\theta}}{2}, \sin\theta = \frac{e^{i\theta}-e^{-i\theta}}{2i}, and
\pi x\cot(\pi x) = i\pi x\,\frac{e^{2i\pi x}+1}{e^{2i\pi x}-1} = i\pi x + \frac{2i\pi x}{e^{2i\pi x}-1}.
Previously, we showed that the function \frac{z}{e^z-1} can be given by the series \sum_{n=0}^{\infty}\frac{B_n}{n!}z^n. Thus, with z = 2i\pi x,
\frac{2i\pi x}{e^{2i\pi x}-1} = \sum_{n=0}^{\infty}\frac{B_n}{n!}(2i\pi x)^n,
and so
\pi x\cot(\pi x) = i\pi x + \sum_{n=0}^{\infty}\frac{B_n}{n!}(2i\pi x)^n.
Now, as B_0 = 1 and B_1 = -\tfrac{1}{2}, we can see that the first two terms of the series in the above are 1 - i\pi x. Now, recall that for n≥2, Bn=0 for odd n; thus the series above only has even n terms, and we can rewrite it, using n=2k, as
\pi x\cot(\pi x) = 1 + \sum_{k=1}^{\infty}\frac{B_{2k}}{(2k)!}(2i\pi x)^{2k},
or thus, as (2i)^{2k} = (-1)^k 2^{2k},
\pi\cot(\pi x) = \frac{1}{x} + \sum_{k=1}^{\infty}\frac{(-1)^k(2\pi)^{2k}B_{2k}}{(2k)!}x^{2k-1}.
Comparing this to the series above, we see via the term-by-term comparison that
-2\zeta(2k) = \frac{(-1)^k(2\pi)^{2k}B_{2k}}{(2k)!},
or, solving for the zeta function:
\zeta(2k) = \frac{(-1)^{k+1}(2\pi)^{2k}B_{2k}}{2\,(2k)!},
which gives the Riemann zeta function for even positive integers, as previously noted here.
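This last formula is easy to verify symbolically; here is a small sympy sketch (my addition, not part of the original post):

from sympy import bernoulli, zeta, pi, factorial, simplify

# zeta(2k) = (-1)^(k+1) (2 pi)^(2k) B_{2k} / (2 (2k)!)
for k in range(1, 6):
    rhs = (-1)**(k + 1) * (2*pi)**(2*k) * bernoulli(2*k) / (2 * factorial(2*k))
    assert simplify(rhs - zeta(2*k)) == 0
    print(f"zeta({2*k}) = {zeta(2*k)}")

For k = 1 this gives the familiar ζ(2) = π²/6, for k = 2 it gives ζ(4) = π⁴/90, and so on.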
Equation of the Day #14: The Harmonic Oscillator

"The career of a young theoretical physicist consists of treating the harmonic oscillator in ever-increasing levels of abstraction." – Sidney Coleman

One of the first physical systems students get to play with is the harmonic oscillator. An oscillator is a system with some property that varies repeatedly around a central value. This variation can be in displacement (such as a mass on a spring), voltage (like the electricity that comes out of the wall), or field strength (like the oscillations that make up light). When this variation has a fixed period, we call the motion harmonic oscillation, and the simplest harmonic oscillation is known as — you may have guessed — simple harmonic oscillation. A simple harmonic oscillator is a system under the influence of a force proportional to the displacement of the system from equilibrium. Think of a mass on a spring; the further I pull the mass from the spring's natural length, the more the spring pulls back, and if I push the mass so that the spring compresses, the spring will push against me. Mathematically, this is known as Hooke's Law and can be written as F=-k\Delta x where F is the net force applied to the system, Δx is the displacement of the system from equilibrium, and k is some proportionality constant (often called the "spring constant") which tells us how strongly the spring pushes or pulls given some displacement – a larger k indicates a stronger spring. If I let go of the mass when it is displaced from equilibrium, the mass will undergo oscillatory motion. What makes this "simple" is that we're ignoring the effects of damping or driving the oscillation with an outside force. How do we know that such a restoring force causes oscillatory motion? Utilizing Newton's second law, \begin{aligned} F &= ma\\ &= -kx\\ \Rightarrow \dfrac{d^2x}{dt^2} &= -\dfrac{k}{m} x \end{aligned} The solution to this equation is sinusoidal, x(t) = A\cos(\omega t +\phi), where A is the amplitude of oscillation (the farthest the mass gets from equilibrium), \omega \equiv \sqrt{\dfrac{k}{m}} is the angular frequency of oscillation, and ϕ is the phase, which captures the initial position and velocity of the mass at time t = 0. The period is related to the angular frequency by T = \dfrac{2\pi}{\omega}. For this reason, harmonic oscillators are useful timekeepers, since they oscillate at regular, predictable intervals. This is why pendulums, coiled springs, and currents going through quartz crystals have been used as clocks.

What other physical systems does this situation apply to? Well, if you like music, simple harmonic oscillation is what air undergoes when you play a wind, string, or membrane instrument. What you're doing when you play an instrument (or sing) is forcing air, string(s), or electric charge (for electronic instruments) out of equilibrium. This causes the air, string(s), and voltage/current to oscillate, which creates a tone. Patch a bunch of these tones together in the form of chords, melodies, and harmonies, and you've created music. A simpler situation is blowing over a soda/pop bottle. When you blow air over the mouth of the bottle, you create an equilibrium pressure for the air above the mouth of the bottle. Air that is slightly off of this equilibrium will oscillate in and out of the bottle, producing a pure tone.

Image: Wikipedia

Now for the fun part: what happens when we enter the quantum realm? Quantum mechanics says that the energy of a bound system is quantized, and an ideal harmonic oscillator is always bound.
The total energy of a harmonic oscillator is given by \begin{aligned} E &= \dfrac{1}{2} mv^2 + \dfrac{1}{2} kx^2\\ &= \dfrac{1}{2m} \left(p^2 + (m\omega x)^2\right), \end{aligned} where the first term is the kinetic energy, or energy of motion, and the second term is the potential energy, or energy due to location. I used the facts that p = mv and k = m\omega^2 to go from the first line to the second line. The quantum prescription says that p and x become mathematical operators, and the energy takes a role in the Schrödinger equation. For the harmonic oscillator, solving the Schrödinger equation yields the differential equation \begin{aligned} \dfrac{\hbar}{m\omega}\dfrac{d^2\psi}{dx^2} + \left(\dfrac{2E}{\hbar\omega} - \dfrac{m\omega}{\hbar}\, x^2 \right) \psi(x) = 0 \end{aligned} where ħ is the (reduced) Planck constant, and ψ is the quantum mechanical wave function. After solving this differential equation, the allowed energies turn out to be E_n = \hbar\omega \left(n+\dfrac{1}{2}\right) where n = 0, 1, 2, . . . is a nonnegative integer. Unlike the classical picture, the quantum states of the harmonic oscillator with definite energy are stationary and spread out over space, with higher energy states spread out more than lower energy states. There is a way, though, to produce an oscillating state of the quantum harmonic oscillator by preparing a superposition of pure energy states, forming what's known as a coherent state, which actually does behave like the classical mass on a spring. It's a weird instance of classical behavior in the quantum realm!

Classical simple harmonic oscillators compared to quantum wave functions of the simple harmonic oscillator. Image: Wikipedia

An example of a quantum harmonic oscillator is a molecule formed by a pair of atoms. The bond between the two atoms gives rise to a roughly harmonic potential, which results in molecular vibrational states, like two quantum balls attached by a spring. Depending on the mass of the atoms and the strength of the bond, the molecule will vibrate at a specific frequency, and this frequency tells physicists and chemists about the bond-lengths of molecules and what those bonds are made up of. In fact, the quantum mechanical harmonic oscillator is a major topic of interest because the potential energy between quantum objects can often be approximated as a Hooke's Law potential near equilibrium, even if the actual forces at play are more complex at larger separations. Additionally, the energy structure of the harmonic oscillator predicts that energies are equally spaced by the amount ħω. This is a remarkable feature of the quantum harmonic oscillator, and it allows us to make a toy model for quantum object creation and annihilation. If we take the energy unit ħω as equal to the rest energy of a quantum object by Einstein's E = mc², we can think of the quantum number n as being the number of quantum objects in a system. This idea is one of the basic results of quantum field theory, which treats quantum objects as excitations of quantum fields that stretch over all of space and time. This is what the opening quote is referring to; physicists start off learning the simple harmonic oscillator as classical masses on springs or pendulums oscillating at small angles, then they upgrade to the quantum treatment and learn about its regular energy structure, and then to the quantum field treatment, where the energies are treated as a number of quantum objects arising from an omnipresent quantum field.
I find it to be one of the most beautiful aspects of Nature that such a simple system recurs at multiple levels of our physical understanding of reality.
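To make the equally-spaced-energies claim concrete, here is a minimal numerical sketch (my illustration, not part of the original post): diagonalizing the harmonic oscillator Hamiltonian on a position grid, in units where ħ = m = ω = 1, reproduces E_n = n + 1/2.

import numpy as np

N, box = 2000, 20.0                # grid points, box size
x = np.linspace(-box/2, box/2, N)
dx = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + (1/2) x^2, with the standard three-point Laplacian
diag = 1.0 / dx**2 + 0.5 * x**2
off  = np.full(N - 1, -0.5 / dx**2)
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

print(np.linalg.eigvalsh(H)[:5])   # ≈ [0.5, 1.5, 2.5, 3.5, 4.5]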
Throwing balls on torus-earth

A question came up on Physics Stack Exchange: how do thrown-object trajectories look on a toroidal planet? Locally we should expect them to be like on Earth: there is constant gravitational acceleration orthogonal to the ground, so they will just look like parabolas. But if the trajectory is longer the rapid rotation ought to twist it, since there is a fair Coriolis effect. So the differential equation will be
\mathbf{x}''=\mathbf{g}+2\mathbf{x}'\times\mathbf{\Omega}.
If we just look at the velocity vector we get
\mathbf{v}'=\mathbf{g}+2\mathbf{v}\times\mathbf{\Omega}.
That is, the forcefield will twist the velocity around if it is large and orthogonal to the angular velocity vector. If the velocity is parallel it will just be affected by gravity. For a trajectory near the pole it will become twisted and tilted:

Trajectories of a ball thrown from the surface with the angular velocity vector parallel to the gravity vector.

For a starting point on the equator the twisting gets a bit more complex:

Trajectories of a ball thrown from the surface with the angular velocity vector orthogonal to the gravity vector.

One can also recognise the analogy to an electron in an electromagnetic field: \mathbf{v}' = (q/m)(\mathbf{E}+\mathbf{v}\times \mathbf{B}). Without gravity we should hence expect thrown balls to just follow helices around the omega-vector direction, just like charged particles follow magnetic field-lines. One can eliminate the electric field from the equation by using a different velocity coordinate \mathbf{v}_2=\mathbf{v}-\mathbf{E}\times\mathbf{B}/B^2. Hence we can treat ball trajectories like helices plus a drift velocity in the \mathbf{g}\times\mathbf{\Omega} direction. The helix radius will be v/2\Omega.

How large is the Coriolis effect? On Earth \Omega=2\pi/86400\approx 0.0000727. On Donut it is 0.000614 and on Hoop 0.000494, several times higher. Still, the correction is not going to be enormous: for a ball moving 10 meters per second the helix radius will be 69 km on Earth (at the pole), 8.1 km on Donut, and 10 km on Hoop. We hence need to throw the ball a suborbital distance before the twists become really visible. At these distances the curvature of the planet and the non-linearity of the gravitational field also begin to bite. I have not simulated such trajectories since I need a proper mass distribution model of the worlds, and it is messy. However, for an infinitely thin ring one can solve orbits numerically relatively easily (you "just" have to integrate elliptic integrals):

Some orbits around a massive ring.

Beside the "normal" equatorial orbits and torus-like orbits winding themselves around the ring, there are internal halo-orbits and chaotic tangles.

Overcoming inertia

"The tremendous accelerations involved in the kind of spaceflight seen on Star Trek would instantly turn the crew to chunky salsa unless there was some kind of heavy-duty protection. Hence, the inertial damping field."
— Star Trek: The Next Generation Technical Manual, page 24.

For a space opera RPG setting I am considering adding inertia manipulation technology. But can one make a self-consistent inertia dampener without breaking conservation laws? What are the physical consequences? How many cool explosions, superweapons, and other tropes can we squeeze out of it? How to avoid the worst problems brought up by the SF community?
What inertia is

As Newton put it, inertia is the resistance of an object to a change in its state of motion. Newton's force law F=ma is a consequence of the definition of momentum, p=mv (which in a way is more fundamental, since it directly ties in with conservation laws). The mass in the formula is the inertial mass. Mass is a measure of how much there is of matter, and we normally multiply it with a hidden constant of 1 to get the inertial mass – this constant is what we will want to mess with.

There are relativistic versions of the laws of motion that handle momentum and inertia for high velocities, where the kinetic energy becomes so large that it starts to add mass to the whole system. This makes the total inertia go up, as seen by an outside observer, and looks like a nice case for inertia-manipulating tech being vaguely possible. However, Einstein threw a spanner into this: gravity also acts on mass and conveniently does so exactly as much as inertia: gravitational mass (the masses in F=Gm_1m_2/r^2) and inertial mass appear to be equal. At least in my old school physics textbook (early 1980s!) this was presented as a cool unsolved mystery, but it is a consequence of the equivalence principle in general relativity (1907): all test particles accelerate the same way in a gravitational field, and this is only possible if their gravitational mass and inertial mass are proportional to one another. So, an inertia manipulation technology will have to imply some form of gravity manipulation technology. Which may be fine from my standpoint, since what space opera is complete without antigravity? (In fact, I had already decided to have Alcubierre warp bubble FTL anyway, so gravity manipulation is in.)

Playing with inertia

OK, let's leave relativity to the side for the time being and just consider the classical mechanics of inertia manipulation. Let us posit that there is a magical field that allows us to dial up or down the proportionality constant for inertial mass: the momentum of a particle will be p=\mu m v, the force law F=\mu m a and the formula for kinetic energy K=(1/2)\mu m v^2. Here \mu is the effect of the magic field, running over 0<\mu<\infty, with 1 corresponding to it being absent.

I throw a 1 g ping-pong ball at 1 m/s into my inertics device and turn on the field. What happens? Let us assume the field is \mu=1000. Now the momentum and kinetic energy jump by a factor of 1000 if the velocity remains unchanged. Were I to catch the ball I would have gained 999 times its original kinetic energy: this looks like an excellent perpetual motion machine. Since we do not want that to be possible (a space empire powered by throwing ping-pong balls sounds silly) we must demand that energy is conserved.

Velocity shifting to preserve kinetic energy

Radiation shielding

One way of doing energy conservation is for the velocity to go down for my heavy ping-pong ball. This means that the new velocity will be v/\sqrt{\mu}. Inertia-increasing fields slow down objects, while inertia-decreasing fields speed them up. One could have a force-field made of super-high inertia that would slow down incoming projectiles. At first this seems pointless, since once they get through to the other side they speed up and will do the same damage. But we could of course put in a bunch of armour in this field, and have it resist the projectile.
The kinetic energy will be the same, but it will be a lower velocity collision, which means that the strength of the armour has a better chance of stopping it (in fact, as we will see below, we can use superdense armour here too). Consider the difference between being shot with a rifle bullet or being slowly but strongly stabbed by it: in the latter case the force can be distributed by a good armour over a vast surface. Definitely a good thing for a space opera.

A spacecraft that wants to get somewhere fast could just project a low \mu field around itself and boost its speed by a huge 1/\sqrt{\mu} factor. Sounds very useful. But now an impacting meteorite will both have a high relative speed, and when it enters the field get that boosted by the same factor again: impacts will happen at velocities increased by a factor of 1/\mu as measured by the ship. So boosting your speed with a factor of a 1000 will give you dust hitting you at speeds a million times higher. Since typical interplanetary dust already moves at a few km/s, we are talking about hyperrelativistic impactors. The armour above sounds like a good thing to have…

Note that any inertia-reducing technology is going to improve rockets even if there is no reactionless drive or other shenanigans: you just reduce the inertia of the reaction mass. The rocket equation no longer bites: sure, your ship is mostly massive reaction mass in storage, but to accelerate the ship you just take a measure of that mass, restore its inertia, expel it, and enjoy the huge acceleration as the big engine pushes the overall very low-inertia ship. There is just a snag in this particular case: when restoring the inertia you somehow need to give the mass enough kinetic energy to be at rest in relation to the ship…

This kind of inertics does not make for a great cannon. I can certainly make my projectile speed up a lot in the bore by lowering its inertia, but as soon as it leaves it will slow down. If we assume a given amount of force F accelerating it along the length-L bore, it will pick up FL joules of kinetic energy from the work the cannon does – independent of mass or inertia! The difference may be power: if you can only supply a certain energy per second, like in a coilgun, having a slower projectile in the bore is better.

Note that entering and leaving an inertics field will induce stresses. A metal rod entering an inertia-increasing field will have the part in the field moving more slowly, pushing back against the part not yet slowed (yet another plus for the armour!). When leaving the field the lighter part outside will pull away strongly.

Another effect of shifting velocities is that gases behave differently. At first it looks like changing speeds would change temperature (since we tend to think of the temperature of a gas as how fast the molecules are bouncing around), but actually the kinetic temperature of a gas depends on (you guessed it) the average kinetic energy. So that doesn't change at all. However, the speed of sound should scale as \propto 1/\sqrt{\mu}: it becomes far higher in the inertia-dampening field, producing helium-voice-like effects. Air molecules inside an inertia-decreasing field would tend to leave more quickly than outside air would enter, producing a pressure difference.

Momentum conservation is a headache

Atlas 6

Changing the velocity so that energy is conserved unfortunately has a drawback: momentum is not conserved!
I throw a heavy object at my inertics machine at velocity v, with momentum mv and energy (1/2)mv^2; it reduces its inertia and increases the speed to v/\sqrt{\mu}, keeps the kinetic energy at (1/2)mv^2, and the momentum is now \sqrt{\mu}\,mv. What if we assume the momentum change comes from the field or machine? When I hit the mass M machine with an object, it experiences a force enough to change its velocity by w=mv(1-\sqrt{\mu})/M. When set to decrease inertia it is pushed back a bit, potentially moving up to speed (m/M)v. When set to increase inertia it is pushed forward, starting to move towards the direction the object impacted from; in fact, it can get arbitrarily large velocities for large \mu. This sounds odd.

Demanding momentum and energy conservation requires mv = \sqrt{\mu}\,mv + Mw (giving the above formula) and mv^2 = \mu m(v/\sqrt{\mu})^2 + Mw^2, which insists that w=0. Clearly we cannot have both. I don't know about you, but I'd rather keep energy conserved. It is more obvious when you cheat about energy conservation. Still, as Einstein pointed out using 4-vectors, momentum and energy conservation are deeply entangled – one reason inertics isn't terribly likely in the real world is that they cannot be separated. We could of course try to conserve 4-momentum (E/c,\gamma \mu m v_x, \gamma \mu m v_y, \gamma \mu m v_z), which would look like changing both energy and normal momentum at the same time.

Energy gain/loss to preserve momentum

Buffer stops

What about just retaining the normal momentum rather than the kinetic energy? The new velocity would be v/\mu, and the new kinetic energy would be K_1=(1/2)\mu m (v/\mu)^2 = (1/2)mv^2/\mu = K_0/\mu. Just like in the kinetic-energy-preserving case the object slows down (or speeds up), but more strongly. And there is an energy debt of K_0(1-1/\mu) that needs to be fixed.

One way of resolving energy conservation is to demand that the change in energy is supplied by the inertia-manipulation device. My ping-pong ball does not change momentum, but (for an inertia reduction to \mu=0.001) requires just under 0.5 J, 999 times its original kinetic energy, to gain the new kinetic energy. The device has to provide that. When the ball leaves the field there will be a surge of energy the device needs to absorb back. Some nice potential here for things blowing up in dramatic ways, a requirement for any self-respecting space opera.

If I want to accelerate my spaceship in this setting, I would point my momentum vector towards the target, reduce my inertia a lot, and then have to provide a lot of kinetic energy from my inertics devices and power supply (actually, store a lot – the energy is a surplus). At first this sounds like it is just as bad as normal rocketry, but in fact it is awesome: I can convert my electricity directly into velocity without having to lug around a lot of reaction mass! I will even get it back when slowing down, a bit like electric brake regeneration systems. The rocket equation does not apply beyond getting some initial momentum. In fact, the less velocity I have from the start, the better. At least in this scheme inertia-reduced reaction mass can be restored to full inertia within the conceptual framework of energy addition/subtraction. One drawback is that now when I run into interplanetary dust it will drain my batteries, as the inertics system needs to give it a lot of kinetic energy (which will then go on harming me!). Another big problem (pointed out by Erik Max Francis) is that turning energy into kinetic energy gives an energy requirement dK/dt = mva, which depends on an absolute speed.
This requires a privileged reference frame, throwing out relativity theory. Oops (but not unexpected).

Energy addition/depletion makes traditional force-fields somewhat plausible: a projectile hits the field, and we use the inertics to reduce its kinetic energy to something manageable. A rifle bullet has a few thousand joules of energy, and if you can drain that, it will harmlessly bounce off your normal armour. Presumably shields will be depleted when the ship cannot dissipate or store the incoming kinetic energy fast enough, causing the inertics to overload and leaving the ship unshielded.

This kind of inertics allows us to accelerate projectiles using the inertics technology itself, essentially feeding them as much kinetic energy as we want. If you first make your projectile super-heavy, accelerate it strongly, and then normalise the inertia, it will speed away with a huge velocity.

A metal rod entering this kind of field will experience the same type of force as in the kinetic-energy-respecting model, but here the field generator will also be working on providing the energy balance: in a sense it will be acting as a generator/motor. Unfortunately it does not look like it could give a net energy gain by having matter flow through.

Note that this kind of device cannot simply be turned off like the previous one: there has to be an energy accounting as everything returns to \mu = 1. The really tricky case is if you are in energy debt: you have an object of lowered inertia in the field, and cut the power. Now the object needs to get a bunch of kinetic energy from somewhere. Sudden absorption of nearby kinetic energy, freezing stuff nearby? That would break thermodynamics (I could set up a perpetual motion heat engine this way). Leaving the inertia-changed object with the changed inertia? That would mean there could be objects and particles with any effective mass – space might eventually be littered with atoms with altered inertia, becoming part of normal chemistry and physics. No such atoms have ever been found, but maybe that is because alien predecessor civilisations were careful with inertial pollution.

Other approaches

Gravity manipulation

Another approach is to say that we are manipulating spacetime so that inertial forces are cancelled by a suitable gravity force (or, for purists, that the acceleration due to something gets cancelled by a counter-acceleration due to spacetime curvature that makes the object retain the same relative momentum). The classic is the "gravitic drive" idea, where the spacecraft generates a gravity field somehow and then free-falls towards the destination. The acceleration can be arbitrarily large, but the crew will just experience freefall. The same goes for accelerating projectiles or making force-fields: they just accelerate/decelerate projectiles a lot. Since momentum is conserved there will be recoil. The force-fields will however be wimpy: essentially the field needs to be equivalent to an acceleration bringing the projectile to a stop over a short distance. Given that normal interplanetary velocities are in the tens of kilometres per second (the escape velocity of Earth, more or less), the gravity field needs to be many, many gs to work. Consider slowing a 20 km/s railgun bullet to a stop over a distance of 10 meters: it needs to happen over a millisecond and requires a 20 million m/s^2 deceleration (about 2 million g).
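A quick sanity check of those numbers, using constant-deceleration kinematics (a = v^2/2d; a minimal Python snippet with the values from the example):

    v = 20e3                 # impact speed, m/s
    d = 10.0                 # stopping distance, m
    a = v**2 / (2 * d)       # required constant deceleration
    t = 2 * d / v            # time to stop from v over distance d
    print(f"a = {a:.2e} m/s^2 = {a / 9.81:.2e} g")
    print(f"t = {t * 1e3:.1f} ms")
    # -> a = 2.00e+07 m/s^2, about 2 million g, over exactly 1.0 ms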
If we go with energy and momentum conservation, we may still need to posit that the inertics/antigravity draws power corresponding to the work it does. Make a wheel turn because of an attracting and repelling field, and the generator has to pay for the work (plus experience a torque). Make a spacecraft go from point A to B, and it needs to pay the potential energy difference, the momentum change, and, at least temporarily, the gain in kinetic energy. And if you demand momentum conservation for a gravitic drive, then you have the drive pulling back with the same "force" as the spacecraft experiences. Note that energy and momentum in general relativity are only locally conserved; at least this kind of drive can handwave some excuse for breaking local momentum conservation by positing that the momentum now resides in an extended gravity field (and maybe gravitational waves). Unlike the previous kinds of inertics this doesn't change the properties of matter, so the effects on objects discussed below do not apply.

One problem is edge tidal effects. Somewhere there is going to be a transition zone with a field gradient: an object passing through is going to experience some extreme shear forces and likely spaghettify. On the other hand, this makes for a nifty weapon for ripping apart targets.

One problem with gravity manipulation is that it normally has to occur through gravity, which is both very weak and only has positive charges. Electromagnetic technology works so well because we can play positive and negative charges against each other, getting strong effects without using (very) enormous numbers of electrons. Gravity (and gravitomagnetic effects) normally only occurs due to large mass-energy densities and momenta. So for this to work there had better be antigravitons, negative mass, or some other way of making gravity behave differently from vanilla relativity. Inertics can at least typically handwave something about the Higgs field.

Forcefield manipulation

This leaves out the gravity part and just posits that you can place force vectors wherever you want. A bit like Iain M. Banks' effector beams. There are no real constraints, because it is entirely made-up physics; it is not clear it respects any particular conservation laws.

Other physical effects

Here are some of the nontrivial effects of changing the inertia of matter (I will leave out gravity manipulation, which has more obvious effects).

Electromagnetism: beware the blue carrot

It is worth noting that this thought experiment does not affect light and other electromagnetic fields: photons are massless. The overall effect is that fields will tend to push charged objects around more or less strongly. A low-inertia electron subjected to a given electric field will accelerate more, a high-inertia electron less. This in turn changes the natural frequencies of many systems: a radio antenna will change its tuning depending on the inertia change. A receiver inside an inertics field will experience outside signals as being stronger (if the field decreases inertia) or weaker (if it increases it). Reducing inertia also increases the Bohr magneton, e\hbar/2\mu m_e. This means that paramagnetic materials become more strongly affected by magnetic fields, and that ferromagnets are boosted. Conversely, higher inertia reduces magnetic effects. Changing inertia would likely change atomic spectra (see below) and hence the optical properties of many compounds.
Many pigments gain their colour from absorption due to conjugated systems (think of carotene or heme) that act as antennas: inertia manipulation will change the absorbed frequencies. Carotene with increased inertia will presumably shift its absorption spectrum towards lower frequencies, becoming redder, while lowered inertia causes a green or blue shift. An interesting effect is that the rhodopsin in the eye will also be affected, and colour vision will experience the same shift (objects will appear to change colour in regions with a different \mu from the place where the observer is, but not inside the observer's own field). Strong enough fields will cause shifts large enough that absorption and transmission outside the visual range will matter, e.g. infrared or UV becomes visible.

However, the above claim that photons should not be affected by inertia manipulation may not hold true. Photons carry momentum, p = \hbar k, where k is the wave vector. So we could assume a factor of 1/\sqrt{\mu} or 1/\mu gets in there, and the field red/blueshifts photons. This would complicate things a lot, so I will leave the analysis to the interested reader. But it would likely make inertics fields visible due to refractive effects.

Chemistry: toxic energy levels, plus a shrink-ray

One area inertics would mess up is chemistry. Chemistry is basically all about the behaviour of the valence electrons of atoms. Their behaviour depends on their distribution between the atomic orbitals, which in turn depends on the Schrödinger equation for the atomic potential. And this equation has a dependency on the mass of the electron and nucleus. If we look at hydrogen-like atoms, the main effect is that the energy levels become E_n = -\mu \frac{M Z^2 e^4}{8 \epsilon_0^2 h^2 n^2}, where M = m_e m_p/(m_e + m_p) is the reduced mass. In short, the inertia-manipulation field scales the energy levels up and down proportionally. One effect is that it becomes much easier to ionise low-inertia materials, and materials that are normally held together by ionic bonds (say NaCl salt) may spontaneously decay when in high-inertia fields. The Bohr radius scales as a_0 \propto 1/\mu: low-inertia atoms become larger.

This really messes with materials. Placed in a low-inertia field atoms expand, making objects such as metals inflate. In a high-inertia field, electrons keep closer to the nuclei and objects shrink. As distances change, the effects of electromagnetic forces also change: internal molecular electric forces, van der Waals forces and things like that change in strength, which will no doubt have effects on biology. Not to mention melting points: reducing the inertia will make many materials melt at far lower temperatures due to larger inter-atomic and inter-molecular distances, while increasing it can make room-temperature liquids freeze because they are now more closely packed. This size change also affects the electron-electron interactions, which among other things shield the nucleus and reduce the effective nuclear charge.

The changed energy levels do not strongly affect the structure of the lightest atoms, so they will likely form the same kind of chemical bonds and have the same chemistry. However, heavier atoms such as copper, chromium and palladium already have orbital filling orders that are slightly off because of the quirks of the energy levels. As \mu deviates from 1 we should expect lighter and lighter atoms to get alternative filling patterns, and this means they will get different chemistry.
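To make the scaling concrete, here is a tiny Python illustration of the two relations above, using ordinary hydrogen's textbook values (13.6 eV, 0.529 Å); the function name and the sample \mu values are my own:

    E1, A0 = -13.6057, 0.529177     # hydrogen ground-state energy (eV), Bohr radius (angstrom)

    def hydrogen_in_field(mu, n=1, Z=1):
        """Energy level (eV) and orbital radius (angstrom) for a hydrogen-like atom in a field mu."""
        return mu * Z**2 * E1 / n**2, A0 * n**2 / (Z * mu)

    for mu in (0.5, 1.0, 2.0):
        E, a = hydrogen_in_field(mu)
        print(f"mu = {mu}: E_1 = {E:+.2f} eV, a_0 = {a:.3f} angstrom")
    # mu = 0.5 halves the ionisation energy and doubles the atom's size;
    # mu = 2 doubles the binding and shrinks the atom to half size.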
Given that copper and chromium are essential for some enzymes, such altered chemistry does not bode well – if copper no longer works in cytochrome oxidase, the respiratory chain will lethally crash.

If we allow permanently inertia-altered particles, chemistry can get extremely weird. An inertia-changed electron would orbit in a different way than a normal one, giving the atom it resided in entirely different chemical properties. Each changed electron could have its own individual inertia. Presumably such particles would randomise chemistry wherever they resided, causing all sorts of odd reactions and compounds not normally seen. The overall effect would likely be pretty toxic, since it would on average tend to catalyse the decay of metastable high-energy, low-entropy structures in biochemistry into lower-energy, higher-entropy states.

Lowering inertia in many ways looks like heating things up: particles move faster, chemicals diffuse more, and things melt. Given that much of biochemistry is tremendously temperature-dependent, this suggests that even slight changes of \mu to 0.99 or 1.01 would be enough to create many of the bad effects of high fever or hypothermia, and a bit more would be directly lethal as proteins denature.

Fluids: I need a lie down

Inside a lowered-inertia field matter responds more strongly to forces, and this means that fluids flow faster for the same pressure difference. Buoyancy causes stronger convection. For a given velocity, the inertial forces are reduced compared to the viscosity, lowering the Reynolds number and making flows more laminar. Conversely, enhanced-inertia fluids are hard to get moving, but at a given speed they will be more turbulent. This will really mess up the sense of balance, and likely blood flow too.

Gravity: equivalent exchange

I have ignored the equivalence of inertial and gravitational mass. One way for me to get away with it is to claim that they are still equivalent, since everything occurs within some local region where my inertics field is acting: all objects get their inertial mass multiplied by \mu, and this also changes their gravitational mass. The equivalence principle still holds. What if there is no equivalence principle? I could make a 1 kg object and a 1 gram object fall at different accelerations. If I had a massless spring between them it would be extended, and I would gain energy. Besides the work done by gravity to bring down the objects (which I could collect and use to put them back where they started), I would now have extra energy – aha, another perpetual motion machine! So we had better stick to the equivalence principle.

Given that boosting inertia makes matter both tend to shrink to denser states and exert more gravitational force, an important worldbuilding issue is how far I will let this process go. Using it to help fission or fusion seems fine. Allowing it to squeeze matter into degenerate states or neutronium might be more world-changing. And easy making of black holes is likely incompatible with the survival of civilisation. [Still, destroying planets with small black holes is harder than it looks. The traditional "everything gets sucked down into the singularity" scenario is surprisingly slow. If you model it using spherical Bondi accretion you need an Earth-mass black hole to make the sun implode within a year or so, and a 3\cdot 10^{19} kg asteroid-mass black hole to implode the Earth. And the extreme luminosity slows things a lot more.
A better way may be to use an evaporating black hole to irradiate the solar system instead, or blow something up, sending big fragments.]

Another fun use of inertics is of course to mess up stars directly. This does not work with the energy addition/depletion model, but the velocity-change model would allow creating a region of increased inertia where density ramps up: plasma enters the volume and may start descending below the spot. Conversely, reducing inertia may open a channel where it is easier for plasma from the interior to ascend (especially since it would be lighter). Even if one cannot turn this into a black hole or trigger surface fusion, it might enable directed flares as the plasma drags electromagnetic field lines with it.

The probe was invisible on the monitor, but its effects were obvious: titanic volumes of solar plasma were sucked together into a strangely geometric sunspot. Suddenly there was a tiny glint in the middle and a shock-wave: the telemetry screens went blank.

"Seems your doomsday weapon has failed, professor. Mad science clearly has no good concept of proper workmanship."

"Stay your tongue. This is mad engineering: the energy ran out exactly when I had planned. Just watch."

Without the probe sucking it together the dense plasma was now wildly expanding. As it expanded it cooled. Beyond a certain point it became too cold to remain plasma: there was a bright flash as the protons and electrons recombined and the vortex became transparent. Suddenly neutral, the matter no longer constrained the tortured magnetic field lines and they snapped together at the speed of light. The monitor crashed.

"I really hope there is no civilization in this solar system sensitive to massive electromagnetic pulses," the professor gloated in the dark.

Model: Preserve kinetic energy
Pros: Nice armour. Fast spacecraft with no energy needs (but weird momentum changes).
Cons: Interplanetary dust is a problem. Inertics cannons inefficient. Toxic effects on biochemistry.

Model: Preserve momentum
Pros: Nice classical forcefield. Fast spacecraft with energy demands. Inertics cannons work. Potential for cool explosions due to overloads.
Cons: Interplanetary dust drains batteries. Extremely weird issues of energy debts: either breaking thermodynamics or getting altered-inertia materials. Toxic effects on biochemistry. Breaks relativity.

Model: Gravity manipulation
Pros: No toxic chemistry effects. Fast spacecraft with energy demands. Inertics cannons work.
Cons: Forcefields wimpy. Gravitic drives are iffy due to momentum conservation (and are WMDs). Gravity is more obviously hard to manipulate than inertia. Tidal edge forces.

In both models where actual inertia is changed, inertics fields appear pretty lethal. A brief brush with a weak field will likely just be incapacitating, but prolonged exposure is definitely going to kill. And extreme fields are going to do very nasty stuff to most normal materials – making them expand or contract, melt, change chemical structure and whatnot. Hence spacecraft, cannons and other devices using inertics need to be designed to handle these effects. One might imagine placing the crew compartment in a counter-inertics field keeping \mu = 1 while the bulk of the spacecraft is surrounded by other fields. A failure of this counter-inertics field does not just instantly turn the crew into tuna paste, but into blue toxic tuna paste. Gravity manipulation is cleaner, but this is not necessarily a plus from the cool-fiction perspective: sometimes bad side effects are exactly what world-building needs.
I love the idea of inertics as an anti-personnel or assassination weapon through its biochemical effects, or of "forcefields" being super-dense metal with amplified inertia protecting against high-velocity or beam impact. The atomic rocket page makes a big deal out of how reactionless propulsion makes space-opera-destroying weapons of mass destruction (if every tramp freighter can be turned into a relativistic missile, how long is the Imperial Capital going to last?). This is a smaller problem here: being hit by an inertia-reduced freighter hurts less, even when it is very fast (think of being hit by a fast ping-pong ball). Gravity propulsion still enables some nasty relativistic weaponry, and if you spend time adding kinetic energy to your inertia-reduced missile it can become pretty nasty too. But even if the reactionless aspect does not trivially produce WMDs, inertia manipulation will produce a fair number of other risky possibilities. However, given that even a normal space freighter is a hypervelocity missile, the problem lies more in how to conceptualise a civilisation that regularly handles high-energy objects in the vicinity of centres of civilisation.

Not discussed here are issues of how big the fields can be made. Could we reduce the inertia of an asteroid or planet, sending it careening around? That has some big effects on the setting. Similarly, how small can we make the inertics: do they require a starship to power them, or could we have them in epaulettes? Can they be counteracted by another field?

Inertia-changing devices are really tricky to get to work consistently; most space opera SF using them just conveniently ignores the mess – just as it ignores how FTL gives rise to time travel, or that talking droids ought to totally transform the global economy. But it is fun to think through the awkward aspects, since some of them make the world-building more exciting. Plus, I would rather discover them before my players, so I can make official handwaves of why they don't matter if they are brought up.

Uriel's stacking problem

In Scott Alexander's kabbalistic sf story Unsong, the archangel Uriel works on a problem while other things are going on in heaven:

All the angels listened in rapt attention except Uriel, who was sort of half-paying attention while trying to balance several twelve-dimensional shapes on top of each other.

There was utter silence throughout the halls of Heaven, except a brief curse as Uriel's hyperdimensional tower collapsed on itself and he picked up the pieces to try to rebuild it.

A great clamor arose from all the heavenly hosts, save Uriel, who took advantage of the brief lapse to conjure a parchment and pen and start working on a proof about the optimal configuration of twelve-dimensional shapes.

(Chapter 20, When the stars threw down their spears)

A polytope on a plane

This got me thinking about the stability of stacking polytopes. That seemed complicated (I am no archangel), so I started toying with the stability of polytopes on a flat surface. (Terminology note: I will consistently use "face" to denote the (D-1)-dimensional elements that bound the polytope, although "facet" is in some use.) A face of a 3D polyhedron is stable if the polyhedron can rest on it without tipping over. This means that the projection of the center of mass onto the plane containing the face is inside the polygon. The Platonic polyhedra are stable on all faces, but it is not hard to make a few faces unstable by moving a vertex far away from the center.
A polyhedron has at least one stable face (if it did not, it would be a perpetual motion device: every tip moves the center of mass downwards, but there is a bound on how low it can go). A uni-stable or monostatic polyhedron has just one stable face. It is an unsolved problem what the simplest uni-stable 3D polyhedron is, with the current record being 14 faces. Also, it seems unclear whether there are monostatic simplices in dimension 9 (they exist in 10 or more dimensions, but not in 8 or fewer).

So, how many faces of a polytope will typically be unstable? I wrote a Matlab script to generate random convex polytopes by selecting N points randomly on the surface of a D-dimensional sphere and calculating their convex hull. Using a Delaunay decomposition I can split them into simplices, which allows me to calculate the center of mass. The center of mass of a simplex is just the average of the corners, \vec{x}^c_j = \frac{1}{D+1}\sum_{i=1}^{D+1} \vec{x}_{ij}, and the center of mass of the polyhedron is the volume-weighted average of the simplex centers of mass: \vec{x}^p = \sum_j V_j \vec{x}^c_j / \sum_j V_j. The volume of a simplex is V_j = (1/D!)\,\mathrm{det}(X_j), where X_j = [x_{1j}; x_{2j}; \ldots; x_{Dj}] is the matrix made by sticking together the coordinate vectors of the simplex. Once we know this we can project the center of mass onto the plane of a face by finding its nullspace (the higher-dimensional counterpart of a normal): \vec{p} = \vec{x}^p - (\vec{x}^p \cdot \vec{n})\vec{n}. Finally, to check whether the projection is inside the face, we can look at the matrix A, where each column is the coordinates of one of the face's vertices minus \vec{p} and the final row is all ones, and solve Ax = b, where b is zero except for a one in the last row (I found this neat algorithm due to elisbben on Stack Overflow). If the answer vector is all positive, then the point is inside the face. Repeat for all the faces. Whew. This math is of course really simple to do in Matlab.

[Figure: Stability of a random polyhedron, with stable (yellow) and unstable (blue) faces. The center of mass is marked by a circle. It is projected along the dotted lines into the plane of each face, marked with a square (if inside the face, which is hence stable) or a cross (if outside the face, which is hence unstable). A dotted line connects the projection points to the center of their face.]

The 12-dimensional case is a bit messier:

[Figure: Projection of a 12D polytope with 20 vertices. Each of the 2777 faces is an 11-dimensional simplex.]

So, what is the average fraction of stable faces on a 3D polyhedron?

[Figure: Fraction of stable faces on 3D convex hulls of N points on a sphere.]

It tends to converge to 50%. Doing this in higher dimensions shows the same kind of convergence, although to lower fractions.

[Figure: Fraction of stable faces on 4D convex hulls of N points on a sphere.]

[Figure: Fraction of stable faces of N=100 convex hulls in different dimensions. Red line: exponential fit.]

It looks like the fraction of stable faces declines exponentially with dimensionality. Does this mean that for a sufficiently high dimension it is likely that a random polytope is uni-stable? The answer is no: the number of faces increases pretty much exponentially (as 2^{1.7680 D}), but the number of stable faces also increases exponentially with D (as 2^{0.9273 D}).

[Figure: Combined plot of the number of faces (points with red line) and stable faces (points with green line) as a function of dimension, for N=100.]
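For the curious, here is a minimal Python sketch of the procedure just described (the original was a Matlab script; this reimplementation with scipy's ConvexHull, and all helper names, are my own illustration – it uses the hull facets directly rather than an explicit Delaunay call):

    import numpy as np
    from scipy.spatial import ConvexHull

    def stable_face_fraction(N, D, rng):
        # N random points on the unit D-sphere; their convex hull is the polytope.
        pts = rng.standard_normal((N, D))
        pts /= np.linalg.norm(pts, axis=1, keepdims=True)
        hull = ConvexHull(pts)

        # Center of mass: split the hull into simplices (each facet plus the
        # origin, which lies inside the hull) and volume-average their centroids.
        com, total = np.zeros(D), 0.0
        for face in hull.simplices:
            vol = abs(np.linalg.det(pts[face]))   # proportional to simplex volume
            com += vol * pts[face].sum(axis=0) / (D + 1)
            total += vol
        com /= total

        stable = 0
        for face, eq in zip(hull.simplices, hull.equations):
            n, offset = eq[:D], eq[D]             # unit outward normal and offset
            p = com - (com @ n + offset) * n      # project COM onto the face plane
            # Inside test: barycentric coordinates must all be non-negative.
            A = np.vstack([pts[face].T, np.ones(D)])
            lam = np.linalg.lstsq(A, np.append(p, 1.0), rcond=None)[0]
            stable += np.all(lam >= 0)
        return stable / len(hull.simplices)

    rng = np.random.default_rng(1)
    print(stable_face_fraction(100, 3, rng))      # tends towards about 0.5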
This was based on runs with N=100. Obviously things go much faster if you select a lower N, such as 30. However, as you approach N=D the polytopes become more and more simplex-like, and simplices tend to both have fewer faces and be less stable in high dimensions, so the exponential growth stops. This actually happens well before D reaches N; for N=30 the effect is felt already in 11 dimensions. The face growth rates were also lower, with coefficients 1.1621 and 0.4730.

[Figure: Number of faces and stable faces for N=30 random convex hulls in different dimensions.]

(There are some asymptotic formulas known for the growth of the number of faces of random convex hulls; they grow linearly with N but at an accelerating rate with D.)

Stuart Armstrong gave me a very heuristic argument for why there would be so many unstable faces. Consider building up the polytope vertex by vertex, essentially just adding together the simplices from the Delaunay decomposition. If you start from a stable state, eventually you will likely end up with an unstable face. Adding the next vertex will add a simplex to the polyhedron, and the center of mass will move in the direction of the new simplex. To make the face stable again, the shift in the center of mass needs to be large enough along the directions parallel to the face to bring the projection back inside the face. But in high-dimensional spaces there are many directions you can move in: the probability of a random vector being nearly parallel to another vector is very low. Hence, the next step and the following ones are likely to preserve the instability. So high-dimensional polytopes are likely to have many unstable faces even if they are nicely inscribed in spheres.

The number of steps the polytope rolls over until finding a stable face is also limited: the "drainage basin" of a stable face is a tree, with a branching degree set by D-1 (if the faces are (D-1)-simplices). So the number of steps will scale as \log_{D-1}(2^{(1.7680 - 0.9273)D}) = 0.8407 D \ln(2)/\ln(D-1) \propto D/\ln(D). Even high-dimensional polytopes will stop flipping quickly in general. (A uni-stable polytope, on the other hand, can run through at least half of its faces, so there are some very slow ones too.)

The expected minimum distance between two points on this kind of random polytope scales as N^{-2/D} (if they were optimally distributed it would be N^{-1/D}). At the same time, if N is relatively small compared to D (the polytope is simplex-like), the average diameter (the longest edge) of each face seems to approach \sqrt{\pi}. Why? I think this is because of \Gamma(1/2) = \sqrt{\pi}, the mean of a flipped k=2 Weibull distribution that shows up because of extreme value theory. Meanwhile the average and median chord length between random points on hyperspheres tends towards \sqrt{2}. Faces hence tend to be fairly wide unless N is large compared to D, but there will typically always be a few very narrow ones that are tricky to balance on.

Stacking no-slip polytopes

What about stacking polytopes? If you put a polytope on top of another one (assuming no slipping), at first it seems you need to use a stable face of the top polytope, but this is neither sufficient nor necessary. Since the underlying face is likely tilted from the horizontal, the vertical projection of the center of mass has to be within the top face. The upper polytope can be rotated, moving the projection point. The tilt angle \theta (or rather, tilt angles – we are doing this in higher dimensions, remember?)
generates a hypersphere of radius d\tan(\theta) around the normal projection point (which is at distance d from the center of mass) where the vertical projection can intersect the face. Only the parts of the hypersphere surface that are inside the face represent orientations that are stable. Even an unstable face can (sometimes) be stabilized if you turn it so that the tilted projection is inside, but for sufficiently high angles the hypersphere will be bigger than the face and it cannot be stable.

[Figure: Stability of a polyhedron on a tilted surface. The line of gravity from the center of mass intersects the inside of the bottom face, so the polyhedron is resting stably. Turning the polyhedron will move the line to some point on the circle, but since all points on the circle are inside the face, all orientations are stable.]

[Figure: Stability of a polyhedron on a tilted surface. The line of gravity from the center of mass intersects outside the bottom face, so the polyhedron is unstable and will flip over. Turning the polyhedron will move the line to some other point on the circle: since some points on the circle are inside the face, there are some orientations that are stable.]

Having the top polytope stay in place is the first requirement. The second is that the bottom polytope should not become unstable. The new center of mass is moved to a point somewhere along the line connecting the individual centers of mass of the polytopes, with the exact position dependent on their volume ratio (note that turning the top polytope can move the center of mass too). This moves the projection point along the plane of the bottom face, and if it gets outside that face the assembly will tip over. One can imagine this as adding random (D-1)-dimensional vectors of length 1/N until they reach the edge of the face. I am a bit uncertain about the properties of such random walks (all works on decreasing-step-size walks I have seen have been in 1D). The harmonic random walk in 1D apparently converges with probability 1, so I think the (D-1)-dimensional one does too, since the distance from the origin to the walker will be smaller than if the walker just kept to a 1D line. Since the expected distance traversed in 1D is E[|X|] \approx 1.0761, this is actually not a very extreme shift. Given the surprisingly large diameters of the faces when N \ll D, the first condition might be tougher to meet than the second, but this is just a guess.

The no-slip constraint is important. If the polytopes are frictionless, then any transverse force will move them. Hence only polytopes that have some parallel top and bottom stable faces can be stacked, and the problem becomes simpler. There are still surprises there, though: even stacks of rectangular blocks can do surprising things. The block stacking problem also demonstrates that one can have 1/N overhangs (counting downwards), enabling arbitrarily large total overhangs without tipping over. With polytopes whose shapes act as counterweights, the overhangs can be even larger.

Uriel's stacking problems

This leads to what we might call "Uriel's stacking problem": given a collection of no-slip convex D-dimensional polytopes, what is the tallest tower that can be constructed from them? I suspect that this problem is NP-hard. It sounds very much like a knapsack problem, but there is a dependency on previous steps when you add a new polytope that seems to make it harder.
It seems that it would not be too difficult to fool a greedy algorithm that just tries to put the next polytope on the topmost face into adding one that makes subsequent steps too unstable, forcing backtracking. Another related problem: if the polytopes are random convex hulls of N points, what is the distribution of maximum tower heights? What if we just try random stacking? And finally, what is the maximum overhang that can be achieved by stacking polytopes from a given set?
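As a baseline for that last question: for the classical case of identical rectangular blocks the answer is known – the k-th block from the top can overhang the one below by 1/(2k), so the total overhang grows like the harmonic series, without bound. A quick check (plain Python, my own illustration):

    def max_overhang(n):
        # classical maximum overhang of n identical unit-length blocks
        return sum(1.0 / (2 * k) for k in range(1, n + 1))

    for n in (4, 100, 1_000_000):
        print(f"{n} blocks: overhang = {max_overhang(n):.2f} block lengths")
    # 4 blocks already overhang by just over one full block length;
    # the growth is logarithmic but unbounded.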
Principal quantum number

From Wikipedia, the free encyclopedia

In quantum mechanics, the principal quantum number (symbolized n) is one of four quantum numbers which are assigned to all electrons in an atom to describe that electron's state. As a discrete variable, the principal quantum number is always an integer. As n increases, the number of electronic shells increases and the electron spends more time farther from the nucleus. As n increases, the electron is also at a higher energy and is, therefore, less tightly bound to the nucleus. The total energy of an electron, as described below, is a negative inverse quadratic function of the principal quantum number n.

The principal quantum number was first created for use in the semiclassical Bohr model of the atom, distinguishing between different energy levels. With the development of modern quantum mechanics, the simple Bohr model was replaced with a more complex theory of atomic orbitals. However, the modern theory still requires the principal quantum number.

Apart from the principal quantum number, the other quantum numbers for bound electrons are the azimuthal quantum number ℓ, the magnetic quantum number mℓ, and the spin quantum number s. There is a set of quantum numbers associated with the energy states of the atom. The four quantum numbers n, ℓ, mℓ, and s specify the complete and unique quantum state of a single electron in an atom, called its wave function or orbital. Two electrons belonging to the same atom cannot have the same values for all four quantum numbers, due to the Pauli exclusion principle.

The wavefunction of the Schrödinger wave equation reduces to three equations that, when solved, lead to the first three quantum numbers. Therefore, the equations for the first three quantum numbers are all interrelated. The principal quantum number arose in the solution of the radial part of the wave equation as shown below.

The Schrödinger wave equation describes energy eigenstates with corresponding real numbers E_n and a definite total energy, the value of E_n. The bound state energies of the electron in the hydrogen atom are given by:

E_n = \frac{E_1}{n^2} = \frac{-13.6\ \text{eV}}{n^2}, \qquad n = 1, 2, 3, \ldots

The parameter n can take only positive integer values. The concept of energy levels and notation were taken from the earlier Bohr model of the atom. Schrödinger's equation developed the idea from a flat two-dimensional Bohr atom to the three-dimensional wave function model.

In the Bohr model, the allowed orbits were derived from quantized (discrete) values of orbital angular momentum L according to the equation

L = n\hbar = \frac{nh}{2\pi}

where n = 1, 2, 3, … and is called the principal quantum number, and h is Planck's constant. This formula is not correct in quantum mechanics, as the angular momentum magnitude is described by the azimuthal quantum number, but the energy levels are accurate and classically they correspond to the sum of potential and kinetic energy of the electron.

The principal quantum number n represents the relative overall energy of each orbital. The energy level of each orbital increases as its distance from the nucleus increases. The sets of orbitals with the same n value are often referred to as electron shells or energy levels.

The minimum energy exchanged during any wave-matter interaction is the product of the wave frequency multiplied by Planck's constant. This causes the wave to display particle-like packets of energy called quanta. The difference between energy levels that have different n determines the emission spectrum of the element.
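For example, the photon emitted in a transition from n = 3 to n = 2 in hydrogen carries the energy difference of the two levels:

\Delta E = 13.6\left(\frac{1}{2^2} - \frac{1}{3^2}\right)\ \text{eV} \approx 1.89\ \text{eV},

which corresponds to the red H-alpha line of the Balmer series at about 656 nm.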
In the notation of the periodic table, the main shells of electrons are labeled K (n = 1), L (n = 2), M (n = 3), etc. based on the principal quantum number.

The principal quantum number is related to the radial quantum number, n_r, by:

n = n_r + \ell + 1

where \ell is the azimuthal quantum number and n_r is equal to the number of nodes in the radial wavefunction.

The definite total energy for a particle motion in a common Coulomb field and with a discrete spectrum is given by:

E_n = -\frac{\hbar^2}{2 m a_B^2 n^2} = -\frac{13.6\ \text{eV}}{n^2},

where
• a_B is the Bohr radius,
• n is the principal quantum number.

This discrete energy spectrum, which resulted from the solution of the quantum-mechanical problem of electron motion in the Coulomb field, coincides with the spectrum that was obtained with the help of the Bohr–Sommerfeld quantization rules applied to the classical equations. The radial quantum number determines the number of nodes of the radial wave function R(r).[1]

References

1. ^ Andrew, A. V. (2006). "2. Schrödinger equation". Atomic spectroscopy. Introduction of theory to Hyperfine Structure. p. 274. ISBN 978-0-387-25573-6.
Some Shortcomings of Science

Repost from Sabine Hossenfelder's blog "Backreaction"

Two posts this week. The first is more for scientists, but I think it mentions points that people reading about science should recognise as possibly being there. Sabine has been somewhat critical of some of modern science, and I feel she has a point. I shall do a post of my own on this topic soon, but it might be of interest to read the post following this one to see what sort of things can go wrong for a scientist.

Phlogiston – Early Science at Work

One of the earlier scientific concepts was phlogiston, and it is of interest to follow why this concept went wrong, if it did. One of the major problems for early theory was that nobody knew very much. Materials had properties, and these were referred to as principles, which tended to be viewed either as abstractions, or as physical but weightless entities. We would not have such difficulties, would we? Um, spacetime?? Anyway, they then observed that metals did something when heated in air:

M + air + heat → M(calx) ± ???

(A calx was what we call an oxide.) They deduced there had to be a metallic principle that gives the metallic properties, such as ductility, lustre, malleability, etc., but they then noticed that gold refuses to make a calx, which suggested there was something else besides the metallic principle in metals. They also found that the calx was not a mixture: thus rust, unlike iron, is not attracted to a lodestone. This may seem obvious to us now, but conceptually it was significant. For example, if you mix blue and yellow paint you get green, and they cannot readily be unmixed; nevertheless it is a mixture. Chemical compounds are not mixtures, even though you might make them by mixing two materials.

Even more important was the work of Paracelsus, the significance of which is generally overlooked. He noted there were a variety of metals, calces and salts, and he generalized that acid plus metal, or acid plus metal calx, gave salts, and each salt was specifically different and depended only on the acid and metal used. He also recognized that what we call chemical compounds were individual entities that could be, and should be, purified.

It was then that Georg Ernst Stahl introduced into chemistry the concept of phlogiston. It was well established that certain calces reacted with charcoal to produce metals (but some did not) and that the calx was usually heavier than the metal. The theory was that the metal took something from the air, which made the calx heavier. This is where things became slightly misleading, because burning zinc gave a calx that was lighter than the metal. For consistency, they asserted it should have gained weight, but as evidence poured in that it had not, they put that evidence in a drawer and did not refer to it. Their belief that it should have was correct, and indeed it did, but this avoidance of the "data you don't like" leads to many problems, not the least of which is "inventing" reasons why observations do not fit the theory rather than taking the trouble to abandon the theory. This time they were right, but that only encourages the act. As to why there was a problem at all: zinc oxide is relatively volatile and would fume off, so they lost some of the material. Problems with experimental technique and equipment really led to a lot of difficulties, but who amongst us would do better, given what they had?
Stahl knew that various things combusted, so he proposed that flammable substances must contain a common principle, which he called phlogiston. Stahl then argued that metals forming calces was in principle the same as materials like carbon burning, which is correct. He then proposed that phlogiston was usually bound or trapped within solids such as metals and carbon, but in certain cases could be removed. If so, it was taken up by a suitable air, but because the phlogiston wanted to get back to where it came from, it got as close as it could and took the air with it. It was the phlogiston trying to get back to where it came from that held the new compound together. This offered a logical explanation for why the compound actually existed, and was a genuine strength of this theory. He then went wrong by arguing that the more phlogiston, the more flammable the body, which is odd, because having said that some but not all such materials could release phlogiston, he might have thought that some might release it more easily than others. He also argued that carbon was particularly rich in phlogiston, which was why carbon turned calces into metals with heat. He also realized that respiration was essentially the same process, and that fire or breathing releases phlogiston, making phlogisticated air, and he further realized that plants absorbed such phlogiston, making dephlogisticated air. For those who know, this is all reasonable, but it happens to be a strange mix of good and bad conclusions. The big problem for Stahl was that he did not know that "air" was a mixture of gases. A lesson here is that very seldom does anyone single-handedly get everything right, and when they do, it is usually because everything covered can be reduced to a very few relationships to which numerical values can be attached, and at least some of these are known in advance. Stahl's theory was interesting because it got chemistry going in a systematic way, but because we don't believe in phlogiston, Stahl is essentially forgotten. People have blind spots.

Priestley also carried out Lavoisier's experiment:

2HgO + heat ⇌ 2Hg + O2

and found that mercury was lighter than the calx, so he argued phlogiston was lighter than air. He knew there was a gas there, but the fact that it must also have weight eluded him. Lavoisier's explanation was that hot mercuric oxide decomposed to form metal and oxygen. This is clearly a simpler explanation. One of the most important points made by Lavoisier was that in combustion, the weight increase of the products exactly matched the loss of weight by the air, although there is some cause to wonder about the accuracy of his equipment to get "exactly" – measuring the weight of a gas with a balance is not that easy. However, Lavoisier established the fact that matter is conserved, and that in chemical reactions various species react according to equivalent weights. Actually, the conservation of mass was discovered much earlier by Mikhail Lomonosov, but because he was in Russia, nobody took any notice. The second assertion caused a lot of trouble because it is not true without a major correction to allow for valence. Lavoisier also disposed of the weightless substance phlogiston simply by ignoring the problem of what held compounds together. In some ways, particularly in the use of the analytical balance, Lavoisier advanced chemistry, but in disposing of phlogiston he significantly retarded it.

So, looking back, did phlogiston have merit as a concept? Most certainly!
"The metal gives off a weightless substance that sticks to a particular gas" can be replaced with "the metal gives off an electron to form a cation, and the oxygen accepts the electron to form an anion". Opposite charges attract and try to bind together. This is, for the time, a fair description of the ionic bond. As for weightless, nobody at the time could have determined the weight difference between a metal and a metal less one electron, even if they could have worked out how to make it. Of course the next step is to say that the phlogiston is a discrete particle, and then valence falls into place and modern chemistry is around the corner. Part of the problem there was that nobody believed in atoms. Again, Lomonosov apparently did, but as I noted above, nobody took any notice of him. Of course, it is far easier to see these things in retrospect. My guess is very few modern scientists, if stripped of their modern knowledge and put back in time, would do any better. If you think you could, recall that Isaac Newton spent a lot of time trying to unravel chemistry and got nowhere. There are very few ever that are comparable to Newton.

Is Science in as Good a Place as it Might Be?

Most people probably think that science progresses through all scientists diligently seeking the truth, but that illusion was shattered when Thomas Kuhn published "The Structure of Scientific Revolutions". Is that true, and if so, why? I think it follows from the way science is learned and then funded. In general, scientists gain their expertise by learning from a mentor: if you do a PhD, you work for several years in a very narrow field, and most of the time the student follows the instructions of the supervisor. He will, of course, discuss issues with the supervisor, but basically the young scientist will have acquired a range of techniques when finished. He will then go on a series of post-doctoral fellowships, generally in the same area, because he has to persuade the new team leaders he is sufficiently skilled to be worth hiring. So he gains more skill in the same area, but invariably he also becomes more deeply submerged in the standard paradigm. At this stage of his life, it is extremely unusual for the young scientist to question whether the foundations of what he is doing are right, and since most continue in this field, they have the various mentors' paradigm well ingrained. To continue, either they find a company or other organization to get an income from, or they stay in a research organization, where they need funding. When they apply for it they keep well within the paradigm; first, it is the easiest route to success, and also boat-rockers generally get sunk right then. To get funding, you have to show you have been successful; success is measured mainly by the number of scientific papers and the number of citations. Accordingly, you choose projects that you know will work and should not upset any apple-carts. You cite those close to you, and they will cite you; accuse them of being wrong and you will be ignored, and with no funding, tough. What all this means is that the system seems to have been designed to generate papers that confirm what you already suspect. There will be exceptions, such as "discovering dark matter", but all that has done so far is to design a parking place for what we do not understand. Because we do not understand, all we can do is make guesses as to what it is, and the guesses are guided by our current paradigm, and so far our guesses are wrong.
One small example follows to show what I mean. By itself it may not seem important, and perhaps it isn't. There is an emerging area of chemistry called molecular dynamics. What this tries to work out is how energy is distributed in molecules, as this distribution alters chemical reaction rates, and this can be important for some biological processes. One such feature is to try to relate how molecules, especially polymers, can bend in solution. I once went to hear a conference presentation where this was discussed, and the form of the bending vibrations was assumed to be simple harmonic, because for that the maths are simple, and anything wrong gets buried in various "constants". All question time was taken up by patsy questions from friends, but I got hold of the speaker later and pointed out that I had published a paper a long time previously that showed the vibrations were not simple harmonic, although that was a good approximation for small vibrations. The problem is that small vibrations are irrelevant if you want to see significant chemical effects; those come from large vibrations. Now, the "errors" can be fixed with a sequence of anharmonicity terms, each with their own constant, and each constant is worked around until the desired answer is obtained. In short, you get the answer you need by adjusting the constants. The net result is that good agreement with observation is claimed once the "constants" are found for the given situation. The "constants" appear to be only constant for a given situation, so arguably they are not constants, and worse, it can be near impossible to find out what they are from the average paper. Now, there is nothing wrong with using empirical relationships, since if they work they make it a lot easier to carry out calculations. The problem starts when you do not know why a relationship works: you may use it under circumstances where it no longer works. Now, before you say that surely scientists want to understand, consider the problem for the scientist: maybe there is a better relationship, but changing over to use it would involve re-writing a huge amount of computer code. That may take a year or so, in which time no publications are generated, and when the time for applications for further funding comes up, besides having to explain the inactivity, you have to explain why you were wrong before. Who is going to do that? Better to keep cranking the handle, because nobody is going to know the difference. Does this matter? In most cases, no, because most science involves making something or measuring something, and most of the time it makes no difference, and also most of the time the underpinning theory is actually well established. The NASA rockets that go to Mars very successfully go exactly where planned using nothing but good old Newtonian dynamics, some established chemistry, some established structural and material properties, and established electromagnetism. Your pharmaceuticals work because they have been empirically tested and found to work (at least most of the time). The point I am making is that nobody has time to go back and check whether anything is wrong at the fundamental level. Over history, science has been marked by a number of debates, and a number of treasured ideas overthrown. As far as I can make out, since 1970 far more scientific output has been produced than in all previous history, yet there have been no fundamental ideas generated during this period that have been accepted, nor have any older ones been overturned.
Either we have reached a stage of perfection, or we have ceased looking for flaws. Guess which!

Is science being carried out properly?

How do scientists carry out science, and how should they? These are questions that have been raised by reviewers in a recent edition of Science magazine, one of the leading science journals. One of the telling quotes is that "resources (that) influence the course of science are still more rooted in traditions and intuitions than in evidence." What does that mean? In my opinion, it is along the lines of: to those who have, much will be given. "Much" here refers to much of what is available. Government funding can be tight. And in fairness, those who provide funds want to see something for their efforts, and they are more likely to see something from someone who has produced results consistently in the past. The problem is, the bureaucrats responsible for providing the funds have no idea of the quality of what is produced, so they tend to count scientific papers. This favours the production of fairly ordinary stuff, or even rubbish. Newbies are given a chance, but there is a price: they cannot afford to produce nothing. So what tends to happen is that funds are driven towards work that is difficult to fail at, except maybe for some very large projects, like the Large Hadron Collider. The most important thing required is that something is measured, and that the something is more or less understandable and acceptable by a scientific journal, for that is a successful result. In some cases, the question, "Why was that measured?" would best be answered, "Because it was easy." Even the Large Hadron Collider fell into that zone. Scientists wanted to find the Higgs boson and supersymmetry particles. They found the first, and I suppose when the question of building the collider came up, the reference (totally not apt) to the "God particle" did not hurt. However, while getting research funding for things to be measured is difficult, getting money for analyzing what we know, or for developing theories (other than doing applied mathematics on existing theories), is virtually impossible. I believe this is a problem, particularly for analyzing what we know. We are in the quite strange position that while in principle we have acquired a huge amount of data, we are not always sure of what we know. To add to our problems, anything found more than twenty years ago is as likely as not to be forgotten. Theory is thus stagnating. With the exception of cosmic inflation, there have been no new major theories that have taken hold since about 1970. Yet far more scientists have been working during this period than in all of previous history. Of course this may merely be due to the fact that new theories have been proposed but nobody has accepted them. A quote from Max Planck, who effectively started quantum mechanics, may shed light on this: "A new scientific truth does not triumph
by convincing its opponents and making them see the light, but rather because its opponents eventually die." Not very encouraging. Another reason may be that a new theory failed to draw attention to itself. No scientist these days can read more than an extremely tiny fraction of what is written, as there are tens of millions of scientific papers in chemistry alone. Computer searching helps, but only for well-defined problems, such as a property of some material. How can you carefully define what you do not know exists?

Further information from this Science article provided some interest. An investigation led to what non-scientists might consider a highly odd result: for scientific papers to be a hit, it was found that usually at least 90 per cent of what is written is well established. Novelty might be prized, but unless it is well mixed with the familiar, nobody will read it, or even worse, it will not be published. That, perforce, means that in general there will be no extremely novel approach; rather, anything new will be a tweak on what is established. To add to this, a study of "star" scientists who died prematurely led to an interesting observation: the output of their collaborators fell away, which indicates that only the "star" was contributing much intellectual effort, and probably actively squashing dissenting views, whereas new entrants to the field who were starting to shine tended not to have done much in that field before the "star" died. A different reviewer noticed that many scientists put very little effort into citing past discoveries, and when citing literature, the most important of it is about five years old. There will be exceptions, usually through citing papers by the very famous, but I rather suspect in most cases these are cited more to show the authors in a good light than for any subject illumination. Another reviewer noted that scientists appear to be narrowly channeled in their research by the need to get recognition, which requires work familiar to the readers and reviewers, particularly those that review funding applications. The important thing is to keep up an output of "good work", and that tends to mean only too many go after something to which they more or less already know the answer. Yes, new facts are reported, but what do they mean? This, of course, fits in well with Thomas Kuhn's picture of science, where the new activities are generally puzzles to be solved, but not puzzles that will be exceedingly difficult to solve. What all this appears to mean is that science is becoming very good at confirming that which would have been easily guessed, but not so good at coming up with the radically new. Actually, there is worse, but that is for the next post.

Have you got what it takes to form a scientific theory?

Making a scientific theory is actually more difficult than you might think. The first step involves surveying what knowledge is already available. That comes in two subsets: the actual observational data, and the interpretation of what everyone thinks that set of data means. I happen to think that set theory is a great start here. A set is a collection of data with something in common, together with the rule that suggests it should be put into one set, as opposed to several. That rule must arise naturally from any theory, so as you form a rule, you are well on your way to forming a theory. The next part is probably the hardest: you have to decide which allegedly established interpretation is in fact wrong.
It is not that easy to say that the authority is wrong and your idea is right, but you have to do that, and at the same time know that your version is in accord with all observational data and takes you somewhere else. Why I am going on about this now is that I have written two novels that set a problem: how could you prove the Earth goes around the sun if you were an ancient Roman? This is a challenge if you want to test yourself as a theoretician. If you don't, I like to think there is still an interesting story there. From September 13 – 20, my novel Athene's Prophecy will be discounted in the US and UK, and this blog will give some background information to make the reading easier – as regards the actual story, not this problem. In it, my fictional character Gaius Claudius Scaevola is on a quest, but he must also survive the imperium of a certain Gaius Julius Caesar, aka Caligulae, who suffered from "fake news" and a bad subsequent press.

First the nickname: no Roman would call him Caligula, because even his worst enemies would recognize he had two feet, and his father could easily afford two bootlets. Romans had a number of names, but they tended to be similar. Take Gaius Julius Caesar. There were many of them, including the father, grandfather, great-grandfather, etc. of the one you recognize. Gaius is a praenomen, like John. Unfortunately, there were not a lot of such names, so there are many called Gaius. Julius is the ancient family name, but it is more like a clan, and eventually there needed to be more names, so most of the popular clans had a cognomen. This tended to be anything but grandiose. Thus for Marcus Tullius Cicero, Cicero means chickpea. Scaevola means "lefty". It is less clear what Caesar means, because in Latin the "ar" ending is somewhat unusual. Gaius Plinius Secundus interpreted it as coming from caesaries, which means "hairy". Ironically, the most famous Julius Caesar was bald. Incidentally, in pronunciation the Latin "C" is the equivalent of the Greek gamma, so it is pronounced as a "G" or "K" – the difference is small and we have no way of knowing which. "ae" is pronounced as in "pie". So Caesar is pronounced something like the German Kaiser.

Caligulae is widely regarded as a tyrant of the worst kind, but during his imperium he was only personally responsible for thirteen executions, and he had three failed coup attempts on his life, the leaders of which contributed to that thirteen. That does not sound excessively tyrannical. However, he did have the bad habit of making outrageous comments (this is prior to a certain President tweeting, but there are strange similarities). He made his horse a senator. That was not mad; it was a clear insult to the senators. He is accused of making a fatuous invasion of Germany. Actually, the evidence is he got two rebellious legions to build bridges over the Rhine, go over, set up camp, dig lots of earthworks, march around and return. This is actually a textbook account of imposing discipline and carrying out an exercise, following the methods of his brother-in-law Gnaeus Domitius Corbulo, one of the stronger Roman generals on discipline. He then took these same two legions and ordered them to invade Britain. The men refused to board what are sometimes called decrepit ships. Whatever the case, Caligulae gave them the choice between "conquering Neptune" by collecting a mass of seashells, invading Britain, or facing decimation. They collected seashells.
The exercise was not madness: it was a total humiliation for the two legions to have to carry these sea shells through Rome in the form of a “triumph”. This rather odd behaviour ended legionary rebellion, but it did not stop the coups. The odd behaviour and the fact he despised many senators inevitably led to a bad press, because it was the senatorial class that wrote the histories, but like a certain president, he seemed to go out of his way to encourage the bad press. However, he was not seen as a tyrant by the masses. When he died, the masses gave a genuine outpouring of anger at those who killed him. Like the more famous Gaius Julius Caesar, Caligulae had great support from the masses, but not from the senators. I have collected many of his most notorious acts, and one of the most bizarre political incidents I have heard of is quoted in the novel more or less as reported by Philo of Alexandria, with only minor changes for style consistency, and, of course, to report it in English.

As for showing how scientific theory can be developed: in TV shows you find scientists sitting down doing very difficult mathematics, and while that may be needed when theory is applied, all major theories start with relatively simple concepts. If we take quantum mechanics as an example of a reasonably difficult piece of theoretical physics, then to get to the famous Schrödinger equation, start with the Hamilton-Jacobi equation from classical physics. Now, the mathematician Hamilton had already shown you could manipulate that into a wave-like equation, but that went nowhere useful. However, the French physicist de Broglie had argued that there was real wave-like behaviour, and he came up with an equation in which the classical action (momentum times distance in this case) for a wavelength was constant, specifically in units of h (Planck’s quantum of action). All that Schrödinger had to do was to manipulate Hamilton’s waves and ensure that the action came in units of h per wavelength. (A compact sketch of these relations is given at the end of this post.) That may seem easy, but everything was present for some time before Schrödinger put it together. Coming up with an original concept is not at all easy.

Anyway, in the novel, Scaevola has to prove the Earth goes around the sun with what was available then. (No telescopes of the kind that helped Galileo.) The novel gives you the material available, including the theory and measurements of Aristarchus. See if you can do it. You, at least, have the advantage that you know it does. (And no, you do not have to invent calculus or Newtonian mechanics.) The above is, of course, merely the background. The main part of the story involves life in Egypt, the anti-Jewish riots in Egypt, then the religious problems of Judea as Christianity starts.
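For readers who want the promised sketch, here it is in modern notation. This is my own compressed summary of the standard textbook route, not a quotation from the novel or from Schrödinger. De Broglie’s condition says the action accumulated over one wavelength equals Planck’s constant:

\[
p\lambda = h \quad\Longrightarrow\quad p = \hbar k .
\]

Writing the wave as \( \psi(x) = e^{iS(x)/\hbar} \) and substituting into the time-independent Schrödinger equation

\[
-\frac{\hbar^2}{2m}\,\psi'' + V\psi = E\psi
\]

gives

\[
\frac{(S')^2}{2m} + V - \frac{i\hbar}{2m}\,S'' = E ,
\]

which reduces to the classical Hamilton-Jacobi equation \( (S')^2/2m + V = E \) as \( \hbar \to 0 \). Schrödinger’s step was, in effect, to keep \( \hbar \) finite.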
Light and Molecules – Mario Barbatti's Research Group

Methods for nonadiabatic dynamics

Nonadiabatic dynamics

When a molecule absorbs a photon in the UV or visible range, the energy goes to its electrons, whose configuration is changed in comparison to the ground-state electronic density. The probability of absorbing a photon as a function of its wavelength – the absorption spectrum – is discussed in “UV/Vis spectrum simulations”. Here, we will be concerned with what happens after the absorption. The new electronic density generated right after the photon absorption does not, in general, correspond to an equilibrium state of the molecule. This means that there are forces acting on the atoms, inducing conformational changes (an adiabatic process). Dynamics simulation in the excited states is a great method for monitoring how these changes take place. You can learn more about the changes themselves in “Nonadiabatic ultrafast phenomena”.

There are a few main challenges concerning excited-state dynamics:

• First, the potential energy of the excited state is normally much more complicated than that of the ground state. This means that we cannot use simple potential energy models, as in molecular mechanics, to compute the forces. They must instead be computed by solving the Schrödinger equation, which means that we have to deal with very high computational costs.

• The potential energy of the electronically excited state in which the molecule is excited is often very near the potential energy of other excited states. For this reason, the molecule can jump to these other states during the relaxation (a nonadiabatic process). We also have to deal with this possibility.

• The relaxation dynamics may proceed through several different pathways. These pathways should be mapped and their relative importance evaluated.

The main method that we use in our group to investigate excited-state dynamics is the surface hopping approach, which was proposed by Tully and Preston in the early 1970s (see review in Ref). This is a semiclassical method which allows keeping the computational costs under control. In surface hopping, the challenges enumerated above are addressed in the following way:

• The adiabatic processes are treated by solving Newton’s equations for the nuclei under the excited-state forces.

• The nonadiabatic processes are treated by simultaneously computing the transition probability to other states and stochastically evaluating whether the molecule should stay in the same state or jump to another one (a minimal code sketch of this decision is given at the end of this page).

• The multiple reaction pathways are evaluated statistically by following a large number of trajectories starting from different initial conditions.

All these procedures are performed with the Newton-X program package, which we have specially developed for computing surface hopping. Two trajectories starting with the same initial conditions may have different fates due to the stochastic nature of the method.

Numerical nonadiabatic couplings

One of the main bottlenecks of nonadiabatic simulations is the computation of nonadiabatic couplings, which are the terms that connect different electronic states. These couplings are not usually available in standard quantum-chemistry programs for most quantum-chemical methods. An alternative to the explicit computation of the nonadiabatic couplings is to compute the time-derivative couplings, as proposed by Hammes-Schiffer and Tully.
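In finite-difference form, this scheme is commonly written in terms of wavefunction overlaps between successive time steps. The following is a sketch of the idea in standard notation, not the exact working equations of Newton-X:

\[
\sigma_{jk}\!\left(t+\tfrac{\Delta t}{2}\right) \equiv \left\langle \psi_j \,\middle|\, \frac{\partial \psi_k}{\partial t} \right\rangle \approx \frac{1}{2\,\Delta t}\left[ \left\langle \psi_j(t) \,\middle|\, \psi_k(t+\Delta t) \right\rangle - \left\langle \psi_j(t+\Delta t) \,\middle|\, \psi_k(t) \right\rangle \right] .
\]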
These time-derivative couplings can be evaluated numerically by computing wavefunction overlaps along the trajectory. We have implemented this method in Newton-X to be used with the MRCI and MCSCF (Ref), TDDFT (Ref), and CC2 and ADC(2) (Ref) approaches. The same approach can be used for spin-orbit couplings as well.

Nonadiabatic dynamics with QM/MM

Nonadiabatic dynamics simulations can also profit from hybrid schemes such as QM/MM. The atoms of the entire system S to be treated by the hybrid method are divided into disjoint regions. For a standard QM/MM setup with electrostatic embedding, these subsets are typically an inner and an outer region, described by quantum mechanics and molecular mechanics, respectively. Specifically, QM electronic-structure methods are used to accurately describe multiple electronic states of the compound of interest, while the MM component primarily deals with secondary environmental effects. Standard force fields are employed in the MM part, incorporating bonded terms, van der Waals interactions, and electrostatic interactions between partial point charges associated with each atom. Our implementation of QM/MM surface hopping is described in Ref.

Special care should be taken with the initial conditions for the dynamics. Taking them from a Wigner distribution, as is usually done for small molecules, is not practical. On the other hand, taking the initial conditions from a thermalized MM trajectory in the ground state tends to generate an ensemble that is too cold for the QM region (Ref). We have devised a way to avoid these problems by combining a Wigner distribution for the QM part with thermal configurations for the MM part. The exact procedure is explained in Ref. An example of a single trajectory computed with surface hopping QM/MM dynamics is shown in the movie below for Me-formamide (Ref).
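Finally, the promised sketch of the stochastic hop decision. This is a minimal illustration of a Tully-style fewest-switches step in Python; the function name, argument layout, and sign conventions are mine for illustration and do not correspond to Newton-X’s actual interface:

```python
import numpy as np

def fewest_switches_hop(state, c, dvel, dt, rng=None):
    """One fewest-switches hop decision (illustrative sketch only).

    state : index of the current electronic state (assumed populated)
    c     : complex array of electronic amplitudes c_j(t)
    dvel  : real antisymmetric matrix of v . d_jk, the nonadiabatic
            coupling vectors contracted with the nuclear velocity;
            sign conventions for d_jk vary between implementations
    dt    : nuclear time step
    Returns the (possibly new) state index. A real code would also
    rescale the nuclear momenta after a hop to conserve total energy.
    """
    if rng is None:
        rng = np.random.default_rng()
    pop = np.abs(c[state]) ** 2              # population of current state
    g = np.zeros(len(c))                     # hop probabilities
    for k in range(len(c)):
        if k == state:
            continue
        # population flow from `state` to k during one time step
        flow = 2.0 * dt * np.real(np.conj(c[state]) * c[k]) * dvel[state, k]
        g[k] = max(0.0, flow / pop)          # negative flow: no hop to k
    xi = rng.random()                        # one uniform random number
    cumulative = 0.0
    for k, gk in enumerate(g):
        cumulative += gk
        if xi < cumulative:
            return k                         # hop accepted
    return state                             # no hop this step
```

Running many such trajectories from different initial conditions, and recording which state each one occupies over time, is what produces the statistical picture of the relaxation pathways described above.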
Materia Prima

10 December 2012

The Beginning of the End

So this is it for me and Bernard d’Espagnat’s On Physics and Philosophy. In the final chapter d’Espagnat allows himself to speculate on the philosophical and spiritual importance of his veiled reality (which he capitalizes) in particular, and of the results of modern physics in general. The chapter is entitled “The Ground of Things.” It is in these concluding sections that d’Espagnat makes his final defence of a materia prima, a mind-independent reality, before the objections of both realists (who concentrate on empirical reality) and antirealists (who say mind is all). Some of those arguments say there’s no reality-in-itself, some say it exists but is inaccessible, and others say empirical reality is “reality.”

Kant vs d’Espagnat

D’Espagnat believes “the Real” is a mystery as it is (in his opinion) not accessible through discursive knowledge. He notes Immanuel Kant distinguished between phenomena and “reality-in-itself,” but disagrees with Kant that a mind-independent reality is just a boring “limiting concept” filled with “pure x.”

Cassirer vs d’Espagnat

Ernst Cassirer strongly objected to being content with a “mystery,” which he felt would be an unbearable block to scientific inquiry. D’Espagnat says the search for clarity is admirable where possible, but the true spirit of science is to follow where the facts lead it. The quantum entanglement shown in “Aspect-like experiments” (by Alain Aspect and others) is just part of our evolving scientific knowledge.

Materialists vs Mystics

Sometimes one should approach “mystery” the way mystics, poets, or composers have done (though more often in the past). Realists (materialists) have no reason to believe they hold all the keys to knowledge, even in principle. As for the antirealists (and instrumentalists), if they think reality is something we ourselves build up, then mystery can hardly be called an exceptional “illusion.”

Affect vs Effect

The “affective” element of human existence is an aspect that seems to circumvent our rationality. Kant felt the “affective mind” was not “ordered on concepts” and therefore could shine no light on Being. D’Espagnat is more sympathetic to Descartes. Thought leads to the self-evidence of existence (“I think, therefore I am”), but d’Espagnat says just as self-evident are our “joys and pains.” We base our conjectures on what we know most intimately, and what could be closer to us than our “affective consciousness”? This too should be able to inform us of Being, perhaps in some circumstances even better than science can.

Realism vs “the Real”

We can take a very realist position and imagine that if mankind disappeared the stars would continue in their courses. This is an argument for a mind-independent reality, just not the one d’Espagnat has in mind. D’Espagnat says just because our present existence is usually most conveniently described in realist terms (such as conventional space and time) doesn’t mean the realist position is actually true. Even particle physicists who use the realist language of minuscule points and well-defined trajectories know that’s not what’s “really” going on.

Radical Idealism vs “the Real”

On the other hand, radical idealism holds there is no reality outside the mind. In other words, there’s no mind-independent reality. D’Espagnat says his earlier arguments, based either on no miracles or on intersubjective agreement (see chapter five), undermine idealism but not his veiled reality position.
Mathematical Realism vs “the Real”

Whether it’s “Pythagorism” or “Mathematical Platonism,” there’s a belief that mathematical developments are discovered, not created. Again, this would be a mind-independent reality, but a mathematically based one. Physical reality is either grounded on a pre-existing mathematical reality or there’s some strong connection between the two. D’Espagnat reminds us that quantum formalism refers to observational predictions. It’s possible “the Real” is mathematically based, but quantum theory isn’t going to get you there.

Brains in a Vat vs “the Real”

D’Espagnat disagrees with Hilary Putnam’s thought experiment that places brains in vats. Connect electrodes to the brains and some supersmart being could (in theory) send images and other sensations directly to the brain. Putnam says a vat individual could not truthfully say, “We are brains in a vat.” That’s because his concept of a vat is based on an illusion. So there’s no connection between this particular version of a “ground of being” and our knowledge. D’Espagnat disagrees with the assumption that knowledge springs only from the senses. Also, Putnam’s imaginary statement refers to specific entities. D’Espagnat’s concept of “the Real” is “conceptually prior to any such description.”

Self-Modification vs “the Real”

Francisco Varela and collaborators proposed “enaction” theory. The brain’s main function is to modify its internal states rather than reflect the external world. External reality is neither a projection of our mind’s contents nor the source of those contents. There’s no need to imagine a “pre-given” reality. D’Espagnat faults Varela’s book for vague terminology. Does Varela mean “empirical reality” or “mind-independent reality” when he talks of “reality”? Is the “subjective” an individual’s subjectivity or intersubjectivity? D’Espagnat disagrees with Varela’s use of “secondary qualities” such as colour to make his arguments. Even Varela’s arguments about attention and perception fail to convince d’Espagnat. The mind may display selective attention, but that’s a far cry from proving that mind and world somehow arise together.

Structure vs “the Real”

D’Espagnat says arguments against veiled reality will fail if they’re based on discursive (descriptive and rational) knowledge. In other words, arguments based on what structures we see or don’t see are irrelevant to “the Real,” as “the Real” doesn’t have structures in the way we’re accustomed to think of them.

Buddhism vs d’Espagnat

D’Espagnat notes Varela’s frequent references to Buddhism. Buddhism speaks of “sunyata” or “emptiness” in rejecting objects in the world as intrinsically existing in the way we perceive them. Furthermore, our living “selves” have no absolute existence as individuals. D’Espagnat hopes his veiled reality viewpoint will interest Buddhists, especially as there’s a pretty thick veil between consciousness and “the Real.”

Heisenberg vs “the Real”

D’Espagnat rejects Werner Heisenberg’s (posthumously published) view that empirical reality is a product of our human-made knowledge. Heisenberg felt there were various “regions of reality,” such that our knowledge of biology, for instance, wasn’t entirely dependent on our knowledge of physics. Heisenberg did think there might be something that’s “truly real,” vaguely reflected in human consciousness. However, he felt this level of reality would still be situated within ordinary space and time.
It’s on that count that d’Espagnat rejects Heisenberg’s arguments as irrelevant, as “the Real” is not located in space and time.

Pro vs Con

In the end d’Espagnat finds arguments both for and against a “ground of things” dubious. You can argue against a “ground of things,” but only in the sense of a “pregiven,” describable “world-per-se.” He finds the “pro” arguments based on “commonsense” or a pre-existing mathematical reality also unconvincing. D’Espagnat believes a “more fact-based reasoning” is called for.

Universality vs Events

D’Espagnat says over the past half-century interest in chaos and complexity led some scientists to demote scientific laws and promote the role of the “event,” previously seen as more or less accidental. He says he argued against rejecting the “universal” in a 1990 book. He’s more ambivalent about the emphasis on “events,” which he says takes place within empirical reality. That reflects the way we’re “apprehending the Real,” but it doesn’t mean that’s what “the Real” is all about. For instance, we don’t see objects as nonseparable, but that’s what quantum theory tells us. D’Espagnat says Edgar Morin and others in this school of thought have somewhat retreated from their emphasis on events, complexity, and disorder. Morin acknowledges that “Aspect-type” experimental results have shown some limitations in his approach.

Nominalism vs “the Real”

D’Espagnat is unimpressed with the revival of nominalism among “cultivated, literary, avant-garde people.” It’s a belief system promoted in the Middle Ages by William of Ockham and others. Nominalists nowadays reject the universal while applauding individual initiative, which they feel is a product of individual knowledge. The problem is that nominalism is an all-encompassing philosophy, referring to all things, not just living beings. The discrete atoms of classical physics have given way to “collective modes of existence.” And again, such arguments apply to empirical reality, not “the Real.”

D’Espagnat vs the “Enlightenment”

D’Espagnat believes many sophisticated members of society are still enthralled by outmoded ideas of the “Enlightenment” (d’Espagnat’s quotes). He acknowledges that research on chaos and events may eventually back nominalism. However, he thinks quantum theory and inseparability will win out.

Spinoza vs “the Real”

D’Espagnat cautions against thinking Spinoza was a committed materialist when he talked of “God, in other words, nature.” Although Spinoza’s natura naturans sounds like “the Real,” his natura naturata sounds like phenomena. D’Espagnat does not agree there’s a willful, personal God behind all this, however. Veiled reality is not “intelligible,” unlike Spinoza’s view of Substance.

Phenomenology vs “the Real”

Classical physics introduced mechanical, then mathematical, idealizations of objects. How things supposedly “really are” became separated from our “direct experience.” Quantum theory reintroduced a role for the human mind in accounting for our experiences. In some ways quantum theory reinforces phenomenology. Phenomenology sees an act of creativity in the human mind: it takes various pieces of sensation and constructs some entity that shares these qualities. However, on some level the source of these sensations still independently exists. Quantum theory states that some physical quantities can only be observed through human intervention, thus undermining phenomenology’s belief in independently existing sources of phenomena.
Modern “Sages” vs “the Real”

In “developed” societies there are “sages” who take rather contradictory views. They say there is a reality independent of us. But they also say it’s “obvious” we rely on our perceptions to gain access to that world. So they conclude it is illogical to speak of an “unreachable” reality. We should make only statements relying on sense data or tautologies (statements that are always logically true). However, d’Espagnat continues to oppose the view that our perceptions necessarily reflect reality as it really is. Our modern “sages” try to combine realism and positivism, converting “reality-per-se” into “observation-per-se.” But there is no “observation-per-se,” as observations involve human intention and selection.

The Describable vs “the Real”

If we reject the materialists’ rejection of “the Real,” does that force us into the camp of the radical idealists? D’Espagnat says we shouldn’t confuse “the Real” and “the describable.” First, existence takes precedence over knowledge. Second, there is something that says “no” to any arbitrary constructions of reality. Third, it’s hard to imagine an “a priori” that evolves. And fourth, there are universal laws that make predictions, and it’s hard to envision how laws could do so unless you believe in miracles. Even Michel Bitbol and Hervé Zwirn have not entirely rejected the concept of “the Real,” even as they critique it. D’Espagnat says thinkers should avoid pushing deductive reasoning into areas where it may not strictly apply. As a side note, d’Espagnat says classical instrumentalism believes a concept’s meaning and its “reference,” the collection of data about the concept, are the same. Even if you replace “data” with “prediction,” it’s not a universal position, as predictions require a predictor. And that predictor is some being who’s doing the predicting.

Laws vs “the Real”

Bitbol and Zwirn may move a bit toward Platonism when they acknowledge something may constrain us that is not entirely attributable to us. However, they believe this “something” is totally inaccessible. D’Espagnat disagrees, and thinks Plato would disagree too. “The Real” must have some influence on empirical reality’s structure, as Maxwell’s laws (for instance) are obeyed by phenomena. D’Espagnat’s “extended causality” links not instances of phenomena but rather phenomena and “the Real.” These structural “extended causes” move beyond Kantian causality and recall Plato’s Ideas.

Structures vs Hints of Structures

D’Espagnat says “the Real” is prior to mind-matter splitting, so the mind may detect hints of the mind’s source, which is “the Real.” That veiled reality is not the same as the underlying reality described by structural realism. D’Espagnat says mind-independent reality is not the source of our physical laws. At best these laws are distortions of the “great structures” of “the Real.” At worst they’re just very obscure “traces.” In the end “the Real” isn’t describable, indescribable, or partly describable. The first two options imply a total presence or lack of description, and the third option implies “the Real” has parts, which isn’t the case, says d’Espagnat.

Conceptualization vs Meaning

If “the Real” can’t be conceptualized, can it have any “meaning”? D’Espagnat cites Zwirn’s argument imagining a creature as far ahead of humans as humans are ahead of dogs or monkeys. We can conceptualize things that dogs or monkeys can’t, so surely a superhuman being could conceptualize things we can’t.
D’Espagnat believes that poets can allude to things that we somehow know exist even if these concepts can’t be made explicit.

Plato’s Cave vs “the Real”

At first glance Plato’s Cave approximates d’Espagnat’s view of veiled reality. It suggests the emergence of (shadowy) empirical reality (seen in the cave) from “the Real” (the porters who place their Platonic Ideas in front of the light). However, the fable doesn’t deal with how consciousness (the prisoners) would have emerged from “the Real.” Furthermore, “the Real” cannot be separated into parts (while the porters hold separate objects). We cannot conceptualize “the Real,” yet Plato conceptualized his Ideas. Finally, even without prisoners there’d still be the shadows, while in d’Espagnat’s system phenomena would exist only in relation to consciousness.

Traditional Thought vs “the Real”

D’Espagnat warns against a syncretism of old cultural elements and new philosophical points, but he wonders if “the Real” has any bearing on traditional systems. Religions speak of an “immortality,” which suggests some absolute time that physics can no longer support. However, perhaps the other term, “eternity,” suggests escaping this illusory time. And perhaps there is a “continuous creation” of Being in a process independent of time.

Heisenberg vs d’Espagnat

Heisenberg, says d’Espagnat, doubted thought could illuminate deep matters as (according to Heisenberg) thought returns to its source. But d’Espagnat notes that new science has allowed us to move past old science’s viewpoints, such as materialism. So thought has been able to illuminate some deep matters.

Aristotle vs d’Espagnat

D’Espagnat sees similarities between his view of causality and Aristotle’s. Aristotle was a realist and was concerned with causality not just in the realm of phenomena but in “reality-in-itself.” Furthermore, Aristotle was not beholden to the idea that causes precede effects. Instead there could be “final causes” to which things might tend under the influence of Aristotle’s God. As d’Espagnat’s veiled reality is beyond time, “the Real” could impart such a “final cause” on empirical reality. Also, Aristotle’s interest in causation beyond mere phenomena reminds d’Espagnat of his own interest in causation between “the Real” and empirical reality. Aristotle distinguished between “power” and “act,” while Newton supposedly saw just “act.” Aristotle saw matter as the seat of a vague potentiality. Materia prima is pure potentiality. “Informed matter” exists on more and more complex levels. Simple beings can be the “matter” for more complex beings. These complex beings are, in this process, more “real,” as their potentiality is expressed. Therefore the deep meaning of reality lies not in the tiny components of complex beings; rather, the meaning is the complex beings themselves. In a similar fashion, in empirical reality the wave functions have an “epistemological reality” at a lower level than, say, macroscopic objects in the wake of decoherence. Although Heisenberg did not cite decoherence, he did ponder the possible role of wave functions as a “materia prima.” Abner Shimony went on to explore this issue. However, they’ve both admitted it’s hard to formulate these ideas precisely.

Plato vs d’Espagnat

As for Plato, d’Espagnat reminds us of his earlier concerns about Plato’s Cave. However, for Plato the deeper meaning was not in the things themselves. Nor did it reside just in “us.” He wasn’t a radical idealist.
Platonic Ideas (and his concept of the “Good”) bear resemblance to “the Real.” However, Platonic Ideas are conceptualizable, while “the Real” is not. Many scientists still believe that analyzing more and more sense data will get us closer to the deeper meaning of reality. However, advances in science have relied on a “rapprochement” between science and a philosophical position (Platonism) that questions such a program. D’Espagnat notes that “Platonism” is a term nowadays often interpreted as “Pythagorism,” with real mathematical objects. D’Espagnat does not agree with “Pythagorism,” but notes that there’s some relationship between it and Platonism. Even veiled reality has a smidgeon of Pythagorism in it, as empirical reality’s objects are somehow a dim reflection of “the Real.”

Einstein vs d’Espagnat

Albert Einstein appears to have believed “the Real” could in principle be apprehended in its details, even if in practice that was rarely possible. However, the goal remained to explore this deeper world by discovering universal laws. Einstein also believed in three levels of religious experience. The first was based on fear, the second on morals, and the third transcends ordinary human views of God. At this third level, Einstein thought, a sublime order is reflected in nature and in thought. Even scientific materialists no longer believe the common materialism that the mass media disseminates. However, there have also been developments that make us question some of Einstein’s philosophical positions. D’Espagnat sees some compatibility between his views and Einstein’s, even if Pythagorism doesn’t have to be entirely correct. “The Real” does not have to be totally intelligible. The human mind may tend toward the structures and qualities of “the Real,” in the sense that Max Planck had a strong affective experience in his theoretical work. It’s not necessary that mathematics reveal everything about “the Real.” Rather, as long as we have some concept of “the Real” that we can tend toward, the structures and qualities of the mind may be drawn to it, even though the mind, given its limitations, never fully understands it.

The Spiritual vs the Scientific

Maybe this idea is closer to Einstein’s third-level religious experience than to a completely knowable “Real.” The human mind tends toward quest and exploration, though it is never able to fully accomplish what it desires. Einstein was still grounded in physical materialism. Later developments in physics have shown us something more human-oriented. We can’t limit Being to just material components. The mind may somehow “recall” aspects of Being, as consciousness is not just a product of matter. Archetypes of some of our feelings may lie with “the Real.” There’s no way to prove this, or disprove this. But crucially, we can no longer see science as an impediment to the “spiritual impetus that moves mankind,” an impetus, according to Einstein, that makes us desire to live “the whole of what is.” And it is an impetus that possesses both unity and meaning.

Making an Appearance

8 December 2012

Mind the Details

Bernard d’Espagnat delves into finer and finer distinctions between his veiled reality position and similar (though not identical) views. The eighteenth chapter of his On Physics and Philosophy is entitled “Objects and Philosophy,” and there’s only one chapter to go after this.

Philosophers vs Consciousness Researchers

D’Espagnat says he takes a mostly philosophical approach in this book.
Philosophers question the basis of our reality, while consciousness researchers (such as neurologists) take physical realism as a given (whether they’re conscious of this or not).

Mind vs Reality

Radical idealists, who think mind is “primeval,” may wonder about the relationship between mind and “basic reality.” Supporters of d’Espagnat’s “veiled reality” or “open realism” approach are even more motivated to investigate.

Truth vs Reality

A physical realist can say that a true statement is “adequate to what reality really is.” This is the “similitude theory” of truth.

Reality vs Representations

But if we don’t have access to reality as it “really is,” then we might say we have access only to “human representations” of “the Real.” Instead of worrying about whether statements are true to reality, you might worry more about the verifiability of statements.

Knowable vs Unknowable Reality

Another problem with the “similitude” approach is that quantum mechanics, the best model of the world we have, fundamentally deals with observational probabilities, not plain and simple facts. Even resorting to a Broglie-Bohm approach doesn’t help, as “hidden variables” will be inaccessible to the observer even in principle.

Idealism vs Veiled Reality

A radical idealist or Kantian rejects the similitude approach anyway. A supporter of the veiled reality approach has to take a somewhat more nuanced tack. Very broad statements about physical constants or “existences prior to knowledge” may hint at “the Real” without claiming to say anything directly about “the Real” as it “really is.”

Appearances vs Veiled Reality

If we’re not supposed to trust in “appearances,” then what is reality really like? We might think that “the Real” is just an updated version of “appearances.” Or maybe mind-independent reality is so independent that it’s entirely inaccessible. D’Espagnat says both approaches are too extreme.

Causal Links vs Predictive Laws

We like our ordinary, everyday version of “realism” because it lets us imagine particular cause-and-effect relationships. It’s easier to explain things when we can point to particular causes rather than just patterns of observational predictions. D’Espagnat says some causal links are genuine and independent of us, but our interpretation of these links is very much our own. For instance, causality is closely related in our minds to the notion of “will,” which entails a very anthropomorphic (human-centred) view of reality.

Intersubjective Agreement vs Appearances

But what if a group of humans (and maybe even non-humans!) agrees on certain observations? D’Espagnat says that this agreement, combined with rules of observational prediction, means this is our “reality.” Saying they’re just “appearances” is misleading. It’s a kind of “reality.” However, modern physics reminds us that humans tend to “reify” (think of the world as a set of objects). So we still have to keep in mind that empirical reality is not the same as “the Real.”

Empirical Reality vs Mind-Independent Reality

Although d’Espagnat is comfortable with the term “reality” to describe our empirical reality, he says we have to remember these are two “orders” or “levels” of reality. Empirical reality isn’t just a mere variant on “the Real.”

Identity Theory vs Efflorescence Theory

In some of the more nuanced sections of the chapter, d’Espagnat makes a distinction between identity theory and efflorescence theory. Identity theory states that a genuine sensation or awareness (perhaps even thought in general) is traceable to neurons or their components.
The material aspect of these neurons is the ultimate cause of our sensations. Efflorescence theory attributes sensations and awareness to “neuronal activity” rather than to the material aspects of neurons or their components.

Strong vs Weak Completeness

D’Espagnat’s main line of attack against identity theory is the completeness principle. In its strong version, quantum mechanics is assumed to be able to describe anything at all. In its weak version, if any theory can describe something, then quantum mechanics can do so as well. This leaves open the concept of hidden variables. Since quantum mechanics is antirealist, it’s hard to imagine how the strong completeness principle is compatible with identity theory. Even if you take the weak version of the completeness principle, all you can conclude is that identity theory may be true, but we can never show it to be so. But what if you reject the completeness principle entirely? If you used the Broglie-Bohm model you’d still have to deal with an entangled wave function, so sensations can’t be attributed just to some limited coordinates of a particular neuron. Or you can take the Roger Penrose approach of adding nonlinear terms to the Schrödinger equation. D’Espagnat says that approach may work, but he finds it too ad hoc. It’s also work still at an early stage, yet to face the scrutiny a full theory would need to endure.

Brain vs Neuron States

Now, efflorescence theory relies on neuronal activity, not the material aspects of neurons, to explain sensation, awareness, and (perhaps) thought itself. But neurologists believe brain states, not neuronal states, are what drive awareness. You can’t pinpoint a particular neuron or group of neurons that is responsible. It’s the collective action spread across the brain that is associated with awareness. D’Espagnat notes the parallel to quantum entanglement.

Protomentality vs Mentality

Alfred North Whitehead and other thinkers in the past have wondered whether simple organisms or even inorganic entities can have awareness. Abner Shimony’s “potentiality” might satisfy some objections to this concept of protomentality. Various entities have the potentiality of consciousness, but this potentiality isn’t actualized unless a nervous system is present.

Consciousness vs Components of Consciousness

As a final objection to the efflorescence theory, d’Espagnat says that any component we cite will be part of our empirical reality. Empirical reality depends on our consciousness. Therefore how can something that depends on our consciousness be the cause of our consciousness?

D’Espagnat vs The “Received” View

The “received” view that thought is produced by matter is, according to d’Espagnat, “slightly useful” as a model but must be rejected as a plausible philosophical stance.

Relative Quantum States vs Relative Consciousness

Because the observer decides what to measure and how, quantum states are “relative” to these procedures. However, some quantum rules may be considered “in isolation.” They’re not predictive observational rules and hence don’t involve probability. They’re more like descriptions. However, to understand the quantum world you have to consider all quantum rules, not just pick and choose the non-probabilistic ones. D’Espagnat says states of consciousness are somewhat similar.

Definite vs Indefinite States of Consciousness

Imagine a sealed-off laboratory. Paul makes a measurement. His state of consciousness is definite, but Peter doesn’t know that until Paul, say, phones him with the measurement.
This is a version of Wigner’s friend, and it can be extended over and over again: an observer outside a sealed room, which contains another observer outside a smaller sealed room, and so on. Peter thinks Paul’s state of consciousness is not just unknown (before the phone call) but also undefined. It’s a superposition of possible results (pointer values, for instance). Yet once Paul makes the measurement, Paul’s state of consciousness is definite from Paul’s point of view.

Consciousness vs The Absolute

This apparent conflict doesn’t change the fact that physics is all about predicting observations, says d’Espagnat. However, there’s a related issue. We shouldn’t think that “predictive states of consciousness” are like some Absolute or can even be a substitute for the Absolute. Quantum states are relative, and so are states of consciousness. More precisely, states of consciousness that are predictive are relative.

Physical vs Mental

So we see some sort of “solidarity” between the physical and the mental, but that doesn’t mean the mental can be reduced to the physical.

Wigner’s Friends vs Ultimate Reality

The series of “Wigner’s friends” who occupy increasingly large rooms is suggestive of an ultimate reality that we cannot gain access to. Wigner’s friends don’t have access to the overall wave function.

Predictive vs Non-Predictive Consciousness

However, nothing prevents us from pondering non-predictive states of consciousness. When Paul makes the observation, his state of consciousness becomes well-defined. It’s no longer predictive.

Veiled Reality vs Co-Emergence

Michel Bitbol, Hervé Zwirn, and other authors speak of thought and empirical reality “co-emerging.” It’s a “self-qualifying” process by which structure emerges from an initial and total lack of structure. D’Espagnat says his veiled reality viewpoint has an “ultimate ground” endowed with general structures, even if they are “far from being knowable.” This ultimate ground may form the basis not just for scientific laws but also for creative and mystical endeavours.

Emergence vs Non-Emergence

So, according to d’Espagnat, structures emerge but don’t co-emerge. They pre-exist. Co-emergence serves merely to connect consciousness and empirical (not ultimate) reality. D’Espagnat acknowledges that in the past he has talked of consciousness and empirical reality existing “in virtue of one another.” This does not mean that empirical reality emerged from consciousness. Furthermore, these words are meant to be evocative rather than a precise philosophical statement. He reiterates that appearances, which depend on consciousness, cannot somehow create consciousness.

Indexed vs Non-Indexed States of Consciousness

Adopting Bitbol’s terminology, d’Espagnat says some beings may possess non-indexed states of consciousness. That means these states of consciousness are not relative to any particular experimental setup. However, these states of consciousness must therefore be non-predictive.

Microscopic vs Macroscopic

An idealized miniature version of a being would be too small to interact with the environment and so become predictive. In the intermediate state between microscopic and macroscopic, such beings could accurately predict one class of observations but would wrongly predict another class. For macroscopic beings that first class of observations would still be correctly predictable, but the second class of observations would be essentially impossible.
These practical observations are conveniently describable in realist language, while the practically impossible observations are not. So if we want to talk about co-emergence, then we should imagine the co-emergence of “public and predictive” states of consciousness and empirical, physical reality. This co-emergence is constrained by the class of observations that macroscopic beings can perform. Co-emergence draws from a mind-independent reality that presumably, according to d’Espagnat, is beyond intersubjective description. And returning to the idea of potentiality, d’Espagnat says that in moving from the microscopic to the macroscopic, the “ontological potentiality” of consciousness becomes empirical actuality. “The Real” is not in itself thought, but it can give rise to thought.

One World vs Many Egos

There appears to be one universe but many minds. Radical idealists have trouble reconciling this situation. Schrödinger called this the “arithmetical paradox” and proposed two solutions. There’s “Leibniz’ fearful doctrine of monads,” and there’s the belief that the multiplicity is only apparent. Schrödinger preferred the second approach, akin to the Upanishads, which states there is unity behind the illusion.

Veiled Reality vs Radical Idealism

The multiple-room experimental setup showed that predictive states of consciousness are relative. It’s hard to see how all those observers could be part of just one mind. However, perhaps the various observers are making mutually compatible observations, calculable using the general Born rule. This is the same as one observer making simultaneous measurements. That sounds compatible with Schrödinger’s viewpoint. However, it doesn’t solve the problem of the observer in that sealed-off inner room. It also doesn’t take decoherence into account. On the other hand, this decoherence also hides any theoretical possibility of discovering contradictions between multiple minds and the quantum structure of physical laws. D’Espagnat thinks more work needs to be done on this issue.

Traces of the Real

18 November 2012

Traces of Reality

The Process of Elimination

Bernard d’Espagnat gets ever deeper into familiar, and largely friendly, territory. It’s a chapter about large agreements and small disagreements, as these particular critics seem to agree as much as disagree with him. His major challengers and comrades will be Michel Bitbol and (less prominently) Hervé Zwirn.

Form vs Content

D’Espagnat first examines Bitbol’s “verbal issues” and questions about d’Espagnat’s logical arguments. Then he moves on to more substantive issues.

Veiled Reality vs Dualism

Bitbol suspects “Veiled Reality” is dualistic. Classically, dualism means there’s mind and there’s matter, though even in Descartes’ time philosophers puzzled over how the two could interact. Materialists later on would say mind is just a manifestation of matter, but d’Espagnat says Bitbol isn’t a materialist. D’Espagnat says if Bitbol’s objection is about interactions, then he’s got it wrong. D’Espagnat says he doesn’t believe mind and matter are the building blocks of “reality as it really is.” Instead mind and matter emerge from the ground reality, an “Independent Reality.” Coming from the same source, mind and matter aren’t fundamentally split from each other.

“Veiled Reality” vs Veiled Reality

Next is the issue of whether the term “Veiled Reality” is misleading. Although d’Espagnat admits the term might suggest a world of objects behind some veil, he says it’s simply hard to compress the concept into two words.
He admits he used to prefer a “non-watered-down structural realism,” but since then he’s undergone an “evolution” rather than a “revolution.”

Objectivist Language vs Objectivist Philosophy

D’Espagnat says it’s convenient to talk about an instrument dial pointing to a particular spot. But objectivist language is a matter of convenience; it’s not to be taken literally. D’Espagnat uses that kind of language to talk about “impressions,” not events independent of “the existence of thought.” In any event, it seems Bitbol acknowledged the misinterpretation and moved on. D’Espagnat’s approach is an “essentially negative approach” of showing what capital-R “Reality” can’t be: plural, atomistic, embedded in space-time, for instance. He says Bitbol eventually realized this about d’Espagnat’s position.

Broglie-Bohm vs Dualism

D’Espagnat says the Broglie-Bohm models are logically consistent and follow a mostly “classically dualistic conception.” But the subject still isn’t “face to face” with the world, as there are hidden variables and a “Universal nonseparable wave function.” Hence Broglie-Bohm isn’t a fully classical dualism.

“A Priori” Dualism vs Observed Dualism

Kant used “a priori” arguments to support his “thing-in-itself.” But can we use the data of modern physics instead, as d’Espagnat has done? Bitbol said d’Espagnat based his arguments not on quantum mechanics in general but on a particular interpretation, one that rejects hidden variables. D’Espagnat says he’s made no secret of that. Science chooses among various explanations and tends to be wary of “an all-powerful Zeus, for example.” Bitbol calls these factors “ampliative” criteria. And even Bitbol acknowledges that Bohm’s theories lead to a “crisis in atomism” with their “nonlocality and contextuality.” D’Espagnat says nonlocality doesn’t force the “thing-in-itself” to be inaccessible. But it undermines the hope that “the Real” can be progressively unveiled.

Knowledge Of vs Knowledge About

Bitbol complained that d’Espagnat’s book Veiled Reality talked about “Independent Reality” as “something.” But wouldn’t that make this supposedly independent reality an empirical reality? D’Espagnat says he was careful to say the data would have “something to do” with this reality. It would be knowledge about this reality, but not knowledge of this reality.

To Sketch vs Not to Sketch

Bitbol says Kantian and neo-Kantian philosophers would object to the idea that an “Independent Reality” is “prestructured.” D’Espagnat says Bitbol needs to do more than just cite possible objectors; he needs to present an actual argument. Bitbol says d’Espagnat is implying observed phenomena let one “sketch” various features of “Independent Reality.” D’Espagnat says that to talk about sketching is misleading. By giving up on the “locality principle” he’s also giving up on “sketching” Independent Reality. Nonetheless d’Espagnat acknowledges that it’s not just a process of elimination. He does conjecture that observational data may “in a distorted and incomprehensible way” somehow reflect some structures of “the Real.”

Reflected Reality vs Reflected Thought

Bitbol wonders if maybe predictive laws should be considered “distorted reflections” of our own mental contributions rather than of some “Independent Reality.” D’Espagnat says it’s important to distinguish between what’s sufficient and what’s necessary. Of course our “perceptive context” is important, but it is probably not enough to produce those perceptions.
Furthermore, anyone can come up with an interesting principle and follow its consequences. One can choose to believe all connections between perception and reality are illusory. But that doesn’t mean you’ve proved your case. In the end d’Espagnat remains confident of his “prestructure” hypothesis, though it’s “but a plausible and admittedly unverifiable conjecture.”

Evidence vs Other Factors

Bitbol and Zwirn also wondered if one theory could be replaced by another for reasons other than evidence. D’Espagnat replies that you can’t tweak a “realist local theory” and make it work. Nonlocality isn’t nudging one theory out of the way; it’s demanding a different theory. If Zwirn and Bitbol believe perceptions come solely from us, then we could believe in an experimentally refuted theory. This may be somehow “rational,” but a scientist won’t follow such a path, one that undermines “science and empirical knowledge in general.” Bitbol proposes some kind of transformation groups that would explain our sensory data’s “structural invariants.” D’Espagnat thinks the analogy from group theory is inexact. In any event, it’s not particularly interesting that nonlocality could appear in some “acceptable realist theory.” What’s important is that it tells us we can never use a local realist theory to explain all of our observed data.

Nonseparability vs Unity

D’Espagnat admits he went too far in saying the nonseparability of Independent Reality implies some kind of unity in that Independent Reality. He agrees with Bitbol that this statement demands a principle of the excluded middle such that rational categories cover all that is possible. The transcendent may not be so intelligible. Instead of Plotinus’s “One” we should think of a unity that is “the absolutely inexpressible” (pantè aporeton). This view is still consistent with the (unprovable) view that “poetry, music, painting etc.” may provide us with glimpses of “the Real.” Similarly, physical laws and their mathematical structure may be some sort of “traces” of an underlying structure. Nonetheless the connection between those traces and that structure “may well be undecipherable.” This is definitely less than what “structural realism” would expect.

Critic vs Critiqued

D’Espagnat turns from being critiqued to critiquing Bitbol and Zwirn. He doesn’t see how replacing a static “a priori” with a functional one improves matters. Either way, how do you explain how Newton’s law of gravity ended up with its precise form? D’Espagnat says it “partakes very much of utopia” to expect formalism to overcome observation, which is what he thinks Bitbol believes. An all-encompassing theory of symmetries and so on is unlikely to be immune to experimental contradiction. Furthermore, quantum theory’s axioms (a framework theory) may someday be justifiable on their formal basis alone. But those axioms form the basis of quantum theories, and these are theories “in the ordinary sense.” And it’s those ordinary theories that provide the evidence against locality, for instance. Because “all men, all civilizations” share the intuition of a reality outside of us, d’Espagnat is willing to give up on Independent Reality only if it’s proved false. And it’s a conjecture that can’t be proved false. Maybe some day a conjecture of greater plausibility will supplant the concept of an Independent Reality. For now, Bitbol’s conjecture doesn’t do that. Bitbol is reverting to a medieval approach of arguing from the general to the specific, says d’Espagnat.
As for Zwirn, d’Espagnat heartily approves of his analysis of modern science’s conceptual challenges. D’Espagnat believes Zwirn commits some minor errors in summarizing d’Espagnat’s approach; it’s not based on “structural realism,” as Zwirn seems to imply. However, these aren’t a big deal, and the two thinkers agree on much, says d’Espagnat. In fact, he says Zwirn may have come up with an even more detailed version of “Veiled Reality” than he has.

It’s Not All in Your Head

11 November 2012

Not Just in Your Head

Veiled Threads

Another chapter down, three more to go in Bernard d’Espagnat’s On Physics and Philosophy. Chapter sixteen, “Mind and Things,” is (relatively) straightforward. Having spent much of the book undermining physical realism and its kin, he focuses next on the excesses of empiricism and idealism. Much less combative in this chapter, d’Espagnat seems sympathetic to many of the approaches he describes. Sympathetic, yes. In total agreement, no. Ultimately, he’s laying the groundwork for exploring his “Veiled Reality” in some detail as the book draws to a close in the chapters to follow.

Empiricism vs Metaphysics

Empiricism’s guiding principle: all of our knowledge comes from our senses. It started by discarding metaphysical sources of knowledge. It then emphasized the role of “elementary sensations.”

Experience vs Reality

Early empiricists seem to have believed primary qualities were real aspects of real objects. Even if our knowledge can’t exceed our experiences, if properly used our experiences produce a good picture of reality. A large number of modern-day scientists hold this view, which is a kind of “physical realism.”

Empiricism vs Knowledge

Initially the Vienna Circle epistemologists attacked Kantian views more energetically than scientific realism. Nowadays, though, logical positivism is understood in an antirealist sense. But this creates a problem. If the empirical connection between experience and reality is questioned, then are we back to Kant or the neo-Kantians? D’Espagnat thinks that this quandary sank the logical positivist agenda, but contemporary physics can still learn from some of their ideas.

Knowledge vs Phenomena

“Phenomenalism” has various definitions. One version states that “knowledge is strictly limited to the (physical and mental) phenomena,” says d’Espagnat. Phenomena are just the objects of our (unanalyzed) perceptions or introspection. So phenomenalism is at best consistent with “open realism.” This is the “weak” version of realism that d’Espagnat favours.

Phenomenalists vs Physical Objects

Unfortunately, phenomenalists are often vague about the reality of physical objects. The Vienna Circle positivists were suspicious of counterfactuals: “If you put this lump of sugar in water then it would dissolve.” It’s hard to assign properties such as “soluble” to an object without using counterfactuals.

Phenomenalists vs Paths of Knowledge

Another problem was pointed out by Bertrand Russell. Unless a solipsist, a phenomenalist accepts the existence of other observers and their assertions. But then why not accept the existence of sound waves, since they convey messages? But then you’re getting back into physical realism.

Private Sensations vs Public Science

It’s hard to get around this problem since we have no direct access to other people’s sensations. They’re private. But science relies on communicating knowledge. That’s public.

Object vs Method

A third problem: we describe objects by describing how to get sense data about them.
But then as objects get smaller and smaller, the description gets longer and longer. How do you describe an electron as a “construct”? You could describe a cloud chamber, how it has to be prepared, and then the probability that the set-up will produce the hoped-for observation.

Stability vs Instability

But that gets at a problem when you move from phenomenalism to contemporary physics. To a phenomenalist, an object of knowledge is a stable pattern of perception. In quantum physics there are probabilities of state vectors, not some “inherent stability” of perception.

Classical Instruments vs Quantum Systems

To address Russell’s problem we can assume a measuring instrument is classical. That way various observers can agree on a measurement. Furthermore, our observations agree with the rules of quantum physics, so our sensations aren’t entirely private. It’s a kind of “mutual agreement” between our perceptions and quantum rules. However, d’Espagnat notes this solution is a “hybrid” one. And, he notes, in chapter eight he looked at the problems with saying an experimental apparatus is classical.

Operationalism vs Phenomenalism

D’Espagnat likes operationalism, a modified version of phenomenalism. It deals with how to make observations, and quantum rules for predicting observations are unquestionably accurate.

Conventional vs Radical Operationalism

There’s a conventional version of operationalism that’s “moderate.” It talks as if there are real properties that are measured. But as seen in chapter seven, this leads to ambiguity. “Radical” operationalism is more content to just describe measurements. However, that’s hard to do without specifying the objects being measured.

Intrinsic vs Convenient Elements

“Radical” operationalism may consider some perceived forms to be “elements” of empirical reality connected by empirical laws. But unlike traditional operationalism, it doesn’t consider these forms to have intrinsic significance. Perceived forms aren’t the “constitutive” bricks of anything. The radical operationalist is prepared to discard one set of predictive rules for another if they work better. And if two sets of rules produce the same predictions, then we should accept both.

Deductive vs Inductive Logic

However, radical operationalism relies heavily on induction. If a rule worked in the past, then it must work in the future. That’s not strictly logical.

Rules vs Explanations

Another problem is that people have trouble seeing these rules as a “genuine explanation.” So do we need the notion of “cause” beyond the realm of phenomena?

Phenomenal vs Transcendental Causation

Kantians say that causality is inherent in our a priori understanding, not in the objects themselves. Abner Shimony notes there are many kinds of causality. He feels this diversity undermines the universal application of this Kantian claim. Also, cognitive science has blurred the distinction between the phenomenal and transcendental selves. Therefore categories of understanding can hardly be limited to just the phenomenal self.

Phenomena vs the Mind

Furthermore, in the past century or two, mathematics and physics have undermined the belief that our understanding of phenomena reflects the ordering principles of the mind. Therefore Shimony doesn’t believe that only phenomena can be the “causes” of other phenomena.

Shimony’s Causation vs d’Espagnat’s Laws

D’Espagnat focuses on predictive laws in constructing his concept of a “Veiled Reality.” Shimony puts causality at the root of his ontology.
True, the reliability of predictive laws must have some kind of "cause." But d'Espagnat still says he and Shimony have very different views. Transcendental Uniqueness vs Structure A basically Kantian approach says a "transcendental object" is the "purely intelligible cause" of various phenomena. Kant believes that objects exist "per se"—but only in experience. However, he felt there must be a "cause" of these representations. The cause will be totally unknown to us, but he still gave it a name: the "transcendental object." This unknowable cause is singular. The phenomena it produces are plural. Therefore the "transcendental object" is unique. D'Espagnat already acknowledged (in chapter ten) similarities between Kant's transcendental object and his own views of extended causality and "ground Reality." However, d'Espagnat is willing to accept "some sorts of structures" that end up "implying" our scientific laws. That structure and that connection to our laws will still be "undecipherable." Individual Mind vs Mind in General Operationalism avoids making ontological statements. However, someone has to set up and run the experiment, and someone has to observe the results. So presumably either an individual "mind" or a "mind in general" exists. Objective Laws vs Mentalist Consciousness Jean Petitot says we can be "objective" about the laws of phenomena even if we can't see what's behind the phenomena. Galilean space and time are "mental" concepts. But these mental forms let us construct the legal rules of phenomena, which become "desubjectivized." So "mentalist" or "cognitive" is what's unique to each person's consciousness. Things lying in space are therefore neither ontological nor mentalist. Objectivity vs Ontology Petitot says that space and time are the crucial notions in classical physics. In quantum physics the crucial notion would be probability amplitudes. But space and time seem independent of us, while probability amplitudes are very much connected to an observer. D'Espagnat says it's not a crucial distinction, as Petitot separates objectivity from ontology. Petitot's approach is what d'Espagnat calls a "weak objectivity" or "intersubjectivity." Different observers will get the same measurements under the same conditions. Assumptions vs Justifications But d'Espagnat tackles Petitot's approach on two fronts. The first objection is that stating a rule and justifying a rule are two different things. Stating that "reality-per-se" is unobservable creates some interesting consequences. But the statement is basically an axiom. Galilean physics can still be explained by the "reality of the accidents." The big challenge to realism was quantum physics, not Petitot's or anyone else's transcendental claims. Transcendental vs Individual Subject The second objection is that transcendentalists create a contradiction when they limit "mentalist" and "cognitive" to individual minds. Petitot and Kant both believe a transcendental subject is impersonal. It supposedly conveys a priori sensibility and categories of understanding not limited to an individual. Kant said his transcendentalism differed from Berkeley's idealism. However, Kant's "empirical realism" is still only empirical. Objects of experience exist in experience. But experience requires one or more subjects to have that experience. D'Espagnat says a "transcendental subject" can't eliminate the role of knowledge in our experiences. And knowledge depends on cognition.
That knowledge is then communicated intersubjectively, taking it out of the "private realm." Plato vs Galileo Galileo stressed the mathematical structure of natural laws. Some people take this as evidence he was a "Platonist." Alexandre Koyré said Galilean science started from this belief: Reason and geometry are enough to acquire "intelligence of the real." But Galileo took considerable pains to investigate phenomena. For him to be a Platonist you'd have to equate what's "empirically real" with a kind of Platonic idealism. However, Plato's cave suggests our pursuit of phenomena will get us only as close as some shadow of the "Real." Senses vs Innate Knowledge Also ambiguous is the notion of what is "innate." Both Descartes and Saint Augustine believed we could gain knowledge without use of the senses. But empiricists believe "reality-per-se" is inaccessible. So how could we ever experience an independent reality? Empiricism vs Innatism If we consider Kantian space, time, and causality then these notions must be innate. Furthermore, Kant's categories of understanding are a priori, so they too are "innate." But in our "semi-intuitive" world-view we follow sensory evidence as closely as possible, yet interpretation still guides us. Quantum Mechanics vs Innatism Quantum mechanics is "weakly objective" hence "antirealist." Its view of knowledge is somewhat Kantian, but with a strong dose of operationalism added. This operationalism prevents quantum mechanics from getting too close to Descartes' innatism. Furthermore, the simplicity of quantum rules leads us to infer (unprovably yet irrefutably) a simplicity in the "Real." D'Espagnat says this approaches Nicolas Malebranche's "vision in God." Empiricism vs Conventionalism D'Espagnat says that, reading between the lines, one can see evidence of Henri Poincaré's "ontological" stance. Kant believed the axioms of geometry were a priori. The discovery of non-Euclidean geometries refuted such an idea. Poincaré saw the axioms as neither a priori nor as experimental data. They are conventions, he decided. "One geometry cannot be truer than another one, it can only be more convenient," he wrote. The Convenient vs the Real Poincaré felt the same way about physics. Experimental data and theories about them are not descriptions of an independent reality. They are convenient and concise "pictures" to describe observations and connect them. So, for instance, the ether hypothesis is "convenient." Whether or not ether exists is the concern of the metaphysician, not him. Supporters of "objectivist realism" complain conventionalism favours convenience over truth. However, both Poincaré and d'Espagnat reply that if rules make the right predictions then we might as well call them "true." Knowable vs Underlying Reality Poincaré says relationships between things are "objective" when they're "the same for everybody." Poincaré says that the relationships between things are "the sole objective reality." These relationships cannot be conceived independent of a mind that conceives them. However, "they are objective nevertheless since they are shared by all thinking beings." Poincaré is definitely referring to real, though hidden, objects. And Poincaré says we can discover true relationships between these real objects. D'Espagnat says that the only way to make sense of these statements is to believe Poincaré believed in some reality that underlies phenomena. Otherwise it's hard to imagine how there could be real relationships between real, if hidden, objects.
Separability vs Non-separability Although d'Espagnat's viewpoint and Poincaré's implicit ontology are similar, they differ over "separability." Poincaré's "structural realism" involves "objects-per-se"—unknowable but plural. D'Espagnat says modern physics does not support separability, and hence there must be "some underlying coherence, or deep unity" to this hidden reality. Rules vs Ontology Poincaré believed the equations of classical physics served two purposes. First, they describe the structure of various laws. Second, they describe the value of certain properties at different points. Poincaré was happy with the first role. He had doubts about the second role as he felt equations indicated only what would be observed at those points, not what was pre-existing there. D'Espagnat wonders if we can give up the second role of an equation's symbols while retaining the first. Maybe we could then call this a "structural" realism. Old vs New Theories D'Espagnat does note that Poincaré explored ontological issues only with reluctance. Therefore it would be wrong to attribute this interpretation to Poincaré. Also, this interpretation has some problems. As theories evolve, old equations may be seen as merely approximate. Also, a new theory may have little in common with the old theory. This would imply the structure of "Reality" is very different under this new theory. However, normally one could derive the old theory's equations from the new theory's equations. In that case there still might be a meaningful, permanent substratum to reality, despite the objection of radical idealists. Structural Realism vs Veiled Reality In the end, d'Espagnat says, structural realism can be justified only after it's watered down so much it looks like his own "Veiled Reality." In the final chapters d'Espagnat says he'll have to steer between the conceptual difficulties of classical phenomenalism and the way physical realism is contradicted by the results of its own science. The Portable Rainbow 7 July 2012 Under the Veil Chapter 15 ("Explanation and Phenomena") of Bernard d'Espagnat's On Physics and Philosophy continues the previous chapter's exploration of causation and explanation. With quantum mechanics relying on observation and denying a naive realism, is an empirical explanation good enough? For d'Espagnat there's a need to postulate a "Veiled Reality" of which we may only be able to sneak some peeks, if at all. Nonetheless he believes it's there. Prediction vs Explanation If you measure the state of quantum particles and find a correlation-at-a-distance, how do you explain it? Quantum mechanics is a "recipe" to predict observations from initial conditions. It's an "explanation" on the level of empirical reality. If you need some "deeper" explanation about what's "really" going on, you might add "hidden variables" to extend the standard theory. But then you run into problems with Bell's Theorem and the experimental results of Aspect and others. If you don't have knowledge of a deeper reality, how can an empiricist justify using induction to create laws? Just because a law summarizes certain observations on certain days, why should we think it's universal? Induction vs Unknowable Explanation For d'Espagnat, a belief in the existence of a deeper reality is enough to ground our use of induction.
We may be incapable of comprehending this deeper reality, but our belief that there is one suggests a connection between empirical reality and an underlying reality. That belief is enough for d'Espagnat to accept induction without having to justify it every time it's used. Even if we don't know anything much about this deeper reality, there's still no "logical inconsistency," d'Espagnat says, in using its presumed existence to justify induction. Furthermore, this deeper reality—whether "veiled" or even entirely unknowable—will not be an arbitrary reality, d'Espagnat says. Rainbows vs Quantum Concepts Although rainbows can't be directly grasped and manipulated, they're explained in classical physics. A description of rainbows might illuminate how we speak about quantum systems. A rainbow (including its two "bases") will look different from different locations. Hence, the particular rainbow someone sees is observer-dependent. The same reliance on location is true if you set up automatic cameras. Hence you can't say that just because we've taken a picture of a rainbow, this rainbow "really" existed before that observation. Similarly, our tendency to "reify" (seeing something as concrete and real) means we jump from an observation to assuming what was observed somehow pre-existed. If we can argue that a rainbow doesn't pre-exist, we should be able to argue that a quantum object doesn't pre-exist either. Dinosaurs vs Humans However, surely dinosaurs existed before humans ever walked the earth. No observation was required to bring them into existence. D'Espagnat says that dinosaur bones are like the pointers of an experimental set-up. We see something and conclude it's real. Though d'Espagnat says it's real, he specifies it's real in the realm of "empirical reality." However, this empirical reality is hardly an arbitrary production. Its qualities are severely constrained, and in the end observers tend to see mostly the same thing. Explanations vs The Final Key Classical physics can still provide us with "explanations" as long as we don't presume they derive from a deeper reality. D'Espagnat adds that we should not conclude that these explanations are the "final, ultimate key" to understanding the world. D'Espagnat vs Other Views I've concentrated above on d'Espagnat's ultimate positions, but here are some examples of how he explains his disagreement with other people's positions (real or conjectured). D'Espagnat vs Cassirer If you see correlations in a quantum experiment then d'Espagnat has trouble imagining Cassirer's "logical necessity" could explain each particular observation in a sequence. True, Cassirer could choose (or could have chosen, as he's now dead) hidden variables, but d'Espagnat says that's too "metaphysical" for Cassirer, and the Aspect-type experiments have refuted them anyway. Maybe Cassirer equates "logical necessity" with a pre-existing logos, a primary notion of absolute existence. D'Espagnat says that whole idea is something the neo-Kantians were trying to get away from, so again it doesn't sound like Cassirer. Nonetheless, d'Espagnat says his own position is consistent with considering the "Real" (with a capital R) to consist of such a logos. D'Espagnat vs Carnap Carnap says scientists should be more modest. They shouldn't try to explain the "why" but just the "how" of phenomena.
Carnap’s position is that simply producing entities, such as Driesch’s “entelechy” as an explanation for tissue regeneration, is irrelevant as there are no “laws” connecting conditions and observations. So what about d’Espagnat’s “Real”? Is it just a meaningless entity? It doesn’t help us predict anything, so maybe it’s not an explanation at all. D’Espagnat responds by saying scientists long ago were implicitly believing in the realism of a world ruled by classical physics even if explicitly they concerned themselves with just the laws of observation. Even some realists nowadays, says d’Espagnat, acknowledge that there could be an underlying reality, not attainable through “discursive knowledge,” that nonetheless grounds our empirical reality. Furthermore, if laws relate just to our known observations, then what happened before we made those observations? Carnap, according to d’Espagnat, said laws could exist before such observations but the truth of the laws could not be judged. D’Espagnat says this amounts to Carnap’s acknowledging a “human-independent reality” that has a structure we might never know. Since quantum mechanics only predicts observations and does not “explain” underlying reasons, this implies to d’Espagnat that a “Veiled Reality” has a meaning even if we can’t explore it empirically. But what if we imagine Carnap meant some kind of “linguistic framework” involving “nature” and “existence” that replaced the usual meaning of those terms? In a world ruled by classical physics it makes sense to speak of “things” and their qualities makes sense. In a world ruled by quantum physics it makes sense to speak of “sense-data” rather than “things.” D’Espagnat says this approach works fine for making sure scientific statements are clear. But it’s not satisfactory from the philosophical point of view. Carnap, d’Espagnat says, is just “masking” not “eliminating” the connection we make between an explanation of observations and an explanation of what’s going on in some underlying reality. Since a linguistic framework is “chosen by us” according to Carnap it sounds a bit arbitrary and not like a genuine explanation. An Influential Relationship 1 July 2012 Influential Arrows Just Causes and Side Effects Chapter 14 (“Causality and Observational Predictability”) of Bernard d’Espagnat’s On Physics and Philosophy examines how, and if, we can use the concepts of causation and influence to explain the world. Reality vs Observations Taking a break from examining the “notion of reality” d’Espagnat uses this chapter to argue it’s better to predict observations than predict “things as they are.” Animism vs Empiricism Aristotle saw causation as related to human will, and even inanimate objects seemed to have some animistic will—as seen when a falling stone somehow desires to return to its natural resting place. Empiricists went to the other extreme. Physical laws should just be descriptions of events and their regularities. However, what initial conditions are the “cause”? You end up with too many empirical laws. Mathematical vs Physical Determinism If two very close points rapidly diverge then they’re not likely to be “physically deterministic.” It’s too hard to calculate their exact paths. “Strong objectivists” argue that science accumulates knowledge about an underlying reality, not just our experimental observations. 
Some strong objectivists argue that "chaotic" behaviour is an example of indeterminism, and others argue initial conditions have to be repeated exactly for us to say deterministic laws apply. The first approach implies imperfect observations or calculations show reality is indeterministic, but that's strange since strong objectivists believe in an underlying reality separate from our fuzzy data. The second approach is a problem since a strong objectivist can't be absolutely sure the initial conditions won't be repeated. Laws vs Predictions D'Espagnat also criticizes the claim we've seen the "end of certainties" just because some calculations make predictions impossible. He says that's too harsh as we can still believe in our laws even if sometimes in practice we can make reliable predictions only for the near future. Classical vs Quantum Indeterminacy D'Espagnat cautions against regarding chaos theory as some overriding conceptual triumph as it's grounded on classical concepts of space and time. Classical physics falters where quantum physics and its apparent indeterminacy excel, particularly on the microscopic level. Yet d'Espagnat says the defining feature of quantum mechanics is not its indeterminacy but its "weak objectivity." The theory confines itself to observations of reality, not claims about reality itself. Individual vs Statistical Determinacy D'Espagnat agrees with Kant that "regularity in time"—in which one kind of event is followed consistently by another—is a good way to distinguish the empirically real from, say, the events of a dream. Kant's "sin of omission" (understandable because of his time and place) was not to consider statistical regularities in which ensemble probabilities are deterministic. D'Espagnat emphasizes that quantum mechanics makes reliable predictions for observing ensembles of quantum systems, but these are not probabilities of ignorance about individual systems. At first glance quantum mechanics may seem indeterministic, but if you keep in mind quantum predictions are about observations of multiple systems then it too is deterministic—if only "statistically." Laws vs Facts D'Espagnat warns against "a variant of nihilism" if you don't pay enough attention to the difference between laws and facts. He says even Dirac's musings that universal constants (such as the speed of light) might change over time don't threaten that distinction. The nihilistic danger, d'Espagnat says, comes from sociologists, epistemologists, or "pure philosophers" who see in the history of a changing universe a fundamental lack of stability. They fail to distinguish between laws and facts, or they fail to appreciate the significance of the distinction. Causes vs Influences D'Espagnat imagines a Laplace daemon that can possess total knowledge of events in part of the universe. The fixed speed of light means, in an Einsteinian world, the daemon need only check events in a point's past light-cone to predict that point's future. However, Bell's Theorem combined with the experiments of Alain Aspect (and others) proved that the locality hypothesis is false. Add to that the fact that the order (in time) of events can vary by reference frame, and we see that (earlier) cause and (later) effect can be ambiguous. D'Espagnat thus suggests that faster-than-light influences—or "influential relationships"—do exist.
Time crystals would be a perpetuum mobile One of the widely shared recent articles was "Time crystals—how scientists created a new state of matter". What's going on? Already at this point, you may see a misconception that leads to Wilczek's impossible proposals. There's a very general implicit problem in his usage of the word "material" or "object" for something that has some properties at different moments of time. Sorry but a "material" or an "object" is fully described by some information at a single moment of time, e.g. \(t=0\). If you need to talk about the values of observables at many or all values of time \(t\), then you are not talking about a "material" or "object" but rather a process. The misguided analogies between "crystals" and hypothetical "time crystals" may be said to result from this confusion mixing objects and processes. By the way, there was a lot of the very same confusion in the literature about S-branes (Strominger or spacelike branes). But let us look at the problem from a slightly different, although basically equivalent, angle. More conceptually, Wilczek's time crystal is defined as an object that has the property that in its ground state (the state of lowest energy), an observable that we may call \(x(t)\) is a non-constant, periodic function of time. Something keeps on spinning indefinitely. There exists a name for a hypothetical object that is oscillating indefinitely. The name is perpetuum mobile, or a perpetual motion machine of the first kind. In his original paper, Wilczek is aware of this point and acknowledges that his gadget is therefore perilously close to fitting the definition of a perpetual motion machine. Well, he's far too modest here. His gadget wouldn't be just close to a perpetuum mobile; it would be one. Wilczek simply rebranded the perpetual motion machine, just like cold fusion was rebranded as "low energy nuclear reactions" and creationism was rebranded as "intelligent design". The main difference between the millions of "inventors" of perpetuum mobile in the past and Frank Wilczek is, we are led to believe, that Frank Wilczek is really smart and a Nobel prize winner and so on, so unlike the numerous losers and/or crackpots before him, he actually succeeded. Design by a predecessor of Wilczek's. The new papers don't show anything of the sort, I am confident, although I haven't read them in their entirety. They just present some atomic physics systems that respond periodically when they're stimulated by some periodic pulses of laser light or something like that. What a shock that sustained, periodic external influences lead to sustained, periodic responses. As far as I can see, these "insights" are absolutely trivial and absolutely don't justify the statement that Wilczek's claims were shown true. I don't have enough motivation to read these papers because I find it obvious that they're just papers by clueless experimenters who observe something, they don't understand what they're actually seeing, and they say that it agrees with some theorist's wrong paper. Shortly after Wilczek published his 2012 paper, exchanges began between Wilczek and Patrick Bruno, a critic who indeed said that Wilczek's objects are impossible because they're perpetual motion machines. I guess that this 2013 paper with a no-go theorem was the last salvo by Bruno in his battles. Watanabe and Oshikawa added another no-go theorem in their 2014 paper.
Instead of discussing any specifics of the experiments that just ignore all these results, let me say what I think is the deep theoretical misconception that led Wilczek to say all these things. He clearly thinks that the spatial translations and temporal translations are analogous – after all, space and time are linked by the Lorentz symmetry in special relativity and they are naturally unified to spacetime translations – and because it's possible to spontaneously break the spatial translations (by creating crystals), it must be possible to do the same with temporal translations (by creating time crystals), so the only remaining task is to decide how to do it nicely. But this "complete democracy" between space and time is wrong for a simple reason. The reason is that things in the spacetime evolve and the observables (e.g. fields) aren't quite independent at every spacetime point. At most, you may determine the initial conditions e.g. at a spacelike hypersurface \(t=0\). Once you determine the fields (and their derivatives or canonical momenta) at \(t=0\), their values in the whole spacetime are determined by the field equations of motion. The surface where you can pick the initial conditions is referred to as the Cauchy surface and even in general relativity where many things are flexible, there exist very good reasons why this surface had better be spacelike and contain no timelike vectors in it. In quantum field theory, the reason why timelike Cauchy slices would be no good is simple: the field equations guarantee that timelike-separated fields almost always have nonzero commutators. So you simply can't determine these values independently because of the uncertainty principle! Because the Cauchy surface is spacelike and not timelike, the "complete democracy" between space and time is broken. The equations of motion and commutators etc. are still Lorentz-covariant but the required spacelike signature of the Cauchy surfaces implies that you have much less freedom to determine how things depend on time than the freedom to decide how things depend on the spatial coordinates. And that's probably the key point that Wilczek is overlooking. So if you choose the most general object that may exist in the spacetime, you may always determine it by some information at a Cauchy surface which is morally equivalent to the three-dimensional spacelike \(t=0\) hypersurface. And whether or not its evolution in time will be constant or non-constant, periodic or aperiodic, and damped or not damped, is completely determined by the dynamical laws of your physical theory. You simply cannot prescribe these things. That differs from the case of ordinary crystals where you have the freedom to distribute the atoms among the lattice sites in the three-dimensional space. But you simply don't have the freedom to dictate whether some peaks reappear periodically after every period \(\Delta t\). The question of what happens is dictated by the dynamical laws of physics. Another pre-Wilczek model of a classical time crystal. Such concepts remind me of the big-government leftists. It's spectacularly clear that the more redistribution or more moving parts you add, the more energy (or money) the process will cost, waste, or demand for the machinery to run.
But they always ignore or understate some energy cost, driven by the unchangeable belief that the perpetuum mobile or the big government is a great idea. OK, do the dynamical laws of physics allow the ground state of an object to indefinitely oscillate, to have an observable \(x(t)\) that is non-constant, like \(x_0\cos\omega t\)? Let's use the elementary rules of quantum mechanics to say something about this question. Well, it's easy. If the state \(\ket\psi\) is a ground state, it really means that it is an eigenstate of the energy operator\[ H\ket\psi = E_0 \ket\psi \] where \(E_0\) is the lowest eigenvalue in the whole spectrum. But we don't even need to know that it's the lowest one – although this was the statement that Bruno – focusing on the particular systems proposed by Wilczek – was proving incorrect. I only need that the state is an eigenstate. As undergraduate students learn in the first lectures of quantum mechanics – when they are taught about the time-independent and time-dependent Schrödinger equation – the evolution of the energy eigenstate in time is unavoidably stationary,\[ \ket{\psi(t)} = \exp(Ht / i\hbar) \ket{\psi(0)}. \] Only the overall phase of the state is changing with time. That phase has no physical consequences (at an isolated moment of time) and it cancels in all the expectation values etc. which are therefore constant:\[ \bra{\psi(t)} x(t) \ket{\psi(t)} = {\rm const}. \] So the ground states are simply stationary and no observable that may be measured in them may oscillate. Period. That's it. There are no quantum time crystals. If some expectation value in a state depends on time, the state must unavoidably be a superposition of many energy eigenstates corresponding to different eigenvalues of the energy. You may split the state into pieces and pick the lowest-energy eigenstate contribution in it. And this true ground state will be stationary. Even though the relativistic equations respect the Lorentz symmetry which is some kind of a "perfect" symmetry between space and time, it's still true that relativity doesn't question the qualitative difference between objects that are timelike and objects that are spacelike. Indeed, whether e.g. a spacetime interval is timelike or spacelike is a question that all inertial observers will agree upon – the invariant squared length of the interval is either positive or negative and its value is Lorentz-invariant, a reason why Einstein was tempted to use the term Invariantentheorie for the theory that we know as the theory of relativity. So world lines of true objects in consistent theories have to be timelike (or at most null) and not spacelike. This asymmetric treatment of the two signs is not in any conflict with the Lorentz symmetry because the Lorentz transformations do preserve the qualitative (timelike vs spacelike) character of intervals. For the same reason, one may consistently choose the initial conditions at spacelike Cauchy surfaces, but not surfaces of a mixed signature. As I mentioned, this difference boils down to the fact that fields' commutators vanish at spacelike separation but not timelike separation, so one can only determine the fields independently at spacelike hypersurfaces. Similarly, one can observe or build systems that spontaneously and permanently break the translational symmetry in space but not those that spontaneously and permanently break the translational symmetry in time. Unless I am wrong, Wilczek's reasoning is probably rooted in "overinterpreting" the Lorentz symmetry in a certain way.
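To spell out the complementary case that the argument above uses implicitly (a standard textbook step, added here for clarity): take a superposition of two energy eigenstates, \(\ket{\psi(0)} = a\ket 0 + b\ket 1\) with \(H\ket n = E_n \ket n\). Then\[ \ket{\psi(t)} = a\, e^{-iE_0 t/\hbar} \ket 0 + b\, e^{-iE_1 t/\hbar} \ket 1 \] and the expectation value of an observable \(x\) with matrix elements \(x_{mn} = \bra m x \ket n\) is\[ \bra{\psi(t)} x \ket{\psi(t)} = |a|^2 x_{00} + |b|^2 x_{11} + 2\,{\rm Re}\left[ a^* b\, x_{01}\, e^{-i(E_1 - E_0)t/\hbar} \right]. \] The cross term oscillates at the Bohr frequency \((E_1-E_0)/\hbar\) and exists only when two distinct energy eigenvalues contribute – which is exactly why a time-dependent expectation value requires a superposition and can never occur in a ground state.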
Alternatively, we may say that people with common sense know that a perpetuum mobile is impossible – and within quantum mechanics, this fact is just demonstrated using a different formalism. It could of course be conceivable that quantum mechanics changes things so radically that it could allow the perpetual motion machines – after all, it allows the quantum tunneling and many other things that are impossible classically. But in the case of the perpetual motion machines, it just isn't the case. Spinning nuclei Under a 2012 blog post about the electron's electric quadrupole moment – which has to be zero (like the tensor properties of all particles with \(j=0\) or \(j=1/2\)) by the Wigner-Eckart theorem – someone asked how it's possible that people often say that uranium-238 is cigar-shaped i.e. has some "quadrupole moment" even though it has \(j=0\). He has also mentioned two \(j=1/2\) nuclei that are sometimes hinted to have a non-spherical shape, too. It's confusing but if you really had just the state with \(j=0\) and nothing else, the Wigner-Eckart theorem – a general group-theoretical consequence of the addition of the angular momentum in quantum mechanics – would require all (traceless symmetric) tensors to be zero. That includes the ordinary and electric quadrupole moments. Spins \(j=0\) or \(j=1/2\) mean that the particle doesn't even carry enough spin-related information to remember the (sign-independent) axis along which it could be elongated or shrunk. What's new about the nuclei is that they're composite, which means that they have lots of excited states (describing various types of relative motion between the protons and neutrons – or quarks and gluons). In particular, there are states that look like the extra "orbital motion" added to the nucleus' spin. So aside from the \(j=0\) ground state, there is a \(j=2\) and \(j=4\) and \(j=6\) excited state of uranium-238. The dependence of the energy on the angular momentum goes approximately like \(a+bJ^2\), which allows you to extract something like a "moment of inertia" from \(b\). But this \(b\) isn't quite identified with the expectation value of any tensor in the state with \(j=0\) itself – that expectation value simply has to be zero. Also, the states of uranium-238 with \(j=2,4,6,\dots\) are "excited", which means that they won't survive forever. These excited ("spinning") states of nuclei will emit a photon (gamma-ray) and drop to a lower value of \(j\) very quickly – they will reach the true ground state with \(j=0\) almost instantaneously. The energy and rates of transition of these jumps may also be used to deduce some nonzero values of the "quadrupole moments" (where the quotation marks indicate that you must be careful about the definition of the object because it's a generalization that doesn't necessarily coincide with other meanings of the phrase) – especially if it is true that the dominant emission is some electric quadrupole radiation. But if the transition results from some quadrupole radiation, the transition is determined by matrix elements such as\[ \bra{\text{U-238}, j=0} Q_{ij} \ket{\text{U-238}, j=2} \] which may be nonzero, but they relate two states of the nucleus. The matrix element above isn't an expectation value; it isn't a property of one state only, and especially not of the ground state. So Wilczek's perpetuum mobile doesn't work for the nuclei, either.
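A side note on the \(a+bJ^2\) band quoted above (standard collective-model nuclear physics, nothing specific to the time-crystal debate): the rotational band of an even-even nucleus such as uranium-238 is usually parametrized as\[ E(J) \approx E_0 + \frac{\hbar^2}{2I}\, J(J+1), \qquad J = 0, 2, 4, 6, \dots \] so the coefficient called \(b\) is essentially \(\hbar^2/2I\), and fitting it to the measured level spacings is how the effective "moment of inertia" \(I\) is extracted; for large \(J\), \(J(J+1)\approx J^2\), reproducing the \(a+bJ^2\) form.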
If the nuclei are spinning in a way that has some "classical component" – in the sense that some expectation value of an observable would be time-dependent – then they are superpositions of many energy eigenstates and the higher ones will collapse to the lower ones (e.g. by the emission of gamma quanta). In the end, you are left with the true ground state that simply has to be stationary. For this reason, the temporal translational symmetry cannot be spontaneously broken, at least not in the sense envisioned by Wilczek. Note that the previous paragraph talks about the collapse to a lower state. This description works because the spectrum of energy is bounded from below. That's how it differs from the spatial counterpart, the momentum, which is unbounded – both signs of any component of the momentum are equally good. You could view this "boundedness from below" as another example of the "qualitative differences" between spacelike and timelike entities in relativity. The combination \(k_\mu p^\mu\) of components of the energy-momentum operator \(p^\mu\) has a spectrum that is bounded from below exactly if \(k_\mu\) is timelike, i.e. if \(k^\mu k_\mu\geq 0\), assuming the timelike (mostly minus) signature. This "discrimination against" the timelike operators – and timelike crystals – is totally compatible with the Lorentz symmetry because the Lorentz transformations only map spacetime intervals (or components of vectors or tensors, or slices, or other entities) to entities of the same qualitative type. Another topic: LIGO and axions Adrian Cho at Science discusses an April 2016 paper by Arvanitaki, Dimopoulos, et al. that just appeared in PRD. When advanced LIGO sees thousands of black hole mergers in coming years, they say, it could also see signs of axionic (dark matter) waves created with the help of the black hole horizons – assuming that the axion mass is a picoelectronvolt, plus minus (or times over?) two orders of magnitude. Some smart folks say that it's a more exciting new thing that could be observed by LIGO than all the previous "possible future discoveries".
ParSol is a library for semi-automatic parallelisation of data-parallel (especially linear algebra) algorithms. It is written in C++, using such C++ features as OOP, template metaprogramming, operator overloading and others, so that the usage of the library is simple and intuitive, and the library itself is easily expanded. The library presents the user with a set of linear algebra objects (multidimensional arrays, vectors, sparse and dense matrices, among others). The functionality of the arrays and vectors is similar to that of such libraries as Blitz++, FreePOOMA, and Boost.MultiArray. Using the library, the programmer creates a sequential version of the code (no MPI needed) and debugs it. Once done, the parallel version of the algorithm is created by substituting some header files (to switch from the sequential to the parallel version) and some class names, and by adding several lines of code (initialization, topology specification and data exchange); a sketch of the communication pattern this involves is given after the reference list below. The parallel version should be recompiled using an MPI implementation (at least MPI-1.1 support is required). The parallelization is done the same way as in HPF. In fact, the library brings to C++ parallelization functionality similar to the one HPF brings to Fortran. However, unlike HPF, ParSol is a library, not a new compiler, with all the pros and cons that entails. Also, in ParSol, the programmer must explicitly specify the computational stencil in order to ensure optimal data exchange. On the one hand, this was a necessity, because it is hard for a library to analyse arbitrary user code. On the other hand, however, this allows for easy fine-tuning of communication costs - the area where HPF failed to meet expectations. Apart from parallelization, the library objects are fine-tuned for high performance. For example, ParSol arrays perform similarly to native C/C++ arrays. Also, the library uses only standard C++ features and MPI-1.1 for the parallel version, which makes it highly portable. A comprehensive test suite is also available as part of the library. References in zbMATH (referenced in 20 articles) Showing results 1 to 20 of 20. Sorted by year (citations) 1. Ferreira, V. G.; Kaibara, M. K.; Lima, G. A. B.; Silva, J. M.; Sabatini, M. H.; Mancera, P. F. A.; Mckee, S.: Application of a bounded upwinding scheme to complex fluid dynamics problems (2013) 2. Barovik, D.; Taranchuk, V.: Mathematical modelling of running crown forest fires (2010) 3. Čiegis, R.; Laukaitytė, I.; Trofimov, V.: Parallel numerical algorithm for simulation of counter propagation of two laser beams (2010) 4. Čiegis, R.; Tumanova, N.: Numerical solution of parabolic problems with nonlocal boundary conditions (2010) 5. Paulinas, M.; Meilūnas, M.: An algorithm for partitioning of right heart ventricle medial axis (2010) 6. Annunziato, M.: A finite difference method for piecewise deterministic processes with memory. II. (2009) 7. Čiegis, R.: Numerical solution of hyperbolic heat conduction equation (2009) 8. Čiegis, R.; Laukaitytė, Inga; Radziunas, Mindaugas: Numerical algorithms for Schrödinger equation with artificial boundary conditions (2009) 9. Jakušev, Alexander; Čiegis, Raimondas; Laukaitytė, Inga; Trofimov, Vyacheslav: Parallelization of linear algebra algorithms using ParSol library of mathematical objects (2009) 10. Laukaitytė, Inga; Čiegis, Raimondas; Lichtner, Mark; Radziunas, Mindaugas: Parallel numerical algorithm for the traveling wave model (2009) 11. Annunziato, M.: Analysis of upwind method for piecewise deterministic Markov processes (2008) 12.
Čiegis, R.; Radziunas, M.; Lichtner, M.: Numerical algorithms for simulation of multisection lasers by using traveling wave model (2008) 13. Čiegis, Raim.; Čiegis, Rem.; Jakušev, A.; Šaltenienė, G.: Parallel variational iterative linear solvers (2007) 14. Jakušev, A.: Application of template metaprogramming technologies to improve the efficiency of parallel arrays (2007) 15. Sapagovas, M.; Kairytė, G.; Štikonienė, O.; Štikonas, A.: Alternating direction method for a two-dimensional parabolic equation with a nonlocal boundary condition (2007) 16. Čiegis, Raimondas: Parallel numerical algorithms for 3D parabolic problem with nonlocal boundary condition (2006) 17. Čiegis, Raimondas: Parallel LOD scheme for 3D parabolic problem with nonlocal boundary condition (2006) ioport 18. Čiegis, Raimondas; Jakušev, Alexander; Starikovičius, Vadimas: Parallel tool for solution of multiphase flow problems (2006) ioport 19. Starikovičius, V.; Čiegis, R.; Jakušev, A.: Analysis of upwind and high-resolution schemes for solving convection dominated problems in porous media (2006) 20. Čiegis, R.; Jakušev, A.; Krylovas, A.; Suboč, O.: Parallel algorithms for solution of nonlinear diffusion problems in image smoothing (2005)
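To make the workflow described above concrete, here is a minimal, self-contained sketch of the halo ("stencil") data exchange that, according to the description, ParSol automates once the programmer declares the computational stencil. This is not ParSol's actual API (which is not reproduced here and should be checked against the library's documentation); it only illustrates the kind of communication a width-1 stencil on a 1D block-distributed array requires, using nothing beyond MPI-1.1.

// NOT ParSol code: a plain MPI-1.1 sketch of the halo ("stencil")
// exchange that the library reportedly automates once the
// computational stencil has been declared by the programmer.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 8;                    // interior points per process
    std::vector<double> u(n + 2, 0.0);  // u[0] and u[n+1] are ghost cells
    for (int i = 1; i <= n; ++i) u[i] = rank;  // some block-local data

    // 1D process topology; MPI_PROC_NULL turns edge sends into no-ops.
    const int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    const int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    MPI_Status st;

    // Halo exchange for a width-1 (3-point) stencil: ship my boundary
    // values out, receive the neighbours' boundaries into ghost cells.
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                 &u[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, &st);
    MPI_Sendrecv(&u[n],     1, MPI_DOUBLE, right, 1,
                 &u[0],     1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, &st);

    // With the ghost cells filled, the stencil update is purely local.
    std::vector<double> v(u);
    for (int i = 1; i <= n; ++i)
        v[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;

    MPI_Finalize();
    return 0;
}

In a ParSol-style program the two MPI_Sendrecv calls and the ghost-cell bookkeeping would presumably be hidden behind the array class and the declared stencil; declaring the stencil explicitly is what lets the library exchange only the boundary layers actually needed, which is the communication-cost tuning the description credits ParSol with.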
After months of speculation, rumours and a leaked paper, the American Institute of Aeronautics and Astronautics (AIAA) has finally published a peer-reviewed paper by Nasa Eagleworks researchers on the controversial space propulsion technology EmDrive which shows that the device does indeed work. The open access paper, entitled "Measurement of Impulsive Thrust from a Closed Radio-Frequency Cavity in Vacuum" was published online on Thursday 17 November ahead of its debut in the print edition of the AIAA Journal of Propulsion and Power in December. It is authored by Harold White, Paul March, James Lawrence, Jerry Vera, Andre Sylvester, David Brady and Paul Bailey – all engineers from the Nasa Eagleworks laboratory who are well known to EmDrive enthusiasts. Similar to an early draft of the paper that was leaked onto the Nasa Spaceflight forum earlier this month by Australian fan Phil Wilson, the published paper reveals that Nasa Eagleworks did indeed build a drive that generated 1.2 millinewtons per kilowatt of thrust in a vacuum. Of course, this amount of thrust is incredibly low, but the scientists said that they were more concerned about trying to prove that the device worked at all. "Although this test campaign was not focused on optimising performance and was more an exercise in existence proof, it is still useful to put the observed thrust-to-power figure of 1.2 mN/kW in context. The current state-of-the-art thrust to power for a Hall thruster is on the order of 60 mN/kW. This is an order of magnitude higher than the test article evaluated during the course of this vacuum campaign; however, for missions with very large delta-v requirements, having a propellant consumption rate of zero could offset the higher power requirements." The EmDrive created by Shawyer's space company, Satellite Propulsion Research Ltd. And why exactly does the EmDrive work? The researchers have even put forward a hypothesis about why the EmDrive works, claiming it could be due to exotic physics relating to the "pilot-wave theory". The accepted interpretation of quantum mechanics, known as the Copenhagen Interpretation, which was devised by Niels Bohr and Werner Heisenberg in 1927, states that particles in physical systems generally do not have definite properties prior to being measured, and that quantum mechanics can only predict the probabilities that measurements will produce certain results. In contrast, the pilot-wave theory (also known as the "De Broglie–Bohm theory" or "Bohmian Mechanics") states that all particles must have precise positions at all times even when not being observed — a view that is largely rejected because it implies that the world must be strange in other ways. As with the Copenhagen view, there's a wave function governed by the Schrödinger equation. A wave function is a description of the quantum state of a system. The pilot-wave theory states that if you know the initial state of a system, and you have the wave function, then you will be able to calculate where each particle will end up. "Although the idea of a pilot wave or realist interpretation of quantum mechanics is not the dominant view of physics today (which favours the Copenhagen interpretation), it has seen a strong resurgence of interest over the last decade based on some experimental work pioneered by Couder and Fort," the researchers explained. So what now?
While the paper may have been published by the AIAA, sources in the know have told IBTimes UK that Nasa has indeed shut down the Eagleworks laboratory, and it is believed lead engineer Paul March decided to leave in October due to frustrations over the researchers' work not being taken seriously by the space agency. Back in the UK, British EmDrive inventor Roger Shawyer continues to develop a second-generation version of the microwave thruster, with thrust many orders of magnitude greater than the Eagleworks device, for commercialisation with Gilo Industries Group, and has applied for patents both in the UK and internationally that have been published and are currently going through the evaluation process. Space planes like the Boeing X37-B, which currently has to be launched from a rocket or plane, could benefit from microwave space propulsion technology like the EmDrive, says Roger Shawyer. "[It's] all very slow but very convincing. There's some very serious work going on and I expect to see a lot of exciting stuff about the EmDrive in the news in the near future," Shawyer told IBTimes UK earlier this month in relation to the leaked Nasa paper, which also reported a result of 1.2 millinewtons per kilowatt of thrust. Senior sources in the international space industry have also told IBTimes UK that the US Air Force is already testing EmDrive on the X37-B spaceplane, and that China is doing the same in its Tiangong-2 space laboratory. However, it is important to note that many in the scientific community still do not believe that the controversial technology works at all, and there will likely be heavy critiques of the Nasa Eagleworks paper in the near future.
Tuesday, November 28, 2006 Chemistry (from Greek χημεία khemeia[1] meaning "alchemy") is the science of matter at the atomic to molecular scale, dealing primarily with collections of atoms, such as molecules, crystals, and metals. Chemistry deals with the composition and statistical properties of such structures, as well as their transformations and interactions to become materials encountered in everyday life. Chemistry also deals with understanding the properties and interactions of individual atoms with the purpose of applying that knowledge at the macroscopic level. According to modern chemistry, the physical properties of materials are generally determined by their structure at the atomic scale, which is itself defined by interatomic forces. Chemistry is often called the "central science" because it connects other sciences, such as physics, material science, nanotechnology, biology, pharmacy, medicine, bioinformatics, and geology.[2] These connections are formed through various sub-disciplines that utilize concepts from multiple scientific disciplines. For example, physical chemistry involves applying the principles of physics to materials at the atomic and molecular level. Chemistry pertains to the interactions of matter. These interactions may be between two material substances or between matter and energy, especially in conjunction with the First Law of Thermodynamics. Traditional chemistry involves interactions between substances in chemical reactions, where one or more substances become one or more other substances. Sometimes these reactions are driven by energetic (enthalpic) considerations, such as when two highly energetic substances such as elemental hydrogen and oxygen react to form the less energetic substance water. Chemical reactions may be facilitated by a catalyst, which is generally another chemical substance present within the reaction media but unconsumed (such as sulfuric acid catalyzing the electrolysis of water) or a non-material phenomenon (such as electromagnetic radiation in photochemical reactions). Traditional chemistry also deals with the analysis of chemicals both in and apart from a reaction, as in spectroscopy. All ordinary matter consists of atoms or the subatomic components that make up atoms; protons, electrons and neutrons. Atoms may be combined to produce more complex forms of matter such as ions, molecules or crystals. The structure of the world we commonly experience and the properties of the matter we commonly interact with are determined by properties of chemical substances and their interactions. Steel is harder than iron because its atoms are bound together in a more rigid crystalline lattice. Wood burns or undergoes rapid oxidation because it can react spontaneously with oxygen in a chemical reaction above a certain temperature. Substances tend to be classified in terms of their energy or phase as well as their chemical compositions. The three phases of matter at low energy are Solid, Liquid and Gas. Solids have fixed structures at room temperature which can resist gravity and other weak forces attempting to rearrange them, due to their tight bonds. Liquids have limited bonds, with no structure and flow with gravity. Gases have no bonds and act as free particles. Another way to view the three phases is by volume and shape: roughly speaking, solids have fixed volume and shape, liquids have fixed volume but no fixed shape, and gases have neither fixed volume nor fixed shape. 
Water (H2O) is a liquid at room temperature because its molecules are bound by intermolecular forces called hydrogen bonds. Hydrogen sulfide (H2S) on the other hand is a gas at room temperature and standard pressure, as its molecules are bound by weaker dipole-dipole interactions. The hydrogen bonds in water have enough energy to keep the water molecules from separating from each other but not from sliding around, making it a liquid at temperatures between 0 °C and 100 °C at sea level. Lowering the temperature or energy further allows for a tighter organization to form, creating a solid, and releasing energy. Increasing the energy (see heat of fusion) will melt the ice, although the temperature will not change until all the ice is melted. Increasing the temperature of the water will eventually cause boiling (see heat of vaporization) when there is enough energy to overcome the polar attractions between individual water molecules (100 °C at 1 atmosphere of pressure), allowing the H2O molecules to disperse enough to be a gas. Note that in each case there is energy required to overcome the intermolecular attractions and thus allow the molecules to move away from each other. Scientists who study chemistry are known as chemists. Most chemists specialize in one or more sub-disciplines. The chemistry taught at the high school or early college level is often called "general chemistry" and is intended to be an introduction to a wide variety of fundamental concepts and to give the student the tools to continue on to more advanced subjects. Many concepts presented at this level are often incomplete and technically inaccurate, yet they are of extraordinary utility. Chemists regularly use these simple, elegant tools and explanations in their work because they have been proven to accurately model a very wide array of chemical reactivity, are generally sufficient, and more precise solutions may be prohibitively difficult to obtain. The science of chemistry is historically a recent development but has its roots in alchemy which has been practiced for millennia throughout the world. The word chemistry is directly derived from the word alchemy; however, the etymology of alchemy is unclear (see alchemy). History of chemistry The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to mankind. It was fire that led to the discovery of iron and glass. After gold was discovered and became a precious metal, many people were interested in finding a method that could convert other substances into gold. This led to the protoscience called Alchemy. Alchemy was practiced by many cultures throughout history and often contained a mixture of philosophy, mysticism, and protoscience (see Alchemy). Alchemists discovered many chemical processes that led to the development of modern chemistry. As history progressed the more notable alchemists (esp. Geber and Paracelsus) evolved alchemy away from philosophy and mysticism and developed more systematic and scientific approaches. The first alchemist considered to have applied the scientific method to alchemy and to distinguish chemistry from alchemy was Robert Boyle (1627–1691); however, chemistry as we know it today was invented by Antoine Lavoisier with his law of Conservation of mass in 1783. The discovery of the chemical elements has a long history, culminating in the creation of the periodic table of the chemical elements by Dmitri Mendeleyev.
The Nobel Prize in Chemistry created in 1901 gives an excellent overview of chemical discovery in the past 100 years. In the early part of the 20th century the subatomic nature of atoms was revealed and the science of quantum mechanics began to explain the physical nature of the chemical bond. By the mid 20th century chemistry had developed to the point of being able to understand and predict aspects of biology, spawning the field of biochemistry. • Physical chemistry is the study of the physical and fundamental basis of chemical systems and processes. In particular, the energetics and dynamics of such systems and processes are of interest to physical chemists. Important areas of study include chemical thermodynamics, chemical kinetics, electrochemistry, statistical mechanics, and spectroscopy. Physical chemistry has large overlap with molecular physics. Physical chemistry involves the use of calculus in deriving equations. It is usually associated with quantum chemistry and theoretical chemistry. • Theoretical chemistry is the study of chemistry via fundamental theoretical reasoning (usually within mathematics or physics). In particular the application of quantum mechanics to chemistry is called quantum chemistry. Since the end of the Second World War, the development of computers has allowed a systematic development of computational chemistry, which is the art of developing and applying computer programs for solving chemical problems. Theoretical chemistry has large overlap with (theoretical and experimental) condensed matter physics and molecular physics. Essentially, from the standpoint of reductionism, theoretical chemistry is just physics, just as fundamental biology is just chemistry and physics. Other fields include Astrochemistry, Atmospheric chemistry, Chemical Engineering, Chemo-informatics, Electrochemistry, Environmental chemistry, Flow chemistry, Geochemistry, Green chemistry, History of chemistry, Materials science, Medicinal chemistry, Molecular Biology, Molecular genetics, Nanotechnology, Organometallic chemistry, Petrochemistry, Pharmacology, Photochemistry, Phytochemistry, Polymer chemistry, Solid-state chemistry, Sonochemistry, Supramolecular chemistry, Surface chemistry, and Thermochemistry. Fundamental concepts The most convenient presentation of the chemical elements is in the periodic table of the chemical elements, which groups elements by atomic number. Due to its ingenious arrangement, groups, or columns, and periods, or rows, of elements in the table either share several chemical properties, or follow a certain trend in characteristics such as atomic radius, electronegativity, electron affinity, etc. Lists of the elements by name, by symbol, and by atomic number are also available. In addition, several isotopes of an element may exist. An ion is a charged species, or an atom or a molecule that has lost or gained one or more electrons. Positively charged cations (e.g. sodium cation Na+) and negatively charged anions (e.g. chloride Cl−) can form neutral salts (e.g. sodium chloride NaCl). Examples of polyatomic ions that do not split up during acid-base reactions are hydroxide (OH−) and phosphate (PO43−). A molecule is the smallest indivisible portion of a pure compound or element that retains a set of unique chemical properties. A chemical substance can be an element, compound or a mixture of compounds, elements or compounds and elements. Most of the matter we encounter in our daily life is one or another kind of mixture, e.g. air, alloys, biomass etc.
Chemical bond

States of matter

Chemical reactions

A chemical reaction is a process that results in the interconversion of chemical substances. Such reactions can result in molecules attaching to each other to form larger molecules, molecules breaking apart to form two or more smaller molecules, or rearrangement of atoms within or across molecules. Chemical reactions usually involve the making or breaking of chemical bonds. For example, substances that react with oxygen to produce other substances are said to undergo oxidation; similarly, a group of substances called acids or alkalis can react with one another to neutralize each other's effect, a phenomenon known as neutralization. Substances can also be dissociated or synthesized from other substances by various chemical processes.

Quantum chemistry

Quantum chemistry mathematically describes the fundamental behavior of matter at the molecular scale. It is, in principle, possible to describe all chemical systems using this theory. In practice, only the simplest chemical systems may realistically be investigated in purely quantum mechanical terms, and approximations must be made for most practical purposes (e.g., Hartree-Fock, post-Hartree-Fock or density functional theory; see computational chemistry for more details). Hence a detailed understanding of quantum mechanics is not necessary for most chemistry, as the important implications of the theory (principally the orbital approximation) can be understood and applied in simpler terms. In quantum mechanics (with several applications in computational chemistry and quantum chemistry), the Hamiltonian, the operator corresponding to the total energy of a particle, can be expressed as the sum of two operators, one corresponding to kinetic energy and the other to potential energy. The Hamiltonian in the Schrödinger wave equation used in quantum chemistry does not contain terms for the spin of the electron. Solutions of the Schrödinger equation for the hydrogen atom give the form of the wave function for atomic orbitals, and the relative energies of, say, the 1s, 2s, 2p, and 3s orbitals. The orbital approximation can be used to understand other atoms, e.g. helium, lithium, and carbon.

Chemical laws

Dalton's law of multiple proportions says that when elements combine to form compounds, they do so in proportions that are small whole numbers (e.g. 1:2 O:H in water); a small numerical illustration is given at the end of this section. In many systems, however (notably biomacromolecules and minerals), the ratios tend to require large numbers, and are frequently represented as a fraction. Such compounds are known as non-stoichiometric compounds.

Interpersonal chemistry

In the fields of sociology, behavioral psychology, and evolutionary psychology, with specific reference to intimate relationships or romantic relationships, interpersonal chemistry is a reaction between two people or the spontaneous reaction of two people to each other, especially a mutual sense of attraction or understanding.[4] In a colloquial sense, it is often intuited that people can have either good chemistry or bad chemistry together. Other related terms are team chemistry, a phrase often used in sports, and business chemistry, as between two companies.[5] Recent developments in neurochemistry have begun to shed light on the nature of the "chemistry of love", in terms of measurable changes in neurotransmitters such as oxytocin, serotonin, and dopamine. The word chemistry comes from the earlier study of alchemy, which is basically the quest to make gold from earthen starting materials.
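Here is the numerical illustration of Dalton's law promised above. The choice of compounds (CO and CO2) is my own example: the masses of oxygen that combine with a fixed mass of carbon in the two compounds stand in a small whole-number ratio.

```python
# Dalton's law of multiple proportions, illustrated with CO and CO2:
# the masses of oxygen combining with a fixed mass of carbon stand
# in a small whole-number ratio.
M_C, M_O = 12.011, 15.999  # standard atomic masses, g/mol

o_per_c_in_CO = (1 * M_O) / (1 * M_C)    # g of O per g of C in CO
o_per_c_in_CO2 = (2 * M_O) / (1 * M_C)   # g of O per g of C in CO2

print(o_per_c_in_CO)                     # ~1.33
print(o_per_c_in_CO2)                    # ~2.66
print(o_per_c_in_CO2 / o_per_c_in_CO)    # -> 2.0, a small whole number
```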
As to the origin of the word "alchemy", the question is a debatable one; it certainly has Greek origins, and some, following E. Wallis Budge, have also asserted Egyptian origins. Alchemy, generally, derives from the old French alkemie and the Arabic al-kimia, "the art of transformation." The Arabs borrowed the word "kimia" from the Greeks when they conquered Alexandria in the year 642 AD.
Friday, October 28, 2005

Science today, gone tomorrow

Christopher Ireland asked me a wonderful question a few days ago: “Of all the facts and principles that science currently believes to be true, which do you think are most likely to be disproved in the next 50-100 years?” There is more up for grabs in the sciences than some people might think. Christopher explains why this matters: “I'm interested because I believe social behaviors are strongly influenced by our collective scientific beliefs. There's a lag time (a long one) while the science filters down to the population, but once it becomes part of people's general sense of reality, it changes their behavior in subtle, but pervasive ways.”

Here are my guesses:

Brain function. There's a lot of work going on here, and it's amazing how little we know. fMRI data is just beginning to be integrated with static imaging, and the scale of analysis is large - patches of brain tissue centimeters across. As the resolution improves, a lot of old ideas will have to be thrown out; the new data may also improve the standard "functional areas" analysis of where and how information processing happens.

Cosmology. The standard model of cosmology is creaking, but nobody really knows what to replace it with. "Dark matter" is a symptom; it's essentially a fudge factor in the model, invented to make the universe expand at observed rates. I suspect we'll also see some change in more mundane fields like stellar and galactic models; they're constantly being stressed by new data. The Big Bang model could be discredited in a lot less than fifty years. Who knows, some people are even muttering that Newtonian mechanics is incorrect.

Climate chemistry. There's so much attention here, and there are so many layers of analysis involved (that is, from molecular chemistry to bulk transport on the order of kilometers), that I expect "facts" about greenhouse mechanisms to be restated. It wouldn't surprise me at all if other chemicals beyond CO2 and methane turn out to be critical in global warming. I'm not saying that global warming will be found to be incorrect, just that the mechanisms we now assume to be true could well be wrong.

Geology. Plate tectonics has stood the test of time, but increasingly fine-grained new data could undermine the heuristics that are used to explain earthquakes. We know so little about the dynamics of the mantle, let alone the core, that we might have a very different view of crust activity in a hundred years.

Materials science. This is another multi-scale field, like climate. There is a lot that's not understood between the Angstrom scale of atoms, the nanoscale of new materials, and bulk behavior. It's not been a fashionable field, but it's quite possible that some basic rules of thumb, e.g. in tribology, will come to be rewritten.

Biological classification. We know almost nothing about bacteria, relative to their importance. For example, the human gut houses 10 to 100 trillion microbes from 500 to 1000 species - more than 10 times the number of cells that make up the human body. The current three-domain classification of life (eukaryotes, bacteria and archaea) could well turn out to be wrong.

S., who’s a lapsed physicist just like me, adds these thoughts:

Nutrition - or, more generally, "how to be a fit person". The constant discovery of new trace substances (aspirin, omega-3s, etc.) that you need to be truly healthy suggests the kind of explosion of epicycles that precedes a paradigm shift.
I anticipate the discovery that there are multiple different models of a healthy lifestyle, and that "eat fruit and vegetables and lean meat, drink exactly 4oz of red wine a day, and exercise 90 minutes a day" is only one of these. Your model may be a matter of choice, or may be ultimately constrained by, say, gut bacteria, level of social interaction, mitochondrial DNA, the level of something in the womb, the aspect of Saturn at your time of conception...

Quantum mechanics - your basic Schrödinger equation. This is a bit of a cheat on my part, because nobody understands it. However, there's a swirl of ideas around the arrow of time, the classical limit of quantum mechanics, and Bell's inequality crying out for a major advance. Were I cleverer, this is what I would be working on.

1 comment:

Marcelo Calbucci said...

I'd add one: Genetics. We've just started with DNA, and we still have hundreds or thousands of proteins to map. We know so little.
Tuesday, 15 November 2016

realQM vs Hartree-Fock and DFT

I have put up an updated version of realQM (real Quantum Mechanics), to be compared with stdQM (standard QM). stdQM is based on a linear Schrödinger equation in a $3N$-dimensional wave function with global support for an atom with $N$ electrons, which is made computable in Hartree-Fock and Density Functional Theory (DFT) approximations reducing the dimensionality to basically 3d. realQM is based on a system of non-linear Schrödinger equations in $N$ 3d electron wave functions with local disjoint supports, which is computable without approximation. Evidence that realQM describes real physics is given.

Wednesday, 9 November 2016

Trump: End of Global Warming Alarmism

The new president of the US, Donald Trump, expressed a clear standpoint against global warming alarmism during the presidential race:

• Any and all weather events are used by the GLOBAL WARMING HOAXSTERS to justify higher taxes to save our planet! They don't believe it is $$$$!

• It’s snowing & freezing in NYC. What the hell ever happened to global warming?

Trump says that he will end all federal clean energy development, all research on solar, wind, efficiency, batteries, clean cars, and climate science:

• I will also cancel all wasteful climate change spending from Obama-Clinton, including all global warming payments to the United Nations. These steps will save $100 billion over 8 years, and this money will be used to help rebuild the vital infrastructure, including water systems, in America’s inner cities.

This is hopeful for the world and for science. It says that you cannot fool all the people all the time, in a democracy with free debate and science. This is the beginning of the end of global warming alarmism, including its most aggressive form led by Sweden and Germany. The weather is now celebrating Trump's victory with heavy snowfall over Stockholm...

PS Trump picks top climate skeptic to lead EPA transition:

• Choosing Myron Ebell means Trump plans to drastically reshape climate policies.

• Ebell’s views appear to square with Trump’s when it comes to EPA’s agenda. Trump has called global warming “bullshit” and he has said he would “cancel” the Paris global warming accord and roll back President Obama’s executive actions on climate change (ClimateWire, May 27).

Finally, reason is taking over...

Sunday, 6 November 2016

Why are Scientists Openly Supporting Hillary?

Physicists and mathematicians such as Peter Woit, Leonard Susskind and Terence Tao have come out as strong supporters of Hillary in the presidential race, and then of course as strong opponents of Trump. This is unusual, because scientists seldom (openly) take on political missions. Why is that? Isn't science beyond politics? No, not in our time, and in particular not climate science, which has become 100% politics. Climate scientists don't like Trump, because he says that climate science is 100% politics and not science. Is it the same thing with physics and math? Are a pure mathematician like Tao and a string theorist like Susskind fearing that a questioning non-opportunist Trump would be more difficult to deal with than an opportunist Hillary representing the (scientific) establishment? What if Trump were to question the value of string theory, as he did with climate science?

Saturday, 5 November 2016

Weinberg: Why Quantum Mechanics Needs an Overhaul!
My new book Real Quantum Mechanics seems to fill a need: Nobel Laureate in Physics Steven Weinberg believes that quantum mechanics needs an overhaul, because current debates suggest the need for a new approach to comprehend reality:

• I’m not as happy about quantum mechanics as I used to be, and not as dismissive of its critics.

• It’s a bad sign in particular that those physicists who are happy about quantum mechanics, and see nothing wrong with it, don’t agree with each other about what it means.

I hope this can motivate you to check out the new approach to quantum reality presented in the book, which addresses many of the issues raised by Weinberg. Weinberg takes the first step to progress by admitting that quantum mechanics in its present form cannot be the answer to the physics of atoms and molecules. Of course the witness by Weinberg is not well received by ardent believers in a quantum mechanics once and for all set in stone by Heisenberg and Born, such as Lubos Motl. But it may be that questioning a theory, in particular a theory supposedly embraced by all the educated, shows more brains and knowledge than simply swallowing it without any question.

PS1 I put up a comment on Lubos' Reference Frame, but the discussion was quickly cut by Lubos, as usual... any questioning of the dogma of Heisenberg-Bohr-Born is impossible for Lubos, but that is not in the spirit of real science and physics...

PS2 Here is my closing comment, which will be censored by Lubos: It is natural to draw a parallel between Lubos' defence of the establishment of QM and the defence of the Clinton establishment by Woit, Tao, Susskind et al. (rightly questioned by Lubos), in both cases a defence with the objective to close the discussion and pretend that everything is perfectly normal. Right, Lubos?

PS3 Here is a link to Weinberg's talk.
[#10] The Nature of Reality: Matter, Experience, Appearance, Presence

A few days ago I gave a short talk, followed by a much longer discussion, in the Consciousness Club series at YHouse. The topic was the nature of reality, with the full title "Matter, Experience, and Reality". Actually, I wound up talking about two more aspects of reality that we consciously partake in: what I like to call appearance and presence. By Piet Hut

[#9] Mind and Magnetic Monopoles: Matter, Mind and Magic

Breakthroughs in science are often triggered by a realization that one or more of our assumptions were wrong. In the rapid growth of experimental and theoretical insights in neuroscience, which of the underlying assumptions could be candidates for revision, potentially inducing a big shift in understanding? By Piet Hut

[#8] Universal Biology: the Next Frontier in Science

I gave a talk at a workshop on the topic of Universal Biology, a little over a week ago, at the Earth-Life Science Institute (ELSI) in Tokyo. The title of my talk was From Universal Biology to Universal Science, in which I argued that the structure of science can be understood as a generalization of a generalization of biology. By Piet Hut

[#7] Is Science Really Empirical?

The International Society for Theoretical Psychology holds its main conferences every two years in a different country. This year the choice was Japan, and last week its members came together in Rikkyo University, one of the most prominent private universities in Tokyo. By Piet Hut

[#6] Open Source Revolutions, in the 17th and 20th Centuries

I was an enthusiastic proponent of the Open Source Movement, almost from the day that I encountered the Unix programming environment. I very quickly realized how powerful it was to write software that is very modular, and where the modules could be shared freely among anyone on the planet. By Piet Hut

[#5] The Secret of Life: Reliable Systems from Unreliable Parts

The question of the origin of life on Earth, and possibly elsewhere in the Universe, is a fascinating topic of study. In order to ask how life first appeared in a non-living environment, obviously it would help to know what exactly life is, and what sets it apart from non-living forms of matter and energy. By Piet Hut

[#3] The Mind-World Circle: Two Aspects of Reality

A couple weeks ago, I identified the origins of life with the first information revolution on Earth. But if life had never got beyond the state it was in for the first three billion years of its existence, it would have been a very quiet revolution indeed: there would have been nobody to notice it, and nobody to celebrate it. At that time, well before the Cambrian explosion of diversity, complex multicellular organisms had not yet appeared... By Piet Hut

[#0] The "Y" in YHouse

The "Y" in front of "House," read as "why," was literally spelling out the way we were willing to question anything and everything — as happens in scientific research, as well as in any inquiry into the underlying tacit assumptions that we use to ground our lives, often without being aware of doing so. By Piet Hut

Synopsis of May 4th YHouse Lunch Meeting at IAS

On Thursdays at noon YHouse holds a lunch meeting at the Institute for Advanced Study, in Princeton.
The format is a 15 minute informal talk by a speaker followed by a longer open-ended discussion among the participants, triggered by, but not necessarily confined to, the topic of the talk. Michael Solomon posts a synopsis of the weekly meeting.

Synopsis of Andreas Losch’s April 27 YHouse Lunch IAS Talk

On Thursdays at noon YHouse holds a lunch meeting at the Institute for Advanced Study, in Princeton. The format is a 15 minute informal talk by a speaker followed by a longer open-ended discussion among the participants, triggered by, but not necessarily confined to, the topic of the talk. In order to share, I am posting a synopsis of the weekly meetings.

April 27, 2017 Synopsis of Andreas Losch’s YHouse Lunch Talk

Speaker: Andreas Losch (University of Bern)

Title: Top-Down Causation

Abstract: (By the speaker) “Can the human mind actually control the body? This seems to be an experience we all have every second. Yet doesn’t it imply the existence of a free will to perform its decisions with bodily means? How could this scientifically be imagined, the causal nexus of the physical world provided? Amongst others, Karl Popper and John Eccles – drawing on the ideas of Michael Polanyi and Donald T. Campbell – propagate the idea of a “top-down causation“ as a potential answer. I will discuss the origins, extent and shortcomings of this idea.”

Present: Andreas Losch, Bob McClennan, George Musser, Naoki Yajima, David Fergusson, Nicolaas Rupke, Ed Turner, Michael Solomon

Andreas opened with the basic definition of top-down causation: “Can the whole, the macrostructure, act back on its parts?” The question of top-down causation has been considered by many thinkers. George Ellis has argued that it exists everywhere. Others question whether it exists at all. Andreas noted that the views of Karl Popper resonate with his own ideas. He said Popper believed in the Kantian vision that Man is not just a machine but is an end-in-itself. Andreas referred to the Is/Ought distinction often associated with David Hume – “Is” relates to a description of the world, “Ought” involves a goal or purpose. Top-down causation, in Popper’s interpretation of it, is an attempt to relate Is and Ought, contrary to Hume’s approach. The question of top-down causality is related to the question “How can the mind exist in a physical world?”

Andreas described Popper’s ontology of three worlds, in contrast to the dualistic Cartesian ontology. The first is the physical world. The second is the subjective mental world. The third is the world of combined human understanding, based on both the other worlds. All individual human understanding, the subjective interpretation of reality, is derived from the physical world and from the combined totality of human understanding up to this time. In the physical world, it is thought you can reduce everything to causes at a lower level. But that does not allow for top-down causation. He described crystal growth as an example. A crystal forms because the molecules composing the crystal follow certain rules. On the other hand, machines are formed as tools and work only if they are designed properly.

Andreas cited Michael Polanyi’s very influential 1968 paper in Science, “Life’s Irreducible Structure”. In this anti-reductionist paper Polanyi argues that the structure of DNA requires the physical laws of chemistry but is itself a higher ordering principle.
He described the process of emergence, by which a higher level such as consciousness is dependent on but distinct from the lower levels of physics, chemistry, and bodily functions. In the same way, the constituents of a machine are harnessed by the design. Andreas related how he had once asked Michael Polanyi’s son John (himself a Nobel laureate) what he thought of his father’s ideas, and was told that since John grew up with those ideas he accepted them as correct.

Andreas spoke of Donald T. Campbell, a reductionist, who attempted to reinterpret Polanyi’s views and incorporate top-down causation. Campbell spoke of “Blind Variation and Selective Retention” to describe not only biologic, but all evolving systems. He described four principles: 1) Higher levels are restricted by lower levels. 2) Higher levels are required to make things work. 3) Emergence. 4) Top-down causation. For example, we can look at the emergence of jaws in termites. The worker termite’s jaw works on the physical principle of the lever, which could already be regarded as a top-down concept. Even more, soldier termites’ jaws are so large that the soldiers cannot feed themselves and must be fed by other termites. So the emergence of the soldier termite’s jaw depends on the larger colony’s organization (top-down).

Andreas included consideration of the shortcomings of top-down causation. John Eccles, Nobel-winning neurophysiologist and philosopher, in a 1951 Nature paper, “Hypotheses relating to the brain-mind problem”, held a strongly dualistic and Cartesian approach, but suggested that hidden dendrites or brain modules might operate in a top-down fashion. Andreas noted that there are some who discount the process of emergence. Is nature really structured in levels? He cited that physical interactions depend on fundamental indeterminism in the physical universe that is only probabilistic. Popper sees similarities in the way higher-level selective pressures are imposed on lower-level genetic mutations that are probabilistic. There is also the problem of thermodynamics: must thermodynamic laws hold within any framework that includes a nonmaterial mind? The brain processes produce and use up heat and energy. John Polkinghorne’s concept of “Active Information” attempts to avoid that problem on the quantum level, involving the trajectories of chaotic attractors in chaotic systems. These systems, however, are considered deterministic, but are very dependent on initial conditions. So, if you consider the quantum level, non-locality would require the whole universe to determine any action of a chaotic system, and you could interpret this with Polkinghorne as actual indeterminism. Andreas stopped here, noting that he had not resolved the issue, but hoped to stimulate discussion.

Q: David asked if Popper had revised his views in the 10 years prior to his death in 1994?

A: Andreas was not aware of any changes.

Q: Ed asked, is there room for things outside the physical laws? There is the possibility that laws are broken at times. This might lead to revising the laws, but might also occur with divine intervention. The appeal to the quantum universe seems obvious, but Andreas dismissed that. Why did he dismiss that? Ed could imagine that you could influence the physical world and still maintain a quantum universe. For example, when rolling dice, you must get certain probabilistic total outcomes, but maybe you could control the sequence and outcomes of each individual roll of the dice.
A: Andreas asked if that was like providence (last week’s talk)?

A: George said you could be a random number generator, but you would still walk out of the casino rich, and that is an unlikely event. Ed responded that you could control individual events in ways that don’t violate the laws of probability. Ed further said, my mind can make me raise my hand and arm, but must do it by mechanisms of electro-chemical events. It would be possible to manipulate quantum events in the world without violating quantum rules for statistical outcomes. George referred to a Bohmian process: you cannot violate the law, but the outcome is extremely improbable. (I believe he was referring to the de Broglie–Bohm theory, which says that in addition to the wave function of all possible configurations described by the Schrödinger equation, there exists an actual configuration even when unobserved. That configuration is determined by the initial conditions and system boundaries and may depend on the entire rest of the universe.) Ed replied, if you shuffle a deck of cards, all outcomes are equally probable. Could you control the shuffle in a game of Bridge to prevent (or allow) anyone getting all of one suit? (That would allow a mechanism for “divine” intervention within the framework of a universe bound by physical quantum laws.) Andreas said he liked the idea that actions in the environment feed back on the world.

Q: Nicolaas returned to crystals. The properties of the constituents predetermine the structure of the crystal. Why should that not apply to elements? The periodic table predicts what the next element will be and what properties it will have, based on the rules governing the addition of subatomic particles to the atoms. This may be considered a top-down process as well.

A: Andreas said Popper agrees and talks about the giraffe’s neck using the word “purpose”. Nicolaas noted that the enormous other aspects of anatomy and physiology necessary for the giraffe’s neck are not explained by selection, and Darwin recognized that.

Q: Andreas returned to the question of whether there are higher levels that affect lower levels?

Q: Naoki, a scholar of David Hume, contributed that Hume says the meaning of causation requires clarification. Hume questions whether causation is real or is just inferred by us. You must define causation to determine if you are justified in attributing causation.

A: Andreas said he believes in causality but knows that belief is a leap of faith. It works to believe this, though. He assumes that Hume did not know about the concept of emergence. Nicolaas mentioned that Blumenbach (Johann Friedrich Blumenbach, 1752–1840, an early exponent of comparative anatomy and of racial distinctions) recognized emergence in his writings. In the history of science, ideas such as phase transitions were included to explain what appeared to be supernatural. If you put iron particles together you magically get a magnetic field.

Q: Andreas believes there are two options: 1) Keep up the Is/Ought divide, in which case we assume our belief that our mental activity affects the world is an illusion. 2) Relate Is and Ought, but then we need some concept like top-down causation. David offered that the critical issue is determinism. To say the mind can affect the world requires an indeterminate world. But you could still have top-down causation within the confines of a physical deterministic world. One example is chaotic systems, which appear random but are deterministic.
Q: Ed added that there might be an appeal in chaos theory to “strange attractors”, i.e. a state to which systems appear to be drawn. In chaotic systems some outcomes are favored and others are not.

Q: David asked, do we see human action as top-down, and is that then a model for divine action?

Q: George asked David, if a physical system is deterministic, can it still be top-down?

Q: Michael suggested that it appears noncontroversial that I can will my arm to lift, and the mechanism involves physical muscles and nerves. It also seems clear that my mental state can affect my physical state: if I think of something that makes me anxious, my heart rate increases and my breathing changes. There are innumerable examples of feedback mechanisms in physiology to regulate bodily functions, and many of these involve mental activity. All physicians acknowledge that our mental state affects our physical state and ability to recover from injury. Isn’t this reciprocity both bottom-up and top-down?

Q: George noted that we have never defined top-down.

A: Ed said we usually think of bottom-up. Everything has a cause. The event we call World War II can be described starting with string theory and molecules, and working up to societal and nation-state events. But we still end up at WW II.

Nicolaas: The question is whether we must distinguish physical from nonphysical things.

Ed: Current natural science may not recognize any nonphysical things, although maybe information is nonphysical.

Bob asked, what is the “something else” that is going on? Attitudes, beliefs, the weather?

Ed: We don’t know, but could imagine a world like that – a Harry Potter world where my saying the correct spell results in an action.

Michael: Isn’t mathematics a nonphysical entity that is deterministic in the physical world? And isn’t mathematics the foundation for all our explanations of the physical world?

At this time we stopped our discussion, thanked Andreas for his stimulating presentation, returned our trays, and some of us continued the discussion over coffee.

During that continuation Naoki related that Descartes and Leibniz had different visions of truth. Descartes denied one eternal truth, saying that God can create eternal truths whenever God wants. Leibniz argued there is only one truth, and that is God’s perfect truth. These views represent two demonstrations of God’s existence: 1) the Ontological demonstration – God must be perfect to create an imperfect world and people; and 2) the Cosmological demonstration – based on causal relationships, God is the first cause. The concept of God then depends on our empirical interpretation.

Bob cited the writing of Plotinus, a third-century Greek philosopher. He described the One as transcendent and indivisible, beyond being and non-being. The See-er is not separate from the Seen. This One is not a deity and is not involved in creation. He then describes mathematics not as an existing non-corporeal entity, but as a language. The number one is a description of something in the world. And adding one plus one is not the same as the number two. Two is a new entity. In response to my suggesting that mathematics is a nonphysical entity that serves as the foundation for all our science, Bob felt that math is merely a language that allows people who speak many other languages to communicate. In this regard, the laws of mathematics may not be constant. The rules that apply to Euclidean geometry do not apply on the surface of a sphere, for example.
(My addition later, from the Stanford Encyclopedia of Philosophy under Karl Popper, philosophy of arithmetic: “Popper’s principle of falsifiability runs into prima facie difficulties when the epistemological status of mathematics is considered. It is difficult to conceive how simple statements of arithmetic, such as “2 + 2 = 4”, could ever be shown to be false. If they are not open to falsification they can not be scientific. If they are not scientific, it needs to be explained how they can be informative about real world objects and events. Popper’s solution[36] was an original contribution in the philosophy of mathematics. His idea was that a number statement such as “2 apples + 2 apples = 4 apples” can be taken in two senses. In one sense it is irrefutable and logically true, in the second sense it is factually true and falsifiable. Concisely, the pure mathematics “2 + 2 = 4” is always true, but, when the formula is applied to real world apples, it is open to falsification.”)

Ed recommended that we read Piet’s 2005 paper entitled “Mathematics, Matter, and Mind”, in which Piet, along with Mark Alford and Max Tegmark, debates interpretations of a triangle comprised of these elements. (I have subsequently read this and suggest the paper be the topic of a future lunch meeting.)

Ed asked David, what is the mechanism of Providence? That was not covered in last week’s presentation. David answered that the theology of providence bumps up against two problems, evil and divine action. We spoke of evil but not of divine action. If you had an account of how God created the world, then you have constraints inherent in that act of creation. But if you think of God continuing to interact, then you do need top-down intervention.

Michael J. Solomon, MD

Synopsis of David Fergusson’s YHouse Lunch Talk at IAS, 3/29/17

Speaker: David Fergusson (New College in the University of Edinburgh)

Title: Psychotherapy and the self in wartime Edinburgh

Abstract: (By the speaker) “My project explores developments in psychotherapy with shell-shocked soldiers at Craiglockhart War Hospital (1916-19) in Edinburgh. Through Pat Barker’s 1991 novel (and the film) Regeneration, the story of Siegfried Sassoon and his encounters with Rivers, his psychiatrist, and Owen, a younger war poet, is well known. But the work of other key practitioners and the wider significance of their experimental therapeutic methods deserves further attention during the centenary commemorations.”

Present: David Fergusson, Paul Mundey, Sean Sakamoto, Susan Schneider, Piet Hut, Ayako Fuqui, Andreas Losch, Nicolaas Rupke, Bob MacLennan, Naoki Yajima, Monica Manolescu, Michael Solomon

David has been involved in an interdisciplinary project in Edinburgh involving psychotherapists, historians, physicians, theologians, and others. The project takes a holistic approach, looking at the whole person in the history of psychotherapy, at the influence of World War I on this development, and specifically at WW I shell shock treatment in Edinburgh. The project has been influenced by the work of John Macmurray, a prominent Scottish philosopher and theologian whose own thinking was influenced by his participation in WW I, and who has been compared with Martin Buber.

The activity at Craiglockhart War Hospital in Edinburgh between 1916 and 1919 is the focus of this talk. The term shell shock was introduced by military psychiatrist C.S. Myers in The Lancet in 1915 to describe combat stress disorders.
Later psychiatrists eschewed the term, since the causes and symptoms of combat stress were so varied, but the term is still used in popular discourse. Symptoms include both external manifestations such as blindness, deafness, loss of speech, twitching, tremor, stuttering, etc., and internal manifestations such as memory loss, insomnia, nightmares, etc.

The senior officers serving at Craiglockhart War Hospital included three psychiatrists, Dr. Arthur Brock, Dr. William H.R. Rivers, and Dr. Arthur Ruggles. Two of the patients treated there for shell shock were Wilfred Owen and Siegfried Sassoon, both of whom became noted poets. The 1991 novel “Regeneration” by Pat Barker, subsequently made into a movie, explores the relation between Sassoon and his psychiatrist Rivers. Dr. Brock practiced ergotherapy, which involved keeping patients busy and occupied and reintegrating them into the community. Rivers was influenced by Freud. Instead of drugs or hypnosis, he developed a form of the ‘talking cure’ which helped the patient to deal with painful and suppressed memories.

Sassoon was older than Owen, and the two came from different social backgrounds: Sassoon’s family was wealthy, while Owen was from a lower financial and social class. Sassoon encouraged the younger man to write as part of his therapy. Owen became the editor of the Hydra, a hospital journal named both for the fact that the hospital building had formerly been a spa (hydro) hotel, and after the many-headed monster that Hercules defeated as one of his tasks. Owen was returned to the fighting in France after leaving the hospital and died one week before the armistice. Sassoon had grown disillusioned by the war and became a conscientious objector. Having already been decorated for bravery, he was hospitalized rather than imprisoned. Sassoon also returned to the lines, but survived the war.

Sassoon’s poem “Dreamers”, in the Hydra, reads:

And in the ruined trenches, lashed with rain,
Dreaming of things they did with balls and bats,
And mocked by hopeless longing to regain
Bank-holidays, and picture shows, and spats,
And going to the office in the train.

Owen’s poem “Anthem for Doomed Youth” was also composed at Craiglockhart:

What passing-bells for these who die as cattle?
Only the monstrous anger of the guns.
Only the stuttering rifles’ rapid rattle
Can patter out their hasty orisons.
No mockeries now for them; no prayers nor bells,
Nor any voice of mourning save the choirs, –
The shrill, demented choirs of wailing shells;
And bugles calling for them from sad shires.

David described the outcomes at the hospital. 1736 patients were treated. 44% returned to the front lines. The rest were declared unfit for service at the front, though many were able to fill non-combatant positions. The Tavistock Clinic and Butler Hospital (in Rhode Island) were founded and run by Dr. Hugh Crichton-Miller and Dr. Arthur Ruggles respectively; both had learned lessons from treating shell-shocked patients, Ruggles in Edinburgh. The hospital today is the main campus of Edinburgh Napier University.

This historical project raises several issues for further investigation by the project team. Under intolerable conditions, strong, healthy and courageous young men experienced severe psychological stress resulting in this disorder. The stigma still exists. Many such patients are considered, or consider themselves, weak. The treatment requires time and resources that are not always made available.
Some in David’s group view this early psychotherapeutic work as offering alternative and more holistic approaches to counteract the over-medicalisation of sadness and grief.

Discussion began with:

Q: Monica pointed out the distinction between the terms Post Traumatic Stress Disorder, which we use today, and shell shock. Those with PTSD are recognized as victims, and these patients are not returned to combat.

A: One manifestation of what was shell shock would be considered PTSD now. But shell shock included other disorders also. We lack statistics on relapse of those sent back to combat. There are references to combat stress disorders in ancient literature. The American Civil War was the first industrial war with conscription and more massive destruction, and there was a much higher incidence of shell shock than was described previously for hardened professional soldiers in past wars.

Q: Susan had a grad student, now teaching at West Point, whose thesis was on PTSD. She said propranolol is effective if given before, or even a few hours after, the trauma and helps them to forget. She mentioned the movie “Eternal Sunshine of the Spotless Mind”. She reported the military is working on implantable chips to block the PTSD response. Her student was concerned that these treatments with medications causing amnesia did not honor the memory of those who had died in combat.

Michael said that propranolol is not an amnesic medication but is a beta-adrenergic blocker. That is, it slows the heart rate and prevents the physiologic response. His understanding of PTSD is that any memory of the traumatic event results in an overwhelming repetition of the physiologic fight-or-flight reaction, appropriate and necessary to survive the stress at the time, but maladaptive when one cannot forget or ignore the horror of the event later. The palpitations, diaphoresis, dry mouth and wide-open pupils, and inability to concentrate or to suppress the memories are body responses to the psychological stimuli, and these are what propranolol blocks. Susan believed that there was also a loss of memory, and this had become a concern, for example, for rape victims providing testimony in court.

Q: Nicolaas asked, how did the Germans or the French treat shell shock, in view of the tremendous nationalism inherent in their societies at the time?

A: David answered that similar treatments were offered. He noted Freud’s influence. He reported the Americans were better prepared for this when they entered the war, due to the experience of the Europeans.

Q: Piet asked, what about the 1870 war between Germany and France?

A: The treatment was similar then, as well as in the Boer War and in the 1904-05 Russo-Japanese War. The problem was also seen after high-speed railroad accidents, when for the first time in history devastation and trauma on that scale was seen. The disorder was called Weak Heart Condition or Spinal Condition, based on then-current thinking, but was not referred to as trauma. The disorder was also linked to hysteria, and men thought they were behaving like women and had lost their manhood.

Q: Nicolaas returned to social acceptability and noted that there was so much nationalism that the social reaction to soldiers showing their “weakness” may have been problematic.

A: There was fear that shell shock was contagious, and this contributed to the perceived need to return patients to combat. The military saw patients without physical wounds as shirkers.
Dr. Rivers, who did not agree, left under a cloud before the end of the war, because the administrators thought the doctors were pandering to weakness.

Q: Bob asked, why would we want to send these people back to combat? He referred to a talk he had attended about the Vietnam War that questioned the use of conscription.

A: David reported that all of society was invested in this course. During the war they enlisted soldiers from mental hospitals, so it is no wonder that many couldn’t cope. The soldiers felt they were fighting for their families, their religion, and their way of life.

Q: Piet asked if the main issue was conscription or the industrialization of the fighting?

A: Conscription was introduced in England in 1917. It had been used previously in the American Civil War. David referred to the press gangs: sailors would get men drunk, or would knock them out, and then drag them to the boats and sail off, in order to maintain the navy. Bob noted that in ancient Rome families were encouraged to have babies to provide soldiers.

Q: Sean shared that he had served as a medic. His mission was to “preserve the fighting forces”, not to help people. In the military context, caring about the soldiers was not significant. He noted that in the Napoleonic wars 70% of the casualties were from disease and not from combat. Sanitation in the camps preserved more lives than any other intervention. Sean also noted that talk treatments were not very effective. When talk therapy was tried after the genocide in Rwanda, many did not want to talk but only to forget and go outside. Piet wondered if the belief in spirits influenced these people and led them to avoid dark rooms.

Paul agreed; his father, a WW II veteran who had landed at Normandy, never talked about it. Nicolaas stated that the cultural location of these things is important. In the Netherlands after WW II people would talk about the German concentration camps, but never about the Japanese. Piet agreed and recalled seeing men in the Netherlands who had been 15 to 18 years old during the German occupation and had been sent to occupy Indonesia, where they had committed atrocities. These men sat alone in bars and would never talk about their experiences. Some Vietnam veterans had similar reactions. Sean suggested that PTSD may occur when people cross their moral boundaries. Paul referred to Drew Faust’s book “This Republic of Suffering: Death and the American Civil War”. Piet suggested that the development of photography in the Civil War contributed to the change from romanticizing warfare to showing honestly the horrors.

David emphasized the importance of talk therapy and the importance of professional relationships in healing: “It’s the love of the physician that heals the patient.” (Sandor Ferenczi) Treatment probably became much less effective when the patients were returned to the line to fight. Bob referred to Chekhov’s short story “Ward Six”, in which a young doctor is changed by a friendship with a patient. He wondered whether the psychiatrists had transforming experiences. Paul referred to McNamara’s memoirs of his own role in the Vietnam War and his need to lament that role. David reported that around 300 British soldiers were court-martialed and shot for refusing orders. Many were likely suffering from shell shock. Under the circumstances, it seems astonishing that mutiny was not more widespread.
Piet recalled the story of soldiers from opposing sides playing a soccer game during a break in the fighting on Christmas on the front lines in WW I. David recounted that at the battle of Princeton, when the fighting stopped, the two sides helped each other to care for the dead and wounded remaining on the battlefield. Paul lives near Gettysburg and recalled stories of combatants needing to re-engage later in life. Michael tried to relate our discussion to the perspective of consciousness. He believes that the interaction of the physical and the mental is what PTSD is about: re-experiencing the physiologic responses of panic to the intolerably traumatic mental circumstances that initially triggered that response.

At this time, we ended our discussion. David’s talk was surprisingly not primarily theological, but was a valuable addition to our series of lunch meetings.

Michael Solomon, MD

Synopsis of 3/23/17 YHouse Lunch Talk at IAS

Speaker: Olaf Witkowski, Earth-Life Science Institute, Tokyo Institute of Technology

Title: Characterizing Cognition as Information Flows

Abstract: (By the speaker) “Information’s substrate-independence and interoperability property makes possible symbolic representations such as the genetic code, base upon which life was able to develop, eventually leading to human societies’ complex cognitive capabilities, such as language, science and technology. In this talk, Dr. Witkowski will argue cognition to be the informational software to life’s physical hardware. If life can be formulated computationally to be the search for sources of free energy in an environment in order to maintain its own existence, then cognition is better understood as finding efficient encodings and algorithms to make this search probable to succeed. Cognition then becomes the “abstract computation of life”, with the purpose to make the unlikely likely for the sake of survival. We will show that it can be quantified by well known as well as new computational tools at the intersection of artificial life, information theory and machine learning.”

Present: Olaf Witkowski, Piet Hut, Ed Turner, Brian Cantwell Smith, Ayako Fukui, Monica Manolescu, Yuko Ishihara, Nicolaas Rupke, Sean Sakamoto, Susan Schneider, David Fergusson, Erik Persson, Naoki Yajima, Liza Solomonova, Roberto Tottoli, Andreas Losch, Giuliano Mori, Fabien Montcher and Michael Solomon (by speaker phone).

Olaf opened saying he would be talking about cognition from the angle of information flows. His interest began with language: “How do you connect two minds?” and also with his background in computer science. Information theory began with the work of Claude Shannon in the 1940s (working at Bell Labs, at the Institute for Advanced Study, and at MIT). Olaf defined information as follows: “When you look at something (box A) and at something else (box B), information is how much box A allows you to predict something about box B.” (A toy numerical version of this idea is sketched at the end of this synopsis.) Olaf said entropy does not measure information. (In his work on transmitting and compressing data, Shannon described Shannon entropy, a measure of communicated information based on the number of possible states the data could take.) Information has grounding - mutually shared experience and assumptions: “the difference that makes a difference”. Information has meaning and can be used to do something. For example, in biology, if you are a gazelle on the savannah and I give you ten bits about the temperature on Mars, you don’t care.
But if I give you ten bits about the lion behind you, that is relevant and has meaning. Information has value and can be quantified. In his work on computer simulations of life, and sometimes on chemistry simulations, Olaf can define boxes and track how information is transmitted over time. He will argue that in biology it is important to relate this information to survival.

In Tokyo, Olaf looks at the origin of life and of cognition. It is difficult to define life, but even more difficult to define cognition (see the speaker’s abstract above). One aspect of life is self-reproduction. In computer science, a quine is a program whose execution produces its own code as output. There is both a code to function and a code to replicate in DNA. Some quines can also correct errors in their own replication. Memes are analogous to genes in that memes carry cultural ideas that can be transmitted. In biology, elements of culture are replicated with corrections. Even when oral traditions are transmitted from generation to generation, the value of the information is preserved. What is saved is not just information but relevant information.

So, when did life become cognitive? Life began close to 4 billion years ago. Some believe cognition appeared when reflexivity appeared - when two systems mutually affect each other - allowing the capacity to respond to environmental factors. Information coded in cells for mechanisms for energy production, for cellular functions, and for reproduction has persisted from the first cells to the present. Olaf sees not a single emergence but many emergences of cognition. Looking at biology in computational terms, one can identify transitions from chemistry, to single cells with the ability to move in response to signals, to multicellular organisms where intercellular communication provides value for the group. Computational bio-modeling allows you to track how much one thing affects another in the system and identify behavioral clusters. Information in biology has been understood in this way. We can identify algorithms, some of which we internalize, allowing some information to disappear and preserving other information. Cybernetics shows us how to extract this data from systems to predict what will happen. This can be seen from the viewpoint of philosophy as well. At present Olaf is focusing on reflexivity to understand cognition, using computational bio-modeling.

Discussion began with:

Q: A question of whether you can define life as information.

A: Olaf replied there is a danger in definitions. You might define life as chemistry that reproduces itself. But he is offering information theory as a tool. He emphasized the important point that information is substrate independent.

Q: When asked if DNA replication is the key?

A: Olaf answered, no. Robustness of the system is the key. (Robust systems “maintain their state and functions against external and internal perturbations”, which is essential for a biological system to survive.) Mutualism is also key: can systems help one another by transmitting information?

Q: A reference was made to Hume and information transmission.

A: Olaf looks at emergence through random changes. Shake the box and see what happens.

Q: How do you determine what information is significant for prediction?

A: Information can be measured and valued. For example, there are common threads in diverse religious traditions. Valuable information leads to self-preservation.
Q: With regard to the lion behind the gazelle, if information is about something, how can you quantitate the information?

A: Information itself is not relevant but must be relevant for something. You can see how knowing about the lion leads to survival and preservation of the pattern.

Q: Ed pointed out that information is observer dependent. It is hard to find mathematical laws that are observer independent. So the value of information changes.

A: This is a good point. Subjectivity is observer dependent, and information theory accounts for this.

Q: Are predictions good because they are true or because they are probable?

A: Good, because they predicted what has happened already. Olaf can rewind and replay the tape and get different outcomes. You have already seen the outcomes when you quantitate. This is not the same as machine learning, where you look at probability.

Q: Shannon’s information theory is objective. If the entropy of a data set is twelve bits, you can compute it with twelve bits.

A: If you arrange the boxes in your system in some way, you can manipulate the system objectively. The system is finite and deterministic. The “entropy” is the number of states the system can be in, so every state is limited and can be measured as bearing an amount of information.

Q: Susan asked, “Are you working within Shannon’s framework or some other theory?”

A: Olaf uses Shannon’s, but Shannon is not always clear, so he extends Shannon’s ideas, such as entropy, to add robustness.

Q: Ed asked, “There are simple life forms that have sensation but no representation that could be considered cognition. So, is cognition purposeful?”

A: Olaf looks at the role of cognition in life. Specifically, he looks at the maximum rate at which molecular machines can process information to use the information for self-preservation.

Q: Regarding “the difference that makes a difference”, how do you define how information becomes useful? Differences cannot be determined in advance.

A: Olaf agreed.

Q: Michael suggested that it can be dangerous to assign purpose to discussions of evolution. The mechanisms of evolution use random changes in individuals within a specific environmental niche to select for preservation of the gene pools of populations. Purpose is not necessarily inherent.

A: Aboutness, rather than purpose, is better. Aboutness refers to efficiency, and there is a difference in the meaning of purpose.

Q: Are you measuring the amount of information or the content of the information?

A: Olaf agrees you must consider both the amount of information and the quality relevant for a specific meaning, i.e. viability.

Q: But content may not be measurable objectively.

A: That is an interesting thought, but Olaf thinks you may be able to measure content on a meta-level. This is related to how you can compress information.

Q: Susan referred to the theory of thought content in philosophy, which demarcates the content of the system from the amount of information.

A: Relevance depends on how the information is grounded, i.e. how you share context, and that can be quantified.

Q: Ed suggested there is nothing compelling about survival as a basis for information value, rather than, say, beauty, etc.

A: Olaf cares about persistence of patterns and about identities. I am not the same self I was ten years ago. He monitors how patterns persist and change over time in response to perturbations of the system.

Q: Shannon used the ratio of dependence. How the information is compressed depends on how much you know about the system.
A: You can get information from observing systems. Olaf’s PhD thesis was about observing birds to get useful information.

The presentation and discussion ended here.

Michael Solomon
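The toy numerical version of Olaf's "box A predicts box B" notion promised above: in information theory this is mutual information, and the sketch below (entirely my own construction, not from the talk) estimates it, in bits, from paired sequences of discrete observations.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Estimate I(X;Y) = sum over (x,y) of p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]."""
    n = len(xs)
    joint = Counter(zip(xs, ys))   # counts of (x, y) pairs
    cx = Counter(xs)               # marginal counts of x
    cy = Counter(ys)               # marginal counts of y
    mi = 0.0
    for (x, y), c in joint.items():
        # c/n is p(x,y); c*n/(cx[x]*cy[y]) is p(x,y)/(p(x)p(y))
        mi += (c / n) * math.log2(c * n / (cx[x] * cy[y]))
    return mi

# "Box A" perfectly predicts "box B": one full bit of information.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # -> 1.0
# Independent boxes: knowing A tells you nothing about B.
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # -> 0.0
```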
Notes on Statistical Thermodynamics - Partition Functions

Many times we divide the study of physical chemistry into two broad classes of phenomena. There is the "macroscopic world," where we study the bulk properties of matter. That is, we study samples which contain on the order of $10^{23}$ molecules or particles. The main theoretical framework for the study of bulk properties in chemistry is thermodynamics (or kinetics for most nonequilibrium phenomena), and the fundamental equations are the first, second, and third laws of thermodynamics. On the other hand, we also study the "microscopic world," where we are concerned with the properties of individual molecules or particles. The usual theoretical framework of the microscopic world is quantum mechanics (or sometimes classical mechanics), and the fundamental equations are Schrödinger's equations (or Newton's laws).

In the macroscopic world we deal with quantities such as internal energy, enthalpy, entropy, heat capacities, and so on. In the microscopic world we deal with wave functions, particle momenta, kinetic and potential energies, energy levels, and so on. But the properties of bulk matter obviously depend on the properties of the particles of which it is composed. How are the microscopic properties of individual molecules related to the properties of a sample which contains $10^{23}$ molecules, or - more to the point - how can we find the properties of a bulk sample from the properties of the molecules? This is the question which statistical thermodynamics seeks to address.

We can think of statistical thermodynamics as a process of model building. We construct a (theoretical) model of the particles, atoms, molecules, etc. which make up the sample, and statistical thermodynamics will tell us what the bulk properties will be. For example, if our model is a collection of molecules which do not interact with each other, we will get the bulk properties of an ideal gas. If we want to get the properties of a nonideal gas, we have to go back to the model and put in the properties of the molecules which will make the gas nonideal. In this case that amounts to including a potential energy of interaction between the molecules.

It would be nice if statistical thermodynamics could be derived entirely from the fundamental principles we already know, say quantum mechanics or classical mechanics. Unfortunately, this is not possible at present. In order to arrive at a theory which works we must introduce some new postulates. This path is followed in most books on statistical thermodynamics and is quite successful and largely satisfactory. However, in this discussion we will use a slightly different approach. Here I am going to ask you to believe that the "Boltzmann factor" - which I will describe below - is a correct description of some "probabilities" relevant to the system, and we will derive everything else from there.

We will assume that whatever system we are interested in satisfies the Schrödinger equation (even if it contains $10^{23}$ particles!), and that we know or can find the energies of the quantum states. For convenience we will label the energy states in order of increasing energy, $E_1 \le E_2 \le E_3 \le E_4 \le \dots$. We use the $\le$ sign rather than the $<$ sign to allow for the possibility that two or more states share the same energy (degeneracy). The assertion, then, is that the probability of finding the system in state $i$ with energy $E_i$ is proportional to the Boltzmann factor $e^{-E_i/kT}$, where $k$ is the Boltzmann constant and $T$ is the Kelvin temperature. Exponential factors of this form should look familiar: the Maxwell distribution of molecular speeds contains $e^{-mv^2/2kT}$, and the Arrhenius rate expression contains $e^{-E_a/RT}$. Notice that $mv^2/2$ is a kinetic energy and that $E_a$ is the Arrhenius activation energy. In the latter case the assertion is usually made that the exponential factor is proportional to the number of molecules with sufficient energy to react.
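As a concrete instance of the Boltzmann factor (the numbers here are my own illustration, not from the notes): for two states with energies $E_1 < E_2$, the relative probability of the upper state is

$$\frac{P_2}{P_1} = \frac{e^{-E_2/kT}}{e^{-E_1/kT}} = e^{-(E_2 - E_1)/kT},$$

so at room temperature, where $kT$ corresponds to roughly 2.5 kJ/mol (i.e. $RT$ per mole), a state lying 5 kJ/mol above the ground state is occupied roughly $e^{-2} \approx 0.14$ times as often as the ground state.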
It is cumbersome to keep writing 1/kT all the time, so it is customary to set 1/kT = β. Using this notation, the proportionality can be written:

(2)    P_i ∝ e^(-βE_i).

Assuming that we accept that the probability of finding the system in state i with energy E_i is proportional to e^(-βE_i), the next natural question is what is the proportionality constant? That's relatively easy to answer because we know that the probabilities must sum to unity - the system must be in some state. So we can write

(3)    Σ_i P_i = 1.

Let's call the proportionality constant c. Then

(4)    P_i = c e^(-βE_i).

Using Equation (3) we can solve for c by writing

(5)    c Σ_i e^(-βE_i) = 1,

so that

(6)    c = 1 / Σ_i e^(-βE_i).

Again, it is cumbersome to keep writing Σ_i e^(-βE_i) all the time, so we simplify things by writing

(7)    Q = Σ_i e^(-βE_i).

It turns out that this quantity, Q, is so important in the theory that it is even given its own name. Q, so defined, is called a partition function. (Don't worry about why it is called that; it has something to do with how energy is partitioned among the possible states of the system. In some books the partition function is given the symbol z or Z, which stands for the German word Zustandssumme, which means sum over states.)

The reason why Q is so important is that it connects the mechanical properties of the system (through the quantized energies E_i) with thermodynamics (through the T in β = 1/kT). So this function has both thermodynamics and mechanics in it. Q is a function of T through the β part and it is a function of the mechanical variables in the model through the energies E_i. For example, if the quantized energies of the system depend on the volume, V, the system is contained in and on the number, N, of molecules in the system - and they generally do depend on these variables - then Q will be a function of T, V, and N. (Q will also be a function of other things, like the mass of the individual molecules, but we don't generally indicate that explicitly because the mass of a molecule is not a thermodynamic variable.) So Q is usually a function of T, V, and N, which we write as

(8)    Q = Q(T, V, N).

It is a function of T through the β and of V and N through the quantized energies E_i. (It is important to remember that the sum in Equation 7 is over all states of the system, not just over energy levels. If there is degeneracy, some of the terms in Equation 7 will be identical. For example, if there are four states with a particular energy E, then the term e^(-βE) will occur in the summation four times.)

Utilizing the fact that the normalization constant is 1/Q, we can write the probability that the system is in state i, with energy E_i, as

(9)    P_i = e^(-βE_i) / Q.

Now the question arises, how do we use this to calculate quantities of interest? We'll start with internal energy, U. The best word definition of U is that it is the (average of the) sum of all the potential and kinetic energies of all the particles in the system. In other words, U is the total (average) mechanical energy of the system. Since we know what the possible energy states of the system are and we know the probability that the system is in each state, we can calculate the average energy. We will set this average energy equal to U (this is sort of a postulate, but we won't worry about that now),

(10)    U = Σ_i P_i E_i.

Since we have an expression for P_i we can rewrite Equation 10 as

(11)    U = (1/Q) Σ_i E_i e^(-βE_i).

So far so good, but we can simplify this by noticing that

(12)    ∂Q/∂β = -Σ_i E_i e^(-βE_i),

so that

(13)    U = -(1/Q) (∂Q/∂β).

Since (1/Q)(∂Q/∂β) is just ∂lnQ/∂β, we see that

(15)    U = -(∂lnQ/∂β).

(We will find, as we go along, that all of the thermodynamic properties will depend on lnQ or derivatives of lnQ. Q itself usually is a very, very, very large dimensionless number, but its natural logarithm is much smaller and will be related to measurable properties.)
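To make Equations (7), (9), and (15) concrete, here is a minimal numerical sketch (not part of the original notes; the three energy levels are made up) that computes Q and the probabilities P_i, and checks U = -∂lnQ/∂β by a finite difference:

import math

# Made-up energy levels (in joules) for a small illustrative system.
E = [0.0, 1.0e-21, 2.5e-21]
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0
beta = 1.0 / (k * T)

Q = sum(math.exp(-beta * Ei) for Ei in E)        # Equation (7)
P = [math.exp(-beta * Ei) / Q for Ei in E]       # Equation (9)
U = sum(Pi * Ei for Pi, Ei in zip(P, E))         # Equation (10)

# Check Equation (15): U = -d lnQ / d beta, by a central difference.
db = beta * 1e-6
lnQ = lambda b: math.log(sum(math.exp(-b * Ei) for Ei in E))
U_check = -(lnQ(beta + db) - lnQ(beta - db)) / (2 * db)
print(sum(P), U, U_check)   # probabilities sum to 1; the two U values agree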
It is fair to ask what is being held constant in taking the partial derivative in Equation 15. If we recall the definition of Q it will be clear that the only things available to be held constant are the mechanical definitions of the model, such as V, N, and any other purely mechanical things that the E_i may depend on (but not pressure, for example). Sometimes it is convenient to take derivatives with respect to temperature instead of β. Using elementary calculus we can change variables by setting

(16)    ∂/∂β = (dT/dβ) (∂/∂T),

but β = 1/kT, or T = 1/kβ, so that

(17)    dT/dβ = -1/(kβ²) = -kT²,

and Equation 15 becomes

(18)    U = kT² (∂lnQ/∂T)_{V,N}.

We now have enough information to calculate the heat capacity at constant volume. We can't calculate the heat capacity at constant pressure yet because Q and U are functions of V and not p. (We will call the heat capacity at constant volume C_V, but in fact we are holding all of the mechanical parameters of the system constant.) The thermodynamic definition of C_V is

(19)    C_V = (∂U/∂T)_{V,N},

which can be calculated from our Q and U as

(20)    C_V = (∂/∂T [kT² (∂lnQ/∂T)_{V,N}])_{V,N}.

(In statistical thermodynamics it is common to omit the statement of what variables are being held constant since we know that Q is a function of T, V, and N. Thus, it is not unusual to see Equation 20 written

(21)    C_V = ∂/∂T (kT² ∂lnQ/∂T) = 2kT (∂lnQ/∂T) + kT² (∂²lnQ/∂T²). )

We have two thermodynamic properties of our system, U and C_V, all calculated from Q. Can we get anything else? How about entropy? The third law of thermodynamics says that the entropy (of a "nice" system) is zero at the absolute zero of temperature. So all we should have to do is integrate C_V/T from 0 K up to some temperature T,

(22)    S = ∫₀ᵀ (C_V/T) dT.

(I know that I've used T as both variable and limit of integration, but you know what I mean so I won't worry about making it look right to the mathematicians. If this bothers you, put a prime on the T and dT inside the integral.) OK, so entropy becomes

(23)    S = ∫₀ᵀ [2k (∂lnQ/∂T) + kT (∂²lnQ/∂T²)] dT.

(Notice that we have divided C_V by T, so that there is one less T factor in each of the terms in the integrand than there were in Equation 21.) This integral is really the sum of two integrals,

(24)    S = 2k ∫₀ᵀ (∂lnQ/∂T) dT + k ∫₀ᵀ T (∂²lnQ/∂T²) dT.

The first integral is easy and gives 2k lnQ evaluated between 0 and T. The second,

(25)    k ∫₀ᵀ T (∂²lnQ/∂T²) dT,

can be integrated by parts. In case you have forgotten how to integrate by parts, recall that u dv = d(uv) - v du. Here we are setting u = T and dv = (∂²lnQ/∂T²) dT, so that v = ∂lnQ/∂T. The integration by parts then gives for the second integral in Expression 24

(26)    kT (∂lnQ/∂T) |₀ᵀ - k lnQ |₀ᵀ.

Combining the two integrals we get

(27)    S = [k lnQ + kT (∂lnQ/∂T)] |₀ᵀ,

which, upon separation of the upper and lower limits, becomes

(28)    S = k lnQ + kT (∂lnQ/∂T) - [k lnQ + kT (∂lnQ/∂T)]_{T=0}.

(We have not bothered to explicitly indicate that the first two terms are evaluated at temperature T; they are.) The last two terms refer to 0 K and are presumably the entropy at absolute zero. In a more sophisticated treatment we would show that they are identically zero, but here we shall just assume that they are zero because of the third law. So our expression for entropy is just

(29)    S = k lnQ + kT (∂lnQ/∂T)_{V,N}.

So now we have three thermodynamic functions which we can calculate from lnQ. We have added entropy to the list. If we look carefully at the second term on the right in the last equation, and compare it to Equation (18), we will see that this last term is just U/T. So,

(30)    S = k lnQ + U/T.

But we already know that U - TS is just the Helmholtz free energy, A:

(31)    A = U - TS,

so

(32)    A = -kT lnQ.

This Equation 32 is the fundamental equation connecting the partition function Q to thermodynamics. From this equation we can derive all the other equations which we have given above and more. For example, we can get S again from the usual relationships of thermodynamics as

(33)    S = -(∂A/∂T)_{V,N}.

Knowing A and S we can get U as

(34)    U = A + TS.

The heat capacity at constant volume can be calculated two ways,

(35)    C_V = (∂U/∂T)_{V,N} = T (∂S/∂T)_{V,N}.

In addition, we can get pressure from

(36)    p = -(∂A/∂V)_{T,N} = kT (∂lnQ/∂V)_{T,N},

and the chemical potential, μ, from

(37)    μ = (∂A/∂N)_{T,V} = -kT (∂lnQ/∂N)_{T,V}.
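Continuing the numerical sketch above (again with made-up energy levels, not part of the original notes), these relations can be checked by differentiating lnQ with respect to T: Equation (18) for U, (21) for C_V, (29) for S, and (32) for A.

import math

E = [0.0, 1.0e-21, 2.5e-21]    # same made-up levels as before
k = 1.380649e-23
lnQ = lambda T: math.log(sum(math.exp(-Ei / (k * T)) for Ei in E))

T, dT = 300.0, 1e-3
dlnQ  = (lnQ(T + dT) - lnQ(T - dT)) / (2 * dT)          # d lnQ / dT
d2lnQ = (lnQ(T + dT) - 2 * lnQ(T) + lnQ(T - dT)) / dT**2

U  = k * T**2 * dlnQ                                    # Equation (18)
Cv = 2 * k * T * dlnQ + k * T**2 * d2lnQ                # Equation (21)
S  = k * lnQ(T) + k * T * dlnQ                          # Equation (29)
A  = -k * T * lnQ(T)                                    # Equation (32)
print(Cv)
print(A, U - T * S)    # the two printed values agree: A = U - TS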
C_p can also be obtained from lnQ and its derivatives. It is a good test of your thermodynamic skills to derive the expression for C_p in terms of lnQ and its T and V derivatives. The fact is, we can get every property of our system from lnQ that our model contains. (Anything that is not in the model will not show up in the thermodynamic properties. For example, if you want the properties of a nonideal gas you have to include interactions between molecules in your model.) We now have the basic equations; all that remains is to make the model and write Q. On the next page we will give some additional useful information and develop some simple models.
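As a small taste of that model building (a sketch, not from the original notes; the gap ε is made up), the simplest possible model is a two-level system, for which Q can be written in closed form:

import math

# Two-level system: E_0 = 0 and E_1 = eps, so Q = 1 + exp(-eps/kT).
k = 1.380649e-23
eps = 2.0e-21                      # made-up energy gap, J

def props(T):
    x = eps / (k * T)
    Q = 1.0 + math.exp(-x)
    P1 = math.exp(-x) / Q          # population of the upper level
    U = eps * P1                   # Equation (10) with two states
    return Q, P1, U

for T in (50, 150, 300, 1000):
    print(T, props(T))
# As T grows, P1 approaches 1/2 and U approaches eps/2: the two levels
# become equally populated as the Boltzmann factor approaches 1.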
This article is about the physical quantity. For other uses of the word "energy", see Energy (disambiguation).

Lightning is the electric breakdown of air by strong electric fields, producing a plasma, which causes an energy transfer from the electric field to heat, mechanical energy (the random motion of air molecules caused by the heat), and light.

In physics and other sciences, energy (from the Greek ενεργός, energos, "active, working")[1] is a scalar physical quantity that is a property of objects and systems of objects which is conserved by nature. Several different forms, such as kinetic, potential, thermal, electromagnetic, chemical, nuclear, and mass have been defined to explain all known natural phenomena. Energy is converted from one form to another, but it is never created or destroyed. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[2] Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a passenger in an airplane has zero kinetic energy relative to the airplane, but nonzero kinetic energy relative to the earth.

Main articles: History of energy and Timeline of thermodynamics, statistical mechanics, and random processes

Thomas Young - the first to use the term "energy" in the modern sense.

The concept of energy emerged out of the idea of vis viva, which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter - a view shared by Isaac Newton, although it would be more than a century until this was generally accepted. In 1807, Thomas Young was the first to use the term "energy", instead of vis viva, in its modern sense.[3] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum. William Thomson (Lord Kelvin) amalgamated all of these laws into the laws of thermodynamics, which aided in the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius, and to the introduction of laws of radiant energy by Jožef Stefan.

During a 1961 lecture[3] for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy:

There is a fact, or if you wish, a law, governing natural phenomena that are known to date. There is no known exception to this law - it is exact so far as we know. The law is called conservation of energy; it states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity, which does not change when something happens.
It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number, and when we finish watching nature go through her tricks and calculate the number again, it is the same.

Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem).

Energy in various contexts

The concept of energy and its transformations is extremely useful in explaining and predicting most natural phenomena. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often described by entropy (equal energy spread among all available degrees of freedom) considerations, since in practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.

The concept of energy is used often in all fields of science. In chemistry, energy is that attribute of a substance that determines how, when and at what speed it can be converted into another substance or react with other substances. In biology, the sustenance of life itself is critically dependent on energy transformations; living organisms survive because of exchange of energy within and without. In a living organism chemical bonds are constantly broken and made to make the exchange and transformation of energy possible. These chemical bonds are most often bonds in carbohydrates, including sugars. In geology and meteorology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[4] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the planet Earth. In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma ray bursts are the universe's highest-output energy transformations of matter. All phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).

Regarding applications of the concept of energy

• Energy is subject to a strict global conservation law; that is, whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[5]
• The total energy of a system can be subdivided and classified in various ways. For example, it is sometimes convenient to distinguish potential energy (which is a function of coordinates only) from kinetic energy (which is a function of coordinate time derivatives only). It may also be convenient to distinguish gravitational energy, electrical energy, thermal energy, and other forms. These classifications overlap; for instance thermal energy usually consists partly of kinetic and partly of potential energy.
• The transfer of energy can take various forms; familiar examples include work, heat flow, and advection, as discussed below.
• The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, the important public-service announcement, "Please conserve energy" uses vernacular notions of "conservation" and "energy" which make sense in their own context but are utterly incompatible with the technical notions of "conservation" and "energy" (such as are used in the law of conservation of energy).[5]

In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy-momentum 4-vector).[6] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Energy transfer

Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that by definition of energy the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as

ΔE = W,

if there are no other energy-transfer processes involved. Here ΔE is the amount of energy transferred, and W represents the work done on the system. More generally, the energy transfer can be split into two categories:

ΔE = W + Q,

where Q represents the heat flow into the system. There are other ways in which an open system can gain or lose energy. If mass is counted as energy (as in many relativistic problems) then ΔE must contain a term for mass lost or gained. In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which potentials are then extracted (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without addition of either work or heat). These terms may be added to the above equation, or they can generally be subsumed into a quantity called the "energy addition term E", which refers to any type of energy carried over the surface of a control volume or system volume:

ΔE = W + Q + E.

Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam adds to system energy, without either being work done or heat added, in the classic senses). Here E in this general equation represents other additional advected energy terms not covered by work done on a system or heat added to it.

Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed, so the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:

Ep,initial + Ek,initial = Ep,final + Ek,final.

The equation can then be simplified further since Ep = mgh (mass times acceleration due to gravity times the height) and Ek = ½mv² (half times mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.

Energy and the laws of motion

The Hamiltonian

The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems.
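As a small illustration of this (a sketch, not part of the original article; the mass, spring constant and step size are made up), Hamilton's equations dq/dt = ∂H/∂p and dp/dt = -∂H/∂q can be integrated numerically for a harmonic oscillator, and the total energy stays essentially constant:

# Hamilton's equations for a 1-D harmonic oscillator, H = p^2/(2m) + k*q^2/2.
# Semi-implicit Euler integration; the energy drift stays small,
# illustrating conservation of the Hamiltonian. All numbers are made up.
m, k = 1.0, 4.0            # mass and spring constant
q, p = 1.0, 0.0            # initial position and momentum
dt, steps = 1.0e-3, 10000

def H(q, p):
    return p**2 / (2.0 * m) + 0.5 * k * q**2

E0 = H(q, p)
for _ in range(steps):
    p -= k * q * dt        # dp/dt = -dH/dq
    q += p / m * dt        # dq/dt = +dH/dp
print(E0, H(q, p))         # the two values agree to better than 1%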
These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics (see The Hamiltonian, MIT OpenCourseWare website 18.013A, Chapter 16.3, accessed February 2007).

The Lagrangian

Another energy-related concept is called the Lagrangian, after Joseph Louis Lagrange. This is even more fundamental than the Hamiltonian, and can be used to derive the equations of motion. In non-relativistic physics, the Lagrangian is the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (like systems with friction).

Energy and thermodynamics

Internal energy

Internal energy is the sum of all microscopic forms of energy of a system. It is related to the molecular structure and the degree of molecular activity and may be viewed as the sum of kinetic and potential energies of the molecules; it comprises the following types of energy:[7]

Sensible energy: the portion of the internal energy of a system associated with kinetic energies (molecular translation, rotation, and vibration; electron translation and spin; and nuclear spin) of the molecules.
Latent energy: the internal energy associated with the phase of a system.
Chemical energy: the internal energy associated with the different kinds of aggregation of atoms in matter.
Nuclear energy: the tremendous amount of energy associated with the strong bonds within the nucleus of the atom itself.
Energy interactions: those types of energy not stored in the system (e.g. heat transfer, mass transfer, and work), but which are recognized at the system boundary as they cross it, and which represent gains or losses by a system during a process.
Thermal energy: the sum of sensible and latent forms of internal energy.

The laws of thermodynamics

According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. The first law of thermodynamics simply asserts that energy is conserved,[8] and that heat is included as a form of energy transfer. A commonly-used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g. a cylinder-full of gas), the change in energy of the system is given by

dE = T dS - P dV,

where the first term on the right is the heat transfer, defined in terms of temperature T and entropy S, and the last term on the right hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign is because we must compress the system to do work on it, so that the volume change dV is negative). Although this is the standard textbook example, it is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, and effects such as advection, and because it depends on temperature. The most general statement of the first law - i.e. conservation of energy - is valid even in situations in which temperature is undefinable. Energy is sometimes expressed as

E = W + Q,

which is unsatisfactory[5] because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right hand side of this equation, except perhaps in trivial cases.

Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternatively kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and alternatively at two other points it is entirely potential.
Over the whole cycle, or over many cycles, net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all these degrees of freedom.

This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of a distribution of energy between parts of a system. This concept is also related to the second law of thermodynamics, which basically states that when an isolated system is given more degrees of freedom (i.e., given new available energy states which are the same as existing states), then energy spreads over all available degrees equally without distinction between "new" and "old" degrees.

Oscillators, phonons, and photons

In an ensemble of unsynchronized oscillators, the average energy is spread equally between kinetic and potential. In a solid, thermal energy (often referred to as heat) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy is equally kinetic and potential. In an ideal gas, the interaction potential between particles is essentially a delta function; thus all of the energy is kinetic.

Because an electrical oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and the electrical energy considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.

1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that radiation energy can be considered equally potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.

2. On the other hand, in the key equation E² = p²c² + m²c⁴, the contribution mc² is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is p²/2m at speeds much smaller than c, as can be proved by writing E = mc² √(1 + p²/m²c²) and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.

The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion. For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.

Work and virtual work

Work is roughly force times distance. But more precisely, it is

W = ∫C F · ds.

This says that the work (W) is equal to the line integral of the force F along a certain path C; for details see the mechanical work article.
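As a quick numerical illustration of this definition (a sketch, not part of the original article; the spring constant and displacement are made up), the work done against a Hooke's-law force F(x) = kx can be summed step by step and compared with the closed form ½kx²:

# Numerical check of W = ∫ F · ds for a spring force F(x) = k*x,
# stretched quasi-statically from x = 0 to x = 0.5. Illustrative numbers.
k, x_max, n = 80.0, 0.5, 100000
dx = x_max / n
W = sum(k * (i + 0.5) * dx * dx for i in range(n))   # midpoint rule
print(W, 0.5 * k * x_max**2)                         # both print 10.0 (joules)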
Quantum mechanics

In quantum mechanics energy is defined in terms of the energy operator, as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. It thus can be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for an electromagnetic wave in a vacuum, the resulting energy states are related to the frequency by the Planck equation E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (the work to accelerate a mass from zero speed to some finite speed) relativistically - using Lorentz transformations instead of Newtonian mechanics - Einstein discovered an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest mass energy - energy which every mass must possess even when at rest. The amount of energy is directly proportional to the mass of the body:

E = mc²,

where m is the mass, c is the speed of light in vacuo, and E is the rest mass energy.

For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons which individually are massless, but as a system retain their mass. This is a reversible process - the inverse process is called pair creation - in which the rest mass of particles is created from the energy of two (or more) annihilating photons.

In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[6]

It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, mass too has inertia and gravity associated with it.

There is no absolute measure of energy, because energy is defined as the work that one system does (or can do) on another. Thus, only the energy of the transition of a system from one state into another can be defined and thus measured. The methods for the measurement of energy often deploy methods for the measurement of still more fundamental concepts of science, namely mass, distance, radiation, temperature, time, electric charge and electric current.

A calorimeter - an instrument used by physicists to measure energy.

Conventionally the technique most often employed is calorimetry, a thermodynamic technique that relies on the measurement of temperature using a thermometer or of the intensity of radiation using a bolometer.

Main article: Units of energy

Throughout the history of science, energy has been expressed in several different units such as ergs and calories. At present, the accepted unit of measurement for energy is the SI unit of energy, the joule.

Forms of energy

Heat, a form of energy, is partly potential energy and partly kinetic energy.

Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement.
Both position and movement are relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference. Some introductory authors attempt to separate all forms of energy into either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, as Feynman points out:

These notions of potential and kinetic energy depend on a notion of length scale. For example, one can speak of macroscopic potential and kinetic energy, which do not include thermal potential and kinetic energy. Also what is called chemical potential energy (below) is a macroscopic notion, and closer examination shows that it is really the sum of the potential and kinetic energy on the atomic and subatomic scale. Similar remarks apply to nuclear "potential" energy and most other forms of energy. This dependence on length scale is non-problematic if the various length scales are decoupled, as is often the case ... but confusion can arise when different length scales are coupled, for instance when friction converts macroscopic work into microscopic thermal energy.

Examples of the interconversion of energy. Mechanical energy is converted into:
Mechanical energy, by a lever
Thermal energy, by brakes
Electrical energy, by a dynamo
Electromagnetic radiation, by a synchrotron
Chemical energy, by matches
Nuclear energy, by a particle accelerator

Potential energy

Main article: Potential energy

Potential energy, symbols Ep, V or Φ, is defined as the work done against a given force (= work of the given force with a minus sign) in changing the position of an object with respect to a reference position (often taken to be infinite separation). If F is the force and s is the displacement,

Ep = -∫ F · ds,

with the dot representing the scalar product of the two vectors.

The name "potential" energy originally signified the idea that the energy could readily be transferred as work - at least in an idealized system (reversible process, see below). This is not completely true for any real system, but is often a reasonable first approximation in classical mechanics. The general equation above can be simplified in a number of common cases, notably when dealing with gravity or with elastic forces.

Gravitational potential energy

Main article: Gravitational potential energy

The gravitational force near the Earth's surface varies very little with the height, h, and is equal to the mass, m, multiplied by the gravitational acceleration, g = 9.81 m/s². In these cases, the gravitational potential energy is given by

Ep,g = mgh.

A more general expression for the potential energy due to Newtonian gravitation between two bodies of masses m1 and m2, useful in astronomy, is

Ep,g = -G m1 m2 / r,

where r is the separation between the two bodies and G is the gravitational constant, 6.6742(10)×10⁻¹¹ m³kg⁻¹s⁻².[9] In this case, the reference point is the infinite separation of the two bodies. (A numerical comparison of these two expressions is sketched below, after the section on elastic potential energy.)

Elastic potential energy

Elastic potential energy is defined as the work needed to compress (or expand) a spring. The force, F, in a spring or any other system which obeys Hooke's law is proportional to the extension or compression, x:

F = -kx,

where k is the force constant of the particular spring (or system). In this case, the calculated work becomes

Ep,e = ½kx².

Hooke's law is a good approximation for the behaviour of chemical bonds under normal conditions, i.e. when they are not being broken or formed.
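As promised above (a sketch, not part of the original article; the mass and height are made up, the Earth data are standard values), the near-surface formula mgh can be compared numerically with the general Newtonian expression:

# Near-surface gravitational potential energy m*g*h versus the exact
# Newtonian difference G*M*m*(1/R - 1/(R + h)) for the Earth.
G = 6.6742e-11          # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24            # kg, mass of the Earth
R = 6.371e6             # m, mean radius of the Earth
m, h = 10.0, 1000.0     # a 10 kg mass lifted 1 km
g = G * M / R**2        # ~9.82 m/s^2
print(m * g * h)                             # ~98200 J
print(G * M * m * (1.0/R - 1.0/(R + h)))     # ~98180 J, about 0.02% lower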
Kinetic energy

Main article: Kinetic energy

Kinetic energy, symbols Ek, T or K, is the work required to accelerate an object to a given speed. Indeed, calculating this work one easily obtains the following:

Ek = ½mv².

At speeds approaching the speed of light, c, this work must be calculated using Lorentz transformations, which results in the following:

Ek = mc²(γ - 1), where γ = 1/√(1 - v²/c²).

This equation reduces to the one above it at small (compared to c) speeds. A mathematical by-product of this work (which is immediately seen in the last equation) is that even at rest a mass has the amount of energy equal to

E0 = mc².

This energy is thus called rest mass energy.
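A short numerical comparison of the two formulas (a sketch, not part of the original article; the mass and speeds are made up):

# Classical (m*v^2/2) versus relativistic (m*c^2*(gamma - 1)) kinetic
# energy of a 1 kg mass at several fractions of the speed of light.
c = 2.99792458e8
m = 1.0
for frac in (0.01, 0.1, 0.5, 0.9):
    v = frac * c
    gamma = 1.0 / (1.0 - (v / c)**2) ** 0.5
    print(frac, 0.5 * m * v**2, m * c**2 * (gamma - 1.0))
# At 0.01 c the two agree to within ~0.01%; at 0.9 c the classical
# formula is too small by roughly a factor of three.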
Thermal energy

Main article: Thermal energy

Examples of the interconversion of energy. Thermal energy is converted into:
Mechanical energy, by a steam turbine
Thermal energy, by a heat exchanger
Electrical energy, by a thermocouple
Electromagnetic radiation, by hot objects
Chemical energy, by a blast furnace
Nuclear energy, by a supernova

The general definition of thermal energy, symbols q or Q, is also problematic. A practical definition for small transfers of heat is

Δq = Cv ΔT,

where Cv is the heat capacity of the system. This definition will fail if the system undergoes a phase transition - e.g. if ice is melting to water - as in these cases the system can absorb heat without increasing its temperature. In more complex systems, it is preferable to use the concept of internal energy rather than that of thermal energy (see Chemical energy below). Despite the theoretical problems, the above definition is useful in the experimental measurement of energy changes. In a wide variety of situations, it is possible to use the energy released by a system to raise the temperature of another object, e.g. a bath of water. It is also possible to measure the amount of electrical energy required to raise the temperature of the object by the same amount. The calorie was originally defined as the amount of energy required to raise the temperature of one gram of water by 1 °C (approximately 4.1855 J, although the definition later changed), and the British thermal unit was defined as the energy required to heat one pound of water by 1 °F (later fixed as 1055.06 J).

Electrical energy

Main articles: Electromagnetism and Electricity

Examples of the interconversion of energy. Electrical energy is converted into:
Mechanical energy, by an electric motor
Thermal energy, by a resistor
Electrical energy, by a transformer
Electromagnetic radiation, by a light-emitting diode
Chemical energy, by electrolysis
Nuclear energy, by a synchrotron

The electric potential energy of a given configuration of charges is defined as the work which must be done against the Coulomb force to rearrange charges from infinite separation to this configuration (or the work done by the Coulomb force separating the charges from this configuration to infinity). For two point-like charges Q1 and Q2 at a distance r this work, and hence the electric potential energy, is equal to

Ep = Q1 Q2 / (4πε0 r),

where ε0 is the electric constant of a vacuum, 10⁷/4πc0² or 8.854188…×10⁻¹² F/m.[9] If the charge is accumulated in a capacitor (of capacitance C), the reference configuration is usually selected not to be infinite separation of charges, but vice versa - charges at an extremely close proximity to each other (so there is zero net charge on each plate of the capacitor). In this case the work, and thus the electric potential energy, becomes

Ep = Q² / (2C).

If an electric current passes through a resistor, electrical energy is converted to heat; if the current passes through an electric appliance, some of the electrical energy will be converted into other forms of energy (although some will always be lost as heat). The amount of electrical energy due to an electric current can be expressed in a number of different ways:

E = UQ = UIt = Pt = I²Rt = (U²/R)t,

where U is the electric potential difference (in volts), Q is the charge (in coulombs), I is the current (in amperes), t is the time for which the current flows (in seconds), P is the power (in watts) and R is the electric resistance (in ohms). The last of these expressions is important in the practical measurement of energy, as potential difference, resistance and time can all be measured with considerable accuracy.

Magnetic energy

There is no fundamental difference between magnetic energy and electrical energy: the two phenomena are related by Maxwell's equations. The potential energy of a magnet of magnetic moment m in a magnetic field B is defined as the work of the magnetic force (actually of the magnetic torque) on the re-alignment of the vector of the magnetic dipole moment, and is equal to

Ep,m = -m · B,

while the energy stored in an inductor (of inductance L) when a current I passes through it is

E = ½LI².

This second expression forms the basis for superconducting magnetic energy storage.

Electromagnetic fields

Examples of the interconversion of energy. Electromagnetic radiation is converted into:
Mechanical energy, by a solar sail
Thermal energy, by a solar collector
Electrical energy, by a solar cell
Electromagnetic radiation, by non-linear optics
Chemical energy, by photosynthesis
Nuclear energy, by Mössbauer spectroscopy

Calculating the work needed to create an electric or magnetic field in unit volume (say, in a capacitor or an inductor) results in the electric and magnetic field energy densities

uE = ε0E²/2 and uB = B²/(2μ0),

in SI units. Electromagnetic radiation, such as microwaves, visible light or gamma rays, represents a flow of electromagnetic energy. Applying the above expressions to the magnetic and electric components of the electromagnetic field, both the volumetric density and the flow of energy in an e/m field can be calculated. The resulting Poynting vector, which is expressed as

S = (1/μ0) E × B,

in SI units, gives the density of the flow of energy and its direction. The energy of electromagnetic radiation is quantized (has discrete energy levels). The spacing between these levels is equal to

ΔE = hν,

where h is the Planck constant, 6.6260693(11)×10⁻³⁴ J·s,[9] and ν is the frequency of the radiation. This quantity of electromagnetic energy is usually called a photon. The photons which make up visible light have energies of 270–520 zJ, equivalent to 160–310 kJ/mol, the strength of weaker chemical bonds.
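Those figures are easy to reproduce (a sketch, not part of the original article), converting the red and violet ends of the visible spectrum to energies per photon and per mole:

# Photon energies at the red (700 nm) and violet (400 nm) ends of the
# visible spectrum, in zeptojoules and kJ/mol.
h = 6.6260693e-34       # J s, Planck constant
c = 2.99792458e8        # m/s, speed of light
N_A = 6.0221415e23      # 1/mol, Avogadro constant
for wavelength in (700e-9, 400e-9):
    E = h * c / wavelength                    # energy per photon, J
    print(E / 1e-21, "zJ,", E * N_A / 1e3, "kJ/mol")
# Prints roughly 284 zJ / 171 kJ/mol (red) and 497 zJ / 299 kJ/mol
# (violet), consistent with the ranges quoted above.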
Chemical energy

Examples of the interconversion of energy. Chemical energy is converted into:
Mechanical energy, by muscle
Thermal energy, by fire
Electrical energy, by a fuel cell
Electromagnetic radiation, by glowworms
Chemical energy, by chemical reaction

Chemical energy is the energy due to associations of atoms in molecules and various other kinds of aggregates of matter. It may be defined as the work done by electric forces during the re-arrangement of electric charges, electrons and protons, in the process of aggregation. If the chemical energy of a system decreases during a chemical reaction, it is transferred to the surroundings in some form of energy (often heat); on the other hand, if the chemical energy of a system increases as a result of a chemical reaction, it is by converting another form of energy from the surroundings.

For example, when two hydrogen atoms react to form a dihydrogen molecule, the chemical energy decreases by 724 zJ (the bond energy of the H–H bond); when the electron is completely removed from a hydrogen atom, forming a hydrogen ion (in the gas phase), the chemical energy increases by 2.18 aJ (the ionization energy of hydrogen). It is common to quote the changes in chemical energy for one mole of the substance in question: typical values for the change in molar chemical energy during a chemical reaction range from tens to hundreds of kJ/mol.

The chemical energy as defined above is also referred to by chemists as the internal energy, U: technically, this is measured by keeping the volume of the system constant. However, most practical chemistry is performed at constant pressure and, if the volume changes during the reaction (e.g. a gas is given off), a correction must be applied to take account of the work done by or on the atmosphere to obtain the enthalpy, H:

ΔH = ΔU + pΔV.

A second correction, for the change in entropy, S, must also be performed to determine whether a chemical reaction will take place or not, giving the Gibbs free energy, G:

ΔG = ΔH - TΔS.

These corrections are sometimes negligible, but often not (especially in reactions involving gases).

Since the industrial revolution, the burning of coal, oil, natural gas or products derived from them has been a socially significant transformation of chemical energy into other forms of energy. The energy "consumption" (one should really speak of "energy transformation") of a society or country is often quoted in reference to the average energy released by the combustion of these fossil fuels:

1 tonne of coal equivalent (TCE) = 29 GJ
1 tonne of oil equivalent (TOE) = 41.87 GJ

On the same basis, a tank-full of gasoline (45 litres, 12 gallons) is equivalent to about 1.6 GJ of chemical energy. Another chemically-based unit of measurement for energy is the "tonne of TNT", taken as 4.184 GJ. Hence, burning a tonne of oil releases about ten times as much energy as the explosion of one tonne of TNT (see the short sketch at the end of this section): fortunately, the energy is usually released in a slower, more controlled manner.

Simple examples of chemical energy are batteries and food. When you eat, the food is digested and turned into chemical energy, which can then be transformed into kinetic energy.
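A sketch checking these equivalences, re-using only the figures quoted above:

# Fuel-energy equivalences, using the GJ figures quoted in the text.
TCE  = 29.0      # GJ per tonne of coal equivalent
TOE  = 41.87     # GJ per tonne of oil equivalent
TNT  = 4.184     # GJ per tonne of TNT
tank = 1.6       # GJ per 45 litre tank of gasoline
print(TOE / TNT)     # ~10: a tonne of oil ~ ten tonnes of TNT
print(TOE / tank)    # ~26 tanks of gasoline per tonne of oil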
Nuclear energy

Examples of the interconversion of energy. Nuclear binding energy is converted into:
Mechanical energy, by alpha radiation
Thermal energy, by the Sun
Electrical energy, by beta radiation
Electromagnetic radiation, by gamma radiation
Chemical energy, by radioactive decay
Nuclear energy, by nuclear isomerism

Nuclear potential energy, along with electric potential energy, provides the energy released from nuclear fission and nuclear fusion processes. The result of both these processes are nuclei in which strong nuclear forces bind nuclear particles more strongly and closely. Weak nuclear forces (different from strong forces) provide the potential energy for certain kinds of radioactive decay, such as beta decay. The energy released in nuclear processes is so large that the relativistic change in mass (after the energy has been removed) can be as much as several parts per thousand.

Nuclear particles (nucleons) like protons and neutrons are not destroyed (law of conservation of baryon number) in fission and fusion processes. A few lighter particles may be created or destroyed (for example, in beta minus and beta plus decay, or electron capture decay), but these minor processes are not important to the immediate energy release in fission and fusion. Rather, fission and fusion release energy when collections of baryons become more tightly bound, and it is the energy associated with a fraction of the mass of the nucleons (but not the whole particles) which appears as the heat and electromagnetic radiation generated by nuclear reactions. This heat and radiation retains the "missing" mass, but the mass is missing only because it escapes in the form of heat and light, which retain the mass and conduct it out of the system where it is not measured.

The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space, but during this process the number of total protons and neutrons in the Sun does not change. In this system, the light itself retains the inertial equivalent of this mass, and indeed the mass itself (as a system), which represents 4 million tons per second of electromagnetic radiation, moving into space. Each of the helium nuclei which are formed in the process is less massive than the four protons from which it was formed, but (to a good approximation) no particles or atoms are destroyed in the process of turning the Sun's nuclear potential energy into light.

Transformations of energy

Main article: Energy conversion

One form of energy can often be readily transformed into another with the help of a device - for instance, a battery, from chemical energy to electrical energy; a dam: gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electrical generator. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at a maximum. At its lowest point the kinetic energy is at a maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction, the conversion of energy between these processes is perfect, and the pendulum will continue swinging forever.

Energy can be converted into matter and vice versa. The mass-energy equivalence formula E = mc², derived independently by Albert Einstein and Henri Poincaré, quantifies the relationship between mass and rest energy. Since c² is extremely large relative to ordinary human scales, the conversion of mass to other forms of energy can liberate tremendous amounts of energy, as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of the transformation of energy into matter (particles) are found in high-energy nuclear physics.

In nature, transformations of energy can be fundamentally classed into two kinds: those that are thermodynamically reversible, and those that are thermodynamically irreversible. An irreversible process is one in which energy is dissipated into empty quantum states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen.
For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, however, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states in the universe (such as an expansion of matter, or a randomization in a crystal).

As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work, or be transformed to other usable forms of energy, grows less and less.

Law of conservation of energy

Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed; it can only be transformed. Most kinds of energy (with gravitational energy being a notable exception)[1] are also subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[3][5] Conservation of energy is the mathematical consequence of the translational symmetry of time (that is, the indistinguishability of time intervals taken at different times)[12] - see Noether's theorem.

According to the energy conservation law, the total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. This law is a fundamental principle of physics. It follows from the translational symmetry of time, a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable.

Because energy is the quantity which is canonically conjugate to time, it is impossible to define the exact amount of energy during any definite time interval, making it impossible to apply the law of conservation of energy over very short times. This must not be considered a "violation" of the law. We know the law still holds, because a succession of short time periods does not accumulate any violation of conservation of energy. The uncertainty in the energy over a short time interval Δt is of the form

ΔE Δt ≳ ħ,

which is similar in form to the uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).

In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum; the exchange of virtual particles with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions).
Virtual photons (which are simply the lowest quantum-mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.

Energy and life

Any living organism relies on an external source of energy - radiation from the Sun in the case of green plants, chemical energy in some form in the case of animals - to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken in mostly in the form of carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. These are oxidised to carbon dioxide and water in the mitochondria:

C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O

and some of the energy is used to convert ADP into ATP:

ADP + HPO4²⁻ → ATP + H2O

The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[13]

gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ

It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. However, the energy that is converted to heat serves a vital purpose, as it allows the organism to be highly ordered. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[14] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[15] i.e. reconverted into carbon dioxide and heat.

Notes and references

1. Harper, Douglas. Energy. Online Etymology Dictionary. Retrieved on May 1, 2007.
2. Lofts, G; O'Keeffe, D; et al (2004). "11 - Mechanical Interactions", Jacaranda Physics 1, 2, Milton, Queensland, Australia: John Wiley & Sons Australia Ltd., 286. ISBN 0-7016-3777-3.
3. Smith, Crosbie (1998). The Science of Energy - a Cultural History of Energy Physics in Victorian Britain. The University of Chicago Press. ISBN 0-226-76420-6.
4. [2]
5. Berkeley Physics Course Volume 1. Charles Kittel, Walter D. Knight and Malvin A. Ruderman.
6. Misner, Thorne, Wheeler (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0716703440.
7. Cengel, Yungus A.; Boles, Michael (2002). Thermodynamics - An Engineering Approach, 4th ed. McGraw-Hill, 17-18. ISBN 0-07-238332-1.
8. Kittel and Kroemer (1980). Thermal Physics. New York: W. H. Freeman. ISBN 0-7167-1088-9.
9. International Council of Science Committee on Data for Science and Technology (2007). 2006 CODATA recommended values.
10. Feynman, Richard (1964). The Feynman Lectures on Physics; Volume 1. U.S.A.: Addison Wesley. ISBN 0-201-02115-3.
11. The Laws of Thermodynamics, including careful definitions of energy, free energy, et cetera.
12. [3]
13. These examples are solely for illustration, as it is not the energy available for work which limits the performance of the athlete but the power output of the sprinter and the force of the weightlifter. A worker stacking shelves in a supermarket does more work (in the physical sense) than either of the athletes, but does it more slowly.
15. Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model," in Shiyomi, M. et al. (Eds.), Global Environmental Change in the Ocean and on Land, pp. 343–58.

Further reading

• Alekseev, G. N. (1986). Energy and Entropy. Moscow: Mir Publishers.
• Walding, Richard; Rapkins, Greg; Rossiter, Glenn (1999). New Century Senior Physics. Melbourne, Australia: Oxford University Press. ISBN 0-19-551084-4.
Click the link for more information. ..... Click the link for more information. momentum (pl. momenta; SI unit kg m/s, or, equivalently, N•s) is the product of the mass and velocity of an object. For more accurate measures of momentum, see the section "modern definitions of momentum" on this page. ..... Click the link for more information. William Thomson, 1st Baron Kelvin, OM, GCVO, PC, PRS, FRSE, (26 June 1824 – 17 December 1907) was a British mathematical physicist, engineer, and outstanding leader in the physical sciences of the 19th century. ..... Click the link for more information. Thermodynamics (from the Greek θερμη, therme, meaning "heat" and δυναμις, dynamis, meaning "power") is a branch of physics that studies the effects of changes in temperature, pressure, and volume on ..... Click the link for more information.
Quantum Field Theory/Introduction to The Standard Model

Local gauge symmetry. Gauge theories. Non-Abelian gauge theories.

Spontaneous symmetry breaking. Goldstone theorem. Higgs mechanism.

The Higgs mechanism is a theoretical framework concerning the origin of the mass of elementary particles; technically, it provides the only consistent explanation of how the masses of the W and Z bosons arise through spontaneous electroweak symmetry breaking. More generally, the Higgs mechanism is the way that the gauge bosons in any gauge theory acquire a nonzero mass. A Higgs mechanism can also explain the masses of other particles, e.g. the fermions, again in a gauge-invariant way.

The simplest realization of the Higgs mechanism in the standard model requires an extra Higgs field which interacts with the gauge fields, and which has a nonzero value in its lowest energy state, a vacuum expectation value. This means that all of space is filled with the background Higgs field, the so-called Higgs condensate. Interaction with this background field changes the low-energy spectrum of the gauge fields, and the gauge bosons become massive. The Higgs field has a non-trivial self-interaction, like the Mexican hat potential, which leads to spontaneous symmetry breaking: in the lowest energy state the symmetry of the potential (which includes the gauge symmetry) is broken by the condensate. Analysis of small fluctuations of the fields near the minimum reveals that the gauge bosons and other particles become massive. In the standard model, the Higgs field is an SU(2) doublet, a complex spinor with four real components, which is charged under the standard model U(1). After symmetry breaking, three of the four degrees of freedom in the Higgs field mix with the W and Z bosons, while the one remaining degree of freedom becomes the Higgs boson, a new scalar particle.

Goldstone Bosons

The problem with spontaneous symmetry breaking models in particle physics is that, according to Goldstone's theorem, they come with massless scalar particles. If a symmetry is broken by a condensate, acting with a symmetry generator on the condensate gives a second state with the same energy. So certain oscillations do not cost any energy, and the particles associated with these oscillations have zero mass.

In the standard model, at temperatures high enough that the symmetry is unbroken, all elementary particles except the scalar Higgs boson are massless. At a critical temperature, the Higgs field spontaneously slides from the point of maximum energy in a randomly chosen direction. Once the symmetry is broken, the gauge boson particles, such as the W bosons and Z boson, acquire masses. The mass can be interpreted as the result of the interactions of the particles with the "Higgs ocean". Fermions, such as the leptons and quarks in the Standard Model, acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons.

The Higgs mechanism can be considered as superconductivity in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances. A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect.
This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior. But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal, the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not just surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents which exactly neutralize them. The Meissner effect is due to currents in a thin surface layer, whose thickness, the London penetration depth, can be calculated from a simple model.

This simple model, due to Lev Landau and Vitaly Ginzburg, treats superconductivity as a charged Bose–Einstein condensate. Suppose that a superconductor contains bosons with charge q. The wavefunction of the bosons can be described by introducing a quantum field, \psi, which obeys the Schrödinger equation as a field equation (in units where \hbar, the Planck quantum divided by 2\pi, is replaced by 1):

i{\partial \over \partial t} \psi = -{(\nabla - iqA)^2 \over 2m} \psi

The operator \psi(x) annihilates a boson at the point x, while its adjoint \psi^\dagger creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value \Psi of \psi(x), which is a classical function obeying the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate.

When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount which changes from point to point, and shifts the vector potential by a gradient:

\psi \rightarrow e^{iq\phi(x)} \psi

A \rightarrow A + \nabla \phi

When there is no condensate, this transformation only changes the definition of the phase of \psi at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase. The condensate wavefunction can be written as

\psi(x) = \rho(x)\, e^{i\theta(x)} ,

where \rho is the real amplitude, which determines the local density of the condensate. If the condensate were neutral, the flow would be along the gradients of \theta, the direction in which the phase of the Schrödinger field changes. If the phase \theta changes slowly, the flow is slow and has very little energy. But now \theta can be made equal to zero just by making a gauge transformation to rotate the phase of the field.

The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy,

H = {1\over 2m} |(\nabla - iqA)\psi|^2 ,

and taking the density of the condensate \rho to be constant,

H \approx {\rho^2 \over 2m} (\nabla\theta - qA)^2 .

Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term,

{q^2 \rho^2 \over 2m} A^2 .

When this term is present, electromagnetic interactions become short-ranged.
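The coefficient of this A² term fixes the screening length, the London penetration depth mentioned above. The following minimal numerical sketch is an addition to the text: it restores SI units and assumes a purely illustrative value for the pair density, using the standard London expression λ = √(m / (μ₀ q² n)) rather than anything derived here.

```python
# London penetration depth lambda = sqrt(m / (mu_0 * q^2 * n)) for a condensate
# of Cooper pairs (charge q = 2e, mass m = 2 m_e). The pair density n is an
# assumed, illustrative value of order 10^28 per cubic meter.
import numpy as np
from scipy.constants import m_e, e, mu_0

n = 1.0e28          # pair density in m^-3 (illustrative assumption)
q = 2.0 * e         # Cooper-pair charge
m = 2.0 * m_e       # Cooper-pair mass

lam = np.sqrt(m / (mu_0 * q**2 * n))
print(f"lambda ~ {lam * 1e9:.0f} nm")   # a few tens of nanometers
```

The result, a few tens of nanometers, is the thickness of the surface layer in which the screening currents of the Meissner effect flow.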
Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long-wavelength A mode,

E \approx {{\dot A}^2 \over 2} + {q^2 \rho^2 \over 2m} A^2 .

This is a harmonic oscillator with frequency \sqrt{q^2 \rho^2 / m}. The quantity |\psi|^2 = \rho^2 is the density of the condensate of superconducting particles.

In an actual superconductor, the charged particles are electrons, which are fermions, not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs. The charge of the condensate q is therefore twice the electron charge e. The pairing in a normal superconductor is due to lattice vibrations and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by Bardeen, Cooper and Schrieffer in the famous BCS theory.

Abelian Higgs model

In a relativistic gauge theory, the vector bosons are naturally massless, like the photon, leading to long-range forces. This is fine for electromagnetism, where the force is actually long-range, but it means that the description of short-range weak forces by a gauge theory requires a modification.

Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to A, the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge-invariant idea: what is zero in one gauge is nonzero in another.

So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction.

The condensate value is described by a quantum field with an expectation value, just as in the Landau–Ginzburg model. To make sure that the condensate value of the field does not pick out a preferred direction in space-time, it must be a scalar field. In order for the phase of the condensate to define a gauge, the field must be charged. In order for a scalar field \Phi to be charged, it must be complex. Equivalently, it should contain two fields with a symmetry which rotates them into each other, the real and imaginary parts. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points.

The only renormalizable model where a complex scalar field \Phi acquires a nonzero value is the Mexican-hat model, where the field energy has a minimum away from zero:

S(\phi) = \int {1\over 2} |\partial \phi|^2 - \lambda (|\phi|^2 - \Phi^2)^2

This defines the following Hamiltonian:

H(\phi) = {1\over 2} |\dot\phi|^2 + {1\over 2} |\nabla \phi|^2 + V(|\phi|)

The first term is the kinetic energy of the field. The second term is the extra potential energy when the field varies from point to point.
The third term is the potential energy when the field has any given magnitude. This potential energy,

V(z, \Phi) = \lambda (|z|^2 - \Phi^2)^2 ,

has a graph which looks like a Mexican hat, which gives the model its name. In particular, the minimum energy value is not at z = 0, but on the circle of points where the magnitude of z is \Phi.

Figure: Higgs potential V. For a fixed value of \lambda, the potential is plotted against the real and imaginary parts of \Phi. Note the Mexican-hat or champagne-bottle profile at the ground.

When the field \Phi(x) is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if \phi(x) = \Phi e^{i\theta(x)} with a constant prefactor, then the action for the field \theta(x), i.e., the "phase" of the Higgs field \Phi(x), has only derivative terms. This is not a surprise: adding a constant to \theta(x) is a symmetry of the original theory, so different values of \theta(x) cannot have different energies. This is an example of Goldstone's theorem: spontaneously broken continuous symmetries lead to massless particles.

The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism:

S(\phi, A) = \int {1\over 4} F^{\mu\nu} F_{\mu\nu} + |(\partial - iqA)\phi|^2 + \lambda (|\phi|^2 - \Phi^2)^2 .

The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field \phi is equal to \Phi. But now the phase of the field is arbitrary, because gauge transformations change it. This means that the field \theta(x) can be set to zero by a gauge transformation, and does not represent any degrees of freedom at all. Furthermore, choosing a gauge where the phase of the condensate is fixed, the potential energy for fluctuations of the vector field is nonzero, just as in the Landau–Ginzburg model. So in the Abelian Higgs model, the gauge field acquires a mass.

To calculate the magnitude of the mass, consider a constant value of the vector potential A in the x direction, in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy:

E = {1\over 2} |\partial (\Phi e^{iqAx})|^2 = {1\over 2} q^2 \Phi^2 A^2 .

This energy is the same as a mass term {1\over 2} m^2 A^2 with m = q\Phi.

Nonabelian Higgs mechanism

The nonabelian Higgs model has the following action:

S(\phi, \mathbf{A}) = \int {1\over 4g^2} \mathop{\mathrm{tr}}(F^{\mu\nu} F_{\mu\nu}) + |D\phi|^2 + V(|\phi|) ,

where now the nonabelian field \mathbf{A} is contained in the covariant derivative D and in the tensor components F^{\mu\nu} and F_{\mu\nu} (the relation between \mathbf{A} and those components is well known from Yang–Mills theory). It is exactly analogous to the Abelian Higgs model. Now the field \phi is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection:

D\phi = \partial \phi - i A^k t_k \phi

Again, the expectation value of \Phi defines a preferred gauge where the condensate is constant, and, fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost.
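Stepping back to the abelian case for a moment, the mass formula m = q\Phi derived above can be checked symbolically. This snippet is an illustrative addition, not part of the original text; it simply expands the gradient energy of the sinusoidally varying condensate.

```python
# Symbolic check of the abelian mass term:
# E = (1/2) |d/dx (Phi * exp(i q A x))|^2 = (1/2) q^2 Phi^2 A^2.
import sympy as sp

x = sp.symbols('x', real=True)
q, A, Phi = sp.symbols('q A Phi', positive=True)

phi = Phi * sp.exp(sp.I * q * A * x)   # condensate with a sinusoidally varying phase
grad = sp.diff(phi, x)                 # scalar gradient in the A = 0 gauge
energy = sp.simplify(sp.Rational(1, 2) * grad * sp.conjugate(grad))
print(energy)   # -> A**2*Phi**2*q**2/2, i.e. a mass term with m = q*Phi
```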
Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is the renormalizable version of an early electroweak model due to Julian Schwinger. In this model, the gauge group is SO(3) (or SU(2); there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field \phi^a which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space. Without loss of generality, one can choose the z-axis in field space to be the direction in which \phi is pointing, so that the vacuum expectation value of \phi is (0, 0, A), where A is a constant with dimensions of mass (in units with c = \hbar = 1).

Rotations around the z-axis form a U(1) subgroup of SO(3) which preserves the vacuum expectation value of \phi, and this is the unbroken gauge group. Rotations around the x and y axes do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale A, and one massless U(1) gauge boson, similar to the photon.

The Schwinger model predicts magnetic monopoles at the electroweak unification scale, and does not predict the Z meson. It does not break the electroweak symmetry as it occurs in nature. But historically, a model similar to this (though not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified.

Standard model Higgs mechanism

The gauge group of the electroweak part of the standard model is SU(2) × U(1). The Higgs mechanism is realized by a scalar field which is a weak SU(2) doublet with weak hypercharge −1; it has four real components (two complex components), transforms as a spinor under SU(2), and gets multiplied by a phase under U(1) rotations. Note that this is not the same as two complex spinors which mix under U(1), which would have eight real components; rather, this is the spinor representation of the group U(2): multiplying by a phase mixes the real and imaginary parts of the complex spinor into each other.

The group SU(2) consists of the unitary 2×2 matrices of determinant one, the orthonormal changes of coordinates in a complex two-dimensional vector space. Rotating the coordinates so that the first basis vector points in the direction of H makes the vacuum expectation value of H the spinor (A, 0). The generators of rotations about the x, y, and z axes are half the Pauli matrices \sigma_x, \sigma_y, \sigma_z, so that a rotation of angle \theta about the z-axis takes the vacuum to

(A e^{i\theta/2}, 0) .

While the X and Y generators mix up the top and bottom components, the Z rotations only multiply the vacuum by a phase. This phase can be undone by a U(1) rotation of angle \theta/2, which multiplies by the opposite phase, since the Higgs has charge −1. Under both an SU(2) z-rotation and a U(1) rotation by an amount \theta/2, the vacuum is invariant. This combination of generators,

Q = W_z + Y/2 ,

defines the unbroken gauge group, where W_z is the generator of rotations around the z-axis in the SU(2) and Y is the generator of the U(1). This combination of generators (perform a z-rotation in the SU(2) and simultaneously a U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model.
The part of the gauge field in this direction stays massless, and this gauge field is the actual photon. The phase that a field acquires under this combination of generators is its electric charge, and this is the formula for the electric charge in the standard model. In this convention, all the Y charges in the standard model are multiples of 1/3. To make all the Y charges integers, one can rescale the Y part of the formula by tripling all the Y charges and rewrite the charge formula as Q = W_z + Y/6, but the normalization with Y/2 is the universal standard.

Affine Higgs mechanism

Ernst Stueckelberg discovered a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Stueckelberg's model is a limit of the regular Mexican-hat Abelian Higgs model, in which the vacuum expectation value H goes to infinity and the charge of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to H, so the Higgs boson becomes infinitely massive and disappears. The vector meson mass, equal to the product eH, stays finite.

The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations and discard the radial part. The angular part of the Higgs field \theta has the following gauge transformation law:

\theta \rightarrow \theta + e\alpha

A \rightarrow A + \partial \alpha

The gauge covariant derivative for the angle (which is actually gauge invariant) is

D\theta = \partial \theta - eA .

In order to keep \theta fluctuations finite and nonzero in this limit, \theta should be rescaled by H, so that its kinetic term in the action stays normalized. The action for the \theta field is read off from the Mexican-hat action by substituting \phi = H e^{i\theta/H}:

S = \int {1\over 4} F^2 + {1\over 2} (D\theta)^2 = \int {1\over 4} F^2 + {1\over 2} (\partial\theta - HeA)^2 = \int {1\over 4} F^2 + {1\over 2} (\partial\theta - mA)^2 ,

since m = eH is the gauge boson mass. By making a gauge transformation to set \theta = 0, the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field:

S = \int {1\over 4} F^2 + {m^2 \over 2} A^2

To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers R under addition, which differs only in the global topology. Such a U(1) group is non-compact. The field \theta transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy.

The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For nonabelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors.

Further consequences, e.g. for fermions

In spite of the introduction of spontaneous symmetry breaking, the mass terms for fermions also conflict with gauge invariance.
Therefore, for these fields as well, the mass terms should be replaced by a gauge-invariant "Higgs" mechanism. An obvious possibility is some kind of "Yukawa coupling" (see below) between the fermion field \psi and the Higgs field \Phi, with unknown couplings G_{\psi}, which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field), written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field \psi and the Higgs field \Phi is

\mathcal{L}_{\mathrm{Fermion}}(\phi, A, \psi) = \overline{\psi} i \gamma^{\mu} D_{\mu} \psi + G_{\psi} \overline{\psi} \phi \psi ,

where again the gauge field A enters only through D_\mu (i.e., it is only indirectly visible). The quantities \gamma^{\mu} are the Dirac matrices, and G_{\psi} is the already-mentioned Yukawa coupling parameter. The mass generation follows the same principle as above, namely from the existence of a finite expectation value |\langle\phi\rangle|, as described above. Again, this is crucial for the existence of the property "mass".
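The claim made earlier, that Q = W_z + Y/2 annihilates the Higgs vacuum (A, 0) while the individual SU(2) generators do not, is easy to verify numerically. The sketch below is an illustrative addition (not in the original text); the value A = 1 is an arbitrary choice and the Higgs hypercharge is Y = −1, as stated above.

```python
# Check which generator combinations preserve the Higgs vacuum (A, 0).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)

W = {name: s / 2 for name, s in zip("xyz", (sigma_x, sigma_y, sigma_z))}
Y = -1 * np.eye(2)            # hypercharge acts as -1 on the Higgs doublet
vev = np.array([1.0, 0.0])    # vacuum expectation value (A, 0) with A = 1

Q = W["z"] + Y / 2            # the unbroken combination (the photon direction)
print(np.allclose(Q @ vev, 0))            # True: Q annihilates the vacuum
for name, gen in W.items():
    print(name, np.linalg.norm(gen @ vev))  # each W generator alone moves the vacuum
```

Three independent broken combinations remain, matching the statement that three of the four Higgs degrees of freedom mix with the W and Z bosons.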
Electrons in atoms

Learning objectives

• Explain the difference between a continuous spectrum and a line spectrum.
• Explain the difference between an emission and an absorption spectrum.
• Use the concept of quantized energy states to explain atomic line spectra.
• Given an energy level diagram, predict wavelengths in the line spectrum, and vice versa.
• Define and distinguish between shells, subshells, and orbitals.
• Explain the relationships between the quantum numbers.
• Use quantum numbers to label electrons in atoms.
• Describe and compare atomic orbitals given the n and ℓ quantum numbers.
• List a set of subshells in order of increasing energy.
• Write electron configurations* for atoms in either the subshell or orbital box notations.
• Write electron configurations of ions.
• Use electron configurations to predict the magnetic properties of atoms.

Lecture outline

The quantum theory was used to show how the wavelike behavior of electrons leads to quantized energy states when the electrons are bound or trapped. In this section, we'll use the quantum theory to explain the origin of spectral lines and to describe the electronic structure of atoms.

Emission Spectra

• experimental key to atomic structure: analyze light emitted by high-temperature gaseous elements
• experimental setup: spectroscopy
• atoms emit a characteristic set of discrete wavelengths, not a continuous spectrum!
• atomic spectrum can be used as a "fingerprint" for an element
• hypothesis: if atoms emit only discrete wavelengths, maybe atoms can have only discrete energies
• an analogy: a turtle sitting on a ramp can have any height above the ground, and so, any potential energy; a turtle sitting on a staircase can take on only certain discrete energies
• energy is required to move the turtle up the steps (absorption)
• energy is released when the turtle moves down the steps (emission)
• only discrete amounts of energy are absorbed or released (energy is said to be quantized)
• energy staircase diagram for atomic hydrogen
• bottom step is called the ground state
• higher steps are called excited states
• computing line wavelengths using the energy staircase diagram
• computing energy steps from wavelengths in the line spectrum
• summary: line spectra arise from transitions between discrete (quantized) energy states

The quantum mechanical atom

• Electrons in atoms have quantized energies
• Electrons in atoms are bound to the nucleus by electrostatic attraction
• Electron waves are standing matter waves
• standing matter waves have quantized energies, as with the "electron on a wire" model
• Electron standing matter waves are 3 dimensional
• The electron on a wire model was one dimensional; one quantum number was required to describe the state of the electron
• A 3D model requires three quantum numbers
• A three-dimensional standing matter wave that describes the state of an electron in an atom is called an atomic orbital
• The energies and mathematical forms of the orbitals can be computed using the Schrödinger equation
• quantization isn't assumed; it arises naturally in solution of the equation
• every electron adds 3 variables (x, y, z) to the equation; it's very hard to solve equations with lots of variables.
• energy-level separations computed with the Schrödinger equation agree very closely with those computed from atomic spectral lines

Quantum numbers

• Think of the quantum numbers as addresses for electrons
• the principal quantum number, n
• determines the size of an orbital (bigger n = bigger orbitals)
• largely determines the energy of the orbital (bigger n = higher energy)
• can take on integer values n = 1, 2, 3, ...
• all electrons in an atom with the same value of n are said to belong to the same shell
• spectroscopists use the following names for shells

Spectroscopist's notation for shells*.
n  shell name
1  K
2  L
3  M
4  N
5  O
6  P
7  Q

• the azimuthal quantum number, ℓ
• designates the overall shape of the orbital within a shell
• affects orbital energies (bigger ℓ = higher energy)
• all electrons in an atom with the same value of ℓ are said to belong to the same subshell
• only integer values between 0 and n−1 are allowed
• sometimes called the orbital angular momentum quantum number
• spectroscopists use the following notation for subshells

Spectroscopist's notation for subshells*.
ℓ  subshell name
0  s
1  p
2  d
3  f

• the magnetic quantum number, mℓ
• determines the orientation of orbitals within a subshell
• does not affect orbital energy (except in magnetic fields!)
• only integer values between −ℓ and +ℓ are allowed
• the number of mℓ values within a subshell is the number of orbitals within a subshell

The number of possible mℓ values determines the number of orbitals* in a subshell.
ℓ  possible values of mℓ  number of orbitals in this subshell
0  0  1
1  −1, 0, +1  3
2  −2, −1, 0, +1, +2  5
3  −3, −2, −1, 0, +1, +2, +3  7

• the spin quantum number, ms
• several experimental observations can be explained by treating the electron as though it were spinning
• spin makes the electron behave like a tiny magnet
• spin can be clockwise or counterclockwise
• spin quantum number can have values of +1/2 or −1/2

Electron configurations of atoms

• a list showing how many electrons are in each orbital or subshell in an atom or ion
• subshell notation: list subshells of increasing energy, with number of electrons in each subshell as a superscript
• examples
• 1s2 2s2 2p5 means "2 electrons in the 1s subshell, 2 electrons in the 2s subshell, and 5 electrons in the 2p subshell"
• 1s2 2s2 2p6 3s2 3p3 is an electron configuration with 15 electrons total; 2 electrons have n=1 (in the 1s subshell); 8 electrons have n=2 (2 in the 2s subshell, and 6 in the 2p subshell); and 5 electrons have n=3 (2 in the 3s subshell, and 3 in the 3p subshell).
• ground state* configurations fill the lowest energy orbitals first

Electron configurations of the first 11 elements, in subshell notation. Notice how configurations can be built by adding one electron at a time.
atom  Z  ground state electronic configuration
H  1  1s1
He  2  1s2
Li  3  1s2 2s1
Be  4  1s2 2s2
B  5  1s2 2s2 2p1
C  6  1s2 2s2 2p2
N  7  1s2 2s2 2p3
O  8  1s2 2s2 2p4
F  9  1s2 2s2 2p5
Ne  10  1s2 2s2 2p6
Na  11  1s2 2s2 2p6 3s1

Writing electron configurations

• strategy: start with hydrogen, and build the configuration one electron at a time (the Aufbau principle*)
1. fill subshells in order by counting across periods, from hydrogen up to the element of interest [Figure: filling order of subshells from the periodic table]
2. rearrange subshells (if necessary) in order of increasing n & ℓ
• examples: Give the ground state electronic configurations for:
• Al
• Fe
• Ba
• Hg
• watch out for d & f block elements; orbital interactions cause exceptions to the Aufbau principle
• half-filled and completely filled d and f subshells have extra stability

Know these exceptions to the Aufbau principle in the 4th period. (There are many others at the bottom of the table, but don't worry about them now.)
exception  configuration predicted by the Aufbau principle  true ground state configuration
Cr  1s2 2s2 2p6 3s2 3p6 3d4 4s2  1s2 2s2 2p6 3s2 3p6 3d5 4s1
Cu  1s2 2s2 2p6 3s2 3p6 3d9 4s2  1s2 2s2 2p6 3s2 3p6 3d10 4s1

Electron configurations including spin

• unpaired electrons give atoms (and molecules) special magnetic and chemical properties
• when spin is of interest, count unpaired electrons using orbital box diagrams

Examples of ground state electron configurations in the orbital box notation that shows electron spins. [Table of orbital box diagrams not reproduced.]

• drawing orbital box diagrams
1. write the electron configuration in subshell notation
2. draw a box for each orbital.
• Remember that s, p, d, and f subshells contain 1, 3, 5, and 7 degenerate* orbitals, respectively.
• Remember that an orbital can hold 0, 1, or 2 electrons only, and if there are two electrons in the orbital, they must have opposite (paired) spins (Pauli principle*)
3.
within a subshell (depicted as a group of boxes), spread the electrons out and line up their spins as much as possible (Hund's rule*)
• the number of unpaired electrons can be counted experimentally
• configurations with unpaired electrons are attracted to magnetic fields (paramagnetism*)
• configurations with only paired electrons are weakly repelled by magnetic fields (diamagnetism*)

Core and valence electrons

• chemistry involves mostly the shell* with the highest value of principal quantum number*, n, called the valence shell*
• the noble gas core* under the valence shell is chemically inert
• simplify the notation for electron configurations by replacing the core with a noble gas symbol in square brackets:

Examples of electron configurations written with the core/valence notation.
atom  full configuration  core  valence configuration  full configuration using core/valence notation
O  1s2 2s2 2p4  He  2s2 2p4  [He] 2s2 2p4
Cl  1s2 2s2 2p6 3s2 3p5  Ne  3s2 3p5  [Ne] 3s2 3p5
Al  1s2 2s2 2p6 3s2 3p1  Ne  3s2 3p1  [Ne] 3s2 3p1

• electrons in d and f subshells outside the noble gas core are called pseudocore electrons

Examples of electron configurations containing pseudocore electrons.
atom  core  pseudocore  valence  full configuration
Fe  Ar  3d6  4s2  [Ar] 3d6 4s2
Sn  Kr  4d10  5s2 5p2  [Kr] 4d10 5s2 5p2
Hg  Xe  4f14 5d10  6s2  [Xe] 4f14 5d10 6s2
Pu  Rn  5f6  7s2  [Rn] 5f6 7s2
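The filling-order recipe above is easy to automate. The sketch below is an illustrative addition to these notes, assuming the Madelung (n + ℓ) ordering of subshells; as the notes warn, it will not reproduce exceptions such as Cr and Cu.

```python
# Generate a ground-state electron configuration from the Aufbau filling order.
SUBSHELL_LETTERS = "spdf"

def aufbau_configuration(z):
    """Return an electron configuration string for atomic number z."""
    # enumerate subshells (n, l), sorted by (n + l, n) -- the Madelung rule
    subshells = sorted(
        ((n, l) for n in range(1, 8) for l in range(0, min(n, 4))),
        key=lambda nl: (nl[0] + nl[1], nl[0]),
    )
    config, remaining = [], z
    for n, l in subshells:
        if remaining == 0:
            break
        capacity = 2 * (2 * l + 1)      # 2 electrons per orbital, 2l+1 orbitals
        electrons = min(capacity, remaining)
        config.append(f"{n}{SUBSHELL_LETTERS[l]}{electrons}")
        remaining -= electrons
    return " ".join(config)

print(aufbau_configuration(11))   # Na: 1s2 2s2 2p6 3s1
print(aufbau_configuration(26))   # Fe: ... 3p6 4s2 3d6 (in filling order)
```

The output lists subshells in filling order; step 2 of the recipe (rearranging by increasing n and ℓ) is left to the reader.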
Learning Goals

If you prefer, you can download these learning goals as a document: [PDF] Course-Scale Learning Goals

These learning goals were created by a working group of faculty, both those in physics education research and those in other areas of research. This list represents what we want students to be able to do at the end of the course (as opposed to what content is expected to be covered, as in a syllabus).

1. Math/physics connection: Translate a physical description of a junior-level quantum mechanics problem into the mathematical equation necessary to solve it. Explain the physical meaning of the formal and/or mathematical formulation of and/or solution to a junior-level quantum mechanics problem. Achieve physical insight through the mathematics of a problem.

2. Visualize the problem: Sketch the physical parameters of a problem (e.g., wave function, potential, probability distribution), as appropriate for a particular problem. When presented with a graph of a wave function or probability density, derive appropriate physical parameters of the system.

3. Organized knowledge: Articulate the big ideas from each content area and/or lecture, thus indicating that they have organized their content knowledge. Filter this knowledge to access the information needed for a particular physical problem. This organizational process should build on knowledge gained in earlier physics classes.

4. Communication: Justify and explain thinking and/or approach to a problem or physical situation, in either written or oral form.

5. Problem-solving techniques: When faced with a quantum mechanics problem, choose and apply appropriate problem-solving techniques. Transfer the techniques learned in class and through homework to novel contexts (i.e., to solve problems which do not map directly to those in the book). Justify the selected approach (see "Communication" above). In addition to building on techniques learned in previous courses (e.g., recognizing boundary conditions, setting up and solving differential equations, separation of variables, power-series solutions, operator methods), students are expected to develop specific new techniques as listed in the concept-scale learning goals below.

1. Approximations: Recognize when approximations are useful, and use them effectively (e.g., when the energy is very high, or the barrier width very wide). Indicate how many terms of a series solution must be retained to obtain a solution of a given order.

2. Symmetries: Recognize symmetries and be able to take advantage of them in order to choose the appropriate method for solving a problem (e.g., when parity allows you to eliminate certain solutions).

6. Problem-solving strategy: Draw upon knowledge and skills to attack a problem even when a process leading to a correct solution is not yet clear. Continue to develop the ability to monitor progress towards a solution by learning how to:
• Backtrack and try a new approach when necessary
• Recognize when a solution has been reached and be able to articulate why this solution is valid (see "Expecting and Checking Solution" below)
• Persist through to the solution of problems requiring many steps

7.
Expecting and checking solution: When appropriate for a given problem, articulate expectations for the solution to a problem, such as:
• The general shape of the wave function
• Dependence on coordinate choice
• Behavior at large distances
• Problem symmetry

For all problems, justify the reasonableness of a solution reached, by using methods such as:
• Checking solution symmetry
• Verifying boundary conditions
• Order of magnitude estimates
• Dimensional analysis
• Limiting or special cases (e.g., checking the solution for correct behavior in limiting or known cases)

1. Intellectual maturity: Students should accept full responsibility for their own learning. They should be aware of what they do and do not understand about physical phenomena and classes of problems. They should learn to ask sophisticated, specific questions. Students should learn to identify and articulate where in a problem they experienced difficulty and to take appropriate action to move beyond that difficulty. Finally, they should regularly check their understanding against these learning goals and seek out appropriate help to fill in any gaps.

2. Coherent Theory: Students should recognize that the material covered in this course sets a framework for a consistent and complete understanding of quantum mechanics.

3. Build on Earlier Material: While the material in the course represents a significant departure from earlier course work, both mathematically and conceptually, students should recognize and make use of connections to prior work, techniques, and understanding gained in classes in classical physics as well as in their modern physics class.

4. Examples from Recent Research: Whenever possible, examples and homework problems should be drawn from recent research results (nominally defined as the last thirty years).

Topic-Scale Learning Goals

The goals below pertain to specific areas in the study of quantum mechanics which are to be learned in this course. They are organized by subject and thus do not follow any textbook. The subject categories are:
• Mathematics
• Measurement and the quantum state
• The Schrödinger Equation
• Formalism
• Important Systems
• Scattering
• Angular Momentum and Spin

Mathematics
• Differential Equations:
• Solve straightforward first- and second-order differential equations using a variety of methods.
• Recognize when separation of variables will simplify a differential equation and correctly apply the technique.
• Complex Numbers: Be thoroughly familiar with complex numbers and able to find the real part, the imaginary part, the complex conjugate, and the norm of any complex expression.
• Linear Algebra: Given a matrix operator, find the eigenvalues and eigenvectors of the operator. Not only diagonalize the matrix, but also explain the physical significance of the procedure and the result.
• Hamiltonian Formalism:
• Set up the Hamiltonian for a classical system.
• Statistics: Due to the statistical nature of quantum mechanics, students should be adept at computing probabilities and standard deviations.
• Dirac Delta Function: Correctly compute integrals which contain one or more Dirac delta functions.
• Vector Spaces:
• Given a set of real or abstract (e.g., Hilbert space) vectors, determine whether the set constitutes a vector space.
• Given a set of real or abstract (e.g., Hilbert space) vectors, determine whether or not they form a basis of a given vector space.
• Hilbert Space: Compute the correct coefficients of a Hilbert space vector given a basis.
• Operator Theory: Compute the expectation value of an operator in a given state. More generally, compute all the matrix elements of an operator in a given basis. Identify a Hermitian operator.

Measurement and the Quantum State
• The State Vector:
• Correctly normalize a (normalizable) quantum state.1
• Describe and calculate different representations of a quantum state (e.g., position space, momentum space).1
• Observable Operators:
• Know that observable quantities are represented by Hermitian operators.
• Given a wave function and an observable operator, calculate that operator's expectation value.
• For simple systems (e.g., the 1-D infinite square well), find the eigenvectors and eigenvalues of the energy operator.
• Measurement Predictions:
• Given the eigenstates of an operator, compute the possible results of a measurement of the observable which corresponds to that operator.1
• Given a quantum state and the eigenbasis of an observable operator, compute the probabilities of obtaining the possible values which would result from a measurement of the corresponding observable quantity.
• Given the results of a repeated measurement of an observable on a quantum state, construct a plausible quantum state as a superposition of the eigenstates of the operator associated with the observable.1
• Measurement Effects:
• Describe what is known about the state of a system immediately after a measurement, including the significance of the measured value.1
• Time Evolution:
• Given an initial wave function and a basis of energy eigenstates, find the time-dependent wave function.
• Given an initial wave function and a basis of energy eigenstates, deduce when the probability distribution of an operator will be time dependent.1
• Operator Commutation and Compatibility:
• Describe the relationship which must exist between two operators in order for a common eigenbasis to exist.
• Compute the commutator of the position and momentum operators, as well as the commutation relationships between angular momentum operators.
• Describe the effect of following the measurement of an observable with the measurement of an incompatible operator.1
• Given two non-commuting observables A and B, and the result of a measurement of A, compute the possible outcomes of a subsequent measurement of B along with the appropriate probabilities.

Schrödinger Equation
• Time Dependent Schrödinger Equation: Use the time-dependent Schrödinger equation to compute the time evolution of a wave function.
• Time Independent Schrödinger Equation: Describe the conditions under which separation of variables can be used to obtain a time-independent Schrödinger equation, and use this equation to:
• solve for the energy levels of the system
• apply boundary conditions and solve for the stationary states (energy eigenstates) of the system
• apply the Hamiltonian and boundary conditions to determine whether the energy eigenstates are discrete or continuous
• specify the evolution in time of a system when both an initial state and the energy eigenstates are known
• Normalization:
• Explain the relationship between the normalization of a wave function and the ability to correctly calculate expectation values or probability densities.
• Correctly normalize any wave function which represents a physically realizable state.
• Hamiltonian:
• Set up the Hamiltonian for a quantum mechanical system when the potential energy of the corresponding classical system can be calculated.1
• Use commutation relations to determine which operators have eigenstates which are time independent.
• Uncertainty Principle:
• Given a quantum state and an observable, compute the uncertainty (standard deviation in the measurement) of the observable.
• Given two observables, compute the minimum uncertainty of measuring both observables on any quantum state.
• Probability in Quantum Mechanics:
• Given a (time-dependent) wave function, compute the time-dependent probability density.
• For a given quantum state, compute the probability of measuring any particular value for any common observable.1

Important Systems
• Infinite Square Well: Be thoroughly familiar with all aspects of the one-dimensional infinite square well.
• Given the size and position of the potential, compute the energy eigenvalues and the energy eigenstate position-space wave functions.
• Compute the time evolution of a superposition of energy eigenstates, as well as the expectation value of common observables for a superposition state (see the sketch after this list).
• General One-dimensional Systems:
• Given a one-dimensional potential, sketch the first few energy eigenstates.
• Harmonic Oscillator:
• Given a specific harmonic-oscillator potential, compute the energy eigenvalues.
• Given the raising and lowering operators, find the lowest energy eigenstate.
• Given the raising and lowering operators and an energy-eigenstate wave function, find the energy eigenstates on either side.
• Sketch the first few energy eigenstates of the harmonic oscillator.
• Compute position and momentum expectation values using the raising and lowering operators.
• Free Particle: Be adept at using the position-space and momentum-space wave functions of the free particle; in particular, use them to construct wave packets.
• Two-State Systems:
• Given a two-dimensional Hamiltonian, find its eigenstates and eigenvalues.
• Given a two-state system in a superposition state, correctly compute the probabilities of measuring each eigenvalue.
• Hydrogen Atom:
• Set up the Schrödinger equation for a hydrogen-like atom.
• Perform variable separation on the Schrödinger equation for hydrogen.
• Describe the energy eigenstates for hydrogen-like atoms, including the significance and use of their quantum numbers.

Angular Momentum and Spin
• Angular Momentum in Quantum Mechanics:
• Compute the angular momentum of a system in a known eigenstate of an angular momentum operator (e.g., L², Lz).
• Given a system in a known state, compute the probabilities of the possible results of measuring an angular momentum observable (e.g., L², Lz, Ly).
• Spin:
• Given a system in a known state, compute the probabilities of the possible results of measuring a spin observable (e.g., S², Sz, Sy).

1 Targeted in QMAT
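As an illustration of the infinite-square-well goals above (this sketch is an addition, not part of the course materials), the time evolution of an equal superposition of the two lowest eigenstates can be computed in a few lines; units with ħ = m = L = 1 are assumed.

```python
# Time evolution of (psi_1 + psi_2)/sqrt(2) in the infinite square well on [0, L].
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 500)

def psi_n(n, x):
    """Energy eigenstate of the infinite square well."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def energy(n):
    return (n * np.pi) ** 2 / 2.0      # E_n = n^2 pi^2 hbar^2 / (2 m L^2)

def prob_density(x, t):
    """|psi(x,t)|^2 for the equal superposition of n = 1 and n = 2."""
    psi = (psi_n(1, x) * np.exp(-1j * energy(1) * t)
           + psi_n(2, x) * np.exp(-1j * energy(2) * t)) / np.sqrt(2.0)
    return np.abs(psi) ** 2

# the density sloshes back and forth with period 2*pi/(E_2 - E_1)
for t in (0.0, np.pi / (energy(2) - energy(1))):
    print(f"t = {t:.3f}, <x> = {np.trapz(x * prob_density(x, t), x):.3f}")
```

The printed expectation values show ⟨x⟩ oscillating about L/2, the kind of behavior students are expected to predict and check against symmetry and limiting cases.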
Section 13.6: Angular Solutions of the Schrödinger Equation

Most potential energy functions in three dimensions are often not rectangular in form. In fact, they are most often in spherical coordinates (due to a spherical symmetry) and occasionally in cylindrical coordinates (due to a cylindrical symmetry). We begin by considering the generalization of the time-independent Schrödinger equation to three-dimensional spherical coordinates, which is1

−(ħ²/2μ)[(1/r²)∂/∂r(r²∂/∂r) + (1/r²sin(θ))(∂/∂θ)(sin(θ)∂/∂θ) + (1/r²sin²(θ))(∂²/∂φ²)]ψ(r) + V(r)ψ(r) = Eψ(r) .   (13.19)

The probability per unit volume, the probability density, is ψ*(r)ψ(r), and therefore we require ∫ ψ*(r)ψ(r) d³r = 1 (where d³r = dV = r²sin(θ) dr dθ dφ) to maintain a probabilistic interpretation of the energy eigenfunction in three dimensions. As in the two-dimensional case, we use separation of variables, but now with ψ(r) = R(r) Y(θ,φ), i.e., we separate the radial part from the angular part. This substitution yields

(1/R(r)) d/dr(r² dR(r)/dr) + (1/Y sin(θ))(∂/∂θ)(sin(θ)∂Y/∂θ) + (1/Y sin²(θ))(∂²Y/∂φ²) − (2μr²/ħ²)[V(r) − E] = 0 ,   (13.20)

as long as the potential depends only on r, V = V(r). Note that each term involves either r or θ and φ. We can separate these equations using the technique of separation of variables to give

(1/R(r)) d/dr (r² dR(r)/dr) − (2μr²/ħ²)[V(r) − E] = l(l + 1) ,   (13.21)

(1/Y sin(θ)) (∂/∂θ)(sin(θ) ∂Y/∂θ) + (1/Y sin²(θ)) (∂²Y/∂φ²) = −l(l + 1) ,   (13.22)

for the radial and angular parts, respectively. The constant l(l + 1) is the separation constant that allows us to split one differential equation into two. We can do so because the only way for the preceding equation to hold for all r, θ, and φ is for the angular part and the radial part each to equal a constant, ±l(l + 1). Despite the seemingly odd form of the separation constant, it is completely general and can be made to equal any complex number.

For the angular piece, we can again separate variables using the substitution Y(θ,φ) = Θ(θ)Φ(φ). This gives:

(sin(θ)/Θ) d/dθ(sin(θ) dΘ/dθ) + l(l + 1)sin²(θ) = m² ,   (13.23)

(1/Φ) d²Φ/dφ² = −m² ,   (13.24)

where we have written the separation constant as ±m², again without any loss of generality.

The Φ(φ) part of the angular equation is a differential equation, d²Φ/dφ² = −m²Φ, that we have solved before. Its unnormalized solution is

Φm(φ) = exp(imφ) ,   (13.25)

where the separation constant m can be both positive and negative. Since the angle φ ∈ [0, 2π), we have Φm(φ) = Φm(φ + 2π). As in the ring problem of Section 13.5, requiring Φm(φ) to be single valued means that m = 0, ±1, ±2, ±3, …. We show these solutions in Animation 1.

The Θ(θ) part of the angular equation is harder to solve. It has the unnormalized solutions

Θlm(θ) = A Plm(cos(θ)) ,

where the Plm are the associated Legendre polynomials,

Plm(x) = (1 − x²)^(|m|/2) (d/dx)^|m| Pl(x) ,

calculated from the Legendre polynomials

Pl(x) = (1/(2^l l!)) (d/dx)^l (x² − 1)^l .   (Rodrigues' formula)

The first few Legendre polynomials are P0(x) = 1, P1(x) = x, and P2(x) = (1/2)(3x² − 1), or in terms of cos(θ): P0 = 1, P1 = cos(θ), and P2 = (1/2)(3cos²(θ) − 1). We can also write the first few Plm (with subscript l and superscript m) using the above formulas:

P00 = 1 ,  P11 = sin(θ) ,  P10 = cos(θ) ,
P20 = (1/2)(3cos²(θ) − 1) ,  P21 = 3sin(θ)cos(θ) ,  P22 = 3sin²(θ) .

We note that l ≥ 0 for Rodrigues' formula to be valid. In addition, |m| ≤ l, since Plm = 0 for |m| > l.
(For |m| > l, the order of the derivative is larger than the order of the polynomial, and hence the result is zero.) We also note that there are 2l + 1 values of m for a given value of l. Polar plots (zx plane) of associated Legendre polynomials are shown in Animation 2. A positive angle θ is defined to be the angle down from the z axis toward the positive x axis. The length of a vector from the origin to the curve is the magnitude of the wave function, Plm, at that angle. You may vary l and m to see how Plm varies.

We normalize Θlm(θ)Φm(φ) by normalizing the angular part separately from the radial part (which we have yet to consider):

∫∫ Ylm*(θ,φ)Ylm(θ,φ) sin(θ) dθ dφ = 1    [θ integration from 0 to π, φ integration from 0 to 2π]

where Ylm(θ,φ) = Θlm(θ)Φm(φ). When the Ylm(θ,φ) are normalized, they are called the spherical harmonics. The first few are

Y00(θ,φ) = (1/4π)^(1/2) ,
Y1±1(θ,φ) = ∓(3/8π)^(1/2) sin(θ) exp(±iφ) ,
Y10(θ,φ) = (3/4π)^(1/2) cos(θ) ,

and in general, for m > 0,

Ylm(θ,φ) = (−1)^m [(2l + 1)(l − m)!/(4π(l + m)!)]^(1/2) exp(imφ) Plm(cos(θ)) ,

with Yl,−m(θ,φ) = (−1)^m Ylm*(θ,φ) for m < 0. When we represent the spherical harmonics this way, they are automatically orthogonal:

∫ Ylm*(θ,φ)Yl'm'(θ,φ) sin(θ) dθ dφ = δmm' δll' .

1To avoid future confusion, we hereafter use μ for mass, and reserve m for the azimuthal (or magnetic) quantum number.

2Classically, angular momentum is L = r × p. We can write L using quantum-mechanical operators in rectangular coordinates as Lx = y pz − z py, Ly = z px − x pz, and Lz = x py − y px. If we write L² and Lz in spherical coordinates,

L² = −ħ² [(1/sin(θ)) (∂/∂θ)(sin(θ) ∂/∂θ) + (1/sin²(θ)) (∂²/∂φ²)] ,
Lz = −iħ (∂/∂φ) ,

we note that L² Ylm = l(l + 1)ħ² Ylm and Lz Ylm = mħ Ylm; the spherical harmonics, the Ylm, are eigenstates of L² and Lz.
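The normalization and eigenvalue statements above are easy to cross-check numerically. The following sketch is an addition to the text and uses scipy; note that scipy's sph_harm(m, l, theta, phi) convention takes the azimuthal angle as its first angular argument, so the roles of θ and φ are swapped relative to this section's notation.

```python
# Numerical check that the spherical harmonics are normalized:
# integral of |Y_lm|^2 sin(theta) dtheta dphi over the sphere equals 1.
import numpy as np
from scipy.special import sph_harm

l, m = 2, 1
theta = np.linspace(0.0, np.pi, 400)      # polar angle
phi = np.linspace(0.0, 2 * np.pi, 400)    # azimuthal angle
T, P = np.meshgrid(theta, phi, indexing="ij")

Y = sph_harm(m, l, P, T)                  # scipy wants (m, l, azimuth, polar)

integrand = np.abs(Y) ** 2 * np.sin(T)
integral = np.trapz(np.trapz(integrand, phi, axis=1), theta)
print(integral)                           # ~ 1.0
```

Replacing one factor of Y with a conjugated Y of different (l, m) gives a value near zero, confirming the orthogonality relation quoted above.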
I've read that QM operates in a Hilbert space (where the state functions live). I don't know if it's meaningful to ask such a question, but what are the answers to the analogous questions for GR and Newtonian gravity?

2 Answers

I interpreted your question differently, more like a mathematics question.

In Quantum Mechanics, we basically have an equation, the Schrödinger equation, which is a differential equation on the space of square-integrable complex-valued functions. This space is a Hilbert space, which means that it is a vector space, and it also has a nice topological structure: basically, all Cauchy sequences of vectors converge in that space.

In Newtonian mechanics, the equations are defined on phase space, which is basically a $6N$-dimensional space, where $N$ is the total number of particles, on which the coordinates of a point consist of the positions and momenta of each particle you want to describe. The solution of the equations induces a flow on this phase space. The structure of phase space is usually that of a symplectic manifold.

In General Relativity, the equations are Einstein's field equations. They link the Riemann tensor to the energy-momentum tensor. They are difficult to solve in the sense that they are nonlinear and you have to specify an energy-momentum tensor, but this tensor will also depend on the geometry of space-time, thus on the Riemann tensor. So you have to solve in one go for the geometry and the energy-matter distribution. In practice, many simplifying assumptions will be made. But the "space" of solutions is the space of geometries and energy-matter distributions compatible with the field equations.

@Raskolnikov: Your interpretation is what I was intending while asking the question. Is '6N' a typo? I didn't get what it is. Is it 'infinite'? What are the mathematical properties of a phase space of Newtonian mechanics? – Rajesh D Dec 1 '10 at 15:16

No, it's not a typo, but I admit I have not been clear enough. $N$ is the number of particles you want to describe. You multiply by 6 because each particle has 3 spatial coordinates and 3 momenta along each spatial direction. The structure of phase space is that of a symplectic manifold. – Raskolnikov Dec 1 '10 at 15:22

@Raskolnikov: In Newtonian mechanics, is the trajectory of the state of the system always smooth? – Rajesh D Dec 1 '10 at 15:34

More precisely, the dimension of phase space is equal to the number of free generalized coordinates of the system. If you have constraints, they generally reduce the dimensions of phase space, which is a very important justification for using it. – Sklivvz Dec 1 '10 at 15:34

@Rajesh: for the next time I suggest you try to formulate your questions more clearly so that one doesn't have to waste time on answers that happen to not be what you were intending. Well, I probably shouldn't have answered such a vaguely formulated question in the first place... – Marek Dec 1 '10 at 15:40

First, I'll assume that you're talking about quantization. To understand how to quantize GR it is absolutely necessary to give an account (however sketchy) of the approach used to quantize simpler systems.

Classical mechanics

This is a procedure whereby one transfers from the classical point of view (Newtonian mechanics or, equivalently, Lagrangian or Hamiltonian mechanics) to the quantum point of view.
Now, there are some general prescriptions for how one can quantize classical mechanical systems. The most common one is that one replaces the phase space by a Hilbert space, functions on phase space by operators on the Hilbert space, and the Poisson bracket of functions by the commutator of the operators.

Field theory

The previous paragraph was only dealing with mechanics, i.e. the case where there are only a few degrees of freedom. But GR is a field theory (of the gravitational field) and is actually a kind of gauge theory (but a little special at that). One has to first learn how to quantize classical fields and then gauge fields. To do that, you can replace the (infinite-dimensional) phase space of the field by a (very large) Hilbert space and produce an analogue of Poisson brackets called the Dirac bracket, which you then replace by commutators. (The second very common approach to quantization is via the path integral, for which you don't need any operators, but I won't elaborate on that here because it is a huge area that would take us far off the topic of your question.)

Then, to quantize a gauge theory with its own huge gauge symmetry, one has to carry out a very nontrivial discussion of the structure of these Dirac brackets. (There also exist other approaches to this, but none of them is particularly easy for a beginner. If you're interested, see Faddeev-Popov ghosts in path integral gauge quantization and BRST quantization.)

Now, the thing is that GR (as a field theory) is hard to quantize. I.e., if you repeat the above approach for GR, you'll find out that your quantum theory doesn't make sense (because it is not renormalizable). This suggests that something more than the naive approach is needed. And there are actually lots of candidates. For one thing, one can quantize gravity in certain special dimensions (like 2+1) if one generalizes GR a little (this was done by Witten in the '80s). There are also various reformulations that relate quantum gravity and QFT (like the AdS/CFT correspondence). There is also matrix string theory that shows a duality between matrix quantum mechanics and GR (as pointed out to me by Matt in this question of mine).

In short, quantization of GR is very hard. There are many theories, and as of yet there is no experimental evidence that would let us know which one is the correct one.

Thank you for pointing out. I still felt your answer very useful, in a totally different way than I was expecting... I would try to formulate my question more clearly in the future. I think that your answer is very helpful for someone googling or browsing through this forum. – Rajesh D Dec 1 '10 at 15:49

@Rajesh: all right then. I also think my answer could be good if only someone asked the question it addresses :-) – Marek Dec 1 '10 at 16:03
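The "Poisson bracket becomes a commutator" prescription in the answer above can be made concrete for one degree of freedom. The sketch below is an illustrative addition to the thread: position and momentum operators are built from ladder operators in a truncated N-dimensional basis, and their commutator reproduces iħ (with ħ = 1) on every state except the last, an artifact of the truncation.

```python
# Canonical quantization in a truncated harmonic-oscillator basis.
import numpy as np

N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator
x = (a + a.T.conj()) / np.sqrt(2.0)          # position operator
p = 1j * (a.T.conj() - a) / np.sqrt(2.0)     # momentum operator

comm = x @ p - p @ x                         # should be i * identity
print(np.round(np.diag(comm).imag, 6))
# -> [1, 1, ..., 1, -(N-1)]: i on every state except at the truncation edge
```

In an infinite-dimensional Hilbert space the edge defect disappears and [x, p] = iħ exactly, which is why the canonical commutation relations force quantum mechanics onto a Hilbert space rather than a finite-dimensional vector space.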
zbMATH — the first resource for mathematics

On a relation of pseudoanalytic function theory to the two-dimensional stationary Schrödinger equation and Taylor series in formal powers for its solutions. (English) Zbl 1067.81032

Summary: We consider the real stationary two-dimensional Schrödinger equation. With the aid of any of its particular solutions, we construct a Vekua equation possessing the following special property. The real parts of its solutions are solutions of the original Schrödinger equation and the imaginary parts are solutions of an associated Schrödinger equation with a potential having the form of a potential obtained after the Darboux transformation. Using Bers’ theory of Taylor series for pseudoanalytic functions, we obtain a locally complete system of solutions of the original Schrödinger equation which can be constructed explicitly for an ample class of Schrödinger equations. For example, it is possible when the potential is a function of one Cartesian, spherical, parabolic or elliptic variable. We give some examples of application of the proposed procedure for obtaining a locally complete system of solutions of the Schrödinger equation. The procedure is algorithmically simple and can be implemented with the aid of a computer system of symbolic or numerical calculation.

MSC:
81Q10 Selfadjoint operator theory in quantum theory, including spectral analysis
81Q15 Perturbation theories for operators and differential equations
81Q05 Closed and approximate solutions to quantum-mechanical equations
35J10 Schrödinger operator
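[Editorial note: for orientation, a hedged sketch of the kind of construction the summary describes — this is my paraphrase of the usual setup in this literature, not text from the paper, and normalizations vary by source. If \(f\) is a positive particular solution of the stationary equation, the associated "main Vekua equation" is often written as]

\[
\left(-\Delta + \nu(x, y)\right) f = 0, \qquad \partial_{\bar z}\, W = \frac{\partial_{\bar z}\, f}{f}\, \overline{W}, \qquad \partial_{\bar z} = \tfrac{1}{2}\left(\partial_x + i\, \partial_y\right),
\]

whose solutions \(W\) have real parts solving the original Schrödinger equation, as stated in the summary above.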
Optical Conformal Mapping

Science, 23 Jun 2006: Vol. 312, Issue 5781, pp. 1777-1780. DOI: 10.1126/science.1126493

According to Fermat's principle (1), light rays take the shortest optical paths in dielectric media, where the refractive index n integrated along the ray trajectory defines the path length. When n is spatially varying, the shortest optical paths are not straight lines, but are curved. This light bending is the cause of many optical illusions. Imagine a situation where a medium guides light around a hole in it. Suppose that all parallel bundles of incident rays are bent around the hole and recombined in precisely the same direction as they entered the medium. An observer would not see the difference between light passing through the medium or propagating across empty space (or, equivalently, in a uniform medium). Any object placed in the hole would be hidden from sight. The medium would create the ultimate optical illusion: invisibility (2). However, it has been proved (3, 4) that perfect invisibility is unachievable, except in a finite set of discrete directions where the object appears to be squashed to infinite thinness and for certain objects that are small as compared with the wavelength (5, 6). In order to carry images, though, light should propagate with a continuous range of spatial Fourier components, i.e., in a range of directions. The mathematical reason for the impossibility of perfect invisibility is the uniqueness of the inverse-scattering problem for waves (3): the scattering data, i.e., the directions and amplitudes of the transmitted plane-wave components, determine the spatial profile of the refractive index (3). Therefore, the scattering data of light in empty space are only consistent with the propagation through empty space. Perfect illusions are thus thought to be impossible due to the wave nature of light. On the other hand, the theorem (3) does not limit the imperfections of invisibility—they may be very small—nor does it apply to light rays, i.e., to light propagation within the regime of geometrical optics (1). This study develops a general recipe for the design of media that create perfect invisibility for light rays over a continuous range of directions. Because this method is based on geometrical optics (1), the inevitable imperfections of invisibility can be made exponentially small for objects that are much larger than the wavelength of light. To manufacture a dielectric invisibility device, media are needed that possess a wide range of the refractive index in the spectral domain where the device should operate. In particular, Fermat's principle (1) seems to imply that n < 1 in some spatial regions, because only in this case may the shortest optical paths go around the object without causing phase distortions. In our example, n varies from 0 to about 36. In practice, one could probably accept a certain degree of visibility that substantially reduces the demands on the range of the refractive index. Extreme values of n occur when the material is close to resonance with the electromagnetic field. Metamaterials (7) with man-made resonances can be manufactured with appropriately designed circuit boards, similar to the ones used for demonstrating negative refraction (8). The quest for the perfect lens (9) has led to recent improvements (7, 10–13) mainly focused on tuning the magnetic susceptibilities. In such metamaterials, each individual circuit plays the role of an artificial atom with tunable resonances.
With these artificial dielectrics, invisibility could be reached for frequencies in the microwave-to-terahertz range. In contrast, stealth technology is designed to make objects of military interest as black as possible to radar where, using impedance matching (14), electromagnetic waves are absorbed without reflection, i.e., without any echo detectable by radar. Recently, nanofabricated metamaterials with custom-made plasmon resonances have been demonstrated (13) that operate in the visible range of the spectrum and may be modified to reach invisibility. The method used here is general and also applicable to other forms of wave propagation—for example, to sound waves, where the index n describes the ratio of the local phase velocity of the wave to the bulk value, or to quantum-mechanical matter waves, where external potentials act like refractive-index profiles (1). For instance, one could use the profiles of n described here to protect an enclosed space from any form of sonic tomography. This study examines the simplest nontrivial case of invisibility, an effectively two-dimensional situation, by applying conformal mapping (15) to solve the problem—an elegant technique used in research areas as diverse as electrostatics (14), fluid mechanics (16), classical mechanics (17–20), and quantum chaos (21, 22). Consider an idealized situation: a dielectric medium that is uniform in one direction and light of wave number k that propagates orthogonal to that direction. In practice, the medium will have a finite extension and the propagation direction of light may be slightly tilted without causing an appreciable difference to the ideal case. The medium is characterized by the refractive-index profile n(x,y). To satisfy the validity condition of geometrical optics, n(x,y) must not vary by much over the scale of an optical wavelength 2π/k (1). To describe the spatial coordinates in the propagation plane, complex numbers z = x + iy are used with the partial derivatives ∂x = ∂z + ∂z* and ∂y = i∂z − i∂z*, where the asterisk symbolizes complex conjugation. In the case of a gradually varying refractive-index profile, both amplitudes ψ of the two polarizations of light obey the Helmholtz equation (1)

(4 ∂z*∂z + n²k²) ψ = 0,   (1)

written here in complex notation with the Laplace operator ∂x² + ∂y² = 4∂z*∂z. Suppose we introduce new coordinates w described by an analytic function w(z) that does not depend on z*. Such functions define conformal maps (15) that preserve the angles between the coordinate lines. Because ∂z*∂z = |dw/dz|² ∂w*∂w, we obtain in w space a Helmholtz equation with the transformed refractive-index profile n′ that is related to the original one as

n′ = n / |dw/dz|.   (2)

Suppose that the medium is designed such that n(z) is the modulus of an analytic function g(z). The integral of g(z) defines a map w(z) to new coordinates where, according to Eq. 2, the transformed index n′ is unity. Consequently, in w coordinates, the wave propagation is indistinguishable from empty space where light rays propagate along straight lines. The medium performs an optical conformal mapping to empty space. If w(z) approaches z for w → ∞, all incident waves appear at infinity as if they have traveled through empty space, regardless of what has happened in the medium. However, as a consequence of the Riemann Mapping Theorem (15), nontrivial w coordinates occupy Riemann sheets with several ∞, one on each sheet.
Consider, for example, the simple map

w = z + a²/z,   (3)

illustrated in Fig. 1, that is realized by the refractive-index profile n = |1 − a²/z²|. The constant a characterizes the spatial extension of the medium. The function (3) maps the exterior of a circle of radius a on the z plane onto one Riemann sheet and the interior onto another. Light rays traveling on the exterior w sheet may have the misfortune of passing the branch cut between the two branch points ±2a. In continuing their propagation, the rays approach ∞ on the interior w sheet. Seen on the physical z plane, they cross the circle of radius a and approach the singularity of the refractive index at the origin. For general w(z), only one ∞ on the Riemann structure in w space corresponds to the true ∞ of physical z space and the others to singularities of w(z). Instead of traversing space, light rays may cross the branch cut to another Riemann sheet where they approach ∞. Seen in physical space, the rays are irresistibly attracted toward some singularities of the refractive index. Instead of becoming invisible, the medium casts a shadow that is as wide as the apparent size of the branch cut. Nevertheless, the optics on Riemann sheets turns out to serve as a powerful theoretical tool for developing the design of dielectric invisibility devices. Fig. 1. Optical conformal map. A dielectric medium conformally maps physical space described by the points z = x + iy of the complex plane onto Riemann sheets if the refractive-index profile is |dw/dz| with some analytic function w(z). The figure illustrates the simple map (3) where the exterior of a circle in the picture above is transformed into the upper sheet in the lower picture, and the interior of the circle is mapped onto the lower sheet. The curved coordinate grid of the upper picture is the inverse map z(w) of the w coordinates, approaching a straight rectangular grid at infinity. As a feature of conformal maps, the right angles between the coordinate lines are preserved. The circle line in the figure above corresponds to the branch cut between the sheets below indicated by the curly black line. The figure also illustrates the typical fates of light rays in such media. On the w sheets, rays propagate along straight lines. The rays shown in blue and green avoid the branch cut and hence the interior of the device. The ray shown in red crosses the cut and passes onto the lower sheet where it approaches ∞. However, this ∞ corresponds to a singularity of the refractive index and not to the ∞ of physical space. Rays like this one would be absorbed, unless they are guided back to the exterior sheet.
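[Editorial note: a short numerical sketch of the map (3) and its index profile may help make the geometry concrete; the grid, scale, and printed checks below are illustrative choices of mine, not values from the paper, apart from the branch points ±2a.]

```python
import numpy as np

a = 1.0  # length scale of the device (arbitrary illustrative choice)

def w(z):
    """The simple conformal map of Eq. 3: w = z + a**2 / z."""
    return z + a**2 / z

def n(z):
    """Refractive-index profile n = |dw/dz| = |1 - a**2 / z**2|."""
    return np.abs(1.0 - a**2 / z**2)

# The branch points sit where dw/dz = 0, i.e. z = +/- a in physical space,
# which the map sends to w = +/- 2a, the ends of the branch cut in the text.
print(w(np.array([a, -a])))          # -> [ 2. -2.]

# Far from the device the profile approaches 1 (empty space), so distant
# observers see waves as if nothing were there.
print(n(np.array([100.0 + 0.0j])))   # -> [~1.0]

# Sample n on a grid of the z plane, guarding against the singularity at z = 0.
x = np.linspace(-3.0, 3.0, 601)
y = np.linspace(-3.0, 3.0, 601)
Z = x[None, :] + 1j * y[:, None]
Z[np.abs(Z) < 1e-9] = 1e-9
N = n(Z)
print(N.max())  # the index blows up near the origin, as described in the text
```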
All we need to achieve is to guide light back from the interior to the exterior sheet, i.e., seen in physical space, from the exterior to the interior layer of the device. To find the required refractive-index profile, we interpret the Helmholtz equation in w space as the Schrödinger equation (1) of a quantum particle of effective mass k² moving in the potential U with energy E such that U − E = −n′²/2 (1). We wish to send all rays that have passed through the branch cut onto the interior sheet back to the cut at precisely the same location and in the same direction in which they entered. This implies that we need a potential for which all trajectories are closed. Assuming radial symmetry for U(w) around one branch point w1, say +2a in our example, only two potentials have this property: the harmonic oscillator and the Kepler potential (17). In both cases the trajectories are ellipses (17) that are related to each other by a transmutation of force according to the Arnol'd-Kasner theorem (18–20). The harmonic oscillator corresponds to a Luneburg lens (23) on the Riemann sheet with the transformed refractive-index profile

n′ = √(1 − |w − w1|²/r0²),   (4)

where r0 is a constant radius. The Kepler potential with negative energy E corresponds to an Eaton lens (23) with the profile

n′ = √(r0/|w − w1| − 1).   (5)

Note that the singularity of the Kepler profile in w space is compensated by the zero of |dw/dz| at a branch point in physical space such that the total refractive index (2) is never singular. In both cases (4) and (5), r0 defines the radius of the circle on the interior w sheet beyond which n′² would be negative and hence inaccessible to light propagation. This circle should be large enough to cover the branch cut. The inverse map z(w) turns the outside of the circle into the inside of a region bounded by the image z(w) of the circle line in w space. No light can enter this region. Everything inside is invisible. Yet there is one more complication: Light is refracted (1) at the boundary between the exterior and the interior layer. Seen in w space, light rays encounter here a transition from the refractive index 1 to n′. Fortunately, refraction is reversible. After the cycles on the interior sheets, light rays are refracted back to their original directions (Fig. 2). The invisibility is not affected, unless the rays are totally reflected. According to Snell's law (1), rays with angles of incidence θ with respect to the branch cut enter the lower sheet with angles θ′ such that n′ sin θ′ = sin θ. If n′ < 1, this equation may not have real solutions for θ larger than a critical angle Θ. Instead of entering the interior layer of the device, the light is totally reflected (1). The angle Θ defines the acceptance angle of the dielectric invisibility device, because beyond Θ, the device appears silvery instead of invisible. The transformed refractive-index profiles (4) and (5) at the boundary between the layers are lowest at the other branch point w2 that limits the branch cut, w2 = −2a in our example. In the case of the harmonic-oscillator profile (4), n′ always lies below 1, and we obtain the acceptance angle

Θ = arcsin √(1 − |w2 − w1|²/r0²).   (6)

For all-round invisibility, the radius r0 should approach infinity, which implies that the entire interior sheet is used for guiding the light back to the exterior layer. Fortunately, the Kepler profile (5) does not lead to total reflection if r0 ≥ 2|w2 − w1|. In this case, the invisible area is largest for

r0 = 2|w2 − w1|.   (7)

Figure 3 illustrates the light propagation in a dielectric invisibility device based on the simple map (3) and the Kepler profile (5) with r0 = 8a. Here n ranges from 0 to about 36, but this example is probably not the optimal choice. One can choose from infinitely many conformal maps w(z) that possess the required properties for achieving invisibility: w(z) ∼ z for z → ∞ and two branch points w1 and w2. The invisible region may be deformed to any simply connected domain by a conformal map that is the numerical solution of a Riemann-Hilbert problem (16). We can also relax the tacit assumption that w1 connects the exterior to only one interior sheet, but to m sheets where light rays return after m cycles. If we construct w(z) as af(z/a) with some analytic function f(z) of the required properties and a constant length scale a, the refractive-index profile |dw/dz| is identical for all scales a.
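[Editorial note: a quick arithmetic check of the numbers just quoted, using the profiles reconstructed above — my own check, not the paper's. With the branch points w1 = +2a and w2 = −2a of the map (3), |w2 − w1| = 4a, so the no-total-reflection condition r0 ≥ 2|w2 − w1| requires r0 ≥ 8a. At the marginal choice r0 = 8a used in Fig. 3, the Kepler profile at the far branch point takes the value n′(w2) = √(r0/|w2 − w1| − 1) = √(8a/4a − 1) = 1, so Snell's law n′ sin θ′ = sin θ remains solvable for every angle of incidence, i.e., no ray is totally reflected.]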
Finding the most practical design is an engineering problem that depends on practical demands. This problem may also inspire further mathematical research on conformal maps in order to find the optimal design and to extend our approach to three dimensions. Fig. 2. Light guiding. The device guides light that has entered its interior layer back to the exterior, represented here using two Riemann sheets that correspond to the two layers, seen from above. Light on the exterior sheet is shown in blue and light in the interior, in red. At the branch cut, the thick line between the two points in the figure (the branch points), light passes from the exterior to the interior sheet. Here light is refracted according to Snell's law. On the lower sheet, the refractive-index profile (5) guides the rays to the exterior sheet in elliptic orbits with one branch point as focal point. Finally, the rays are refracted back to their original directions and leave on the exterior sheet as if nothing has happened. The circle in the figure indicates the maximal elongations of the ellipses. This circle limits the region in the interior of the device that light does not enter. The outside of the circle corresponds to the inside of the device. Anything beyond this circle is invisible. Fig. 3. Ray propagation in the dielectric invisibility device. The light rays are shown in yellow. The brightness of the green background indicates the refractive-index profile taken from the simple map (3) and the Kepler profile (5) with r0 = 8a in the interior layer of the device. The invisible region is shown in black. The upper panel illustrates how light is refracted at the boundary between the two layers and guided around the invisible region, where it leaves the device as if nothing were there. In the lower panel, light simply flows around the interior layer. Finally, we ask why our scheme does not violate the mathematical theorem (3) that perfect invisibility is unattainable. The answer is that waves are not only refracted at the boundary between the exterior and the interior layer, but also are reflected, and that the device causes a time delay. However, the reflection can be substantially reduced by making the transition between the layers gradual over a length scale much larger than the wavelength 2π/k or by using anti-reflection coatings. In this way, the imperfections of invisibility can be made as small as the accuracy limit of geometrical optics (1), i.e., exponentially small. One can never completely hide from waves, but can from rays.
10.1: The Born-Oppenheimer Approximation

The Born-Oppenheimer approximation is one of the basic concepts underlying the description of the quantum states of molecules. This approximation makes it possible to separate the motion of the nuclei and the motion of the electrons. This is not a new idea for us. We already made use of this approximation in the particle-in-a-box model when we explained the electronic absorption spectra of cyanine dyes without considering the motion of the nuclei. Then we discussed the translational, rotational and vibrational motion of the nuclei without including the motion of the electrons. In this chapter we will examine more closely the significance and consequences of this important approximation. Note, in this discussion nuclear refers to the atomic nuclei as parts of molecules, not to the internal structure of the nucleus. The Born-Oppenheimer approximation neglects the motion of the atomic nuclei when describing the electrons in a molecule. The physical basis for the Born-Oppenheimer approximation is the fact that the mass of an atomic nucleus in a molecule is much larger than the mass of an electron (more than 1000 times). Because of this difference, the nuclei move much more slowly than the electrons. In addition, due to their opposite charges, there is a mutual attractive force acting on an atomic nucleus and an electron. This force causes both particles to be accelerated. Since the magnitude of the acceleration is inversely proportional to the mass, a = f/m, the acceleration of the electrons is large and the acceleration of the atomic nuclei is small; the difference is a factor of more than 1000. Consequently, the electrons are moving and responding to forces very quickly, and the nuclei are not. You can imagine running a 100-yard dash against someone whose acceleration is 1000 times greater than yours. That person could literally run circles around you. So a good approximation is to describe the electronic states of a molecule by thinking that the nuclei aren't moving, i.e. that they are stationary. The nuclei, however, can be stationary at different positions so the electronic wavefunction can depend on the positions of the nuclei even though their motion is neglected. Now we look at the mathematics to see what is done in solving the Schrödinger equation after making the Born-Oppenheimer approximation. For a diatomic molecule as an example, the Hamiltonian operator is grouped into three terms \[\hat {H} (r, R) = \hat {T}_{nuc} (R) + \dfrac {e^2}{4\pi \epsilon _0} \dfrac {Z_A Z_B}{R} + \hat {H} _{elec} (r,R) \label {10-1}\] where \[T_{nuc} (R) = -\dfrac {\hbar^2}{2m_A} \nabla ^2_A - \dfrac {\hbar ^2}{2m_B} \nabla ^2_B \label {10-2}\] \[\hat {H} _{elec} (r, R) = \dfrac {- \hbar ^2}{2m} \sum \limits _i \nabla ^2_i + \dfrac {e^2}{4 \pi \epsilon _0} \left ( -\sum \limits _i \dfrac {Z_A}{r_{Ai}} - \sum \limits _i \dfrac {Z_B}{r_{Bi}} + \dfrac {1}{2} \sum \limits _i \sum \limits _{j \ne i} \dfrac {1}{r_{ij}}\right ) \label {10-3}\] In Equation \ref{10-1}, the first term represents the kinetic energy of the nuclei, the second term represents the Coulomb repulsion of the two nuclei, and the third term represents the contribution to the energy from the electrons, which consists of their kinetic energy, mutual repulsion for each other, and attraction for the nuclei.
Bold-face type is used to represent that \(r\) and \(R\) are vectors specifying the positions of all the electrons and all the nuclei, respectively. Exercise \(\PageIndex{1}\) Define all the symbols in Equations \ref{10-1} through \ref{10-3}. Exercise \(\PageIndex{2}\) Explain why the factor of 1/2 appears in the last term in Equation \ref{10-3}. The Born-Oppenheimer approximation says that the nuclear kinetic energy terms in the complete Hamiltonian, Equation \ref{10-1}, can be neglected in solving for the electronic wavefunctions and energies. Consequently, the electronic wavefunction \(\varphi _e (r,R)\) is found as a solution to the electronic Schrödinger equation: \[\hat {H} _{elec} (r, R) \varphi _e (r, R) = E_e (R) \varphi _e (r, R) \label {10-4}\] Even though the nuclear kinetic energy terms are neglected, the Born-Oppenheimer approximation still takes into account the variation in the positions of the nuclei in determining the electronic energy, and the resulting electronic wavefunction depends upon the nuclear positions, \(R\). As a result of the Born-Oppenheimer approximation, the molecular wavefunction can be written as a product: \[\psi _{ne} (r, R) = X_{ne} (R) \varphi _e (r, R) \label {10-5}\] This product wavefunction is called the Born-Oppenheimer wavefunction. The function \(X_{ne} (R)\) is the vibrational wavefunction, which is a function of the nuclear coordinates \(R\) and depends upon both the vibrational and electronic quantum numbers or states, \(n\) and \(e\), respectively. The electronic function, \(\varphi _e (r, R) \), is a function of both the nuclear and electronic coordinates, but only depends upon the electronic quantum number or electronic state, \(e\). Translational and rotational motion is not included here. The translational and rotational wavefunctions simply multiply the vibrational and electronic functions in Equation \ref{10-5} to give the complete molecular wavefunction when the translational and rotational motions are not coupled to the vibrational and electronic motion. Crude Born-Oppenheimer Approximation In the Crude Born-Oppenheimer Approximation, \(R\) is set equal to \(R_0\), the equilibrium separation of the nuclei, and the electronic wavefunctions are taken to be the same for all positions of the nuclei. The electronic energy, \(E_e (R)\), in Equation \ref{10-4} combines with the repulsive Coulomb energy of the two nuclei to form the potential energy function that controls the nuclear motion, as shown in Figure \(\PageIndex{1}\). \[ V_e (R) = E_e (R) + \dfrac {e^2}{4\pi \epsilon _0} \dfrac {Z_A Z_B}{R} \label {10-6}\] Consequently the Schrödinger equation for the vibrational motion is \[ \left( \hat {T} _{nuc} (R) + V_e (R) \right) X_{ne} (R) = E_{ne} X_{ne} (R) \label {10-7}\] In Chapter 6, the potential energy was approximated as a harmonic potential depending on the displacement, \(Q\), of the nuclei from their equilibrium positions. Figure \(\PageIndex{1}\): The potential energy function for a diatomic molecule. In practice the electronic Schrödinger equation is solved using approximations at particular values of \(R\) to obtain the wavefunctions \(\varphi _e (r,R)\) and potential energies \(V_e (R)\). The potential energies can be graphed as illustrated in Figure \(\PageIndex{1}\). The graph in Figure \(\PageIndex{1}\) is the energy of a diatomic molecule as a function of internuclear separation, which serves as the potential energy function for the nuclei.
When \(R\) is very large there are two atoms that are weakly interacting. As \(R\) becomes smaller, the interaction becomes stronger, the energy becomes a large negative value, and we say a bond is formed between the atoms. At very small values of \(R\), the internuclear repulsion is very large so the energy is large and positive. This energy function controls the motion of the nuclei. Previously, we approximated this function by a harmonic potential to obtain the description of vibrational motion in terms of the harmonic oscillator model. Other approximate functional forms could be used as well, e.g. the Morse potential. The equilibrium position of the nuclei is where this function is a minimum, i.e. at \(R = R_0\). If we obtain the wavefunction at \(R = R_0\), and use this function for all values of \(R\), we have employed the Crude Born-Oppenheimer approximation. Exercise \(\PageIndex{3}\) Relate Equation \ref{10-7} to the one previously used in our description of molecular vibrations in terms of the harmonic oscillator model. In this section we started with the Schrödinger equation for a diatomic molecule and separated it into two equations, an electronic Schrödinger equation and a nuclear Schrödinger equation. In order to make the separation, we had to make an approximation. We had to neglect the effect of the nuclear kinetic energy on the electrons. The fact that this assumption works can be traced to the fact that the nuclear masses are much larger than the electron mass. We then used the solution of the electronic Schrödinger equation to provide the potential energy function for the nuclear motion. The solution to the nuclear Schrödinger equation provides the vibrational wavefunctions and energies. Exercise \(\PageIndex{4}\) Explain the difference between the Born-Oppenheimer approximation and the Crude Born-Oppenheimer approximation.
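As a sketch of how Equation \ref{10-7} is used in practice, the following solves the one-dimensional nuclear Schrödinger equation on a grid, with \(V_e(R)\) replaced by its harmonic approximation about the minimum. All parameters are illustrative placeholder values in atomic units, not constants taken from this chapter.

```python
import numpy as np

# Illustrative parameters in atomic units (hbar = 1); placeholder values,
# not numbers from the text.
mu = 1000.0     # reduced nuclear mass
k_f = 0.5       # harmonic force constant
R0 = 2.0        # equilibrium bond length

# Grid for the internuclear separation R.
R = np.linspace(R0 - 1.5, R0 + 1.5, 2000)
dR = R[1] - R[0]

# Harmonic approximation to V_e(R) about the minimum (Eq. 10-6's role here).
V = 0.5 * k_f * (R - R0) ** 2

# Finite-difference kinetic energy T_nuc = -(1/2 mu) d^2/dR^2, so the
# Hamiltonian of Eq. 10-7 becomes a tridiagonal matrix.
diag = np.full(R.size, 1.0 / (mu * dR**2)) + V
off = np.full(R.size - 1, -0.5 / (mu * dR**2))
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

# Eigenvalues E_ne and vibrational wavefunctions X_ne(R).
E, X = np.linalg.eigh(H)

# For a harmonic potential the exact levels are (v + 1/2) * omega.
omega = np.sqrt(k_f / mu)
print("numeric :", E[:3])
print("analytic:", [(v + 0.5) * omega for v in range(3)])
```

The lowest few numeric eigenvalues should reproduce the harmonic-oscillator ladder closely, which is exactly the correspondence Exercise \(\PageIndex{3}\) asks you to establish; swapping in a Morse potential for V illustrates the "other approximate functional forms" mentioned above.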
Posts Tagged ‘ontology’ Materia Prima 10 December 2012 The Beginning of the End So this is it for me and Bernard d’Espagnat’s On Physics and Philosophy. In the final chapter d’Espagnat allows himself to speculate on the philosophical and spiritual importance of his veiled reality (which he capitalizes) in particular, and the results of modern physics in general. The chapter is entitled “The Ground of Things.” It is in these concluding sections that d’Espagnat makes his final defence of a materia prima, a mind-independent reality, before the objections of both realists (who concentrate on empirical reality) and antirealists (who say mind is all). Some of those arguments say there’s no reality-in-itself, some say it exists but is inaccessible, and others say empirical reality is “reality.” Kant vs d’Espagnat D’Espagnat believes “the Real” is a mystery as it is (in his opinion) not accessible through discursive knowledge. He notes Immanuel Kant distinguished between phenomena and “reality-in-itself,” but disagrees with Kant that a mind-independent reality is just a boring “limiting concept” filled with “pure x.” Cassirer vs d’Espagnat Ernst Cassirer strongly objected to being content with a “mystery,” which he felt would be an unbearable block to scientific inquiry. D’Espagnat says when possible the search for clarity is admirable, but the true spirit of science is to follow where the facts lead it. The quantum entanglement shown in “Aspect-like experiments” (by Alain Aspect and others) is just part of our evolving scientific knowledge. Materialists vs Mystics Sometimes one should approach “mystery” the way mystics, poets, or composers have done (though more often in the past). Realists (materialists) have no reason to believe they hold all the keys to knowledge, even in principle. As for the antirealists (and instrumentalists), if they think reality is something we ourselves build up, then mystery can hardly be called an exceptional “illusion.” Affect vs Effect The “affective” element of human existence is an aspect that seems to circumvent our rationality. Kant felt the “affective mind” was not “ordered on concepts” and therefore could shine no light on Being. D’Espagnat is more sympathetic to Descartes. Thought leads to the self-evidence of existence (“I think, therefore I am”), but d’Espagnat says just as self-evident will be our “joys and pains.” We base our conjectures on what we know most intimately, and what could be closer to us than our “affective consciousness”? This too should be able to inform us of Being, perhaps in some circumstances even better than science can. Realism vs “the Real” We can take a very realist position and imagine that if mankind disappeared the stars would continue in their courses. This is an argument for a mind-independent reality—just not the one d’Espagnat has in mind. D’Espagnat says just because our present existence is usually most conveniently described in realist terms (such as conventional space and time) doesn’t mean the realist position is actually true. Even particle physicists who use the realist language of minuscule points and well-defined trajectories know that’s not what’s “really” going on. Radical Idealism vs “the Real” On the other hand radical idealism believes there is no reality outside the mind. In other words, there’s no mind-independent reality. D’Espagnat says his earlier arguments, either based on no miracles or intersubjective agreement (see chapter five), undermine idealism but not his veiled reality position.
Mathematical Realism vs “the Real” Whether it’s “Pythagorism” or “Mathematical Platonism” there’s a belief that mathematical developments are discovered not created. Again, this would be a mind-independent reality, but mathematically based. Physical reality is either grounded on a pre-existing mathematical reality or there’s some strong connection between the two. D’Espagnat reminds us that quantum formalism refers to observational predictions. It’s possible “the Real” is mathematically based, but quantum theory isn’t going to get you there. Brains in a Vat vs “the Real” D’Espagnat disagrees with Hilary Putnam’s thought experiment that places brains in vats. Connect electrodes to the brains and some supersmart being could send (in theory) images and other sensations directly to the brain. Putnam says a vat individual could not truthfully say, “We are brains in a vat.” That’s because his concept of a vat is based on an illusion. So there’s no connection between this particular version of a “ground of being” and our knowledge. D’Espagnat disagrees with the assumption that knowledge springs only from the senses. Also, Putnam’s imaginary statement refers to specific entities. D’Espagnat’s concept of “the Real” is “conceptually prior to any such description.” Self-Modification vs “the Real” Francisco Varela and collaborators proposed “enaction” theory. The brain’s main function is to modify its internal states rather than reflect the external world. External reality is neither a projection of our mind’s contents nor the source of those contents. There’s no need to imagine a “pre-given” reality. D’Espagnat faults Varela’s book for vague terminology. Does Varela mean “empirical reality” or “mind-independent reality” when he talks of “reality”? Is the “subjective” an individual’s subjectivity or intersubjectivity? D’Espagnat disagrees with Varela’s use of “secondary qualities” such as colour to make his arguments. Even Varela’s arguments about attention and perception fail to convince d’Espagnat. The mind may display selective attention but that’s a far cry from proving that mind and world somehow arise together. Structure vs “the Real” D’Espagnat says arguments against veiled reality will fail if they’re based on discursive (descriptive and rational) knowledge. In other words, arguments based on what structures we see or don’t see are irrelevant to “the Real” as “the Real” doesn’t have structures in the way we’re accustomed to think of them. Buddhism vs d’Espagnat D’Espagnat notes Varela’s frequent references to Buddhism. Buddhism speaks of “sunyata” or “emptiness” in rejecting objects in the world as intrinsically existing in the way we perceive them. Furthermore, our living “selves” have no absolute existence as individuals. D’Espagnat hopes his veiled reality viewpoint will interest Buddhists, especially as there’s a pretty thick veil between consciousness and “the Real.” Heisenberg vs “the Real” D’Espagnat rejects Werner Heisenberg’s (posthumously published) view that empirical reality is a product of our human-made knowledge. Heisenberg felt there were various “regions of reality” such that our knowledge of biology, for instance, wasn’t entirely dependent on our knowledge of physics. Heisenberg did think there might be something that’s “truly real,” vaguely reflected upon human consciousness. However, he felt this level of reality would still be situated within ordinary space and time.
It’s on that count that d’Espagnat rejects Heisenberg’s arguments as irrelevant, since “the Real” is not located in space and time. Pro vs Con In the end d’Espagnat finds arguments both for and against a “ground of things” dubious. You can argue against a “ground of things” but only in the sense of a “pregiven,” describable “world-per-se.” D’Espagnat finds the “pro” arguments based on “commonsense” or a pre-existing mathematical reality also unconvincing. D’Espagnat believes a “more fact-based reasoning” is called for. Universality vs Events D’Espagnat says over the past half-century interest in chaos and complexity led some scientists to demote scientific laws and promote the role of the “event,” previously seen as more or less accidental. He says he argued against rejecting the “universal” in a 1990 book. He’s more ambivalent about the emphasis on “events,” which he says takes place within empirical reality. That reflects the way we’re “apprehending the Real” but doesn’t mean that’s what “the Real” is all about. For instance, we don’t see objects as nonseparable, but that’s what quantum theory tells us. D’Espagnat says Edgar Morin and others in this school of thought have somewhat retreated from their emphasis on events, complexity, and disorder. Morin acknowledges that “Aspect-type” experimental results have shown some limitations in his approach. Nominalism vs “the Real” D’Espagnat is unimpressed with the revival of nominalism among “cultivated, literary, avant-garde people.” It’s a belief system promoted in the Middle Ages by William of Ockham and others. Nominalists nowadays reject the universal while applauding individual initiative, which they feel is a product of individual knowledge. The problem is that nominalism is an all-encompassing philosophy, referring to all things, not just living beings. The discrete atoms of classical physics have given way to “collective modes of existence.” And again such arguments apply to empirical reality, not “the Real.” D’Espagnat vs the “Enlightenment” D’Espagnat believes many sophisticated members of society are still enthralled by outmoded ideas of the “Enlightenment” (d’Espagnat’s quotes). D’Espagnat acknowledges that research on chaos and events may eventually back nominalism.
Modern “Sages” vs “the Real” In “developed” societies there are “sages” who take rather contradictory views. They say there is a reality independent of us. But they also say it’s “obvious” we rely on our perceptions to gain access to that world. So they conclude it is illogical to speak of an “unreachable” reality. We should make only statements relying on sense data or tautologies (statements that are always logically true). However, d’Espagnat continues to oppose the view that our perceptions necessarily reflect reality as it really is. Our modern “sages” try to combine realism and positivism, converting “reality-per-se” into “observation-per-se.” But there is no “observation-per-se” as observations involve human intention and selection. The Describable vs “the Real” If we reject the materialists’ rejection of “the Real,” does that force us into the camp of the radical idealists? D’Espagnat says we shouldn’t confuse “the Real” and “the describable.” First, existence takes precedence over knowledge. Secondly, there is something that says “no” to any arbitrary constructions of reality. Third, it’s hard to imagine an “a priori” that evolves. And fourthly, there are universal laws that make predictions, and it’s hard to envision how laws could do so unless you believe in miracles. Even Michel Bitbol and Hervé Zwirn have not entirely rejected the concept of “the Real” even as they critique it. D’Espagnat says thinkers should avoid pushing deductive reasoning into areas where it may not strictly apply. As a sidenote, d’Espagnat says classical instrumentalism believes a concept’s meaning and “reference,” the collection of data about the concept, are the same. Even if you replace “data” with “prediction” it’s not a universal position as predictions require a predictor. And that predictor is some being who’s doing the predicting. Laws vs “the Real” Bitbol and Zwirn may move a bit toward Platonism when they acknowledge something may constrain us that is not entirely attributable to us. However, they believe this “something” is totally inaccessible. D’Espagnat disagrees, and thinks Plato would disagree too. “The Real” must have some influence on empirical reality’s structure as Maxwell’s laws (for instance) are obeyed by phenomena. D’Espagnat’s “extended causality” links not instances of phenomena but rather phenomena and “the Real.” These structural “extended causes” move beyond Kantian causality and recall Plato’s Ideas. Structures vs Hints of Structures D’Espagnat says “the Real” is prior to mind-matter splitting, so the mind may detect hints of the mind’s source, which is “the Real.” That veiled reality is not the same as the underlying reality described by structural realism. D’Espagnat says mind-independent reality is not the source of our physical laws. At best these laws are distortions of the “great structures” of “the Real.” At worst they’re just very obscure “traces.” In the end “the Real” isn’t describable, indescribable, or party describable. The first two options imply a total presence or lack of description, and the third option implies “the Real” has parts, which isn’t the case, says d’Espagnat. Conceptualization vs Meaning If “the Real” can’t be conceptualized can it have any “meaning”? D’Espagnat cites Zwirn’s argument imagining a creature as far ahead of humans as humans are ahead of dogs or monkeys. We can conceptualize things that dogs or monkeys can’t, so surely a superhuman being could conceptualize things we can’t. 
D’Espagnat believes that poets can allude to things that we somehow know exist even if these concepts can’t be made explicit. Plato’s Cave vs “the Real” At first glance Plato’s Cave approximates d’Espagnat’s view of veiled reality. It suggests the emergence of (shadowy) empirical reality (seen in the cave) from “the Real” (the porters who place their Platonic Ideas in front of the light). However, the fable doesn’t deal with how consciousness (the prisoners) would have emerged from “the Real.” Furthermore, “the Real” cannot be separated into parts (while the porters hold separate objects). We cannot conceptualize “the Real” yet Plato conceptualized his Ideas. Finally, even without prisoners there’d still be the shadows, while in d’Espagnat’s system phenomena would exist only in relation to consciousness. Traditional Thought vs “the Real” D’Espagnat warns against a syncretism of old cultural elements and new philosophical points, but he wonders if “the Real” has any bearing on traditional systems. Religions speak of an “immortality,” which suggests some absolute time that physics no longer can support. However, perhaps the other term “eternity” suggests escaping this illusory time. And perhaps there is a “continuous creation” of Being in a process independent of time. Heisenberg vs d’Espagnat Heisenberg, says d’Espagnat, doubted thought could illuminate deep matters as (according to Heisenberg) thought returns to its source. But d’Espagnat notes that new science has allowed us to move past old science’s viewpoints, such as materialism. So thought has been able to illuminate some deep matters. Aristotle vs d’Espagnat D’Espagnat sees similarities between his view of causality and Aristotle’s. Aristotle was a realist and was concerned with causality not just in the realm of phenomena but in “reality-in-itself.” Furthermore, Aristotle was not beholden to the idea that causes precede effects. Instead there could be “final causes” to which things might tend under the influence of Aristotle’s God. As d’Espagnat’s veiled reality is beyond time, “the Real” could impart such a “final cause” on empirical reality. Also, Aristotle’s interest in causation beyond mere phenomena reminds d’Espagnat of his own interest in causation between “the Real” and empirical reality. Aristotle distinguished between “power” and “act” while Newton supposedly saw just “act.” Aristotle saw matter as the seat of a vague potentiality. Materia prima is pure potentiality. “Informed matter” exists on more and more complex levels. Simple beings can be the “matter” for more complex beings. These complex beings in this process are more “real” as their potentiality is expressed. Therefore the deep meaning of reality lies not in the tiny components of complex beings, but rather the meaning is the complex beings themselves. In a similar fashion, in empirical reality the wave functions have an “epistemological reality” at a lower level than, say, macroscopic objects in the wake of decoherence. Although Heisenberg did not cite decoherence he did ponder the possible role of wave functions as a “materia prima.” Abner Shimony went on exploring this issue. However, they’ve both admitted it’s hard to formulate these ideas precisely. Plato vs d’Espagnat As for Plato, d’Espagnat reminds us of his earlier concerns about Plato’s Cave. However, for Plato the deeper meaning was not in the things themselves. They didn’t reside just in “us” either. He wasn’t a radical idealist.
Platonic Ideas (and his concept of the “Good”) bear resemblance to “the Real.” However, Platonic Ideas are conceptualizable while “the Real” is not. Many scientists believe, still, that analyzing more and more sense data will get us closer to the deeper meaning of reality. However, advances in science have relied on a “rapprochement” between science and a philosophical position (Platonism) that questions such a program. D’Espagnat notes that “Platonism” is a term nowadays often interpreted as “Pythagorism” with real mathematical objects. D’Espagnat does not agree with “Pythagorism,” but notes that there’s some relationship between it and Platonism. Even veiled reality has a smidgeon of Pythagorism in it as empirical reality’s objects are somehow a dim reflection of “the Real.” Einstein vs d’Espagnat Albert Einstein appears to have believed “the Real” could in principle be apprehended in its details, even if in practice that was rarely possible. However, the goal remained to explore this deeper world by discovering universal laws. Einstein also believed in three levels of religious experience. The first was based on fear, the second morals, and the third transcends ordinary human views of God. At this third level, Einstein thought, a sublime order is reflected in nature and in thought. Even scientific materialists no longer believe the common materialism that the mass media disseminates. However, there have also been developments that make us question some of Einstein’s philosophical positions. D’Espagnat sees some compatibility between his views and Einstein’s even if Pythagorism doesn’t have to be entirely correct. “The Real” does not have to be totally intelligible. The human mind may tend toward the structures and qualities of “the Real” in the sense that Max Planck had a strong affective experience in his theoretical work. It’s not necessary that mathematics reveals everything about “the Real.” Rather, as long as we have some concept of “the Real” that we can tend to, the structures and qualities of the mind may be drawn to it even as it never fully understands it due to the mind’s limitations. The Spiritual vs the Scientific Maybe this idea is closer to Einstein’s third-level religious experience than to a completely knowable “Real.” The human mind tends toward quest and exploration, though it is never able to fully accomplish what it desires. Einstein was still grounded in physical materialism. Later developments in physics have shown us something more human-oriented. We can’t limit Being to just material components. The mind may somehow “recall” aspects of Being as consciousness is not just a product of matter. Archetypes of some of our feelings may lie with “the Real.” There’s no way to prove this, or disprove this. But crucially we can no longer see science as an impediment to the “spiritual impetus that moves mankind,” an impetus, according to Einstein, that makes us desire to live “the whole of what is.” And it is an impetus that possesses both unity and meaning. Making an Appearance 8 December 2012 Mind the Details Bernard d’Espagnat delves into finer and finer distinctions between his veiled reality position and similar (though not identical) views. The eighteenth chapter of his On Physics and Philosophy is entitled “Objects and Philosophy,” and there’s only one chapter to go after this. Philosophers vs Consciousness Researchers D’Espagnat says he takes mostly a philosophical approach in this book.
Philosophers question the basis of our reality while consciousness researchers (such as neurologists) take physical realism as a given (whether they’re conscious of this or not). Mind vs Reality Radical idealists, who think mind is “primeval,” may wonder about the relationship between mind and “basic reality.” Supporters of d’Espagnat’s “veiled reality” or “open realism” approach are even more motivated to investigate. Truth vs Reality A physical realist can say that a true statement is “adequate to what reality really is.” This is the “similitude theory” of truth. Reality vs Representations But if we don’t have access to reality as it “really is” then we might say we have access only to “human representations” of “the Real.” Instead of worrying about whether statements are true to reality you might worry more about the verifiability of statements. Knowable vs Unknowable Reality Another problem with the “similitude” approach is that quantum mechanics, the best model of the world we have, fundamentally deals with observational probabilities not plain and simple facts. Even resorting to a Broglie-Bohm approach doesn’t help as “hidden variables” will be inaccessible to the observer even in principle. Idealism vs Veiled Reality A radical idealist or Kantian rejects the similitude approach anyway. A supporter of the veiled reality approach has to take a somewhat nuanced tack. Very broad statements about physical constants or “existences prior to knowledge” may hint at “the Real” without claiming to say anything directly about “the Real” as it “really is.” Appearances vs Veiled Reality If we’re not supposed to trust in “appearances” then what is reality really like? We might think that “the Real” is just an updated version of “appearances.” Or maybe mind-independent reality is so independent that it’s entirely inaccessible. D’Espagnat says both approaches are too extreme. Causal Links vs Predictive Laws We like our ordinary, everyday version of “realism” because it lets us imagine particular cause-and-effect relationships. It’s easier to explain things when we can point to particular causes rather than just patterns of observational predictions. D’Espagnat says some causal links are genuine and independent of us, but our interpretation of these links is very much our own. For instance, causality is closely related in our minds to the notion of “will,” which entails a very anthropomorphic (human-centred) view of reality. Intersubjective Agreement vs Appearances But what if a group of humans (and maybe even non-humans!) agree on certain observations? D’Espagnat says that this agreement combined with rules of observational prediction means this is our “reality.” Saying they’re just “appearances” is misleading. It’s a kind of “reality.” However, modern physics reminds us that humans tend to “reify” (think of the world as a set of objects). So we still have to keep in mind that empirical reality is not the same as “the Real.” Empirical Reality vs Mind-Independent Reality Although d’Espagnat is comfortable with the term “reality” to describe our empirical reality, he says we have to remember these are two “orders” or “levels” of reality. Empirical reality isn’t just a mere variant on “the Real.” Identity Theory vs Efflorescence Theory In some of the more nuanced sections of the chapter d’Espagnat makes a distinction between identity theory and efflorescence theory. Identity theory states that a genuine sensation or awareness (perhaps even thought in general) is traceable to neurons or their components.
The material aspect of these neurons is the ultimate cause of our sensations. Efflorescence theory attributes sensations and awareness to “neuronal activity” rather than the material aspects of neurons or their components. Strong vs Weak Completeness D’Espagnat’s main line of attack against identity theory is the completeness principle. In its strong version, quantum mechanics is assumed to be able to describe anything at all. In its weak version, if any theory can describe something then quantum mechanics can do so as well. This leaves open the concept of hidden variables. Since quantum mechanics is antirealist it’s hard to imagine how the strong completeness principle is compatible with identity theory. Even if you take the weak version of the completeness principle all you can conclude is that the identity theory may be true—but we can never show it to be so. But what if you reject the completeness principle entirely? If you used the Broglie-Bohm model you’d still have to deal with an entangled wave function, so sensations can’t be attributed just to some limited coordinates of a particular neuron. Or you can take the Roger Penrose approach by adding nonlinear terms to the Schrödinger equation. D’Espagnat says that approach may work, but he finds it too ad hoc. It’s also work still at an early stage, yet to face the scrutiny a full theory would need to endure. Brain vs Neuron States Now, efflorescence theory relies on neuronal activity not the material aspects of neurons to explain sensation, awareness, and (perhaps) thought itself. But neurologists believe brain states not neuronal states are what drive awareness. You can’t pinpoint a particular neuron or group of neurons that are responsible. It’s the collective action spread across the brain that is associated with awareness. D’Espagnat notes the parallel to quantum entanglement. Protomentality vs Mentality Alfred North Whitehead and other thinkers in the past have wondered whether simple organisms or even inorganic entities can have awareness. Abner Shimony’s “potentiality” might satisfy some objections to this concept of protomentality. Various entities have the potentiality of consciousness, but this potentiality isn’t actualized unless a nervous system is present. Consciousness vs Components of Consciousness As a final objection to the efflorescence theory, d’Espagnat says that any component we cite will be part of our empirical reality. Empirical reality depends on our consciousness. Therefore how can something that depends on our consciousness be the cause of our consciousness? D’Espagnat vs The “Received” View The “received” view that thought is produced by matter is, according to d’Espagnat, “slightly useful” as a model but must be rejected as a plausible philosophical stance. Relative Quantum States vs Relative Consciousness Because the observer decides what to measure and how, quantum states are “relative” to these procedures. However, some quantum rules may be considered “in isolation.” They’re not predictive observational rules and hence don’t involve probability. They’re more like descriptions. However, to understand the quantum world you have to consider all quantum rules not just pick and choose the non-probabilistic ones. D’Espagnat says states of consciousness are somewhat similar. Definite vs Indefinite States of Consciousness Imagine a sealed-off laboratory. Paul makes a measurement. His state of consciousness is definite but Peter doesn’t know that until Paul, say, phones him with the measurement.
This is a version of Wigner’s friend, and can be extended over and over again, with an observer outside a sealed room, which contains an observer outside a sealed room, etc. Peter thinks Paul’s state of consciousness is not just unknown (before the phone call) but also undefined. It’s a superposition of possible results (pointer values, for instance). Yet once Paul makes the measurement, Paul’s state of consciousness is definite from Paul’s point of view. Consciousness vs The Absolute This apparent conflict doesn’t change the fact that physics is all about predicting observations, says d’Espagnat. However, there’s a related issue. We shouldn’t think that “predictive states of consciousness” are like some Absolute or can even be a substitute for the Absolute. Quantum states are relative, and so are states of consciousness. More precisely, states of consciousness that are predictive are relative. Physical vs Mental So we see some sort of “solidarity” between the physical and the mental, but that doesn’t mean the mental can be reduced to the physical. Wigner’s Friends vs Ultimate Reality The series of “Wigner’s friends” who occupy increasingly large rooms is suggestive of an ultimate reality that we cannot gain access to. Wigner’s friends don’t have access to the overall wave function. Predictive vs Non-Predictive Consciousness However, nothing prevents us from pondering non-predictive states of consciousness. When Paul makes the observation, his state of consciousness becomes well-defined. It’s no longer predictive. Veiled Reality vs Co-Emergence Michel Bitbol, Hervé Zwirn, and other authors speak of thought and empirical reality “co-emerging” at the same time. It’s a “self-qualifying” process by which structure emerges from an initial and total lack of structure. D’Espagnat says his veiled reality viewpoint has an “ultimate ground” endowed with general structures even if they are “far from being knowable.” This ultimate ground may form the basis for not just scientific laws but also creative and mystical endeavours. Emergence vs Non-Emergence So, according to d’Espagnat, structures emerge but don’t co-emerge. They pre-exist. Co-emergence serves merely to connect consciousness and empirical (not ultimate) reality. D’Espagnat acknowledges that in the past he has talked of consciousness and empirical reality existing “in virtue of one another.” This does not mean that empirical reality emerged from consciousness. Furthermore, these words are meant to be evocative rather than a precise philosophical statement. He reiterates the impossibility of appearances, which depend on consciousness, somehow creating consciousness. Indexed vs Non-Indexed States of Consciousness Adopting Bitbol’s terminology, d’Espagnat says some beings may possess non-indexed states of consciousness. That means these states of consciousness are not relative to any particular experimental setup. However, these states of consciousness must therefore be non-predictive. Microscopic vs Macroscopic An idealized miniature version of a being would be too small to interact with the environment to become predictive. In the intermediate state between microscopic and macroscopic, such beings could accurately predict one class of observations but would wrongly predict another class of observations. For macroscopic beings that first class of observations would still be correctly predictable but the second class of observations would be essentially impossible. 
These practically feasible observations are conveniently describable in realist language, while the practically impossible observations are not. So if we want to talk about co-emergence, then we should imagine the co-emergence of “public and predictive” states of consciousness and empirical, physical reality. This co-emergence is constrained by the class of observations that macroscopic beings can perform. Co-emergence draws from a mind-independent reality that presumably, according to d’Espagnat, is beyond intersubjective description. And returning to the idea of potentiality, d’Espagnat says that in moving from the microscopic to the macroscopic the “ontological potentiality” of consciousness becomes empirical actuality. “The Real” is not in itself thought, but can give rise to thought.

One World vs Many Egos

There appears to be one universe but many minds. Radical idealists have trouble accounting for this situation. Schrödinger calls this the “arithmetical paradox” and proposed two solutions. There’s “Leibniz’ fearful doctrine of monads,” and there’s the belief that the multiplicity is only apparent. Schrödinger preferred the second approach, akin to the Upanishads, which states there is unity behind the illusion.

Veiled Reality vs Radical Idealism

The multiple-room experimental setup showed that predictive states of consciousness are relative. It’s hard to see how all those observers could be part of just one mind. However, perhaps various observers are making mutually compatible observations, calculable using the general Born rule (see the sketch after this post). This is the same as one observer making simultaneous measurements. This sounds compatible with Schrödinger’s viewpoint. However, that doesn’t solve the problem of the observer in that sealed-off inner room. It also doesn’t take decoherence into account. On the other hand, this decoherence also hides any theoretical possibility of discovering contradictions between multiple minds and the quantum structure of physical laws. D’Espagnat thinks more work needs to be done on this issue.

Traces of the Real
18 November 2012
Traces of Reality
The Process of Elimination

Form vs Content

Then he moves on to more substantive issues.

Veiled Reality vs Dualism

Bitbol suspects “Veiled Reality” is dualistic.

“Veiled Reality” vs Veiled Reality

Objectivist Language vs Objectivist Philosophy

He says Bitbol eventually realized this about d’Espagnat’s position.

Broglie-Bohm vs Dualism

Hence Broglie-Bohm isn’t a fully classical dualism.

“A Priori” Dualism vs Observed Dualism

D’Espagnat says he’s made no secret of that. Bitbol calls these factors “ampliative” criteria.

Knowledge Of vs Knowledge About

But wouldn’t that make this supposedly independent reality an empirical reality?

To Sketch vs Not to Sketch

He needs to present an actual argument. D’Espagnat says that to talk about sketching is misleading. Nonetheless d’Espagnat acknowledges that it’s not just a process of elimination.

Reflected Reality vs Reflected Thought

But that doesn’t mean you’ve proved your case.

Evidence vs Other Factors

D’Espagnat thinks the analogy from group theory is inexact.

Nonseparability vs Unity

The transcendent may not be so intelligible.

Critic vs Critiqued

D’Espagnat turns from being critiqued to critiquing Bitbol and Zwirn. And it’s a conjecture that can’t be proved false. For now, Bitbol’s conjecture doesn’t do that. D’Espagnat believes Zwirn commits some minor errors in summarizing d’Espagnat’s approach.
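On the point above about mutually compatible observations calculable from the Born rule: here is a small NumPy sketch (my illustration, not d’Espagnat’s) in which two observers measure different halves of an entangled pair. Each observer’s statistics are unaffected by what the other chooses to measure, which is the formal sense in which their accounts stay compatible:

```python
import numpy as np

# A shared entangled pair (the singlet); Paul measures qubit 1, Peter qubit 2.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

def projector(theta, outcome):
    """Projector for a spin measurement at angle theta with result +1 or -1."""
    v = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    if outcome == -1:
        v = np.array([-np.sin(theta / 2), np.cos(theta / 2)])
    return np.outer(v, v)

def joint_prob(a, b, theta_a, theta_b):
    """Born rule for the joint outcome (a, b) of the two measurements."""
    P = np.kron(projector(theta_a, a), projector(theta_b, b))
    return float(singlet @ P @ singlet)

# Paul's probability of +1 is 0.5 no matter which angle Peter picks:
for theta_b in (0.0, 1.0, 2.0):
    p_paul = sum(joint_prob(+1, b, 0.3, theta_b) for b in (+1, -1))
    print(round(p_paul, 6))
```

The joint distribution itself is what a single observer would compute for simultaneous measurements, matching the passage above.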
It’s Not All in Your Head
11 November 2012
Not Just in Your Head
Veiled Threads

Another chapter down, three more to go in Bernard d’Espagnat’s On Physics and Philosophy. Chapter sixteen, “Mind and Things,” is (relatively) straightforward. Having spent much of the book undermining physical realism and its kin, he focuses next on the excesses of empiricism and idealism. Much less combative in this chapter, d’Espagnat seems sympathetic to many of the approaches he describes. Sympathetic, yes. In total agreement, no. Ultimately, he’s laying the groundwork for exploring his “Veiled Reality” in some detail as the book draws to a close in the chapters to follow.

Empiricism vs Metaphysics

Empiricism’s guiding principle: all of our knowledge comes from our senses. It started by discarding metaphysical sources of knowledge. It then emphasized the role of “elementary sensations.”

Experience vs Reality

Early empiricists seem to have believed primary qualities were real aspects of real objects. Even if our knowledge can’t exceed our experiences, if properly used our experiences produce a good picture of reality. A large number of modern-day scientists hold this view, which is a kind of “physical realism.”

Empiricism vs Knowledge

Initially the Vienna Circle epistemologists attacked Kantian views more energetically than scientific realism. Nowadays, though, logical positivism is understood in an antirealist sense. But this creates a problem. If the empirical connection between experience and reality is questioned, then are we back to Kant or the neo-Kantians? D’Espagnat thinks that this quandary sank the logical positivist agenda, but contemporary physics can still learn from some of their ideas.

Knowledge vs Phenomena

“Phenomenalism” has various definitions. One version states that “knowledge is strictly limited to the (physical and mental) phenomena,” says d’Espagnat. Phenomena are just the objects of our (unanalyzed) perceptions or introspection. So phenomenalism is at best consistent with “open realism.” This is the “weak” version of realism that d’Espagnat favours.

Phenomenalists vs Physical Objects

Unfortunately phenomenalists are often vague about the reality of physical objects. The Vienna Circle positivists were suspicious of counterfactuals: “If you put this lump of sugar in water then it would dissolve.” It’s hard to assign properties such as “soluble” to an object without using counterfactuals.

Phenomenalists vs Paths of Knowledge

Another problem was pointed out by Bertrand Russell. Unless a solipsist, a phenomenalist accepts the existence of other observers and their assertions. But then why not accept the existence of sound waves, since they convey messages? But then you’re getting back into physical realism.

Private Sensations vs Public Science

It’s hard to get around this problem since we have no direct access to other people’s sensations. They’re private. But science relies on communicating knowledge. That’s public.

Object vs Method

A third problem: We describe objects by describing how to get sense data about them. But then as objects get smaller and smaller the description gets longer and longer. How do you describe an electron as a “construct”? You could describe a cloud chamber, how it has to be prepared, and then the probability that the set-up will produce the hoped-for observation.

Stability vs Instability

But that gets at a problem when you move from phenomenalism to contemporary physics. To a phenomenalist, an object of knowledge is a stable pattern of perception.
In quantum physics there are probabilities derived from state vectors, not some “inherent stability” of perception.

Classical Instruments vs Quantum Systems

To address Russell’s problem we can assume a measuring instrument is classical. That way various observers can agree on a measurement. Furthermore, our observations agree with the rules of quantum physics, so our sensations aren’t entirely private. It’s a kind of “mutual agreement” between our perceptions and quantum rules. However, d’Espagnat notes this solution is a “hybrid” one. And, he notes, in chapter eight he looked at the problems with saying an experimental apparatus is classical.

Operationalism vs Phenomenalism

D’Espagnat likes operationalism, a modified version of phenomenalism. It deals with how to make observations, and quantum rules for predicting observations are unquestionably accurate.

Conventional vs Radical Operationalism

There’s a conventional version of operationalism that’s “moderate.” It talks as if there are real properties that are measured. But as seen in chapter seven this leads to ambiguity. “Radical” operationalism is more content to just describe measurements. However, that’s hard to do without specifying the objects being measured.

Intrinsic vs Convenient Elements

“Radical” operationalism may consider some perceived forms to be “elements” of empirical reality connected by empirical laws. But unlike traditional operationalism, it doesn’t consider these forms to have intrinsic significance. Perceived forms aren’t the “constitutive” bricks of anything. The radical operationalist is prepared to discard one set of predictive rules for another if they work better. And if two sets of rules produce the same predictions then we should accept both.

Deductive vs Inductive Logic

However, radical operationalism relies heavily on induction. If a rule worked in the past then it must work in the future. That’s not strictly logical.

Rules vs Explanations

Another problem is that people have trouble seeing these rules as a “genuine explanation.” So do we need the notion of “cause” beyond the realm of just phenomena?

Phenomenal vs Transcendental Causation

Kantians say that causality is inherent in our a priori understanding, not in the objects themselves. Abner Shimony notes there are many kinds of causality. He feels this diversity undermines the universal application of this Kantian claim. Also, cognitive science has blurred the distinction between the phenomenal and transcendental selves. Therefore categories of understanding can hardly be limited to just the phenomenal self.

Phenomena vs the Mind

Furthermore, in the past century or two, mathematics and physics have undermined the belief that our understanding of phenomena reflected the ordering principles of the mind. Therefore Shimony doesn’t believe that only phenomena can be the “causes” of other phenomena.

Shimony’s Causation vs d’Espagnat’s Laws

D’Espagnat focuses on predictive laws in constructing his concept of a “Veiled Reality.” Shimony puts causality at the root of his ontology. True, the reliability of predictive laws must have some kind of “cause.” But d’Espagnat still says he and Shimony have very different views.

Transcendental Uniqueness vs Structure

A basically Kantian approach says a “transcendental object” is the “purely intelligible cause” of various phenomena. Kant believes that objects exist “per se”—but only in experience. However, he felt there must be a “cause” of these representations.
The cause will be totally unknown to us, but he still gave it a name: the “transcendental object.” This unknowable cause is singular. The phenomena it produces are plural. Therefore the “transcendental object” is unique. D’Espagnat already acknowledged (in chapter ten) similarities between Kant’s transcendental object and his own views of extended causality and “ground Reality.” However, d’Espagnat is willing to accept “some sorts of structures” that end up “implying” our scientific laws. That structure and that connection to our laws will still be “undecipherable.”

Individual Mind vs Mind in General

Operationalism avoids making ontological statements. However, someone has to set up and run the experiment, and someone has to observe the results. So presumably either an individual “mind” or a “mind in general” exists.

Objective Laws vs Mentalist Consciousness

Jean Petitot says we can be “objective” about the laws of phenomena even if we can’t see what’s behind the phenomena. Galilean space and time are “mental” concepts. But these mental forms let us construct the lawlike rules of phenomena, which become “desubjectivized.” So “mentalist” or “cognitive” is what’s unique to each person’s consciousness. Things lying in space are therefore neither ontological nor mentalist.

Objectivity vs Ontology

Petitot says that space and time are the crucial notions in classical physics. In quantum physics the crucial notion would be probability amplitudes. But space and time seem independent of us, while probability amplitudes are very much connected to an observer. D’Espagnat says it’s not a crucial distinction, as Petitot separates objectivity from ontology. Petitot’s approach is what d’Espagnat calls a “weak objectivity” or “intersubjectivity.” Different observers will get the same measurements under the same conditions.

Assumptions vs Justifications

But d’Espagnat tackles Petitot’s approach on two fronts. The first objection is that stating a rule and justifying a rule are two different things. Stating that “reality-per-se” is unobservable creates some interesting consequences. But the statement is basically an axiom. Galilean physics can still be explained by the “reality of the accidents.” The big challenge to realism was quantum physics, not Petitot’s or anyone else’s transcendental claims.

Transcendental vs Individual Subject

The second objection is that transcendentalists create a contradiction when they limit “mentalist” and “cognitive” to individual minds. Petitot and Kant both believe a transcendental subject is impersonal. It supposedly conveys a priori sensibility and categories of understanding not limited to an individual. Kant said his transcendentalism differed from Berkeley’s idealism. However, Kant’s “empirical realism” is still only empirical. Objects of experience exist in experience. But experience requires one or more subjects to have that experience. D’Espagnat says a “transcendental subject” can’t eliminate the role of knowledge in our experiences. And knowledge depends on cognition. That knowledge is then communicated intersubjectively, taking it out of the “private realm.”

Plato vs Galileo

Galileo stressed the mathematical structure of natural laws. Some people take this as evidence he was a “Platonist.” Alexandre Koyré said Galilean science started from this belief: Reason and geometry are enough to acquire “intelligence of the real.” But Galileo took considerable pains to investigate phenomena.
For him to be a Platonist you’d have to equate what’s “empirically real” with a kind of Platonic idealism. However, Plato’s cave suggests our pursuit of phenomena will get us only as close as some shadow of the “Real.”

Senses vs Innate Knowledge

Also ambiguous is the notion of what is “innate.” Both Descartes and Saint Augustine believed we could gain knowledge without use of the senses. But empiricists believe “reality-per-se” is inaccessible. So how could we ever experience an independent reality?

Empiricism vs Innatism

If we consider Kantian space, time, and causality then these notions must be innate. Furthermore, Kant’s categories of understanding are a priori, so they too are “innate.” But in our “semi-intuitive” world-view we follow sensory evidence as closely as possible, yet interpretation still guides us.

Quantum Mechanics vs Innatism

Quantum mechanics is “weakly objective,” hence “antirealist.” Its view of knowledge is somewhat Kantian, but with a strong dose of operationalism added. This operationalism prevents quantum mechanics from getting too close to Descartes’ innatism. Furthermore, the simplicity of quantum rules leads us to infer (unprovably yet irrefutably) a simplicity in the “Real.” D’Espagnat says this approaches Nicolas Malebranche’s “vision in God.”

Empiricism vs Conventionalism

D’Espagnat says that, reading between the lines, one can see evidence of Henri Poincaré’s “ontological” stance. Kant believed the axioms of geometry were a priori. The discovery of non-Euclidean geometries refuted such an idea. Poincaré saw the axioms as neither a priori nor as experimental data. They are conventions, he decided. “One geometry cannot be truer than another one, it can only be more convenient,” he wrote.

The Convenient vs the Real

Poincaré felt the same way about physics. Experimental data and theories about them are not descriptions of an independent reality. They are convenient and concise “pictures” to describe observations and connect them. So, for instance, the ether hypothesis is “convenient.” Whether or not ether exists is the concern of the metaphysician, not him. Supporters of “objectivist realism” complain conventionalism favours convenience over truth. However, both Poincaré and d’Espagnat reply that if rules make the right predictions then we might as well call them “true.”

Knowable vs Underlying Reality

Poincaré says relationships between things are “objective” when they’re “the same for everybody,” and that these relationships are “the sole objective reality.” These relationships cannot be conceived independent of a mind that conceives them. However, “they are objective nevertheless since they are shared by all thinking beings.” Poincaré is definitely referring to real, though hidden, objects. And Poincaré says we can discover true relationships between these real objects. D’Espagnat says that the only way to make sense of these statements is to conclude that Poincaré believed in some reality that underlies phenomena. Otherwise it’s hard to imagine how there could be real relationships between real, if hidden, objects.

Separability vs Non-separability

Although d’Espagnat’s viewpoint and Poincaré’s implicit ontology are similar, they differ over “separability.” Poincaré’s “structural realism” involves “objects-per-se”—unknowable but plural. D’Espagnat says modern physics does not support separability, and hence there must be “some underlying coherence, or deep unity” to this hidden reality.
Rules vs Ontology

Poincaré believed the equations of classical physics served two purposes. First, they describe the structure of various laws. Secondly, they describe the value of certain properties at different points. Poincaré was happy with the first role. He had doubts about the second role, as he felt equations indicated only what would be observed at those points, not what was pre-existing there. D’Espagnat wonders if we can give up the second role of an equation’s symbols while retaining the first. Maybe we could then call this a “structural” realism.

Old vs New Theories

D’Espagnat does note that Poincaré explored ontological issues only with reluctance. Therefore it would be wrong to attribute this interpretation to Poincaré. Also, this interpretation has some problems. As theories evolve, old equations may be seen as merely approximate. Also, a new theory may have little in common with the old theory. This would imply the structure of “Reality” is very different under this new theory. However, normally one could derive the old theory’s equations from the new theory’s equations. In that case there still might be a meaningful, permanent substratum to reality, despite the objection of radical idealists.

Structural Realism vs Veiled Reality

In the end, d’Espagnat says, structural realism can be justified only after it’s watered down so much it looks like his own “Veiled Reality.” In the final chapters d’Espagnat says he’ll have to steer between the conceptual difficulties of classical phenomenalism and the way physical realism is contradicted by the results of its own science.

The Portable Rainbow
7 July 2012
Under the Veil

Chapter 15 (“Explanation and Phenomena”) of Bernard d’Espagnat’s On Physics and Philosophy continues the previous chapter’s exploration of causation and explanation. With quantum mechanics relying on observation and denying a naive realism, is an empirical explanation good enough? For d’Espagnat there’s a need to postulate a “Veiled Reality” of which we may only be able to sneak some peeks, if at all. Nonetheless he believes it’s there.

Prediction vs Explanation

If you measure the state of quantum particles and find a correlation-at-a-distance, how do you explain it? Quantum mechanics is a “recipe” to predict observations from initial conditions. It’s an “explanation” on the level of empirical reality. If you need some “deeper” explanation about what’s “really” going on you might add “hidden variables” to extend the standard theory. But then you run into problems with Bell’s Theorem and the experimental results of Aspect and others (see the sketch below). If you don’t have knowledge of a deeper reality, how can an empiricist justify using induction to create laws? Just because a law summarizes certain observations on certain days, why should we think it’s universal?

Induction vs Unknowable Explanation

For d’Espagnat, a belief in the existence of a deeper reality is enough to ground our use of induction. We may be incapable of comprehending this deeper reality, but our belief that there is one suggests a connection between empirical reality and an underlying reality. That belief is enough for d’Espagnat to accept induction without having to justify it every time it’s used. Even if we don’t know anything much about this deeper reality, there’s still no “logical inconsistency,” d’Espagnat says, in using its presumed existence to justify induction.
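The trouble that Bell’s Theorem makes for hidden variables, mentioned above, can be made quantitative. Here is a minimal NumPy sketch (my illustration, not from the book) computing the CHSH combination of correlations for the singlet state; any local hidden-variable model is bounded by 2, while quantum mechanics predicts 2√2:

```python
import numpy as np

# Singlet state of two qubits; spin measured in the x-z plane at angle theta.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def spin(theta):
    return np.cos(theta) * sz + np.sin(theta) * sx

def E(a, b):
    """Correlation <A(a)B(b)> for the singlet; equals -cos(a - b)."""
    return float(singlet @ np.kron(spin(a), spin(b)) @ singlet)

a1, a2 = 0.0, np.pi / 2            # the two settings on one side
b1, b2 = np.pi / 4, 3 * np.pi / 4  # and on the other

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))   # ~2.828 = 2*sqrt(2), above the local-realist bound of 2
```

Aspect-type experiments measure these same four correlations and find the quantum value, which is why simply bolting hidden variables onto the theory fails unless they are allowed to be nonlocal.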
D’Espagnat adds that this deeper reality—whether “veiled” or even entirely unknowable—will not be an arbitrary reality.

Rainbows vs Quantum Concepts

Although rainbows can’t be directly grasped and manipulated, they’re explained in classical physics. A description of rainbows might illuminate how we speak about quantum systems. A rainbow (including its two “bases”) will look different from different locations. Hence, the particular rainbow someone sees is observer-dependent. The same reliance on location holds if you set up automatic cameras. Hence you can’t say that, just because we’ve taken a picture of a rainbow, the rainbow “really” existed before that observation. Similarly, our tendency to “reify” (to see something as concrete and real) means we jump from an observation to assuming what was observed somehow pre-existed. If we can argue that a rainbow doesn’t pre-exist, we should be able to argue that a quantum object doesn’t pre-exist either.

Dinosaurs vs Humans

However, surely dinosaurs existed before humans ever walked the earth. No observation was required to bring them into existence. D’Espagnat says that dinosaur bones are like the pointers of an experimental set-up. We see something and conclude it’s real. Though d’Espagnat says it’s real, he specifies it’s real in the realm of “empirical reality.” However, this empirical reality is hardly an arbitrary production. Its qualities are severely constrained, and in the end observers tend to see mostly the same thing.

Explanations vs The Final Key

Classical physics can still provide us with “explanations” as long as we don’t presume they derive from a deeper reality. D’Espagnat adds that we should not conclude that these explanations are the “final, ultimate key” to understanding the world.

D’Espagnat vs Other Views

I’ve concentrated above on d’Espagnat’s ultimate positions, but here are some examples of how he explains his disagreement with other people’s positions (real or conjectured).

D’Espagnat vs Cassirer

If you see correlations in a quantum experiment then d’Espagnat has trouble imagining Cassirer’s “logical necessity” could explain each particular observation in a sequence. True, Cassirer could choose (or could have chosen, as he’s now dead) hidden variables, but d’Espagnat says that’s too “metaphysical” for Cassirer, and the Aspect-type experiments have refuted them anyway. Maybe Cassirer equates “logical necessity” with a pre-existing logos, a primary notion of absolute existence. D’Espagnat says that whole idea is something the neo-Kantians were trying to get away from, so again it doesn’t sound like Cassirer. Nonetheless, d’Espagnat says his own position is consistent with considering the “Real” (with a capital R) to consist of such a logos.

D’Espagnat vs Carnap

Carnap says scientists should be more modest. They shouldn’t try to explain the “why” but just the “how” of phenomena. Carnap’s position is that simply producing entities, such as Driesch’s “entelechy” as an explanation for tissue regeneration, is irrelevant as there are no “laws” connecting conditions and observations. So what about d’Espagnat’s “Real”? Is it just a meaningless entity? It doesn’t help us predict anything, so maybe it’s not an explanation at all. D’Espagnat responds by saying that scientists long ago implicitly believed in the realism of a world ruled by classical physics, even if explicitly they concerned themselves with just the laws of observation.
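An aside on the rainbow being “explained in classical physics”: the classic Descartes ray analysis fixes the rainbow cone at roughly 42 degrees around the antisolar point, which is why every observer (or camera) sees their own rainbow. A minimal sketch of that computation (my illustration, with an assumed refractive index for water):

```python
import numpy as np

n = 1.333                                  # refractive index of water (assumed)
i = np.linspace(0.0, np.pi / 2, 200_000)   # angle of incidence on a droplet
r = np.arcsin(np.sin(i) / n)               # Snell's law inside the drop
D = 2 * i - 4 * r + np.pi                  # total deviation, one internal reflection

# Rays pile up near the minimum deviation; that caustic is the rainbow.
print(np.degrees(np.pi - D.min()))         # ~42 degrees from the antisolar point
```

Since each eye sits at the apex of its own 42-degree cone, the “same” rainbow is never literally shared, which is the observer-dependence the passage trades on.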
Even some realists nowadays, says d’Espagnat, acknowledge that there could be an underlying reality, not attainable through “discursive knowledge,” that nonetheless grounds our empirical reality. Furthermore, if laws relate just to our known observations, then what happened before we made those observations? Carnap, according to d’Espagnat, said laws could exist before such observations but the truth of the laws could not be judged. D’Espagnat says this amounts to Carnap’s acknowledging a “human-independent reality” that has a structure we might never know. Since quantum mechanics only predicts observations and does not “explain” underlying reasons, this implies to d’Espagnat that a “Veiled Reality” has a meaning even if we can’t explore it empirically. But what if we imagine Carnap meant some kind of “linguistic framework” involving “nature” and “existence” that replaced the usual meaning of those terms? In a world ruled by classical physics it makes sense to speak of “things” and their qualities. In a world ruled by quantum physics it makes sense to speak of “sense-data” rather than “things.” D’Espagnat says this approach works fine for making sure scientific statements are clear. But it’s not satisfactory from the philosophical point of view. Carnap, d’Espagnat says, is just “masking,” not “eliminating,” the connection we make between an explanation of observations and an explanation of what’s going on in some underlying reality. Since a linguistic framework is “chosen by us,” according to Carnap, it sounds a bit arbitrary and not like a genuine explanation.

An Influential Relationship
1 July 2012
Influential Arrows
Just Causes and Side Effects

Chapter 14 (“Causality and Observational Predictability”) of Bernard d’Espagnat’s On Physics and Philosophy examines how, and if, we can use the concepts of causation and influence to explain the world.

Reality vs Observations

Taking a break from examining the “notion of reality,” d’Espagnat uses this chapter to argue it’s better to predict observations than predict “things as they are.”

Animism vs Empiricism

Aristotle saw causation as related to human will, and even inanimate objects seemed to have some animistic will—as seen when a falling stone somehow desires to return to its natural resting place. Empiricists went to the other extreme. Physical laws should just be descriptions of events and their regularities. However, which initial conditions are the “cause”? You end up with too many empirical laws.

Mathematical vs Physical Determinism

If two trajectories from very close starting points rapidly diverge then they’re not likely to be “physically deterministic.” It’s too hard to calculate their exact paths (see the sketch below). “Strong objectivists” argue that science accumulates knowledge about an underlying reality, not just our experimental observations. Some strong objectivists argue that “chaotic” behaviour is an example of indeterminism, and others argue initial conditions have to be repeated exactly for us to say deterministic laws apply. The first approach implies imperfect observations or calculations show reality is indeterministic, but that’s strange since strong objectivists believe in an underlying reality separate from our fuzzy data. The second approach is a problem since a strong objectivist can’t be absolutely sure the initial conditions won’t be repeated.

Laws vs Predictions

D’Espagnat also criticizes the claim we’ve seen the “end of certainties” just because some calculations make predictions impossible.
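The divergence of nearby trajectories is easy to exhibit. A minimal sketch (my illustration; the logistic map stands in for any chaotic deterministic law): the rule is exact, yet an initial disagreement of 1e-10 swamps the forecast within a few dozen steps.

```python
# An exactly deterministic law with sensitive dependence: x -> r*x*(1-x).
r = 4.0
x, y = 0.2, 0.2 + 1e-10          # two initial conditions differing by 1e-10

for step in range(1, 51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step in (10, 30, 50):
        # The gap grows roughly like 2**step until it saturates at order 1.
        print(step, abs(x - y))
```

Long-range prediction fails even though the law never stops being deterministic, which is just the distinction d’Espagnat wants to preserve.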
D’Espagnat says the “end of certainties” claim is too harsh, as we can still believe in our laws even if sometimes in practice we can make reliable predictions only for the near future.

Classical vs Quantum Indeterminacy

D’Espagnat cautions against regarding chaos theory as some overriding conceptual triumph, as it’s grounded in classical concepts of space and time. Classical physics falters where quantum physics, with its apparent indeterminacy, excels, particularly on the microscopic level. Yet d’Espagnat says the defining feature of quantum mechanics is not its indeterminacy but its “weak objectivity.” The theory confines itself to observations of reality, not claims about reality itself.

Individual vs Statistical Determinacy

D’Espagnat agrees with Kant that “regularity in time”—in which one kind of event is followed consistently by another—is a good way to distinguish the empirically real from, say, the events of a dream. Kant’s “sin of omission” (understandable because of his time and place) was not to consider statistical regularities in which ensemble probabilities are deterministic. D’Espagnat emphasizes that quantum mechanics makes reliable predictions for observations of ensembles of quantum systems, but these are not probabilities of ignorance about individual systems. At first glance quantum mechanics may seem indeterministic, but if you keep in mind quantum predictions are about observations of multiple systems then it too is deterministic—if only “statistically.”

Laws vs Facts

D’Espagnat warns against “a variant of nihilism” if you don’t pay enough attention to the difference between laws and facts. He says even Dirac’s musings that universal constants (such as the speed of light) might change over time don’t threaten that distinction. The nihilistic danger, d’Espagnat says, comes from sociologists, epistemologists, or “pure philosophers” who see in the history of a changing universe a fundamental lack of stability. They fail to distinguish between laws and facts, or they fail to appreciate the significance of the distinction.

Causes vs Influences

D’Espagnat imagines a Laplace daemon that can possess total knowledge of events in part of the universe. The fixed speed of light means, in an Einsteinian world, the daemon need only check events in a point’s past light-cone to predict that point’s future. However, Bell’s Theorem combined with the experiments of Alain Aspect (and others) proved that the locality hypothesis is false. Add the fact that the temporal order of events can vary with reference frame, and we see that (earlier) cause and (later) effect can be ambiguous. D’Espagnat thus suggests that faster-than-light influences—or “influential relationships”—do exist.

Gestures in Empty Space
23 October 2011
Gestures in Empty Space

Physics vs Philosophy

In the second half of his book On Physics and Philosophy Bernard d’Espagnat explores more of the philosophy and less of the physics of quantum theory, and I’m taking notes to keep track of where he’s headed. In chapter 13 (“Suggestions from Kantism”) d’Espagnat says that physicists nowadays aren’t entirely sure how their scientific theories and results are to be interpreted.

Science of Reality vs Science of Phenomena

Immanuel Kant steered science away from “reality-per-se” and toward the study of phenomena, so d’Espagnat thinks Kant’s views might be promising. Kant did not reject the idea of an underlying reality, otherwise he would have discarded the “thing-in-itself” concept. Kant did, however, say that Pure Reason would not be able to make any pronouncements on it.
Beliefs and speculations are fine as long as we don’t call them scientific or rational views, he said.

The Table vs the Representations of the Table

Here is a basic problem: if you look at a table you not only have a representation of that table somehow inside of you, but you also believe there’s a table somewhere out there. Can we check that this representation is accurate? Well, we can’t check what’s “really” out there since our senses give us just a representation of the table.

Space vs Experience

Kant argued that the very concept of space and spatiality was “a priori.” We’re more or less born with it as we need to peg various bits of sense data somehow in relation to each other. D’Espagnat disagrees, saying the same argument could be made about riding a bicycle or learning to swim. He says Kant, living in a time ignorant of the evolution of species, would also not have considered “learning by apprenticeship.” That’s where systems of neurons gradually favour useful gestures and discard the unhelpful ones. D’Espagnat finds himself not just rejecting the realists who argue for the “absoluteness” of space and time, but also the idealists who claim Kant pinned down the foundations of the argument. Not just evolution but also quantum physics was lacking in Kant’s approach, yet even today philosophers treat as a “demonstrated truth” the view that spatiality is just the mind’s way of framing phenomena. Modern physics shows that concepts of Euclidean space, universal time, and precise localization are misleading, even if we need those concepts to operate as humans in our ordinary macroscopic lives.

Kant’s vs Quantum Theory’s Objective Language

In any event, Kant sees science as addressing phenomena, so his version of science is “weakly objective.” So is quantum physics, which speaks of experimental setups and predictions of experimental observations. But Kant uses the language of objective reality with little modification, while quantum physics uses terms such as electrons and virtual particles for reasons of reluctant convenience.

Kant vs his Followers

Kant’s followers became much more hostile to the idea of an objective reality. Kant talked about the “thing-in-itself,” even if inaccessible to science directly. But neo-Kantians rejected the notion except as a “limiting concept.” D’Espagnat looks at Ernst Cassirer, a neo-Kantian writing around a century ago. D’Espagnat says he may be the “clearest” of all the neo-Kantians.

Cassirer’s Concept of Concepts vs Traditional Concepts

Cassirer was very interested in the process of coming up with concepts. He noted that traditionally a big concept contains little information because so many distinctions get blurred. For instance, we can move from an oak to a tree to a plant to a living being, a concept so broad we hardly have words to describe it. But with mathematics the bigger categories combine all the qualities of the smaller categories that feed into them. For instance, the concept of second-degree curves doesn’t mean we can’t tell a circle from an ellipse any more. We just have to plug the right values into the right parameters and we can get a circle (see the equations below).

Logical Necessity vs Quantum Results

Cassirer tries to describe science as a mathematical approach incorporating more and more concepts through some sort of logical necessity. Reason and the universal scope of logical necessity form the basis for some sort of Being. But d’Espagnat says that’s just an analogy.
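Cassirer’s conic example is easy to state precisely (standard algebra, my rendering rather than d’Espagnat’s): the general second-degree curve keeps every special case recoverable by fixing its parameters.

```latex
% The general second-degree (conic) equation:
\[ Ax^{2} + Bxy + Cy^{2} + Dx + Ey + F = 0 \]
% Fixing parameters recovers the circle as a special case (A = C, B = 0):
\[ \left(x + \tfrac{D}{2A}\right)^{2} + \left(y + \tfrac{E}{2A}\right)^{2}
   = \tfrac{D^{2} + E^{2} - 4AF}{4A^{2}}, \qquad D^{2} + E^{2} - 4AF > 0 \]
```

Nothing about the circle or the ellipse is blurred by belonging to the wider class; the parameters carry the distinctions along, which is Cassirer’s contrast with the oak-to-living-being chain.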
If we look directly at quantum physics we see it doesn’t link disparate impressions into some sort of logically and causally required arrangement of entities. Cassirer’s views on the rules of knowledge and logical necessity put great emphasis on our mental powers to create order rather than some reality “out there.” Yet d’Espagnat reminds us that experiments often refute one theory or other, so there is something “out there” that can derail some view and just say “no.”

Complete vs Ongoing Pursuit of Knowledge

Furthermore, d’Espagnat’s concept of a “veiled reality” won’t leave us exasperated and depressed, he says, because we will always be able to come up with better and better ways to generalize about phenomena.

Mind-independent Reality vs Internal Consistency

D’Espagnat says some modern philosophers with “Kantian or empiricist inclinations” are less resistant to the idea of an independent reality than Cassirer was. D’Espagnat says that Hilary Putnam’s “internal realism” sees descriptions evaluated by some kind of “ideal coherence” that keeps our beliefs consistent with each other and with experiences represented in those belief systems. But d’Espagnat says Abner Shimony thinks Putnam gives up too easily in rejecting any kind of correspondence between our senses and a “mind-independent or discourse-independent ‘state of affairs.’” Shimony and d’Espagnat think the concept of a “protomentality” has some similarity with quantum mechanical concepts. Bas van Fraassen came up with a “constructive empiricism” that rejected scientific realism. He said it might be useful to consider structures and processes not directly accessible to an observer, but they’ll have no intrinsic reality. On the other hand, van Fraassen said the issue of what is observable and what isn’t should be left to science, not philosophy: the “Grand Reversal.”

Empiricism vs a Knowing Subject

But can van Fraassen stay an empiricist when he’s making use of a “knowing subject”? Shimony thinks empiricism needs to be dropped for some kind of realism — with a strong mental component. In the end d’Espagnat agrees with Shimony that there must be some way to “close the circle” even if there is a “dark cloud” making the project difficult. That dark cloud is the impossibility of considering quantum states to have an ontological status. D’Espagnat says that as long as contemporary philosophers stick to philosophy they have trouble radically casting doubt on veiled reality, or even realism in general. Only when modern physics is considered do we see how impossible it is to be a conventional realist.

Terms of Ontological Endearment
25 August 2011
Mosaic of Reality
Material Witness

At least that’s what he says, and I largely agree. Here’s my summary of the chapter.

Dialectical Materialism vs Bohr

Scientific Materialism vs Atomism

Macroman on the Street vs the Microworld

Standard vs Non-standard Interpretations

Empirical Reality vs Materialist Reality

Positivism vs Materialism

Research vs Traditions of Research

Observations vs “Ampliative” Arguments

Bohm vs Materialism

Sophistication vs Atomic Materialism

Neomaterialism vs Matter

A third approach to materialism comes from André Comte-Sponville.

Neutral vs Suggestive Terms

Nonseparability vs Neomaterialism

Utility vs Evidence

Empirical vs Ultimate Reality

This a materialism does not make.
Convenient Ontologies vs Creeds

Knowledge of Good and Banal
10 August 2011
Knowledge of Good and Banal
Philosopher’s Walk

A little past the halfway point in Bernard d’Espagnat’s On Physics and Philosophy he switches from a look at the relevance of physics to philosophy to the relevance of philosophy to physics. If the first chapter of part two, chapter eleven, is any indication, the last half of the book should be a much easier read than the first, though perhaps less satisfying. It was a huge challenge to wade through d’Espagnat’s descriptions of quantum theory and interpretation, hence I felt the need to write (and post) lots of notes to help me out. At least I felt a sense of reward whenever I finally grasped something of the physics. But as far as I can tell I agree with d’Espagnat’s philosophy anyway. Part two may make easier reading, but I already felt a lot of the modern philosophy, soft sciences, and cultural studies he critiques was just plain hokum. I don’t need more convincing. In any event, I will trudge on, and I expect I’ll be posting updates to my dualistic summary much more often now.

Science vs Philosophy

Descartes, Pascal, and Leibniz were brilliant scientists and philosophers, but by the eighteenth century a huge breach developed.

Nature of Things vs Behaviour

Fourier refused to speculate on the nature of heat. Instead his heat propagation equation quantitatively predicted heat’s behaviour.

Intuitive vs Unintuitive Notions

Specialization works when concepts such as (in Fourier’s time) “hotter” and “colder” seem obvious, and so don’t need to be defined by a theory. But what’s a quantum field or a space-time metric? Then we do need to consider the nature of such concepts.

Ontology vs Operationalism

The physicist can give up his exclusive interest in behaviours, or can decide that “behaviour” is just a series of recorded observations. The first option sounds like philosophy, while the second gets close to operationalism.

Physics-Aware Philosophy vs Philosophy-Aware Physics

In the first part of his book d’Espagnat called on philosophers to pay attention to the physics. In this second part he calls on physicists to pay attention to the philosophy.

Epistemology vs Scientific Knowledge

D’Espagnat says epistemology, the philosophy of knowledge, is particularly important when considering scientific knowledge.

Logical Positivists vs Modern Sceptics

Epistemology forty years ago was dominated by logical positivists. Nowadays there’s more diversity. Present-day epistemologists often combine extreme scepticism toward science with an “everything goes” attitude to knowledge in general. They also talk of “paradigms” in a way that suggests an underlying belief in objectivist realism.

Stubborn Epistemologists vs Blasé Physicists

Physics has moved far, far away from realist attitudes to experimental data. Epistemologists generally ignore two points physicists find obvious. The first is that equations, such as Maxwell’s, show remarkable power and longevity, even if interpretations of those equations have changed. The second is that with the help of these equations physical science gets better and better at predicting phenomena.

Paradigm Change vs Continuous Change

D’Espagnat finds fault with much of contemporary epistemology, but he says Thomas Kuhn and others usefully pointed out that science doesn’t always change slowly but surely. Kuhn sees a strong sociological basis in “paradigm changes”: it’s easier to cast doubt on the present “received” theory than to prove its replacement.
Therefore advocates of change must use more tools of persuasion than just the data.

Experimental Choice vs Outcome

D’Espagnat appreciates that funding and fashion might influence the choice of experiment, but he strongly doubts that they affect the results of those experiments.

Short-Term Chaos vs Long-Term Progress

D’Espagnat says epistemologists probably act like historians, seeing short-term upheaval during science’s most productive periods. But in the long term d’Espagnat strongly believes the “winner” theory will explain not just new facts, but the old ones the previous theory took care of.

Huygens’ Waves vs Newton’s Corpuscles

D’Espagnat acknowledges how Newton’s corpuscular theory of light replaced Huygens’ wave theory even though Huygens’ theory explained double refraction a lot better. Does that mean explanatory power is sometimes lost as science “progresses”? D’Espagnat emphasizes that today’s theory of light is quantum electrodynamics, which improves upon both Newton’s and Huygens’ theories.

Universal Physics vs The Rest of Science

D’Espagnat says epistemologists might dispute the universality of this argument. Instead of physics maybe their claims apply to some other sciences. He believes in the universality of science in principle, but he admits maybe sciences less dependent on technology may suffer a loss of craft as one theory replaces another. However, d’Espagnat still believes such a loss would be temporary. Epistemologists again confuse loss of predictive power with a (temporary) lack of interest in some field.

Paradigms vs Reality

Kuhn-like epistemologists are so fixed on objectivist or constructivist realism that they see a change in concepts as a radical change in physics’ view of reality. As a result some epistemologists speak of the “noncumulative” nature of physics.

Allegory vs Equations

D’Espagnat points to the remarkable stability of equations despite changes in “wordings and outward interpretations.” A change in concepts doesn’t destroy the old theory; it just generalizes it and provides a new allegorical picture.

Kuhnian vs Other Viewpoints

D’Espagnat notes that in the past fifty years other approaches to scientific knowledge have developed that don’t rely on Kuhn.

Professional Language vs Sloppy Thinking

D’Espagnat says most professional languages help prevent misleading shifts in meaning, but philosophical language actually encourages them. If you apply critical thinking to philosophical texts you’ll often discover ambiguous meaning and mannered style replacing sound arguments.

Context vs Scientific Purpose

D’Espagnat says some epistemologists delve deeply into the psychology or sociology of scientific discovery, yet remain near silent about what science is really concerned with.

Ideas vs Evidence

Jean-Jacques Rousseau decided humans are good by nature, but forgot this was an idea of his, not a piece of evidence. D’Espagnat says many philosophers of science act the same way, clinging to an idea that is ultimately just part of their dogma. Empiricists decided a priori that evidence comes from the senses, while positivists have their verification principle.

Relativity vs Quantum Theory

Epistemologists have started taking into account relativity theory but don’t realize how damaging quantum theory is to some of their views.

Positivists vs Realists

Some realist epistemologists speak of entities as having an unconditional individual existence, or naturally assume that particles travel on continuous trajectories.
Realists point to positivism’s failings on philosophical grounds, but the physics points to the failings of realism.

Science vs Cultural Fashion

Some epistemologists think the positivism of the 1920s led to the “weak” objectivity of standard quantum theory, while today’s attitudes are friendlier towards realism. D’Espagnat calls this argument “valueless.” If it were just an issue of social psychology and today’s fashion then physicists should by now have solved the quantum interpretation problem. However, as he’s already explained in detail, other quantum interpretations that make the right predictions cannot be interpreted ontologically, and vice versa.

Language vs Thought

Throughout the twentieth century many philosophers paid attention to language, thinking it had to mirror—even mould—the logic of thought. The problem is various languages have very different structures. Do we really think that if a group speaks a different language it thinks differently? D’Espagnat believes “language creates thought” is a Rousseau-like assumption. Aristotle came up with the concept of potentia not so he could think in a new way, but to accommodate new data. New language is convenient and helpful, but springs from a need to explain new evidence.

Quantum vs Classic Logic

Quantum theory muddies the distinction between concepts of objects and predicates. Some people have put forward a quantum logic to remedy that situation, but this new logic isn’t a necessary part of quantum theory.

Metalogic vs Specific Rules

D’Espagnat believes that the metalogic used to speak about logic is a universal logic, while specific thinking rules might apply to specific situations. He notes with approval Bohr’s “basic truth” that everyday language is the only clear means of communication that we have.

Sociologism vs Science

D’Espagnat condemns the idea that “anthropological situations” determine scientific results. He asks, for instance, if the Heisenberg uncertainty principle would have failed had German and Danish culture been different. He says this is sheer absurdity and calls the attitude “sociologism.”

Sokal the Anti-Sociologist vs Sokal the Realist

D’Espagnat applauds physicist Alan Sokal’s exposé of sociologists’ fuzzy thinking (by submitting an incoherent, jargon-filled paper to a humanities journal). However, d’Espagnat regrets how Sokal “drifted to the other extreme” by clinging to physical realism.

Certainties vs The End of Certainties

D’Espagnat disagrees with the phrase “the end of certainties,” which is often used to describe the loss of certain knowledge in modern times. He rejects this idea, whether it refers to challenges to determinism or to physical realism. Predictive rules, whether of events or probabilities of events, do work. Once experimentally verified they keep on working. D’Espagnat thinks this is “certain” knowledge, though he agrees that “illusively simple” certainties may prove deceptive and short-lived.
So Say We All
28 June 2011
Quantum States of Confusion

Laws of Classical Physics vs Quantum Physics

Prediction vs Justification

Quantum Laws vs Macroscopic Predictions

Quantum Laws vs The Alternatives

Ensembles vs Individual Observations

Mere Predictions vs Hidden Variables

Outside vs Inside Observations

Consciousness vs Matter

Matter vs Sense Data

Sense Data vs Reality

Schrödinger’s Cat vs Wigner’s Friend

Improper vs Proper Mixtures

Pointers vs Bacteria

Pilot Waves vs Schrödinger Wave Function

Microscopic vs Macroscopic Subjects

Predictive Consciousness vs Predictive Science

Predictive vs Non-predictive Consciousness

Group vs Individual Observations

He does not, however, rule out this possibility.

Individual vs Group Observations

Can this reliability of individual predictions be extended to group predictions?

Physics vs Philosophy

Instrumentalists vs the Kantians

Naive Realism vs Open Realism

Scientists vs Philosophers

Kant vs d’Espagnat

Empirical vs Objective Reality

“The Real” vs Phenomena

Unreachable vs Veiled Reality

“The Real” vs Space-time

D’Espagnat vs Mohrhoff

D’Espagnat vs Modern Philosophers

Existence vs Knowledge

Beautiful Theories vs Falsifiability

Realism of the Accidents vs External Influence

Present vs Past Building Blocks of Knowledge

Partial Knowledge vs Veiled Reality

D’Espagnat vs Esotericism

Veiled Reality vs Anomalous Phenomena
Discrete & Continuous Dynamical Systems - B
September 2007, Volume 8, Issue 2

A linear-quadratic control problem with discretionary stopping
Shigeaki Koike, Hiroaki Morimoto and Shigeru Sakaguchi
2007, 8(2): 261-277, doi: 10.3934/dcdsb.2007.8.261
We study the variational inequality for a 1-dimensional linear-quadratic control problem with discretionary stopping. We establish the existence of a unique strong solution via stochastic analysis and the viscosity solution technique. Finally, the optimal policy is shown to exist from the optimality conditions.

Stability enhancement of a 2-D linear Navier-Stokes channel flow by a 2-D, wall-normal boundary controller
Roberto Triggiani
2007, 8(2): 279-314, doi: 10.3934/dcdsb.2007.8.279
Consider a 2-D, linearized Navier-Stokes channel flow with periodic boundary conditions in the streamwise direction and subject to a wall-normal control on the top wall. There exists an infinite-dimensional subspace $E^0$, where the normal component $v$ of the velocity vector, as well as the vorticity $\omega$, are not influenced by the control. The corresponding control-free dynamics for $v$ and $\omega$ on $E^0$ are inherently exponentially stable, though with limited decay rate. In the case of the linear 2-D channel, the stability margin of the component $v$ on the complementary space $Z$ can be enhanced by a prescribed decay rate, by means of an explicit, 2-D wall-normal controller acting on the top wall, whose space component is subject to algebraic rank conditions. Moreover, its support may be arbitrarily small. Corresponding optimal decays, by the same 2-D wall-normal controller, of the tangential component $u$ of the velocity vector, of the pressure $p$, and of the vorticity $\omega$ over $Z$ are also obtained, to complete the optimal analysis.

Optimal investment-consumption strategy in a discrete-time model with regime switching
Ka Chun Cheung and Hailiang Yang
2007, 8(2): 315-332, doi: 10.3934/dcdsb.2007.8.315
This paper analyzes the investment-consumption problem of a risk-averse investor in a discrete-time model. We assume that the return of a risky asset depends on the economic environments and that the economic environments are ranked and described using a Markov chain with an absorbing state which represents the bankruptcy state. We formulate the investor's decision as an optimal stochastic control problem. We show that the optimal investment strategy is the same as that in Cheung and Yang [5], and a closed-form expression of the optimal consumption strategy has been obtained. In addition, we investigate the impact of the economic environment regime on the optimal strategy. We employ some tools in stochastic orders to obtain the properties of the optimal strategy.
Global stability of two epidemic models
Qingming Gou and Wendi Wang
2007, 8(2): 333-345, doi: 10.3934/dcdsb.2007.8.333
In this paper we study the global stability of two epidemic models by ruling out the presence of periodic orbits, homoclinic orbits and heteroclinic cycles. One model incorporates exponential growth, horizontal transmission, vertical transmission and standard incidence. The other one incorporates constant recruitment, disease-induced death, stage progression and bilinear incidence. For the first model, it is shown that the global dynamics is completely determined by the basic reproduction number $R_0$. If $R_0\leq 1$, the disease-free equilibrium is globally asymptotically stable, whereas the unique endemic equilibrium is globally asymptotically stable if $R_0>1$. For the second model, it is shown that the disease-free equilibrium is globally stable if $R_0\leq 1$, and the disease is persistent if $R_0>1$. Sufficient conditions for the global stability of an endemic equilibrium of the model are also presented.

Distributional chaos via isolating segments
Piotr Oprocha and Pawel Wilczynski
2007, 8(2): 347-356, doi: 10.3934/dcdsb.2007.8.347
Recently, Srzednicki and Wójcik developed a method based on the Wazewski Retract Theorem which allows, via construction of so-called isolating segments, a proof of topological chaos (positivity of topological entropy) for periodically forced ordinary differential equations. In this paper we show how to arrange isolating segments to prove that a given system exhibits distributional chaos. As an example, we consider the planar differential equation $\dot{z}=(1+e^{i\kappa t}|z|^2)\bar{z}$ for parameter values $0<\kappa\leq 0.5044$.

Sharp global existence and blowing up results for inhomogeneous Schrödinger equations
Jianqing Chen and Boling Guo
2007, 8(2): 357-367, doi: 10.3934/dcdsb.2007.8.357
In this paper, we first give an important interpolation inequality. Secondly, we use this inequality to prove the existence of local and global solutions of an inhomogeneous Schrödinger equation. Thirdly, we construct several invariant sets and prove the existence of blowing up solutions. Finally, we prove that for any $\omega>0$ the standing wave $e^{i\omega t}\phi(x)$ related to the ground state solution $\phi$ is strongly unstable.

Reformed post-processing Galerkin method for the Navier-Stokes equations
Yinnian He and R. M. M. Mattheij
2007, 8(2): 369-387, doi: 10.3934/dcdsb.2007.8.369
In this article we compare the post-processing Galerkin (PPG) method with the reformed PPG method of integrating the two-dimensional Navier-Stokes equations in the case of non-smooth initial data $u_0\in H^1_0(\Omega)^2$ with $\operatorname{div} u_0=0$ and $f,\ f_t\in L^\infty(\mathbb{R}^+;L^2(\Omega)^2)$. We give the global error estimates with $H^1$ and $L^2$-norm for these methods. Moreover, if the data $\nu$ and $\lim_{t\rightarrow\infty}f(t)$ satisfy the uniqueness condition, the global error estimates with $H^1$ and $L^2$-norm are uniform in time $t$. The difference between the PPG method and the reformed PPG method is that their error bounds are of the same forms on the interval $[1,\infty)$ and the reformed PPG method has a better error bound than the PPG method on the interval $[0,1]$.
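An aside on the planar equation in the distributional-chaos abstract above: sample trajectories are straightforward to integrate numerically. A quick sketch (assuming SciPy; my illustration, not part of the journal page; solutions of this equation can blow up in finite time, so the horizon is kept short):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Planar equation dz/dt = (1 + e^{i*kappa*t} |z|^2) * conj(z), split into
# real and imaginary parts so a real-valued ODE solver can handle it.
kappa = 0.3   # within the paper's range 0 < kappa <= 0.5044

def rhs(t, y):
    z = complex(y[0], y[1])
    dz = (1 + np.exp(1j * kappa * t) * abs(z) ** 2) * z.conjugate()
    return [dz.real, dz.imag]

# Short horizon: larger initial data or longer times risk finite-time blow-up.
sol = solve_ivp(rhs, (0.0, 2.0), [0.1, 0.0], rtol=1e-10, atol=1e-12)
print(sol.t[-1], sol.y[:, -1])   # endpoint of one sample trajectory
```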
Detecting perfectly insulated obstacles by shape optimization techniques of order two
Lekbir Afraites, Marc Dambrine, Karsten Eppler and Djalil Kateb
2007, 8(2): 389-416, doi: 10.3934/dcdsb.2007.8.389
The paper extends investigations of identification problems by shape optimization methods for perfectly conducting inclusions to the case of perfectly insulating material. The Kohn and Vogelius criteria as well as a tracking-type objective are considered for a variational formulation. In case of problems in dimension two, the necessary condition implies immediately a perfectly matching situation for both formulations. Similar to the perfectly conducting case, the compactness of the shape Hessian is shown and the ill-posedness of the identification problem follows. That is, the second order quadratic form is no longer coercive. We illustrate the general results by some explicit examples and we present some numerical results.

Multiple bifurcations of a predator-prey system
Dongmei Xiao and Kate Fang Zhang
2007, 8(2): 417-433, doi: 10.3934/dcdsb.2007.8.417
The bifurcation analysis of a generalized predator-prey model depending on all parameters is carried out in this paper. The model, which was first proposed by Hanski et al. [6], has a degenerate saddle of codimension 2 for some parameter values, and a Bogdanov-Takens singularity (focus case) of codimension 3 for some other parameter values. By using normal form theory, we also show that saddle bifurcation of codimension 2 and Bogdanov-Takens bifurcation of codimension 3 (focus case) occur as the parameter values change in a small neighborhood of the appropriate parameter values, respectively. Moreover, we provide some numerical simulations using XPPAUT to show that the model has two limit cycles for some parameter values, has one limit cycle which contains three positive equilibria inside for some other parameter values, and has three positive equilibria but no limit cycles for other parameter values.

A competition-diffusion system with a refuge
Daozhou Gao and Xing Liang
2007, 8(2): 435-454, doi: 10.3934/dcdsb.2007.8.435
In this paper, a model composed of two Lotka-Volterra patches is considered. The system consists of two competing species $X, Y$ and only species $Y$ can diffuse between patches. It is proved that the system has at most two positive equilibria and then that permanence implies global stability. Furthermore, to answer the question whether the refuge is effective to protect $Y$, the properties of positive equilibria and the dynamics of the system are studied when $X$ is a much stronger competitor.

The role of evanescent modes in randomly perturbed single-mode waveguides
Josselin Garnier
2007, 8(2): 455-472, doi: 10.3934/dcdsb.2007.8.455
Pulse propagation in randomly perturbed single-mode waveguides is considered. By an asymptotic analysis the pulse front propagation is reduced to an effective equation with diffusion and dispersion. Apart from a random time shift due to a random total travel time, two main phenomena can be distinguished. First, coupling and energy conversion between forward- and backward-propagating modes is responsible for an effective diffusion of the pulse front. This attenuation and spreading is somewhat similar to the one-dimensional case addressed by the O'Doherty-Anstey theory.
Second, coupling between the forward-propagating mode and the evanescent modes results in an effective dispersion. In the case of small-scale random fluctuations we show that the second mechanism is dominant.

Homogenization in random media and effective medium theory for high frequency waves
Guillaume Bal
2007, 8(2): 473-492 doi: 10.3934/dcdsb.2007.8.473
We consider the homogenization of the wave equation with high frequency initial conditions propagating in a medium with highly oscillatory random coefficients. By appropriate mixing assumptions on the random medium, we obtain an error estimate between the exact wave solution and the homogenized wave solution in the energy norm. This allows us to consider the limiting behavior of the energy density of high frequency waves propagating in highly heterogeneous media when the wavelength is much larger than the correlation length in the medium.

Distributional convergence of null Lagrangians under very mild conditions
Marc Briane and Vincenzo Nesi
2007, 8(2): 493-510 doi: 10.3934/dcdsb.2007.8.493
We consider sequences $U^\epsilon$ in $W^{1,m}(\Omega;\mathbb{R}^n)$, where $\Omega$ is a bounded connected open subset of $\mathbb{R}^n$, $2\leq m\leq n$. The classical result of convergence in distribution of any null Lagrangian states, in particular, that if $U^\epsilon$ converges weakly in $W^{1,m}(\Omega)$ to $U$, then $\det(DU^\epsilon)$ converges to $\det(DU)$ in $\mathcal{D}'(\Omega)$. We prove convergence in distribution under weaker assumptions. We assume that the gradient of one of the coordinates of $U^\epsilon$ is bounded in the weighted space $L^2(\Omega,A_\epsilon(x)dx;\mathbb{R}^n)$, where $A_\epsilon$ is a non-equicoercive sequence of symmetric positive definite matrix-valued functions, while the other coordinates are bounded in $W^{1,m}(\Omega)$. Then, any $m$-homogeneous minor of the Jacobian matrix of $U^\epsilon$ converges in distribution to a generalized minor provided that $|A_\epsilon^{-1}|^{n/2}$ converges to a Radon measure which does not load any point of $\Omega$. A counter-example shows that this latter condition cannot be removed. As a by-product we derive improved div-curl results in any dimension $n\geq 2$.

Generators of Feller semigroups with coefficients depending on parameters and optimal estimators
Jerome A. Goldstein, Rosa Maria Mininni and Silvia Romanelli
2007, 8(2): 511-527 doi: 10.3934/dcdsb.2007.8.511
We consider the realization of the operator $L_{\theta, a}u(x) := x^{2a}u''(x) + (a x^{2a-1} + \theta x^a)u'(x)$, acting on $C[0,+\infty]$, for $\theta\in\mathbb{R}$, $a\in\mathbb{R}$. We show that $L_{\theta, a}$, with the so-called Wentzell boundary conditions, generates a Feller semigroup for any $\theta\in\mathbb{R}$, $a\in\mathbb{R}$. The problem of finding optimal estimators for the corresponding diffusion processes is also discussed, in connection with some models in financial mathematics. Here $C[0,+\infty]$ is the space of all real-valued continuous functions on $[0,+\infty)$ which admit a finite limit at $+\infty$.
Published online 2 April 2009 | Nature | doi:10.1038/news.2009.212
Column: Muse
Physics by numbers
A suggestion that the discovery of physical laws can be automated raises questions about what it means to do science, says Philip Ball.

Two decades ago, computer scientist Kemal Ebcioğlu at IBM described a computer program that wrote music like Johann Sebastian Bach. Now I know what you're thinking: no one has ever written music like Bach. And Ebcioğlu's algorithm had a somewhat more modest goal: given the bare melody of a Bach chorale, it could fill in the rest (that is, the harmonies) in the style of the maestro [1].

Ebcioğlu's aim was not to rival Bach, but to explore whether the 'laws' governing his composition could be abstracted from the 'data'. The goal was really no different from that attempted by scientists all the time: to deduce underlying principles from a mass of observations.

Writing Bach-like music, however, highlights the constant dilemma in this approach. Even if the computerized chorales would have fooled experts (they were never actually put to the test), there would be no guarantee that the algorithm's rules bore any relation to the compositional mental processes in Bach's head.

That issue is becoming increasingly acute, especially in the hazily defined area of science called complexity. Computer models can now supply convincing mimics of all manner of complex behaviours, from the flocking of birds to traffic jams to the dynamics of economic markets. But do the rules of these models bear any relation to the physical processes that generate the real-world behaviour, or are the resemblances coincidental?

This matter is raised by a paper today in Science that reports on a technique to automate the identification of natural laws from experimental data [2]. As the authors Michael Schmidt and Hod Lipson of Cornell University in Ithaca, New York, point out, this is much more than a question of data-fitting — it examines what it means to think like a physicist, and perhaps even interrogates the issue of what natural laws are.

Something out of nothing?

A mathematical equation can always be found to fit any data set to arbitrary precision. But that risks capturing incidental noise along with any significant relationships between variables. What is needed is a law that obeys Einstein's famous dictum, being as simple as possible but not simpler.

Avoiding 'simpler' means not reducing the data to a trivial level. In complex systems, it has become common, even fashionable, to find power laws (y proportional to x^n) that link two variables [3]. But the ubiquity of such laws in systems ranging from economics to linguistics is now leading some to suspect that power laws might in themselves lack much fundamental significance.

Ideally, the mathematical laws governing a process should reflect the physically meaningful invariants of that process. They might, for example, stem from conservation of energy or of momentum. But it can be hard to distinguish true invariants from trivial patterns. A study [4] published in 2005, for instance, showed that the constancy of various dimensionless parameters from the life histories of different species — such as the ratio of average life span to age at maturity — is not, as previously thought, evidence of underlying laws but follows inevitably from the way the parameters are chosen. It's not always easy to distinguish the trivial from the profound.
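The overfitting danger is easy to make concrete. Here is a small numerical illustration of my own (it is not from Ball's column, and it is not Schmidt and Lipson's algorithm): a polynomial with as many parameters as data points reproduces noisy power-law data exactly, noise and all, while the simple two-parameter law generalizes far better beyond the sampled range.

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1.0, 2.0, 6)
y = 3.0 * x**2 + 0.05 * rng.standard_normal(x.size)   # a 'law' plus noise

flexible = np.polyfit(x, y, deg=5)                    # 6 parameters for 6 points
simple = np.polyfit(np.log(x), np.log(y), deg=1)      # 2-parameter fit y = c*x^n

print(np.abs(np.polyval(flexible, x) - y).max())      # ~0: fits the noise exactly
x_new = 3.0                                           # beyond the sampled range
print(np.polyval(flexible, x_new))                    # typically lands far from the truth
print(np.exp(np.polyval(simple, np.log(x_new))))      # close to 3 * 3**2 = 27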
Isaac Newton showed that Kepler's laws identifying mathematical regularities in the parameters of planetary orbits have a deep origin in the inverse-square law of gravity. But the notorious Titius–Bode 'law' that alleges a mathematical relationship between the semi-major axes and the ranking of planets in the Solar System remains contentious and is dismissed by many astronomers as numerology.

As Schmidt and Lipson point out, some of the invariants embedded in natural laws aren't at all intuitive because they don't actually relate to observable quantities. Newtonian mechanics deals with quantities such as mass, velocity and acceleration, whereas its more fundamental formulation by Joseph Louis Lagrange invokes the principle of minimal action — yet 'action' is an abstract mathematical quantity that can be calculated but not really 'measured'. And many of the seemingly fundamental constructs of natural law — the concept of force, say, or the Schrödinger equation in quantum theory — turn out to be mathematical conveniences or arbitrary (if well motivated) guesses that merely work well. Whether any physical reality should be ascribed to such things, or whether they should just be used as theoretical conveniences, remains unresolved in many of these constructs.

Deep questions

Schmidt and Lipson present a clever genetic algorithm to narrow down the list of candidate laws describing a data set by using additional criteria, such as whether partial derivatives of the equations also fit those of the data. The best candidate is finally selected by parsimony. When used to deduce mathematical laws that describe the data from two experiments in mechanics, the algorithm came up with precisely the equations of motion that physicists would construct from first principles using Newton's laws of motion and Lagrangian mechanics. In other words, the solutions encode not just the observed data but also the underlying physics.

Perhaps the arena most in need of a tool such as this is not physics but biology. Another paper in today's Science, by UK researchers, reports a 'robot scientist' named Adam that can frame and test hypotheses about the genomics of yeast [5] (see 'Introducing robo-scientist'). By identifying connections between genes and enzymes, Adam could channel post-docs away from such donkey-work and towards more creative endeavours.

But the really deep questions, about which we remain largely ignorant, concern what one might call the physics of genomics: whether there are the equivalents of Newtonian and Lagrangian principles, and if so, what. Despite the current fads for banking vast swathes of biological data, theories of this sort are not going to simply fall out of the numbers. So we need all the help we can get — even if it is from robots.

References
1. Ebcioğlu, K. Comput. Music J. 12, 43–51 (1988).
2. Schmidt, M. & Lipson, H. Science 324, 81–85 (2009).
3. Newman, M. E. J. Contemp. Phys. 46, 323–351 (2005).
4. Nee, S., Colegrave, N., West, S. A. & Grafen, A. Science 309, 1236–1239 (2005).
5. King, R. D. et al. Science 324, 85–89 (2009).
Interpreting the Quantum World I: Measurement & Non-Locality

In previous posts Aron introduced us to the strange, yet compelling world of quantum mechanics and its radical departures from our everyday experience. We saw that the classical world we grew up with, where matter is composed of solid particles governed by strictly deterministic equations of state and motion, is in fact somewhat “fuzzy.” The atoms, molecules, and subatomic particles in the brightly colored illustrations and stick models of our childhood chemistry sets and schoolbooks are actually probabilistic fields that somehow acquire the properties we find in them when they’re observed. Even a particle’s location is not well-defined until we see it here, and not there. Furthermore, because they are ultimately fields, they behave in ways the little hard “marbles” of classical systems cannot, leading to all sorts of paradoxes. Physicists, philosophers, and theologians alike have spent nearly a century trying to understand these paradoxes. In this series of posts, we’re going to explore what they tell us about the universe, and our place in it.

To quickly recap earlier posts, in quantum mechanics (QM) the fundamental building block of matter is a complex-valued wave function \Psi whose squared amplitude is a real-valued number that gives the probability density of observing a particle/s in any given state. \Psi is most commonly given as a function of the locations of its constituent particles, \Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ), or their momenta, \Psi\left ( \vec{p_{1}}, \vec{p_{2}}... \vec{p_{n}} \right ) (but not both, which as we will see, is important), but will also include any of the system’s other variables we wish to characterize (e.g. spin states). The range of possible configurations these variables span is known as the system’s Hilbert space. As the system evolves, its wave function wanders through this space exploring its myriad probabilistic possibilities. The time evolution of its journey is derived from its total energy in a manner directly analogous to the Hamiltonian formalism of classical mechanics, resulting in the well-known time-dependent Schrödinger equation. Because \left | \Psi \right |^{2} is a probability density, its integral over all of the system’s degrees of freedom must equal 1. This irreducibly probabilistic aspect of the wave function is known as the Born Rule (after Max Born who first proposed it), and the mathematical framework that preserves it in QM is known as unitarity. [Fun fact: Pop-singer Olivia Newton-John is Born’s granddaughter!]

Notice that \Psi is a single complex-valued wave function of the collective states of all its constituent particles. This makes for some radical departures from classical physics. Unlike a system of little hard marbles, it can interfere with itself—not unlike the way the countless harmonics in sound waves give us melodies, harmonies, and the rich tonalities of Miles Davis’ muted trumpet or Jimi Hendrix’s Stratocaster. The history of the universe is a grand symphony—the music of the spheres! Its harmonies also lead to entangled states, in which one part may not be uniquely distinguishable from another. So, it will not generally be true that the wave function of a collection of particles factors into the product of the individual particle wave functions,

\Psi\left ( \vec{r_{1}}, \vec{r_{2}}... \vec{r_{n}} \right ) \neq \Psi\left ( \vec{r_{1}} \right )\Psi\left ( \vec{r_{2}} \right )... \Psi\left ( \vec{r_{n}} \right )
until the symphony progresses to a point where individual particle histories decohere enough to be distinguished from each other—melodies instead of harmonies.

Another consequence of this wave-like behavior is that position and momentum can be converted into each other with a mathematical operation known as a Fourier transform. As a result, the Hilbert space may be specified in terms of position or momentum, but not both, which leads to the famous Heisenberg Uncertainty Principle,

\Delta x\Delta p \geqslant \hbar/2

where \hbar is the reduced Planck constant. It’s important to note that this uncertainty is not epistemic—it’s an unavoidable consequence of wave-like behavior.

When I was first taught the Uncertainty Principle in my undergraduate Chemistry series, it was derived by modeling particles as tiny pool-ball “wave packets” whose locations couldn’t be observed by bouncing a tiny cue-ball photon off them without batting them into left field with a momentum we couldn’t simultaneously see. As it happens, this approach does work, and is perhaps easier for novice physics and chemistry students to wrap their heads around. But unfortunately, it paints a completely wrong-headed picture of the underlying reality. We can pin down the exact location of a particle, but in so doing we aren’t simply batting it away—we’re destroying whatever information about momentum it originally had, rendering it completely ambiguous, and vice versa (in the quantum realm, paired variables that are related to each other like this are said to be canonically conjugate). The symphony is, to some extent, irreducibly fuzzy!

So… the unfolding story of the universe is a grand symphony of probability amplitudes exploring their Hilbert space worlds along deterministic paths, often in entangled states where some of their parts aren’t entirely distinct from each other, and acquiring whatever properties we find them to have only when they’re measured, many of which cannot simultaneously have exact values even in principle. Strange stuff to say the least! But the story doesn’t end there. Before we can decipher what it all means (or, I should say, get as close to doing so as we ever will) there are two more subtleties to this bizarre quantum world we still need to unpack… measurement and non-locality.

The first thing we need to wrap our heads around is observation, or in quantum parlance, measurement. In classical systems matter inherently possesses the properties that it does, and we discover what those properties are when we observe them. My sparkling water objectively exists in a red glass located about one foot to the right of my keyboard, and I learned this by looking at it (and roughly measuring the distance with my thumb and fingers). In the quantum realm things are messier. My glass of water is really a bundle of probabilistic particle states that in some sense acquired its redness, location, and other properties by the very act of my looking at it and touching it. That’s not to say that it doesn’t exist when I’m not doing that, only that its existence and nature aren’t entirely independent of me. How does this work? In quantum formalism, the act of observing a system is described by mathematical objects known as operators.
You can think of an operator as a tool that changes one function into another one in a specific way—like say, “take the derivative and multiply by ten.” The act of measuring some property A (like, say, the weight or color of my water glass) will apply an associated operator \hat A to its initial wave function state \Psi_{i} and change it to some final state \Psi_{f},

\hat A \Psi_{i} = \Psi_{f}

For every such operator, there will be one or more states \Psi_{i} could be in at the time of this measurement for which \hat A would end up changing its magnitude but not its direction,

\begin{bmatrix} \hat A \Psi_{1} = a_{1}\Psi_{1}\\ \hat A \Psi_{2} = a_{2}\Psi_{2}\\ ...\\ \hat A \Psi_{n} = a_{n}\Psi_{n} \end{bmatrix}

These states are called eigenvectors, and the constants a_{n} associated with them are the values of A we would measure if \Psi is in any of these states when we observe it. Together, they define a coordinate system associated with A in the Hilbert space that \Psi can be specified in at any given moment in its history. If \Psi_{i} is not in one of these states when we measure A, doing so will force it into one of them. That is,

\hat A \Psi_{i} \rightarrow \Psi_{n}

and a_{n} will be the value we end up with. The squared magnitude of the projection of \Psi_{i} on any of the n axes gives the probability that measuring A will put the system into that state, with the associated eigenvalue being what we measure,

P(a_{n}) = \left | \Psi_{i} \cdot \Psi_{n} \right |^{2}

So… per the Schrödinger equation, our wave function skips along its merry, deterministic way through a Hilbert space of unitary probabilistic states. Following a convention used by Penrose (2016), let’s designate this part of the universe’s evolution as \hat U. All progresses nicely, until we decide to measure something—location, momentum, spin state, etc. When we do, our wave function abruptly (some would even be tempted to say magically) jumps to a different track and spits out whatever value we observe, after which \hat U starts over again in the new track. This event—let’s call it \hat M—has nothing whatsoever to do with the wave function itself. The tracks it jumps to are determined by whatever properties we observe, and the outcomes of these jumps are irreducibly indeterminate. We cannot say ahead of time which track we’ll end up on even in principle. The best we can do is state that some property A has such and such probability of knocking \Psi into this or that state and returning its associated value. When this happens, the wave function is said to have “collapsed.” [Collapsed is in quotes here for a reason… as we shall see, not all interpretations of quantum mechanics accept that this is what actually happens!]

It’s often said that quantum mechanics only applies to the subatomic world, but on the macroscopic scale of our experience classical behavior reigns. For the most part this is true. But… as we’ve seen, \Psi is a wave function, and waves are spread out in space. Subatomic particles are only tiny when we observe them to be located somewhere. So, if \hat M involves a discrete collapse, it happens everywhere at once, even over distances that according to special relativity cannot communicate with each other—what some have referred to as “spooky action at a distance.” This isn’t mere speculation, nor a problem with our methods—it can be observed. Consider two electrons in a paired state with zero total spin.
Such states (which are known as singlets) may be bound or unbound, but once formed they will conserve whatever spin state they originated with. In this case, since the electron cannot have zero spin, the paired electrons would have to preserve antiparallel spins that cancel each other. If one were observed to have a spin of, say, +1/2 about a given axis, the other would necessarily have a spin of -1/2.

Suppose we prepared such a state unbound, and sent the two electrons off in opposite directions. As we’ve seen, until the spin state of one of them is observed, neither will individually be in any particular spin state. The wave function will be an entangled state of two possible outcomes, +/- and -/+ about any axis. Once we observe one of them and find it in, say, a “spin-up” state (+1/2 about a vertical axis), the wave function will have collapsed to a state in which the other must be “spin-down” (-1/2), and that will be what we find if it’s observed a split second later.

But what would happen if the two measurements were made over a distance too large for a light signal to travel from the first observation point to the second one during the time delay between the two measurements? Special relativity tells us that no signal can communicate faster than the speed of light, so how would the second electron know that it was supposed to be in a spin-down state? Light travels 11.8 inches in one nanosecond, so it’s well within existing microcircuit technology to test this, and it has been done on many occasions. The result…? The second electron is always found in a spin state opposite that of the first. Somehow, our second electron knows what happened to its partner… instantaneously!

If so, this raises some issues. Traditional QM asserts that the wave function gives us a complete description of a system’s physical reality, and the properties we observe it to have are instantiated when we see them. At this point we might ask ourselves two questions;

1)  How do we really know that prior to our observing it, the wave function truly is in an entangled state of two as-yet unrealized outcomes? What if it’s just probabilistic scaffolding we use to cover our lack of understanding of some deeper determinism not captured by our current QM formalism?

2)  What if the unobserved electron above actually had a spin-up property that we simply hadn’t learned about yet, and would’ve had it whether it was ever observed or not (a stance known as counterfactual definiteness)? How do we know that one or more “hidden” variables of some sort hadn’t been involved in our singlet’s creation, and sent the two electrons off with spin state box lunches ready for us to open without violating special relativity (a stance known as local realism)?

Together, these comprise what’s known as local realism, or what physicist John Bell referred to as the “Bertlmann’s socks” view (after Reinhold Bertlmann, a colleague of his at CERN). Bertlmann was known for never wearing matching pairs of socks to work, so it was all but guaranteed that if one could observe one of his socks, the other would be found to be differently colored. But unlike our collapsed electron singlet state, this was because Bertlmann had set that state up ahead of time when he got dressed… a “hidden variable” one wouldn’t be privy to unless they shared a flat with him. His socks would already have been mismatched when we discovered them to be, so no “spooky action at a distance” would be needed to create that difference when we first saw them.
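To make the geometry of these experiments concrete, it helps to write down the standard textbook math (my summary, using the same notation as above). The singlet state of our two electrons is

\Psi_{singlet} = \frac{1}{\sqrt{2}}\left ( \left | \uparrow \right \rangle_{1}\left | \downarrow \right \rangle_{2} - \left | \downarrow \right \rangle_{1}\left | \uparrow \right \rangle_{2} \right )

If Alice finds electron 1 spin-up about the vertical axis, electron 2 collapses to \left | \downarrow \right \rangle. If Bob then measures electron 2 about an axis tilted by an angle \theta from vertical, projecting onto the eigenvectors of his measurement gives

P\left ( +\tfrac{1}{2} \right ) = \sin^{2}(\theta/2), \qquad P\left ( -\tfrac{1}{2} \right ) = \cos^{2}(\theta/2)

so for \theta = 45° Bob finds spin-up about his tilted axis roughly 14.6% of the time. Averaged over many runs, QM predicts the correlation E(\theta) = -\cos\theta between the two outcomes, and it is exactly this correlation that Bell-type inequalities pit against every locally real alternative.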
In 1964 Bell proposed a way to test this against the entangled states of QM. Spin state can only be observed about one axis at a time. Our experiment can look for +/- states about any axis, but not about several at once. If an observer “Alice” finds one of the electrons in a spin-up state, the second electron will be in a spin-down state. What would happen if another observer “Bob” then measured its spin state about an axis at, say, a 45-deg. angle to vertical? The projection of the spin-down wave function on the eigenvector coordinate system of Bob’s measurement will translate into probabilities of observing + or – states in that plane. Bell produced a set of inequalities bearing his name which showed that if the electrons in our singlet state had in fact been dressed in different colored socks from the start, experiments like this would yield outcomes that differ statistically from those predicted by traditional QM. This too has been tested many times, and the results have consistently favored the predictions of QM, leaving us with three options;

a)  Local realism is not valid in QM. Particles do not inherently possess properties prior to our observing them, and indeterminacy and/or some degree of “spooky action at a distance” cannot be fully exorcised from \hat M.

b)  Our understanding of QM is incomplete. Particles do possess properties (e.g. spin, location, or momentum) whether we observe them or not (i.e. – counterfactuals about measurement outcomes exist), but our understanding of \hat U and \hat M doesn’t fully reflect the local realism that determines them.

c)  QM is complete, and the universe is both deterministic and locally real without the need for hidden variables, but counterfactual definiteness is an ill-formed concept (as in the "Many Worlds Interpretation" for instance).

Nature seems to be telling us that we can’t have our classical cake and eat it. There’s only room on the bus for one of these alternatives.

Several loopholes have been suggested to exist in Bell’s inequalities through which underlying locally real mechanics might slip. These have led to ever more sophisticated experiments to close them, which continue to this day. So far, the traditional QM framework has survived every attempt to up the ante, painting Bertlmann’s socks into an ever-shrinking corner. In 1966 Bell, and independently in 1967 Simon Kochen and Ernst Specker, proved what has since come to be known as the Kochen-Specker Theorem, which tightens the noose around hidden variables even further. What they showed was that regardless of non-locality, hidden variables cannot account for indeterminacy in QM unless they’re contextual. Essentially, this all but dooms counterfactual definiteness in \hat M. There are ways around this (as there always are if one is willing to go far enough to make a point about something). The possibility of “modal” interpretations of QM has been floated, as has the notion of a “subquantum” realm where all of this is worked out. But these are becoming increasingly convoluted, and ripe for Occam’s ever-present razor. As of this writing, hidden variables theories aren’t quite dead yet, but they are in a medically induced coma.

In case things aren’t weird enough for you yet, note that a wave function collapse over spacelike distances raises the specter of the relativity of simultaneity. Per special relativity, over such distances the Lorentz boost blurs the distinction between past and future.
In situations like these it’s unclear whether the wave function was collapsed by the first observation or the second one, because which one is in the future of the other is a matter of which inertial reference frame one is viewing the experiment from. Considering that you and I are many-body wave functions, anything that affects us now, like say, stubbing a toe, collapses our wave function everywhere at once. As such, strange as it may sound, in a very real sense it can be said that a short while ago your head experienced a change because you stubbed your toe now, not back then. And… it will experience a change shortly because you did as well. Which of these statements is correct depends only on the frame of reference from which the toe-stubbing event is viewed. It’s important to note that this has nothing to do with the propagation of information along our nerves—it’s a consequence of the fact that as “living wave functions”, our bodies are non-locally spread out across space-time to an extent that slightly blurs the meaning of “now”. Of course, the elapsed times associated with the size of our bodies are too small to be detected, but the basic principle remains.

Putting it all together

Whew… that was a lot of unpacking! And the world makes even less sense now than it did when we started. Einstein once said that he wanted to know God’s thoughts; the rest were just details. Well, it seems the mind of God is more inscrutable than we ever imagined! But now we have the tools we need to begin exploring some of the ways His thoughts have been written into the fabric of creation. Our mission, should we choose to accept it, is to address the following:

1)  What is this thing we call a wave function? Is it ontologically real, or just mathematical scaffolding we use to make sense of things we don’t yet understand?

2)  What really happens when a deterministic, well-behaved \hat U symphony runs headlong into a seemingly abrupt, non-deterministic \hat M event? How do we get them to share their toys and play nicely with each other?

3)  If counterfactual definiteness is an ill-formed concept and every part of the wave function is equally real, why do our observations always leave us with only one experienced outcome? Why don’t we experience entangled realities, or even multiple realities?

In the next installment in this series we’ll delve into a few of the answers that have been proposed so far. The best is yet to come, so stay tuned!

Penrose, R. (2016). Fashion, faith, and fantasy in the new physics of the universe. Princeton University Press, Sept. 13, 2016. ISBN: 0691178534; ASIN: B01AMPQTRU. Available online; accessed June 11, 2017.

Posted in Metaphysics, Physics | 10 Comments

Scott Church guest blogging
Posted in Blog | 1 Comment

Spam filter problems
For the past couple of months my spam filter (Akismet) has falsely identified a rather large number of legitimate comments as spam. (For those of you who arrived on the internet yesterday, "spam" is off-topic comments trying to get people to click on links to buy things.  Mostly it is left by "bots" that automatically scan the internet.)  When I installed a second layer of protection called "Cookies for Comments" a few months ago, Akismet was processing over a million spam comments a month, causing a slowdown on the server!  The vast majority of these were caught and removed by the filter, but sometimes it gets it wrong and lets spam through (a "false negative") or rejects legit comments (a "false positive").
I'm periodically checking the spam filter to rescue these false positives (just did 2 today), but you can help me out by doing the following:

• Send me an email if you try to leave a legitimate comment and it does not appear on the website within a few hours.  You can find a working email for me on my personal website, which is linked to in the bar at the top of the page.

• If convenient, go ahead and include a copy of your comment in the email.  (Generally it's a good idea to save long comments on your home computer before submitting, but if you didn't do this, you can often reclaim it by pressing the `back' button on your browser.)  My spam filter keeps a copy of all comments flagged as spam for 15 days, so I probably don't actually need this, but rarely there are other technical problems that cause comments to disappear.

• Please don't take it personally if your comment doesn't appear.  The spam filtering is done automatically by a hidden algorithm, and I don't have anything to do with it!  If you are an insecure person, please don't waste time worrying that maybe you stepped over an invisible line and accidentally insulted me, and therefore I blocked your comments without telling you.  If you are a flesh-and-blood human being, your comment was probably legitimate.  While I do occasionally remove "by hand" comments that violate the rules, I generally try to notify the person by email, or in the comments section, except for the worst offenders.  So unless you went on a blasphemous tirade or are an obviously-insulting troll, that's probably not you!  (And even if that is you, you are certainly entitled to respectfully ask by email—once, anyway—for an explanation of why your comment was deleted.)

• All this assumes you left me a real email address.  Of course, if you violated the rules by leaving a fake email address, then you might not receive my explanation.  In that case, you deserve what you get, and I may also delete your comment!  (But sometimes, in the case of commenters otherwise engaging in good faith, I have looked the other way on this issue, in order to show mercy to the weak.)  Obviously, I promise not to give your email address to the spammers, or otherwise share this information without your permission!

• It is also necessary for your web browser to accept "cookies" in order for you to successfully leave a comment.  If your browser doesn't accept them, you will be redirected to a page with an explanation & instructions.  If you are wrongly redirected to this page, please send me an email saying so.  Also, if for some reason you don't want to accept cookies from other websites, you can add an exception for Undivided Looking.

Christ is risen from the dead,
Trampling down death by death,
And upon those in the tombs
Bestowing life!
-Paschal troparion

Posted in Blog | 2 Comments

Christ is Risen!
Alleluia!  Christ is Risen!

The Resurrection of Christ by St. Raphael (tapestry version in the Vatican museum).

Actually, I don't have a lot more than that to say right now.  But it seemed relevant, so I thought I'd post it.  If you want to read more about the significance of this event, click here.

Posted in Theology | 14 Comments

Remember you are dust...
I haven't been able to write much recently due to a recurrence of tendonitis in my wrist, a repetitive stress injury from writing and typing too much.  I expect this to be temporary, in the sense that it has always gone away before, with enough rest and ice.
(Which is like the smoker who said to an ex-smoker: you've only quit once; I've quit hundreds of times!)

On a brighter note, I'm also being distracted by job interviews for tenure-track faculty positions at two excellent UC schools (Berkeley & Santa Barbara).  I will not call these permanent positions since such permanence is to be found only in Heaven: life is short.  But it would still be nice to settle down for a few decades...  Anyway, these are very exciting places for physics research, and I would be extremely pleased to get an offer from either of them!

I've still got plenty of the content you love planned, once I'm past these issues!  There's a series on "Comparing Religions" that's almost done, so I might be able to get that out without too much more typing.

I apologize to anyone whose questions I haven't answered; I'll email you if I ever get around to it.  But it was nice to see the conversation continue for a while without my needing to continually stoke it.  I will continue to read and moderate the discussion, so you are free to continue talking amongst yourselves...

I...I...I...I...I...  Is death really necessary, just for us to come to an end of our continual self-absorption?

Posted in Blog | 8 Comments
Machine Learning for Quantum Many-body Physics

Excited eigenstates and swarm intelligence algorithms
Bagrov, Andrey
Accessing excited states of a many-body quantum Hamiltonian can be a challenging task, even more complicated than obtaining its ground state. The recently suggested RBM variational ansatz for many-body wavefunctions seems to be pretty universal (modulo some limitations), and one might hope that it can serve as a good approximation for excited eigenstates of a variety of quantum systems. To make it converge to a state in the middle of the spectrum, we suggest employing a particle swarm optimization (PSO) algorithm instead of the gradient descent-based schemes. Each particle in a swarm represents a neural network, and the fitness function to be optimized can be the energy variance, higher moments of the Hamiltonian, or a certain function of them known to be minimized on eigenstates. PSO algorithms are easy to parallelize and do not require numerically heavy operations, and the only limitation is due to the intrinsic stochasticity of the neural network approach, which makes it difficult to resolve individual eigenstates.
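Read literally, the optimization loop being proposed is straightforward to sketch. The following generic particle swarm minimizer is mine, not the authors' code, and the fitness function is a toy stand-in: in the intended application each parameter vector would define an RBM wavefunction, and the fitness would be a Monte Carlo estimate of the energy variance <H^2> - <H>^2, which vanishes exactly on any eigenstate.

import numpy as np

def energy_variance(theta):
    # Toy stand-in with its minimum at theta = 0. A real implementation
    # would build the RBM wavefunction from theta and sample Var(H).
    return float(np.sum(theta ** 2))

def pso_minimize(fitness, dim, n_particles=30, n_steps=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.normal(size=(n_particles, dim))   # each particle = one network
    vel = np.zeros_like(pos)
    best_pos = pos.copy()                       # personal best positions
    best_val = np.array([fitness(p) for p in pos])
    g = best_pos[best_val.argmin()].copy()      # global best position
    for _ in range(n_steps):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([fitness(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g = best_pos[best_val.argmin()].copy()
    return g, best_val.min()

g, v = pso_minimize(energy_variance, dim=10)
print("best fitness found:", v)

Note that the loop needs only fitness evaluations, no gradients, which is what makes it easy to parallelize across particles.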
Quantum Error Correction with Recurrent Neural Networks
Baireuther, Paul
In quantum computation one of the key challenges is to build fault-tolerant logical qubits. A logical qubit consists of several physical qubits. In stabilizer codes, a popular class of quantum error correction schemes, part of the system of physical qubits is measured repeatedly, without measuring (and collapsing by the Born rule) the state of the encoded logical qubit. These repetitive measurements are called syndrome measurements, and must be interpreted by a classical decoder in order to determine what errors occurred on the underlying physical system. The decoding of these space- and time-correlated syndromes is a highly non-trivial task, and efficient decoding algorithms are known only for a few stabilizer codes. In our research we design and train decoders based on recurrent neural networks. The training is done using only experimentally accessible data. A key requirement for an efficient decoder is that it can decode an arbitrary and unspecified number of error correction cycles. To achieve this we use so-called long short-term memory cells [1]. These recurrent neural network building blocks have an internal memory. During training, the decoder learns how to update and utilize its internal memory in order to detect errors on the logical qubit. The trained decoder is therefore a round-based algorithm, rather than a rigid pattern recognition scheme. It can process the syndrome information in real time, without having to wait for the quantum computation to be completed. In our recent work [2] we have focused on one type of stabilizer code, the surface code, which is currently being implemented by several experimental groups [3-5]. We have trained and tested the neural network decoder on both a simple circuit model and a density matrix simulator with experimental parameters. In the presence of correlated bit-flip and phase-flip errors the neural network decoder outperforms the popular minimum-weight perfect matching decoder. However, our neural network decoder is not tailored to the specifics of the surface code, and should also be applicable to other stabilizer codes, such as the color code [6].
References
[1] S. Hochreiter and J. Schmidhuber, Neural Computation 9, 1735 (1997).
[2] P. Baireuther, T. E. O'Brien, B. Tarasinski, and C. W. J. Beenakker, arXiv:1705.07855.
[3] J. Kelly et al., Nature 519, 66 (2015).
[4] M. Takita et al., Phys. Rev. Lett. 117, 210505 (2016).
[5] R. Versluis et al., Phys. Rev. Applied 8, 034021 (2017).
[6] H. Bombin and M. A. Martin-Delgado, Phys. Rev. Lett. 97, 180501 (2006).

Learning Hamiltonians from local data
Bairey, Eyal
Recent works have succeeded in recovering the local Hamiltonian of an isolated quantum system from a single eigenstate. However, these methods require the measurement of long-range correlations throughout the entire system, as well as the system being in a pure state. Here we extend these methods to any mixed state which commutes with the Hamiltonian. In particular, our methods apply to thermal mixed states. In addition, when the underlying Hamiltonian is defined on a lattice with short-range interactions, our method allows us to recover the Hamiltonian of a subsystem by performing measurements restricted to that subsystem and its boundary. When restricted to thermal states, our method can be viewed as a quantum generalization of the well-known problem of learning graphical models, such as Boltzmann machines. Surprisingly, whereas the sampling complexity of the classical problem is exponential in the local degree of the underlying graph, we show that under reasonable assumptions our algorithm is polynomial. Finally, we show how to adapt our method such that it can be used to learn the underlying Hamiltonian from sampling the time dynamics of the system.

Locating spin-liquid transitions in three-dimensional quantum magnets
Buessen, Finn Lasse
Quantum magnetism and the formation of quantum spin liquids remain among the most intriguing aspects of contemporary solid state physics, driving high research activity of experimentalists and theorists alike. To substantiate experimental findings with appropriate theoretical understanding, an efficient methodological framework is often vital to study frustrated quantum spin models in three dimensions -- a challenging regime that is inaccessible to many conventional (both numerical and analytical) methods. Utilizing a recently developed pseudo-fermion functional renormalization group (pf-FRG) approach, we demonstrate its capability to qualitatively capture the interplay between spin liquid phases and magnetic order even at finite temperature. Quantitatively pinpointing the precise location of phase boundaries based on transition signatures in physical observables, however, can be challenging. Therefore, we have started to investigate machine learning as an alternative approach to search for signatures of phase transitions in the vast amount of data on single- and two-particle vertex functions that we generate in pf-FRG calculations.

Progressive lifting of the ground-state degeneracy of the long-range kagome Ising antiferromagnet
Colbois, Jeanne
The nearest-neighbour antiferromagnetic Ising model on the kagome lattice is well known to be highly frustrated, and in particular to have a very large macroscopic ground-state degeneracy [1][2]. Recently, a candidate ground state for the model with dipolar couplings has been proposed [3]. In order to study the degeneracy lifting that leads to the ground state of the dipolar model, we implement a rejection-free dual worm algorithm [4] and use it to study the antiferromagnetic Ising model on the kagome lattice with up to fourth-neighbour interactions.
For the model with up to third-neighbour interactions, we show that the ground state exhibits five different phases as a function of the ratio $J_3/J_2$, some of which still have a non-zero residual entropy. Surprisingly, for the model with dipolar couplings truncated at fourth neighbours, we find a ground state which is neither one of those of the $J_2$-$J_3$ model, nor the one proposed for the full dipolar model [3]. This new state, however, is not the ground state for the model with full dipolar couplings, leading to the conclusion that further neighbours beyond the fourth one play an important role in the selection of the ground state of the dipolar model.
References
[1] K. Kanô and S. Naya, Prog. Theor. Phys. 10, 158 (1953)
[2] A. Sütö, Z. Phys. B 44, 121 (1981)
[3] I.A. Chioar, N. Rougemaille and B. Canals, Phys. Rev. B 93, 214410 (2016)
[4] G. Rakala and K. Damle, Phys. Rev. E 96, 023304 (2017)

Artificial Neural Network Representation of Spin Systems in a Quantum Critical Regime
Czischek, Stefanie
We use the newly developed artificial-neural-network (ANN) representation of quantum spin-1/2 states based on restricted Boltzmann machines to study the dynamical build-up of correlations after sudden quenches in the transverse-field Ising model with and without longitudinal field. We calculate correlation lengths and study their time evolution after sudden quenches from a large initial transverse field to different final fields. By comparison with exact numerical solutions given by exact diagonalization or tDMRG, we show that in regimes of large correlation lengths and volume-law entanglement, correspondingly large network sizes are necessary to capture the exact dynamics. On the other hand, we show a high accuracy of the network representation for quenches into regimes of smaller correlation lengths, even for small network sizes scaling linearly with the system size. In these regimes the ANN representation shows promising results which suggest that the method may be efficiently used for more complex systems in one or higher dimensions.

Many-body localization in large quantum chains
Doggen, Elmer
We study quench dynamics of the Heisenberg spin chain with a random on-site magnetic field, using a combination of the time-dependent variational principle, machine learning, and exact diagonalization, with a focus on chains up to 100 spins in length. Around the regime where previous studies have reported the existence of the many-body localization transition, we instead find a wide range of disorder strengths with slow but finite transport. A lower bound for a true many-body localization transition, higher than previous estimates, is presented.

Machine learning of quantum phase transitions
Dong, Xiaoyu
Machine learning algorithms provide a new perspective on the study of physical phenomena. In this paper, we explore the nature of quantum phase transitions using a multi-color convolutional neural network (CNN) in combination with quantum Monte Carlo simulations. We propose a method that compresses d+1 dimensional space-time configurations to a manageable size and then uses them as the input for a CNN. We test our approach on two models and show that both continuous and discontinuous quantum phase transitions can be well detected and characterized. Moreover, we show that intermediate phases, which were not trained, can also be identified using our approach.
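As a concrete, entirely hypothetical version of the kind of classifier described in the last abstract: the authors do not specify an architecture, 'multi-color' is read here as multi-channel input, and every layer size below is illustrative. The sketch maps compressed space-time configurations, stored as multi-channel images, to phase labels.

import torch
import torch.nn as nn

class PhaseCNN(nn.Module):
    def __init__(self, channels=2, n_phases=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),   # fixed 4x4 feature map for any input size
        )
        self.classify = nn.Linear(32 * 4 * 4, n_phases)

    def forward(self, x):              # x: (batch, channels, space, imaginary time)
        return self.classify(self.features(x).flatten(1))

# Toy usage: a batch of 8 two-channel 32x32 compressed configurations.
model = PhaseCNN()
logits = model(torch.randn(8, 2, 32, 32))   # train with nn.CrossEntropyLoss

Untrained intermediate phases could then be flagged wherever the classifier is confident about neither trained label, which is one plausible reading of how such phases are identified.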
Reinforcement learning with neural networks for quantum feedback: a new approach to quantum memory
Fösel, Thomas
The past few years have seen dramatic demonstrations of the power of neural networks applied to challenging real-world problems in many domains. In the search for optimal control sequences, where success can only be judged with some time delay, reinforcement learning is the method of choice. We have explored how a neural-network based agent can be trained, using reinforcement learning, to generate optimal control sequences for quantum feedback as it interacts with a quantum system. We apply this to the problem of stabilizing quantum memories based on few-qubit systems, where the qubit layout and the available set of gates are specified by the user.

Investigating ultrafast quantum spin dynamics with machine learning
Fabiani, Giammarco
G. Fabiani, Th. Rasing, J.H. Mentink
Radboud University, Institute for Molecules and Materials, Heyendaalseweg 135, 6525 AJ, Nijmegen, the Netherlands
The use of femtosecond laser pulses offers the exciting possibility to control the exchange interaction, the strongest interaction between spins in magnetic materials, on ultrashort time scales [1]. Moreover, recently it was shown that even weak perturbations of the exchange interaction can trigger genuine quantum spin dynamics in magnetic materials [2], which is characterized by femtosecond longitudinal oscillations of the magnetic order parameter [3]. This suggests intriguing possibilities to enhance quantum effects in the short-time dynamics of magnetic materials. However, so far this dynamics has been studied only in the linear response regime, and it is unclear how to access and exploit strongly nonlinear quantum dynamics. To investigate these issues, we apply the machine-learning inspired approach recently developed by Carleo and Troyer [4] which, as they showed, efficiently captures both the ground state and time evolution of quantum spin models in one and two dimensions. Here we will present our results for the ground state energy and staggered magnetization of the two-dimensional Heisenberg model on a square lattice, which is the simplest possible model that captures the spin dynamics of antiferromagnetic Mott-Hubbard insulators. By extrapolating our results to the thermodynamic limit, we find good agreement with ground state calculations performed with other techniques. For the same system, we also show that this method is able to capture the ultrafast spin dynamics triggered by an ultrashort perturbation of the exchange interaction. Our results pave the way to studying strongly nonlinear quantum dynamics of macroscopic magnetic materials, with potential predictive power for experiments based on ultrashort laser pulses in the optical and THz regime.
REFERENCES
[1] J.H. Mentink, J. Phys.: Condens. Matter 29, 453001 (2017)
[2] Jimin Zhao, A. V. Bragas, D. J. Lockwood, and R. Merlin, Phys. Rev. Lett. 93, 107203
[3] Bossini et al., arXiv:1710.03143
[4] G. Carleo and M. Troyer, Science 355, 602 (2017)

Neural-Network and Tensor-Network duality with applications to chiral topological states and supervised learning
Glasser, Ivan
Neural-network quantum states have recently been introduced as an Ansatz for describing the wave function of quantum many-body systems. Particularly promising results have been obtained with Boltzmann machines, a kind of probabilistic graphical model.
These models can also be seen as tensor networks in particular geometries, and we show how to exploit these strong connections with the example of restricted Boltzmann machines: short-range restricted Boltzmann machines are entangled plaquette states, while fully connected restricted Boltzmann machines are string-bond states with a nonlocal geometry and low bond dimension. String-bond states also provide a generic way of enhancing the power of neural-network quantum states and a natural generalization to systems with larger local Hilbert space. We compare the advantages and drawbacks of these different classes of states and present a method to combine them together. This allows us to benefit from both the entanglement structure of tensor networks and the efficiency of neural-network quantum states in a single Ansatz capable of targeting the wave function of strongly correlated systems. While it remains a challenge to describe states with chiral topological order using traditional tensor networks, we show that, because of their nonlocal geometry, neural-network quantum states and their string-bond-state extension can describe a lattice fractional quantum Hall state exactly. In addition, we provide numerical evidence that neural-network quantum states can approximate a chiral spin liquid with better accuracy than entangled plaquette states and local string-bond states. Furthermore, we demonstrate that the connection between string-bond states and restricted Boltzmann machines can also be used in traditional machine-learning applications. We provide an algorithm for optimizing string-bond states in a supervised learning setting and discuss improvements over other tensor-network algorithms.

Gaussian Process Wave Functions
Glielmo, Aldo
Neural networks (NNs) and Gaussian processes (GPs) are arguably the most widespread algorithms for regression of complex functions, each possessing relative strengths and weaknesses. However, while NNs have recently received attention as efficient many-body wave-function ansätze [Carleo and Troyer, Science (2017)], the representative power of GPs in the same context has not been explored so far. I will present a first attempt in this direction. The GP ansatz we propose can be understood in terms of the GP kernel function, which implicitly models the interactions taken into account. Inspired by the success of correlated wave functions like the Slater-Jastrow or the correlator product state, we devise a kernel which can be thought of as a generalisation of the above two, encompassing interactions of any order at any distance (up to the full system size). Tests on Hubbard systems in 1 and 2 spatial dimensions reveal the competitiveness of the described model. Notably, regularised training on exact data from small (affordable) systems yields a faithful representation in large systems as well. Although computationally inexpensive (training takes minutes on a laptop computer), this approach suffers from inevitable residual finite-size effects, which however can be tackled by a variational optimisation of the small-database entries.

Supervised Learning of Exotic Tensor Spin Order
Greitemann, Jonas
We apply supervised machine learning to the identification of exotic spin phases with high-rank tensor order parameters. Following Ponte and Melko (PRB 96, 205146 (2017)), we find that the decision function of a support vector machine (SVM) produces the scalar order parameter.
Moreover, despite our reliance on supervision, we show that in a situation where the critical temperature is not accurately known, we can still obtain the order parameter and may even be able to infer the physical critical temperature in a learning-by-confusion scheme. In addition to reproducing the order parameter curve, we are able to infer the analytic form of the tensor order parameter for a variety of symmetries with tensor order parameters of rank up to 6. This may prove useful in the exploration of exotic magnetic orders in spin liquid candidates.

Playing the Ice Game
Kao, Ying-Jer
In classical ice models, the ice rule imposes a strong constraint on the spin configurations and makes the conventional single spin-flip Monte Carlo update inefficient. On the other hand, by proposing global updates in the form of loops, the loop algorithm can provide an efficient update scheme for ice systems. In general, finding global updates is problem dependent and requires sophisticated algorithm design. Reinforcement learning is a fast-growing research field due to its outstanding exploration capability in dynamic environments. In this work, we apply a reinforcement learning method that parametrizes the transition operator with neural networks. By promoting the Markov chain to a Markov decision process, the algorithm can adaptively search for a global update policy by interacting with the physical model. We observe the emergence of several global update patterns on the ice manifold discovered by the agent. It may serve as a more general framework for searching for update schemes in more complicated models.

A practical guide to training neural networks of quantum many-body systems
Lang, Thomas
Encoding the representation of the electronic wave function of a minuscule fragment of a crystal is a nearly impossible task. Machine learning promises to cut through the complexity and to allow for efficient encoding of a vastly complex system in a limited number of degrees of freedom by identifying the subtle, yet relevant signatures of phases of matter. We assess the efficiency and practical limits of the representational power of basic neural networks for the many-body wave functions of quantum spin systems. We identify the types of wave functions, bases and network topologies which are favorable, and investigate what features the neural networks learn and how to exploit them in scaling up the network. Finally, we comment on the predictive power and entanglement properties of neural networks trained on small portions of the full phase space.

Multigrid Renormalization
Lubasch, Michael
On this poster, I present our article [1] in which we use tensor networks to solve partial differential equations. More precisely, we combine the multigrid (MG) method with state-of-the-art concepts from the variational formulation of the numerical renormalization group. The resulting MG renormalization (MGR) method is a natural generalization of the MG method for solving partial differential equations. When the solution on a grid of N points is sought, our MGR method has a computational cost scaling as $\mathcal{O}(\log(N))$, as opposed to $\mathcal{O}(N)$ for the best standard MG method. Therefore MGR can exponentially speed up standard MG computations. To illustrate our method, we develop a novel algorithm for the ground state computation of the nonlinear Schrödinger equation. Our algorithm acts variationally on tensor products and updates the tensors one after another by solving a local nonlinear optimization problem.
We compare several different methods for the nonlinear tensor update and find that the Newton method is the most efficient as well as the most precise. The combination of MGR with our nonlinear ground state algorithm produces accurate results for the nonlinear Schrödinger equation on $N = 10^{18}$ grid points in three spatial dimensions.
[1] M. Lubasch, P. Moinier, and D. Jaksch, arXiv:1802.07259 (2018).

Systematic construction of density functionals based on matrix product state computations
Lubasch, Michael
On this poster, I present the article [1] in which we use simple machine learning concepts in the context of density functional theory. More precisely, we propose a systematic procedure for the approximation of density functionals in density functional theory that consists of two parts. First, for the efficient approximation of a general density functional, we introduce an efficient ansatz whose non-locality can be improved systematically. Second, we present a fitting strategy that is based on systematically increasing a reasonably chosen set of training densities. We investigate our procedure in the context of strongly correlated fermions on a one-dimensional lattice, for which we compute accurate training densities with the help of matrix product states. Focusing on the exchange-correlation energy, we demonstrate how an efficient approximation can be found that includes and systematically improves beyond the local density approximation. Importantly, this systematic improvement is shown for target densities that are quite different from the training densities.
[1] M. Lubasch, J. I. Fuks, H. Appel, A. Rubio, J. I. Cirac, and M.-C. Bañuls, New Journal of Physics 18, 083039 (2016).

Machine Learning Competing Orders
Matty, Michael
The entanglement spectrum is expected to provide a characterization of topologically ordered systems beyond traditional order parameters. Nevertheless, so far attempts at accessing this information have relied on the presence of translational symmetry. Here we introduce a framework for using a simple artificial neural network (ANN) to detect defining features of a fractional quantum Hall state, a charge density wave state and a localized state from entanglement spectra, even in the presence of disorder. We then successfully obtain a phase diagram for Coulomb-interacting electrons at fractional filling $\nu = 1/3$, perturbed by modified interactions and disorder. Our results benchmark well against existing measures in parts of the phase space where such measures are available. Hence we explicitly establish a finite region of robust topological order. Moreover, we establish that the ANN can indeed access and learn defining traits of topological as well as broken-symmetry phases using only the entanglement spectra of ground states as input.

Supervised learning of magnetic skyrmion phases
Mazurenko, Vladimir
The experimental discovery of magnetic skyrmions [Science 323, 915 (2009)] has initiated a new scientific race aiming at the development of ultradense memory storage technologies and logical gates. From a technological point of view, skyrmions are of great research interest because of their stability (the topological nature of a skyrmion prevents its transformation into a different magnetic configuration) and because of the possibility to manipulate them with an electric current of a very low density. One of the difficult tasks when studying materials that reveal skyrmion excitations is the construction of temperature/magnetic-field phase diagrams.
Supervised learning magnetic skyrmion phases
Mazurenko, Vladimir

The experimental discovery of magnetic skyrmions [Science 323, 915 (2009)] has initiated a new scientific race aiming at the development of ultradense memory storage technologies and logical gates. From a technological point of view, skyrmions are of great research interest because of their stability (the topological nature of a skyrmion prevents its transformation into a different magnetic configuration) and because of the possibility to manipulate them with an electric current of very low density. One of the difficult tasks in studying materials that reveal skyrmion excitations is the construction of temperature/magnetic-field phase diagrams. From the theoretical side, this requires the calculation of different correlation functions (spin-spin correlation functions, specific heat), the determination of the topological charge and the visualisation of numerous magnetic configurations. Motivated by the recent results reported in [Nature Physics 13, 431 (2017)], we used a similar neural-network-based approach for the classification of complex non-collinear magnetic configurations such as magnetic skyrmions and spin spirals. We construct the phase diagram for a ferromagnet with Dzyaloshinskii-Moriya interaction and quantitatively describe the transitional areas between different phases by using the values of the output neurons. To analyse the learning process, the arguments of the hidden-layer neurons were visualised. It was found that the network learns the magnetisation of a particular magnetic configuration. We defined an optimal number of hidden neurons needed for the classification of the topological magnetic phases. Our approach can be used for the analysis of experimental data obtained with the spin-polarised scanning tunnelling microscopy technique.

Self-learning Monte Carlo simulations of classical and quantum many-body systems
Meinerz, Kai

The application of machine learning approaches has seen a dramatic surge across a diverse range of fields that aim to benefit from their unmatched core abilities of dimensional reduction and feature extraction. In the field of computational many-body physics, machine learning approaches bear the potential to further improve one of the stalwarts of the field, Monte Carlo sampling techniques. Here we explore the capability of "self-learning" Monte Carlo approaches to dramatically improve the update quality in Markov chain Monte Carlo simulations. Such a self-learning approach employs reinforcement learning techniques to learn the distribution of accepted updates, which is then used to suggest updates that are almost always accepted, thereby dramatically reducing autocorrelation effects. It can, in principle, be applied to all existing Monte Carlo flavors and is tested here for both classical and quantum Monte Carlo techniques applied to a variety of many-body problems.

Exact construction of deep-Boltzmann-machine network to represent ground states of many-body Hamiltonians
Nomura, Yusuke

We show a deterministic approach to generate a deep-Boltzmann-machine (DBM) network that represents ground states of many-body lattice Hamiltonians. The approach reproduces the exact imaginary-time Hamiltonian evolution by dynamically modifying the DBM structure. The number of neurons grows linearly with both the system size and the imaginary time. Once the network is constructed, the physical quantities can be measured by sampling both the visible and hidden variables. The present construction of a classical DBM network provides a novel framework of quantum-to-classical mappings (in special cases, it becomes equivalent to the path-integral formalism). Reference: G. Carleo, Y. Nomura, and M. Imada, arXiv:1802.09558.

Versatile machine learning solver using restricted Boltzmann machine
Nomura, Yusuke

A variational wave function written in terms of a restricted Boltzmann machine (RBM) has been shown to be powerful in representing ground states of spin Hamiltonians. In the present study, we further improve the form of the variational wave function by combining the RBM with conventional wave functions used in physics. The combined wave function can be applied not only to bosonic models but also to fermionic models. The combined method substantially improves the accuracy beyond that achieved by the RBM and conventional wave-function methods separately, thus proving its power as an accurate solver. Reference: Y. Nomura, A. S. Darmawan, Y. Yamaji, and M. Imada, Phys. Rev. B 96, 205152 (2017).
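For orientation, the sketch below spells out the standard RBM amplitude that such variational studies start from, multiplied here by a simple Jastrow factor as a stand-in for the "conventional wave function" part of the combined ansatz. The actual combined ansatz of the reference (e.g. with pair-product wave functions for fermions) is more elaborate; parameters and sizes below are illustrative.

```python
import numpy as np

class RBMWavefunction:
    """Minimal RBM amplitude for N Ising spins s_i = +/-1:
    psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ji s_i)."""

    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.a = 0.01 * rng.standard_normal(n_visible)
        self.b = 0.01 * rng.standard_normal(n_hidden)
        self.W = 0.01 * rng.standard_normal((n_hidden, n_visible))

    def log_psi(self, s):
        theta = self.b + self.W @ s
        return self.a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

def log_jastrow(s, v=0.1):
    """Toy 'conventional' factor: a uniform nearest-neighbour Jastrow
    term on a ring, standing in for the physics-motivated part."""
    return v * np.sum(s * np.roll(s, 1))

rbm = RBMWavefunction(n_visible=10, n_hidden=20)
s = np.random.default_rng(1).choice([-1.0, 1.0], size=10)
log_amplitude = rbm.log_psi(s) + log_jastrow(s)  # combined ansatz, log scale
```

Working with log-amplitudes, as here, is the usual choice in variational Monte Carlo, since only amplitude ratios enter the sampling.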
Identification of the Berezinskii-Kosterlitz-Thouless transition in quantum and classical models with machine learning algorithms
Richter, Monika

In recent years machine learning has attracted great attention, also among physicists. It turned out that the algorithms, while having vast applications in decision making (for example in games such as chess or Go), pattern recognition, medical diagnosis and finance, can also be applied in the area of condensed matter physics. The study of many-body systems is a challenging task, as the size of the Hilbert space, and consequently the amount of data to analyze, grows exponentially with the size of the system. Therefore, such systems seem to be a perfect 'material' for machine learning algorithms, which are especially well suited to deal with big and complex sets of data. Such an approach has already been successfully used, e.g., for the Ising model. By analyzing Monte Carlo-generated samples, \textsl{supervised} as well as \textsl{unsupervised} learning methods (see e.g. Refs. \cite{key-1,key-2}) were able to correctly identify the transition temperature. The problem appears when one tries to deal with a non-conventional, more subtle type of phase transition, which occurs for example in the two-dimensional classical and quantum XY models. Due to the Mermin-Wagner theorem, systems described by these models cannot undergo the regular ferromagnet-paramagnet phase transition related to spontaneous symmetry breaking. However, this theorem does not exclude the Berezinskii-Kosterlitz-Thouless (BKT) transition, which consists in the unbinding, at some critical temperature $T_{KT}$, of vortex-antivortex pairs. In our study, we demonstrate how one can handle this type of phase transition. We make use of two different approaches. The first attempt is based on the idea already applied to the Ising model with the use of a feed-forward deep neural network \cite{key-1}. However, due to the continuous spin configurations that occur in models with the BKT transition, we do not train our neural network on the raw spin configurations. Instead, we transform the original data using trigonometric functions. This procedure makes the process of learning more efficient and results in more accurate predictions. In the second attempt we use the confusion scheme, which combines the \textsl{supervised} and \textsl{unsupervised} learning algorithms \cite{key-3}. The neural network is trained many times. Each time a different 'fictitious' critical temperature $T^*$ is assumed, and all configurations generated at temperatures $T<T^*$ are labelled as belonging to one phase, while those above are labelled as belonging to the other.
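A minimal sketch of this learning-by-confusion loop follows. It assumes that preprocessed features (e.g. the trigonometric transforms mentioned above) and the sampling temperature of each configuration are already available, and it uses plain logistic regression as a lightweight stand-in for the neural network; the function names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def confusion_curve(features, temperatures, trial_tcs):
    """For each trial critical temperature T*, relabel configurations by
    T < T*, train a classifier, and record its accuracy (ideally evaluated
    on held-out data).  The resulting curve is W-shaped, and its central
    peak marks the best estimate of the true transition temperature."""
    accuracies = []
    for t_star in trial_tcs:
        labels = (temperatures < t_star).astype(int)
        if labels.min() == labels.max():   # only one class: trivially 100%
            accuracies.append(1.0)
            continue
        clf = LogisticRegression(max_iter=1000).fit(features, labels)
        accuracies.append(clf.score(features, labels))
    return np.array(accuracies)

# `features` would be, e.g., (cos theta_i, sin theta_i) of the XY angles;
# `temperatures` holds the sampling temperature of each Monte Carlo sample.
```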
Predicting correlation functions with neural networks
Schindler, Frank

In essence, the goal of any physical theory is to relate existing experimental observations to accurate predictions of further such observations. In many-body theory in particular, we aim to build models that allow the prediction of all possible correlation functions from the knowledge of a few such correlation functions. In this talk, I want to show to what extent the standard approach to solving this problem, which involves writing down a Hamiltonian and diagonalizing it, can be circumvented by the use of machine learning techniques. I will focus on spin chains and present results obtained with simple neural network architectures.

Effect of Electric Field on Breathing Pyrochlores
Sriluckshmy, PV

The coupling between conventional (Maxwell) and emergent electrodynamics in quantum spin ice has been studied by Lantagne-Hurtubise et al. (Phys. Rev. B 96, 125145), who find that a uniform electric field can be used to tune the properties of both the ground state and the excitations of the spin liquid. Extending the study to the case of breathing pyrochlores, we find that a sufficiently strong electric field triggers a quantum phase transition into new U(1) quantum spin liquid phases along a direction that does not show a phase transition in the isotropic limit. We also analyse the phase diagram of breathing pyrochlores in the presence of an electric field using gauge mean-field theory. Finally, we discuss experimental aspects of our results.

Finite-size scaling of the MBL transition with deep neural networks
Théveniaut, Hugo

The many-body localization (MBL) transition separates an ergodic and a localized phase in a disordered interacting quantum system. It continues to defy a theoretical understanding, including the issue of its universality class. We investigate the MBL transition using deep neural networks fed directly with wavefunctions, and study in detail the influence of the neural network structure as well as finite-size effects using large systems. The input data is preprocessed in such a way that one neural network can handle different system sizes on an equal footing, paving the way to an estimate of the critical exponents of the MBL transition.

Machine learning the interacting ground state electron density
Vandermause, Jonathan

There is great interest in using machine learning methods to reduce the computational cost of materials simulation. Recent work has shown that it is possible to learn the mapping from external potential to electron density for small molecular systems using kernel ridge regression, improving the accuracy of machine learning models trained on density functional theory calculations [1]. In this talk, we show how this potential-to-density mapping can be made more accurate by training on quantum Monte Carlo calculations, and we discuss extensions of this technique to solid-state systems.

[1] Brockherde et al., Bypassing the Kohn-Sham equations with machine learning, Nature Communications 8, 872 (2017).

Adaptive population Monte Carlo simulations
Weigel, Martin

Population annealing is a sequential Monte Carlo scheme that is potentially able to make use of highly parallel computational resources. Additionally, it promises to allow for the accelerated simulation of systems with complex free-energy landscapes, much like the better-known replica-exchange (parallel tempering) approach. We equip this method with self-adaptive and machine learning schemes for choosing the algorithmic parameters, including the temperature and sweep protocols as well as the population size. The resulting method is significantly more efficient for simulations of systems with complex free-energy landscapes than more traditional approaches, and it is particularly well suited for massively parallel computing environments such as (clusters of) GPUs.
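For readers unfamiliar with the method, here is a skeleton of a plain (non-adaptive) population annealing step; the adaptive and machine-learning parameter choices described in the abstract would sit on top of this loop. The callables `energy` and `mc_sweep` are model-specific placeholders assumed to be supplied by the user.

```python
import numpy as np

def population_annealing(population, energy, mc_sweep, betas, rng):
    """Skeleton of population annealing.  At each step of the inverse-
    temperature schedule `betas`, replicas are resampled with weights
    exp(-(beta_new - beta_old) * E) and then re-equilibrated by model-
    specific Monte Carlo sweeps.  Adaptive variants additionally tune the
    schedule, the sweep count and the population size on the fly."""
    pop = [c.copy() for c in population]
    for beta_old, beta_new in zip(betas[:-1], betas[1:]):
        e = np.array([energy(c) for c in pop])
        w = np.exp(-(beta_new - beta_old) * (e - e.min()))  # stabilized
        counts = rng.multinomial(len(pop), w / w.sum())
        pop = [c.copy() for c, k in zip(pop, counts) for _ in range(k)]
        for c in pop:
            mc_sweep(c, beta_new)   # re-equilibrate at the new temperature
    return pop
```

Because each replica evolves independently between resampling steps, the inner loop parallelizes trivially, which is why the method maps so well onto GPUs.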
Interpretable Neural Networks for Learning Phase Diagrams
Wetzel, Sebastian

In a very short time, artificial neural networks have achieved impressive results when tasked with calculating phase diagrams. These algorithms are, however, mainly treated as black boxes. Hence, we cannot trust the results of artificial neural networks blindly. Here, we discuss how to interpret neural networks when they are tasked with classifying phases in a supervised manner. Further, we employ an unsupervised neural network, the so-called (variational) autoencoder, which can be interpreted naturally. It turns out that these algorithms have the potential to reveal the nature of the ordered phase.

Yang, Mingru

Lattice model constructions for gapless domain walls between topological phases
Yang, Shuo

Cluster updating classical spin systems by equivalent Boltzmann machines
Yoshioka, Nobuyuki

Undoubtedly, the construction of a global update method is key to accelerating sampling and to an accurate understanding of the physics in Monte Carlo simulations. The significant slow-down in the vicinity of the critical point of a system, for instance, can be overcome by such an algorithm. Here, we focus on classical Ising systems with p-body interactions that can be mapped exactly to Boltzmann machines. We find equivalent expressions in an extended space such that the well-established cluster update methods can be applied. In our presentation, we discuss the realization of a significant reduction in the autocorrelation time of the Markov chains by the newly proposed rejection-free algorithm.
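For context, the "well-established cluster update" the abstract builds on is of the Swendsen-Wang/Wolff type. The sketch below is the standard Wolff single-cluster update for the ordinary two-body Ising model, not the authors' Boltzmann-machine mapping to p-body interactions; it only illustrates the rejection-free cluster move that the proposed method generalizes.

```python
import numpy as np

def wolff_update(spins, beta, rng):
    """One Wolff single-cluster update for the 2D ferromagnetic Ising
    model (J = 1, periodic boundaries).  Bonds between aligned neighbours
    are activated with probability p = 1 - exp(-2*beta); the grown
    cluster is flipped as a whole, making the move rejection-free."""
    L = spins.shape[0]
    p_add = 1.0 - np.exp(-2.0 * beta)
    seed = (int(rng.integers(L)), int(rng.integers(L)))
    sign = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i, j = stack.pop()
        for nb in (((i + 1) % L, j), ((i - 1) % L, j),
                   (i, (j + 1) % L), (i, (j - 1) % L)):
            if nb not in cluster and spins[nb] == sign and rng.random() < p_add:
                cluster.add(nb)
                stack.append(nb)
    for site in cluster:
        spins[site] *= -1

# Usage: spins = rng.choice([-1, 1], size=(L, L)); repeated calls
# decorrelate configurations near criticality far faster than
# single-spin flips, which is the slow-down the abstract refers to.
```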