179274731c462ece
July 4, 2019

The Quantum Zeno Effect: how to pause time on a quantum computer

Zeno of Elea lived around 2,500 years ago and posed a set of philosophical problems still taught by philosophers today. Although none of Zeno's own writings survive, we know the concept called Zeno's arrow paradox from Aristotle's writings. The paradox has inspired a particularly intriguing concept in quantum mechanics: the quantum Zeno effect. Just by observing a system, you can stop it from moving. Using IBM's quantum composer, I'll show you how to try this experiment yourself on a real quantum computer.

Aristotle – Physics VI

Imagine recording an arrow soaring through the air with a high-speed camera. When we play the clip we see that it follows a smooth trajectory towards its target. However, at each infinitesimal point in time, or every frame, the arrow is stationary. This apparent paradox highlights the distinction between discrete and continuous motion, and is resolved by realising that the instantaneous velocity of a moving arrow can never be truly zero.

Quantum Zeno Effect

The quantum Zeno effect, on the other hand, demonstrates something highly surprising. Here we replace the arrow with a quantum arrow that can only be either in the bow or in the bullseye, or in a superposition of the two. By applying a force that pushes the arrow towards the bullseye, we go from the state \ket{\text{bow}} to \ket{\text{bullseye}}.

The Quantum Zeno Effect's bow and arrow

Mathematically we can represent this as the state of the system evolving under a Hamiltonian of the form

    \[H=\frac{1}{T}(\ket{\text{bow}}\bra{\text{bullseye}} + \ket{\text{bullseye}}\bra{\text{bow}})\]

where we have defined a time period T which dictates the speed of the evolution. Solving the Schrödinger equation for this Hamiltonian, the state of the system at later times is

    \[\ket{\psi(t)}=\cos\!\left(\frac{2\pi t}{T}\right)\ket{\text{bow}}+\sin\!\left(\frac{2\pi t}{T}\right)\ket{\text{bullseye}}\]

Something strange happens when we measure the state of a quantum system: it collapses into only one of the two states. When this happens, the time evolution starts again from this new state. If we keep measuring the system, we can actually stop it from evolving in time!

Building the quantum circuit

The IBM Q allows us to explore this concept on a real quantum device. We will create a moving system which we will allow to evolve in time, like our arrow through the air. We will then measure the state of our qubit repeatedly. By doing so, the qubit will not be able to change state.

To start we need to decide where we will begin and where we want to end. To keep things simple, let's start our qubit in the state 0 and launch it towards the state 1. Conveniently, we can imagine our qubit as an arrow pointing to the surface of a sphere: if the arrow is pointing upwards, the qubit is in the state 0, and if it's pointing down then it's in the state 1. The IBM Q automatically initialises our qubit in the 0 state. To test this we can run a test circuit which simply measures our target (middle) qubit. The output tells us that all the qubits were measured to be in state 0.

Hello World: the qubits start in the state 00000

How do we let our state evolve in time? Luckily the IBM Q is a universal quantum computer, meaning we could, in theory, apply any quantum operation we like. The operation we choose is a rotation: we can tell the quantum computer to rotate our state around the Bloch sphere by some angle.
We have five qubits, so we choose to rotate the qubit a fifth of the way around the Bloch sphere with each gate.

Quantum Zeno Effect modelled on the Bloch sphere
The qubit is rotated by 180 degrees and is measured to be 1

After doing this five times the result is not surprising: we end up with a 1 every time. The arrow has made it from the bow to the bullseye!

Adding Measurement

But what happens if we place a detector in the way of each of the rotations? This is equivalent to adding a CNOT gate, which flips the state of a target qubit whenever the control qubit is in the state \ket{1}. Using the principle of deferred and implicit measurement (which states that if we leave some quantum wires untouched, we can assume they are measured), we create the effect of observing the state of our middle qubit after each rotation. We add CNOT gates to record the state of the middle qubit after each rotation.

Running the quantum algorithm

First let's try running the algorithm on IBM's circuit simulator to check we get the result we expect. The number of times each result was obtained is presented in a histogram.

Quantum Zeno Effect: results of simulation
Results of the simulation

We see that our middle qubit remains in its \ket{0} state 68% of the time! To see why, consider the evolution after each gate rotation. The state of the qubit will be \ket{\psi(t_1)}=\cos(\pi/10)\ket{0}+\sin(\pi/10)\ket{1}. Using our measurement gadget, the qubit is projected either into the state \ket{0} with probability \cos^2(\pi/10) or into \ket{1} with probability \sin^2(\pi/10). Repeating this for all five rotations, the probability of remaining in state \ket{0} throughout is \cos^{10}(\pi/10)=0.605. By also accounting for more complex transitions, such as jumping into state \ket{1} and then jumping back to \ket{0}, we recover the expected probability.

We can now send this circuit to the real quantum computer, and we find that we have successfully stopped the time evolution of our qubit about 50% of the time!

The Quantum Zeno Effect on IBM Q
Results from IBM Q on device ibmq_16_melbourne

The result appears not to be perfect: it isn't exactly what the theory predicts. This is due to two reasons. First, the IBM Q has only limited connectivity, so the circuit needs to be decomposed and remapped before it can run on the quantum computer; this makes the circuit more complicated and longer to run. Second, the qubits are vulnerable to noise: extremely subtle effects can shift the states of the qubits and destroy their coherence.

The quantum Zeno effect demonstrates the surprising and seemingly paradoxical world of quantum mechanics. The measurement problem is largely unresolved and is explored in great detail in Adam Becker's most recent book 'What is Real?'. Exactly why measurement has the effect of collapsing the wavefunction, or splitting the universe in two as the many-worlds theorists argue, is unknown. Not only does quantum computing have the power to unlock many of the key mysteries of chemistry, technology and health, but I hope it can also help us to answer deep fundamental questions about the nature of our universe.
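To make the numbers above reproducible, here is a minimal sketch of the rotate-and-measure scheme. The circuit is my own reconstruction (the post doesn't give the exact wiring, so the qubit indices and gate layout are assumptions), written in Qiskit rather than the graphical composer, followed by a plain-arithmetic check of the predicted probabilities:

```python
import numpy as np
from qiskit import QuantumCircuit

# Hypothetical reconstruction of the circuit described above: qubit 2
# is the "arrow"; the other four qubits act as detectors via CNOTs
# (implicit measurement). Each rx(pi/5) advances the arrow a fifth of
# the way from |0> to |1>, giving amplitudes cos(pi/10) and sin(pi/10).
qc = QuantumCircuit(5)
for detector in (0, 1, 3, 4):
    qc.rx(np.pi / 5, 2)      # rotate the arrow by 36 degrees
    qc.cx(2, detector)       # record its state on a fresh detector qubit
qc.rx(np.pi / 5, 2)          # final fifth of the rotation
qc.measure_all()             # the final readout is the fifth observation

# Predicted statistics for five projective observations:
p_stay = np.cos(np.pi / 10) ** 2   # P(stay in |0>) for one step
print(p_stay ** 5)                 # ~0.605: never leaves |0>

# Allowing jumps to |1> and back (symmetric two-state Markov chain):
p = np.array([1.0, 0.0])           # start in |0>
T = np.array([[p_stay, 1 - p_stay],
              [1 - p_stay, p_stay]])
for _ in range(5):
    p = T @ p
print(p[0])                        # ~0.67, close to the 68% simulation result
```

The Markov-chain figure is the ideal-circuit prediction; the roughly 50% seen on hardware reflects the connectivity and noise issues described above.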
4095831e65450d88
% Encoding: UTF-8 @COMMENT{BibTeX export based on data in FAU CRIS: https://cris.fau.de/} @COMMENT{For any questions please write to cris-support@fau.de} @article{faucris.208735469, abstract = {Since quantum mechanical calculations do not typically lend themselves to chemical interpretation, analyses of bonding interactions depend largely upon models (the octet rule, resonance theory, charge transfer, etc.). This sometimes leads to a blurring of the distinction between mathematical modelling and physical reality. The issue of polarization vs. charge transfer is an example; energy decomposition analysis is another. The Hellmann-Feynman theorem at least partially bridges the gap between quantum mechanics and conceptual chemistry. It proceeds rigorously from the Schrödinger equation to demonstrating that the forces exerted upon the nuclei in molecules, complexes, etc., are entirely classically coulombic attractions with the electrons and repulsions with the other nuclei. In this paper, we discuss these issues in the context of noncovalent interactions. These can be fully explained in coulombic terms, electrostatics and polarization (which include electronic correlation and dispersion).}, author = {Clark, Timothy and Murray, Jane S. and Politzer, Peter}, doi = {10.1039/c8cp06786d}, faupublication = {yes}, journal = {Physical Chemistry Chemical Physics}, peerreviewed = {Yes}, title = {{A} perspective on quantum mechanics and chemical concepts in describing noncovalent interactions.}, year = {2018} }
57a048ac22950642
Consider a free electron in space. Let us suppose we measure its position to be at point A with a high degree of accuracy at time 0. If I recall my QM correctly, as time passes the wave function spreads out, and there is a small but finite chance of finding it pretty much anywhere in the universe. Suppose it's measured one second later by a different observer more than one light second away and, although extremely unlikely, this observer discovers that electron. I.e. the electron appears to have traversed the intervening distance faster than light speed. What's going on here? I can think of several, not necessarily contradictory, possibilities:

1. I'm misremembering how wave functions work, and in particular the wave function has zero (not just very small) amplitude beyond the light speed cone.
2. Since we can't control this travel, no information is transmitted and therefore special relativity is preserved (similar to how non-local correlations from EPR type experiments don't transmit information).
3. Although the difference between positions is greater than could have been traversed by the electron traveling at c, had we measured the momentum instead, we would have always found it to be less than $m_e c$, and it's really the instantaneous momentum that special relativity restricts, not the distance divided by time.
4. My question is ill-posed, and somehow meaningless.

Would anyone care to explain how this issue is resolved?

This is one reason why we need quantum field theory. – leongz Dec 31 '12 at 19:58

Welcome Elliotte, good question. I don't know the answer to it; I hope someone with better knowledge in QM will be able to help you. I have a small correction for you about the momentum. In special relativity, the momentum is $p=\gamma m v$, where m is the rest mass, $\gamma = \frac{1}{\sqrt{1-(\frac{v}{c})^2}}$, and v is the velocity. As v tends to c, $\gamma$ tends to infinity, so the momentum can actually be much larger than $mc$. – Andrey B Dec 31 '12 at 20:07

Related: and links therein. – Qmechanic Jan 27 '13 at 18:17

3 Answers

Excellent question. You are correct about wavepacket spreading, and in fact you do get superluminal propagation in non-relativistic QM - which is rubbish. You need a relativistic theory. You should read the first part of Sidney Coleman's lecture notes on quantum field theory, where he discusses this exact problem. The short answer is that you need antiparticles. There is no way to tell the difference between an electron propagating from A to B, with A and B spacelike separated, and a positron propagating from B to A. When you add in the amplitude for the latter process, the effects of superluminal transmission cancel out. The way to guarantee that it all works properly is to go to a relativistic quantum field theory. These theories are explicitly constructed so that all observables at spacelike separation commute with each other, so no measurement at A could affect things at B if A and B are spacelike. This causality condition severely constrains the type of objects that can appear in the theory. It is the reason why every particle needs an antiparticle with the same mass, spin and opposite charge, and is partially responsible for the spin-statistics theorem (integer spin particles are bosons and half-integer spin particles are fermions) and the CPT theorem (the combined operation of charge reversal, mirror reflection and time reversal is an exact symmetry of nature).
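To see the wavepacket spreading explicitly, recall the free-particle propagator of the non-relativistic Schrödinger equation (a standard textbook result, quoted here for illustration, not part of the original thread):

\[K(x,t) = \sqrt{\frac{m}{2\pi i\hbar t}}\,\exp\!\left(\frac{i m x^{2}}{2\hbar t}\right), \qquad t>0,\]

which is nonzero for arbitrarily large $|x|$ at any $t>0$. A particle localized at the origin at time zero therefore has nonvanishing amplitude outside the light cone an instant later, which is exactly the superluminal propagation that the relativistic theory must cancel.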
So, is it correct to say that if one releases an electron at r=0 at t=0, and waits, then the probability of measuring the electron outside the light cone will be zero, but this is due to the field of a positron which cancels out the propagation of the electron outside the light cone? Can one then measure a positron anywhere inside the light cone? – Alexey Bobrick Jan 1 '13 at 22:30

Another comment is that one does not need relativistic quantum field theory for this problem. Dirac theory describes a propagating particle-electron (non field) well enough. – Alexey Bobrick Jan 1 '13 at 22:31

Thanks for the question, Elliotte! In my QFT class, we briefly touched on how the antiparticle field cancels out the superluminal effects of the particle field. But what I don't understand is that the particle still can travel faster than the speed of light. Is there no way one can observe only that? I'm sorry if this is a silly question, I've taken just one semester of QFT. Thanks! – user34801 Jan 2 '13 at 7:19

Good questions, and not at all silly! @Alexey Bobrick: You are correct, there is no amplitude to measure an electron outside the light cone. There is also no amplitude to measure a positron inside the light cone (if you start with an electron state rather than a positron state!). – Michael Brown Jan 2 '13 at 10:59

@user34801: Your question and Alexey's other question are answered by the same discussion: The "electron field" $\psi$ really is the sum of two terms: a term that annihilates an electron (the convention is backwards - blame Heisenberg) and a term that creates a positron. The conjugate field $\bar{\psi}$ does the reverse. (The opposite action on electron vs. positron states makes it an operator with definite electric charge.) Any operator that acts on electron or positron states must be built up out of these combinations to preserve causality. This is the restriction I mentioned before. – Michael Brown Jan 2 '13 at 11:00

Comment on the answer of @Michael: "The short answer is that you need antiparticles" is false. In quantum field theory you have perfectly working solutions also without antiparticles, i.e. for real fields. Even if you do want to consider antiparticles, always bear in mind that despite the misleading name they are in fact different particles from the original ones, and saying that an electron propagating from A to B is equivalent to a positron propagating from B to A is also wrong: there is indeed a way to distinguish between the two, namely the former is represented by the field and the latter by its hermitian conjugate, and they transform differently under representations of the Poincaré group. Moreover, adding the two contributions does not cancel out the possible superluminal factors. To answer the original question: QM is indeed not a relativistic theory, end of story. The correct relativistic extension is QFT, in which the cancellations happen if you take into account the degrees of freedom carried by the fields themselves on top of the ones of the particles (no need to have antiparticles at all).

For a real field, a particle is its own antiparticle. You still have a zero commutator for spacelike separated measurements, because the contributions from the particle propagating forward and backward cancel out.
And I'd love to see why the two contributions don't cancel out superluminal factors (remember that the only observable superluminal effect would be a nonzero commutator of spacelike separated fields). – Bosoneando Jun 26 at 7:07

The interpretation that a real field is its own antiparticle is still at least misleading (if not wrong). Likewise, (anti)-particles never travel backwards: they always do so forward, and the backward terminology is just to (wrongly) justify the minus sign. About the commutator: the two contributions only cancel out for free fields; if you try to calculate the same commutator for any type of interaction you will see that the contributions do not generally cancel each other (unless you postulate so and derive the fields accordingly, but that's a different story). – Gennaro Tedesco Jun 26 at 17:47

The very useful solutions of the Schrödinger equation that are usually taught in beginning quantum mechanics are not Lorentz invariant, and therefore paradoxes with respect to special relativity may be constructed. The relativistic equations fix this: the Dirac equation, and the Klein-Gordon equation (sometimes called the Klein–Gordon–Fock equation), which is a relativistic version of the Schrödinger equation. Thus there is no problem with the simple solutions of the underlying wave functions that are needed to build up the quantum field theories discussed in the other answers. Those are a meta level, using the solutions of the relativistic equations as a basis on which the QFT creation and annihilation operators operate. As far as Lorentz invariance is concerned, it is enough that the Hilbert space on which the QFT operators operate is Lorentz invariant in order not to have any light cone problem with any modeling.
54e0358124790883
Wonder Woman - S1E7

This episode - penned by the same guy who wrote a third of the Batman episodes. In this episode:

Mr. Brady has The Black Death
Dr. Bellows makes homemade earthquakes
And Wonder Woman does lots of math

So, what are we waiting for? Let's jump in.

Before we start, let me bring up a bit of confusion regarding the episode number designation. IMDb calls this Episode 8 because they count the pilot as Episode 1. I do not. Lynda Carter wasn't even WW in the pilot (it was Cathy Lee Crosby). Plus, the official series began seven months after the pilot, which puts the pilot even more in a world unto itself.

Mike Brady (Robert Reed) gets to play the villain in this episode - The Falcon... and he plays the part with the flair and panache it deserves. This is post-Brady Bunch, and the rakish Reed is jaunting from one show to the next - from The Love Boat to McCloud to The Boy in the Plastic Bubble... and he's having a blast. The following year (1977) he'd be back with the Bradys in The Brady Bunch Variety Hour.

Our story begins with an Oscar Wilde-esque Falcon getting detained in a US airport. The lovely lady at his arm is Mikki McGoldrick (AKA Mikki Jamison), who just passed away in 2013 (obituary). BTW the customs agent in this scene is played by the daughter of Pat O'Brien, "Hollywood's Irishman in Residence". He was one of James Cagney's best friends, and starred with him in 9 films. He's the guy that said "win just one for the Gipper" (in Knute Rockne, All American).

Any clue on these equations? Are they for real, or gobbledygook? Inquiring minds want to know.

The Falcon escapes and steals his way into the laboratory of Professor Warren. Warren is played by Hayden Rorke, Dr. Bellows from I Dream of Jeannie. The Falcon wants Warren's "Pluto File" - his secret for creating and predicting earthquakes. It's The Brady Bunch vs. I Dream of Jeannie. Who will win?

Yeoman Prince visits poor Dr. Bellows Professor Warren in the hospital - the old coot couldn't handle the trauma of a face-to-face with The Falcon. It's filmed at Walter Reed Hospital. Lynda Carter was Miss USA in 1972 and toured the military hospital with Bob Hope visiting wounded soldiers just four short years before this episode was shot. Who knew that in '76 she'd be back, not as Miss USA, but as Wonder Woman. (insert dramatic music here)

Yeoman Prince notices The Falcon on a rooftop with a sniper rifle aimed at Professor Warren. She quickly changes into WW and deflects the bullets with her bracelets. But the ever elusive (and dashing rogue) Falcon escapes without a scratch. I couldn't help but note that WW makes her twirly transformation right in front of the window The Falcon was aiming through. Seems he could've had a pretty easy head shot had he pulled the trigger.

Back at an undisclosed location, The Falcon plots his nefarious plan to "induce the world's first man-made earthquake". (insert Dr. Evil laughter here) Actor Albert Stratton plays one of his forgettable henchmen, Charles Benson. Stratton hopped from one supporting TV role to the next back in the day. He got a few prime gigs on the revived Perry Mason show and an episode of Quantum Leap. He's probably best remembered for a Star Trek: The Next Generation second season episode, "The Outrageous Okona" (because you can always count on Trekkies to keep your memories alive). In an interesting connection, this episode's director, Herb Wallerstein, directed four episodes of the original Star Trek.

Robert Reed was such a dandy.
And here's where shit starts getting a tad crazy with the story line. See if you can follow me: Remember Mikki from the beginning? Well, she's got the Bubonic Plague. (You may need to read that sentence twice). The last case of The Bubonic Plague was in India, which just happens to be where The Falcon was before he came to the States. So he's a carrier!

Turns out, Benson is one of Professor Warren's fellow scientists... and he looks like Holy Hell. It doesn't take long for Steve Trevor to put two and two together and realize that, if Benson has The Bubonic Plague, he must have been in contact with The Falcon..... which can only mean Benson is a Nazi traitor! I know, there's like a million mental leaps in order to come to that conclusion (maybe Benson just has a sinus infection); but the accusation is enough to make ol' Benson lose his marbles, pull a gun on Trevor, and get out of Dodge.

WW chases Bubonic Benson into a cannabis(?) field and uses her Lasso of Truth to get to the Falcon Connection.

So, if you're keeping track: The Falcon has not only an earthquake device, but also The Bubonic Plague. What's next, a nuclear bomb? Oh, snap... Mike "The Falcon" Brady has plans to use The Pluto File to create an artificial earthquake around a nuclear reactor. The seismic activity will cause the reactor to go "boom" and The Falcon can go home to his Nazi brethren a conquering hero.

Note that "Pluto" was a genuine WWII code-name used for the method of delivering oil during the D-Day landings (Pipe Line Under The Ocean = PLUTO). "Pluto" also could refer to the radioactive element, plutonium, used to make these nuclear weapons. Who knew Wonder Woman had so many layers?

WW meets with the professor and the two combine their brains to work out a way to cool down the nuclear reactor. She actually schools the old scientist, and provides the solution for him. (Apparently, the solution is to just add water.... brilliant.)

This episode aired on Christmas Day 1976. Nothing says the holidays like natural disasters, nuclear meltdown, and biological warfare.

The Falcon bum rushes the lab, rightfully concluding that it would be the professor who could stop this nuclear meltdown. But he suddenly breaks down with The Bubonic Plague and can fight no more. Professor Warren alerts the nuclear plant on how to stop the meltdown (just add water), and in a moment of parallelism, WW gets a glass of water for The Falcon.

And so it ends. Not one of the best episodes, but still none too shabby. It was written by Stanley Ralph Ross, an ordained minister who actually presided over the marriage of Burt Ward (Robin from Batman). Ross also directed that intro to Wide World of Sports we're all familiar with ("... and the agony of defeat"). And speaking of defeat, we learn that The Falcon didn't die from The Bubonic Plague, but is recovering in a maximum security prison.

The first WW pilot (with Cathy Lee Crosby) has no connection to the later series beyond the title. Totally different concept and cast. It's no more related than the 1960s promo reel with Linda Harrison ("Nova" in the first two Planet of the Apes flicks) as Wonder Woman, done by William Dozier (Batman/Green Hornet). Now, if they had integrated Crosby's pilot into the Lynda Carter series the way Star Trek incorporated "The Cage" into Classic Star Trek (maybe as an "alternate universe"), then you might have a case...

2. Linda Carter was Miss World 1972(-73), not USA. It surprised me to learn she was Miss Anything so I looked it up.

3.
That's the Schrödinger equation on the blackboard, one of the workhorses of quantum mechanics.

4. Who's looking at equations?????

5. Oh wow, this ep aired on Christmas day! You mean there were no lame reruns amid the holiday shows? Those were the days.

6. I think your field of marihuana is oleander (http://en.wikipedia.org/wiki/Nerium), which puts this scene somewhere warmer than Washington, DC.

7. I guess this episode didn't score any Steve Trevor knock-outs or WW being tied up.

8. I was never a fan of this episode. It is IMHO one of the few lemons in an otherwise great season.

9. What's more 1940's than over-the-ears feathered hair and sideburns? Great job, production team.
4cabaaf667fcb43b
Hans O. Karlsson

We use a posteriori error estimation theory to derive a relation between local and global error in the propagation for the time-dependent Schrödinger equation. Based on this result, we design a class of h, p-adaptive Magnus–Lanczos propagators capable of controlling the global error of the time-stepping scheme by only solving the equation once. We provide (More)

We consider an optimal control problem for the time-dependent Schrödinger equation modeling molecular dynamics. Given a molecule in its ground state, the interaction with a tuned laser pulse can result in an excitation to a state of interest. By these means, one can optimize the yield of chemical reactions. The problem of designing an optimal laser pulse (More)
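For context, propagators of this family build on the Magnus expansion; in its simplest second-order (exponential midpoint) form, sketched here as general background rather than taken from the papers themselves, a step of size $h$ for the time-dependent Schrödinger equation

\[i\hbar\,\partial_t \psi(t) = H(t)\,\psi(t)\]

reads

\[\psi(t+h) \approx \exp\!\Big(-\tfrac{i}{\hbar}\,h\,H\big(t+\tfrac{h}{2}\big)\Big)\,\psi(t),\]

where the action of the matrix exponential on $\psi$ is approximated in a Krylov subspace built by the Lanczos iteration. h, p-adaptivity then means varying the step size $h$ and the expansion order $p$ to keep an error estimate below tolerance.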
a588d2f5a7ad8aff
Franck Laloë, Do We Really Understand Quantum Mechanics?, Cambridge University Press, 2012, 406pp., $75.00 (hbk), ISBN 9781107025011.

Reviewed by Valia Allori, Northern Illinois University

Do we really need another book on quantum mechanics? Quantum mechanics is one of the greatest scientific achievements, and yet it is still controversial how we should interpret it and what conclusions we are entitled to draw from it. Because of this, it has inevitably attracted many writers, both physicists and philosophers. So, hasn't enough ink been spilled about the subject already? Who needs more? Surprisingly enough, Franck Laloë has shown us that there is still room for another valuable contribution.

Laloë provides us with a great addition to the abundant (sometimes even redundant) literature. On the one hand we find technical books (for instance Sakurai, or Messiah), while on the other we have introductory books (Albert, Maudlin, and Ghirardi). The former kind of book is full of mathematical details and focused on formal results, and so of great value for students needing to learn how to do calculations, solve problems and perform experiments. But such a book is totally useless if one wishes to get a realist picture of the quantum world, since it totally ignores the conceptual problems that plague the foundations of quantum mechanics. In contrast, the latter kind of book usually has the aim of introducing the subject to philosophy students, who usually are interested in understanding rather than computing. As a result, this kind of book often disregards technicalities and focuses primarily on the problems of interpreting the formalism and on the possible realist interpretations of the theory. While this approach is extremely valuable in making the conceptual problems crystal clear, its danger is that the philosopher, when confronted with particularly challenging technical material, might still be unable to get around it and arrive at the correct conclusions without being obfuscated by the mathematical detail.

So what seems to be missing, indeed, is a book that combines these two extremes: a book that cares about conceptual issues but at the same time provides enough mathematical detail to enable the reader to understand and judge for herself even the more densely technical material. Laloë's book (at least partially) fills this gap. His goal is explicitly to understand the foundations of quantum theory, which also, by his admission, is something that has been neglected in the majority of traditional physics books on the topic. Not having a clear grasp of its foundations, writes Laloë, makes quantum mechanics a "colossus with feet of clay" (p. xi): how can we properly understand the theory and its implications if we do not understand what it is grounded on? This admission, made by a physicist, is rare and frankly refreshing. While the importance of conceptual issues has been underlined by philosophers all along, physicists have always given them a smug look. Laloë's book seems to show that finally the situation has started to change.

The book has eleven chapters and several appendices, through which the author goes from the history of quantum mechanics to its interpretations, passing through the Schrödinger cat, the Einstein-Podolsky-Rosen (EPR) paradox and Bell's theorem; quantum entanglement and its applications; and a solid mathematical introduction.
In more detail, chapter 1 discusses the history of quantum mechanics with accuracy and balance, from the "prehistory" (Planck's oscillators, Bohr's atomic model, Heisenberg's matrix mechanics), to the "undulatory period" (the contributions of de Broglie, Debye and Schrödinger), to the emergence of the Copenhagen interpretation (the developments of Born, Bohr, Heisenberg, Jordan and Dirac). Particular importance is given to the role and status of the state vector. In this regard, two extreme positions are analyzed: first, the view that the state vector describes the physical properties of a system, and second, the view that the state vector represents just the information that an observer has about the system. Laloë rejects both views, calling them "two opposite mistakes" (p. 13). While his reasons for rejecting the latter view are the traditional ones and therefore not controversial, in my opinion his reasons for rejecting the former are too hasty. In fact the first option is dismissed right away by the author as follows: "the difficulties introduced by this view are now so well-known . . . that nowadays few physicists seem to be tempted to support it" (p. 13). Also: "it didn't take long before it became clear that the completely undulatory theory of matter also suffered of serious difficulties, actually so serious that physicists were soon led to abandon it." The main problem identified by the author for this view is that in a many-particle system the state vector would live in configuration space, and this makes it, in the opinion of the author, obviously the wrong candidate to represent matter. While I happen to agree with Laloë's conclusion, the issue cannot be dismissed that quickly. There is an ongoing debate within the philosophy of physics community exactly about whether it is possible, if not even advisable, to regard quantum mechanics as a theory about the wave function, intended as a material field on configuration space, and the issue is far from having been settled.

Be that as it may, chapter 2 correctly and exhaustively discusses the fundamental conceptual difficulties of quantum theory (from the Schrödinger cat, to Wigner's friend, to the role of decoherence), while chapter 3 is a very informative presentation of the EPR "paradox." The author uses many nice illustrative examples to clarify the main premises, the logic and the conclusion of the EPR argument, making the chapter an incredible resource for both physicists and philosophers.

Chapters 4 and 5 are dedicated to Bell's theorem and nonlocality. The premises of the theorem and the logic of the argument are made extremely clear and straightforward, something rarely found in the literature. The theorem is discussed in its many formulations -- from Bell's original 1964 theorem, to the Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequalities, to the formulations of Wigner, Mermin, Greenberger-Horne-Zeilinger (GHZ), Cabello, Hardy, and Bell-Kochen-Specker -- and Laloë correctly notes that what is at stake is locality and not determinism. The discussion is so complete that the author even covers some of the attempts to bypass Bell's conclusion that reality is nonlocal, discussing the so-called "free will," "counterfactuality" and "contextuality" assumptions. Chapter 6 is devoted entirely to the purely quantum property of entanglement, including a discussion of the origin of quantum correlations.
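For reference, the BCHSH inequality at the heart of chapters 4 and 5 can be stated compactly (standard form, not quoted from the book): for measurement settings $a, a'$ and $b, b'$ on two spacelike separated systems, any local hidden-variable theory satisfies

\[\big| E(a,b) - E(a,b') + E(a',b) + E(a',b') \big| \le 2,\]

where $E(\cdot,\cdot)$ denotes the correlation of the two outcomes, while quantum mechanics violates the bound up to $2\sqrt{2}$.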
This is a more technical chapter, in which master equations and density operators are introduced, including a discussion of the evolution of subsystems. The following chapter is dedicated to the many applications of entanglement: from quantum cryptography to teleportation, from quantum computation, quantum gates and quantum algorithms to quantum error correction codes. Chapter 8 returns to a more technical and formal presentation, discussing the notions of "quantum measurement": from the notions of direct and indirect measurement to those of weak and continuous measurement. Then the book presents experiments that illustrate typical quantum properties, such as quantum reduction in real time, a single ion or electron in a trap, the number of photons in a cavity, and the spontaneous phase of Bose-Einstein condensates.

Finally, chapter 10 addresses the issue of the "interpretations," as Laloë (like many others) calls them. He starts off with the (commonsensical but vague) pragmatist approach, continues with the statistical interpretation (the view that the quantum description only applies to ensembles), moves to Rovelli's relational interpretation (the state vector depends on the observer) and to the algebraic approaches and quantum logic, and continues with d'Espagnat's veiled reality interpretation (according to which reality is only marginally accessible to us). After a comprehensive and detailed analysis of hidden variables theories like Bohmian and Nelson mechanics, the book continues with a discussion of the modal interpretations, the modified Schrödinger dynamics of Ghirardi, Rimini and Weber (GRW) and of Pearle, Cramer's transactional interpretation (a view which in certain respects reminds one of the Wheeler-Feynman electromagnetic absorber theory), and the history interpretations, and concludes with the Everett interpretation. The book closes with a comprehensive and clear mathematical review of the various elements of quantum mechanics.

The book is so dense and full of interesting ideas that many comments could be made. In this review, though, I wish primarily to discuss the main strategy the author uses. Laloë wants to provide a balanced and comprehensive view of the foundations of quantum mechanics; he does not want to give preference to one interpretation or another, or to one strategy over another, but rather he wishes to analyze how each of them relates to the others and what their mutual relations and differences are. In fact he writes that his book provides a "balanced view of the conceptual situation" of quantum mechanics (p. xiv). Many would regard this attitude as a strength of the book, since it provides the reader with all the relevant information and tools to decide the issue for herself without being influenced by the opinion of the author. In contrast, I think this strategy may be the weakest trait of the book; this may just be a question of personal taste, but I always find that "neutral" books like this one leave something to be desired. Assuming that I have the means to understand and judge the material independently (which this book is able to provide), I'd rather read a heavily opinionated and provocative book than one in which all the options are stated with an impartial and unbiased attitude. An author expressing his own opinion usually ends up being more convincing than one merely stating the possible alternatives. Laloë's book seems no exception to this rule.
For instance, when the discussion focuses on the attempts to bypass the conclusion of Bell's theorem, Laloë's treatment of the strategies based on the free will assumption, contextuality and counterfactuality is maybe more charitable than needed and sounds a little artificial. More generally, how can one possibly address all the foundational issues correctly without taking a stand? In other words, taking a particular view to be the case will have consequences: in particular, one view will lead to certain problems, another view to others. For instance, if one thinks that Bohr's theory is correct, then the notion of measurement will be a crucial part of quantum mechanics. But if instead another "interpretation" is believed to be the correct one, the importance and the role of measurement in this theory will be fundamentally different than in the previous one. How can one discuss, say, the notion of measurement in quantum mechanics in general? When in chapter 8 Laloë claims that the notion of measurement is important in quantum mechanics, what does he have in mind? How can we decide what is being measured if we don't already have a clear idea what the ontology of the theory is? As Einstein once reminded us, it is the theory that decides what is being measured. In other words, how can we make a theory of measurement without a clear understanding of the "interpretation" of the formalism?

Therefore, I find it particularly odd that the chapter on interpretations is left to the end of the book. Indeed, I find it misleading that these alternatives are actually called "interpretations"; each of them provides a distinctive picture of reality, and because of this each of them should be called a "theory" instead of an "interpretation." Apart from the terminological point, it seems to me that only once a theory is chosen can one then discuss what concepts are relevant and what problems need to be addressed within the context of that theory. If we leave the question of ontology to the end, one may be led to believe that all theories have the same problems simply because they have the same -- or similar -- mathematics (the state vector, the Schrödinger equation and so on), and this would be a mistake.

So, to conclude, while I find Laloë's book to be an extremely valuable contribution to the foundations of quantum mechanics for its completeness and clarity of exposition, I still find unsatisfactory its lack of an opinionated discussion and evaluation of the different alternatives.
4dd314a34383dbe9
Wednesday, June 17, 2009

Survival Isn't Cost-Effective

I trust my readers won't be unduly distressed by an extended safari through the tangled jungles of the "dismal science" of economics. As suggested in several recent Archdruid Report posts, economic factors have played a massive role in putting the industrial world in its current predicament, and an even more substantial role in blocking any constructive attempt to get out of the corner into which we've painted ourselves. There's an all too real sense in which, if modern industrial civilization perishes, it will be because the steps necessary for its survival weren't cost-effective enough.

In a different context, that of the physics of vacuum tubes, Philip Partner commented in his classic textbook Electronics (1950): "The theory speaks of ions, atoms, and electrons, and of collisions between them; but these are figments of the mind, props for its understanding. [...] The electron, like the atom, is a concept; it is part of a mental shorthand which we have invented to summarize our knowledge of Nature. So when we say, for example, that an electron collides with an atom, we should bear in mind that we have never seen it happen. The use of the present indicative does not turn hypothesis into fact" (p. 569). Unfortunately this level of clarity is hard to achieve and harder to maintain.

This has to be kept in mind when trying to make sense of the economic dimension of industrial civilization's decline and fall, because both sides of the equation – the models and the reality – throw up challenges in the way of constructive action, and so do economic policies that are based on the models, and thus function at a second remove from the reality. It's true, and will be a central theme of future posts, that current economic theory has lost touch with reality in critical ways, and a revision of some of the basic ideas of modern economics is essential if we're to make sense of our predicament and do anything constructive in response to it. It's equally true that government policies based on today's misguided economic notions have become massive liabilities to societies struggling to deal with today's crisis, and even this late in the game, changes in these policies might still do a great deal of good. Still, it's also true that economic factors in the real world, independent of theory, impose hard limits on what can be done.

The classic example has to be the plethora of projects for "lifeboat communities" floated in recent years. The basic idea seems plausible enough at first glance: to preserve lives and knowledge through the decline and fall of the industrial age, establish a network of self-sufficient communities in isolated rural areas, equipped with the tools and technology they will need to maintain a tolerable standard of living in difficult times. The trouble comes, as it usually does, when it's time to tot up the bill. The average lifeboat community project I've seen would cost well over $10 million to establish – many would cost a great deal more – and I have yet to see such a project that provides any means for its inhabitants to cover those costs and pay their bills in the years before industrial civilization goes away. The result is a massive distortion in our understanding of the realities that shape our lives.
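The arithmetic behind that verdict is worth making explicit. Here is a toy net-present-value calculation (my own illustration, with made-up numbers; the post specifies only the $10 million cost, not a benefit figure or discount rate) showing why such a project fails a conventional cost-benefit test: benefits that arrive decades from now discount to almost nothing today.

```python
# Toy illustration: a hypothetical lifeboat community costs $10M today
# and delivers its benefit (say, $50M of avoided losses) 40 years out.
# The benefit figure, horizon, and rates are all assumptions.
cost_today = 10e6
benefit_future = 50e6
years = 40

for rate in (0.03, 0.07, 0.10):   # a range of typical discount rates
    npv = benefit_future / (1 + rate) ** years - cost_today
    print(f"discount rate {rate:.0%}: NPV = ${npv / 1e6:,.1f}M")

# At 3% the project clears (NPV ~ +$5M); at 7% or 10% the future benefit
# shrinks to a small fraction of the upfront cost, so by the market's own
# yardstick, survival isn't cost-effective.
```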
It's generally not considered a viable business plan – outside of the financial industry, that is – to make large profits in the short term by running up debts so large the business will have to declare bankruptcy in the not too distant future. Yet this is exactly what an economic system that ignores the cumulative costs of resource depletion and pollution mitigation is doing, and on an even larger scale. The future costs of extracting resources from depleted reserves and mitigating the impacts of a polluted environment have the same effect as the future costs of debt service on excessive borrowing; they buy temporary prosperity in the near future at the cost of impoverishment or collapse further down the road.

As we make the transition from what I've called the abundance economies of the first half of the industrial age to the scarcity industrialism of the near and middle future, it's entirely possible that such adjustments could be put into place in our own societies. The accumulated burdens of past mistakes weigh heavily enough on the future that changes of this sort won't stave off a great deal of trouble and suffering, but it's entirely possible that a shift to saner policies backed by more realistic economic ideas could cushion the descent into the deindustrial age, and make it easier to allocate resources to projects that will actually do some good, instead of pursuing policies which – like nearly all the economic policies currently in place in the industrial world – will simply make matters worse.

Nnonnth said...
Great, great post. Surprised to see you thinking in these terms in a way, since macro governments are usually written off as fruitless to contemplate hereabouts. Personally, I see signs in local government here in the UK, and a couple of signs at higher levels too, that things might change... just in time to be completely too late instead of only partially lol. Ed Miliband ("Secretary of State for Energy and Climate Change" over here) made a visit to Transition Town talks recently. His major comment was, I can't sell 'no growth' to the electorate. He hasn't quite seen how forcefully economics is going to sell it to everyone. What I really want is for someone like him to start reading your post on, for example, how sensible it is to fund nuclear fusion vs. hydro from washing machine motors, which is a classic example of what you mean here I guess. Enough badgering could start to influence policy, really could -- here anyhow. In the US I don't know what people's sense is. Typo: beginning of last sentence of 1st para, you wanted, "There's an all too real sense..."

Robert Magill said...
I suspect the successful among us will live in the future as the horse and buggy Amish do today. Minus the use of propane, of course.

gaias daughter said...
Yes, it's entirely possible . . . but is it probable? We, in the US, have made some progress toward protecting the commons -- we now have pollution and conservation laws that have done a great deal toward restoring air, water and land quality, but to do more in hard economic times is, sadly, more than I believe the American public is willing to do. Unfortunately, the costs associated with depletion and pollution would be paid, ultimately, by the consumer, and higher prices are the last thing anyone is going to want as buying power diminishes. Of course, one could argue, rightly, that the consumer is already paying the costs -- but hidden costs are more appealing than visible ones.
So while I agree with everything you wrote, JMG, when I step back and ask myself "could this happen?" the answer is "don't count on it!" But then, I am a pessimist . . . and I could be wrong!

RDatta said...
Once again, (unfortunately - for those who wish to continue business as usual) you are promoting ideas that are anathema. But you are right; survival is not cost-effective. If biological beings had limited themselves to cost-effective methods, all that would exist would be slime molds. Or perhaps we would still be sitting in our family trees and counting on the fingers of our feet.

Llewellyn said...
Another exceptional post, thanks JMG!

hapibeli said...
I have nothing to add to the last paragraph, save, "oh mercy, mercy, there be trouble brewin!"

Peter said...
Good post as always, but a tad optimistic in that concluding paragraph. It's not new economic understanding that's required, as much as a much greater voice for scientific research and modeling in the political process. "Business as Usual" must be brought to a sudden stop, and this would require a degree of political will informed by the true conditions we face, not by the minor adjustments (i.e. cap and trade) industry begrudgingly agrees to. See Jay Hanson's site for a much more detailed elaboration of the real challenge. Perhaps a true black swan event will awaken our sense of urgency; how else do you see it happening JMG?

blue sun said...
JMG – Excellent post! You have pointed out a blind spot in economics. But don't apologize for discussing the "dismal science." For there is also a blind spot in our culture for economics itself, and we all need a good dose. After all, economics is arguably the reason behind every decision people make! I think the media certainly has a large share of the blame, focusing as it does on gossip instead of relevant information (how many news outlets paid attention in 1999 when sections of the Glass-Steagall Act of 1933 were repealed, contributing to the current financial crisis by permitting, among other things, Citibank's current formation). The school system is also partially to blame, but that's probably just a reflection on our culture. Instead of studying economic history, we study social history, and therefore even in school we are trained to focus on scapegoats and champions, instead of the actual reasons behind it all. There are numerous examples in our textbooks where heroes or villains get all the credit while underlying economic reasons are ignored. If asked, most Americans would probably blame the French Revolution on Marie Antoinette being such a jerk, rather than poor agricultural policies which limited French food production (in sharp contrast to England's food production). We all collectively suffer a little from this blind spot.

John Michael Greer said...
Nnonnth, thanks for the correction. My guess is it'll be a while before ideas like these get any traction with national governments, but regional and local governments are another matter. The first step is to get them in circulation.

Robert, quite possibly -- minus the propane, but very likely plus radio.

Daughter, the thing about economics is that a good theory gives a competitive advantage to those people, communities and nations that adopt it. We're already seeing countries (China and Russia, in particular) adapting to the new reality of scarcity industrialism, and profiting mightily at the expense of those countries that aren't willing to do so.
I'll have some specific proposals to offer later on; if I'm right -- always a caveat worth making -- the payoffs to be had by adopting them may be tempting enough to encourage people and communities to look into them.

RDatta, anathema is my stock in trade!

Llewellyn, many thanks.

Hapibeli, I think the only thing I can add is "heh heh heh."

Peter, I disagree. It's exactly the unwillingness of the scientific community to tackle the grubby realities of economics (while lining up to feed at assorted government and corporate troughs) that has negated any influence the scientific community might have had on public affairs. I don't think waiting around for a black swan is particularly useful; what's needed, rather, is a set of good reasons why it's in people's own economic interest to do the right thing -- and I propose to provide that.

Blue Sun, exactly! The blind spot toward economics is particularly severe in the peak oil community, it seems to me, and that above all has to change. In a society where economic concerns override most other factors, we need to learn how to talk in the language of costs and benefits, prosperity and impoverishment, or nobody's going to listen -- and the economic impact of the end of the industrial age is significant enough (!) that this won't be hard.

Nnonnth said...
On that, on the UK front for anyone interested, this is worth a look: The audio talks at the top of the page right now are useful, and the third one (Alexis Rowell) will show anyone just what an informed local politician can do. Another example: Somerset County Council has become the first in the UK to declare itself a 'Transition Council'. There are enough signs to feel mildly positive about this in principle, here in the UK. Essentially such initiatives as Transition Towns represent most of the reason for this, due to what they are getting so good at -- being visible.

bryant said...
Great post! I am glad you are back; that was a long two weeks.

Coyote said...
The shift to saner economic policies cannot happen as long as corporations have more rights and fewer responsibilities than citizens. Governments have lost the will, and perhaps the ability, to control corporations, and these corporate entities have no ethical obligation to save the planet or even civilization. This lack of a moral compass among the ruling elite will be the ultimate downfall of the industrial age. In his Buddhist Economics essay E. F. Schumacher called it a "metaphysical blindness" on the part of economists that see economics as a science instead of a social science as it should be regarded. The application of a social science should improve our culture and not treat it as if it were an experiment in a Petri dish where the preferred outcome favors more business activity over quality of life. This blindness is a direct result of the lack of an ethical component within corporations. Economists (especially of the free market wingnut variety) talk about efficiency and externalities while failing to differentiate between capital and income. You can't recapitalize the earth!

John Michael Greer said...
Nnonnth, it's good to see the Transition Town movement getting publicity; what remains to be seen is whether it can go past drawing up plans for the future, and actually start implementing the changes that have to happen.

Bryant, thank you. There'll be another gap later this summer, for related reasons; more on this in a bit.

Coyote, I think you have it backwards.
It's the metaphysical blindness that's allowed corporations to run amok like Mickey Mouse's animated brooms, rather than vice versa. That being said, of course you're quite right that the legal status of corporations needs to be reformed; I'll have some suggestions along those lines in a future post.

Jon said...
In the past, building new communities could only be done with sponsorship. The Jamestown settlement, for instance, was privately funded, until it went broke. After that, one of the main shareholders, King James, made it a state-sponsored project, and that was more as an English counter to Spanish activity in Florida. It was too strategic to fail. Settlers needed charters and sponsorship for the outlay of ships, licensing of protection, farm equipment and provisions sufficient to carry them until they could be self-sufficient. Of course, the sponsors wanted something in return. Do you think there would ever be a large scale (read: deep pockets) sponsor, either private or governmental, of a permaculture community?

John Michael Greer said...
Jon, you've actually answered your own question. What possible payback could a private or governmental sponsor expect from such a project? This is why I've focused here and elsewhere on steps that individuals and groups can take on their own, without a budget in the millions.

Ahavah Gayle said...
I have to wonder - it's true that pre-industrial societies learned very well to manage the "tragedy of the commons" problem, for the most part. But they had very deep ethnic or political or religious ties to each other and understood the deep need of the community to act together. Modern America has none of these - how will you ever convince people raised on the myth (yes, myth) of rugged individualism and the deeply ingrained philosophy of radical subjectivity to ever admit the good of the community must come first, so that all may survive? I just don't see it happening. We have nothing at all to make communities anymore, except for very small groups which will simply be plunder for the "survival of the fittest" mobs.

Spirit Flower said...
I read your weekly blog and enjoy it. Thanks! I am newly unemployed. Today, I happily sat at home at the computer and applied for unemployment. I think there is a silent revolution of sorts: white collar workers who previously went and got new jobs are tired of working too many hours for disloyal companies, and are just sitting back collecting for once (might not be any jobs anyway). We are not paying taxes, collecting instead, and finding ways to fit our expenses into the new budget. None of us really have retirement funds any more. Quietly, we are bringing the government down. Slowly, we will do new things and change the country.

John Michael Greer said...
Ahavah, Americans wallow in the myth of individualism because we're rich enough to afford to do so. The 1920s were every bit as individualistic and subjective as today, but that went away in a hurry when the 1929 crash removed the economic basis for it. I expect to see the same thing in the decade to come. Once it sinks in that a viable community is the difference between surviving and starving to death, I think you'll be quite pleasantly surprised to see how fast people see the value of working together.

Er, Flower, the moment applying for unemployment has any chance of even inconveniencing the government in the least, you may safely expect to have your application turned down.
I hope you're using your leisure to do something constructive, such as learning some hands-on trade that can support you in the years to come.

wylde otse said...
I like the way Coyote made his point; at the same time, I remember insisting (long ago) that things could also make sense 'backwards', so I perceive no inconsistency - I don't claim to be smart. Still I have this urge to kick the corporate complex in its fat ***. And how about another great disconnect, between the increasingly very few that seem to 'see' things to some higher degree, and those who seem beyond caring. (say, a terrific brain in a nerveless, paralyzed, growing body, awaiting nourishment)

Jan Suzukawa said...
My first exposure to you and your blog was when I heard you as a guest on the Coast to Coast AM radio show. You have a very interesting perspective on these times we're living in. As to this post specifically, I had not heard the term "lifeboat community," but the idea of communal living is one that I find intriguing. I am probably romanticizing the idea (and I am sure I am not the only one to do so). Your blog is refreshing as it reminds me and the rest of your readers that there are economic realities that underlie all models for living, and that those realities can't be ignored.

John Michael Greer said...
Otse, granted, in a complex system no arrow of causation only goes one way. I tend to focus on the way that ideas, metaphors, and stories shape our experienced reality because that offers handles by which reality can be changed, starting from the standpoint of the individual.

Jan, I think communal living is entirely possible, and so are viable communities in the deindustrial era. My quarrel is with those people -- and there are a lot of them -- who are basically projecting yuppie fantasies of an updated Levittown onto the future, and insisting they can maintain a modern middle-class lifestyle in the approaching age of scarcity.

ChrisH said...
Good point about the fatal flaw of most current lifeboat schemes... basically that they require people to jump ship far too early. Any viable lifeboat must find a way to remain financially viable both before and after a crash. Something more akin to a timeshare or insurance contract which allows individuals to continue their normal daily lives while providing access to a lifeboat-like place for those willing to pay. A logical place to start could be the many small-scale organic or low-impact farms that already exist. The money paid could help make them more economically viable while building resilience in the food system and providing guaranteed food and shelter as the descent progresses for those who had paid in beforehand (in exchange for labor, of course).

Andrew said...
Reminds me of a 4th year resource economics course I took, about 20 years ago, when we were discussing a cost/benefit curve. The prof pointed out that because we couldn't calculate well the cost of pollution, it was simply not included in the cost side of the curve...

Draco TB said...
It's not a new understanding of economics that's required as you say - but an acceptance that the world itself is limited and that we need to live within those limits. It's quite amusing, in a sad way, that the first thing I was taught at uni about economics is that it is the science of the distribution of limited resources, and then they went on as if the resources weren't limited.
The people you would most expect to believe in anthropogenic climate change are the economists, yet, from news reports, they are the least likely to do so. Most economists really do seem to believe that we live on an infinite world. The biggest problem I can see is that our politicians, at the behest of business, are still trying to persuade us that we can have everything we want, rather than pointing out that we have to limit ourselves. That includes the number of people in the world - present est. pop. 6.8 billion; it's gone up by ~100 million in 12 months.

John Michael Greer said... Chris, very good. That would be quite a workable scheme. The one detail I'd want to see added is a rule that the people who pay for spots on the "lifeboat" need to spend x number of weekends a year on the farm, helping to plant and harvest the crops with hand tools, so they won't be totally useless when they finally move there; organic farming is a skilled craft, and the more of the learning curve that can be gotten out of the way in advance, the better. Andrew, thanks for the excellent example. I've been reading recent economics textbooks, and finding the same blank unwillingness to deal with the economic value of nature. It's actually not that hard to measure -- you just have to start from the presupposition that it matters, and go from there. Draco, the concept that the world is limited and we need to live within those limits is the central concept of exactly the new vision of economics we need. More on this next week.

Dwig said... A good place to learn more about stakeholder-managed commons is Elinor Ostrom's book "Governing the Commons". It's academic in tone, but there's a lot of accessible information, including case studies of successful and unsuccessful commons. In particular, it addresses Ahavah's claim -- there have been and are successfully managed commons in "Modern America", and the "deep need" of the community to act is the stakeholders' understanding of the nature of the commons and what's needed to maintain and sustain it. JMG: "I've been reading recent economics textbooks, and finding the same blank unwillingness to deal with the economic value of nature." Google for "ecological economics"; you'll get a lot of links. Maybe start with Of course, this is still outside the mainstream, particularly in the US, but the ground is already being plowed, and some promising "green shoots" are growing.

Apple Jack Creek said... As I read this post, I found myself scaling the ideas down in size - the same economic constraints that make it difficult for societies or communities to transition to a different way of life hit individuals and families trying to make the shift as well. It's the problem of "living with one foot in each world": someone has to go to work in the existing world five days a week to pull in a paycheque so we can buy the fence posts and wire and pay the property taxes and the mortgage on the six acres we are trying to build up into a functional small farm. You can't make a living off the farm in the current economy, but you can't wait until everything falls apart to start preparing the infrastructure and learning the skills, either. So you end up with two full-time jobs: working for a cheque during the week, and spending your spare time trying to build up the farm, learn the skills needed to live in a world without ready access to diesel, the hardware store, and internet shopping, and practice living differently.
But when every morning you head off into the 'old world', there's a price to be paid. The psychic strain of trying to live in both worlds is... well... it can be daunting. Reading the writings of others who see the changes that are coming helps keep me motivated: the world my kids grow up in is going to look a lot different, and the effort we expend now may mean they have useful skills and resources in a changed world. It's hard, though, when the as-yet-unconvinced look at you like you are a nutcase because you spend your weekends putting up fences or building a barn.

hapibeli said... Listening to the Jay Hanson interview from the site was very interesting. Particularly the concept of all human interaction as tribally influenced. Politics is just another tribe, and those who enter it begin to do favors for each other so that they will receive favors in return... Very simple. Compromise, bringing corruption, is the result of tribal thinking. It seems that we must get beyond our "animal brain" to be able to survive, even though it is that same brain that has helped us reach our current societal state. What a conundrum.

hapibeli said... The economists are also living in a tribe, and as tribal members, they must agree to do favors [read: accepting the illusion of infinite resources] to remain in the tribe they have chosen!

wylde otse said... Thanks for the reply comment; for reminding me of the power of our own perception - which, for me, explains why your reality seems so compelling. (I've run up a few 'world-altering' blog items recently that don't have anyone saluting the flagpole yet... but I'm more comfortable with my perception, and that (my?) truth too, in the right time and circumstance, will prevail. tkx)

Thomas said... You are one of the most lucid and perceptive of the commentators on our current predicament. I've read your recent book and am looking forward to the new one. Have you looked at some of the stuff by the post-autistic economics folks? They are challenging the current models. When am I going to find you on the air? I've usually got a good 20M path to the NW in the early evening and am often looking around on the digital modes, especially PSK31. Give me a shout if you ever want to try a sked. My e-mail is good on QRZ.

Danby said... And yet it is possible for a farsighted group to actually make preparations and make them pay for themselves. I just read a story on Deutsche Welle about an effort to bring Saharan solar power to Europe. The company involved expects to lay out 400 billion euros for the completed project, including transmission capability. It remains to be seen if they can raise the financing, or if the solar steam project will actually be a success. Lord knows the attempts made in the American desert in the '70s weren't. But then again, that was 30 or more years ago. The real weak point, to me, is the transmission of power through one of the least politically stable regions in the world. Still, it shows a remarkable willingness to invest in a sustainable energy source. The lead company believes it can supply as much as 15% of Europe's future electricity needs, with power delivery starting in 2019. With sufficient build-out and redundant transmission capacity, that number could go much higher. There's no technological reason that Africa and Europe couldn't provide their entire electricity demand from Saharan sunlight.

John Michael Greer said...
Dwig, granted, there are some very thoughtful explorations of alternative economics under way, far from the mainstream. Consider this an attempt to add to them. Apple Jack, the fear of being thought a nutcase is one of the most potent forces driving people toward self-destruction. I've long thought that learning to ignore the opinion of bystanders is one of the most essential skills in life. Hapibeli, I'm not a great fan of Hanson, and this is part of the reason why. Labels such as "tribe" and "animal brain" seem much more like putdowns than useful concepts to me, at least the way Hanson is using them. Still, I may be misjudging the man. Otse, you're welcome. AB0DI, I'm still scraping together funds and hardware to get a station going -- modern equipment is the opposite of cheap, and the beat-up old boatanchors I can afford need TLC guided by a level of practical knowledge I'm still working toward. (Note to future hams: you can slam-dunk the Extra Class exam and still have only the most abstract idea of how to make a radio work.) I'll get on the air one of these days, no question, but it's likely to be CW on some improbable combination of old Heathkit and postindustrial homebrew! Dan, well, I wish them luck. Theoretically, it ought to work, and I'm glad somebody's trying it, but let's just say I have my doubts about doing solar thermal generation on that scale.

Danby said... So do I, as I hope was clear in the tone of my post. If they take a modular approach, with many small standardized installations that can be dispersed over a wide area, and if they make it in the economic interest of the locals to have the system operating, it's possible to make such a system work. On the other hand, if they take the typical American approach of building a single monolithic installation, occupying 5 square kilometers of desert, and treating the locals as an annoying environmental hazard, they are doomed.

Lance Michael Foster said... John, you've spent a long time making arguments and exploring the various avenues of underlying causes, the macrovision of past, present and future, etc. Some folks are with you, some you have helped see the light, and there are others that no matter what you say will always remain scoftics (see for that neologism!) So how about you at least alternate "grand vision"/intellectual posts on historic and economic trends with posts on the brass tacks (or bronze nails) of concrete actions? Sharon Astyk has great examples for gardening, raising animals and food storage. Orlov is great about grand visions too, but adds his personal experiences from the Russian case. (PS. At the risk of being called a Chicken Little or a Cassandra, each of us owes it to ourselves and others to read the latest from Orlov.) I KNOW you have some wonderful things to say about concrete steps each of us can take involving community, education, spiritual practices, the natural world. Sort of like how you already mentioned in passing that people should look again at slide rules, ham radio, preserving books and community libraries, preventative health care, and picking up a deindustrial trade or two (hat-making, herbalism, cobbling, blacksmithing, etc.). Whaddaya say? Can you at least rein in the academia once in a while and get to more concrete actions? Thanks!

John Michael Greer said...
Danby, of course you're right -- but I suspect if the units are small and standardized enough to make it, their main effect will be to ensure that a bunch of Algerian villages have a decent supply of electricity, at least until the spare parts run out. Lance, thanks for the vote of confidence, but I don't actually have that much to say about concrete steps -- much less than you can get from your average 1970s-era handbook of appropriate technology, say. The practical work I've had the chance to test out is specific to my own situation and needs, which won't be the same as those of most other people. Since my basic take on things is that muddling through on a case-by-case basis makes much more sense than following a recipe, I'm not well suited to going into the recipe business! What I do have to offer that's of more general use is the perspective my historical and ecological studies have given me toward the contemporary crisis of the industrial world. All the concrete steps in the world won't do the job if they're still guided by the old industrial worldview, while a clear sense of what we're facing and why will make it easier for people to choose and apply the practical skills that are relevant to their own situations. As Lincoln said, you can't please all of the people all of the time, and this blog is no exception to that rule. We've got a bunch of economics ahead of us; with any luck, it won't be too dry, but it will inevitably deal in ideas and generalities en route to the more specific proposals I have in mind. If that's not to your taste, you may want to skim the next few months.

zapoteca said... Thank you so much for addressing the practical aspects of re-acquiring what used to be tribal knowledge, etc. From a parochial standpoint - my own: I have a job that I was glad to get, for a worthwhile problem, in the private sector. I am by no means exploited. I am paid fairly. We all need one another and are painfully aware of it. There is no dead weight, and the problem drives us all. I am exhausted at the end of the week. What I SHOULD be doing on the weekend is driving around, comparing land, hydrology and soil types. What I ACTUALLY do is recover. Read quietly, walk my dog. Research some more. Recharge enough to return on Monday. I SHOULD have everything bought and paid for by the time I am ready to chuck it. Not a prayer of doing that. What will likely happen? I'll leave at the appointed time. THEN I'll do the driving around. Without an income, I'll make many compromises buying for cash. Then the financial drain of fitting up. What a nightmare. There is no prospect of a "do-over" if I make a grave mistake. People and places are always hospitable and welcoming on the front end. You don't find the skeletons in the closet till you've lived there for some time. Woulda, coulda, shoulda. We are not going to get government programs to subsidize experiments that contravene archetypal beliefs - that is, the belief that our society is a going concern on an upward trajectory, once this temporary hiccup is resolved. Poetic justice, perhaps. We boomers have been vilified for hogging the resources of the succeeding generations. My prospective giveback: I will beggar myself to establish a sustainable family compound, assuming they are not slackers who piss it away before it is really needed. JMG, I'm with you that this will unwind in a slow decline. There is virtually nothing in your book with which I disagree.

Danby said... Well, it's a good thing for Berbers, Moroccans, and Algerians to have plentiful electricity too!
That's one of the requirements, of course: that the locals get power as well as the Europeans, and at a pretty steep discount too. If it's in the best interest of Ahmed and Ghazala to keep the power flowing, they will make sure it does. The Germans are actually pretty good at this, and at doing infrastructure in the less-industrial parts of the world. Most of the power systems and telecom systems in the Mideast and Southeast Asia were designed and built by the Germans.

John Michael Greer said... Zapoteca, a lot of people will be in the same boat you are, trying to muddle through in difficult times with limited preparation. Even in those areas where things get really harsh, some will make it.

Danby, I'm certainly in favor of the Berbers et al. having electricity, especially if the technology is simple enough for them to maintain with local resources -- and it could be. My suspicion, though, is that the vision of powering Europe at something like present levels with Saharan solar facilities will turn out to be a pipe dream.

John Michael Greer said... By the way, I fielded, and deleted, yet another bunch of comments by a cornucopian flamebaiter who came barreling onto the comments page to insist that resources are infinite, ecological limits don't matter, the laws of thermodynamics don't apply to us, blah blah blah. Those claims have been addressed on this blog a couple of dozen times, and it's just not a productive use of my time to keep on repeating the same should-be-obvious points. Further rants along the same lines from any source will be deleted out of hand.

das monde said... In a response to Blue Sun, JMG writes about "a society where economic concerns override most other factors". Unfortunately, American society is not of that kind. All politics and economics is very social there, shockingly! Resources, common prosperity, economic stability or sustainability - these are things that "no one" has to worry about. The strongest concern is... the relative well-being of so-called winners. Whatever they have, however they got it, and whatever communities need - the "competitive advantage" of those winners must be protected (even if that means an end of actual "competition", or even an end of the world). Try to argue against that! Most attention in 20th-century economics went to satisfying the needs of over-producing industries. Not human needs here and there, but corporate needs of profit (and that does not guarantee a lifespan of good living on the planet, however we try to believe in "invisible hands"). Now the focus goes to satisfying "needs" of the financial industry, or investors. Who else gets the alchemy of money, or the possible relations between money and the producing economy? One inspiring "lifeboat community" example: when the US won its independence, the Bank of England offered money to borrow. The founding fathers said "Thanks, we can print money ourselves" ;-)

Jon said... John, I briefly belonged to Mr. Hanson's blog. Some of his ideas are very interesting, indeed. However, he travels with severe blinders on. I tried to start a discussion on the influence of females on the psychological evolution of human society. My idea is that the human race has been self-domesticating over the past few hundred thousand years, and that the influence of women over the life of the small but growing prehistoric communities was mostly responsible for this tempering of what was otherwise a brutal, male mindset.
Jay not only utterly rejected even the idea of such a possibility, but he would not allow the discussion to get as far as his blog. To him (by his own admission) male violence is the only significant factor in human evolution, and females appear to be no more than decorative penis sheaths. I think there is more to it than this.

Dwig said... Zapoteca, here's a thought that might help: don't try to do it all yourself. Try to hook up with some folks young and energetic enough to be happy exploring and creating on a shoestring, and some recently retired folks with time and still enough energy to take on a new venture, and some folks in roughly your own situation with whom you can pool resources and divide up tasks. No guarantee, of course, but if you can find a group of like-minded folks, the burden won't seem so heavy. Not that this is easy or quick. Just getting a group together and learning to cooperate, collaborate, and co-create will be a serious endeavor. If you're interested in going that direction, though, I recommend Shaffer and Amundsen's book "Creating Community Anywhere" as a resource. Whatever your path, I wish you well.

Robert said... You're addressing issues that desperately need to be addressed just now. I would like to ask that you try to keep things more concrete so that those of us who are economically uneducated can follow along. As far as I have been able to figure out, economists are people you take your vice to so they can wave their hands, say "Hocus pocus, economies of scale", and turn it into a virtue. I don't doubt that there probably is more to it, but it is mainly a mystery to me. I am continuously frustrated by economists who talk about "the economy" when they seem to mean "the economic status quo". When they talk about "economic collapse" they seem to mean a decline in the amount of current economic activity. Since most current economic activity involves producing things we don't need or would be better off without (think weapons or automobiles), I don't find that all that alarming. If instead they said that what was going to happen was that the loss of the efficiencies associated with the economies of scale brought about by the current level of economic activity would result in more resources having to be devoted to the production of those things that are actually needed, I might understand. This, however, might suggest another way of dealing with the situation than threatening "economic collapse", which just seems to call for panic. In this week's post you speak of alternate sources of energy as having to be paid for "out of current economic output." I hope I'm not the only one who has difficulty understanding this. We spend almost a trillion dollars per year on military activities that have nothing to do with defending the country, and who knows how much money using an industrial infrastructure that would be well suited to producing trains and buses to produce automobiles instead. Is this not current economic output that could be diverted to producing alternatives? I am not trying to be difficult, nor am I feigning ignorance. I have spent more time than I care to admit trying to understand these things.

DIYer said... The quote from the electronics book has reminded me of how utterly wrong the Bohr atom (tiny solar system, colliding billiard balls) has proven to be. Although our feeble human eyesight has never seen it, we now know of electrons expressing their "electron-ness" by sloshing around in unique solutions to the Schrödinger equation.
They behave in strange but experimentally demonstrable ways: if an electron ever came to a complete standstill, it would occupy an infinite volume of space, but with zero probability of being found at any given spot therein. There has to be a lesson in there somewhere, but I'm not sure just what... perhaps, in keeping with your essay, that our economy of "growth" and the wonderful benefits of "scale" are wrong as well.

John Michael Greer said... Das Monde, you're quite correct, of course. What dominates American society is a very narrow and ideological concept of economics -- one that leaves out most of the things people actually value. More on this in later posts. Jon, I have to admit I have my doubts about the claim that there's a "male mindset" that is innately brutal -- how is it that the simpleminded Victorian notion that men are naturally brutal and savage, and women are naturally peaceful, kind and good, suddenly became cutting-edge ideology on the left? Still, I'd noticed the aspect of Hanson's discourse that you've referenced, and yes, it's problematic. Dwig, good. The community, rather than the individual, is the basic unit of human survival. Robert, you're asking a perfectly reasonable question. The answer is that those billions of dollars spent on weapons and troops are the sole remaining basis for America's prosperity. The 5% of the world's population that lives in the US uses around a third of the world's energy, resources, and industrial production. This doesn't happen because the rest of the world loves us; it happens because we have a global empire, with troops on the ground in 140 countries, maintaining profoundly unbalanced patterns of exchange that bring most of the world's wealth to the US and its satellite countries, such as Britain and Australia. In theory, we could replace all those tanks and bombers with solar panels and wind turbines. In practice, if we did so, the US' share of the world's resource base and industrial product would soon drop to a tiny fraction of its current level, which means a third-world economy in which solar panels and wind turbines would become unaffordable luxuries, and you and I would be spending our time trying to scrape together enough food to eat rather than debating these issues on the internet. Nor would this bring an end to the inequitable distribution of the world's resources; there are already a good many candidates vying for the position of the next global empire. That's a measure of the corner we've backed ourselves into. It doesn't make it any better that our empire is going to go down the drain soon anyway, I know. DIYer, exactly! The gross national product is as mythical as the neat orbit of a Bohr electron -- that is, it's a model that represents experience, and not necessarily all that well. More on this later.

Kurt said... I'm quite late to the game, but I welcome you back, JMG! I love reading your posts, and your responses are trenchant to the subject matter, usually buttressing your arguments and often expanding on them. It is very rewarding to me to read the dialogue with your readers. I must admit that I had an "aha" moment in reading that economics is a social science (Coyote's comment), such as psychology. I had never thought of it that way, and it makes much more sense to me now (now, it seems obvious). A psych prof of mine once said that psychology was pre-paradigmatic, and the same could be said for economics. It's just like psychology: incomplete.
Recent articles that I've read also harmonize with the ecological aspects/limitations of economics, such as the response to the cornucopian economists: Other readers might find it comforting to know that not all economists discount the ecological real world in their thinking. Apologies for linking away from your blog; just open the link in another tab. :)

Jon said... I think the 'male as aggressor' sentiment comes, partly, from outliers that grab our attention. Of a hundred males, ninety-five may be perfectly content to live and let live and do the right thing for their families, friends and neighbors. Five sadistic jerks make the rest of us look bad. I know that in war, a high percentage of soldiers (80? 85?) will intentionally aim high when shooting at the enemy. All generals know this and try to increase the numbers by depersonalizing the enemy. A blip on a screen doesn't suffer.

cheeba said... Slightly cheeky off-topic request of the erudite TADR readership: Can anyone help me find this book I was reading on Google Books at work and had to suddenly desist from? I think I got a link to it from somewhere in the last few weeks, so someone else might have also seen it. Otherwise this is going to sound really random. All I can remember is: - It concerned a general theory of cultural decline, but was fairly positive and non-doomy. - It was by an American with an English-sounding name, possibly using a middle initial. Possibly even co-authored with his wife? - It had an amazing theory of cultural rhythms with very strange diagrams of same in the second half. - Published in the last two decades, probably the '90s or early '00s. - I think the cover was green. If nothing else, this is how my brain works at the moment! If anyone can successfully tell me, I will buy it and review it (briefly!) in the comments here in gratitude. If not (or - JMG - if this is out of order), apologies for the interruption (and delete me peremptorily without fear of ire!).

divelly said... On male/female: Several years ago three women social scientists collated all research on gender behavioral differences in societies ranging from primitive to modern, without any criticism as to methodology, bias, etc. They discovered only 2 universal traits: 1. Men were more physically violent. 2. Women were the primary child care providers.

Nathan said... Have you heard of Dancing Rabbit or the Sanctuary? The former supposedly cost less than 2 million to establish and the latter less than 200K. The former may take more money to reach a critical mass where a local economy is possible. The latter uses no electricity or internal combustion engines.

tom said... cheeba, the book you describe sounds a lot like 'A Prosperous Way Down' by Howard T. Odum and Elisabeth C. Odum. There's a summary at Pretty much anything by anyone called Odum seems to be worth a look...

cheeba said... Yes! That's exactly what it was. Muchas gracias, tom. Frankly, I'm a little amazed that you got it from my threadbare fragments of memory. Score another one for the ABR readership... >Pretty much anything by anyone >called Odum seems to be worth a >look... And something by two people called Odum is doubly so, presumably.

Kartturi said... Just one short question: Is winning a war cost-effective?

chad said... Millions of dollars for a utopian survival camp that still relies on cob housing and permaculture techniques?
JMG, I trust you're aware there are evangelists of the survival-camp utopian future who claim they can build a survival camp fit to support 200 people with a mere 50,000 dollars total, claiming that building houses with mud and growing all your own food from the land itself eliminates the very need for money or government. Could you please elaborate on how and why the above-mentioned scenario is foolishness and naivete at best, and downright conmanship and scamming at worst?
Atomic orbital

(Image: the shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wave function phase. These are graphs of ψ(x, y, z) functions which depend on the coordinates of one electron. To see the elongated shape of ψ(x, y, z)² functions that show probability density more directly, see the graphs of d-orbitals below.)

Each orbital in an atom is characterized by a unique set of values of the three quantum numbers n, ℓ, and mℓ, which respectively correspond to the electron's energy, angular momentum, and an angular momentum vector component (the magnetic quantum number). Any orbital can be occupied by a maximum of two electrons, each with its own spin quantum number. The simple names s orbital, p orbital, d orbital and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2 and 3 respectively. These names, together with the value of n, are used to describe the electron configurations of atoms. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 continue alphabetically, omitting j (g, h, i, k, ...).[3][4][5]

Electron properties

With the development of quantum mechanics and experimental findings (such as the two-slit diffraction of electrons), it was found that the electrons orbiting a nucleus could not be fully described as particles, but needed to be explained by wave-particle duality. In this sense, the electrons have the following properties:

Wave-like properties:
1. The electrons do not orbit the nucleus in the sense of a planet orbiting the sun, but instead exist as standing waves. The lowest possible energy an electron can take is therefore analogous to the fundamental frequency of a wave on a string. Higher energy states are then similar to harmonics of the fundamental frequency.

Particle-like properties:
1. There is always an integer number of electrons orbiting the nucleus.
2. Electrons jump between orbitals like particles: if a single photon strikes the electrons, only a single electron changes state in response.
3. The electrons retain particle-like properties such as: each wave state has the same electrical charge as the electron particle. Each wave state has a single discrete spin (spin up or spin down).

Thus, despite the obvious analogy to planets revolving around the Sun, electrons cannot be described simply as solid particles. In addition, atomic orbitals do not closely resemble a planet's elliptical path in ordinary atoms. A more accurate analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus, so that the resulting collection (sometimes termed the atom's "electron cloud"[6]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

Formal quantum mechanical definition

Atomic orbitals may be defined more precisely in formal quantum mechanical language. Specifically, in quantum mechanics, the state of an atom, i.e.
an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single-electron wave functions.[7] (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)

Types of orbitals

(Image: false-color density images of some hydrogen-like atomic orbitals; f orbitals and higher are not shown.)

Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e. orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for atomic orbitals are usually spherical coordinates (r, θ, φ) in atoms and Cartesian coordinates (x, y, z) in polyatomic molecules. The advantage of spherical coordinates (for atoms) is that an orbital wave function is a product of three factors, each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals Θ(θ) Φ(φ) generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm(θ, φ) (where ℓ and m are quantum numbers). There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons (a brief numerical comparison of their decay profiles follows below):

1. The hydrogen-like atomic orbitals are derived from the exact solution of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance from the nucleus has nodes (radial nodes) and decays as e−(constant × distance).
2. The Slater-type orbital (STO) has no radial nodes but decays with distance in the same exponential way, e−(constant × distance), as the hydrogen-like orbitals.
3. The Gaussian-type orbital (Gaussians) has no radial nodes and decays as e−(distance squared).

History

Main article: Atomic theory

The term "orbital" was coined by Robert Mulliken in 1932 as an abbreviation for one-electron orbital wave function.[8] However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr,[9] and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904.[10] Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.[11]

Early models

With J. J. Thomson's discovery of the electron in 1897,[12] it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other. Thomson theorized that multiple electrons revolved in orbit-like rings within a positively charged jelly-like substance,[13] and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure.
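An aside on the three radial forms described above: the qualitative difference between them is easy to see numerically. Below is a minimal Python sketch (an illustration, not part of the original article) tabulating the three decay profiles; the exponent values are arbitrary illustrative choices, not fitted parameters.

```python
import numpy as np

# Illustrative, unnormalized radial profiles of the three families:
# hydrogen-like and Slater-type orbitals both decay as exp(-constant * r),
# while Gaussian-type orbitals decay as exp(-constant * r^2), i.e. much
# faster at long range (and with zero slope at the nucleus).
r = np.linspace(0.0, 6.0, 7)       # radial distances (illustrative grid)
hydrogen_like = np.exp(-r)         # exp(-r), constant set to 1
slater_type = np.exp(-1.5 * r)     # STO with an arbitrary exponent zeta = 1.5
gaussian_type = np.exp(-r ** 2)    # GTO with an arbitrary exponent alpha = 1

for ri, h, s, g in zip(r, hydrogen_like, slater_type, gaussian_type):
    print(f"r = {ri:3.1f}   hydrogen-like = {h:9.6f}   STO = {s:9.6f}   GTO = {g:9.6f}")
```

Gaussians are nonetheless the workhorse of molecular calculations, because the product of two Gaussians centered on different nuclei is again a single Gaussian, which makes many-center integrals cheap; the price is the qualitatively wrong behavior both at the nucleus and at long range.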
Shortly after Thomson's discovery, Hantaro Nagaoka, a Japanese physicist, predicted a different model for electronic structure.[10] Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time,[14] and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation.[15] Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries.

Bohr atom

In 1909, Ernest Rutherford discovered that the bulk of the atomic mass was tightly condensed into a nucleus, which was also found to be positively charged. It became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. Shortly after, in 1913, Rutherford's post-doctoral student Niels Bohr proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were only permitted to have discrete values of angular momentum, quantized in units h/2π.[9] This constraint automatically permitted only certain values of electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines.

(Image: the Rutherford–Bohr model of the hydrogen atom.)

With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of the hydrogen-like atom, a Bohr electron "wavelength" could be seen to be a function of its momentum, and thus a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength (this physically incorrect Bohr model is still often taught to beginning students). The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926. In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed.

Modern conceptions and connections to the Heisenberg uncertainty principle

In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere[citation needed] in a three-dimensional atom and was pictured as the mean energy of the probability cloud of the electron's wave packet which surrounded the atom.

Orbital names

Orbitals are given names in the form

    X type^y

where X is the energy level corresponding to the principal quantum number n, "type" is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular quantum number ℓ, and y is the number of electrons in that orbital. For example, the orbital 1s² (pronounced "one ess two") has two electrons, is in the lowest energy level (n = 1), and has angular quantum number ℓ = 0. In X-ray notation, the principal quantum number is given a letter associated with it.
For n = 1, 2, 3, 4, 5, ..., the letters associated with those numbers are K, L, M, N, O, ... respectively.

Hydrogen-like orbitals

Main article: Hydrogen-like atom

A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, ℓ, and mℓ. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table.

Quantum numbers

Main article: Quantum number

Complex orbitals

The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n0 − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ is sometimes collectively called a subshell. The magnetic quantum number mℓ describes the magnetic moment of an electron in an arbitrary direction, and is also always an integer. Within a subshell where ℓ is some integer ℓ0, mℓ ranges thus: −ℓ0 ≤ mℓ ≤ ℓ0. The allowed values of mℓ for each n and ℓ:

          ℓ = 0   ℓ = 1      ℓ = 2             ℓ = 3                    ℓ = 4
  n = 1     0
  n = 2     0    −1, 0, 1
  n = 3     0    −1, 0, 1   −2, −1, 0, 1, 2
  n = 4     0    −1, 0, 1   −2, −1, 0, 1, 2   −3, −2, −1, 0, 1, 2, 3
  n = 5     0    −1, 0, 1   −2, −1, 0, 1, 2   −3, −2, −1, 0, 1, 2, 3   −4, −3, −2, −1, 0, 1, 2, 3, 4

Each electron also has a spin quantum number, s, which describes the spin of each electron (spin up or spin down). The number s can be +1/2 or −1/2. The Pauli exclusion principle states that no two electrons can occupy the same quantum state: every electron in an atom must have a unique combination of quantum numbers.

The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing m = +1 from m = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment — where an atom is exposed to a magnetic field — provides one such example.[18]

Real orbitals

In the real hydrogen-like orbitals, for example, n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is). The orbitals are given new names based on their shape with respect to a standardized Cartesian basis. The real hydrogen-like p orbitals are given by the following:[19][20][21]

    p_z = p_0
    p_x = \frac{1}{\sqrt{2}}\left(p_1 + p_{-1}\right)
    p_y = \frac{1}{i\sqrt{2}}\left(p_1 - p_{-1}\right)

where p0 = Rn1 Y1,0, p1 = Rn1 Y1,1, and p−1 = Rn1 Y1,−1 are the complex orbitals corresponding to ℓ = 1.

Shapes of orbitals

(Image: cross-section of a computed hydrogen atom orbital, |ψ(r, θ, φ)|², for the 6s (n = 6, ℓ = 0, m = 0) orbital. Note that s orbitals, though spherically symmetrical, have radially placed wave-nodes for n > 1. However, only s orbitals invariably have a center anti-node; the other types never do.)

Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot, however, show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron (almost) anywhere in space.
Instead, the diagrams are approximate representations of boundary or contour surfaces where the probability density |ψ(r, θ, φ)|² has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although |ψ|², as the square of an absolute value, is everywhere non-negative, the sign of the wave function ψ(r, θ, φ) is often indicated in each subregion of the orbital picture. Sometimes the ψ function is graphed to show its phases, rather than |ψ(r, θ, φ)|², which shows probability density but has no phases (the phases are lost in the process of taking the absolute value, since ψ(r, θ, φ) is a complex number). |ψ(r, θ, φ)|² orbital graphs tend to have less spherical, thinner lobes than ψ(r, θ, φ) graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, in order to show wave function phases, shows mostly ψ(r, θ, φ) graphs.

The lobes can be viewed as interference patterns between the two counter-rotating "m" and "−m" modes, with the projection of the orbital onto the xy plane having m resonant wavelengths around the circumference. For each m there are two of these combinations, |m⟩ + |−m⟩ and |m⟩ − |−m⟩. For the case where m = 0 the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. For the case where ℓ = 0 there are no counter-rotating modes; there are only radial modes and the shape is spherically symmetric. For any given n, the smaller ℓ is, the more radial nodes there are. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation. Also, in general terms, ℓ determines an orbital's shape, and m its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on m as well. Together, the whole set of orbitals for a given ℓ and n fill space as symmetrically as possible, though with increasingly complex sets of lobes and nodes.

The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"—there are two lobes pointing in opposite directions from each other). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combinations of values of m. The overall result is a lobe pointing along each direction of the primary axes.

(Image: the five d orbitals in |ψ(x, y, z)|² form, with a combination diagram showing how they fit together to fill space around an atomic nucleus.)

Orbitals table

This table shows all orbital configurations for the real hydrogen-like wave functions up to 7s, and therefore covers the simple electronic configuration for all elements in the periodic table up to radium. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics.

(Table of orbital images, one row per n from n = 1 to n = 7, with columns s (ℓ = 0); pz, px, py (ℓ = 1); dz², dxz, dyz, dxy, dx²−y² (ℓ = 2); and fz³, fxz², fyz², fxyz, fz(x²−y²), fx(x²−3y²), fy(3x²−y²) (ℓ = 3).)

Qualitative understanding of shapes

Below, a number of drum membrane vibration modes are shown. The analogous wave functions of the hydrogen atom are indicated.
A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ).

(Images: s-type, p-type, and d-type drum membrane vibration modes.)

Orbital energy

Main article: Electron shell

In atoms with a single electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined exclusively by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher level of energy, but the difference decreases as n increases. For high n, the level of energy becomes so high that the electron can easily escape from the atom. In single-electron atoms, all levels with different ℓ within a given n are (to a good approximation) degenerate, and have the same energy. This approximation is broken to a slight extent by the effect of the magnetic field of the nucleus, and by quantum electrodynamics effects. The latter induce tiny binding energy differences, especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift.

The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low-angular-momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum number n of electrons becomes less and less important in their energy placement. The order in which successive subshells are filled is:

          s    p    d    f    g
  n = 1   1
  n = 2   2    3
  n = 3   4    5    7
  n = 4   6    8   10   13
  n = 5   9   11   14   17   21
  n = 6  12   15   18   22
  n = 7  16   19   23
  n = 8  20   24

(so 1s is filled first, then 2s, 2p, 3s, 3p, 4s, 3d, and so on).

Electron placement and the periodic table

(Image: atomic orbitals and periodic table construction.)

The periodic table may also be divided into several numbered rectangular 'blocks'. The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. (Block layout by period: 2s, 2p; 3s, 3p; 4s, 3d, 4p; 5s, 4d, 5p; 6s, (4f), 5d, 6p; 7s, (5f), 6d, 7p.)

Relativistic effects

There are no nodes in relativistic orbital densities, although individual components of the wavefunction will have nodes.[27]

Transitions between orbitals

Bound quantum states have discrete energy levels. When applied to atomic orbitals, this means that the energy differences between states are also discrete. A transition between these states (i.e. an electron absorbing or emitting a photon) can thus only happen if the photon has an energy corresponding with the exact energy difference between said states. Consider two states of the hydrogen atom:

State 1: n = 1, ℓ = 0, mℓ = 0, s = +1/2
State 2: n = 2, ℓ = 0, mℓ = 0, s = +1/2

Since the energy of a hydrogen level is En = −13.6 eV/n², the difference between these two states is 13.6 eV × (1 − 1/4) ≈ 10.2 eV, so only a photon of roughly that energy (ultraviolet, wavelength about 122 nm) can drive this transition.

References

5. Laidler, Keith J.; Meiser, John H. (1982). Physical Chemistry. Benjamin/Cummings. p. 488. ISBN 0-8053-5682-7.
6. Feynman, Richard; Leighton, Robert B.; Sands, Matthew (2006). The Feynman Lectures on Physics - The Definitive Edition, Vol. 1, lect. 6. Pearson PLC, Addison Wesley. p. 11.
ISBN 0-8053-9046-4.
7. Penrose, Roger. The Road to Reality.
8. Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49.
10. Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7 (41): 445–455. doi:10.1080/14786440409463141.
12. Thomson, J. J. (1897). "Cathode rays". Philosophical Magazine 44 (269): 293. doi:10.1080/14786449708621070.
14. Rhodes, Richard (1995). The Making of the Atomic Bomb. Simon & Schuster. pp. 50–51. ISBN 978-0-684-81378-3.
16. Heisenberg, W. (March 1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik A 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/BF01397280.
17. Bohr, Niels (April 1928). "The Quantum Postulate and the Recent Development of Atomic Theory". Nature 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0.
18. Gerlach, W.; Stern, O. (1922). "Das magnetische Moment des Silberatoms". Zeitschrift für Physik 9: 353–355. Bibcode:1922ZPhy....9..353G. doi:10.1007/BF01326984.
19. Levine, Ira (2000). Quantum Chemistry. Upper Saddle River, NJ: Prentice-Hall. p. 148. ISBN 0-13-685512-1.
20. Chisholm, C. D. H. (1976). Group Theoretical Techniques in Quantum Chemistry. New York: Academic Press. ISBN 0-12-172950-8.
21. Blanco, Miguel A.; Flórez, M.; Bermejo, M. (December 1997). "Evaluation of the rotation matrices in the basis of real spherical harmonics". Journal of Molecular Structure: THEOCHEM 419 (1–3): 19–27. doi:10.1016/S0166-1280(97)00185-1.
22. Powell, Richard E. (1968). "The five equivalent d orbitals". Journal of Chemical Education 45 (1): 45. Bibcode:1968JChEd..45...45P. doi:10.1021/ed045p45.
23. Kimball, George E. (1940). "Directed Valence". The Journal of Chemical Physics 8 (2): 188. Bibcode:1940JChPh...8..188K. doi:10.1063/1.1750628.
24. Cazenave, T.; Lions, P. L. (1982). "Orbital stability of standing waves for some nonlinear Schrödinger equations". Communications in Mathematical Physics 85 (4): 549–561. Bibcode:1982CMaPh..85..549C. doi:10.1007/BF01403504.
26. Lower, Stephen. "Primer on Quantum Theory of the Atom".
27. Szabo, Attila (1969). "Contour diagrams for relativistic orbitals". Journal of Chemical Education 46 (10): 678. Bibcode:1969JChEd..46..678S. doi:10.1021/ed046p678.
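As a concrete check of the real-orbital construction described earlier (px and py built from the m = ±1 complex orbitals), here is a short Python sketch of the angular parts, using scipy's spherical harmonics. One caveat: scipy's sph_harm follows the Condon–Shortley phase convention, so the signs in the combinations below differ from the unsigned textbook formulas quoted in the article; this is an illustrative sketch under that convention, not code from the article.

```python
import numpy as np
from scipy.special import sph_harm

# Angular grid: scipy's sph_harm(m, l, theta, phi) takes theta = azimuthal
# angle in [0, 2*pi) and phi = polar angle in [0, pi].
theta = np.linspace(0.0, 2 * np.pi, 50)      # azimuth
phi = np.linspace(0.01, np.pi - 0.01, 50)    # polar
theta, phi = np.meshgrid(theta, phi)

y1p = sph_harm(1, 1, theta, phi)    # Y_1^{+1} (Condon-Shortley phase)
y1m = sph_harm(-1, 1, theta, phi)   # Y_1^{-1}

# Real combinations; with the Condon-Shortley convention these give the
# p_x and p_y angular parts (other phase conventions flip some signs).
px = (y1m - y1p) / np.sqrt(2.0)
py = 1j * (y1m + y1p) / np.sqrt(2.0)

# Both combinations should come out purely real...
assert np.max(np.abs(px.imag)) < 1e-12
assert np.max(np.abs(py.imag)) < 1e-12

# ...and proportional to x/r = sin(phi)cos(theta) and y/r = sin(phi)sin(theta).
expected_px = np.sqrt(3 / (4 * np.pi)) * np.sin(phi) * np.cos(theta)
expected_py = np.sqrt(3 / (4 * np.pi)) * np.sin(phi) * np.sin(theta)
print(np.allclose(px.real, expected_px), np.allclose(py.real, expected_py))
```

Both combinations come out purely real and point along x and y respectively, which is exactly what the labels px and py promise.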
Why does the formalism of QM represent reversible changes (e.g. the time evolution operator, quantum gates, etc.) with unitary operators? To put it another way, can it be shown that unitary transformations preserve entropy?

2 Answers

Accepted answer: The fact that evolutions of quantum mechanics are unitary after finite periods of time can be proven from the Schrödinger equation, and hinges on the characterization of unitary operators as those linear operators which are norm-preserving. Recall the Schrödinger equation: $$ \frac{\mathrm d}{\mathrm d t} |\psi\rangle \;=\; -i H |\psi\rangle \;,$$ where $H$ may or may not be time-dependent, but in any case has real eigenvalues, so that $H = H^\dagger$. As a result, the way in which $|\psi\rangle$ changes instantaneously with time is such that its magnitude, as a vector in Hilbert space, does not change. We can see this by simply computing: $$ \frac{\mathrm d}{\mathrm dt} \langle\psi|\psi\rangle \;=\; \left[\frac{\mathrm d}{\mathrm dt} \langle\psi|\right]|\psi\rangle + \langle\psi|\left[\frac{\mathrm d}{\mathrm dt}|\psi\rangle\right] \;=\; i\langle\psi|H^\dagger|\psi\rangle - i\langle\psi|H|\psi\rangle \;=\; 0,$$ because $H = H^\dagger$. Then at all times the state vector remains the same length. That is to say, the norm is preserved under finite-time evolution. Because unitary operators are exactly those which preserve the norm, it follows that the finite-time evolution will be unitary.

Comment (Benjamin Hodgson): That's helpful, thank you. Please could you explain in a little more detail why it is that finite-time evolutions of this sort should be reversible?

Comment (Niel de Beaudrap): Reversibility is a consequence of unitarity: precisely because $|\psi\rangle \mapsto U|\psi\rangle$ preserves norms, we have $1 = \langle \psi | \psi \rangle = \langle \psi | U^\dagger U | \psi \rangle$ for all unit vectors $|\psi\rangle$. It follows by linearity that $U^\dagger U = I$, which in particular means that $U$ is invertible.

Second answer: Like all 20th-century physics, the formalism is invariant with respect to time reversal. This was true in classical mechanics, and it remains true in QM because canonical quantization does not alter the meaning of energy - it just becomes an evolution operator. Unitary operators satisfying $A A^{\dagger} = I$ are associated logarithmically with Hermitian ones with real eigenvalues. Since measured quantities must be real, this was historically the inevitable route for QM. Entropy should not be confused with unitary evolution. It is a thermodynamic quantity requiring a system of many states, and it always increases in one direction of time, even when the underlying laws are time-reversal invariant. This was Boltzmann's original insight (see the kinetic theory of gases). With black holes we can talk about other forms of entropy, but the general view is that unitary evolution applies to all the usual quantum states, so that information is conserved even in black hole processes. This is the so-called Information Paradox. The story is much more complicated in QFT, but time-reversal symmetry is still maintained at a fundamental level, even though symmetry is broken for specific interactions by CP violation.
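To see the accepted answer's claim numerically, here is a minimal numpy/scipy sketch (an illustration, not from either answer), assuming only that H is Hermitian: the matrix exponential U = exp(−iHt) comes out unitary, norms are preserved, and applying U† undoes the evolution.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)

# Random Hermitian "Hamiltonian": H = A + A^dagger is Hermitian by construction.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = A + A.conj().T

t = 0.37                         # arbitrary finite evolution time
U = expm(-1j * H * t)            # finite-time evolution operator

# Unitarity: U^dagger U = I, so norms (and inner products) are preserved.
print(np.allclose(U.conj().T @ U, np.eye(4)))        # True

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                           # normalized state
print(np.isclose(np.linalg.norm(U @ psi), 1.0))      # norm preserved: True

# Reversibility: evolving backwards with U^dagger recovers the initial state.
print(np.allclose(U.conj().T @ (U @ psi), psi))      # True
```

The same check works for any Hermitian H, and this invertibility of U is what makes quantum gates reversible in the circuit model.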
Pauli exclusion principle

(Image: Wolfgang Pauli.)

The Pauli exclusion principle is the quantum mechanical principle that states that two identical fermions (particles with half-integer spin) cannot occupy the same quantum state simultaneously. In the case of electrons, it can be stated as follows: it is impossible for two electrons of a poly-electron atom to have the same values of the four quantum numbers (n, ℓ, mℓ and ms). For two electrons residing in the same orbital, n, ℓ, and mℓ are the same, so ms must be different and the electrons have opposite spins. This principle was formulated by Austrian physicist Wolfgang Pauli in 1925. A more rigorous statement is that the total wave function for two identical fermions is antisymmetric with respect to exchange of the particles. This means that the wave function changes its sign if the space and spin coordinates of any two particles are interchanged. Integer-spin particles, bosons, are not subject to the Pauli exclusion principle: any number of identical bosons can occupy the same quantum state, as with, for instance, photons produced by a laser, or a Bose–Einstein condensate.

Overview

The Pauli exclusion principle governs the behavior of all fermions (particles with "half-integer spin"), while bosons (particles with "integer spin") are not subject to it. Fermions include elementary particles such as quarks (the constituent particles of protons and neutrons), electrons and neutrinos. In addition, protons and neutrons (subatomic particles composed from three quarks) and some atoms are fermions, and are therefore subject to the Pauli exclusion principle as well. Atoms can have different overall "spin", which determines whether they are fermions or bosons — for example helium-3 has spin 1/2 and is therefore a fermion, in contrast to helium-4 which has spin 0 and is a boson.[1]:123–125 As such, the Pauli exclusion principle underpins many properties of everyday matter, from its large-scale stability to the chemical behavior of atoms.

"Half-integer spin" means that the intrinsic angular momentum value of fermions is ħ = h/2π (the reduced Planck constant) times a half-integer (1/2, 3/2, 5/2, etc.). In the theory of quantum mechanics, fermions are described by antisymmetric states. In contrast, particles with integer spin (called bosons) have symmetric wave functions; unlike fermions, they may share the same quantum states. Bosons include the photon, the Cooper pairs which are responsible for superconductivity, and the W and Z bosons. (Fermions take their name from the Fermi–Dirac statistical distribution that they obey, and bosons from their Bose–Einstein distribution.)

History

In the early 20th century it became evident that atoms and molecules with even numbers of electrons are more chemically stable than those with odd numbers of electrons. In the 1916 article "The Atom and the Molecule" by Gilbert N. Lewis, for example, the third of his six postulates of chemical behavior states that the atom tends to hold an even number of electrons in the shell, and especially to hold eight electrons, which are normally arranged symmetrically at the eight corners of a cube (see: cubical atom).[2] In 1919 chemist Irving Langmuir suggested that the periodic table could be explained if the electrons in an atom were connected or clustered in some manner.
Groups of electrons were thought to occupy a set of electron shells around the nucleus.[3] In 1922, Niels Bohr updated his model of the atom by assuming that certain numbers of electrons (for example 2, 8 and 18) corresponded to stable "closed shells".[4]:203

Pauli looked for an explanation for these numbers, which were at first only empirical. At the same time he was trying to explain experimental results of the Zeeman effect in atomic spectroscopy and in ferromagnetism. He found an essential clue in a 1924 paper by Edmund C. Stoner, which pointed out that, for a given value of the principal quantum number ($n$), the number of energy levels of a single electron in the alkali metal spectra in an external magnetic field, where all degenerate energy levels are separated, is equal to the number of electrons in the closed shell of the noble gases for the same value of $n$. This led Pauli to realize that the complicated numbers of electrons in closed shells can be reduced to the simple rule of one electron per state, if the electron states are defined using four quantum numbers. For this purpose he introduced a new two-valued quantum number, identified by Samuel Goudsmit and George Uhlenbeck as electron spin.[5]

Connection to quantum state symmetry

The Pauli exclusion principle with a single-valued many-particle wavefunction is equivalent to requiring the wavefunction to be antisymmetric. An antisymmetric two-particle state is represented as a sum of states in which one particle is in state $|x\rangle$ and the other in state $|y\rangle$:
$$|\psi\rangle = \sum_{x,y} A(x,y)\,|x,y\rangle,$$
and antisymmetry under exchange means that $A(x,y) = -A(y,x)$. This implies $A(x,y) = 0$ when $x = y$, which is Pauli exclusion. It is true in any basis, since unitary changes of basis keep antisymmetric matrices antisymmetric, although strictly speaking the quantity $A(x,y)$ is not a matrix but an antisymmetric rank-two tensor.

Conversely, if the diagonal quantities $A(x,x)$ are zero in every basis, then the wavefunction component
$$A(x,y) = \langle \psi|x,y\rangle = \langle \psi |\,( |x\rangle \otimes |y\rangle )$$
is necessarily antisymmetric. To prove it, consider the matrix element
$$\langle\psi|\,\Big((|x\rangle + |y\rangle)\otimes(|x\rangle + |y\rangle)\Big).$$
This is zero, because the two particles have zero probability to both be in the superposition state $|x\rangle + |y\rangle$. But this is equal to
$$\langle \psi |x,x\rangle + \langle \psi |x,y\rangle + \langle \psi |y,x\rangle + \langle \psi | y,y \rangle.$$
The first and last terms on the right-hand side are diagonal elements and are zero, and the whole sum is equal to zero. So the wavefunction matrix elements obey
$$\langle \psi|x,y\rangle + \langle\psi |y,x\rangle = 0,$$
i.e., $A(x,y) = -A(y,x)$.

Pauli principle in advanced quantum theory

According to the spin-statistics theorem, particles with integer spin occupy symmetric quantum states, and particles with half-integer spin occupy antisymmetric states; furthermore, only integer or half-integer values of spin are allowed by the principles of quantum mechanics. In relativistic quantum field theory, the Pauli principle follows from applying a rotation operator in imaginary time to particles of half-integer spin. In one dimension, bosons, as well as fermions, can obey the exclusion principle. A one-dimensional Bose gas with delta-function repulsive interactions of infinite strength is equivalent to a gas of free fermions.
The reason for this is that, in one dimension, exchange of particles requires that they pass through each other; for infinitely strong repulsion this cannot happen. This model is described by a quantum nonlinear Schrödinger equation. In momentum space the exclusion principle is valid also for finite repulsion in a Bose gas with delta-function interactions,[6] as well as for interacting spins and the Hubbard model in one dimension, and for other models solvable by Bethe ansatz. The ground state in models solvable by Bethe ansatz is a Fermi sphere.

Atoms and the Pauli principle

The Pauli exclusion principle helps explain a wide variety of physical phenomena. One particularly important consequence of the principle is the elaborate electron shell structure of atoms and the way atoms share electrons, explaining the variety of chemical elements and their chemical combinations. An electrically neutral atom contains bound electrons equal in number to the protons in the nucleus. Electrons, being fermions, cannot occupy the same quantum state as other electrons, so electrons have to "stack" within an atom, i.e. have different spins while in the same electron orbital, as described below.

An example is the neutral helium atom, which has two bound electrons, both of which can occupy the lowest-energy (1s) states by acquiring opposite spin; as spin is part of the quantum state of the electron, the two electrons are in different quantum states and do not violate the Pauli principle. However, the spin can take only two different values (eigenvalues). In a lithium atom, with three bound electrons, the third electron cannot reside in a 1s state and must occupy one of the higher-energy 2s states instead. Similarly, successively larger elements must have shells of successively higher energy. The chemical properties of an element largely depend on the number of electrons in the outermost shell; atoms with different numbers of shells but the same number of electrons in the outermost shell have similar properties, which gives rise to the periodic table of the elements.[7]:214–218

Solid state properties and the Pauli principle

In conductors and semiconductors, there are very large numbers of molecular orbitals which effectively form a continuous band structure of energy levels. In strong conductors (metals) electrons are so degenerate that they cannot even contribute much to the thermal capacity of a metal.[8]:133–147 Many mechanical, electrical, magnetic, optical and chemical properties of solids are the direct consequence of Pauli exclusion.

Stability of matter

The stability of the electrons in an atom itself is unrelated to the exclusion principle, but is described by the quantum theory of the atom. The underlying idea is that close approach of an electron to the nucleus of the atom necessarily increases its kinetic energy, an application of the uncertainty principle of Heisenberg.[9] However, stability of large systems with many electrons and many nucleons is a different matter, and requires the Pauli exclusion principle.[10]

It has been shown that the Pauli exclusion principle is responsible for the fact that ordinary bulk matter is stable and occupies volume. This suggestion was first made in 1931 by Paul Ehrenfest, who pointed out that the electrons of each atom cannot all fall into the lowest-energy orbital and must occupy successively larger shells.
Atoms therefore occupy a volume and cannot be squeezed too closely together.[11] A more rigorous proof was provided in 1967 by Freeman Dyson and Andrew Lenard, who considered the balance of attractive (electron–nuclear) and repulsive (electron–electron and nuclear–nuclear) forces and showed that ordinary matter would collapse and occupy a much smaller volume without the Pauli principle.[12][13] The consequence of the Pauli principle here is that electrons of the same spin are kept apart by a repulsive exchange interaction, which is a short-range effect, acting simultaneously with the long-range electrostatic or Coulombic force. This effect is partly responsible for the everyday observation in the macroscopic world that two solid objects cannot be in the same place at the same time.

Astrophysics and the Pauli principle

Dyson and Lenard did not consider the extreme magnetic or gravitational forces which occur in some astronomical objects. In 1995 Elliott Lieb and coworkers showed that the Pauli principle still leads to stability in intense magnetic fields such as in neutron stars, although at a much higher density than in ordinary matter.[14] It is a consequence of general relativity that, in sufficiently intense gravitational fields, matter collapses to form a black hole.

Astronomy provides a spectacular demonstration of the effect of the Pauli principle, in the form of white dwarf and neutron stars. In both types of body, atomic structure is disrupted by large gravitational forces, leaving the constituents supported by "degeneracy pressure" alone. This exotic form of matter is known as degenerate matter. In white dwarfs atoms are held apart by electron degeneracy pressure. In neutron stars, subject to even stronger gravitational forces, electrons have merged with protons to form neutrons. Neutrons are capable of producing an even higher degeneracy pressure, albeit over a shorter range. This can stabilize neutron stars from further collapse, but at a smaller size and higher density than a white dwarf. Neutrons are the most "rigid" objects known; their Young modulus (or more accurately, bulk modulus) is 20 orders of magnitude larger than that of diamond. However, even this enormous rigidity can be overcome by the gravitational field of a massive star or by the pressure of a supernova, leading to the formation of a black hole.[15]:286–287

References

1. ^ Krane, Kenneth S. (1987). Introductory Nuclear Physics. Wiley. ISBN 978-0-471-80553-3.
2. ^ Lewis, Gilbert N. (1916). "The Atom and the Molecule". Journal of the American Chemical Society 38 (4): 762–785.
3. ^ Langmuir, Irving (1919). "The Arrangement of Electrons in Atoms and Molecules" (PDF). Journal of the American Chemical Society 41 (6): 868–934. doi:10.1021/ja02227a002. Retrieved 2008-09-01.
4. ^ Shaviv, Giora (2010). The Life of Stars: The Controversial Inception and Emergence of the Theory of Stellar Structure. Springer. ISBN 978-3642020872.
5. ^ Straumann, Norbert (2004). "The Role of the Exclusion Principle for Atoms to Stars: A Historical Account". Invited talk at the 12th Workshop on Nuclear Astrophysics.
6. ^ Izergin, A.; Korepin, V. (1982). Letters in Mathematical Physics 6: 283.
8. ^ Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). John Wiley & Sons, Inc. ISBN 978-0-471-41526-8.
9. ^ Lieb, Elliott H. The Stability of Matter and Quantum Electrodynamics.
10. ^ This realization is attributed by Lieb and by G. L. Sewell (2002; Quantum Mechanics and Its Emergent Macrophysics, Princeton University Press, ISBN 0-691-05832-6)
to F. J. Dyson and A. Lenard: Stability of Matter, Parts I and II (J. Math. Phys. 8, 423–434 (1967); J. Math. Phys. 9, 698–711 (1968)).
11. ^ As described by F. J. Dyson (J. Math. Phys. 8, 1538–1545 (1967)), Ehrenfest made this suggestion in his address on the occasion of the award of the Lorentz Medal to Pauli.
13. ^ Dyson, Freeman (1967). "Ground-State Energy of a Finite System of Charged Particles". J. Math. Phys. 8 (8): 1538–1545. Bibcode:1967JMP.....8.1538D. doi:10.1063/1.1705389.
14. ^ Lieb, E. H.; Loss, M.; Solovej, J. P. (1995). "Stability of Matter in Magnetic Fields". Phys. Rev. Lett. 75 (6): 985–989. arXiv:cond-mat/9506047. Bibcode:1995PhRvL..75..985L. doi:10.1103/PhysRevLett.75.985.
15. ^ Bojowald, Martin (2012). The Universe: A View from Classical and Quantum Gravity. John Wiley & Sons. ISBN 978-3-527-66769-7.

Further reading

• Dill, Dan (2006). "Chapter 3.5, Many-electron atoms: Fermi holes and Fermi heaps". Notes on General Chemistry (2nd ed.). W. H. Freeman. ISBN 1-4292-0068-5.
• Liboff, Richard L. (2002). Introductory Quantum Mechanics. Addison-Wesley. ISBN 0-8053-8714-5.
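As a concrete check of the antisymmetry argument in the "Connection to quantum state symmetry" section above, here is a small numerical sketch (my own illustration, not from the article): an antisymmetric amplitude tensor $A(x,y)$ has zero diagonal, and a unitary change of basis keeps it antisymmetric, so the exclusion $A(x,x) = 0$ holds in every basis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Antisymmetric two-particle amplitudes A(x, y) = -A(y, x) in a 5-state basis.
M = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = M - M.T                                  # antisymmetric by construction
print(np.allclose(np.diag(A), 0))            # True: A(x, x) = 0, Pauli exclusion

# A random unitary U (from a QR decomposition) applied to each particle:
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
A2 = Q @ A @ Q.T                             # basis change of the rank-2 tensor
print(np.allclose(A2, -A2.T))                # True: still antisymmetric
print(np.allclose(np.diag(A2), 0))           # True: exclusion holds in any basis
```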
Sir John A. Pople, a mathematician who became a chemist and won a Nobel Prize in 1998 for a computer tool that describes the dance of molecules in chemical reactions, died Monday at his daughter's home in Chicago. He was 78.

The cause of death was liver cancer, his family said.

Dr. Pople was among the first to realize the potential of computers in chemistry. The behavior of all molecules is defined by the Schrödinger equation, the fundamental formula of quantum mechanics. But the equation is impossible to solve exactly except in the simplest cases. In the 1960's, Dr. Pople developed methods for calculating approximate solutions, determining the orbits of electrons zipping around molecules. From the electron orbits, the computer program predicts properties of the molecules, including whether they are stable, which colors of light they will absorb or emit, and the pace of chemical reactions.

The work culminated in a program, Gaussian-70, published in 1970. That program and succeeding versions have become a common tool for chemists.

"It's literally thousands of chemists worldwide who are using the results of Pople's research," said Dr. Stuart W. Staley, a professor of chemistry at Carnegie-Mellon University in Pittsburgh, where Dr. Pople taught for many years. "It's had a tremendous impact."

In recent years, however, Dr. Pople was not among its users. In 1991 he left Gaussian Inc., a company set up to market the computer program. "There were disagreements about how the company should grow, and so he parted ways with other founders of the company," said Michael J. Frisch, president of Gaussian and a former student of Dr. Pople. When Dr. Pople helped found a competing company, Q-Chem, in 1993, Gaussian declined to license newer versions of its software to him.

Born on Oct. 31, 1925, in Burnham-on-Sea, a small town on the west coast of England, John Anthony Pople (pronounced POPE-el) was the first in his family to attend college, graduating with a bachelor's degree in mathematics from Cambridge University in 1946. He completed his doctoral degree at Cambridge in 1951 and continued working there through 1958. He left Cambridge to head the basic physics division at the National Physical Laboratory in England, and in 1964 he became a professor of chemistry at the Carnegie Institute of Technology, now part of Carnegie-Mellon University. In 1993 he moved to Northwestern University. He remained a British citizen after moving to the United States, and last year he was knighted for his chemistry achievements.

Sir John's wife, Joy, died in 2002 after nearly 50 years of marriage. He is survived by his daughter, Hilary; three sons, Adrian, who lives in Ireland, Mark, of Houston, and Andrew, of Pittsburgh; 11 grandchildren; and a great-granddaughter.

Sir John's interest in the puzzles of physical chemistry, as opposed to abstract mathematics, dated from early in his career. His doctoral thesis, for instance, explored the structure of water. "I had clearly changed from being a mathematician to a practicing scientist," he wrote in an autobiography on the Nobel Prize Web site. "Indeed, I was increasingly embarrassed that I could no longer follow some of the more modern branches of pure mathematics, in which my undergraduate students were being examined."

Photo: Sir John A. Pople (Photo by Associated Press, 1998)
Born-Oppenheimer approximation

In quantum chemistry, the computation of the energy and wavefunction of an average-size molecule is a formidable task that is alleviated by the Born-Oppenheimer (BO) approximation. For instance, the benzene molecule consists of 12 nuclei and 42 electrons. The time-independent Schrödinger equation, which must be solved to obtain the energy and molecular wavefunction of this molecule, is a partial differential eigenvalue equation in 162 variables: the spatial coordinates of the electrons and the nuclei. The BO approximation makes it possible to compute the wavefunction in two less formidable, consecutive steps. This approximation was proposed in the early days of quantum mechanics by Born and Oppenheimer (1927) and is still indispensable in quantum chemistry. Schematically,
$$\Psi_\mathrm{total} = \psi_\mathrm{electronic} \times \psi_\mathrm{nuclear}.$$

In the first step of the BO approximation the electronic Schrödinger equation is solved, yielding the wavefunction $\psi_\mathrm{electronic}$, which depends on electrons only. For benzene this wavefunction depends on 126 electronic coordinates. During this solution the nuclei are fixed in a certain configuration, very often the equilibrium configuration. If the effects of the quantum mechanical nuclear motion are to be studied, for instance because a vibrational spectrum is required, this electronic computation must be repeated for many different nuclear configurations. The set of electronic energies thus computed becomes a function of the nuclear coordinates. In the second step of the BO approximation this function serves as a potential in a Schrödinger equation containing only the nuclei; for benzene this is an equation in 36 variables.

The success of the BO approximation is due to the high ratio between nuclear and electronic masses. The approximation is an important tool of quantum chemistry: without it only the lightest molecule, H2, could be handled, and all computations of molecular wavefunctions for larger molecules make use of it. Even in the cases where the BO approximation breaks down, it is used as a point of departure for the computations.

The electronic energies, constituting the nuclear potential, consist of kinetic energies, interelectronic repulsions and electron-nuclear attractions. In a handwaving manner the nuclear potential is an averaged electron-nuclear attraction. The BO approximation follows from the fact that the inertia of the electrons is negligible in comparison with that of the nuclei to which they are bound.

Short description

The Born-Oppenheimer (BO) approximation is ubiquitous in quantum chemical calculations of molecular wavefunctions. It consists of two steps. In the first step the nuclear kinetic energy is neglected, that is, the corresponding operator $T_\mathrm{n}$ is subtracted from the total molecular Hamiltonian. In the remaining electronic Hamiltonian $H_\mathrm{e}$ the nuclear positions enter as parameters. The electron-nucleus interactions are not removed, and the electrons still "feel" the Coulomb potential of the nuclei clamped at certain positions in space. (This first step of the BO approximation is therefore often referred to as the clamped-nuclei approximation.) The electronic Schrödinger equation
$$H_\mathrm{e}(\mathbf{r};\mathbf{R})\,\chi(\mathbf{r};\mathbf{R}) = E_\mathrm{e}(\mathbf{R})\,\chi(\mathbf{r};\mathbf{R})$$
is solved. In the second step the nuclear kinetic energy is reinstated and the nuclear Schrödinger equation
$$\left[T_\mathrm{n} + E_\mathrm{e}(\mathbf{R})\right]\phi(\mathbf{R}) = E\,\phi(\mathbf{R})$$
is solved. This second step of the BO approximation involves separation of vibrational, translational, and rotational motions. This can be achieved by application of the Eckart conditions.
The eigenvalue $E$ is the total energy of the molecule, including contributions from electrons, nuclear vibrations, and overall rotation and translation of the molecule.

Derivation of the Born-Oppenheimer approximation

We assume that the electronic energies are well separated,
$$E_0(\mathbf{R}) \ll E_1(\mathbf{R}) \ll E_2(\mathbf{R}) \ll \cdots \quad\text{for all}\quad \mathbf{R}.$$

The total molecular Hamiltonian is
$$H = H_\mathrm{e} + T_\mathrm{n},$$
with
$$H_\mathrm{e} = -\sum_{i}\frac{1}{2}\nabla_i^2 - \sum_{i,A}\frac{Z_A}{r_{iA}} + \sum_{i>j}\frac{1}{r_{ij}} + \sum_{A>B}\frac{Z_A Z_B}{R_{AB}} \quad\text{and}\quad T_\mathrm{n} = -\sum_{A}\frac{1}{2M_A}\nabla_A^2.$$

The position vectors $\mathbf{r} \equiv \{\mathbf{r}_i\}$ of the electrons and the position vectors $\mathbf{R} \equiv \{\mathbf{R}_A = (R_{Ax}, R_{Ay}, R_{Az})\}$ of the nuclei are with respect to a Cartesian inertial frame. Distances between particles are written as $r_{iA} \equiv |\mathbf{r}_i - \mathbf{R}_A|$ (the distance between electron $i$ and nucleus $A$), and similar definitions hold for $r_{ij}$ and $R_{AB}$. We assume that the molecule is in a homogeneous (no external force) and isotropic (no external torque) space. The only interactions are the Coulomb interactions between the electrons and nuclei. The Hamiltonian is expressed in atomic units, so that we do not see Planck's constant, the dielectric constant of the vacuum, electronic charge, or electronic mass in this formula. The only constants explicitly entering the formula are $Z_A$ and $M_A$, the atomic number and mass of nucleus $A$. In terms of nuclear momentum operators, the nuclear kinetic energy is
$$T_\mathrm{n} = \sum_{A}\sum_{\alpha=x,y,z}\frac{P_{A\alpha}P_{A\alpha}}{2M_A} \quad\text{with}\quad P_{A\alpha} = -i\frac{\partial}{\partial R_{A\alpha}}.$$

Suppose we have $K$ electronic eigenfunctions $\chi_k(\mathbf{r};\mathbf{R})$ of $H_\mathrm{e}$, that is, we have solved
$$H_\mathrm{e}\,\chi_k(\mathbf{r};\mathbf{R}) = E_k(\mathbf{R})\,\chi_k(\mathbf{r};\mathbf{R}) \quad\text{for}\quad k=1,\ldots,K.$$
The electronic wave functions $\chi_k$ will be taken to be real, which is possible when there are no magnetic or spin interactions. The parametric dependence of the functions $\chi_k$ on the nuclear coordinates is indicated by the symbol after the semicolon. This indicates that, although $\chi_k$ is a real-valued function of $\mathbf{r}$, its functional form depends on $\mathbf{R}$. For example, in the molecular-orbital-linear-combination-of-atomic-orbitals (LCAO-MO) approximation, $\chi_k$ is an MO given as a linear expansion of atomic orbitals (AOs). An AO depends visibly on the coordinates of an electron, but the nuclear coordinates are not explicit in the MO. However, upon change of geometry, i.e., change of $\mathbf{R}$, the LCAO coefficients obtain different values and we see corresponding changes in the functional form of the MO $\chi_k$.

We will assume that the parametric dependence is continuous and differentiable, so that it is meaningful to consider
$$P_{A\alpha}\,\chi_k(\mathbf{r};\mathbf{R}) = -i\,\frac{\partial \chi_k(\mathbf{r};\mathbf{R})}{\partial R_{A\alpha}} \quad\text{for}\quad \alpha = x,y,z,$$
which in general will not be zero.

The total wave function $\Psi(\mathbf{R},\mathbf{r})$ is expanded in terms of the $\chi_k(\mathbf{r};\mathbf{R})$:
$$\Psi(\mathbf{R},\mathbf{r}) = \sum_{k=1}^K \chi_k(\mathbf{r};\mathbf{R})\,\phi_k(\mathbf{R}),$$
with
$$\langle\,\chi_{k'}(\mathbf{r};\mathbf{R})\,|\,\chi_k(\mathbf{r};\mathbf{R})\,\rangle_{(\mathbf{r})} = \delta_{k'k},$$
where the subscript $(\mathbf{r})$ indicates that the integration, implied by the bra-ket notation, is over electronic coordinates only.
By definition, the matrix with general element
$$\big(\mathbb{H}_\mathrm{e}(\mathbf{R})\big)_{k'k} \equiv \langle \chi_{k'}(\mathbf{r};\mathbf{R})\,|\,H_\mathrm{e}\,|\,\chi_k(\mathbf{r};\mathbf{R})\rangle_{(\mathbf{r})} = \delta_{k'k}\,E_k(\mathbf{R})$$
is diagonal. After multiplication by the real function $\chi_{k'}(\mathbf{r};\mathbf{R})$ from the left and integration over the electronic coordinates $\mathbf{r}$, the total Schrödinger equation
$$H\,\Psi(\mathbf{R},\mathbf{r}) = E\,\Psi(\mathbf{R},\mathbf{r})$$
is turned into a set of $K$ coupled eigenvalue equations depending on nuclear coordinates only:
$$\left[\mathbb{H}_\mathrm{n}(\mathbf{R}) + \mathbb{H}_\mathrm{e}(\mathbf{R})\right]\boldsymbol{\phi}(\mathbf{R}) = E\,\boldsymbol{\phi}(\mathbf{R}).$$
The column vector $\boldsymbol{\phi}(\mathbf{R})$ has elements $\phi_k(\mathbf{R})$, $k=1,\ldots,K$. The matrix $\mathbb{H}_\mathrm{e}(\mathbf{R})$ is diagonal, and the nuclear Hamilton matrix is non-diagonal, with the following off-diagonal (vibronic coupling) terms:
$$\big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \langle\chi_{k'}(\mathbf{r};\mathbf{R})\,|\,T_\mathrm{n}\,|\,\chi_k(\mathbf{r};\mathbf{R})\rangle_{(\mathbf{r})}.$$
Written out,
$$\big(\mathbb{H}_\mathrm{n}(\mathbf{R})\big)_{k'k} = \delta_{k'k}\,T_\mathrm{n} + \sum_{A,\alpha}\frac{1}{M_A}\,\langle\chi_{k'}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})}\,P_{A\alpha} + \langle\chi_{k'}|\big(T_\mathrm{n}\chi_k\big)\rangle_{(\mathbf{r})}.$$
The diagonal ($k'=k$) matrix elements $\langle\chi_{k}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})}$ of the operator $P_{A\alpha}$ vanish, because this operator is Hermitian and purely imaginary. The off-diagonal matrix elements satisfy
$$\langle\chi_{k'}|\big(P_{A\alpha}\chi_k\big)\rangle_{(\mathbf{r})} = \frac{\langle\chi_{k'}|\big[P_{A\alpha}, H_\mathrm{e}\big]|\chi_k\rangle_{(\mathbf{r})}}{E_{k}(\mathbf{R}) - E_{k'}(\mathbf{R})}.$$
The matrix element in the numerator is
$$\langle\chi_{k'}|\big[P_{A\alpha}, H_\mathrm{e}\big]|\chi_k\rangle_{(\mathbf{r})} = iZ_A\sum_i\,\langle\chi_{k'}|\frac{(\mathbf{r}_{iA})_\alpha}{r_{iA}^3}|\chi_k\rangle_{(\mathbf{r})} \quad\text{with}\quad \mathbf{r}_{iA} \equiv \mathbf{r}_i - \mathbf{R}_A.$$
The matrix element of the one-electron operator appearing on the right-hand side is finite. When the two surfaces come close, $E_{k}(\mathbf{R}) \approx E_{k'}(\mathbf{R})$, the nuclear momentum coupling term becomes large and is no longer negligible. This is the case where the BO approximation breaks down, and a coupled set of nuclear motion equations must be considered instead of the one equation appearing in the second step of the BO approximation.

Conversely, if all surfaces are well separated, all off-diagonal terms can be neglected, and hence the whole matrix of $P_{A\alpha}$ is effectively zero. The third term on the right-hand side of the expression for the matrix element of $T_\mathrm{n}$ (the Born-Oppenheimer diagonal correction) can approximately be written as the matrix of $P_{A\alpha}$ squared and, accordingly, is then negligible also. Only the first (diagonal) kinetic energy term in this equation survives in the case of well-separated surfaces, and a diagonal, uncoupled set of nuclear motion equations results:
$$\left[T_\mathrm{n} + E_k(\mathbf{R})\right]\phi_k(\mathbf{R}) = E\,\phi_k(\mathbf{R}) \quad\text{for}\quad k=1,\ldots,K,$$
which are the normal second-step BO equations discussed above. We reiterate that when two or more potential energy surfaces approach each other, or even cross, the Born-Oppenheimer approximation breaks down and one must fall back on the coupled equations. One then usually invokes the diabatic approximation.
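To make the two-step recipe concrete, here is a small numerical sketch (my own toy illustration, not part of the article). Step 1 is mocked up: instead of diagonalizing a real electronic Hamiltonian at each clamped nuclear geometry, the electronic ground-state energy $E_\mathrm{e}(R)$ is simply assumed to be a Morse curve with made-up, roughly H2-like parameters. Step 2 then solves the one-dimensional nuclear equation $[T_\mathrm{n} + E_\mathrm{e}(R)]\,\phi(R) = E\,\phi(R)$ on a grid, using a reduced mass of about 918, that of H2 in atomic units.

```python
import numpy as np

# Step 1 (electronic problem, mocked): for each clamped nuclear separation R
# one would solve H_e chi = E_e(R) chi.  Here E_e(R) is *assumed* to be a
# Morse curve; D, a and R0 are made-up, roughly H2-like parameters (a.u.).
def electronic_ground_energy(R, D=0.17, a=1.0, R0=1.4):
    return D * (1.0 - np.exp(-a * (R - R0)))**2

# Step 2 (nuclear problem): solve [T_n + E_e(R)] phi = E phi on a grid, with
# T_n = -(1/2 mu) d^2/dR^2 discretized by central finite differences.
def nuclear_levels(mu=918.0, n_grid=2000, R_min=0.5, R_max=6.0, n_levels=4):
    R = np.linspace(R_min, R_max, n_grid)
    h = R[1] - R[0]
    V = electronic_ground_energy(R)
    main = 1.0 / (mu * h**2) + V                      # diagonal of T_n + V
    off = -0.5 / (mu * h**2) * np.ones(n_grid - 1)    # off-diagonal of T_n
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)[:n_levels]

# Lowest vibrational levels supported by the BO potential energy curve.
print(nuclear_levels())
```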
Historical note

The Born-Oppenheimer approximation is named after M. Born and R. Oppenheimer, who wrote a paper [Annalen der Physik, vol. 84, pp. 457-484 (1927)] entitled Zur Quantentheorie der Moleküle (On the Quantum Theory of Molecules). This paper describes the separation of electronic motion, nuclear vibrations, and molecular rotation. Somebody who expects to find in this paper the BO approximation, as it is explained above and in most modern textbooks, will be in for a surprise. The reason is that the presentation of the BO approximation is well hidden in Taylor expansions (in terms of internal and external nuclear coordinates) of (i) electronic wave functions, (ii) potential energy surfaces and (iii) nuclear kinetic energy terms. Internal coordinates are the relative positions of the nuclei in the molecular equilibrium and their displacements (vibrations) from equilibrium. External coordinates are the position of the center of mass and the orientation of the molecule. The Taylor expansions complicate the theory and make the derivations very hard to follow. Moreover, knowing that the proper separation of vibrations and rotations was not achieved in this paper, but only 8 years later [by C. Eckart, Physical Review, vol. 46, pp. 383-387 (1935)] (see Eckart conditions), one is not very much motivated to invest much effort into understanding the work by Born and Oppenheimer, however famous it may be. Although the article still collects many citations each year, it is safe to say that it is not read anymore (except perhaps by historians of science).
I'm slightly confused as to how to answer this question; someone please help:

Consider a free particle in one dimension, described by the initial wave function
$$\psi(x,0) = e^{ip_{0}x/\hbar}e^{-x^{2}/2\Delta^{2}}(\pi\Delta^2)^{-1/4}.$$
Find the time-evolved wavefunctions $\psi(x,t)$.

Now I know that since it is a free particle we have the Hamiltonian operator
$$H = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2},$$
which yields energy eigenfunctions of the form
$$\psi_E(x,t) = C_1e^{ikx}+C_2e^{-ikx},$$
where $k=\frac{\sqrt{2mE}}{\hbar}$, and the time evolution of the Schrödinger equation gives
$$\psi(x,t)=e^{-\frac{i}{\hbar}Ht}\psi(x,0),$$
but the issue I face is finding the correct method to obtain the solution, so that I can then calculate things such as the probability density $P(x,t)$, the mean, and the uncertainty (all of which are straightforward once I know $\psi(x,t)$). In short: how do I find the initial state in terms of the energy eigenfunctions $\psi_E(x,t)$, so that I can find the time-evolved wavefunction?

Comment: More on Gaussian wave packets: physics.stackexchange.com/search?q=Gaussian+wave+packet – Qmechanic May 16 '13 at 23:11

Answer:

For a free particle, the energy/momentum eigenstates are of the form $e^{i k x}$. Going over to that basis is essentially doing a Fourier transform. Once you do that, you'll have the wavefunction in the momentum basis. After that, time-evolving it should be simple.

Hint: The Fourier transform of a Gaussian is another Gaussian, but the width inverts, in accordance with the Heisenberg uncertainty principle. The phase and the mean position will transform into each other; that is a little more subtle and you need to work it out. Also have a look at http://en.wikipedia.org/wiki/Wave_packet.

Answer:

Some broadly applicable background might be in order, since I remember this aspect of quantum mechanics not being stressed enough in most courses. [What follows is very good to know, and very broadly applicable, but may be considered overkill for this particular problem. Caveat lector.]

What the OP lays out is exactly the motivation for finding how an initial wavefunction can be written as a sum of eigenfunctions of the Hamiltonian: if only we could have that representation, the Schrödinger equation plus linearity would give us the wavefunction for all time. As Siva alludes to, this amounts to finding how a vector (our wavefunction) looks in a particular basis (the set of eigenfunctions of any Hermitian operator is guaranteed to be a basis). In general, one does this by taking inner products with the basis vectors, and the reasoning is as follows.

We know the set of vectors $\{\lvert \psi_E \rangle\}$ (yes, I'm using Dirac notation here - it's a good thing to get used to), where $E$ is an index ranging over (possibly discrete, possibly continuous) energies, forms a basis for the space of all wavefunctions. Therefore, there must be complex numbers $c_E$ such that
$$ \lvert \psi \rangle = \sum_E c_E \lvert \psi_E \rangle, $$
where $\lvert \psi \rangle$ is our initial wavefunction. If there are infinitely many energies, the sum has infinitely many terms. If there is a continuum of energies, it is an integral.[1]

Now the problem is clearly one of finding the coefficients $c_E$. To do that, we take inner products with the basis vectors, one by one, where presumably our energy basis is orthonormal. Pick a generic, unspecified basis element $\lvert \psi_{E'} \rangle$.
Then we have
$$ \langle \psi_{E'} \vert \psi \rangle = \sum_E c_E \langle \psi_{E'} \vert \psi_E \rangle = \sum_E c_E \delta_{E'E} = c_{E'}. $$
Whether the delta function is of the Kronecker or Dirac variety depends on whether the "sum" is a sum or an integral. Here then we have our formula for the coefficients, which reads (after removing the primes)
$$ c_E = \langle \psi_E \vert \psi \rangle. $$

How does one go about solving this? At this point, it is okay to switch out of abstract vector notation and go into the position basis. We can do this with the somewhat cryptic yet awesome-sounding spectral resolution of the identity in, say, the position basis:
$$ c_E = \langle \psi_E \vert I \vert \psi \rangle = \int_{-\infty}^\infty \langle \psi_E \vert x \rangle \langle x \vert \psi \rangle \ \mathrm{d}x. $$
Here $\langle x \vert \psi \rangle \equiv \psi(x)$ is just your wavefunction, expressed in more familiar terms.[2] Furthermore, as you have hopefully been told, the correct inner product at play here introduces a complex conjugation if you switch the ordering, so
$$ \langle \psi_E \vert x \rangle = \langle x \vert \psi_E \rangle^* \equiv \psi_E^*(x). $$

You now have enough to evaluate the coefficients $c_E$ for any initial problem given any orthonormal basis arising from a Hamiltonian. Given the free-particle form of $\psi_E(x)$ you can see that this process will essentially be a Fourier transform, so if you keep your wits about you, you don't even need to do any messy integrals at all. Furthermore, depending on what is ultimately desired, the position basis may not be the most suitable basis for this problem, but doing a few problems the hard way builds character if nothing else.

[1] Math aside: Countable infinities are not a big deal, since one of the assumptions of quantum mechanics is that our vector space isn't just a fancy inner product space, but also a really fancy Hilbert space. Then well-behaved linear combinations of wavefunctions, even countably infinitely many, will converge to perfectly well-defined wavefunctions. Justifying the integral is trickier, but it can be done.

[2] Yes, this is the connection between Dirac notation and the traditional "probability density as a function of space" notation students often learn first. Abstract kets become functions of position only when "bra-ed" with a generic position basis element.
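As a numerical companion to the answers above (my own sketch, not from the thread): the free-particle evolution can be carried out exactly in the momentum basis with an FFT, since each plane-wave component $e^{ikx}$ just picks up the phase $e^{-i\hbar k^2 t/2m}$. Units are $\hbar = m = 1$, and the values of $p_0$ and $\Delta$ are arbitrary choices.

```python
import numpy as np

# Grid and angular wavenumbers for the FFT (momentum basis).
N, L = 4096, 200.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

# The Gaussian packet from the question, with arbitrary p0 and Delta.
p0, Delta = 2.0, 1.0
psi0 = np.exp(1j * p0 * x) * np.exp(-x**2 / (2 * Delta**2)) / (np.pi * Delta**2)**0.25
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))   # guard normalization on the grid

def evolve(psi, t):
    # Exact free evolution: each e^{ikx} component picks up e^{-i k^2 t / 2}.
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * k**2 * t / 2))

for t in (0.0, 5.0, 10.0):
    P = np.abs(evolve(psi0, t))**2              # probability density P(x, t)
    mean = np.trapz(x * P, x)
    width = np.sqrt(np.trapz((x - mean)**2 * P, x))
    print(f"t={t:5.1f}  <x>={mean:6.2f}  Dx={width:5.2f}")  # <x> = p0*t; Dx grows
```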
This is a thought experiment, so please don't treat it too harshly :-)

Short version: If we could isolate two places A and B in the universe from any and all interaction with the surroundings, is there a physical law which states "if something is dropped in place A, it has to stay there"?

Long version: Let's assume that the energy of the whole universe is fixed. Let's further assume that it is (by some trick) possible to completely isolate a box of 1 m³, say in the center of a planet (all gravitational and centrifugal forces cancel themselves out, the mass of the planet shields against radiation, and we use a trick to shield against neutrinos, or we ignore them since they rarely interact with matter). How does an object behave when it has no interaction with the rest of the universe whatsoever? If I put an object in a box as described above, and I have several such boxes, would it matter in which box the object is? Is there a law which says "even if no one knows, the object has to stay where it is"? Or is that just our expectation based on everyday experience?

Answer (accepted):

Classically, Sklivvz's answer would be correct. But in the quantum world the story is not quite over. In the following I'll talk about particles (because for them quantum effects are more apparent), but the same would also be true for bigger objects (although the bigger the object, the more improbable the "teleportation" would be).

First, from the point of view of quantum mechanics, and for a while assuming that you can really cancel all forces on the particle, you are essentially investigating a double-well potential. There will always be some tunnelling between the two boxes. If the boxes are far apart it will be very improbable (the improbability increasing exponentially with distance), but it is surely possible that if you look into the second box after some time, you'll find your particle there.

Now, in reality the picture is complicated by quantum field theory. First, it stops making sense to talk about particles, because they are indistinguishable and can also be created out of nothing. Related to this is the fact that you can't ever dispose of all interaction, because the vacuum itself is a very lively place! There are particles created and annihilated all the time, and this has observable effects on any object (see the Casimir effect). So you would need your object to be big enough that it becomes distinguishable (with high probability). But as already stated, the bigger the object, the lower the probability that it leaves the box.

The conclusion, as you might have guessed beforehand, is that with extremely high probability nothing interesting will happen at all.

Comment: correct - by "object" I have assumed a classical-sized test object. – Sklivvz Nov 28 '10 at 15:45

Comment: I missed vacuum fluctuations; thanks for pointing that out. – Aaron Digulla Nov 28 '10 at 18:02

Comment: Interestingly, if the two boxes were close together, a particle-antiparticle pair could be spontaneously created between the two, and the antiparticle annihilate with the original particle. The result would be that the particle has disappeared from one box and appeared in the other - in other words, the teleportation mentioned in the question title. – Phil H Nov 30 '10 at 13:24

Answer:

Of course: Newton's first law!

Comment: I agree for classical physics. But I don't change its state of motion, I just change its space-time coordinate in an instant. So the first law doesn't really apply.
– Aaron Digulla Nov 28 '10 at 18:04

Comment: @Aaron: you can't just change an object's spacetime coordinate by a finite amount instantaneously, unless you have something which can exert an infinite force at your disposal (this doesn't exist in the real world). Doing so would be a violation of Newton's second law in classical physics, or of the Schrödinger equation in quantum mechanics. – David Z Nov 28 '10 at 22:19

Comment: @David: So how does an electron tunnel? As I understand it, the same rules could apply to macroscopic objects (only the probability is way too low to experience it). – Aaron Digulla Nov 29 '10 at 14:17

Answer:

How does an object behave when it has no interaction with the rest of the universe whatsoever? Like every object you ever studied in a physics class. Interaction with the rest of the universe is the hard part that we always abstract away so we can isolate some particular interaction of interest. A perfectly isolated system would always behave in perfect accordance with the rules governing whatever sort of object it was.

In quantum terms (since you've got "quantum" in the question title), what you're describing is an infinite square well. That is, the potential barrier for the system in question is of infinite height, perfectly isolating the system from everything else. The wavefunction at the edges of the box would be exactly zero, and the wavefunction everywhere outside the box would be zero. This is the only system in quantum physics that does not have some probability (however infinitesimal) of turning up at some different location. This is, of course, a textbook idealization and not anything you could actually make in reality.
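The accepted answer's double-well picture can be illustrated numerically. Below is a minimal sketch (my own, not from the thread; $\hbar = m = 1$, and all well and barrier parameters are made up): the two "boxes" are modeled as two square wells separated by a barrier, and the splitting between the lowest symmetric/antisymmetric pair of states, which sets the tunnelling rate, shrinks rapidly as the boxes are moved apart.

```python
import numpy as np

# Two "boxes" = two square wells of width a, separated by a barrier of width w.
# The lowest two eigenstates form a symmetric/antisymmetric doublet whose
# energy splitting dE sets the tunnelling time ~ pi/dE between the wells.
def doublet_splitting(w, a=1.0, V0=8.0, N=1500, L=20.0):
    x = np.linspace(-L / 2, L / 2, N)
    h = x[1] - x[0]
    V = np.full(N, V0)                                       # barrier level everywhere...
    V[(np.abs(x) > w / 2) & (np.abs(x) < w / 2 + a)] = 0.0   # ...except in the two wells
    # Finite-difference Hamiltonian H = -(1/2) d^2/dx^2 + V(x).
    H = (np.diag(1.0 / h**2 + V)
         + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)
         + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))
    E = np.linalg.eigvalsh(H)
    return E[1] - E[0]

for w in (1.0, 2.0, 3.0, 4.0):
    print(w, doublet_splitting(w))   # splitting falls roughly exponentially with w
```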
Properties of water

[Images: geometric structure, ball-and-stick and space-filling models of the water molecule; a drop of water falling towards water in a glass]

Water (H2O)
IUPAC name: water, oxidane
Other names: hydrogen oxide, dihydrogen monoxide (DHMO), hydrogen monoxide, dihydrogen oxide, hydrogen hydroxide (HH or HOH), hydric acid, hydrohydroxic acid, hydroxic acid, hydrol,[1] μ-oxido dihydrogen
Identifiers: CAS 7732-18-5; ChEBI CHEBI:15377; ChemSpider 937; PubChem 962; RTECS ZC0110000
Molar mass: 18.01528(33) g/mol
Odor: none
Density: liquid 999.9720 kg/m³ ≈ 1 g/cm³ (maximum, at ~4 °C); solid 917 kg/m³ = 0.917 g/cm³
Boiling point: 99.98 °C (211.96 °F; 373.13 K)[3][a]
Solubility: poorly soluble in haloalkanes, aliphatic and aromatic hydrocarbons, ethers;[4] improved solubility in carboxylates, alcohols, ketones, amines; miscible with methanol, ethanol, isopropanol, acetone, glycerol
Vapor pressure: 3.1690 kPa (0.031276 atm)[5]
Acidity (pKa): 13.995[6][b]
Basicity (pKb): 13.995
Thermal conductivity: 0.6065 W/(m·K)[8]
Refractive index: 1.3330 (20 °C)[9]
Viscosity: 0.890 cP[10]
Dipole moment: 1.8546 D[11]
Molar heat capacity: 75.375 ± 0.05 J/(mol·K)[12]
Standard molar entropy: 69.95 ± 0.03 J/(mol·K)[12]
Standard enthalpy of formation: −285.83 ± 0.040 kJ/mol[4][12]
Standard Gibbs energy of formation: −237.24 kJ/mol[4]
Main hazards: drowning; water intoxication; avalanche (as snow); see also the dihydrogen monoxide hoax
Flash point: non-flammable
Related compounds: hydrogen sulfide, hydrogen selenide, hydrogen telluride, hydrogen polonide, hydrogen peroxide; related forms: water vapor, heavy water

Water (H2O) is a polar inorganic compound that is at room temperature a tasteless and odorless liquid, nearly colorless with a hint of blue. This simplest hydrogen chalcogenide is by far the most studied chemical compound and is described as the "universal solvent" for its ability to dissolve many substances.[13][14] This allows it to be the "solvent of life".[15] It is the only common substance to exist as a solid, liquid, and gas in nature.[16]

Water molecules form hydrogen bonds with each other and are strongly polar. This polarity allows water to separate ions in salts and to bond strongly to other polar substances, such as alcohols and acids, thus dissolving them. Its hydrogen bonding causes its many unique properties, such as having a solid form less dense than its liquid form, a relatively high boiling point of 100 °C for its molar mass, and a high heat capacity.

Water is amphoteric, meaning that it is both an acid and a base: it produces H+ and OH− ions by self-ionization, and this regulates the concentrations of H+ and OH− ions in water. Because water is a very good solvent, it is rarely pure, and some of the properties of impure water can vary from those of the pure substance. However, there are also many compounds that are essentially, if not completely, insoluble in water, such as fats, oils and other non-polar substances.
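The self-ionization equilibrium quoted above (pKa = pKb = 13.995 at 25 °C) directly fixes the ion concentrations in neutral water. A minimal sketch of the arithmetic (my own illustration, not from the article):

```python
import math

# Self-ionization: 2 H2O <=> H3O+ + OH-, with [H3O+][OH-] = Kw.
# Using pKw = 13.995 at 25 C (the value quoted above), neutrality
# ([H3O+] = [OH-]) gives [H3O+] = sqrt(Kw).
pKw = 13.995
Kw = 10.0 ** (-pKw)
h = math.sqrt(Kw)                       # mol/L of H3O+ in neutral water
print(f"[H3O+] = {h:.2e} mol/L")        # ~1.01e-07 mol/L
print(f"pH = {-math.log10(h):.3f}")     # ~7.0 at 25 C
```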
Nomenclature

The accepted IUPAC name of water is oxidane or simply water,[17] or its equivalent in different languages, although there are other systematic names which can be used to describe the molecule. Oxidane is only intended to be used as the name of the mononuclear parent hydride used for naming derivatives of water by substituent nomenclature.[18] These derivatives commonly have other recommended names. For example, the name hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known as tetrahydropyran.[19][20] The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC nomenclature.[21]

In keeping with the basic rules of chemical nomenclature, water would have a systematic name of dihydrogen monoxide,[22] but this is not among the names published by the International Union of Pure and Applied Chemistry.[17] It is a rarely used name of water, and mostly used in various hoaxes or spoofs that call for this "lethal chemical" to be banned, such as in the dihydrogen monoxide hoax. Other systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide, using acid and base names.[c] None of these exotic names are used widely.

Water is the chemical substance with chemical formula H2O; one molecule of water has two hydrogen atoms covalently bonded to a single oxygen atom.[23] Water is a tasteless, odorless liquid at ambient temperature and pressure, and appears colorless in small quantities, although it has its own intrinsic very light blue hue.[24][2] Ice also appears colorless, and water vapor is essentially invisible as a gas. The molecules of water are constantly moving in relation to each other, and the hydrogen bonds are continually breaking and reforming at timescales faster than 200 femtoseconds (2×10−13 seconds).[25] However, these bonds are strong enough to create many of the peculiar properties of water, some of which make it integral to life.

Water can be described as a polar liquid that slightly dissociates disproportionately, or self-ionizes, into a hydronium ion and a hydroxide ion:

2 H2O ⇌ H3O+ + OH−

The dissociation constant for this dissociation is commonly symbolized as Kw and has a value of about 10−14 at 25 °C.

Water, ice, and vapor

For the solid phase of water, see the article ice. The gaseous phase of water is known as water vapor (or steam), in which water takes the form of a transparent cloud. (Visible steam and clouds are, in fact, water in the liquid form as minute droplets suspended in the air.) The fourth state of water, that of a supercritical fluid, is much less common than the other three and only rarely occurs in nature, in extremely hostile conditions. When water achieves a specific critical temperature and a specific critical pressure (647 K and 22.064 MPa), the liquid and gas phases merge into one homogeneous fluid phase, with properties of both gas and liquid. A likely example of naturally occurring supercritical water is in the hottest parts of deep-water hydrothermal vents, in which water is heated to the critical temperature by volcanic plumes and the critical pressure is caused by the weight of the ocean at the extreme depths where the vents are located.
This pressure is reached at a depth of about 2200 meters: much less than the mean depth of the ocean (3800 meters).[26]

Heat capacity and heats of vaporization and fusion

[Figure: heat of vaporization of water from melting point to critical temperature]

Water has a very high specific heat capacity of 4.1814 J/(g·K) at 25 °C (the second highest among all the heteroatomic species, after ammonia), as well as a high heat of vaporization (40.65 kJ/mol or 2257 kJ/kg at the normal boiling point), both of which are a result of the extensive hydrogen bonding between its molecules. These two unusual properties allow water to moderate Earth's climate by buffering large fluctuations in temperature. According to Josh Willis, of NASA's Jet Propulsion Laboratory, the oceans can absorb one thousand times more heat than the atmosphere without changing their temperature much, and are absorbing 80 to 90% of the heat from global warming.[27] The specific heat capacity of ice at −10 °C is 2.03 J/(g·K)[28] and the heat capacity of steam at 100 °C is 2.08 J/(g·K).[29]

Density of water and ice

[Figure: density of ice and water as a function of temperature]

The density of water is about 1 gram per cubic centimetre (62 lb/cu ft); this relationship was originally used to define the gram.[30] The density varies with temperature, but not linearly: as the temperature increases, the density rises to a peak at 3.98 °C (39.16 °F) and then decreases.[31] This unusual negative thermal expansion below 4 °C (39 °F) is also observed in molten silica.[32] Regular, hexagonal ice is also less dense than liquid water: upon freezing, the density of water decreases by about 9%.[33] Other substances that expand on freezing are acetic acid, silicon, gallium,[34] germanium, bismuth, plutonium, and chemical compounds that form spacious crystal lattices with tetrahedral coordination.

[Figure: temperature distribution in a lake in summer and winter]

The unusual density curve, and the lower density of ice than of water, is vital to life: if water were most dense at the freezing point, then in winter the very cold water at the surface of lakes and other water bodies would sink, the lake could freeze from the bottom up, and all life in them would be killed.[33] Furthermore, given that water is a good thermal insulator (due to its heat capacity), some frozen lakes might not completely thaw in summer.[33] The layer of ice that floats on top insulates the water below.[37] Water at about 4 °C (39 °F) also sinks to the bottom, thus keeping the temperature of the water at the bottom constant (see diagram).[33]

Density of saltwater and ice

[Figure: WOA surface density]

Miscibility and condensation

[Figure: saturation curve; red line shows saturation]
Main article: Humidity

Vapor pressure

[Figure: vapor pressure diagrams of water]

Triple point

[Figure: the various triple points of water]

Phases in stable equilibrium: liquid water, ice Ih, and water vapor
Pressure: 611.657 Pa[41]
Temperature: 273.16 K (0.01 °C)

[Figure: phase diagram of water]

Melting point

The melting point of ice is 0 °C (32 °F; 273 K) at standard pressure; however, pure liquid water can be supercooled well below that temperature without freezing if the liquid is not mechanically disturbed.
It can remain in a fluid state down to its homogeneous nucleation point of about 231 K (−42 °C; −44 °F).[46] The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, by 0.0073 °C (0.0131 °F)/atm[f] or about 0.5 °C (0.90 °F)/70 atm,[g][47] as the stabilization energy of hydrogen bonding is exceeded by intermolecular repulsion, but as ice transforms into its allotropes (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting point increases markedly with pressure, reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (the triple point of ice VII[48]).

Electrical properties

Electrical conductivity

Pure water containing no exogenous ions is an excellent insulator, but not even "deionized" water is completely free of ions: water undergoes self-ionization, in which a pair of water molecules forms one hydroxide anion (OH−) and one hydronium cation (H3O+). It is known that the theoretical maximum electrical resistivity for water is approximately 18.2 MΩ·cm (182 kΩ·m) at 25 °C.[49] This figure agrees well with what is typically seen on reverse-osmosis, ultra-filtered and deionized ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid contaminant level exceeding even 100 parts per trillion (ppt) in otherwise ultra-pure water begins to noticeably lower its resistivity by up to several kΩ·m.[citation needed]

In pure water, sensitive equipment can detect a very slight electrical conductivity of 0.05501 ± 0.0001 µS/cm at 25.00 °C.[49] Water can also be electrolyzed into oxygen and hydrogen gases, but in the absence of dissolved ions this is a very slow process, as very little current is conducted. In ice, the primary charge carriers are protons (see proton conductor).[50] Ice was previously thought to have a small but measurable conductivity of 1×10−10 S cm−1, but this conductivity is now thought to be almost entirely from surface defects; without those, ice is an insulator with an immeasurably small conductivity.[31]

Polarity, hydrogen bonding and intermolecular structure

An important feature of water is its polar nature. The molecule has a bent geometry, with the two hydrogens bonded at an angle at the oxygen vertex. The oxygen atom also has two lone pairs of electrons. One effect usually ascribed to the lone pairs is that the H–O–H gas-phase bend angle is 104.48°,[51] which is smaller than the typical tetrahedral angle of 109.47°. The lone pairs are closer to the oxygen atom than the electrons sigma-bonded to the hydrogens, so they require more space. The increased repulsion of the lone pairs forces the O–H bonds closer to each other.[52]

Another effect of the electronic structure is that water is a polar molecule. Due to the difference in electronegativity, there is a bond dipole moment pointing from each H to the O, making the oxygen partially negative and each hydrogen partially positive. In addition, the lone pairs of electrons on the O are in the direction opposite to the hydrogen atoms. This results in a large molecular dipole, pointing from a positive region between the two hydrogen atoms to the negative region of the oxygen atom. The charge differences cause water molecules to be attracted to each other (the relatively positive areas being attracted to the relatively negative areas) and to other polar molecules. This attraction contributes to hydrogen bonding, and explains many of the properties of water, such as its solvent action.[53] The similar molecule hydrogen sulfide (H2S) has much weaker hydrogen bonding due to sulfur's lower electronegativity.

Proposed structures

[Figure: model of hydrogen bonds (1) between molecules of water]

However, there is an alternative theory for the structure of water.
In 2004, a controversial paper from Stockholm University suggested that water molecules in liquid form typically bind not to four but to only two others, thus forming chains and rings. The term "string theory of water" (which is not to be confused with the string theory of physics) was coined. These observations were based upon X-ray absorption spectroscopy that probed the local environment of individual oxygen atoms. Water, the team suggests, is a muddle of the two proposed structures. They say that it is a soup flecked with "icebergs", each comprising 100 or so loosely connected molecules that are relatively open and hydrogen-bonded. The soup is made of the string structure and the icebergs of the tetrahedral structure.[54]

Cohesion and adhesion

[Figure: dew drops adhering to a spider web]

Surface tension

[Figure: temperature dependence of the surface tension of pure water]

Another surface-tension effect is capillary waves, which are the surface ripples that form around the impacts of drops on water surfaces, and sometimes occur with strong subsurface currents flowing to the water surface. The apparent elasticity caused by surface tension drives the waves. Additionally, the surface tension of water allows certain insects to walk on the surface of water. This is caused by the strength of the hydrogen bonds, which makes it difficult to break the surface of water. These insects, including the raft spider, are denser than water and yet are still able to walk on its surface.[59]

Capillary action

Water as a solvent

Main article: Aqueous solution

When an ionic compound such as sodium chloride dissolves, water separates it into Na+ cations and Cl− anions.

Quantum tunneling

The quantum tunneling dynamics in water was reported as early as 1992. At that time it was known that there are motions which destroy and regenerate the weak hydrogen bond by internal rotations of the substituent water monomers.[60] On 18 March 2016, it was reported that the hydrogen bond can be broken by quantum tunneling in the water hexamer. Unlike previously reported tunneling motions in water, this involved the concerted breaking of two hydrogen bonds.[61] Later in the same year, the discovery of the quantum tunneling of water molecules was reported.[62]

Chemical properties in nature

Pure water has a concentration of hydroxide ions (OH−) equal to that of hydronium (H3O+), or equivalently hydrogen (H+), ions.

Electromagnetic absorption

Heavy water and isotopologues

Hydrogen occurs naturally in three isotopes. The most common isotope, 1H, sometimes called protium, accounts for more than 99.98% of hydrogen in water and consists of only a single proton in its nucleus. A second stable isotope, deuterium (chemical symbol D or 2H), has an additional neutron. Deuterium oxide, D2O, is also known as heavy water because of its higher density. It is used in nuclear reactors as a neutron moderator. The third isotope, tritium (chemical symbol T or 3H), has 1 proton and 2 neutrons, and is radioactive, decaying with a half-life of 4500 days. THO exists in nature only in minute quantities, being produced primarily via cosmic-ray-induced nuclear reactions in the atmosphere. Water with one protium and one deuterium atom, HDO, occurs naturally in ordinary water in low concentrations (~0.03%). The most notable physical differences between H2O and D2O, other than the simple difference in nuclear mass, involve properties that are affected by hydrogen bonding. Consumption of pure isolated D2O may affect biochemical processes: ingestion of large amounts impairs kidney and central nervous system function.
Small quantities can be consumed without any ill effects; humans are generally unaware of taste differences,[64] but sometimes report a burning sensation[65] or sweet flavor.[66] Very large amounts of heavy water must be consumed for any toxicity to become apparent. Rats, however, are able to avoid heavy water by smell, and it is toxic to many animals.[67]

Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water molecules.[68]

Standard water

Acid-base reactions

Water is amphoteric: it has the ability to act as either an acid or a base in chemical reactions.[69] According to the Brønsted-Lowry definition, an acid is a proton (H+) donor and a base is a proton acceptor.[70] When reacting with a stronger acid, water acts as a base; when reacting with a stronger base, it acts as an acid.[70] For instance, water receives an H+ ion from HCl when hydrochloric acid is formed:

HCl + H2O ⇌ H3O+ + Cl−

In the reaction with ammonia, NH3, water donates an H+ ion, and is thus acting as an acid:

NH3 + H2O ⇌ NH4+ + OH−

Water can likewise act as a Lewis acid or a Lewis base. When a salt of a weak acid is dissolved in water, water can partially hydrolyze the salt; for example:

Na2CO3 + H2O ⇌ NaOH + NaHCO3

Ligand chemistry

Water is a common ligand in transition metal complexes, with examples ranging from solvated metal ions to perrhenic acid, which contains two water molecules coordinated to a rhenium atom, and various solid hydrates, such as CoCl2·6H2O. Water is typically a monodentate ligand, i.e., it forms only one bond with the central atom.[71]

Organic chemistry

As a hard base, water reacts readily with organic carbocations; for example, in a hydration reaction a hydroxyl group (OH−) bonds to the carbocation, producing an alcohol.

Water in redox reactions

Water contains hydrogen in the oxidation state +1 and oxygen in the oxidation state −2.[72] It oxidizes chemicals such as hydrides, alkali metals, and some alkaline earth metals.[73][74][75] One example of an alkali metal reacting with water is:[76]

2 Na + 2 H2O → H2 + 2 Na+ + 2 OH−

Some other reactive metals, such as aluminum and beryllium, are oxidized by water as well, but their oxides adhere to the metal and form a passive protective layer.[77] Note, however, that the rusting of iron is a reaction between iron and oxygen[78] that is dissolved in water, not between iron and water.

Water can itself be oxidized to give oxygen gas, but very few oxidants react with water. Almost all such reactions require a catalyst.[79] An example of the oxidation of water is:

4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2

Electrolysis

Main article: Electrolysis of water

Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current through it. This process is called electrolysis. The cathode half-reaction is:

2 H+ + 2 e− → H2

The anode half-reaction is:

2 H2O → O2 + 4 H+ + 4 e−

History

Henry Cavendish showed that water was composed of oxygen and hydrogen in 1781.[80] The first decomposition of water into hydrogen and oxygen, by electrolysis, was done in 1800 by English chemist William Nicholson and Anthony Carlisle.[80][81] In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt showed that water is composed of two parts hydrogen and one part oxygen.[82] Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933.[83]

Notes

4. ^ (1 − 0.95865/1.00000) × 100% = 4.135%
5. ^ Adiabatic cooling resulting from the ideal gas law
7. ^ Using the fact that 0.5/0.0073 = 68.5

References

1. ^ "Definition of Hydrol". Merriam-Webster. (Subscription required.)
2. ^ a b Braun, Charles L.; Smirnov, Sergei N. (1993). "Why is water blue?". Journal of Chemical Education 70 (8): 612. Bibcode:1993JChEd..70..612B. doi:10.1021/ed070p612. ISSN 0021-9584.
3. ^ Water in Linstrom, P.J.; Mallard, W.G. (eds.)
NIST Chemistry WebBook, NIST Standard Reference Database Number 69. National Institute of Standards and Technology, Gaithersburg MD. (retrieved 2016-5-27) 5. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Vapor Pressure of Water From 0 to 370° C in Sec. 6. ISBN 9780849304842.  6. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 8: Dissociation Constants of Inorganic Acids and Bases. ISBN 9780849304842.  8. ^ Ramires, Maria L. V.; Castro, Carlos A. Nieto de; Nagasaka, Yuchi; Nagashima, Akira; Assael, Marc J.; Wakeham, William A. (1995-05-01). "Standard Reference Data for the Thermal Conductivity of Water". Journal of Physical and Chemical Reference Data. 24 (3): 1377–1381. doi:10.1063/1.555963. ISSN 0047-2689.  9. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 8—Concentrative Properties of Aqueous Solutions: Density, Refractive Index, Freezing Point Depression, and Viscosity. ISBN 9780849304842.  10. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6.186. ISBN 9780849304842.  11. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 9—Dipole Moments. ISBN 9780849304842.  13. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 620. ISBN 0-08-037941-9.  14. ^ "Water, the Universal Solvent". USGS.  15. ^ Reece, Jane B. (31 October 2013). Campbell Biology (10 ed.). Pearson. p. 48. ISBN 9780321775658.  16. ^ Reece, Jane B. (31 October 2013). Campbell Biology (10 ed.). Pearson. p. 44. ISBN 9780321775658.  17. ^ a b Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. p. 34. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.  18. ^ Nomenclature of Inorganic Chemistry: IUPAC Recommendations 2005 (PDF). Royal Society of Chemistry. 22 Nov 2005. p. 85. ISBN 978-0-85404-438-2. Retrieved 2016-07-31.  19. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). IUPAC, Commission on Nomenclature of Organic Chemistry. Blackwell Science Ltd, UK. p. 99. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.  20. ^ "Tetrahydropyran". Pubchem. National Institutes of Health. Retrieved 2016-07-31.  22. ^ Leigh, G. J.; et al. (1998). Principles of chemical nomenclature: a guide to IUPAC recommendations (PDF). Blackwell Science Ltd, UK. pp. 27–28. ISBN 0-86542-685-6. Archived (PDF) from the original on 2011-07-26.  24. ^ "Water (Code C65147)". NCI Thesaurus. National Cancer Institute. Retrieved 2016-08-01.  28. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. Chapter 6: Properties of Ice and Supercooled Water. ISBN 9780849304842.  29. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Handbook. CRC Press. 6. Properties of Water and Steam as a Function of Temperature and Pressure. ISBN 9780849304842.  31. ^ a b c d Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 625. ISBN 0-08-037941-9.  32. ^ Shell, Scott M.; Debenedetti, Pablo G. & Panagiotopoulos, Athanassios Z. (2002). "Molecular structural order and anomalies in liquid silica" (PDF). Phys. 
Rev. E. 66: 011202. arXiv:cond-mat/0203383Freely accessible. Bibcode:2002PhRvE..66a1202S. doi:10.1103/PhysRevE.66.011202.  33. ^ a b c d e Perlman, Howard. "Water Density". The USGS Water Science School. Retrieved 2016-06-03.  34. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 938. ISBN 978-1-13-361109-7.  35. ^ Loerting, Thomas; Salzmann, Christoph; Kohl, Ingrid; Mayer, Erwin; Hallbrucker, Andreas (2001-01-01). "A second distinct structural "state" of high-density amorphous ice at 77 K and 1 bar". Physical Chemistry Chemical Physics. 3 (24): 5355–5357. doi:10.1039/b108676f. ISSN 1463-9084.  36. ^ Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 624. ISBN 0-08-037941-9.  37. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 493. ISBN 978-1-13-361109-7.  38. ^ a b c "Can the ocean freeze?". National Ocean Service. National Oceanic and Atmospheric Administration. Retrieved 2016-06-09.  41. ^ Review of the vapour pressures of ice and supercooled water for atmospheric applications. D. M. Murphy and T. Koop (2005) Quarterly Journal of the Royal Meteorological Society, 131, 1539. 45. ^ Lewis, William C.M. & Rice, James (1922). A System of Physical Chemistry. Longmans, Green and Co.  46. ^ Debenedetti, P. G. & Stanley, H. E. (2003). "Supercooled and Glassy Water" (PDF). Physics Today. 56 (6): 40–46. Bibcode:2003PhT....56f..40D. doi:10.1063/1.1595053.  47. ^ Sharp, Robert Phillip (1988-11-25). Living Ice: Understanding Glaciers and Glaciation. Cambridge University Press. p. 27. ISBN 0-521-33009-2.  48. ^ "Revised Release on the Pressure along the Melting and Sublimation Curves of Ordinary Water Substance" (PDF). IAPWS. September 2011. Retrieved 2013-02-19.  49. ^ a b Light, Truman S.; Licht, Stuart; Bevilacqua, Anthony C.; Morash, Kenneth R. (2005-01-01). "The Fundamental Conductivity and Resistivity of Water". Electrochemical and Solid-State Letters. 8 (1): E16–E19. doi:10.1149/1.1836121. ISSN 1099-0062.  51. ^ Hoy, AR; Bunker, PR (1979). "A precise solution of the rotation bending Schrödinger equation for a triatomic molecule with application to the water molecule". Journal of Molecular Spectroscopy. 74: 1–8. Bibcode:1979JMoSp..74....1H. doi:10.1016/0022-2852(79)90019-5.  52. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 393. ISBN 978-1-13-361109-7.  53. ^ Campbell, Mary K. & Farrell, Shawn O. (2007). Biochemistry (6th ed.). Cengage Learning. pp. 37–38. ISBN 978-0-495-39041-1.  54. ^ Ball, Philip (2008). "Water—an enduring mystery". Nature. 452 (7185): 291–292. Bibcode:2008Natur.452..291B. doi:10.1038/452291a. PMID 18354466.  55. ^ Campbell, Neil A. & Reece, Jane B. (2009). Biology (8th ed.). Pearson. p. 47. ISBN 978-0-8053-6844-4.  56. ^ Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro; Decuzzi, Paolo (2014). "Scaling behaviour for the water transport in nanoconfined geometries". Nature Communications. 5: 4565. Bibcode:2014NatCo...5E4565C. doi:10.1038/ncomms4565.  58. ^ Lide, David R. (2003-06-19). CRC Handbook of Chemistry and Physics, 84th Edition. CRC Press. Water in Table Surface Tension of Common Liquids. ISBN 9780849304842.  59. ^ Campbell, Neil (2011). Biology. Benjamin Cummings. p. 48. ISBN 0321558235.  61. ^ Richardson, Jeremy O.; Pérez, Cristóbal; Lobsiger, Simon; Reid, Adam A.; Temelso, Berhane; Shields, George C.; Kisiel, Zbigniew; Wales, David J.; Pate, Brooks H. (2016-03-18). 
"Concerted hydrogen-bond breaking by quantum tunneling in the water hexamer prism". Science. 351 (6279): 1310–1313. Bibcode:2016Sci...351.1310R. doi:10.1126/science.aae0012. ISSN 0036-8075. PMID 26989250. Retrieved 2016-04-23.  63. ^ Hardy, Edme H.; Zygar, Astrid; Zeidler, Manfred D.; Holz, Manfred; Sacher, Frank D. (2001). "Isotope effect on the translational and rotational motion in liquid water and ammonia". J. Chem Phys. 114 (7): 3174–3181. Bibcode:2001JChPh.114.3174H. doi:10.1063/1.1340584.  66. ^ Müller, Grover C. (June 1937). "Is 'Heavy Water' the Fountain of Youth?". Popular Science Monthly. 130 (6). New York: Popular Science Publishing. pp. 22–23. Retrieved 7 Jan 2011.  69. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 659. ISBN 978-1-13-361109-7.  70. ^ a b Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 654. ISBN 978-1-13-361109-7.  71. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 984. ISBN 978-1-13-361109-7.  72. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 171. ISBN 978-1-13-361109-7.  73. ^ "Hydrides". Chemwiki. UC Davis. Retrieved 2016-06-25.  74. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 932. ISBN 978-1-13-361109-7.  75. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 936. ISBN 978-1-13-361109-7.  76. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 338. ISBN 978-1-13-361109-7.  77. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 862. ISBN 978-1-13-361109-7.  78. ^ Zumdahl, Steven S.; Zumdahl, Susan A. (2013-01-01). Chemistry (9th ed.). Cengage Learning. p. 981. ISBN 978-1-13-361109-7.  80. ^ a b Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. p. 601. ISBN 0-08-037941-9.  81. ^ "Enterprise and electrolysis...". Royal Society of Chemistry. August 2003. Retrieved 2016-06-24.  82. ^ "Joseph Louis Gay-Lussac, French chemist (1778–1850)". 1902 Encyclopedia. Footnote 122-1. Retrieved 2016-05-26.  External links Wikiversity has small "student" steam tables suitable for classroom use.
Chirikov standard map

From Scholarpedia. Boris Chirikov and Dima Shepelyansky (2008), Scholarpedia, 3(3):3550. doi:10.4249/scholarpedia.3550, revision #137209. Curator: Dima Shepelyansky

The Chirikov standard map [1], [2] is an area-preserving map for two canonical dynamical variables, i.e., momentum and coordinate \( (p,x)\). It is described by the equations:
\[\tag{1} \begin{array}{l} \bar{p} = p+K\sin x \\ \bar{x} = x+\bar{p} \end{array} \]
where the bars indicate the new values of the variables after one map iteration and \(K\) is a dimensionless parameter that controls the degree of chaos. Due to the periodicity of \( \sin x \) the dynamics can be considered on a cylinder (by taking \( x \!\!\! \mod{2\pi} \)) or on a torus (by taking both \( x,p \!\!\! \mod{2\pi} \)). The map is generated by the time-dependent Hamiltonian \( H(p,x,t)= p^2/2 +K \cos(x) \, \delta_1(t) \), where \( \delta_1(t) \) is a periodic \( \delta- \)function with period 1 in time. The dynamics is given by a sequence of free propagations interleaved with periodic kicks. Examples of the Poincare sections of the standard map on a torus are shown in Figs. 1, 2 and 3.

Figure 1: K=0.5. Figure 2: K=0.971635. Figure 3: K=5.

Below the critical parameter \( K < K_c \) (Fig.1) the invariant Kolmogorov-Arnold-Moser (KAM) curves restrict the variation of momentum \( p \) to be bounded. The golden KAM curve with the rotation number \[ r=r_g=(\sqrt{5}-1)/2 =0.618033... \] is destroyed at \(K=K_g=0.971635... \) [3], [4] (Fig.2). This figure shows a generic phase space structure typical of various area-preserving maps with smooth generating functions: stability islands are embedded in a chaotic sea, and a similar structure appears on smaller and smaller scales. In the vicinity of a critical invariant curve whose rotation number \( r\) has a golden tail in its continued fraction expansion, the phase space structure is universal for all smooth maps [4]. Above the critical value \( K > K_c \) (see Fig.3, showing a chaotic component and visible islands of stability) the variation of \(p\) becomes unbounded and is characterized by a diffusive growth \( p^2 \sim D_0 t\) with the number of map iterations \( t \). Here \( D_0 \) is the diffusion rate, with \( D_0 \approx (K-K_c)^3/3 \) for \( K_c < K < 4 \) and \( D_0 \approx D_{ql}=K^2/2 \) for \( 4 < K \) [2], [5]. There are strong arguments in favor of the equality \( K_c = K_g \), but rigorously it is only proven that there are no KAM curves for \( K > 63/64 = 0.984375 \) [6]. With the numerical results [3], [4] this implies the inequality for the global chaos border, \( K_g \leq K_c < 63/64 \). A simple analytical criterion proposed in 1959, now known as the Chirikov resonance-overlap criterion [7], gives the chaos border \( K_c = \pi^2/4\) [1] and after some improvements leads to \( K_c \approx 1.2\) [2],[8]. This accuracy is not so impressive compared to modern numerical methods, but up to now this criterion remains the only simple analytical tool for determining the chaos border in various Hamiltonian dynamical systems. The Kolmogorov-Sinai entropy of the map is well described by the relation \( h \approx \ln(K/2)\), valid for \( K > 4 \) [1], [2].
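As a concrete numerical illustration (ours, not part of the original article), the map (1) can be iterated directly to estimate the diffusion rate \( D_0 = \langle p^2 \rangle / t \) in the chaotic regime. The Python sketch below uses \( K=5 \); the trajectory and iteration counts are arbitrary choices.

```python
# A minimal sketch (ours): iterate the Chirikov standard map (1) for an
# ensemble of trajectories and estimate the diffusion rate D0 = <p^2>/t.
import numpy as np

def standard_map(p, x, K):
    """One iteration of map (1): p' = p + K sin x, x' = x + p'."""
    p_new = p + K * np.sin(x)
    x_new = (x + p_new) % (2 * np.pi)   # x on a cylinder, p left unbounded
    return p_new, x_new

K, n_traj, n_iter = 5.0, 10_000, 1000
rng = np.random.default_rng(0)
p = np.zeros(n_traj)                    # start at p = 0
x = rng.uniform(0, 2 * np.pi, n_traj)   # with random initial phases

for _ in range(n_iter):
    p, x = standard_map(p, x, K)

D0 = np.mean(p**2) / n_iter
print(f"measured D0 = {D0:.1f}, quasilinear K^2/2 = {K**2 / 2:.1f}")
```

For \( K=5 \) the measured rate is of the order of the quasilinear value \( D_{ql}=K^2/2 \); the oscillating deviations of \( D_0(K) \) from \( D_{ql} \) are due to correlations between kicks [2], [5].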
Universality and Applications

The map (1) describes a situation when nonlinear resonances are equidistant in phase space, which corresponds to a local description of dynamical chaos. Due to this property various dynamical systems and maps can be locally reduced to the standard map, and for this reason the term standard map was coined in [2]. Thus, the standard map describes the universal, generic behavior of area-preserving maps with divided phase space, where integrable islands of stability are surrounded by a chaotic component. A short list of systems reducible to the standard map is given below:

• chaotic layer around the separatrix of a nonlinear resonance induced by a monochromatic force (the whisker map) [2]
• charged particle confinement in mirror magnetic traps [1], [2], [7], [9]
• fast crossing of a nonlinear resonance [1], [10]
• particle dynamics in accelerators [11]
• comet dynamics in the solar system [12], with a rather similar map for the comet Halley [13]
• microwave ionization of Rydberg atoms (linked to the Kepler map) [14] and autoionization of molecular Rydberg states [15]
• electron magnetotransport in a resonant tunneling diode [16]

Open Problems

• In spite of fundamental advances in ergodic theory [17], a rigorous proof of the existence of a set of positive measure of orbits with positive entropy is still missing, even for specific values of \( K \) (see e.g. [18]).
• What are the fractal properties of the critical chaos parameter \( K_c(r) \) as a function of the arithmetic properties of the rotation number \( r \) of a KAM curve? Do local maxima correspond only to a golden tail of the continued fraction expansion [3], [4], or may they have tails with Markov numbers, as is conjectured in [19]? (see also [20])
• Due to trajectory sticking around stability islands, the statistics of Poincare recurrences in Hamiltonian systems with divided phase space (see e.g. Fig.2 with a critical golden KAM curve) is characterized by an algebraic decay \( P(\tau) \propto 1/\tau^\alpha \) with \( \alpha \approx 1.5 \), while a theory based on the universality in the vicinity of the critical golden curve gives \( \alpha \approx 3 \); this difference persists up to \( 10^{13} \) map iterations; as a result correlation functions decay rather slowly, \( C(\tau) \sim \tau P(\tau) \propto 1/\tau^{\alpha-1} \), which can lead to a divergence of the diffusion rate \( D \sim \tau C(\tau) \) (see [21] and Refs. therein)

Quantum Map

Figure 4: Dependence of rescaled rotator energy \( E/(k^2/4) \) on time \( t \) for \( K=kT=5, \hbar=0.25 (k=20, T=0.25) \); the full curve shows numerical data and the straight line gives the diffusive energy growth in the classical case (from [23]).

The quantization of the standard map is obtained by considering the variables in (1) as Heisenberg operators with the commutation relation \( [p,x] = -i \hbar \), where \( \hbar \) is an effective dimensionless Planck constant. In the same way it is possible to use the Schrödinger equation with the Hamiltonian \( H(\hat{p},\hat{x},t) \) given above and \( \hat{p}=-i\hbar \partial/\partial x \). Integration over one period gives the quantum map for the wave function \( \psi \):
\[\tag{2} \bar{\psi} = \hat{U} \psi = e^{-i{\hat p}^2/2\hbar} e^{-i K/\hbar \cos {\hat x}} \psi \]
where the bar marks the new value of \( \psi \) after one map iteration. Due to the spatial periodicity of the Hamiltonian the momentum can be presented in the form \( p=\hbar (n + \beta) \), where \( n \) is an integer and \( \beta \) is a quasimomentum preserved by the evolution operator \( \hat U \). The case \( \beta =0 \) corresponds to periodic boundary conditions with \( \psi(x+2\pi) =\psi(x) \) and is known as the kicked rotator, introduced in [22].
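The quantum map (2) is convenient to iterate numerically by applying the kick in the coordinate representation and the free rotation in the momentum representation, switching between the two with fast Fourier transforms. The Python sketch below (ours, not from the original article) uses the kicked-rotator case \( \beta = 0 \) with the parameters of Fig. 4, \( K=5, \hbar=0.25 \); the basis size and the number of kicks are arbitrary truncations.

```python
# A minimal split-operator sketch of the quantum map (2) for the kicked
# rotator (beta = 0). Assumed parameters follow Fig. 4: K = 5, hbar = 0.25,
# so k = K/hbar = 20; the basis size N is an arbitrary truncation.
import numpy as np

hbar, K = 0.25, 5.0
k = K / hbar                                # kick strength k = 20
N = 2048                                    # momentum states |n>
n = np.fft.fftfreq(N, d=1.0 / N)            # integer level numbers n
x = 2 * np.pi * np.arange(N) / N            # coordinate grid on [0, 2 pi)

U_free = np.exp(-1j * hbar * n**2 / 2)      # exp(-i p^2/(2 hbar)), p = hbar n
U_kick = np.exp(-1j * k * np.cos(x))        # exp(-i (K/hbar) cos x)

psi = np.zeros(N, dtype=complex)
psi[0] = 1.0                                # start in the n = 0 state

energy = []
for _ in range(500):
    psi = np.fft.fft(U_kick * np.fft.ifft(psi))   # kick in x representation
    psi *= U_free                                 # free rotation in n basis
    energy.append(0.5 * np.sum(n**2 * np.abs(psi)**2))

# The energy first grows diffusively, E ~ (k^2/4) t, and then saturates
# after t_D ~ k^2/2 ~ 200 kicks due to dynamical localization (cf. Fig. 4
# and the estimate (3) below).
print(energy[10], energy[-1])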
Other notations, with \( \hbar \rightarrow T\), \( K/\hbar \rightarrow k\), are also used to mark the dependence on the period \( T\) between kicks; then \( K = k T\). The diffusion rate over the quantum levels \( n \) is \( D=D_0/\hbar^2= n^2/t \approx K^2/2\hbar^2 =k^2/2\); thus the rotator energy \( E = <n^2>/2\) grows linearly with time. Quantum interference effects lead to a suppression of this semiclassical diffusion [22] on the diffusive time scale \( t_D \), so that the quantum probability spreads effectively over only a finite number of states \( \Delta n \sim \sqrt{D t_D} \) (Fig.4). According to the analytical estimates obtained in [23]:
\[\tag{3} t_D \sim \Delta n \sim D \sim k^2 \sim D_0/\hbar^2 . \]
This diffusive time scale is much larger than the Ehrenfest time scale [23], [24] \( t_E \sim \ln(1/\hbar)/2h \), after which a minimal coherent wave packet spreads over the whole phase space due to the exponential instability of the classical dynamics. For \( t < t_E \) a quantum wave packet follows the chaotic dynamics of a classical trajectory, as is guaranteed by the Ehrenfest theorem [23]. For the case of Fig.4 the Kolmogorov-Sinai entropy is \( h \approx 1 \), and the Ehrenfest time \( t_E \sim 1 \) is extremely short compared to the diffusive time \( t_D \sim D \sim 200 \). The quantum suppression of chaotic diffusion is similar to Anderson localization in disordered systems if one considers the level number as an effective site number in a disordered lattice; this analogy was established in [25]. However, in contrast to the disordered potential of Anderson localization, in the quantum map (2) diffusion and chaos have a purely deterministic origin, appearing as a result of dynamical chaos in the classical limit.

Figure 5: Dependence of the localization length \( l \) on the quantum parameter of chaos \( K \rightarrow K_q=2k \sin T/2 \). The circles and the curve are, respectively, the numerical data and the theory for the classical diffusion \( D(K) \) (see [8]). The quantum data for \( l \) are shown by \( + \) (for \( 0<T<\pi \)) and by \( \times \) (for \( \pi<T<2\pi \)); here \( k=30; D_{ql}=k^2/2 \) (from [27]).

For this reason the phenomenon is called dynamical localization. The eigenstates of the unitary evolution operator \( \hat U \) are exponentially localized over the momentum states, \( \psi_m(n) \sim \exp(-|n-m|/l)/\sqrt{l} \), with the localization length \( l \sim \Delta n \sim t_D \) given by the relation [26], [27]
\[\tag{4} l=D(K)/2 =D_0(K)/2\hbar^2, \]
where \( D \) is the semiclassical diffusion rate expressed via the squared number of levels per period of the perturbation. For \( \hbar = T > 1 \) the chaos parameter \( K \) in the dependence \( D(K) \) should be replaced by its quantum value \( K \rightarrow K_q = 2 k \sin T/2 \) [27]. The quantum localization length \( l \) repeats the characteristic oscillations of the classical diffusion, as shown in Fig.5. The relation (4) assumes that \( T/4\pi \) is a typical irrational number, while for rational values of this ratio the phenomenon of quantum resonance takes place and the energy grows quadratically with time for rational values of the quasimomentum [28]. Derivations of the relation (4) based on field theory methods applied to dynamical systems with chaotic diffusion can be found in [29], [30] (see also Refs. therein).
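As a worked example (ours, not part of the original article): for the parameters of Fig. 4, \( k = 20 \), relation (4) gives a localization length \( l = D/2 \approx k^2/4 = 100 \) levels, so the diffusive growth of the energy is frozen after about \( t_D \sim D \sim 200 \) kicks, consistent with the saturation seen in Fig. 4.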
If the quantum map (2) is taken on a torus with \( N \) levels, then the level spacing statistics is described by the Poisson law for \( N \gg l \) and by the Wigner-Dyson law of random matrix theory for \( N \ll l \) [24],[31]. In the latter case the quantum eigenstates are ergodic on the torus, in agreement with the Shnirelman theorem, and the level spacing statistics agrees with the Bohigas-Giannoni-Schmit conjecture (see books on quantum chaos in Recommended Reading). The quantum map (2) was realized experimentally with cold atoms in a kicked optical lattice by the group of M.Raizen [32]. Such a case corresponds to a particle in an infinite periodic lattice with averaging over many values of \( \beta \). The quantum resonances at \( \beta \approx 0 \) were also experimentally observed with a Bose-Einstein condensate (BEC) in [33]. Quantum accelerator modes for kicked atoms falling in the gravitational field were found and analyzed in [34].

Extensions and Related Quantum Systems

Due to the universal properties of the standard map, its quantum version also finds applications for various systems and various physical effects:

• dynamical localization for ionization of excited hydrogen atoms in a microwave field was theoretically predicted in [35] and experimentally observed by the group of P.Koch [36] (see more details in [14],[37],[38])
• quantum particle in a triangular well and monochromatic field with a quantum delocalization transition [39]
• the kicked Harper model, where in contrast to the relation (4) quantum delocalization can take place due to the quasi-periodicity of the unperturbed spectrum (see [40], [41] and Refs. therein)
• 3D Anderson transition in a kicked rotator with modulated kick strength and quantum transport in mesoscopic conductors (see [42] and Refs. therein)
• dissipative quantum chaos [43]
• fractal Weyl law for the quantum standard map with absorption (see [44] and Refs. therein)

Figure 6: Dependence of rescaled energy \( E/(k^2/4) \) on time in the classical map (1) at \( K=5 \); time reversal is performed at \( t=150 \); numerical simulations are done on a BESM-6 computer with relative accuracy \( \epsilon \approx 10^{-12} \) (from [46]).

Time Reversibility and the Boltzmann-Loschmidt Dispute

Figure 7: Same as in Fig.6 but for the quantum map (2) with \( K=5, \hbar=0.25 \); the straight line shows the classical diffusion; time reversal is performed at the moment \( t=150 \) marked by the vertical line; numerical simulations are done on the same BESM-6 computer; in addition, random quantum phases \( 0<\Delta \phi <0.1 \) are added to the quantum amplitudes in the momentum representation at the moment of time reversal (from [46]).

The statistical theory of gases developed by Boltzmann leads to macroscopic irreversibility and entropy growth even if the dynamical equations of motion are time reversible. This contradiction was pointed out by Loschmidt and is now known as the Loschmidt paradox. The reply of Boltzmann relied on the technical difficulty of velocity reversal for material particles: a story tells that he simply said "then go and do it" [45]. The modern resolution of this famous dispute, which took place around 1876 in Vienna, came with the development of the theory of dynamical chaos (see e.g. [8], [17]). Indeed, for chaotic dynamics the Kolmogorov-Sinai entropy is positive and small perturbations grow exponentially with time, making the motion practically irreversible. It is convenient to illustrate this fact with the standard map, whose dynamics is time reversible, e.g.
by inverting all velocities in the middle of the free propagation between two kicks (see Fig.6). This explanation is valid for classical dynamics, while the case of quantum dynamics requires special consideration. Indeed, in the quantum case the exponential growth takes place only during the rather short Ehrenfest time, and the quantum evolution remains stable and reversible in the presence of small perturbations [46] (see Fig.7). Quantum reversibility in the presence of various perturbations has been actively studied in recent years and is now described through the Loschmidt echo (see [47] and Refs. therein). A method of approximate time reversal of matter waves for ultracold atoms in the regime of quantum chaos, like those in [32], [33], is proposed in [48]. In this method a large fraction of the atoms returns back even if the time reversal is not perfect. This fraction of the atoms exhibits Loschmidt cooling, which can decrease their temperature by several orders of magnitude. At the same time, a kicked BEC of attractive atoms (a soliton) described by the Gross-Pitaevskii equation demonstrates truly chaotic dynamics for which the exponential instability breaks the time reversibility [49]. However, since the number of atoms in a BEC is finite and since a BEC is a really quantum object, one should expect that the Ehrenfest time is still very short and hence the time reversibility should be preserved in the presence of small errors if the second quantization is taken into account.

Links to Other Physical Topics

Frenkel-Kontorova Model

The Frenkel-Kontorova model describes a one-dimensional chain of atoms/particles with harmonic couplings placed in a periodic potential [50]. This model was introduced with the aim of studying crystal dislocations, but it also applies successfully to the description of commensurate-incommensurate phase transitions, epitaxial monolayers on a crystal surface, ionic conductors, glassy materials, charge-density waves and dry friction [51]. The Hamiltonian of the model is \( H= \sum_i \left({P_i^2 \over 2} + {(x_i -x_{i-1})^2 \over 2}- K \cos x_i \right)\), where \( P_i, x_i \) are the momentum and position of atom \( i \). At equilibrium the momenta \( P_i =0\) and \( \partial H/\partial x_i =0\), so that the positions of the atoms are described by the map (1) with \( p_{i+1} = x_{i+1}- x_i , \; p_{i+1}= p_i +K\sin x_i\). The density of atoms corresponds to the rotation number \( r \) of an invariant KAM curve. For the golden density with \( r =r_g\), the chain slides in the periodic potential for \( K < K_g \) (KAM curve regime), while for \( K > K_g \) the transition by the breaking of analyticity, or Aubry transition, takes place: the chain becomes pinned and the atoms form an invariant Cantor set called a cantorus (see [52] and Aubry-Mather theory). In this regime the phonon spectrum has a gap, so that phonon excitations are suppressed at low temperature. The mathematical Aubry-Mather theory guarantees that the ground state of the chain exists and is unique. However, there exist exponentially many static equilibrium configurations whose energies are exponentially close to the energy of the ground state. The energies of these configurations form a fractal quasi-degenerate band structure and become mixed at any physically realistic temperature. Thus, such configurations can be viewed as a dynamical spin glass.
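As a concrete illustration of this correspondence (ours, not part of the original article), a static equilibrium configuration of the chain can be generated by iterating map (1) with the identification \( p_{i+1}=x_{i+1}-x_i \); the chain length, \( K \) and the mean spacing below are arbitrary choices.

```python
# A minimal sketch (ours): equilibrium positions of a Frenkel-Kontorova
# chain generated by iterating map (1), using p_{i+1} = x_{i+1} - x_i.
import numpy as np

K = 0.5                          # below K_g: sliding (KAM) regime
r_g = (np.sqrt(5) - 1) / 2       # golden-mean density of atoms
x, p = 0.0, 2 * np.pi * r_g      # first atom and mean spacing 2*pi*r_g

positions = [x]
for _ in range(20):              # generate twenty further atoms
    p = p + K * np.sin(x)        # p_{i+1} = p_i + K sin x_i
    x = x + p                    # x_{i+1} = x_i + p_{i+1}
    positions.append(x)

# Consecutive triples satisfy the equilibrium condition dH/dx_i = 0,
# i.e. x_{i+1} - 2 x_i + x_{i-1} = K sin x_i.
print(np.round(positions, 3))
```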
For the case of Coulomb interactions between particles (e.g. ions or electrons) one obtains the problem of a Wigner crystal in a periodic potential, which again is locally described by the Frenkel-Kontorova model, since the map (1) gives the local description of the dynamics. For the quantum Frenkel-Kontorova model the dynamics of the atoms (ions) in the chain is quantum. In this case quantum vacuum fluctuations and instanton tunneling lead to a quantum melting of the pinned phase: above a certain effective Planck constant a quantum phase transition takes place from a pinned instanton glass to a sliding phonon gas (see [53] and Refs. therein).

Quantum Computing

One iteration of the maps (1) and (2) can be simulated on a quantum computer in a polynomial number of quantum gates for an exponentially large vector representing a Liouville density distribution or a quantum state. The quantum algorithm for such a quantum computation is described in [54]; effects of quantum errors are analyzed in [55] (see also Refs. therein).

Historical Notes

The standard map (1), in the form of a recursive relation for atoms in a periodic potential, appears already in the works of Kontorova and Frenkel [50]. As a dynamical map it first appeared as a description of electron dynamics in a new relativistic accelerator proposed by V.I.Veksler (Dokl. Akad. Nauk SSSR 43: 346 (1944)). The regime of stable regular acceleration was studied later also by A.A.Kolomensky (Zh. Tekh. Fiz. 30: 1347 (1960)) and S.P.Kapitsa, V.N.Melekhin ("Microtron", Nauka, Moscow (1969), in Russian). Among the early researchers of model (1) was also the British physicist J.B.Taylor (unpublished reports). The description of chaos in map (1) and its main properties, including the chaos border, diffusion rate and positive entropy, was given in [1]. The term "standard map" appeared in [2]; "Chirikov-Taylor map" [8] and "Chirikov standard map" [16] are also used; the quantum standard map, or kicked rotator, was first considered in [22]. Appearance of other terms: Kolmogorov-Arnold-Moser theory [1], Arnold diffusion [1], Kolmogorov-Sinai entropy [2], Ehrenfest time [24].

Recommended Reading

B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969), (Engl. Trans., CERN Trans. 71-40 (1971))
B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979).
B.V.Chirikov, "Time-dependent quantum systems" in "Chaos and quantum mechanics", Les Houches Lecture Series, Vol. 52, pp.443-545, Eds. M.-J.Giannoni, A.Voros, J.Zinn-Justin, Elsevier Sci. Publ., Amsterdam (1991)
A.J.Lichtenberg, M.A.Lieberman, "Regular and chaotic dynamics", Springer, Berlin (1992).
F.Haake, "Quantum signatures of chaos", Springer, Berlin (2001).
L.E.Reichl, "The Transition to chaos in conservative classical systems and quantum manifestations", Springer, Berlin (2004).

Internal references

• Martin Gutzwiller (2007) Quantum chaos. Scholarpedia, 2(12):3146.

External Links

Selected publications of Boris Chirikov [1]
Sputnik of Chaos [2]
Google query for "standard map" [3]

References

B.V.Chirikov, "Research concerning the theory of nonlinear resonance and stochasticity", Preprint N 267, Institute of Nuclear Physics, Novosibirsk (1969) [4], (Engl. Trans., CERN Trans. 71-40 (1971)) [5]. B.V.Chirikov, "A universal instability of many-dimensional oscillator systems", Phys. Rep. 52: 263 (1979) [6]. J.M.Greene, "Method for determining a stochastic transition", J. Math. Phys. 20(6): 1183 (1979).
R.S.MacKay, "A renormalization approach to invariant circles in area-preserving maps", Physica D 7(1-3): 283 (1983). R.S.MacKay, J.D.Meiss, I.C.Percival, "Transport in Hamiltonian systems", Physica D 13(1-2): 55 (1984). R.S.MacKay, I.C.Percival, "Converse KAM - theory and practice", Comm. Math. Phys. 94(4): 469 (1985). B.V.Chirikov, "Resonance processes in magnetic traps", At. Energ. 6: 630 (1959) (in Russian [7]) (Engl. Transl., J. Nucl. Energy Part C: Plasma Phys. 1: 253 (1960) [8]). B.V.Chirikov, "Particle confinement and adiabatic invariance", Proc. R. Soc. Lond. A 413: 145 (1987) [9]. B.V.Chirikov, D.L.Shepelyanskii, "Diffusion during multiple passage through a nonlinear resonance", Sov. Phys. Tech. Phys. 27(2): 156 (1982) [10] (in Russian [11]) F.M.Izraelev, "Nearly linear mappings and their applications", Physica D 1(3): 243 (1980). T.Y.Petrowsky, "Chaos and cometary clouds in the solar system", Phys. Lett. A 117(7): 328 (1986). B.V.Chirikov, V.V.Vecheslavov, "Chaotic dynamics of comet Halley", Astron. Astrophys. 221: 146 (1989) [12]. G.Casati, I.Guarneri, D.L.Shepelyansky, "Hydrogen atom in monochromatic field: chaos and dynamical photonic localization", IEEE J. of Quant. Elect. 24: 1420 (1988). F.Benvenuto, G.Casati, D.L.Shepelyansky, "Chaotic autoionization of molecular Rydberg states", Phys. Rev. Lett. 72: 1818 (1994). D.L.Shepelyansky, A.D.Stone, "Chaotic Landau level mixing in classical and quantum wells", Phys. Rev. Lett. 74: 2098 (1995). I.P.Cornfeld, S.V.Fomin, Ya.G.Sinai, "Ergodic theory", Springer, Berlin (1982). A.Giorgilli, V.F.Lazutkin, "Some remarks on the problem of ergodicity of the standard map", Phys. Lett. A 272: 359 (2000). B.V.Chirikov, D.L.Shepelyansky, "Chaos border and statistical anomalies", Eds. D.V.Shirkov, D.I.Kazakov and A.A.Vladimirov, World Sci. Publ., Singapore, "Renormalization Group" p.221 (1988) [13]. J.M.Greene, R.S.MacKay, J.Stark, "Boundary circles for area-preserving maps", Physica D 21(2-3): 267 (1986). B.V.Chirikov, D.L.Shepelyansky, "Asymptotic statistics of Poincare recurrences in Hamiltonian systems with Divided Phase Space", Phys. Rev. Lett. 82: 528 (1999) [14]; 89: 239402 (2002) [15] . F.M.Izrailev, G.Casati, J.Ford, B.V.Chirikov, "Stochastic behavior of a quantum pendulum under a periodic perturbation", Preprint 78-46, Institute of Nuclear Physics, Novosibirsk (1978) (extended version in Russian) [16]; G.Casati, B.V.Chirikov, F.M.Izrailev, J.Ford, Lecture Notes in Physics, Springer, Berlin, 93: 334 (1979) [17]. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Dynamical stochasticity in classical and quantum mechanics", Sov. Scient. Rev. C 2: 209 (1981) (Section C - Mathematical Physics Reviews, Ed. S.P.Novikov vol.2, Harwood Acad. Publ., Chur, Switzerland (1981)) [18]. B.V.Chirikov, F.M.Izrailev, D.L.Shepelyansky, "Quantum chaos: localization vs. ergodicity", Physica D 33: 77 (1988) [19]. S.Fishman, D.R.Grempel, R.E.Prange, "Chaos, quantum recurrences, and Anderson localization", Phys. Rev. Lett. 49: 509 (1982). B.V.Chirikov, D.L.Shepelyanskii, "Localization of dynamical chaos in quantum systems", Izv. Vyssh. Ucheb. Zaved. Radiofizika 29(9): 1041 (1986) (in Russian [20]); (English Trans. Plenum Publ. [21] ). D.L.Shepelyansky, "Localization of diffusive excitation in multi-level systems", Physica D 28: 103 (1987). F.M.Izrailev, D.L.Shepelyanskii, "Quantum resonance for a rotator in a nonlinear periodic field", Theor. Math. Phys. 43(3): 553 (1980); see also I. Dana and D.L. Dorofeev, "General quantum resonances of kicked particle", Phys. 
Rev. E 73: 026206 (2006). K.M.Frahm, "Localization in a rough billiard: a sigma model formulation", Phys. Rev. B 55: 8626(R) (1997). C.Tian, A.Kamenev, A.Larkin, "Weak dynamical localization in periodically kicked cold atomic gases", Phys. Rev. Lett. 93: 124101 (2004). F.M.Izrailev, "Simple models of quantum chaos: spectrum and eigenfunctions", Phys. Rep. 196: 299 (1990). F.L.Moore, J.C.Robinson, C.F.Bharucha, B.Sundaram, M.G.Raizen, "Atom optics realization of the quantum \( \delta \)-kicked rotor", Phys. Rev. Lett. 75: 4598 (1995). C.Ryu, M.F.Anderen, A.Vaziri, M.B.d'Arcy, J.M.Grossman, K.Helmerson, W.D.Phillips, "High-order quantum resonances observed in a periodically kicked Bose-Einstein condensate", Phys. Rev. Lett. 96: 160403 (2006). A.Buchleitner, M.B.d'Arcy, S.Fishman, S.A.Gardiner, I.Guarneri, Z.-Y.Ma, L.Rebuzzini, G.S.Summy, "Quantum accelerator modes from the Farey tree", Phys. Rev. Lett. 96: 164101 (2006). G.Casati, B.V.Chirikov, D.L.Shepelyansky, ""Quantum limitations for chaotic excitation of the hydrogen atom in a monochromatic field", Phys. Rev. Lett. 53: 2525 (1984) [22]. E.J.Galvez, B.E.Sauer, L.Moorman, P.M.Koch, D.Richards, "Microwave ionization of H atoms: breakdown of classical dynamics for high frequencies", Phys. Rev. Lett. 61: 2011 (1988). G.Casati, B.V.Chirikov, D.L.Shepelyansky, I.Guarneri, ""Relevance of classical chaos in quantum mechanics: the hydrogen atom in a monochromatic field", Phys. Rep. 154: 77 (1987) [23]. P.M.Koch, K.A.H. van Leeuwen, "The importance of resonances in microwave “ionization” of excited hydrogen atoms", Phys. Rep. 255: 289 (1995). F.Benvenuto, G.Casati, I.Guarneri, D.L.Shepelyansky, "A quantum transition from localized to extended states in a classically chaotic system", Z.Phys.B - Cond. Matt. 84: 159 (1991). R.Lima, D.L.Shepelyansky, "Fast delocalization in a model of quantum kicked rotator", Phys. Rev. Lett. 67: 1377 (1991). T.Prosen, I.I.Saija, N.Shah, "Dimer decimation and intricately nested localized-ballistic phases of a kicked Harper model", Phys. Rev. Lett. 87: 066601 (2001). F.Borgonovi, D.L.Shepelyansky, "Two interacting particles in an effective 2-3-d random potential", J. de Physique I France 6: 287 (1996) and F. Borgonovi, I.Guarneri and L. Rebuzzini, "Chaotic diffusion and statistics of universal scattering fluctuations", Phys. Rev. Lett. 72: 1463 (1994). G.G.Carlo, G.Benenti, D.L.Shepelyansky, "Dissipative quantum chaos: transition from wave packet collapse to explosion", Phys. Rev. Lett. 95: 164101 (2005). D.L.Shepelyansky, "Fractal Weyl law for quantum fractal eigenstates", Phys. Rev. E 77: 015202(R) (2008). J.E.Mayer, M.Goppert-Mayer, "Statistical mechanics", John Wiley & Sons, N.Y. (1977). D.L.Shepelyansky, "Some statistical properties of simple classically stochastic quantum systems", Physica D 8: 208 (1983). T.Gorin, T.Prosen, T.H.Seligman, M.Znidaric, "Dynamics of Loschmidt echos and fidelity decay", Phys. Rep. 435: 33 (2006). J.Martin, B.Georgeot, D.L.Shepelyansky, "Loschmidt cooling by time reversal of atomic matter waves", arxiv:0710.4860[cond-mat]), Phys. Rev. Lett. 100: 044106 (2008). F.Benvenuto, G.Casati, A.S.Pikovsky, D.L.Shepelyansky, "Manifestations of classical and quantum chaos in nonlinear wave propagation", Phys. Rev. A 44: 3423(R) (1991). T.A.Kontorova, Ya.I.Frenkel, "On the theory of plastic deformation and doubling", Zh. Eksp. Teor. Fiz 8: 89 (1938); 8: 1340 (1938); 8: 1359 (1938) (in Russian). O.M. Braun, Yu.S. 
Kivshar, "The Frenkel-Kontorova Model: Concepts, Methods, and Applications", Springer, Berlin (2004). S.Aubry, "The twist map, the extended Frenkel-Kontorova model and the devil's staircase", Physica D 7(1-3): 240 (1983). I.Garcia-Mata, O.V.Zhirov, D.L.Shepelyansky, "Frenkel-Kontorova model with cold trapped ions", Eur. Phys. J. D 41: 325 (2007). B.Georgeot, D.L.Shepelyansky, "Exponential gain in quantum computing of quantum chaos and localization", Phys. Rev. Lett. 86: 2890 (2001). K.M.Frahm, R.Fleckinger, D.L.Shepelyansky, "Quantum chaos and random matrix theory for fidelity decay in quantum computations with static imperfections", Eur. Phys. J. D 29: 139 (2004). See also Hamiltonian systems, Mapping, Chaos, Kolmogorov-Arnold-Moser Theory, Kolmogorov-Sinai entropy, Aubry-Mather theory, Quantum chaos Personal tools Focal areas
Sunday, August 5, 2018

Interpretation of Quantum Mechanics for Cats

Many Worlds

The many-worlds interpretation asserts that the universal wavefunction is objectively real and that there is no wavefunction collapse: every possible outcome of a measurement is realized in some branch of the wavefunction.

Pilot Wave

The de Broglie–Bohm theory, also known as the pilot wave theory, Bohmian mechanics, Bohm's interpretation, and the causal interpretation, is an interpretation of quantum mechanics. In addition to a wavefunction on the space of all possible configurations, it also postulates an actual configuration that exists even when unobserved.

Transactional Interpretation

The transactional interpretation of quantum mechanics (TIQM) by John G. Cramer is an interpretation of quantum mechanics inspired by the Wheeler–Feynman absorber theory. It describes the collapse of the wave function as resulting from a time-symmetric transaction between a possibility wave from the source to the receiver (the wave function) and a possibility wave from the receiver to the source (the complex conjugate of the wave function). Since the possibility wave is collapsed by interaction with the receiver, consciousness plays no role in the theory, eliminating Schrödinger's cat paradox.

Consistent Histories

This interpretation of quantum mechanics is based on a consistency criterion that allows probabilities to be assigned to various alternative histories of a system, such that the probabilities for each history obey the rules of classical probability while being consistent with the Schrödinger equation. In contrast to some interpretations of quantum mechanics, particularly the Copenhagen interpretation, the framework does not include "wavefunction collapse" as a relevant description of any physical process, and emphasizes that measurement theory is not a fundamental ingredient of quantum mechanics.

Another interpretation suggests that quantum gravity makes for fundamental limitations on the accuracy of clocks, which imply a type of decoherence.
Clifford Algebras: Applications to Mathematics, Physics, and Engineering

Editor: Rafał Abłamowicz

Part of the Progress in Mathematical Physics book series (PMP, volume 34)

Table of contents

1. Clifford Analysis
2. Geometry
3. Mathematical Structures
4. Physics
   • Francesco Bonechi, Nicola Ciccoli, Marco Tarlini
   • Chiang-Mei Chen, James M. Nester, Roh-Suan Tung
   • Claude Daviau
   • Tevian Dray, Corinne A. Manogue
   • Anthony Lasenby, Chris Doran, Elsa Arcaute
   • José María Pozo, Josep Manel Parra
   • Greg Trayling, William E. Baylis
5. Applications in Engineering
   • Christian Perwass, Christian Gebken, Gerald Sommer
   • Jan J. Koenderink
   • Bodo Rosenhahn, Gerald Sommer

About this book

The invited papers in this volume provide a detailed examination of Clifford algebras and their significance to geometry, analysis, physics, and engineering. Divided into five parts, the book's first section is devoted to Clifford analysis; here, topics encompass the Morera problem, inverse scattering associated with the Schrödinger equation, discrete Stokes equations in the plane, a symmetric functional calculus, Poincaré series, differential operators in Lipschitz domains, Paley-Wiener theorems and Shannon sampling, Bergman projections, and quaternionic calculus for a class of boundary value problems. A careful discussion of geometric applications of Clifford algebras follows, with papers on hyper-Hermitian manifolds, spin structures and Clifford bundles, differential forms on conformal manifolds, connection and torsion, Casimir elements and Bochner identities on Riemannian manifolds, Rarita-Schwinger operators, and the interface between noncommutative geometry and physics. In addition, attention is paid to the algebraic and Lie-theoretic applications of Clifford algebras, particularly their intersection with Hopf algebras, Lie algebras and representations, graded algebras, and associated mathematical structures. Symplectic Clifford algebras are also discussed.

Finally, Clifford algebras play a strong role in both physics and engineering. The physics section features an investigation of geometric algebras, chiral Dirac equations, spinors and Fermions, and applications of Clifford algebras in classical mechanics and general relativity. Twistor and octonionic methods, electromagnetism and gravity, elementary particle physics, noncommutative physics, Dirac's equation, quantum spheres, and the Standard Model are among the topics considered at length. The section devoted to engineering applications includes papers on twist representations for cycloidal curves, a description of an image space using Cayley-Klein geometry, pose estimation, and implementations of Clifford algebra co-processor design. While the papers collected in this volume require that the reader possess a solid knowledge of appropriate background material, they lead to the most current research topics. With its wide range of topics, well-established contributors, and excellent references and index, this book will appeal to graduate students and researchers.

Keywords: algebra, Dirac operator, eigenvalue, lattice, Schrödinger equation, spinor calculus, differential equation, electromagnetism, geometry, manifold, mathematics, mechanics, model, operator

Editors and affiliations: Rafał Abłamowicz, Department of Mathematics, Tennessee Technological University, USA
What is Life? The Physical Aspect of the Living Cell

First published 1944. Based on lectures delivered under the auspices of the Dublin Institute for Advanced Studies at Trinity College, Dublin, in February 1943.

To the memory of My Parents

A scientist is supposed to have a complete and thorough knowledge, at first hand, of some subjects and, therefore, is usually expected not to write on any topic of which he is not a master. This is regarded as a matter of noblesse oblige. For the present purpose I beg to renounce the noblesse, if any, and to be freed of the ensuing obligation. My excuse is as follows: We have inherited from our forefathers the keen longing for unified, all-embracing knowledge. The name given to the highest institutions of learning reminds us, that from antiquity and throughout many centuries the universal aspect has been the only one to be given full credit. But the spread, both in width and depth, of the multifarious branches of knowledge during the last hundred odd years has confronted us with a queer dilemma. We feel clearly that we are only now beginning to acquire reliable material for welding together the sum total of all that is known into a whole; but, on the other hand, it has become next to impossible for a single mind fully to command more than a small specialized portion of it. I can see no other escape from this dilemma (lest our true aim be lost for ever) than that some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them - and at the risk of making fools of ourselves. So much for my apology.

The difficulties of language are not negligible. One's native speech is a closely fitting garment, and one never feels quite at ease when it is not immediately available and has to be replaced by another. My thanks are due to Dr Inkster (Trinity College, Dublin), to Dr Padraig Browne (St Patrick's College, Maynooth) and, last but not least, to Mr S. C. Roberts. They were put to great trouble to fit the new garment on me and to even greater trouble by my occasional reluctance to give up some 'original' fashion of my own. Should some of it have survived the mitigating tendency of my friends, it is to be put at my door, not at theirs.

The head-lines of the numerous sections were originally intended to be marginal summaries, and the text of every chapter should be read in continuo.

E.S. Dublin, September 1944

Homo liber nulla de re minus quam de morte cogitat; et ejus sapientia non mortis sed vitae meditatio est. (There is nothing over which a free man ponders less than death; his wisdom is, to meditate not on death but on life.) SPINOZA'S Ethics, Pt IV, Prop. 67

The Classical Physicist's Approach to the Subject

This little book arose from a course of public lectures, delivered by a theoretical physicist to an audience of about four hundred which did not substantially dwindle, though warned at the outset that the subject-matter was difficult and that the lectures could not be termed popular, even though the physicist's most dreaded weapon, mathematical deduction, would hardly be utilized. The reason for this was not that the subject was simple enough to be explained without mathematics, but rather that it was much too involved to be fully accessible to mathematics. Another feature which at least induced a semblance of popularity was the lecturer's intention to make clear the fundamental idea, which hovers between biology and physics, to both the physicist and the biologist. For actually, in spite of the variety of topics involved, the whole enterprise is intended to convey one idea only - one small comment on a large and important question. In order not to lose our way, it may be useful to outline the plan very briefly in advance.
The large and important and very much discussed question is: How can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry? The preliminary answer which this little book will endeavor to expound and establish can be summarized as follows: The obvious inability of present-day physics and chemistry to account for such events is no reason at all for doubting that they can be accounted for by those sciences. That would be a very trivial remark if it were meant only to stimulate the hope of achieving in the future what has not been achieved in the past. But the meaning is very much more positive, viz. that the inability, up to the present moment, is amply accounted for. Today, thanks to the ingenious work of biologists, mainly of geneticists, during the last thirty or forty years, enough is known about the actual material structure of organisms and about their functioning to state that, and to tell precisely why, present-day physics and chemistry could not possibly account for what happens in space and time within a living organism.

The arrangements of the atoms in the most vital parts of an organism and the interplay of these arrangements differ in a fundamental way from all those arrangements of atoms which physicists and chemists have hitherto made the object of their experimental and theoretical research. Yet the difference which I have just termed fundamental is of such a kind that it might easily appear slight to anyone except a physicist who is thoroughly imbued with the knowledge that the laws of physics and chemistry are statistical throughout. For it is in relation to the statistical point of view that the structure of the vital parts of living organisms differs so entirely from that of any piece of matter that we physicists and chemists have ever handled physically in our laboratories or mentally at our writing desks. It is well-nigh unthinkable that the laws and regularities thus discovered should happen to apply immediately to the behaviour of systems which do not exhibit the structure on which those laws and regularities are based.

The non-physicist cannot be expected even to grasp, let alone to appreciate, the relevance of the difference in 'statistical structure' stated in terms so abstract as I have just used. To give the statement life and colour, let me anticipate what will be explained in much more detail later, namely, that the most essential part of a living cell - the chromosome fibre - may suitably be called an aperiodic crystal. In physics we have dealt hitherto only with periodic crystals. To a humble physicist's mind, these are very interesting and complicated objects; they constitute one of the most fascinating and complex material structures by which inanimate nature puzzles his wits. Yet, compared with the aperiodic crystal, they are rather plain and dull. The difference in structure is of the same kind as that between an ordinary wallpaper, in which the same pattern is repeated again and again in regular periodicity, and a masterpiece of embroidery, say a Raphael tapestry, which shows no dull repetition, but an elaborate, coherent, meaningful design traced by the great master.

In calling the periodic crystal one of the most complex objects of his research, I had in mind the physicist proper. Organic chemistry, indeed, in investigating more and more complicated molecules, has come very much nearer to that 'aperiodic crystal' which, in my opinion, is the material carrier of life. And therefore it is small wonder that the organic chemist has already made large and important contributions to the problem of life, whereas the physicist has made next to none.

After having thus indicated very briefly the general idea - or rather the ultimate scope - of our investigation, let me describe the line of attack.
I propose to develop first what you might call 'a naive physicist's ideas about organisms', that is, the ideas which might arise in the mind of a physicist who, after having learnt his physics and, more especially, the statistical foundation of his science, begins to think about organisms and about the way they behave and function, and who comes to ask himself conscientiously whether he, from what he has learnt, from the point of view of his comparatively simple and clear and humble science, can make any relevant contributions to the question. It will turn out that he can. The next step must be to compare his theoretical anticipations with the biological facts. It will then turn out that - though on the whole his ideas seem quite sensible - they need to be appreciably amended. In this way we shall gradually approach the correct view - or, to put it more modestly, the one that I propose as the correct one. Even if I should be right in this, I do not know whether my way of approach is really the best and simplest. But, in short, it was mine. The 'naive physicist' was myself. And I could not find any better or clearer way towards the goal than my own crooked one.

A good method of developing 'the naive physicist's ideas' is to start from the odd, almost ludicrous, question: Why are atoms so small? To begin with, they are very small indeed. Every little piece of matter handled in everyday life contains an enormous number of them. Many examples have been devised to bring this fact home to an audience, none of them more impressive than the one used by Lord Kelvin: Suppose that you could mark the molecules in a glass of water; then pour the contents of the glass into the ocean and stir the latter thoroughly so as to distribute the marked molecules uniformly throughout the seven seas; if then you took a glass of water anywhere out of the ocean, you would find in it about a hundred of your marked molecules.

The actual sizes of atoms lie between about 1/5000 and 1/2000 of the wave-length of yellow light. The comparison is significant, because the wave-length roughly indicates the dimensions of the smallest grain still recognizable in the microscope. Thus it will be seen that such a grain still contains thousands of millions of atoms.

Now, why are atoms so small? Clearly, the question is an evasion. For it is not really aimed at the size of the atoms. It is concerned with the size of organisms, more particularly with the size of our own corporeal selves. Indeed, the atom is small, when referred to our civic unit of length, say the yard or the metre. In atomic physics one is accustomed to use the so-called Angstrom (abbr. Å), which is the 10^10th part of a metre, or in decimal notation 0.0000000001 metre. Atomic diameters range between 1 and 2 Å. Now those civic units (in relation to which the atoms are so small) are closely related to the size of our bodies. There is a story tracing the yard back to the humour of an English king whom his councillors asked what unit to adopt - and he stretched out his arm sideways and said: 'Take the distance from the middle of my chest to my fingertips, that will do all right.' True or not, the story is significant for our purpose. The king would naturally indicate a length comparable with that of his own body, knowing that anything else would be very inconvenient. With all his predilection for the Angstrom unit, the physicist prefers to be told that his new suit will require six and a half yards of tweed - rather than sixty-five thousand millions of Angstroms of tweed.
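A quick sanity check of these numbers (ours, not Schrödinger's): yellow light has a wave-length of roughly 5,900 Å, and 1/5000 to 1/2000 of that is about 1.2 to 3 Å, matching the quoted atomic diameters. A just-visible grain about half a micron across then has a volume of order 10^-19 cubic metres, while a single atom occupies roughly (2 Å)^3 ≈ 10^-29 cubic metres, so the grain contains on the order of 10^10 atoms - 'thousands of millions', as stated.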
It thus being settled that our question really aims at the ratio of two lengths - that of our body and that of the atom - with an incontestable priority of independent existence on the side of the atom, the question truly reads: Why must our bodies be so large compared with the atom?

I can imagine that many a keen student of physics or chemistry may have deplored the fact that every one of our sense organs, forming a more or less substantial part of our body and hence (in view of the magnitude of the said ratio) being itself composed of innumerable atoms, is much too coarse to be affected by the impact of a single atom. We cannot see or feel or hear the single atoms. Our hypotheses with regard to them differ widely from the immediate findings of our gross sense organs and cannot be put to the test of direct inspection.

Must that be so? Is there an intrinsic reason for it? Can we trace back this state of affairs to some kind of first principle, in order to ascertain and to understand why nothing else is compatible with the very laws of Nature? Now this, for once, is a problem which the physicist is able to clear up completely. The answer to all the queries is in the affirmative.

If it were not so, if we were organisms so sensitive that a single atom, or even a few atoms, could make a perceptible impression on our senses - Heavens, what would life be like! To stress one point: an organism of that kind would most certainly not be capable of developing the kind of orderly thought which, after passing through a long sequence of earlier stages, ultimately results in forming, among many other ideas, the idea of an atom.

Even though we select this one point, the following considerations would essentially apply also to the functioning of organs other than the brain and the sensorial system. Nevertheless, the one and only thing of paramount interest to us in ourselves is, that we feel and think and perceive. To the physiological process which is responsible for thought and sense all the others play an auxiliary part, at least from the human point of view, if not from that of purely objective biology. Moreover, it will greatly facilitate our task to choose for investigation the process which is closely accompanied by subjective events, even though we are ignorant of the true nature of this close parallelism. Indeed, in my view, it lies outside the range of natural science and very probably of human understanding altogether.

We are thus faced with the following question: Why should an organ like our brain, with the sensorial system attached to it, of necessity consist of an enormous number of atoms, in order that its physically changing state should be in close and intimate correspondence with a highly developed thought? On what grounds is the latter task of the said organ incompatible with being, as a whole or in some of its peripheral parts which interact directly with the environment, a mechanism sufficiently refined and sensitive to respond to and register the impact of a single atom from outside?

The reason for this is, that what we call thought (1) is itself an orderly thing, and (2) can only be applied to material, i.e. to perceptions or experiences, which have a certain degree of orderliness. This has two consequences. First, a physical organization, to be in close correspondence with thought (as my brain is with my thought), must be a very well-ordered organization, and that means that the events that happen within it must obey strict physical laws, at least to a very high degree of accuracy.
Secondly, the physical impressions made upon that physically well-organized system by other bodies from outside, obviously correspond to the perception and experience of the corresponding thought, forming its material, as I have called it. Therefore, the physical interactions between our system and others must, as a rule, themselves possess a certain degree of physical orderliness, that is to say, they too must obey strict physical laws to a certain degree of accuracy.

And why could all this not be fulfilled in the case of an organism composed of a moderate number of atoms only and sensitive already to the impact of one or a few atoms only? Because we know all atoms to perform all the time a completely disorderly heat motion, which, so to speak, opposes itself to their orderly behaviour and does not allow the events that happen between a small number of atoms to enrol themselves according to any recognizable laws. Only in the co-operation of an enormously large number of atoms do statistical laws begin to operate and control the behaviour of these assemblies with an accuracy increasing as the number of atoms involved increases. It is in that way that the events acquire truly orderly features. All the physical and chemical laws that are known to play an important part in the life of organisms are of this statistical kind; any other kind of lawfulness and orderliness that one might think of is being perpetually disturbed and made inoperative by the unceasing heat motion of the atoms.

Let me try to illustrate this by a few examples, picked somewhat at random out of thousands, and possibly not just the best ones to appeal to a reader who is learning for the first time about this condition of things -a condition which in modern physics and chemistry is as fundamental as, say, the fact that organisms are composed of cells is in biology, or as Newton's Law in astronomy, or even as the series of integers, 1, 2, 3, 4, 5, ... in mathematics. An entire newcomer should not expect to obtain from the following few pages a full understanding and appreciation of the subject, which is associated with the illustrious names of Ludwig Boltzmann and Willard Gibbs and treated in textbooks under the name of 'statistical thermodynamics'.

If you fill an oblong quartz tube with oxygen gas and put it into a magnetic field, you find that the gas is magnetized. The magnetization is due to the fact that the oxygen molecules are little magnets and tend to orientate themselves parallel to the field, like a compass needle. But you must not think that they actually all turn parallel. For if you double the field, you get double the magnetization in your oxygen body, and that proportionality goes on to extremely high field strengths, the magnetization increasing at the rate of the field you apply. This is a particularly clear example of a purely statistical law. The orientation the field tends to produce is continually counteracted by the heat motion, which works for random orientation. The effect of this striving is, actually, only a small preference for acute over obtuse angles between the dipole axes and the field. Though the single atoms change their orientation incessantly, they produce on the average (owing to their enormous number) a constant small preponderance of orientation in the direction of the field and proportional to it. This ingenious explanation is due to the French physicist P. Langevin. It can be checked in the following way.
If the observed weak magnetization is really the outcome of rival tendencies, namely, the magnetic field, which aims at combing all the molecules parallel, and the heat motion, which makes for random orientation, then it ought to be possible to increase the magnetization by weakening the heat motion, that is to say, by lowering the temperature, instead of reinforcing the field. That is confirmed by experiment, which gives the magnetization inversely proportional to the absolute temperature, in quantitative agreement with theory (Curie's law).

Modern equipment even enables us, by lowering the temperature, to reduce the heat motion to such insignificance that the orientating tendency of the magnetic field can assert itself, if not completely, at least sufficiently to produce a substantial fraction of 'complete magnetization'. In this case we no longer expect that double the field strength will double the magnetization, but that the latter will increase less and less with increasing field, approaching what is called 'saturation'. This expectation too is quantitatively confirmed by experiment.

Notice that this behaviour entirely depends on the large numbers of molecules which co-operate in producing the observable magnetization. Otherwise, the latter would not be constant at all but, by fluctuating quite irregularly from one second to the next, would bear witness to the vicissitudes of the contest between heat motion and field.
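Langevin's result is easy to state in modern notation; the formula below is standard textbook physics rather than anything quoted from the text. For N molecular dipoles of moment m in a field B at absolute temperature T, the mean magnetization along the field is

    \[M = Nm\left(\coth x - \frac{1}{x}\right), \qquad x = \frac{mB}{kT}.\]

For weak fields or high temperatures (x ≪ 1) the bracket reduces to x/3, giving M ≈ Nm²B/3kT: proportional to the field and inversely proportional to the absolute temperature, which are exactly the two experimental facts just described (Curie's law). For x ≫ 1 the bracket tends to 1 and M approaches the saturation value Nm.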
If you fill the lower part of a closed glass vessel with fog, consisting of minute droplets, you will find that the upper boundary of the fog gradually sinks, with a well-defined velocity, determined by the viscosity of the air and the size and the specific gravity of the droplets. But if you look at one of the droplets under the microscope you find that it does not permanently sink with constant velocity, but performs a very irregular movement, the so-called Brownian movement, which corresponds to a regular sinking only on the average.

Now these droplets are not atoms, but they are sufficiently small and light to be not entirely insusceptible to the impact of one single molecule of those which hammer their surface in perpetual impacts. They are thus knocked about and can only on the average follow the influence of gravity.

This example shows what funny and disorderly experience we should have if our senses were susceptible to the impact of a few molecules only. There are bacteria and other organisms so small that they are strongly affected by this phenomenon. Their movements are determined by the thermic whims of the surrounding medium; they have no choice. If they had some locomotion of their own they might nevertheless succeed in getting from one place to another -but with some difficulty, since the heat motion tosses them like a small boat in a rough sea.

A phenomenon very much akin to Brownian movement is that of diffusion. Imagine a vessel filled with a fluid, say water, with a small amount of some coloured substance dissolved in it, say potassium permanganate, not in uniform concentration, but rather as in Fig. 4, where the dots indicate the molecules of the dissolved substance (permanganate) and the concentration diminishes from left to right. If you leave this system alone a very slow process of 'diffusion' sets in, the permanganate spreading in the direction from left to right, that is, from the places of higher concentration towards the places of lower concentration, until it is equally distributed through the water.

The remarkable thing about this rather simple and apparently not particularly interesting process is that it is in no way due, as one might think, to any tendency or force driving the permanganate molecules away from the crowded region to the less crowded one, like the population of a country spreading to those parts where there is more elbow-room. Nothing of the sort happens with our permanganate molecules. Every one of them behaves quite independently of all the others, which it very seldom meets. Every one of them, whether in a crowded region or in an empty one, suffers the same fate of being continually knocked about by the impacts of the water molecules and thereby gradually moving on in an unpredictable direction -sometimes towards the higher, sometimes towards the lower, concentrations, sometimes obliquely. The kind of motion it performs has often been compared with that of a blindfolded person on a large surface imbued with a certain desire of 'walking', but without any preference for any particular direction, and so changing his line continuously.

That this random walk of the permanganate molecules, the same for all of them, should yet produce a regular flow towards the smaller concentration and ultimately make for uniformity of distribution, is at first sight perplexing -but only at first sight. If you contemplate in Fig. 4 thin slices of approximately constant concentration, the permanganate molecules which in a given moment are contained in a particular slice will, by their random walk, it is true, be carried with equal probability to the right or to the left. But precisely in consequence of this, a plane separating two neighbouring slices will be crossed by more molecules coming from the left than in the opposite direction, simply because to the left there are more molecules engaged in random walk than there are to the right. And as long as that is so the balance will show up as a regular flow from left to right, until a uniform distribution is reached.

When these considerations are translated into mathematical language the exact law of diffusion is reached in the form of a partial differential equation

    \[\frac{\partial \rho}{\partial t} = D\,\nabla^{2}\rho,\]

where ρ is the concentration and D the diffusion constant; I shall not trouble the reader by explaining it, though its meaning in ordinary language is again simple enough. The reason for mentioning the stern 'mathematically exact' law here is to emphasize that its physical exactitude must nevertheless be challenged in every particular application. Being based on pure chance, its validity is only approximate. If it is, as a rule, a very good approximation, that is only due to the enormous number of molecules that co-operate in the phenomenon. The smaller their number, the larger the quite haphazard deviations we must expect and they can be observed under favourable circumstances.
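The argument of the last paragraph is easy to act out numerically. Here is a minimal sketch (the particle count, step size and vessel length are arbitrary illustrative choices, not values from the text): every particle takes unbiased steps, yet the initially crowded left half empties towards uniformity for purely statistical reasons.

```python
import random

random.seed(1)
BOX, HALF = 100.0, 50.0
# Start with every particle in the left half of the vessel.
positions = [random.uniform(0.0, HALF) for _ in range(2000)]

def left_fraction():
    return sum(p < HALF for p in positions) / len(positions)

print(f"before mixing: {left_fraction():.2f} of the particles in the left half")
for _ in range(4000):
    for i, p in enumerate(positions):
        p += random.choice((-1.0, 1.0))       # unbiased step: no force, no preference
        positions[i] = min(max(p, 0.0), BOX)  # kept inside the vessel
print(f"after mixing : {left_fraction():.2f}")  # tends to 0.50: uniform distribution
```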
The last example we shall give is closely akin to the second one, but has a particular interest. A light body, suspended by a long thin fibre in equilibrium orientation, is often used by physicists to measure weak forces which deflect it from that position of equilibrium, electric, magnetic or gravitational forces being applied so as to twist it around the vertical axis. (The light body must, of course, be chosen appropriately for the particular purpose.) The continued effort to improve the accuracy of this very commonly used device of a 'torsional balance' has encountered a curious limit, most interesting in itself. In choosing lighter and lighter bodies and thinner and longer fibres -to make the balance susceptible to weaker and weaker forces -the limit was reached when the suspended body became noticeably susceptible to the impacts of the heat motion of the surrounding molecules and began to perform an incessant, irregular 'dance' about its equilibrium position, much like the trembling of the droplet in the second example. Though this behaviour sets no absolute limit to the accuracy of measurements obtained with the balance, it sets a practical one. The uncontrollable effect of the heat motion competes with the effect of the force to be measured and makes the single deflection observed insignificant. You have to multiply observations, in order to eliminate the effect of the Brownian movement of your instrument. This example is, I think, particularly illuminating in our present investigation. For our organs of sense, after all, are a kind of instrument. We can see how useless they would be if they became too sensitive.

So much for examples, for the present. I will merely add that there is not one law of physics or chemistry, of those that are relevant within an organism or in its interactions with its environment, that I might not choose as an example. The detailed explanation might be more complicated, but the salient point would always be the same and thus the description would become monotonous.

But I should like to add one very important quantitative statement concerning the degree of inaccuracy to be expected in any physical law, the so-called √n law. I will first illustrate it by a simple example and then generalize it. If I tell you that a certain gas under certain conditions of pressure and temperature has a certain density, and if I expressed this by saying that within a certain volume (of a size relevant for some experiment) there are under these conditions just n molecules of the gas, then you might be sure that if you could test my statement in a particular moment of time, you would find it inaccurate, the departure being of the order of √n. Hence if the number n = 100, you would find a departure of about 10, thus relative error = 10%. But if n = 1 million, you would be likely to find a departure of about 1,000, thus relative error = 0.1%.

Now, roughly speaking, this statistical law is quite general. The laws of physics and physical chemistry are inaccurate within a probable relative error of the order of 1/√n, where n is the number of molecules that co-operate to bring about that law -to produce its validity within such regions of space or time (or both) that matter, for some considerations or for some particular experiment.

You see from this again that an organism must have a comparatively gross structure in order to enjoy the benefit of fairly accurate laws, both for its internal life and for its interplay with the external world. For otherwise the number of co-operating particles would be too small, the 'law' too inaccurate. The particularly exigent demand is the square root. For though a million is a reasonably large number, an accuracy of just 1 in 1,000 is not overwhelmingly good, if a thing claims the dignity of being a 'Law of Nature'.
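The √n rule is simple to verify by simulation. In this sketch (the sub-volume fraction and trial count are arbitrary choices, not values from the text), each of N molecules independently lands in a sub-volume making up one-tenth of a vessel, and the scatter of the resulting count across repetitions is compared with its mean; the relative fluctuation falls off roughly as 1/√n.

```python
import random

random.seed(1)

def relative_error(n_expected, trials=100):
    # Each molecule lands in the sub-volume with probability 1/10, so with
    # N = 10 * n_expected molecules we expect n_expected of them inside.
    counts = [sum(random.random() < 0.1 for _ in range(10 * n_expected))
              for _ in range(trials)]
    mean = sum(counts) / trials
    spread = (sum((c - mean) ** 2 for c in counts) / trials) ** 0.5
    return spread / mean

for n in (100, 10_000):
    print(f"n = {n:6d}: relative fluctuation ~ {relative_error(n):.1%}")
# prints roughly 10% and 1%, in line with the 1/sqrt(n) rule
```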
The Hereditary Mechanism

Thus we have come to the conclusion that an organism and all the biologically relevant processes that it experiences must have an extremely 'many-atomic' structure and must be safeguarded against haphazard, 'single-atomic' events attaining too great importance. That, the 'naive physicist' tells us, is essential, so that the organism may, so to speak, have sufficiently accurate physical laws on which to draw for setting up its marvellously regular and well-ordered working.

How do these conclusions, reached, biologically speaking, a priori (that is, from the purely physical point of view), fit in with actual biological facts? At first sight one is inclined to think that the conclusions are little more than trivial. A biologist of, say, thirty years ago might have said that, although it was quite suitable for a popular lecturer to emphasize the importance, in the organism as elsewhere, of statistical physics, the point was, in fact, rather a familiar truism. For, naturally, not only the body of an adult individual of any higher species, but every single cell composing it contains a 'cosmical' number of single atoms of every kind. And every particular physiological process that we observe, either within the cell or in its interaction with the cell environment, appears -or appeared thirty years ago -to involve such enormous numbers of single atoms and single atomic processes that all the relevant laws of physics and physical chemistry would be safeguarded even under the very exacting demands of statistical physics in respect of large numbers; this demand was illustrated just now by the √n rule.

Today, we know that this opinion would have been a mistake. As we shall presently see, incredibly small groups of atoms, much too small to display exact statistical laws, do play a dominating role in the very orderly and lawful events within a living organism. They have control of the observable large-scale features which the organism acquires in the course of its development, they determine important characteristics of its functioning; and in all this very sharp and very strict biological laws are displayed.

I must begin with giving a brief summary of the situation in biology, more especially in genetics -in other words, I have to summarize the present state of knowledge in a subject of which I am not a master. This cannot be helped and I apologize, particularly to any biologist, for the dilettante character of my summary. On the other hand, I beg leave to put the prevailing ideas before you more or less dogmatically. A poor theoretical physicist could not be expected to produce anything like a competent survey of the experimental evidence, which consists of a large number of long and beautifully interwoven series of breeding experiments of truly unprecedented ingenuity on the one hand and of direct observations of the living cell, conducted with all the refinement of modern microscopy, on the other.

Let me use the word 'pattern' of an organism in the sense in which the biologist calls it 'the four-dimensional pattern', meaning not only the structure and functioning of that organism in the adult, or in any other particular stage, but the whole of its ontogenetic development from the fertilized egg cell to the stage of maturity, when the organism begins to reproduce itself. Now, this whole four-dimensional pattern is known to be determined by the structure of that one cell, the fertilized egg. Moreover, we know that it is essentially determined by the structure of only a small part of that cell, its nucleus. This nucleus, in the ordinary 'resting state' of the cell, usually appears as a network of chromatine, distributed over the cell.
But in the vitally important processes of cell division (mitosis and meiosis, see below) it is seen to consist of a set of particles, usually fibre-shaped or rod-like, called the chromosomes, which number 8 or 12 or, in man, 48. But I ought really to have written these illustrative numbers as 2 × 4, 2 × 6, ..., 2 × 24, ..., and I ought to have spoken of two sets, in order to use the expression in the customary strict meaning of the biologist. For though the single chromosomes are sometimes clearly distinguished and individualized by shape and size, the two sets are almost entirely alike. As we shall see in a moment, one set comes from the mother (egg cell), one from the father (fertilizing spermatozoon). It is these chromosomes, or probably only an axial skeleton fibre of what we actually see under the microscope as the chromosome, that contain in some kind of code-script the entire pattern of the individual's future development and of its functioning in the mature state. Every complete set of chromosomes contains the full code; so there are, as a rule, two copies of the latter in the fertilized egg cell, which forms the earliest stage of the future individual.

In calling the structure of the chromosome fibres a code-script we mean that the all-penetrating mind, once conceived by Laplace, to which every causal connection lay immediately open, could tell from their structure whether the egg would develop, under suitable conditions, into a black cock or into a speckled hen, into a fly or a maize plant, a rhododendron, a beetle, a mouse or a woman. To which we may add, that the appearances of the egg cells are very often remarkably similar; and even when they are not, as in the case of the comparatively gigantic eggs of birds and reptiles, the difference is not so much in the relevant structures as in the nutritive material which in these cases is added for obvious reasons. But the term code-script is, of course, too narrow. The chromosome structures are at the same time instrumental in bringing about the development they foreshadow. They are law-code and executive power -or, to use another simile, they are architect's plan and builder's craft -in one.

How do the chromosomes behave in ontogenesis? The growth of an organism is effected by consecutive cell divisions. Such a cell division is called mitosis. It is, in the life of a cell, not such a very frequent event as one might expect, considering the enormous number of cells of which our body is composed. In the beginning the growth is rapid. The egg divides into two 'daughter cells' which, at the next step, will produce a generation of four, then of 8, 16, 32, 64, ..., etc. The frequency of division will not remain exactly the same in all parts of the growing body, and that will break the regularity of these numbers. But from their rapid increase we infer by an easy computation that on the average as few as 50 or 60 successive divisions suffice to produce the number of cells in a grown man -or, say, ten times the number, taking into account the exchange of cells during lifetime. Thus, a body cell of mine is, on the average, only the 50th or 60th 'descendant' of the egg that was I.
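The 'easy computation' can be spelled out. Each division at most doubles the number of cells, so n successive divisions yield at most 2ⁿ of them; taking the adult body to contain of the order of 10¹⁴ to 10¹⁵ cells (a modern round figure, not one given in the text),

    \[2^{50} = (2^{10})^{5} \approx (10^{3})^{5} = 10^{15},\]

so some fifty doublings already suffice, and sixty give a thousand times more, amply covering the replacement of cells during a lifetime.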
How do the chromosomes behave on mitosis? They duplicate -both sets, both copies of the code, duplicate. The process has been intensively studied under the microscope and is of paramount interest, but much too involved to describe here in detail. The salient point is that each of the two 'daughter cells' gets a dowry of two further complete sets of chromosomes exactly similar to those of the parent cell. So all the body cells are exactly alike as regards their chromosome treasure. However little we understand the device we cannot but think that it must be in some way very relevant to the functioning of the organism, that every single cell, even a less important one, should be in possession of a complete (double) copy of the code-script. Some time ago we were told in the newspapers that in his African campaign General Montgomery made a point of having every single soldier of his army meticulously informed of all his designs. If that is true (as it conceivably might be, considering the high intelligence and reliability of his troops) it provides an excellent analogy to our case, in which the corresponding fact certainly is literally true. The most surprising fact is the doubleness of the chromosome set, maintained throughout the mitotic divisions.

That it is the outstanding feature of the genetic mechanism is most strikingly revealed by the one and only departure from the rule, which we have now to discuss. Very soon after the development of the individual has set in, a group of cells is reserved for producing at a later stage the so-called gametes, the sperm cells or egg cells, as the case may be, needed for the reproduction of the individual in maturity. 'Reserved' means that they do not serve other purposes in the meantime and suffer many fewer mitotic divisions. The exceptional or reductive division (called meiosis) is the one by which eventually, on maturity, the gametes are produced from these reserved cells, as a rule only a short time before syngamy is to take place. In meiosis the double chromosome set of the parent cell simply separates into two single sets, one of which goes to each of the two daughter cells, the gametes. In other words, the mitotic doubling of the number of chromosomes does not take place in meiosis, the number remains constant and thus every gamete receives only half -that is, only one complete copy of the code, not two, e.g. in man only 24, not 2 × 24 = 48.

Cells with only one chromosome set are called haploid (from the Greek ἁπλοῦς, single). Thus the gametes are haploid, the ordinary body cells diploid (from διπλοῦς, double). Individuals with three, four, ... or generally speaking with many chromosome sets in all their body cells occur occasionally; the latter are then called triploid, tetraploid, ..., polyploid. In the act of syngamy the male gamete (spermatozoon) and the female gamete (egg), both haploid cells, coalesce to form the fertilized egg cell, which is thus diploid. One of its chromosome sets comes from the mother, one from the father.

One other point needs rectification. Though not indispensable for our purpose it is of real interest, since it shows that actually a fairly complete code-script of the 'pattern' is contained in every single set of chromosomes. There are instances of meiosis not being followed shortly after by fertilization, the haploid cell (the 'gamete') undergoing meanwhile numerous mitotic cell divisions, which result in building up a complete haploid individual. This is the case in the male bee, the drone, which is produced parthenogenetically, that is, from non-fertilized and therefore haploid eggs of the queen. The drone has no father! All its body cells are haploid. If you please, you may call it a grossly exaggerated spermatozoon; and actually, as everybody knows, to function as such happens to be its one and only task in life. However, that is perhaps a ludicrous point of view. For the case is not quite unique. There are families of plants in which the haploid gamete, which is produced by meiosis and is called a spore in such cases, falls to the ground and, like a seed, develops into a true haploid plant comparable in size with the diploid. Fig. 5 is a rough sketch of a moss, well known in our forests.
The leafy lower part is the haploid plant, called the gametophyte, because at its upper end it develops sex organs and gametes, which by mutual fertilization produce in the ordinary way the diploid plant, the bare stem with the capsule at the top. This is called the sporophyte, because it produces, by meiosis, the spores in the capsule at the top. When the capsule opens, the spores fall to the ground and develop into a leafy stem, etc. The course of events is appropriately called alternation of generations. You may, if you choose, look upon the ordinary case, man and the animals, in the same way. But the 'gametophyte' is then as a rule a very short-lived, unicellular generation, spermatozoon or egg cell as the case may be. Our body corresponds to the sporophyte. Our 'spores' are the reserved cells from which, by meiosis, the unicellular generation springs.

The important, the really fateful event in the process of reproduction of the individual is not fertilization but meiosis. One set of chromosomes is from the father, one from the mother. Neither chance nor destiny can interfere with that. Every man owes just half of his inheritance to his mother, half of it to his father. That one or the other strain seems often to prevail is due to other reasons which we shall come to later. (Sex itself is, of course, the simplest instance of such prevalence.)

But when you trace the origin of your inheritance back to your grandparents, the case is different. Let me fix attention on my paternal set of chromosomes, in particular on one of them, say No. 5. It is a faithful replica either of the No. 5 my father received from his father or of the No. 5 he had received from his mother. The issue was decided by a 50:50 chance in the meiosis taking place in my father's body in November 1886 and producing the spermatozoon which a few days later was to be effective in begetting me. Exactly the same story could be repeated about chromosomes Nos. 1, 2, 3, ..., 24 of my paternal set, and mutatis mutandis about every one of my maternal chromosomes. Moreover, all the 48 issues are entirely independent. Even if it were known that my paternal chromosome No. 5 came from my grandfather Josef Schrödinger, the No. 7 still stands an equal chance of being either also from him, or from his wife Marie, née Bogner.

But pure chance has been given even a wider range in mixing the grandparental inheritance in the offspring than would appear from the preceding description, in which it has been tacitly assumed, or even explicitly stated, that a particular chromosome as a whole was either from the grandfather or from the grandmother; in other words that the single chromosomes are passed on undivided. In actual fact they are not, or not always. Before being separated in the reductive division, say the one in the father's body, any two 'homologous' chromosomes come into close contact with each other, during which they sometimes exchange entire portions in the way illustrated in Fig. 6. By this process, called 'crossing-over', two properties situated in the respective parts of that chromosome will be separated in the grandchild, who will follow the grandfather in one of them, the grandmother in the other one. This process of crossing-over, being neither very rare nor very frequent, has provided us with invaluable information regarding the location of properties in the chromosomes. For a full account we should have to draw on conceptions not introduced before the next chapter
(e.g. heterozygosy, dominance, etc.); but as that would take us beyond the range of this little book, let me indicate the salient point right away. If there were no crossing-over, two properties for which the same chromosome is responsible would always be passed on together, no descendant receiving one of them without receiving the other as well; but two properties, due to different chromosomes, would either stand a 50:50 chance of being separated or they would invariably be separated -the latter when they were situated in homologous chromosomes of the same ancestor, which could never go together. These rules and chances are interfered with by crossing-over. Hence the probability of this event can be ascertained by registering carefully the percentage composition of the offspring in extended breeding experiments, suitably laid out for the purpose. In analysing the statistics, one accepts the suggestive working hypothesis that the 'linkage' between two properties situated in the same chromosome is the less frequently broken by crossing-over, the nearer they lie to each other. For then there is less chance of the point of exchange lying between them, whereas properties located near the opposite ends of the chromosomes are separated by every crossing-over. (Much the same applies to the recombination of properties located in homologous chromosomes of the same ancestor.) In this way one may expect to get from the 'statistics of linkage' a sort of 'map of properties' within every chromosome.

These anticipations have been fully confirmed. In the cases to which tests have been thoroughly applied (mainly, but not only, Drosophila) the tested properties actually divide into as many separate groups, with no linkage from group to group, as there are different chromosomes (four in Drosophila). Within every group a linear map of properties can be drawn up which accounts quantitatively for the degree of linkage between any two of that group, so that there is little doubt that they actually are located, and located along a line, as the rod-like shape of the chromosome suggests.
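The logic of turning linkage statistics into a linear map can be shown with a toy computation. The recombination percentages below are invented for illustration (they are not data from any real experiment): the pair of properties that is separated most often must lie at the two ends, which fixes the order, and the map distances should then add up.

```python
# Invented recombination percentages for three linked properties A, B, C.
recomb = {
    frozenset("AB"): 4.0,
    frozenset("BC"): 9.0,
    frozenset("AC"): 13.0,
}

outer = max(recomb, key=recomb.get)   # most often separated -> the two ends
middle = (set("ABC") - outer).pop()   # the remaining locus sits between them
ends = sorted(outer)
print("inferred order:", ends[0], "-", middle, "-", ends[1])

ab = recomb[frozenset((ends[0], middle))]
bc = recomb[frozenset((middle, ends[1]))]
print("additivity check:", ab, "+", bc, "=", ab + bc, "vs", recomb[outer])
# Real data satisfy this additivity only approximately, but well enough
# to draw up a consistent linear 'map of properties'.
```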
Of course, the scheme of the hereditary mechanism, as drawn up here, is still rather empty and colourless, even slightly naive. For we have not said what exactly we understand by a property. It seems neither adequate nor possible to dissect into discrete 'properties' the pattern of an organism which is essentially a unity, a 'whole'. Now, what we actually state in any particular case is, that a pair of ancestors were different in a certain well-defined respect (say, one had blue eyes, the other brown), and that the offspring follows in this respect either one or the other. What we locate in the chromosome is the seat of this difference. (We call it, in technical language, a 'locus', or, if we think of the hypothetical material structure underlying it, a 'gene'.) Difference of property, to my view, is really the fundamental concept rather than property itself, notwithstanding the apparent linguistic and logical contradiction of this statement. The differences of properties actually are discrete, as will emerge in the next chapter when we have to speak of mutations, and the dry scheme hitherto presented will, as I hope, acquire both life and colour.

We have just introduced the term gene for the hypothetical material carrier of a definite hereditary feature. We must now stress two points which will be highly relevant to our investigation. The first is the size -or, better, the maximum size -of such a carrier; in other words, to how small a volume can we trace the location? The second point will be the permanence of a gene, to be inferred from the durability of the hereditary pattern.

As regards the size, there are two entirely independent estimates, one resting on genetic evidence (breeding experiments), the other on cytological evidence (direct microscopic inspection). The first is, in principle, simple enough. After having, in the way described above, located in the chromosome a considerable number of different (large-scale) features (say of the Drosophila fly) within a particular one of its chromosomes, to get the required estimate we need only divide the measured length of that chromosome by the number of features and multiply by the cross-section. For, of course, we count as different only such features as are occasionally separated by crossing-over, so that they cannot be due to the same (microscopic or molecular) structure. On the other hand, it is clear that our estimate can only give a maximum size, because the number of features isolated in this genetic analysis is continually increasing as work goes on.

The other estimate, though based on microscopic inspection, is really far less direct. Certain cells of Drosophila (namely, those of its salivary glands) are, for some reason, enormously enlarged, and so are their chromosomes. In them you distinguish a crowded pattern of transverse dark bands across the fibre. C. D. Darlington has remarked that the number of these bands (2,000 in the case he uses) is, though considerably larger, yet roughly of the same order of magnitude as the number of genes located in that chromosome by breeding experiments. He inclines to regard these bands as indicating the actual genes (or separations of genes). Dividing the length of the chromosome, measured in a normal-sized cell, by their number (2,000) he finds the volume of a gene equal to a cube of edge 300 Å. Considering the roughness of the estimates, we may regard this to be also the size obtained by the first method.

A full discussion of the bearing of statistical physics on all the facts I am recalling -or perhaps, I ought to say, of the bearing of these facts on the use of statistical physics in the living cell -will follow later. But let me draw attention at this point to the fact that 300 Å is only about 100 or 150 atomic distances in a liquid or in a solid, so that a gene contains certainly not more than about a million or a few million atoms.
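The arithmetic behind that figure, assuming an atomic distance of 2 to 3 Å as the text's '100 or 150 atomic distances' implies:

    \[\left(\frac{300\ \text{Å}}{3\ \text{Å}}\right)^{3} = 100^{3} = 10^{6}, \qquad \left(\frac{300\ \text{Å}}{2\ \text{Å}}\right)^{3} = 150^{3} \approx 3 \times 10^{6}.\]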
That number is much too small (from the √n point of view) to entail an orderly and lawful behaviour according to statistical physics -and that means according to physics. It is too small, even if all these atoms played the same role, as they do in a gas or in a drop of liquid. And the gene is most certainly not just a homogeneous drop of liquid. It is probably a large protein molecule, in which every atom, every radical, every heterocyclic ring plays an individual role, more or less different from that played by any of the other similar atoms, radicals, or rings. This, at any rate, is the opinion of leading geneticists such as Haldane and Darlington, and we shall soon have to refer to genetic experiments which come very near to proving it.

Let us now turn to the second highly relevant question: What degree of permanence do we encounter in hereditary properties and what must we therefore attribute to the material structures which carry them? The answer to this can really be given without any special investigation. The mere fact that we speak of hereditary properties indicates that we recognize the permanence to be almost absolute. For we must not forget that what is passed on by the parent to the child is not just this or that peculiarity, a hooked nose, short fingers, a tendency to rheumatism, haemophilia, dichromasy, etc. Such features we may conveniently select for studying the laws of heredity. But actually it is the whole (four-dimensional) pattern of the 'phenotype', the visible and manifest nature of the individual, which is reproduced without appreciable change for generations, permanent within centuries -though not within tens of thousands of years -and borne at each transmission by the material structure of the nuclei of the two cells which unite to form the fertilized egg cell. That is a marvel -than which only one is greater; one that, if intimately connected with it, yet lies on a different plane. I mean the fact that we, whose total being is entirely based on a marvellous interplay of this very kind, yet possess the power of acquiring considerable knowledge about it. I think it possible that this knowledge may advance to little short of a complete understanding -of the first marvel. The second may well be beyond human understanding.

The general facts which we have just put forward in evidence of the durability claimed for the gene structure are perhaps too familiar to us to be striking or to be regarded as convincing. Here, for once, the common saying that exceptions prove the rule is actually true. If there were no exceptions to the likeness between children and parents, we should have been deprived not only of all those beautiful experiments which have revealed to us the detailed mechanism of heredity, but also of that grand, million-fold experiment of Nature, which forges the species by natural selection and survival of the fittest.

Let me take this last important subject as the starting-point for presenting the relevant facts -again with an apology and a reminder that I am not a biologist. We know definitely, today, that Darwin was mistaken in regarding the small, continuous, accidental variations, that are bound to occur even in the most homogeneous population, as the material on which natural selection works. For it has been proved that they are not inherited. The fact is important enough to be illustrated briefly.

If you take a crop of pure-strain barley, and measure, ear by ear, the length of its awns and plot the result of your statistics, you will get a bell-shaped curve as shown in Fig. 7, where the number of ears with a definite length of awn is plotted against the length. In other words: a definite medium length prevails, and deviations in either direction occur with certain frequencies. Now pick out a group of ears (as indicated by blackening) with awns noticeably beyond the average, but sufficient in number to be sown in a field by themselves and give a new crop. In making the same statistics for this, Darwin would have expected to find the corresponding curve shifted to the right. In other words, he would have expected to produce by selection an increase of the average length of the awns. That is not the case, if a truly pure-bred strain of barley has been used. The new statistical curve, obtained from the selected crop, is identical with the first one, and the same would be the case if ears with particularly short awns had been selected for seed. Selection has no effect -because the small, continuous variations are not inherited. They are obviously not based on the structure of the hereditary substance, they are accidental.
But about forty years ago the Dutchman de Vries discovered that in the offspring even of thoroughly pure-bred stocks, a very small number of individuals, say two or three in tens of thousands, turn up with small but 'jump-like' changes, the expression 'jump-like' not meaning that the change is so very considerable, but that there is a discontinuity inasmuch as there are no intermediate forms between the unchanged and the few changed. De Vries called that a mutation. The significant fact is the discontinuity. It reminds a physicist of quantum theory -no intermediate energies occurring between two neighbouring energy levels. He would be inclined to call de Vries's mutation theory, figuratively, the quantum theory of biology. We shall see later that this is much more than figurative. The mutations are actually due to quantum jumps in the gene molecule. But quantum theory was but two years old when de Vries first published his discovery, in 1902. Small wonder that it took another generation to discover the intimate connection!

Mutations are inherited as perfectly as the original, unchanged characters were. To give an example, in the first crop of barley considered above a few ears might turn up with awns considerably outside the range of variability shown in Fig. 7, say with no awns at all. They might represent a de Vries mutation and would then breed perfectly true, that is to say, all their descendants would be equally awnless. Hence a mutation is definitely a change in the hereditary treasure and has to be accounted for by some change in the hereditary substance. Actually most of the important breeding experiments, which have revealed to us the mechanism of heredity, consisted in a careful analysis of the offspring obtained by crossing, according to a preconceived plan, mutated (or, in many cases, multiply mutated) with non-mutated or with differently mutated individuals. On the other hand, by virtue of their breeding true, mutations are a suitable material on which natural selection may work and produce the species as described by Darwin, by eliminating the unfit and letting the fittest survive. In Darwin's theory, you just have to substitute 'mutations' for his 'slight accidental variations' (just as quantum theory substitutes 'quantum jump' for 'continuous transfer of energy'). In all other respects little change was necessary in Darwin's theory, that is, if I am correctly interpreting the view held by the majority of biologists.

We must now review some other fundamental facts and notions about mutations, again in a slightly dogmatic manner, without showing directly how they spring, one by one, from the experimental evidence. We should expect a definite observed mutation to be caused by a change in a definite region in one of the chromosomes. And so it is. It is important to state that we know definitely that it is a change in one chromosome only, but not in the corresponding 'locus' of the homologous chromosome. Fig. 8 indicates this schematically, the cross denoting the mutated locus. The fact that only one chromosome is affected is revealed when the mutated individual (often called 'mutant') is crossed with a non-mutated one. For exactly half of the offspring exhibit the mutant character and half the normal one. That is what is to be expected as a consequence of the separation of the two chromosomes on meiosis in the mutant -as shown, very schematically, in Fig. 9.
This is a 'pedigree', representing every individual (of three consecutive generations) simply by the pair of chromosomes in question. Please realize that if the mutant had both its chromosomes affected, all the children would receive the same (mixed) inheritance, different from that of either parent. But experimenting in this domain is not as simple as would appear from what has just been said. It is complicated by the second important fact, viz. that mutations are very often latent. What does that mean?

In the mutant the two copies of the code-script are no longer identical; they present two different 'readings' or 'versions', at any rate in that one place. Perhaps it is well to point out at once that, while it might be tempting, it would nevertheless be entirely wrong to regard the original version as 'orthodox', and the mutant version as 'heretic'. We have to regard them, in principle, as being of equal right -for the normal characters have also arisen from mutations. What actually happens is that the 'pattern' of the individual, as a general rule, follows either the one or the other version, which may be the normal or the mutant one. The version which is followed is called dominant, the other, recessive; in other words, the mutation is called dominant or recessive, according to whether it is immediately effective in changing the pattern or not.

Recessive mutations are even more frequent than dominant ones and are very important, though at first they do not show up at all. To affect the pattern, they have to be present in both chromosomes (see Fig. 10). Such individuals can be produced when two equal recessive mutants happen to be crossed with each other or when a mutant is crossed with itself; this is possible in hermaphroditic plants and even happens spontaneously. An easy reflection shows that in these cases about one-quarter of the offspring will be of this type and thus visibly exhibit the mutated pattern.

I think it will make for clarity to explain here a few technical terms. For what I called 'version of the code-script' -be it the original one or a mutant one -the term 'allele' has been adopted. When the versions are different, as indicated in Fig. 8, the individual is called heterozygous, with respect to that locus. When they are equal, as in the non-mutated individual or in the case of Fig. 10, they are called homozygous. Thus a recessive allele influences the pattern only when homozygous, whereas a dominant allele produces the same pattern, whether homozygous or only heterozygous.

Colour is very often dominant over lack of colour (or white). Thus, for example, a pea will flower white only when it has the 'recessive allele responsible for white' in both chromosomes in question, when it is 'homozygous for white'; it will then breed true, and all its descendants will be white. But one 'red allele' (the other being white; 'heterozygous') will make it flower red, and so will two red alleles ('homozygous'). The difference of the latter two cases will only show up in the offspring, when the heterozygous red will produce some white descendants, and the homozygous red will breed true. The fact that two individuals may be exactly alike in their outward appearance, yet differ in their inheritance, is so important that an exact differentiation is desirable. The geneticist says they have the same phenotype, but different genotype.
The contents of the preceding paragraphs could thus be summarized in the brief, but highly technical statement: A recessive allele influences the phenotype only when the genotype is homozygous. We shall use these technical expressions occasionally, but shall recall their meaning to the reader where necessary.

Recessive mutations, as long as they are only heterozygous, are of course no working-ground for natural selection. If they are detrimental, as mutations very often are, they will nevertheless not be eliminated, because they are latent. Hence quite a host of unfavourable mutations may accumulate and do no immediate damage. But they are, of course, transmitted to half of the offspring, and that has an important application to man, cattle, poultry or any other species, the good physical qualities of which are of immediate concern to us. In Fig. 9 it is assumed that a male individual (say, for concreteness, myself) carries such a recessive detrimental mutation heterozygously, so that it does not show up. Assume that my wife is free of it. Then half of our children (second line) will also carry it -again heterozygously. If all of them are again mated with non-mutated partners (omitted from the diagram, to avoid confusion), a quarter of our grandchildren, on the average, will be affected in the same way.

No danger of the evil ever becoming manifest arises, unless equally affected individuals are crossed with each other, when, as an easy reflection shows, one-quarter of their children, being homozygous, would manifest the damage. Next to self-fertilization (only possible in hermaphrodite plants) the greatest danger would be a marriage between a son and a daughter of mine. Each of them standing an even chance of being latently affected or not, one-quarter of these incestuous unions would be dangerous inasmuch as one-quarter of their children would manifest the damage. The danger factor for an incestuously bred child is thus 1:16. In the same way the danger factor works out to be 1:64 for the offspring of a union between two ('clean-bred') grandchildren of mine who are first cousins. These do not seem to be overwhelming odds, and actually the second case is usually tolerated. But do not forget that we have analysed the consequences of only one possible latent injury in one partner of the ancestral couple ('me and my wife'). Actually both of them are quite likely to harbour more than one latent deficiency of this kind. If you know that you yourself harbour a definite one, you have to reckon with 1 out of 8 of your first cousins sharing it!
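The danger factors quoted above follow from independent 1-in-2 inheritances and the final 1-in-4 Mendelian ratio. For a marriage between my son and my daughter (I being the one assumed carrier):

    \[\underbrace{\tfrac{1}{2}}_{\text{son carries}} \times \underbrace{\tfrac{1}{2}}_{\text{daughter carries}} \times \underbrace{\tfrac{1}{4}}_{\text{child homozygous}} = \tfrac{1}{16};\]

for first cousins each grandchild carries the allele with probability 1/4, giving

    \[\tfrac{1}{4} \times \tfrac{1}{4} \times \tfrac{1}{4} = \tfrac{1}{64}.\]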
Experiments with plants and animals seem to indicate that in addition to comparatively rare deficiencies of a serious kind, there seem to be a host of minor ones whose chances combine to deteriorate the offspring of close-breeding as a whole. Since we are no longer inclined to eliminate failures in the harsh way the Lacedemonians used to adopt in the Taygetos mountain, we have to take a particularly serious view about these things in the case of man, where natural selection of the fittest is largely retrenched, nay, turned to the contrary. The anti-selective effect of the modern mass slaughter of the healthy youth of all nations is hardly outweighed by the consideration that in more primitive conditions war may have had a positive value in letting the fittest survive.

The fact that the recessive allele, when heterozygous, is completely overpowered by the dominant and produces no visible effects at all, is amazing. It ought at least to be mentioned that there are exceptions to this behaviour. When a homozygous white snapdragon is crossed with an equally homozygous crimson snapdragon, all the immediate descendants are intermediate in colour, i.e. they are pink (not crimson, as might be expected). A much more important case of two alleles exhibiting their influence simultaneously occurs in blood-groups -but we cannot enter into that here. I should not be astonished if at long last recessivity should turn out to be capable of degrees and to depend on the sensitivity of the tests we apply to examine the 'phenotype'.

This is perhaps the place for a word on the early history of genetics. The backbone of the theory, the law of inheritance, to successive generations, of properties in which the parents differ, and more especially the important distinction recessive-dominant, are due to the now world-famous Augustinian Abbot Gregor Mendel (1822-84). Mendel knew nothing about mutations and chromosomes. In his cloister gardens in Brunn (Brno) he made experiments on the garden pea, of which he reared different varieties, crossing them and watching their offspring in the 1st, 2nd, 3rd, ..., generation. You might say, he experimented with mutants which he found ready-made in nature. The results he published as early as 1866 in the Proceedings of the Naturforschender Verein in Brunn. Nobody seems to have been particularly interested in the abbot's hobby, and nobody, certainly, had the faintest idea that his discovery would in the twentieth century become the lodestar of an entirely new branch of science, easily the most interesting of our days. His paper was forgotten and was only rediscovered in 1900, simultaneously and independently, by Correns (Berlin), de Vries (Amsterdam) and Tschermak (Vienna).

So far we have tended to fix our attention on harmful mutations, which may be the more numerous; but it must be definitely stated that we do encounter advantageous mutations as well. If a spontaneous mutation is a small step in the development of the species, we get the impression that some change is 'tried out' in rather a haphazard fashion at the risk of its being injurious, in which case it is automatically eliminated. This brings out one very important point. In order to be suitable material for the work of natural selection, mutations must be rare events, as they actually are. If they were so frequent that there was a considerable chance of, say, a dozen of different mutations occurring in the same individual, the injurious ones would, as a rule, predominate over the advantageous ones and the species, instead of being improved by selection, would remain unimproved, or would perish. The comparative conservatism which results from the high degree of permanence of the genes is essential. An analogy might be sought in the working of a large manufacturing plant in a factory. For developing better methods, innovations, even if as yet unproved, must be tried out. But in order to ascertain whether the innovations improve or decrease the output, it is essential that they should be introduced one at a time, while all the other parts of the mechanism are kept constant.

We now have to review a most ingenious series of genetical research work, which will prove to be the most relevant feature of our analysis. The percentage of mutations in the offspring, the so-called mutation rate, can be increased to a high multiple of the small natural mutation rate by irradiating the parents with X-rays or γ-rays.
The mutations produced in this way differ in no way (except by being more numerous) from those occurring spontaneously, and one has the impression that every 'natural' mutation can also be induced by X-rays. In Drosophila many special mutations recur spontaneously again and again in the vast cultures; they have been located in the chromosome, as described on pp. 26-9, and have been given special names. There have been found even what are called 'multiple alleles', that is to say, two or more different 'versions' and 'readings' -in addition to the normal, non-mutated one -of the same place in the chromosome code; that means not only two, but three or more alternatives in that particular 'locus', any two of which are to each other in the relation 'dominant-recessive' when they occur simultaneously in their corresponding loci of the two homologous chromosomes.

The experiments on X-ray-produced mutations give the impression that every particular 'transition', say from the normal individual to a particular mutant, or conversely, has its individual 'X-ray coefficient', indicating the percentage of the offspring which turns out to have mutated in that particular way, when a unit dosage of X-ray has been applied to the parents, before the offspring was engendered. Furthermore, the laws governing the induced mutation rate are extremely simple and extremely illuminating. I follow here the report of N. W. Timofeeff, in Biological Reviews, vol. IX, 1934. To a considerable extent it refers to that author's own beautiful work. The first law is:

(1) The increase is exactly proportional to the dosage of rays, so that one can actually speak (as I did) of a coefficient of increase.

We are so used to simple proportionality that we are liable to underrate the far-reaching consequences of this simple law. To grasp them, we may remember that the price of a commodity, for example, is not always proportional to its amount. In ordinary times a shopkeeper may be so much impressed by your having bought six oranges from him, that, on your deciding to take after all a whole dozen, he may give it to you for less than double the price of the six. In times of scarcity the opposite may happen. In the present case, we conclude that the first half-dosage of radiation, while causing, say, one out of a thousand descendants to mutate, has not influenced the rest at all, either in the way of predisposing them for, or of immunizing them against, mutation. For otherwise the second half-dosage would not cause again just one out of a thousand to mutate. Mutation is thus not an accumulated effect, brought about by consecutive small portions of radiation reinforcing each other. It must consist in some single event occurring in one chromosome during irradiation. What kind of event? This is answered by the second law, viz.

(2) If you vary the quality of the rays (wave-length) within wide limits, from soft X-rays to fairly hard γ-rays, the coefficient remains constant, provided you give the same dosage in so-called r-units, that is to say, provided you measure the dosage by the total amount of ionization produced in a suitably chosen standard substance during the time and at the place where the parents are exposed to the rays.

As standard substance one chooses air not only for convenience, but also for the reason that organic tissues are composed of elements of the same average atomic weight as air.
A lower limit for the amount of ionizations or allied processes (excitations) in the tissue is obtained simply by multiplying the number of ionizations in air by the ratio of the densities. It is thus fairly obvious, and is confirmed by a more critical investigation, that the single event causing a mutation is just an ionization (or similar process) occurring within some 'critical' volume of the germ cell. What is the size of this critical volume? It can be estimated from the observed mutation rate by a consideration of this kind: if a dosage of 50,000 ions per cm³ produces a chance of only 1:1000 for any particular gamete (that finds itself in the irradiated district) to mutate in that particular way, we conclude that the critical volume, the 'target' which has to be 'hit' by an ionization for that mutation to occur, is only 1/1000 of 1/50,000 of a cm³, that is to say, one fifty-millionth of a cm³. The numbers are not the right ones, but are used only by way of illustration. In the actual estimate we follow M. Delbrück, in a paper by Delbrück, N. W. Timofeeff and K. G. Zimmer, which will also be the principal source of the theory to be expounded in the following two chapters. He arrives there at a size of only about ten average atomic distances cubed, containing thus only about 10³ = a thousand atoms. The simplest interpretation of this result is that there is a fair chance of producing that mutation when an ionization (or excitation) occurs not more than about '10 atoms away' from some particular spot in the chromosome. We shall discuss this in more detail presently.
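The estimate runs as follows in numbers; both figures in the first part are the deliberately illustrative ones used above, not the real experimental values.

```python
ions_per_cm3    = 50_000     # illustrative dose: ionizations per cm^3
mutation_chance = 1 / 1_000  # illustrative chance for a given gamete to mutate

# If a single hit inside the critical volume causes the mutation, the
# expected number of hits in that volume must equal the mutation chance:
critical_volume = mutation_chance / ions_per_cm3
print(f"critical volume: {critical_volume:.0e} cm^3")  # 2e-08 = 1/50,000,000 cm^3

# Delbrueck's actual estimate: a cube of about ten atomic distances on a
# side, taking an atomic distance of ~3 Angstrom (3e-8 cm):
edge = 10 * 3e-8
print(f"gene-sized cube: {edge**3:.1e} cm^3, about {(edge / 3e-8)**3:.0f} atoms")
```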
Fixing our attention on the portraits of a member of the family in the sixteenth century and of his descendant, living in the nineteenth, we may safely assume that the material gene structure, responsible for the abnormal feature, has been carried on from generation to generation through the centuries, faithfully reproduced at every one of the not very numerous cell divisions that lie between. Moreover, the number of atoms involved in the responsible gene structure is likely to be of the same order of magnitude as in the cases tested by X-rays. The gene has been kept at a temperature around 98°F during all that time. How are we to understand that it has remained unperturbed by the disordering tendency of the heat motion for centuries?

A physicist at the end of the last century would have been at a loss to answer this question, if he was prepared to draw only on those laws of Nature which he could explain and which he really understood. Perhaps, indeed, after a short reflection on the statistical situation he would have answered (correctly, as we shall see): These material structures can only be molecules. Of the existence, and sometimes very high stability, of these associations of atoms, chemistry had already acquired a widespread knowledge at the time. But the knowledge was purely empirical. The nature of a molecule was not understood -the strong mutual bond of the atoms which keeps a molecule in shape was a complete conundrum to everybody. Actually, the answer proves to be correct. But it is of limited value as long as the enigmatic biological stability is traced back only to an equally enigmatic chemical stability. The evidence that two features, similar in appearance, are based on the same principle, is always precarious as long as the principle itself is unknown.

In this case it is supplied by quantum theory. In the light of present knowledge, the mechanism of heredity is closely related to, nay, founded on, the very basis of quantum theory. This theory was discovered by Max Planck in 1900. Modern genetics can be dated from the rediscovery of Mendel's paper by de Vries, Correns and Tschermak (1900) and from de Vries's paper on mutations (1901-3). Thus the births of the two great theories nearly coincide, and it is small wonder that both of them had to reach a certain maturity before the connection could emerge. On the side of quantum theory it took more than a quarter of a century till in 1926-7 the quantum theory of the chemical bond was outlined in its general principles by W. Heitler and F. London. The Heitler-London theory involves the most subtle and intricate conceptions of the latest development of quantum theory (called 'quantum mechanics' or 'wave mechanics'). A presentation without the use of calculus is well-nigh impossible or would at least require another little volume like this. But fortunately, now that all the work has been done and has served to clarify our thinking, it seems to be possible to point out in a more direct manner the connection between 'quantum jumps' and mutations, to pick out at the moment the most conspicuous item. That is what we attempt here.

The great revelation of quantum theory was that features of discreteness were discovered in the Book of Nature, in a context in which anything other than continuity seemed to be absurd according to the views held until then. The first case of this kind concerned energy. A body on the large scale changes its energy continuously.
A pendulum, for instance, that is set swinging is gradually slowed down by the resistance of the air. Strangely enough, it proves necessary to admit that a system of the order of the atomic scale behaves differently. On grounds upon which we cannot enter here, we have to assume that a small system can by its very nature possess only certain discrete amounts of energy, called its peculiar energy levels. The transition from one state to another is a rather mysterious event, which is usually called a quantum jump.

But energy is not the only characteristic of a system. Take again our pendulum, but think of one that can perform different kinds of movement: a heavy ball suspended by a string from the ceiling can be made to swing in a north-south or east-west or any other direction, or in a circle or in an ellipse. By gently blowing the ball with a bellows, it can be made to pass continuously from one state of motion to any other. For small-scale systems most of these or similar characteristics -we cannot enter into details -change discontinuously. They are 'quantized', just as the energy is.

The result is that a number of atomic nuclei, including their bodyguards of electrons, when they find themselves close to each other, forming 'a system', are unable by their very nature to adopt any arbitrary configuration we might think of. Their very nature leaves them only a very numerous but discrete series of 'states' to choose from. We usually call them levels or energy levels, because the energy is a very relevant part of the characteristic. But it must be understood that the complete description includes much more than just the energy. It is virtually correct to think of a state as meaning a definite configuration of all the corpuscles.

The transition from one of these configurations to another is a quantum jump. If the second one has the greater energy ('is a higher level'), the system must be supplied from outside with at least the difference of the two energies to make the transition possible. To a lower level it can change spontaneously, spending the surplus of energy in radiation.

Among the discrete set of states of a given selection of atoms there need not necessarily, but there may, be a lowest level, implying a close approach of the nuclei to each other. Atoms in such a state form a molecule. The point to stress here is, that the molecule will of necessity have a certain stability; the configuration cannot change, unless at least the energy difference, necessary to 'lift' it to the next higher level, is supplied from outside. Hence this level difference, which is a well-defined quantity, determines quantitatively the degree of stability of the molecule. It will be observed how intimately this fact is linked with the very basis of quantum theory, viz. with the discreteness of the level scheme.

I must beg the reader to take it for granted that this order of ideas has been thoroughly checked by chemical facts; and that it has proved successful in explaining the basic fact of chemical valency and many details about the structure of molecules, their binding-energies, their stabilities at different temperatures, and so on. I am speaking of the Heitler-London theory, which, as I said, cannot be examined in detail here. We must content ourselves with examining the point which is of paramount interest for our biological question, namely, the stability of a molecule at different temperatures. Take our system of atoms at first to be actually in its state of lowest energy. The physicist would call it a molecule at the absolute zero of temperature.
To lift it to the next higher state or level a definite supply of energy is required. The simplest way of trying to supply it is to 'heat up' your molecule. You bring it into an environment of higher temperature ('heat bath'), thus allowing other systems (atoms, molecules) to impinge upon it. Considering the entire irregularity of heat motion, there is no sharp temperature limit at which the 'lift' will be brought about with certainty and immediately. Rather, at any temperature (different from absolute zero) there is a certain smaller or greater chance for the lift to occur, the chance increasing of course with the temperature of the heat bath. The best way to express this chance is to indicate the average time you will have to wait until the lift takes place, the 'time of expectation'.

From an investigation due to M. Polanyi and E. Wigner, the 'time of expectation' largely depends on the ratio of two energies, one being just the energy difference itself that is required to effect the lift (let us write W for it), the other one characterizing the intensity of the heat motion at the temperature in question (let us write T for the absolute temperature and kT for the characteristic energy). It stands to reason that the chance for effecting the lift is smaller, and hence that the time of expectation is longer, the higher the lift itself compared with the average heat energy, that is to say, the greater the ratio W:kT. What is amazing is how enormously the time of expectation depends on comparatively small changes of the ratio W:kT. To give an example (following Delbruck): for W = 30 times kT the time of expectation might be as short as 1/10 s., but would rise to 16 months when W is 50 times kT, and to 30,000 years when W is 60 times kT!

It might be as well to point out in mathematical language -for those readers to whom it appeals -the reason for this enormous sensitivity to changes in the level step or temperature, and to add a few physical remarks of a similar kind. The reason is that the time of expectation, call it t, depends on the ratio W/kT by an exponential function, thus

\[t=\tau e^{W/kT}\]

where τ is a certain small constant of the order of 10⁻¹³ or 10⁻¹⁴ s. Now, this particular exponential function is not an accidental feature. It recurs again and again in the statistical theory of heat, forming, as it were, its backbone. It is a measure of the improbability of an energy amount as large as W gathering accidentally in some particular part of the system, and it is this improbability which increases so enormously when a considerable multiple of the 'average energy' kT is required. Actually a W = 30kT (see the example quoted above) is already extremely rare. That it does not yet lead to an enormously long time of expectation (only 1/10 s. in our example) is, of course, due to the smallness of the factor τ. This factor has a physical meaning. It is of the order of the period of the vibrations which take place in the system all the time. You could, very broadly, describe this factor as meaning that the chance of accumulating the required amount W, though very small, recurs again and again 'at every vibration', that is to say, about 10¹³ or 10¹⁴ times during every second.
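The quoted lifetimes are easy to check. A minimal sketch of the Polanyi-Wigner estimate, assuming τ = 10⁻¹⁴ s (the text pins τ down only to an order of magnitude, so the outputs reproduce the quoted 1/10 s, 16 months and 30,000 years only roughly):

```python
import math

TAU = 1e-14                 # assumed attempt period; the text gives only
                            # the order of magnitude 1e-13 to 1e-14 s
YEAR = 3.156e7              # seconds per year

def expectation_time(w_over_kt):
    # Waiting time for the lift: t = tau * exp(W/kT)
    return TAU * math.exp(w_over_kt)

for r in (30, 50, 60):
    t = expectation_time(r)
    print(f"W = {r} kT: t ~ {t:.1e} s ({t / YEAR:.1e} years)")

# With kT ~ 0.025 eV at room temperature these ratios correspond to
# thresholds of roughly 0.8, 1.3, 1.5 eV -- the same order as the 0.9,
# 1.5, 1.8 eV quoted later in the text (which imply kT nearer 0.03 eV).
```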
In offering these considerations as a theory of the stability of the molecule it has been tacitly assumed that the quantum jump which we called the 'lift' leads, if not to a complete disintegration, at least to an essentially different configuration of the same atoms -an isomeric molecule, as the chemist would say, that is, a molecule composed of the same atoms in a different arrangement (in the application to biology it is going to represent a different 'allele' in the same 'locus' and the quantum jump will represent a mutation).

To allow of this interpretation two points must be amended in our story, which I purposely simplified to make it at all intelligible. From the way I told it, it might be imagined that only in its very lowest state does our group of atoms form what we call a molecule and that already the next higher state is 'something else'. That is not so. Actually the lowest level is followed by a crowded series of levels which do not involve any appreciable change in the configuration as a whole, but only correspond to those small vibrations among the atoms which we have mentioned above. They, too, are 'quantized', but with comparatively small steps from one level to the next. Hence the impacts of the particles of the 'heat bath' may suffice to set them up already at fairly low temperature. If the molecule is an extended structure, you may conceive these vibrations as high-frequency sound waves, crossing the molecule without doing it any harm. So the first amendment is not very serious: we have to disregard the 'vibrational fine-structure' of the level scheme. The term 'next higher level' has to be understood as meaning the next level that corresponds to a relevant change of configuration.

The second amendment is far more difficult to explain, because it is concerned with certain vital, but rather complicated, features of the scheme of relevantly different levels. The free passage between two of them may be obstructed, quite apart from the required energy supply; in fact, it may be obstructed even from the higher to the lower state.

Let us start from the empirical facts. It is known to the chemist that the same group of atoms can unite in more than one way to form a molecule. Such molecules are called isomeric ('consisting of the same parts'). Isomerism is not an exception, it is the rule. The larger the molecule, the more isomeric alternatives are offered. Fig. 11 shows one of the simplest cases, the two kinds of propyl alcohol, both consisting of 3 carbons (C), 8 hydrogens (H), 1 oxygen (O). The latter can be interposed between any hydrogen and its carbon, but only the two cases shown in our figure are different substances. And they really are. All their physical and chemical constants are distinctly different. Also their energies are different, they represent 'different levels'.

The remarkable fact is that both molecules are perfectly stable, both behave as though they were 'lowest states'. There are no spontaneous transitions from either state towards the other. The reason is that the two configurations are not neighbouring configurations. The transition from one to the other can only take place over intermediate configurations which have a greater energy than either of them. To put it crudely, the oxygen has to be extracted from one position and has to be inserted into the other. There does not seem to be a way of doing that without passing through configurations of considerably higher energy.
The state of affairs is sometimes figuratively pictured as in Fig. 12, in which 1 and 2 represent the two isomers, 3 the 'threshold' between them, and the two arrows indicate the 'lifts', that is to say, the energy supplies required to produce the transition from state 1 to state 2 or from state 2 to state 1, respectively.

Now we can give our 'second amendment', which is that transitions of this 'isomeric' kind are the only ones in which we shall be interested in our biological application. It was these we had in mind when explaining 'stability' on pp. 49-51. The 'quantum jump' which we mean is the transition from one relatively stable molecular configuration to another. The energy supply required for the transition (the quantity denoted by W) is not the actual level difference, but the step from the initial level up to the threshold (see the arrows in Fig. 12). Transitions with no threshold interposed between the initial and the final state are entirely uninteresting, and that not only in our biological application. They have actually nothing to contribute to the chemical stability of the molecule. Why? They have no lasting effect, they remain unnoticed. For, when they occur, they are almost immediately followed by a relapse into the initial state, since nothing prevents their return.

Delbruck's Model Discussed and Tested

From these facts emerges a very simple answer to our question, namely: Are these structures, composed of comparatively few atoms, capable of withstanding for long periods the disturbing influence of heat motion to which the hereditary substance is continually exposed? We shall assume the structure of a gene to be that of a huge molecule, capable only of discontinuous change, which consists in a rearrangement of the atoms and leads to an isomeric molecule. The rearrangement may affect only a small region of the gene, and a vast number of different rearrangements may be possible. The energy thresholds, separating the actual configuration from any possible isomeric ones, have to be high enough (compared with the average heat energy of an atom) to make the change-over a rare event. These rare events we shall identify with spontaneous mutations.

The later parts of this chapter will be devoted to putting this general picture of a gene and of mutation (due mainly to the German physicist M. Delbruck) to the test, by comparing it in detail with genetical facts. Before doing so, we may fittingly make some comment on the foundation and general nature of the theory.

Was it absolutely essential for the biological question to dig up the deepest roots and found the picture on quantum mechanics? The conjecture that a gene is a molecule is today, I dare say, a commonplace. Few biologists, whether familiar with quantum theory or not, would disagree with it. On p. 47 we ventured to put it into the mouth of a pre-quantum physicist, as the only reasonable explanation of the observed permanence. The subsequent considerations about isomerism, threshold energy, the paramount role of the ratio W:kT in determining the probability of an isomeric transition -all that could very well be introduced on a purely empirical basis, at any rate without drawing on quantum theory. Why did I so strongly insist on the quantum-mechanical point of view, though I could not really make it clear in this little book and may well have bored many a reader?
Quantum mechanics is the first theoretical aspect which accounts from first principles for all kinds of aggregates of atoms actually encountered in Nature. The Heitler-London bondage is a unique, singular feature of the theory, not invented for the purpose of explaining the chemical bond. It comes in quite by itself, in a highly interesting and puzzling manner, being forced upon us by entirely different considerations. It proves to correspond exactly with the observed chemical facts, and, as I said, it is a unique feature, well enough understood to tell with reasonable certainty that 'such a thing could not happen again' in the further development of quantum theory. Consequently, we may safely assert that there is no alternative to the molecular explanation of the hereditary substance. The physical aspect leaves no other possibility to account for its permanence. If the Delbruck picture should fail, we would have to give up further attempts. That is the first point I wish to make.

But it may be asked: Are there really no other endurable structures composed of atoms except molecules? Does not a gold coin, for example, buried in a tomb for a couple of thousand years, preserve the traits of the portrait stamped on it? It is true that the coin consists of an enormous number of atoms, but surely we are in this case not inclined to attribute the mere preservation of shape to the statistics of large numbers. The same remark applies to a neatly developed batch of crystals we find embedded in a rock, where it must have been for geological periods without changing.

That leads us to the second point I want to elucidate. The cases of a molecule, a solid, a crystal are not really different. In the light of present knowledge they are virtually the same. Unfortunately, school teaching keeps up certain traditional views, which have been out of date for many years and which obscure the understanding of the actual state of affairs.

Indeed, what we have learnt at school about molecules does not give the idea that they are more closely akin to the solid state than to the liquid or gaseous state. On the contrary, we have been taught to distinguish carefully between a physical change, such as melting or evaporation, in which the molecules are preserved (so that, for example, alcohol, whether solid, liquid or gaseous, always consists of the same molecules, C2H6O), and a chemical change, as, for example, the burning of alcohol, C2H6O + 3O2 = 2CO2 + 3H2O, where an alcohol molecule and three oxygen molecules undergo a rearrangement to form two molecules of carbon dioxide and three molecules of water.

About crystals, we have been taught that they form three-fold periodic lattices, in which the structure of the single molecule is sometimes recognizable, as in the case of alcohol and most organic compounds, while in other crystals, e.g. rock-salt (NaCl), NaCl molecules cannot be unequivocally delimited, because every Na atom is symmetrically surrounded by six Cl atoms, and vice versa, so that it is largely arbitrary what pairs, if any, are regarded as molecular partners. Finally, we have been told that a solid can be crystalline or not, and in the latter case we call it amorphous.

Now I would not go so far as to say that all these statements and distinctions are quite wrong. For practical purposes they are sometimes useful. But in the true aspect of the structure of matter the limits must be drawn in an entirely different way.
The fundamental distinction is between the two lines of the following scheme of 'equations':

molecule = solid = crystal.

gas = liquid = amorphous.

We must explain these statements briefly. The so-called amorphous solids are either not really amorphous or not really solid. In 'amorphous' charcoal fibre the rudimentary structure of the graphite crystal has been disclosed by X-rays. So charcoal is a solid, but also crystalline. Where we find no crystalline structure we have to regard the thing as a liquid with very high 'viscosity' (internal friction). Such a substance discloses by the absence of a well-defined melting temperature and of a latent heat of melting that it is not a true solid. When heated it softens gradually and eventually liquefies without discontinuity. (I remember that at the end of the first Great War we were given in Vienna an asphalt-like substance as a substitute for coffee. It was so hard that one had to use a chisel or a hatchet to break the little brick into pieces, when it would show a smooth, shell-like cleavage. Yet, given time, it would behave as a liquid, closely packing the lower part of a vessel in which you were unwise enough to leave it for a couple of days.)

The continuity of the gaseous and liquid state is a well-known story. You can liquefy any gas without discontinuity by taking your way 'around' the so-called critical point. But we shall not enter on this here.

We have thus justified everything in the above scheme, except the main point, namely, that we wish a molecule to be regarded as a solid = crystal. The reason for this is that the atoms forming a molecule, whether there be few or many of them, are united by forces of exactly the same nature as the numerous atoms which build up a true solid, a crystal. Remember that it is precisely this solidity on which we draw to account for the permanence of the gene!

The distinction that is really important in the structure of matter is whether atoms are bound together by those Heitler-London forces or whether they are not. In a solid and in a molecule they all are. In a gas of single atoms (as e.g. mercury vapour) they are not. In a gas composed of molecules, only the atoms within every molecule are linked in this way.

A small molecule might be called 'the germ of a solid'. Starting from such a small solid germ, there seem to be two different ways of building up larger and larger associations. One is the comparatively dull way of repeating the same structure in three directions again and again -the way followed in a growing crystal. Once the periodicity is established, there is no definite limit to the size of the aggregate. The other way is that of the more and more complicated organic molecule, in which every atom, and every group of atoms, plays an individual role, not entirely equivalent to that of many others (as is the case in a periodic structure). We might quite properly call that an aperiodic crystal or solid and express our hypothesis by saying: We believe a gene -or perhaps the whole chromosome fibre -to be an aperiodic solid.

It has often been asked how this tiny speck of material, the nucleus of the fertilized egg, could contain an elaborate code-script involving all the future development of the organism. A well-ordered association of atoms, endowed with sufficient resistivity to keep its order permanently, appears to be the only conceivable material structure that offers a variety of possible ('isomeric') arrangements, sufficiently large to embody a complicated system of 'determinations' within a small spatial boundary. Indeed, the number of atoms in such a structure need not be very large to produce an almost unlimited number of possible arrangements. For illustration, think of the Morse code.
The two different signs of dot and dash in well-ordered groups of not more than four allow thirty different specifications. Now, if you allowed yourself the use of a third sign, in addition to dot and dash, and used groups of not more than ten, you could form 88,572 different 'letters'; with five signs and groups up to 25, the number is 372,529,029,846,191,405.

It may be objected that the simile is deficient, because our Morse signs may have different composition (e.g. .-- and ..-) and thus they are a bad analogue for isomerism. To remedy this defect, let us pick, from the third example, only the combinations of exactly 25 symbols and only those containing exactly 5 out of each of the supposed 5 types (5 dots, 5 dashes, etc.). A rough count gives you the number of combinations as 62,330,000,000,000, where the zeros on the right stand for figures which I have not taken the trouble to compute. (These counts are verified in the sketch below.)

Of course, in the actual case, by no means 'every' arrangement of the group of atoms will represent a possible molecule; moreover, it is not a question of a code to be adopted arbitrarily, for the code-script must itself be the operative factor bringing about the development. But, on the other hand, the number chosen in the example (25) is still very small, and we have envisaged only the simple arrangements in one line. What we wish to illustrate is simply that with the molecular picture of the gene it is no longer inconceivable that the miniature code should precisely correspond with a highly complicated and specified plan of development and should somehow contain the means to put it into operation.

Now let us at last proceed to compare the theoretical picture with the biological facts. The first question obviously is, whether it can really account for the high degree of permanence we observe. Are threshold values of the required amount -high multiples of the average heat energy kT -reasonable, are they within the range known from ordinary chemistry? That question is trivial; it can be answered in the affirmative without inspecting tables. The molecules of any substance which the chemist is able to isolate at a given temperature must at that temperature have a lifetime of at least minutes. That is putting it mildly; as a rule they have much more. Thus the threshold values the chemist encounters are of necessity precisely of the order of magnitude required to account for practically any degree of permanence the biologist may encounter; for we recall from p. 51 that thresholds varying within a range of about 1:2 will account for lifetimes ranging from a fraction of a second to tens of thousands of years.

But let me mention figures, for future reference. The ratios W/kT mentioned by way of example on p. 51, viz. W/kT = 30, 50, 60, producing lifetimes of 1/10 s., 16 months, 30,000 years, respectively, correspond at room temperature with threshold values of 0.9, 1.5, 1.8 electron-volts. We must explain the unit 'electron-volt', which is rather convenient for the physicist, because it can be visualized. For example, the third number (1.8) means that an electron, accelerated by a voltage of about 2 volts, would have acquired just sufficient energy to effect the transition by impact. (For comparison, the battery of an ordinary pocket flash-light has 3 volts.)
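As flagged above, the Morse counts can be checked directly. A short sketch that reproduces the arithmetic, computing the 'rough count' in full where the text keeps only the leading figures:

```python
from math import factorial

# Ordered groups of 1..n symbols drawn from s distinct signs.
def letters(s, n):
    return sum(s**k for k in range(1, n + 1))

print(letters(2, 4))    # 30     (dot and dash, groups of <= 4)
print(letters(3, 10))   # 88572  (a third sign, groups of <= 10)
print(letters(5, 25))   # 372529029846191405

# The 'rough count': strings of exactly 25 symbols containing exactly
# 5 of each of the 5 types -- the multinomial coefficient 25!/(5!)^5.
print(factorial(25) // factorial(5)**5)   # 623360743125120
```

The exact multinomial value, 623,360,743,125,120, agrees with the leading figures the text quotes; only the tail of the number, which the text explicitly leaves uncomputed, differs.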
These considerations make it conceivable that an isomeric change of configuration in some part of our molecule, produced by a chance fluctuation of the vibrational energy, can actually be a sufficiently rare event to be interpreted as a spontaneous mutation. Thus we account, by the very principles of quantum mechanics, for the most amazing fact about mutations, the fact by which they first attracted de Vries's attention, namely, that they are 'jumping' variations, no intermediate forms occurring.

Having discovered the increase of the natural mutation rate by any kind of ionizing rays, one might think of attributing the natural rate to the radio-activity of the soil and air and to cosmic radiation. But a quantitative comparison with the X-ray results shows that the 'natural radiation' is much too weak and could account only for a small fraction of the natural rate.

Granted that we have to account for the rare natural mutations by chance fluctuations of the heat motion, we must not be very much astonished that Nature has succeeded in making such a subtle choice of threshold values as is necessary to make mutation rare. For we have, earlier in these lectures, arrived at the conclusion that frequent mutations are detrimental to evolution. Individuals which, by mutation, acquire a gene configuration of insufficient stability, will have little chance of seeing their 'ultra-radical', rapidly mutating descendancy survive long. The species will be freed of them and will thus collect stable genes by natural selection. But, of course, as regards the mutants which occur in our breeding experiments and which we select, qua mutants, for studying their offspring, there is no reason to expect that they should all show that very high stability. For they have not yet been 'tried out' -or, if they have, they have been 'rejected' in the wild breeds -possibly for too high mutability. At any rate, we are not at all astonished to learn that actually some of these mutants do show a much higher mutability than the normal 'wild' genes.

This gives us the opportunity of testing our mutability formula, which was

\[t=\tau e^{W/kT}\]

(It will be remembered that t is the time of expectation for a mutation with threshold energy W.) We ask: How does t change with the temperature? We easily find from the preceding formula, in good approximation, the ratio of the value of t at temperature T + 10 to that at temperature T:

\[\frac{t_{T+10}}{t_T}=e^{-10W/kT^2}\]

The exponent being now negative, the ratio is, naturally, smaller than 1. The time of expectation is diminished by raising the temperature, the mutability is increased. Now that can be tested and has been tested with the fly Drosophila in the range of temperature which the insects will stand. The result was, at first sight, surprising. The low mutability of wild genes was distinctly increased, but the comparatively high mutability occurring with some of the already mutated genes was not, or at any rate was much less, increased. That is just what we expect on comparing our two formulae. A large value of W/kT, which according to the first formula is required to make t large (stable gene), will, according to the second one, make for a small value of the ratio computed there, that is to say for a considerable increase of mutability with temperature. (The actual values of the ratio seem to lie between about 1/2 and 1/5. The reciprocal, 2.5, is what in an ordinary chemical reaction we call the van't Hoff factor.)
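A quick numerical check of this comparison, assuming an absolute temperature of about 300 K (the W/kT values are the illustrative ones used earlier in the text):

```python
import math

T = 300.0    # assumed absolute temperature in kelvin

# Ratio of expectation times at T+10 and T: exp(-10*W/(k*T^2)),
# written in terms of the dimensionless ratio W/kT.
def mutability_ratio(w_over_kt, T=T):
    return math.exp(-10.0 * w_over_kt / T)

for r in (30, 50, 60):
    print(f"W/kT = {r}: t(T+10)/t(T) = {mutability_ratio(r):.2f}")
# Stable genes (large W/kT) give the small ratios, i.e. the strongest
# speed-up with temperature; unstable mutants (smaller W/kT) are
# influenced less -- the pattern observed in Drosophila.
```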
HOW X-RAYS PRODUCE MUTATION

Turning now to the X-ray-induced mutation rate, we have already inferred from the breeding experiments, first (from the proportionality of mutation rate and dosage), that some single event produces the mutation; secondly (from quantitative results and from the fact that the mutation rate is determined by the integrated ionization density and independent of the wave-length), that this single event must be an ionization, or similar process, which has to take place inside a certain volume of only about 10 atomic-distances-cubed, in order to produce a specified mutation. According to our picture, the energy for overcoming the threshold must obviously be furnished by that explosion-like process, ionization or excitation. I call it explosion-like, because the energy spent in one ionization (spent, incidentally, not by the X-ray itself, but by a secondary electron it produces) is well known and has the comparatively enormous amount of 30 electron-volts. It is bound to be turned into enormously increased heat motion around the point where it is discharged and to spread from there in the form of a 'heat wave', a wave of intense oscillations of the atoms. That this heat wave should still be able to furnish the required threshold energy of 1 or 2 electron-volts at an average 'range of action' of about ten atomic distances, is not inconceivable, though it may well be that an unprejudiced physicist might have anticipated a slightly lower range of action. That in many cases the effect of the explosion will not be an orderly isomeric transition but a lesion of the chromosome, a lesion that becomes lethal when, by ingenious crossings, the uninjured partner (the corresponding chromosome of the second set) is removed and replaced by a partner whose corresponding gene is known to be itself morbid -all that is absolutely to be expected and it is exactly what is observed.

Quite a few other features are, if not predictable from the picture, easily understood from it. For example, an unstable mutant does not on the average show a much higher X-ray mutation rate than a stable one. Now, with an explosion furnishing an energy of 30 electron-volts you would certainly not expect that it makes a lot of difference whether the required threshold energy is a little larger or a little smaller, say 1 or 1.3 volts.

In some cases a transition was studied in both directions, say from a certain 'wild' gene to a specified mutant and back from that mutant to the wild gene. In such cases the natural mutation rate is sometimes nearly the same, sometimes very different. At first sight one is puzzled, because the threshold to be overcome seems to be the same in both cases. But, of course, it need not be, because it has to be measured from the energy level of the starting configuration, and that may be different for the wild and the mutated gene. (See Fig. 12 on p. 54, where '1' might refer to the wild allele, '2' to the mutant, whose lower stability would be indicated by the shorter arrow.)

On the whole, I think, Delbruck's 'model' stands the tests fairly well and we are justified in using it in further considerations.

Order, Disorder and Entropy

Let me refer to the phrase on p. 62, in which I tried to explain that the molecular picture of the gene made it at least conceivable that the miniature code should be in one-to-one correspondence with a highly complicated and specified plan of development and should somehow contain the means of putting it into operation. Very well then, but how does it do this?
How are we going to turn 'conceivability' into true understanding? Delbruck's molecular model, in its complete generality, seems to contain no hint as to how the hereditary substance works. Indeed, I do not expect that any detailed information on this question is likely to come from physics in the near future. The advance is proceeding and will, I am sure, continue to do so, from biochemistry under the guidance of physiology and genetics.

No detailed information about the functioning of the genetical mechanism can emerge from a description of its structure so general as has been given above. That is obvious. But, strangely enough, there is just one general conclusion to be obtained from it, and that, I confess, was my only motive for writing this book. From Delbruck's general picture of the hereditary substance it emerges that living matter, while not eluding the 'laws of physics' as established up to date, is likely to involve 'other laws of physics' hitherto unknown, which, however, once they have been revealed, will form just as integral a part of this science as the former.

This is a rather subtle line of thought, open to misconception in more than one respect. All the remaining pages are concerned with making it clear. A preliminary insight, rough but not altogether erroneous, may be found in the following considerations:

It has been explained in chapter 1 that the laws of physics, as we know them, are statistical laws. They have a lot to do with the natural tendency of things to go over into disorder. But, to reconcile the high durability of the hereditary substance with its minute size, we had to evade the tendency to disorder by 'inventing the molecule', in fact, an unusually large molecule which has to be a masterpiece of highly differentiated order, safeguarded by the conjuring rod of quantum theory. The laws of chance are not invalidated by this 'invention', but their outcome is modified. The physicist is familiar with the fact that the classical laws of physics are modified by quantum theory, especially at low temperature. There are many instances of this. Life seems to be one of them, a particularly striking one. Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder, but based partly on existing order that is kept up.

To the physicist -but only to him -I could hope to make my view clearer by saying: The living organism seems to be a macroscopic system which in part of its behaviour approaches to that purely mechanical (as contrasted with thermodynamical) conduct to which all systems tend, as the temperature approaches absolute zero and the molecular disorder is removed.

The non-physicist finds it hard to believe that really the ordinary laws of physics, which he regards as the prototype of inviolable precision, should be based on the statistical tendency of matter to go over into disorder. I have given examples in chapter 1. The general principle involved is the famous Second Law of Thermodynamics (entropy principle) and its equally famous statistical foundation. On pp. 69-74 I will try to sketch the bearing of the entropy principle on the large-scale behaviour of a living organism -forgetting at the moment all that is known about chromosomes, inheritance, and so on.

What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on 'doing something', moving, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to 'keep going' under similar circumstances.
When a system that is not alive is isolated or placed in a uniform environment, all motion usually comes to a standstill very soon as a result of various kinds of friction; differences of electric or chemical potential are equalized, substances which tend to form a chemical compound do so, temperature becomes uniform by heat conduction. After that the whole system fades away into a dead, inert lump of matter. A permanent state is reached, in which no observable events occur. The physicist calls this the state of thermodynamical equilibrium, or of 'maximum entropy'.

Practically, a state of this kind is usually reached very rapidly. Theoretically, it is very often not yet an absolute equilibrium, not yet the true maximum of entropy. But then the final approach to equilibrium is very slow. It could take anything between hours, years, centuries, ... To give an example -one in which the approach is still fairly rapid: if a glass filled with pure water and a second one filled with sugared water are placed together in a hermetically closed case at constant temperature, it appears at first that nothing happens, and the impression of complete equilibrium is created. But after a day or so it is noticed that the pure water, owing to its higher vapour pressure, slowly evaporates and condenses on the solution. The latter overflows. Only after the pure water has totally evaporated has the sugar reached its aim of being equally distributed among all the liquid water available. These ultimate slow approaches to equilibrium could never be mistaken for life, and we may disregard them here. I have referred to them in order to clear myself of a charge of inaccuracy.

It is by avoiding the rapid decay into the inert state of 'equilibrium' that an organism appears so enigmatic; so much so, that from the earliest times of human thought some special non-physical or supernatural force (vis viva, entelechy) was claimed to be operative in the organism, and in some quarters is still claimed.

How does the living organism avoid decay? The obvious answer is: By eating, drinking, breathing and (in the case of plants) assimilating. The technical term is metabolism. The Greek word (μεταβάλλειν) means change or exchange. Exchange of what? Originally the underlying idea is, no doubt, exchange of material. (E.g. the German for metabolism is Stoffwechsel.) That the exchange of material should be the essential thing is absurd. Any atom of nitrogen, oxygen, sulphur, etc., is as good as any other of its kind; what could be gained by exchanging them? For a while in the past our curiosity was silenced by being told that we feed upon energy. In some very advanced country (I don't remember whether it was Germany or the U.S.A. or both) you could find menu cards in restaurants indicating, in addition to the price, the energy content of every dish. Needless to say, taken literally, this is just as absurd. For an adult organism the energy content is as stationary as the material content. Since, surely, any calorie is worth as much as any other calorie, one cannot see how a mere exchange could help.

What then is that precious something contained in our food which keeps us from death? That is easily answered. Every process, event, happening -call it what you will; in a word, everything that is going on in Nature means an increase of the entropy of the part of the world where it is going on. Thus a living organism continually increases its entropy -or, as you may say, produces positive entropy -and thus tends to approach the dangerous state of maximum entropy, which is death. It can only keep aloof from it, i.e. alive, by continually drawing from its environment negative entropy -which is something very positive as we shall immediately see.
What an organism feeds upon is negative entropy. Or, to put it less paradoxically, the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive.

What is entropy? Let me first emphasize that it is not a hazy concept or idea, but a measurable physical quantity just like the length of a rod, the temperature at any point of a body, the heat of fusion of a given crystal or the specific heat of any given substance. At the absolute zero point of temperature (roughly -273°C) the entropy of any substance is zero. When you bring the substance into any other state by slow, reversible little steps (even if thereby the substance changes its physical or chemical nature or splits up into two or more parts of different physical or chemical nature) the entropy increases by an amount which is computed by dividing every little portion of heat you had to supply in that procedure by the absolute temperature at which it was supplied -and by summing up all these small contributions. To give an example, when you melt a solid, its entropy increases by the amount of the heat of fusion divided by the temperature at the melting-point (a worked figure is given in the sketch below). You see from this, that the unit in which entropy is measured is cal./°C (just as the calorie is the unit of heat or the centimetre the unit of length).

I have mentioned this technical definition simply in order to remove entropy from the atmosphere of hazy mystery that frequently veils it. Much more important for us here is the bearing of entropy on the statistical concept of order and disorder, a connection that was revealed by the investigations of Boltzmann and Gibbs in statistical physics. This too is an exact quantitative connection, and is expressed by

\[\text{entropy} = k \log D\]

where k is the so-called Boltzmann constant (= 3.2983 × 10⁻²⁴ cal./°C), and D a quantitative measure of the atomistic disorder of the body in question. To give an exact explanation of this quantity D in brief non-technical terms is well-nigh impossible. The disorder it indicates is partly that of heat motion, partly that which consists in different kinds of atoms or molecules being mixed at random, instead of being neatly separated, e.g. the sugar and water molecules in the example quoted above. Boltzmann's equation is well illustrated by that example. The gradual 'spreading out' of the sugar over all the water available increases the disorder D, and hence (since the logarithm of D increases with D) the entropy. It is also pretty clear that any supply of heat increases the turmoil of heat motion, that is to say, increases D and thus increases the entropy; it is particularly clear that this should be so when you melt a crystal, since you thereby destroy the neat and permanent arrangement of the atoms or molecules and turn the crystal lattice into a continually changing random distribution.

An isolated system or a system in a uniform environment (which for the present consideration we do best to include as part of the system we contemplate) increases its entropy and more or less rapidly approaches the inert state of maximum entropy. We now recognize this fundamental law of physics to be just the natural tendency of things to approach the chaotic state (the same tendency that the books of a library or the piles of papers and manuscripts on a writing desk display) unless we obviate it. (The analogue of irregular heat motion, in this case, is our handling those objects now and again without troubling to put them back in their proper places.)
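The technical definition above can be made concrete with the worked example promised there, using the melting of ice (the figures for water are standard handbook values, assumed here rather than taken from the text):

```python
# Entropy gained on melting: delta_S = heat of fusion / melting temperature.
heat_of_fusion = 1436.0   # cal/mol for ice (~6.01 kJ/mol), assumed value
melting_point = 273.15    # K (0 deg C)

delta_S = heat_of_fusion / melting_point
print(f"entropy of fusion of ice ~ {delta_S:.2f} cal/(mol.degC)")  # ~5.26

# Via Boltzmann's relation entropy = k log D, this fixes how much the
# disorder measure D grows when the lattice becomes a liquid:
k = 3.2983e-24            # Boltzmann's constant in cal/degC, as in the text
print(f"log D increases by ~{delta_S / k:.1e} per mole melted")
```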
How would we express in terms of the statistical theory the marvellous faculty of a living organism, by which it delays the decay into thermodynamical equilibrium (death)? We said before: 'It feeds upon negative entropy', attracting, as it were, a stream of negative entropy upon itself, to compensate the entropy increase it produces by living and thus to maintain itself on a stationary and fairly low entropy level.

If D is a measure of disorder, its reciprocal, 1/D, can be regarded as a direct measure of order. Since the logarithm of 1/D is just minus the logarithm of D, we can write Boltzmann's equation thus:

\[-(\text{entropy}) = k \log (1/D)\]

Hence the awkward expression 'negative entropy' can be replaced by a better one: entropy, taken with the negative sign, is itself a measure of order. Thus the device by which an organism maintains itself stationary at a fairly high level of orderliness (= fairly low level of entropy) really consists in continually sucking orderliness from its environment. This conclusion is less paradoxical than it appears at first sight. Rather could it be blamed for triviality. Indeed, in the case of higher animals we know the kind of orderliness they feed upon well enough, viz. the extremely well-ordered state of matter in more or less complicated organic compounds, which serve them as foodstuffs. After utilizing it they return it in a very much degraded form -not entirely degraded, however, for plants can still make use of it. (These, of course, have their most powerful supply of 'negative entropy' in the sunlight.)

The remarks on negative entropy have met with doubt and opposition from physicist colleagues. Let me say first, that if I had been catering for them alone I should have let the discussion turn on free energy instead. It is the more familiar notion in this context. But this highly technical term seemed linguistically too near to energy for making the average reader alive to the contrast between the two things. He is likely to take free as more or less an epitheton ornans without much relevance, while actually the concept is a rather intricate one, whose relation to Boltzmann's order-disorder principle is less easy to trace than for entropy and 'entropy taken with a negative sign', which by the way is not my invention. It happens to be precisely the thing on which Boltzmann's original argument turned.

But F. Simon has very pertinently pointed out to me that my simple thermodynamical considerations cannot account for our having to feed on matter 'in the extremely well ordered state of more or less complicated organic compounds' rather than on charcoal or diamond pulp. He is right. But to the lay reader I must explain that a piece of unburnt coal or diamond, together with the amount of oxygen needed for its combustion, is also in an extremely well ordered state, as the physicist understands it. Witness to this: if you allow the reaction, the burning of the coal, to take place, a great amount of heat is produced. By giving it off to the surroundings, the system disposes of the very considerable entropy increase entailed by the reaction, and reaches a state in which it has, in point of fact, roughly the same entropy as before. Yet we could not feed on the carbon dioxide that results from the reaction. And so Simon is quite right in pointing out to me, as he did, that actually the energy content of our food does matter; so my mocking at the menu cards that indicate it was out of place. Energy is needed to replace not only the mechanical energy of our bodily exertions, but also the heat we continually give off to the environment. And that we give off heat is not accidental, but essential. For this is precisely the manner in which we dispose of the surplus entropy we continually produce in our physical life process.

This seems to suggest that the higher temperature of the warm-blooded animal includes the advantage of enabling it to get rid of its entropy at a quicker rate, so that it can afford a more intense life process. I am not sure how much truth there is in this argument (for which I am responsible, not Simon). One may hold against it, that on the other hand many warm-blooders are protected against the rapid loss of heat by coats of fur or feathers.
So the parallelism between body temperature and 'intensity of life', which I believe to exist, may have to be accounted for more directly by van't Hoff's law, mentioned on p. 65: the higher temperature itself speeds up the chemical reactions involved in living. (That it actually does, has been confirmed experimentally in species which take the temperature of the surroundings.)

Is Life Based on the Laws of Physics?

If a man never contradicts himself, the reason must be that he virtually never says anything at all.

What I wish to make clear in this last chapter is, in short, that from all we have learnt about the structure of living matter, we must be prepared to find it working in a manner that cannot be reduced to the ordinary laws of physics. And that not on the ground that there is any 'new force' or what not, directing the behaviour of the single atoms within a living organism, but because the construction is different from anything we have yet tested in the physical laboratory. To put it crudely, an engineer, familiar with heat engines only, will, after inspecting the construction of an electric motor, be prepared to find it working along principles which he does not yet understand. He finds the copper familiar to him in kettles used here in the form of long, long wires wound in coils; the iron familiar to him in levers and bars and steam cylinders here filling the interior of those coils of copper wire. He will be convinced that it is the same copper and the same iron, subject to the same laws of Nature, and he is right in that. The difference in construction is enough to prepare him for an entirely different way of functioning. He will not suspect that an electric motor is driven by a ghost because it is set spinning by the turn of a switch, without boiler and steam.

The unfolding of events in the life cycle of an organism exhibits an admirable regularity and orderliness, unrivalled by anything we meet with in inanimate matter. We find it controlled by a supremely well-ordered group of atoms, which represent only a very small fraction of the sum total in every cell. Moreover, from the view we have formed of the mechanism of mutation we conclude that the dislocation of just a few atoms within the group of 'governing atoms' of the germ cell suffices to bring about a well-defined change in the large-scale hereditary characteristics of the organism.

These facts are easily the most interesting that science has revealed in our day. We may be inclined to find them, after all, not wholly unacceptable. An organism's astonishing gift of concentrating a 'stream of order' on itself and thus escaping the decay into atomic chaos -of 'drinking orderliness' from a suitable environment -seems to be connected with the presence of the 'aperiodic solids', the chromosome molecules, which doubtless represent the highest degree of well-ordered atomic association we know of -much higher than the ordinary periodic crystal -in virtue of the individual role every atom and every radical is playing here.

To put it briefly, we witness the event that existing order displays the power of maintaining itself and of producing orderly events. That sounds plausible enough, though in finding it plausible we, no doubt, draw on experience concerning social organization and other events which involve the activity of organisms. And so it might seem that something like a vicious circle is implied.
However that may be, the point to emphasize again and again is that to the physicist the state of affairs is not only not plausible but most exciting, because it is unprecedented. Contrary to the common belief, the regular course of events, governed by the laws of physics, is never the consequence of one well-ordered configuration of atoms -not unless that configuration of atoms repeats itself a great number of times, either as in the periodic crystal or as in a liquid or in a gas composed of a great number of identical molecules.

Even when the chemist handles a very complicated molecule in vitro he is always faced with an enormous number of like molecules. To them his laws apply. He might tell you, for example, that one minute after he has started some particular reaction half of the molecules will have reacted, and after a second minute three-quarters of them will have done so. But whether any particular molecule, supposing you could follow its course, will be among those which have reacted or among those which are still untouched, he could not predict. That is a matter of pure chance.

This is not a purely theoretical conjecture. It is not that we can never observe the fate of a single small group of atoms or even of a single atom. We can, occasionally. But whenever we do, we find complete irregularity, co-operating to produce regularity only on the average. We have dealt with an example in chapter 1. The Brownian movement of a small particle suspended in a liquid is completely irregular. But if there are many similar particles, they will by their irregular movement give rise to the regular phenomenon of diffusion. The disintegration of a single radioactive atom is observable (it emits a projectile which causes a visible scintillation on a fluorescent screen). But if you are given a single radioactive atom, its probable lifetime is much less certain than that of a healthy sparrow. Indeed, nothing more can be said about it than this: as long as it lives (and that may be for thousands of years) the chance of its blowing up within the next second, whether large or small, remains the same. This patent lack of individual determination nevertheless results in the exact exponential law of decay of a large number of radioactive atoms of the same kind (a short simulation below illustrates this).

In biology we are faced with an entirely different situation. A single group of atoms existing only in one copy produces orderly events, marvellously tuned in with each other and with the environment according to most subtle laws. I said existing only in one copy, for after all we have the example of the egg and of the unicellular organism. In the following stages of a higher organism the copies are multiplied, that is true. But to what extent? Something like 10¹⁴ in a grown mammal, I understand. What is that! Only a millionth of the number of molecules in one cubic inch of air. Though comparatively bulky, by coalescing they would form but a tiny drop of liquid. And look at the way they are actually distributed. Every cell harbours just one of them (or two, if we bear in mind diploidy). Since we know the power this tiny central office has in the isolated cell, do they not resemble stations of local government dispersed through the body, communicating with each other with great ease, thanks to the code that is common to all of them?

Well, this is a fantastic description, perhaps less becoming a scientist than a poet.
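Returning for a moment to the radioactive atom mentioned above: a minimal simulation (with an arbitrary assumed per-second decay probability) shows individual lawlessness producing the exact collective law:

```python
import random

P_DECAY = 0.01      # assumed chance of decay within any given second
N_ATOMS = 100_000   # 'a large number of radioactive atoms of the same kind'

survivors = N_ATOMS
for second in range(1, 301):
    # Every surviving atom faces the same per-second chance, whatever its age.
    survivors = sum(1 for _ in range(survivors) if random.random() > P_DECAY)
    if second % 100 == 0:
        expected = N_ATOMS * (1 - P_DECAY) ** second  # exact exponential law
        print(f"t = {second:3d} s: {survivors:6d} survive "
              f"(exponential law predicts ~{expected:,.0f})")
# No single atom's fate is predictable, yet the ensemble follows the
# exponential decay curve to within statistical fluctuations.
```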
However, it needs no poetical imagination but only clear and sober scientific reflection to recognize that we are here obviously faced with events whose regular and lawful unfolding is guided by a 'mechanism' entirely different from the 'probability mechanism' of physics. For it is simply a fact of observation that the guiding principle in every cell is embodied in a single atomic association existing only in one copy (or sometimes two) -and a fact of observation that it results in producing events which are a paragon of orderliness, unknown anywhere else except in living matter. The physicist and the chemist, investigating inanimate matter, have never witnessed phenomena which they had to interpret in this way. The case did not arise and so our theory does not cover it -our beautiful statistical theory of which we were so justly proud because it allowed us to look behind the curtain, to watch the magnificent order of exact physical law coming forth from atomic and molecular disorder; because it revealed that the most important, the most general, the all-embracing law of entropy increase could be understood without a special assumption ad hoc, for it is nothing but molecular disorder itself.

The orderliness encountered in the unfolding of life springs from a different source. It appears that there are two different 'mechanisms' by which orderly events can be produced: the 'statistical mechanism' which produces order from disorder and the new one, producing order from order. To the unprejudiced mind the second principle appears to be much simpler, much more plausible. No doubt it is. That is why physicists were so proud to have fallen in with the other one, the 'order-from-disorder' principle, which is actually followed in Nature and which alone conveys an understanding of the great line of natural events, in the first place of their irreversibility. But we cannot expect that the 'laws of physics' derived from it suffice straightaway to explain the behaviour of living matter, whose most striking features are visibly based to a large extent on the 'order-from-order' principle. You would not expect two entirely different mechanisms to bring about the same type of law -you would not expect your latch-key to open your neighbour's door as well.

We must therefore not be discouraged by the difficulty of interpreting life by the ordinary laws of physics. For that is just what is to be expected from the knowledge we have gained of the structure of living matter. We must be prepared to find a new type of physical law prevailing in it. Or are we to term it a non-physical, not to say a super-physical, law? No. I do not think that. For the new principle that is involved is a genuinely physical one: it is, in my opinion, nothing else than the principle of quantum theory over again. To explain this, we have to go to some length, including a refinement, not to say an amendment, of the assertion previously made, namely, that all physical laws are based on statistics.

This assertion, made again and again, could not fail to arouse contradiction. For, indeed, there are phenomena whose conspicuous features are visibly based directly on the 'order-from-order' principle and appear to have nothing to do with statistics or molecular disorder.

The order of the solar system, the motion of the planets, is maintained for an almost indefinite time. The constellation of this moment is directly connected with the constellation at any particular moment in the times of the Pyramids; it can be traced back to it, or vice versa.
Historical eclipses have been calculated and have been found in close agreement with historical records, or have even in some cases served to correct the accepted chronology. These calculations do not imply any statistics; they are based solely on Newton's law of universal attraction. Nor does the regular motion of a good clock or any similar mechanism appear to have anything to do with statistics. In short, all purely mechanical events seem to follow distinctly and directly the 'order-from-order' principle. And if we say 'mechanical', the term must be taken in a wide sense. A very useful kind of clock is, as you know, based on the regular transmission of electric pulses from the power station.

I remember an interesting little paper by Max Planck on the topic 'The Dynamical and the Statistical Type of Law' ('Dynamische und Statistische Gesetzmassigkeit'). The distinction is precisely the one we have here labelled as 'order from order' and 'order from disorder'. The object of that paper was to show how the interesting statistical type of law, controlling large-scale events, is constituted from the dynamical laws supposed to govern the small-scale events, the interaction of the single atoms and molecules. The latter type is illustrated by large-scale mechanical phenomena, as the motion of the planets or of a clock, etc.

Thus it would appear that the 'new' principle, the order-from-order principle, to which we have pointed with great solemnity as being the real clue to the understanding of life, is not at all new to physics. Planck's attitude even vindicates priority for it. We seem to arrive at the ridiculous conclusion that the clue to the understanding of life is that it is based on a pure mechanism, a 'clock-work' in the sense of Planck's paper. The conclusion is not ridiculous and is, in my opinion, not entirely wrong, but it has to be taken 'with a very big grain of salt'.

Let us analyse the motion of a real clock accurately. It is not at all a purely mechanical phenomenon. A purely mechanical clock would need no spring, no winding. Once set in motion, it would go on forever. A real clock without a spring stops after a few beats of the pendulum, its mechanical energy is turned into heat. This is an infinitely complicated atomistic process. The general picture the physicist forms of it compels him to admit that the inverse process is not entirely impossible: a springless clock might suddenly begin to move, at the expense of the heat energy of its own cog wheels and of the environment. The physicist would have to say: the clock experiences an exceptionally intense fit of Brownian movement. We have seen in chapter 2 (p. 16) that with a very sensitive torsional balance (electrometer or galvanometer) that sort of thing happens all the time. In the case of a clock it is, of course, infinitely unlikely.

Whether the motion of a clock is to be assigned to the dynamical or to the statistical type of lawful events (to use Planck's expressions) depends on our attitude. In calling it a dynamical phenomenon we fix attention on the regular going that can be secured by a comparatively weak spring, which overcomes the small disturbances by heat motion, so that we may disregard them. But if we remember that without a spring the clock is gradually slowed down by friction, we find that this process can only be understood as a statistical phenomenon.
However insignificant the frictional and heating effects in a clock may be from the practical point of view, there can be no doubt that the second attitude, which does not neglect them, is the more fundamental one, even when we are faced with the regular motion of a clock that is driven by a spring. For it must not be believed that the driving mechanism really does away with the statistical nature of the process. The true physical picture includes the possibility that even a regularly going clock should all at once invert its motion and, working backward, rewind its own spring - at the expense of the heat of the environment. The event is just a little less likely than a 'Brownian fit' of a clock without a driving mechanism.

CLOCKWORK AFTER ALL STATISTICAL

Let us now review the situation. The 'simple' case we have analysed is representative of many others - in fact of all such as appear to evade the all-embracing principle of molecular statistics. Clockworks made of real physical matter (in contrast to imagination) are not true 'clock-works'. The element of chance may be more or less reduced, the likelihood of the clock suddenly going altogether wrong may be infinitesimal, but it always remains in the background. Even in the motion of the celestial bodies irreversible frictional and thermal influences are not wanting. Thus the rotation of the earth is slowly diminished by tidal friction, and along with this reduction the moon gradually recedes from the earth, which would not happen if the earth were a completely rigid rotating sphere.

Nevertheless the fact remains that 'physical clock-works' visibly display very prominent 'order-from-order' features - the type that aroused the physicist's excitement when he encountered them in the organism. It seems likely that the two cases have after all something in common. It remains to be seen what this is and what is the striking difference which makes the case of the organism after all novel and unprecedented.

When does a physical system - any kind of association of atoms - display 'dynamical law' (in Planck's meaning) or 'clock-work features'? Quantum theory has a very short answer to this question, viz. at the absolute zero of temperature. As zero temperature is approached, the molecular disorder ceases to have any bearing on physical events. This fact was, by the way, not discovered by theory, but by carefully investigating chemical reactions over a wide range of temperatures and extrapolating the results to zero temperature - which cannot actually be reached. This is Walther Nernst's famous 'Heat Theorem', which is sometimes, and not unduly, given the proud name of the 'Third Law of Thermodynamics' (the first being the energy principle, the second the entropy principle). Quantum theory provides the rational foundation of Nernst's empirical law, and also enables us to estimate how closely a system must approach the absolute zero in order to display an approximately 'dynamical' behaviour.

What temperature is in any particular case already practically equivalent to zero? Now you must not believe that this always has to be a very low temperature. Indeed, Nernst's discovery was induced by the fact that even at room temperature entropy plays an astonishingly insignificant role in many chemical reactions. (Let me recall that entropy is a direct measure of molecular disorder, viz. its logarithm.) What about a pendulum clock? For a pendulum clock room temperature is practically equivalent to zero. That is the reason why it works 'dynamically'.
It will continue to work as it does if you cool it (provided that you have removed all traces of oil!). But it does not continue to work if you heat it above room temperature, for it will eventually melt. That seems very trivial but it does, I think, hit the cardinal point. Clockworks are capable of functioning 'dynamically' because they are built of solids, which are kept in shape by London-Heitler forces, strong enough to elude the disorderly tendency of heat motion at ordinary temperature.

Now, I think, a few words more are needed to disclose the point of resemblance between a clockwork and an organism. It is simply and solely that the latter also hinges upon a solid - the aperiodic crystal forming the hereditary substance, largely withdrawn from the disorder of heat motion. But please do not accuse me of calling the chromosome fibres just the 'cogs of the organic machine' - at least not without a reference to the profound physical theories on which the simile is based. For, indeed, it needs still less rhetoric to recall the fundamental difference between the two and to justify the epithets novel and unprecedented in the biological case. The most striking features are: first, the curious distribution of the cogs in a many-celled organism, for which I may refer to the somewhat poetical description on p. 79; and secondly, the fact that the single cog is not of coarse human make, but is the finest masterpiece ever achieved along the lines of the Lord's quantum mechanics.

On Determinism and Free Will

According to the evidence put forward in the preceding pages, the space-time events in the body of a living being which correspond to the activity of its mind are (considering also their complex structure and the accepted statistical explanation of physico-chemistry) if not strictly deterministic at any rate statistico-deterministic. To the physicist I wish to emphasize that in my opinion, and contrary to the opinion upheld in some quarters, quantum indeterminacy plays no biologically relevant role in them. For the sake of argument, let me regard this as a fact, as I believe every unbiased biologist would, if there were not the well-known, unpleasant feeling about 'declaring oneself to be a pure mechanism'. For it is deemed to contradict Free Will as warranted by direct introspection.

But immediate experiences in themselves, however various and disparate they be, are logically incapable of contradicting each other. So let us see whether we cannot draw the correct, non-contradictory conclusion from the following two premises: (i) My body functions as a pure mechanism according to the Laws of Nature. (ii) Yet I know, by incontrovertible direct experience, that I am directing its motions, of which I foresee the effects, that may be fateful and all-important, in which case I feel and take full responsibility for them. The only possible inference from these two facts is, I think, that I - I in the widest meaning of the word, that is to say, every conscious mind that has ever said or felt 'I' - am the person, if any, who controls the 'motion of the atoms' according to the Laws of Nature.

Within a cultural milieu (Kulturkreis) where certain conceptions have been limited and specialized, it is daring to give to this conclusion the simple wording that it requires. In Christian terminology to say: 'Hence I am God Almighty' sounds both blasphemous and lunatic. But please disregard these connotations for the moment. In itself, the insight is not new. From the early great Upanishads the recognition ATHMAN = BRAHMAN (the personal self equals the omnipresent, all-comprehending eternal self) was in Indian thought considered, far from being blasphemous, to represent the quintessence of deepest insight into the happenings of the world. The striving of all the scholars of Vedanta was, after having learnt to pronounce with their lips, really to assimilate in their minds this grandest of all thoughts. Again, the mystics of many centuries, independently, yet in perfect harmony with each other, have described the unique experience of his or her life in terms that can be condensed in the phrase: DEUS FACTUS SUM (I have become God). To Western ideology the thought has remained a stranger, in spite of Schopenhauer and others who stood for it and in spite of those true lovers who, as they look into each other's eyes, become aware that their thought and their joy are numerically one - not merely similar or identical; but they, as a rule, are emotionally too busy to indulge in clear thinking, in which respect they very much resemble the mystic.

Allow me a few further comments. Consciousness is never experienced in the plural, only in the singular. Even in the pathological cases of split consciousness or double personality the two persons alternate; they are never manifest simultaneously. In a dream we do perform several characters at the same time, but not indiscriminately: we are one of them; in him we act and speak directly, while we often eagerly await the answer or response of another person, unaware of the fact that it is we who control his movements and his speech just as much as our own.

How does the idea of plurality (so emphatically opposed by the Upanishad writers) arise at all? Consciousness finds itself intimately connected with, and dependent on, the physical state of a limited region of matter, the body. (Consider the changes of mind during the development of the body, or the effects of fever, intoxication, narcosis, lesion of the brain and so on.) Now there is a great plurality of similar bodies. Hence the pluralization of consciousnesses or minds seems a very suggestive hypothesis. It leads almost immediately to the invention of souls, as many as there are bodies, and to the question whether they are mortal as the body is, or whether they are immortal. Much sillier questions have been asked: it has even been questioned whether women, or only men, have souls. Such consequences, even if only tentative, must make us suspicious of the plurality hypothesis. Are we not inclining to much greater nonsense if, in discarding gross superstitions, we retain the naive idea of a plurality of souls, but 'remedy' it by declaring the souls to be perishable, to be annihilated with the respective bodies? The only possible alternative is simply to keep to the immediate experience that consciousness is a singular of which the plural is unknown; that what seems to be a plurality is merely a series of different aspects of one thing, in the same way that Gaurisankar and Mt Everest turned out to be the same peak seen from different valleys. There are, of course, elaborate ghost-stories fixed in our minds to hamper our acceptance of such simple recognition: it has been said, for example, that the real tree outside my window throws an image of itself into my consciousness, and that is what I perceive; I see my tree and you see yours, and what the tree in itself is we do not know. For this extravagance Kant is responsible.
In the order of ideas which regards consciousness as a singulare tantum, each of us nevertheless has the indisputable impression that the sum total of his own experience and memory forms a unit, quite distinct from that of any other person. He refers to it as 'I'. What is this 'I'? If you analyse it closely you will, I think, find that it is just a little more than a collection of single data (experiences and memories), namely the canvas upon which they are collected. And you will, on close introspection, find that what you really mean by 'I' is that ground-stuff upon which they are collected. 'The youth that was I': you may come to speak of him in the third person; indeed the protagonist of the novel you are reading may feel nearer to your heart.

This section refers to statistical mechanics, a field which explains the thermodynamic behavior of large systems. Microscopic mechanical behavior (on the order of atoms) does not contain concepts like temperature, heat, or entropy; but when you assemble a system of many atoms, statistical mechanics shows how these concepts that Erwin refers to arise from the natural uncertainty about the state of the system.

What Is Life? The Physical Aspect of the Living Cell is based on a course of public lectures that Schrödinger gave in February 1943 to an audience of about 400. Schrödinger's lecture focuses on one important question: "how can the events in space and time which take place within the spatial boundary of a living organism be accounted for by physics and chemistry?" The most significant contribution of this fascinating book is the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. Even though the existence of DNA had been known since 1869, its role in reproduction and its helical shape were still unknown at the time of Schrödinger's lecture, and his idea stimulated scientists to search for the genetic molecule in the 1950s. Schrödinger's "aperiodic crystal" can be viewed as a well-reasoned theoretical prediction of what biologists should have been looking for during their search for genetic material. Both James D. Watson and, independently, Francis Crick, co-discoverers of the structure of DNA, credited Schrödinger's book with presenting an early theoretical description of how the storage of genetic information would work, and each respectively acknowledged the book as a source of inspiration for their initial research.

Brownian motion is the random motion of particles suspended in a fluid (a liquid or a gas) resulting from their collision with the fast-moving atoms or molecules in the gas or liquid. A classic demonstration is the motion of potassium permanganate particles in water.

Atomic size is the distance from the nucleus to the valence shell where the valence electrons are located. The size of an isolated atom cannot be measured precisely because it has no definite boundary: the electrons that surround the nucleus exist in an electron cloud. Atomic size is instead estimated by assuming that the radius of an atom is half the distance between adjacent atoms in a solid. This technique works best for metallic elements that form solids composed of extended planes of atoms of that element.

Paramagnetism is a form of magnetism whereby certain materials are attracted by an externally applied magnetic field and form internal, induced magnetic fields in the direction of the applied magnetic field; this is the effect behind the paramagnetism example Erwin discusses in the text.

Aperiodic crystal: "any crystal in which three-dimensional lattice periodicity can be considered to be absent".
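To make the Brownian-motion note above concrete, here is a minimal random-walk sketch (my own illustration, not from the annotated page): a single walker's path is completely irregular, but the mean square displacement of many walkers grows linearly in time, which is precisely the regular phenomenon of diffusion mentioned in the text.

```python
import random

# Illustration for the Brownian-motion note: irregular individual paths,
# regular collective spreading.  For a simple +/-1 random walk the mean
# square displacement equals t in expectation.

random.seed(1)
n_walkers, n_steps = 5000, 400
positions = [0] * n_walkers

for t in range(1, n_steps + 1):
    positions = [x + random.choice((-1, 1)) for x in positions]
    if t % 100 == 0:
        msd = sum(x * x for x in positions) / n_walkers
        print(f"step {t:3d}: mean square displacement = {msd:7.1f}  (theory: {t})")
```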
Erwin Schrödinger (August 1887 - January 1961) was a Nobel Prize-winning Austrian physicist who developed many of the fundamental results of quantum theory, including the basis for wave mechanics: he formulated the wave equation (the Schrödinger equation). Schrödinger was a prolific writer and thinker who authored works in many fields, including statistical mechanics, thermodynamics, color theory, electrodynamics, general relativity, and cosmology; he wrote about the philosophical aspects of science, ancient and oriental philosophical concepts, ethics, and religion, and came up with the "Schrödinger's cat" thought experiment.
New submissions for Fri, 16 Nov 18 [ total of 265 entries: 1-265 ]

[1]  arXiv:1811.05973 [pdf, other]
Title: Fault-Tolerant Metric Dimension of $P(n,2)$ with Prism Graph
Comments: 9 pages, 2 figures
Subjects: Combinatorics (math.CO)

Let $G$ be a connected graph and $d(a,b)$ be the distance between the vertices $a$ and $b$. A subset $U =\{u_1,u_2,\cdots,u_k\}$ of the vertices is called a resolving set for $G$ if for every two distinct vertices $a,b \in V(G)$, there is a vertex $u_\xi \in U$ such that $d(a,u_\xi)\neq d(b,u_\xi)$. A resolving set containing a minimum number of vertices is called a metric basis for $G$, and the number of vertices in a metric basis is its metric dimension, denoted by $dim(G)$. A resolving set $U$ for $G$ is fault-tolerant if $U \setminus \{u\}$ is also a resolving set for each $u \in U$, and the fault-tolerant metric dimension of $G$ is the minimum cardinality of such a set. In this paper we introduce the study of the fault-tolerant metric dimension of $P(n,2)$ with the prism graph.

[2]  arXiv:1811.06005 [pdf, ps, other]
Title: Factoring Non-negative Operator Valued Trigonometric Polynomials in Two Variables
Comments: 20 pages
Subjects: Functional Analysis (math.FA)

Using Schur complement techniques, it is shown that a non-negative operator valued trigonometric polynomial in two variables with degree $(d_1,d_2)$ can be written as a finite sum of hermitian squares of at most $2d_2$ analytic polynomials with degrees at most $(d_1, 2d_2-1)$. In analogy with the Tarski transfer principle in real algebra, when the coefficient space is finite dimensional, the proof lifts the problem to an ultraproduct, solves it there, and then shows that this implies the existence of a solution in the original context. The general result is obtained through a compactness argument. While the proof is non-constructive, it nevertheless leads to a constructive algorithm for the factorization.

[3]  arXiv:1811.06008 [pdf, ps, other]
Title: Four-body problem in d-dimensional space: ground state, (quasi)-exact-solvability. IV
Comments: 42 pages, 1 figure
Subjects: Mathematical Physics (math-ph)

Due to its great importance for applications, we generalize and extend the approach of our previous papers to study aspects of the quantum and classical dynamics of a $4$-body system with equal masses in $d$-dimensional space, with interaction depending only on mutual (relative) distances. The study is restricted to solutions in the space of relative motion which are functions of mutual (relative) distances only. The ground state (and some other states) in the quantum case and some trajectories in the classical case are of this type. We construct the quantum Hamiltonian for which these states are eigenstates. For $d \geq 3$, this describes a six-dimensional quantum particle moving in a curved space with a special $d$-independent metric in a certain $d$-dependent singular potential, while for $d=1$ it corresponds to a three-dimensional particle and coincides with the $A_3$ (4-body) rational Calogero model; the case $d=2$ is exceptional and is discussed separately. The kinetic energy of the system has a hidden $sl(7,{\bf R})$ Lie (Poisson) algebra structure, but for the special case $d=1$ it becomes degenerate with hidden algebra $sl(4,R)$.
We find an exactly-solvable four-body $S_4$-permutationally invariant, generalized harmonic oscillator-type potential as well as a quasi-exactly-solvable four-body sextic polynomial type potential with singular terms. Naturally, the tetrahedron whose vertices correspond to the positions of the particles provides pure geometrical variables, volume variables, that lead to exactly solvable models. Their generalization to the $n$-body system as well as the case of non-equal masses is briefly discussed.

[4]  arXiv:1811.06009 [pdf, ps, other]
Title: A translation of Y. Benoist's "Propriétés asymptotiques des groupes linéaires"
Authors: Ilia Smilga
Subjects: Group Theory (math.GR); Algebraic Geometry (math.AG)

This is a translation of Yves Benoist's "Propriétés asymptotiques des groupes linéaires", Geom. Funct. Anal., 7:1-47, 1997.

[5]  arXiv:1811.06022 [pdf, ps, other]
Title: On the multivariable generalization of Anderson-Apostol sums
Comments: 18 pages
Subjects: Number Theory (math.NT)

In this paper, we give various identities for the weighted average of the product of generalized Anderson-Apostol sums with weights concerning completely multiplicative functions, completely additive functions, logarithms, the Gamma function, the Bernoulli polynomials, and binomial coefficients.

[6]  arXiv:1811.06023 [pdf, ps, other]
Title: Distinguishing number of Urysohn metric spaces
Subjects: Combinatorics (math.CO)

The distinguishing number of a structure is the smallest size of a partition of its elements so that only the trivial automorphism of the structure preserves the partition. We show that for any countable subset of the positive real numbers, the corresponding countable Urysohn metric space, when it exists, has distinguishing number two or is infinite. While it is known that a sufficiently large finite primitive structure has distinguishing number two unless its automorphism group is the full symmetric group or the alternating group, the infinite case is open, and these countable Urysohn metric spaces provide further confirmation of the conjecture that every primitive homogeneous countably infinite structure has distinguishing number either two or infinite.

[7]  arXiv:1811.06033 [pdf, ps, other]
Title: Two time discretizations for gradient flows exactly replicating energy dissipation
Subjects: Numerical Analysis (math.NA)

The classical implicit Euler scheme fails to reproduce the exact dissipation dynamics of gradient flows: the discrete dissipation necessarily does not correspond to the energy drop. We discuss two modifications of the Euler scheme satisfying an exact energy equality at the discrete level. Existence of discrete solutions and their convergence as the fineness of the partition goes to zero are discussed. Eventually, we address extensions to generalized gradient flows, GENERIC flows, and curves of maximal slope in metric spaces.

[8]  arXiv:1811.06034 [pdf, other]
Title: Limit distributions of the upper order statistics for the Lévy-frailty Marshall-Olkin distribution
Subjects: Probability (math.PR)

The Marshall-Olkin (MO) copula has been recognized as a key model to capture dependency in reliability theory and in risk analysis. In this setting, the multivariate distribution is used to model the lifetimes of components of a system, where dependency between the lifetimes is induced by "shocks" that hit one or more components.
Of particular interest is the Lévy-frailty subfamily of the Marshall-Olkin (LF MO) distribution, since it has few parameters and because the nontrivial dependency structure is driven by an underlying Lévy subordinator process. The main contribution of our work is that we derive the precise asymptotic behavior of the upper order statistics of the LF MO distribution. More specifically, we consider a sequence of $n$ univariate random variables jointly distributed as a multivariate LF MO distribution and analyze the order statistics of the sequence as $n$ grows. Our main result states that if the underlying Lévy subordinator is in the normal domain of attraction of a stable distribution with stability index $\alpha$, then $\alpha=1$ is a critical value for the asymptotic behavior of the distribution of the upper order statistics. We show that, after certain logarithmic centering and scaling, the upper order statistics converge in distribution to a stable distribution if $\alpha>1$, or to a simple transformation of it if $\alpha \leq 1$. Our result is especially useful in network reliability and systemic risk, when modeling the lifetimes of the components in a system using the LF MO distribution, as it allows one to give easily computable confidence intervals for the lifetimes of components.

[9]  arXiv:1811.06055 [pdf, other]
Title: Minimax Rates in Network Analysis: Graphon Estimation, Community Detection and Hypothesis Testing
Authors: Chao Gao, Zongming Ma
Subjects: Statistics Theory (math.ST); Social and Information Networks (cs.SI); Machine Learning (stat.ML)

This paper surveys some recent developments in fundamental limits and optimal algorithms for network analysis. We focus on minimax optimal rates in three fundamental problems of network analysis: graphon estimation, community detection, and hypothesis testing. For each problem, we review state-of-the-art results in the literature, followed by general principles behind the optimal procedures that lead to minimax estimation and testing. This allows us to connect problems in network analysis to other statistical inference problems from a general perspective.

[10]  arXiv:1811.06057 [pdf, other]
Title: On the Robustness of Information-Theoretic Privacy Measures and Mechanisms
Subjects: Information Theory (cs.IT)

Consider a data publishing setting for a dataset composed of non-private features correlated with a set of private features not necessarily present in the dataset. The objective of the publisher is to maximize the amount of information about the non-private features in a revealed dataset (utility), while keeping the information leaked about the private attributes bounded (privacy). Here, both privacy and utility are measured using information leakage measures that arise in adversarial settings. We consider a practical setting wherein the publisher uses an estimate of the joint distribution of the features to design the privacy mechanism. For any privacy mechanism, we provide probabilistic upper bounds for the discrepancy between the privacy guarantees for the empirical and true distributions, and similarly for utility. These bounds follow from our main technical results regarding the Lipschitz continuity of the considered information leakage measures. We also establish the statistical consistency of the notion of optimal privacy mechanism.
Finally, we introduce and study the family of uniform privacy mechanisms: mechanisms designed upon an estimate of the joint distribution which are capable of providing privacy to a whole neighborhood of the estimated distribution and, thereby, guaranteeing privacy for the true distribution with high probability.

[11]  arXiv:1811.06062 [pdf, ps, other]
Title: Central limit theorems with a rate of convergence for sequences of transformations
Authors: Olli Hella
Subjects: Probability (math.PR); Dynamical Systems (math.DS)

Using Stein's method, we prove an abstract result that yields multivariate central limit theorems with a rate of convergence for time-dependent dynamical systems. As examples we study a model of expanding circle maps and a quasistatic model. In both models we prove multivariate central limit theorems with a rate of convergence.

[12]  arXiv:1811.06064 [pdf, ps, other]
Title: Lattice bijections for string modules, snake graphs and the weak Bruhat order
Comments: 17 pages
Subjects: Representation Theory (math.RT); Combinatorics (math.CO)

In this paper we introduce abstract string modules and give an explicit bijection between the submodule lattice of an abstract string module and the perfect matching lattice of the corresponding abstract snake graph. In particular, we make explicit the direct correspondence between a submodule of a string module and the perfect matching of the corresponding snake graph. For every string module, we define a Coxeter element in a symmetric group, and we establish a bijection between these lattices and the interval in the weak Bruhat order determined by the Coxeter element. Using the correspondence between string modules and snake graphs, we give a new concise formulation of snake graph calculus.

[13]  arXiv:1811.06070 [pdf, ps, other]
Title: Effective Primality Test for $p2^n+1$, $p$ prime, $n>1$
Authors: Tejas R. Rao
Comments: 3 pages
Subjects: Number Theory (math.NT)

We develop a simple $O((\log n)^2)$ test as an extension of Proth's test for the primality of $p2^n+1$, $p>2^n$. This allows for the determination of large, non-Sierpinski primes $p$ and the smallest $n$ such that $p2^n+1$ is prime. If $p$ is a non-Sierpinski prime, then for all $n$ where $p2^n+1$ passes the initial test, $p2^n+1$ is prime with $3$ as a primitive root or is primover and divides the base $3$ Fermat number $GF(3,n-1)$. We determine the form the factors of any composite overpseudoprime that passes the initial test take by determining the form that factors of $GF(3,n-1)$ take.

[14]  arXiv:1811.06076 [pdf, ps, other]
Title: On singularities of dynamic response functions in the massless regime of the XXZ spin-1/2 chain
Authors: K. K. Kozlowski
Comments: 115 pages, 6 figures
Subjects: Mathematical Physics (math-ph); Statistical Mechanics (cond-mat.stat-mech); Exactly Solvable and Integrable Systems (nlin.SI)

This work extracts, by means of an exact analysis, the singular behaviour of the dynamical response functions, the Fourier transforms of dynamical two-point functions, in the vicinity of the various excitation thresholds in the massless regime of the XXZ spin-1/2 chain. The analysis yields the edge exponents and associated amplitudes which describe the local behaviour of the response function near a threshold. The singular behaviour is derived starting from first principle considerations: the method of analysis does not rely, at any stage, on some hypothetical correspondence with a field theory or other phenomenological approaches.
The analysis builds on the massless form factor expansion for the response functions of the XXZ chain obtained recently by the author. It confirms the predictions based on the non-linear Luttinger liquid approach for the power-law behaviour and the associated edge exponents which arise in the vicinity of the dispersion relation of one massive excitation (hole, particle or bound state). In addition, the present analysis shows that, due to the lack of strict convexity of the particles' dispersion relation and due to the presence of slow velocity branches of the bound states, there exist excitation thresholds with a different structure of edge exponents. These originate from multi-particle/hole/bound state excitations maximising the energy at fixed momentum.

[15]  arXiv:1811.06077 [pdf, ps, other]
Title: On conjugates and the asymptotic distortion of 1-dimensional $C^{1+bv}$ diffeomorphisms
Authors: Andrés Navas
Subjects: Dynamical Systems (math.DS)

We show that a $C^{1+bv}$ circle diffeomorphism with absolutely continuous derivative and irrational rotation number can be conjugated into diffeomorphisms that are $C^{1+bv}$-arbitrarily close to the corresponding rotation. This improves a theorem of M. Herman, who established the same result starting with a $C^2$ diffeomorphism. We prove that the same holds for Abelian groups of diffeomorphisms acting freely on the circle, a result which is new even in the $C^{\infty}$ context. Related results and examples concerning the asymptotic distortion of diffeomorphisms are presented.

[16]  arXiv:1811.06092 [pdf, other]
Title: Massively parallel computations in algebraic geometry - not a contradiction
Comments: 9 pages, 6 figures
Subjects: Algebraic Geometry (math.AG)

The design and implementation of parallel algorithms is a fundamental task in computer algebra. Combining the computer algebra system Singular and the workflow management system GPI-Space, we have developed an infrastructure for massively parallel computations in commutative algebra and algebraic geometry. In this note, we give an overview of the current capabilities of this framework by looking into three sample applications: determining smoothness of algebraic varieties, computing GIT-fans in geometric invariant theory, and determining tropicalizations. These applications employ algorithmic methods originating from commutative algebra, sheaf structures on manifolds, local geometry, convex geometry, group theory, and combinatorics, illustrating the potential of the framework in further problems in computer algebra.

[17]  arXiv:1811.06093 [pdf, other]
Title: Computeralgebra - vom Vorlesungsthema zum Forschungsthema (Computer algebra - from lecture topic to research topic)
Authors: Janko Boehm
Comments: German, 5 pages
Subjects: History and Overview (math.HO)

In this note for the joint meeting of DMV and GDM we illustrate with examples the role of computer algebra in university mathematics education. We discuss its potential in teaching algebra, but also computer algebra as a subject in its own right, its value in the context of practical programming projects, and its role as a research topic in student papers.

[18]  arXiv:1811.06097 [pdf, ps, other]
Title: On the Causal and Topological Structure of the $2$-Dimensional Minkowski Space
Subjects: Mathematical Physics (math-ph)

A list of all possible causal relations in the $2$-dimensional Minkowski space $M$ is exhausted, based on the duality between timelike and spacelike in this particular case, and thirty topologies are introduced, all of them encapsulating the causal structure of $M$.
Possible generalisations of these results are discussed.

[19]  arXiv:1811.06102 [pdf, ps, other]
Title: CHL Calabi-Yau threefolds: Curve counting, Mathieu moonshine and Siegel modular forms
Comments: 73 pages, including a 10-page appendix by Sheldon Katz and the second author. Comments welcome

A CHL model is the quotient of $\mathrm{K3} \times E$ by an order $N$ automorphism which acts symplectically on the K3 surface and acts by shifting by an $N$-torsion point on the elliptic curve $E$. We conjecture that the primitive Donaldson-Thomas partition function of elliptic CHL models is a Siegel modular form, namely the Borcherds lift of the corresponding twisted-twined elliptic genera which appear in Mathieu moonshine. The conjecture matches predictions of string theory by David, Jatkar and Sen. We use the topological vertex to prove several base cases of the conjecture. Via a degeneration to $\mathrm{K3} \times \mathbb{P}^1$ we also express the DT partition functions as a twisted trace of an operator on Fock space. This yields further computational evidence. An extension of the conjecture to non-geometric CHL models is discussed. We consider CHL models of order $N=2$ in detail. We conjecture a formula for the Donaldson-Thomas invariants of all order two CHL models in all curve classes. The conjecture is formulated in terms of two Siegel modular forms. One of them, a Siegel form for the Iwahori subgroup, has to our knowledge not yet appeared in physics. This discrepancy is discussed in an appendix with Sheldon Katz.

[20]  arXiv:1811.06107 [pdf, ps, other]
Title: Operator-Theoretical Treatment of Ergodic Theorem and Its Application to Dynamic Models in Economics
Authors: Shizhou Xu
Comments: 16 pages
Subjects: Probability (math.PR); Theoretical Economics (econ.TH)

The purpose of this paper is to study the time average behavior of Markov chains with transition probabilities being kernels of completely continuous operators, and therefore to provide a sufficient condition for a class of Markov chains that are frequently used in dynamic economic models to be ergodic. The paper reviews the time average convergence of the quasi-weakly complete continuity Markov operators to a unique projection operator. Also, it shows that a further assumption of quasi-strongly complete continuity reduces the dependence of the unique invariant measure on its corresponding initial distribution through ergodic decomposition, and therefore guarantees the Markov chain to be ergodic up to multiplication of constant coefficients. Moreover, a sufficient and practical condition is provided for the ergodicity in economic state Markov chains that are induced by exogenous random shocks and a correspondence between the exogenous space and the state space.

[21]  arXiv:1811.06108 [pdf, ps, other]
Title: Interpolative Fusions
Comments: Preliminary version, later versions may contain substantial changes, comments are welcome!
Subjects: Logic (math.LO)

We define the interpolative fusion of multiple theories over a common reduct, a notion that aims to provide a general framework to study model-theoretic properties of structures with randomness. In the special case where the theories involved are model complete, their interpolative fusion is precisely the model companion of their union. Several theories of model-theoretic interest are shown to be canonically bi-interpretable with interpolative fusions of simpler theories.
We initiate a systematic study of interpolative fusions by also giving general conditions for their existence and relating their properties to those of the individual theories from which they are built.

[22]  arXiv:1811.06110 [pdf, ps, other]
Title: Layered Belief Propagation for Low-complexity Large MIMO Detection Based on Statistical Beams
Comments: 6 pages, 4 figures, conference
Subjects: Information Theory (cs.IT)

This paper proposes a novel layered belief propagation (BP) detector with a concatenated structure of two different BP layers for low-complexity large multi-user multi-input multi-output (MU-MIMO) detection based on statistical beams. To reduce the computational burden and the circuit scale on the base station (BS) side, two-stage signal processing consisting of a slowly varying outer beamformer (OBF) and group-specific MU detection (MUD) for fast channel variations is effective. However, the dimensionality reduction of the equivalent channel based on the OBF results in significant performance degradation in subsequent spatial filtering detection. To compensate for the drawback, the proposed layered BP detector, which is designed for improving the detection capability by suppressing the intra- and inter-group interference in stages, is introduced as the post-stage processing of the OBF. Finally, we demonstrate the validity of our proposed method in terms of the bit error rate (BER) performance and the computational complexity.

[23]  arXiv:1811.06113 [pdf, ps, other]
Title: Learning from Past Bids to Participate Strategically in Day-Ahead Electricity Markets
Subjects: Optimization and Control (math.OC)

We consider the process of bidding by electricity suppliers in a day-ahead market context, where each supplier bids a linear non-decreasing function of her generating capacity with the goal of maximizing her individual profit given other competing suppliers' bids. Based on the submitted bids, the market operator schedules suppliers to meet demand during each hour and determines hourly market clearing prices. Eventually, this game-theoretic process reaches a Nash equilibrium when no supplier is motivated to modify her bid. However, solving the individual profit maximization problem requires information about rivals' bids, which is typically not available. To address this issue, we develop an inverse optimization approach for estimating rivals' production cost functions given historical market clearing prices and production levels. We then use these functions to bid strategically and compute Nash equilibrium bids. We present numerical experiments illustrating our methodology, showing good agreement between bids based on the estimated production cost functions and bids based on the true cost functions. We discuss an extension of our approach that takes into account network congestion resulting in location-dependent prices.

[24]  arXiv:1811.06116 [pdf, other]
Title: Green's functions for even order boundary value problems
Subjects: Classical Analysis and ODEs (math.CA)

In this paper we will show several properties of the Green's functions related to various boundary value problems of arbitrary even order. In particular, we will write the expression of the Green's functions related to the general differential operator of order $2n$ coupled to Neumann, Dirichlet and mixed boundary conditions, as a linear combination of the Green's functions corresponding to periodic conditions on a different interval.
This will allow us to ensure the constant sign of various Green's functions and to deduce spectral results. We will also show how to apply these results to deal with nonlinear boundary value problems. In particular, we will point out the fact that the existence of a pair of lower and upper solutions of a considered problem could imply the existence of a solution of another one with different boundary conditions.

[25]  arXiv:1811.06117 [pdf, ps, other]
Title: On multi-point resonant problems on the half-line
Subjects: Classical Analysis and ODEs (math.CA)

In this work we obtain sufficient conditions for the existence of bounded solutions of a resonant multi-point second-order boundary value problem with a fully differential equation. The noninvertibility of the linear part is overcome by a new perturbation technique, which allows us to obtain an existence result and a localization theorem. Our hypotheses are clearly much less restrictive than the ones existing in the literature and, moreover, they can be applied to higher order, resonant or non-resonant, boundary value problems defined on the half-line or even on the real line.

[26]  arXiv:1811.06118 [pdf, ps, other]
Title: Existence and multiplicity results for some generalized Hammerstein equations with a parameter
Subjects: Classical Analysis and ODEs (math.CA)

This paper considers the existence and multiplicity of fixed points for the integral operator
\[
\mathcal{T}u(t)=\lambda \int_{0}^{T}k(t,s)\,f(s,u(s),u^{\prime}(s),\dots,u^{(m)}(s))\,ds,\quad t\in \lbrack 0,T]\equiv I,
\]
where $\lambda >0$ is a positive parameter, $k:I\times I\rightarrow \mathbb{R}$ is a kernel function such that $k\in W^{m,1}\left( I\times I\right)$, $m$ is a positive integer with $m\geq 1$, and $f:I\times \mathbb{R}^{m+1}\rightarrow \lbrack 0,+\infty \lbrack$ is an $L^{1}$-Carathéodory function. The existence of solutions for these Hammerstein equations is obtained by fixed point index theory on a new type of cone. Therefore some assumptions must hold only for, at least, one of the derivatives of the kernel or, even, for the kernel, on a subset of the domain. Assuming some asymptotic conditions on the nonlinearity $f$, we get sufficient conditions for multiplicity of solutions. Two examples will illustrate the potentialities of the main results, namely the fact that the kernel function and/or some derivatives may only be positive on some subintervals, which can degenerate to a point. Moreover, an application of our method to general Lidstone problems improves the existing results in the literature in this field.

[27]  arXiv:1811.06121 [pdf, ps, other]
Title: Existence of solutions of integral equations defined in unbounded domains via spectral theory
Subjects: Classical Analysis and ODEs (math.CA)

In this work we study integral equations defined on the whole real line. Using a suitable Banach space, we look for solutions which satisfy a certain kind of asymptotic behavior. We will consider spectral theory in order to find fixed points of the integral operator.

[28]  arXiv:1811.06122 [pdf, other]
Title: The case for shifting the Renyi Entropy
Comments: 20 pages, 2 figures, 2 tables. arXiv admin note: text overlap with arXiv:1710.04728
Subjects: Information Theory (cs.IT)

We introduce a variant of the Rényi entropy definition that aligns it with the well-known Hölder mean: in the new formulation, the $r$-th order Rényi entropy is the logarithm of the inverse of the $r$-th order Hölder mean.
This brings about new insights into the relationship of the Rényi entropy to quantities close to it, like the information potential and the partition function of statistical mechanics. We also provide expressions that allow us to calculate the Rényi entropies from the Shannon cross-entropy and the escort probabilities. Finally, we discuss why shifting the Rényi entropy is fruitful in some applications.

[29]  arXiv:1811.06123 [pdf, other]
Title: Some Moderate Deviations for Ewens-Pitman Sampling Model
Authors: Youzhou Zhou
Comments: 21 pages, three figures
Subjects: Probability (math.PR); Statistics Theory (math.ST)

The Ewens-Pitman model has been successfully applied to various fields including Bayesian statistics. There are four important estimators $K_{n}$, $M_{l,n}$, $K_{m}^{(n)}$, $M_{l,m}^{(n)}$. In particular, $M_{1,n}$ and $M_{1,m}^{(n)}$ are related to discovery probability. Their asymptotic behavior, such as the large deviation principle, has already been discussed in [4], [1] and [2]. The moderate deviation principle is also discussed in [3] with some speed restriction. In this article, we will apply complex asymptotic analysis to show that this speed restriction is unnecessary.

[30]  arXiv:1811.06125 [pdf, ps, other]
Title: On Galois categories and perfectly reduced schemes
Authors: Clark Barwick
Subjects: Algebraic Geometry (math.AG)

It turns out that one can read off facts about schemes up to universal homeomorphism from their Galois categories. Here we propose a first modest slate of entries in a dictionary between the geometric features of a perfectly reduced scheme (or morphism of such) and the categorical properties of its Galois category (or functor of such). The main thing that makes this possible is Schröer's total separable closure.

[31]  arXiv:1811.06132 [pdf, ps, other]
Title: On certain subclasses of analytic functions associated with Poisson distribution series
Authors: B.A. Frasin
Subjects: Complex Variables (math.CV)

In this paper, we find necessary and sufficient conditions and inclusion relations for the Poisson distribution series $\mathcal{K}(m,z)=z+\sum_{n=2}^\infty \frac{m^{n-1}}{(n-1)!}e^{-m}z^{n}$ to belong to the subclasses $\mathcal{S}(k,\lambda )$ and $\mathcal{C}(k,\lambda )$ of analytic functions with negative coefficients. Further, we consider the integral operator $\mathcal{G}(m,z) = \int_0^z \frac{\mathcal{F}(m,\zeta)}{\zeta} d\zeta$ belonging to the above classes.

[32]  arXiv:1811.06134 [pdf, ps, other]
Title: Gallai-Ramsey numbers for a class of graphs with five vertices
Authors: Xihe Li, Ligong Wang
Comments: 12 pages, 1 figure
Subjects: Combinatorics (math.CO)

Given two graphs $G$ and $H$, the $k$-colored Gallai-Ramsey number $gr_k(G : H)$ is defined to be the minimum integer $n$ such that every $k$-coloring of the complete graph on $n$ vertices contains either a rainbow copy of $G$ or a monochromatic copy of $H$. In this paper, we consider $gr_k(K_3 : H)$ where $H$ is a connected graph with five vertices and at most six edges. There are in total thirteen graphs in this graph class, and the Gallai-Ramsey numbers for some of them have been studied step by step in several papers. We determine all the Gallai-Ramsey numbers for the remaining graphs, and we also obtain some related results for a class of unicyclic graphs.
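A small aside on the definition in the entry above (my own toy computation, not from the paper): with $k=2$ colors a rainbow triangle cannot occur, so $gr_2(K_3 : K_3)$ reduces to the classical Ramsey number $R(3,3)=6$, which a brute-force search over all 2-colorings can confirm directly.

```python
from itertools import combinations, product

# Toy check that R(3,3) = 6: every 2-coloring of the edges of K_6 contains
# a monochromatic triangle, while K_5 admits a coloring with none.

def has_mono_triangle(n, coloring):
    edges = list(combinations(range(n), 2))
    color = dict(zip(edges, coloring))
    return any(color[(a, b)] == color[(a, c)] == color[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    n_edges = n * (n - 1) // 2
    return all(has_mono_triangle(n, c) for c in product((0, 1), repeat=n_edges))

print(every_coloring_has_mono_triangle(5))  # False: the pentagon coloring escapes
print(every_coloring_has_mono_triangle(6))  # True: 6 vertices force a triangle
```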
[33]  arXiv:1811.06137 [pdf, ps, other]
Title: Forbidden rainbow subgraphs that force large monochromatic or multicolored connected subgraphs
Authors: Xihe Li, Ligong Wang
Comments: 15 pages, 1 figure
Subjects: Combinatorics (math.CO)

We consider a forbidden structure condition that implies the existence of a large highly connected monochromatic subgraph. Fujita and Magnant (SIAM J. Discrete Math. 27 (2013), 1625-1637) proposed the following question: let $n, k, m$ be positive integers with $n\gg m\gg k$; for which graphs $G$ of order at least 3 is there a $k$-connected monochromatic subgraph of order at least $n-f(G,k,m)$ in any rainbow $G$-free coloring of $K_n$ using $m$ colors? Fujita and Magnant settled the above question when the rainbow subgraph $G$ is connected. In this paper, we classify the disconnected graphs that satisfy this question. Namely, we prove that the set of such disconnected graphs is $\{(i-2)K_2\cup G\ |\ G\in \mathcal{H}^{(2)},\ i=2, 3, \ldots\}$, where the set $\mathcal{H}^{(2)}$ consists of precisely $P_3\cup P_4$, $K_2\cup P_6$, $K_2\cup K_3$, $K_2\cup P^{+}_4$ and their subgraphs of order at least 3 with component number 2. Our result, along with Fujita and Magnant's result, completes the solution to the above question. Furthermore, we show that for sufficiently large $n$, any rainbow $G$-free coloring of $K_n$ using $m$ colors contains a $k$-connected monochromatic subgraph of order at least $cn$, where $0< c < 1$ and $G \in \{P_3 \cup K_{1,3}, P_3 \cup K_3, P_3 \cup P_5\}$. Finally, we show that every Gallai-3-coloring of $K_n$ contains a $k$-connected subgraph of order $n-\lfloor\frac{k-1}{2}\rfloor$ using at most two colors for all integers $n\geq 7$ and $1\leq k \leq 3$.

[34]  arXiv:1811.06139 [pdf, other]
Title: Empirical Effects of Dynamic Human-Body Blockage in 60 GHz Communications
Subjects: Information Theory (cs.IT)

The millimeter wave (mmWave) bands and other high frequencies above 6 GHz have emerged as a central component of Fifth-Generation (5G) cellular standards to deliver high data rates and ultra-low latency. A key challenge in these bands is blockage from obstacles, including the human body. In addition to the reduced coverage, blockage can result in highly intermittent links where the signal quality varies significantly with motion of obstacles in the environment. The blockages have widespread consequences throughout the protocol stack, including beam tracking, link adaptation, cell selection, handover and congestion control. Accurately modeling these blockage dynamics is therefore critical for the development and evaluation of potential mmWave systems. In this work, we present a novel spatial dynamic channel sounding system based on phased array transmitters and receivers operating at 60 GHz. Importantly, the sounder can measure multiple directions rapidly at high speed to provide detailed spatial dynamic measurements of complex scenarios. The system is demonstrated in an indoor home-entertainment type setting with multiple moving blockers. Preliminary results are presented on analyzing this data, with a discussion of the open issues towards developing statistical dynamic models.

[35]  arXiv:1811.06141 [pdf, ps, other]
Title: Self-similar solutions to the derivative nonlinear Schrödinger equation
Comments: 16 pages, no figures
Subjects: Analysis of PDEs (math.AP)

A class of self-similar solutions to the derivative nonlinear Schrödinger equations is studied.
In particular, the asymptotics of the profile functions are shown to possess a logarithmic phase correction. This logarithmic phase correction is obtained from the nonlinear interaction of the profile functions. This is a remarkable difference from the pseudo-conformally invariant case, where the logarithmic correction comes from the linear part of the equations of the profile functions.

[36]  arXiv:1811.06144 [pdf, ps, other]
Title: Adaptive Full-Duplex Jamming Receiver for Secure D2D Links in Random Networks
Subjects: Information Theory (cs.IT)

Device-to-device (D2D) communication raises new transmission secrecy protection challenges, since conventional physical layer security approaches, such as multiple antennas and cooperation techniques, are invalid due to its resource/size constraints. The full-duplex (FD) jamming receiver, which radiates jamming signals to confuse eavesdroppers while receiving the desired signal simultaneously, is a promising candidate. Unlike existing endeavors that assume the FD jamming receiver always improves the secrecy performance compared with the half-duplex (HD) receiver, we show that this assumption highly depends on the instantaneous residual self-interference cancellation level and may be invalid. We propose an adaptive jamming receiver operating in a switched FD/HD mode for a D2D link in random networks. Subject to the secrecy outage probability constraint, we optimize the transceiver parameters, such as signal/jamming powers, secrecy rates and mode switch criteria, to maximize the secrecy throughput. Most of the optimization operations are taken off-line and only very limited on-line calculations are required, keeping the complexity of the scheme low. Furthermore, some interesting insights are provided, such as the fact that the secrecy throughput is a quasi-concave function. Numerical results are demonstrated to verify our theoretical findings, and to show the scheme's superiority compared with a receiver operating in the FD or HD mode only.

[37]  arXiv:1811.06153 [pdf, ps, other]
Title: Global Stability of Boltzmann Equation with Large External Potential for a Class of Large Oscillation Data
Subjects: Analysis of PDEs (math.AP)

In this paper, we investigate the stability of the Boltzmann equation with a large external potential in $\mathbb{T}^3$. For a class of initial data with large oscillations in $L^\infty_{x,v}$ around the local Maxwellian, we prove the existence of a global solution to the Boltzmann equation provided the initial perturbation is suitably small in $L^2$-norm. The large time behavior of the Boltzmann solution with exponential decay rate is also obtained. This seems to be the first result on the perturbation theory of large-amplitude non-constant equilibria for large-amplitude initial data.

[38]  arXiv:1811.06154 [pdf, ps, other]
Title: On the breakdown of solutions to the incompressible Euler equations with free surface boundary
Authors: Daniel Ginsberg
Subjects: Analysis of PDEs (math.AP)

We prove a continuation criterion for incompressible liquids with free surface boundary. We combine the energy estimates of Christodoulou and Lindblad with an analog of the estimate due to Beale, Kato, and Majda for the gradient of the velocity in terms of the vorticity, and use this to show that the solution can be continued so long as the second fundamental form and injectivity radius of the free boundary, the vorticity, and one derivative of the velocity on the free boundary remain bounded, assuming that the Taylor sign condition holds.
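For context on entry [38] (a standard fact recalled here, not a claim from the paper): the classical Beale-Kato-Majda criterion for the incompressible Euler equations states that a smooth solution persists up to time T as long as

    \[\int_0^T \|\omega(t)\|_{L^\infty}\,dt < \infty,\]

where \omega = \nabla \times u is the vorticity. The entry above develops an analogue of the underlying vorticity estimate adapted to the free-boundary setting.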
[39]  arXiv:1811.06155 [pdf, other]
Title: Cops and robbers on oriented graphs
Comments: 21 pages, 4 figures
Subjects: Combinatorics (math.CO); Discrete Mathematics (cs.DM)

We consider the well-studied cops and robbers game in the context of oriented graphs, which has received surprisingly little attention to date. We examine the relationship between the cop numbers of an oriented graph and its underlying undirected graph, giving a surprising result that there exists at least one graph $G$ for which every strongly connected orientation of $G$ has cop number strictly less than that of $G$. We also refute a conjecture on the structure of cop-win digraphs, study orientations of outerplanar graphs, and study the cop number of line digraphs. Finally, we consider some aspects of optimal play, in particular the capture time of cop-win digraphs and properties of the relative positions of the cop(s) and robber.

[40]  arXiv:1811.06158 [pdf, other]
Title: Regularity of inverse mean curvature flow in asymptotically hyperbolic manifolds with dimension $3$
Comments: 23 pages, 1 figure, All comments are welcome
Subjects: Differential Geometry (math.DG)

By making use of the nice behavior of the Hawking mass of the slices of a weak solution of the inverse mean curvature flow in three-dimensional asymptotically hyperbolic manifolds, we are able to show that each slice of the flow is star-shaped after a long time, and then we get the regularity of the weak solution of the inverse mean curvature flow in asymptotically AdS-Schwarzschild manifolds with total mass $m> 0$.

[41]  arXiv:1811.06160 [pdf, ps, other]
Title: Intersecting Families of Perfect Matchings
Authors: Nathan Lindzey
Comments: 40 pages, 1 figure
Subjects: Combinatorics (math.CO)

A family of perfect matchings of $K_{2n}$ is $t$-intersecting if any two members share $t$ or more edges. We prove for any $t \in \mathbb{N}$ that every $t$-intersecting family of perfect matchings has size no greater than $(2(n-t) - 1)!!$ for sufficiently large $n$, and that equality holds if and only if the family is composed of all perfect matchings that contain a fixed set of $t$ disjoint edges. This is an asymptotic version of a conjecture of Godsil and Meagher that can be seen as the non-bipartite analogue of the Deza-Frankl conjecture proven by Ellis, Friedgut, and Pilpel.

[42]  arXiv:1811.06161 [pdf, ps, other]
Title: Existence of mass-conserving weak solutions to the singular coagulation equation with multiple fragmentation
Subjects: Analysis of PDEs (math.AP)

In this paper we study the continuous coagulation and multiple fragmentation equation for the mean-field description of a system of particles, taking into account the combined effect of the coagulation and fragmentation processes, in which particles grow by successive mergers into bigger ones and a larger particle splits into a finite number of smaller pieces. We demonstrate the global existence of mass-conserving weak solutions for a wide class of coagulation rates, selection rates and breakage functions. Here, both the breakage function and the coagulation rate may have algebraic singularity on both coordinate axes. The proof of the existence result is based on a weak $L^1$ compactness method applied to two different suitable approximations of the original problem, i.e. the conservative and non-conservative approximations. Moreover, the mass-conservation property of solutions is established for both approximations.
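A numerical aside on entry [42] (my own toy discretization; the paper's setting is the continuous equation with singular rates, whereas this sketch uses the simplest possible case): the discrete Smoluchowski coagulation system with constant kernel and no fragmentation conserves the total mass \sum_k k n_k, which a forward-Euler integration makes visible up to truncation and time-step error.

```python
# Toy forward-Euler integration of the truncated discrete Smoluchowski
# coagulation system with constant kernel K(i,j) = 1 and no fragmentation:
#   dn_k/dt = (1/2) * sum_{i+j=k} n_i n_j  -  n_k * sum_j n_j.
# The total mass sum_k k*n_k should stay near 1, apart from the error
# introduced by truncating at K_MAX and by the finite time step.

K_MAX, DT, STEPS = 200, 0.01, 500
n = [0.0] * (K_MAX + 1)     # n[k] = number density of size-k clusters
n[1] = 1.0                  # all mass initially in size-1 clusters

for _ in range(STEPS):
    total = sum(n[1:])
    gain = [0.0] * (K_MAX + 1)
    for k in range(2, K_MAX + 1):
        gain[k] = 0.5 * sum(n[i] * n[k - i] for i in range(1, k))
    n = [0.0] + [n[k] + DT * (gain[k] - n[k] * total)
                 for k in range(1, K_MAX + 1)]

mass = sum(k * n[k] for k in range(1, K_MAX + 1))
print(f"total mass at t = {STEPS * DT:.1f}: {mass:.4f} (initially 1.0)")
```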
[43]  arXiv:1811.06168 [pdf, other]
Title: Two quasi-local masses evaluated on surfaces with boundary
Authors: Xiaoxiang Chai
Comments: 18 pages; comments welcome
Subjects: Differential Geometry (math.DG)

We study the Hawking mass and Huisken's isoperimetric mass evaluated on surfaces with boundary. Convergence to an ADM mass defined on an asymptotically flat manifold with a non-compact boundary is proved.

[44]  arXiv:1811.06172 [pdf, ps, other]
Title: The autoregression bootstrap for kernel estimates of smooth nonlinear functional time series
Subjects: Statistics Theory (math.ST)

Functional time series have become an integral part of both functional data and time series analysis. This paper deals with the functional autoregressive model of order 1 and the autoregression bootstrap for smooth functions. The regression operator is estimated in the framework developed by Ferraty and Vieu [2004] and Ferraty et al. [2007], which is here extended to the double functional case under an assumption of stationary ergodic data which dates back to Laib and Louani [2010]. The main result of this article is the characterization of the asymptotic consistency of the bootstrapped regression operator.

[45]  arXiv:1811.06174 [pdf, ps, other]
Title: On a Central Binomial Series Related to Zeta(4)
Authors: Vivek Kaushik
Subjects: Classical Analysis and ODEs (math.CA)

We prove a classical binomial coefficient series identity $\sum_{n \geq 1} \frac{1}{n^4 \binom{2n}{n}}=\frac{17 \pi^4}{3240}$ by evaluating a double integral equal to this sum in two ways. The latter way leads us to evaluating a sum of polylogarithmic integrals, whose values are linear combinations of $\zeta(2)=\sum_{n \geq 1} \frac{1}{n^2}=\frac{\pi^2}{6}$ and $\zeta(4)=\sum_{n \geq 1} \frac{1}{n^4}=\frac{\pi^4}{90}$.

[46]  arXiv:1811.06175 [pdf, ps, other]
Title: Homotopy structures of smooth CW complexes
Comments: 19 pages
Subjects: Algebraic Topology (math.AT)

In this paper we present the notion of smooth CW complexes and study their homotopy structures in the category of diffeological spaces. In particular, we will introduce the homotopy extension property, the cellular approximation theorem and the Whitehead theorem for smooth CW complexes.

[47]  arXiv:1811.06178 [pdf, other]
Title: Monochromatic Schur triples in randomly perturbed dense sets of integers
Comments: 6 pages
Subjects: Combinatorics (math.CO)

Given a dense subset $A$ of the first $n$ positive integers, we provide a short proof showing that for $p=\omega(n^{-2/3})$ the so-called randomly perturbed set $A \cup [n]_p$ a.a.s. has the property that any $2$-colouring of it has a monochromatic Schur triple, i.e. a triple of the form $(a,b,a+b)$. This result is optimal since there are dense sets $A$ for which $A\cup [n]_p$ does not possess this property for $p=o(n^{-2/3})$.

[48]  arXiv:1811.06180 [pdf, ps, other]
Title: A q-analogue and a symmetric function analogue of a result by Carlitz, Scoville and Vaughan
Authors: Yifei Li
Subjects: Combinatorics (math.CO)

We derive an equation that is analogous to a well-known symmetric function identity: $\sum_{i=0}^n(-1)^ie_ih_{n-i}=0$. Here the elementary symmetric function $e_i$ is the Frobenius characteristic of the representation of $\mathcal{S}_i$ on the top homology of the subset lattice $B_i$, whereas our identity involves the representation of $\mathcal{S}_n\times \mathcal{S}_n$ on the Segre product of $B_n$ with itself.
We then obtain a q-analogue of a polynomial identity given by Carlitz, Scoville and Vaughan through examining the Segre product of the subspace lattice $B_n(q)$ with itself. We recognize the connection between the Euler characteristic of the Segre product of $B_n(q)$ with itself and the representation on the Segre product of $B_n$ with itself by recovering our polynomial identity from specializing the identity on the representation of $\mathcal{S}_i\times \mathcal{S}_i$. [49]  arXiv:1811.06184 [pdf, other] Title: Electric Vehicle Valet Comments: 8 pages, 3 figures Subjects: Optimization and Control (math.OC); Computational Engineering, Finance, and Science (cs.CE) We propose a novel way to use Electric Vehicles (EVs) as dynamic mobile energy storage with the goal of supporting grid balancing during peak load times. EVs seeking parking in a busy/expensive inner city area can get free parking with a valet company in exchange for being utilized for grid support. The valet company would have an agreement with the local utility company to receive varying rewards for discharging EVs at designated times and locations of need (say, where power lines are congested). Given vehicle availabilities, the valet company would compute an optimal schedule of which vehicle to utilize where and when so as to maximize rewards collected. Our contributions are a detailed description of this new concept along with supporting theory to bring it to fruition. On the theory side, we provide new hardness results, as well as efficient algorithms with provable performance guarantees that we also test empirically. [50]  arXiv:1811.06185 [pdf, ps, other] Title: Special Unipotent Arthur Packets For Real Reductive Groups Subjects: Representation Theory (math.RT) We compute special unipotent Arthur packets for real reductive groups in many cases. We list the cases that lead to incomplete answers, and in those cases, provide a suitable set of representations that could lead to a complete description of the special Arthur packet. In the process of achieving this goal we classify theta forms of a given even complex nilpotent orbit, and find methods to compute the associated varieties of irreducible group representations. [51]  arXiv:1811.06188 [pdf, ps, other] Title: Gaitsgory's central sheaves via the diagrammatic Hecke category Authors: Ben Elias Comments: 146 pages and 146 figures! Color viewing essential Subjects: Representation Theory (math.RT) We initiate the study of Gaitsgory's central sheaves using complexes of Soergel bimodules, in extended affine type A. We conjecture that the complex associated to the standard representation can be flattened to a central complex in finite type A, which is the complex sought after in a conjecture of Gorsky-Negut-Rasmussen. [52]  arXiv:1811.06189 [pdf, other] Title: Equivariant Perturbation in Gomory and Johnson's Infinite Group Problem. VII. Inverse semigroup theory, closures, decomposition of perturbations Comments: 61 pages, 20 figures Subjects: Optimization and Control (math.OC) In this self-contained paper, we present a theory of the piecewise linear minimal valid functions for the 1-row Gomory-Johnson infinite group problem. The non-extreme minimal valid functions are those that admit effective perturbations. We give a precise description of the space of these perturbations as a direct sum of certain finite- and infinite-dimensional subspaces.
The infinite-dimensional subspaces have partial symmetries; to describe them, we develop a theory of inverse semigroups of partial bijections, interacting with the functional equations satisfied by the perturbations. Our paper provides the foundation for grid-free algorithms for the Gomory-Johnson model, in particular for testing extremality of piecewise linear functions whose breakpoints are rational numbers with huge denominators. [53]  arXiv:1811.06190 [pdf, ps, other] Title: The conjugacy problem in $GL(n,Z)$ Subjects: Group Theory (math.GR) We present a new algorithm that, given two matrices in $GL(n,Q)$, decides if they are conjugate in $GL(n,Z)$ and, if so, determines a conjugating matrix. We also give an algorithm to construct a generating set for the centraliser in $GL(n,Z)$ of a matrix in $GL(n,Q)$. We do this by reducing these problems respectively to the isomorphism and automorphism group problems for certain modules over rings of the form $\mathcal O_K[y]/(y^l)$, where $\mathcal O_K$ is the maximal order of an algebraic number field and $l \in \mathbb N$, and then provide algorithms to solve the latter. The algorithms are practical and our implementations are publicly available in Magma. [54]  arXiv:1811.06191 [pdf, ps, other] Title: On the Comparison of Measures of Convex Bodies via Projections and Sections Authors: Johannes Hosle Comments: 26 pages Subjects: Metric Geometry (math.MG); Classical Analysis and ODEs (math.CA) In this manuscript, we study the inequalities between measures of convex bodies implied by comparison of their projections and sections. Recently, Giannopoulos and Koldobsky proved that if convex bodies $K, L$ satisfy $|K|\theta^{\perp}| \le |L \cap \theta^{\perp}|$ for all $\theta \in S^{n-1}$, then $|K| \le |L|$. Firstly, we study the reverse question: in particular, we show that if $K, L$ are origin-symmetric convex bodies in John's position with $|K \cap \theta^{\perp}| \le |L|\theta^{\perp}|$ for all $\theta \in S^{n-1}$ then $|K| \le \sqrt{n}|L|$. The condition we consider is weaker than both the conditions $|K \cap \theta^{\perp}| \le |L \cap \theta^{\perp}|$ and $|K|\theta^{\perp}| \le |L|\theta^{\perp}|$ for all $\theta \in S^{n-1}$ that appear in the Busemann-Petty and Shephard problems respectively. Secondly, we appropriately extend the result of Giannopoulos and Koldobsky to various classes of measures possessing concavity properties, including log-concave measures. [55]  arXiv:1811.06192 [pdf, ps, other] Title: The fibration method over real function fields Comments: 27 pages Subjects: Algebraic Geometry (math.AG) Let $\mathbb R(C)$ be the function field of a smooth, irreducible projective curve over $\mathbb R$. Let $X$ be a smooth, projective, geometrically irreducible variety equipped with a dominant morphism $f$ onto a smooth projective rational variety with a smooth generic fibre over $\mathbb R(C)$. Assume that the cohomological obstruction introduced by Colliot-Th\'el\`ene is the only one to the local-global principle for rational points for the smooth fibres of $f$ over $\mathbb R(C)$-valued points. Then we show that the same holds for $X$, too, by adopting the fibration method similarly to Harpaz--Wittenberg. We also show that the strong vanishing conjecture for $n$-fold Massey products holds for fields of virtual cohomological dimension at most $1$ using a theorem of Haran.
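As promised under entry [45], the central binomial series can be checked numerically to high precision. This is a minimal sketch using the mpmath library; the identity itself is of course proved in the paper, not here.

    from mpmath import mp, nsum, binomial, pi, inf

    mp.dps = 30  # work with 30 significant digits

    lhs = nsum(lambda n: 1 / (n**4 * binomial(2 * n, n)), [1, inf])
    rhs = 17 * pi**4 / 3240

    print(lhs)             # approximately 0.5110970826
    print(abs(lhs - rhs))  # vanishes to working precision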
[56]  arXiv:1811.06201 [pdf, ps, other] Title: Exotic Steiner Chains in Miquelian Möbius Planes Comments: 21 pages, 3 figures Subjects: Combinatorics (math.CO) In the Euclidean plane, two intersecting circles or two circles which are tangent to each other clearly do not carry a finite Steiner chain. However, in this paper we will show that such exotic Steiner chains exist in finite Miquelian M\"obius planes of odd order. We state and prove explicit conditions in terms of the order of the plane and the capacitance of the two carrier circles $C_1$ and $C_2$ for the existence, length, and number of Steiner chains carried by $C_1$ and $C_2$. [57]  arXiv:1811.06205 [pdf, ps, other] Title: An analytic Chevalley-Shephard-Todd Theorem Comments: 9 pages Subjects: Complex Variables (math.CV) In this note, we prove two theorems extending the well known Chevalley-Shephard-Todd Theorem about the action of pseudo-reflection groups on the ring of polynomials to the setting of the ring of holomorphic functions. In the process, we obtain a purely algebraic determinantal formula that may also be of independent interest. [58]  arXiv:1811.06209 [pdf, ps, other] Title: Fano generalized Bott manifolds Authors: Yusuke Suyama Comments: 8 pages Subjects: Algebraic Geometry (math.AG); Combinatorics (math.CO) We give a necessary and sufficient condition for a generalized Bott manifold to be Fano or weak Fano. As a consequence we characterize Fano Bott manifolds. [59]  arXiv:1811.06215 [pdf, other] Title: Two delays induce Hopf bifurcation and double Hopf bifurcation in a diffusive Leslie-Gower predator-prey system Subjects: Dynamical Systems (math.DS) In this paper, the dynamics of a modified Leslie-Gower predator-prey system with two delays and diffusion is considered. By calculating stability switching curves, the stability of positive equilibrium and the existence of Hopf bifurcation and double Hopf bifurcation are investigated on the parametric plane of two delays. Taking two time delays as bifurcation parameters, the normal form on the center manifold near the double Hopf bifurcation point is derived, and the unfoldings near the critical points are given. Finally, we obtain the complex dynamics near the double Hopf bifurcation point, including the existence of quasi-periodic solutions on a 2-torus, quasi-periodic solutions on a 3-torus, and strange attractors. [60]  arXiv:1811.06218 [pdf, other] Title: Topological conjugation classes of tightly transitive subgroups of $\text{Homeo}_{+}(\mathbb{S}^1)$ Authors: Enhui Shi, Hui Xu Comments: 17 pages, 4 figures Subjects: Dynamical Systems (math.DS) Let $\text{Homeo}_{+}(\mathbb{S}^1)$ denote the group of orientation preserving homeomorphisms of the circle $\mathbb{S}^1$. A subgroup $G$ of $\text{Homeo}_{+}(\mathbb{S}^1)$ is tightly transitive if it is topologically transitive and no subgroup $H$ of $G$ with $[G: H]=\infty$ has this property; is almost minimal if it has at most countably many nontransitive points. In the paper, we determine all the topological conjugation classes of tightly transitive and almost minimal subgroups of $\text{Homeo}_{+}(\mathbb{S}^1)$ which are isomorphic to $\mathbb{Z}^n$ for any integer $n\geq 2$. 
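Entry [47] above turns on the notion of a monochromatic Schur triple. The following minimal sketch (with repetitions $a=b$ allowed, one common convention) verifies by brute force that every 2-colouring of $\{1,\ldots,5\}$ contains one, in line with the classical Schur number $S(2)=4$.

    from itertools import product

    def has_mono_schur_triple(colouring):
        # colouring: dict mapping each integer of the ground set to 0 or 1.
        # True iff some colour class contains a triple (a, b, a+b).
        S = set(colouring)
        return any(a + b in S and
                   colouring[a] == colouring[b] == colouring[a + b]
                   for a in S for b in S)

    n = 5
    assert all(has_mono_schur_triple(dict(zip(range(1, n + 1), bits)))
               for bits in product((0, 1), repeat=n))
    print("every 2-colouring of {1,...,%d} has a monochromatic Schur triple" % n)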
[61]  arXiv:1811.06221 [pdf, other] Title: A Schur transform for spatial stochastic processes Authors: James Mathews Comments: 9 pages, 1 figure Subjects: Statistics Theory (math.ST) The variance, higher order moments, covariance, and joint moments or cumulants are shown to be special cases of a certain tensor in $V^{\otimes n}$ defined in terms of a collection $X_1,...,X_n$ of $V$-valued random variables, for an appropriate finite-dimensional real vector space $V$. A statistical transform is proposed from such collections--finite spatial stochastic processes--to numerical tuples using the Schur-Weyl decomposition of $V^{\otimes n}$. It is analogous to the Fourier transform, replacing the periodicity group $\mathbb{Z}$, $\mathbb{R}$, or $U(1)$ with the permutation group $S_{n}$. As a test case, we apply the transform to one of the datasets used for benchmarking the Continuous Registration Challenge, the thoracic 4D Computed Tomography (CT) scans from the M.D. Anderson Cancer Center available for download from DIR-Lab. Further applications to morphometry and statistical shape analysis are suggested. [62]  arXiv:1811.06223 [pdf, ps, other] Title: Lipschitz stability in inverse source and inverse coefficient problems for a first- and half-order time-fractional diffusion equation Comments: 26 pages Subjects: Analysis of PDEs (math.AP) We consider inverse problems for the first- and half-order time-fractional equation. We establish stability estimates of Lipschitz type for inverse source and inverse coefficient problems by means of Carleman estimates. [63]  arXiv:1811.06236 [pdf, ps, other] Title: Semiorthogonal decompositions on Enriques surfaces and Derived Torelli Theorem Comments: 14 pages, comments are welcome Subjects: Algebraic Geometry (math.AG) We show that a general Enriques surface can be recovered from the Kuznetsov component of its bounded derived category of coherent sheaves. [64]  arXiv:1811.06239 [pdf, other] Title: Error and stability analysis of an anisotropic phase-field model for binary-fluid mixtures in the presence of magnetic-field Subjects: Numerical Analysis (math.NA) In this article, we study the error and stability of the proposed numerical scheme in order to solve a two dimensional anisotropic phase-field model with convection and an externally applied magnetic field in an isothermal solidification of binary alloys. The proposed numerical scheme is based on a mixed finite element method satisfying the CFL condition. A particular application with real physical parameters of Nickel-Copper (Ni-Cu) is considered in order to validate the numerical scheme employed. The results of the stability and error analysis are in complete accordance with the postulated theoretical error estimates, which demonstrates the efficiency of the presented method. [65]  arXiv:1811.06242 [pdf, other] Title: On the optimization of the fixed-stress splitting for Biot's equations Subjects: Numerical Analysis (math.NA) In this work we are interested in effectively solving the quasi-static, linear Biot model for poromechanics. We consider the fixed-stress splitting scheme, which is a popular method for iteratively solving Biot's equations. It is well-known that the convergence of the method is strongly dependent on the applied stabilization/tuning parameter. In this work, we propose a new approach to optimize this parameter. We show theoretically that it depends also on the fluid flow properties and not only on the mechanics properties and the coupling coefficient.
The type of analysis presented in this paper is not restricted to a particular spatial discretization. We only require it to be inf-sup stable. The convergence proof applies also to low-compressible or incompressible fluids and low-permeable porous media. Illustrative numerical examples, including random initial data, random boundary conditions or random source terms, as well as a well-known benchmark problem (Mandel's problem), are performed. The results are in good agreement with the theoretical findings. Furthermore, we show numerically that there is a connection between the inf-sup stability of discretizations and the performance of the fixed-stress splitting scheme. [66]  arXiv:1811.06247 [pdf, other] Title: Fundamental Limits of Caching in Heterogeneous Networks with Uncoded Prefetching Comments: arXiv admin note: substantial text overlap with arXiv:1809.09422 Subjects: Information Theory (cs.IT) The work explores the fundamental limits of coded caching in heterogeneous networks where multiple ($N_0$) senders/antennas, serve different users which are associated (linked) to shared caches, where each such cache helps an arbitrary number of users. Under the assumption of uncoded cache placement, the work derives the exact optimal worst-case delay and DoF, for a broad range of user-to-cache association profiles where each such profile describes how many users are helped by each cache. This is achieved by presenting an information-theoretic converse based on index coding that succinctly captures the impact of the user-to-cache association, as well as by presenting a coded caching scheme that optimally adapts to the association profile by exploiting the benefits of encoding across users that share the same cache. The work reveals a powerful interplay between shared caches and multiple senders/antennas, where we can now draw the striking conclusion that, as long as each cache serves at least $N_0$ users, adding a single degree of cache-redundancy can yield a DoF increase equal to $N_0$, while at the same time --- irrespective of the profile --- going from 1 to $N_0$ antennas reduces the delivery time by a factor of $N_0$. Finally some conclusions are also drawn for the related problem of coded caching with multiple file requests. [67]  arXiv:1811.06253 [pdf, ps, other] Title: Diameter of homogeneous spaces: an effective account Comments: 31 pages Subjects: Group Theory (math.GR); Dynamical Systems (math.DS); Number Theory (math.NT) In this paper we prove explicit estimates for the size of small lifts of points in homogeneous spaces. Our estimates are polynomially effective in the volume of the space and the injectivity radius. [68]  arXiv:1811.06254 [pdf, other] Title: Positive mass theorem and free boundary minimal surfaces Authors: Xiaoxiang Chai Comments: 25 pages, 2 figures; comments welcome Subjects: Differential Geometry (math.DG) Building on recent work of Almaraz, Barbosa and de Lima on positive mass theorems on asymptotically flat manifolds with a noncompact boundary, we apply free boundary minimal surface techniques to prove their positive mass theorem and study the existence of positive scalar curvature metrics with mean convex boundary on a connected sum of the form $(\mathbb{T}^{n-1}\times[0,1])\#M_0$.
[69]  arXiv:1811.06257 [pdf, other] Title: Generalized Attracting Horseshoe in the Rössler Attractor Comments: 20 pages, 9 figures, 1 table Subjects: Dynamical Systems (math.DS) We show that there is a mildly nonlinear three-dimensional system of ordinary differential equations - realizable by a rather simple electronic circuit - capable of producing a generalized attracting horseshoe map. A system specifically designed to have a Poincar\'{e} section yielding the desired map is described, but not pursued due to its complexity, which makes the construction of a circuit realization exceedingly difficult. Instead, the generalized attracting horseshoe and its trapping region are obtained by using a carefully chosen Poincar\'{e} map of the R\"{o}ssler attractor. Novel numerical techniques are employed to iterate the map of the trapping region to approximate the chaotic strange attractor contained in the generalized attracting horseshoe, and an electronic circuit is constructed to produce the map. Several potential applications of the idea of a generalized attracting horseshoe and a physical electronic circuit realization are proposed. [70]  arXiv:1811.06259 [pdf, ps, other] Title: Axiomatic approach to the theory of algorithms and relativized computability Authors: Alexander Shen (ESCAPE) Comments: English translation, 2018 Journal-ref: Vestnik Moskovskogo Universiteta, Ser. 1, Mathematics, mechanics, 1980, pp.27-29 Subjects: Logic (math.LO); Logic in Computer Science (cs.LO) It is well known that many theorems in recursion theory can be "relativized". This means that they remain true if partial recursive functions are replaced by functions that are partial recursive relative to some fixed oracle set. Uspensky formulates three "axioms" called "axiom of computation records", "axiom of programs" and "arithmeticity axiom". Then, using these axioms (more precisely, the first two) he proves basic results of the recursion theory. These two axioms are true also for the class of functions that are partial recursive relative to some fixed oracle set. Also this class is closed under substitution, primitive recursion and minimization ($\mu$-operator); these (intuitively obvious) closure properties are also used in the proofs. This observation made by Uspensky explains why many theorems of recursion theory can be relativized. It turns out that the reverse statement is also true: all relativizable results follow from the first two axioms and closure properties. Indeed, every class of partial functions that is closed under substitution, primitive recursion and minimization that satisfies the first two axioms is the class of functions that are partial recursive relative to some oracle set $A$. This is the main result of the present article. [71]  arXiv:1811.06268 [pdf, ps, other] Title: Galois theory of periods Authors: Annette Huber In this mostly expository note we explain how Nori's theory of motives achieves the aim of establishing a Galois theory of periods, at least under the period conjecture. We explain and compare different notions of periods and different versions of the period conjecture, and review the evidence by explaining the examples of Artin motives, mixed Tate motives and 1-motives. [72]  arXiv:1811.06269 [pdf, ps, other] Title: On Graphs with equal Dominating and C-dominating energy Subjects: Combinatorics (math.CO) Graph energy and domination in graphs are among the most studied areas of graph theory.
In this paper we make an attempt to connect these two areas of graph theory by introducing the c-dominating energy of a graph $G$. First, we show the chemical applications of c-dominating energy with the help of well-known statistical tools. Next, we obtain mathematical properties of c-dominating energy. Finally, we characterize trees, unicyclic graphs, cubic and block graphs with equal dominating and c-dominating energy. [73]  arXiv:1811.06273 [pdf, other] Title: On Infinite Prefix Normal Words Comments: 20 pages, 4 figures, accepted at SOFSEM 2019 (45th International Conference on Current Trends in Theory and Practice of Computer Science, Nov\'y Smokovec, Slovakia, January 27-30, 2019) Subjects: Combinatorics (math.CO) Prefix normal words are binary words that have no factor with more $1$s than the prefix of the same length (a direct checker for this property is sketched after entry [75] below). Finite prefix normal words were introduced in [Fici and Lipt\'ak, DLT 2011]. In this paper, we study infinite prefix normal words and explore their relationship to some known classes of infinite binary words. In particular, we establish a connection between prefix normal words and Sturmian words, between prefix normal words and abelian complexity, and between prefix normality and lexicographic order. [74]  arXiv:1811.06275 [pdf, ps, other] Title: Some Class of Linear Operators Involved in Functional Equations Subjects: Classical Analysis and ODEs (math.CA) Fix $N\in\mathbb N$ and assume that for every $n\in\{1,\ldots, N\}$ the functions $f_n\colon[0,1]\to[0,1]$ and $g_n\colon[0,1]\to\mathbb R$ are Lebesgue measurable, $f_n$ is almost everywhere approximately differentiable with $|g_n(x)|<|f'_n(x)|$ for almost all $x\in [0,1]$, there exists $K\in\mathbb N$ such that the set $\{x\in [0,1]:\mathrm{card}{f_n^{-1}(x)}>K\}$ is of Lebesgue measure zero, $f_n$ satisfy Luzin's condition N, and the set $f_n^{-1}(A)$ is of Lebesgue measure zero for every set $A\subset\mathbb R$ of Lebesgue measure zero. We show that the formula $Ph=\sum_{n=1}^{N}g_n\!\cdot\!(h\circ f_n)$ defines a linear and continuous operator $P\colon L^1([0,1])\to L^1([0,1])$, and then we obtain results on the existence and uniqueness of solutions $\varphi\in L^1([0,1])$ of the equation $\varphi=P\varphi+g$ with a given $g\in L^1([0,1])$. [75]  arXiv:1811.06283 [pdf, ps, other] Title: Irregular model sets and tame dynamics Comments: 22 pages We study the dynamical properties of irregular model sets and show that the translation action on their hull always admits an infinite independence set. The dynamics can therefore not be tame and the topological sequence entropy is strictly positive. Extending the proof to a more general setting, we further obtain that tame implies regular for almost automorphic group actions on compact spaces. In the converse direction, we show that even in the restrictive case of Euclidean cut and project schemes irregular model sets may be uniquely ergodic and have zero topological entropy. This provides negative answers to questions by Schlottmann and Moody in the Euclidean setting.
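As referenced in entry [73], prefix normality can be tested directly from the definition; here is a minimal (quadratic-time, purely illustrative) sketch.

    def is_prefix_normal(w):
        # w: a binary string; True iff no factor of w has more 1s
        # than the prefix of the same length.
        for length in range(1, len(w) + 1):
            prefix_ones = w[:length].count("1")
            if any(w[i:i + length].count("1") > prefix_ones
                   for i in range(1, len(w) - length + 1)):
                return False
        return True

    print(is_prefix_normal("1101"))  # True: every factor is dominated by a prefix
    print(is_prefix_normal("1011"))  # False: the factor "11" beats the prefix "10"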
[76]  arXiv:1811.06288 [pdf, ps, other] Title: On $C^1$-approximability of functions by solutions of second order elliptic equations on plane compact sets and $C$-analytic capacity Subjects: Classical Analysis and ODEs (math.CA); Analysis of PDEs (math.AP) Criteria for approximability of functions by solutions of homogeneous second order elliptic equations (with constant complex coefficients) in the norms of the Whitney $C^1$-spaces on compact sets in $\mathbb R^2$ are obtained in terms of the respective $C^1$-capacities. It is proved that the mentioned $C^1$-capacities are comparable to the classic $C$-analytic capacity, and so have a proper geometric measure characterization. [77]  arXiv:1811.06289 [pdf, ps, other] Title: On a new class of score functions to estimate tail probabilities of some stochastic processes with Adaptive Multilevel Splitting Subjects: Probability (math.PR); Computation (stat.CO) We investigate the application of the Adaptive Multilevel Splitting algorithm for the estimation of tail probabilities of solutions of Stochastic Differential Equations evaluated at a given time, and of associated temporal averages. We introduce a new, very general and effective family of score functions which is designed for these problems. We illustrate its behavior on a series of numerical experiments. In particular, we demonstrate how it can be used to estimate large deviation rate functionals for the long-time limit of temporal averages. [78]  arXiv:1811.06294 [pdf, ps, other] Title: Unique ergodicity for stochastic hyperbolic equations with additive space-time white noise Authors: Leonardo Tolomeo Subjects: Analysis of PDEs (math.AP); Probability (math.PR) In this paper, we consider a certain class of second order nonlinear PDEs with damping and space-time white noise forcing, posed on the $d$-dimensional torus. This class includes the wave equation for $d=1$ and the beam equation for $d\le 3$. We show that the Gibbs measure of the equation without forcing and damping is the unique invariant measure for the flow of this system. Since the flow does not satisfy the Strong Feller property, we introduce a new technique for showing unique ergodicity. This approach may also be useful in situations in which finite-time blowup is possible. [79]  arXiv:1811.06299 [pdf, ps, other] Title: Local theorems for arithmetic compound renewal processes, when Cramer's condition holds Subjects: Probability (math.PR) We continue the study of compound renewal processes (c.r.p.) under Cramer's moment condition (see [1]-[10], where the study of c.r.p. was started). In this paper arithmetic c.r.p. $Z(n)$ are studied. In such processes the random vector $\xi = (\tau,\zeta)$ has an arithmetic distribution, where $\tau > 0$ defines the distance between jumps and $\zeta$ defines the values of the jumps. For these processes the fine asymptotics in the local limit theorem for the probabilities $P(Z(n) = x)$ have been obtained in the Cramer deviation region of $x \in \mathbb{Z}$. In [6]-[10] the similar problem was solved for non-lattice c.r.p., where the vector $\xi = (\tau,\zeta)$ has a non-lattice distribution. [80]  arXiv:1811.06301 [pdf, other] Title: Stable discretizations of elastic flow in {R}iemannian manifolds Comments: 27 pages, 3 figures. This article is closely related to arXiv:1809.01973 Subjects: Numerical Analysis (math.NA); Differential Geometry (math.DG) The elastic flow, which is the $L^2$-gradient flow of the elastic energy, has several applications in geometry and elasticity theory.
We present stable discretizations for the elastic flow in two-dimensional Riemannian manifolds that are conformally flat, i.e. conformally equivalent to the Euclidean space. Examples include the hyperbolic plane, the hyperbolic disk, the elliptic plane as well as any conformal parameterization of a two-dimensional manifold in ${\mathbb R}^d$, $d\geq 3$. Numerical results show the robustness of the method, as well as quadratic convergence with respect to the space discretization. [81]  arXiv:1811.06304 [pdf, other] Title: Hopf-Hopf bifurcation and chaotic attractors in a delayed diffusive predator-prey model with fear effect Subjects: Dynamical Systems (math.DS) We investigate a diffusive predator-prey model by incorporating the fear effect into the prey population, since the fear of predators could visibly reduce the reproduction of prey. By introducing the mature delay as bifurcation parameter, we find this makes the predator-prey system more complicated and usually induces Hopf and Hopf-Hopf bifurcations. The formulas determining the properties of Hopf and Hopf-Hopf bifurcations by computing the normal form on the center manifold are given. Near the Hopf-Hopf bifurcation point we give the detailed bifurcation set by investigating the universal unfoldings. Moreover, we show the existence of quasi-periodic orbits on three-torus near a Hopf-Hopf bifurcation point, leading to a strange attractor when further varying the parameter. We also find the existence of Bautin bifurcation numerically, then simulate the coexistence of stable constant stationary solution and periodic solution near this Bautin bifurcation point. [82]  arXiv:1811.06309 [pdf, other] Title: Redundancy scheduling with scaled Bernoulli service requirements Subjects: Probability (math.PR) Redundancy scheduling has emerged as a powerful strategy for improving response times in parallel-server systems. The key feature in redundancy scheduling is replication of a job upon arrival by dispatching replicas to different servers. Redundant copies are abandoned as soon as the first of these replicas finishes service. By creating multiple service opportunities, redundancy scheduling increases the chance of a fast response from a server that is quick to provide service, and mitigates the risk of a long delay incurred when a single selected server turns out to be slow. The diversity enabled by redundant requests has been found to strongly improve the response time performance, especially in case of highly variable service requirements. Analytical results for redundancy scheduling are unfortunately scarce, however, and even the stability condition has largely remained elusive so far, except for exponentially distributed service requirements. In order to gain further insight into the role of the service requirement distribution, we explore the behavior of redundancy scheduling for scaled Bernoulli service requirements. We establish a sufficient stability condition for generally distributed service requirements and we show that, for scaled Bernoulli service requirements, this condition is also asymptotically nearly necessary. This stability condition differs drastically from the exponential case, indicating that the stability condition depends on the service requirements in a sensitive and intricate manner.
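Entry [82] studies redundancy scheduling. The toy Monte Carlo sketch below illustrates the first-replica-wins diversity effect; it ignores queueing and cancellation overheads, and uses a Pareto service-time distribution purely as a stand-in for "highly variable" requirements, not the scaled Bernoulli case analysed in the paper.

    import random

    random.seed(1)

    def service_time():
        # heavy-tailed stand-in for a highly variable service requirement
        return random.paretovariate(1.5)

    trials = 100_000
    single = sum(service_time() for _ in range(trials)) / trials
    print(f"single server: mean service {single:.3f}")
    for d in (2, 3, 4):
        repl = sum(min(service_time() for _ in range(d))
                   for _ in range(trials)) / trials
        print(f"{d} replicas:    mean service {repl:.3f}")

For Pareto($\alpha$) requirements the minimum of $d$ independent copies is again Pareto with index $d\alpha$, so the mean here drops from 3 to 1.5 already at $d=2$; the stability analysis in the paper concerns how such gains interact with queueing.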
[83]  arXiv:1811.06311 [pdf, ps, other] Title: Sum rules and large deviations for spectral matrix measures in the Jacobi ensemble Subjects: Probability (math.PR) We continue to explore the connections between large deviations for objects coming from random matrix theory and sum rules. This connection was established in [17] for spectral measures of classical ensembles (Gauss-Hermite, Laguerre, Jacobi) and it was extended to spectral matrix measures of the Hermite and Laguerre ensemble in [20]. In this paper, we consider the remaining case of spectral matrix measures of the Jacobi ensemble. Our main results are a large deviation principle for such measures and a sum rule for matrix measures with reference measure the Kesten-McKay law. As an important intermediate step, we derive the distribution of canonical moments of the matrix Jacobi ensemble. [84]  arXiv:1811.06312 [pdf, ps, other] Title: Construction of spectra of triangulated categories without tensor structure and applications to commutative rings Comments: 15 pages Subjects: Commutative Algebra (math.AC); Representation Theory (math.RT) In this paper, as an analogue of the spectrum of a tensor triangulated category introduced by Balmer, we define a spectrum of a triangulated category which does not necessarily admit a tensor structure. We apply it to some triangulated categories associated to a commutative noetherian ring. [85]  arXiv:1811.06319 [pdf, other] Title: From Domain Decomposition to Homogenization Theory Comments: Submitted for publication in the proceedings of the 25th International Conference on Domain Decomposition Methods Subjects: Numerical Analysis (math.NA) This paper rediscovers a classical homogenization result for a prototypical linear elliptic boundary value problem with periodically oscillating diffusion coefficient. Unlike classical analytical approaches such as asymptotic analysis, oscillating test functions, or two-scale convergence, the result is purely based on the theory of domain decomposition methods and standard finite elements techniques. The arguments naturally generalize to problems far beyond periodicity and scale separation and we provide a brief overview on such applications. [86]  arXiv:1811.06327 [pdf, ps, other] Title: The length and depth of real algebraic groups Authors: Damian Sercombe Subjects: Group Theory (math.GR) Let $G$ be a connected real algebraic group. An unrefinable chain of $G$ is a chain of subgroups $G=G_0>G_1>...>G_t=1$ where each $G_i$ is a maximal connected real subgroup of $G_{i-1}$. The maximal (respectively, minimal) length of such an unrefinable chain is called the length (respectively, depth) of $G$. We give a precise formula for the length of $G$, which generalises results of Burness, Liebeck and Shalev on complex algebraic groups and also on compact Lie groups. If $G$ is simple then we bound the depth of $G$ above and below, and in many cases we compute the exact value. In particular, the depth of any simple $G$ is at most $9$. [87]  arXiv:1811.06337 [pdf] Title: Implicit Euler time discretization and FDM with Newton method in nonlinear heat transfer modeling Comments: 4 pages, 1 figure Subjects: Numerical Analysis (math.NA) This paper considers one-dimensional heat transfer in a media with temperature-dependent thermal conductivity. To model the transient behavior of the system, we solve numerically the one-dimensional unsteady heat conduction equation with certain initial and boundary conditions.
Contrary to the traditional approach, when the equation is first discretized in space and then in time, we first discretize the equation in time, whereby a sequence of nonlinear two-point boundary value problems is obtained. To carry out the time-discretization, we use the implicit Euler scheme. The second spatial derivative of the temperature is a nonlinear function of the temperature and the temperature gradient. We derive expressions for the partial derivatives of this nonlinear function. They are needed for the implementation of the Newton method. Then, we apply the finite difference method and solve the obtained nonlinear systems by Newton's method. The approach is tested on real physical data for the dependence of the thermal conductivity on temperature in semiconductors. A MATLAB code is presented (a minimal sketch of the same time-stepping structure appears after entry [95] below). [88]  arXiv:1811.06339 [pdf, ps, other] Title: Hörmander's theorem for semilinear SPDEs We consider a broad class of semilinear SPDEs with multiplicative noise driven by a finite-dimensional Wiener process. We show that, provided that an infinite-dimensional analogue of H\"ormander's bracket condition holds, the Malliavin matrix of the solution is an operator with dense range. In particular, we show that the laws of finite-dimensional projections of such solutions admit smooth densities with respect to Lebesgue measure. The main idea is to develop a robust pathwise solution theory for such SPDEs using rough paths theory, which then allows us to use a pathwise version of Norris's lemma to work directly on the Malliavin matrix, instead of the "reduced Malliavin matrix" which is not available in this context. On the way to proving this result, we develop some new tools for the theory of rough paths like a rough Fubini theorem and a deterministic mild It\^o formula for rough PDEs. [89]  arXiv:1811.06342 [pdf, ps, other] Title: Constructive noncommutative invariant theory Subjects: Representation Theory (math.RT); Rings and Algebras (math.RA) A constructive Hilbert-Nagata Theorem is proved for the algebra of invariants in a Lie nilpotent relatively free associative algebra endowed with an action induced by a representation of a reductive group. More precisely, the problem of finding generators of the subalgebra of invariants in the relatively free algebra is reduced to finding generators in the commutative algebra of polynomial invariants for a representation associated canonically to the original representation. [90]  arXiv:1811.06344 [pdf, ps, other] Title: A small solid body with large density in a planar fluid is negligible Authors: Jiao He (ICJ), Dragos Iftimie (ICJ) Subjects: Analysis of PDEs (math.AP); Fluid Dynamics (physics.flu-dyn) In this article, we consider a small rigid body moving in a viscous fluid filling the whole plane. We assume that the diameter of the rigid body goes to 0, that the initial velocity has bounded energy and that the density of the rigid body goes to infinity. We prove that the rigid body has no influence on the limit equation by showing convergence of the solutions towards a solution of the Navier-Stokes equations in the full plane. [91]  arXiv:1811.06348 [pdf, ps, other] Title: Existence of solution of the $p(x)$-Laplacian problem involving critical exponent and Radon measure Subjects: Analysis of PDEs (math.AP) In this paper we prove the existence of a nontrivial solution of the $p(x)$-Laplacian equation with Dirichlet boundary condition.
We use the variational method and the concentration compactness principle involving a positive Radon measure $\mu$. \begin{align*} \begin{split} -\Delta_{p(x)}u & = |u|^{q(x)-2}u+f(x,u)+\mu\,\,\mbox{in}\,\,\Omega,\\ u & = 0\,\, \mbox{on}\,\, \partial\Omega, \end{split} \end{align*} where $\Omega \subset \mathbb{R}^N$ is a smooth bounded domain, $\mu > 0$ and $1 < p^{-}:=\underset{x\in \Omega}{\text{inf}}\;p(x) \leq p^{+}:= \underset{x\in \Omega}{\text{sup}}\;p(x) < q^{-}:=\underset{x\in \Omega}{\text{inf}}\;q(x)\leq q(x) \leq p^{\ast}(x) < N$. The function $f$ satisfies certain conditions. Here, $q^{\prime}(x)=\frac{q(x)}{q(x)-1}$ is the conjugate of $q(x)$ and $p^{\ast}(x)=\frac{Np(x)}{N-p(x)}$ is the Sobolev conjugate of $p(x)$. [92]  arXiv:1811.06351 [pdf, other] Title: State-dependent jump activity estimation for Markovian semimartingales Authors: Fabian Mies Subjects: Statistics Theory (math.ST) The jump behavior of an infinitely active It\^o semimartingale can be conveniently characterized by a jump activity index of Blumenthal-Getoor type, typically assumed to be constant in time. We study Markovian semimartingales with a non-constant, state-dependent jump activity index and a non-vanishing continuous diffusion component. Nonparametric estimators for the functional jump activity index as well as for the drift function are proposed and shown to be asymptotically normal under combined high-frequency and long-time-span asymptotics. The results are based on a novel uniform bound on the Markov generator of the jump diffusion. [93]  arXiv:1811.06352 [pdf, ps, other] Title: New integral representations for the Fox-Wright functions and its applications II Authors: Khaled Mehrez Subjects: Classical Analysis and ODEs (math.CA) Our aim in this paper is to establish several new integral representations for the Fox--Wright functions ${}_p\Psi_q[^{(\alpha_p,A_p)}_{(\beta_q,B_q)}|z]$ when their terms contain the Fox H-function such that $$\mu=\sum_{j=1}^q\beta_j-\sum_{k=1}^p\alpha_k+\frac{p-q}{2}=-m,\;m\in\mathbb{N}_0.$$ In particular, closed-form integral expressions are derived here for the four parameters Wright function under a special restriction on parameters. Exponential bounding inequalities for a class of the Fox-Wright function (like Luke's type inequalities) are derived. Moreover, monotonicity properties of ratios involving the Fox-Wright functions are established. [94]  arXiv:1811.06353 [pdf, ps, other] Title: Positivity of certain class of functions related to the Fox H-functions and applications Authors: Khaled Mehrez Subjects: Classical Analysis and ODEs (math.CA) In the present investigation, we find a set of sufficient conditions to be imposed on the parameters of the Fox H-function which allow us to conclude that it is non-negative. As applications, various new facts regarding the Fox-Wright functions, including complete monotonicity, logarithmic completely monotonicity and monotonicity of ratios are considered. [95]  arXiv:1811.06357 [pdf, ps, other] Title: Finiteness of Small Eigenvalues of Geometrically Finite Rank one Locally Symmetric Manifolds Authors: Jialun Li Subjects: Spectral Theory (math.SP); Geometric Topology (math.GT) Let M be a geometrically finite rank one locally symmetric manifold. We prove that the spectrum of the Laplace operator on M is finite in a small interval, which is optimal.
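Entry [87] above outlines a Rothe-type approach: discretize in time first (implicit Euler), then solve each nonlinear two-point boundary value problem with Newton's method. The sketch below mirrors that structure in Python with an illustrative conductivity k(u) and a finite-difference Jacobian; the paper itself works with analytically derived partial derivatives, real semiconductor data and MATLAB.

    import numpy as np

    # u_t = (k(u) u_x)_x on [0, 1] with Dirichlet boundary conditions:
    # implicit Euler in time, centred differences in space, Newton per step.

    def k(u):
        return 1.0 + 0.5 * u**2          # illustrative k(u), not the paper's data

    N, dt, steps = 21, 1e-3, 50
    x = np.linspace(0.0, 1.0, N)
    dx = x[1] - x[0]
    u = np.sin(np.pi * x)                # illustrative initial temperature
    uL = uR = 0.0                        # boundary values

    def residual(ui, uold):
        # implicit-Euler residual at the interior nodes, ui = interior unknowns
        full = np.concatenate(([uL], ui, [uR]))
        km = k(0.5 * (full[1:] + full[:-1]))     # conductivity at midpoints
        flux = km * (full[1:] - full[:-1]) / dx  # k(u) u_x at midpoints
        return (ui - uold[1:-1]) / dt - (flux[1:] - flux[:-1]) / dx

    for _ in range(steps):
        uold = u.copy()
        ui = u[1:-1].copy()
        for _ in range(20):                      # Newton iterations
            F = residual(ui, uold)
            if np.abs(F).max() < 1e-10:
                break
            J = np.empty((N - 2, N - 2))         # finite-difference Jacobian
            for j in range(N - 2):               # (derived analytically in [87])
                e = np.zeros(N - 2)
                e[j] = 1e-7
                J[:, j] = (residual(ui + e, uold) - F) / 1e-7
            ui = ui - np.linalg.solve(J, F)
        u = np.concatenate(([uL], ui, [uR]))

    print(u.max())   # peak temperature after the implicit Euler steps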
[96]  arXiv:1811.06359 [pdf, ps, other] Title: The 2-variable unified family of generalized Apostol-Euler, Bernoulli and Genocchi polynomials Subjects: Classical Analysis and ODEs (math.CA) In this paper, we introduce the 2-variable unified family of generalized Apostol-Euler, Bernoulli and Genocchi polynomials and derive some implicit summation formulae and general symmetry identities. The results extend some known summations and identities of generalized Bernoulli, Euler and Genocchi numbers and polynomials. [97]  arXiv:1811.06360 [pdf, ps, other] Title: Deterministic homogenization of variational inequalities with unilateral constraints Subjects: Mathematical Physics (math-ph); Analysis of PDEs (math.AP) The article studies the reiterated homogenization of linear elliptic variational inequalities arising in problems with unilateral constraints. We assume that the coefficients of the equations satisfy an abstract hypothesis covering on each scale a large set of concrete deterministic behaviors, such as periodicity, almost periodicity and convergence at infinity. Using the multi-scale convergence method, we derive a homogenization result whose limit problem is of the same type as the problem with rapidly oscillating coefficients. [98]  arXiv:1811.06365 [pdf, ps, other] Title: Nonabelian motivic homology Comments: 32 pages. Comments welcome! Subjects: Algebraic Geometry (math.AG); Algebraic Topology (math.AT); Number Theory (math.NT) We investigate several interrelated foundational questions pertaining to Guzman's theory of motivic dga's. In particular, we note that motivic dga's give rise to a natural notion of "nonabelian motivic homology". Just as abelian motivic homology is a homotopy group of a spectrum coming from K-theory, nonabelian motivic homology is a homotopy group of a certain limit of such spectra; we give an explicit formula for this limit --- a necessary first step (we believe) towards any explicit computations or dimension bounds. We also consider commutative comonoids in Chow motives, which we call "motivic Chow coalgebras". We discuss the relationship between motivic Chow coalgebras and motivic dga's of smooth proper schemes. As an application of our foundational results, we prove that among schemes which are finite \'etale over a number field, morphisms of associated motivic dga's are no different than morphisms of schemes. This may be regarded as a small consequence of a plausible generalization of Kim's relative unipotent section conjecture, hence as an ounce of evidence for the latter. [99]  arXiv:1811.06370 [pdf, ps, other] Title: On a solution to a functional equation Subjects: Classical Analysis and ODEs (math.CA) We offer a solution to a functional equation using properties of the Mellin transform. A new criterion for the Riemann Hypothesis is offered as an application of our main result, through a functional relationship with the Riemann xi function. [100]  arXiv:1811.06371 [pdf, ps, other] Title: Restricted summability of the multi-dimensional Cesàro means of Walsh-Kaczmarz-Fourier series Subjects: Classical Analysis and ODEs (math.CA) The properties of the maximal operator of the $(C,\alpha)$-means ($\alpha=(\alpha_1,\ldots,\alpha_d)$) of the multi-dimensional Walsh-Kaczmarz-Fourier series are discussed, where the set of indices is inside a cone-like set. We prove that the maximal operator is bounded from $H_p^\gamma$ to $L_p$ for $p_0 < p$ ($p_0=\max\{1/(1+\alpha_k): k=1,\ldots, d\}$) and is of weak type $(1,1)$.
As a corollary we get the theorem of Simon [S1] on the a.e. convergence of cone-restricted two-dimensional Fej\'er means of integrable functions. At the endpoint case $p=p_0$, we show that the maximal operator $\sigma^{\kappa,\alpha,*}_L$ is not bounded from the Hardy space $H_{p_0}^\gamma$ to the space $L_{p_0}$. [101]  arXiv:1811.06375 [pdf, ps, other] Title: A Short Proof that Lebesgue Outer Measure of an Interval is its Length Authors: Jitender Singh Comments: The proof is based on the connectedness of the interval Journal-ref: American Mathematical Monthly, 125:6, 553, 2018 Subjects: History and Overview (math.HO) We prove the statement in the title using the connectedness of the interval in the real line. [102]  arXiv:1811.06380 [pdf, ps, other] Title: Embeddings of Free Magmas with Applications to the Study of Free Non-Associative Algebras Authors: Cameron Ismail Subjects: Rings and Algebras (math.RA) We introduce an embedding of the free magma on a set A into the direct product of the free magma on a singleton set and the free semigroup on A. This embedding is then used to prove several theorems related to algebraic independence of subsets of the free non-associative algebra on A. Among these theorems is a generalization of a result due to Kurosh, which states that every subalgebra of a free non-associative algebra is also free. [103]  arXiv:1811.06381 [pdf, ps, other] Title: Hopf algebra structure on symplectic superspace ${\rm SP}_q^{2|1}$ Authors: Salih Celik Comments: 12 pages. arXiv admin note: text overlap with arXiv:1607.02491, arXiv:1509.05876 Subjects: Quantum Algebra (math.QA) A super-Hopf algebra structure on the function algebra on the extended quantum symplectic superspace ${\rm SP}_q^{2|1}$ is defined. The dual Hopf algebra is explicitly constructed. [104]  arXiv:1811.06382 [pdf, ps, other] Title: On the Further Structure of the Finite Free Convolutions Comments: 15 pages, 1 figure Subjects: Combinatorics (math.CO); Complex Variables (math.CV) Since the celebrated resolution of Kadison-Singer (via the Paving Conjecture) by Marcus, Spielman, and Srivastava, much study has been devoted to further understanding and generalizing the techniques of their proof. Specifically, their barrier method was crucial to achieving the required polynomial root bounds on the finite free convolution. But unfortunately this method required individual analysis for each usage, and the existence of a larger encapsulating framework is an important open question. In this paper, we make steps toward such a framework by generalizing their root bound to all differential operators. We further conjecture a large class of root bounds, the resolution of which would call for more robust techniques. We further give an important counterexample to a very natural multivariate version of their bound, which if true would have implied tight bounds for the Paving Conjecture. [105]  arXiv:1811.06385 [pdf, ps, other] Title: Transportation inequalities for stochastic wave equation Authors: Yumeng Li, Xinyu Wang Comments: arXiv admin note: text overlap with arXiv:1605.02310 by other authors Subjects: Probability (math.PR) In this paper, we prove a Talagrand's T2-transportation inequality for the law of a stochastic wave equation in spatial dimension d=3 driven by the Gaussian random field, white in time and correlated in space, on the continuous paths space with respect to the L2-metric.
The Girsanov transformation and the martingale representation theorem for the space-colored time-white noise play an important role. [106]  arXiv:1811.06389 [pdf, ps, other] Title: Semi-perfect 1-Factorizations of the Hypercube Subjects: Combinatorics (math.CO) A 1-factorization $\mathcal{M} = \{M_1,M_2,\ldots,M_n\}$ of a graph $G$ is called perfect if the union of any pair of 1-factors $M_i, M_j$ with $i \ne j$ is a Hamilton cycle. It is called $k$-semi-perfect if the union of any pair of 1-factors $M_i, M_j$ with $1 \le i \le k$ and $k+1 \le j \le n$ is a Hamilton cycle. We consider 1-factorizations of the discrete cube $Q_d$. There is no perfect 1-factorization of $Q_d$, but it was previously shown that there is a 1-semi-perfect 1-factorization of $Q_d$ for all $d$. Our main result is to prove that there is a $k$-semi-perfect 1-factorization of $Q_d$ for all $k$ and all $d$, except for one possible exception when $k=3$ and $d=6$. This is, in some sense, best possible. We conclude with some questions concerning other generalisations of perfect 1-factorizations. [107]  arXiv:1811.06392 [pdf, other] Title: Leaf-induced subtrees of leaf-Fibonacci trees Comments: 9 pages, 4 figures Subjects: Combinatorics (math.CO) In analogy to a concept of Fibonacci trees, we define the leaf-Fibonacci tree of size $n$ and investigate its number of nonisomorphic leaf-induced subtrees. Denote by $f_0$ the one-vertex tree and $f_1$ the tree that consists of a root with two leaves attached to it; the leaf-Fibonacci tree $f_n$ of size $n\geq 2$ is the binary tree whose branches are $f_{n-1}$ and $f_{n-2}$. We derive a nonlinear difference equation for the number $\text{N}(f_n)$ of nonisomorphic leaf-induced subtrees (subtrees induced by leaves) of $f_n$, and also prove that $\text{N}(f_n)$ is asymptotic to $1.00001887227319\ldots (1.48369689570172 \ldots)^{\phi^n}$ ($\phi =$ golden ratio) as $n$ grows to infinity. [108]  arXiv:1811.06393 [pdf, ps, other] Title: Weighted mixed-norm $L_p$-estimates for elliptic and parabolic equations in non-divergence form with singular degenerate coefficients Comments: 27 pages, comments are welcome Subjects: Analysis of PDEs (math.AP) In this paper, we study both elliptic and parabolic equations in non-divergence form with singular degenerate coefficients. Weighted and mixed-norm $L_p$-estimates and solvability are established under some suitable partially weighted BMO regularity conditions on the coefficients. When the coefficients are constants, the operators are reduced to extensional operators which arise in the study of fractional heat equations and fractional Laplace equations. Our results are new even in this setting and in the unmixed case. For the proof, we establish both interior and boundary Lipschitz estimates for solutions and for higher order derivatives of solutions to homogeneous equations. We then employ the perturbation method by using the Fefferman-Stein sharp function theorem, the Hardy-Littlewood maximum function theorem, as well as a weighted Hardy's inequality. [109]  arXiv:1811.06396 [pdf, other] Title: Asynchronous Stochastic Composition Optimization with Variance Reduction Comments: 30 pages, 19 figures Subjects: Optimization and Control (math.OC); Distributed, Parallel, and Cluster Computing (cs.DC); Machine Learning (cs.LG) Composition optimization has drawn a lot of attention in a wide variety of machine learning domains from risk management to reinforcement learning.
Existing methods solving the composition optimization problem often work in a sequential and single-machine manner, which limits their applications in large-scale problems. To address this issue, this paper proposes two asynchronous parallel variance reduced stochastic compositional gradient (AsyVRSC) algorithms that are suitable to handle large-scale data sets. The two algorithms are AsyVRSC-Shared for the shared-memory architecture and AsyVRSC-Distributed for the master-worker architecture. The embedded variance reduction techniques enable the algorithms to achieve linear convergence rates. Furthermore, AsyVRSC-Shared and AsyVRSC-Distributed enjoy provable linear speedup, when the time delays are bounded by the data dimensionality or the sparsity ratio of the partial gradients, respectively. Extensive experiments are conducted to verify the effectiveness of the proposed algorithms. [110]  arXiv:1811.06399 [pdf, ps, other] Title: Function spaces of logarithmic smoothness: embeddings and characterizations Comments: 161 pages Subjects: Functional Analysis (math.FA) In this paper we present a comprehensive treatment of function spaces with logarithmic smoothness (Besov, Sobolev, Triebel-Lizorkin). We establish the following results: Sharp embeddings between the Besov spaces defined by differences and by Fourier-analytical decompositions as well as between Besov and Sobolev/Triebel-Lizorkin spaces; Various new characterizations for Besov norms in terms of different K-functionals. For instance, we derive characterizations via ball averages, approximation methods, heat kernels, and Bianchini-type norms; Sharp estimates for Besov norms of derivatives and potential operators (Riesz and Bessel potentials) in terms of norms of functions themselves. We also obtain quantitative estimates of regularity properties of the fractional Laplacian. The key tools behind our results are limiting interpolation techniques and new characterizations of Besov and Sobolev norms in terms of the behavior of the Fourier transforms for functions such that their Fourier transforms are of monotone type or lacunary series. [111]  arXiv:1811.06402 [pdf, ps, other] Title: A variant of Shelah's characterization of Strong Chang's Conjecture Subjects: Logic (math.LO) Shelah considered a certain version of Strong Chang's Conjecture, which we denote $\text{SCC}^{\text{cof}}$, and proved that it is equivalent to several statements, including the assertion that Namba forcing is semiproper. We introduce an apparently weaker version, denoted $\text{SCC}^{\text{split}}$, and prove an analogous characterization of it. In particular, $\text{SCC}^{\text{split}}$ is equivalent to the assertion that the Friedman-Krueger poset is semiproper. This strengthens and sharpens the results of Cox, and sheds some light on problems from Usuba and Torres-Perez and Wu. [112]  arXiv:1811.06409 [pdf, other] Title: Chordal circulant graphs and induced matching number Authors: Francesco Romeo Subjects: Combinatorics (math.CO); Commutative Algebra (math.AC) Let $G=C_{n}(S)$ be a circulant graph on $n$ vertices. In this paper we characterize chordal circulant graphs and then we compute $\nu (G)$, the induced matching number of $G$. The latter is useful in bounding the Castelnuovo-Mumford regularity of the edge ring of $G$.
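Entry [112] computes the induced matching number $\nu(G)$ of circulant graphs. For intuition, here is a minimal brute-force sketch (exponential in the number of edges, so for tiny examples only), using the standard definitions of $C_n(S)$ and of an induced matching.

    from itertools import combinations

    def circulant_edges(n, S):
        # standard definition: vertices 0..n-1, i ~ j iff (i-j) mod n in S or -S
        D = {s % n for s in S} | {(-s) % n for s in S}
        return {frozenset({i, (i + d) % n}) for i in range(n) for d in D if d}

    def induced_matching_number(n, S):
        E = circulant_edges(n, S)

        def is_induced_matching(M):
            verts = set().union(*M)
            if len(verts) != 2 * len(M):             # edges must be disjoint
                return False
            # the subgraph induced by the matched vertices has no extra edge
            present = {frozenset(p) for p in combinations(verts, 2)} & E
            return present == M

        nu = 0
        for size in range(1, len(E) + 1):
            if any(is_induced_matching(set(M)) for M in combinations(E, size)):
                nu = size                            # induced matchings are
            else:                                    # downward closed, so we
                break                                # can stop at first failure
        return nu

    print(induced_matching_number(6, {1}))   # the 6-cycle C_6({1}): nu = 2
    print(induced_matching_number(7, {1}))   # the 7-cycle C_7({1}): nu = 2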
[113]  arXiv:1811.06411 [pdf, other] Title: Characterising $k$-connected sets in infinite graphs Comments: 48 pages, 8 figures Subjects: Combinatorics (math.CO) A $k$-connected set in an infinite graph, where $k > 0$ is an integer, is a set of vertices such that any two of its subsets of the same size $\ell \leq k$ can be connected by $\ell$ disjoint paths in the whole graph. We characterise the existence of $k$-connected sets of arbitrary but fixed infinite cardinality via the existence of certain minors and topological minors. This characterisation provides an analogue of the Star-Comb Lemma, one of the most-used tools in topological infinite graph theory, for higher connectivity. We also prove a duality theorem for the existence of such $k$-connected sets: if a graph contains no such $k$-connected set, then it has a tree structure which, whenever it exists, clearly precludes the existence of such a $k$-connected set. [114]  arXiv:1811.06413 [pdf, ps, other] Title: On syzygies for rings of invariants of abelian groups Authors: M. Domokos Subjects: Commutative Algebra (math.AC); Rings and Algebras (math.RA) It is well known that results on zero-sum sequences over a finitely generated abelian group can be translated to statements on generators of rings of invariants of the dual group. Here the direction of the transfer of information between zero-sum theory and invariant theory is reversed. First it is shown how a presentation by generators and relations of the ring of invariants of an abelian group acting linearly on a finite dimensional vector space can be obtained from a presentation of the ring of invariants for the corresponding multiplicity free representation. This combined with a known degree bound for syzygies of rings of invariants, yields bounds on the presentation of a block monoid associated to a finite sequence of elements in an abelian group. The results have an equivalent formulation in terms of binomial ideals, but here the language of monoid congruences and the notion of catenary degree is used. [115]  arXiv:1811.06415 [pdf] Title: Effects of Beamforming and Antenna Configurations on Mobility in 5G NR Comments: 6 pages, 5 figures and 1 table Subjects: Information Theory (cs.IT) Future 5G systems are getting closer to being a reality. It is envisioned, indeed, that the roll-out of the first 5G networks will happen around the end of 2018 and the beginning of 2019. However, there are still a number of issues and problems that have to be faced, and new solutions and methods are needed to solve them. Along these lines, the effects that beamforming and antenna configurations may have on mobility in 5G New Radio (NR) are still unclear. In fact, with the use of directive antennas and high frequencies (e.g., above 10 GHz), in order to meet the stringent requirements of 5G (e.g., support of speeds up to 500 km/h) it is crucial to understand how the envisioned 5G NR antenna configurations may impact mobility (and thus handovers). In this article, we first briefly survey the mobility enhancements and solutions currently under discussion in 3GPP Release 15. In particular, we focus our analysis on the physical layer signals involved in measurement reporting and on the new radio measurement model used in 5G NR to filter the multiple beams typical of directive antennas with a large number of antenna elements. Finally, the critical aspects of mobility identified in the previous sections are analyzed in more detail through the results of an extensive system-level evaluation.
[116]  arXiv:1811.06416 [pdf, other]
Title: The Sliding Frank-Wolfe Algorithm and its Application to Super-Resolution Microscopy
Authors: Quentin Denoyelle (CEREMADE), Vincent Duval (MOKAPLAN), Gabriel Peyré (DMA), Emmanuel Soubies
Subjects: Numerical Analysis (math.NA); Optimization and Control (math.OC)

This paper showcases the theoretical and numerical performance of the Sliding Frank-Wolfe, a novel optimization algorithm for solving the BLASSO sparse spikes super-resolution problem. The BLASSO is a continuous (i.e. off-the-grid or grid-less) counterpart to the well-known $\ell^1$ sparse regularisation method (also known as LASSO or Basis Pursuit). Our algorithm is a variation on the classical Frank-Wolfe algorithm (also known as conditional gradient), and follows a recent trend of interleaving convex optimization updates (corresponding to adding new spikes) with non-convex optimization steps (corresponding to moving the spikes). Our main theoretical result is that this algorithm terminates in a finite number of steps under a mild non-degeneracy hypothesis. We then target applications of this method to several instances of single molecule fluorescence imaging modalities, among which certain approaches rely heavily on the inversion of a Laplace transform. Our second theoretical contribution is a proof of the exact support recovery property of the BLASSO for inverting the 1-D Laplace transform in the case of positive spikes. On the numerical side, we conclude this paper with an extensive study of the practical performance of the Sliding Frank-Wolfe on different instantiations of single molecule fluorescence imaging, including convolutive and non-convolutive (Laplace-like) operators. This shows the versatility and superiority of this method with respect to alternative sparse recovery techniques.

[117]  arXiv:1811.06419 [pdf, other]
Title: Learning to Bound the Multi-class Bayes Error
Comments: 15 pages, 14 figures, 1 table
Subjects: Information Theory (cs.IT); Machine Learning (cs.LG)

In the context of supervised learning, meta learning uses features, metadata and other information to learn about the difficulty, behavior, or composition of the problem. This knowledge can be used to contextualize classifier results or to make targeted decisions about future data sampling. In this paper, we are specifically interested in learning the Bayes error rate (BER) from a labeled data sample. Providing a bound on the BER that is both tight and feasible to estimate has been a challenge. Previous work [1] has shown that a pairwise bound based on the sum of the Henze-Penrose (HP) divergence over label pairs can be directly estimated using a sum of Friedman-Rafsky (FR) multivariate run test statistics. However, when the dataset and the number of classes are large, this bound is computationally infeasible to calculate and may not be tight. Other multi-class bounds also suffer from computationally complex estimation procedures. In this paper, we present a generalized HP divergence measure that allows us to estimate the Bayes error rate with log-linear computation. We prove that the proposed bound is tighter than both the pairwise method and a bound proposed by Lin [2]. We also empirically show that these bounds are close to the BER. We illustrate the proposed method on the MNIST dataset, and show its utility for the evaluation of feature reduction strategies.
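To make the estimation pipeline behind [117] concrete, here is a small sketch of the Friedman-Rafsky run statistic and the plug-in Henze-Penrose divergence estimate it feeds. The normalization follows the estimator of Berisha et al. as we recall it, so treat the exact constants as an assumption rather than the paper's definition:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def friedman_rafsky(X, Y):
    """FR run statistic: number of edges of the Euclidean MST of the
    pooled sample that join a point of X to a point of Y."""
    Z = np.vstack([X, Y])
    labels = np.array([0] * len(X) + [1] * len(Y))
    mst = minimum_spanning_tree(distance_matrix(Z, Z))
    i, j = mst.nonzero()
    return int(np.sum(labels[i] != labels[j]))

def hp_divergence(X, Y):
    """Plug-in Henze-Penrose divergence estimate (assumed normalization):
    ~0 when the two samples are identically distributed, ~1 when separated."""
    m, n = len(X), len(Y)
    return 1.0 - friedman_rafsky(X, Y) * (m + n) / (2.0 * m * n)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))   # class 0
Y = rng.normal(2.0, 1.0, size=(500, 2))   # class 1: well separated
print(hp_divergence(X, Y))                # close to 1 for separated classes
```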
[118]  arXiv:1811.06423 [pdf, other]
Title: On Clamped Plates with Log-Convex Density
Subjects: Spectral Theory (math.SP)

We consider the analogue of Rayleigh's conjecture for the clamped plate in Euclidean space weighted by a log-convex density. We show that the lowest eigenvalue of the bi-Laplace operator with drift in a given domain is bounded below by a constant $C(V,n)$ times the lowest eigenvalue of a centered ball of the same volume; the constant depends on the volume $V$ of the domain and the dimension $n$ of the ambient space. Our result is driven by a comparison theorem in the spirit of Talenti, and the constant $C(V,n)$ is defined in terms of a minimization problem following the work of Ashbaugh and Benguria. When the density is an "anti-Gaussian," we estimate $C(V,n)$ using a delicate analysis that involves confluent hypergeometric functions, and we illustrate numerically that $C(V,n)$ is close to $1$ in low dimensions.

[119]  arXiv:1811.06424 [pdf, ps, other]
Title: Units in complex group rings of group extensions
Comments: 13 pages
Subjects: Rings and Algebras (math.RA); Operator Algebras (math.OA)

Let $N$ and $H$ be groups, and let $G$ be an extension of $H$ by $N$. In this article we describe the structure of the complex group ring of $G$ in terms of data associated with $N$ and $H$. In particular, we present conditions on the building blocks $N$ and $H$ guaranteeing that $G$ satisfies the unit, zero-divisor and idempotent conjectures. Moreover, for central extensions involving amenable groups we present conditions on the building blocks guaranteeing that the Kaplansky-Kadison conjecture holds for the reduced group C*-algebra of $G$.

[120]  arXiv:1811.06430 [pdf, other]
Title: Test sequences and formal solutions over hyperbolic groups
Authors: Simon Heil
Subjects: Group Theory (math.GR); Logic (math.LO)

In 2006 Z. Sela and, independently, O. Kharlampovich and A. Myasnikov gave a solution to the Tarski problems by showing that any two non-abelian free groups have the same elementary theory. Subsequently Z. Sela generalized the techniques used in his proof of the Tarski conjecture to classify all finitely generated groups elementary equivalent to a given torsion-free hyperbolic group. One important step in his analysis of the elementary theory of free and torsion-free hyperbolic groups is the Generalized Merzlyakov's Theorem. In our work we show that given a hyperbolic group $\Gamma$ and a $\Gamma$-limit group $L$, there exists a larger group $Comp(L)$, namely its completion, into which $L$ embeds, and a sequence of points $(\lambda_n)$ in the variety $Hom(Comp(L),\Gamma)$ from which one can recover the structure of the group $Comp(L)$. Using such a test sequence $(\lambda_n)$ we are finally able to prove a version of the Generalized Merzlyakov's Theorem over all hyperbolic groups (possibly with torsion).

[121]  arXiv:1811.06431 [pdf, other]
Title: Introducing Multiobjective Complex Systems
Subjects: Optimization and Control (math.OC)

This article focuses on the optimization of a complex system composed of several subsystems. On the one hand, these subsystems are subject to multiple objectives and to local constraints as well as local variables, and each is associated with its own, subsystem-dependent decision maker. On the other hand, the subsystems are interconnected to each other by global variables or linking constraints.
Due to these interdependencies, it is in general not possible to simply optimize each subsystem individually in order to improve the performance of the overall system. This article introduces a formal graph-based representation of such complex systems and generalizes the classical notions of feasibility and optimality to match this complex situation. Moreover, several algorithmic approaches are suggested and analyzed.

[122]  arXiv:1811.06432 [pdf, ps, other]
Title: A fast algorithm for calculating $s$-invariants
Authors: Dirk Schuetz
Comments: 22 pages

We use the divide-and-conquer and scanning algorithms for calculating Khovanov cohomology directly on the Lee- or Bar-Natan deformations of the Khovanov complex to give an alternative way to compute the Rasmussen $s$-invariants of knots. By disregarding generators away from homological degree 0 we can considerably improve the efficiency of the algorithm. With a slight modification we can also apply it to a refinement of Lipshitz-Sarkar.

[123]  arXiv:1811.06433 [pdf, other]
Title: On a minimum distance procedure for threshold selection in tail analysis
Subjects: Statistics Theory (math.ST); Methodology (stat.ME)

Power-law distributions have been widely observed in different areas of scientific research. Practical estimation issues include how to select a threshold above which observations follow a power-law distribution and how to then estimate the power-law tail index. A minimum distance selection procedure (MDSP) was proposed in Clauset et al. (2009) and has been widely adopted in practice, especially in analyses of social networks (a minimal sketch of the procedure is given after this group of listings). However, theoretical justifications for this selection procedure remain scant. In this paper, we study the asymptotic behavior of the selected threshold and the corresponding power-law index given by the MDSP. We find that the MDSP tends to choose too high a threshold level and leads to Hill estimates with large variances and root mean squared errors for simulated data with Pareto-like tails.

[124]  arXiv:1811.06435 [pdf, ps, other]
Title: A note on Severi varieties of nodal curves on Enriques surfaces
Subjects: Algebraic Geometry (math.AG)

Let $|L|$ be a linear system on a smooth complex Enriques surface $S$ whose general member is a smooth and irreducible curve of genus $p$, with $L^2>0$, and let $V_{|L|, \delta}(S)$ be the Severi variety of irreducible $\delta$-nodal curves in $|L|$. We denote by $\pi:X\to S$ the universal covering of $S$. In this note we compute the dimensions of the irreducible components $V$ of $V_{|L|, \delta}(S)$. In particular we prove that, if $C$ is the curve corresponding to a general element $[C]$ of $V$, then the codimension of $V$ in $|L|$ is $\delta$ if $\pi^{-1}(C)$ is irreducible in $X$ and it is $\delta-1$ if $\pi^{-1}(C)$ consists of two irreducible components.

[125]  arXiv:1811.06438 [pdf, ps, other]
Title: An extension theorem of holomorphic functions on hyperconvex domains
Comments: 7 pages
Subjects: Complex Variables (math.CV)

Let $n \geq 3$ and $\Omega$ be a bounded domain in $\mathbb{C}^n$ with a smooth negative plurisubharmonic exhaustion function $\varphi$. As a generalization of Y. Tiba's result, we prove that any holomorphic function on a connected open neighborhood of the support of $(i\partial \bar \partial \varphi)^{n-2}$ in $\Omega$ can be extended to the whole domain $\Omega$. To prove it, we combine an $L^2$ version of Serre duality with Donnelly-Fefferman type estimates on $(n,n-1)$- and $(n,n)$-forms.
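As mentioned in [123], the minimum distance selection procedure of Clauset et al. picks the threshold whose fitted Pareto tail is closest to the empirical tail in Kolmogorov-Smirnov distance. Below is a minimal numpy sketch of that selection rule (our simplification: continuous data, Hill/MLE tail index, no finite-size corrections):

```python
import numpy as np

def hill_alpha(x, xmin):
    """Continuous-data MLE (Hill estimator) for the Pareto exponent above xmin."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

def ks_distance(x, xmin, alpha):
    """KS distance between the empirical tail CDF and the fitted Pareto CDF."""
    tail = np.sort(x[x >= xmin])
    n = len(tail)
    fit = 1.0 - (xmin / tail) ** (alpha - 1.0)
    lo, hi = np.arange(n) / n, np.arange(1, n + 1) / n
    return max(np.max(np.abs(fit - lo)), np.max(np.abs(fit - hi)))

def mdsp(x, min_tail=10):
    """Minimum distance selection: the candidate xmin minimizing the KS distance."""
    x = np.asarray(x, dtype=float)
    candidates = np.unique(x)
    candidates = candidates[candidates <= np.sort(x)[-min_tail]]  # keep a tail
    xmin = min(candidates, key=lambda xm: ks_distance(x, xm, hill_alpha(x, xm)))
    return xmin, hill_alpha(x, xmin)

rng = np.random.default_rng(1)
sample = (1.0 - rng.random(5000)) ** (-1.0 / 1.5)  # pure Pareto tail, alpha = 2.5
print(mdsp(sample))                                # xmin near 1, alpha near 2.5
```

On clean Pareto data the selected threshold sits near the true one; the point of [123] is precisely that on Pareto-like (but not exactly Pareto) tails this rule tends to choose too high a threshold.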
[126]  arXiv:1811.06442 [pdf, ps, other]
Title: Energy Efficient Precoder in Multi-User MIMO Systems with Imperfect Channel State Information
Subjects: Information Theory (cs.IT)

This article addresses energy-efficient precoder design in multi-user multiple-input-multiple-output (MU-MIMO) systems that is robust with respect to imperfect channel state information (CSI) at the transmitters. In other words, we design the precoder matrix associated with each transmitter so as to maximize the general energy efficiency of the network. We investigate the problem in two conventional cases. The first case assumes a statistical characterization of the channel estimation error, which leads to a quadratically constrained quadratic program (QCQP) with a semi-closed-form solution. We then turn our attention to the case in which only an uncertainty region for the channel estimation error is available; this case eventually results in a semi-definite program (SDP).

[127]  arXiv:1811.06448 [pdf, other]
Title: A regularised Dean-Kawasaki model for weakly interacting particles
Comments: 22 pages, 1 figure

The evolution of finitely many particles obeying Langevin dynamics is described by Dean-Kawasaki type equations, a class of stochastic equations featuring a non-Lipschitz multiplicative noise in divergence form. We derive a regularised Dean-Kawasaki type model based on second order Langevin dynamics by analysing a system of weakly interacting particles. Key tools of our analysis are the propagation of chaos and Simon's compactness criterion. The model we obtain is a small-noise-regime stochastic perturbation of the undamped McKean-Vlasov equation. We also provide a high-probability existence and uniqueness result for our model.

[128]  arXiv:1811.06451 [pdf, ps, other]
Title: MIMO Beampattern and Waveform Design with Low Resolution DACs
Subjects: Information Theory (cs.IT)

Digital beamforming and waveform generation techniques in MIMO radar offer enormous advantages in terms of flexibility and performance compared to conventional radar systems based on analog implementations. To allow for such a fully digital design with efficient hardware complexity, we consider the use of low resolution digital-to-analog converters (DACs) while maintaining a separate radio-frequency chain per antenna. A sum of squared residuals (SSR) formulation of the beampattern and spectral shaping problem is solved using the Generalized Approximate Message Passing (GAMP) algorithm. Numerical results demonstrate good performance in terms of spectral shaping as well as cross-correlation properties of the different probing waveforms, even with just 2-bit resolution per antenna.

[129]  arXiv:1811.06453 [pdf, other]
Title: Minimizing the Age of Information from Sensors with Correlated Observations
Comments: Submitted as conference paper

The Age of Information (AoI) metric has recently received much attention as a measure of the freshness of information in a network. In this paper, we study the average AoI in a generic setting with correlated sensor information, which can be related to multiple Internet of Things (IoT) scenarios. The system consists of physical sources with discrete-time updates that are observed by a set of sensors. Each source update may be observed by multiple sensors, and hence the sensor observations are correlated. We devise a model that is simple, but still capable of capturing the main tradeoffs.
We propose two sensor scheduling policies that minimize the AoI of the sources: one that requires the system parameters to be known a priori, and one based on contextual bandits, in which the parameters are unknown and need to be learned. We show that both policies are able to exploit the sensor correlation, which results in a large reduction in AoI compared to schedules that are random or based on round-robin (a toy simulation in this spirit is sketched after this group of listings).

[130]  arXiv:1811.06460 [pdf, ps, other]
Title: Don't Try This at Home: No-Go Theorems for Distributive Laws
Subjects: Category Theory (math.CT); Logic in Computer Science (cs.LO)

Beck's distributive laws provide sufficient conditions under which two monads can be composed, and monads arising from distributive laws have many desirable theoretical properties. Unfortunately, finding and verifying distributive laws, or establishing that one even exists, can be extremely difficult and error-prone. We develop general-purpose techniques for showing when there can be no distributive law between two monads. Two approaches are presented. The first widely generalizes ideas from a counterexample attributed to Plotkin, yielding general-purpose theorems that recover the previously known situations in which no distributive law can exist. Our second approach is entirely novel, encompassing new practical situations beyond our generalization of Plotkin's approach. It negatively resolves the open question of whether the list monad distributes over itself. Our approach adopts an algebraic perspective throughout, exploiting a syntactic characterization of distributive laws. This approach is key to generalizing beyond what has been achieved by direct calculations in previous work. We show via examples many situations in which our theorems can be applied. This includes a detailed analysis of distributive laws for members of an extension of the Boom type hierarchy, well known to functional programmers.

[131]  arXiv:1811.06463 [pdf, ps, other]
Title: Construction of Multi-Bubble Solutions for a System of Elliptic Equations arising in Rank Two Gauge Theory
Authors: Hsin-Yuan Huang
Subjects: Analysis of PDEs (math.AP)

We study the existence of multi-bubble solutions for the following skew-symmetric Chern--Simons system \begin{equation}\label{e051} \left\{ \begin{split} &\Delta u_1+\frac{1}{\varepsilon^2}e^{u_2}(1-e^{u_1})=4\pi\sum_{i=1}^{2k}\delta_{p_{1,i}}\\ &\Delta u_2+\frac{1}{\varepsilon^2}e^{u_1}(1-e^{u_2})=4\pi\sum_{i=1}^{2k}\delta_{p_{2,i}} \end{split} \text{ in }\quad \Omega\right., \end{equation} where $k\geq 1$ and $\Omega$ is a flat torus in $\mathbb{R}^2$. This continues the joint work with Zhang \cite{HZ-2015}, where we obtained necessary conditions for the existence of bubbling solutions of Liouville type. Under nearly necessary conditions (see Theorem \ref{main-thm}), we show that there exists a sequence of solutions $(u_{1,\varepsilon}, u_{2,\varepsilon})$ to \eqref{e051} such that $u_{1,\varepsilon}$ and $u_{2,\varepsilon}$ blow up simultaneously at $k$ points in $\Omega$ as $\varepsilon\to 0$.
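As promised above, here is a toy discrete-time AoI simulation loosely inspired by [129]. Everything in it is our illustrative stand-in: the observation-probability matrix, the age dynamics, and both schedulers (a round-robin baseline and a known-parameter greedy rule, not the paper's two policies, one of which is a contextual bandit):

```python
import numpy as np

rng = np.random.default_rng(0)

# K sources, M sensors, T slots; sensor j observes source k with prob P[k, j]
# when scheduled. One sensor covering several sources models the correlation.
K, M, T = 3, 4, 10_000
P = rng.uniform(0.2, 0.9, size=(K, M))

def average_aoi(policy):
    age = np.ones(K)                       # AoI of each source, in slots
    total = 0.0
    for t in range(T):
        j = policy(t, age)                 # schedule one sensor per slot
        updated = rng.random(K) < P[:, j]  # sources captured by sensor j
        age = np.where(updated, 1.0, age + 1.0)
        total += age.sum()
    return total / (T * K)                 # time-average AoI per source

def round_robin(t, age):
    return t % M

def greedy(t, age):
    # Known-parameter heuristic: pick the sensor with the largest
    # expected age coverage sum_k P[k, j] * age_k.
    return int(np.argmax(P.T @ age))

print("round-robin:", average_aoi(round_robin))
print("greedy     :", average_aoi(greedy))
```

Even this crude greedy rule typically beats round-robin here, illustrating the paper's qualitative point that correlation-aware scheduling reduces AoI.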
[132]  arXiv:1811.06466 [pdf, ps, other]
Title: Nonlinear scalar discrete multipoint boundary value problems at resonance
Subjects: Dynamical Systems (math.DS)

In this work we provide conditions for the existence of solutions to nonlinear boundary value problems of the form \begin{equation*} y(t+n)+a_{n-1}(t)y(t+n-1)+\cdots+a_0(t)y(t)=g(t,y(t+m-1)) \end{equation*} subject to \begin{equation*} \sum_{j=1}^nb_{ij}(0)y(j-1)+\sum_{j=1}^nb_{ij}(1)y(j)+\cdots+\sum_{j=1}^nb_{ij}(N)y(j+N-1)=0 \end{equation*} for $i=1,\cdots, n$. The existence of solutions is proved under a mild growth condition on the nonlinearity $g$, which must hold only on a bounded subset of $\{0,\cdots, N\}\times\mathbb{R}$.

[133]  arXiv:1811.06472 [pdf, ps, other]
Title: Oversampled Adaptive Sensing with Random Projections: Analysis and Algorithmic Approaches
Comments: To be presented at the 18th IEEE ISSPIT in Louisville, Kentucky, USA, December 2018. 6 pages, 3 figures
Subjects: Information Theory (cs.IT)

Oversampled adaptive sensing (OAS) is a recently proposed Bayesian framework which sequentially adapts the sensing basis. In OAS, estimation quality is measured in each step by conditional mean squared errors (MSEs), and the basis for the next sensing step is adapted accordingly. For a given average sensing time, OAS reduces the MSE compared to non-adaptive schemes when the signal is sparse. This paper studies the asymptotic performance of Bayesian OAS for unitarily invariant random projections. For sparse signals, it is shown that OAS with Bayesian recovery and hard adaptation significantly outperforms the minimum MSE bound for non-adaptive sensing. To address implementation aspects, two computationally tractable algorithms are proposed, and their performance is compared against state-of-the-art non-adaptive algorithms via numerical simulations. The investigations show that these low-complexity OAS algorithms, despite their suboptimality, outperform well-known non-adaptive schemes for sparse recovery, such as LASSO, with rather small oversampling factors. This gain grows as the compression rate increases.

[134]  arXiv:1811.06475 [pdf, other]
Title: The q-Hahn PushTASEP
Comments: 27 pages, 3 figures
Subjects: Probability (math.PR); Mathematical Physics (math-ph); Combinatorics (math.CO); Quantum Algebra (math.QA)

We introduce the q-Hahn PushTASEP --- an integrable stochastic interacting particle system which is a 3-parameter generalization of the PushTASEP, a well-known close relative of the TASEP (Totally Asymmetric Simple Exclusion Process). The transition probabilities in the q-Hahn PushTASEP are expressed through the ${}_4\phi_3$ basic hypergeometric function. Under suitable limits, the q-Hahn PushTASEP degenerates to all known integrable (1+1)-dimensional stochastic systems with a pushing mechanism. One can thus view our new system as a pushing counterpart of the q-Hahn TASEP introduced by Povolotsky (arXiv:1308.3250). We establish Markov duality relations and contour integral formulas for the q-Hahn PushTASEP. We also take a $q\to1$ limit of our process, arriving at a new beta-polymer-like model.

[135]  arXiv:1811.06482 [pdf, other]
Title: A Note On Universal Point Sets for Planar Graphs
Subjects: Combinatorics (math.CO); Computational Geometry (cs.CG)

We investigate which planar point sets allow simultaneous straight-line embeddings of all planar graphs on a fixed number of vertices.
We first show that $(1.293-o(1))n$ points are required to find a straight-line drawing of each $n$-vertex planar graph (vertices are drawn as the given points); this improves the previous best constant $1.235$ by Kurowski (2004). Our second main result is based on an exhaustive computer search: we show that no set of 11 points exists on which all planar 11-vertex graphs can be simultaneously drawn plane straight-line. This strengthens the result by Cardinal, Hoffmann, and Kusters (2015) that all planar graphs on $n \le 10$ vertices can be simultaneously drawn on particular `universal' sets of $n$ points, while there are no universal sets for $n \ge 15$. Moreover, we provide a set of 23 planar 11-vertex graphs which cannot be simultaneously drawn on any set of 11 points. This, in fact, is another step towards a (negative) answer to the question of whether every two planar graphs can be drawn simultaneously -- a question raised by Brass, Cenek, Duncan, Efrat, Erten, Ismailescu, Kobourov, Lubiw, and Mitchell (2007).

[136]  arXiv:1811.06483 [pdf, other]
Title: Baire categorical aspects of first passage percolation II
Authors: Balázs Maga
Subjects: General Topology (math.GN); Classical Analysis and ODEs (math.CA); Probability (math.PR)

In this paper we continue our earlier work on topological first passage percolation and answer certain questions asked in our previous paper. Notably, we prove that, apart from trivialities, in the generic configuration there exists exactly one geodesic ray, where we consider two geodesic rays distinct if they share only a finite number of edges. Moreover, we show that in the generic configuration any not too small and not too large convex set arises as the limit of a sequence $\frac{B(t_n)}{t_n}$ for some $(t_n)\to\infty$. Finally, we define topological Hilbert first passage percolation, and among other results we prove that certain geometric properties of the percolation in the generic configuration guarantee that we are considering a setting linearly isomorphic to ordinary topological first passage percolation.

[137]  arXiv:1811.06484 [pdf, ps, other]
Title: Fourier decay, Renewal theorem and Spectral gaps for random walks on split semisimple Lie groups
Authors: Jialun Li
Subjects: Dynamical Systems (math.DS); Probability (math.PR)

Let $\mu$ be a Borel probability measure on a split semisimple Lie group $G$ whose support generates a Zariski dense subgroup. Let $V$ be a finite-dimensional irreducible linear representation of $G$. A theorem of Furstenberg says that there exists a unique $\mu$-stationary probability measure on $\mathbb PV$, and we are interested in the Fourier decay of this stationary measure. The main result of the manuscript is that the Fourier transform of the stationary measure has power decay. From this result, we obtain a spectral gap for the transfer operator, whose properties allow us to establish an exponential error term for the renewal theorem in the context of products of random matrices. A key technical ingredient for the proof is a Fourier decay result for multiplicative convolutions of measures on $\mathbb R^n$, which generalises Bourgain's theorem in dimension 1.

[138]  arXiv:1811.06489 [pdf, ps, other]
Title: Lebesgue's Density Theorem and definable selectors for ideals
Comments: 28 pages
Subjects: Logic (math.LO)

We introduce a notion of density point and prove results analogous to Lebesgue's density theorem for various well-known ideals on Cantor space and Baire space.
In fact, we isolate a class of ideals for which our results hold. As a contrasting result of independent interest, we show that there is no reasonably definable selector that chooses representatives for the equivalence relation on the Borel sets of having countable symmetric difference. In other words, there is no notion of density which makes the ideal of countable sets satisfy an analogue of the density theorem.

[139]  arXiv:1811.06491 [pdf, ps, other]
Title: A Newton-Girard Formula for Monomial Symmetric Polynomials
Subjects: Commutative Algebra (math.AC)

The Newton-Girard formula allows one to write any elementary symmetric polynomial as a sum of products of power sum symmetric polynomials and elementary symmetric polynomials of lesser degree (the classical identity is recalled after this group of listings). It has numerous applications. We generalize this identity by replacing the elementary symmetric polynomials with monomial symmetric polynomials.

[140]  arXiv:1811.06493 [pdf, other]
Title: Intermediate dimensions
Comments: 18 pages, 1 figure
Subjects: Metric Geometry (math.MG); Classical Analysis and ODEs (math.CA); Dynamical Systems (math.DS)

We introduce a continuum of dimensions which are `intermediate' between the familiar Hausdorff and box dimensions. This is done by restricting the families of allowable covers in the definition of Hausdorff dimension, insisting that $|U| \leq |V|^\theta$ for all sets $U, V$ used in a particular cover, where $\theta \in [0,1]$ is a parameter. Thus, when $\theta=1$, only covers using sets of the same size are allowable and we recover the box dimensions, and when $\theta=0$ there are no restrictions and we recover Hausdorff dimension. We investigate many properties of the intermediate dimension (as a function of $\theta$), including proving that it is continuous on $(0,1]$ but not necessarily continuous at $0$, as well as establishing appropriate analogues of the mass distribution principle, Frostman's lemma, and the dimension formulae for products. We also compute, or estimate, the intermediate dimensions of some familiar sets, including sequences formed by negative powers of integers, and Bedford-McMullen carpets.

[141]  arXiv:1811.06495 [pdf, ps, other]
Title: Galois action on VOA gauge anomalies
Comments: 16 pages
Subjects: Quantum Algebra (math.QA); Mathematical Physics (math-ph); Number Theory (math.NT)

Assuming regularity of the fixed subalgebra, any action of a finite group $G$ on a holomorphic VOA $V$ determines a gauge anomaly $\alpha \in \mathrm{H}^3(G; \boldsymbol{\mu})$, where $\boldsymbol{\mu} \subset \mathbb{C}^\times$ is the group of roots of unity. We show that under Galois conjugation $V \mapsto {^\gamma V}$, the gauge anomaly transforms as $\alpha \mapsto \gamma^2(\alpha)$. This provides an a priori upper bound of $24$ on the order of anomalies of actions preserving a $\mathbb{Q}$-structure, for example the Monster group $\mathbb{M}$ acting on its Moonshine VOA $V^\natural$. We speculate that each field $\mathbb{K}$ should have a "vertex Brauer group" isomorphic to $\mathrm{H}^3(\mathrm{Gal}(\bar{\mathbb{K}}/\mathbb{K}); \boldsymbol{\mu}^{\otimes 2})$. In order to motivate our constructions and speculations, we warm up with a discussion of the ordinary Brauer group, emphasizing the analogy between VOA gauging and quantum Hamiltonian reduction.
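For reference, the classical identity that [139] generalizes expresses the elementary symmetric polynomials $e_k$ through the power sums $p_i$ (with the convention $e_0=1$):

\[k\,e_k=\sum_{i=1}^{k}(-1)^{i-1}\,e_{k-i}\,p_i\]

For example, $e_1=p_1$, $2e_2=e_1p_1-p_2$, and $3e_3=e_2p_1-e_1p_2+p_3$.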
[142]  arXiv:1811.06507 [pdf, other]
Title: Twisted Conjugation on Connected Simple Lie Groups and Twining Characters
Comments: 27 pages, 1 figure
Subjects: Representation Theory (math.RT); Differential Geometry (math.DG)

This article discusses the twisted adjoint action $\mathrm{Ad}_{g}^{\kappa}:G\rightarrow G$, $x\mapsto gx\kappa(g^{-1})$ given by a Dynkin diagram automorphism $\kappa\in\mathrm{Aut}(G)$, where $G$ is compact, connected, simply connected and simple. The first aim is to recover the classification of $\kappa$-twisted conjugacy classes by elementary means, without invoking the non-connected group $G\rtimes\langle\kappa\rangle$. The second objective is to highlight several properties of the so-called \textit{twining characters} $\tilde{\chi}^{(\kappa)}:G\rightarrow\mathbb{C}$, as defined by Fuchs, Schellekens and Schweigert. These class functions generalize the usual characters, and define $\kappa$-twisted versions $\tilde{R}^{(\kappa)}(G)$ and $\tilde{R}_{k}^{(\kappa)}(G)$ ($k\in\mathbb{Z}_{>0}$) of the representation and fusion rings associated to $G$. In particular, the latter are shown to be isomorphic to the representation and fusion rings of the \textit{orbit Lie group} $G_{(\kappa)}$, a simply connected group obtained from $\kappa$ and the root data of $G$.

[143]  arXiv:1811.06508 [pdf, ps, other]
Title: Invariance properties of coHochschild homology
Subjects: Algebraic Topology (math.AT)

The notion of Hochschild homology of a dg algebra admits a natural dualization, the coHochschild homology of a dg coalgebra, introduced in arXiv:0711.1023 by Hess, Parent, and Scott as a tool to study free loop spaces. In this article we prove "agreement" for coHochschild homology, i.e., that the coHochschild homology of a dg coalgebra $C$ is isomorphic to the Hochschild homology of the dg category of appropriately compact $C$-comodules, from which Morita invariance of coHochschild homology follows. Generalizing the dg case, we define the topological coHochschild homology (coTHH) of coalgebra spectra, of which suspension spectra are the canonical examples, and show that coTHH of the suspension spectrum of a space $X$ is equivalent to the suspension spectrum of the free loop space on $X$, as long as $X$ is a nice enough space (for example, simply connected). Based on this result and on a Quillen equivalence established by the authors in arXiv:1402.4719, we prove that "agreement" holds for coTHH as well.

[144]  arXiv:1811.06509 [pdf, other]
Title: A bias parity question for Sturmian words
Comments: 12 pages and 10 figures
Subjects: Number Theory (math.NT)

We analyze a natural parity problem for the divisors associated to Sturmian words. We find a clear bias towards the odd divisors, and we obtain a sharp asymptotic estimate for the average of the odd-minus-even difference function, tamed by a mollifier, which confirms various experimental results.

[145]  arXiv:1811.06510 [pdf, ps, other]
Title: Anti-concentration in most directions
Authors: Amir Yehudayoff
Comments: 16 pages
Subjects: Probability (math.PR)

We prove anti-concentration for the inner product of two independent random vectors in the discrete cube. Our results imply Chakrabarti and Regev's lower bound on the randomized communication complexity of the gap-Hamming problem. They are also meaningful in the context of randomness extraction. The proof provides a framework for establishing anti-concentration in discrete domains. The argument has two different components:
a local component that uses harmonic analysis, and a global ("information-theoretic") component.

[146]  arXiv:1811.06531 [pdf, other]
Title: Simultaneous approximation on affine subspaces
Subjects: Number Theory (math.NT)

We solve the convergence case of the generalized Baker-Schmidt problem for simultaneous approximation on affine subspaces, under natural Diophantine type conditions. In one of our theorems, we do not require monotonicity of the approximation function. In order to prove these results, we establish asymptotic formulae for the number of rational points close to an affine subspace. One key ingredient is a sharp upper bound on a certain sum of reciprocals of fractional parts associated with the matrix defining the affine subspace.

[147]  arXiv:1811.06535 [pdf, ps, other]
Title: Topological Quillen localization of structured ring spectra
Comments: 18 pages
Subjects: Algebraic Topology (math.AT)

The aim of this short paper is to construct a TQ-localization functor on algebras over a spectral operad O, in the general case where no connectivity assumptions are made on the O-algebras, and to establish the associated TQ-local homotopy theory as a left Bousfield localization of the usual model structure on O-algebras, which itself is not left proper, in general. In the resulting TQ-local homotopy theory, the weak equivalences are the TQ-homology equivalences, where `TQ-homology' is short for topological Quillen homology. More generally, we establish these results for TQ-homology with coefficients in a spectral algebra A. A key observation, which goes back to the work of Goerss-Hopkins on moduli problems, is that the usual left properness assumption may be replaced with a strong cofibration condition in the desired subcell lifting arguments: our main result is that the TQ-local homotopy theory can be constructed, without left properness on O-algebras, by localizing with respect to a set of strong cofibrations that are TQ-equivalences.

Cross-lists for Fri, 16 Nov 18

[148]  arXiv:1811.05650 (cross-list from hep-th) [pdf, other]
Title: B-type Landau-Ginzburg models with one-dimensional target
Comments: 10 pages, proceedings for the Group 32 conference, Prague, 2018
Subjects: High Energy Physics - Theory (hep-th); Algebraic Geometry (math.AG); Differential Geometry (math.DG)

We summarize the main results of our investigation of B-type topological Landau-Ginzburg models whose target is an arbitrary open Riemann surface. Such a Riemann surface need not be affine algebraic, and in particular it may have infinite genus or an infinite number of Freudenthal ends. Under mild conditions on the Landau-Ginzburg superpotential, we give a complete description of the triangulated structure of the category of topological D-branes in such models, as well as counting formulas for the number of topological D-branes considered up to relevant equivalence relations.

[149]  arXiv:1811.06101 (cross-list from hep-th) [pdf, other]
Title: The Geometry of Exceptional Super Yang-Mills Theories
Comments: 15 pages, 2 figures

Some time ago, Sezgin, Bars and Nishino proposed super Yang-Mills theories (SYM's) in $D=11+3$ dimensions and beyond. Using the "Magic Star" projection of $\mathfrak{e}_{8(-24)}$, we show that the geometric structure of SYM's in $11+3$ and $12+4$ space-time dimensions is recovered from the affine symmetry of the space $AdS_{4}\otimes S^{8}$, with the $8$-sphere being a line in the Cayley plane.
By reducing to transverse transformations along maximal embeddings, the near horizon geometries of the M2-brane ($AdS_{4}\otimes S^{7}$) and of the M5-brane ($AdS_{7}\otimes S^{4}$) are recovered. Generalizing the construction to higher, generic levels of the recently introduced "Exceptional Periodicity" (EP) and exploiting the embedding of semi-simple rank-3 Jordan algebras into rank-3 T-algebras of special type yields the spaces $AdS_{4}\otimes S^{8n}$ and $AdS_{7}\otimes S^{8n-3}$, with reduced subspaces $AdS_{4}\otimes S^{8n-1}$ and $AdS_{7}\otimes S^{8n-4}$, respectively. Within EP, this suggests generalizations of the near horizon geometry of the M2-brane and of its Hodge (magnetic) duals, related to $(1,0)$ SYM's in $(8n+3)+3$ dimensions and constituting a particular class of novel higher-dimensional SYM's, which we name exceptional SYM's. Remarkably, the $n=3$ level, hinting at M2 and M21 branes as solutions of bosonic M-theory, gives support for Witten's monstrous $AdS$/CFT construction.

[150]  arXiv:1811.06126 (cross-list from cs.GT) [pdf, ps, other]
Title: Cooperation Enforcement and Collusion Resistance in Repeated Public Goods Games
Authors: Kai Li, Dong Hao
Journal-ref: The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-2019)
Subjects: Computer Science and Game Theory (cs.GT); Optimization and Control (math.OC)

Enforcing cooperation among self-interested agents is one of the main objectives of multi-agent systems. However, due to the existence of inherent social dilemmas in many scenarios, the free-rider problem may arise during agents' long-run interactions, and things become even more severe when self-interested agents work in collusion with each other to get extra benefits. It is commonly accepted that in such social dilemmas there exists no simple strategy whereby an agent can simultaneously manipulate the utility of each of her opponents and further promote mutual cooperation among all agents. Here, we show that such strategies do exist. In the conventional repeated public goods game, we identify such strategies and find that, when confronted with them, a single opponent can maximize his utility only via global cooperation, and any colluding alliance cannot get the upper hand. Since full cooperation is individually optimal for any single opponent, stable cooperation among all players can be achieved. Moreover, we experimentally show that these strategies can still promote cooperation even when the opponents are both self-learning and collusive.

[151]  arXiv:1811.06138 (cross-list from quant-ph) [pdf, ps, other]
Title: It Does Not Follow. Response to "Yes They Can! ..."
Authors: Mladen Pavicic
Comments: 3 pages

This is a response to "Yes They Can! ..." (a comment on [5]) by J.S. Shaari et al. [9]. We show that the claims in the comment do not hold up and that all the conclusions obtained in [5] are correct. In particular, the two considered kinds of two-way communication protocols (ping-pong and LM05) under a quantum man-in-the-middle (QMM) attack have neither a critical disturbance (D), nor a valid privacy amplification (PA) procedure, nor an unconditional security proof. However, we point out that there is another two-way protocol which does have a critical D and a valid PA, and which is resistant to a QMM attack.
[152]  arXiv:1811.06264 (cross-list from gr-qc) [pdf, ps, other]
Title: Quantum Riemannian Geometry and Scalar Fields on the Integer Line
Authors: Shahn Majid
Comments: 20 pages latex, no figures
Subjects: General Relativity and Quantum Cosmology (gr-qc); Quantum Algebra (math.QA)

We construct noncommutative or `quantum' Riemannian geometry on the integers $\Bbb Z$ as a 1D lattice line $\cdots\bullet_{i-1}-\bullet_i-\bullet_{i+1}\cdots$ with its natural 2-dimensional differential structure and metric given by arbitrary non-zero edge square-lengths $\bullet_i{\buildrel a_i\over -}\bullet_{i+1}$. We find for generic metrics a canonical $*$-preserving quantum Levi-Civita connection, which is flat if and only if the $a_i$ form a geometric progression, i.e. the ratios $\rho_i=a_{i+1}/a_i$ are constant. More generally, we compute the Ricci tensor for the natural antisymmetric lift of the volume 2-form and find that the quantum Einstein-Hilbert action, up to a total divergence, is $-{1\over 2}\sum \rho\Delta \rho$, where $(\Delta\rho)_i=\rho_{i+1}+\rho_{i-1}-2\rho_i$ is the standard discrete Laplacian. We take a first look at some issues for quantum gravity on the lattice line. We also examine $1+0$ dimensional scalar quantum theory with mass $m$ and the lattice line as discrete time. As an application, we compute a kind of `Hawking effect' for a step function jump in the metric by a factor $\rho$, finding that an initial vacuum state has at later times an occupancy $\langle N\rangle=(1-\sqrt{\rho})^2/(4\sqrt{\rho})$ in the continuum limit, independently of the frequency. By contrast, a delta-function spike leads to an $\langle N\rangle$ that vanishes in the continuum limit.

[153]  arXiv:1811.06324 (cross-list from hep-th) [pdf, other]
Title: On the Integrable Structure of Super Yang-Mills Scattering Amplitudes
Authors: Nils Kanning
Comments: 169 pages, PhD thesis based on the author's publications arXiv:1312.1693, arXiv:1412.8476 and basis for arXiv:1811.04949
Subjects: High Energy Physics - Theory (hep-th); Mathematical Physics (math-ph); Exactly Solvable and Integrable Systems (nlin.SI)

The maximally supersymmetric Yang-Mills theory in four-dimensional Minkowski space is an exceptional model of mathematical physics. Even more so in the planar limit, where the theory is believed to be integrable. In particular, the tree-level scattering amplitudes were shown to be invariant under the Yangian of the superconformal algebra psu(2,2|4). This infinite-dimensional symmetry is a hallmark of integrability. In this dissertation we explore connections between these amplitudes and integrable models. Our aim is to lay foundations for an efficient integrability-based computation of amplitudes. To this end, we characterize Yangian invariants within the quantum inverse scattering method, which is an extensive toolbox for integrable spin chains. Making use of this setup, we develop methods for the construction of Yangian invariants. We show that the algebraic Bethe ansatz can be specialized to yield Yangian invariants for u(2). Our approach also allows us to interpret these Yangian invariants as partition functions of vertex models. What is more, we establish a unitary Grassmannian matrix model for the construction of u(p,q|m) Yangian invariants with oscillator representations. In a special case our formula reduces to the Brezin-Gross-Witten model. We apply an integral transformation due to Bargmann to our unitary Grassmannian matrix model, which turns the oscillators into spinor helicity-like variables.
Thereby we are led to a refined version of the Grassmannian integral formula for certain amplitudes. The most decisive differences are that we work in Minkowski signature and that the integration contour is fixed to be a unitary group manifold. We compare Yangian invariants defined by our integral to amplitudes and to recently introduced deformations thereof.

[154]  arXiv:1811.06338 (cross-list from physics.comp-ph) [pdf, other]
Title: Nonlinearly Bandlimited Signals
Authors: Vishal Vaibhav
Subjects: Computational Physics (physics.comp-ph); Numerical Analysis (math.NA)

In this paper, we study the inverse scattering problem for a class of signals that have a compactly supported reflection coefficient. The problem boils down to the solution of the Gelfand-Levitan-Marchenko (GLM) integral equations with a kernel that is bandlimited. By adopting a sampling theory approach to the associated Hankel operators in the Bernstein spaces, a constructive proof of the existence of a solution of the GLM equations is obtained under various restrictions on the nonlinear impulse response (NIR). The formalism developed in this article also lends itself well to numerical computations, yielding algorithms that are shown to have algebraic rates of convergence. In particular, the use of the Whittaker-Kotelnikov-Shannon sampling series yields an algorithm that converges as $\mathscr{O}\left(N^{-1/2}\right)$, whereas the use of the Helms and Thomas (HT) version of the sampling expansion yields an algorithm that converges as $\mathscr{O}\left(N^{-m-1/2}\right)$ for any $m>0$, provided the regularity conditions are fulfilled. The complexity of the algorithms depends on the linear solver used. The use of the conjugate-gradient (CG) method yields an algorithm of complexity $\mathscr{O}\left(N_{\text{iter.}}N^2\right)$ per sample of the signal, where $N$ is the number of sampling basis functions used and $N_{\text{iter.}}$ is the number of CG iterations involved. The HT version of the sampling expansion facilitates the development of algorithms of complexity $\mathscr{O}\left(N_{\text{iter.}}N\log N\right)$ (per sample of the signal) by exploiting the special structure as well as the (approximate) sparsity of the matrices involved.

[155]  arXiv:1811.06350 (cross-list from cs.SY) [pdf, ps, other]
Title: Temporal viability regulation for control affine systems with applications to mobile vehicle coordination under time-varying motion constraints
Comments: 7 pages, 3 figures. Submitted to a conference for publication
Subjects: Systems and Control (cs.SY); Multiagent Systems (cs.MA); Robotics (cs.RO); Symbolic Computation (cs.SC); Optimization and Control (math.OC)

Controlled invariant sets and viability regulation of dynamical control systems have played important roles in many control and coordination applications. In this paper we develop a temporal viability regulation theory for general dynamical control systems, and in particular for control affine systems. The time-varying viable set is parameterized by time-varying constraint functions, with the aim of regulating a dynamical control system to be invariant in the time-varying viable set so that temporal, state-dependent constraints are enforced. We consider both time-varying equality and inequality constraints in defining a temporal viable set. We also present sufficient conditions for the existence of a feasible control input for control affine systems. The developed temporal viability regulation theory is applied to mobile vehicle coordination.
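To ground the idea in [155], here is a minimal numpy sketch for the simplest control-affine system, a single integrator $\dot x = u$. We keep a time-varying inequality constraint $h(x,t)\ge 0$ viable by correcting a nominal input whenever the condition $\partial_t h + \nabla h\cdot u \ge -\alpha h$ would fail. The specific constraint (a drifting disc), the gain $\alpha$, and the minimum-norm correction rule are all our illustrative choices, not the paper's construction:

```python
import numpy as np

ALPHA, DT = 2.0, 1e-3                 # viability gain and Euler step (illustrative)

def centre(t):                        # the viable disc drifts to the right
    return np.array([0.5 * t, 0.0])

def h(x, t):                          # constraint: stay in the unit disc around centre(t)
    return 1.0 - np.sum((x - centre(t)) ** 2)

def grad_h(x, t):
    return -2.0 * (x - centre(t))

def dh_dt(x, t):                      # partial time derivative: 2 (x - c) . c'
    return 2.0 * (x - centre(t)) @ np.array([0.5, 0.0])

def viable_input(x, t, u_nom):
    """Minimum-norm correction of u_nom so that dh/dt >= -ALPHA * h."""
    a, b = grad_h(x, t), -ALPHA * h(x, t) - dh_dt(x, t)
    if a @ u_nom >= b or a @ a < 1e-12:
        return u_nom                  # nominal input already viable
    return u_nom + (b - a @ u_nom) * a / (a @ a)

x, t = np.zeros(2), 0.0
for _ in range(5000):
    u = viable_input(x, t, np.array([0.0, -1.0]))   # nominal input tries to leave the disc
    x, t = x + DT * u, t + DT
    assert h(x, t) > -1e-2            # constraint holds up to integration error
print(x, h(x, t))
```

The corrected input makes the state track the moving viable set even though the nominal controller constantly pushes against its boundary, which is the qualitative behaviour the paper's temporal viability regulation is after.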
[156]  arXiv:1811.06361 (cross-list from q-fin.RM) [pdf, other]
Title: On approximations of Value at Risk and Expected Shortfall involving kurtosis
Comments: 27 pages, 11 figures
Subjects: Risk Management (q-fin.RM); Probability (math.PR); Mathematical Finance (q-fin.MF)

We derive new approximations for the Value at Risk and the Expected Shortfall at high levels of loss distributions with positive skewness and excess kurtosis, and we describe their precision for notable distributions such as the exponential, Pareto type I, lognormal and compound (Poisson) distributions. Our approximations are motivated by extensions of the so-called Normal Power Approximation, used for approximating the cumulative distribution function of a random variable, which incorporate not only the skewness but also the kurtosis of the random variable in question. We demonstrate the performance of our approximations in numerical examples, and we also give comparisons with some known approximations from the literature.

[157]  arXiv:1811.06455 (cross-list from cond-mat.soft) [pdf, other]
Title: Crossover between parabolic and hyperbolic scaling, oscillatory modes and resonances near flocking
Comments: 17 pages, 4 figures, to appear in Phys. Rev. E
Subjects: Soft Condensed Matter (cond-mat.soft); Statistical Mechanics (cond-mat.stat-mech); Mathematical Physics (math-ph); Biological Physics (physics.bio-ph)

A stability and bifurcation analysis of a kinetic equation indicates that the flocking bifurcation of the two-dimensional Vicsek model exhibits an interplay between parabolic and hyperbolic behavior. For box sizes smaller than a certain large value, flocking appears continuously from a uniform disordered state at a critical value of the noise. Because of mass conservation, the amplitude equations describing the flocking state consist of a scalar equation for the density disturbance from the homogeneous particle density (particle number divided by box area) and a vector equation for a current density. These two equations contain two time scales. At the shorter scale, they form a hyperbolic system in which time and space scale in the same way. At the longer, diffusive, time scale, the equations are parabolic. The bifurcating solution depends on the angle and is uniform in space, as in the normal form of the usual pitchfork bifurcation. We show that linearization about the latter solution is described by a Klein-Gordon equation in the hyperbolic time scale. There are then persistent oscillations with many incommensurate frequencies about the bifurcating solution; they produce a shift in the critical noise and resonate with a periodic forcing of the alignment rule. These predictions are confirmed by direct numerical simulations of the Vicsek model.

[158]  arXiv:1811.06513 (cross-list from cond-mat.mes-hall) [pdf, ps, other]
Title: Effect of Magnetic Field on Goos-Hänchen Shifts in Gaped Graphene Triangular Barrier
Comments: 15 pages, 7 figures
Subjects: Mesoscale and Nanoscale Physics (cond-mat.mes-hall); Mathematical Physics (math-ph); Quantum Physics (quant-ph)

We study the effect of a magnetic field on Goos-H\"anchen shifts in gapped graphene subjected to a double triangular barrier. Solving the wave equation separately in each region composing our system and using the required boundary conditions, we compute explicitly the transmission probability for scattered fermions.
These wavefunctions are then used to derive the Goos-H\"anchen shifts in terms of different physical parameters such as the energy, the electrostatic potential strength and the magnetic field. Our numerical results show that the Goos-H\"anchen shifts are affected by the presence of the magnetic field and depend on the geometrical structure of the triangular barrier.

Replacements for Fri, 16 Nov 18

[159]  arXiv:1401.7487 (replaced) [pdf, ps, other]
Title: Primitive geodesic lengths and (almost) arithmetic progressions
Comments: v3: 23 pages. To appear in Publ. Mat
Subjects: Differential Geometry (math.DG); Geometric Topology (math.GT)

[160]  arXiv:1411.5306 (replaced) [pdf, ps, other]
Title: The Simplicial EHP Sequence in A1-Algebraic Topology
Comments: Revisions made at the suggestion of an anonymous referee
Subjects: Algebraic Topology (math.AT); Algebraic Geometry (math.AG)

[161]  arXiv:1411.7915 (replaced) [pdf, other]
Title: Geometrically and diagrammatically maximal knots
Comments: 26 pages, 12 figures. V2: Updated references, removed hypothesis from Theorem 1.4, added Corollary 1.11, and made other minor wording changes. V3: Discussion on weaving knots has been moved into another paper. V4: Added Figures 10 and 12, minor wording changes
Journal-ref: J. London Math. Soc. 94 (2016), no. 3, 883-908
Subjects: Geometric Topology (math.GT)

[162]  arXiv:1501.06376 (replaced) [pdf, ps, other]
Title: Sum of states approximation by Laplace method for particle system with finite number of modes and application to limit theorems
Comments: 22 pages
Subjects: Probability (math.PR)

[163]  arXiv:1502.05074 (replaced) [src]
Title: Evolutionary system, global attractor, trajectory attractor and applications to the nonautonomous reaction-diffusion systems
Authors: Songsong Lu
Comments: A new paper, arXiv:1811.05783, has been submitted; it includes the subjects of this paper and much more. It is not an update: it is completely rewritten, much longer and clearer
Subjects: Dynamical Systems (math.DS); Analysis of PDEs (math.AP)

[164]  arXiv:1506.02048 (replaced) [pdf, other]
Title: Moments of the inverse participation ratio for the Laplacian on finite regular graphs
Comments: 24 pages, 10 figures, fixed typos and included new arguments on graph eigenvector distribution
Journal-ref: J. Phys. A: Math. Theor. 51, 495003 (2018)
Subjects: Mathematical Physics (math-ph); Disordered Systems and Neural Networks (cond-mat.dis-nn)

[165]  arXiv:1512.02893 (replaced) [pdf, ps, other]
Title: A compactification of outer space which is an absolute retract
Comments: Final version.
To appear in Annales de l'Institut Fourier
Subjects: Geometric Topology (math.GT); Group Theory (math.GR)

[166]  arXiv:1603.05084 (replaced) [pdf, ps, other]
Title: Brill-Noether theory for cyclic covers
Authors: Irene Schwarz
Subjects: Algebraic Geometry (math.AG)

[167]  arXiv:1608.03245 (replaced) [pdf, ps, other]
Title: On the Complexity of Closest Pair via Polar-Pair of Point-Sets
Comments: The paper was previously titled "The Curse of Medium Dimension for Geometric Problems in Almost Every Norm"
Subjects: Computational Geometry (cs.CG); Computational Complexity (cs.CC); Metric Geometry (math.MG)

[168]  arXiv:1608.05669 (replaced) [pdf, ps, other]
Title: The class of Eisenbud--Khimshiashvili--Levine is the local A1-Brouwer degree
Comments: accepted for publication in Duke Mathematical Journal

[169]  arXiv:1610.05741 (replaced) [pdf, ps, other]
Title: Gerstenhaber bracket on the Hochschild cohomology via an arbitrary resolution
Authors: Yury Volkov
Comments: The paper is accepted to Proceedings of the Edinburgh Mathematical Society. The main difference between the second and the first version is a new short proof of the derived invariance of the Gerstenhaber algebra structure on Hochschild cohomology
Subjects: K-Theory and Homology (math.KT); Rings and Algebras (math.RA); Representation Theory (math.RT)

[170]  arXiv:1610.09957 (replaced) [pdf, other]
Title: Convergence Rates for Kernel Regression in Infinite Dimensional Spaces
Subjects: Statistics Theory (math.ST)

[171]  arXiv:1611.06687 (replaced) [pdf, ps, other]
Title: Fourier-Mukai partners for general special cubic fourfolds
Authors: Laura Pertusi
Comments: 21 pages
Subjects: Algebraic Geometry (math.AG)

[172]  arXiv:1611.08318 (replaced) [pdf, ps, other]
Title: Mild and viscosity solutions to semilinear parabolic path-dependent PDEs
Subjects: Probability (math.PR); Analysis of PDEs (math.AP); Optimization and Control (math.OC)

[173]  arXiv:1612.01760 (replaced) [pdf, ps, other]
Title: A Maximal Extension of the Best-Known Bounds for the Furstenberg-Sárközy Theorem
Authors: Alex Rice
Comments: 30 pages, minor typos corrected. arXiv admin note: text overlap with arXiv:1504.04904
Subjects: Number Theory (math.NT); Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)

[174]  arXiv:1701.00440 (replaced) [pdf, ps, other]
Title: Multi-GGS-groups have the congruence subgroup property
Subjects: Group Theory (math.GR)

[175]  arXiv:1702.03573 (replaced) [pdf, ps, other]
Title: Stochastic Exponentials and Logarithms on Stochastic Intervals -- A Survey
Comments: Revised version. Accepted for publication in the special issue on Stochastic Differential Equations, Stochastic Algorithms, and Applications of the Journal of Mathematical Analysis and Applications. arXiv admin note: text overlap with arXiv:1411.6229
Subjects: Probability (math.PR)

[176]  arXiv:1704.05644 (replaced) [pdf, other]
Title: Stability of Piecewise Deterministic Markovian Metapopulation Processes on Networks
Authors: Pierre Montagnon
Subjects: Probability (math.PR)

[177]  arXiv:1704.08646 (replaced) [pdf, ps, other]
Title: Le principe de Hasse pour les espaces homogènes : réduction au cas des stabilisateurs finis (The Hasse principle for homogeneous spaces: reduction to the case of finite stabilizers)
Comments: 30 pages, in French.
V2: gave more precise definitions and rewrote some arguments in the section on non-abelian cohomology; gave a categorical description of the gerbe associated to a homogeneous space; changed the order of the main steps in the proof of the main theorem; other minor changes

[178]  arXiv:1705.02124 (replaced) [pdf, ps, other]
Title: Terminal-Pairability in Complete Bipartite Graphs with Non-Bipartite Demands
Comments: 15 pages, draws from arXiv:1702.04313
Subjects: Combinatorics (math.CO)

[179]  arXiv:1706.02506 (replaced) [pdf, other]
Title: Hypergeometric First Integrals of the Duffing and van der Pol Oscillators
Comments: 20 pages, 4 color figures; 3rd version revised and accepted in JDE
Subjects: Mathematical Physics (math-ph); Exactly Solvable and Integrable Systems (nlin.SI)

[180]  arXiv:1707.01874 (replaced) [pdf, other]
Title: Maximizing the mean subtree order
Comments: This version includes changes suggested by anonymous referees. Accepted to Journal of Graph Theory
Subjects: Combinatorics (math.CO)

[181]  arXiv:1707.04646 (replaced) [pdf, ps, other]
Title: Composite images of Galois for elliptic curves over $\mathbf{Q}$ & Entanglement fields
Comments: 34 pages, 6 figures. v2: improved notation and fixed several typos; comments welcome!
Subjects: Number Theory (math.NT)

[182]  arXiv:1707.05747 (replaced) [pdf, ps, other]
Title: Augmented Lagrangian Functions for Cone Constrained Optimization: the Existence of Global Saddle Points and Exact Penalty Property
Authors: M.V. Dolgopolik
Comments: This is a preprint of an article published by Springer in Journal of Global Optimization (2018). The final authenticated version is available online at: this http URL
Journal-ref: Journal of Global Optimization, 71: 2 (2018) 237-296
Subjects: Optimization and Control (math.OC)

[183]  arXiv:1707.09843 (replaced) [pdf, ps, other]
Title: Results on the Hilbert coefficients and reduction numbers
Comments: accepted for publication in the Proceedings Mathematical Sciences
Subjects: Commutative Algebra (math.AC); Algebraic Geometry (math.AG)

[184]  arXiv:1707.09863 (replaced) [pdf, ps, other]
Title: Dimensionality reduction of SDPs through sketching
Comments: 15 pages. Significantly shortened and presentation streamlined
Subjects: Optimization and Control (math.OC); Data Structures and Algorithms (cs.DS)

[185]  arXiv:1708.02104 (replaced) [pdf, ps, other]
Title: The Core-Label Order of a Congruence-Uniform Lattice
Authors: Henri Mühle
Comments: 19 pages, 9 figures, 1 table. Comments are welcome
Subjects: Combinatorics (math.CO)

[186]  arXiv:1708.03251 (replaced) [pdf, ps, other]
Title: Bott-Chern-Aeppli, Dolbeault and Frolicher on Compact Complex 3-folds
Authors: Andrew McHugh
Subjects: Differential Geometry (math.DG)

[187]  arXiv:1709.00332 (replaced) [pdf, ps, other]
Title: Well-posedness of networks for 1-D hyperbolic partial differential equations
Subjects: Functional Analysis (math.FA)

[188]  arXiv:1709.01258 (replaced) [pdf, other]
Title: Algebraic characterisations of negatively-curved special groups and applications to graph braid groups
Authors: Anthony Genevois
Comments: 57 pages, 8 figures. Comments are welcome
Subjects: Group Theory (math.GR)

[189]  arXiv:1709.07073 (replaced) [pdf, ps, other]
Title: A Unified Approach to the Global Exactness of Penalty and Augmented Lagrangian Functions I: Parametric Exactness
Authors: M.V. Dolgopolik
Comments: 34 pages.
arXiv admin note: text overlap with arXiv:1710.01961 Journal-ref: Journal of Optimization Theory and Applications, 176: 3 (2018) 728-744 Subjects: Optimization and Control (math.OC) [190]  arXiv:1710.03665 (replaced) [pdf, ps, other] Title: Existence of Thin Shell Wormholes using non-linear distributional geometry Comments: This paper summarizes the results of my master thesis (handed in on 4. of August 2017, defended on 24. of August 2017) Subjects: Differential Geometry (math.DG); General Relativity and Quantum Cosmology (gr-qc) [191]  arXiv:1710.05143 (replaced) [pdf, ps, other] Title: A Complementary Inequality to the Information Monotonicity for Tsallis Relative Operator Entropy Comments: to appear in Linear Multilinear Algebra Subjects: Functional Analysis (math.FA) [192]  arXiv:1711.00734 (replaced) [pdf, ps, other] Title: Unbounded $p_τ$-Convergence in Vector Lattices Normed by Locally Solid Lattices Authors: Abdulla Aydın Comments: 12 pages. arXiv admin note: text overlap with arXiv:1609.05301, text overlap with arXiv:1706.02006 by other authors Subjects: Functional Analysis (math.FA) [193]  arXiv:1711.06160 (replaced) [src] Title: The Class of Countable Projective Planes is Borel Complete Authors: Gianluca Paolini Comments: This paper has been merged with arXiv:1707.00294 Subjects: Logic (math.LO) [194]  arXiv:1712.00232 (replaced) [pdf, ps, other] Title: Optimal Algorithms for Distributed Optimization Subjects: Optimization and Control (math.OC); Computational Complexity (cs.CC); Multiagent Systems (cs.MA); Systems and Control (cs.SY); Machine Learning (stat.ML) [195]  arXiv:1712.06322 (replaced) [pdf, ps, other] Title: Local and global trace formulae for smooth hyperbolic diffeomorphisms Authors: Malo Jézéquel Comments: New version containing much better results in the Gevrey case. Parts 1 and 3 of the old version are now parts 2 and 3 respectively. Part 2 of the old version has been removed and replaced by parts 4 to 7. 
Some other minor improvements have also been made Subjects: Dynamical Systems (math.DS) [196]  arXiv:1712.06472 (replaced) [pdf, other] Title: A Bramble-Pasciak conjugate gradient method for discrete Stokes problems with lognormal random viscosity Comments: 23 pages, 3 figures, 1 table Journal-ref: Lecture Notes in Computational Science and Engineering (124), Recent Advances in Computational Engineering, ICCE 2017, Springer, 2018 Subjects: Numerical Analysis (math.NA) [197]  arXiv:1712.07383 (replaced) [pdf, other] Title: Monte-Carlo methods for the pricing of American options: a semilinear BSDE point of view Authors: Bruno Bouchard (CEREMADE), Ki Chau (CWI), Arij Manai (UM), Ahmed Sid-Ali Subjects: Probability (math.PR); Computational Finance (q-fin.CP) [198]  arXiv:1712.09949 (replaced) [pdf, ps, other] Title: Almost formality of quasi-Sasakian and Vaisman manifolds with applications to nilmanifolds Comments: 33 pages Subjects: Differential Geometry (math.DG); Algebraic Topology (math.AT); Symplectic Geometry (math.SG) [199]  arXiv:1801.07751 (replaced) [pdf, other] Title: Torsion and Linking number for a surface diffeomorphism Authors: Anna Florio Subjects: Dynamical Systems (math.DS) [200]  arXiv:1801.08260 (replaced) [pdf, ps, other] Title: Galois theory for general systems of polynomial equations Comments: 17 pages, 1 figure [201]  arXiv:1801.09170 (replaced) [pdf, ps, other] Title: Generalized Littlewood-Richardson coefficients for branching rules of GL(n) and extremal weight crystals Authors: Brett Collins Comments: 28 pages, comments welcome Subjects: Representation Theory (math.RT); Combinatorics (math.CO) [202]  arXiv:1802.01716 (replaced) [pdf, other] Title: A regularised Dean-Kawasaki model: derivation and analysis Comments: 48 pages, 1 figure Subjects: Probability (math.PR); Mathematical Physics (math-ph); Analysis of PDEs (math.AP) [203]  arXiv:1802.04787 (replaced) [pdf, other] Title: Koopman wavefunctions and classical-quantum correlation dynamics Comments: 10 pages, 3 figures Subjects: Mathematical Physics (math-ph); Chemical Physics (physics.chem-ph); Quantum Physics (quant-ph) [204]  arXiv:1802.05861 (replaced) [pdf, ps, other] Title: Generalizing Bottleneck Problems Comments: IEEE International Symposium on Information Theory (ISIT), 2018 [205]  arXiv:1802.08963 (replaced) [pdf, other] Title: The Mutual Information in Random Linear Estimation Beyond i.i.d. Matrices Comments: Presented at the 2018 IEEE International Symposium on Information Theory (ISIT) Journal-ref: 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 1390-1394 Subjects: Information Theory (cs.IT); Mathematical Physics (math-ph) [206]  arXiv:1802.09294 (replaced) [pdf, ps, other] Title: Regularity theory for parabolic equations with singular degenerate coefficients Comments: submitted, 37 pages. 
the main results are slightly improved Subjects: Analysis of PDEs (math.AP) [207]  arXiv:1803.00322 (replaced) [pdf, other] Title: Spatial Lobes Division Based Low Complexity Hybrid Precoding and Diversity Combining for mmWave IoT Systems Comments: 12 pages, 9 figures, IEEE Internet of Things Journal Subjects: Information Theory (cs.IT) [208]  arXiv:1803.00402 (replaced) [pdf, ps, other] Title: On Periodic Solutions to Lagrangian System With Constraints Authors: Oleg Zubelevich Subjects: Dynamical Systems (math.DS) [209]  arXiv:1803.02581 (replaced) [pdf, other] Title: Regularity of fractional maximal functions through Fourier multipliers Comments: final version, to appear in Journal of Functional Analysis Subjects: Classical Analysis and ODEs (math.CA) [210]  arXiv:1803.03728 (replaced) [pdf, ps, other] Title: Geodesic nets with three boundary vertices Authors: Fabian Parsch Comments: clarified proof of Special Combined Angle Lemma 3.6, clarified requirements of Conditional Path Independence Lemma 4.9, added references, corrected typos, other minor edits Subjects: Metric Geometry (math.MG); Combinatorics (math.CO); Differential Geometry (math.DG) [211]  arXiv:1803.04117 (replaced) [pdf, ps, other] Title: The Limit of Large Mass Monopoles Comments: v3: corrects a misstatement in Proposition 2.3 and an error in Theorem 5.1 of v2 that led us back to earlier versions of both Theorem 5.1 and Theorem 1.1 Subjects: Differential Geometry (math.DG) [212]  arXiv:1803.07002 (replaced) [pdf, ps, other] Title: Auslander-Reiten $(d+2)$-angles in subcategories and a $(d+2)$-angulated generalisation of a theorem by Brüning Authors: Francesca Fedele Comments: 26 Pages. This is the final, accepted version which is to appear in the Journal of Pure and Applied Algebra Subjects: Representation Theory (math.RT) [213]  arXiv:1803.07154 (replaced) [pdf, ps, other] Title: Lines in metric spaces: universal lines counted with multiplicity Subjects: Combinatorics (math.CO) [214]  arXiv:1804.00879 (replaced) [pdf, ps, other] Title: The categorified Grothendieck-Riemann-Roch theorem Comments: 58 pages. v2: last section rewritten Subjects: K-Theory and Homology (math.KT); Algebraic Geometry (math.AG); Algebraic Topology (math.AT); Category Theory (math.CT) [215]  arXiv:1804.02746 (replaced) [pdf, ps, other] Title: Duality and approximation of Bergman spaces Journal-ref: Adv. Math. 341 (2019), 616-656 Subjects: Complex Variables (math.CV) [216]  arXiv:1804.03173 (replaced) [pdf, other] Title: On the Increasing Tritronquée Solutions of the Painlevé-II Equation Authors: Peter D. 
Miller Journal-ref: SIGMA 14 (2018), 125, 38 pages Subjects: Classical Analysis and ODEs (math.CA); Exactly Solvable and Integrable Systems (nlin.SI) [217]  arXiv:1804.03420 (replaced) [pdf, ps, other] Title: The Sum-Rate-Distortion Region of Correlated Gauss-Markov Sources Comments: 11 pages Subjects: Information Theory (cs.IT) [218]  arXiv:1804.06971 (replaced) [pdf, ps, other] Title: Uniqueness of Equilibrium with Sufficiently Small Strains in Finite Elasticity Comments: 39 pages Subjects: Analysis of PDEs (math.AP) [219]  arXiv:1805.00542 (replaced) [pdf, other] Title: Morita Invariance of Intrinsic Characteristic Classes of Lie Algebroids Authors: Pedro Frejlich Journal-ref: SIGMA 14 (2018), 124, 12 pages Subjects: Symplectic Geometry (math.SG) [220]  arXiv:1805.03881 (replaced) [pdf, ps, other] Title: High pseudomoments of the Riemann zeta function Comments: This paper has been accepted for publication in Journal of Number Theory Subjects: Number Theory (math.NT) [221]  arXiv:1805.05490 (replaced) [pdf, other] Title: Mahler Measure and the Vol-Det Conjecture Comments: 29 pages. V2: Minor changes, fixed typos, improved exposition Subjects: Geometric Topology (math.GT); Number Theory (math.NT) [222]  arXiv:1805.06218 (replaced) [pdf, ps, other] Title: On the Pólya-Szegö operator inequality Subjects: Functional Analysis (math.FA) [223]  arXiv:1805.10651 (replaced) [pdf, other] Title: Glueing together Modular flows with free fermions Authors: Gabriel Wong Comments: 26 pages, 5 figures Subjects: High Energy Physics - Theory (hep-th); Other Condensed Matter (cond-mat.other); Mathematical Physics (math-ph); Quantum Physics (quant-ph) [224]  arXiv:1805.12547 (replaced) [pdf, other] Title: Long-time predictive modeling of nonlinear dynamical systems using neural networks Comments: 30 pages Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG); Dynamical Systems (math.DS) [225]  arXiv:1806.04392 (replaced) [pdf, other] Title: The classification of multipartite quantum correlation Authors: Szilárd Szalay Comments: 13+4 pages, 3 figures Journal-ref: J. Phys. A: Math. Theor. 51 485302 (2018) [226]  arXiv:1806.04988 (replaced) [pdf, ps, other] Title: On the rank of a random binary matrix Subjects: Combinatorics (math.CO) [227]  arXiv:1806.07193 (replaced) [pdf, other] Title: A Meshfree Generalized Finite Difference Method for Surface PDEs [228]  arXiv:1806.09987 (replaced) [pdf, ps, other] Title: A note on mean equicontinuity Comments: arXiv admin note: substantial text overlap with arXiv:1312.7663 by other authors Subjects: Dynamical Systems (math.DS) [229]  arXiv:1806.11097 (replaced) [pdf, ps, other] Title: Almost symmetric numerical semigroups with given Frobenius number and type Comments: 12 pages, 1 figure. References updated. A conjecture added (Remark 20). Accepted for publication in Journal of Algebra and its Applications Subjects: Commutative Algebra (math.AC) [230]  arXiv:1807.02725 (replaced) [pdf, ps, other] Title: Numerical analysis of a discontinuous Galerkin method for Cahn-Hilliard-Navier-Stokes equations Comments: 31 pages Subjects: Numerical Analysis (math.NA) [231]  arXiv:1807.03028 (replaced) [pdf, ps, other] Title: Bornological, coarse and uniform groups Authors: Igor Protasov Comments: bornology, coarse structure, uniformity, Stone-$\check{C}$ech compactification. 
arXiv admin note: text overlap with arXiv:1803.10504 Subjects: General Topology (math.GN) [232]  arXiv:1807.03234 (replaced) [pdf, other] Title: Bayesian Sequential Joint Detection and Estimation Comments: 35 pages, 2 figures, accepted for publication in Sequential Analysis Subjects: Signal Processing (eess.SP); Information Theory (cs.IT); Statistics Theory (math.ST) [233]  arXiv:1807.04197 (replaced) [pdf, other] Title: Cut-and-join equation for monotone Hurwitz numbers revisited Comments: 7 pages. v2: Added a second proof of lemma 2.3, using Jucys-Murphy elements, and expanded motivation Subjects: Algebraic Geometry (math.AG); Combinatorics (math.CO) [234]  arXiv:1807.04717 (replaced) [pdf, ps, other] Title: About the Chasm Separating the Goals of Hilbert's Consistency Program from the Second Incompletess Theorem Authors: Dan E. Willard Comments: The bibliography section of this article contains citations to all of Willard's major papers prior to 2018 about logic Subjects: Logic (math.LO) [235]  arXiv:1807.08366 (replaced) [pdf, ps, other] Title: Hilbert Spaces Contractively Contained in Weighted Bergman Spaces on the Unit Disk Authors: Cheng Chu Subjects: Functional Analysis (math.FA) [236]  arXiv:1807.08418 (replaced) [pdf, ps, other] Title: How many weights can a cyclic code have ? Comments: submitted on 21 June, 2018 Subjects: Information Theory (cs.IT) [237]  arXiv:1808.02446 (replaced) [pdf, other] Title: Geometric multipole expansion and its application to neutral inclusion of general shape Comments: 23 pages Subjects: Numerical Analysis (math.NA) [238]  arXiv:1808.03734 (replaced) [pdf, other] Title: Traveling Waves for Nonlocal Models of Traffic Flow Subjects: Analysis of PDEs (math.AP) [239]  arXiv:1809.06068 (replaced) [pdf, ps, other] Title: Bismut Formula for Lions Derivative of Distribution Dependent SDEs and Applications Comments: 30 pages Subjects: Probability (math.PR) [240]  arXiv:1809.07705 (replaced) [pdf, ps, other] Title: On convergence of power series in p-adic field Comments: 11 pages. We highly appreciate valuable comments from the interested researchers Subjects: Number Theory (math.NT) [241]  arXiv:1809.10844 (replaced) [pdf, other] Title: Proximal Recursion for Solving the Fokker-Planck Equation Subjects: Optimization and Control (math.OC) [242]  arXiv:1810.00017 (replaced) [pdf, other] Title: Single Snapshot Super-Resolution DOA Estimation for Arbitrary Array Geometries Authors: A. Govinda Raj, J.H. McClellan (Georgia Institute of Technology) Comments: Accepted for publication in IEEE Signal Processing Letters [243]  arXiv:1810.00080 (replaced) [pdf, ps, other] Title: Differential geometry of invariant surfaces in simply isotropic and pseudo-isotropic spaces Comments: 30 pages; comments are welcomed Subjects: Differential Geometry (math.DG) [244]  arXiv:1810.01793 (replaced) [pdf, other] Title: Length spectra of flat metrics coming from q-differentials Authors: Marissa Loving Comments: An error in the proof of Proposition 3.2 (which is split between the proofs of Lemma 3.1 and 3.2) in the case that q is even has been corrected. The changes involve a slight modification of Definition 3.1 and a short paragraph at the end of each of the proofs of Lemmas 3.1 and 3.2 Subjects: Geometric Topology (math.GT) [245]  arXiv:1810.03168 (replaced) [pdf, ps, other] Title: The AKS Theorem, A.C.I. 
Systems and Random Matrix Theory Comments: Topical Review (special volume 50 years Toda Lattice), with typos corrected and some additional references Journal-ref: Journal of Physics A: mathematical and theoretical 51 (2018) 423001 (47pp) Subjects: Mathematical Physics (math-ph) [246]  arXiv:1810.04479 (replaced) [pdf, ps, other] Title: Connections Adapted to Non-Negatively Graded Structures Comments: 19 pages. Accepted for publication in the International Journal of Geometric Methods in Modern Physics Subjects: Differential Geometry (math.DG); Mathematical Physics (math-ph); Symplectic Geometry (math.SG) [247]  arXiv:1810.05179 (replaced) [pdf, ps, other] Title: Categorical primitive forms and Gromov-Witten invariants of $A_n$ singularities Comments: 24 pp, references added Subjects: Algebraic Geometry (math.AG); Commutative Algebra (math.AC); Symplectic Geometry (math.SG) [248]  arXiv:1810.05262 (replaced) [pdf, other] Title: Sidon sets and $C_4$-saturated graphs Comments: 12 pages, 0 figures, 1 table, paper Subjects: Combinatorics (math.CO) [249]  arXiv:1810.08238 (replaced) [pdf, ps, other] Title: Degenerate versions of Green's theorem for Hall modules Authors: Matthew B. Young Comments: 21 pages. Minor corrections and clarifications in v2 Subjects: Representation Theory (math.RT); Category Theory (math.CT); Quantum Algebra (math.QA) [250]  arXiv:1810.08693 (replaced) [pdf, ps, other] Title: The total variation distance between high-dimensional Gaussians Comments: 11 pages Subjects: Statistics Theory (math.ST); Probability (math.PR) [251]  arXiv:1810.11445 (replaced) [pdf, ps, other] Title: Asymptotic-preserving schemes for two-species binary collisional kinetic system with disparate masses I: time discretization and asymptotic analysis Subjects: Numerical Analysis (math.NA) [252]  arXiv:1811.01193 (replaced) [pdf, ps, other] Title: On the Kodaira dimension of $\overline{\mathcal{N}}_{g,n}$ Authors: Irene Schwarz Subjects: Algebraic Geometry (math.AG) [253]  arXiv:1811.02418 (replaced) [pdf, ps, other] Title: Non-trivial zeros of Riemann's Zeta function via revised Euler-Maclaurin-Siegel and Abel-Plana summation formulas Authors: Xiao-Jun Yang Comments: V1 has 22 pages, 1 figure and 5 tables; V2 has 26 pages, 1 figure and 7 tables Subjects: General Mathematics (math.GM) [254]  arXiv:1811.02670 (replaced) [pdf, ps, other] Title: Hausdorff closed limits and the causal boundary of globally hyperbolic spacetimes with timelike boundary Comments: 38 pages Subjects: Mathematical Physics (math-ph); General Relativity and Quantum Cosmology (gr-qc); Differential Geometry (math.DG) [255]  arXiv:1811.02725 (replaced) [pdf, ps, other] Title: Static Data Structure Lower Bounds Imply Rigidity [256]  arXiv:1811.02802 (replaced) [pdf, ps, other] Title: New Parameters on MDS Self-dual Codes over Finite Fields Subjects: Information Theory (cs.IT) [257]  arXiv:1811.02878 (replaced) [pdf, other] Title: On the composition for rough singular integral operators Comments: 20 pages Subjects: Classical Analysis and ODEs (math.CA) [258]  arXiv:1811.04713 (replaced) [pdf, ps, other] Title: Gauges, Loops, and Polynomials for Partition Functions of Graphical Models Comments: 14 pages Subjects: Machine Learning (cs.LG); Mathematical Physics (math-ph); Machine Learning (stat.ML) [259]  arXiv:1811.04744 (replaced) [pdf, ps, other] Title: Well-posedness of three-dimensional isentropic compressible Navier-Stokes equations with degenerate viscosities and far field vacuum Comments: 51 Pages Subjects: Analysis of PDEs 
(math.AP) [260]  arXiv:1811.05069 (replaced) [pdf, ps, other] Title: The first passage time density of Brownian motion and the heat equation with Dirichlet boundary condition in time dependent domains Authors: JM Lee Comments: 18 pages [261]  arXiv:1811.05203 (replaced) [pdf, ps, other] Title: On the Polarization Levels of Automorphic-Symmetric Channels Authors: Rajai Nasser Subjects: Information Theory (cs.IT) [262]  arXiv:1811.05333 (replaced) [pdf, ps, other] Title: A mathematical perspective on the phenomenology of non-perturbative Quantum Field Theory Authors: Ali Shojaei-Fard Comments: Monograph Subjects: Mathematical Physics (math-ph) [263]  arXiv:1811.05595 (replaced) [pdf, ps, other] Title: Transform Methods for Heavy-Traffic Analysis Subjects: Probability (math.PR) [264]  arXiv:1811.05802 (replaced) [pdf, ps, other] Title: Any nonsingular action of the full symmetric group is isomorphic to an action with invariant measure Authors: Nikolay Nessonov Comments: 9 pages Subjects: Representation Theory (math.RT) [265]  arXiv:1811.05937 (replaced) [pdf, other] Title: Opinion dynamics with Lotka-Volterra type interactions Comments: 30 pages, 2 figures Subjects: Probability (math.PR) [ total of 265 entries: 1-265 ] Disable MathJax (What is MathJax?)
XYZ quantum Heisenberg models with p-orbital bosons

We demonstrate how the spin-1/2 XYZ quantum Heisenberg model can be realized with bosonic atoms loaded in the p band of an optical lattice in the Mott regime. The combination of Bose statistics and the symmetry of the p-orbital wave functions leads to a non-integrable XYZ Heisenberg model with anti-ferromagnetic couplings. Moreover, the sign and relative strength of the couplings characterizing the model are shown to be experimentally tunable. We display the rich phase diagram in the one-dimensional case, and discuss finite size effects relevant for trapped systems. Finally, experimental issues related to preparation, manipulation, detection, and imperfections are considered.

PACS numbers: 03.75.Lm, 67.85.Hj, 05.30.Rt

Introduction.– Powerful tools developed recently to unravel the physics of many-body quantum systems offer an exciting new platform for understanding quantum magnetism. It is now possible to engineer different systems in the lab that mimic the physics of theoretically challenging spin models (1), thereby performing "quantum simulations" (2). Along these lines, systems of trapped ions and of polar molecules are promising candidates. Trapped ions, for example, have already been employed to simulate both small (3) and large (4) numbers of spins. In these setups, however, sustaining control over the parameters becomes very difficult as the system size increases. Furthermore, due to the trapping potentials, realizations are limited to chains with up to 25 spins. It is also very difficult to construct paradigmatic spin models with short-range interactions using systems of trapped ions. Similar limitations appear when using polar molecules, where the effective spin interactions (5); (6) are obtained from the intrinsic dipole-dipole interactions. Due to the character of the dipolar interaction, these systems give rise to emergent models that are inherently long range, and the resulting couplings usually feature spatial anisotropies. Short-range spin models can instead be realized with cold atoms in optical lattices (1). A bosonic system in a tilted lattice has recently been used to simulate the phase transition in a 1D Ising model (7). Fermionic atoms were employed to study dynamical properties of quantum magnetism for spin systems (8); (9). This idea, first introduced in Ref. (10), has also been applied to other configurations, and simulations of different types of spin models have been proposed (11). However, due to the character of the atomic s-wave scattering among the different Zeeman levels, such mappings usually yield effective spin models supporting continuous symmetries, like the XXZ model. But as the main goal of a quantum simulator is to realize systems that cannot be tackled via analytical and/or numerical approaches, it is important to explore alternative scenarios that yield low-symmetry spin models with anisotropic couplings and external fields. In this paper we propose such a scenario by demonstrating that bosonic atoms in the first excited band (p band) of a two-dimensional (2D) optical lattice can realize the spin-1/2 XYZ quantum Heisenberg model in an external field. Systems of cold atoms in excited bands feature an additional orbital degree of freedom (12) that gives rise to novel physical properties (13), which include supersolids (14) and other types of novel phases (15), unconventional condensation (16), and frustration (17).
Also, a condensate with a complex order parameter was recently observed experimentally (44); (19). The dynamics of bosons in the p band includes anisotropic tunneling and orbital-changing interactions, where two atoms in one orbital state scatter into two atoms in a different orbital state. This is the key mechanism leading to the anisotropy of the effective spin model obtained here: these processes reduce the continuous U(1) symmetry characteristic of the XXZ model, which would effectively describe fermions in the p band (20), to a set of discrete symmetries characteristic of the XYZ model. In addition, due to the anomalous p-band dispersions, the couplings of the resulting spin model can favor anti-ferromagnetic order even in the bosonic case. We also demonstrate how further control of both the strength and sign of the couplings is obtained by external driving. This means that one can realize a whole class of anisotropic models with ferromagnetic and/or anti-ferromagnetic correlations. To illustrate the rich physics that can be explored with this system we discuss the phase diagram of the XYZ chain in an external field. This case exhibits ferromagnetic as well as anti-ferromagnetic phases, a magnetized/polarized phase, a spin-flop and a floating phase (21). Finite size effects relevant for the trapped case are examined via exact diagonalization. This reveals the appearance of a devil's staircase manifested in the form of spin density waves. Finally, we discuss how to experimentally probe and manipulate the spin degrees of freedom.

p-orbital Bose system.– We consider bosonic atoms of mass m in a 2D optical lattice. Assuming that all atoms are in the first excited (p) bands, the tight-binding Hamiltonian (1) consists of an anisotropic kinetic term, onsite density-density interactions, and an onsite orbital-changing interaction. Here a creation operator produces a bosonic particle in the p_x or p_y orbital at a given site, and the sum in the kinetic term is over nearest neighbors. The tunneling matrix elements are overlap integrals of the Wannier functions of the given orbital at neighboring sites. Note that the tunneling is anisotropic. For instance, a boson in the p_x orbital has a much larger tunneling rate in the x direction than in the y direction. The interaction coupling constants are set by overlaps of onsite Wannier functions, with the overall interaction strength determined by the s-wave scattering length. The last term in (1) is the orbital-changing term describing the flipping of a pair of atoms from the p_x state to the p_y state and vice versa. Note that this term is absent in the case of fermionic atoms.

Effective spin Hamiltonian.– We are interested in the physics of the Mott insulator phase with unit filling in the strongly repulsive limit, where the tunneling is small compared to the interactions. Projecting onto the Mott space of singly occupied sites, the Schrödinger equation reduces, to second order in the tunneling, to an effective eigenvalue problem (22). The space of doubly occupied states of a given site is three-dimensional, spanned by the states with two atoms in p_x, two atoms in p_y, and one atom in each orbital. In this space it is straightforward to evaluate the interaction Hamiltonian from (1), and its subsequent inversion gives the energy denominators of the second-order theory. In particular, the off-diagonal terms derive from the orbital-changing term. Using (2) we can now calculate all possible matrix elements of the effective Hamiltonian in the Mott space. By further employing the Schwinger angular momentum representation, S^z_i = (n_{xi} - n_{yi})/2, S^+_i = a†_{xi} a_{yi}, and S^-_i = a†_{yi} a_{xi}, together with the constraint n_{xi} + n_{yi} = 1, we can (ignoring irrelevant constants) map (3) onto a spin-1/2 XYZ model in an external field (23). Here, ⟨ij⟩ means summing over each nearest-neighbor pair only once. The couplings J_x, J_y, and J_z are combinations of second-order superexchange energies, and the effective magnetic field involves the difference of the onsite energies of the two orbitals. Equation (4) is a main result of this paper. It demonstrates how p-orbital bosons in a 2D optical lattice can realize the quantum spin-1/2 XYZ Heisenberg model.
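To make the structure of Eq. (4) concrete, here is a minimal NumPy sketch, with hypothetical couplings J_x, J_y, J_z and field h (not values derived from lattice parameters), that builds the spin-1/2 XYZ chain Hamiltonian and checks numerically that the Z2 parity Π_i σ^z_i is conserved while the total magnetization is not once J_x ≠ J_y:

```python
import numpy as np

# Pauli matrices; spin-1/2 operators are S = sigma/2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2)

def op_at(op, site, n):
    """Embed a single-site operator at `site` in an n-site chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else id2)
    return out

def xyz_chain(n, jx, jy, jz, h):
    """Open XYZ chain in a longitudinal field:
    H = sum_i [Jx Sx_i Sx_{i+1} + Jy Sy_i Sy_{i+1} + Jz Sz_i Sz_{i+1}] + h sum_i Sz_i."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for J, s in ((jx, sx), (jy, sy), (jz, sz)):
            H += 0.25 * J * op_at(s, i, n) @ op_at(s, i + 1, n)
    return H + 0.5 * h * sum(op_at(sz, i, n) for i in range(n))

n = 6
H = xyz_chain(n, jx=1.0, jy=0.6, jz=0.3, h=0.2)   # hypothetical couplings

# Z2 parity (pi rotation about z) survives even though Jx != Jy breaks U(1).
parity = op_at(sz, 0, n)
for i in range(1, n):
    parity = parity @ op_at(sz, i, n)
Sz_tot = 0.5 * sum(op_at(sz, i, n) for i in range(n))

print("[H, parity] = 0 ?", np.allclose(H @ parity, parity @ H))   # True
print("[H, Sz_tot] = 0 ?", np.allclose(H @ Sz_tot, Sz_tot @ H))   # False for Jx != Jy
```

The broken U(1) (total S^z) and the surviving Z2 parity seen here are precisely the symmetry reduction attributed above to the orbital-changing term.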
Several interesting facts should be noted. First, due to the symmetry of the p orbitals (12), the couplings inherit the spatial anisotropy of the tunneling. Furthermore, the signs of the p-band tunneling amplitudes yield anti-ferromagnetic instead of the usual ferromagnetic couplings for bosons. The anisotropy J_x ≠ J_y can be traced to the orbital-changing term in Eq. (1), which reduces the continuous U(1) symmetry of the XXZ model to a set of Z2 symmetries. The Z2 symmetries reflect the 'parity' conservation in the original bosonic picture, which classifies the many-body states according to a total even or odd number of atoms in the p_x and p_y orbitals. Since the orbital-changing term is absent for fermions, the XYZ model with anisotropic couplings is a peculiar feature of bosons in the p band. We emphasize that the above derivation makes no assumptions regarding the geometry of the 2D lattice - i.e. it can be square, hexagonal etc.

1D phase diagram.– To illustrate the rich physics of the XYZ model, we now focus on the case of a 1D lattice, where quantum fluctuations are especially pronounced. Note that by increasing both the lattice amplitude and spacing in the y direction, one can exponentially suppress tunneling in that direction to obtain a 1D model, while the p_x and p_y orbitals are still quasi-degenerate (24). In the 1D setting, we will drop the "direction" subscript on the coupling constants. For 1D, the importance of the orbital-changing term can be further illuminated by employing the Jordan-Wigner transformation to fermionic operators. The result is a quadratic fermionic Hamiltonian in which the XYZ anisotropy J_x - J_y appears as a pairing term that typically opens a gap in the energy spectrum. Incidentally, a suitable limit of Eq. (5) is a realization of the Kitaev chain (25).

Figure 1: (Color online) (a) Schematic phase diagram of the XYZ chain. (b) Finite size 'phase diagram' obtained by exact diagonalization of 18 spins, comprising an incomplete devil's staircase of SDW between the PP and AFM phases; the anisotropy parameter is held fixed in (b).

The schematic phase diagram is illustrated in Fig. 1 (a). At zero field, the XYZ model is integrable (26). For large positive values of J_z the system is anti-ferromagnetic (AFM) in the z direction. Smaller values of J_z are characterized by Néel ordering in the transverse direction, and the system is in the so-called spin-flop phase (SF). Large negative values of J_z are characterized by a ferromagnetic phase (FM) in the z direction, and in all cases the limit of a large external field displays a magnetized phase (PP), where the spins align along the orientation of the field in the z direction. These three phases also characterize the phase diagram of the XXZ model in a longitudinal field (27). However, for non-zero anisotropy J_x ≠ J_y, a gapless floating phase (FP) emerges between the SF and the AFM phases, which is characterized by power-law decay of the correlations (28); (21); (29). The transition from the AFM to the FP is of the commensurate-incommensurate (C-IC) type, whereas the transition between the FP and SF phases is of the Berezinskii-Kosterlitz-Thouless (BKT) type. There is furthermore a first-order transition between the two polarized phases. Finally, there is an Ising transition between the PP and the SF phases. The experimental realization of the XYZ Heisenberg model will inevitably involve finite size effects due to the harmonic trapping potential. Within the local density approximation, the trap renormalizes the couplings so that they become spatially dependent (30), but this effect can be negligible if the orbitals are small compared to the length scale of the trap.
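The pairing term generated by the Jordan-Wigner transformation is easiest to appreciate in the Kitaev-chain limit just mentioned. The following sketch uses a standard textbook parametrization of the open Kitaev chain (t, Δ, μ are illustrative parameters, not the couplings of Eq. (5), and the sign conventions are one common choice) and diagonalizes its Bogoliubov-de Gennes matrix; in the topological regime |μ| < 2t the open chain hosts near-zero Majorana edge modes:

```python
import numpy as np

def kitaev_bdg(n, t, delta, mu):
    """BdG matrix of an open Kitaev chain
    H = sum_i [-t c†_i c_{i+1} + delta c_i c_{i+1} + h.c.] - mu sum_i c†_i c_i,
    up to a convention-dependent overall factor."""
    A = -mu * np.eye(n)            # normal (hopping + chemical potential) part
    B = np.zeros((n, n))           # pairing part, antisymmetric
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -t
        B[i, i + 1] = delta
        B[i + 1, i] = -delta
    return np.block([[A, B], [-B, -A]])

n = 60
for mu in (0.0, 1.0, 3.0):         # |mu| < 2t: topological; |mu| > 2t: trivial
    e = np.sort(np.abs(np.linalg.eigvalsh(kitaev_bdg(n, t=1.0, delta=1.0, mu=mu))))
    print(f"mu = {mu}: smallest |E| = {e[0]:.2e}")
# In the topological phase the smallest |E| is (numerically) zero: Majorana modes.
```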
In the regime of strong repulsion, the main effect of the trap is instead that it gives rise to "wedding cake" structures with Mott regions of integer filling. This effect was observed in the lowest band Bose-Hubbard model (1), and predicted theoretically to occur for anti-ferromagnetic systems (31). To examine finite size effects, we have performed exact diagonalization of a chain of 18 spins with open boundary conditions. Figure 1 (b) displays the resulting finite size 'phase diagram'. The colors correspond to different values of the total magnetization of the ground state. While the PP phase and the AFM phase are both clearly visible, the numerical results reveal a step-like structure of the magnetization in between the two phases. We attribute these steps in the magnetization to a devil's staircase structure of spin-density waves (SDW). As we see from Fig. 1 (b), it is only possible to give a numerical result for the PP-SF Ising transition. In particular, the C-IC and BKT transitions are overshadowed by the transitions between SDW. In the thermodynamic limit the staircase becomes complete and the changes in the magnetization become smooth. One then recovers the phase diagram of Fig. 1 (a). These transitions between different SDW are more pronounced for moderate system sizes. For a typical experimental system with 50 sites, for example, we estimate 15 different SDW between the AFM and PP phases.

Measurements and manipulations.– While time-of-flight measurements can reveal some of the phases (19), single-site addressing techniques (33) will be much more powerful when extracting correlation functions. To address single orbital states or even perform spin rotations, one may borrow techniques developed for trapped ions (43). Making use of the symmetries of the p_x and p_y orbitals, stimulated Raman transitions can drive both sideband and carrier transitions for the chosen orbitals in the Lamb-Dicke regime. These transitions can be made so short that the system is essentially frozen during the operation. Driving sideband transitions in this way, spin rotations may be implemented. For example, a spin rotation about the x axis is achieved by driving the red sidebands for both orbitals (23). As a result, the two orbitals are coupled to the s orbital in a Λ configuration, and in the largely detuned case an adiabatic elimination of the s band gives an effective coupling between the p_x and p_y orbitals (46). This scheme thus realizes an effective spin-rotation Hamiltonian whose strength is set by the effective Rabi frequencies and the detuning. Alternatively, Stark-shifting one of the orbitals results in a rotation about the z axis. Since the spin operators do not commute, any rotation can be realized from these two operations. Performing fluorescence on single orbital states by driving the carrier transition acts as measuring S^z. This, combined with the above-mentioned rotations, makes it possible to measure the spin at any site in any direction (23); (43).

Figure 2: (Color online) Different types of XYZ models are achieved by varying the relative tunneling strength and the relative orbital squeezing. The three different parameter regions are: (I) anti-ferromagnetic couplings in all spin components, (II) ferromagnetic or anti-ferromagnetic couplings in one spin component and anti-ferromagnetic couplings in the others, and (III) same as in (II) but realized at different relative squeezing. The inset shows one example of the spin parameters J_x, J_y, and J_z at fixed relative tunneling strength.

Tuning of couplings.– For a square optical lattice, symmetry implies that the tunneling amplitudes of the two orbitals coincide upon interchanging the roles of the x and y directions. Moreover, in the harmonic approximation the onsite interactions obey U_xx = U_yy = 3U_xy, from which the signs and relative sizes of the couplings follow.
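As a small-scale stand-in for the 18-spin calculation, the self-contained sketch below (8 sites, illustrative couplings) scans the field and prints the ground-state magnetization. In the XXZ limit J_x = J_y shown here total S^z is conserved and the magnetization is exactly quantized; with J_x ≠ J_y the jumps instead come from level crossings between the two parity sectors:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def op_at(op, site, n):
    out = np.array([[1.0 + 0j]])
    for j in range(n):
        out = np.kron(out, op if j == site else np.eye(2))
    return out

def xyz_chain(n, jx, jy, jz, h):
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for J, s in ((jx, sx), (jy, sy), (jz, sz)):
            H += 0.25 * J * op_at(s, i, n) @ op_at(s, i + 1, n)
    return H + 0.5 * h * sum(op_at(sz, i, n) for i in range(n))

n = 8
Sz = 0.5 * sum(op_at(sz, i, n) for i in range(n))
for h in np.linspace(0.0, 2.0, 21):
    # XXZ limit (Jx = Jy): Sz is conserved, so <Sz_tot> climbs in sharp steps.
    w, v = np.linalg.eigh(xyz_chain(n, 1.0, 1.0, 0.8, -h))
    g = v[:, 0]
    print(f"h = {h:.2f}   <Sz_tot> = {(g.conj() @ Sz @ g).real:+.3f}")
```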
This gives ferromagnetic couplings for the z components of neighboring spins, while the interactions between the x and between the y components have anti-ferromagnetic couplings. We now show how the relative strength and sign of the different couplings can be controlled by squeezing one of the orbital states. Such squeezing can be accomplished by again driving the carrier transition of either of the two orbitals, dispersively with a spatially dependent field (23). The shape of the drive can be chosen such that the resulting Stark shift is weaker in the center of the sites, resulting in a narrowing of the orbital. To be specific, assume that the ratio of the harmonic length scales of the two orbitals is tuned. A straightforward calculation using harmonic-oscillator functions then yields the interaction overlaps, and thereby the coupling constants, as functions of this squeezing ratio. The inset in Fig. 2 displays the three coupling parameters as a function of the squeezing ratio at fixed relative tunneling strength. We see that the relative size and even the sign of the couplings can be tuned by varying the squeezing. In particular, while one of the couplings always remains AFM, the others can be made either FM or AFM. In the main part of Fig. 2, we sketch the different accessible models as functions of the relative tunneling strength and the relative orbital squeezing. This clearly demonstrates that one can realize a whole class of spin chains by using this method.

Experimental realization.– In Ref. (44), the experimental realization of p-orbital bosons in an effective 1D optical lattice with a lifetime of several milliseconds was reported. With an average number of approximately two atoms per site, the atoms could tunnel hundreds of times in the p band before decaying. Since the main decay mechanism stems from atom collisions (32); (12), an increase of up to a factor of 5 in the lifetime is expected when there is only one atom per site (44). Typical values of the couplings can be estimated from the overlap integrals of neighboring Wannier functions. Considering Rb atoms and typical lattice parameters, we obtain a characteristic tunneling time a few dozen times shorter than the expected lifetimes (44), which should allow for experimental explorations of our results, since relaxation typically occurs on a scale of less than ten tunneling times (36). In addition, as pointed out in (23), it is possible to increase the lifetimes even further with the use of external driving. A major experimental challenge is to achieve unit filling of the p band. This could be achieved by having an excess number of atoms in the p band and then adiabatically opening up the trap until unit filling is reached. A minority of sites will still be populated, however, by immobile s-orbital atoms. Since the interaction energy between s- and p-orbital atoms is higher than that between two p-orbital atoms, processes involving s-orbital atoms will be suppressed. The presence of atoms in the s band therefore corresponds to introducing static disorder in the system (23). This may affect correlations (53), but the qualitative physics will remain unchanged for concentrations close to unit filling. A more detailed study of this interesting effect is beyond the scope of the present work. As a final remark we note that the spin correlations discussed here will emerge at temperatures below the superexchange energy scale (10). In addition, we estimate the required entropy (38) by equating the critical temperature to the gap between the ground and first excited states in the anti-ferromagnetic phase. Using the energy spectrum obtained from exact diagonalization then yields an estimate of the required entropy per particle.
Experimentally, entropies of this order have in fact already been achieved (39), which indicates that our results are within experimental reach.

Conclusions.– We showed that the Mott regime of unit filling of bosonic atoms in the first excited bands of a 2D optical lattice realizes the spin-1/2 XYZ quantum Heisenberg model. We then illustrated the rich physics of this model by examining the phase diagram of the 1D case. Finite size effects relevant to trapped systems were discussed in detail. We proposed a method to control the strength and relative size of the spin couplings, thereby demonstrating how one can realize a whole class of models. We finally discussed experimental issues related to the realization of this model. We end by noting that recent experiments reported a 99% loading fidelity of bosons into the p band (51), which indeed opens possibilities to probe rich physics beyond spin-1/2 chains.

Acknowledgments.– We thank Alexander Altland, Alessandro De Martino, Henrik Johannesson, Stephen Powell, Eran Sela, and Tomasz Sowiński for helpful discussions. We acknowledge financial support from the Swedish research council (VR). GMB acknowledges financial support from NORDITA.

Appendix A: Derivation of the effective spin model

We are interested in the strong coupling regime where the system is deep in the Mott insulator phase with unit filling of the lattice sites. A natural way of analyzing this limit involves the use of projection operators that divide the Hilbert space of the associated eigenvalue problem into orthogonal subspaces according to site occupations. We define the operators P and Q that project, respectively, onto the subspace of states with unit occupation and onto the perpendicular subspace. They decompose the eigenvalue equation H|ψ⟩ = E|ψ⟩, with E its associated energy, into coupled equations for P|ψ⟩ and Q|ψ⟩, where H splits into a tunneling part H_t and an interaction part H_U. Since the interaction vanishes on singly occupied sites and the tunneling necessarily connects the two subspaces, Q|ψ⟩ can be solved for in terms of P|ψ⟩. By further substitution of Eq. (7) into the eigenvalue equation, we are left with the Hamiltonian

\[H_{\text{eff}} = P H_t Q \,(E - QHQ)^{-1}\, Q H_t P,\]

which describes the one-particle Mott phase of p-orbital bosons. So far this result is exact. It explicitly shows the role of the tunneling in the system, namely that of coupling the subspace of states where the sites have unit occupation to the states that have one site doubly occupied. First, a particle tunnels, say, from site i to site i+1, where it interacts with another particle as described by H_U. After the interaction, one of the particles is brought back to site i, and the final state is again characterized by lattice sites with unit filling. Equation (8) is the starting point in the derivation of the effective Hamiltonian describing the Mott phase of p-orbital bosons. The procedure is developed here for an effective 1D system with dynamics along the x axis, but the generalization to the 2D lattice is straightforward. Realization of the 1D configuration relies on the adjustment of the lattice parameters, with potential wells much deeper in the y than in the x direction, but in such a way that the quasi-degeneracy between the different orbital states is still maintained. This means that tunneling along y is negligible and, due to the strong-coupling condition, the remaining tunneling amplitudes are small compared with the interaction energies. Under these assumptions, the operator in Eq. (8) can be expanded to second order in the ratio of tunneling to interaction, in analogy to the customary procedure used for the Hubbard model at half filling (41). In the tight-binding regime considered here, it is enough to consider the 2-site problem.
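The projector construction of Eq. (8) can be checked numerically on a toy model. The sketch below does not use the p-band matrices themselves; it builds a random two-block Hamiltonian with the same structure (a low-energy P block coupled by a weak off-diagonal "tunneling" V to a Q block gapped by a large scale U) and compares the exact low-lying spectrum with the second-order effective Hamiltonian −PHQ (QHQ)^{−1} QHP:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy block Hamiltonian: P block (dim p, energy ~0) coupled to Q block (dim q,
# gapped by U) through a weak coupling V playing the role of the tunneling.
p, q, U, eps = 4, 6, 10.0, 0.5
Hq = U * np.eye(q) + rng.normal(size=(q, q))
Hq = (Hq + Hq.T) / 2                         # symmetrize the Q block
V = eps * rng.normal(size=(p, q))            # the PHQ block
H = np.block([[np.zeros((p, p)), V], [V.T, Hq]])

# Second-order effective Hamiltonian, setting E ~ 0 on the Mott scale:
Heff = -V @ np.linalg.inv(Hq) @ V.T

exact = np.linalg.eigvalsh(H)[:p]            # exact low-lying spectrum
approx = np.sort(np.linalg.eigvalsh(Heff))
print("exact low spectrum :", np.round(exact, 4))
print("effective spectrum :", np.round(approx, 4))  # agree up to higher orders
```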
The basis spanning the subspace of states with unit filling is the set of states |α, β⟩ with an α-orbital atom at site i and a β-orbital atom at site i+1 (α, β = x, y). The relevant states for the doubly occupied sites are those with two atoms in p_x, two atoms in p_y, or one atom in each orbital; these span the intermediate states of the projection operation. We notice, however, that due to the possibility of transferring population between the different orbital states, the projection of the Hamiltonian onto the subspace of doubly occupied states is not diagonal in this basis of intermediate states. This is a peculiarity of the present model and derives entirely from the orbital-changing collisions. As a consequence, we compute the second-order energy denominators by calculating the projected Hamiltonian in the doubly occupied subspace and taking its corresponding inverse. Since the interaction energies greatly exceed the eigenvalues of interest, it is justified to ignore E in the inversion. We determine the final form of the effective Hamiltonian by computing the relevant matrix elements of (8). To this end, we consider in detail all the different cases where the resulting action of the operator of Eq. (8) on the states of the unit-filling subspace yields a non-vanishing contribution. From states of the type |α, α⟩ the effective Hamiltonian acquires diagonal terms. In the same way, states of the type |α, β⟩ with α ≠ β contribute both diagonal terms and exchange terms that swap the orbital states of the two sites. Finally, there are processes, inherited from the orbital-changing collisions, that change the orbital states of the atoms on both sites simultaneously. Collecting all of these contributions gives the effective Hamiltonian of Eq. (9). We now use the orbital states to define the Schwinger spin operators S^z_i = (n_{xi} - n_{yi})/2, S^+_i = a†_{xi} a_{yi}, and S^-_i = a†_{yi} a_{xi}, and together with the constraint of unit occupation of the lattice sites in the Mott phase, n_{xi} + n_{yi} = 1, we rewrite Eq. (9) as a spin model. Thus, within the strong coupling regime, the physics of the Mott insulator phase is equivalent to the spin-1/2 XYZ Heisenberg model in an external field. The expressions for the various couplings follow in terms of the lattice parameters, and comparison with Eq. (3) of the main text identifies J_x, J_y, J_z, and the external field.

Appendix B: Single site addressing of orbital states

Single site addressing for the present setup implies selective detection/manipulation of the two orbitals. Since the spin is encoded in external spatial degrees of freedom rather than in internal atomic electronic states, methods like those described in Refs. (42) would not work. To control the spatial state of the atoms at single sites we may instead apply methods borrowed from trapped-ion physics (43). Similar methods were already employed in the experiment (44) in order to load bosons from the s band to the p band. Müller et al. of Ref. (44) did not, however, consider single site addressing and, more importantly, they did not discuss control of the orbital degree of freedom.

Figure 3: (Color online) Schematic figure of the coupling between different onsite orbital states. The carrier transition acts upon the internal atomic electronic states, while the red and blue sideband transitions in addition lower and raise the external vibrational state by a single phonon, respectively, i.e. they couple different orbital degrees of freedom.

Two internal atomic electronic states, e.g. two ground-state hyperfine levels of Rb, are Raman coupled with two lasers. This transition is described by a matrix element set by the laser amplitudes and wave vectors and by the detuning of the transitions relative to the ancilla electronic state.
The spatial dependence of the lasers will induce couplings between vibrational states of the atom, i.e. different bands. The time duration of a π-pulse, for example, can be made very short by making the effective Rabi frequency large. In particular, this time can be made short compared to any other time scale in the system, and one can approximately consider the system dynamics frozen during the applied pulse. Indeed, the same assumption applies to any single site addressing in optical lattices. Furthermore, by driving resonant two-photon transitions we do not need to worry about accidental degeneracies with other undesired states. Deep in the Mott insulator phase, as considered in this work, we can approximate single sites by two-dimensional harmonic oscillators. The Lamb-Dicke parameters (43); (45) are then small, and within the Lamb-Dicke regime we can neglect multi-phonon transitions. Thus, in one dimension we have three possible transitions: (i) carrier transitions, with no change in the vibrational state, (ii) red sideband transitions, which lower the vibrational state by one quantum, and (iii) blue sideband transitions, which raise the vibrational state by one quantum. The various possibilities are illustrated in Fig. 3. Since the different transitions are not degenerate, it is possible to select single transitions by carefully choosing the frequencies of the lasers. Moreover, by choosing a laser geometry with, for example, no wave-vector component in the y direction, it is possible to address only the p_x orbital. Thus, we have a method to singly address the different orbitals. Full control is achieved when every unitary spin rotation, specified by a rotation axis and an effective rotation angle, can be realized. To start with the simplest example, the implementation of a z rotation, we first note that, since we are considering the case with a single atom per site, it is enough to realize a phase shift of one of the orbitals. This is most easily done by driving the carrier transition off-resonantly for one of the two orbitals. Since the driving is largely detuned, it only results in a Stark shift of that orbital. The x rotation is preferably achieved by simultaneously driving, off-resonantly, the red sidebands of the two orbitals. The s band will never get populated due to the large detuning, while the transition between the two orbitals can be made resonant. More precisely, for the three involved states (with the last entry in the ket-vector being the s orbital), the resulting coupling Hamiltonian in the rotating wave approximation has the form of a Λ-coupled system (46), where the two Rabi frequencies have been taken real and, for now, spatially independent. For large detuning we adiabatically eliminate the s-orbital state to obtain the desired Hamiltonian generating the x rotation, namely an effective two-level coupling between the two p orbitals. Note that if the Raman transition between the two orbitals is not resonant, such an action performs a combination of an x and a z rotation. To perform y rotations, one could either adjust the phases of the lasers or simply note that a y rotation can be composed from x and z rotations. With this at hand, any manipulation of single site spins can be performed. To measure the spin state in a given direction one should combine the rotations with single-site-resolved fluorescence, i.e. measuring S^z (47). More precisely, since the drive laser can couple to the two orbitals individually, one orbital will be transparent to the laser while the other one will show fluorescence. In other words, one measures S^z on a single site. Other directions of the spin are measured in the same way, but after the appropriate rotation has been applied.
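The adiabatic elimination behind the x rotation can be illustrated with a three-level toy model. In the sketch below the Rabi frequencies and detuning are illustrative numbers (chosen so that Ω/Δ = 0.1); the population transfer between the two p orbitals predicted by the full Λ Hamiltonian is compared with the effective two-level Hamiltonian −Ω_aΩ_b/Δ obtained by eliminating the s level:

```python
import numpy as np
from scipy.linalg import expm

# Lambda system with basis ordering [|p_x>, |p_y>, |s>]; both red sidebands
# mutually resonant and detuned by Delta from the s level (illustrative numbers).
O1, O2, Delta = 0.2, 0.2, 2.0
H3 = np.array([[0,  0,  O1],
               [0,  0,  O2],
               [O1, O2, Delta]], dtype=complex)

# Adiabatic elimination of |s> for Delta >> Omega: 2x2 effective Hamiltonian
# with Stark shifts on the diagonal and a Raman coupling off the diagonal.
Heff = -np.array([[O1 * O1, O1 * O2],
                  [O2 * O1, O2 * O2]], dtype=complex) / Delta

psi3 = np.array([1, 0, 0], dtype=complex)   # start in |p_x>
psi2 = np.array([1, 0], dtype=complex)
for t in (0.0, 20.0, 40.0, 79.0):
    p_full = np.abs(expm(-1j * H3 * t) @ psi3)[1] ** 2    # |p_y> population, full model
    p_eff = np.abs(expm(-1j * Heff * t) @ psi2)[1] ** 2   # same, effective model
    print(f"t = {t:5.1f}   full: {p_full:.3f}   effective: {p_eff:.3f}")
# The two agree up to corrections of order (Omega/Delta)^2 ~ 1%.
```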
Furthermore, with the help of coincident detection it is possible to also extract spin-spin correlators (48). Since there is a single atom at every site, the "parity problem" (42) of these techniques, deriving from photon-induced atom-atom collisions, is avoided, and thereby loss of atoms will not limit our measurement procedure. This summarizes how preparation, manipulation, and detection of single site spins can be performed. Finally we note that the methods discussed above can be used in a broader context. For example, there is a collisional transition between two p-orbital atoms (one p_x and one p_y atom) and one s- plus one d-orbital atom (49). This transition is resonant for any lattice parameters and could in principle cause rapid decay of the p-band state, or even Rabi-type oscillations between the bands. We note, however, that in the experiment of Ref. (44) the collisional decay mechanism was surprisingly small despite this resonant transition. Nevertheless, one could suppress this resonant transition to increase the lifetime even further with the technique described above: by driving the red sideband of the two p-orbital states dispersively, the coupled bands are light-shifted, and thereby the resonance condition for the decay channel is broken.

Appendix C: External parameter control

The ideas of the previous sections can also be utilized to change the system parameters. The simplest example is the application of a Stark shift, which implements a shift of the external field. Apart from the external field, it is also desirable to control the coupling of the z component of the spin, J_z, and especially to tune it from ferromagnetic to anti-ferromagnetic. This is most easily estimated in the harmonic approximation, introducing the widths of the orbital wave functions for the spatial directions; in this limit
Monday, May 25, 2015

Separability and quantum mechanics

Tuesday, Apr 21st
Fernando Barbero, CSIC, Madrid
Title: Separability and quantum mechanics
PDF of the talk (758k) Audio [.wav 20MB]

by Juan Margalef-Bentabol, UC3M-CSIC, Madrid

Classical vs Quantum: Two views of the world

In classical mechanics it is relatively straightforward to get information from a system. For instance, if we have a bunch of particles moving around, we can ask ourselves: where is its center of mass? What is the average speed of the particles? What is the distance between two of them? In order to ask and answer such questions in a precise mathematical way, we need to know all the positions and velocities of the system at every moment; in the usual jargon, we need to know the dynamics over the state space (also called configuration space for positions and velocities, or phase space when we consider positions and momenta). For example, the appropriate way to ask for the center of mass is given by the function that, for a specific state of the system, gives the weighted mean of the positions of all the particles. Also, the total momentum of the system is given by the function consisting of the sum of the momenta of the individual particles. Such functions are called observables of the theory; an observable is therefore a function that takes all the positions and momenta and returns a real number. Among all the observables there are some that can be considered fundamental. A familiar example is provided by the generalized positions and momenta, denoted q and p. In a quantum setting, answering, and even asking, such questions is however much trickier. It can be properly justified that the needed classical ingredients have to be significantly changed:

1. The state space is now much more complicated; instead of positions and velocities/momenta we need a (usually infinite dimensional) complex vector space H with an inner product that is complete. Such a vector space is called a Hilbert space, and the vectors of H are called states (up to multiplication by a complex number).

2. The observables are functions from H to itself that "behave well" with respect to the inner product (these are called self-adjoint operators). Notice in particular that the outputs of the quantum observables are complex vectors and not numbers anymore!

3. In a physical experiment we do obtain real numbers, so somehow we need to retrieve them from the observable A associated with the experiment. The way to do this is by looking at the spectrum of A, which consists of a set of real numbers called eigenvalues, associated with some vectors called eigenvectors (actually, the number that we obtain is a probability amplitude whose absolute value squared is the probability of obtaining as an output a specific eigenvector).

The questions that arise naturally are: how do we choose the Hilbert space? How do we introduce fundamental observables analogous to the ones of classical mechanics? In order to answer these questions we need to take a small detour and talk a little bit about the algebra of observables.

Algebra of Observables

Given two classical observables, we can construct another one by means of different methods. Some important ones are:

• By adding them (they are real functions)
• By multiplying them
• By a more sophisticated procedure called the Poisson bracket

The last one turns out to be fundamental in classical mechanics and plays an important role within the Hamiltonian form of the dynamics of the system.
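For readers who like to experiment, the Poisson bracket is easy to play with symbolically. A minimal SymPy sketch (two particles on a line; the symbol names are chosen here purely for illustration) verifies the fundamental brackets and shows that the center of mass and the total momentum form a conjugate pair:

```python
import sympy as sp

q1, p1, q2, p2 = sp.symbols('q1 p1 q2 p2')
m1, m2 = sp.symbols('m1 m2', positive=True)
coords = [(q1, p1), (q2, p2)]

def poisson(f, g):
    """Canonical Poisson bracket {f, g} for two particles on a line."""
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in coords)

# Fundamental brackets: {q_i, p_j} = delta_ij
print(poisson(q1, p1), poisson(q1, p2))   # 1 0

# Observables: center of mass and total momentum
X = (m1 * q1 + m2 * q2) / (m1 + m2)
P = p1 + p2
print(sp.simplify(poisson(X, P)))         # 1, i.e. a conjugate pair
```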
A basic fact is that the set of observables endowed with the Poisson bracket forms a Lie algebra (a vector space with a rule to obtain an element out of two other ones, satisfying some natural properties). The fundamental observables behave really well with respect to the Poisson bracket, namely they satisfy the simple commutation relations {q_i, p_j} = δ_ij, i.e. if we consider the i-th position observable and "Poisson-multiply" it by the j-th momentum observable, we obtain the constant function 1 if i = j, or the constant function 0 if i ≠ j. One of the best approaches to constructing a quantum theory associated with a classical one is to reproduce at the quantum level some features of its classical formulation. One way to do this is to define a Lie algebra for the quantum observables such that the commutators of some of these observables mimic the Poisson brackets of the classical fundamental observables. This procedure (modulo some technicalities) is known as finding a representation of the algebra. In order to do this, one has to choose:

1. A Hilbert space H.

2. Some fundamental observables that reproduce the canonical commutation relations when we consider the commutator of operators.

In standard quantum mechanics the fundamental observables are positions and momenta. It may seem that there is a great ambiguity in this procedure; however, there is a central theorem due to Stone and von Neumann which states that, under some reasonable hypotheses, all the representations are essentially the same. One of the hypotheses of the Stone-von Neumann theorem is that the Hilbert space must be separable. This means that it is possible to find a countable set of orthonormal vectors in H (called a Hilbert basis) such that any state -vector- of H can be written as an appropriate countable sum of them. A separable Hilbert space, despite being infinite dimensional, is not "too big", in the sense that there are Hilbert spaces with uncountable bases that are genuinely larger. The separability assumption seems natural for standard quantum mechanics, but in the case of quantum field theory -with infinitely many degrees of freedom- one might expect to need much larger Hilbert spaces, i.e. non-separable ones. Somewhat surprisingly, most quantum field theories can be handled with our beloved and "simple" separable Hilbert spaces, with the remarkable exception of LQG (and its derivative LQC), where non-separability plays a significant role. Hence it seems interesting to understand what happens when one considers non-separable Hilbert spaces [3] in the realm of the quantum world. A natural and obvious way to acquire the necessary intuition is by first considering quantum mechanics on a non-separable Hilbert space.

The Polymeric Harmonic Oscillator

The authors of [2,3] discuss two inequivalent (among the infinitely many) representations of the algebra of fundamental observables which share an unfamiliar feature, namely, in one of them (called the position representation) the position observable is well defined but the momentum observable does not even exist; in the momentum representation the roles of positions and momenta are exchanged. Notice that in this setting some familiar features of quantum mechanics are lost for good. For instance, the position-momentum Heisenberg uncertainty formula makes no sense at all, as both position and momentum observables need to be defined.
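A quick numerical aside makes the flavor of these statements tangible: no finite-dimensional matrices can satisfy the canonical commutation relation exactly (the trace of a commutator vanishes, while the trace of iħ times the identity does not), which is one way to see why quantum mechanics forces infinite-dimensional Hilbert spaces on us. The sketch below builds position and momentum in a truncated harmonic-oscillator basis (units ħ = m = ω = 1) and exhibits the unavoidable truncation artifact:

```python
import numpy as np

N = 30                                        # truncated oscillator basis size
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
x = (a + a.T) / np.sqrt(2)
p = 1j * (a.T - a) / np.sqrt(2)

C = x @ p - p @ x                             # ideally i * identity
print(np.allclose(C[:-1, :-1], 1j * np.eye(N - 1)))  # True away from the cutoff
print(C[-1, -1])                              # (1 - N) * i: the truncation artifact
```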
To improve the understanding of such systems and gain some insight for the application to LQG and LQC, the authors of [1] (re)study the one-dimensional polymeric harmonic oscillator (PHO) in a non-separable Hilbert space (known in this context as a polymeric Hilbert space). As the space is non-separable, any Hilbert basis must be uncountable. This leads to some unexpected behaviors that can be used to obtain exotic representations of the algebra of fundamental observables. The motivation to study the PHO is much the same as always: the HO, in addition to being an excellent toy model, is a good approximation to any 1-dimensional mechanical system close to its equilibrium points. Furthermore, free quantum field theories can be thought of as ensembles of infinitely many independent HOs. There are however many ways to generalize the HO to a non-separable Hilbert space, and also many equivalent ways to realize a concrete representation, built on different choices of underlying Hilbert space. The eigenvalue equations in these different spaces take different forms: in some of them they are difference equations, whereas in others they have the form of the standard Schrödinger equation with a periodic potential. It is important to notice nonetheless that writing Hamiltonian observables in this framework turns out to be really difficult, as only one of the position or momentum observables can be strictly represented. This means that for the other one it is necessary to rely on some kind of approximation (which can be obtained by introducing an arbitrary scale) and choosing a periodic potential with minima matching those of the quadratic potential. The huge uncertainty in this procedure has been highlighted by Corichi, Zapata, Vukašinac and collaborators. The standard choice leads to an equation known as the Mathieu equation, but other simple choices have been explored, such as the one shown in the figure.

Energy eigenvalues (bands) of a polymerized harmonic oscillator. The horizontal axis shows the position (or the momentum, depending on the chosen representation), the vertical axis is the energy, and the red line represents the particular periodic extension of the potential used to approximate the usual quadratic potential of the HO. The other lines plotted in this graph correspond to auxiliary functions that can be used to locate the edges of the bands that define the point spectrum in the present example.

As we have already mentioned, the orthonormal bases in non-separable Hilbert spaces are uncountable. A consequence of this is the fact that the orthonormal basis provided by the eigenstates of the Hamiltonian must be uncountable, i.e. the Hamiltonian must have an uncountable infinity of eigenvalues (counted with multiplicity). A somewhat unexpected result, which can be proved by invoking classical theorems of functional analysis on non-separable Hilbert spaces, is the fact that these eigenvalues are gathered in bands. It is important to point out here that only the lowest-lying part of the spectrum is expected to mimic reasonably well the spectrum of the standard HO; however, it is important to keep in mind the huge difference that persists: even the narrowest bands contain a continuum of eigenvalues.

Some physical consequences

The fact that the spectrum of the polymerized harmonic oscillator displays this band structure is relevant for some applications of polymerized quantum mechanics. Two main issues were mentioned in the talk.
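The band structure described above can be reproduced with a few lines of standard Bloch-theory numerics. The sketch below is the generic plane-wave treatment of the Mathieu-type operator −d²/dx² + 2q cos(2x), with an illustrative value of q, not the specific polymer construction of [1]; it computes the lowest band edges and shows that the low-lying bands are narrow while higher ones broaden:

```python
import numpy as np

def band_edges(q, nbands=4, nmax=12, nk=101):
    """Bloch bands of -d^2/dx^2 + 2q cos(2x) via plane-wave diagonalization.
    The potential has period pi, so the reciprocal lattice vector is 2 and the
    quasimomentum runs over (-1, 1]. Returns (min, max) of each band."""
    ks = np.linspace(-1.0, 1.0, nk)
    bands = np.empty((nk, nbands))
    for i, kq in enumerate(ks):
        n = np.arange(-nmax, nmax + 1)
        H = np.diag((2 * n + kq) ** 2).astype(float)
        H += q * (np.eye(len(n), k=1) + np.eye(len(n), k=-1))  # cos(2x) coupling
        bands[i] = np.linalg.eigvalsh(H)[:nbands]
    return [(b.min(), b.max()) for b in bands.T]

for lo, hi in band_edges(q=5.0):
    print(f"band: [{lo:8.3f}, {hi:8.3f}]   width = {hi - lo:.3g}")
# Low-lying bands are narrow (nearly harmonic levels); higher bands broaden.
```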
Some physical consequences

The fact that the spectrum of the polymerized harmonic oscillator displays this band structure is relevant for some applications of polymerized quantum mechanics. Two main issues were mentioned in the talk.

On one hand, the statistical mechanics of polymerized systems must be handled with due care. Owing to the features of the spectrum, the counting of energy eigenstates necessary to compute the entropy in the microcanonical ensemble is ill-defined. A similar problem crops up when computing the partition function of the canonical ensemble. These problems can probably be circumvented by using an appropriate regularization and also by relying on some superselection rules that eliminate all but a countable subset of energy eigenstates of the system.

A setting where something similar can be done is the polymer quantization of the scalar field (already considered by Husain, Pawłowski and collaborators). As this system can be thought of as an infinite ensemble of harmonic oscillators, the specific features of their (polymer) quantization will play a significant role. A way to avoid some difficulties here also relies on the elimination of unwanted energy eigenvalues by imposing superselection rules, as long as they can be physically justified.

[1] J. F. Barbero G., J. Prieto and E. J. S. Villaseñor, Band structure in the polymer quantization of the harmonic oscillator, Class. Quantum Grav. 30 (2013) 165011.
[2] W. Chojnacki, Spectral analysis of Schrödinger operators in non-separable Hilbert spaces, Rend. Circ. Mat. Palermo (2), Suppl. 17 (1987) 135-151.
[3] H. Halvorson, Complementarity of representations in quantum mechanics, Stud. Hist. Phil. Mod. Phys. 35 (2004) 45-56.
Lewis Baker

Supervised by Dr Scott Habershon and Dr Vasilios Stavros.

I graduated from the Physics BSc, MPhys degree at Warwick University in 2013, with first class honours. My final year project was within the area of Quantum Noise and Entanglement in Mesoscopic Conductors, supervised by Dr Nicholas d'Ambrumenil. In particular, we were interested in the feasibility of an on-demand source of entangled particles (electron-hole pairs) produced when (theoretically) applying a series of tailored Lorentzian voltage pulses to a mesoscopic conductor (like a quantum dot or quantum point contact). I then completed the MOAC MSc in Mathematical biology and biophysical chemistry with distinction before continuing onto the PhD portion of the 4-year DTC program.

Quantum mechanics is without doubt one of the most successful theories to date, with its ability to predict properties of systems with fantastic precision, and it has led to numerous technological advances which would not have been possible without quantum theory. For example, MRI machines exploit the spin of protons in water to image the body noninvasively; semiconductors, used in most electronics, are understood from condensed matter physics in terms of band gaps; and the periodic table can be explained by the quantum numbers, which are derived from considerations of the Schrödinger equation. How about superfluidity, superconductors and Bose-Einstein condensates? These properties are all explained through quantum mechanics, specifically quantum coherence. Needless to say, the list of applications of quantum mechanics is effectively non-exhaustive!

The field of quantum dynamics offers to extend our knowledge of a variety of problems, ranging from quantum computation, which tries to exploit the superposition of quantum states to provide vastly increased computational abilities, to understanding the light-harvesting processes which take place in chloroplasts, which could help us to improve solar energy harvesting technology.

So where are my interests? Well, aside from generally being interested in most things quantum, I am interested in the manifestations of quantum mechanics in biological systems and energy transfer in biologically relevant molecules. More specifically, the underpinning theme of my PhD involves understanding energy transfer in small molecules like sunscreens (energy dissipation), as well as larger systems like FMO (directed energy transfer).

L dot Baker at warwick dot ac dot uk
Simulating the universe

Mar 5, 2007 #1
It is conjectured that one day it will be possible to produce a simulation of the universe with sentient beings. Obviously many problems will arise. One that occurs to me is that such beings would need to be able to 'see'. So how would you simulate light and sight? Any thoughts?

Mar 5, 2007 #3
Sorry about the question mark. OK, so how you simulate 'sight' depends on how you simulate a 'thing' that becomes the 'image'. Given a world with many 'things' and many 'viewers', how does a 'viewer' know that a 'thing' is present? Does the 'viewer' see the 'thing' by an external (to the viewer) subroutine that calculates, for all things and all viewers, their relative positions and what is viewable? Or does the viewer see via objects (in the OOP sense) internal to the viewer? If so, how is information about the 'thing' related to the viewer? Would there be light particles (objects in the OOP sense) that 'fly' around the universe from light sources?

Mar 5, 2007 #4
There's that thing (actually a lot of them) called a "ray tracer". Is that what you are talking about? I'm sure many of them are in C++ and have some kind of OOP architecture, so... look it up.

Mar 5, 2007 #5
There are plenty of sentient beings on earth that can't 'see' (including blind humans). They seem to get along just fine. What's the problem?

Mar 5, 2007 #6
I am talking about a completely simulated universe. Try the question the other way round. Suppose our reality is just a computer simulation. How do you think they programmed light?

Mar 5, 2007 #8
Maxwell's equations describe the interrelationship between electric fields, magnetic fields, electric charge, and electric current. OK, so you can program the equations, but what are you applying them to in the simulation? How do you program fundamental particles, or what are their methods and properties going to be?

Mar 5, 2007 #9
Now you're changing the question again. Well. People are working on it.

Mar 5, 2007 #10
Not really changing the question, just trying to get people to understand what I was actually asking in the first question.

Mar 5, 2007 #11
Universe simulations would either be direct simulations of a local quantum system by qubits, or a classical simulation of qubits which simulates a quantum system. Such a simulation would simply evolve the Schrödinger equation in a simple logic-gate network, as with a cellular automaton. It is thought by many that the processes which generate light, gravity, and all the other physical dynamics of our world would naturally emerge at higher scales from the operation of this quantum cellular automaton [because that is how physics in our universe emerges]. In other words, you can do an accurate simulation and still not understand exactly how the physics of the simulated universe works, but a God's-eye view would help you learn more.

Mar 5, 2007 #12
setAI - thank you for answering the question clearly. My underlying thoughts were whether, in creating such a simulation, much insight could be gained into the creation of our universe. Now that you have told me about qubits, I have been able to read about them on the net and gained insight.
Mar 5, 2007 #13
If you're just trying to reproduce our current perception of the universe, you might not need to simulate as far as individual particles or quantum phenomena, certainly not all the time. The uncertainty principle could mean that the modeling system would be able to approximate particle states from the information fed to the "sentient beings" only when necessary, while keeping things coherent and avoiding unnecessary computation. If possible we want to avoid having the CPU waste cycles on things that aren't that relevant, such as what a tiny particle in front of me is doing, or what is happening on the dark side of the moon right now, until these things become necessary, and when they do, just have the system guess a possible state.

Mar 6, 2007 #14
Job - Would that imply that at the macro level of this simulated universe some events would only occur when observations take place? For example, would the positions of stars only be calculated and updated when observations were taking place? Or, because of the necessity of two observers agreeing on an observable event, such as the position of the stars and the predicted positions of these stars in their future, is it more efficient to have them continually calculated rather than obtaining the same positions through calculations when they are observed?

Mar 9, 2007 #15
As I read somewhere, tiny particles can change and shift through simple observation by a person. Therefore every molecule is impossible to specifically calculate because of its constant shifting, especially on last observation. As I see it... if you are thinking video-game-wise like I am in this sentient-being viewing scenario, then there would be a projected geometric shape, such as a cone of sight, widening a person's sight draw range to infinity (or the ability of such a being, with programmed distance limits). Everything outside of this cone of sight would be darkness and completely invisible to the being. It still exists but has no effect on the being (except, yatta yatta, heat rays). I'm guessing this would be a rudimentary computer simulation at first, with AI. Anything further is useless to contemplate because we will never see its implementation at the current rate of technological increase in our sludgy world. (We're slow, folks.)

Mar 10, 2007 #16
Ultimately, all one would have to do to simulate any possible human brain state [and thus any perceivable reality/history/self of any human-level observer who could ever exist in any possible world] is simply a universal simulator of the human brain: an electrochemical simulation of the brain's neural infrastructure that is able to process roughly 10^16 operations per second on 10^15 bits. Given the implications of quantum mechanics [especially the Superposition Principle], the amount of raw information processed by an observer is the limit of coherent information about any physical system that is 'real' to an observer anyway [and the complexity of the observer's world - the amount of information that is 'real' - is limited by the Bekenstein bound of the observer's local environment]. So obviously huge simulations of the entire 10^90 bits of the observable universe are not required.
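The "cone of sight" suggested in post #15 is essentially what graphics engines call cone (or frustum) culling, and it fits in a few lines. A toy Python sketch (all names and numbers are hypothetical, only to make the geometry concrete):

```python
import numpy as np

def visible(viewer, view_dir, half_angle, max_range, thing):
    """Is 'thing' inside the viewer's cone of sight? (toy cone culling)"""
    view_dir = np.asarray(view_dir, float)
    d = np.asarray(thing, float) - np.asarray(viewer, float)
    dist = np.linalg.norm(d)
    if dist == 0:
        return True
    if dist > max_range:
        return False
    cos_to_thing = d @ view_dir / (dist * np.linalg.norm(view_dir))
    return cos_to_thing >= np.cos(half_angle)

print(visible([0, 0, 0], [1, 0, 0], np.pi / 6, 100.0, [10, 2, 0]))   # True
print(visible([0, 0, 0], [1, 0, 0], np.pi / 6, 100.0, [10, 10, 0]))  # False
```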
The transition amplitude for a particle currently at one spacetime point to show up at another point doesn't respect causality, which becomes one of the main reasons to abandon non-relativistic quantum mechanics. We impose the relativistic Hamiltonian $H=\sqrt{c^2p^2+m^2c^4}$ to get the Klein–Gordon equation, or more correctly we "add" special relativity after second-quantizing to fields, which shows how antiparticles crop up and help in preserving causality in this case. Apart from that, the equation is not even Lorentz covariant, which proves it to be non-relativistic.

But why does this occur? I mean, the Schrödinger equation is consistent with the de Broglie hypothesis, and the latter is so consistent with relativity that some books even offer a "derivation" of it by equating $E=h\nu$ and $E=mc^2$, probably resulting from a misinterpretation of de Broglie's Ph.D. thesis. (A derivation isn't exactly possible, though.) So the Schrödinger equation should include relativity in it, right? But it doesn't... How does relativity vanish from the Schrödinger equation, or did the de Broglie hypothesis never "include" relativity in any way?

My suspicion: the "derivation" is not possible, so the common $\lambda=h/mv$, with $m$ the rest mass, doesn't include relativity in any way. End of story. Is this the reason, or is there something else?

• $\begingroup$ Related: Does relativistic quantum mechanics really violate causality? $\endgroup$ – Qmechanic Jun 21, 2020 at 18:32
• 4 $\begingroup$ This question reads very confused to me. What you are asking is not why the Schroedinger equation is non-relativistic, what you are asking is whether the de Broglie relation can be derived from the relativistic dispersion relation. I suspect you watched a video such as this one: youtube.com/watch?v=xbD_yWgHMVA (and others). The "derivation" in the video is nonsense. $\endgroup$ – Zorawar Jun 22, 2020 at 21:40
• $\begingroup$ possible duplicate: physics.stackexchange.com/q/257787/84967 $\endgroup$ Jun 23, 2020 at 11:17
• 1 $\begingroup$ As I recall, Schrodinger tried making his equation compatible with relativity from the start. He kept getting these positively charged electrons and resulting negative energy states and other "nonsense", so he published a non-relativistic treatment of his equation as it applies to the spectra of hydrogen. $\endgroup$ – R. Romero Oct 25, 2021 at 21:52

7 Answers

In non-relativistic Quantum Mechanics (NRQM), the dynamics of a particle is described by the time-evolution of its associated wave-function $\psi(t, \vec{x})$ with respect to the non-relativistic Schrödinger equation (SE)
$$
i \hbar \frac{\partial}{\partial t} \psi(t, \vec{x})=H \psi(t, \vec{x})
$$
with the Hamiltonian given by $H=\frac{\hat{p}^{2}}{2 m}+V(\hat{x})$. In order to achieve a Lorentz invariant framework (the SE is only Galilei, NOT Lorentz, invariant), a naive approach would start by replacing this non-relativistic form of the Hamiltonian by a relativistic expression such as
$$
H=\sqrt{c^{2} \hat{p}^{2}+m^{2} c^{4}}
$$
or, even better, by modifying the SE altogether so as to make it symmetric in $\frac{\partial}{\partial t}$ and the spatial derivative $\vec{\nabla}$. However, the central insight underlying the formulation of Quantum Field Theory is that this is not sufficient. Rather, combining the principles of Lorentz invariance and Quantum Theory requires abandoning the single-particle approach of Quantum Mechanics.
• In any relativistic Quantum Theory, particle number need not be conserved, since the relativistic dispersion relation $E^{2}=c^{2} \vec{p}^{2}+m^{2} c^{4}$ implies that energy can be converted into particles and vice versa. This requires a multi-particle framework.

• This point is often a little bit hidden in books or lectures. Unitarity and causality cannot be combined in a single-particle approach: in Quantum Mechanics, the probability amplitude for a particle to propagate from position $\vec{x}$ to $\vec{y}$ is
$$
G(\vec{x}, \vec{y})=\left\langle\vec{y}\left|e^{-\frac{i}{\hbar} H t}\right| \vec{x}\right\rangle
$$
One can show that e.g. for the free non-relativistic Hamiltonian $H=\frac{\hat{p}^{2}}{2 m}$ this is non-zero even if $x^{\mu}=\left(x^{0}, \vec{x}\right)$ and $y^{\mu}=\left(y^{0}, \vec{y}\right)$ are at a spacelike distance. The problem persists if we replace $H$ by a relativistic expression in the SE.

Quantum Field Theory (QFT) solves both these problems by a radical change of perspective.

Remark 1: There are still some cases (however, there are a lot of subtleties) where one can use RQM in the single-particle approach. Then the SE is replaced by, e.g., the Klein-Gordon equation
$$
(\Box+m^2)\;\psi(x)=0
$$
where $\psi(x)$ is still a wave-function.

Remark 2: The Schrödinger equation holds for SR. It's not the SE that fails, it's the non-relativistic Hamiltonian that fails. The Dirac equation is the SE, but with the Dirac Hamiltonian. The Schrödinger equation is valid:
$$
i \hbar \frac{\partial \psi(x, t)}{\partial t}=\left(\beta m c^{2}+c \sum_{n=1}^{3} \alpha_{n} p_{n}\right) \psi(x, t)=H_\text{Dirac}\;\psi(x, t)
$$

• $\begingroup$ Most of what you said was known to me. (I studied QFT already without quite understanding its necessity.) Unfortunately this doesn't answer my question; please notice the quotes in why and where my doubt is -- de Broglie is consistent with relativity, but the SE is not -- why? I am not looking for why the SE is not consistent in general (the first few pages of Peskin & Schroeder prove that)... Maybe I am not being clear enough in my question. I edited twice already. $\endgroup$ Jun 21, 2020 at 17:39
• 6 $\begingroup$ @ManasDogra You're using a lot of quotes and words like why and magic. No, it's not very clear. Please look at my last remark. The SE is perfectly fine. It's the Hamiltonian that fails. My answer was also referring to your title "Why is Schroedinger's equation non-relativistic?", which is a quite general title and should help people searching for an answer to exactly that question. $\endgroup$ Jun 21, 2020 at 18:00
• 1 $\begingroup$ Basically what I am asking is why de Broglie includes relativity but the SE doesn't -- how can this be possible? $\endgroup$ Jun 21, 2020 at 18:08
• 2 $\begingroup$ I think you have misunderstood the relationship between the Dirac equation and the Schrodinger equation in the full theory. $\endgroup$ Jun 21, 2020 at 22:52
• 4 $\begingroup$ Minor remarks: 1. The SE is not invariant under the Galilean group either; it is only invariant under a central extension of the Galilean group. 2. The KG equation doesn't show that one can use RQM with a single particle; it shows rather the exact opposite. $\endgroup$ – ACat Jun 22, 2020 at 0:55
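The spacelike non-vanishing claimed in the second bullet above can be made concrete: for the free non-relativistic Hamiltonian the propagator is $G(x,t)=\sqrt{m/(2\pi i\hbar t)}\,e^{imx^2/2\hbar t}$, whose modulus does not depend on $x$ at all. A quick numerical illustration (a Python sketch in natural units; the numbers are arbitrary):

```python
import numpy as np

hbar = m = c = 1.0                    # natural units; values purely illustrative
t = 1.0
for x in [0.5, 1.0, 10.0, 100.0]:     # anything with x > c*t is spacelike
    G = np.sqrt(m / (2j * np.pi * hbar * t)) * np.exp(1j * m * x**2 / (2 * hbar * t))
    region = "spacelike" if x > c * t else "timelike"
    print(f"{region:9s}  x = {x:6.1f}   |G| = {abs(G):.6f}")   # same value everywhere
```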
To do relativistic quantum mechanics you have to abandon single-particle quantum mechanics and take up quantum field theory. The Schrödinger equation is an essential ingredient in quantum field theory. It asserts
$$
\hat{H} {\psi} = i \hbar \frac{d}{dt} {\psi}
$$
as you might guess, but there is a lot of subtlety hiding in this equation when ${\psi}$ refers to a quantum field. If you try to write it using numbers, then $\psi$ would be a function of every state of a field $\phi$ which is itself configured over space and time. In $\psi$ you would then have a functional, not a function. In correct terminology, the Schrödinger equation here is covariant, but not manifestly covariant. That is, it would take the same form in some other inertial reference frame, but this is not made obvious in the way the equation has been written down.

But we have here a very different 'beast' to the Schrödinger equation you meet when you first do quantum mechanics. That would now be called single-particle quantum mechanics. *That* Schrödinger equation is certainly not covariant, and nor is the whole structure of the theory of single-particle quantum mechanics.

The reason for confusion here may be to do with the history of science. Particle physicists started working with the Klein-Gordon (KG) equation under the illusion that it was some sort of relativistic replacement for the Schrödinger equation, and then the Dirac equation was thought of that way too. This way of thinking can help one do some basic calculations for the hydrogen atom, for example, but ultimately you have to give it up. For clear thinking you have to learn how to quantise fields, and then you learn that for spin zero, for example, both the Klein-Gordon and the Schrödinger equation have roles to play. Different roles. Neither replaces the other. One asserts what kind of field one is dealing with; the other asserts the dynamics of the field amplitude.$^1$ I have never seen this clearly and squarely written down in the introductory section of a textbook, however. Has anyone else? I would be interested to know.

Postscript on de Broglie waves

de Broglie proposed his relation between wave and particle properties with special relativity very much in mind, so his relation is relativistic (the background is that $(E, {\bf p})$ forms a 4-vector and so does $(\omega, {\bf k})$). Schrödinger and others, in their work to get to grips with the de Broglie wave idea in more general contexts, realised that an equation which was first order in time was needed. As I understand it, the Schrödinger equation came from a deliberate strategy to look at the low-velocity limit. So from this point of view it does seem a remarkable coincidence that that same equation then shows up again in a fully relativistic theory. But perhaps we should not be so surprised. After all, Newton's second law, ${\bf f} = d{\bf p}/dt$, remains exactly correct in relativistic classical dynamics.

$^1$ For example, for the free KG field, the KG equation gives the dispersion relation for plane wave solutions. The Schrödinger equation then tells you the dynamics of the field amplitude for each such plane wave solution, which behaves like a quantum harmonic oscillator.

• $\begingroup$ "the Klein-Gordan equation under the illusion that it was some sort of relativistic replacement for the Schrödinger equation" The KG equation is the relativistic form of the Schrödinger equation. $\endgroup$ – my2cts Jun 21, 2020 at 22:54
• 5 $\begingroup$ @my2cts No, it really isn't. See for example the footnote I have added.
$\endgroup$ Jun 21, 2020 at 22:57
• 2 $\begingroup$ @my2cts If you treat the SE as a classical field equation then you can see it as the non-relativistic limit of the KG equation, which is a relativistic (classical) field equation. However, as the equation governing the dynamics of a quantum system, it simply doesn't make sense to see the SE as the limit of the KG equation, because the KG equation doesn't describe the dynamics of any quantum system (it only describes the on-shell condition). The dynamics of both relativistic and non-relativistic systems are described by the SE; of course, the dof involved are different in the relativistic case. $\endgroup$ – ACat Jun 22, 2020 at 1:16
• 3 $\begingroup$ Just some nit-picking: it's Klein-Gordon (after Walter Gordon), not Klein-Gordan. The name "Gordan" is appropriate in "Clebsch-Gordan coefficients" (after Paul Gordan). $\endgroup$ – akhmeteli Jun 22, 2020 at 3:08
• 1 $\begingroup$ @akhmeteli Thanks! I corrected it. $\endgroup$ Jun 22, 2020 at 3:12

An attempt to share the historical development of the discovery of non-relativistic wave mechanics by E. Schrödinger, as related to the following query by the OP:

"So, the Schrödinger equation should include relativity in it, right? But it doesn't... How does relativity vanish from the Schrödinger equation or did it ever not "include" relativity in any way?"

The course lectures given by Hermann Weyl at ETH Zurich in 1917 were the starting point of this wave-equation journey. Their central idea was what later came to be known as the gauge transformation. Schrödinger had studied the compiled notes very devotedly in 1921 and often used the central idea in his subsequent work. He applied Weyl's measure theory (metric spaces) to the orbits of the electrons in the Bohr-Sommerfeld atomic models. He considered the path of an electron in a single complete orbit and enforced the Weyl condition of the geodetic path, thereby implying the existence of the quantized orbits. He later realized that this work already contained de Broglie's idea of the Bohr orbit in terms of electron waves.

In the year 1922, Erwin Schrödinger was suffering the torments of respiratory illness and had moved to the Alpine resort of Arosa to recuperate. He had vague ideas about the implications of his formulation concerning the properties of electron orbits. It is quite possible that, had he been in better health, the wave properties of the electron could have been clear to him even before de Broglie, from his own work.

Einstein had in fact cited de Broglie's work in making a connection between quantum statistics and the wave properties of matter, and this was known to Schrödinger, who read most of his papers. Schrödinger later said that "wave mechanics was born in statistics", referring to his work in the statistical mechanics of ideal gases. He said that his approach was nothing more than taking seriously the de Broglie-Einstein wave theory of a moving particle, according to which the particle nature is just like an appendage to the basic wave nature.

In order to think about what kind of waves would satisfy closed orbits and the relevant equations, he was already thinking in relativistic terms (energy-momentum relations), and it was thus natural that his attempt to formulate the wave equation would rest upon the foundation of relativistic equations.
His first derivation of the wave equation for particles, made before his celebrated Quantisierung als Eigenwertproblem (Quantization as an Eigenvalue Problem) of 1926, was left unpublished and was based entirely upon the relativistic theory as given by de Broglie. The crucial test of any theory at that time was the hydrogen atom. Any new theory was required to reproduce at least some features of Bohr's work on the H-atom energy levels and the quantum numbers. Further, a relativistic theory had to be capable of explaining the fine structure given by the Sommerfeld equation. His relativistic theory did not agree with those experiments because it lacked a key ingredient: the electron spin.

The original manuscript of his relativistic wave mechanics formulation is at best lost, and only a notebook of calculations is available in the archives. However, his non-relativistic formulation did indeed go to print and has become standard textbook material for undergraduate quantum mechanics courses.

1. A Life of Erwin Schrödinger (Canto original series) by Walter J. Moore.
2. The Historical Development of Quantum Theory by Jagdish Mehra, Erwin Schrödinger, Helmut Rechenberg.

• $\begingroup$ Great info here. Thanks a lot. But still not answering the 'why' thing. $\endgroup$ Jun 22, 2020 at 11:15
• 1 $\begingroup$ Yeah, that's why I said this information is from the historical perspective. There will be some good answers discussing the physics part as well. $\endgroup$ Jun 22, 2020 at 11:49
• $\begingroup$ I just wanted to let you know that the answer to your question "So, the Schrödinger equation should include relativity in it, right? But it doesn't... How does relativity vanish from the Schrödinger equation or did it ever not "include" relativity in any way?" is actually something that he tried to do at first but could not make matching predictions with the experiments at that time. $\endgroup$ Jun 22, 2020 at 11:59
• $\begingroup$ The sense in which I said that is because of a seemingly fallacious derivation of de Broglie's relation. You are from India, right? Then you probably know how much we are taught in class 11 that de Broglie's relation follows from E=mc^2. The facts were known to me but not the details or the reference, and it's also beneficial for future users. Thank you. $\endgroup$ Jun 22, 2020 at 12:04
• $\begingroup$ I know, Manas. However, not everything can be made clear in a semester's coursework. There is so much to process! $\endgroup$ Jun 22, 2020 at 12:11

First of all, the terminology is messy. The original Schrödinger equation is nonrelativistic; however, people often call "Schrödinger equation" whatever they want, no matter what Hamiltonian they use, so, "in their book", the Schrödinger equation can be relativistic.

So Schrödinger clearly built on de Broglie's relativistic ideas; why did he write a nonrelativistic equation? Actually, he started with a relativistic equation (which we now call the Klein-Gordon equation); however, it did not describe the hydrogen spectra correctly (because it did not take spin into account), so Schrödinger did not dare to publish it. Later Schrödinger noted that the nonrelativistic version (which we now know as the (original) Schrödinger equation) described the hydrogen spectra correctly (up to relativistic corrections :-) ), so he published his nonrelativistic equation. If you are interested, I'll try to look for the references to the above historical facts.
EDIT (6/21/2020): Actually, I have found the reference: Dirac, Recollections of an Exciting Era // History of Twentieth Century Physics: Proceedings of the International School of Physics "Enrico Fermi", Course LVII. New York; London: Academic Press, 1977, pp. 109-146. Dirac recollects his conversation with Schrödinger that took place in (approximately) 1940.

• $\begingroup$ Yes, I came to know about the historical fact from Weinberg's QFT book, volume 1. Thanks for clearing up the fact about terminologies. $\endgroup$ Jun 22, 2020 at 11:13
• $\begingroup$ "so, 'in their book', Schrödinger equation can be relativistic" I am surprised that such misnomers occur. Can you give any names (references) of the sinners? $\endgroup$ – my2cts Jun 22, 2020 at 11:25
• 1 $\begingroup$ @my2cts journals.aps.org/pr/abstract/10.1103/PhysRev.143.978 Actually one can find a lot of such examples, so I would say one cannot call such authors "sinners", it's too late for that :-) $\endgroup$ – akhmeteli Jun 23, 2020 at 0:54

The Schrödinger equation is non-relativistic by construction. It follows from the nonrelativistic classical energy expression by applying de Broglie's idea to replace $(E,\vec p)$ by $-i\hbar (\partial_t, \vec \nabla)$.

• $\begingroup$ I upvoted because you are at least on the right track to understanding my question. Is $\lambda=h/mv$ non-relativistic? But what of the "derivation" which does $E=mc^2$ and $E=h\nu$ and $\nu=c/\lambda$, so that for matter waves the velocity is $v$ and therefore $\lambda=h/mv$? This uses $E=mc^2$, and yet the relativistic nature of $\lambda=h/mv$ vanishes.... WHY does THIS occur? Is it because this "derivation" is incorrect and doesn't make any sense? I say this because many local high schools of our country give this "derivation", and many people believe in it. (I don't know whether to believe it or not; I am confused.) $\endgroup$ Jun 21, 2020 at 19:23
• $\begingroup$ @ManasDogra please give a reference to an example of such a derivation. $\endgroup$ – Ruslan Jun 21, 2020 at 20:32
• $\begingroup$ chem.libretexts.org/Bookshelves/… $\endgroup$ Jun 21, 2020 at 20:40
• $\begingroup$ In fact, search Google for "derivation of de Broglie equation" and you will get many sites giving this derivation. It doesn't make much sense to me, but literally everyone in our country is taught this at high school, and this makes everyone think that de Broglie's relations are relativistic -- this is supported by a lot of sites on the internet and possibly false interpretations of de Broglie's original paper. I am partially of the opinion that it can't be "derived"; such derivations are also absent from good books like those by Eisberg, Resnick, Beiser, etc. $\endgroup$ Jun 21, 2020 at 20:41
• 1 $\begingroup$ Had de Broglie really followed this line of reasoning then he would not have deserved his Nobel prize. A better web article is this: en.wikipedia.org/wiki , complete with a link to the English translation of de Broglie's thesis. $\endgroup$ – my2cts Jun 21, 2020 at 23:23

The de Broglie relationships relate energy $E$ and momentum $p$ with frequency $\nu$ and wavelength $\lambda$:
$$
E = h \nu, \ \ p = \frac{h}{\lambda}
$$
There is nothing relativistic about this. In fact this is not even really a full theory. To get a full dynamical theory, you need to relate $E$ and $p$ in some way.
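As a hedged aside on the point just made: de Broglie's relations do become relativistic once they are read as the 4-vector statement $p^\mu=\hbar k^\mu$ together with the standard dispersion relation $E^2=p^2c^2+m^2c^4$, which gives
$$
\lambda = \frac{h}{p} = \frac{h}{\gamma m v},
\qquad
v_{\text{phase}} = \frac{E}{p} = \frac{c^2}{v},
\qquad
v_{\text{group}} = \frac{dE}{dp} = \frac{pc^2}{E} = v .
$$
The popular shortcut $E=h\nu=mc^2$ with $\nu=c/\lambda$ silently assumes the matter wave has phase velocity $c$, which the middle relation shows happens only for $m=0$; that is why it cannot count as a derivation.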
The Schrödinger equation (for one particle) is built on the non-relativistic relationship
$$
E = \frac{p^2}{2m}
$$
When I say "built on", I mean that if you compute the energy and momentum for an energy eigenstate of a free particle obeying the Schrödinger equation, the energy and momentum will obey the above relationship.

If you want a relativistic theory, you would want to find a wave equation that reproduced the relativistic relationship
$$
E^2 = m^2 c^4 + p^2 c^2
$$
The Klein-Gordon equation is an example of such a wave equation (and indeed Schrödinger tried it first). But there are problems interpreting the solutions of the Klein-Gordon equation as a wavefunction. We now understand (as others have pointed out on this page) that the problem is that it's not consistent to have an interacting relativistic quantum theory with a fixed number of particles. This leads to the development of quantum field theory.

In the above I restricted myself to the case of writing the Schrödinger equation for one particle. As has also been pointed out on this page, one way to generalize quantum mechanics to quantum field theory is to promote the wavefunction to a wavefunctional (a map from field configurations to complex numbers). The wavefunctional obeys the Schrödinger equation (except now the Hamiltonian is an operator on a much larger Hilbert space). This Schrödinger equation is relativistic, if the quantum field theory it describes is relativistic. However, this is at a much higher level of sophistication than what I believe was being asked.

• $\begingroup$ I believe this is the answer OP was looking for. $\endgroup$ – Andrea Oct 26, 2021 at 9:12

As is known, Schrödinger's equation diffuses an initially localized particle as far as we want in a time as small as we want (but with small probability). In formulas, the problem is parabolic:
$$
\partial^2_x u(x,t)=K\partial_t u(x,t)
$$
However, one could use relativistic boundary conditions $u(\pm ct,t)=0$. When solving by spectral methods we diagonalize the lhs, obtaining a time-dependent basis of eigenfunctions. Plugging back into the problem, the coefficients of the development in the basis are obtained by integration. I was told this was unsolvable, but the solution I could write is an infinite system of coupled ODEs for the coefficients of the expansion. The eigenfunctions are
$$
v_n(x,t)=\cos\left((2n+1)\frac{\pi x}{2ct}\right)
$$
and the expansion is written as
$$
u(x,t)=\sum_{n=0}^\infty a_n(t)v_n(x,t)
$$
The solution is formally obtained by exponentiating an infinite matrix. In this case the particle cannot fly faster than light and violate (relativistic) causality.
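The "built on" claim in the answer above is easy to verify symbolically: a free plane wave solves the Schrödinger equation exactly when $E=p^2/2m$. A small SymPy check (a sketch; the symbols are generic):

```python
import sympy as sp

x, t, p, m, hbar = sp.symbols('x t p m hbar', positive=True)
E = p**2 / (2 * m)                           # non-relativistic dispersion relation
psi = sp.exp(sp.I * (p * x - E * t) / hbar)  # free plane wave
lhs = sp.I * hbar * sp.diff(psi, t)          # i*hbar d(psi)/dt
rhs = -hbar**2 / (2 * m) * sp.diff(psi, x, 2)
print(sp.simplify(lhs - rhs))                # 0: the equation is satisfied
```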
Politecnico di Torino

Single-Electron Phenomena in Electrochemistry of Biosensors

Tommaso Serra

Single-Electron Phenomena in Electrochemistry of Biosensors. Rel. Alberto Tagliaferro, Sandro Carrara. Politecnico di Torino, Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict), 2020

PDF (Tesi_di_laurea) - Tesi. Licenza: Creative Commons Attribution Non-commercial No Derivatives.

In this work, we modelled Coulomb blockade and Coulomb staircase phenomena starting from first principles and taking advantage of our knowledge of Schrödinger-like problems.

Coulomb blockade and staircase phenomena are physical effects related to the quantization of levels in a nanoparticle. In the past, experimental results concerning these phenomena have been modelled with either semiclassical or circuit-like models, which derive electrical parameters such as capacitances and resistances in order to reproduce the measured current trends.

Our approach blossoms in the framework of quantum mechanics. As a first step, we studied a simplified model, often used to describe a similar quantum structure, in which discretized energy levels exist in the region enclosed between two one-dimensional potential energy barriers. In this model, when electrons are forced to drift across the system under the effect of a bias voltage, they are able to tunnel across the first energy barrier to move to one of the available energy levels, eventually leaving the system by tunneling across the other barrier.

When dealing with coherent electron transport, collisions are not phase-breaking, and the quantization of the available conducting channels emerges either as a staircase-like or as an oscillating resonant current-voltage characteristic, depending on the nanoparticle size.

The literature provides analytic expressions for quantum tunneling across one, two or multiple barriers when no external bias is applied to the system. However, as our goal was to model the effect of bias voltage, we were forced to revert to a numerical approach. To this purpose, we implemented the solution of the out-of-equilibrium Schrödinger equation using Finite Element Method (FEM) and Non-Equilibrium Green's Functions (NEGF) approaches.

The FEM solution, although computationally less expensive than the NEGF algorithm, proves to be viable only at equilibrium and small-bias conditions, while the NEGF solution provides reliable results over a wider bias voltage range. Hence, we focused on the NEGF approach. The obtained results have been analyzed in order to identify the role that each geometrical parameter plays in determining the shape of the current-voltage curve.

Once the first step was completed, as our target was to apply the model to the simulation and design of nanoparticle-enhanced electrochemical biosensors, we modified the geometrical structure in order to emulate a nanoparticle with an attached molecule. Now, quantum tunneling can only occur when an energy level within the nanoparticle sits at an energy close enough to that of an unoccupied energy level of the molecule, or vice versa. Current-voltage characteristics obtained for these kinds of systems have been found to be very similar, both in shape and in order of magnitude, to those available in the literature.
As the problem to tackle is rather challenging, the present work, although leading to satisfactory results, represents a first critical step in the direction of a comprehensive tool for the design of nanoparticle-enhanced electrochemical biosensors. Improvements can be made in optimizing the solving algorithm, making it more efficient and less time-consuming, as well as in including other relevant physical phenomena, thus obtaining a more complete description of the electron transfer. An example could be the hopping phenomenon, which most probably plays a role as relevant as tunneling, but for incoherent electron transport.

Relators: Alberto Tagliaferro, Sandro Carrara
Academic year: 2020/21
Publication type: Electronic
Number of Pages: 83
Corso di laurea: Corso di laurea magistrale in Nanotechnologies For Icts (Nanotecnologie Per Le Ict)
Ente in cotutela: Ecole Polytechnique Fédérale de Lausanne (EPFL) (SVIZZERA)
Aziende collaboratrici: UNSPECIFIED
URI: http://webthesis.biblio.polito.it/id/eprint/16746
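As a rough illustration of the resonant-tunneling physics the abstract describes (not the thesis's actual FEM/NEGF solver), coherent transmission through a symmetric double barrier can be sketched with a textbook transfer-matrix calculation. Python, with hbar = m = 1; every geometric parameter below is an arbitrary illustrative choice:

```python
import numpy as np

def k_of(E, V):
    """Wavevector in a region of constant potential (imaginary under a barrier)."""
    return np.sqrt(complex(2.0 * (E - V)))

def step(k1, k2, x):
    """2x2 map of (A, B) plane-wave amplitudes across a potential step at x,
    obtained from continuity of psi and psi'."""
    e = np.exp
    return 0.5 * np.array([
        [(1 + k1/k2) * e(1j*(k1 - k2)*x), (1 - k1/k2) * e(-1j*(k1 + k2)*x)],
        [(1 - k1/k2) * e(1j*(k1 + k2)*x), (1 + k1/k2) * e(-1j*(k1 - k2)*x)],
    ])

def transmission(E, V0=5.0, barrier=0.6, well=2.0):
    xs = np.cumsum([0.0, barrier, well, barrier])   # the four interface positions
    Vs = [0.0, V0, 0.0, V0, 0.0]                    # potential in each region
    M = np.eye(2, dtype=complex)
    for i, x in enumerate(xs):
        M = step(k_of(E, Vs[i]), k_of(E, Vs[i + 1]), x) @ M
    return abs(1.0 / M[1, 1]) ** 2    # det(M) = 1 here (same k on both sides)

Es = np.linspace(0.05, 4.95, 400)
Ts = np.array([transmission(E) for E in Es])
i = int(np.argmax(Ts))
print(f"resonance near E = {Es[i]:.2f}, T = {Ts[i]:.3f}")  # sharp peak with T ~ 1
```

Sweeping the energy reveals sharp peaks where the incident energy lines up with a quasi-bound level of the well, a one-dimensional caricature of the level alignment between nanoparticle and molecule described above.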
III. About Quantum Mechanics & Biology

We'll start with the assumption that the laws of physics are sufficiently correct as written. Sufficiently correct means that for purposes of reconciling to our 1st person experience we don't need to discover any new physics, such as a 5th force or a new secret particle. The laws of physics may be refined in the future, sure, but for our purposes here we'll assume we've got conceptually everything we need and see where we can go with it.

This is particularly important regarding quantum mechanics. Quantum mechanics is one of the most successful, most validated, and most accurate theories ever. But it is an especially weird subject matter, and nothing we can say here can make it intuitive or sensible. The hard truth is, when we look up close at the Universe, things look very different from the reality we are accustomed to.

One strange concept, called quantum measurement, connects the observer with the observed in an intimate way. For instance, the angle from which we look at an electron affects the direction it is spinning. Without touching, or otherwise disturbing the electron, the mere direction from which we view it affects the state it will be in! Now, the observer does not have to be a person; it can be any macroscopic device, such as a computer, a camera, a photomultiplier tube, something called a Stern-Gerlach apparatus, and much more.

To this day, there is no scientific consensus about how to interpret the strange nature of quantum mechanics. However, very recently results have finally shed light on this problem. The paper entitled "Understanding quantum measurement from the solution of dynamical models" by A. Allahverdyan et al. (2013) has shown how quantum measurement can be understood. The paper describes some very complex mathematics, yet a conceptually simple framework, and involves quantum dynamical models, statistical mechanics, and "asymptotic cascades to pointer states". It also describes how quantum measurements occur without any abrupt or time-irreversible collapse of the wave function. When the coupling between the measuring apparatus and the system is dominant, the system entangles with the measuring apparatus and then cascades to a pointer state registering the measurement; but when the coupling to the environment dominates, decoherence occurs (information about the system leaks into the environment). We won't dwell on the lengthy mathematical details, but we encourage the interested reader to dive in, as this paper amounts to a sensible interpretation of quantum mechanics.

For our purposes, it will be a sufficient approximation to follow the Copenhagen interpretation and view observation as causing the wave function of the observed to collapse (see Born rule). The wave function will collapse to states dictated by the measurement device (called basis states). This strange point connecting the observer and the observed will turn out to be essential to our attempts to reconcile the 1st and 3rd person perspectives, so below we give a, hopefully, illustrative example.

Suppose we place an electron in a toy box through a door in the top as shown in (figure 4). The electron is spin "up" (spinning in a right-handed way about an axis as depicted by the arrow – the arrow points along the axis of rotation like an axis between the north and south poles of the Earth).
Technically, electron spin is a little bit more complex than this description, but for our purposes this will suffice to illustrate the concepts involved (the interested reader can dive deeper into electron spin here). Next, we close the door and open another door on the front side of the box. Fifty percent of the time the electron will be found pointing toward us (out of the page), and the other fifty percent away from us. There is no way to determine definitively which direction (toward or away) the electron will be spinning; all we can calculate are the probabilities. Whichever side of the box we open, that will determine the axis about which the electron is spinning. If we open a side door, the electron will be found spinning about that axis – toward or away with equal probability. However, if we place the electron in the box through the top, close the door, then re-open the top door, the electron will still be in the same state we left it with near one-hundred-percent probability.

Figure 4: A spinning electron is inserted through the top of a box with spin pointing up (counter-clockwise rotation about the axis). The door is closed. A door on the front is then opened. The electron can only be found pointing toward the observer or away. In this case, it will do so with 50/50% probability. The side of the box that we open determines the axis of the electron – a strange aspect of quantum mechanics that intimately connects the observer with the observed.

Our toy box is a metaphor for a spinning electron in a magnetic field. In the laboratory, electrons may only have spin +\frac{1}{2}\hbar or -\frac{1}{2}\hbar (where \hbar is Planck's constant divided by 2\pi ). Nothing in between is allowed by Nature. Practically speaking, the electron's spin is measured by placing it in an external magnetic field. The electron's spin causes it to produce a magnetic field of its own. The electron will always either align its own magnetic field with the external field, or it will be opposite to it. If opposite, the electron will emit a photon of light and flip to align, since this is a lower energy state (just like bar magnets will flip so that their magnetic fields align). The short video "Visualization of Quantum Physics" (2017) provides a graphic introduction to quantum mechanics, although it is talking about a particle moving freely in space, rather than the spin state of a particle.

Another strange aspect of quantum mechanics is entanglement. We can entangle electrons so that they become one system of electrons. In such an entangled system, we can know all there is to know about the system without knowing anything about the individual state of a particle within the system (see a more detailed description here). The system of electrons has literally become one thing. Moreover, entanglement is the only phenomenon like this in all of physics; that is, it is the only means by which particles may become one system of particles larger than themselves. Furthermore, there is no theoretical limit on how many particles may be entangled together or how complex the system may be. Now, there are many ways to entangle electrons, but one way is to bring them close together so that they "measure" each other (so their magnetic coupling to one another is stronger than the coupling to the surrounding environment). When one electron measures the other, it will either find it spinning the same direction or opposite – just like any measurement must.
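The 50/50 door statistics described above are just the Born rule, and can be checked in a few lines. A NumPy sketch (standard spin-1/2 conventions; otherwise purely illustrative):

```python
import numpy as np

up_z = np.array([1.0, 0.0])                    # prepared through the top: spin-up along z
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])   # "front door": measure along x
_, x_states = np.linalg.eigh(sigma_x)          # columns are the two x-axis outcomes
print(np.abs(x_states.conj().T @ up_z) ** 2)   # [0.5 0.5]: toward/away, equal odds
# Reopening the top door instead (measuring along z again) is certain:
print(np.abs(np.eye(2) @ up_z) ** 2)           # [1. 0.]: still the prepared state
```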
The electron is not big enough to force a single outcome like we see when we open the toy box, so the two electrons will sit in a quantum superposition of states: they will both be spinning in the same direction and spinning opposite at the same time (we refer the reader back to this paper for a detailed mathematical description of what "big enough" is). They will remain entangled in a superposition until stronger external interactions, like those with the environment or macroscopic observers, cause decoherence or specifically measure the system. Generally, in the laboratory, this happens very fast, in femtoseconds (one millionth of one billionth of a second), and this is the reason most scientists are skeptical of quantum mechanics playing anything but a trivial role in biological systems. The paper "The Importance of Quantum Decoherence in Brain Processes" by Max Tegmark (1999) shows, for example, that whole neurons are way too big to exist in a superposition of states for biologically relevant timescales, due to decoherence.

Still, there is abundant evidence that quantum superpositions and entanglement exist in biological systems on smaller scales. A comprehensive list can be found in the nascent field of quantum biology, for which the reader can find an excellent introduction in the book "Life on the Edge: The Coming Age of Quantum Biology" by J. Al-Khalili and J. McFadden (2014). The best-studied example of quantum effects in biology occurs in the process of photosynthesis in the FMO complex of green sulfur bacteria, where quantum states are observed to persist for as long as picoseconds (one trillionth of a second), allowing these organisms to convert light from the sun into chemical energy in a remarkably fast and efficient process.

Figure 5: Quantum biology: photosynthesis. Diagram of the FMO complex. Light excites electrons in an antenna. The quantum exciton then transfers through various proteins in the FMO complex to the reaction center to further photosynthesis. By OMM93 – Own work, CC BY-SA 4.0, via Wikipedia

Decoherence rates are highly dependent on the surrounding environment and the dynamics of the system. For example, if it is really, really cold (~ absolute zero), and isolated, then entanglement can theoretically last on the order of seconds or longer. The dynamics are critical too – if something keeps cyclically pushing particles together, entanglement can, again in theory, last much longer. If the cyclicality happens faster than decoherence, entanglement may, in theory, be sustained indefinitely. Interestingly, biomolecules in organisms are really buzzing, vibrating on timescales ranging from nanoseconds to femtoseconds. Comparing these times to the picosecond decoherence times observed in photosynthesis, a plausible means by which quantum effects may persist in biological systems becomes apparent. The interested reader can find a mathematical description of this type of dynamic entanglement in these papers:

"Persistent Dynamic Entanglement from Classical Motion: How Bio-Molecular Machines can Generate Nontrivial Quantum States" by G. G. Guerreschi, J. Cai, S. Popescu, and H. J. Briegel (2012)
"Dynamic entanglement in oscillating molecules and potential biological implications" by J. Cai, S. Popescu, and H. J. Briegel (2010)
"Generation and propagation of entanglement in driven coupled-qubit systems" by J. Li and G. S. Paraoanu (2010)
"Steady-state entanglement in open and noisy quantum systems" by L. Hartmann, W. Dür, and H. J. Briegel (2005)
These papers describe a theory by which entanglement can be sustained in noisy, warm environments in which no static entanglement can survive. In other words, the way physicists try to build quantum computers today, by completely shielding the qubits from the outside world, probably won't work in biological systems, but this isn't the only way to go about it. This is an important point and is critical for the reconciliation we propose in this essay. We should note that this theory of dynamic entanglement has not been experimentally verified, but no one, so far, has done the experiments to look for it. For the rest of this essay, we will assume that this theory pans out experimentally and that entanglement will be found to be sustainable in biological systems.

Now, suppose we entangle two electrons so that their spins are pointing parallel (see here for more about how this is accomplished in practice). We can place them in two separate boxes as shown in (figure 6). When we open any other door of the box, even if the boxes are at opposite ends of the galaxy, the electrons will be found to be spinning parallel to each other. The measurement of one instantaneously affects the state of the other no matter how far away it is. This is the property of quantum mechanics that Einstein labeled "spooky action at a distance". Nonetheless, experiment after experiment has supported this strange property (see "Physicists address loophole in tests of Bell's inequality using 600-year-old starlight").

Figure 6: Two entangled electrons placed into two boxes through the top, separated by a galaxy, then opened on the side (or any side, for that matter), will always be found spinning in the same direction – it could be either left or right, but A and B will always point the same way. Picture of Milky Way Galaxy here.

Once two electrons are entangled we can perform quantum operations on them to put the system into a superposition of four different states at once: (1) "A" up and "B" down, (2) "A" down and "B" up, (3) "A" up and "B" up, and (4) "A" down and "B" down. Entanglement is not limited to just electrons – photons (light), nuclei, molecules, and many forms of quasiparticles, including macroscopic collections of billions of particles, can be entangled (see, for example, SQUIDs, phonons or solitons). To the extent these configurations have binary states (electrons: spin up or spin down; photons: right-handed or left-handed polarization; etc.) they may represent quantum bits, or qubits, like in a quantum computer (although analog quantum computing is just as plausible as a binary system). In the case of electrons, spin-up could represent a value of, say, "0" and spin-down a value of, say, "1", to perform powerful quantum computations.

The power of such computations is derived from the quantum computer's ability to be in a superposition of many states at once, equal to 2^n, where n is the number of qubits. So the two-electron system can be in 2^2 = 4 states, but a system of eighty electrons could be in 2^{80} states at once – about 10^{24} of them. To imagine how this works, imagine a computer that produces a clone of itself with each tick of the CPU clock. For a 1-GHz clock speed (typical), every billionth of a second the computer doubles the number of states that it is in. So, after one nanosecond, we effectively have two computers working on the problem simultaneously. After two nanoseconds, four computers. After three nanoseconds, eight, etc.
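A toy sampler makes the two-box correlations concrete; it also shows the bookkeeping behind the 2^n counting, since two qubits already need 2^2 = 4 amplitudes. A Python sketch (illustrative only):

```python
import numpy as np
rng = np.random.default_rng(42)

# Amplitudes over the basis uu, ud, du, dd: n qubits need 2^n of these.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|uu> + |dd>)/sqrt(2)
probs = np.abs(psi) ** 2
labels = ['uu', 'ud', 'du', 'dd']
print([labels[i] for i in rng.choice(4, size=8, p=probs)])
# Each box alone reads out as a 50/50 coin flip, but 'ud' and 'du' never
# occur: the two arrows always point the same way, as in figure 6.
```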
Entangled superpositions of this sort are the secret behind the legendary computing power of quantum computers (even though they barely exist yet!). However, to take advantage of all these superpositions, we must find a clever way to make them interfere with each other. We can't look at the result until the very end of the computation, and when we do, the superposition will collapse. If we can get the states to interfere in the right way, we can get the system to be in a superposition that is concentrated, with near 100% probability, in a state that corresponds to a solution to our problem. This is the case with a special algorithm known as Shor's algorithm, which can solve certain NP problems in polynomial time. NP problems are (roughly speaking) those believed to require exponential time on a classical computer to solve (see more on P vs NP here, and here). Shor's algorithm uses something called the quantum Fourier transform to achieve this speed-up and is used to factor large integers. This is an important problem in cryptography, since factoring-based encryption protects substantially all the traffic over the internet. For example, factoring a 500-digit integer would take longer than the age of the Universe on a classical computer, but less than two seconds on a quantum computer – hat tip to John Preskill for the stats; see his great introductory video lecture on quantum computing here (2016). To perform such computations is thought to require about 1,000 qubits. Other examples of NP problems include the infamous traveling salesman problem. It is an open problem whether all NP problems can be solved in polynomial time on a quantum computer. The vast majority of physicists and computer scientists think this is unlikely, however.

Figure 7: Quantum subroutine in Shor's algorithm. By Bender2k14 – Own work, created in LaTeX using Q-circuit, CC BY-SA 4.0, via Wikipedia

It is time to note, though, that there is a strange, unproven, powerful possibility lurking in quantum mechanics. All the quantum computers that have been designed to date utilize something called the Linear Schrödinger Equation (LSE) – the ubiquitous form in quantum mechanics. That means that the qubits are contained by forces external to the qubit system itself, like an external magnetic field. A much rarer, and controversial, situation in quantum physics involves the Non-Linear Schrödinger Equation (NLSE), and this occurs only when the quantum state wave functions interact with themselves. NLSE systems may potentially have a profound impact on quantum computation because they could theoretically solve NP-complete problems in polynomial time (only if the time evolution of the Schrödinger equation turns out to be nonlinear), and this means they could solve all NP problems in polynomial time. The interested reader can dive in here: "Nonlinear Quantum Mechanics Implies Polynomial-Time Solution for NP-Complete and #P Problems" by D. Abrams, S. Lloyd (1998), and in chapter 5 here: "NP-Complete Problems and Physical Reality" by S. Aaronson (2005). So far, such a system has never been implemented, nor even a theoretical design proven, though the idea continues to be debated. Here is the latest on the subject: "Nonlinear Optical Quantum-Computing Scheme Makes a Comeback" by D. Brod and J. Combes (2016).
The NLSE does appear in the study of a number of special physical systems, for example, in Bose-Einstein condensates (BEC) (see this paper on building a quantum transistor using the NLSE in a BEC), in fiber optic systems, and also, especially interestingly, in biology: like the Davydov alpha-helix soliton, which forms a complex quasiparticle that transports energy up and down the chain of protein molecules, and Fröhlich condensates, which have recently been observed experimentally in biomolecules upon exposure to THz radiation (see Lundholm, et al. 2015). The interested reader can dive into Weinberg's original paper (1989) on the NLS equations here for more, and further enhancements of the theory here and here that address some difficulties. Also, see here for the development of the NLSE using Riccati equations.

To get some hands-on experience with basic quantum computers, IBM has a 5-qubit machine, albeit strictly implementing a linear Schrödinger equation, online right now that is freely available to everyone. Go to www.ibmexperience.com to learn more.

Figure 8: Examples of solutions to Non-Linear Schrödinger Equations. Absolute value of the complex envelope of exact analytical breather solutions of the Nonlinear Schrödinger (NLS) equation in nondimensional form. (A) The Akhmediev breather; (B) the Peregrine breather; (C) the Kuznetsov–Ma breather. From: Miguel Onorato, Davide Proment, Günther Clauss and Marco Klein (2013) "Rogue Waves: From Nonlinear Schrödinger Breather Solutions to Sea-Keeping Test". PLoS One 8(2): e54629, doi: 10.1371/journal.pone.0054629, PMC 3566097, via Wikipedia
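Since Figure 8 shows breather solutions of the NLSE, it may help to see how such dynamics are typically simulated. A minimal split-step Fourier sketch of the focusing NLSE i u_t + (1/2) u_xx + |u|^2 u = 0 in Python (grid and step sizes are arbitrary choices); its fundamental soliton u(x,0) = sech(x) should hold its shape:

```python
import numpy as np

N, L, dt, steps = 1024, 40.0, 1e-3, 2000
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L/N)          # spectral wavenumbers
u = 1 / np.cosh(x)                                 # fundamental soliton
half_disp = np.exp(-0.25j * k**2 * dt)             # half step of the linear part
for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))     # dispersion, half step
    u = u * np.exp(1j * np.abs(u)**2 * dt)         # nonlinear phase, full step
    u = np.fft.ifft(half_disp * np.fft.fft(u))     # dispersion, half step
print(np.max(np.abs(u)))   # stays ~1.0: the soliton propagates without spreading
```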
Chapter 8: Periodic Properties & Electron Configurations – Bushra Javed (presentation transcript)

1. Chapter 8: Periodic Properties & Electron Configurations – Bushra Javed

2. Contents • Electron spin • Electron configurations of elements • The development of the periodic table • Periodic trends

3. Electron Configurations • Quantum-mechanical theory describes the behavior of electrons in atoms • The electrons in atoms exist in orbitals • A description of the orbitals occupied by electrons is called an electron configuration • Example: 1s¹ – the number is the principal energy level of the orbital occupied by the electron, the letter is the sublevel, and the superscript is the number of electrons in the orbital

4. How Electrons Occupy Orbitals • Calculations with Schrödinger's equation show hydrogen's one electron occupies the lowest-energy orbital in the atom • Schrödinger's equation cannot be solved exactly for multi-electron atoms, due to the electron-electron interactions in such atoms

5. How Electrons Occupy Orbitals • The interactions that occur in multi-electron atoms are due to electron spin and the energy splitting of sublevels

6. The Property of Electron Spin • Spin is a fundamental property of all electrons • All electrons have the same amount of spin • The orientation of the electron spin is quantized: it can only be in one direction or its opposite, spin up or spin down • The electron's spin adds a fourth quantum number to the description of electrons in an atom, called the spin quantum number, ms (not in the Schrödinger equation)

7. The Property of Electron Spin • Experiments by Stern and Gerlach showed that a beam of silver atoms is split in two by a magnetic field • The experiment reveals that electrons spin on their axis; spinning charged particles generate a magnetic field

8. Electron Spin • If there is an even number of electrons, about half the atoms will have a net magnetic field pointing "north" and the other half will have a net magnetic field pointing "south"

9.–10. [Figures: the two possible spin orientations of an electron and the conventions for ms]

11. ms and Orbital Diagrams • ms can have values of +½ or −½ • Orbital diagrams use a square to represent each orbital and a half-arrow to represent each electron in the orbital • By convention, a half-arrow pointing up represents an electron in an orbital with spin up • Spins must cancel in an orbital: the electrons are paired

12. Orbital Diagrams • We often represent an orbital as a square or a circle and the electrons in that orbital as arrows (an empty square is an unoccupied orbital, one arrow an orbital with one electron, two opposed arrows an orbital with two electrons) • The direction of the arrow represents the spin of the electron

13. Pauli Exclusion Principle • No two electrons in an atom may have the same set of four quantum numbers • Therefore no orbital may have more than two electrons, and they must have opposite spins • Knowing the number of orbitals in a sublevel allows us to determine the maximum number of electrons in the sublevel
14. Pauli Exclusion Principle • The s sublevel has 1 orbital, therefore it can hold 2 electrons • The p sublevel has 3 orbitals, therefore it can hold 6 electrons • The d sublevel has 5 orbitals, therefore it can hold 10 electrons • The f sublevel has 7 orbitals, therefore it can hold 14 electrons

15. Sublevel Splitting in Multi-electron Atoms • The sublevels in each principal energy shell of hydrogen all have the same energy; we call orbitals with the same energy degenerate • For multi-electron atoms, the energies of the sublevels are split, caused by charge interaction, shielding, and penetration • The lower the value of the l quantum number, the less energy the sublevel has: s (l = 0) < p (l = 1) < d (l = 2) < f (l = 3)

16. Shielding & Effective Nuclear Charge • Each electron in a multi-electron atom experiences both the attraction to the nucleus and repulsion by the other electrons in the atom • These repulsions cause the electron to have a net reduced attraction to the nucleus – it is shielded from the nucleus • The total amount of attraction that an electron feels for the nucleus is called the effective nuclear charge on the electron

17. [Figure: shielding and penetration]

18. Penetration & Effective Nuclear Charge • The closer an electron is to the nucleus, the more attraction it experiences • The better an outer electron is at penetrating through the electron cloud of inner electrons, the more attraction it will have for the nucleus • Penetration causes the energies of sublevels in the same principal level not to be degenerate

19. Effect of Penetration and Shielding • In the fourth and fifth principal levels, the effects of penetration become so important that the s orbital lies lower in energy than the d orbitals of the previous principal level • The energy separations between one set of orbitals and the next become smaller beyond the 4s, so the ordering can vary among elements, causing variations in the electron configurations of the transition metals and their ions

20. Filling the Orbitals with Electrons • Energy levels and sublevels fill from lowest energy to highest: s → p → d → f (Aufbau principle) • Orbitals that are in the same sublevel have the same energy • No more than two electrons per orbital (Pauli exclusion principle) • When filling orbitals that have the same energy, place one electron in each before completing pairs (Hund's rule)

21. Filling the Orbitals with Electrons • The lowest-energy configuration of an atom is called its ground state; any other allowed configuration represents an excited state

22. Electron Configuration of Atoms • The electron configuration is a listing of the sublevels in order of filling, with the number of electrons in that sublevel written as a superscript • Kr = 36 electrons = 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶

23. 'Building up' Order of Filling in Sublevels (Ground-State Electron Configurations) • Start by drawing a diagram putting each energy shell on a row and listing the sublevels (s, p, d, f) for that shell in order of energy, left to right: 1s / 2s 2p / 3s 3p 3d / 4s 4p 4d 4f / 5s 5p 5d 5f / 6s 6p 6d / 7s • Next, draw arrows through the diagonals, looping back to the next diagonal each time

24. Electron Configuration & the Periodic Table • The group number corresponds to the number of valence electrons • The length of each "block" is the maximum number of electrons the sublevel can hold • The period number corresponds to the principal energy level of the valence electrons

25. [Figure: the periodic table (periods 1–7) partitioned into s¹–s², p¹–p⁶, d¹–d¹⁰, and f¹–f¹⁴ blocks]

26. [Figure: blocks of elements]
27. Blocks of Elements • For main-group (representative) elements, an s or a p subshell is being filled • For d-block transition elements, a d subshell is being filled • For f-block transition elements, an f subshell is being filled

28. Electron Configurations – Example 1 • Write the ground-state electron configuration of the chlorine atom, Cl • For chlorine, Z = 17: 1s²2s²2p⁶3s²3p⁵

29. Electron Configurations – Example 2 • Write the ground-state electron configuration of the manganese atom, Mn

30. Electron Configurations – Example 3 • What is the ground-state electron configuration of tantalum (Ta)?
a. 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶5s²4d¹⁰5p⁶6s²5d¹⁰4f⁷
b. 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶5s²4d¹⁰5p⁶6s²4f¹⁴5d³
c. 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶5s²4d¹⁰5p⁶6s²5d³
d. 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶5s²4d¹⁰5p⁶6s²4f¹⁴

31. Electron Configurations – Example 4 • Which of the following electron configurations corresponds to the ground-state electron configuration of an atom of a transition element?
a. 1s²2s²2p²
b. 1s²2s²2p⁶3s²3p⁵
c. 1s²2s²2p⁶3s²3p⁶4s²
d. 1s²2s²2p⁶3s²3p⁶3d⁵4s²

32. Exceptions to the 'Building up' Order of Filling • About 21 of the predicted configurations are inconsistent with the actual configurations observed • One possible reason: half-filled and fully filled subshells are highly stable • Cr, Cu, Ag, and U are some of the exceptions

33. Exceptions to the 'Building up' Order of Filling • Chromium (Z = 24) and copper (Z = 29) have been found by experiment to have the following ground-state electron configurations: Cr: 1s²2s²2p⁶3s²3p⁶3d⁵4s¹; Cu: 1s²2s²2p⁶3s²3p⁶3d¹⁰4s¹ • In each case, the difference is in the 3d and 4s subshells

34. Exceptions to the 'Building up' Order of Filling • Expected: Cr = [Ar]4s²3d⁴, Cu = [Ar]4s²3d⁹, Mo = [Kr]5s²4d⁴, Ru = [Kr]5s²4d⁶, Pd = [Kr]5s²4d⁸ • Found experimentally: Cr = [Ar]4s¹3d⁵, Cu = [Ar]4s¹3d¹⁰, Mo = [Kr]5s¹4d⁵, Ru = [Kr]5s¹4d⁷, Pd = [Kr]5s⁰4d¹⁰

35. Electron Configurations – Example 5 • Which of the following electron configurations represents an allowed excited state of the indicated atom?
a. He: 1s²
b. Ne: 1s²2s²2p⁶
c. Na: 1s²2s²2p⁶3s²3p²4s¹
d. P: 1s²2s²2p⁶3s²3p²4s¹

36. The Noble-Gas-Core Electron Configuration • The noble gases have eight valence electrons, except for He, which has only two • We know the noble gases are especially non-reactive; He and Ne are practically inert • The reason the noble gases are so non-reactive is that their electron configuration is especially stable

37. Noble-Gas-Core Electron Configuration • A shorthand way of writing an electron configuration is to use the symbol of the previous noble gas in [] to represent all the inner electrons, then just write the last set • Rb = 37 electrons = 1s²2s²2p⁶3s²3p⁶4s²3d¹⁰4p⁶5s¹ = [Kr]5s¹

38. Noble-Gas-Core Electron Configuration – Example 6 • Which ground-state electron configuration is incorrect?
a. Fe: [Ar]3d⁵
b. Ca: [Ar]4s²
c. Mg: [Ne]3s²
d. Zn: [Ar]3d¹⁰4s²

39. Writing Electron Configurations • The pseudo-noble-gas core includes the noble-gas subshells and the filled inner (n − 1)d subshell • For bromine, the pseudo-noble-gas core is [Ar]3d¹⁰

40. Hund's Rule • Hund's rule states that the lowest-energy arrangement of electrons in a subshell is obtained by putting electrons into separate orbitals of the subshell with the same spin before pairing electrons

41. [Figure: electron configurations]

42. Electron Configurations – Example 7 • Draw an orbital diagram for nitrogen [orbital boxes: 1s 2s 2p]
43. Electron Configurations – Example 8 • Which of the following electron configurations or orbital diagrams are allowed and which are not allowed by the Pauli exclusion principle? If they are not allowed, explain why • 1s²2s¹2p³ • 1s²2s¹2p⁸ • 1s²2s²2p⁶3s²3p⁶3d⁸ • 1s²2s²2p⁶3s²3p⁶3d¹¹

44. Electron Configurations – Example 9 • Write the complete ground-state orbital diagram and electron configuration of potassium

45. Valence Electrons • The electrons in the sublevels of the highest principal energy shell are called the valence electrons • Electrons in lower energy shells are called core electrons • Chemists have observed that one of the most important factors in the way an atom behaves, both chemically and physically, is the number of valence electrons

46. Valence-Shell Configuration • For main-group elements, the valence configuration has the form ns^A np^B • The sum of A and B is equal to the group number • So, for an element in Group VA of the third period, the valence configuration is 3s²3p³

47. Valence-Shell Configuration – Example 10 • What are the valence-shell configurations of arsenic and zinc? • Arsenic is in period 4, Group VA; its valence configuration is 4s²4p³ • Zinc, Z = 30, is a transition metal in the first transition series; its noble-gas core is Ar (Z = 18), and its valence configuration is 4s²3d¹⁰

48. Valence-Shell Configuration – Example 11 • What is the valence-shell configuration of Tc, Z = 43?
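The "building up" order on these slides is mechanical enough to code. Below is a small sketch (my addition, not part of the deck) that generates Madelung-rule ground-state configurations; as slides 32–34 note, roughly 21 real elements, such as Cr and Cu, deviate from this prediction.

```python
# Predict ground-state configurations with the Madelung ("building up") rule.
# Valid for Z <= 118; real atoms deviate for ~21 elements (Cr, Cu, Mo, Pd, ...).
SUBSHELLS = [(n, l) for n in range(1, 8) for l in range(n)]
# Madelung order: fill by increasing n + l, breaking ties by smaller n
ORDER = sorted(SUBSHELLS, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(z):
    """Return the predicted configuration string for atomic number z."""
    parts = []
    for n, l in ORDER:
        if z <= 0:
            break
        capacity = 2 * (2 * l + 1)      # 2, 6, 10, 14 electrons per subshell
        e = min(z, capacity)
        parts.append(f"{n}{'spdfg'[l]}{e}")
        z -= e
    return " ".join(parts)

print(configuration(17))  # Cl -> 1s2 2s2 2p6 3s2 3p5 (matches Example 1)
print(configuration(43))  # Tc, the deck's final exercise
```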
Computers and cells

[Image: a technician changing a tube on ENIAC. Caption: "Replacing a bad tube meant checking among ENIAC's 19,000 possibilities." U.S. Army photo, from M. Weik, "The ENIAC Story", via Wikipedia]

(Oops! One day late this week!)

A computer has some similarities to living organisms. Both produce something from, well, not very much. A computer program has data input from various sources, and produces output to various sinks or targets. A living organism takes in nutrients from various sources, and produces branches, leaves, fur, bones, blood and other organs.

Of course there are differences. A computer is much, much simpler than a living being, even a single-celled organism. A computer in general has only a relatively small number of parts, while the "parts" in a living organism number in the billions. And of course, living organisms reproduce, but that may change in the foreseeable future.

Some animals are sentient, but I'm not going to discuss that here. Maybe in another post.

A computer has hardware and software, and operates on data. The data is either part of the software or read from buffers in the hardware. It stores its calculations in "memory", which is special hardware with particularly fast access speeds.

The computer produces results by placing data into buffers in the hardware. This results in things happening in the real world, such as printing a letter or number on paper or, more frequently these days, on some sort of screen. It may also do many other things, such as control the flow of water by moving a valve or other control mechanism.

Computers communicate with other computers by placing data in an output piece of hardware. The hardware is connected to a distant piece of hardware of the same sort, which puts the data into a buffer accessible to another computer. This computer may be a specialised computer that merely passes on the data. Such computers are called routers (or modems, or firewalls).

Specialised computers are found in washing machines, cars and televisions, and these days we all have multi-functional computers in our pockets: our cellphones. It would be hard to find a piece of electronic equipment these days that doesn't have some sort of computer embedded in it. Very few of these computers are completely isolated – they chatter to one another all the time by various mechanisms.

(Incidentally, I came across a bizarre example of connectivity of things the other day – a wifi teddy bear. Say you are sitting in the lounge and you want to send a message to your child who is in her bedroom. You pick up your tablet and send a message to a "cloud" web site. This sends a message to your child's tablet, which is in her bedroom with her. The teddy bear, which is connected to the child's tablet by wifi, growls the message to the child. No doubt scaring her out of her wits.)

So in the current technological world everything is connected to everything else.
Much like all the cells in a living being are connected to all the other cells in the organism, directly or indirectly. So how far can we take this analogy, where the organism is the network and the individual cells are the computers? (Caveat emptor – I am not a biology expert, so don't take what I say from here on as gospel.)

A computer consists of hardware and software, and operates on data. A cell is sort of squishy, so "hardware" can only be a relative term, but a cell does have a relatively small number of organelles, such as mitochondria. The nucleus, which contains most of the genetic material, acts as the control centre of the cell, much as the CPU is the control centre of a computer: its function is to maintain the integrity of the genes and to control the activities of the cell by regulating gene expression.

In the cell, the genetic material is in some sense the software of the cell. It contains all the information necessary to create the cell itself or, more interestingly, the information needed to cause the cell to split into two identical daughter cells. This information is generally encoded in the DNA of the chromosomes.

The cell also contains, within the nucleus, an organelle called the nucleolus. From my reading, this organelle (which is part of the nucleus) seems mostly to relate to RNA, while the rest of the nucleus mostly relates to DNA, very roughly. RNA and DNA perform a complex dance called protein synthesis in organelles called ribosomes.

Cells produce chemicals, which can be considered analogous to computer outputs, and receive chemicals from other cells, and so cells communicate, in a sense, with each other.

Since all cells are genetically equal, it follows that a cell's type – liver, skin, lung or brain neurone – is determined by factors in its environment. This is only loosely true, as each cell is the daughter of another cell and inherits its type, but in the early days of an organism's life, before organs are formed, cells do differentiate.

Just as when computers were new, they were all very similar: keyboard, monitor, and beige case. As the computer-sphere evolved, special types of computer evolved, such as routers, modems, and firewalls. Not to mention phones. Computers became specialised. Similarly, cells become differentiated, some going on to become liver cells, for example, and others brain cells (neurones).

When an organism is young and a cell divides, both cells are the same type, but when the organism is very young there is no differentiation. The DNA in the cell contains the information necessary to determine the cell type, and tissues and organs are created in the more complex animals. This process obviously can't be random, otherwise cells of the various tissue types would be all mixed up.
It seems to me, maybe naively, that while the "program" for creating cells is in the DNA, some factors in the environment convey information such as how old the organism is and what type of cell needs to be created.

We know from investigations into fractals that a simple equation can result in the creation of an image that looks very much like a tree or grasses, and that small changes to the equation can lead to different tree or grass shapes. It is tempting to think that a similar process takes place in organisms – a general rule is given which results in the right sort of cells being produced in the right places.

The problem with the fractal idea is that it only creates simple shapes. An arm with fingers, skin and so on is beyond the capabilities of a fractal process, so far as I know. Fractals don't stop. Again, so far as I know, there's no way to iteratively create a tree structure with leaves.

So the "software" of the cell, the "program" embedded in the DNA, doesn't appear to be analogous to a simple computer program that draws fractals. Of course that doesn't mean that we can never describe a simple organism completely in fractal terms, and create analogous distinct individuals.

It seems that as long as the analogy is not pushed too far, computers in a distributed network are reasonably similar to living organisms. Please note I am not referring to the fractal-type computer programs, but am talking about the way that computers themselves in a network are somewhat analogous to living organisms. Primitive ones!

Why Pi?

If you measure the ratio of the circumference to the diameter of any circular object you get the number Pi (π). Everyone who has done any maths or physics at all knows this. Some people who have gone on to do more maths know that Pi is an irrational number, which is, looked at one way, merely the category into which Pi falls. There are other irrational numbers, for example the square root of the number 2, which are almost as well known as Pi, and others, such as the number e, or Euler's number, which are less well known.

Anyone who has travelled further along the mathematical road will be aware that there is more to Pi than mere circles and that there are many fascinating things about this number to keep amateur and professional mathematicians interested for a long time.

Pi has been known for millennia, and this has given rise to many rules of thumb and approximations for the use of the number in all sorts of calculations. For instance, I once read that the ratio of the height to base length of the pyramids is pretty much a ratio of Pi. Why this should be so leads to many theories and a great deal of discussion, some thoughtful and measured and others very much more dubious.

Ancient and not-so-ancient civilisations have produced mathematicians who have directly or indirectly interacted with the number Pi. One example of this is the attempts over the centuries to "square the circle".
Briefly, squaring the circle means creating a square with the same area as a given circle by using the usual geometric construction methods and tools – compass and straightedge. This has been proved to be impossible, as the above reference mentions. The attempts to "trisect the angle" and "double the cube" also failed, and for very similar reasons. It has been proved that all three constructions are impossible.

Well, actually they are not possible in a finite number of steps, but it is "possible" in a sense for these objectives to be achieved in an infinite number of steps. This is a pointer to irrational numbers being involved: constructions involving only rational ratios finish in a finite number of steps. (OK, I'm not entirely sure about this one – any corrections will be welcomed.)

OK, so that tells us something about Pi and irrational numbers, but my title says "Why Pi?", and my question is not about the character of Pi as an irrational number, but as the basic number of circular geometry. If you google the phrase "Why Pi?", you will get about a quarter of a million hits. Most of these (I've only looked at a few!) seem to be discussions of the mathematics of Pi, not the philosophy of Pi, which I think the question implies. So I searched for articles on the philosophy of Pi. Hmm, not much there on the actual philosophy of Pi, but heaps on the philosophy of the film "Life of Pi".

What I'm interested in is not the fact that Pi is irrational or that somewhere in its digits is encoded my birthday and the US Declaration of Independence (not to mention copies of the US Declaration of Independence with various spelling and grammatical mistakes). What I'm interested in is why this particular irrational number is the ratio between the circumference and the diameter. Why 3.1415…? Why not 3.1416…?

Part of the answer may lie in a relation called "Euler's Identity":

\[e^{i \pi} + 1 = 0\]

This relates the two irrational numbers 'e' and 'π' in an elegantly simple equation. As in the XKCD link, any mathematician who comes across this equation can't help but be gob-smacked by it. The mathematical symbols and operations in this equation make it one of the most concise expressions of mathematics that we know of. It is considered an example of mathematical beauty.

The interesting thing about Pi is that it was an experimental value in the first place. Ancient geometers were not much interested in theory; they measured round things. They lived purely in the physical world and their maths was utilitarian. They were measuring the world. However, they discovered something that has deep mathematical significance or, to put it another way, something intimately involved in some beautiful deep mathematics.

This argues for a deep and fundamental relationship between mathematics and physics. Mathematics describes physics, and the physical universe has a certain shape, for want of a better word.
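As a quick numerical sanity check of Euler's identity (my addition, not the original post's), floating-point arithmetic gets us within rounding error of zero:

```python
import cmath

# e^(i*pi) + 1 should be exactly 0; floating point leaves a ~1e-16 residue
z = cmath.exp(1j * cmath.pi) + 1
print(z)                # (0+1.2246467991473532e-16j)
print(abs(z) < 1e-15)   # True
```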
If Pi had a different value, that would imply that the universe had a different shape. In our universe one could consider that Euler's relation describes the shape of the universe, at least in part. Possibly a major part of the shape of the universe is encoded in it. It doesn't seem, however, to encode the quantum universe, at least not directly.

I haven't been trained in quantum physics, so I can only go on the little that I know about the subject, and I don't know if there is any similar relationship that determines the "shape" of quantum physics as Euler's relation does for at least some aspects of Newtonian physics. Maybe the closest relationship that I can think of is the Heisenberg uncertainty principle. Roughly speaking (sorry, physicists!), it states that for certain pairs of physical variables there is a physical limit to the accuracy with which they can be jointly known. More specifically, the product of the standard deviations of the two variables is at least the reduced Planck constant divided by two:

\[\sigma_x \sigma_p \geq \frac{\hbar}{2}\]

In other words, if we accurately know the position of something, we have only a vague notion of its momentum. If we accurately know its momentum, we have only a vague idea of its position. This "vagueness" is quantified by the uncertainty principle. It shows exactly how fuzzy quantum physics is.

The mathematical discipline of statistics underlies the uncertainty principle. In a sense the principle defines quantum physics as a statistically based discipline, and the "shape" of statistics determines or describes the science. At least, that is my guess and suggestion.

To return to my original question, "Why Pi?". For that matter, "why statistics?". My answer is a guess and a suggestion, as above. The answer is that that is the shape of the universe. The universe has statistical elements and shape elements and possibly other elements, and the maths describes the shapes and the shapes determine the maths. This is rather circular, I know, but one can conceive of universes where the maths is different and so is the physics, and of course the physics matches the maths and vice versa.

We can only guess what a universe would be like where Pi is a different irrational number (or even, bizarrely, a rational number) and where the fuzziness of the universe at small scales is less or more, or physically related values are related in more complicated ways.

The answer to "Why Pi?" then comes down to the anthropic answer: "Because we measure it that way". Our universe just happens to have that shape. If it had another shape, we would either measure it differently, or we wouldn't exist.
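As a concrete footnote to the uncertainty bound discussed above (again my addition, not the post's): a Gaussian wavepacket saturates it, giving σx·σp exactly ħ/2, which is easy to verify numerically in units where ħ = 1.

```python
import numpy as np

# Gaussian wavepacket: sigma_x * sigma_p should come out to hbar/2 = 0.5
N, L = 4096, 80.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 4.0)                      # sigma_x = 1 by construction
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)    # normalise

# Momentum-space probability density via FFT (hbar = 1)
p = 2 * np.pi * np.fft.fftfreq(N, d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= np.sum(prob_p) * (p[1] - p[0])

sigma_x = np.sqrt(np.sum(x**2 * np.abs(psi)**2) * dx)
sigma_p = np.sqrt(np.sum(p**2 * prob_p) * (p[1] - p[0]))
print(sigma_x * sigma_p)   # ~0.5, i.e. hbar/2
```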
#36: August 21st – 27th

Amazon Braket v1.8.0
• Calculate arbitrary observables when shots=0 (see the short example at the end of this digest)
Bug Fixes and Other Changes
• Remove immutable default args

Quantum Alphatron
Patrick Rebentrost, Miklos Santha, Siyi Yang – Aug 27 2021 – quant-ph – arXiv:2108.11670v1
Many machine learning algorithms optimize a loss function with stochastic gradient descent and use kernel methods to extend linear learning tasks to non-linear learning tasks. Both ideas have been discussed in the context of quantum computing, especially for near-term quantum computing with variational methods and the use of the Hilbert space to encode features of data. In this work, we discuss a quantum algorithm with a provable learning guarantee in the fault-tolerant quantum computing model. In a well-defined learning model, this quantum algorithm is able to provide a polynomial speedup for a large range of parameters of the underlying concept class. We discuss two types of speedups: one for evaluating the kernel matrix and one for evaluating the gradient in the stochastic gradient descent procedure. Our work contributes to the study of quantum learning with kernels and noise.

Quantum adaptive agents with efficient long-term memories
Thomas J. Elliott, Mile Gu, Andrew J. P. Garner, Jayne Thompson – Aug 25 2021 – quant-ph, cond-mat.stat-mech, cs.AI, cs.IT, math.IT – arXiv:2108.10876v1
Central to the success of adaptive systems is their ability to interpret signals from their environment and respond accordingly – they act as agents interacting with their surroundings. Such agents typically perform better when able to execute increasingly complex strategies. This comes with a cost: the more information the agent must recall from its past experiences, the more memory it will need. Here we investigate the power of agents capable of quantum information processing. We uncover the most general form a quantum agent need adopt to maximise memory compression advantages, and provide a systematic means of encoding their memory states. We show these encodings can exhibit extremely favourable scaling advantages relative to memory-minimal classical agents when information must be retained about events increasingly far into the past.

A variational quantum algorithm for the Feynman-Kac formula
Hedayat Alghassi, Amol Deshmukh, Noelle Ibrahim, Nicolas Robles, Stefan Woerner, Christa Zoufal – Aug 25 2021 – quant-ph – arXiv:2108.10846v1
We propose an algorithm based on variational quantum imaginary time evolution for solving the Feynman-Kac partial differential equation resulting from a multidimensional system of stochastic differential equations. We utilize the correspondence between the Feynman-Kac partial differential equation (PDE) and the Wick-rotated Schrödinger equation for this purpose. The results for a (2+1)-dimensional Feynman-Kac system, obtained through the variational quantum algorithm, are then compared against classical ODE solvers and Monte Carlo simulation. We see a remarkable agreement between the classical methods and the quantum variational method for an illustrative example on six qubits. In the non-trivial case of PDEs which preserve probability distributions – rather than preserving the ℓ2-norm – we introduce a proxy norm which is efficient in keeping the solution approximately normalized throughout the evolution. The algorithmic complexity and costs associated with this methodology, in particular for the extraction of properties of the solution, are investigated.
Future research topics in the areas of quantitative finance and other types of PDEs are also discussed.

New Trends in Quantum Machine Learning
Lorenzo Buffoni, Filippo Caruso – Aug 24 2021 – quant-ph, cond-mat.dis-nn, cs.LG, stat.ML – arXiv:2108.09664v1
Here we will give a perspective on new possible interplays between Machine Learning and Quantum Physics, including practical cases and applications. We will explore the ways in which machine learning could benefit from new quantum technologies and algorithms to find new ways to speed up their computations by breakthroughs in physical hardware, as well as to improve existing models or devise new learning schemes in the quantum domain. Moreover, there are lots of experiments in quantum physics that do generate incredible amounts of data, and machine learning would be a great tool to analyze those and make predictions, or even control the experiment itself. On top of that, data visualization techniques and other schemes borrowed from machine learning can be of great use to theoreticians to have better intuition on the structure of complex manifolds or to make predictions on theoretical models. This new research field, named Quantum Machine Learning, is growing very rapidly, since it is expected to provide huge advantages over its classical counterpart; deeper investigations are timely needed since they can already be tested on commercially available quantum machines.

Grover search revisited; application to image pattern matching
Hiroyuki Tezuka, Kouhei Nakaji, Takahiko Satoh, Naoki Yamamoto – Aug 25 2021 – quant-ph – arXiv:2108.10854v1
The landmark Grover algorithm for amplitude amplification serves as an essential subroutine in various types of quantum algorithms, with guaranteed quantum speedup in query complexity. However, there has been no proposal to realize the original motivating application of the algorithm, i.e., database search or, more broadly, pattern matching in a practical setting, mainly due to the technical difficulty of efficiently implementing the data loading and amplitude amplification processes. In this paper, we propose a quantum algorithm that approximately executes the entire Grover database search or pattern matching algorithm. The key idea is to use the recently proposed approximate amplitude encoding method on a shallow quantum circuit, together with the easily implementable inversion-test operation for realizing the projected quantum state having similarity to the query data, followed by amplitude amplification independent of the target index. We provide a thorough demonstration of the algorithm on the problem of image pattern matching.

Training a discrete variational autoencoder for generative chemistry and drug design on a quantum annealer
A.I. Gircha, A.S. Boev, K. Avchaciov, P.O. Fedichev, A.K. Fedorov – Aug 27 2021 – quant-ph, cs.LG, q-bio.BM, q-bio.QM – arXiv:2108.11644v1
Deep generative chemistry models emerge as powerful tools to expedite drug discovery. However, the immense size and complexity of the structural space of all possible drug-like molecules pose significant obstacles, which could be overcome with hybrid architectures combining quantum computers with deep classical networks. We built a compact discrete variational autoencoder (DVAE) with a Restricted Boltzmann Machine (RBM) of reduced size in its latent layer. The size of the proposed model was small enough to fit on a state-of-the-art D-Wave quantum annealer and allowed training on a subset of the ChEMBL dataset of biologically active compounds.
Finally, we generated 4290 novel chemical structures with medicinal chemistry and synthetic accessibility properties in the ranges typical for molecules from ChEMBL. The experimental results point towards the feasibility of using already existing quantum annealing devices for drug discovery problems, which opens the way to building quantum generative models for practically relevant applications.

Variational quantum eigensolver techniques for simulating carbon monoxide oxidation
M.D. Sapova, A.K. Fedorov – Aug 26 2021 – physics.chem-ph, quant-ph – arXiv:2108.11167v1
A family of Variational Quantum Eigensolver (VQE) methods is designed to maximize the resources of existing noisy intermediate-scale quantum (NISQ) devices. However, VQE approaches encounter various difficulties in simulating molecules of industrially relevant sizes, among which the choice of the ansatz for the molecular wavefunction plays a crucial role. In this work, we push forward the capabilities of adaptive variational algorithms (ADAPT-VQE) by demonstrating that the measurement overhead can be significantly reduced by adding multiple operators at each step while keeping the ansatz compact. Within the proposed approach, we simulate a set of molecules, O2, CO, and CO2, participating in the carbon monoxide oxidation processes using the statevector simulator, and compare our findings with the results obtained using VQE-UCCSD and classical methods. Based on these results, we estimate the energy characteristics of the chemical reaction. Our results pave the way to the use of variational approaches for solving practically relevant chemical problems.

Quantum kernels with squeezed-state encoding for machine learning
Long Hin Li, Dan-Bo Zhang, Z. D. Wang – Aug 26 2021 – quant-ph – arXiv:2108.11114v1
Kernel methods are powerful for machine learning, as they can represent data in feature spaces where similarities between samples may be faithfully captured. Recently, it has been realized that machine learning enhanced by quantum computing is closely related to kernel methods, where the exponentially large Hilbert space turns out to be a feature space more expressive than classical ones. In this paper, we generalize quantum kernel methods by encoding data into continuous-variable quantum states, which can benefit from the infinite-dimensional Hilbert space of continuous variables. Specifically, we propose squeezed-state encoding, in which data is encoded in either the amplitude or the phase. The kernels can be calculated on a quantum computer and then combined with classical machine learning, e.g. support vector machines, for training and prediction tasks. Comparisons with other classical kernels are also addressed. Lastly, we discuss physical implementations of squeezed-state encoding for machine learning in quantum platforms such as trapped ions.

Unraveling correlated materials' properties with noisy quantum computers: Natural-orbitalized variational quantum eigensolving of extended impurity models within a slave-boson approach
Pauline Besserve, Thomas Ayral – Aug 25 2021 – quant-ph, cond-mat.str-el – arXiv:2108.10780v1
We propose a method for computing space-resolved correlation properties of the two-dimensional Hubbard model within a quantum-classical embedding strategy that uses a noisy, intermediate-scale quantum (NISQ) computer to solve the embedded model.
While previous approaches were limited to purely local, one-impurity embedded models, requiring at most four qubits and relatively shallow circuits, we solve a two-impurity model requiring eight qubits with an advanced hybrid scheme on top of the Variational Quantum Eigensolver algorithm. This iterative scheme, dubbed Natural Orbitalization (NOization), gradually transforms the single-particle basis to the approximate natural-orbital basis, in which the ground state can be minimally expressed, at the cost of measuring the one-particle reduced density matrix of the embedded problem. We show that this transformation tends to make the variational optimization of existing (but too deep) ansatz circuits faster and more accurate, and we propose an ansatz, the Multi-Reference Excitation Preserving (MREP) ansatz, that achieves great expressivity without requiring a prohibitive gate count. The one-impurity version of the ansatz has only one parameter, making ground state preparation a trivial step, which supports the optimal character of our approach. Within a rotationally invariant slave-boson embedding scheme that requires a minimal number of bath sites and does not require computing the full Green's function, the NOization combined with the MREP ansatz allows us to compute accurate, space-resolved quasiparticle weights and static self-energies for the Hubbard model, even in the presence of noise levels representative of current NISQ processors. This paves the way to a controlled solution of the Hubbard model with larger and larger embedded problems solved by quantum computers.

Quantum Artificial Intelligence for the Science of Climate Change
Manmeet Singh, Chirag Dhara, Adarsh Kumar, Sukhpal Singh Gill, Steve Uhlig – Aug 25 2021 – cs.AI, physics.ao-ph, quant-ph – arXiv:2108.10855v1
Climate change has become one of the biggest global problems, increasingly compromising the Earth's habitability. Recent developments such as the extraordinary heat waves in California and Canada, and the devastating floods in Germany, point to the role of climate change in the ever-increasing frequency of extreme weather. Numerical modelling of the weather and climate has seen tremendous improvements in the last five decades, yet stringent limitations remain to be overcome. Spatially and temporally localized forecasting is the need of the hour for effective adaptation measures towards minimizing the loss of life and property. Artificial Intelligence-based methods are demonstrating promising results in improving predictions, but are still limited by the availability of the requisite hardware and software needed to process the vast deluge of data at the scale of the planet Earth. Quantum computing is an emerging paradigm that has found potential applicability in several fields. In this opinion piece, we argue that new developments in Artificial Intelligence algorithms designed for quantum computers – also known as Quantum Artificial Intelligence (QAI) – may provide the key breakthroughs necessary to further the science of climate change. The resultant improvements in weather and climate forecasts are expected to cascade to numerous societal benefits.

Categories: Week-in-QML
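Returning to the Braket release note at the top of this digest: with shots=0 the local simulator returns exact (analytic) values for requested result types, now including arbitrary observables. A minimal sketch, assuming the amazon-braket-sdk is installed (the exact API surface may differ slightly between versions):

```python
from braket.circuits import Circuit, Observable
from braket.devices import LocalSimulator

# Prepare a Bell state and request the <Z0 Z1> expectation value
circuit = Circuit().h(0).cnot(0, 1)
circuit.expectation(observable=Observable.Z() @ Observable.Z(), target=[0, 1])

# shots=0 switches the simulator to exact, sampling-free evaluation
result = LocalSimulator().run(circuit, shots=0).result()
print(result.values)  # -> [1.0] for the Bell state
```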
Nonlinear dispersive regularization of inviscid gas dynamics
Govind S. Krishnaswami*, Sachin S. Phatak, Sonakshi Sachdev, A. Thyagaraja (*corresponding author)

Ideal gas dynamics can develop shock-like singularities with discontinuous density. Viscosity typically regularizes such singularities and leads to a shock structure. On the other hand, in one dimension, singularities in the Hopf equation can be non-dissipatively smoothed via Korteweg-de Vries (KdV) dispersion. In this paper, we develop a minimal conservative regularization of 3D ideal adiabatic flow of a gas with polytropic exponent γ. It is achieved by augmenting the Hamiltonian by a capillarity energy β(ρ)(∇ρ)². The simplest capillarity coefficient leading to local conservation laws for mass, momentum, energy, and entropy using the standard Poisson brackets is β(ρ) = β*/ρ for constant β*. This leads to a Korteweg-like stress and nonlinear terms in the momentum equation with third derivatives of ρ, which are related to the Bohm potential and the Gross quantum pressure. Just like KdV, our equations admit sound waves with a leading cubic dispersion relation, solitary waves, and periodic traveling waves. As with KdV, there are no steady continuous shock-like solutions satisfying the Rankine-Hugoniot conditions. Nevertheless, in one dimension, for γ = 2, numerical solutions show that the gradient catastrophe is averted through the formation of pairs of solitary waves, which can display approximate phase-shift scattering. Numerics also indicate recurrent behavior in periodic domains. These observations are related to an equivalence between our regularized equations (in the special case of constant specific entropy potential flow in any dimension) and the defocusing nonlinear Schrödinger equation (cubically nonlinear for γ = 2), with β* playing the role of ħ². Thus, our regularization of gas dynamics may be viewed as a generalization of both the single-field KdV and nonlinear Schrödinger equations to include the adiabatic dynamics of density, velocity, pressure, and entropy in any dimension.

Original language: English · Article number: 025303 · Journal: AIP Advances · Issue number: 2 · Publication status: Published, 3 Feb 2020
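For context, the NLS–fluid correspondence invoked in this abstract is the standard Madelung transform, sketched here in units with ħ = m = 1; signs and prefactors vary by convention, and this is the textbook correspondence rather than an equation reproduced from the paper itself. Writing the defocusing cubic NLS field as ψ = √ρ e^{iθ} with v = ∇θ,

\[i\,\partial_t \psi = -\tfrac{1}{2}\nabla^2\psi + |\psi|^2\psi\]

splits into a continuity equation and an Euler-like equation with a quantum-pressure term:

\[\partial_t \rho + \nabla\cdot(\rho v) = 0, \qquad \partial_t v + (v\cdot\nabla)v = -\nabla\rho + \frac{1}{2}\,\nabla\!\left(\frac{\nabla^2\sqrt{\rho}}{\sqrt{\rho}}\right).\]

The −∇ρ term is the enthalpy gradient of a γ = 2 barotropic gas, matching the abstract's remark that the equivalence is cubically nonlinear for γ = 2, while the last term is the Bohm potential mentioned above.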
Self-Consistent Hybrid Functional Calculations: Implications for Structural, Electronic, and Optical Properties of Oxide Semiconductors

The development of new exchange-correlation functionals within density functional theory means that increasingly accurate information is accessible at moderate computational cost. Recently, a newly developed self-consistent hybrid functional has been proposed (Skone et al., Phys. Rev. B 89:195112, 2014), which allows for a reliable and accurate calculation of material properties using a fully ab initio procedure. Here, we apply this new functional to wurtzite ZnO, rutile SnO2, and rocksalt MgO. We present calculated structural, electronic, and optical properties, which we compare to results obtained with the PBE and PBE0 functionals. For all semiconductors considered here, the self-consistent hybrid approach gives improved agreement with experimental structural data relative to the PBE0 hybrid functional for a moderate increase in computational cost, while avoiding the empiricism common to conventional hybrid functionals. The electronic properties are improved for ZnO and MgO, whereas for SnO2 the PBE0 hybrid functional gives the best agreement with experimental data.

Metal oxides exhibit many unique structural, electronic, and magnetic properties, making them useful for a broad range of technological applications. Metal oxides are used as transparent conducting oxides (TCOs) [1], find applications as building blocks in artificial multiferroic heterostructures [2] and as spin-filter devices [3], and even include a huge class of superconducting materials. To develop new materials for specific applications, it is necessary to have a detailed understanding of the interplay between the chemical composition of different materials, their structure, and their electronic, optical, or magnetic properties.

For the development of new functional oxides, computational methods that allow theoretical predictions of structural and electronic properties have become an increasingly useful tool. When optical or electronic properties are under consideration, electronic structure methods are necessary, with the most popular approach for solids being density functional theory (DFT). DFT has proven hugely successful in the calculation of structural properties of condensed matter systems and the electronic properties of simple metals [4]. The earliest developed approximate exchange-correlation functionals, however, face limitations, for example severely underestimating the band gaps of semiconductors and insulators. Over the last decade, several new, more accurate exchange-correlation functionals have been proposed. Increased predictive accuracy often comes with an increased computational cost, and the adoption of these more accurate functionals has only been made possible through the continued increase in available computational power.

One such more accurate, and more costly, approach is to use so-called hybrid functionals. These are constructed by mixing a fraction of Hartree-Fock exact-exchange with the exchange and correlation terms from some underlying DFT functional. Calculated material properties, such as lattice parameters and band gaps, however, depend on the precise proportion of Hartree-Fock exact-exchange, α. Typical hybrid functionals treat α as a fixed empirical parameter, chosen by intuition and experimental calibration.
A recently proposed self-consistent hybrid functional approach for condensed systems [5] avoids this empiricism and allows parameter-free hybrid functional calculations to be performed. In this approach, the amount of Hartree-Fock exact-exchange is identified with the inverse of the dielectric constant, with this constraint achieved by performing an iterative sequence of calculations to self-consistency. Here we apply this new self-consistent hybrid functional to wurtzite ZnO and rutile SnO2, both materials with potential applications as TCOs, and to MgO, a wide band gap insulator [6]. We examine the implications of the self-consistent hybrid functional for the structural, electronic, and optical properties.

In the next section, we present the theoretical background, describe the self-consistent hybrid functional, and give the computational details. We then present results for the structural, electronic, and optical properties of ZnO, SnO2, and MgO, and compare these to data calculated using alternative exchange-correlation functionals and from experiments. The paper concludes with a summary and an outlook.

Density functional theory and hybrid functionals

DFT is a popular and reliable tool for theoretically describing the electronic structure of both crystalline and molecular systems. DFT provides a mean-field simplification of the many-body Schrödinger equation. The central variable is the electron density \(n(\vec{r}) = \psi^{*}(\vec{r})\psi(\vec{r})\), determined from the electronic wavefunctions \(\psi(\vec{r})\), and the Hamiltonian is described as a functional of \(n(\vec{r})\). Within the generalised Kohn-Sham scheme, the potential is

$$ v_{\text{GKS}}(\vec{r}, \vec{r}^{\prime}) = v_{\mathrm{H}}(\vec{r}) + v_{\text{xc}}(\vec{r},\vec{r}^{\prime}) + v_{\text{ext}}(\vec{r}). $$

The Hartree potential, \(v_{\mathrm{H}}(\vec{r})\), and the external potential, \(v_{\text{ext}}(\vec{r})\), are in principle known. The exchange-correlation potential, \(v_{\text{xc}}(\vec{r},\vec{r}^{\prime})\), however, is not, and must be approximated. The most successful early approximations make use of the local density approximation and the semilocal generalised gradient approximation (GGA), for example in the parametrisation of Perdew, Burke, and Ernzerhof (PBE) [7]. These approximations already allowed reliable descriptions of structural properties within the computational resources available at the time, but lacked accuracy when determining band energies, especially fundamental band gaps, and d valence band widths of semiconductors. These properties are particularly important for reliable calculations of the electronic and optical behaviour of semiconductors.

In recent years, so-called hybrid functionals have gained in popularity. In a hybrid functional, some proportion of the local exchange-correlation potential is replaced by Hartree-Fock exact-exchange terms, giving a better description of electronic properties. The explicit inclusion of exact-exchange Hartree-Fock terms makes these calculations computationally much more demanding than the earlier GGA calculations, and hybrid functional calculations have become routine only in recent years. The fraction of Hartree-Fock exact-exchange admixed in these hybrid functionals, α, is usually justified on experimental or theoretical grounds, and then fixed for a specific functional. This adds an empirical parameter and forfeits the ab initio nature of the calculations. One popular choice of α=0.25 is realised in the PBE0 functional [8].
In this work, we are concerned with full-range hybrid functionals, for which the generalised nonlocal exchange-correlation potential is

$$ v_{\text{xc}}(\vec{r},\vec{r}^{\prime}) = \alpha v_{\mathrm{x}}^{\text{ex}}(\vec{r},\vec{r}^{\prime}) + (1-\alpha) v_{\mathrm{x}}(\vec{r}) + v_{\mathrm{c}}(\vec{r}). $$

A common approach is to select α to reproduce the experimental band gap of solid-state systems. Apart from adding an empirical parameter to the calculations, fitting the band gap of a material requires reliable experimental data. Moreover, this approach does not guarantee that all electronic properties, e.g. d band widths or defect levels, are correct [9]. Recently, it has been argued from the screening behaviour of nonmetallic systems that α can be related to the inverse of the static dielectric constant [10, 11],

$$ \alpha = \frac{1}{\epsilon_{\infty}}, $$

which may then be computed in a self-consistent cycle [5, 12]. This iteration to self-consistency requires additional computational effort, but removes the empiricism of previous hybrid functionals and restores the ab initio character of the calculations. The utility of this approach, however, depends on the accuracy of the resulting predicted material properties. Here, we are interested in the implications for the structural, electronic, and optical properties of oxide semiconductors, and consider ZnO, SnO2, and MgO as an illustrative set of materials.

Computational Details

The calculations presented in this work have been performed using the projector-augmented wave (PAW) method [13], as implemented in the Vienna ab initio simulation package (VASP 5.4.1) [14–16]. For the calculation of structural and electronic properties, the standard PAW potentials supplied with VASP were used, with 12 valence electrons for Zn (4s²3d¹⁰), 14 valence electrons for Sn (5s²4d¹⁰5p²), 8 valence electrons for Mg (2p⁶3s²), and 6 valence electrons for O (2s²2p⁴), respectively. When calculating dielectric functions, we have used the corresponding GW potentials, which give a better description of high-energy unoccupied states. To evaluate the performance of the self-consistent hybrid approach, we have calculated structural and electronic data using three functionals: GGA in the PBE parametrisation [7], the hybrid functional PBE0 [8], and the self-consistent hybrid functional [5], which we denote scPBE0.

Structural relaxations were performed for the regular unit cells within a scalar-relativistic approximation, using dense k-point meshes for Brillouin zone integration (8×8×6 for wurtzite ZnO, 6×6×8 for rutile SnO2, and 10×10×10 for rocksalt MgO). For each material, we performed several fixed-volume calculations, in the cases of ZnO and SnO2 allowing internal structural parameters to relax until all forces on ions were smaller than 0.001 eV Å⁻¹. Zero-pressure geometries were then determined by fitting a cubic spline to the total energies with respect to the unit cell volumes.

To evaluate the self-consistent fraction of Hartree-Fock exact-exchange, α, the dielectric constant ε∞ is calculated in an iterative series of full geometry optimisations. For each of the ground-state structures, the static dielectric tensor has been calculated (including local field effects) from the response to finite electric fields. For non-cubic systems (ZnO, SnO2), ε∞ was obtained by averaging over the trace of the static dielectric tensor, \(\frac{1}{3}\left(2\epsilon_{\infty}^{\perp}+\epsilon_{\infty}^{\parallel}\right)\).
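Schematically, the iteration just described might look like the following sketch. This is an illustration only: run_pbe0_relaxation and compute_eps_inf are hypothetical stand-ins for full VASP workflows (hybrid-functional geometry optimisation and finite-field dielectric response), not real VASP or SDK calls.

```python
# Hedged sketch of the scPBE0 self-consistency loop (alpha = 1 / eps_inf)
def run_pbe0_relaxation(structure, alpha):
    raise NotImplementedError("stand-in for a VASP geometry optimisation")

def compute_eps_inf(structure, alpha):
    raise NotImplementedError("stand-in for a finite-field dielectric calculation")

def self_consistent_alpha(structure, alpha=0.25, tol=0.01, max_iter=10):
    """Iterate alpha = 1/eps_inf until eps_inf changes by less than tol."""
    eps_prev = None
    for _ in range(max_iter):
        relaxed = run_pbe0_relaxation(structure, alpha)   # most expensive step
        eps_inf = compute_eps_inf(relaxed, alpha)
        if eps_prev is not None and abs(eps_inf - eps_prev) < tol:
            return alpha, relaxed
        # Restart the next cycle from the current relaxed geometry
        eps_prev, alpha, structure = eps_inf, 1.0 / eps_inf, relaxed
    return alpha, structure
```

The tol=0.01 default mirrors the ±0.01 convergence criterion used in the text below; in practice the loop converges in about three iterations for the oxides studied here.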
We have considered ε∞ to be converged when the difference between two subsequent calculations falls below ±0.01 [5].

Results and Discussion

Structural properties

ZnO crystallises in the hexagonal wurtzite structure of space group P6₃mc (No. 186). SnO2 crystallises in the tetragonal rutile structure of space group P4₂/mnm (No. 136). MgO crystallises in the cubic rocksalt structure of space group Fm\(\bar{3}\)m (No. 225). Each crystal structure was first fully geometry optimised, as described in the computational details section. The energy/volume data for the GGA, PBE0, and scPBE0 exchange-correlation potentials are plotted in the upper panels of Fig. 1.

The GGA functional significantly overestimates the ground-state volume relative to experimental values for all three materials. This is due to shortcomings in this simpler early exchange-correlation potential. The PBE0 functional adds a fixed proportion of Hartree-Fock exact-exchange (α=0.25) and produces structural properties in much better agreement with experimental data.

Fig. 1: Upper panels: total energy (in eV) with respect to the unit cell volume for wurtzite ZnO (left panel), rutile SnO2 (middle panel), and rocksalt MgO (right panel), calculated by means of the GGA (black), PBE0 (red), and scPBE0 (green) functionals, respectively. The experimental unit cell volume is depicted by the dashed orange line. Lower panels: convergence of the dielectric constant ε∞ is obtained after three steps in the additional self-consistency cycle.

For our scPBE0 calculations, for each material, the static dielectric constant converged in three iterations (Fig. 1, lower panels). Here, computationally the most expensive part is the full geometry optimisation using the PBE0 functional. Each subsequent step in the self-consistent loop to determine the amount of Hartree-Fock exact-exchange starts from the optimised crystal structure of the previous step, which reduces the computational cost considerably. Using the self-consistent amount of Hartree-Fock exact-exchange in the self-consistent hybrid functional yielded structural properties in slightly better agreement with experimental data (Fig. 1, upper panels). The improved description of structural properties using the scPBE0 functional is also evident from the lattice constants a (and c), which are given, together with those obtained with the other two functionals and experimental data, in Table 1.

Table 1: Ground-state structural parameters for wurtzite ZnO, rutile SnO2, and rocksalt MgO obtained with different approximations for the exchange-correlation potential, in comparison to low-temperature experimental data.

The quality of the structural data compared to experiment can also be seen in Fig. 2, where the ratios of the calculated ground-state volumes to the experimental one are plotted for the three oxides. Again, the results obtained with the new self-consistent hybrid functional show the closest agreement with experiment.

Fig. 2: Ground-state unit cell volumes V with respect to the experimental volume V_exp, calculated by means of the GGA (black), PBE0 (red), and scPBE0 (green) functionals, respectively. The experimental volumes correspond to V/V_exp = 1 (dashed horizontal line).

Electronic and Optical Properties

Figure 3 shows electronic band structures calculated using scPBE0 for wurtzite ZnO, rutile SnO2, and rocksalt MgO.
Electronic and Optical Properties

Figure 3 shows electronic band structures calculated using scPBE0 for wurtzite ZnO, rutile SnO2, and rocksalt MgO. The calculated (versus experimental) direct band gaps are 3.425 eV (3.4449 eV [17]) for ZnO, 3.827 eV (3.596 eV [18]) for SnO2, and 8.322 eV (7.833 eV [19]) for MgO, respectively (Table 1). Figure 4 shows the GGA, PBE0, and scPBE0 calculated band gaps alongside the experimental values. The PBE0 band gaps are underestimated compared to the experimental ones for all three oxides, although for SnO2 the PBE0 value is already very close. The scPBE0 band gaps are larger than the PBE0 values, thereby improving the results for ZnO and MgO but worsening the agreement for SnO2. In general, the band gaps calculated using the hybrid functionals (PBE0, scPBE0) are within ten per cent of the experimental band gaps.

Fig. 3 Electronic band structures of wurtzite ZnO (left panel), rutile SnO2 (middle panel), and rocksalt MgO (right panel), calculated with the scPBE0 functional. Energies are in electron volts (eV) and the valence band maximum is set to zero

Fig. 4 Ground state Kohn-Sham band gaps EKS with respect to the experimental band gap Eexp calculated by means of the GGA (black), PBE0 (red), and scPBE0 (green) functionals, respectively. The experimental band gaps correspond to EKS/Eexp=1 (dashed horizontal line)

The scPBE0 calculations provide accurate structural properties and band gaps compared with experimental data, which gives us reasonable confidence when calculating properties that are less easily accessible by experiment. We have calculated the real (ε1) and imaginary (ε2) parts of the dielectric functions via Fermi's golden rule, summing over transition matrix elements. For these calculations, we used the recommended VASP GW potentials and considerably increased the number of empty bands to ensure converged results. Figure 5 shows the real (ε1) and imaginary (ε2) parts of the dielectric functions calculated with the GGA, PBE0, and scPBE0 functionals. Because the GGA functional significantly underestimates the band gap, the imaginary parts of the dielectric functions exhibit their onset, corresponding to the first allowed direct transition at the fundamental band gap, at too low energies. The onset energy improves considerably when switching to the PBE0 hybrid functional, and improves further with respect to experiment when using the self-consistent hybrid functional. For the two hybrid functionals, the real and imaginary parts of the dielectric functions are very similar in their overall peak structure, but differ from those of the pure GGA functional. One reason for this difference might be the improvements in the d band width and position when using the hybrid functionals compared to the pure GGA one. Clarifying this would require a more in-depth comparison of the different band structures and how their specific features influence the dielectric functions.

Fig. 5 Real ε1 (upper panels) and imaginary ε2 (lower panels) parts of the dielectric functions calculated by means of the GGA (black), PBE0 (dashed red), and scPBE0 (green) functionals, respectively. Dielectric functions are shown for wurtzite ZnO (left panels), rutile SnO2 (middle panels), and rocksalt MgO (right panels)

Conclusions

We have presented a theoretical investigation of the application of a new self-consistent hybrid functional to the oxide semiconductors ZnO, SnO2, and MgO. We have presented and compared calculated structural, electronic, and optical properties of these oxides with experimental data, and have discussed the implications of using the new self-consistent hybrid functional.
We find that the self-consistent hybrid functional gives calculated properties with accuracies as good as or better than the PBE0 hybrid functional. The additional computational cost of the self-consistency cycle is justified: it removes the empiricism of comparable hybrid functionals and restores the ab initio character of these calculations.

Abbreviations

DFT: Density functional theory
GGA: Generalised gradient approximation
PAW: Projector-augmented wave
PBE: Perdew, Burke, and Ernzerhof
TCO: Transparent conducting oxides
VASP: Vienna ab initio simulation package

References

1. Minami T (2005) Transparent conducting oxide semiconductors for transparent electrodes. Semicond Sci Technol 20:S35
2. Fritsch D, Ederer C (2010) Epitaxial strain effects in the spinel ferrites CoFe2O4 and NiFe2O4 from first principles. Phys Rev B 82:104117
3. Caffrey NM, Fritsch D, Archer T, Sanvito S, Ederer C (2013) Spin-filtering efficiency of ferrimagnetic spinels CoFe2O4 and NiFe2O4. Phys Rev B 87:024419
4. Hasnip PJ, Refson K, Probert MIJ, Yates JR, Clark SJ, Pickard CJ (2014) Density functional theory in the solid state. Phil Trans R Soc A 372:20130270
5. Skone JH, Govoni M, Galli G (2014) Self-consistent hybrid functional for condensed systems. Phys Rev B 89:195112
6. Fritsch D, Schmidt H, Grundmann M (2006) Pseudopotential band structures of rocksalt MgO, ZnO, and Mg1−xZnxO. Appl Phys Lett 88:134104
7. Perdew JP, Burke K, Ernzerhof M (1996) Generalized gradient approximation made simple. Phys Rev Lett 77:3865
8. Adamo C, Barone V (1999) Toward reliable density functional methods without adjustable parameters: the PBE0 model. J Chem Phys 110:6158
9. Walsh A, Da Silva JLF, Wei SH (2008) Theoretical description of carrier mediated magnetism in cobalt doped ZnO. Phys Rev Lett 100:256401
10. Alkauskas A, Broqvist P, Pasquarello A (2011) Defect levels through hybrid density functionals: insights and applications. Phys Stat Sol B 248:775
11. Marques MAL, Vidal J, Oliveira MJT, Reining L, Botti S (2011) Density-based mixing parameter for hybrid functionals. Phys Rev B 83:035119
12. Gerosa M, Bottani CE, Caramella L, Onida G, Di Valentin C, Pacchioni G (2015) Electronic structure and phase stability of oxide semiconductors: performance of dielectric-dependent hybrid functional DFT, benchmarked against GW band structure calculations and experiments. Phys Rev B 91:155201
13. Blöchl PE (1994) Projector augmented-wave method. Phys Rev B 50:17953
14. Kresse G, Hafner J (1993) Ab initio molecular dynamics for liquid metals. Phys Rev B 47:558
15. Kresse G, Hafner J (1994) Ab initio molecular-dynamics simulation of the liquid-metal–amorphous-semiconductor transition in germanium. Phys Rev B 49:14251
16. Kresse G, Furthmüller J (1996) Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput Mat Sci 6:15
17. Liang WY, Yoffe AD (1968) Transmission spectra of ZnO single crystals. Phys Rev Lett 20:59
18. Reimann K, Steube M (1998) Experimental determination of the electronic band structure of SnO2. Solid State Commun 105:649
19. Whited RC, Flaten CJ, Walker WC (1973) Exciton thermoreflectance of MgO and CaO. Solid State Commun 13:1903
20. Reeber RR (1970) Lattice parameters of ZnO from 4.2° to 296° K. J Appl Phys 41:5063
21. Schulz H, Thiemann KH (1979) Structure parameters and polarity of the wurtzite type compounds SiC-2H and ZnO. Solid State Commun 32:783
22. Heltemes EC, Swinney HL (1967) Anisotropy in lattice vibrations of zinc oxide. J Appl Phys 38:2387
23. Haines J, Léger JM (1996) X-ray diffraction study of the phase transitions and structural evolution of tin dioxide at high pressure: relationships between structure types and implications for other rutile-type dioxides. Phys Rev B 55:11144
24. Summitt R (1968) Infrared absorption in single-crystal stannic oxide: optical lattice-vibration modes. J Appl Phys 39:3762
25. Hazen RH (1976) Effects of temperature and pressure on the cell dimension and X-ray temperature factors of periclase. Am Mineral 61:266
26. Jasperse JR, Kahan A, Plendl JN, Mitra SS (1966) Temperature dependence of infrared dispersion in ionic crystals LiF and MgO. Phys Rev 146:526

Acknowledgements

This research has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 641864 (INREP). This work made use of the ARCHER UK National Supercomputing Service via the membership of the UK's HPC Materials Chemistry Consortium, funded by EPSRC (EP/L000202), and of the Balena HPC facility of the University of Bath. BJM acknowledges support from the Royal Society (UF130329).

Authors' contributions

AW and BJM conceived the idea. DF performed the calculations, analysed the data, and wrote the manuscript. DF, BJM, and AW participated in the discussions of the calculated results. All the authors have read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Correspondence to Daniel Fritsch.

Cite this article

Fritsch D, Morgan BJ, Walsh A (2017) Self-Consistent Hybrid Functional Calculations: Implications for Structural, Electronic, and Optical Properties of Oxide Semiconductors. Nanoscale Res Lett 12:19

Keywords: Density functional theory; Hybrid functionals; Semiconducting oxides; Dielectric functions

PACS Codes: 71.15.Mb; 71.20.Nr; 78.20.Bh
Scale Relativity

NOTE: this is a stub based on an almost totally deleted version of the Wikipedia article on scale relativity: https://en.wikipedia.org/w/index.php?title=Scale_relativity&oldid=938866469

Scale relativity is a geometrical and fractal space-time physical theory. Relativity theories (special relativity and general relativity) are based on the notion that position, orientation, movement and acceleration cannot be defined in an absolute way, but only relative to a system of reference. The theory of scale relativity proposes to extend the concept of relativity to physical scales (time, length, energy, or momentum scales) by introducing an explicit "state of scale" into coordinate systems.

This extension of the relativity principle using fractal geometries to study scale transformations was originally introduced by Laurent Nottale, based on the idea of a fractal space-time theory first introduced by Garnet Ord, and by Nottale and Jean Schneider. The construction of the theory is similar to that of previous relativity theories, with three different levels: Galilean, special and general. The development of a full general scale relativity is not yet finished.

Feynman's paths in quantum mechanics

Richard Feynman developed his path integral formulation of quantum mechanics in the 1940s. Searching for the paths that contribute most for quantum particles, Feynman noticed that such paths were very irregular on small scales, i.e. infinite and non-differentiable. This means that between two points a particle has not one path, but an infinity of potential paths. This can be illustrated with a concrete example: imagine hiking in the mountains, free to walk wherever you like. To go from point A to point B, there is not just one path, but an infinity of possible paths, each going through different valleys and hills.

Scale relativity hypothesizes that quantum behavior comes from the fractal nature of spacetime, since fractal geometries allow the study of such non-differentiable paths. This fractal interpretation of quantum mechanics was further specified by Abbott and Wise, who showed that the paths have fractal dimension 2. Scale relativity goes one step further by asserting that the fractality of these paths is a consequence of the fractality of space-time itself. Other pioneers also saw the fractal nature of quantum mechanical paths. And just as the development of general relativity required the mathematical tools of non-Euclidean (Riemannian) geometries, the development of a fractal space-time theory would not have been possible without the concept of fractal geometry developed and popularized by Benoit Mandelbrot.
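A toy illustration of this fractal-dimension-2 behaviour, under the assumption (standard in this context) that quantum paths scale like a Brownian walk: the measured length of one and the same path grows without bound as the sampling step is refined.

```python
import random

# Toy Brownian path standing in for the irregular Feynman paths described
# above (illustrative assumption: quantum paths share Brownian scaling).
random.seed(1)
pos = [0.0]
for _ in range(1 << 14):
    pos.append(pos[-1] + random.gauss(0.0, 1.0))

# Coarsen the sampling and re-measure the length of the same path.
for stride in (1, 2, 4, 8, 16):
    pts = pos[::stride]
    length = sum(abs(b - a) for a, b in zip(pts, pts[1:]))
    print(f"time step x{stride:2d}: measured length = {length:8.0f}")

# The length grows like 1/sqrt(time step) as the sampling refines; since the
# spatial resolution shrinks like sqrt(time step), the length scales as
# 1/resolution, i.e. L ~ eps**(1 - D) with fractal dimension D = 2.
```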
Fractals are usually associated with the self-similar case of a fractal curve, but more complicated fractals are possible, e.g. considering not only curves but also fractal surfaces or fractal volumes, as well as fractal dimensions which take values other than 2 and which vary with scale.

Independent discovery

Garnet Ord and Laurent Nottale both connected fractal space-time with quantum mechanics. Nottale coined the term "scale relativity" in 1992. He developed the theory and its applications in more than one hundred scientific papers,[1] two technical books,[2] a popular book,[3] and four popular books in French.[4]

Basic concepts

Principle of scale relativity

The principle of relativity says that physical laws should be valid in all coordinate systems. This principle has been applied to states of position (the origin and orientation of axes), as well as to the states of movement of coordinate systems (speed, acceleration). Such states are never defined in an absolute manner, but relative to one another. For example, there is no absolute movement, in the sense that movement can only be defined relatively, between one body and another. Scale relativity proposes, in a similar manner, to define a scale relative to another one, and not in an absolute way. Only scale ratios have a physical meaning, never an absolute scale, in the same way as there exists no absolute position or velocity, but only position or velocity differences. The concept of resolution is re-interpreted as the "state of scale" of the system, in the same way as velocity characterizes the state of movement. The principle of scale relativity can thus be formulated as: the laws of physics must be such that they apply to coordinate systems whatever their state of scale. The main goal of scale relativity is to find laws which mathematically respect this new principle of relativity. Mathematically, this can be expressed through the principle of covariance applied to scales, that is, the invariance of the form of physics equations under transformations of resolutions (dilations and contractions).

Including resolutions in coordinate systems

Galileo explicitly introduced velocity parameters into the observational reference system; Einstein then explicitly introduced acceleration parameters. In a similar way, Nottale introduces scale parameters explicitly into the observational reference system. The core idea of scale relativity is thus to include resolutions explicitly in coordinate systems, thereby building the resolution of measurement into the formulation of physical laws. An important consequence is that coordinates are no longer numbers, but functions, which depend on the resolution. For example, the length of the Brittany coast depends explicitly on the resolution at which one measures it: the coastline paradox. If we measure a pen with a ruler graduated at the millimetre scale, we should write that it is 15 ± 0.1 cm. The error bar indicates the resolution of our measurement. If we had measured the pen at another resolution, for example with a ruler graduated at the centimetre scale, we would have found another result, 15 ± 1 cm. In scale relativity, this resolution defines the "state of scale", analogous to the way speed defines the "state of movement" in the relativity of movement. Knowing the relative state of scale is fundamental to any physical description. For example, to describe the movement and properties of a sphere, we use either classical mechanics or quantum mechanics depending on the size of the sphere in question. In particular, information about resolution is essential to understanding quantum mechanical systems; since in scale relativity resolutions are included in coordinate systems, this appears a logical and promising approach to account for quantum phenomena.
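A minimal numerical illustration of this scale dependence, using the Koch curve as a stand-in coastline: the measured length is a function of the resolution, and it follows the scale law L(ε) = ε^(1−D) exactly.

```python
import math

# Koch curve as a concrete scale-dependent "coastline": at recursion depth k
# the ruler length is (1/3)**k and the measured length is (4/3)**k.
D = math.log(4) / math.log(3)              # fractal dimension ~ 1.26
for k in range(6):
    ruler = (1.0 / 3.0) ** k               # resolution (the state of scale)
    length = (4.0 / 3.0) ** k              # measured length at that scale
    # the scale law L(eps) = eps**(1 - D) reproduces the measurement exactly
    print(f"eps = {ruler:.5f}  L = {length:.4f}  eps**(1-D) = {ruler ** (1 - D):.4f}")
```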
Dropping the hypothesis of differentiability

The same type of approach was followed by Nottale to build the theory of scale relativity. The basis of current theories is a continuous and twice-differentiable space. Space is by definition a continuum, but the assumption of differentiability is not supported by any fundamental reason. It is usually assumed only because the first two derivatives of position with respect to time are needed to describe motion. Scale relativity is rooted in the idea that the constraint of differentiability can be relaxed, and that this allows quantum laws to be derived.

In terms of geometry, differentiability means that a curve is sufficiently smooth to be approximated by a tangent. Mathematically, two points are placed on the curve and one follows the slope of the straight line joining them as they become closer and closer. If the curve is smooth enough, this process converges (almost) everywhere and the curve is said to be differentiable. It is often believed that this property is common in nature. However, most natural objects have instead a very rough surface or contour. For example, the bark of trees and snowflakes have detailed structure that does not become smoother as the scale is refined. For such curves, the slope of the candidate tangent fluctuates endlessly or diverges; the derivative is then undefined (almost) everywhere and the curve is said to be nondifferentiable.

Therefore, when the assumption of the differentiability of space is abandoned, there is an additional degree of freedom that allows the geometry of space to be extremely rough. The difficulty in this approach is that new mathematical tools are needed to model this geometry, because the classical derivative cannot be used. Nottale found a solution to this problem by using the fact that nondifferentiability implies scale dependence, and therefore the use of fractal geometry. Scale dependence means that distances on a nondifferentiable curve depend on the scale of observation. It is therefore possible to maintain differential calculus, provided that the scale at which derivatives are calculated is specified and that their definition involves no limit. This amounts to saying that nondifferentiable curves have a whole set of tangents at each point instead of one: a specific tangent at each scale.

To abandon the hypothesis of differentiability does not mean to assert nondifferentiability everywhere. Instead, it leads to a more general framework in which both the differentiable and the non-differentiable cases are included. Combined with the relativity of motion, scale relativity thus by construction extends and contains general relativity. Just as general relativity became possible when the hypothesis of flat (Euclidean) space-time was dropped, allowing the possibility of curved space-time, scale relativity becomes possible when the hypothesis of differentiability is abandoned, allowing the possibility of a fractal space-time.
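A minimal sketch of what nondifferentiability looks like in practice, using a Weierstrass-type curve (continuous everywhere, differentiable nowhere): the finite-difference slope at a fixed point never settles down as the resolution is refined.

```python
import math

def weierstrass(x, a=0.5, b=3.0, terms=30):
    """Continuous but nowhere-differentiable curve (Hardy's condition
    a*b >= 1 is satisfied for a = 0.5, b = 3)."""
    return sum(a ** k * math.cos(b ** k * math.pi * x) for k in range(terms))

# The finite-difference slope at a fixed point does not converge to a limit
# as the two points approach: it fluctuates, so there is no unique tangent,
# effectively one tangent per scale of observation.
x0 = 0.3
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    slope = (weierstrass(x0 + h) - weierstrass(x0)) / h
    print(f"h = {h:.0e}  slope = {slope:+9.2f}")
```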
The objective is then to describe a continuous space-time which is not everywhere differentiable, in the same spirit in which general relativity describes a continuous, curved space-time. Abandoning differentiability does not mean abandoning differential equations: the concept of a fractal allows the nondifferentiable case to be handled with differential equations. In differential calculus, the concept of limit can be seen as a zoom; in this generalization of differential calculus, one looks not only at the limiting zooms (zero and infinity) but also at everything in between, that is, at all possible zooms. In sum, we can drop the hypothesis of the differentiability of space-time while keeping differential equations, provided that fractal geometries are used. This leads to a double differential-equation treatment: in space-time and in scale space.

Fractal space-time

Where Einstein showed that space-time is curved, Nottale argues that it is not only curved, but also fractal. Nottale has proven a key theorem showing that a space which is continuous and non-differentiable is necessarily fractal, meaning that such a space depends on scale. Importantly, the theory does not merely describe fractal objects in a given space; it is space itself which is fractal. Understanding what a fractal space means requires studying not just fractal curves, but also fractal surfaces, fractal volumes, etc. Mathematically, a fractal space-time is defined as a nondifferentiable generalization of Riemannian geometry. Such a fractal space-time geometry is the natural choice for developing this new principle of relativity, in the same way that curved geometries were needed to develop Einstein's theory of general relativity.

In the same way that general relativistic effects are not felt in a typical human life, the most radical effects of the fractality of spacetime appear only at the extreme limits of scales: microscopic scales and cosmological scales. This approach therefore proposes to bridge not only the quantum and the classical, but also the classical and the cosmological, with fractal to non-fractal transitions (see Fig. 1). More plots of this transition can be found in the literature.

Fig. 1. Variation of the fractal dimension of space-time geodesics (trajectories) with resolution, in the framework of special scale relativity. The scale symmetry is broken at two (non-absolute) transition scales λ and Λ, which divide scale space into three domains: (1) an intermediate classical domain, where space-time does not depend on resolution because the laws of movement dominate scale laws; and two asymptotic domains towards (2) very small and (3) very large scales, where scale laws dominate the laws of movement, making the fractal structure of space-time explicit.

Minimum and maximum invariant scales

A fundamental and elegant result of scale relativity is the proposal of minimum and maximum scales in physics, invariant under dilations, in a very similar way as the speed of light is an upper limit for speed.

Minimum invariant scale

In special relativity, there is an unreachable speed, the speed of light: speeds can be composed without end, but the result always remains below the speed of light.
Additionally, the relativistic composition of two velocities is smaller than their arithmetic sum. In special scale relativity, analogous unreachable observational scales are proposed: the Planck length scale (lP) and the Planck time scale (tP). Dilations are bounded by lP and tP, which means that we can divide spatial or temporal intervals without end, but they always remain larger than the Planck length and time scales. This is a result of special scale relativity (see the section on special scale relativity below). Similarly, the composition of two scale changes is smaller than the product of those two dilations.

Maximum invariant scale

The choice of the maximum scale (noted L) is less easy to explain, but it essentially consists in identifying it with the cosmological constant: L = Λ^(−1/2). This is motivated in part because dimensional analysis shows that the cosmological constant is the inverse of the square of a length, i.e. a curvature.

Galilean scale relativity

The theory of scale relativity follows a construction similar to that of the relativity of movement, which took place in three steps: Galilean, special, and general relativity. This is not surprising, as in both cases the goal is to find laws satisfying transformation laws that include one relative parameter: the speed in the case of the relativity of movement; the resolution in the case of the relativity of scales.

Galilean scale relativity involves linear scale transformations, a constant fractal dimension, self-similarity, and scale invariance. This situation is best illustrated with self-similar fractals. Here, the length of geodesics varies uniformly with resolution: the fractal dimension of free-particle paths does not change under zooms. These are self-similar curves. In Galilean relativity, recall that the laws of motion are the same in all inertial frames; Galileo famously concluded that "the movement is like nothing". In the case of self-similar fractals, paraphrasing Galileo, one could say that "scaling is like nothing": the same patterns occur at different scales, so scaling is not noticeable.

In the relativity of movement, Galileo's theory forms an additive group:

X' = X − VT
T' = T

However, if we consider scale transformations (dilations and contractions), the laws compose as products, not sums. This can be seen from the necessity of using units of measurement. When we say that an object measures 10 meters, we actually mean that it measures 10 times the predetermined length called "meter". The number 10 is really a ratio of two lengths, 10 m/1 m, where 10 m is the measured quantity and 1 m is the arbitrary defining unit. This is why the group is multiplicative. Moreover, an arbitrary scale e has no physical meaning in itself (like the number 10); only scale ratios r = e'/e have a meaning, in our example r = 10/1. Using the Gell-Mann–Lévy method, we can introduce a more relevant scale variable, V = ln(e'/e), and recover an additive group for scale transformations by taking the logarithm, which converts products into sums.

When, in addition to the principle of scale relativity, one adds the principle of the relativity of movement, there is a transition in the structure of geodesics at large scales, where trajectories no longer depend on the resolution and become classical. This explains the shift of behavior from quantum to classical. See also Fig. 1.
Special scale relativity

Special scale relativity can be seen as a correction of Galilean scale relativity, in which Galilean transformations are replaced by Lorentz transformations. The "corrections remain small at 'large' scale (i.e. around the Compton scale of particles) and increase when going to smaller length scales (i.e. large energies) in the same way as motion-relativistic corrections increase when going to large speeds".

In Galilean relativity, it was considered "obvious" that speeds could be added without limit (w = u + v); this composition law for speeds went unchallenged. Poincaré and Einstein did challenge it with special relativity, setting a maximum speed for movement, the speed of light. Formally, if v is a velocity, its relativistic composition with c gives v ⊕ c = c. The status of the speed of light in special relativity is that of a horizon: unreachable, impassable, invariant under changes of movement.

Regarding scale, we usually still think in a Galilean way: we assume without justification that the composition of two dilations is ρ · ρ = ρ². Written with logarithms, this equality becomes ln ρ + ln ρ = 2 ln ρ. However, nothing guarantees that this law should hold at quantum or cosmic scales. As a matter of fact, this dilation law is corrected in special scale relativity, and becomes (with log-resolutions normalised as explained below):

ln ρ ⊕ ln ρ = 2 ln ρ / (1 + ln²ρ)

More generally, in special relativity the composition law for velocities differs from the Galilean approximation and becomes (with the speed of light c = 1):

u ⊕ v = (u + v) / (1 + u·v)

Similarly, in special scale relativity, the composition law for dilations differs from our Galilean intuitions and becomes (in a logarithm of base K, which introduces a constant C = ln K playing the same role as c):

log ρ1 ⊕ log ρ2 = (log ρ1 + log ρ2) / (1 + log ρ1 · log ρ2)

The Planck scale plays a role in special scale relativity similar to that of the speed of light in special relativity. It is a horizon for small scales: unreachable, impassable, invariant under scale changes, i.e. dilations and contractions. The consequence for special scale relativity is that applying the same contraction ρ twice yields a contraction weaker than ρ × ρ; and formally, if ρ is a contraction, ρ ⊗ lP = lP. As noted above, there is also an unreachable, impassable maximum scale, invariant under scale changes, which is the cosmic length L. In particular, it is invariant under the expansion of the universe.
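The toy sketch below contrasts the two composition laws, in log-resolution units normalised (an assumed convention) so that the Planck scale sits at |log ρ| = 1, i.e. the constant C of the text is set to 1.

```python
# Toy comparison of the Galilean and special-scale-relativity composition
# laws for dilations, in log-resolution units with the Planck scale at 1.
def compose_galilean(x, y):
    return x + y                      # dilations multiply; logs simply add

def compose_special(x, y):
    return (x + y) / (1 + x * y)      # Lorentz-like law for log-dilations

x = 0.9                               # a contraction 90% of the way to Planck
print(compose_galilean(x, x))         # 1.8  -> would overshoot the Planck scale
print(compose_special(x, x))          # ~0.994 -> approaches but never reaches 1
print(compose_special(x, 1.0))        # 1.0  -> the Planck scale is invariant
```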
General scale relativity

In Galilean scale relativity, spacetime is fractal with constant fractal dimensions. In special scale relativity, fractal dimensions can vary, but this variation remains constrained by a log-Lorentz law: the laws satisfy a logarithmic version of the Lorentz transformation, and the varying fractal dimension is covariant, in a similar way as proper time is covariant in special relativity. In general scale relativity, the fractal dimension is no longer constrained and can take any value; in other words, there can be curvature in scale space. Einstein's curved space-time then becomes a particular case of the more general fractal spacetime. General scale relativity is much more complicated, technical, and less developed than its Galilean and special versions. It involves non-linear laws, scale dynamics, and gauge fields.

In the case of non-self-similarity, changing scales generates a new scale-force or scale-field, which needs to be taken into account in a scale-dynamics approach. Quantum mechanics then needs to be analyzed in scale space. Finally, in general scale relativity, we need to take into account both movement and scale transformations, where scale variables depend on space-time coordinates. More details about the implications for abelian and non-abelian gauge fields can be found in the literature; Nottale's 2011 book provides the state of the art.

To sum up, one can see structural similarities between the relativity of movement and the relativity of scales in Table 1:

Relativity | Variables defining the coordinate system | Variables characterizing the state of the coordinate system
Movement | space, time | velocity, acceleration
Scale | length of a fractal, variable fractal dimension | resolution, scale acceleration

Table 1. Comparison between the relativity of movement and the relativity of scales. In both cases there are two kinds of variables linked to coordinate systems: variables which define the coordinate system, and variables which characterize its state. In this analogy, the resolution is assimilated to a speed; the scale acceleration to an acceleration; the length of a fractal to space; and the variable fractal dimension to time.

Consequences for quantum mechanics

The fractality of space-time implies an infinity of virtual geodesics, which already means that a fluid-mechanics description is needed. This view is not new: many authors have noticed fractal properties at quantum scales, thereby suggesting that typical quantum mechanical paths are fractal (see the literature for a review). The idea of considering a fluid of geodesics in a fractal spacetime is, however, an original proposal of Nottale's. In scale relativity, quantum mechanical effects appear as effects of fractal structures on movement. The fundamental indeterminism and nonlocality of quantum mechanics are deduced from the fractal geometry itself. There is an analogy with the interpretation of gravitation in general relativity: if gravitation is a manifestation of space-time curvature in general relativity, quantum effects are manifestations of a fractal space-time in scale relativity.

To sum up, two aspects allow scale relativity to shed light on quantum mechanics. On one side, fractal fluctuations themselves are hypothesized to lead to quantum effects. On the other side, non-differentiability leads to a local irreversibility of the dynamics, and therefore to the use of complex numbers. Quantum mechanics thus receives not only a new interpretation, but a foundation in relativity principles.

Quantum-classical transition

As Philip Turner summarized: the structure of space has both a smooth (differentiable) component at the macro-scale and a chaotic, fractal (non-differentiable) component at the micro-scale, the transition taking place at the de Broglie length scale. This transition is explained within Galilean scale relativity (see also above).
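A back-of-the-envelope illustration of where this transition scale lies, taking λ = ħ/(mv) for the de Broglie length (up to a 2π convention); the masses and speeds are illustrative choices, not values from the sources above.

```python
HBAR = 1.054571817e-34                      # J*s (CODATA)

def de_broglie_scale(mass_kg, speed_m_s):
    """Transition scale lambda = hbar / (m * v)."""
    return HBAR / (mass_kg * speed_m_s)

for name, m, v in [("electron at 10^6 m/s", 9.109e-31, 1.0e6),
                   ("1 g ball at 1 m/s", 1.0e-3, 1.0)]:
    print(f"{name}: lambda = {de_broglie_scale(m, v):.2e} m")

# electron: ~1e-10 m, so quantum behaviour appears at atomic scales;
# ball:     ~1e-31 m, far below any physical resolution, hence classical.
```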
Derivation of quantum mechanics' postulates

Starting from scale relativity, it is possible to derive the fundamental "postulates" of quantum mechanics. More specifically, building on the key theorem showing that a space which is continuous and non-differentiable is necessarily fractal (see the fractal space-time section above), Schrödinger's equation and Born's and von Neumann's postulates can be derived. To derive Schrödinger's equation, Nottale started with Newton's second law of motion and used the result of the key theorem; many subsequent works have confirmed the derivation. Moreover, the Schrödinger equation so derived becomes generalized in scale relativity, opening the way to a macroscopic quantum mechanics (see below for validated empirical predictions in astrophysics). This may also help to better understand macroscopic quantum phenomena in the future. Reasoning about fractal geodesics and non-differentiability, it is also possible to derive von Neumann's postulate and Born's postulate. With the hypothesis of a fractal space-time, the Klein-Gordon and Dirac equations can then be derived. The significance of these results is considerable: the foundations of quantum mechanics, until now axiomatic, are logically derived from more primary relativity principles and methods.

Gauge transformations

Gauge fields appear when scale and movement are combined. Scale relativity proposes a geometric theory of gauge fields. As Turner explains: the theory offers a new interpretation of gauge transformations and gauge fields (both abelian and non-abelian), which are manifestations of the fractality of space-time, in the same way that gravitation is derived from its curvature. The relationships between fractal space-time, gauge fields, and quantum mechanics are technical and advanced subject matters, elaborated in detail in Nottale's latest book.

Consequences for elementary particle physics

Scale relativity gives a geometric interpretation to charges, which are now "defined as the conservative quantities that are built from the new scale symmetries". Relations between mass scales and coupling constants can be established theoretically, and some of them validated empirically. This is possible because, in scale relativity, the problem of divergences in quantum field theory is resolved: in the new framework, masses and charges remain finite, even at infinite energy. In special scale relativity, the possible scale ratios become limited, constraining the quantization of charges in a geometric way. Let us compare a few theoretical predictions with their experimental measures.

Fine-structure constant

Nottale's latest theoretical prediction of the fine-structure constant at the Z0 scale is:

α⁻¹(mZ) = 128.92

By comparison, a recent experimental measure gives:

α⁻¹(mZ) = 128.91 ± 0.02

At low energy, the theoretical fine-structure constant prediction is:

α⁻¹ = 137.01 ± 0.035,

which is within the range of the experimental precision:

α⁻¹ = 137.036

SU(2) coupling at Z scale

Here the SU(2) coupling corresponds to rotations in a three-dimensional scale space. The theoretical estimate of the SU(2) coupling at the Z scale is:

α₂⁻¹(mZ) = 29.8169 ± 0.0002

while the experimental value gives:

α₂⁻¹(mZ) = 29.802 ± 0.027
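For a sense of these agreements, the quoted numbers can be expressed in units of the combined uncertainties; this is simple arithmetic on the values above, not an additional claim.

```python
# Difference between theory and experiment in units of combined uncertainty,
# for the two couplings quoted above: (value, uncertainty) pairs.
comparisons = {
    "1/alpha(mZ)":  (128.92, 0.0, 128.91, 0.02),
    "1/alpha2(mZ)": (29.8169, 0.0002, 29.802, 0.027),
}
for name, (theory, dth, exp, dex) in comparisons.items():
    n_sigma = abs(theory - exp) / (dth ** 2 + dex ** 2) ** 0.5
    print(f"{name}: theory and experiment differ by {n_sigma:.1f} sigma")
```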
Strong nuclear force at Z scale

Special scale relativity predicts the value of the strong nuclear force with great precision, as later experimental measurements confirmed. The first prediction of the strong coupling at the Z energy level was made in 1992:

αS(mZ) = 0.1165 ± 0.0005

A recent and refined theoretical estimate gives:

αS(mZ) = 0.1173 ± 0.0004,

which fits very well with the experimental measure:

αS(mZ) = 0.1176 ± 0.0009

Mass of the electron

As an application of this new approach to gauge fields, a theoretical estimate of the mass of the electron (me) is possible from the experimental value of the fine-structure constant. This leads to very good agreement:

me(theoretical) = 1.007 me(experimental)

Astrophysical applications

Macroquantum mechanics

Some chaotic systems can be analyzed with a macroquantum mechanics. The main tool here is the generalized Schrödinger equation, which brings the statistical predictability characteristic of quantum mechanics to other scales in nature. The equation predicts probability density peaks. For example, the positions of exoplanets can be predicted in a statistical manner: the theory predicts that planets are more likely to be found at certain distances from their star. As Baryshev and Teerikorpi write:

With his equation for the probability density of planetary orbits around a star, Nottale has seemingly come close to the old analogy which saw a similarity between our solar system and an atom in which electrons orbit the nucleus. But now the analogy is deeper and mathematically and physically supported: it comes from the suggestion that chaotic planetary orbits on very long time scales have preferred sizes, the roots of which go to fractal space-time and generalized Newtonian equation of motion which assumes the form of the quantum Schrödinger equation.

However, as Nottale acknowledges, this general approach is not totally new:

The suggestion to use the formalism of quantum mechanics for the treatment of macroscopic problems, in particular for understanding structures in the solar system, dates back to the beginnings of the quantum theory.

Gravitational systems

Space debris

At the scale of Earth's orbit, space debris probability peaks at 718 km and 1475 km have been predicted with scale relativity, which compares with observed peaks at 850 km and 1475 km. Da Rocha and Nottale suggest that the dynamical braking of the Earth's atmosphere may be responsible for the difference between the theoretical prediction and the observational data for the first peak.

Solar system

Scale relativity predicts a new law for interplanetary distances, proposing an alternative to the now-falsified Titius-Bode "law". The predictions here are statistical, not deterministic as in Newtonian dynamics. In addition to being statistical, the scale-relativistic law has a different theoretical form and is more reliable than the original Titius-Bode version. The Titius-Bode "law" of planetary distances has the form a + b × cⁿ, with a = 0.4 AU, b = 0.3 AU, and c = 2 in its original version. It is partly inconsistent: Mercury corresponds to n = −∞, Venus to n = 0, the Earth to n = 1, etc. It therefore "predicts" an infinity of orbits between Mercury and Venus, and fails for the main asteroid belt and beyond Saturn.
It has also been shown that its agreement with the observed distances is not statistically significant. "[I]n the scale relativity framework, the predicted law of distance is not a Titius-Bode-like power law but a more constrained and statistically significant quadratic law of the form aₙ = a₀n²."
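A hedged numerical check of this quadratic law against the inner solar system; the rank assignments n = 3–6 for Mercury through Mars are taken from the scale-relativity literature rather than derived here, and the semi-major axes are standard values.

```python
# Check whether a_n = a0 * n**2 holds with a single constant a0 for the
# inner planets, under the assumed ranks n = 3..6.
observed_au = [("Mercury", 3, 0.387), ("Venus", 4, 0.723),
               ("Earth", 5, 1.000), ("Mars", 6, 1.524)]
for name, n, a in observed_au:
    print(f"{name:7s} n = {n}: a / n^2 = {a / n ** 2:.4f} AU")
# a/n^2 comes out roughly constant (~0.04 AU), the fitted a0 of the model.
```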
Extrasolar systems

The method also applies to other extrasolar systems. Let us illustrate this with the first exoplanets found around the pulsar PSR B1257+12, where three planets, A, B, and C, have been found. Their orbital period ratios (noted PA/PC for the period ratio of planet A to C) can be estimated and compared to observations. Using the macroscopic Schrödinger equation, the recent theoretical estimates are:

(PA/PC)^(1/3) = 0.63593 (predicted)
(PB/PC)^(1/3) = 0.8787 (predicted),

which fit the observed values with great precision:

(PA/PC)^(1/3) = 0.63597 (observed)
(PB/PC)^(1/3) = 0.8783 (observed).

The puzzling fact that many exoplanets (e.g. hot Jupiters) orbit so close to their parent stars receives a natural explanation in this framework: it corresponds to the fundamental orbital of the model, where (exo)planets sit at 0.04 AU per solar mass of their parent star. More validated predictions can be found regarding orbital periods and the distances of planets from their parent stars.

Galaxy pairs

Daniel da Rocha studied the velocities of about 2000 galaxy pairs, which gave statistically significant results when compared to the theoretical structuring in phase space predicted by scale relativity. The method and tools here are similar to those used for explaining the structure of solar systems. Similar successful results apply at other extragalactic scales: the local group of galaxies, clusters of galaxies, the local supercluster, and other very large scale structures.

Dark matter

Scale relativity suggests that the fractality of space contributes to the phenomenon of dark matter: some of the dynamical and gravitational effects which seem to require unseen matter are suggested to be consequences of the fractality of space on very large scales. In the same way as quantum physics differs from classical physics at very small scales because of fractal effects, symmetrically, at very large scales, scale relativity predicts that corrections from the fractality of space-time must be taken into account (see also Fig. 1). Such an interpretation is somewhat similar in spirit to modified Newtonian dynamics (MOND), although here the approach is founded on relativity principles: in MOND, Newtonian dynamics is modified in an ad hoc manner to account for the new effects, while in scale relativity it is the new fractal geometric field that leads to the emergence of a dark potential.

On the largest scales, scale relativity offers a new perspective on the issue of redshift quantization. With reasoning similar to that which predicts probability peaks for the velocities of planets, this can be generalized to larger intergalactic scales. Nottale writes:

In the same way as there are well-established structures in the position space (stars, clusters of stars, galaxies, groups of galaxies, clusters of galaxies, large scale structures), the velocity probability peaks are simply the manifestation of structuration in the velocity space. In other words, as it is already well-known in classical mechanics, a full view of the structuring can be obtained in phase space.

Cosmological applications

Large numbers hypothesis

Nottale noticed that reasoning about scales was a promising road to explaining the large numbers hypothesis. This was elaborated in more detail in a working paper, and the scale-relativistic approach to the large numbers hypothesis was later discussed by Nottale and by Sidharth.

Prediction of the cosmological constant

In scale relativity, the cosmological constant is interpreted as a curvature; dimensional analysis indeed shows that it is the inverse of the square of a length. The predicted value of the cosmological constant, back in 1993, was:

ΩΛh² = 0.36

Depending on model choices, the most recent predictions give the following range:

0.311 < ΩΛh² (predicted) < 0.356,

while the measured cosmological constant from the Planck satellite is:

ΩΛh² (measured) = 0.318 ± 0.012.

Given the improvement of the empirical measures from 1993 until 2011, Nottale commented:

The convergence of the observational values towards the theoretical estimate, despite an improvement of the precision by a factor of more than 20, is striking.

Dark energy can be considered as a measurement of the cosmological constant. In scale relativity, dark energy would come from a potential energy manifested by the fractal geometry of the universe at large scales, in the same way as the Newtonian potential is a manifestation of curved geometry in general relativity. It is also worth noting the numerous similarities between Nottale's work and the obscure but rigorous work of Moshe Carmeli.

Horizon problem

Scale relativity offers a new perspective on the old horizon problem in cosmology. The problem is that different regions of the universe have had no contact with each other because of the great distances between them, yet they have the same temperature and other physical properties. This should not be possible, given that the transfer of information (or energy, heat, etc.) can occur at most at the speed of light. Nottale writes that special scale relativity "naturally solves the problem because of the new behaviour it implies for light cones. Though there is no inflation in the usual sense, since the scale factor time dependence is unchanged with respect to standard cosmology, there is an inflation of the light cone as t → Λ/c", where Λ is the Planck length scale (ħG/c³)^(1/2). This inflation of the light cones makes them flare and cross, thereby allowing a causal connection between any two points and solving the horizon problem.

Applications to other fields

Although scale relativity started as a spacetime theory, its methods and concepts can be, and have been, used in other fields. For example, quantum-classical kinds of transitions can be at play at intermediate scales, provided there exists a fractal medium which is locally nondifferentiable. Such a fractal medium then plays a role similar to that played by fractal spacetime for particles: objects and particles embedded in such a medium acquire macroquantum properties.
As examples, we can mention gravitational structuring in astrophysics (see the astrophysical applications above), turbulence, superconductivity at laboratory scales, and modeling in geography (both below). What follows are not strict applications of scale relativity, but rather models constructed with the general idea of the relativity of scales. Fractal models, and in particular self-similar fractal laws, have been applied to describe numerous biological systems such as trees, blood networks, or plants. It is thus to be expected that the mathematical tools developed for a fractal space-time theory will have a wide variety of applications in describing fractal systems.

Superconductivity and macroquantum phenomena

The generalized Schrödinger equation can, under certain conditions, apply at macroscopic scales. This leads to the proposal that quantum-like phenomena need not occur only at quantum scales. In a recent paper, Turner and Nottale proposed new ways to explore the origins of macroscopic quantum coherence in high-temperature superconductivity.

If we assume that morphologies arise from a growth process, we can model this growth as an infinite family of virtual, fractal, and locally irreversible trajectories. This allows a growth equation to be written in a form which can be integrated into a Schrödinger-like equation. The structuring implied by such a generalized Schrödinger equation provides a new basis to study, with a purely energetic approach, the issues of formation, duplication, bifurcation, and hierarchical organization of structures. An inspiring example is the solution describing growth from a center, which bears similarities with the problem of particle scattering in quantum mechanics. Searching among the simplest solutions (with a central potential and spherical symmetry), one solution yields a flower shape, the common Platycodon flower (see Fig. 2). In honor of Erwin Schrödinger, Nottale, Chaline, and Grou named their book "Flowers for Schrödinger" (Des fleurs pour Schrödinger).

Fig. 2. Schrödinger's flower. Morphogenesis of a flower-like structure, solution of a growth process equation that takes the form of a Schrödinger equation under fractal conditions. For more details, see Nottale 2007.

In a short paper, researchers inspired by scale relativity proposed a log-periodic law for the development of the human embryo, which fits the observed stages of human embryonic development reasonably well. With scale-relativistic models, Nottale and Auffray tackled the issue of multiple-scale integration in systems biology. Other studies suggest that many processes in living systems, because they are embedded in a fractal medium, are expected to display wave-like and quantized structuring. The mathematical tools of scale relativity have also been applied to geographical problems.

Singularity and evolutionary trees

In their review of approaches to technological singularities, Magee and Devezas included the work of Nottale, Chaline, and Grou:

Utilizing the fractal mathematics due to Mandelbrot (1983) these authors develop a model based upon a fractal tree of the time sequences of major evolutionary leaps at various scales (log-periodic law of acceleration – deceleration).
The application of the model to the evolution of western civilization shows evidence of an acceleration in the succession (pattern) of economic crisis/non-crisis, which points to a next crisis in the period 2015–2020, with a critical point Tc = 2080. The meaning of Tc in this approach is the limit of the evolutionary capacity of the analyzed group and is biologically analogous with the end of a species and emergence of a new species.

The interpretation of this emergence of a new species remains open to debate: whether it will take the form of transhumans, cyborgs, superintelligent AI, or a global brain.
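A toy sketch of such a log-periodic acceleration, assuming the common form Tc − Tn = (Tc − T0)/gⁿ; the rate g and the starting date below are illustrative choices, not the fitted values of Nottale, Chaline, and Grou.

```python
# Log-periodic acceleration towards a critical date Tc: successive event
# dates satisfy Tc - T_n = (Tc - T0) / g**n (assumed form; toy parameters).
TC, T0, G = 2080.0, 1750.0, 1.8
for n in range(7):
    t_n = TC - (TC - T0) / G ** n
    print(f"event {n}: year {t_n:7.1f}")
# The intervals between events shrink geometrically, accumulating at Tc.
```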
Reception and critique

Scale relativity and other approaches

It may help in understanding scale relativity to compare it with other approaches to unifying quantum and classical theories. There are two main roads towards unifying quantum mechanics and relativity: starting from quantum mechanics, or starting from relativity. Quantum gravity theories explore the former, scale relativity the latter. Quantum gravity theories try to build a quantum theory of spacetime, whereas scale relativity is a spacetime theory of quantum theory.

String theory

Although string theory and scale relativity start from different assumptions in tackling the reconciliation of quantum mechanics and relativity, the two approaches need not be opposed. Indeed, Castro suggested combining string theory with the principle of scale relativity:

It was emphasized by Nottale in his book that a full motion plus scale relativity including all spacetime components, angles and rotations remains to be constructed. In particular the general theory of scale relativity. Our aim is to show that string theory provides an important step in that direction and vice versa: the scale relativity principle must be operating in string theory.

Quantum gravity

Scale relativity is based on a geometrical approach, and thereby recovers the quantum laws instead of assuming them. This distinguishes it from other quantum gravity approaches. Nottale comments:

The main difference is that these quantum gravity studies assume the quantum laws to be set as fundamental laws. In such a framework, the fractal geometry of space-time at the Planck scale is a consequence of the quantum nature of physical laws, so that the fractality and the quantum nature co-exist as two different things. In the scale relativity theory, there are not two things (in analogy with Einstein's general relativity theory in which gravitation is a manifestation of the curvature of space-time): the quantum laws are considered as manifestations of the fractality and nondifferentiability of space-time, so that they do not have to be added to the geometric description.

Loop quantum gravity

Scale relativity and loop quantum gravity have in common that they start from relativity theory and principles, and that they fulfil the condition of background independence.

El Naschie's E-Infinity theory

El Naschie has developed a related yet different fractal space-time theory, in which differentiability and continuity are both given up. El Naschie thus uses a "Cantorian" space-time and works mostly with number theory. This contrasts with scale relativity, which keeps the hypothesis of continuity and thus works preferentially with mathematical analysis and fractals.

Causal dynamical triangulation

Computer simulations of causal dynamical triangulation have found a fractal to non-fractal transition from quantum scales to larger scales. This result seems compatible with the quantum-classical transition deduced, in a different way, from the theoretical framework of scale relativity.

Noncommutative geometry

For both scale relativity and non-commutative geometry, particles are geometric properties of space-time. The intersection of the two theories seems fruitful and remains to be explored. In particular, Nottale further generalized this non-commutativity, saying that it "is now at the level of the fractal space-time itself, which therefore fundamentally comes under Connes's noncommutative geometry. Moreover, this noncommutativity might be considered as a key for a future better understanding of the parity and CP violations, which will not be developed here."

Doubly special relativity

Both theories identify the Planck length as a fundamental minimum scale. However, as Nottale comments:

the main difference between the "Doubly-Special-Relativity" approach and the scale relativity one is that we have identified the question of defining an invariant length-scale as coming under a relativity of scales. Therefore the new group to be constructed is a multiplicative group that becomes additive only when working with the logarithms of scale ratios, which are definitely the physically relevant scale variables, as we have shown by applying the Gell-Mann–Levy method to the construction of the dilation operator (see Sec. 4.2.1).

Nelson's stochastic mechanics

At first sight, scale relativity and Nelson's stochastic mechanics share features, such as the derivation of the Schrödinger equation. Some authors point out problems with Nelson's mechanics concerning multi-time correlations in repeated measurements; on the other hand, it has been argued that these perceived problems can be resolved. In any case, scale relativity is not founded on a stochastic approach. As Nottale writes:

Here, the fractality of the space-time continuum is derived from its nondifferentiability, it is constrained by the principle of scale relativity and the Dirac equation is derived as an integral of the geodesic equation. This is therefore not a stochastic approach in its essence, even though stochastic variables must be introduced as a consequence of the new geometry, so it does not come under the contradictions encountered by stochastic mechanics.

Bohm's mechanics

Bohm's mechanics is a hidden-variables theory, which scale relativity is not; in this respect they are quite different. This point is explained as follows:

In the scale relativity description, there is no longer any separation between a "microscopic" description and an emergent "macroscopic" description (at the level of the wave function), since both are accounted for in the double scale space and position space representation.

Cognitive aspects

Special and general relativity are notoriously hard for non-specialists to understand, partly because the psychological and sociological uses of the concepts of space and time differ from their use in physics. Yet the relativity of scales is still harder to apprehend than the other relativity theories: humans can change their positions and velocities, but have virtually no experience of shrinking or dilating themselves.
Such transformations appear in fiction, however, as in Alice's Adventures in Wonderland or the movie Honey, I Shrunk the Kids.

Sociological analysis

The sociologists Bontems and Gingras carried out a detailed bibliometric analysis of scale relativity and showed how difficult it is for a theory with a different theoretical starting point to compete with well-established paradigms such as string theory. Back in 2007, they considered the theory to be neither mainstream (not many people work on it compared to other paradigms) nor controversial (there is very little informed academic discussion around it). The two sociologists thus qualified the theory as "marginal", in the sense that it is developed inside academia but is not controversial. They also show that Nottale has had a double career: a classical one, working on gravitational lensing, and a second one on scale relativity. Nottale first secured his scientific reputation with important publications about gravitational lensing, then obtained a stable academic position, giving him more freedom to explore the foundations of spacetime and quantum mechanics.

A possible obstacle to the growth in popularity of scale relativity is that the fractal geometries needed to deal with special and general scale relativity are less well known and less developed mathematically than the simple and well-known self-similar fractals. This technical difficulty may make the advanced concepts of the theory harder to learn: physicists interested in scale relativity need to invest some time in understanding fractal geometries, much as one needs to learn non-Euclidean geometry in order to work with Einstein's general relativity. Similarly, the generality and transdisciplinary nature of the theory led Auffray and Noble to comment: "The scale relativity theory and tools extend the scope of current domain-specific theories, which are naturally recovered, not replaced, in the new framework. This may explain why the community of physicists has been slow to recognize its potential and even to challenge it." Nottale's popular book has been compared with Einstein's popular book Relativity: The Special and the General Theory.

The reactions from scientists to scale relativity are generally positive. For example, Baryshev and Teerikorpi write:

Though Nottale's theory is still developing and not yet a generally accepted part of physics, there are already many exciting views and predictions surfacing from the new formalism. It is concerned in particular with the frontier domains of modern physics, i.e. small length- and time-scales (microworld, elementary particles), large length-scales (cosmology), and long time-scales.

Regarding the predictions of planetary spacings, Potter and Jargodzki commented:

In the 1990s, applying chaos theory to gravitationally bound systems, L. Nottale found that statistical fits indicate that the planet orbital distances, including that of Pluto, and the major satellites of the Jovian planets, follow a numerical scheme with their orbital radii proportional to the squares of integers n² extremely well!

Auffray and Noble gave an overview: Scale relativity has implications for every aspect of physics, from elementary particle physics to astrophysics and cosmology.
It provides numerous examples of theoretical predictions of standard model parameters, a theoretical expectation for the Higgs boson mass which will potentially be assessed in the coming years by the Large Hadron Collider, and a prediction of the cosmological constant which remains within the range of increasingly refined observational data. Strikingly, many predictions in astrophysics have already been validated through observations such as the distribution of exoplanets or the formation of extragalactic structures.

Although many applications have led to validated predictions (see above), Patrick Peter criticized a provisionally estimated value of the Higgs boson mass:

a prediction for the Higgs boson that should have been observed at m_H ≃ 113.7 GeV... it would appear, according to the book itself, that the theory it describes would be already ruled out by LHC data!

However, this prediction was initially made at a time when the Higgs boson mass was totally unknown. Additionally, the prediction does not rely on scale relativity itself, but on a newly suggested form of the electroweak theory. The LHC measurement quoted here is m_H = 125.6 ± 0.3 GeV, which lies at about 110% of this early estimate.

The particle physicist and skeptic Victor Stenger also noticed that the theory "predicts a nonzero value of the cosmological constant in the right ballpark". He also acknowledged that the theory "makes a number of other remarkable predictions".
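As an aside, the quoted n² spacing claim is easy to play with numerically. The sketch below is mine, not the article's: the quantum numbers assigned to the inner planets (n = 3 to 6 for Mercury through Mars) are the ones usually quoted in the scale-relativity literature and should be treated as purely illustrative.

```python
# Toy check of the claimed a ∝ n² spacing law for the inner solar system.
# Semi-major axes in AU; the n-assignments are assumptions taken from the
# scale-relativity literature, not derived here.
planets = {
    "Mercury": (0.387, 3),
    "Venus":   (0.723, 4),
    "Earth":   (1.000, 5),
    "Mars":    (1.524, 6),
}

for name, (a, n) in planets.items():
    # if a = a0 * n², the ratio a / n² should come out roughly constant
    print(f"{name:8s}  a/n² = {a / n**2:.4f} AU")
```

The ratio a/n² comes out near 0.04 AU for all four planets, which is the kind of statistical fit the quoted passage refers to; whether the fit is physically meaningful is precisely what the surrounding debate is about.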
Article citations

Planck Collaboration: Aghanim, N., et al., Planck 2018 results. VI. Cosmological parameters, arXiv:1807.06209v2, 20 Sep 2018.

has been cited by the following article:

Quantum Fluctuations and Variable G Return Einstein's Field Equation to Its Original Formulation

Jarl-Thure Eriksson, Åbo Akademi University, Finland

International Journal of Physics. 2021, Vol. 9 No. 3, 169-177. DOI: 10.12691/ijp-9-3-4

Cite this paper: Jarl-Thure Eriksson. Quantum Fluctuations and Variable G Return Einstein's Field Equation to Its Original Formulation. International Journal of Physics. 2021; 9(3):169-177. doi: 10.12691/ijp-9-3-4.

The standard ΛCDM model has successfully described most astronomical observations. However, the model faces several open questions: what caused the Big Bang singularity, what is the physics behind dark matter, and what is the origin of dark energy? The present theory, CBU (the Continuously Breeding Universe), has been developed in accordance with known principles of physics and incorporates important ideas from the past. The universe is a complex emerging system which starts from a single fluctuation producing a positron-electron pair; expansion is driven by the emergence of new pairs. The gravitational parameter G is inversely proportional to the Einsteinian curvature radius r. The Planck length and the Planck time accordingly depend on the curvature, and hence on the size, of the universe. It is shown that the solution to the Schrödinger equation of the initial positron-electron fluctuation includes an exponential function whose parameter equals the Planck length of the initial event. The existence of a wave function provides a link between quantum mechanics and the theory of general relativity. The fast change of momentum widens the Heisenberg uncertainty window, thereby enhancing positron-electron pair production, which was especially strong in the early universe. When these findings are introduced into the energy-momentum tensor of Einstein's Field Equation, the equation acquires a simple configuration without G and a cosmological constant. The universe is a macroscopic manifestation of the quantum world.
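As a rough numerical illustration of the scaling claims in this abstract (my own sketch, not the author's: the present-day curvature radius r0 is a made-up placeholder, and tying the Planck length to the curvature through the standard formula l_P = sqrt(hbar*G/c³) is an assumption about what the abstract intends):

```python
import math

# Illustrative only: the abstract states G ∝ 1/r, with r the Einsteinian
# curvature radius. We normalize so that G takes today's value at a
# hypothetical present-day radius r0 (placeholder number, not from the paper).
G0   = 6.674e-11    # m^3 kg^-1 s^-2, present-day gravitational parameter
r0   = 1.3e26       # m, hypothetical present-day curvature radius
hbar = 1.055e-34    # J s
c    = 2.998e8      # m/s

def G(r):
    return G0 * r0 / r          # the stated inverse proportionality

def planck_length(r):
    # l_P = sqrt(hbar * G / c^3) inherits the r-dependence through G
    return math.sqrt(hbar * G(r) / c**3)

for r in (r0 * 1e-30, r0 * 1e-10, r0):
    print(f"r = {r:.2e} m  ->  G = {G(r):.2e},  l_P = {planck_length(r):.2e} m")
```

On this reading, a small early-universe curvature radius corresponds to a much larger G and Planck length, which is the qualitative behaviour the abstract appeals to.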
[The following statement is signed by several mathematicians at Stanford and MIT in support of one of their recently admitted graduate students, and I am happy to post it here on my blog. -T]

We were saddened and horrified to learn that Ilya Dumanski, a brilliant young mathematician who has been admitted to our graduate programs at Stanford and MIT, has been imprisoned in Russia, along with several other mathematicians, for participation in a peaceful demonstration. Our thoughts are with them. We urge their rapid release, and failing that, that they be kept in humane conditions. A petition in their support has been started at

Roman Bezrukavnikov (MIT)
Alexei Borodin (MIT)
Daniel Bump (Stanford)
Sourav Chatterjee (Stanford)
Otis Chodosh (Stanford)
Ralph Cohen (Stanford)
Henry Cohn (MIT)
Brian Conrad (Stanford)
Joern Dunkel (MIT)
Pavel Etingof (MIT)
Jacob Fox (Stanford)
Michel Goemans (MIT)
Eleny Ionel (Stanford)
Steven Kerckhoff (Stanford)
Jonathan Luk (Stanford)
Eugenia Malinnikova (Stanford)
Davesh Maulik (MIT)
Rafe Mazzeo (Stanford)
Haynes Miller (MIT)
Ankur Moitra (MIT)
Elchanan Mossel (MIT)
Tomasz Mrowka (MIT)
Bjorn Poonen (MIT)
Alex Postnikov (MIT)
Lenya Ryzhik (Stanford)
Paul Seidel (MIT)
Mike Sipser (MIT)
Kannan Soundararajan (Stanford)
Gigliola Staffilani (MIT)
Nike Sun (MIT)
Richard Taylor (Stanford)
Ravi Vakil (Stanford)
Andras Vasy (Stanford)
Jan Vondrak (Stanford)
Brian White (Stanford)
Zhiwei Yun (MIT)

In the last week or so there has been some discussion on the internet about a paper (initially authored by Hill and Tabachnikov) that was initially accepted for publication in the Mathematical Intelligencer, but with the editor-in-chief of that journal later deciding against publication; the paper, in significantly revised form (and now authored solely by Hill), was then quickly accepted by one of the editors in the New York Journal of Mathematics, but then was removed from publication after objections from several members on the editorial board of NYJM that the paper had not been properly refereed or was not within the scope of the journal; see this statement by Benson Farb, who at the time was on that board, for more details. Some further discussion of this incident may be found on Tim Gowers' blog; the most recent version of the paper, as well as a number of prior revisions, are still available on the arXiv here.

For whatever reason, some of the discussion online has focused on the role of Amie Wilkinson, a mathematician from the University of Chicago (and who, incidentally, was a recent speaker here at UCLA in our Distinguished Lecture Series), who wrote an email to the editor-in-chief of the Intelligencer raising some concerns about the content of the paper and suggesting that it be published alongside commentary from other experts in the field. (This, by the way, is not uncommon practice when dealing with a potentially provocative publication in one field by authors coming from a different field; for instance, when Emmanuel Candès and I published a paper in the Annals of Statistics introducing what we called the "Dantzig selector", the Annals solicited a number of articles discussing the selector from prominent statisticians, and then invited us to submit a rejoinder.) It seems that the editors of the Intelligencer decided instead to reject the paper.
The paper then had a complicated interaction with NYJM, but, as stated by Wilkinson in her recent statement on this matter as well as by Farb, this was done without any involvement from Wilkinson. (It is true that Farb happens to also be Wilkinson's husband, but I see no reason to doubt their statements on this matter.)

I have not interacted much with the Intelligencer, but I have published a few papers with NYJM over the years; it is an early example of a quality "diamond open access" mathematics journal. It seems that this incident may have uncovered some issues with their editorial procedure for reviewing and accepting papers, but I am hopeful that they can be addressed to avoid this sort of event occurring again.

The self-chosen remit of my blog is "Updates on my research and expository papers, discussion of open problems, and other maths-related topics". Of the 774 posts on this blog, I estimate that about 99% of the posts indeed relate to mathematics, mathematicians, or the administration of this mathematical blog, and only about 1% are not related to mathematics or the community of mathematicians in any significant fashion.

This is not one of the 1%.

Mathematical research is clearly an international activity. But actually a stronger claim is true: mathematical research is a transnational activity, in that the specific nationality of individual members of a research team or research community are (or should be) of no appreciable significance for the purpose of advancing mathematics. For instance, even during the height of the Cold War, there was no movement in (say) the United States to boycott Soviet mathematicians or theorems, or to only use results from Western literature (though the latter did sometimes happen by default, due to the limited avenues of information exchange between East and West, and the former did occasionally occur for political reasons, most notably with the Soviet Union preventing Gregory Margulis from traveling to receive his Fields Medal in 1978 EDIT: and also Sergei Novikov in 1970). The national origin of even the most fundamental components of mathematics, whether it be the geometry (γεωμετρία) of the ancient Greeks, the algebra (الجبر) of the Islamic world, or the Hindu-Arabic numerals 0,1,\dots,9, are primarily of historical interest, and have only a negligible impact on the worldwide adoption of these mathematical tools. While it is true that individual mathematicians or research teams sometimes compete with each other to be the first to solve some desired problem, and that a citizen could take pride in the mathematical achievements of researchers from their country, one did not see any significant state-sponsored "space races" in which it was deemed in the national interest that a particular result ought to be proven by "our" mathematicians and not "theirs". Mathematical research ability is highly non-fungible, and the value added by foreign students and faculty to a mathematics department cannot be completely replaced by an equivalent amount of domestic students and faculty, no matter how large and well educated the country (though a state can certainly work at the margins to encourage and support more domestic mathematicians). It is no coincidence that all of the top mathematics departments worldwide actively recruit the best mathematicians regardless of national origin, and often retain immigration counsel to assist with situations in which these mathematicians come from a country that is currently politically disfavoured by their own.
Of course, mathematicians cannot ignore the political realities of the modern international order altogether.  Anyone who has organised an international conference or program knows that there will inevitably be visa issues to resolve because the host country makes it particularly difficult for certain nationals to attend the event.  I myself, like many other academics working long-term in the United States, have certainly experienced my own share of immigration bureaucracy, starting with various glitches in the renewal or application of my J-1 and O-1 visas, then to the lengthy vetting process for acquiring permanent residency (or “green card”) status, and finally to becoming naturalised as a US citizen (retaining dual citizenship with Australia).  Nevertheless, while the process could be slow and frustrating, there was at least an order to it.  The rules of the game were complicated, but were known in advance, and did not abruptly change in the middle of playing it (save in truly exceptional situations, such as the days after the September 11 terrorist attacks).  One just had to study the relevant visa regulations (or hire an immigration lawyer to do so), fill out the paperwork and submit to the relevant background checks, and remain in good standing until the application was approved in order to study, work, or participate in a mathematical activity held in another country.  On rare occasion, some senior university administrator may have had to contact a high-ranking government official to approve some particularly complicated application, but for the most part one could work through normal channels in order to ensure for instance that the majority of participants of a conference could actually be physically present at that conference, or that an excellent mathematician hired by unanimous consent by a mathematics department could in fact legally work in that department. With the recent and highly publicised executive order on immigration, many of these fundamental assumptions have been seriously damaged, if not destroyed altogether.  Even if the order was withdrawn immediately, there is no longer an assurance, even for nationals not initially impacted by that order, that some similar abrupt and major change in the rules for entry to the United States could not occur, for instance for a visitor who has already gone through the lengthy visa application process and background checks, secured the appropriate visa, and is already in flight to the country.  This is already affecting upcoming or ongoing mathematical conferences or programs in the US, with many international speakers (including those from countries not directly affected by the order) now cancelling their visit, either in protest or in concern about their ability to freely enter and leave the country.  Even some conferences outside the US are affected, as some mathematicians currently in the US with a valid visa or even permanent residency are uncertain if they could ever return back to their place of work if they left the country to attend a meeting.  In the slightly longer term, it is likely that the ability of elite US institutions to attract the best students and faculty will be seriously impacted.  Again, the losses would be strongest regarding candidates that were nationals of the countries affected by the current executive order, but I fear that many other mathematicians from other countries would now be much more concerned about entering and living in the US than they would have previously. 
It is still possible for this sort of long-term damage to the mathematical community (both within the US and abroad) to be reversed or at least contained, but at present there is a real risk of the damage becoming permanent. To prevent this, it seems insufficient to me for the current order to be rescinded, as desirable as that would be; some further legislative or judicial action would be needed to begin restoring enough trust in the stability of the US immigration and visa system that the international travel that is so necessary to modern mathematical research becomes "just" a bureaucratic headache again.

Of course, the impact of this executive order is far, far broader than just its effect on mathematicians and mathematical research. But there are countless other venues on the internet and elsewhere to discuss these other aspects (or politics in general). (For instance, discussion of the qualifications, or lack thereof, of the current US president can be carried out at this previous post.) I would therefore like to open this post to readers to discuss the effects or potential effects of this order on the mathematical community; I particularly encourage mathematicians who have been personally affected by this order to share their experiences. As per the rules of the blog, I request that "the discussions are kept constructive, polite, and at least tangentially relevant to the topic at hand".

Some relevant links (please feel free to suggest more, either through comments or by email):

In logic, there is a subtle but important distinction between the concept of mutual knowledge – information that everyone (or almost everyone) knows – and common knowledge, which is not only knowledge that (almost) everyone knows, but something that (almost) everyone knows that everyone else knows (and that everyone knows that everyone else knows that everyone else knows, and so forth). A classic example arises from Hans Christian Andersen's fable of the Emperor's New Clothes: the fact that the emperor in fact has no clothes is mutual knowledge, but not common knowledge, because everyone (save, eventually, for a small child) is refusing to acknowledge the emperor's nakedness, thus perpetuating the charade that the emperor is actually wearing some incredibly expensive and special clothing that is only visible to a select few. My own personal favourite example of the distinction comes from the blue-eyed islander puzzle, discussed previously here, here and here on the blog. (By the way, I would ask that any commentary about that puzzle be directed to those blog posts, rather than to the current one.)

I believe that there is now a real-life instance of this situation in the US presidential election, regarding the following

Proposition 1. The presumptive nominee of the Republican Party, Donald Trump, is not even remotely qualified to carry out the duties of the presidency of the United States of America.

Proposition 1 is a statement which I think is approaching the level of mutual knowledge amongst the US population (and probably a large proportion of people following US politics overseas): even many of Trump's nominal supporters secretly suspect that this proposition is true, even if they are hesitant to say it out loud.
And there have been many prominent people, from both major parties, that have made the case for Proposition 1: for instance Mitt Romney, the Republican presidential nominee in 2012, did so back in March, and just a few days ago Hillary Clinton, the likely Democratic presidential nominee this year, did so in this speech: I highly recommend watching the entirety of the (35 mins or so) speech, followed by the entirety of Trump’s rebuttal. However, even if Proposition 1 is approaching the status of “mutual knowledge”, it does not yet seem to be close to the status of “common knowledge”: one may secretly believe that Trump cannot be considered as a serious candidate for the US presidency, but must continue to entertain this possibility, because they feel that others around them, or in politics or the media, appear to be doing so.  To reconcile these views can require taking on some implausible hypotheses that are not otherwise supported by any evidence, such as the hypothesis that Trump’s displays of policy ignorance, pettiness, and other clearly unpresidential behaviour are merely “for show”, and that behind this facade there is actually a competent and qualified presidential candidate; much like the emperor’s new clothes, this alleged competence is supposedly only visible to a select few.  And so the charade continues. I feel that it is time for the charade to end: Trump is unfit to be president, and everybody knows it.  But more people need to say so, openly. Important note: I anticipate there will be any number of “tu quoque” responses, asserting for instance that Hillary Clinton is also unfit to be the US president.  I personally do not believe that to be the case (and certainly not to the extent that Trump exhibits), but in any event such an assertion has no logical bearing on the qualification of Trump for the presidency.  As such, any comments that are purely of this “tu quoque” nature, and which do not directly address the validity or epistemological status of Proposition 1, will be deleted as off-topic.  However, there is a legitimate case to be made that there is a fundamental weakness in the current mechanics of the US presidential election, particularly with the “first-past-the-post” voting system, in that (once the presidential primaries are concluded) a voter in the presidential election is effectively limited to choosing between just two viable choices, one from each of the two major parties, or else refusing to vote or making a largely symbolic protest vote. This weakness is particularly evident when at least one of these two major choices is demonstrably unfit for office, as per Proposition 1.  I think there is a serious case for debating the possibility of major electoral reform in the US (I am particularly partial to the Instant Runoff Voting system, used for instance in my home country of Australia, which allows for meaningful votes to third parties), and I would consider such a debate to be on-topic for this post.  But this is very much a longer term issue, as there is absolutely no chance that any such reform would be implemented by the time of the US elections in November (particularly given that any significant reform would almost certainly require, at minimum, a constitutional amendment). 
A few days ago, inspired by this recent post of Tim Gowers, a web page entitled “the cost of knowledge” has been set up as a location for mathematicians and other academics to declare a protest against the academic publishing practices of Reed Elsevier, in particular with regard to their exceptionally high journal prices, their policy of “bundling” journals together so that libraries are forced to purchase subscriptions to large numbers of low-quality journals in order to gain access to a handful of high-quality journals, and their opposition to the open access movement (as manifested, for instance, in their lobbying in support of legislation such as the Stop Online Piracy Act (SOPA) and the Research Works Act (RWA)).   [These practices have been documented in a number of places; this wiki page, which was set up in response to Tim’s post, collects several relevant links for this purpose.  Some of the other commercial publishers have  exhibited similar behaviour, though usually not to the extent that Elsevier has, which is why this particular publisher is the focus of this protest.]  At the protest site, one can publicly declare a refusal to either publish at an Elsevier journal, referee for an Elsevier journal, or join the board of an Elsevier journal. (In the past, the editorial boards of several Elsevier journals have resigned over the pricing policies of the journal, most famously the board of Topology in 2006, but also the Journal of Algorithms in 2003, and a number of journals in other sciences as well.  Several libraries, such as those of Harvard and Cornell, have also managed to negotiate an unbundling of Elsevier journals, but most libraries are still unable to subscribe to such journals individually.) For a more thorough discussion as to why such a protest is warranted, please see Tim’s post on the matter (and the 100+ comments to that post).   Many of the issues regarding Elsevier were already known to some extent to many mathematicians (particularly those who have served on departmental library committees), several of whom had already privately made the decision to boycott Elsevier; but nevertheless it is important to bring these issues out into the open, to make them commonly known as opposed to merely mutually known.  (Amusingly, this distinction is also of crucial importance in my favorite logic puzzle, but that’s another story.)   One can also see Elsevier’s side of the story in this response to Tim’s post by David Clark (the Senior Vice President for Physical Sciences at Elsevier). For my own part, though I have sent about 9% of my papers in the past to Elsevier journals (with one or two still in press), I have now elected not to submit any further papers to these journals, nor to serve on their editorial boards, though I will continue refereeing some papers from these journals.  As of this time of writing, over five hundred mathematicians and other academics have also signed on to the protest in the four days that the site has been active. Admittedly, I am fortunate enough to be at a stage of career in which I am not pressured to publish in a very specific set of journals, and as such, I am not making a recommendation as to what anyone else should do or not do regarding this protest.  However, I do feel that it is worth spreading awareness, at least, of the fact that such protests exist (and some additional petitions on related issues can be found at the previously mentioned wiki page). 
[Question for MathOverflow experts: would this type of question be suitable for crossposting there? The requirement that such questions be "research-level" seems to suggest not.]

This is a technical post inspired by separate conversations with Jim Colliander and with Soonsik Kwon on the relationship between two techniques used to control non-radiating solutions to dispersive nonlinear equations, namely the "double Duhamel trick" and the "in/out decomposition". See for instance these lecture notes of Killip and Visan for a survey of these two techniques and other related methods in the subject. (I should caution that this post is likely to be unintelligible to anyone not already working in this area.)

For sake of discussion we shall focus on solutions to a nonlinear Schrödinger equation

\displaystyle iu_t + \Delta u = F(u)

and we will not concern ourselves with the specific regularity of the solution {u}, or the specific properties of the nonlinearity {F} here. We will also not address the issue of how to justify the formal computations being performed here.

Solutions to this equation enjoy the forward Duhamel formula

\displaystyle u(t) = e^{i(t-t_0)\Delta} u(t_0) - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt'

for times {t} to the future of {t_0} in the lifespan of the solution, as well as the backward Duhamel formula

\displaystyle u(t) = e^{i(t-t_1)\Delta} u(t_1) + i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt'

for all times {t} to the past of {t_1} in the lifespan of the solution. The first formula asserts that the solution at a given time is determined by the initial state and by the immediate past, while the second formula is the time reversal of the first, asserting that the solution at a given time is determined by the final state and the immediate future. These basic causal formulae are the foundation of the local theory of these equations, and in particular play an instrumental role in establishing local well-posedness for these equations. In this local theory, the main philosophy is to treat the homogeneous (or linear) term {e^{i(t-t_0)\Delta} u(t_0)} or {e^{i(t-t_1)\Delta} u(t_1)} as the main term, and the inhomogeneous (or nonlinear, or forcing) integral term as an error term.

The situation is reversed when one turns to the global theory, and looks at the asymptotic behaviour of a solution as one approaches a limiting time {T} (which can be infinite if one has global existence, or finite if one has finite time blowup). After a suitable rescaling, the linear portion of the solution often disappears from view, leaving one with an asymptotic blowup profile solution which is non-radiating in the sense that the linear components of the Duhamel formulae vanish, thus

\displaystyle u(t) = - i \int_{t_0}^t e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (1)

\displaystyle u(t) = i \int_t^{t_1} e^{i(t-t')\Delta} F(u(t'))\ dt' \ \ \ \ \ (2)

where {t_0, t_1} are the endpoint times of existence. (This type of situation comes up for instance in the Kenig-Merle approach to critical regularity problems, by reducing to a minimal blowup solution which is almost periodic modulo symmetries, and hence non-radiating.) These types of non-radiating solutions are propelled solely by their own nonlinear self-interactions from the immediate past or immediate future; they are generalisations of "nonlinear bound states" such as solitons.
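To see the forward Duhamel formula in action numerically, here is a toy sketch of my own (not from the post): restricting to a single Fourier mode turns Δ into multiplication by -ω, so the PDE collapses to the scalar ODE u' = -iωu - iF(u), whose solution we can compare against the Duhamel identity by direct quadrature.

```python
import numpy as np

# Single-mode toy model of iu_t + Δu = F(u): on one Fourier mode, Δ acts as
# multiplication by -ω, so the PDE reduces to the ODE u' = -iωu - iF(u).
# We take the cubic nonlinearity F(u) = |u|² u as a stand-in.
omega = 2.0
F = lambda u: abs(u)**2 * u

t0, t1, n = 0.0, 1.0, 20001
t = np.linspace(t0, t1, n)
dt = t[1] - t[0]

# integrate the ODE directly with RK4
u = np.empty(n, dtype=complex)
u[0] = 1.0 + 0.5j
rhs = lambda v: -1j * omega * v - 1j * F(v)
for k in range(n - 1):
    k1 = rhs(u[k]);                 k2 = rhs(u[k] + 0.5 * dt * k1)
    k3 = rhs(u[k] + 0.5 * dt * k2); k4 = rhs(u[k] + dt * k3)
    u[k + 1] = u[k] + dt * (k1 + 2*k2 + 2*k3 + k4) / 6

# forward Duhamel:  u(t1) = e^{-iω(t1-t0)} u(t0) - i ∫ e^{-iω(t1-t')} F(u(t')) dt'
linear = np.exp(-1j * omega * (t1 - t0)) * u[0]
integrand = np.exp(-1j * omega * (t1 - t)) * F(u)
w = np.full(n, dt); w[0] = w[-1] = dt / 2      # trapezoid quadrature weights
duhamel = linear - 1j * np.sum(w * integrand)
print(abs(duhamel - u[-1]))                    # ≈ 0 up to discretization error
```

The printed discrepancy sits at the level of the discretization error, as expected; the same bookkeeping, mode by mode, is what the formula expresses for the full PDE.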
A key task is then to somehow combine the forward representation (1) and the backward representation (2) to obtain new information on {u(t)} itself, that cannot be obtained from either representation alone; it seems that the immediate past and immediate future can collectively exert more control on the present than they each do separately.

This type of problem can be abstracted as follows. Let {\|u(t)\|_{Y_+}} be the infimal value of {\|F_+\|_N} over all forward representations of {u(t)} of the form

\displaystyle u(t) = \int_{t_0}^t e^{i(t-t')\Delta} F_+(t') \ dt' \ \ \ \ \ (3)

where {N} is some suitable spacetime norm (e.g. a Strichartz-type norm), and similarly let {\|u(t)\|_{Y_-}} be the infimal value of {\|F_-\|_N} over all backward representations of {u(t)} of the form

\displaystyle u(t) = \int_{t}^{t_1} e^{i(t-t')\Delta} F_-(t') \ dt'. \ \ \ \ \ (4)

Typically, one already has (or is willing to assume as a bootstrap hypothesis) control on {F(u)} in the norm {N}, which gives control of {u(t)} in the norms {Y_+, Y_-}. The task is then to use the control of both the {Y_+} and {Y_-} norm of {u(t)} to gain control of {u(t)} in a more conventional Hilbert space norm {X}, which is typically a Sobolev space such as {H^s} or {L^2}.

One can use some classical functional analysis to clarify this situation. By the closed graph theorem, the above task is (morally, at least) equivalent to establishing an a priori bound of the form

\displaystyle \| u \|_X \lesssim \|u\|_{Y_+} + \|u\|_{Y_-} \ \ \ \ \ (5)

for all reasonable {u} (e.g. test functions). The double Duhamel trick accomplishes this by establishing the stronger estimate

\displaystyle |\langle u, v \rangle_X| \lesssim \|u\|_{Y_+} \|v\|_{Y_-} \ \ \ \ \ (6)

for all reasonable {u, v}; note that setting {u=v} and applying the arithmetic-geometric inequality then gives (5). The point is that if {u} has a forward representation (3) and {v} has a backward representation (4), then the inner product {\langle u, v \rangle_X} can (formally, at least) be expanded as a double integral

\displaystyle \int_{t_0}^t \int_{t}^{t_1} \langle e^{i(t-t')\Delta} F_+(t'), e^{i(t-t'')\Delta} F_-(t'') \rangle_X\ dt'' dt'.

The dispersive nature of the linear Schrödinger equation often causes {\langle e^{i(t''-t')\Delta} F_+(t'), F_-(t'') \rangle_X} to decay, especially in high dimensions. In high enough dimension (typically one needs five or higher dimensions, unless one already has some spacetime control on the solution), the decay is stronger than {1/|t'-t''|^2}, so that the integrand becomes absolutely integrable and one recovers (6).

Unfortunately it appears that estimates of the form (6) fail in low dimensions (for the type of norms {N} that actually show up in applications); there is just too much interaction between past and future to hope for any reasonable control of this inner product. But one can try to obtain (5) by other means. By the Hahn-Banach theorem (and ignoring various issues related to reflexivity), (5) is equivalent to the assertion that every {u \in X} can be decomposed as {u = u_+ + u_-}, where {\|u_+\|_{Y_+^*} \lesssim \|u\|_X} and {\|u_-\|_{Y_-^*} \lesssim \|u\|_X}. Indeed once one has such a decomposition, one obtains (5) by computing the inner product of {u} with {u=u_++u_-} in {X} in two different ways.
One can also (morally at least) write {\|u_+\|_{Y_+^*}} as {\| e^{i(\cdot-t)\Delta} u_+\|_{N^*([t_0,t])}} and similarly write {\|u_-\|_{Y_-^*}} as {\| e^{i(\cdot-t)\Delta} u_-\|_{N^*([t,t_1])}}. So one can dualise the task of proving (5) as that of obtaining a decomposition of an arbitrary initial state {u} into two components {u_+} and {u_-}, where the former disperses into the past and the latter disperses into the future under the linear evolution. We do not know how to achieve this type of task efficiently in general – and doing so would likely lead to a significant advance in the subject (perhaps one of the main areas in this topic where serious harmonic analysis is likely to play a major role). But in the model case of spherically symmetric data {u}, one can perform such a decomposition quite easily: one uses microlocal projections to set {u_+} to be the "inward" pointing component of {u}, which propagates towards the origin in the future and away from the origin in the past, and {u_-} to similarly be the "outward" component of {u}. As spherical symmetry significantly dilutes the amplitude of the solution (and hence the strength of the nonlinearity) away from the origin, this decomposition tends to work quite well for applications, and is one of the main reasons (though not the only one) why we have a global theory for low-dimensional nonlinear Schrödinger equations in the radial case, but not in general.

The in/out decomposition is a linear one, but the Hahn-Banach argument gives no reason why the decomposition needs to be linear. (Note that other well-known decompositions in analysis, such as the Fefferman-Stein decomposition of BMO, are necessarily nonlinear, a fact which is ultimately equivalent to the non-complemented nature of a certain subspace of a Banach space; see these lecture notes of mine and this old blog post for some discussion.) So one could imagine a sophisticated nonlinear decomposition as a general substitute for the in/out decomposition. See for instance this paper of Bourgain and Brezis for some of the subtleties of decomposition even in very classical function spaces such as {H^{1/2}(R)}. Alternatively, there may well be a third way to obtain estimates of the form (5) that do not require either decomposition or the double Duhamel trick; such a method may well clarify the relative relationship between past, present, and future for critical nonlinear dispersive equations, which seems to be a key aspect of the theory that is still only partially understood. (In particular, it seems that one needs a fairly strong decoupling of the present from both the past and the future to get the sort of elliptic-like regularity results that allow us to make further progress with such equations.)

One of the most basic theorems in linear algebra is that every finite-dimensional vector space has a finite basis. Let us give a statement of this theorem in the case when the underlying field is the rationals:

Theorem 1 (Finite generation implies finite basis, infinitary version) Let {V} be a vector space over the rationals {{\mathbb Q}}, and let {v_1,\ldots,v_n} be a finite collection of vectors in {V}. Then there exists a collection {w_1,\ldots,w_k} of vectors in {V}, with {1 \leq k \leq n}, such that

• ({w} generates {v}) Every {v_j} can be expressed as a rational linear combination of the {w_1,\ldots,w_k}.
• ({w} independent) There is no non-trivial linear relation {a_1 w_1 + \ldots + a_k w_k = 0}, {a_1,\ldots,a_k \in {\mathbb Q}} among the {w_1,\ldots,w_k} (where non-trivial means that the {a_i} are not all zero).

In fact, one can take {w_1,\ldots,w_k} to be a subset of the {v_1,\ldots,v_n}.

Proof: We perform the following "rank reduction argument". Start with {w_1,\ldots,w_k} initialised to {v_1,\ldots,v_n} (so initially we have {k=n}). Clearly {w} generates {v}. If the {w_i} are linearly independent then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the {w_i}, say {w_k}, is a rational linear combination of the {w_1,\ldots,w_{k-1}}. In such a case, {w_k} becomes redundant, and we may delete it (reducing the rank {k} by one). We repeat this procedure; it can only run for at most {n} steps and so terminates with {w_1,\ldots,w_k} obeying both of the desired properties. \Box

In additive combinatorics, one often wants to use results like this in finitary settings, such as that of a cyclic group {{\mathbb Z}/p{\mathbb Z}} where {p} is a large prime. Now, technically speaking, {{\mathbb Z}/p{\mathbb Z}} is not a vector space over {{\mathbb Q}}, because one can only multiply an element of {{\mathbb Z}/p{\mathbb Z}} by a rational number if the denominator of that rational is not divisible by {p}. But for {p} very large, {{\mathbb Z}/p{\mathbb Z}} "behaves" like a vector space over {{\mathbb Q}}, at least if one restricts attention to the rationals of "bounded height" – where the numerator and denominator of the rationals are bounded. Thus we shall refer to elements of {{\mathbb Z}/p{\mathbb Z}} as "vectors" over {{\mathbb Q}}, even though strictly speaking this is not quite the case.

On the other hand, saying that one element of {{\mathbb Z}/p{\mathbb Z}} is a rational linear combination of another set of elements is not a very interesting statement: any non-zero element of {{\mathbb Z}/p{\mathbb Z}} already generates the entire space! However, if one again restricts attention to rational linear combinations of bounded height, then things become interesting again. For instance, the vector {1} can generate elements such as {37} or {\frac{p-1}{2}} using rational linear combinations of bounded height, but will not be able to generate such elements of {{\mathbb Z}/p{\mathbb Z}} as {\lfloor\sqrt{p}\rfloor} without using rational numbers of unbounded height.

For similar reasons, the notion of linear independence over the rationals doesn't initially look very interesting over {{\mathbb Z}/p{\mathbb Z}}: any two non-zero elements of {{\mathbb Z}/p{\mathbb Z}} are of course rationally dependent. But again, if one restricts attention to rational numbers of bounded height, then independence begins to emerge: for instance, {1} and {\lfloor\sqrt{p}\rfloor} are independent in this sense.

Thus, it becomes natural to ask whether there is a "quantitative" analogue of Theorem 1, with non-trivial content in the case of "vector spaces over the bounded height rationals" such as {{\mathbb Z}/p{\mathbb Z}}, which asserts that given any bounded collection {v_1,\ldots,v_n} of elements, one can find another set {w_1,\ldots,w_k} which is linearly independent "over the rationals up to some height", such that the {v_1,\ldots,v_n} can be generated by the {w_1,\ldots,w_k} "over the rationals up to some height". Of course to make this rigorous, one needs to quantify the two heights here, the one giving the independence, and the one giving the generation.
In order to be useful for applications, it turns out that one often needs the former height to be much larger than the latter; exponentially larger, for instance, is not an uncommon request. Fortunately, one can accomplish this, at the cost of making the height somewhat large:

Theorem 2 (Finite generation implies finite basis, finitary version) Let {n \geq 1} be an integer, and let {F: {\mathbb N} \rightarrow {\mathbb N}} be a function. Let {V} be an abelian group which admits a well-defined division operation by any natural number of size at most {C(F,n)} for some constant {C(F,n)} depending only on {F,n}; for instance one can take {V = {\mathbb Z}/p{\mathbb Z}} for {p} a prime larger than {C(F,n)}. Let {v_1,\ldots,v_n} be a finite collection of "vectors" in {V}. Then there exists a collection {w_1,\ldots,w_k} of vectors in {V}, with {1 \leq k \leq n}, as well as an integer {M \geq 1}, such that

• (Complexity bound) {M \leq C(F,n)} for some {C(F,n)} depending only on {F, n}.

• ({w} generates {v}) Every {v_j} can be expressed as a rational linear combination of the {w_1,\ldots,w_k} of height at most {M} (i.e. the numerator and denominator of the coefficients are at most {M}).

• ({w} independent) There is no non-trivial linear relation {a_1 w_1 + \ldots + a_k w_k = 0} among the {w_1,\ldots,w_k} in which the {a_1,\ldots,a_k} are rational numbers of height at most {F(M)}.

In fact, one can take {w_1,\ldots,w_k} to be a subset of the {v_1,\ldots,v_n}.

Proof: We perform the same "rank reduction argument" as before, but translated to the finitary setting. Start with {w_1,\ldots,w_k} initialised to {v_1,\ldots,v_n} (so initially we have {k=n}), and initialise {M=1}. Clearly {w} generates {v} at this height. If the {w_i} are linearly independent up to rationals of height {F(M)} then we are done. Otherwise, there is a non-trivial linear relation between them; after shuffling things around, we see that one of the {w_i}, say {w_k}, is a rational linear combination of the {w_1,\ldots,w_{k-1}}, whose height is bounded by some function depending on {F(M)} and {k}. In such a case, {w_k} becomes redundant, and we may delete it (reducing the rank {k} by one), but note that in order for the remaining {w_1,\ldots,w_{k-1}} to generate {v_1,\ldots,v_n} we need to raise the height upper bound for the rationals involved from {M} to some quantity {M'} depending on {M, F(M), k}. We then replace {M} by {M'} and continue the process. We repeat this procedure; it can only run for at most {n} steps and so terminates with {w_1,\ldots,w_k} and {M} obeying all of the desired properties. (Note that the bound on {M} is quite poor, being essentially an {n}-fold iteration of {F}! Thus, for instance, if {F} is exponential, then the bound on {M} is tower-exponential in nature.) \Box

(A variant of this type of approximate basis lemma was used in my paper with Van Vu on the singularity probability of random Bernoulli matrices.)

Looking at the statements and proofs of these two theorems it is clear that the two results are in some sense the "same" result, except that the latter has been made sufficiently quantitative that it is meaningful in such finitary settings as {{\mathbb Z}/p{\mathbb Z}}. In this note I will show how this equivalence can be made formal using the language of non-standard analysis.
This is not a particularly deep (or new) observation, but it is perhaps the simplest example I know of that illustrates how nonstandard analysis can be used to transfer a quantifier-heavy finitary statement, such as Theorem 2, into a quantifier-light infinitary statement, such as Theorem 1, thus lessening the need to perform "epsilon management" duties, such as keeping track of unspecified growth functions such as {F}. This type of transference is discussed at length in this previous blog post of mine. In this particular case, the amount of effort needed to set up the nonstandard machinery in order to reduce Theorem 2 from Theorem 1 is too great for this transference to be particularly worthwhile, especially given that Theorem 2 has such a short proof. However, when performing a particularly intricate argument in additive combinatorics, in which one is performing a number of "rank reduction arguments", "energy increment arguments", "regularity lemmas", "structure theorems", and so forth, the purely finitary approach can become bogged down with all the epsilon management one needs to do to organise all the parameters that are flying around. The nonstandard approach can efficiently hide a large number of these parameters from view, and it can then become worthwhile to invest in the nonstandard framework in order to clean up the rest of a lengthy argument. Furthermore, an advantage of moving up to the infinitary setting is that one can then deploy all the firepower of an existing well-developed infinitary theory of mathematics (in this particular case, this would be the theory of linear algebra) out of the box, whereas in the finitary setting one would have to painstakingly finitise each aspect of such a theory that one wished to use (imagine for instance trying to finitise the rank-nullity theorem for rationals of bounded height).

The nonstandard approach is very closely related to the use of compactness arguments, or of the technique of taking ultralimits and ultraproducts; indeed we will use an ultrafilter in order to create the nonstandard model in the first place.

I will also discuss two variants of both Theorem 1 and Theorem 2 which have actually shown up in my research. The first is that of the regularity lemma for polynomials over finite fields, which came up when studying the equidistribution of such polynomials (in this paper with Ben Green). The second comes up when one is dealing not with a single finite collection {v_1,\ldots,v_n} of vectors, but rather with a family {(v_{h,1},\ldots,v_{h,n})_{h \in H}} of such vectors, where {H} ranges over a large set; this gives rise to what we call the sunflower lemma, and came up in this recent paper of myself, Ben Green, and Tamar Ziegler.

This post is mostly concerned with nonstandard translations of the "rank reduction argument". Nonstandard translations of the "energy increment argument" and "density increment argument" were briefly discussed in this recent post; I may return to this topic in more detail in a future post.

Read the rest of this entry »
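As a concrete footnote to the two proofs above, here is a minimal computational sketch (my own, not from the post) of the rank reduction argument over the exact rationals, using Python's fractions module; the height bookkeeping of Theorem 2 is omitted.

```python
from fractions import Fraction

def echelon(rows):
    """Bring a list of rational row vectors to (reduced) row echelon form;
    return the nonzero rows together with their pivot columns."""
    rows = [list(r) for r in rows]
    ncols = len(rows[0]) if rows else 0
    pivots, r = [], 0
    for c in range(ncols):
        pivot = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return rows[:r], pivots

def in_span(v, vs):
    """Decide whether v is a rational linear combination of the vectors vs."""
    if not vs:
        return all(x == 0 for x in v)
    rows, pivots = echelon(vs)
    v = list(v)
    for row, c in zip(rows, pivots):
        if v[c] != 0:
            f = v[c] / row[c]
            v = [a - f * b for a, b in zip(v, row)]
    return all(x == 0 for x in v)

def rank_reduce(vectors):
    """The rank reduction argument: repeatedly delete any vector that is a
    rational combination of the others; the survivors are independent over Q
    and still generate every input vector."""
    ws = [[Fraction(x) for x in vec] for vec in vectors]
    i = 0
    while i < len(ws):
        if in_span(ws[i], ws[:i] + ws[i + 1:]):
            del ws[i]   # w_i is redundant: delete it (the rank k drops by one)
            i = 0
        else:
            i += 1
    return ws

print(rank_reduce([(1, 0), (0, 1), (1, 1), (2, 3)]))   # two survivors remain
```

Tracking, at each deletion, the height of the rational coefficients that witness the dependency (and enlarging the generation height accordingly) is exactly the extra bookkeeping that produces the n-fold iteration of F in the bound of Theorem 2.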
Feynman's perspective on the double slit

Jul 4, 2008 #1

I am reading The Elegant Universe by Brian Greene. It is really wonderful; however, I am confused by a section about Feynman's view on the double slit experiment. Greene writes: "Feynman proclaimed that each electron that makes it through to the phosphorescent screen actually goes through both slits. It sounds crazy, but hang on: things get even more wild. Feynman argued that in traveling from the source to a given point on the phosphorescent screen each individual electron actually traverses every possible trajectory simultaneously... It goes in a nice orderly way through the right slit. It heads toward the left slit, but suddenly changes course and heads through the right. It meanders back and forth, finally passing through the left slit. It goes on a long journey to the Andromeda galaxy before turning back and passing through the left slit on its way to the screen. And on and on it goes; the electron, according to Feynman, simultaneously 'sniffs out' every possible path connecting its starting location with its final destination."

The problem I have with this is not so much the electron being in two places at one time or any of that. What I don't get is how it could go so far away and back, and not have the measurement take forever. I mean, if it can only go the speed of light, then how does measuring the interference pattern not take forever? If it takes every possible path in the universe, assuming it's finite, wouldn't taking this measurement take over millions of years? Or is this what they talk about when they say quantum mechanics and relativity are not compatible?

Thanks, Zach

Jul 5, 2008 #2

I never got this myself but I'll have a go at explaining. If we know exactly how fast the electron is going we have no idea where it is, giving it no defined boundaries, in accordance with the uncertainty principle.

Jul 5, 2008 #3

See, I always thought the uncertainty was within some reason, as in "well, the electron is somewhere within so many unit lengths" (Planck lengths, I would suppose, if that even makes sense; I really don't know). But the point is I always thought it was like "it's somewhere within the room", you know? Not within the entire cosmos. But that's beside the point. Wouldn't it take forever for it to make that path from the source to the Andromeda galaxy and back, and somehow create an interference pattern with itself that went straight from the source to the screen? I mean, they must be going at different speeds, the two electrons that are the same one. I really don't get this at all. Was Feynman just wrong? Is there some new theory that makes more sense than his?

Jul 5, 2008 #4

Phase velocity can exceed the speed of light, so it should not be a problem for waves to go so far away and back. The waves in quantum mechanics are not physically real; I mean, we cannot measure the waves in any physical sense.

Jul 5, 2008 #5

To essentially echo kahoomann, your mistake is in thinking of the electron as a specific "object" that goes back and forth. At the size level of the electron, such things simply don't exist.
I can't speak for exactly how Feynman himself intended that to be interpreted, but it would be perfectly valid to think of an electron with a certain "probability", or amplitude of its wave function, to go "through A" or "through B" or "through one and then back to the other" as meaning that "part" of it does each of those. Again, just as I warned against thinking of an electron as being one distinct object, you should not think of that as meaning the electron "breaks" apart.

Jul 5, 2008 #6

Ok, let me see if I get this. So an electron is more like a non-existent wave that is everywhere at the same time, in kind of an implicate order that we can't see, and it usually manifests itself in the most probable spot. But the electron really doesn't have a path of travel, does it? That is just an attempt at explaining properties of the electron that are very strange and hard to explain. How could we possibly interact with such a strange thing? I mean, how could we possibly get hold of an electron and shoot it at a double slit if it is so non-physical and bizarre? And one more question: are there any guesses at what decides the probabilities for the spot where an electron manifests? Thank you for explaining and for being patient with my inability to understand the insanity of the quantum world.

Jul 6, 2008 #7

Maybe you can read "QED - the strange theory of somethingIdon'tremember" by Feynman.

Jul 6, 2008 #8

The electron really does exist whenever you measure it or look at it, for example in the double-slit experiment. But between measurements, it is a wave packet. The measurement will cause the wavefunction to collapse. Remember Einstein's reaction: "I cannot believe that the Moon exists only because a mouse looks at it."

Jul 6, 2008 #9

The book is "QED: The Strange Theory of Light and Matter". :smile: So far this all sounds identical to the standard interpretation. Looking at Wikipedia (path integral formulation), it seems to me that what Feynman did was re-write the rules for drawing up the mathematical description of a superposition of states in such a way that they became more useful. Fair summary?

Jul 6, 2008 #10

This is more a question of the interpretation of matter waves as described in Feynman's path integral formulation of quantum mechanics, as described in Wikipedia. Solving the Schrödinger equation with open boundaries, you don't have to think about it. But you could actually plot some kind of "classical" particle trajectories using the definition of the quantum current density, and then you see that these streamlines can be chaotic for certain energies (circulating back and forth between the slit openings) when a plane wave is injected towards a double slit.

Jul 8, 2008 #11

Isn't the probability of the electron being somewhere something to do with the Schrödinger equation? They all add up to give the electron's final position.

Jul 8, 2008 #12

I would just like to add that the lectures that the book is based on are available here. (Part 2 is the most interesting one.)
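For what it's worth, the superposition-of-amplitudes picture discussed in this thread is easy to play with numerically. The following toy sketch is my own (all geometry numbers are made up): it superposes the phases e^{ikr} of just the two straight-line paths through the slits, the crudest possible "sum over paths", and already produces the familiar fringe pattern.

```python
import numpy as np

# Two-path "sum over histories" for the double slit: superpose the complex
# amplitudes for the straight-line paths through each slit and look at the
# resulting intensity on the screen. All geometry values are arbitrary.
wavelength = 1e-3              # arbitrary units
k = 2 * np.pi / wavelength
d = 0.05                       # slit separation
L = 10.0                       # slit-to-screen distance

x = np.linspace(-2, 2, 2000)           # positions on the screen
r1 = np.sqrt(L**2 + (x - d/2)**2)      # path length via slit 1
r2 = np.sqrt(L**2 + (x + d/2)**2)      # path length via slit 2

# each path contributes a phase e^{ikr}; the intensity is the squared
# modulus of the *sum*, which is where the interference fringes come from
amplitude = np.exp(1j * k * r1) + np.exp(1j * k * r2)
intensity = np.abs(amplitude)**2

# crude fringe-spacing estimate from the intensity maxima
maxima = x[1:-1][(intensity[1:-1] > intensity[:-2]) & (intensity[1:-1] > intensity[2:])]
print(np.diff(maxima)[:5])     # ≈ wavelength * L / d = 0.2
```

Adding more intermediate paths (for example, discretizing paths through points on intermediate planes and summing their phases) refines this into Feynman's full path integral; the two-path version is just the leading approximation.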
A View from the Altar / Build an altar to the Lord your God on top of this rock… (Judges 6:26)

The Secular Man's Suicide Pact

I recently visited a place with a much larger Moslem population than my home town. That set me to thinking about things. Leftist progress on diversity looks to be moving right along. I'm guessing they have a schedule for undoing the dispersion of Babel. We'll see how that works out. Trembling as I am to utter the following blasphemy against one of the Secular Man's highest articles of faith, it appears to me that segregation is pretty much a natural thing for most people. If we don't segregate by race, we do it by sex, religion, income, the type of work you do, your IQ, or even your status as a manager or worker bee. I've noted that the Ivy League humanities majors don't pal around with the NASCAR set a whole lot, not that either side of this travesty of segregation is complaining about it, because each side is equally convinced that the other side is peopled with bigots and idiots. In fact, NASCAR people won't even hang around NHRA. Oh, well. So people prefer to be around folks who are like themselves. But somehow, acknowledging this obvious fact makes me a blasphemer in the eyes of the Secular Man. Like Winston said in 1984, "theyll shoot me i don't care theyll shoot me in the back of the neck i dont care". There are exceptions to the general trend toward segregation, sometimes benign, sometimes not. There are cases where we let students come over here and send ours over there. No harm there. If you want a kid from France in your home for the school year, more power to you, sez I. And there are Christian societies who send doctors, farmers, engineers and whatnot. They have an ulterior motive, of course, but it's an open secret. They'll treat lepers and show folks how to have safe drinking water in exchange for a chance to explain about Jesus and the cross. There is definitely no harm in that. And — speaking only of my own premillennial views — Christianity decidedly does not teach us to take over the world by force. If there is to be any force, Jesus will impose it in person when He arrives. Post-mill folks, I'll let you speak for yourselves on this. Other cases are not quite so benign. Caesar desegregated the Italians and the Gauls, but not to help the latter. Likewise the Assyrians in Israel, the Babylonians in Judah, and so on. In general, I'd say anybody who thinks it's the destiny of his group to take over the world by force is a threat. In modern times, the Communists and Fascists fit this category and were proud of it. There was a time when Americans understood this and reacted against it. The old adage about being better dead than red was a way of acknowledging the open threat posed by Communism while saying that we intended to push back with whatever force it took. And to get back where I started from, Moslems fit this category. Islam intends to take over the world, by persuasion where it can, by force where it must. In the vast majority of cases, your Moslem co-worker is no threat to you or anybody else.
He's just a guy trying to get by, raise his kids to be good Moslems, and keep his wife from feeling like a conspicuous fool wearing her burqa.

The problem arises when there are enough Moslems to form a society that runs along Islamic lines. Because Islam expects to own Earth and everyone on it. The Secular Man, wearing his feelings on his "coexist" bumper sticker, is just not prepared to deal with the reality of an Islam that will not rest until everyone bows to Mecca. The Secular Man's refusal to see a threat where there clearly is one looks a lot like a suicide pact.

Will the real oligarchy please stand up

The Washington Times published an article quoting an Ivy League study saying we live in an oligarchy rather than a republic. Elites and special interests buy influence and get their way, the study says, describing a government of plutocrats more than oligarchs. The Catholic News Service quotes the same study to the same effect, complaining against big corporations doing a lot of evil influence buying. The gist of it is that the rich get their way, harming the rest of us.

But the biggest source of influence over the way the government governs is the government itself. And here's how it works. Something like 148 million Americans are receiving some form of stipend from the government. Government is essentially buying their votes with money confiscated from the 90 million private sector workers who pay for it all. So maybe it's true that we have an oligarchy or plutocracy settling in upon us, but the "evil" corporations are by no means the dominant players in this field. The money laundering from the government dwarfs all other forms of paid influence, whether it's from the Koch brothers on the right or Warren Buffett and George Soros on the left.

It's a victory for Big Irony that many of the people complaining the loudest about the influence of Big Corporations are actually part of Big Government and seem completely blind to their role in the most pervasive and ruinous corruption scheme of all.

So why are you still a Christian?

Ukraine and Obama's complicated failure

Of course you've heard by now that Mr. Romney and Mrs. Palin both warned that Mr. Obama's policy toward Russia would lead to the ongoing crisis in Ukraine. America should have been doing everything possible to help Ukraine establish a sturdy, free economy, a justice system free of graft and corruption, and a credible military. If we were attempting any of these things, it never made any news I could see.

But now Obama's failures are starting to earn compound interest. Obama, who years ago helped bring about the decline of America's manned space program — not that he did this by himself; he had plenty of help — now has another problem on his hands in the matter of Ukraine. He can't afford to anger the Russians too much because we depend on them to get our astronauts and supplies to and from the space station.

We're in one of those moments when you realize that great nations have to remain strong in every area. America's space program has been the envy of the world for 60 years, that is, until the last shuttle flight. Now we have no way to get people and cargo to the space station. So Mr. Obama will be low key in his reactions to the Russian dealings in Ukraine. It would be too embarrassing to have the Russians tell America to kiss off next time we need one of our astronauts to bum a ride in a Soyuz spacecraft.

And I feel I need to add something here.
I am not ashamed of America, but I am ashamed of our self-imposed weakness and immorality after years of secular, Socialist-leaning misrule.

The looming problems confronting our astronauts are the kinds of weird, unique gotchas that crop up when foreign policy is dominated by the wishful, utopian thinking of liberal academics instead of a hard-headed determination to deal with facts as they are. Russia is a powerful nation that sees itself as our rival. Mr. Putin is a smart, tough ex-KGB agent and a fierce nationalist. He plays to win and won't hesitate to spill blood to achieve his goals. Only blind folly could have failed to see that and act accordingly. Putting our space program into a state of dependency on the Russian program is beyond naïve.

So remember that next time a liberal politician tells you America is disliked around the world, embarks on a worldwide apologize-for-America tour, and offers a former KGB agent one of those ludicrous red reset buttons.

Why I think there's a God

Louise Antony gave her reasons for thinking there's no God, and I dealt with those here. But what are the reasons for thinking there is one? In most Christian literature, these boil down to five.

Why There's Something Instead of Nothing

The physical universe tells us it had a beginning. The sun isn't merely shining; it is burning up. The world isn't merely turning; it is spinning down to a stop. Natural processes everywhere are in a state of decay and decline. The available energy in the universe is a consumable resource. There's an end point when the mainspring of the whole cosmos will stop ticking. Therefore, it had to have been wound up at some point in the past. So the physical universe isn't eternal. So something else must have been here before there was a universe. Whatever that was, it must have been eternal and must have had the capacity to bring the universe into being.

Why There's Order Instead of Chaos

When you drive through the South and see a 1000-acre tract of pine trees planted in rows, equally spaced along the row and all the same age, you don't have to ask if somebody did that. When I look at the far more complex arrangements of DNA, it's obvious that a mighty intelligence made this. DNA contains the coding needed to duplicate itself. But a process capable of creating a DNA molecule from scratch simply does not exist in nature. Nothing even remotely approaching this degree of sophistication has ever been observed, not in nature, nor even in man's most advanced laboratories.

So something eternal and powerful was there before the universe existed. And it had the capacity to bring the universe into being, wind up the spring, and then release the energy through myriads of the most intricately designed mechanisms. Such a being is intelligent beyond all the reckoning of man.

Why Things are Right and Wrong

People have a moral component to their nature. Ms. Antony shows this when she asks that we all work for peace. Nice thought, though I wish she'd explain why, on atheistic principles, peace is better than war. After all, isn't evolution driven by conflict and winnowing away the unfit so that only the strongest and smartest survive to breed again? Here's a case where evolutionists are better than their principles. They generally wish the world were better — and "better" is defined in moral terms.

Furthermore, there is, for lack of a better term, a genuine reality underlying morals. We aren't merely displeased when brutes kidnap little girls and sell them into sexual slavery.
No, this is really and truly evil, and wrong. And it's not just that we feel happy about a man who would redeem little slaves out of their bondage. No, such a deed is really and truly good and right.

The fact that morality cannot be derived from nature is not an argument from gaps in our knowledge. Rather, it's plain to see that there is no arrangement of particles and forces that can ever account for a moral right and wrong, because morality involves not just an assessment of facts, but an assertion of authority. Morality is the claim, coming from outside your own head, that you ought or ought not do something. And "ought" inherently arrives in the form of a command. Morality sees what is wrong and authoritatively forbids it. Morality sees what is right and authoritatively commands it.

The origin of morality, then, is very much like the origin of the physical universe. It's here; it's real, and it defies natural, material explanation. It demands a source that is outside of this world, transcendent, and that was capable of implanting it in the human heart when man was first formed.

So — just building the argument — something eternal brought the universe into being, something that was powerful enough to do it, intelligent enough to design it, and this Being possessed a moral code which it then hard wired into the hearts of men.

Why We Sense the Transcendent

It's an interesting question as to why, on naturalist/materialist principles, people should have ever evolved to be capable of wondering about what could be outside this physical dimension. Where's the survival value in such a massive and stressful distraction? Or to take the question a level deeper, how do matter and energy interact in such a way as to produce conscious beings who ponder things higher than matter and energy?

Ms. Antony herself experiences the draw of the transcendent but drops it too soon. The real question is what a sense of transcendence is leading you to. Being a Christian, it's obviously my opinion that God created this in us to lead us to Him. Paul told the Athenians that we "feel after Him" (Acts 17:27), clearly expecting that even pagan men would have been open minded enough to investigate an intuition shared by virtually all people.

We Christians find our sense of transcendence filled, satisfied, yet heightened and completed by knowing our God through His Son, Jesus. People from other religions testify of their version of the same sense of transcendence. It's not my purpose to address those experiences, only to say that whether we're making out shapes in a fog or seeing in the full light of day, something is there, and we all sense it to some degree. And although the argument is not dispositive, I can't frame a better explanation for a sense of transcendence than to propose that God has indeed set "eternity in our hearts" (Eccl 3:11) as a way to both prompt us to seek Him and as a way to experience Him once He is found.

The life of Jesus Christ

The chief way God chose to reveal Himself to man was through Jesus. The officers sent to arrest Him said, "Nobody ever spoke like this man." We exhaust all the superlatives when we consider Him. His teachings set the standard for goodness even among those who reject Him. He led such a life that those who sought His ruin could accuse Him only by lying.
Without money, without armies, without political connections, without allies, without any access to the levers of power, having died young, Jesus did more to change the world for good than all who ever came before or after.

And He rose from the dead. Yes, His followers reported many other miracles He did, turning water to wine, walking on water, feeding multitudes out of a sack lunch. But the miracle of His resurrection was the story they were all, to a man, willing to be tortured and die for the privilege of telling, not because they had anything to gain by it, but because they undeniably believed it to be true.

If there is a God such as I have described, and if God became a man, I would expect Him to be a man like Jesus.

So that's it. It's why I think God exists and has revealed Himself to us through His Son, Jesus.

Answers for an atheist

The New York Times published an interview with atheist Louise Antony who confidently affirms that there is no God. Read the linked article if you like, but her arguments against God boil down to just a handful of things.

First, Antony says, "I deny that there are beings or phenomena outside the scope of natural law." This, of course, is no argument at all. It's just assuming the conclusion. Presupposing materialism merely evades the debate about whether God exists. The Christian idea of God is that He is transcendent, meaning that He is "above" or "beyond" or "outside" the universe. Looking for God by material methods is like prospecting for diamonds with a metal detector. Wrong tool.

In her second argument, Antony says religious people can't all agree on what God is, what He is like, or whether there are more gods than one. This is all true, and all irrelevant. For the sake of argument, let's assume that all religious people are hopelessly muddled on the nature of God. Does this mean they're all equally deceived on the existence of God? Not at all. Even in a total fog, people can know something is out there without knowing any details about it.

Antony then says she cannot reconcile the existence of evil with the existence of God. Beg pardon, but what is this "evil" she speaks of? The existence of categories like "good" and "evil" assumes a Supreme Authority who establishes what's good and what's not. And consider again Antony's statement, "I deny that there are beings or phenomena outside the scope of natural law." Yet the very categories of good and evil are outside of natural law. You cannot derive morality from Newton's laws or the Schrödinger equation. That requires a transcendent source.

On the other hand, if good and evil are not real categories, if they're just cultural norms or her own private intuitions, then her objection vanishes. Her argument amounts to, "I'm displeased (or we are); therefore, there is no god," which is absurd.

But Ms. Antony is left to ponder the motions of her own heart. Why is she outraged by rape or brutality? Who cares, and why should anyone care, if orphans starve, tyrants strut, armed gangs pillage and plunder, girls are bought and sold, and all the rest of human misery is played out before our eyes? If Ms. Antony knows anything at all, she knows there's Something Big moving out there in the fog.

And following that, Ms. Antony should be the first to accept religious experiences. After all, she's had a big one. She's felt the wrong of this fallen, sinful world and felt the need to put it all back right. That didn't evolve from a big cloud of hydrogen gas.
God has set eternity in our hearts, and that's what it sounds like when people pay attention to it, even a little bit.

Kim Jung O

So now the FCC wants to install government minders in newsrooms across the country to make sure "underserved minorities" get the news they need. I guess we'll show Kim Jung Un how it's done. Even Mr. Obama's lickspittle media has an eyebrow aloft.

But don't worry, lefties — if you like your freedom of the press, you can keep it!

Another foretaste of things to come

It's no secret that Christian values are being slowly but inexorably dispossessed in America. Wedding cake bakers who refuse service to homosexual couples get sued over it, and lose. They're told that once you open your business up to serve the public, then you have to serve whatever comes through the door.

But now a bar owner in California says he'll refuse service to state legislators who vote for anti-gay legislation. Actually, he went a bit farther and said he'd deny them entry to his bar. I'm thinking his valiant pro-gay stand isn't likely to cost him a lot of money. How many Christians are clamoring to enter a gay bar in California?

Still, the principle being established here should tell every Christian that it's past time to gird up the old loins. Christian bakers are fair game for discrimination suits if they transgress against the Secular Man's homodoxy, on the grounds that public businesses have to accept whatever the public accepts. To borrow from Spurgeon, I'll adventure to prophesy that anti-Christian bar owners will be immune from suits on the same grounds.

Yet — lest we all forget — Californians voted against homosexual marriage, even going so far as to forbid it in their state constitution. So it's clear that the actual public in California accepts anti-gay legislators just fine. But you can be certain that the bar owner, should he get sued for discrimination, will get a pass.

Christians should be waking up to the fact that we're in a fight. And to paraphrase Mordecai to Esther, don't think this won't ever touch you.

When politics go bad

King Elah of Israel was a drunkard. His servant Zimri murdered him while he was drunk. Short moral of story: A drunken king can't be trusted to know who the enemy is.

Zimri took over and reigned for about a week. Another servant named Omri found out Elah was dead and came after Zimri. Zimri neither fought nor fled, but went into his own house and burned it down upon himself. Moral: It's easier to take over than it is to actually keep order, and once order is lost, you don't have a lot of options.

Omri was a wicked king and plunged Israel deeper into ruinous idolatry. Moral: A guy who just wants to be in charge is about the last man you want in power.

Omri's son Ahab eventually became king. The Bible describes Ahab as worse than all who came before him. He married Jezebel, who was even worse than he was. Moral: Getting rid of drunks, killers, and tyrants doesn't mean things are about to get better. The son might make you wish for his daddy back. And beware the tyrant's wife.

During Ahab's reign the prophet Elijah called for a drought that lasted for years. Moral: When the right leadership arrives, the fight isn't over; it's just starting, and you may dislike his methods.

At Mount Carmel, God spoke by fire from heaven. Israel, convinced, repented. They acknowledged that the Lord is God, not Baal, and executed the idolatrous priests. Then the rain came. Moral: Fixing a country starts with fixing hearts.
Baghdad Bob and ObamaCare

Kathleen Sebelius, the Baghdad Bob of ObamaCare, says job losses due to the idiotic tax scheme are a "popular myth." Bad timing for her announcement, though, coming right after the administration has been cheesecake grinning and doing happy hands over what a great American blessing it is to escape "job lock" by getting fired.

We should all savor this rare moment of unanimity in the political world, with both Republicans and Democrats saying that ObamaCare is a failure. The GOP says it's a failure because (among other things) it makes people lose their jobs. The Democrats say it's a failure because ObamaCare job losses are only mythical, leaving millions of hapless citizens still locked in a job. Is this a great country or what?

Secular Man's smoking habits

Has anyone else noticed how smoking tobacco has been getting less legal while smoking marijuana has been getting more legal? And isn't it just the funniest thing that so many plain old potheads are claiming it's for medicinal purposes?

Connecticut — nekkid and hoping you won't notice

One of the great insights of the American Revolution is that a government's authority derives from the consent of the governed.

The State of Connecticut passed a law saying everyone in the state must register so-called "assault" weapons and high capacity ammo magazines. Comes now the report that tens of thousands of citizens in Connecticut — perhaps millions — have declined to obey. Registration schemes are plainly the first move in a game of confiscation. Many who intend not to surrender their arms are declining to register them. This may turn out to be a very, very big deal.

Failure to register a weapon in Connecticut is a class D felony. A class D felony is punishable by up to five years in prison. Despite that, gun owners in Connecticut collectively jutted their jaws and said, "Hell, no."

How big is the problem? Connecticut estimates there are about 370,000 so-called "assault" weapons in Connecticut. Less than 50,000 have been registered. They estimate there are 2.4 million high capacity ammo magazines in the state. About 38,000 have been registered. Theoretically, Connecticut now has well over two million new felons.

You can be sure Connecticut pols see the problem just like I do. If a huge swath of the population responds with sullen defiance, the government no longer has the consent of the governed. How is it a legitimate government any more? And how do you recover that once it's lost? I see three options. 1) Connecticut can openly and humbly restore its legitimacy by repealing the law. 2) Officials can reduce enforcement to some low level that ruins a few people's lives while leaving most violators untouched yet still under state threat. 3) The state can hire more SWAT teams, build way more prisons, and start the crackdown.

Option 2 is most likely, because the gun law was designed not to solve a problem but to make liberals feel good about themselves. Neither practicing humility nor engorging the prisons would serve that purpose, although criminalizing a bunch of rightwingers would. And if a few of them get busted, well, that's the price one pays.

One problem: Reducing enforcement to a level that prevents serious conflict is claiming victory while hoisting a white flag. It's like one of those dreams where you show up at work buck nekkid and nobody notices.

As Drudge says, "Developing…"

From creation clearly seen

The recent creation/evolution debate between Ken Ham and Bill Nye was pretty good. It was not excellent.
The rules of the debate didn't require the contestants to engage one another to any great extent, so the back-and-forth that challenges reasoning didn't happen.

One of the things Mr. Ham said that begged for discussion was his remark that just doing science presupposes God and creation. Christians schooled in apologetics promptly said rah-rah, but the argument was left as a mere assertion. Nye declined to ask for an explanation, and Ham obliged.

Why should there be any such thing as natural law? Why should nature be orderly and predictable? Why should gravitation behave according to a rule so precise that you can measure its effects and write a mathematical equation that tells you exactly what's going to happen?

A Christian would argue from the creation account that God intended His universe to function in an orderly way. Creatures bring forth "after their kind," it says ten times. The motions of the earth, sun, moon, and stars provide day, night, signs, and seasons. There is order in this, and Paul tells us that the invisible things of God are clearly seen, being understood by what God made. (Rom 1:20)

But the deeper question for Mr. Nye would have been this: What is it about your thought process that leads you to look for orderliness in the first place, and why does your mind naturally recognize it and latch onto it?

Based on Nye's frequent and brave admissions about what he doesn't know, I can only surmise that he'd admit again that he has no idea why the Big Bang resulted in law and order rather than sheer chaos, and he'd likely admit that he has no idea why his mind should be structured to look for order. Or he might just say it evolved this way, which is the same thing.

But the Christian can say that if we take the Word of God as our starting point, the first thing we learn is that God is, that He made the universe, and that He did it in an orderly manner. Further, God immediately set about revealing Himself to man, with that revelation being set in a framework of reason and logic. The imago dei means our heads are hard-wired to look for order, to recognize it at once, and to latch onto it when it's found.

For science to exist at all, all these Christian teachings about creation and human nature have to be assumed as prerequisites. They must be presupposed.

The questions for Mr. Nye and everyone who investigates science from a naturalistic viewpoint are these: How does the Big Bang account for the fact that the resulting cosmos functions according to fixed laws? And second, how did the mind of man come to look for such things? Christianity has an answer for these questions. Naturalism can't do any better than offer a shrug and say that's just the way things are — which is the opposite of true science.

Conservatives who can't connect the dots

A few months ago, while the electioneering was in full-throated roar, "conservative" writers lamented that liberal voters seem unable to connect the dots. One quoted a low-info voter who expressed unconcern about a property tax hike because, said the voter, "I rent an apartment, so property taxes don't affect me." How do you connect intelligently with people this thick?

And then today, I was listening to talk radio "conservative" Mark Larsen explaining to a caller that he'd have no problem with the Boy Scouts changing their stance on homosexuality to go with the PeeCee flow and start accepting it.
The caller wondered why the institution must change to accommodate the individual rather than the other way around, noting that the Boy Scouts have always required young men to be morally straight.

"What is morality?" wondered the blind Mr. Larsen aloud. After all, Christian denominations have differed over this or that detail. And whatever would we say to the Metropolitan churches who are openly homosexual? (Tacit premise in the question: Until you get everything perfect, you're not allowed to say they're wrong.)

This is a conservative, low-info talk show host who cannot connect the dots. Well, actually, Larsen says he's libertarian, but he's still dense on this topic and unable to connect dots, and here's why.

Morality of any and every sort is an assertion of authority. The moment you say "ought" or "ought not," somebody else demands, "Says who?" Morality requires an anchor. The Author, the Anchor, is God. And even though the church admittedly has quibbles a-plenty, we're all together in relaying to you His judgment that sodomy isn't okay.

Mr. Larsen, apparently unwilling to consider a reliable message from a capable though fallible messenger, has no anchor. How else can you even ask such a question as, "What is morality?"

And once you pull up the anchor, everything tied to it will drift away. The current debate over homosexuality didn't spring upon America like a bolt from the sky. It started way back when Americans grew discontented with the God who insists we should keep our word. Not long thence, easy-breezy divorce became socially acceptable. A few years later, pornography began to proliferate. And then came the sexual revolution with its promiscuity, the shack-ups, the meteoric rise in illegitimacy, the loss of shame as the entertainer class breeds without commitment.

First thing you know, many major cities had whole sections of their towns devoted to sodomy, and before you can adjust to that, they've got us voting on whether homosexuals have a right to marry one another. And at that point, people like Mr. Larsen cannot render a reason as to what could possibly be bad about that.

Prediction: Sometime soon our society will be debating polygamy, pedophilia, bestiality, and necrophilia, and those who (for whatever reason) disapprove of such things but who have no anchor will find themselves as tongue-tied as the hapless Mr. Larsen was. Who's to say what's wrong, after all?

Without God as the anchor for morals, you will have no morals. He made the world where it can't be any other way. And yes, He did that on purpose. Morals, like the rights stated in the Declaration of Independence, are derived. And just as God created us equal and endowed us with rights, so He also created us with the social, civic, and religious obligations we refer to as morality.

When you pull up the anchor, you don't just lose your morals. You'll start losing your rights, too. Same anchor, same God. Say good-bye to life, liberty, and the pursuit of happiness. Godless men cannot comprehend, let alone respect, the Bill of Rights. They have no clue where such things came from, no idea of what makes them special, and no sense of a higher Authority to whom all earthly authorities must give account. You can no more have rights without morality than you can have a stream without water. Both flow from the same spring, the Eternal God.

God deliver us from leaders who do not know their Maker, or even that they are made.
Lance and Oprah

The embarrassing spectacle of Lance Armstrong confessing to Oprah has failed to capture the popular imagination.

For one thing, Lance is not a sympathetic character. Americans are not prone to soaring eloquence, so people call him a jerk. British writer Geoffrey Wheatcroft said of him, "Mr. Armstrong has 'a voice like ice cubes,' as one French journalist puts it, and I have to admit that he reminds me of what Daniel O'Connell said about Sir Robert Peel: He has a smile like moonlight playing on a gravestone."

Another thing is that Lance's confession came too late. And it was lame. And it was tacky. But it fits the pattern now so familiar in no-fault America, in which a famous person commits a sin, gets caught, lies about it till the lie becomes ridiculous, then finally stages a theatrical confession. The staging is usually in proportion to the fame and ego of the perpetrator. Thus, Lance.

Scroll through the mental list of publicly groveling miscreants from Lance back through Anthony Weiner, Bill Clinton, South Carolina governor Mark Sanford, gay/doping preacher Ted Haggard, and a host of others.

The spiritual man can see what this is all about. Adam remains banished from Eden. The occasional rite of public humiliation is just a couple of the exiles passing by the gate and wishing for a way back in. But the gate is shut. The cherub with the flaming sword still bars the road to paradise.

A final thing about Lance's confession is that we can all see it does no good. The public, momentarily curious, watches the ritual confessions and is vaguely aware of the hopelessness of it all. To confess seems required. A wrong was done. To admit it is demanded. We all feel the pressure of the demand. Some of us help exert it. At the same time, it's inadequate. It's watering a dead tree, and all the same to the tree whether it's water or tears.

The Secular Man, two-dimensional being that he is, confesses to himself and to his peers. Who else is there? To the carnal mind, what is paradise but the pleasure he felt before his sin was found out? A degrading confession seems to be how you shake up the Etch-A-Sketch and redraw the picture.

The confession has to feature humiliation and suffering. Part of the suffering involves the rest of us smirking at the poor dumb schmuck locked in the pillory. But even when we humiliate ourselves as Lance did, the sin remains. And even if you suffer to the point of death, you're just dead and guilty. Whether you're confessing to Oprah or CNN, it's still just praying to a god that cannot save. (Isa 45:20)

The riddle is solved at the cross. It is Christ's humiliation, not ours, and His suffering and death, that brings remission of sins. It is our confession to Him, not to Oprah nor to a public filled with critics and voyeurs, that brings peace.
Saturday, June 29, 2013

The Linear Scalar MultiD Schrödinger Equation as Pseudo-Science

If we are still going to put up with these damn quantum jumps, I am sorry that I ever had anything to do with quantum theory. (Erwin Schrödinger)

The pillars of modern physics are quantum mechanics and relativity theory, both of which, however, are generally acknowledged to be fundamentally mysterious and incomprehensible to even the sharpest minds, and which thus give modern physics a shaky foundation. The mystery is so deep that it has been twisted into a virtue, with the hype of string theory representing maximal mystery.

The basic trouble with quantum mechanics is its multi-dimensional wave function depending on 3N space coordinates for an atom with N electrons, as solution to the linear scalar multi-dimensional Schrödinger equation, which cannot be given a real physical meaning because reality has only 3 space coordinates.

The way out to save the linear scalar multidimensional Schrödinger equation, which was viewed as a gift by God and as such untouchable, was to give the multidimensional wave function an interpretation as the probability of the N-particle configuration given by the 3N coordinates. Quantum mechanics based on the linear scalar Schrödinger equation was thus rescued, at the cost of making the microscopic atomistic world into a game of roulette, asking for microscopics of microscopics as contradictory reduction in absurdum.

But God does not write down the equations describing the physics of His Creation, only human minds do, and if insistence on a linear scalar (multidimensional) Schrödinger wave equation leads to contradiction, the only rational scientific attitude would be to search for an alternative, most naturally as a system of non-linear wave equations in 3 space dimensions, which can be given a deterministic physical meaning. There are many possibilities, and one of them is explored as Many-Minds Quantum Mechanics in the spirit of Hartree.

It is well known that macroscopic mechanics including planetary mechanics is not linear, and there is no reason to expect that atomistic physics is linear and allows superposition. There is no rational reason to view the linear scalar multiD Schrödinger equation as the basis of atomistic physics (other than as a gift by God which cannot be questioned), and physics without rational reason is unreasonable and thus may represent pseudo-science.

The linear scalar multiD Schrödinger equation, with an incredibly rich space of solutions beyond reason, requires drastic restrictions to represent anything like real physics. Seemingly out of the blue, physicists have come to agree that God can play only with fully symmetric (bosons) or antisymmetric (fermions) wave functions, with the Pauli Exclusion Principle as a further restriction. But nobody has been able to come up with any rational reasons for the restrictions to symmetry, antisymmetry and exclusion. According to Leibniz's Principle of Sufficient Reason, this makes these restrictions into ad hoc pseudo-science.
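For readers who don't know the Hartree idea the post invokes, here is the textbook Hartree ansatz (a standard construction, not necessarily the exact form used in the Many-Minds work referenced above). One replaces the 3N-dimensional wave function by a product of N functions of ordinary 3-space,

\[\Psi(x_1,\dots,x_N)\approx\prod_{j=1}^{N}\psi_j(x_j),\]

which turns the one linear Schrödinger equation in 3N dimensions into N coupled non-linear equations in 3 dimensions,

\[i\hbar\,\partial_t\psi_j=-\frac{\hbar^2}{2m}\Delta\psi_j+V(x)\psi_j+\Big(\sum_{k\neq j}\int\frac{e^2\,|\psi_k(y)|^2}{|x-y|}\,dy\Big)\psi_j,\]

where V(x) is the external (nuclear) potential and the non-linearity enters through the mean-field term: each electron moves in the averaged charge density of the others.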
Section 12.3: Classical and Quantum-Mechanical Probabilities

In this Section we compare the energy eigenfunctions of the harmonic oscillator in position space to those in momentum space and then compare the resulting probability densities to their classical counterpart probability distributions. Momentum-space energy eigenfunctions can be obtained by calculating the Fourier transform of the position-space energy eigenfunction (see Section 8.5). However, in the case of the harmonic oscillator, it is easier to consider the time-independent Schrödinger equation in momentum space:

[p²/2m − (mω²ħ²/2) d²/dp²] φ(p) = Eφ(p).   (12.16)

In momentum space, the operator p represents the momentum and the operator (iħ d/dp) represents the position operator, x. Compare Eq. (12.6) to Eq. (12.16). What do you notice? It turns out that the two equations have the same form, which can be seen if you make the substitution p = mωx, or x = p/(mω). Therefore, the solutions to the two differential equations are the same, apart from a scaling factor. From Eq. (12.11) and Eq. (12.16), we have that³

φ_n(p) = B_n H_n(ηp) exp(−η²p²/2),   (12.17)

where η = (mωħ)^(−1/2) = β/(mω). The normalization constant becomes

B_n = (2ⁿ n! (mωħπ)^(1/2))^(−1/2).   (12.18)

In the animations, you can change n and ω and see the resulting changes in the position-space and momentum-space energy eigenfunctions. We have used 2m = ħ = 1 and initially ω = 2. Can you guess why we have chosen this particular value for ω? Using the first check box, you can view the probability densities in position and momentum space. In the animation, you can also check the box that superimposes the classical probability distributions (in pink) on the quantum-mechanical probability densities. Note the symmetry about x = 0 that the classical position-space and momentum-space probability distributions exhibit.

³If we had Fourier transformed the position-space energy eigenfunctions instead, we would have found the same result as Eq. (12.17), but multiplied by a phase exp(inπ/4), where n is the particular state's quantum number. This adds an overall phase to the momentum-space wave function and, as such, is not important.
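Since the text describes what the animation shows rather than how to compute it, here is a minimal Python sketch (my own illustration, not part of the Physlet package) of the two curves being compared. It also hints at the answer to the ω question above: with 2m = ħ = 1 and ω = 2 we get mω = 1, so β = η = 1 and the position-space and momentum-space densities take an identical form.

```python
# A minimal sketch of the quantities the section plots: the quantum
# probability density |psi_n(x)|^2 of the harmonic oscillator versus the
# classical distribution, in the section's units 2m = hbar = 1 (so m = 1/2)
# with omega = 2, which makes beta = sqrt(m*omega/hbar) = 1.
import numpy as np
from scipy.special import eval_hermite, factorial

hbar, m, omega = 1.0, 0.5, 2.0
n = 5
beta = np.sqrt(m * omega / hbar)   # = 1 here, so x- and p-densities coincide

def rho_quantum(x):
    """|psi_n(x)|^2 for the n-th harmonic-oscillator eigenstate."""
    norm = beta / (np.sqrt(np.pi) * 2.0**n * factorial(n))
    return norm * eval_hermite(n, beta * x)**2 * np.exp(-(beta * x)**2)

E = hbar * omega * (n + 0.5)             # eigenenergy of state n
x_t = np.sqrt(2.0 * E / (m * omega**2))  # classical turning point

def rho_classical(x):
    """Fraction of an oscillation period spent near x (diverges at +/- x_t)."""
    return 1.0 / (np.pi * np.sqrt(x_t**2 - x**2))

x = np.linspace(-0.999 * x_t, 0.999 * x_t, 2001)
print("probability inside the classically allowed region:")
print("  quantum  :", np.trapz(rho_quantum(x), x))    # a bit below 1 (tails)
print("  classical:", np.trapz(rho_classical(x), x))  # -> 1 as range -> (-x_t, x_t)
```

For larger n, the quantum density oscillates ever more rapidly about the classical curve, which is the correspondence the superimposed plots are meant to exhibit.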
Ground state

1D ground state has no nodes

In 1D, the ground state of the Schrödinger equation has no nodes. This can be proved by considering the average energy of a state with a node at x = 0, i.e. ψ(0) = 0. The average energy in this state is

\[\langle H\rangle=\int dx\left(\frac{\hbar^2}{2m}\,|\psi'(x)|^2+V(x)\,|\psi(x)|^2\right),\]

where V(x) is the potential. Now consider a small interval around x = 0, i.e. x ∈ [−ε, ε]. Take a new wave function ψ̃ to be defined as ψ̃(x) = ψ(x) for x < −ε, ψ̃(x) = −ψ(x) for x > ε, and ψ̃ constant for x ∈ [−ε, ε]. If ε is small enough then this is always possible to do so that ψ̃ is continuous. Assuming ψ(x) ≈ −cx around x = 0, we can write

\[\tilde\psi(x)=N\begin{cases}|\psi(x)|,&|x|>\varepsilon,\\ c\varepsilon,&|x|\le\varepsilon,\end{cases}\qquad N=\left(1+\tfrac{4}{3}c^2\varepsilon^3\right)^{-1/2},\]

where N is the norm. Note that the kinetic-energy density of ψ̃ is nowhere larger than that of ψ, because of the normalization (N < 1) and because ψ̃ is constant, with vanishing derivative, across the interval.

Now consider the potential energy. For definiteness let us choose V(x) ≥ 0. Then it is clear that outside the interval the potential-energy density is smaller for ψ̃, because |ψ̃| < |ψ| there. On the other hand, in the interval we have

\[\tilde\varepsilon_{\mathrm{pot}}=\int_{-\varepsilon}^{\varepsilon}dx\,V(x)\,c^2\varepsilon^2\simeq 2c^2V(0)\,\varepsilon^3,\]

which is correct to order ε³. On the other hand, the contribution to the potential energy from this region for the state with a node, ψ, is

\[\varepsilon_{\mathrm{pot}}=\int_{-\varepsilon}^{\varepsilon}dx\,V(x)\,c^2x^2\simeq\tfrac{2}{3}c^2V(0)\,\varepsilon^3,\]

which is of the same order as for the state ψ̃. Therefore, the potential energy is unchanged up to order ε³ if we deform the state ψ with a node into the state ψ̃ without a node, while the kinetic energy is lowered. We can therefore remove all nodes and reduce the energy, which implies that the ground-state wave function cannot have a node. This completes the proof.
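As an informal numerical check of the claim (my own illustration, not part of the article), one can discretize H = −(ħ²/2m) d²/dx² + V(x) on a grid and count the sign changes of the lowest eigenvectors; the ground state comes out nodeless.

```python
# Illustrative check: diagonalize a finite-difference 1D Hamiltonian and count
# the nodes of its lowest eigenstates (units hbar^2/2m = 1; V = x^2 as an example).
import numpy as np

N, L = 400, 10.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]
V = x**2

main = 2.0 / dx**2 + V                # diagonal of -d^2/dx^2 + V
off = -np.ones(N - 1) / dx**2         # off-diagonals of the second difference
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

vals, vecs = np.linalg.eigh(H)
for k in range(3):
    v = vecs[:, k]
    v = v[np.abs(v) > 1e-6 * np.abs(v).max()]  # drop numerical noise in the tails
    nodes = int(np.sum(v[:-1] * v[1:] < 0))    # sign changes = nodes
    print(f"state {k}: E = {vals[k]:.4f}, nodes = {nodes}")
```

The node counts come out 0, 1, 2 for the three lowest states, consistent with the theorem above and with the standard Sturm oscillation result that the k-th excited state has k nodes.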
Is the Universe Actually Made of Math?
Cosmologist Max Tegmark says mathematical formulas create reality.
By Adam Frank | Monday, June 16, 2008
Photography by Erika Larsen

Cosmologists are not your run-of-the-mill thinkers, and Max Tegmark is not your run-of-the-mill cosmologist. Throughout his career, Tegmark has made important contributions to problems such as measuring dark matter in the cosmos and understanding how light from the early universe informs models of the Big Bang. But unlike most other physicists, who stay within the confines of the latest theories and measurements, the Swedish-born Tegmark has a night job. In a series of papers that have caught the attention of physicists and philosophers around the world, he explores not what the laws of nature say but why there are any laws at all. According to Tegmark, "there is only mathematics; that is all that exists." In his theory, the mathematical universe hypothesis, he updates quantum physics and cosmology with the concept of many parallel universes inhabiting multiple levels of space and time. By posing his hypothesis at the crossroads of philosophy and physics, Tegmark is harking back to the ancient Greeks with the oldest of the old questions: What is real?

Tegmark has pursued this work despite some risk to his career. It took four tries before he could get an early version of the mathematical universe hypothesis published, and when the article finally appeared, an older colleague warned that his "crackpot ideas" could damage his reputation. But propelled by optimism and passion, he pushed on. "I learned pretty early that if I focused exclusively on these big questions I'd end up working at McDonald's," Tegmark explains. "So I developed this Dr. Jekyll/Mr. Hyde strategy where officially, whenever I applied for jobs, I put forth my mainstream work. And then quietly, on the side, I pursued more philosophical interests."

The strategy worked. Today a professor at the Massachusetts Institute of Technology, Tegmark travels among the world's top physicists. Backed by this well-earned credibility, his audacious ideas are sparking fascination and taking flight.

These days Tegmark is a busy man. With his wife, the Brazilian cosmologist Angelica de Oliveira-Costa, he balances science with the demands of raising two young boys. Our interviewer, theoretical astrophysicist Adam Frank of the University of Rochester in New York, finally caught up with Tegmark as he made his way home to Winchester, Massachusetts, from a conference at Stanford University. In a comic juxtaposition of the profound and the profane, they spoke about the nature of reality by cell phone for three hours as Tegmark jockeyed his way through an airport rental car return, security lines, and a long wait for a delayed flight. A riff on reality would brake to a halt so Tegmark could avoid being hit by a rental-agency van. Just as the conversation plunged into parallel universes, Tegmark would have to downshift the dialogue for the bewildered security guard checking his boarding pass. Tegmark's infectious excitement over the big issues, from physics and philosophy to kids and cosmology, made for one hell of an afternoon's ride.

Max, you have gained a reputation for thinking far outside the box even for a cosmologist. Have you always pondered deep questions of Life, the Universe, and Everything?
No. I was a very confused youth. I came to it all pretty late, and there was no one I talked about philosophy with as a teenager.
I did have one friend in high school who did everything the opposite way from everyone else. If people were sending letters in rectangular envelopes, he would make triangular envelopes and send letters in those. I remember thinking, "That is cool. That is how I want to be."

Is that why you decided to go into physics?
Actually, my dad is a mathematician, and he was always very encouraging about math, but physics was my single most boring subject in high school. So I began as an undergrad in economics.

That was an interesting choice. ... When did physics show up on your radar screen again?
A friend gave me a book, Surely You're Joking, Mr. Feynman! by the physicist Richard Feynman. It was all about picking locks and picking up women. It had nothing to do with physics, but it struck me how between the lines it said loud and clear, "I love physics!" I couldn't understand how this could be the same boring stuff from high school. It really piqued my curiosity.

How so?
If you see some mediocre guy walking down the street arm in arm with Cameron Diaz, you say to yourself, "I'm missing something here." So I started reading Feynman's Lectures on Physics and I was like, whoa! Why haven't I realized this before?

So then you changed your major?
Umm, no. You don't pay for college in Sweden, so I was able to do this kind of scam where I enrolled in a different university to do physics without telling them I was already in college for economics.

You were in two colleges at the same time?
Yeah. You can see I was confused. It got complicated at times. I would have exams in both places on the same day, and I'd have to bike really fast between them.

Was it in college that you started to think about the bigger questions?
I was taking the one and only quantum physics class offered, and when I got to the chapter on measurement I felt sure that I was missing something.

You're talking about the way the observer appears to affect the measurement of what's being observed.
Right. There is this beautiful mathematical equation in quantum theory called the Schrödinger equation. It uses something called the wave function to describe the system you are studying—an atom, an electron, whatever—and all the possible ways that system can evolve. The usual perspective of quantum mechanics is that as soon as you measure something, the wave function literally collapses, going from a state that reflects all potential outcomes to a state that reflects only one: the outcome you see at the moment the measurement is done. It seemed crazy to me. I didn't get why you were supposed to use the Schrödinger equation before you measured the atom, but then, while you're measuring it, the equation doesn't apply. So I got up my courage and knocked on the door of one of the most famous physicists in Sweden, a man on the Nobel committee, but he just blew me off. It wasn't until years later that I had this revelation that it wasn't me who didn't get it; it was him! It is a beautiful moment in the education of a scientist when you realize that these guys in higher positions of power still don't have all of the answers.

So you took your questions about the Schrödinger equation and the effect of measurement with you when you left for the United States and your Ph.D. at Berkeley?
That's where it all started for me. I had this friend, Bill Poirier, and we spent hours talking about crazy ideas in physics. He was ribbing me because I argued that any fundamental description of the universe should be simple.
To annoy him, I said there could be a whole universe that is nothing more than a dodecahedron, a 12-sided figure the Greeks described 2,500 years ago. Of course, I was just fooling around, but later, when I thought more about it, I got excited about the idea that the universe is really nothing more than a mathematical object. That got me thinking that every mathematical object is, in a sense, its own universe.

I anticipated problems and did not submit until I had accepted a postdoctoral appointment at Princeton University. My first paper got rejected by three journals. Finally I got a good referee report from Annals of Physics, but the editor there rejected the paper as being too speculative.

Wait—that is not supposed to happen. If the referee likes a paper, it usually gets accepted.
That's what I thought. I was fortunate to be friends with John Wheeler, a Princeton theoretical physicist and one of my greatest physics heroes, who recently passed away. When I showed him the rejection letter, he said, "'Extremely speculative'? Bah!" Then he reminded me that some of the original papers on quantum mechanics were also considered extremely speculative. So I wrote an appeal to Annals of Physics and included Wheeler's comments. Finally the editors there published it.

Still, it wasn't your bread and butter. You did your Ph.D. and postdoc in cosmology, a totally different subject.
It's ironic that my cover for these more philosophical interests was cosmology, a field that has often been seen as flaky as well. But cosmology was gradually becoming more respectable because computer technology, space technology, and detector technology had combined to give us an avalanche of great information about the universe.

What is the multiverse's first level?

Do these level II universes inhabit different dimensions?

OK, on to level III. So there are parallel me's in level III as well.

How does your mathematical universe hypothesis fit in?

Can you give a simple example of a mathematical structure?

Max, this is pretty rarefied territory. On a personal level, how do you reconcile this pursuit of ultimate truth with your everyday life?
Sometimes it's quite comical. I will be thinking about the ultimate nature of reality and then my wife says, "Hey, you forgot to take out the trash." The big picture and the little picture just collide.

Your wife is a respected cosmologist herself. Do you ever talk about this over breakfast cereal with your kids?
She makes fun of me for my philosophical "bananas stuff," but we try not to talk about it too much. We have our kids to raise.

Do your theories help with raising your kids, or does that also seem like two different worlds?
The overlap with the kids is great because they ask the same questions I do. I did a presentation about space for my son Alexander's preschool when he was 4. I showed them videos of the moon landing and brought in a rocket. Then one little kid put up his hand and said: "I have a question. Does space end or go on forever?" I was like, "Yeah, that is exactly what I am thinking about now."
The Top One Hundred Candidates

Below are the 100 candidate material moments that on-line voters ranked to create the official list of 50 Greatest Materials Moments.

28,000 BC (estimated): The earliest fired ceramics--in the form of animal and human figurines, slabs, and balls (found at sites in the Pavlov Hills of Moravia)--are manufactured starting about this time. Introduces materials processing.

8000 BC: The earliest form of metallurgy begins with the decorative hammering of copper by Old World Neolithic peoples. Leads to the replacement of stone tools with much more durable and versatile copper ones.

5000 BC: In and around modern Turkey, people discover that liquid copper can be extracted from malachite and azurite and that the molten metal can be cast into different shapes. Introduces extractive metallurgy--the means of unlocking the Earth's mineralogical treasures.

3500 BC: Egyptians smelt iron (perhaps as a by-product of copper refining) for the first time, using tiny amounts mostly for ornamental or ceremonial purposes. Unlocks the first processing secret of what will become the world's dominant metallurgical material.

3000 BC: Metal workers in the region of modern Syria and Turkey discover that addition of tin ore to copper ore before smelting produces bronze. Establishes the concept of metals alloying--blending two or more metals to create a substance that is greater than the sum of its parts.

2200 BC: The peoples of northwestern Iran invent glass. Introduces the second great nonmetallic engineering material (following ceramics).

1500 BC: Potters in China craft the first porcelain using kaolin. Begins a long tradition of exceptional craftsmanship and artistry with this class of ceramics.

1500 BC: Metal workers in the Near East develop the art of lost-wax casting. Establishes the ability to create and replicate intricate and complex metallurgical structures.

300 BC: Metal workers in south India develop crucible steel making. Produces "wootz" steel, which becomes famous as "Damascus" sword steel hundreds of years later, inspiring artisans, blacksmiths, and metallurgists for many generations to come.

200 BC: Chinese metal workers develop iron casting. Introduces the primary approach to manufacturing iron objects for centuries in China.

100 BC: Glass blowing is developed, probably somewhere in the region of modern Syria, Lebanon, Jordan, and Israel--most likely by Phoenicians. Enables the quick manufacture of large, transparent, and leak-proof vessels.

Iron smiths forge and erect a seven meter high iron pillar in Delhi, India. Defies deleterious environmental effects for more than one and a half millennia, creating an artifact of long-standing materials science and archaeological intrigue.

Johannes Gutenberg devises a lead-tin-antimony alloy to cast in copper alloy molds to produce large and precise quantities of the type required by his printing press. Establishes the fundamental enabling technology for mass communication.

Johanson Funcken develops a method for separating silver from lead and copper, ores of which are often mixed in deposits. Establishes that mining and metals processing operations can recover metals as a by-product of other operations.

Vannoccio Biringuccio publishes De La Pirotechnia. Provides the first written account of proper foundry practice.

Georgius Agricola publishes De Re Metallica. Provides a systematic and well-illustrated examination of mining and metallurgy as practiced in the sixteenth century.
Galileo publishes Della Scienza Mechanica ("On Mechanical Knowledge"), which he writes after he has been consulted regarding shipbuilding problems. Deals scientifically with the strength of materials.

Anton van Leeuwenhoek develops optical microscopy capable of magnifications of 200 times and greater. Enables study of the natural world and its structures that are invisible to the unaided eye.

Abraham Darby I discovers that coke can effectively replace charcoal in a blast furnace for iron smelting. Lowers dramatically the cost of ironmaking (enabling large-scale production) and saves regions from deforestation.

In Britain, the first glue patent is issued (for fish glue, an exceptionally clear adhesive). Initiates a rapid succession of adhesive developments with natural and then synthetic sources.

John Smeaton invents modern concrete (hydraulic cement). Introduces the dominant construction material of the modern age.

Luigi Brugnatelli invents electroplating. Originates the widely employed industrial process for functional and decorative applications.

Sir Humphry Davy develops the process of electrolysis to separate elemental metals from salts, including potassium, calcium, strontium, barium, and magnesium. Establishes the foundation for electrometallurgy and electrochemistry.

Auguste Taveau develops a dental amalgam from silver coins and mercury. Enables repeatable and low-cost dental filling material and establishes one of the earliest examples of metallic biomaterials.

Augustin Cauchy presents his theory of stress and strain to the French Academy of Sciences. Provides the first careful definition of stress as the load per unit area of the cross section of a material.

Friedrich Wöhler isolates elemental aluminum. Unlocks the most abundant metallic element in the Earth's crust.

Wilhelm Albert develops iron wire rope as hoisting cable for mining. Presents an exponential leap of large-scale construction and industrial opportunities over the limitations of hemp rope.

Charles Goodyear invents the vulcanization of rubber. Enables enormous progress in the transportation, electricity, manufacturing, and myriad other industries.

George Audemars patents "artificial silk" created using the fibrous inner bark of a mulberry tree. Leads to the manufacture of rayon and the era of synthetic fibers, creating sweeping effects on the textiles and materials industries.

Henry Bessemer patents a bottom-blown acid process for melting low-carbon iron. Ushers in the era of cheap, large tonnage steel, thereby enabling massive progress in transportation, building construction, and general industrialization.

Emile and Pierre Martin develop the Siemens-Martin open-hearth furnace process. Produces large quantities of basic steel by heating a combination of steel scrap and iron ore with gas burners--helping to make steel the world's most recycled metal.

Henry Clifton Sorby uses light microscopy to reveal the microstructure of steel. Leads to the use of photomicrography with metals and the science of metallurgy.

Dmitri Mendeleev devises the Periodic Table of Elements. Introduces the ubiquitous reference tool of materials scientists and engineers.

Alfred Nobel patents dynamite. Proves of immeasurable assistance in conducting large-scale mining.

J. Willard Gibbs publishes the first part of the two-part paper "On the Equilibrium of Heterogeneous Substances." Provides a basis for understanding modern thermodynamics and physical chemistry.

William Siemens patents the arc-type electric furnace.
Leads to the modern electric arc furnace, which is the principal furnace type for the modern electric production of steel.

Pierre Manhès constructs the first working converter for copper matte. Initiates the modern period of copper making.

Charles Martin Hall and Paul Héroult independently and simultaneously discover the electrolytic reduction of alumina into aluminum. Provides the processing foundation for the proliferation of aluminum for commercial applications.

Adolf Martens examines the microstructure of a hard steel alloy and finds that, unlike many inferior steels that show little coherent patterning, this steel had many varieties of patterns, especially banded regions of differently oriented microcrystals. Initiates the use of microscopy to identify the crystal structures in metals and correlate these observations to the physical properties of the material.

Pierre and Marie Curie discover radioactivity. Marks the beginning of modern-era studies on spontaneous radiation and applications of radioactivity for civilian and military applications.

William Roberts-Austen develops the phase diagram for iron and carbon. Initiates work on the most significant phase diagram in metallurgy, providing the foundation for the indispensable tool for other material systems.

Johan August Brinell develops a test to estimate the hardness of metals that involves pressing a steel ball or diamond cone against the specimen. Establishes a reliable (and still commonly used) method to determine the hardness properties of virtually all materials.

Charles Vincent Potter develops the flotation process to separate metallic sulfide minerals from otherwise unusable minerals. Opens the opportunity for the large-scale recovery of metals from increasingly difficult-to-treat low-grade ores from mining operations.

Leon Guillet develops the alloying compositions of the first stainless steels. Expands the versatility of steel for use in corrosive environments.

Alfred Wilm discovers the precipitation hardening of aluminum alloys. Yields the "hard aluminum" duralumin, the first high-strength aluminum alloy.

Leo Baekeland synthesizes the thermosetting hard plastic Bakelite. Marks the beginning of the "plastic age" and the modern plastics industry.

William D. Coolidge devises ductile tungsten wire, using a powder metallurgical approach, for use as an energy-efficient, high-lumen lighting filament. Spurs the rapid expansion of electric lamps and initiates the science of modern powder metallurgy.

Heike Kamerlingh Onnes discovers superconductivity while studying pure metals at very low temperatures. Forms the basis for modern discoveries in low- and high-temperature superconductors and resulting high-performance applications.

Max von Laue discovers the diffraction of x-rays by crystals. Creates means to characterize crystal structures and inspires W.H. Bragg and W.L. Bragg in developing the theory of diffraction by crystals, providing insight into the effects of crystal structure on material properties.

Albert Sauveur publishes Metallography and Heat Treatment of Iron and Steel. Promulgates the "processing-structure-properties" paradigm that guides the materials science and engineering field.

Niels Bohr publishes his model of atomic structure. Introduces the theory that electrons travel in discrete orbits around the atom's nucleus, with the chemical properties of the element being largely determined by the number of electrons in each of the outer orbits.
Jan Czochralski publishes the paper "Ein Neues Verfahren zur Messung des Kristallisationsgeschwindigkeit der Metalle" ("A New Method for the Measurement of the Crystallization Rate of Metals"), in which he describes a method of growing metallic monocrystals. Becomes the method of choice for growing high-performance materials, such as the silicon crystals used in the semiconductor computer chip industry. A.A. Griffith publishes "The Phenomenon of Rupture and Flow in Solids," which casts the problem of fracture in terms of energy balance. Gives rise to the field of fracture mechanics. Hermann Staudinger publishes work that states that polymers are long chains of short repeating molecular units linked by covalent bonds. Paves the way for the birth of the field of polymer chemistry. John B. Tytus invents the continuous hot-strip rolling of steel. Provides the basis for the inexpensive, large-scale manufacturing of sheet and plate products. Karl Schroter invents cemented carbides as a class of materials. Provides the basis for the workhorse materials of the tool and metal-cutting industries. Cornelius H. Keller patents alkyl xanthates sulfide collectors. Begins a revolution in sulfide mineral flotation, turning worthless mineral deposits into bonanzas. Werner Heisenberg develops matrix mechanics and Erwin Schrödinger invents wave mechanics and the non-relativistic Schrödinger equation for atoms. Forms the basis of quantum mechanics. Waldo Lonsbury Semon invents plasticized polyvinyl chloride (PVC). Becomes one of the world's most versatile and widely used construction materials. Paul Merica patents the addition of small amounts of aluminum to Ni-Cr alloy to create the first "superalloy." Leads to the commercialization of the jet engine, along with increased efficiency for modern power turbine machinery. Clinton Davisson and Lester Germer experimentally confirm the wave nature of the electron. Provides fundamental work necessary for much of today's solid-state electronics. Siegfried Junghans perfects a process for the continuous casting of nonferrous metal. Provides the basis for commercial exploitation of high-volume continuous casting. Arnold Sommerfeld applies quantum statistics to the Drude model of electrons in metals and develops the free-electron theory of metals. Supplies a simple model for the behavior of electrons in a crystal structure of a metallic solid and contributes to the foundation of solid-state theory. Fritz Pfleumer patents magnetic tape. Establishes the technology and leads to many subsequent innovations for data storage. Arne Olander discovers the shape-memory effect in an alloy of gold and cadmium. Leads to the development of the commercial shape-memory alloys that are employed in medical and other applications. Max Knoll and Ernst Ruska build the first transmission electron microscope. Accesses new length scales and enables improved understanding of material structure. Egon Orowan, Michael Polyani, and G.I. Taylor, in three independent papers, propose that the plastic deformation of ductile materials could be explained in terms of the theory of dislocations. Provides critical insight toward developing the modern science of solid mechanics. Wallace Hume Carothers, Julian Hill, and other researchers patent the polymer nylon. Greatly reduces the demand for silk and serves as the impetus for the rapid development of polymers. Erich Schmid and Walter Boas publish Kristallplastizitaet, which details 15 years of research on plastic deformation of metallic single crystals. 
Leads to a much better understanding of plastic deformation, a critical property of metals. Norman de Bruyne develops the composite plastic Gordon-Aerolite, which consists of high-grade flax fiber bonded together with phenolic resin. Paves the way for the development of fiberglass. André Guinier and G.D. Preston independently report the observation of diffuse streaking in age-hardened aluminum-copper alloys. Leads to the improved understanding of precipitation-hardening mechanisms. Otto Hahn and Fritz Strassmann find that they can split the nucleus of a uranium atom by bombarding it with neutrons Establishes nuclear fission and leads to applications in energy and atomic weapons. Russell Ohl, George Southworth, Jack Scaff, and Henry Theuerer discover the existence of p- and n-type regions in silicon. Provides a necessary precursor to the invention of the transistor eight years later. Wilhelm Kroll develops an economical process for titanium extraction. Establishes the primary means of producing the high-purity titanium needed for products ranging from high-performance aircraft to corrosion-resistant reactors. Frank Spedding develops an efficient process for obtaining high-purity uranium from uranium halides. Enables the development of the atomic bomb in the Manhattan Project. John Bardeen, Walter H. Brattain, and William Shockley invent the transistor Becomes the building block for all modern electronics and the foundation for microchip and computer technology. Bill Pfann invents zone refining. Enables the preparation of high-purity materials, such as the improved semiconductors critical for electronic applications. Nick Holonyak, Jr., develops the first practical visible-spectrum light-emitting diode (LED). Marks the beginning of the use of III-V alloys in semiconductor devices, including heterojunctions and quantum well heterostructures. S. Donald Stookey discovers a heat-treatment process for transforming glass objects into fine-grained ceramics. Leads to the introduction of Pyroceram and CorningWare. A team in Sweden produces the first artificial diamonds by using high heat and pressure. Gives rise to the industrial diamond industry, with applications in machining, electronics, and a variety of other areas. Gerald Pearson, Daryl Chapin, and Calvin Fuller unveiled the Bell Solar Battery--the world's first device to successfully convert useful amounts of sunlight directly into electricity. Serves as the very foundation for solar energy production as well as photo-detector technology. Peter Hirsch and coworkers provide experimental verification by transmission electron microscopy of dislocations in materials. Not only is dislocation theory verified unequivocally, but the power of transmission electron microscopy in materials research is demonstrated. Jack Kilby integrates capacitors, resistors, diodes, and transistors into a single germanium monolithic integrated circuit or "microchip." Makes possible microprocessors and, thereby, high-speed, efficient, convenient, affordable, and ubiquitous, computing and communications systems. Frank VerSnyder develops the directionally solidified columnar-grained turbine blade Enables performance enhancements for jet engines, saving airlines millions of dollars per year in fuel costs alone. Pol Duwez uses rapid cooling to make a gold-silicon alloy that remains amorphous at room temperature. Represents the first true metallic glass, which has been applied in transformer cores and offers significant potential. 
Richard Feynman presents "There's Plenty of Room at the Bottom" at a meeting of the American Physical Society Introduces the concept of nanotechnology (while not naming it as such). Arthur Robert von Hippel publishes Molecular Science and Molecular Engineering. Creates an emerging discipline aimed at designing new materials on the basis of molecular understanding. Stephanie Kwolek invents the high-strength, low-weight plastic Kevlar. Improves the performance of tires, boat shells, body armor, components for the aerospace industry, and more. Cambridge Instruments introduces a commercial scanning electron microscope. Provides an improved method for the high-resolution imaging of surfaces at greater magnifications and with much greater depth of field than possible with light microscopy. Karl J. Strnat and coworkers discover magneto-crystalline anisotropy in rare-earth cobalt intermetallic compounds. Leads to the fabrication of high-performance permanent magnets of samarium-cobalt and, later, neodymium-iron-boron for use in electronic devices and other areas. Larry Hench and colleagues develop bioactive glass for orthopedic use. Changes the paradigm in biomaterials to include interfacial bonding of the implant with host tissues. James Fergason, utilizing the twisted nematic field effect, makes first operating liquid crystal displays. Completely redefines many products and applications, including computer displays, medical and industrial devices, and the vast array of consumer electronics. Bob Maurer, Peter Schultz and Donald Keck invent low-loss optical fiber. Provides the basis for the increased bandwidth that revolutionized telecommunications. Hideki Shirakawa, Alan MacDiarmid, and Alan Heeger announce the discovery of electrically conducting organic polymers. Leads to the development of flat panel displays using organic light-emitting diodes, solar panels, and optical amplifiers. Heinrich Rohrer and Gerd Karl Binnig invent the scanning tunneling microscope. Provides three-dimensional atomic-scale images of metal surfaces and quickly becomes widely used in research to characterize surface roughness and observe surface defects. Robert Curl, Jr., Richard Smalley, and Harold Walter Kroto discover that some carbon arranges itself in the form of soccer-ball-shaped molecules with 60 atoms called buckminsterfullerenes or "buckyballs." Opens the possibility that carbon can assume an almost infinite number of different structures. Paul Chu creates a superconducting yttrium-barium-copper oxide ceramic. Opens the possibility of large-scale application of superconducting materials Don Eigler spells out "IBM" with individual xenon atoms using a scanning tunneling electron microscope Demonstrates that atoms can be manipulated one by one, the basis for "bottoms-up" production of nanostructures. Sumio Iijima discovers nanotubes, carbon atoms arranged in tubular structures. Creates expectations of important structural and nonstructural applications as nanotubes are about 100 times stronger than steel at just a sixth of the weight while also possessing unusual heat and conductivity characteristics. Eli Yablonovich produces "photonic crystals" by drilling holes in a crystalline material so that light of a certain wavelength cannot propagate in the material. Forms a basis for the development of "photonic transistors." TMS LogoAbout TMS © Copyright The Minerals, Metals & Materials Society.
Berkeley Lab Research News

December 15, 2005

A Theoretical Breakthrough Inspired by Experiment: Calculating Electron Correlations in the Hydrogen Molecule

BERKELEY, CA – Need to understand the details of how a molecule is put together? Want to see the effects of the intricate dance that its electrons do to make a chemical bond? Try blowing a molecule to bits and calculating what happens to all the pieces. That's the approach taken by an international group of collaborators from the University of California at Davis, universities in Spain and Belgium, and the Chemical Sciences Division of the Department of Energy's Lawrence Berkeley National Laboratory.

Figure: A hydrogen molecule hit by an energetic photon breaks apart. The ejected electrons, blue, take paths whose possible trajectories (here represented by net-like lobes) depend on how far apart the hydrogen nuclei, red, are at the moment the photon strikes. The bond length at that instant reflects how the molecule's electrons are correlated. (Courtesy Wim Vanroose)

When a hydrogen molecule, H2, is hit by a photon with enough energy to send both its electrons flying, the two protons left behind — the hydrogen nuclei — repel each other in a so-called Coulomb explosion. In this event, called the double photoionization of H2, the paths taken by the fleeing electrons have much to say about how close together the two nuclei were at the moment the photon struck, and just how the electrons were correlated in the molecule.

Correlation means that properties of the particles like position and momentum cannot be calculated independently. When three or more particles are involved, calculations are notoriously intractable, both in classical physics and quantum mechanics. In the 16 December 2005 issue of Science the researchers report on the first-ever complete quantum mechanical solution of a system with four charged particles.

The groundbreaking calculations were inspired by earlier experiments on the photofragmentation of deuterium (heavy hydrogen) molecules, performed at beamline 9.3.2 of Berkeley Lab's Advanced Light Source (ALS) in 2003 by a group of scientists from Germany, Spain, and several institutions in the United States. The experimenters were led by Thorsten Weber, then with the ALS and now at the University of Frankfurt.

"If you were trying to do this experiment and you didn't have access to the Advanced Light Source and a COLTRIMS experimental device" — a sophisticated, position-sensitive detector for collecting electrons and ions — "you'd just fire photons at a random sample of hydrogen molecules and measure the electrons that came out," says Thomas Rescigno of Berkeley Lab's Chemical Sciences Division, one of the authors of the Science paper. "What made this experiment special was that they could measure what happened to all four particles. From their precise positions and energy they could reconstruct the state of the molecule when it was hit."

Weber presented early experimental data at a seminar attended by Rescigno, William McCurdy of the Lab's Chemical Sciences Division, who is also a professor of chemistry at the University of California at Davis, and Wim Vanroose, a postdoctoral fellow at Berkeley Lab who is now at the Department of Computer Science at the Katholieke Universiteit Leuven in Belgium. Says Rescigno, "Thorsten teased us with his results, some of which were extremely nonintuitive.
What was remarkable was that very small differences in the internuclear distance" — the distance between the two protons at the moment the photon was absorbed — "made for radical differences in the ways the electrons were ejected."

"When I saw the results of the molecular experiments, in which small changes in the internuclear distance produced large and unexpected changes in the electron ejection patterns, it immediately occurred to me that the differences were because of the molecule's effects on electron correlations," McCurdy says.

McCurdy had recently been working with Fernando Martín, a professor of chemistry at the Universidad Autónoma de Madrid, merging computational techniques developed by Martín with a method McCurdy, Rescigno, and others had developed for calculating systems of three charged particles. Martín and McCurdy extended these methods to the helium atom, a system that, technically speaking, has four charged particles. But because the helium atom's two protons are bound together in the nucleus, the calculated distribution of electrons ejected by the absorption of an energetic photon tends to be quite symmetrical around the nucleus, with most pairs flying off in opposite directions.

The picture can look quite different for a hydrogen or deuterium molecule, in which a plot of the likelihood that electrons will be ejected at certain angles groups into lobes that grow increasingly asymmetric as the bond length between the two hydrogen atoms grows longer. McCurdy read this as the effect of the bond length on the correlation of the shared electrons. Indeed, this is what Weber and his colleagues speculated when they published the results of their deuterium photofragmentation studies in Nature in 2004.

Rescigno pointed out a fly in the ointment, however — namely that instead of being caused by electron correlations, large differences in ejection patterns caused by small differences in internuclear distance "could just be kinematics." In other words, the scattered electrons might be sharing some of the potential energy stored by the Coulomb repulsion between the two like-charged protons. The closer together these two nuclei are at the moment the photon breaks up the molecule, the more energy goes into the Coulomb explosion, some of which could be transmitted to the outgoing electrons and affect their flight paths.

How to decide between kinematic effects and electron correlations? The experimental results could not address the question, since all the data were collected at the same photon energy; whether the electrons were acquiring additional kinetic energy was unknown. But, says Vanroose, "because we were doing computations, we could do experiments the experimenters couldn't do. We had much more flexibility to fix the conditions."

Using supercomputers at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Berkeley Lab, at UC Berkeley, and in Belgium, Vanroose was able to rerun the hydrogen molecule experiments "in silico," this time with different photon energies, distributed so that the outgoing electrons always shared exactly the same kinetic energy no matter what the distance between the protons at the moment of photon absorption. The results turned out to be remarkably similar in all cases. Even when kinetic energy made no contribution, the electrons flew off in patterns determined by the length of the bond between the nuclei.
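To get a feel for the energy bookkeeping behind that numerical trick, here is a minimal back-of-the-envelope sketch, not the group's actual code: the constants, the chosen kinetic energy, and the function name photon_energy are all illustrative assumptions, and the distance dependence of the electronic binding energy is deliberately frozen. The idea is simply that the nuclear Coulomb repulsion, roughly 14.4 eV·Å divided by the bond length R, grows as the bond shortens, so the photon energy must rise in step if the electron pair is to keep a fixed kinetic energy:

```python
# Sketch: pick photon energies so the two ejected electrons always share the
# same kinetic energy, whatever the internuclear distance R happens to be.
# Illustrative numbers only; the actual work solves the full four-particle
# Schroedinger equation on supercomputers.

COULOMB_EV_ANGSTROM = 14.4   # e^2 / (4*pi*eps0) expressed in eV * Angstrom
ELECTRONIC_COST_EV = 31.6    # assumed cost of freeing both electrons, with
                             # the nuclear Coulomb repulsion counted separately
ELECTRON_PAIR_KE_EV = 25.0   # kinetic energy the electron pair should share

def photon_energy(r_angstrom):
    """Photon energy (eV) that pays the ionization cost, funds the Coulomb
    explosion at bond length R, and leaves a fixed electron kinetic energy."""
    coulomb_explosion = COULOMB_EV_ANGSTROM / r_angstrom
    return ELECTRONIC_COST_EV + coulomb_explosion + ELECTRON_PAIR_KE_EV

for r in (0.5, 0.74, 1.0, 1.4):  # bond lengths in Angstroms
    print(f"R = {r:4.2f} A -> photon energy = {photon_energy(r):5.1f} eV")
```

With the electron energies pinned in this way, kinematics drops out as a variable, and whatever still changes with R in the computed ejection patterns must come from somewhere else.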
Therefore the differences were due almost entirely to the way the electrons were correlated in their orbital paths around the molecule's two nuclei.

Martín of UA Madrid sees the new calculations, which are a complete numerical solution of the Schrödinger equation for the photoionization of H2, as "just the beginning. Probing the complicated physics of electron correlations will lead the way to more comprehensive methods combining theory and experiment to address some of the most pressing problems in chemistry."

Vanroose credits their success to day-in, day-out collaboration between top-notch theorists and experimenters at Berkeley Lab, "who are talking to each other all the time. The ability of experimentalists to call on the latest computational techniques is good for both; it's why we're two years ahead of other theorists in this field."

To Rescigno, the latest results show that "what began as blue-sky physics theory is now connecting with the nuts and bolts of practical experiment."

Says McCurdy, "These large-scale theoretical calculations, stimulated by the need to interpret novel experiments at the ALS, are already stimulating new experiments and establishing a new line of inquiry at Berkeley Lab."

"Complete photo-induced breakup of the H2 molecule as a probe of molecular electron correlation," by Wim Vanroose, Fernando Martín, Thomas N. Rescigno, and C. William McCurdy, appears in the 16 December 2005 issue of Science.

"Complete photo-fragmentation of the deuterium molecule," by T. Weber, A. O. Czasch, O. Jagutzki, A. K. Müller, V. Mergel, A. Kheifets, E. Rotenberg, G. Meigs, M. H. Prior, S. Daveau, A. Landers, C. L. Cocke, T. Osipov, R. Díez Muiño, H. Schmidt-Böcking, and R. Dörner, appeared in the 23 September 2004 issue of Nature.
Hi, I'm currently learning Hamiltonian and Lagrangian mechanics (which I think also encompasses the calculus of variations), and I've also grown interested in functional analysis. I'm wondering: is there any connection between functional analysis and Hamiltonian/Lagrangian mechanics? Is there a connection between functional analysis and the calculus of variations? What is the relationship between functional analysis and quantum mechanics? I hear that functional analysis was developed in part by the need for a better understanding of quantum mechanics.

The answer to your questions is yes. In particular, for quantum mechanics, see von Neumann, J.: Mathematical Foundations of Quantum Mechanics. Anyway, you can also find more information and references about these relations in Wikipedia. – Leandro Jul 1 '10 at 0:07

Also see Reed and Simon, Methods of Modern Mathematical Physics, vols. 1-4. One might argue that the entire tome (well, maybe less so the first half of volume 2 and parts of volume 3) is about the application of functional analysis as inspired by the study of the Schrödinger equation. – Willie Wong Jul 1 '10 at 0:13

@Willie: I'm very much a non-applications kind of analyst, but doesn't very basic linear ODE theory have a tinge of functional analysis -- at least in early attempts to get somewhere? – Yemon Choi Jul 1 '10 at 3:21

@Yemon: The proof of Picard-Lindeloef (and cousins) is a functional analysis proof, since it's a fixed point theorem in Banach spaces. It still doesn't give the theory a functional analytic flavour. The key problem is that the functions one considers do not live in nice spaces. (Exceptions are known, e.g. Sturm--Liouville theory, but that is more quantum mechanics.) – Helge Jul 1 '10 at 9:39

@Yemon: I am going to channel a physicist acquaintance of mine to illustrate why I don't really consider the sort of stuff in basic ODE theory functional analysis (though you are absolutely right that there is an application of functional analysis). He said, during a (physics) seminar, to the nodding approval of the (physics) big wigs in the room: "... and as we all know, ODEs good; PDEs bad." – Willie Wong Jul 1 '10 at 10:25

6 Answers

[accepted answer]

(1) Depends on what you mean by Hamiltonian and Lagrangian mechanics. If you mean the classical mechanics aspect as in, say, Vladimir Arnold's "Mathematical Methods in ..." book, then the answer is no. Hamiltonian and Lagrangian mechanics in that sense have a lot more to do with ordinary differential equations and symplectic geometry than with functional analysis. In fact, if you consider Lagrangian mechanics in that sense as an "example" of the calculus of variations, I'd tell you that you are missing out on the full power of the variational principle.

Now, if you consider instead classical field theory (as in physics, not as in algebraic number theory) derived from an action principle, otherwise known as Lagrangian field theory, then yes, calculus of variations is what it's all about, and functional analysis is King in the Hamiltonian formulation of Lagrangian field theory.

Now, you may also consider quantum mechanics as "Hamiltonian mechanics", either through first quantization or through considering the evolution as an ordinary differential equation in a Hilbert space.
Then through this (somewhat stretched) definition, you can argue that there is a connection between Hamiltonian mechanics and functional analysis, just because to understand ODEs on a Hilbert space it is necessary to understand operators on the space.

(2) Mechanics aside, functional analysis is deeply connected to the calculus of variations. In the past forty years or so, most of the development in this direction (that I know of) has been within the community of nonlinear elasticity, in which the objects of study are the existence and regularity of stationary points of certain "energy functionals". The methods involved found most applications in elliptic-type operators. For evolutionary equations, functional analysis plays less well with the calculus of variations for two reasons: (i) the action is often not bounded from below, and (ii) reasonable spaces of functions often have poor integrability, so it is rather difficult to define appropriate function spaces to study. (Which is not to say that they are not done, just less developed.)

(3) See Eric's answer and my comment about Reed and Simon about the connection of functional analysis and quantum mechanics.

Well, I'm not sure about classical mechanics, but functional analysis certainly has many applications in quantum mechanics via the modeling of wavefunctions by PDEs and operators defined on Hilbert and Banach spaces. A great book for beginning the study of these properties is the classic text by S.L. Sobolev, Some Applications of Functional Analysis in Mathematical Physics, now I believe in its 4th edition and available through the AMS. A more comprehensive text is the 4-volume work by Barry Simon and Louis Reed, which covers not only basic functional analysis, but all the basic applications to modern physics, such as spectral analysis and scattering theory. Lastly, some less well-known applications can be found in Elliott Lieb and Michael Loss' Analysis.

While Lou Reed has surely enriched the lives of many mathematicians, it is primarily through his musical work with the Velvet Underground rather than any collaboration with Barry Simon. You must be thinking of the mathematician Michael Reed. – Tom LaGatta Jul 1 '10 at 21:22

One of the biggest problems in mathematical physics is actually to understand the link between Hamiltonian/Lagrangian mechanics and functional analysis. This is because classical mechanics is formulated in the former setting while quantum mechanics is formulated in the functional analysis setting. The act of going from classical mechanics to quantum mechanics is called quantization and basically consists of assigning functional analytic operators to classical observables, in a way that respects the Poisson and Lie brackets. For example, in canonical quantization we assign to position the operator of multiplication by x, and we assign to momentum the operator $-i\frac{d}{dx}$. Both of these act on (a dense subset of) the space $L^2(\mathbb R)$, which is taken to be the space of wave functions in one dimension. You may want to take a look at the orbit method, which is the mathematics involved in a quantization scheme called geometric quantization. Some relevant MO discussions about this are: What is Quantization? What does "quantization is not a functor" really mean?
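As a concrete footnote to the quantization example just given, here is a small symbolic check of my own (a sketch using SymPy, with hbar set to 1) that multiplication by x and $-i\frac{d}{dx}$ satisfy the canonical commutation relation $[X, P] = i$ on a smooth test function:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)  # an arbitrary smooth test function

X = lambda g: x * g                  # position: multiplication by x
P = lambda g: -sp.I * sp.diff(g, x)  # momentum: -i d/dx (hbar = 1)

commutator = X(P(f)) - P(X(f))
print(sp.simplify(commutator))       # prints I*f(x), i.e. [X, P] = i
```

It is exactly this nonzero commutator that blocks simultaneous sharp values for position and momentum, and it is where the functional analysis begins.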
Hamilton-Jacobi PDE is a formulation of classical mechanics (as far as I understand; I am no expert in physics), and the unique weak solution is found by a certain calculus of variations problem inspired by optimal control theory. Hamilton-Jacobi is also, I think, somewhat related to the Schrödinger equation.

Very good point. HJE skipped my mind (maybe because the OP mentioned explicitly Hamiltonian and Lagrangian mechanics). So it does bring in a tie to the calculus of variations. And as a PDE, the general existence of the solution does have a bit of a flavour of functional analysis. – Willie Wong Jul 1 '10 at 10:11

One instance where classical mechanics has to be treated with 'functional analysis' is infinite-dimensional systems. The prototypical example is the Korteweg-de Vries equation $$ u_t + u_{xxx} + 6 u u_x = 0 $$ which a priori looks like a non-linear PDE. The key now is that it is completely integrable, which means that one can associate to it an equivalent evolution for operators on Hilbert spaces. Define $$ L(t) = - \frac{d^2}{dx^2} + u(x,t) $$ as an operator on $L^2(\mathbb{R})$. Then this operator obeys $$ L_t = [P, L], $$ where $P$ is another operator one can construct from $u$. (The specific form doesn't matter.) The operators $P$ and $L$ are known as a Lax pair. (The $P$ stands for Peter, not for Pair ☺ ). This is just the Heisenberg picture of quantum mechanics, so one can use the tools developed there, i.e. functional analysis, to investigate this equation. Of special importance is something known as scattering theory. Just on a final point: KdV is a limit of Navier--Stokes, which is a classical system.

P.S.: In shameless self-promotion, for some details on another system, the Toda lattice, where it is easier to see that it is classical mechanics (one can write down the Hamiltonian easily), see here. I just made the post about KdV, since it is well-known.

I think you may have copied the KdV equation wrong. (Check the last term on the LHS.) And if you are going to mention scattering theory, you might as well spell out that $(L,P)$ are what is known as a Lax pair to aid people in literature searching. :) – Willie Wong Jul 1 '10 at 10:18

Fixed these things. Unfortunately, this forum does not support smileys. There should be an ;-) somewhere instead of &#9786; – Helge Jul 1 '10 at 11:33

i see the smiley just fine. – Willie Wong Jul 1 '10 at 11:54

There is a very good discussion of this issue in L. Takhtajan's excellent text Quantum Mechanics for Mathematicians; see especially section 2.1. Chapter 1 also treats classical mechanics in a way that naturally extends to the quantum picture. The idea as I read it is this: both classical and quantum mechanics consider some underlying phase space, and a collection of observables, physical values you can measure. These naturally form an algebra. In classical mechanics you assume that you can measure different observables simultaneously without the measurements affecting one another; this turns out to correspond to the condition that the algebra of observables is commutative. A good example is thinking of observables as continuous functions on the phase space, and the Gelfand representation says that this is essentially the only example. So a functional analysis result says that you don't need to do too much functional analysis here (or rather, it's of a fairly trivial kind). In quantum mechanics, the algebra of observables might not be commutative.
A good example of such a thing is operators on a Hilbert space (again, in some sense the only example). If you could use a finite-dimensional Hilbert space, you'd just be doing linear algebra. But it turns out the commutation relations that the physics requires can only be satisfied by unbounded operators. This forces you to use infinite-dimensional Hilbert spaces, and puts you into the realm of functional analysis.
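That last point has a compact numerical illustration (again my own sketch, not part of the answer): in any finite dimension n, the trace of a commutator vanishes, while the trace of $i I$ is $i n \neq 0$, so no finite matrices can realize $[X, P] = i I$, and the canonical pair is forced onto an infinite-dimensional space:

```python
import numpy as np

# In finite dimensions [X, P] = i*I is impossible: trace(X@P - P@X) = 0
# always, while trace(i*I) = i*n != 0. Random matrices make this concrete.
rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
P = rng.standard_normal((n, n))

print(np.trace(X @ P - P @ X))   # ~0, up to floating-point rounding
print(np.trace(1j * np.eye(n)))  # 4j, which no commutator can match
```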
Quantum Physics: 1004 Submissions

[9] viXra:1004.0121 [pdf] submitted on 25 Apr 2010
Equivalence of Maxwell's Source-Free Equations to the Time-Dependent Schrödinger Equation for a Solitary Particle with Two Polarizations and Hamiltonian |cp|
Authors: Steven Kenneth Kauffmann
Comments: 17 pages. Also archived as arXiv:1004.1820 [physics.gen-ph].
It was pointed out in a previous paper that although neither the Klein-Gordon equation nor the Dirac Hamiltonian produces sound solitary free-particle relativistic quantum mechanics, the natural square-root relativistic Hamiltonian for a nonzero-mass free particle does achieve this. Failures of the Klein-Gordon and Dirac theories are reviewed: the solitary Dirac free particle has, inter alia, an invariant speed well in excess of c and staggering spontaneous Compton acceleration, but no pathologies whatsoever arise from the square-root relativistic Hamiltonian. Dirac's key misapprehension of the underlying four-vector character of the time-dependent, configuration-representation Schrödinger equation for a solitary particle is laid bare, as is the invalidity of the standard "proof" that the nonrelativistic limit of the Dirac equation is the Pauli equation. Lorentz boosts from the particle rest frame point uniquely to the square-root Hamiltonian, but these don't exist for a massless particle. Instead, Maxwell's equations are dissected in spatial Fourier transform to separate nondynamical longitudinal from dynamical transverse field degrees of freedom. Upon their decoupling in the absence of sources, the transverse field components are seen to obey two identical time-dependent Schrödinger equations (owing to two linear polarizations), which have the massless free-particle diagonalized square-root Hamiltonian. Those fields are readily modified to conform to the attributes of solitary-photon wave functions. The wave functions' relations to the potentials in radiation gauge are also worked out. The exercise is then repeated without the considerable benefit of the spatial Fourier transform.
Category: Quantum Physics

[8] viXra:1004.0120 [pdf] submitted on 24 Apr 2010
Condensed Light - A Reinterpretation of Quantum Mechanics and Relativity
Authors: Niels Vandamme
Comments: 7 pages
This paper propounds several hypotheses which offer an alternate explanation for some of the real or purported effects encountered in quantum mechanics and relativity, giving a mechanical explanation for the absolute speed of light, the conversion of matter to energy, and the observed superluminal expansion of the universe.
Category: Quantum Physics

[7] viXra:1004.0089 [pdf] replaced on 12 May 2010
Quantizing Time and Space - From the Standing Wave to the Primary Gas Structure of a Particle - V
Authors: V.A. Induchoodan Menon
Comments: 21 pages
The author introduces the concept of a primary gas, which is an abstract gas where the microstates are occupied successively in time, unlike in the case of a real gas where the microstates are occupied simultaneously. He shows that a single plane wave associated with a standing wave formed by the confinement of a luminal wave could be treated as the microstate of the primary gas that represents a particle. This approach makes it possible to understand the dynamics of a particle in terms of the thermodynamics of the primary gas. In this approach, time and space turn out to be the intrinsic properties of the primary gas that represents a particle, and the quantized nature of time and space emerges from it in a natural manner.
It is shown that the action (with a negative sign) of a particle could be identified with the entropy of the primary gas and that the principle of least action is nothing but the second law of thermodynamics. The author shows that the uncertainty relation of quantum mechanics can be derived directly from the equation for fluctuations, and he explains the statistical basis of the virtual interactions.
Category: Quantum Physics

[6] viXra:1004.0078 [pdf] submitted on 12 Apr 2010
The "Measurement Problem" in Quantum Physics Can Be Partly Resolved with Analysis of Relatedness Between Space-Time, Physical Time and Psychological Time
Authors: Amrit S. Sorli
Comments: 5 pages
Clocks are systems for measuring the frequency, velocity, duration, and numerical order t0, t1, t2, ..., tn of physical events. Time t obtained with clocks is not a fourth dimension X4 of space; time t is only a component of X4 = i * c * t. This view of clock/time as a measuring system sees physical phenomena running exclusively in space and not in time. This view is supported by several experiments which confirm that the time t of a physical event can be zero. Time is not part of space; time is the run of clocks in space. Past, present, and future exist as a psychological time in the mind only, not in the universe. We experience motion, i.e., change in space, through the frame of psychological time. We "project" linear psychological time "past-present-future" into space; however, it is not there. An observer who distinguishes between space-time, physical time, and psychological time is aware that in a quantum measurement he only measures physical events in space and not in time. Clock/time is merely a measuring device. With this understanding, the observer's observation, measurement, and experience of quantum phenomena are closer to their real nature. The stream of numerical order of quantum phenomena t0, t1, t2, ..., tn runs in space only and not in time. A stream of quantum phenomena has no duration of its own. Duration is the result of measurement.
Category: Quantum Physics

[5] viXra:1004.0073 [pdf] submitted on 10 Apr 2010
The Wave Structure of the Electric Field
Authors: Michael Harney
Comments: 5 pages
Maxwell's equations describe the interactions of the electromagnetic field at a macroscopic level. In the 1920s, Louis de Broglie demonstrated that every moving particle (including an electron) has a wave nature, and we know from Einstein that every wave has a particle nature, which we call the photon. Later, in the 1930s, Paul Dirac's development of the famous Dirac equation showed the quantum nature of the electron at relativistic speeds. Then in 1948 Richard Feynman and Julian Schwinger extended these concepts in the development of quantum electrodynamics, which gives a full accounting (although a very strange one) of how an electron can borrow energy from the vacuum of space and return it legally as long as it does so within the limits of the uncertainty principle.
Category: Quantum Physics

[4] viXra:1004.0072 [pdf] submitted on 10 Apr 2010
Application of Wheeler-Feynman Absorber Theory to Laser Power Output
Authors: Michael Harney, Michael Weber, Milo Wolff
Comments: 2 pages
The method described is designed to increase laser output power using a concept from Wheeler-Feynman absorber theory [1,2] and the work of Tetrode [3], where photons are modeled as sources of energy that must also have a sink (an electron) to be absorbed. According to Wheeler-Feynman and Tetrode, if an electron is not present to absorb the photon, then the photon can never be emitted.
In Wheeler-Feynman absorber theory, advanced and retarded potentials resemble time-reversal equations because there must be communication faster than light between the source-photon and the sink-electron, reasoned Feynman, so that the source photon's atom would know whether to emit a photon. This enigma was resolved by Milo Wolff in his work "Exploring the Physics of the Unknown Universe" [4], where he describes the use of spherical scalar in-waves and out-waves that travel at c and whose local speed is based on local mass density. The in-out waves form electrons and also allow communication between them.
Category: Quantum Physics

[3] viXra:1004.0046 [pdf] submitted on 6 Apr 2010
Looking For Roots In All The Wrong Places: A Comment on ArXiv:1003.5008
Authors: Ron Bourgoin
Comments: 4 pages
The authors of ArXiv:1003.5008 tell us they are searching for the foundations of quantum mechanics, a theory they say was born early in the twentieth century. As a matter of fact, the theory was born in the eighteenth century.
Category: Quantum Physics

[2] viXra:1004.0036 [pdf] submitted on 5 Apr 2010
Higgs Field and the Creation of Mass - Standing Wave Structure of the Electron - IV
Authors: V.A. Induchoodan Menon
Comments: 12 pages
The author develops his idea of the standing electromagnetic half-wave structure of the electron and proposes that the confinement of the wave is effected by interactions with the Higgs field, which can be explained on the basis of the uncertainty principle. These interactions allow the vacuum to act like a thermal bath with the standing half wave in equilibrium with it. It is shown that this equilibrium is not destroyed even when it is in uniform translational motion. This invariance of the equilibrium under velocity transformation is another way of looking at the theory of relativity.
Category: Quantum Physics

[1] viXra:1004.0035 [pdf] replaced on 8 Aug 2010
The Moebius Strip: a Biology of Elementary Particles
Authors: Giuliano Bettini
Comments: v3, 169 pages, in Italian; v4, 164 pages, in English.
A book of semi-qualitative ideas on the electron, quarks, and life. We intend to build a purely electromagnetic picture of all interactions and elementary particles, in particular the electron and quarks. This would even force the idea of a single universal vibration, a single field. The electron is interpreted as a small electric current carrying the elementary charge, the elementary mass, and Planck's quantum of action. With the aid of a little math, we identify the electron as an electromagnetic half wave closed on a Moebius strip. This is equivalent to a full wavelength making two turns on the border. It is also probably not totally irrelevant to note that this leads to interesting numerics on the fine structure constant. We identify a quark with a confined electromagnetic wave which is not sufficient in itself to complete a closed loop in space. So quarks are pictured as 1/3 and 2/3 of a full wavelength. A space model of their combination leads in a unique way to the entire set of all and only the mesons and baryons. In a quite spontaneous way the color theory is also interpreted. Finally, the various helices of quarks are interpreted as living organisms, and similarities with biological behaviour are shown. The arguments here are of course admittedly primitive and mainly qualitative, even if supported with some math, but to my knowledge this overall conjecture has not been discussed elsewhere, and therefore may be useful for further research.
Category: Quantum Physics
Sunday, July 6, 2014

Why Jesus died many times for our sins

St. Augustine was sure that Jesus died just once for our sins. However, Jesus died not only in our particular universe but also in many other parallel universes that are as real as ours. Let's explore the chain of reasoning behind this claim.

One assumption is that whether a particular parallel universe exists falls within the field of astrophysics, not theology or logic. Astrophysics' well-accepted Big Bang theory with eternal inflation implies a multiverse containing an unlimited number of parallel universes obeying the same scientific laws as in our particular universe. These other universes (which the physicist Max Tegmark calls Type 1 universes) are distant parts of physical reality. They are not abstract objects. Some contain flesh and blood beings. Parallel universes are not parallel to anything. They are very similar to what David Lewis called possible worlds, but they aren't the same because his possible worlds must be spatiotemporally disconnected from each other.

I cannot state specific criteria for transuniverse identity, but we do need the assumption that, in a universe, personal identity (whatever it is) supervenes on the physical realm. That is, a person can't change without something physical changing. It is also reasonable to require that in any parallel universe in which Jesus exists he has Mary and Joseph as parents.

The claim that Jesus in our universe is identical to Jesus in another universe does conflict with the intuitively plausible metaphysical principle that a physical object is not wholly in two places at once. This principle is useful to accept in our ordinary experience, but it is not accepted in contemporary physics. The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once. This is why physicists prefer to say the nucleus of a hydrogen atom is surrounded by an electron cloud rather than by an electron. In the double-slit interference experiment, a single particle goes through two slits at the same time. So, the metaphysical principle should not be used a priori to refute our claim about the transuniverse identity of Jesus.

Our universe is the product of our Big Bang that occurred 13.8 billion years ago. It is approximately that part of physical reality we can observe, which is an expanding sphere with the Earth at the center; the light reaching us from its edge has been traveling for at most 13.8 billion years, though expansion has since carried that edge out to a radius of about 46 billion light-years. Our universe once was a tiny bit of explosively inflating material. The energy causing the inflation was transformed into a dense gas of expanding hot radiation. This expansion has never stopped. But with expansion came cooling, and this allowed individual material particles to condense from the cooling radiation and eventually to clump into atoms and stars and then into Jesus.

The other Type 1 parallel universes have their own Big Bangs, but they are currently not observable from Earth. However, they are expanding and might eventually penetrate each other. But they might not. It all depends on whether the inflation of dark energy is creating intervening space among the universes faster than the universes can expand toward each other. Scientists don't have a clear understanding of which is the case.

Why trust the Big Bang theory with eternal inflation? Is it even scientific, or is it mere metaphysical speculation? The crude answer is that the theory has no better competitors, and it has been indirectly tested successfully.
Its testable implications are, for example, that the results of measuring cosmic microwave-background radiation reaching Earth should have certain specific quantitative features. These features have been discovered—some only in the last five years. The theory also implies a multiverse of parallel universes having our known laws of science but perhaps different histories. If we accept a theory for its testable implications, then it would be a philosophical mistake not to accept its other implications.

One other important assumption being made is that the cosmic microwave-background experiments have not detected any overall curvature in our universe because our universe is in fact not curved. Our universe being curved but finite is also consistent with all our observations. Similarly, if you are standing on a very large globe, it can look flat to you. If our 3-D universe is finite but curved like the surface of a 4-D hypersphere, then space would be extremely large with a very small curvature, but there would be only a finite number of parallel universes, and the argument about Jesus would break down. The most common assumption now among astrophysicists is that our universe is in fact infinite, the multiverse is infinite, and matter is approximately uniformly distributed throughout the multiverse.

As Max Tegmark has pointed out, twenty years ago there were many astrophysicists opposed to parallel universes. They would say, "The idea is ridiculous, and I hate it." Now, there are few opponents of parallel universes, and they say, "I hate it."

Having established that there are infinitely many parallel universes with the same laws but perhaps different histories, let's return to the issue of whether Jesus died in more than one of them. One implication of the Big Bang theory with eternal inflation is that some universes are exact duplicates of each other. Here is why. If you shuffle a deck of playing cards enough times, then eventually you will have duplicate orderings. The duplicate orderings are the same, not just "David Lewis counterparts." Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates.

One controversial assumption used here is the holographic principle: even if spacetime were continuous, it is effectively discrete or pixelated at the Planck level. This means that it can make no effective difference to anything if an object is at position x meters as opposed to position x + 10^-35 meters.

This completes the analysis of the chain of reasoning for why Jesus died more than once for our sins. Have you noticed any weak links?

Brad Dowden
Department of Philosophy
Sacramento State

1. Brad, this is very interesting and fun, thanks. I could talk to you about this all day, but I'll confine my question to the idea that a particle cannot be in two places at once. First, I wonder what it means to say that "The Schrödinger equation of quantum field theory describes the extent to which a particle is wholly in many places at once." If someone asks, "Is particle P wholly at location L?" the answer "To some extent" seems to mean the same thing as "No" (on the assumption that some extent is less than wholly). My understanding of the Schrödinger equation is that it tells us the probability that a particle is at any particular location, with the meaning of that statement varying depending on the resolution of the measurement problem you favor.
Your remark seems to me to be most consistent with the Many Worlds Interpretation, where the probabilities represented in the Schrödinger equation may reflect the distinct worlds that actually exist. However, you do not make any reference to the Many Worlds Interpretation, and as far as I know, physicists are currently not at all sure what the relation is between the Multiverse and Many Worlds (though Sean Carroll thinks they could be the same). The other question I have is whether we really need to go this route at all. Kant, I believe, would have said that the idea that an object can't be wholly in two places at once is an a priori intuition, and therefore necessarily the case. But he said the same thing about 3-D spacetime, and he was just wrong. So why not simply respond that this intuition can be denied without contradiction, and since the denial fits a very good physical theory, we should deny it? Specifically, why not just say that a particle can be wholly present in only one position in a single universe, but it can be wholly present in multiple positions in multiple universes?

1. For some reason the link to Sean Carroll's piece didn't post.

2. Kant's intuition can be denied without contradiction, and the denial fits well with current physical theory. But I wouldn't want to draw your conclusion that "why not just say that a particle can be wholly present in only one position in a single universe." What fits best with physical theory is that a particle can be in multiple locations in a single universe and also in multiple locations across universes. I mentioned quantum mechanics in my blog post just to make the point that all experts agree that a particle can be in two locations at once in our own universe, despite the violation of common sense. You are right that the Schrödinger equation "tells us the probability that a particle is at any particular location," but don't assume from this that there is a definite location it has. Schrödinger abandoned the idea that a particle has a definite location in our universe. Niels Bohr's Copenhagen Interpretation says a particle is at a definite location "to some extent," meaning the particle is not wholly at any place *when unobserved*. The italicized phrase is what is special about a Copenhagen Interpretation. The Schrödinger equation tells us that particles are here and there at once; a particle is always in a "superposition" of here and there when it is not being observed. Bohr said, "No reality without observation!" I am not that much of an idealist and happen to believe the Copenhagen Interpretation is incorrect because I believe the wave function never collapses. This is the position of Hugh Everett. I was not promoting Everett's Many Worlds Interpretation of quantum mechanics in my blog post, but it is a reasonable position, though still controversial. In the multiverse theory that I was promoting, the many parallel universes are far away; in the Many Worlds theory the universes are disconnected from our space and are neither near nor far. I think if Einstein were alive today he'd reject the Copenhagen Interpretation and go with the Many Worlds Interpretation.
I like the Everett interpretation, too, especially because of the way it seems to take the mystery out of EPR, but I actually do not quite understand exactly how it interprets the Schrödinger equation. On the basis of what I know it seems to me that it is a kind of hidden variable theory in which what we don't know is, not the actual, definite location of the particle in a particular universe, but whether we occupy a universe in which the particle is definitely in this position or definitely in that one. In other words, on the Everett interpretation, there is no collapse of the wave function, so what the wave function tells us is the probability that we are in a certain kind of universe, but this seems compatible with the view that particles have definite locations in every universe.. Is this wrong-headed and can you shed anymore light on this? (I know it is not central to your post.) 4. Randy, in quantum mechanics when talking about point particles such as electrons in a single universe, I believe it’s not helpful to emphasize a difference between not being wholly at any place at one time and being wholly at many places at once. That’s because a particle has no definite location (when it is not measured, according to the Copenhagen Interpretation). Now I used this example, with its implicit endorsement of the Copenhagen Interpretation, in order to suggest that physicists have for a long time been willing to say a particle can be in two places at once within an atom. However, I don’t endorse the Copenhagen Interpretation myself and prefer the Everett Interpretation, which describes the world as you say in your comments. The Everett many-worlds interpretation is compatible with the view that particles have definite locations in every universe, as you say. And, as you say, this removes the mystery from the EPR paradox (although it adds in mystery in another way—by introducing the unintuitive concept of parallel universes). That is the very reason why I commented that Einstein would approve of this interpretation over the Copenhagen interpretation if he were alive today. However, the Everett interpretation is also compatible with the claim that a particle can have two locations—that the particle can be wholly present in two places, namely having locations in two universes. In the last ten years, the Copenhagen Interpretation has fallen out of favor. 2. Thanks Brad for the thoughtful and fun post. I always learn something from you when you talk physics. I am not competent to comment on the physics so I will accept whatever you say on that, for the sake of argument. But I think the physics is a smokescreen. I avoid any conceptual or theological issues surrounding the nature of Jesus, the death of any god, child-sacrifice, etc. These too are not relevant, just like whether death is a singular or permanent event for Jesus or anybody is also tangential. I see two problems. (1) I think there is an implication failure perhaps due to ambiguity. That the same person with counterparts in many (or even all) other worlds dies could be true, and yet it could be false that each one (or any one) dies many times. Possible worlds or parallel worlds talk doesn’t even support the notion that, in any world where Jesus exists, that Jesus even dies once in all of them. (2) Some kind of quantifier shift error threatens. The general point you raise, that some guy died many times, is possible, sort of like the universe could have undergone an eternal series of crunches and expansions is possible. 
However, this fails to support any particular Jesus-story or Big-Bang story. It does not entail that anybody actually died many times in any given world. So "(there exists a) Jesus (who) died more than once for our sins" could be false even if "all Jesuses die in any world where any Jesus exists". Again, the latter claim is false, because there is a possible world or parallel universe where he does not die, or some worlds where his counterpart does not. So I suspect that your presumption is false, but again I don't get the physics: "Similarly, if you have enough finite universes, which are just patterns of elementary particles, and each has a finite number of possible quantum states, then every universe has an infinite number of duplicates." OK, I am not sure what this proves, but isn't it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates? To presume this, I think, is to presume that whatever is possible is inevitable. It was dubious for Nietzsche to assume this in his doctrine of Eternal Recurrence, and it is also dubious to presume it here. In short, Jesus died (or really did not) in this world, once, and that's it. If you want to run worlds in parallel or series, the problem of ambiguity remains. I do not presume that names (proper or otherwise) are rigid designators, so this could be part of my problem with understanding your argument.

1. Scott, you've made some very interesting comments. I agree with some and disagree with some others. You said, "isn't it possible that you could have an infinite series of arrangements of particles (with a finite number of possible states) and never get any duplicates?" I believe the answer is "no." You might shuffle a deck of cards an infinite number of times and still never get it back to its original order. But if you shuffle it a very large, finite number of times, you are absolutely sure of getting two orderings that are the same; by the pigeonhole principle, once the number of shuffles exceeds the number of possible orderings, some ordering must repeat. You don't even need to shuffle it an infinite number of times. You are right that it's mathematically possible there won't be any duplicates of our universe even if an actually infinite number of parallel universes are generated. However, if you start shuffling with the deck in a certain order, then as you shuffle more and more, the probability of getting it back to the original order gets higher and higher and approaches one in the limit of an infinite number of shuffles. This is an implication of a theorem in probability theory, assuming random shuffles. I'd bet my life that you'll get the deck back to its original order eventually. So, to speak epistemologically, I'd say I know you'll produce the duplicate. Ditto for there being multiple Jesuses in the multiverse, assuming random generation of parallel universes. The assumption I left out in my original posting was that the generation of parallel universes is random. In most parallel universes there won't be a Jesus, nor even Homo sapiens. I hope you agree with me that solid historical evidence establishes that there was a Jesus in our universe about two millennia ago. In my blog I chose to talk about Jesus merely as an attention-getter. I could have made the same points talking about Abraham Lincoln.

3. Randy, here's another thought about your recommendation that we say a particle is wholly present at one place in one universe.
In the two-slit diffraction experiment, if you fire particles very slowly at a target, then we can show that the particle goes through both slits and interferes with “itself.” You wouldn't want to say the particle wasn't “wholly present” when it went through the left slit. It's not like the particle's left half went through the left slit, right? But at the same time the particle was also going through the right slit, even though there was exactly one particle fired at the slits. So, the particle was wholly present in two places at once.

4. Brad, do we want to use the term 'particle' here? I thought the point here is that when, say, an electron interferes with itself, it is because it is not behaving as a particle at all, but as a wave. If it were behaving as a particle, then, by definition, it would have to pass through one slit or the other, right? Is there a difference between how Bohr and Everett account for this experiment? As I understand it, Bohr says that the act of measurement collapses the wave function and forces the entity to behave as a particle. But on the Many Worlds interpretation there is no collapse of the wave function, so why the difference when it is measured and when it is not? Why doesn't the entity continue to behave as a wave?

5. Randy, you are raising deep issues about the philosophy of quantum mechanics. Yes, it is better not to use the word “particle” when we are talking about wave interference. However, even if we were to stick to the classical Copenhagen Interpretation, its principle of complementarity allows the two-slit experiment to be considered either as a wave or as a particle experiment, but not both at the same time. So, let's use the word “particle.” When we consider it as a particle experiment, then the particle must go through both slits simultaneously, yet hit the screen behind it at ONE place. It is wholly in two places at once as it goes through the screen. Richard Feynman said this is the essence of quantum weirdness. You asked, “on the Many Worlds interpretation there is no collapse of the wave function, so why the difference when it is measured and when it is not?” One answer is that measurement produces quantum decoherence. You then asked, “Why doesn't the entity continue to behave as a wave?” The answer is that it does. My blog post is basically reporting on the views of Max Tegmark from his 2014 book Our Mathematical Universe. He is a leader of the quantum decoherence idea. According to Tegmark, the reason why there is such a big difference between measured and unmeasured particles is quantum decoherence, or mixing with lots of other particles, not the intrusion of consciousness as the Copenhagen people believe. According to Niels Bohr's Copenhagen Interpretation of quantum mechanics, particles behave strangely only because they are unmeasured by a conscious being. In Schrödinger's thought experiment, in which there's a 50-50 chance of the release of cyanide gas within ten minutes of the room being sealed, the Copenhagen people say Schrödinger's cat is both alive and dead in the room because it is now ten minutes later and no conscious being has yet looked into the room and become aware of the situation. But then someone looks in and the cat is, say, still alive, and this looking collapses the wave function, and that is why we never observe macroscopic objects in quantum superposition. Observing always causes collapse. Wrong, says Tegmark. Consciousness is not what collapses the wave function. It never collapses, and consciousness is unimportant.
Instead, what is important about measurement removing the weirdness is that the measured object gets entangled with the many objects in the measuring tools. Seeing requires bouncing photons off the measured object, which destroys quantum superposition. This destruction via getting entangled with many other objects is what Tegmark calls quantum decoherence. According to Tegmark, the reason why we never see macroscopic objects such as cows and people in two places at once is not because they are macroscopic, and not because they are observed, as the Copenhagen Interpretation hypothesizes, but rather because it is too hard to isolate them from other particles and prevent decoherence. I do not understand this process very well, but the point seems to be that it is an object's interaction with other particles that destroys its quantum superposition via the process of quantum decoherence. But Tegmark's interpretation of quantum mechanics is not yet standard, so in my original post I did not mention it.

6. Brad, thanks, that's very helpful. Charles Seife's book Decoding the Universe has an approving chapter on Tegmark's decoherence explanation. For him, the key conceptual move is to think of information as something that exists in the world, and measurement as the transfer of information from one place to another. Once you do that, there is no problem thinking of nature itself as constantly making measurements. Do you recommend Tegmark's book?

7. Randy, yes, Tegmark's book is very clear and interesting. I'll have to take a look at Seife's book some day.

8. Brad: (This is Cliff, now in the La-La land of retirement.) Interesting as the multiple universe theory is, isn't it the case that there is no way to empirically confirm it? Can we make the jump from: quantum theory is well-confirmed for our universe, and quantum theory implies there are multiple universes, therefore there must be multiple universes? To me that seems to stretch the idea of empirical confirmation too far.

9. Cliff, I worry about all those questions, too, and am not yet convinced of the claim that Jesus died many times for our sins. You have a fine pragmatic attitude toward the issue. Since we can't make predictions about other universes, that is something to worry about. If there were no way to empirically confirm the claim that there exist alternative universes, then I wouldn't believe it either, but there are ways to empirically confirm it—indirectly—because it provides good explanations of observations even if it can't provide predictions that can be tested. You would say this indirect confirmation is too indirect and that it stretches the idea of empirical confirmation too far. Proponents of alternative universes say it is time for science to change and accept this stretching. Here are some relevant quotations from Leonard Susskind in his 2006 book The Cosmic Landscape. “On the theoretical side, an outgrowth of inflationary theory called Eternal Inflation is demanding that the world be a megaverse, full of pocket universes that have bubbled up out of inflating space, like bubbles in an uncorked bottle of champagne.” (p. 21) He calls alternative universes “pocket universes,” and he calls the Level I Multiverse the “megaverse.” “There is very little doubt that we are embedded in a vastly bigger megaverse.” (21-2) “But certainly the critics are correct that in practice, for the foreseeable future, we are stuck in our own pocket with no possibility of directly observing other ones.
Like quark theory, the confirmation will not be direct and will rely on a great deal of theory.” (196) As for rigid philosophical rules, it would be the height of stupidity to dismiss a possibility just because it breaks some philosophers’s dictum about falsifiability. …Just as general are always fight the last war, philosophers are always parsing the last scientific revolution.” (196)
Is there anything in the physics that enforces the wave function to be $C^2$? Are weak solutions to the Schroedinger equation physical? I am reading the beginning chapters of Griffiths and he doesn't mention anything.

Some of this was discussed elsewhere; see «significance of unbounded operators». It is not true that the wave function has to be continuous; it just has to be measurable (i.e., a limit of step functions almost everywhere). Naturally you might wonder what sense Schroedinger's equation makes if you apply it to a step function... but the answer is easier than worrying about distributional weak solutions. The point is that you can solve the time-dependent Schroedinger equation with the exponential $$e^{itH},$$ which is a family of unitary operators and is better behaved than the $H$ you have to use in Schroedinger's equation. The $H$ you have to use, for example $$-{\partial ^2\over\partial x^2} + \mathrm{other\ stuff},$$ is unbounded, and non-differentiable functions are not in its domain. But plugging it into the power series for the exponential converges in norm anyway, and so the resulting operator, being bounded and even unitary on a dense domain of the Hilbert space, can be extended painlessly to the entire space, even to step functions. So it makes more sense to say that the solution to Schroedinger's equation with a given initial condition $\psi_o$ is $$\psi_t (x) = e^{itH} \psi_o (x),$$ and there is no need to bring in distributional weak solutions. These considerations constitute Stone's theorem on one-parameter unitary groups. But such functions are not very important, and indeed it is possible to do all of Quantum Mechanics with smooth functions, especially if you take the attitude that, for example, a square well potential would also be unphysical and is really just a simplified approximation to a physical potential which smoothed off those square corners but had a formula that was unmanageable. See Anthony Sudbery, Quantum Mechanics and the Particles of Nature, which, since it is written by a mathematician, is careful about unimportant issues like this. The family of operators I wrote down is called the time-evolution operators, and they are an example of a one-parameter unitary group, the parameter being time. It is easy to see that if $\psi_o$, the initial condition, the state of the quantum system at time $t=0$, is nice and smooth, then all the future states will be nice and smooth too. Furthermore, all the usual quantum observables have eigenstates which are nice and smooth, so if you perform a future measurement, you will get a function which is nice and smooth, and its future time evolution will remain that way, until the next measurement, etc., until Doomsday. That said, for all practical purposes you may assume all wave functions are smooth, and the only reason to study discontinuous ones is as convenient approximations. The comment one sometimes hears, that a wave function which was not in the domain of the Hamiltonian would «have infinite energy», is nonsense. In Quantum Mechanics, you are not allowed to talk about a quantum system as having a definite value of an observable unless it is in an eigenstate of that observable. What you can ask is what the expectation of that observable would be.
If the wave function $\psi$ is discontinuous and not in the domain of the Hamiltonian, it cannot be an eigenstate, but if its energy is measured, the answer will always be finite. Yet the expectation of its energy does not exist, or you could say the expectation «is infinite». Not the energy—its expectation. There is nothing very unphysical about this, because the expectation itself is not very directly physical: you cannot measure the expectation unless you make infinitely many measurements, and your estimated answer, even for this discontinuous function, will always be finite. It's just that those estimates are wildly inaccurate; the expectation really is infinite (like that of the Cauchy distribution in statistics). But even for such a «bad» wave function, all the axioms of Quantum Mechanics apply: the probability that the energy, if measured, will be 7 erg is calculated the usual way. But these bad wave functions never arise in elementary systems or exercises, so most people think they are «unphysical». And, as I said, if the initial condition is a «good» wave function, the system will never evolve outside of that. This, I think, is connected with the fact that in QM all systems have a finite number of degrees of freedom; this would no longer be true for quantum systems with infinitely many degrees of freedom, such as are studied in Statistical Mechanics.

Right, there's nothing wrong with step functions, delta functions (the derivatives of the former), and others, and that's why physicists freely work with them and never mention artificial mathematical constraints. Still, some discontinuities may make the kinetic energy infinite, so they don't exist in the finite-energy spectrum. I would add that the most natural space of functions to consider is $L^2$, all square-integrable functions. They may be Fourier-transformed or converted to other (discrete...) bases. A subset also has a finite (expectation value of) energy. – Luboš Motl Jan 18 '12 at 7:22

The time-independent Schroedinger equation for the position-space wavefunction has the form $$\left(\frac{-\hbar^2}{2m}\nabla^2 +(V-E) \right)\Psi=0,$$ where $E$ is the energy of that particular eigenstate, and $V$ in general depends on the position. All physical wavefunctions must be some superposition of states that satisfy this equation. At least in nonrelativistic QM, the wavefunction is not allowed to have infinite energy. If the second derivative of the wavefunction does not exist or is infinite, it implies either that $V$ has some property that "cancels out" the discontinuity (as in the infinite square well), or that the wavefunction is continuous and differentiable everywhere. Generally, $\Psi$ must always be continuous, and any spatial derivative of $\Psi$ must exist unless $V$ is infinite at that point.

Here we want to show that there is an easy mathematical bootstrap argument for why solutions to the time-independent 1D Schrödinger equation $$-\frac{\hbar^2}{2m} \psi^{\prime\prime}(x) + V(x) \psi(x) ~=~ E \psi(x) \qquad\qquad (1)$$ tend to be rather nice. First rewrite eq. (1) in integral form $$ \psi(x)~=~ \frac{2m}{\hbar^2} \int^{x}\mathrm{d}y \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z) .\qquad\qquad (2)$$ There are various cases. 1. Case $V \in {\cal L}^2_{\rm loc}(\mathbb{R})$ is a locally square integrable function. Assume the wavefunction $\psi \in {\cal L}^2_{\rm loc}(\mathbb{R})$ as well.
Then the product $(V-E)\psi\in {\cal L}^1_{\rm loc}(\mathbb{R})$, due to the Cauchy–Schwarz inequality. Then the integral $y\mapsto \int^{y}\mathrm{d}z\ (V(z)-E)\psi(z)$ is continuous, and hence the wavefunction $\psi$ on the lhs of eq. (2) is continuously differentiable, $\psi\in C^{1}(\mathbb{R})$. 2. Case $V \in C^{p}(\mathbb{R})$ for a non-negative integer $p\in\mathbb{N}_0$. A similar bootstrap argument shows that $\psi\in C^{p+2}(\mathbb{R})$. The above two cases do not cover a couple of often-used mathematically idealized potentials $V(x)$, e.g., 1. the infinite wall $V(x)=\infty$ in some region (the wavefunction must vanish, $\psi(x)=0$, in this region), 2. or a Dirac delta distribution $V(x)=V_0\delta(x)$. See also here.
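A minimal numerical illustration of the point made in the first answer (my sketch; the grid, the units $\hbar = 2m = 1$, and the choice of initial state are assumptions): a step function is nowhere near the domain of $H=-\partial^2/\partial x^2$, yet the unitary group $e^{-itH}$ evolves it without any trouble.

```python
# Evolve a discontinuous (step-function) initial state with e^{-itH},
# built from the spectral decomposition of a discretized H with
# Dirichlet walls. The evolution is well defined and norm-preserving.
import numpy as np

n, L = 400, 1.0
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)

# Discretized H = -d^2/dx^2 (hbar = 2m = 1)
H = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2
E, U = np.linalg.eigh(H)

psi0 = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # a step function
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)

for t in (0.0, 0.001, 0.01):
    psi_t = U @ (np.exp(-1j * E * t) * (U.T @ psi0))  # e^{-itH} psi0
    norm = np.sum(np.abs(psi_t)**2) * dx
    print(f"t={t:6.3f}  norm={norm:.6f}")             # stays 1: unitary
```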
Section 12.5: Ramped Infinite and Finite Wells

Ramped wells consist of a potential energy function proportional to x added to either a finite well or an infinite well. The result is a finite or an infinite well with a ramped bottom. Solutions to such ramped wells obey a time-independent Schrödinger equation of the following form

[−(ħ²/2m)(d²/dx²) + αx] ψ(x) = Eψ(x) ,

where α refers to the strength of the ramping function. We can now put this equation into a more standard form

[d²/dx² − 2mαx/ħ² + 2mE/ħ²] ψ(x) = 0 ,

which has Airy functions as its solutions. For an infinite well, such as shown in Animation 1: Infinite, these solutions must also satisfy the boundary condition (ψ = 0) at the infinite walls, while in the case of a finite well, we must match the Airy functions with exponentials in the classically forbidden regions. Such a spatially varying potential energy function means that, for a given energy eigenfunction, E − V(x) will also change over the extent of the well. Two such potential energy functions (one infinite, one finite) are shown in the animation (ħ = 2m = 1). Using the slider, you can change the ramping potential, Vr, to see the effect on the energy eigenfunctions and the energy levels. To see the other bound states, simply click-drag in the energy-level diagram on the left to select a level. The selected level will turn red. In particular, where the well is deeper, the difference between E and V is greater. This means that the curviness of the energy eigenfunction is greater there. In addition, where the well is deeper we would expect a smaller energy eigenfunction amplitude.
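A minimal numerical sketch of the same physics (mine, not part of the original Physlet; the grid size and ramp strength α are arbitrary choices, with ħ = 2m = 1): diagonalizing the discretized Hamiltonian gives the energy levels whose exact eigenfunctions are the Airy functions described above.

```python
# Bound states of an infinite well with a ramped bottom, V(x) = alpha*x,
# via a finite-difference discretization of -d^2/dx^2 + alpha*x.
import numpy as np

n, L, alpha = 1000, 1.0, 50.0
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)

# psi = 0 at both walls (Dirichlet boundary conditions)
H = (np.diag(np.full(n, 2.0 / dx**2) + alpha * x)
     - np.diag(np.ones(n - 1), 1) / dx**2
     - np.diag(np.ones(n - 1), -1) / dx**2)
E, psi = np.linalg.eigh(H)

print("lowest energy levels:", np.round(E[:4], 2))
# For alpha = 0 this reduces to the flat infinite well, E_n = (n*pi/L)^2.
```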
Higgs boson and conformal symmetry

So far, I believed I was the only man on Earth to trust a complete absence of mass terms in the Standard Model (we call this conformal symmetry). I was wrong. Krzysztof Meissner and Hermann Nicolai anticipated this idea. Indeed, in a model where mass is generally banned, there is no reason to believe that the field that is the source of mass should itself keep a mass term (imaginary or real). We have one more reason to believe in such a scenario, and it is the hierarchy problem, as the quadratic term in the Higgs field produces just that awkward dependence on the square of the cut-off, the reason why people immediately thought that something else must be in that sector of the model. Meissner and Nicolai got their paper published in Physics Letters B, and it can be found here. As they point out in the article, the problem is to get a meaningful mass for the Higgs field, provided one leaves the self-coupling small. I do not agree at all with the reasons for this, the Landau pole, as I have already said at length in this blog. One cannot build general results starting from perturbation theory. But assuming that this is indeed the case, the only mechanism at our disposal to get a mass is the Coleman-Weinberg mechanism. In this case, radiative corrections produce an effective potential that has a non-trivial minimum. The problem again is that this is obtained using small perturbation theory, and so the mass one gets is too small to be physically meaningful. The authors circumvent the problem by adding a further scalar field. In this case the model appears to be consistent and everything works properly. What I would like to emphasize is that, if one assumes conformal symmetry to hold for the Standard Model, a single Higgs is not enough. So, I like this paper a lot and I will explain the reasons in a moment. I am convinced that these authors are on the right track. Two days ago these authors came out with another paper (see here). They claim that the second Higgs has already been seen at CDF (Tevatron), at about 325 GeV, while we know there is just a hint (possibly a fluke) from CMS and nothing from ATLAS at that mass. Of course, there is always the possibility that this resonance escaped detection due to its really small width. My personal view was already presented here. At that time, I was not aware of the work by Meissner and Nicolai, otherwise I would have used it as support. The only point I would like to question is the effective generation of mass. There is no generally accepted quantum field theory for a large coupling, neglecting for the moment attempts arising from string theory. Before saying that string theory grants a general approach to strongly coupled problems, I would like to see it give a solution to the massless quartic scalar field theory in such a case. This is the workhorse for this kind of problem, and both the communities of physicists and mathematicians were just convinced that perturbation theory has only one side. As I showed here, this is not true. One can do perturbation theory also when a perturbation is taken to go to infinity. This means that we do not need a Coleman-Weinberg mechanism in a conformal Standard Model; we can instead do perturbation theory assuming a finite self-interaction: an asymptotic perturbation series can be obtained in this case as well. But the fundamental conclusions one can draw from this analysis are the following: • The theory must be supersymmetric.
• The theory has a harmonic oscillator spectrum for a free particle, given by m_n=(2n+1)(\pi/2K(i))v, where K(i) is an elliptic integral and v is an integration constant with the dimension of energy. Now, let us look at the last point. One can prove that the decays of the higher excited states are increasingly difficult to observe, as their decay constants become exponentially smaller with n (see here, eq. 11). But if the observed Higgs boson has a mass of about 125 GeV, one has v=105\ GeV, and the next excitation is at about 375 GeV, very near the one postulated by Meissner and Nicolai and also near the bump seen at CDF. This would be exciting evidence for the existence of supersymmetry: the particle seen at CERN would be supersymmetric! So, what I am saying here is that a conformal Standard Model not only solves the hierarchy problem but is also compelling for the existence of supersymmetry. I think it would be worthy of further study. Krzysztof A. Meissner & Hermann Nicolai (2012). A 325 GeV scalar resonance seen at CDF? arXiv: 1208.5653v1

Slowness in a quantum world

Some days have gone by since my last post, but for a very good reason: I was very busy writing a new paper of mine. Meanwhile, there was frantic activity in the blogosphere due to the EPS Conference in Grenoble. No Higgs yet, but we do not despair. Being away from these distractions, I was able to analyze a main problem that has been around in quantum mechanics since a nice paper by Karl-Peter Marzlin and Barry Sanders appeared in Physical Review Letters in 2004. My effort is in this paper. But what is it all about? We are aware of the concept of adiabatic variation from thermodynamics and mechanics. We know that there exist physical systems such that, if we take care to vary their parameters really slowly in time, the state of the system does not change too much. Let me state this with an example taken from mechanics. Consider a table with a hole at the center. Through the hole there is a wire with a little ball suspended. An electric motor keeps the wire on the table side, and the ball performs small oscillations. If the wire is kept fixed, the period of the small oscillations of this pendulum is 2π times the square root of the ratio between the length of the wire below the hole and the gravitational acceleration, T = 2π√(l/g). Now, we can turn on the motor and vary the length of the wire with some time-varying law. This will imply a variation of the frequency of the oscillations of the ball. Now, if we assume a slow variation of the length, a nice thing happens: the system displays a conserved quantity, a so-called adiabatic invariant: the energy of the system varies proportionally to the frequency. This is just an approximately conserved quantity, but it is characteristic of a slow variation of a parameter of this system. In some way, the initial state of the system, properly evolved, is maintained as time evolves, as the phase space occupied by the system keeps its form. This is true provided the rate of change of the length of the wire is much smaller than the frequency of the pendulum. This is a quite general result in classical mechanics. In 1928, Max Born and Vladimir Fock asked themselves if something similar is also true in the quantum world. In a classical paper, they were able to show that it is indeed so.
Given a Schrödinger equation with a time-varying Hamiltonian, under a given condition, a system keeps staying in the same state, properly evolved in time, multiplied by some phases. The validity condition is the critical point. This condition is expressed through the ratio between the rate of change of the Hamiltonian itself and the gap between instantaneous eigenvalues; both the eigenvalues and the eigenstates evolve in time. The gap condition is a fundamental one and was put forward in 1950 by Tosio Kato. This is quite reminiscent of the case we gave from classical mechanics, where the rate of change of the length of the wire, entering into the Hamiltonian, should be kept smaller than the frequency of oscillation, and so this condition appears surely reasonable. But things are not that simple. You can consider an atom under the effect of monochromatic radiation inside a cavity. This is generally well described by a two-level system, and people observe this system oscillating between the two states. Well, if the intensity of the field inside the cavity is small enough, one can see these oscillations, dubbed Rabi flopping. This phenomenon is ubiquitous wherever a monochromatic field interacts with an atom, but the presence of a continuum of states changes this coherent effect into a decaying one, as observed in everyday life. If we apply the condition for the adiabatic approximation as devised by Born and Fock, we get an inconsistency. The approximation seems to hold, but Rabi flopping is there to tell us that the state is changing in time. The system does not stay at all in the same state as time evolves, notwithstanding that our condition seems to say so. This is exactly what Marzlin and Sanders pointed out with an exactly solvable example. The condition found by Born and Fock for quantum slowness does not appear to be sufficient to guarantee adiabatic behavior for a quantum system. This is bad news as, in quantum computation, some promising technological applications of the adiabatic approximation are in view, and we must be certain that our system behaves the way we expect. This opened up a hot debate that is yet to be over. But what is going on here? The explanation of this inconsistency can be traced back to a couple of papers that Ali Mostafazadeh and I wrote in the nineties (see here and here). What Born and Fock really found is the leading order of a perturbation expansion of the solution of the time-dependent Schrödinger equation. This has a deep implication for the validity condition of the adiabatic approximation, which represents just the leading order of this series. Indeed, let us consider a system under the effect of a perturbation. There are situations when some corrections grow without bound as time increases. These terms are unphysical, as an unbounded solution violates cherished principles of physics such as energy conservation, unitarity, and so on. These are called secularities in the literature, due to their timescales, as they were first discovered in the perturbation series of computations in astronomy. So, if we stop at the leading order of such a series, blindly apply the condition as devised by Born and Fock, and claim applicability, we may simply be wrong. This is exactly what Marzlin, Sanders, and others have shown unequivocally. So, if you want to apply the adiabatic approximation and be sure it works, you have to undertake a more involved task: you have to compute at least the next-to-leading order correction and, accounting for possible resonant behavior, identify unbounded terms.
Techniques exist to resum them. Once you have done this, you will be able to identify the right condition for the adiabatic approximation to work. So, for the two-level system discussed above, you will see that only when a very strong field is applied, so that Rabi flopping cannot happen, will you get consistent adiabatic behavior. What is the lesson to be learned here? The simplest one is that looking into some older literature can often help to solve a problem. Anyhow, deep in an old dusty corner of quantum mechanics a fundamental result lies hidden: strong perturbations can be managed in quantum mechanics exactly like weak ones. An entirely new world opens up from this, one that our founding fathers could not have been aware of. Karl-Peter Marzlin & Barry C. Sanders (2004). Inconsistency in the application of the adiabatic theorem. Phys. Rev. Lett. 93, 160408 (2004). arXiv: quant-ph/0404022v6. Marco Frasca (2011). Consistency of the adiabatic theorem and perturbation theory. arXiv: 1107.4971v1. Born, M., & Fock, V. (1928). Beweis des Adiabatensatzes. Zeitschrift für Physik, 51 (3-4), 165-180. DOI: 10.1007/BF01343193. Marco Frasca (1998). Duality in Perturbation Theory and the Quantum Adiabatic Approximation. Phys. Rev. A58 (1998) 3439. arXiv: hep-th/9801069v3. Ali Mostafazadeh (1996). The Quantum Adiabatic Approximation and the Geometric Phase. Phys. Rev. A55 (1997) 1653-1664. arXiv: hep-th/9606053v1

Answer to Terry Tao's criticism will be published

My paper containing the answer to Terry Tao's criticism will be published in Modern Physics Letters A. You can get a copy of this preprint from arXiv here. Thank you very much, folks!

The right mathematical question

After my post on the Higgs field (see here) I would like to explain why there is no reason to be afraid. The point is the right mathematical question to be asked. So, let me state why, from a strict mathematical standpoint, small perturbation theory is not the whole story. Let us consider the differential equation \partial_t\psi=(H+\lambda V)\psi. The exact solution of this equation is the function \psi(t;\lambda). So, one can ask: what happens when \lambda goes to zero? This is a proper mathematical question, and the answer, when it exists, is a Taylor series. This is the most celebrated small perturbation theory, and the terms of this Taylor series are computed directly from the given differential equation. Of course, I can also ask what happens to the function \psi(t;\lambda) when \lambda goes to infinity. This is perfectly legal from a mathematical standpoint. This dual limit, when it exists, produces an asymptotic series with development parameter 1/\lambda. This is a strong perturbation theory, where each term is computed directly from the given differential equation. Indeed, one can build machinery for this case and prove the very existence of this technique, which extends perturbation theory beyond the search for a small parameter. Rather, one can consider the case of differential equations with a large parameter and solve them, producing an analysis of these equations in a range of the parameter space not reachable with small perturbation theory. Physics is a lucky case for this mathematical question, as it is all built on differential equations, and having such a technique permits us to analyze them in situations never reached before other than with computers. I would like to emphasize that this is applied mathematics, but the solutions one obtains can be interesting for physics.
Of course, mathematics cannot be questioned except when it is wrong. But when it is right, any discussion is somewhat grotesque.

Who fears a non-perturbative Higgs field?

One of my preferred readings in the blogosphere is Tommaso Dorigo's blog. I think this is a widely known blog for people interested in physics, and it has even been cited by the New York Times. Quite recently he published a very interesting post (see here) about the fate of our beloved Standard Model, taking its cue from a very nice paper by J. Ellis, J. R. Espinosa, G. F. Giudice, A. Hoecker, and A. Riotto (see here). These authors are well known and really smart at their work and, indeed, I noticed this paper when it appeared on arXiv. My readers know that I work on a small part (QCD) of the whole picture that arose in the sixties and seventies, and I have never taken a look from the outside. So, while I appreciated this paper, I thought there was no reason to comment on it in my blog. But reading Tommaso's post, some thoughts came to my mind, and these are really pertinent. People put out two kinds of constraints on the Higgs part of the Standard Model to have an idea of what to expect. I give you here the Higgs potential for your needs, V(\phi)=\mu^2\phi^\dagger\phi+\lambda(\phi^\dagger\phi)^2, and one immediately realizes that it introduces two free parameters. The critical one is \lambda, and let me explain why. When one does quantum field theory, the only real tool that she has to do any meaningful computation is small perturbation theory. The word “small” is never said, but it should be said in any circumstance, as this technique only works if you have a small parameter in your theory (a coupling) to use as a development parameter. Otherwise we are lost, and everything starts to become foggy and not so well defined. Today, nobody knows how to manage a theory with a strong coupling. The parameter \lambda is exactly such a coupling, and we are able to manage a Higgs field when this parameter is small. But when you do small perturbation theory in quantum field theory, you realize immediately that infinities come out, and you are not able to obtain meaningful results going beyond the first order. For the most interesting theories around we are lucky: Schwinger, Tomonaga, Feynman, and Dyson invented renormalization, and this works to remove infinities at each order of perturbation theory in the Standard Model, and also for the Higgs, if the coupling is small. We are so accustomed to such a situation that we think this is all one needs to know to understand quantum field theory: perturbation theory and renormalization. We think that small perturbation theory is the perturbation theory and nothing else. So, we hope that the Higgs field too should fulfill such requirements. Indeed, we are already in trouble in QCD for these same reasons, but I have discussed such a situation at length before here and I do not want to repeat myself. There is no reason whatsoever to believe that we know all one has to know to manage a quantum field theory. The Higgs could just as well be not that light and strongly coupled, and there is no reason to think that Nature chose the small coupling case to favor us. Of course, if things do not turn out this way I will be happy, as a light Higgs is favored by supersymmetry, and I like supersymmetry. But I would also like to emphasize that we already have all we need to manage a strongly coupled Higgs field analytically. This matter I have discussed widely here and in my published papers. So, while we all agree that a light Higgs is favored, my view is that we should not have any fear of a non-perturbative Higgs field.
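As a quick numerical aside (my sketch, not part of the original posts): the strong-coupling spectrum m_n = (2n+1)(π/2K(i))v quoted in the conformal-Higgs discussion above is easy to evaluate, taking v = 105 GeV as in the text.

```python
# Evaluate m_n = (2n+1) * (pi / (2 K(i))) * v. scipy's ellipk takes the
# parameter m = k^2, so K(i) corresponds to ellipk(-1).
import numpy as np
from scipy.special import ellipk

K_i = ellipk(-1.0)            # K(i) ~ 1.31103
v = 105.0                     # GeV, the integration constant from the post
for n in range(3):
    m_n = (2 * n + 1) * (np.pi / (2 * K_i)) * v
    print(f"m_{n} = {m_n:.1f} GeV")
# -> m_0 ~ 125.8 GeV and m_1 ~ 377 GeV, the values discussed above.
```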
Quantum field theory and gradient expansion

In a preceding post (see here) I showed how a covariant gradient expansion can be accomplished while maintaining Lorentz invariance during the computation. Now I discuss here how to manage the corresponding generating functional Z[j]=\int[d\phi]e^{i\int d^4x\frac{1}{2}[(\partial\phi)^2-m^2\phi^2]+i\int d^4xj\phi}. This integral can be computed exactly, the theory being free and the integral a Gaussian one, to give Z[j]=e^{\frac{i}{2}\int d^4xd^4yj(x)\Delta(x-y)j(y)} where we have introduced the Feynman propagator \Delta(x-y). This is well-known matter. But now we rewrite the above integral introducing another spatial coordinate and write down Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2]+i\int d\tau d^4xj\phi}. The Feynman propagator solving this integral is given, in momentum space, by \Delta(p_\tau,p)=1/(p_\tau^2-p^2-m^2+i\epsilon), and a gradient expansion just means a series in p^2 of this propagator. From this we learn immediately two things: • When one takes p=0 we get the right spectrum of the theory: a pole at p_\tau^2=m^2. • When one takes p_\tau=0 and Wick-rotates one of the four spatial coordinates we recover the right Feynman propagator. Everything works fine, and we have kept Lorentz invariance everywhere, hidden in the Euclidean part of a five-dimensional theory. Neglecting the Euclidean part gives us back the spectrum of the theory. This is the leading order of a gradient expansion. So, the next step is to see what happens with an interaction term. I have already solved this problem here, and the solution was published in Physical Review D (see here). In that paper I did not care about Lorentz invariance, as I expected it would be recovered at the end of the computations, as indeed happens. But here we can recover the main result of the paper while keeping Lorentz invariance. One has Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-(\partial\phi)^2-m^2\phi^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi} and if we want something non-trivial we have to keep the interaction term in the leading order of our gradient expansion. So we will break the exponent as Z[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]-i\int d\tau d^4x\frac{1}{2}[(\partial\phi)^2+m^2\phi^2]+i\int d\tau d^4xj\phi} and our leading order functional is now Z_0[j]=\int[d\phi]e^{i\int d\tau d^4x\frac{1}{2}[(\partial_\tau\phi)^2-\frac{\lambda}{2}\phi^4]+i\int d\tau d^4xj\phi}. This can be cast into a Gaussian form as, in the infrared limit, the one of our interest, one can use the following small time approximation \phi(x,\tau)\approx\int d\tau' d^4y \delta^4(x-y)\Delta(\tau-\tau')j(y,\tau'), where \Delta(\tau) is the propagator of the leading-order quartic theory. The resulting Gaussian functional can be exactly solved, giving back all the results of my paper. When the Gaussian form of the theory is obtained, one can easily show that, in the infrared limit, the quartic scalar field theory is trivial, as we obtain again a generating functional of Gaussian form after Wick-rotating a spatial variable and setting p_\tau=0. The spectrum is that of a harmonic oscillator, as proper to a trivial theory. I think that all this machinery works very well and is quite robust, opening up a lot of possibilities to have a look at the other side of the world.

Gradient expansions and quantum field theory

For more than two years I have been working on quantum field theory in the strong coupling limit, and I am generally very satisfied with the acceptance of my views by the community. Of course, these are new ideas and may take some time to be accepted.
So, I keep working on them, trying to clarify them as best I can so that people can have a clear understanding of their strengths and weaknesses. One of the ways we researchers have of knowing how our colleagues regard our views is peer review. This system is indeed crucial to any serious scientific endeavor and, indeed, I am proud of my achievements only when my peers agree about their value. But peer review is also useful to my work for knowing what the main objections to it are. It can happen that sometimes these objections are deeply wrong, and it may be worthwhile to discuss them at length, also to get an idea of how such a prejudice arose. We should know that when a mathematical theory enters into the description of nature, whatever mathematical method one uses to exploit it is always correct. So, natural laws in physics are described by differential equations, and whatever method you know to solve them is good, provided it is also mathematically legal. You should consider mathematics for physicists as a severe judge that grants no appeal. You are right or wrong depending on the correctness of your computation. But in physics there is something more, and that is the assumptions we start with. You can do the most beautiful mathematics in the world, but if you started with a wrong concept about how nature works, your computations are simply rubbish. One of the criticisms I have received in trying to get my papers published is that one cannot do a gradient expansion because this breaks Lorentz/Poincaré invariance. This is completely wrong from a mathematical standpoint. As an exercise you can consider the wave equation in two dimensions, \partial_t^2\phi-\partial_x^2\phi=0, and consider the case where the spatial part is not so important. This can be easily obtained by rescaling time as t\rightarrow\sqrt{\lambda}t and taking the limit \lambda\rightarrow\infty. One gets the solution as a series, solving the resulting equations order by order, and so on. All this is perfectly legal from a mathematical standpoint, and I get a true solution of the wave equation. But, as you can see, I have broken Lorentz invariance, a symmetry of this equation. So, mathematics says yes while physics seems to say no. The answer is quite simple and has been known for a long time: the computation is right, but Lorentz invariance is no longer manifest. This is due to the fact that I have separated time and space. But if I am able to resum all the terms of the expansion series, I will get the right answer, which is Lorentz invariant. So, both physics and mathematics give the same answer, and it is a resounding yes: it works, and it works so well that we are left with a kind of strong coupling expansion. So, what should a smart referee do with such a doubt, granting that a smart referee may not know such mundane facts of physics and mathematics? He or she should realize that here one is facing a really interesting problem of physics: could we formulate a gradient expansion in such a way as to have Lorentz invariance manifest? I do not have an answer to this question yet, but I grant you that it is a matter I would like to publish a paper about somewhere. This is an interesting mathematical problem as well. We know that people met a similar problem at the start of the deep understanding of QED due to Feynman, Schwinger, Tomonaga, and Dyson. I think that an answer to this question would have the same scientific value.
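A toy illustration (mine, not from the posts) of the two dual expansions at work, on the zero-dimensional analogue of the quartic theory, Z(g) = ∫ dx exp(−x²/2 − g x⁴): the leading weak-coupling term is accurate for small g, the leading strong-coupling term for large g, and each fails badly in the other regime.

```python
# Compare the exact integral with the leading terms of its small-g
# (Taylor) and large-g (1/g) expansions.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def Z(g):
    return quad(lambda x: np.exp(-x**2 / 2 - g * x**4), -np.inf, np.inf)[0]

for g in (0.05, 1.0, 50.0):
    exact = Z(g)
    weak = np.sqrt(2 * np.pi) * (1 - 3 * g)      # leading order in g
    strong = 0.5 * gamma(0.25) * g**-0.25        # leading order in 1/g
    print(f"g={g:6.2f}  exact={exact:.4f}  weak~{weak:.4f}  strong~{strong:.4f}")
```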
Tuesday, April 26, 2011

How is the arrow of geometric time selected at the quantum level?

I have discussed in the chapter About the Nature of Time of "Matter, Mind, Quantum" how the arrow of geometric time, as a correlate for the experienced arrow of time, might be selected in the TGD Universe. The discussion does not touch the question of what the arrow of time means at the level of quantum states. Therefore the notion of a negative energy signal propagating backwards in geometric time, crucial for TGD inspired quantum biology, remains somewhat fuzzy. The recent progress in the understanding of the basic properties of zero energy states makes it possible to understand what the arrow of geometric time, the notion of a negative energy state, and signals propagating in the direction of the geometric past mean at the level of zero energy states. This understanding has surprisingly non-trivial philosophical implications. In the following I shall briefly describe the quantum view about the arrow of time.

Arrow of time as an inherent property of zero energy states?

The basic idea can be expressed in very concise form. In positive energy ontology the arrow of time characterizes dynamics. In zero energy ontology the arrow of time characterizes quantum states.

1. The breaking of time reversal invariance (see this) means that zero energy states can be localized with respect to particle number and other quantum numbers only at the future or the past light-like boundary of the CD (causal diamond), but not at both. The M-matrix, generalizing the S-matrix, provides the time-like entanglement coefficients expressing the state at the second boundary as a quantum superposition of states with well-defined particle numbers and other quantum numbers. But this holds only at the second end of the CD, since one cannot choose the states freely at both boundaries: if one could, the counterpart of the Schrödinger equation would be completely non-deterministic. This is what the breaking of time reversal symmetry means. It occurs spontaneously and assigns to the arrow of subjective time a geometric arrow of time. This picture gives a precise meaning to the arrow of geometric time and therefore also to the otherwise fuzzy notion of negative energy signals propagating backwards in space-time, which plays a key role in TGD based models of memory, metabolism, and intentional action (see this).

2. A quantum jump begins with the unitary U-process between zero energy states, generating a superposition of zero energy states. After that follows a state function reduction cascade proceeding from the level of the CD to the level of sub-CDs, which form a fractal hierarchy. The reductions cannot take place independently at both light-like boundaries of the CD, as is also clear from the fact that scattering leads from a prepared state to a quantum superposition of prepared states. The first guess is that the cascade takes place at the second boundary of the CD only, so that the arrow of geometric time would be the same in all scales. This need not always be the case: the geometric arrow of time seems to change in some situations: phase conjugate laser light and the spontaneous self-assembly of bio-molecules are good examples of this (see this and this). In fact, one of the defining properties of living matter could be just the possibility that the arrow of geometric time is not the same in all scales (size scales of CDs), so that memory, metabolism, and intentional action become possible. In any case, the second end remains a superposition of quantum states.
The lack of quantum measurements at the second end of space-time could explain why conscious percepts are sharply localized in time at the second end of the CD. This could also allow one to understand memories as reductions occurring at the second, non-standard end of sub-CDs in the geometric past.

3. The correspondence between the reduced state and the quantum superposition of states at the opposite boundary of the CD allows an interpretation in terms of a logical implication arrow, with all statements present in the superposition implying the statement represented by the reduced state. Only an implication arrow, rather than equivalence, is possible unless the M-matrix is diagonal, meaning that there are no interactions. If it is possible to diagonalize the M-matrix, then in the diagonal basis one has equivalences. It must however be emphasized that the physically preferred state basis, fixed in terms of the eigenstates of the density matrix, does not allow a diagonal M-matrix. Number theoretic conditions requiring that the density matrix corresponds to a fixed algebraic extension of the rationals can also make the diagonalization possible without leaving the extension, and this condition might be highly relevant for the TGD inspired view about cognition relying on p-adic number fields and their algebraic extensions (see this).

4. In classical logic, implication corresponds to the inclusion of a subset by a subset. In the quantum case it corresponds to the inclusion of a subspace of the state space. The inclusions of hyper-finite factors (WCW spinors define a HFF of type II1) realize the notion of finite measurement resolution, which would suggest that the inclusion arrow also has an interpretation in terms of finite measurement resolution. All quantum states equivalent with a given state in the resolution used imply it. Finite measurement resolution would mean that there would always be an infinite number of instances in the quantum superposition representing the rule A → B. Ironically, both finite measurement resolution and dissipation, which implies the arrow of geometric time and is usually regarded as something negative from the point of view of information processing, would be absolutely essential elements of logical thinking in this framework.

5. Conscious theorem proving would have as its correlate the building of sequences of zero energy states representing A → B, B → C, C → D, with basic building bricks representing simple basic rules. These sequences would represent more complex truths.

Does the state function reduction-state preparation sequence correspond to an alternating arrow of geometric time?

The state function reduction at a light-like boundary of the CD implies delocalization at the opposite boundary. This inspires some fascinating questions.

1. Could the state function reduction process take place alternately at the two boundaries of the CD, so that a kind of flip-flop results, in which the arrow of geometric time changes back and forth, with the interpretation of an alternating sequence of state function reductions and state preparations in the framework of positive energy ontology?

2. State function reductions are needed for sensory percepts. Could the sleep-wake cycle correspond to this kind of process, so that during what we call sleep the past boundary of our personal CD would be in a wake-up state? Could dreams and memories represent a sharing of the mental images of this kind of consciousness? Could it be that, on the time scale of the entire life cycle, death is accompanied by birth at the second boundary of the personal CD?
Could this be a quantum physics representation of an endless sequence of deaths and rebirths? Could the fact that old people often spend their last years in childhood have an interpretation in this framework?

3. The state preparation-reduction cycle might characterize only living matter, whereas for inanimate matter the second choice for the arrow of time would be dominant between two U-processes. The TGD based reformulation of Verlinde's entropic gravity idea in terms of ZEO does not assume the absence of gravitons and the emergence of space-time (see this). The formulation leads to the proposal that thermodynamical stability selects the arrow of geometric time and that it could be different for matter and antimatter, implying that matter and antimatter reside on different space-time sheets. This would explain the apparent absence of antimatter and also support the view that the arrow alternates only in living matter.

The arrow of geometric time and the arrow of logical implication

If physics is mathematics in the sense that there is nothing behind quantum states regarded as purely mathematical objects, Boolean logic must have a direct manifestation in the structure of physical states. Physical states should represent quantal Boolean statements which get their meaning via quantum jumps. In the TGD framework, WCW ("world of classical worlds") spinor fields represent quantum states of the Universe, and WCW spinors correspond to fermionic Fock states for second quantized induced spinor fields at the space-time surface. The Fock state basis has an interpretation in terms of Boolean algebra. In positive energy ontology the problem is that fermion number as a super-selection rule would allow only a very limited number of Boolean statements to be represented. In ZEO the situation changes. The fermionic parts of the positive and negative energy parts of the state can be seen as quantum superpositions of Boolean statements, with the fermion number in a given mode (equal to 0 or 1) representing yes/no or true/false. Also various spin-like quantum numbers associated with oscillator operators have the same interpretation. A zero energy state could be seen as a quantum superposition of pairs of elements of the Boolean algebras associated with the positive and negative energy parts of the zero energy state. The first - and incorrect - interpretation is that a zero energy state represents a quantum superposition of equivalent statements a ↔ b, and thus the abstraction A ↔ B involving several instances of A and B. The breaking of time reversal invariance, allowing localization to definite fermionic quantum numbers at a single end of the CD only, however implies that quantum states can only represent the abstraction of logical implication, A → B, rather than equivalence. p-Adic physics for various primes p (see this) would represent correlates for cognition and intentionality. For background see the chapter About the Nature of Time of "Matter, Mind, Quantum".
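A minimal sketch (my illustration, not from the post) of the Fock-basis/Boolean-algebra correspondence described above: the occupation number 0 or 1 of each fermionic mode is read as a truth value, so n modes give the 2^n statements of an n-bit Boolean algebra.

```python
# Map fermionic Fock basis states to Boolean truth assignments.
from itertools import product

n_modes = 3
for occupations in product((0, 1), repeat=n_modes):
    truth_values = [bool(n) for n in occupations]   # 0/1 -> False/True
    ket = "|" + "".join(str(n) for n in occupations) + ">"
    print(ket, "<->", truth_values)
```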
By Susan Kruglinski, Oliver Chanarin | Tuesday, October 06, 2009

Roger Penrose could easily be excused for having a big ego. A theorist whose name will be forever linked with such giants as Hawking and Einstein, Penrose has made fundamental contributions to physics, mathematics, and geometry. He reinterpreted general relativity to prove that black holes can form from dying stars. He invented twistor theory—a novel way to look at the structure of space-time—and so led us to a deeper understanding of the nature of gravity. He discovered a remarkable family of geometric forms that came to be known as Penrose tiles. He even moonlighted as a brain researcher, coming up with a provocative theory that consciousness arises from quantum-mechanical processes. And he wrote a series of incredibly readable, best-selling science books to boot. And yet the 78-year-old Penrose—now an emeritus professor at the Mathematical Institute, University of Oxford—seems to live the humble life of a researcher just getting started in his career. His small office is cramped with the belongings of the six other professors with whom he shares it, and at the end of the day you might find him rushing off to pick up his 9-year-old son from school. With the curiosity of a man still trying to make a name for himself, he cranks away on fundamental, wide-ranging questions: How did the universe begin? Are there higher dimensions of space and time? Does the current front-running theory in theoretical physics, string theory, actually make sense? Because he has lived a lifetime of complicated calculations, though, Penrose has quite a bit more perspective than the average starting scientist. To get to the bottom of it all, he insists, physicists must force themselves to grapple with the greatest riddle of them all: the relationship between the rules that govern fundamental particles and the rules that govern the big things—like us—that those particles make up. In his powwow with DISCOVER contributing editor Susan Kruglinski, Penrose did not flinch from questioning the central tenets of modern physics, including string theory and quantum mechanics. Physicists will never come to grips with the grand theories of the universe, Penrose holds, until they see past the blinding distractions of today's half-baked theories to the deepest layer of the reality in which we live.

You come from a colorful family of overachievers, don't you?

My older brother is a distinguished theoretical physicist, a fellow of the Royal Society. My younger brother ended up the British chess champion 10 times, a record. My father came from a Quaker family. His father was a professional artist who did portraits—very traditional, a lot of religious subjects. The family was very strict. I don't think we were even allowed to read novels, certainly not on Sundays. My father was one of four brothers, all of whom were very good artists. One of them became well known in the art world, Sir Roland. He was cofounder of the Institute of Contemporary Arts in London. My father himself was a human geneticist who was recognized for demonstrating that older mothers tend to get more Down syndrome children, but he had lots of scientific interests.

How did your father influence your thinking?

The important thing about my father was that there wasn't any boundary between his work and what he did for fun. That rubbed off on me. He would make puzzles and toys for his children and grandchildren. He used to have a little shed out back where he cut things from wood with his little pedal saw.
I remember he once made a slide rule with about 12 different slides, with various characters that we could combine in complicated ways. Later in his life he spent a lot of time making wooden models that reproduced themselves—what people now refer to as artificial life. These were simple devices that, when linked together, would cause other bits to link together in the same way. He sat in his woodshed and cut these things out of wood in great, huge numbers.

So I assume your father helped spark your discovery of Penrose tiles, repeating shapes that fit together to form a solid surface with pentagonal symmetry.

It was silly in a way. I remember asking him—I was around 9 years old—about whether you could fit regular hexagons together and make it round like a sphere. And he said, “No, no, you can't do that, but you can do it with pentagons,” which was a surprise to me. He showed me how to make polyhedra, and so I got started on that.

Are Penrose tiles useful or just beautiful?

My interest in the tiles has to do with the idea of a universe controlled by very simple forces, even though we see complications all over the place. The tilings follow conventional rules to make complicated patterns. It was an attempt to see how the complicated could be satisfied by very simple rules that reflect what we see in the world.

The artist M. C. Escher was influenced by your geometric inventions. What was the story there?

In my second year as a graduate student at Cambridge, I attended the International Congress of Mathematicians in Amsterdam. I remember seeing one of the lecturers there I knew quite well, and he had this catalog. On the front of it was the Escher picture Day and Night, the one with birds going in opposite directions. The scenery is nighttime on one side and daytime on the other. I remember being intrigued by this, and I asked him where he got it. He said, “Oh, well, there's an exhibition you might be interested in of some artist called Escher.” So I went and was very taken by these very weird and wonderful things that I'd never seen anything like. I decided to try and draw some impossible scenes myself and came up with this thing that's referred to as a tri-bar. It's a triangle that looks like a three-dimensional object, but actually it's impossible for it to be three-dimensional. I showed it to my father and he worked out some impossible buildings and things. Then we published an article in the British Journal of Psychology on this stuff and acknowledged Escher.

Escher saw the article and was inspired by it?

He used two things from the article. One was the tri-bar, used in his lithograph called Waterfall. Another was the impossible staircase, which my father had worked on and designed. Escher used it in Ascending and Descending, with monks going round and round the stairs. I met Escher once, and I gave him some tiles that will make a repeating pattern, but not until you've got 12 of them fitted together. He did this, and then he wrote to me and asked me how it was done—what was it based on? So I showed him a kind of bird shape that did this, and he incorporated it into what I believe is the last picture he ever produced, called Ghosts.

Is it true that you were bad at math as a kid?

I was unbelievably slow. I lived in Canada for a while, for about six years, during the war. When I was 8, sitting in class, we had to do this mental arithmetic very fast, or what seemed to me very fast. I always got lost. And the teacher, who didn't like me very much, moved me down a class.
There was one rather insightful teacher who decided, after I’d done so badly on these tests, that he would have timeless tests. You could just take as long as you’d like. We all had the same test. I was allowed to take the entire next period to continue, which was a play period. Everyone was always out and enjoying themselves, and I was struggling away to do these tests. And even then sometimes it would stretch into the period beyond that. So I was at least twice as slow as anybody else. Eventually I would do very well. You see, if I could do it that way, I would get very high marks. You have called the real-world implications of quantum physics nonsensical. What is your objection? In quantum mechanics an object can exist in many states at once, which sounds crazy. The quantum description of the world seems completely contrary to the world as we experience it. It doesn’t make any sense, and there is a simple reason. You see, the mathematics of quantum mechanics has two parts to it. One is the evolution of a quantum system, which is described extremely precisely and accurately by the Schrödinger equation. That equation tells you this: If you know what the state of the system is now, you can calculate what it will be doing 10 minutes from now. However, there is the second part of quantum mechanics—the thing that happens when you want to make a measurement. Instead of getting a single answer, you use the equation to work out the probabilities of certain outcomes. The results don’t say, “This is what the world is doing.” Instead, they just describe the probability of its doing any one thing. The equation should describe the world in a completely deterministic way, but it doesn’t. Erwin Schrödinger, who created that equation, was considered a genius. Surely he appreciated that conflict. Schrödinger was as aware of this as anybody. He talks about his hypothetical cat and says, more or less, “Okay, if you believe what my equation says, you must believe that this cat is dead and alive at the same time.” He says, “That’s obviously nonsense, because it’s not like that. Therefore, my equation can’t be right for a cat. So there must be some other factor involved.” So Schrödinger himself never believed that the cat analogy reflected the nature of reality? Oh yes, I think he was pointing this out. I mean, look at three of the biggest figures in quantum mechanics, Schrödinger, Einstein, and Paul Dirac. They were all quantum skeptics in a sense. Dirac is the one whom people find most surprising, because he set up the whole foundation, the general framework of quantum mechanics. People think of him as this hard-liner, but he was very cautious in what he said. When he was asked, “What’s the answer to the measurement problem?” his response was, “Quantum mechanics is a provisional theory. Why should I look for an answer in quantum mechanics?” He didn’t believe that it was true. But he didn’t say this out loud much. Yet the analogy of Schrödinger’s cat is always presented as a strange reality that we have to accept. Doesn’t the concept drive many of today’s ideas about theoretical physics? That’s right. People don’t want to change the Schrödinger equation, leading them to what’s called the “many worlds” interpretation of quantum mechanics. That interpretation says that all probabilities are playing out somewhere in parallel universes? It says OK, the cat is somehow alive and dead at the same time. 
To look at that cat, you must become a superposition [two states existing at the same time] of you seeing the live cat and you seeing the dead cat. Of course, we don’t seem to experience that, so the physicists have to say, well, somehow your consciousness takes one route or the other route without your knowing it. You’re led to a completely crazy point of view. You’re led into this “many worlds” stuff, which has no relationship to what we actually perceive.

The idea of parallel universes—many worlds—is a very human-centered idea, as if everything has to be understood from the perspective of what we can detect with our five senses.
The trouble is, what can you do with it? Nothing. You want a physical theory that describes the world that we see around us. That’s what physics has always been: Explain what the world that we see does, and why or how it does it. Many worlds quantum mechanics doesn’t do that. Either you accept it and try to make sense of it, which is what a lot of people do, or, like me, you say no—that’s beyond the limits of what quantum mechanics can tell us. Which is, surprisingly, a very uncommon position to take. My own view is that quantum mechanics is not exactly right, and I think there’s a lot of evidence for that. It’s just not direct experimental evidence within the scope of current experiments.

In general, the ideas in theoretical physics seem increasingly fantastical. Take string theory. All that talk about 11 dimensions or our universe’s existing on a giant membrane seems surreal.
You’re absolutely right. And in a certain sense, I blame quantum mechanics, because people say, “Well, quantum mechanics is so nonintuitive; if you believe that, you can believe anything that’s nonintuitive.” But, you see, quantum mechanics has a lot of experimental support, so you’ve got to go along with a lot of it. Whereas string theory has no experimental support.

I understand you are setting out this critique of quantum mechanics in your new book.
The book is called Fashion, Faith, and Fantasy in the New Physics of the Universe. Each of those words stands for a major theoretical physics idea. The fashion is string theory; the fantasy has to do with various cosmological schemes, mainly inflationary cosmology [which suggests that the universe inflated exponentially within a small fraction of a second after the Big Bang]. Big fish, those things are. It’s almost sacrilegious to attack them. And the other one, even more sacrilegious, is quantum mechanics at all levels—so that’s the faith. People somehow got the view that you really can’t question it.

A few years ago you suggested that gravity is what separates the classical world from the quantum one. Are there enough people out there putting quantum mechanics to this kind of test?
No, although it’s sort of encouraging that there are people working on it at all. It used to be thought of as a sort of crackpot, fringe activity that people could do when they were old and retired. Well, I am old and retired! But it’s not regarded as a central, mainstream activity, which is a shame.

After Newton, and again after Einstein, the way people thought about the world shifted. When the puzzle of quantum mechanics is solved, will there be another revolution in thinking?
It’s hard to make predictions. Ernest Rutherford said his model of the atom [which led to nuclear physics and the atomic bomb] would never be of any use. But yes, I would be pretty sure that it will have a huge influence.
There are things like how quantum mechanics could be used in biology. It will eventually make a huge difference, probably in all sorts of unimaginable ways.

In your book The Emperor’s New Mind, you posited that consciousness emerges from quantum physical actions within the cells of the brain. Two decades later, do you stand by that?
I think it will be beautiful.
Thursday, March 28, 2013

AskScience Live!

So I've got some amazing news. Many of you have heard of reddit, the "front page of the internet". In the past I mentioned it as one of the greatest science resources the internet has, and I still believe that. Sure, there are a lot of really horrible things about reddit, but if you're looking in the right places there are some really great conversations. AskScience is one of those places. In the AskScience forum you can ask just about any science question you have and be almost guaranteed to get input from someone that is an expert in that field. AskScience has over 2,000 panelists in fields ranging from biology and chemistry to anthropology and psychology. Not just anyone can be a panelist, though. You need to be at least a graduate student in your field and demonstrate that you can explain your field correctly and simply to a lay audience. Moderation can sometimes be pretty heavy-handed, which keeps the discussion on point and science-based.

Doing something new: AskScienceLive

Over the past month or two I have been working with some of the other panelists to create a video podcast version of the forum. We are now pleased to announce AskScienceLive! We've gathered together experts from a few fields, and we're going to be doing a Google "On Air" hangout. Our first episode will be April 11th, 6pm EDT. We will be taking questions asked on Twitter and answering them live. This could really be an awesome event, so please mark your calendars and bring your toughest science questions! Please check out the website (currently a work in progress) and stay tuned for more information and more episodes to come!

Monday, March 25, 2013

What does "Organic" mean, anyway?

"Organic" is a term that's thrown around a lot, and it means a lot of different things to a lot of different people. Sometimes organic means that a tomato has been grown without pesticides, other times it means anything derived from living things, and to a chemist it usually means any compound that contains both carbon and hydrogen. An "organic lifestyle" could either mean that you're attempting to live in tune with mother nature or it could mean you don't get out of the lab very often. The internet (or at least the chemistry corner of Twitter) has had some great discussion over the last few weeks dealing with chemophobia. Today I read a great article on the blog "Behind NMR Lines" with a surprisingly un-chemist-like approach to chemophobia. The standard response from chemists who hear complaints about "chemicals" is to say something along the lines of "well, even water is a chemical". The author points out that, while this argument is completely true, it sidesteps the actual concern. When people say "chemical" they obviously don't mean "all matter, everywhere", they mean harmful chemicals.

What it means to me isn't what it means to you

I wonder if we make the same mistake by being too rigid with our definition of "organic". As a chemist, when I hear the word "organic", I immediately think of...well...chicken wire (the chemistry "shorthand" for organic compounds often looks a lot like fencing for a chicken coop). I think of the definition I know. I've had years of schooling that drilled "organic" into my head as meaning "a compound containing carbon and hydrogen". This definition works, but it makes things like "organic sea salt" sound like absolute nonsense. So for years I have argued that the term "organic", as used by the general public, is not only ridiculous but almost meaningless.
However, I wonder if I've been wrong. I wonder if it wouldn't be more appropriate to see organic (chemistry) and organic (food) as homonyms, similar to bark (dog) and bark (tree). Certainly the distributors of "organic sea salt" think so - their product description says:

"There are no chemical additives or processing aids used in Marlborough Flaky & Pacific Natural Sea Salt production and there is just one ingredient - seawater. The process fits in with organic principles and is Certified by Bio-Gro New Zealand"

It seems, then, that we are using the same word to mean very different things.

Semantic Drift or Homonyms?

In response to the "Behind NMR Lines" post, @V_Saggiomo made a point [the embedded tweet hasn't survived] which I think applies just as easily to "organic". You could easily argue that organic chemistry began in 1828, with the synthesis of urea. Prior to this synthesis it was believed that living organisms had a special property (vitalism) that couldn't be synthesized in a lab. The synthesis of urea, a compound found only in living organisms, sent that theory right down the toilet (a pun that is totally intended), and organic chemistry was born. Now let's look at the other use of the word organic. Proponents of organic farming claim that organic farming has been practiced for thousands of years, but I think you could easily reject that claim - ancient farmers didn't use pesticides and other modern farming techniques because they simply weren't available. The term "organic farming" wasn't coined until 1940, and it was used to describe a farm as an organism. The term "organic" in both cases was used because of the connection to a living organism, but that's about the only thing they have in common. Organic (chemistry) was used to describe a family of chemical compounds while organic (food) was used to describe a type of farming that avoided "chemicals". Whether semantic drift is the cause for the confusion or the words were meant as homonyms from the start, one thing is clear: we're not saying the same thing.

Great, we mean different things. Now what?

I think it's obvious that organic farmers are not using the definition that chemists are, and vice versa. Organic farming is a technique that rejects modern techniques because the "chemical" treatments are either a potential health hazard or rob the fruits/vegetables of some nutrient. Instead of arguing about the word organic we should be refuting those claims. Specifically, I can think of two reasons why buying organic food doesn't make sense:

1. There's not much evidence of any health benefits from an organic diet. In fact, there's not even evidence that organic food tastes any better, either.
2. The requirements for an "organic" label are easily manipulated, and you're probably not getting what you think you are when you buy organic foods.

So maybe, just like with the word chemical, we should be focusing less on which words people are using and help them understand what they are really saying.

Tuesday, March 19, 2013

Living with chemicals: Retinal

The Chemical (IUPAC): (2E,4E,6E,8E)-3,7-dimethyl-9-(2,6,6-trimethylcyclohexen-1-yl)nona-2,4,6,8-tetraenal

You probably know it as: Retinal

Last week I wrote about betacarotene, the chemical in carrots responsible for their orange color. I mentioned that although betacarotene was necessary for vision, eating a ton of it won't give you supervision. As I said, betacarotene is converted to Vitamin A, which is converted to retinal. Retinal is a cool chemical with even cooler chemistry.
You may know that in your eyes you have light receptors called rod cells, or "rods". These cells are responsible for low light vision. Your rods are so sensitive that in a perfectly dark room you would be able to detect a single photon! The outer segment of a rod cell has membrane shelves that are lined with rhodopsin. Rhodopsin is a protein containing 348 amino acids. Retinal sits wrapped neatly inside rhodopsin, bound to the amino acid lysine. In a completely dark room the retinal in your eyes will be in the "11-cis" conformation. This is a bent conformation which allows retinal to fit inside rhodopsin. When a photon is absorbed by retinal it changes the conformation to the "all-trans" conformation, and retinal no longer fits inside rhodopsin. Rhodopsin then unravels and this sets off a cascade of chemical reactions that end with a signal being sent to the brain. What's even more amazing is how well we understand that cascade. But that's a story for a different day...

Thursday, March 14, 2013

Science Myths and Misconceptions - Part IV: Gravity in Space

#4 - There is no gravity in space

A common view of space is that it is a gravity free zone. I can understand where the confusion comes from (we do call it zero gravity, after all), but there is actually a lot of gravity in space. Gravity is the reason the moon orbits the earth, the earth orbits the sun, and the sun orbits around the center of the galaxy. Gravity is a force that any object with mass will create. We're most familiar with the gravity on earth, but even a small object (like you) will have an attractive force (you can read more about that here). For an example of what "zero gravity" really is, let's look at Commander Chris Hadfield. Commander Hadfield has been on the International Space Station since December 21st, 2012. For those of you who aren't familiar with Hadfield's internet fame he's had a twitter conversation with Captain Kirk, given Reddit the best AMA of all time, and shared some amazing pictures. Now, why hasn't the ISS fallen from the sky in that time? Certainly if there were any gravity at all the entire station would come crashing down, right? The answer, which you already knew, is that the ISS is orbiting the earth. But what does it really mean to orbit? Saying that the ISS is orbiting the earth is really just a fancy way of saying it's falling to the earth but hasn't gotten there yet. The ISS is traveling with a velocity horizontal to the earth of ~17,500 mph (imagine a point 5 miles away from you - traveling as fast as the ISS you'd be there in just over a second!). Commander Hadfield doesn't feel like he's moving at that speed though - he's falling to the earth at the same rate that the space station is, so he feels weightless. (The video of Commander Hadfield washing his hands in space is worth looking up.) You can see the rest of the Science Myths and Misconceptions here!
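To put rough numbers on that, here's a minimal Python sketch (mine, not the original post's) that estimates the speed of a circular orbit at ISS altitude and checks the 5-miles-in-a-second claim. The ~400 km altitude and the constants are assumed values, not taken from the post.

    import math

    # Assumed constants (not from the original post)
    GM_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
    R_EARTH = 6.371e6     # mean Earth radius, m
    ALTITUDE = 400e3      # typical ISS altitude, m

    # For a circular orbit, gravity supplies the centripetal force:
    # GM/r^2 = v^2/r, so v = sqrt(GM/r)
    r = R_EARTH + ALTITUDE
    v = math.sqrt(GM_EARTH / r)          # ~7,670 m/s
    v_mph = v / 0.44704                  # ~17,200 mph

    # Time to cover 5 miles at that speed
    t = (5 * 1609.34) / v                # ~1.05 s

    print(f"orbital speed: {v:.0f} m/s ({v_mph:,.0f} mph)")
    print(f"time to cover 5 miles: {t:.2f} s")

This lands within a couple of percent of the post's 17,500 mph figure, and the 5-mile trip does indeed take just over a second.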
Tuesday, March 12, 2013

Chemistry and Cooking

"Chemistry is just like cooking, except you can't lick the spoon!"

By training I'm a chemist, though what I do in the lab is far from what most people imagine when they think of a chemist. When I say I'm a chemist, I'm sure most people imagine the mad scientist surrounded by glassware, odd colored liquids, and smoking test tubes (come to think of it, this is how I imagine organic chemists!). But the chemistry I do doesn't involve any of those things. I don't wear a lab coat, gloves, or safety goggles (though I do wear laser glasses). I haven't even synthesized anything in a lab in over 3 years. I think part of me misses that, which is why I find it very relaxing to cook. I don't like to follow a recipe, though. I cook in the style of "a dash of this" and "a pinch of that". If my wife really likes something I've cooked she knows to enjoy the moment because she'll never have that exact dish again. That's not to say my wife doesn't like my cooking, she just knows that every meal will be different. Usually when I'm cooking my mind turns to chemistry. In particular I've been thinking of the Maillard reaction. Though you may not have heard of it before, I'm sure you've seen it. Just take a look at the amazing piece of chicken I fried up tonight. Ok, so maybe I wrote this as an excuse to post this picture. At least I haven't added a filter or something... The browning on the outside of the chicken is due to the Maillard reaction, which happens when amino acids react with sugar in the presence of heat. You'll see it in toast, roasted coffee, maple syrup, caramels, and much more. From that list I'm sure you'll agree that the reaction can create a wide variety of tastes. Understanding and utilizing the reaction is the basis of the food industry. (The following paragraph is written by a physical chemist. Like I said, I haven't done "real" chemistry in years.) The process occurs in three stages. First, the carbonyl group of a sugar reacts with an amino acid to produce water and glycosylamine. Then, the glycosylamine pushes a hydrogen around (which is my simplistic - and wrong - way of describing an Amadori rearrangement, but I'll admit I had to look that up) to produce a variety of aminoketones. In the final step a host of other products are produced depending on the pH, the temperature, and the amino acids involved. These different products are why the Maillard reaction creates such a wide variety of tastes. So my question to other chemists that read this is - do you enjoy cooking? For you synthetic chemists, does it feel too much like work to be enjoyable? Let me know in the comments!

Monday, March 11, 2013

Living with Chemicals: Betacarotene

The Chemical (Systematic name): 1,3,3-Trimethyl-2-[3,7,12,16-tetramethyl-18-(2,6,6-trimethylcyclohex-1-en-1-yl)octadeca-1,3,5,7,9,11,13,15,17-nonaen-1-yl]cyclohex-1-ene

You probably know it as: Betacarotene

In the next few blog entries for "living with chemicals" I'm going to be focusing on some of the amazing chemicals that make sight possible. We're going to start with the chemical betacarotene, which you've probably heard helps you see better at night. You may have even been told that eating carrots helps you see better at night because of the high betacarotene content. While it is true that betacarotene is necessary for night vision (it's converted to Vitamin A, which is converted to retinal, which is necessary for sight), it won't help you see better at night unless you have a real deficiency. However, as is true with many myths, the story behind the myth is more interesting than the myth itself. During WWII, Britain's Royal Air Force developed a new radar system (the Airborne Interception Radar). This gave them a huge advantage for shooting down Nazi planes. So, to keep the Nazis from realizing they had a new tool, they spread the rumor that their soldiers ate lots of carrots to help them see better in the dark. And it worked so well that people believe it to this day! Betacarotene is also interesting because it's a really long molecule.
There are 11 conjugated double bonds (the alternating single and double bonds you see along the chain), which together contribute 22 π electrons. This actually means that betacarotene is good evidence that quantum mechanics is a valid theory (that may sound like a leap, but let me explain). Some of the most simplistic models of electrons describe them as orbiting around a single atom. That may make sense, but it's wrong. In systems like betacarotene the electrons actually move freely down the entire chain. In fact, they exist as a wave spread along the whole chain at the same time. We can model the electron system as a ladder of discrete energy levels. Only two electrons can occupy each level, so in total 11 energy levels are filled. When light interacts with this system an electron is promoted from the n = 11 level to the n = 12 level. But this can't happen when just any light comes along; it must be light whose energy is equal to the spacing between the two levels - which corresponds to a wavelength of ~480 nm, a nice blue color. So that's why carrots are orange: they reflect orange light and absorb blue light. But we can still learn something cool if we model the system as a particle in a box. The energies of the particle-in-a-box levels can be calculated from

\[E_n = \frac{n^2 h^2}{8mL^2}\]

where n is the energy level, h is the Planck constant, m is the mass of the electron, and L is the length of the box. The difference between level 11 and level 12 is

\[\Delta E = E_{12} - E_{11} = \left(12^2 - 11^2\right)\frac{h^2}{8mL^2}\]

which can be rearranged so that we can calculate the length of the chain in betacarotene:

\[L = \sqrt{\left(12^2 - 11^2\right)\frac{h^2}{8m\,\Delta E}} = 1.89\ \text{nm}\]

Which is a pretty good answer for the length of that long chain in betacarotene, especially since we used a very simple model to get the answer. So next time you bite into a carrot, take a look at its nice orange color and remember that without quantum mechanics it wouldn't be orange at all.
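As a quick check on that arithmetic, here's a minimal Python sketch (mine, not the original post's) that redoes the particle-in-a-box estimate from the quoted 480 nm absorption; the constants are standard values.

    import math

    # Standard physical constants (rounded)
    h = 6.626e-34     # Planck constant, J s
    m_e = 9.109e-31   # electron mass, kg
    c = 2.998e8       # speed of light, m/s

    # Photon energy matching the n = 11 -> 12 gap, from the quoted wavelength
    wavelength = 480e-9                  # m
    dE = h * c / wavelength              # ~4.1e-19 J

    # Particle in a box: E_n = n^2 h^2 / (8 m L^2),
    # so dE = (12^2 - 11^2) h^2 / (8 m L^2); solve for L
    L = math.sqrt((12**2 - 11**2) * h**2 / (8 * m_e * dE))

    print(f"estimated chain length: {L * 1e9:.2f} nm")   # ~1.83 nm

This prints about 1.83 nm; the small difference from the 1.89 nm quoted above comes down to the exact wavelength plugged in, and either way it's the right order of magnitude for the conjugated chain.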
Thursday, March 7, 2013

Zero Hour: Zero Credibility

A Bad Science on TV post by David Zaslavsky (check out his awesome physics blog here)

There's a new TV show on ABC, Zero Hour, whose previews really piqued my interest earlier this year. Highly skilled assassin, greatest conspiracy in the history of the world, something about clocks. Seems like good clean utterly ridiculous fun. I like horrible disaster movies, so I figured this should fit right in. But only three episodes in, Zero Hour has already managed to butcher the science so badly I'm not sure I can stand it anymore. Let me set the stage: the main character, Hank, is searching for his wife, who has been kidnapped, and to find her he has to locate a series of clocks, each of which has clues leading to the next location. Clock number 2's clue was a map of the constellation Cepheus. Combined with the time and date on the clock (the hands were frozen in place), this supposedly led Hank to the exact location you'd have to be to view the constellation at that time: Chennai, India. But maybe you can see the problem here: a constellation is not visible from only a single location! By their criteria, Hank could be looking for any place in that entire hemisphere of the Earth. Sure, maybe the clue was supposed to identify where Cepheus would be seen directly overhead, but that's not anywhere close to India. Cepheus is a northern hemisphere constellation, very close to the north celestial pole, so the only places it appears directly overhead are in the Arctic. On the other hand, looking at the sky from Chennai at 8:15 AM on March 8, 1938, the date and time named in the show, Cepheus is almost right on the horizon. The constellation that was directly overhead at the time was Aquila, which they could just as easily have named. To be fair, I guess that wouldn't make for very good TV because it's basically a nondescript rhombus, but then again, Cepheus is just a square with a hat, so you can't be too picky. But that snafu with the constellation, which I could have lived with, pales in comparison to the next (and most recent) episode. The clue from the third clock leads Hank, after some floundering on the acronym IAS (which anyone as smart as he is should instantly recognize), to the Institute for Advanced Study in Princeton, NJ. Let's just bypass the fact that they managed to get almost every location in Princeton wildly wrong — I mean, if you've never been there, you probably wouldn't care. (For the record, the Princeton Public Library looks like an entirely normal public library, not a stuffy university library as it's shown in the show.) What really irks me is the equation they found on Einstein's blackboard at the end. In the show, Einstein erased something from the blackboard he was working on just before he died, which was rumored to be the formula for a new power source that he considered too dangerous for humankind to control.

• Real power sources come from engineers tinkering in workshops, not from equations on a blackboard.

In the show, the erased formula turns out to be a key to a coded message Einstein left on the rest of his blackboard.

• Real physics formulas are neither keys nor coded messages.

In reality, you might recognize this formula:

\[\hat{H}\,\psi = E\,\psi\]

Yep, that's the time-independent Schrödinger equation, an entirely mundane equation that forms the basis for nonrelativistic quantum mechanics. It would have had relatively little to do with what Einstein was working on at the time, and certainly there's no way it could have been turned into the key for a coded message which would also make sense as a physics formula. Anyway, it's kind of a moot point by now. The latest news from the "TV gods" is that Zero Hour has been canceled after just these three episodes. Honestly, I'm not surprised. I only wish the lesson to take away from this would be that you can't get away with terrible science on television, and not just that sucky TV is sucky.
Tesla Girls, Tesla Coils, She's My Tesla Goy-Um! But Siriusly, an idea that is ahead of its Time is about as popular as the one that goes against the accepted Tide, relegating its proponent to Loopy (Round-the-Bend) status--without further introduction, I give You, Tesla!

"…I, for one, refuse to subscribe to such a view." --Nikola Tesla

Tesla's adamant refusal to adopt the little understood yet highly regarded (then as now) views of the vanguard of his day put him instead in league with the 'luna-tics', aka Renaissance Thinkers and Prophets, or just your run-of-the-neighborhood crazy mother. While there is no denying that both the Special and General Theories of Relativity do a serviceable job in explaining and even predicting certain phenomena, the very fact that the one necessitates the other hints that each is incomplete, which begs for careful vetting. And while my undistinguished career is a dim bulb compared to either Tesla's or Einstein's resplendent contributions to Science, it hasn't escaped me that Tesla may have had a real concern with Einstein's Relativity and may not have been simply casting aspersions. It may very well be the case that space itself is not 'curved' due to the presence of matter (heavy objects), since what we are observing/experiencing is a 'projection' of the True Nature of the Matter under discussion. By 'projection' I am not referring to our simplistic view of holograms, which may serve as a good analogy, but something much deeper than that: some 'other' physical entity that, due to its extra-dimensionality (hyper-space, or whatever the common catch-phrase-of-the-day is), we cannot gauge directly but should be able to deduce the presence of, much the same way similes, metaphors, and music can convey something as indeterminate as what someone else is feeling without our having to sense it ourselves. The limited 4D space-time only allows us rare glimpses into the larger picture. It's a little like looking out of the porthole (A Gardner Like That One above) of an ocean-going vessel and thinking that's all there is to the ocean. Enough wrangling with the words, time for the GeoMetry (gee, i'm a tree!) using a concrete example. {And to our left, allow me to call Your attention to the Bliss comic panel of Tuesday May 17, 2011, insinuating that a b**tch explains 'it'; an accolade meaning "Congratulations! __ (Insert Name) has now graduated UFO Academy (summa cum laude)"}

The orbit of the Moon around the Sun is a convex curve approximated by the parametrization

\[(x, y) = (400\cos t + \cos 13t,\ 400\sin t + \sin 13t)\]

The squared distance from the center expands to

\[x^2 + y^2 = 160000\cos^2 t + \cos^2 13t + 800\cos 13t\cos t + 160000\sin^2 t + \sin^2 13t + 800\sin 13t\sin t\]

which can then be simplified (the cross terms collapse to a single cosine, and the lone +1 is dropped) to

\[x^2 + y^2 \approx 160000 + 800\cos 12t\]

And since we're not trying to land anyone on the Sun or the Moon, but simply doing this as an exercise to bridge the esoteric to the familiar, taking the square root we can say the distance itself approximates

\[r \approx 400 + \cos 12t\]

And, thanks to WolframAlpha, you can plot the (exaggerated) curve for yourself. A few things worth mentioning about the Moon's orbit around the Sun are that it is essentially elliptical, that it is locally convex, and that when we consider the sidereal month of 27.32 days instead of the synodic month of 29.54 days, the ellipsoid is more like a tridecagon.
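Here's a short hedged Python sketch (my own check, not part of the original post) that tests the 'locally convex' claim numerically: for a parametric plane curve, the signed curvature κ = (x′y″ − y′x″)/(x′² + y′²)^(3/2) never changes sign exactly when the path never bends the other way.

    import numpy as np

    # Toy parametrization of the Moon's path around the Sun used above:
    # one revolution of the Earth in t ∈ [0, 2π), 13 lunar cycles riding on it
    t = np.linspace(0.0, 2.0 * np.pi, 100_000)

    # Analytic derivatives of x = 400 cos t + cos 13t, y = 400 sin t + sin 13t
    xp = -400 * np.sin(t) - 13 * np.sin(13 * t)
    yp = 400 * np.cos(t) + 13 * np.cos(13 * t)
    xpp = -400 * np.cos(t) - 169 * np.cos(13 * t)
    ypp = -400 * np.sin(t) - 169 * np.sin(13 * t)

    # Signed curvature of a parametric plane curve
    kappa = (xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5

    print("convex everywhere:", bool(np.all(kappa > 0)))
    print("min curvature:", kappa.min())

With these numbers the curvature stays positive everywhere (the numerator works out to 162197 + 72800 cos 12t, which never reaches zero), reproducing the well-known fact that the Moon's path around the Sun never loops back or even goes concave.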
From studying the concept of 'unwrapping' curves from developable surfaces, we learned that a curve that lies on a conic surface and its ceiling projection are of the same type, such that an ellipse projects as an ellipse, a parabola projects as a parabola, and a hyperbola projects as a hyperbola, although their eccentricities may differ. And we know from observation that the radius vector (r) of objects with elliptical orbits 'sweeps equal areas in equal times'. That last statement about elliptical orbits is how we were taught to think about orbits in school. Thinking in terms of 'unwrapping' and/or 'projecting' curves from developable surfaces (such as cylinders and cones) simplifies this relationship, since the areas are preserved in much the same way arclength is invariant, eliminating the need to consider the Time and allowing us to focus on the spatial attributes alone. {Proofs and more illustrations can be found at the source used here, "Unwrapping Curves from Cylinders and Cones," by Apostol & Mnatsakanian (American Mathematical Monthly, vol. 114).}

Picture a conic curve C with its 'ceiling projection' curve C0 (a ceiling projection of a conic curve is like having it lie on the surface of the cone and then the cone is 'popped' open like one would an umbrella). Conic curves projected in this manner have the following attributes:

1. a focus at the vertex of the cone;
2. a directrix L at the line of intersection of the cutting plane and the ceiling plane;
3. eccentricity denoted by λ = tan α · tan β;
4. a polar equation

\[r(\varphi) = \frac{r(0)}{1 + \lambda\sin\varphi}\]

Having one focus at the vertex of the cone is significant in that we know for elliptical orbits the center of mass of "...the orbiting-orbited system is at one focus of both orbits" and the general assumption is that there is "...nothing present at the other focus." However, in the search for 'missing mass', based on conic projections of orbits and the relationships between the viewing plane and the observed point P0, and what we know of the observable 'detectable mass', it would be safe to posit that the second focus is actually the barycenter of the 'missing mass' related to P0 (which in Reality is a projection of the Actual point P; while it cannot be observed directly from the viewing plane, its 'gravitational' effects probably are what manifest as the presence of the secondary focus, which makes it appear as if 'there is nothing present at the other focus').

Conic C and its ceiling projection C0 share the following relationships:

1. the ratio of the projected curve C0's radial vector (r) to the distance (d) to the directrix is constant, independent of P or P0, and equal to the eccentricity (λ), such that r/d = λ = tan α · tan β;
2. the polar equations of the unwrapped curve (C) and its ceiling projection (C0) are related by a factor k = 1/sin α, such that R(θ) = k·r(θ);
3. the area A of the projected curve C0 after one complete revolution (φ = 2π) of unwrapping the curve C is given by

\[A = E\cos\beta = S\sin\alpha = \pi\rho s\]

(where E = disk area of conic C, β = its angle of inclination, S = finite lateral surface of the cone subtended by C and its vertex, α = the half-angle of the cone with vertex angle 2α, ρ = radius of the cone base, and s = unwrapped arclength of C on the surface of the cone).

A generalized conic is a planar curve generated when a conic curve is projected onto a ceiling plane with a focus at the vertex (V) of the cone.
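For readers who want to see one of these curves, here's a hedged Python sketch (mine, not from the post) that draws one, assuming the generalized form simply scales the polar angle, r(φ) = r(0)/(1 + λ sin(kφ)), which has period interval 2π/k; matplotlib is assumed.

    import numpy as np
    import matplotlib.pyplot as plt

    def generalized_conic(k, lam, r0=1.0, periods=1, n=4000):
        """Trace r(phi) = r0 / (1 + lam*sin(k*phi)) over a number of
        period intervals of length 2*pi/k. For k = 1 this is an ordinary
        conic of eccentricity lam with a focus at the origin."""
        phi = np.linspace(0.0, periods * 2.0 * np.pi / k, n)
        r = r0 / (1.0 + lam * np.sin(k * phi))
        return r * np.cos(phi), r * np.sin(phi)

    # The example discussed below: k = 5.5, lambda = 0.22,
    # drawn after 1, 5, and 11 period intervals
    for periods in (1, 5, 11):
        x, y = generalized_conic(k=5.5, lam=0.22, periods=periods)
        plt.plot(x, y, label=f"{periods} period(s)")

    plt.gca().set_aspect("equal")
    plt.legend()
    plt.show()

Because k = 5.5 is a half-integer, the curve only closes after two full revolutions, i.e. after 11 of these period intervals — which is what the description below is getting at.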
A generalized ellipse with period interval length 2π/k, k = 5.5, λ = 0.22 is generated after 1 period only; after 5 and after 11 periods the same generalized ellipse fills in further, closing up at 11 (two full revolutions). The approximate annual orbit of the Moon around the Sun can be drawn as a generalized conic in the same way; a 1-year orbit is about 13 sidereal months, or 13 periods, with a period interval length of approximately 55.4° = 2π/k (so k ≈ 6.5, and the curve closes after those 13 periods). Similar shapes found in nature may be described in the same manner. {having run my one brain cell to the ground getting this far, time to move on to a refresher exercise otherwise known as "physics from lyrics"}

And, yes, while the aim of all the above may seem nebulous, that's more owing to lack of time than understanding on my part, and the fuzziness clears right up given a little attention. A little like when expressing the simple sentiment "all roads lead to Rome" as a reformulated old Irish Blessing, "May all roads lead You Home", which could prove to be a Curse as much as a Blessing, depending on who You find waiting there for You. The main idea is to derive the orbit as a generalized conic and consider that there is symmetry in the integral of a smooth function f over the closed boundary of a geometric figure S being equal to the integral of the derivative f′ over the figure S itself. {What she said, expressed succinctly in the equation below, and That's why Math!}

\[\int_{\partial S} f = \int_{S} f'\]

The explicit form of this relationship appears in physics time and again: the linear case as the fundamental theorem of calculus (evaluating the function at the endpoints of a line equals the integral of its derivative along the line); the 2-D case, where integrating f over the closed curve bounding a surface equals the integral of f′ on the surface (as in Stokes' Theorem and Ampère's Law); and the 3-D case as given by Gauss's Law, where integrating a function f over a closed surface surrounding a volume equals the integral of f′ throughout the volume. (from THE MAP OF PHYSICS, EXPLORATIONS OF NEORATIONALISM, Essays in the Nature and Uses of Reason, by Dennis Anthony)

Ceiling projections of a hole drilled (bored) through a cone's axis; the bean curves (quartic curves, given parametrically in the source) appear in the first column, bottom row (from Unwrapping Curves...). Compare to the Cassini ovals, also quartic curves given parametrically.

{I am no apologist, but You know by now that I pretty much forgot much of my key life experiences up to this point, especially the higher-learning part of it, but it seems I can finagle some understanding of it up to 9th grade (which includes Geometry & Trig). (And, don't be misled into thinking or trying to figure out this weird amnesia; that, while it may seem to be selective, at times I think it is a protection mechanism and at other times it may serve as the only polite way to avoid having to confront the outlandish; especially when random strangers 'say' things about me like 'why can't she remember' in the third person while they stare pointedly at me and know I am present. Since I don't know who 'they' are and none of them seem to ever bother with an introduction, I act like I don't 'hear' a thing--so, go right on talking about me to me like I am not in the room!)
} I am constantly reminded that 'it' is not about Me, and my Ego is not so grandiose nor so fragile as not to permit continuing to operate under such an assumption--that it isn't about Me. Can You imagine how paranoid I would be if I ever thought otherwise, especially when confronted with song lyrics about "Every Breath You Take" (The Police). So, let's just say, for argument's sake, that it is about Sting (the talent behind the aforementioned lyrics) and what Sting may have been thinking when he wrote the lyrics to "Ghost Story". Since Sting is an Englishman by birth, speaks English as his first language, and was an English teacher, it may be reasonable to assume that he means what he says and says what he means (in English)--making him one of the most literate songsters around, and his lyrics relatively easy to parse for our purposes here.

What is the force that binds the stars
I wore this mask to hide my scars
What is the power that pulls the tide
I never could find a place to hide
What moves the Earth around the Sun
What could I do but run and run and run...

-excerpt from "Ghost Story", lyrics by Sting, from the album "Brand New Day"

While the lyrics are profound and the experience of hearing Sting's rendition of "Ghost Story" quite moving, the answer to many of his questions is "I don't know." At the end of the song, the listener is satisfied with Sting's jump to conclusion, "I must have loved you." Really? Is that all? Just an assertion without any proof? Just a series of observations and a lack of postdoctoral studies in physics or math? To demonstrate how well this model runs without Me, I will now show (Computer Command Copy on Write) that Sting unwittingly holds the answers to his own thoughtful questions. (Not by Magic, but by the premise that All Inspiration comes from God--that's why You should just tell me and not have to rouse Mr. Sting up from his sound slumber in the middle of the night to go multi-platinum with it.)

W/hat is= the force F, that binds the stars (Gravitational Binding Energy):

\[\hat{W} = \int F\,dx; \qquad \hat{W} = BE; \qquad F = \frac{d\hat{W}}{dx} + c = \frac{d(BE)}{dx} + c\]

W/hat is= the P(power), that pulls (mechanical work) the tide (tidal)
W/hat moves (equation of motion for) the Earth around (polar/cylindrical/orbit) the Sun
W/hat could I (Current, Impedance, Impulse) do but run + run + run (streaming) (you could take a walk)

W in physics/math can mean a few things directly applicable to the query: W = watt (unit of Power); W = mechanical Work; W = a dimension (w, width) or ω (frequency); W = the Lambert W function; W = the W boson. Just a cursory look at the lyrics indicates that they break down the physics in somewhat of a coherent way, in that Work is the amount of energy transferred by a force acting through a distance in the direction (vector) of the force. The units for Work and Energy are the same (joules), both scalar quantities, and Watts are units of Power, which is the amount of energy expended or work done per unit time (dW/dt)--a differential. Sting's veiled, yet keen, insight is more evident when W is understood to be the Lambert W function, which provides an exact solution to the quantum-mechanical double-well Dirac delta function model, which consists of a time-independent Schrödinger equation for a particle in a one-dimensional potential well. Applications of the Lambert W function are ubiquitous in physics, including, but not limited to, the areas of atomic, molecular, and optical physics.
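Since the Lambert W function is doing real work in that last claim, here's a minimal hedged sketch (mine, not the post's) of what W actually is: it solves w·e^w = x, and SciPy exposes it as scipy.special.lambertw.

    import numpy as np
    from scipy.special import lambertw

    # W(x) is defined by w * exp(w) = x; k=0 selects the principal branch
    x = 2.0
    w = lambertw(x, k=0).real
    print(f"W({x}) = {w:.6f}")
    print("check w*exp(w):", w * np.exp(w))   # reproduces x

    # Typical use: t = exp(-t) rearranges to t*exp(t) = 1, so t = W(1)
    t = lambertw(1.0).real
    print(f"solution of t = exp(-t): t = {t:.6f}")   # ~0.567143, the omega constant

Transcendental equations of this w·e^w shape are exactly the kind that turn up in the double-well Dirac delta model mentioned above.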
And while W as width may seem out of place with the other W indicators, it makes perfect sense in this context when considering Feynman's assertion, and it seems only appropriate that Feynman be mentioned along with Tesla and Einstein:

• "These notions of potential and kinetic energy depend on a notion of length scale. For example, one can speak of macroscopic potential and kinetic energy, non-problematic if the various length scales are decoupled, as is often the case, for instance when friction converts macroscopic work into microscopic thermal energy."

Feynman specifies "length", but the general meaning is a scale of one dimension, so length, width, height, etc. are interchangeable. Of course, what Feynman is really talking about, in physics parlance, is the idea of fractals: "As Above, So Below."

The hat (^) operator, usually presented over a value or function, can mean: a Fourier transform; an element removed from a set; a unit vector (a dimensionless vector with magnitude 1); or an estimator or estimated value x̂ as against the theoretical x. Read as x-hat or x-roof, where x represents the character under the hat. Given the various interpretations of W above, the implications are:

• since Work, Energy, and Power are scalars in our 4D space-time, in order for them to be understood as vectors they must be moved up a notch (or 2 or more) in order for w/hat to make sense as a unit vector (dimensionally speaking); so, based on the 'lyrics to physics', these relationships hold true in higher-order dimensions as non-scalars. {So w/hat if I'm wrong? The Theory Of Everything must include Everything from A Tree Grows in Brooklyn to the Kitchen Sink!}

...look, a joke can run its course, and in this case it's been taken too far, but I have to say hearing "the wife" reference coming from strangers is offensive to say the least and utterly ignorant & insensitive to put it mildly. {I need to see my Father, stop paying bills & stop getting e-mails at the office from colleagues who verify my e-mail address is not ever in their send log}
Solitary wave in a laboratory wave channel

In mathematics and physics, a soliton is a self-reinforcing solitary wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (The term "dispersive effects" refers to a property of certain systems where the speed of the waves varies according to frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described in 1834 by John Scott Russell (1808–1882), who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation". A single, consensus definition of a soliton is difficult to find. Drazin & Johnson (1989, p. 15) ascribe three properties to solitons:

1. They are of permanent form;
2. They are localized within a region;
3. They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift.

A hyperbolic secant (sech) envelope soliton for water waves. The blue line is the carrier signal, while the red line is the envelope soliton.

A topological soliton, also called a topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution." Soliton stability is due to topological constraints, rather than integrability of the field equations. The constraints arise almost always because the differential equations must obey a set of boundary conditions, and the boundary has a non-trivial homotopy group, preserved by the differential equations. Thus, the differential equation solutions can be classified into homotopy classes. There is no continuous transformation that will map a solution in one homotopy class to another. The solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess–Zumino–Witten model in quantum field theory, the magnetic skyrmion in condensed matter physics, and cosmic strings and domain walls in cosmology.

An animation of the overtaking of two solitary waves according to the Benjamin–Bona–Mahony equation – or BBM equation, a model equation for (among others) long surface gravity waves. The wave heights of the solitary waves are 1.2 and 0.6, respectively, and their velocities are 1.4 and 1.2. The upper graph is for a frame of reference moving with the average velocity of the solitary waves. The lower graph (with a different vertical scale and in a stationary frame of reference) shows the oscillatory tail produced by the interaction.[4] Thus, the solitary wave solutions of the BBM equation are not solitons.

In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behavior in media subject to the Korteweg–de Vries equation (KdV equation) in a computational investigation using a finite difference approach. They also showed how this behavior explained the puzzling earlier work of Fermi, Pasta, Ulam, and Tsingou.[5] In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation.[6] The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems.
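To make that concrete, here's a small hedged Python sketch (mine, not from the article): the one-soliton solution of the KdV equation u_t + 6u·u_x + u_xxx = 0 is u(x,t) = (c/2)·sech²(√c·(x − ct)/2), and we can verify numerically that it really does satisfy the equation.

    import numpy as np

    def kdv_soliton(x, t, c):
        """One-soliton solution of u_t + 6*u*u_x + u_xxx = 0.
        Amplitude c/2, width ~1/sqrt(c), speed c: taller solitons are
        narrower and faster, which is how Zabusky and Kruskal could watch
        them overtake one another and re-emerge unchanged."""
        return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

    # Check the PDE residual: for a traveling wave u(x - c*t),
    # u_t = -c * u_x, and the x-derivatives come from finite differences
    c = 1.5
    x = np.linspace(-40.0, 40.0, 8001)
    dx = x[1] - x[0]
    u = kdv_soliton(x, 0.0, c)

    u_x = np.gradient(u, dx)
    u_xxx = np.gradient(np.gradient(u_x, dx), dx)
    residual = -c * u_x + 6.0 * u * u_x + u_xxx

    print("max |residual|:", np.max(np.abs(residual)))  # small; shrinks as dx -> 0

The residual is limited only by the finite-difference step, which is a quick way to convince yourself that a sech² profile keeping its shape at speed c is an exact solution rather than a numerical accident.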
Note that solitons are, by definition, unaltered in shape and speed by a collision with other solitons.[7] So solitary waves on a water surface are near-solitons, but not exactly – after the interaction of two (colliding or overtaking) solitary waves, they have changed a bit in amplitude and an oscillatory residual is left behind.[8]

Solitons in fiber optics

Much experimentation has been done using solitons in fiber optics applications. Solitons in a fiber optic system are described by the Manakov equations. Solitons' inherent stability makes long-distance transmission possible without the use of repeaters, and could potentially double transmission capacity as well.[9]

1973 – Akira Hasegawa of AT&T Bell Labs was the first to suggest that solitons could exist in optical fibers, due to a balance between self-phase modulation and anomalous dispersion.[10] Also in 1973 Robin Bullough made the first mathematical report of the existence of optical solitons. He also proposed the idea of a soliton-based transmission system to increase performance of optical telecommunications.

1987 – Emplit et al. (1987) – from the Universities of Brussels and Limoges – made the first experimental observation of the propagation of a dark soliton in an optical fiber.

1988 – Linn Mollenauer and his team transmitted soliton pulses over 4,000 kilometers using a phenomenon called the Raman effect, named after Sir C. V. Raman who first described it in the 1920s, to provide optical gain in the fiber.

1998 – Thierry Georges and his team at France Telecom R&D Center, combining optical solitons of different wavelengths (wavelength-division multiplexing), demonstrated a composite data transmission of 1 terabit per second (1,000,000,000,000 units of information per second), not to be confused with Terabit-Ethernet. These impressive experiments have not translated to actual commercial soliton system deployments, however, in either terrestrial or submarine systems, chiefly due to the Gordon–Haus (GH) jitter. The GH jitter requires sophisticated, expensive compensatory solutions that ultimately make dense wavelength-division multiplexing (DWDM) soliton transmission in the field unattractive, compared to the conventional non-return-to-zero/return-to-zero paradigm. Further, the likely future adoption of the more spectrally efficient phase-shift-keyed/QAM formats makes soliton transmission even less viable, due to the Gordon–Mollenauer effect. Consequently, the long-haul fiberoptic transmission soliton has remained a laboratory curiosity.

2000 – Cundiff predicted the existence of a vector soliton in a birefringent fiber cavity passively mode-locking through a semiconductor saturable absorber mirror (SESAM). The polarization state of such a vector soliton could either be rotating or locked depending on the cavity parameters.[11]

2008 – D. Y. Tang et al. observed a novel form of higher-order vector soliton in experiments and numerical simulations.
Different types of vector solitons and the polarization states of vector solitons have been investigated by his group.[12]

Solitons in biology

Solitons may occur in proteins[13] and DNA.[14] Solitons are related to the low-frequency collective motion in proteins and DNA.[15] A recently developed model in neuroscience proposes that signals, in the form of density waves, are conducted within neurons in the form of solitons.[16][17][18]

Solitons in magnets

In magnets, there also exist different types of solitons and other nonlinear waves.[19] These magnetic solitons are an exact solution of classical nonlinear differential equations — magnetic equations, e.g. the Landau–Lifshitz equation, continuum Heisenberg model, Ishimori equation, nonlinear Schrödinger equation and others.

The bound state of two solitons is known as a bion,[20][21][22] or, in systems where the bound state periodically oscillates, a breather. In field theory bion usually refers to the solution of the Born–Infeld model. The name appears to have been coined by G. W. Gibbons in order to distinguish this solution from the conventional soliton, understood as a regular, finite-energy (and usually stable) solution of a differential equation describing some physical system.[23] The word regular means a smooth solution carrying no sources at all. However, the solution of the Born–Infeld model still carries a source in the form of a Dirac-delta function at the origin. As a consequence it displays a singularity at this point (although the electric field is everywhere regular). In some physical contexts (for instance string theory) this feature can be important, which motivated the introduction of a special name for this class of solitons. On the other hand, when gravity is added (i.e. when considering the coupling of the Born–Infeld model to general relativity) the corresponding solution is called EBIon, where "E" stands for Einstein.

References

1. ^ "Light bullets".
2. ^ Scott Russell, J. (1844). "Report on waves". Fourteenth meeting of the British Association for the Advancement of Science.
3. ^ Korteweg, D. J.; de Vries, G. (1895). "On the Change of Form of Long Waves advancing in a Rectangular Canal and on a New Type of Long Stationary Waves". Philosophical Magazine. 39: 422–443. doi:10.1080/14786449508620739.
4. ^ Bona, J. L.; Pritchard, W. G.; Scott, L. R. (1980). "Solitary-wave interaction". Physics of Fluids. 23 (3): 438–441. Bibcode:1980PhFl...23..438B. doi:10.1063/1.863011.
5. ^ Zabusky & Kruskal (1965)
6. ^ Gardner, Clifford S.; Greene, John M.; Kruskal, Martin D.; Miura, Robert M. (1967). "Method for Solving the Korteweg–deVries Equation". Physical Review Letters. 19 (19): 1095–1097. Bibcode:1967PhRvL..19.1095G. doi:10.1103/PhysRevLett.19.1095.
8. ^ See e.g.: Maxworthy, T. (1976). "Experiments on collisions between solitary waves". Journal of Fluid Mechanics. 76 (1): 177–186. Bibcode:1976JFM....76..177M. doi:10.1017/S0022112076003194. Fenton, J.D.; Rienecker, M.M. (1982). "A Fourier method for solving nonlinear water-wave problems: application to solitary-wave interactions". Journal of Fluid Mechanics. 118: 411–443. Bibcode:1982JFM...118..411F. doi:10.1017/S0022112082001141. Craig, W.; Guyenne, P.; Hammack, J.; Henderson, D.; Sulem, C. (2006). "Solitary water wave interactions". Physics of Fluids. 18 (057106): 25 pp. Bibcode:2006PhFl...18e7106C. doi:10.1063/1.2205916.
9. ^ "Photons advance on two fronts". October 24, 2005. Retrieved 2011-02-15.
10. ^ Fred Tappert (January 29, 1998).
"Reminiscences on Optical Soliton Research with Akira Hasegawa" (PDF).  11. ^ Cundiff, S. T.; Collings, B. C.; Akhmediev, N. N.; Soto-Crespo, J. M.; Bergman, K.; Knox, W. H. (1999). "Observation of Polarization-Locked Vector Solitons in an Optical Fiber". Physical Review Letters. 82 (20): 3988. Bibcode:1999PhRvL..82.3988C. doi:10.1103/PhysRevLett.82.3988.  12. ^ Tang, D. Y.; Zhang, H.; Zhao, L. M.; Wu, X. (2008). "Observation of high-order polarization-locked vector solitons in a fiber laser". Physical Review Letters. 101 (15): 153904. Bibcode:2008PhRvL.101o3904T. doi:10.1103/PhysRevLett.101.153904. PMID 18999601.  13. ^ Davydov, Aleksandr S. (1991). Solitons in molecular systems. Mathematics and its applications (Soviet Series). 61 (2nd ed.). Kluwer Academic Publishers. ISBN 0-7923-1029-2.  14. ^ Yakushevich, Ludmila V. (2004). Nonlinear physics of DNA (2nd revised ed.). Wiley-VCH. ISBN 3-527-40417-1.  15. ^ Sinkala, Z. (August 2006). "Soliton/exciton transport in proteins". J. Theor. Biol. 241 (4): 919–27. doi:10.1016/j.jtbi.2006.01.028. PMID 16516929.  16. ^ Heimburg, T., Jackson, A.D. (12 July 2005). "On soliton propagation in biomembranes and nerves". Proc. Natl. Acad. Sci. U.S.A. 102 (2): 9790. Bibcode:2005PNAS..102.9790H. doi:10.1073/pnas.0503823102. PMC 1175000 . PMID 15994235.  17. ^ Heimburg, T., Jackson, A.D. (2007). "On the action potential as a propagating density pulse and the role of anesthetics". Biophys. Rev. Lett. 2: 57–78. arXiv:physics/0610117 . Bibcode:2006physics..10117H. doi:10.1142/S179304800700043X.  18. ^ Andersen, S.S.L., Jackson, A.D., Heimburg, T. (2009). "Towards a thermodynamic theory of nerve pulse propagation". Progr. Neurobiol. 88 (2): 104–113. doi:10.1016/j.pneurobio.2009.03.002.  19. ^ Kosevich, A. M.; Gann, V. V.; Zhukov, A. I.; Voronov, V. P. (1998). "Magnetic soliton motion in a nonuniform magnetic field". Journal of Experimental and Theoretical Physics. 87 (2): 401–407. Bibcode:1998JETP...87..401K. doi:10.1134/1.558674.  20. ^ Belova, T.I.; Kudryavtsev, A.E. (1997). "Solitons and their interactions in classical field theory". Physics-Uspekhi. 40 (4): 359–386. Bibcode:1997PhyU...40..359B. doi:10.1070/pu1997v040n04abeh000227.  21. ^ Gani, V.A.; Kudryavtsev, A.E.; Lizunova, M.A. (2014). "Kink interactions in the (1+1)-dimensional φ^6 model". Physical Review D. 89: 125009. arXiv:1402.5903 . Bibcode:2014PhRvD..89l5009G. doi:10.1103/PhysRevD.89.125009.  22. ^ Gani, V.A.; Lensky, V.; Lizunova, M.A. (2015). "Kink excitation spectra in the (1+1)-dimensional φ^8 model". Journal of High Energy Physics. 2015 (08): 147. arXiv:1506.02313 . doi:10.1007/JHEP08(2015)147. ISSN 1029-8479.  23. ^ Gibbons, G. W. (1998). "Born–Infeld particles and Dirichlet p-branes". Nuclear Physics B. 514 (3): 603–639. arXiv:hep-th/9709027 . Bibcode:1998NuPhB.514..603G. doi:10.1016/S0550-3213(97)00795-5.  24. ^ Powell, Devin (20 May 2011). "Rogue Waves Captured". Science News. Retrieved 24 May 2011.  Further readingEdit External linksEdit
Wednesday, November 6, 2013

old posts from my other blog

This is a collection of old blog posts, going back to 2006. For some strange reason I thought it would be a good idea to have two blogs. They have been migrated here.

a philosophy of science primer - part III

• part I: some history of science and logical empiricism,
• part II: problems of logical empiricism, critical rationalism and its problems.

After the unsuccessful attempts to found science on common sense notions, as seen in the programs of logical empiricism and critical rationalism, people looked for new ideas and explanations.

the thinker

The Kuhnian View

Thomas Kuhn's enormously influential work on the history of science is called The Structure of Scientific Revolutions. He challenged the idea that science is an incremental process accumulating more and more knowledge. Instead, he identified the following phases in the evolution of science:

• prehistory: many schools of thought coexist and controversies are abundant,
• history proper: one group of scientists establishes a new solution to an existing problem which opens the doors to further inquiry; a so-called paradigm emerges,
• paradigm based science: unity in the scientific community on what the fundamental questions and central methods are; generally a problem solving process within the boundaries of unchallenged rules (analogous to solving a Sudoku),
• crisis: more and more anomalies and boundaries appear; questioning of established rules,
• revolution: a new theory and worldview (weltbild) takes over, solving the anomalies, and a new paradigm is born.

Another central concept is incommensurability, meaning that proponents of different paradigms cannot understand the other's point of view because they have diverging ideas and views of the world. In other words, every rule is part of a paradigm and there exist no trans-paradigmatic rules. This implies that such revolutions are not rational processes governed by insights and reason. In the words of Max Planck (the founder of quantum mechanics; from his autobiography): "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it."

Kuhn deals additional blows to a commonsensical foundation of science with the help of Norwood Hanson and Willard Van Orman Quine:

• every human observation of reality contains an a priori theoretical framework,
• underdetermination of belief by evidence: any evidence collected for a specific claim is logically consistent with the falsity of the claim,
• every experiment is based on auxiliary hypotheses (initial conditions, proper functioning of apparatus, experimental setup, ...).

People slowly started to realize that there are serious consequences in Kuhn's ideas and in the problems faced by the logical empiricists and critical rationalists in establishing a sound logical and empirical foundation of science:

• postmodernism,
• constructivism or the sociology of science,
• relativism.

Modernism describes the development of Western industrialized society since the beginning of the 19th Century. A central idea was that there exist objective true beliefs and that progression is always linear. Postmodernism replaces these notions with the belief that many different opinions and forms can coexist and all find acceptance. Core ideas are diversity, differences and intermingling. In the 1970s it entered scientific and cultural thinking. Postmodernism has taken a bad rap from scientists after the so-called Sokal affair, where physicist Alan Sokal got a nonsensical paper published in the journal of postmodern cultural studies by flattering the editors' ideology with nonsense that sounds good.
Postmodernism has been associated with scepticism and solipsism, next to relativism and constructivism. Notable scientists identifiable as postmodernists are Thomas Kuhn, David Bohm and many figures in the 20th-century philosophy of mathematics, as well as Paul Feyerabend, an influential philosopher of science. To quote the Nobel laureate Steven Weinberg on Kuhnian revolutions:

Constructivism excludes objectivism and rationality by postulating that beliefs are always subject to a person's cultural and theological embedding and inherent idiosyncrasies. It also goes under the label of the sociology of science. In the words of Paul Boghossian (in his book Fear of Knowledge: Against Relativism and Constructivism):

Constructivism about rational explanation: it is never possible to explain why we believe what we believe solely on the basis of our exposure to the relevant evidence; our contingent needs and interests must also be invoked.

The proponents of constructivism go further, as in Barry Barnes' and David Bloor's Relativism, Rationalism and the Sociology of Knowledge. In its radical version, constructivism fully abandons objectivism:

• Objectivity is the illusion that observations are made without an observer (from the physicist Heinz von Foerster; my translation)
• Modern physics has conquered domains that display an ontology that cannot be coherently captured or understood by human reasoning (from the philosopher Ernst von Glasersfeld; my translation)

In addition, radical constructivism proposes that perception never yields an image of reality but is always a construction of sensory input and the memory capacity of an individual. An analogy would be the submarine captain who has to rely on instruments to indirectly gain knowledge of the outside world. Radical constructivists are motivated by modern insights gained from neurobiology. Historically, Immanuel Kant can be understood as the founder of constructivism. On a side note, the bishop George Berkeley went even as far as to deny the existence of an external material reality altogether. Only ideas and thought are real.

Another consequence of the foundations of science lacking commonsensical elements, and of the ideas of constructivism, can be seen in the notion of relativism. If rationality is a function of our contingent and pragmatic reasons, then it can be rational for a group A to believe P, while at the same time it is rational for a group B to believe the negation of P. Although, as a philosophical idea, relativism goes back to the Greek Protagoras, its implications are unsettling for the Western mind: anything goes (as Paul Feyerabend characterizes his idea of scientific anarchy). If there is no objective truth, no absolute values, nothing universal, then a great many of humanity's centuries-old concepts and beliefs are in danger. It should, however, also be mentioned that relativism is prevalent in Eastern thought systems, found, for example, in many Indian religions. In a similar vein, pantheism and holism are notions which are much more compatible with Eastern thought systems than Western ones.

Furthermore, John Stuart Mill's arguments for liberalism appear to also work well as arguments for relativism:

• the fallibility of people's opinions,
• opinions that are thought to be wrong can contain partial truths,
• accepted views, if not challenged, can lead to dogmas,
• the significance and meaning of accepted opinions can be lost in time.

From his book On Liberty. But could relativism possibly be true?
Consider the following hints:

• Epistemological
  • problems with perception: synaesthesia, altered states of consciousness (spontaneous, mystical experiences and drug induced),
  • psychopathology describes a frightening amount of defects in the perception of reality and one's self,
  • people suffering from psychosis or schizophrenia can experience a radically different reality,
  • free will and neuroscience,
  • synthetic happiness,
  • cognitive biases.
• Ontological
  • nonlocal foundation of quantum reality: entanglement, the delayed choice experiment,
  • illogical foundation of reality: wave-particle duality, superpositions, uncertainty, intrinsic probabilistic nature, time dilation (special relativity), the observer/measurement problem in quantum theory,
  • discreteness of reality: quanta of energy and matter, constant speed of light,
  • nature of time: not present in fundamental theories of quantum gravity, symmetrical,
  • arrow of time: why was the initial state of the universe very low in entropy?
  • emergence, self-organization and structure formation.

In essence, perception doesn't necessarily say much about the world around us. Consciousness can fabricate reality. This makes it hard to be rational. Reality is a really bizarre place. Objectivity doesn't seem to play a big role.

And what about the human mind? Is this at least a paradox-free realm? Unfortunately not. Even what appears as a consistent and logical formal thought system, i.e., mathematics, can be plagued by fundamental problems. Kurt Gödel proved that in every consistent (non-contradictory) system of mathematical axioms rich enough to yield the elementary arithmetic of whole numbers, there exist statements which can neither be proven nor disproved within the system. So logical axiomatic systems are incomplete. As an example, Bertrand Russell encountered the following paradox: let R be the set of all sets that do not contain themselves as members. Is R an element of itself or not? Either answer entails its opposite: if R contains itself, then by definition it must not, and if it doesn't, then it must.

If you really accede to the idea that reality and the perception of reality by the human mind are very problematic concepts, then the next puzzles are:

• why has science been so fantastically successful at describing reality?
• why is science producing amazing technology at breakneck speed?
• why is our macroscopic, classical level of reality so well behaved, and why does it appear so normal, although it is based on quantum weirdness?
• are all beliefs justified given the believer's biography and brain chemistry?

a philosophy of science primer - part II

Continued from part I

The Problems With Logical Empiricism

The programme proposed by the logical empiricists, namely that science is built of logical statements resting on an empirical foundation, faces central difficulties. To summarize:

• it turns out that it is not possible to construct pure formal concepts that solely reflect empirical facts without anticipating a theoretical framework,
• how does one link theoretical concepts (electrons, utility functions in economics, inflationary cosmology, Higgs bosons, …) to experiential notions?
• how to distinguish science from pseudo-science?

Now this may appear a little technical and not very interesting or fundamental to people outside the field of the philosophy of science, but it gets worse:

• inductive reasoning is invalid from a formal logical point of view!
• causality defies standard logic!

This is big news. So, just because I have witnessed the sun going up every day of my life (single observations), I cannot say it will go up tomorrow (general law). Observation alone does not suffice, you need a theory.
But the whole idea here is that the theory should come from observation. This leads to the dead end of circular reasoning. But surely causality is undisputable? Well, apart from the problems coming from logic itself, there are extreme examples to be found in modern physics which undermine the common sense notion of a causal reality: quantum nonlocality, the delayed choice experiment. But challenges often inspire people, so the story continues…

Critical Rationalism

OK, so the logical empiricists faced problems. Can't these be fixed? The critical rationalists believed so. A crucial influence came from René Descartes' and Gottfried Leibniz's rationalism: knowledge can have aspects that do not stem from experience, i.e., there is an immanent reality to the mind. The term critical refers to the fact that insights gained by pure thought cannot be strictly justified but only critically tested against experience. Ultimate justifications lead to the so-called Münchhausen trilemma, i.e., one of the following:

• an infinite regress of justifications,
• circular reasoning,
• dogmatic termination of reasoning.

The most influential proponent of critical rationalism was Karl Popper. His central claims were in essence:

• use deductive reasoning instead of induction,
• theories can never be verified, only falsified.

Although there are similarities with logical empiricism (an empirical basis, science as a set of theoretical constructs), the idea is that theories are simply invented by the mind and are temporarily accepted until they can be falsified. The progression of science is hence seen as an evolutionary process rather than a linear accumulation of knowledge. Sounds good, so what went wrong with this ansatz?

The Problems With Critical Rationalism

In a nutshell:

• basic formal concepts cannot be derived from experience without induction; how can they be shown to be true?
• deduction turns out to be just as tricky as induction,
• what parts of a theory need to be discarded once it is falsified?

To see where deduction breaks down, there is a nice story by Lewis Carroll (the mathematician who wrote the Alice in Wonderland stories): What the Tortoise Said to Achilles. If deduction goes down the drain as well, not much is left to ground science on notions of logic, rationality and objectivity. Which is rather unexpected of an enterprise that in itself works amazingly well employing just these concepts.

Explanations in Science

And it gets worse. Inquiries into the nature of scientific explanation reveal further problems. The standard account is based on Carl Hempel's and Paul Oppenheim's formalisation of scientific inquiry in natural language. Two basic schemes are identified: deductive-nomological and inductive-statistical explanations. The idea is to show that what is being explained (the explanandum) is to be expected on the grounds of these two types of explanations. The first tries to explain things deductively in terms of regularities and exact laws (nomological). The second uses statistical hypotheses and explains individual observations inductively. Albeit very formal, this inquiry into scientific inquiry is very straightforward and commonsensical. Again, the programme fails:

• it can't explain singular causal events,
• it is asymmetric (a change in the air pressure explains the readings on a barometer; the barometer, however, doesn't explain why the air pressure changed),
• many explanations are irrelevant,
• as seen before, inductive and deductive logic is controversial,
• how to employ probability theory in the explanation?

So what next?
What are the consequences of these unexpected and spectacular failings of the simplest premises one would wish science to be grounded on (logic, empiricism, causality, common sense, rationality, …)? The discussion is ongoing and isn't expected to be resolved soon. See part III.

a philosophy of science primer - part I

Naively one would expect science to adhere to two basic notions:

• common sense, i.e., rationalism,
• observation and experiments, i.e., empiricism.

Interestingly, both concepts turn out to be very problematic if applied to the question of what knowledge is and how it is acquired. In essence, they cannot be seen as a foundation for science. But first a little history of science…

Classical Antiquity

The Greek philosopher Aristotle was one of the first thinkers to introduce logic as a means of reasoning. His empirical method was driven by gaining general insights from isolated observations. He had a huge influence on the thinking within the Islamic and Jewish traditions, next to shaping Western philosophy and inspiring thinking in the physical sciences.

Modern Era

Nearly two thousand years later, not much had changed. Francis Bacon (the philosopher, not the painter) made modifications to Aristotle's ideas, introducing the so-called scientific method, where inductive reasoning plays an important role. He paved the way for a modern understanding of scientific inquiry. At approximately the same time, Robert Boyle was instrumental in establishing experiments as the cornerstone of the physical sciences.

Logical Empiricism

So far so good. By the early 20th century, the notion that science is based on experience (empiricism) and logic, and that knowledge is intersubjectively testable, had a long history. The philosophical school of logical empiricism (or logical positivism) tried to formalise these ideas. Notable proponents were Ernst Mach, Ludwig Wittgenstein, Bertrand Russell, Rudolf Carnap, Hans Reichenbach and Otto Neurath. Some main influences were:

• David Hume's and John Locke's empiricism: all knowledge originates from observation; nothing can exist in the mind which wasn't before in the senses,
• Auguste Comte's and John Stuart Mill's positivism: there exists no knowledge outside of science.

In this paradigm (see Thomas Kuhn a little later) science is viewed as a building comprised of logical terms resting on an empirical foundation. A theory is understood as having the following structure: observation -> empirical concepts -> formal notions -> abstract law. Basically a sequence of ever higher abstraction. This notion of unveiling laws of nature by starting with individual observations is called induction (the other way round, starting with abstract laws and ending with a tangible factual description, is called deduction; see further along). And here the problems start to emerge. See part II.

Stochastic Processes and the History of Science: From Planck to Einstein

How are the notions of randomness, i.e., stochastic processes, linked to theories in physics, and what have they got to do with options pricing in economics? How did the prevailing world view change from 1900 to 1905? What connects the mathematicians Bachelier, Markov, Kolmogorov and Ito to the physicists Langevin, Fokker, Planck and Einstein, and to the economists Black, Scholes and Merton?
The Setting

• Science up to 1900 was in essence the study of solutions of differential equations (Newton's heritage);
• It was very successful, e.g., Maxwell's equations: four differential equations describing everything about (classical) electromagnetism;
• Prevailing world view:
  • a deterministic universe;
  • initial conditions plus the solution of a differential equation yield a certain prediction of the future.

Three Pillars

By the end of the 20th century, it became clear that additional aspects are needed for a more complete understanding of reality:

• Inherent randomness: statistical evaluations of sets of outcomes of single observations/experiments;
  • quantum mechanics (Planck 1900; Einstein 1905) contains a fundamental element of randomness;
• In chaos theory (e.g., Mandelbrot 1963) non-linear dynamics leads to a sensitivity to initial conditions which renders even simple differential equations essentially unpredictable;
• Complex systems (e.g., Wolfram 1983), i.e., self-organization and emergent behavior, are best understood as outcomes of simple rules.

Stochastic Processes

• Systems which evolve probabilistically in time;
• Described by a time-dependent random variable;
• The probability density function describes the distribution of the measurements at time t;
• Prototype: the Markov process.

For a Markov process, only the present state of the system influences its future evolution: there is no long-term memory. Examples:

• Wiener process, or Einstein-Wiener process, or Brownian motion:
  • introduced by Bachelier in 1900;
  • continuous (in t and in the sample path);
  • increments are independent and drawn from a Gaussian normal distribution;
• Random walk:
  • discrete steps (jumps), continuous in t;
  • is a Wiener process in the limit of the step size going to zero.

To summarize, there are three possible characteristics:

1. Jumps (in the sample path);
2. Drift (of the probability density function);
3. Diffusion (widening of the probability density function).

[Figure: probability density function with drift and diffusion]

But how to deal with stochastic processes?

The Micro View

Einstein:
• Presented a theory of Brownian motion in 1905;
• New paradigm: stochastic modeling of natural phenomena; statistics as an intrinsic part of the time evolution of a system;
• Mean-square displacement of a Brownian particle is proportional to time;
• The equation for the Brownian particle is similar to a diffusion (differential) equation.

Langevin:
• Presented a new derivation of Einstein's results in 1908;
• First stochastic differential equation, i.e., a differential equation featuring a "rapidly and irregularly fluctuating random force" (today called a random variable);
• Solutions of the differential equation are random functions.

However, there was no formal mathematical grounding until 1942, when Ito developed stochastic calculus:

• Langevin's equations are interpreted as Ito stochastic differential equations using Ito integrals;
• The Ito integral is defined to deal with the non-differentiable sample paths of random functions;
• The Ito lemma (a generalized integration rule) is used to solve stochastic differential equations.

• The Markov process is a solution to a simple stochastic differential equation;
• The celebrated Black-Scholes option pricing formula rests on a stochastic differential equation employing Brownian motion.
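None of these old posts contained code, but the micro view is easy to make concrete. Below is a minimal Python sketch (assuming NumPy is available; all parameter values are illustrative and not from the original post): it integrates the simplest Langevin-type stochastic differential equation, dX = μ dt + σ dW, with the Euler-Maruyama scheme for a whole ensemble of particles, and checks that the ensemble statistics show exactly the drift and diffusion of the probability density function described above.

```python
# Euler-Maruyama integration of dX = mu*dt + sigma*dW for an ensemble of
# "stochastic particles"; mu and sigma are illustrative values.
import numpy as np

rng = np.random.default_rng(42)

mu, sigma = 0.5, 1.0       # drift and diffusion coefficients
T, n_steps = 1.0, 500      # time horizon and number of time steps
n_paths = 50_000           # size of the ensemble
dt = T / n_steps

x = np.zeros(n_paths)      # all particles start at the origin
for _ in range(n_steps):
    # independent Gaussian increments with variance dt (Wiener process)
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)
    x += mu * dt + sigma * dW

# The ensemble density drifts with mean mu*T and diffuses with variance
# sigma^2*T -- a Gaussian, as predicted for this SDE.
print("empirical mean:", x.mean(), " expected:", mu * T)
print("empirical var: ", x.var(),  " expected:", sigma**2 * T)
```

With μ = 0 this reproduces Einstein's result that the mean-square displacement of a Brownian particle grows proportionally to time; the evolution of the whole ensemble density is precisely what the Fokker-Planck equation of the next section governs.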
The Fokker-Planck Equation: Moving To The Macro View

• The Langevin equation describes the evolution of the position of a single "stochastic particle";
• The Fokker-Planck equation describes the behavior of a large population of "stochastic particles";
• Formally: the Fokker-Planck equation gives the time evolution of the probability density function of the system as a function of time;
• Results can often be derived more directly using the Fokker-Planck equation than using the corresponding stochastic differential equation;
• The theory of Markov processes can be developed from this macro point of view.

The Historical Context

Bachelier:
• Developed a theory of Brownian motion (the Einstein-Wiener process) in 1900 (five years before Einstein, and long before Wiener);
• Was the first person to use a stochastic process to model financial systems;
• Essentially his contribution was forgotten until the late 1950s;
• Black, Scholes and Merton's publication in 1973 finally gave Brownian motion its breakthrough in finance.

Planck:
• Founder of quantum theory;
• 1900 theory of black-body radiation;
• Central assumption: electromagnetic energy is quantized, E = hν;
• In 1914 Fokker derives an equation on Brownian motion which Planck proves;
• Applies the Fokker-Planck equation as a quantum mechanical equation, which turns out to be wrong;
• In 1931 Kolmogorov presented two fundamental equations on Markov processes;
• It was later realized that one of them was actually equivalent to the Fokker-Planck equation.

Einstein's 1905 "Annus Mirabilis" publications. Fundamental paradigm shifts in the understanding of reality:

• Photoelectric effect:
  • explained by giving Planck's (theoretical) notion of energy quanta a physical reality (photons),
  • further establishing quantum theory,
  • winning him the Nobel Prize;
• Brownian motion:
  • first stochastic modeling of natural phenomena,
  • the experimental verification of the theory established the existence of atoms, which had been heavily debated at the time,
  • Einstein's most frequently cited paper, in the fields of biology, chemistry, earth and environmental sciences, life sciences and engineering;
• Special theory of relativity: the relative speeds of the observers' reference frames determine the passage of time;
• Equivalence of energy and mass (follows from special relativity): E = mc^2.

Einstein was working at the Patent Office in Bern at the time and submitted his Ph.D. to the University of Zurich in July 1905.

Later Work:

• 1915: general theory of relativity, explaining gravity in terms of the geometry (curvature) of space-time;
• Planck also made contributions to general relativity;
• Although having helped found quantum mechanics, Einstein fundamentally opposed its probabilistic implications: "God does not throw dice";
• Dreams of a unified field theory:
  • spent his last 30 years or so trying (unsuccessfully) to extend the general theory of relativity to unite it with electromagnetism;
  • Kaluza and Klein elegantly managed to do this in 1921 by developing general relativity in five space-time dimensions;
  • today there is still no empirically validated theory able to explain gravity and the (quantum) Standard Model of particle physics, despite intense theoretical research (string/M-theory, loop quantum gravity);
  • in fact, one of the main goals of the LHC at CERN (officially operational on the 21st of October 2008) is to find hints of such a unified theory (supersymmetric particles, higher dimensions of space).

laws of nature

What are Laws of Nature?
• Regularities/structures in a highly complex universe
• Allow for predictions
• Dependent on only a small set of conditions (i.e., independent of the very many conditions which could possibly have an effect)

…but why are there laws of nature, and how can these laws be discovered and understood by the human mind? No One Knows!

• G. W. von Leibniz in 1714 (Principes de la nature et de la grâce): "Why is there something rather than nothing? For nothingness is simpler and easier than anything"
• E. Wigner, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", 1960: "The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it."

In a Nutshell

• We happen to live in a structured, self-organizing and fine-tuned universe that allows the emergence of sentient beings (anthropic principle)
• The human mind is capable of devising formal thought systems (mathematics)
• Mathematical models are able to capture and represent the workings of the universe

See also this post: in a nutshell.

The Fundamental Level of Reality: Physics

Mathematical models of reality are independent of their formal representation: invariance and symmetry

• Classical mechanics: invariance of the equations under transformations (e.g., time => conservation of energy)
• Gravitation (general relativity): geometry and the independence of the coordinate system (covariance)
• The other three forces of nature (unified in quantum field theory): the mathematics of symmetry and a special kind of invariance

See also these posts: fundamental, invariant thinking.

Towards Complexity

• Physics was extremely successful in describing the inanimate world in the last 300 years or so
• But what about complex systems comprised of many interacting entities, e.g., the life and social sciences?
• "The rest is chemistry"; C. D. Anderson in 1932, echoing the success of a reductionist approach to understanding the workings of nature after having discovered the positron
• "At each stage [of complexity] entirely new laws, concepts, and generalizations are necessary […]. Psychology is not applied biology, nor is biology applied chemistry"; P. W. Anderson in 1972, pointing out that knowledge about the constituents of a system doesn't reveal any insights into how the system will behave as a whole; so it is not at all clear how you get from quarks and leptons via DNA to a human brain…

Complex Systems: Simplicity

The Limits of Physics

• Closed-form solutions to analytical expressions are mostly only attainable if non-linear effects (e.g., friction) are ignored
• Not too many interacting entities can be considered (e.g., the three body problem)

The Complexity of Simple Rules

• S. Wolfram's cellular automaton rule 110: neither completely random nor completely repetitive (a minimal implementation is sketched below)
• "[The] results [simple rules give rise to complex behavior] were so surprising and dramatic that as I gradually came to understand them, they forced me to change my whole view of science […]"; S. Wolfram reminiscing on his early work on cellular automata in the 80s ("A New Kind of Science", pg. 19)
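Rule 110 is simple enough to fit in a few lines. Here is a minimal, illustrative Python sketch (not from the original post): each cell's next state is looked up from its three-cell neighborhood, with the eight possible neighborhoods mapped to the bits of the number 110.

```python
# Elementary cellular automaton rule 110: the 8 neighborhoods (000..111)
# index the bits of 110 (binary 01101110).
RULE = 110
rule_bits = [(RULE >> i) & 1 for i in range(8)]  # index = neighborhood value 0..7

def step(cells):
    n = len(cells)
    return [
        rule_bits[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
        for i in range(n)
    ]

# Start from a single live cell and print a few generations.
cells = [0] * 31
cells[-1] = 1
for _ in range(20):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Even from this trivial initial condition, the printed pattern is neither purely repetitive nor purely random, which is exactly Wolfram's point.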
Complex Systems: The Paradigm Shift

• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior
• The shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers (only this bottom-up approach to simulating complex systems has been fruitful; all top-down efforts have failed: try programming swarming behavior, ant foraging, pedestrian/traffic dynamics, … not using simple local interaction rules but with a centralized, hierarchical setup!)
• Understanding the complex system as a network of interactions (graph theory), where the complexity (or structure) of the individual nodes can be ignored
• Challenge: how does the macro behavior emerge from the interaction of the system elements on the micro level?

See also these posts: complex, swarm theory, complex networks.

Laws of Nature Revisited

So are there laws of nature to be found in the life and social sciences?

• Yes: scaling (or power) laws
• "Complex, collective phenomena give rise to power laws […] independent of the microscopic details of the phenomenon. These power laws emerge from collective action and transcend individual specificities. As such, they are unforgeable signatures of a collective mechanism"; J.P. Bouchaud in "Power-laws in Economy and Finance: Some Ideas from Physics", 2001

Scaling Laws

Scaling-law relations characterize an immense number of natural patterns (from physics, biology, earth and planetary sciences, economics and finance, computer science and demography to the social sciences), prominently in the form of

• scaling-law distributions,
• scale-free networks,
• cumulative relations of stochastic processes:

f(x) = a x^k   <=>   Y = (X/C)^E

Scaling laws

• lack a preferred scale, reflecting their (self-similar) fractal nature,
• are usually valid across an enormous dynamic range (sometimes many orders of magnitude); see the numerical sketch at the end of this post.

See also these posts: scaling laws, benford's law.

Scaling Laws In FX

• Event counts related to price thresholds
• Price moves related to time thresholds
• Price moves related to price thresholds
• Waiting times related to price thresholds

[Figure: an FX scaling law]

Scaling Laws In Biology

So-called allometric laws describe the relationship between two attributes of living organisms as scaling laws:

• The metabolic rate B of a species is proportional to its mass M: B ~ M^(3/4)
• The heartbeat (or breathing) rate T of a species is proportional to its mass: T ~ M^(-1/4)
• The lifespan L of a species is proportional to its mass: L ~ M^(1/4)
• Invariants: all species have the same number of heart beats in their lifespan (roughly one billion)

[Figure: an allometric law (G. West)]

G. West (et al.) proposes an explanation of the 1/4 scaling exponents, which follow from underlying principles embedded in the dynamical and geometrical structure of space-filling, fractal-like, hierarchical branching networks, presumed optimized by natural selection: organisms effectively function in four spatial dimensions even though they physically exist in three.

To summarize:

• The natural world possesses structure-forming and self-organizing mechanisms leading to consciousness capable of devising formal thought systems which mirror the workings of the natural world
• There are two regimes in the natural world: basic fundamental processes and complex systems comprised of interacting agents
• There are two paradigms: analytical vs. algorithmic (computational)
• There are 'miracles' at work:
  • the existence of a universe following laws leading to stable emergent features,
  • the capability of the human mind to devise formal thought systems,
  • the overlap of mathematics and the workings of nature,
  • the fact that complexity emerges from simple rules
• There are basic laws of nature to be found in complex systems, e.g., scaling laws
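As the numerical sketch promised above: a scaling law f(x) = a x^k becomes a straight line under a logarithmic mapping, log f(x) = log a + k log x, so the exponent can be read off as a slope. A minimal Python illustration (assuming NumPy; the data are synthetic and the values purely illustrative):

```python
# Recover the exponent of a noisy scaling law from a log-log straight-line fit.
import numpy as np

rng = np.random.default_rng(0)

a, k = 2.0, -1.5                       # illustrative amplitude and exponent
x = np.logspace(0, 4, 50)              # four orders of magnitude
f = a * x**k * rng.lognormal(0.0, 0.1, size=x.size)  # noisy "measurements"

slope, intercept = np.polyfit(np.log10(x), np.log10(f), 1)
print("fitted exponent: ", slope)          # ~ -1.5
print("fitted amplitude:", 10**intercept)  # ~ 2.0

# Scale invariance: rescaling the argument only rescales the function,
# f(c*x) = c^k * f(x) -- the shape (the exponent) is preserved.
```

The same recipe, applied to mass versus metabolic rate across species, would yield the 3/4 exponent of the allometric law above.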
animal intelligence

We're glimpsing intelligence throughout the animal kingdom. (Photograph copyright Vincent J. Musi, National Geographic.)

A dog with a vocabulary of 340 words. A parrot that answers "shape" if asked what is different, and "color" if asked what is the same, while being shown two items of different shape and the same color. An octopus with a "distinct personality" that amuses itself by shooting water at plastic-bottle targets (the first reported invertebrate play behavior). Lemurs with calculatory abilities. Sheep able to recognize faces (of other sheep and humans) long term and that can discern moods. Crows able to make and use tools (in tests, even out of materials never seen before). Human-dolphin communication via an invented sign language (with simple grammar). Dolphins' ability to correctly interpret, on the first occasion, instructions given by a person displayed on a TV screen. This may only be the tip of the iceberg…

Read the article Animal Minds in National Geographic's March 2008 edition. Ever think about vegetarianism?

complex networks

The study of complex networks was sparked at the end of the 90s with two seminal papers, describing their universal

• small-worlds property [1],
• and scale-free nature [2] (see also this older post: scaling laws).

[Figure: the same network visualized as a weighted and as an unweighted network]

Today, networks are ubiquitous: phenomena in the physical world (e.g., computer networks, transportation networks, power grids, the spontaneous synchronization of systems of lasers), biological systems (e.g., neural networks, epidemiology, food webs, gene regulation) and social realms (e.g., trade networks, diffusion of innovation, trust networks, research collaborations, social affiliation) are best understood if characterized as networks. The explosion of this field of research was and is coupled with the increasing availability of

• huge amounts of data, pouring in from neurobiology, genomics, ecology, finance and the World-Wide Web, …,
• computing power and storage facilities.

The new paradigm states that a complex system is best understood if it is mapped to a network. I.e., the links represent some kind of interaction and the nodes are stripped of any intrinsic quality. So, as an example, you can forget about the complexity of the individual bird if you model the flock's swarming behavior. (See these older posts: complex, fundamental, swarm theory, in a nutshell.)

Only in the last years has the attention shifted from this topological level of analysis (either links are present or not) to incorporate weights of links, giving their strengths relative to each other. Albeit harder to tackle, these networks are closer to the real-world systems they are modeling. However, there is still one step missing: the vertices of the network can also be assigned a value, which acts as a proxy for some real-world property that is coded into the network structure. The two plots above illustrate the difference if the same network is visualized [3] using weights and values assigned to the vertices (left) or simply plotted as a binary (topological) network (right)… A scale-free example generated in code follows below.

[1] Watts D. J. and Strogatz S. H., 1998, Collective Dynamics of 'Small-World' Networks, Nature, 393, 440–442.
[2] Barabasi A.-L. and Albert R., 1999, Emergence of Scaling in Random Networks, Science, 286, 509–512.
[3] Cuttlefish Adaptive NetWorkbench and Layout
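The scale-free nature described in [2] can be reproduced in a few lines. A minimal sketch, assuming the networkx Python library is available (the original post used the Java-based JUNG/Cuttlefish tools instead): grow a Barabási-Albert preferential-attachment network and inspect its degree distribution.

```python
# Grow a scale-free network by preferential attachment [2] and look at the
# degree distribution; n and m are illustrative values.
import collections
import networkx as nx

G = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)  # 10k nodes, 2 links per new node

degrees = [d for _, d in G.degree()]
counts = collections.Counter(degrees)

# A few hubs with very high degree coexist with a majority of weakly
# connected nodes -- the "scale-free" signature (no bell curve here).
print("max degree:", max(degrees), " mean degree:", sum(degrees) / len(degrees))
for d in sorted(counts)[:5]:
    print(f"degree {d}: {counts[d]} nodes")
```

Edge weights, as discussed above, could then be attached as edge attributes (e.g., G[u][v]['weight'] = …), moving from the binary picture to the weighted one.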
cool links…

think statistics are boring, irrelevant and hard to understand? well, think again. two examples of visually displaying important information in an amazingly cool way:

territory size shows the proportion of all people living on less than or equal to US$1 in purchasing power parity a day. displays a large collection of world maps, where territories are re-sized on each map according to the subject of interest. sometimes an image says more than a thousand words…

want to see the global evolution of life expectancy vs. income per capita from 1975 to 2003? and additionally display the CO2 emission per capita? choose indicators from areas as diverse as internet users per 1,000 people to contraceptive use amongst adult women and watch the animation. gapminder is a fantastic tool that really makes you think…

work in progress…

Some of the stuff I do all week…

Complex Networks

Visualizing a shareholder network: the underlying network visualization framework is JUNG, with the Cuttlefish adaptive networkbench and layout algorithm (coming soon). The GUI uses Swing.

Stochastic Time Series

Scaling laws in financial time series: a Java framework allowing the computation and visualization of statistical properties. The GUI is programmed using SWT.

plugin of the month

The Firefox add-on Gspace allows you to use Gmail as a file server.

tech dependence…

Because technological advancement is mostly quite gradual, one hardly notices it creeping into one's life. Only if you were to instantly remove these high-tech commodities would you realize how dependent one has become. A random list of 'nonphysical' things I wouldn't want to live without anymore:

• everything you ever wanted to know — and much more (e.g., news, scholar, maps, webmaster tools, …): basically the internet;-)
• Web 2.0 communities: your virtual social network
• towards the babel fish
• recommendations from the fat tail of the probability distribution
• Web browsers (e.g., Firefox): your window to the world
• Version control systems (e.g., Subversion): get organized
• CMS (e.g., TYPO3): disentangle content from design on your web page and more
• LaTeX typesetting software (btw, this is not a fetish;-): the only sensible and aesthetic way to write scientific documents
• Wikis: the wonderful world of unstructured collaboration
• Blogs: get it out there
• Java programming language: truly platform independent and with nice GUI toolkits (SWT, Swing, GWT); never want to go back to C++ (and don't even mention C# or .net)
• Eclipse IDE: how much fun can you have while programming?
• MySQL: your very own relational database (the next level: db4o)
• PHP: ok, Ruby is perhaps cooler, but PHP is so easy to work with (e.g., integrating MySQL and web stuff)
• Dynamic DNS: let your home computer be a node of the internet
• Web servers (e.g., Apache 2): open the gateway
• CSS: ok, if we have to go with HTML, this helps a lot
• VoIP (e.g., Skype): use your bandwidth
• P2P (e.g., BitTorrent): pool your network
• Video and audio compression (e.g., MPEG, MP3, AAC, …): information theory at its best
• Scientific computing (R, Octave, gnuplot, …): let your computer do the work
• Open source licenses (Creative Commons, Apache, GNU GPL, …): the philosophy!
• Object-oriented programming paradigm: think design patterns
• Rich text editors: online WYSIWYG editing, no messing around with HTML tags
• SSH network protocol: secure and easy networking
• Linux shell programming ("grep", "sed", "awk", "xargs", pipes, …): old school Unix from the 70s
• E-mail (e.g., IMAP): oops, nearly forgot that one (which reminds me of something I really, really could do without: spam)
• Graylisting: reduce spam
• Debian (e.g., Kubuntu): the basis for it all
• apt-get package management system: a universe of software at your fingertips
• Compiz Fusion window manager: just to be cool…

It truly makes one wonder how all this cool stuff can come for free!!!

climate change 2007

Confused about the climate? Not sure what's happening? Exaggerated fears or impending cataclysm? A good place to start is a publication by Swiss Re. It is done in a straightforward, down-to-earth, no-bullshit and sane manner. The source of the whole document is given at the bottom.

Executive Summary

The Earth is getting warmer, and it is a widely held view in the scientific community that much of the recent warming is due to human activity. As the Earth warms, the net effect of unabated climate change will ultimately lower incomes and reduce public welfare. Because carbon dioxide (CO₂) emissions build up slowly, mitigation costs rise as time passes and the level of CO₂ in the atmosphere increases. As these costs rise, so too do the benefits of reducing CO₂ emissions, eventually yielding net positive returns. Given how CO₂ builds up and remains in the atmosphere, early mitigation efforts are highly likely to put the global economy on a path to achieving net positive benefits sooner rather than later. Hence, the time to act to reduce these emissions is now.

The climate is what economists call a "public good": its benefits are available to everyone and one person's enjoyment and use of it does not affect another's. Population growth, increased economic activity and the burning of fossil fuels now pose a threat to the climate. The environment is a free resource, vulnerable to overuse, and human activity is now causing it to change. However, no single entity is responsible for it or owns it. This is referred to as the "tragedy of the commons": everyone uses it free of charge and eventually depletes or damages it. This is why government intervention is necessary to protect our climate.

Climate is global: emissions in one part of the world have global repercussions. This makes an international government response necessary. Clearly, this will not be easy. The Kyoto Protocol for reducing CO₂ emissions has had some success, but was not considered sufficiently fair to be signed by the United States, the country with the highest volume of CO₂ emissions. Other voluntary agreements, such as the Asia-Pacific Partnership on Clean Development and Climate – which was signed by the US – are encouraging, but not binding. Thus, it is essential that governments implement national and international mandatory policies to effectively reduce carbon emissions in order to ensure the well-being of future generations.

The pace, extent and effects of climate change are not known with certainty. In fact, uncertainty complicates much of the discussion about climate change. Not only is the pace of future economic growth uncertain, but also the carbon dioxide equivalent (CO₂e) emissions associated with economic growth.
Furthermore, the global warming caused by a given quantity of CO₂e emissions is also uncertain, as are the costs and impact of temperature increases. Though uncertainty is a key feature of climate change and its impact on the global economy, this cannot be an excuse for inaction. The distribution and probability of the future outcomes of climate change are heavily weighted towards large losses in global welfare. The likelihood of positive future outcomes is minor and heavily dependent upon an assumed maximum climate change of 2 °C above the pre-industrial average. The probability that a "business as usual" scenario – one with no new emission-mitigation policies – will contain global warming at 2 °C is generally considered negligible. Hence, the "precautionary principle" – erring on the safe side in the face of uncertainty – dictates an immediate and vigorous global mitigation strategy for reducing CO₂e emissions.

There are two major types of mitigation strategies for reducing greenhouse gas emissions: a cap-and-trade system and a tax system. The cap-and-trade system establishes a quantity target, or cap, on emissions and allows emission allocations to be traded between companies, industries and countries. A tax on, for example, carbon emissions could also be imposed, forcing companies to internalize the cost of their emissions to the global climate and economy. Over time, quantity targets and carbon taxes would need to become increasingly restrictive as targets fall and taxes rise.

Though both systems have their own merits, the cap-and-trade policy has an edge over the carbon tax, given the uncertainty about the costs and benefits of reducing emissions. First, cap-and-trade policies rely on market mechanisms – fluctuating prices for traded emissions – to induce appropriate mitigating strategies, and have proved effective at reducing other types of noxious gases. Second, caps have an economic advantage over taxes when a given level of emissions is required. There is substantial evidence that emissions need to be capped to restrict global warming to 2 °C above pre-industrial levels, or a little more than 1 °C compared to today. Given that the stabilization of emissions at current levels will most likely result in another degree rise in temperature, and that current economic growth is increasing emissions, the precautionary principle supports a cap-and-trade policy. Finally, cap-and-trade policies are more politically feasible and palatable than carbon taxes. They are more widely used and understood and they do not require a tax increase. They can be implemented with as much or as little revenue-generating capacity as desired. They also offer business and consumers a great deal of choice and flexibility. A cap-and-trade policy should be easier to adopt in a wide variety of political environments and countries.

Whichever system – cap-and-trade or carbon tax – is adopted, there are distributional issues that must be addressed. Under a quantity target, allocation permits have value and can be granted to businesses or auctioned. A carbon tax would raise revenues that could be recycled, for example, into research on energy-efficient technologies. Or the revenues could be used to offset inefficient taxes or to reduce the distributional aspects of the carbon tax.

Source: "The economic justification for imposing restraints on carbon emissions", Swiss Re, Insights, 2007; PDF

scaling laws

Scaling-law relations appear prominently in three forms:

1. scaling-law distributions,
2. scale-free networks,
3. cumulative relations of stochastic processes.
For a scaling law f(x) = a x^k:

• a logarithmic mapping yields a linear relationship,
• scaling the function's argument x preserves the shape of the function f(x), called scale invariance.

See (Sornette, 2006).

Scaling-Law Distributions

• the sales of music, books and other commodities,
• the population of cities,
• the income of people,
• the areas burnt in forest fires, …

Scale-Free Networks

Scale-free networks are networks whose degree distributions follow a scaling law.

Cumulative Scaling-Law Relations

See also this post: laws of nature.

swarm theory

National Geographic's July 2007 edition: Swarm Theory

benford's law

In 1881 a result was published (by the astronomer Simon Newcomb), based on the observation that the first pages of logarithm books, used at that time to perform calculations, were much more worn than the other pages. The conclusion was that computations involving numbers starting with 1 were performed more often than others: if d denotes the first digit of a number, the probability of its appearance is equal to log(1 + 1/d). The phenomenon was rediscovered in 1938 by the physicist F. Benford, who confirmed the "law" for a large number of random variables drawn from geographical, biological, physical, demographical, economical and sociological data sets. It even holds for randomly compiled numbers from newspaper articles.

Specifically, Benford's law, or the first-digit law, states that a number has first digit 1 with 30.1%, 2 with 17.6%, 3 with 12.5%, 4 with 9.7%, 5 with 7.9%, 6 with 6.7%, 7 with 5.8%, 8 with 5.1% and 9 with 4.6% probability. In general, the leading digit d ∈ {1, …, b−1} in base b ≥ 2 occurs with probability log_b(d + 1) − log_b(d) = log_b(1 + 1/d).

First explanations of this phenomenon, which appears to suspend the notions of probability, focused on its logarithmic nature, which implies a scale-invariant or power-law distribution. If the first digits have a particular distribution, it must be independent of the measuring system, i.e., conversions from one system to another don't affect the distribution. (This requirement that physical quantities are independent of a chosen representation is one of the cornerstones of general relativity, where it is called covariance.) So the common sense requirement that the dimensions of arbitrary measurement systems shouldn't affect the measured physical quantities is summarized in Benford's law. In addition, the fact that many processes in nature show exponential growth is also captured by the law, which assumes that the logarithms of numbers are uniformly distributed.

So how come one observes random variables following normal and scaling-law distributions? In 1996 the phenomenon was mathematically rigorously proven: if one repeatedly chooses different probability distributions and then randomly chooses a number according to each distribution, the resulting list of numbers will obey Benford's law. Hence the law reflects the behavior of distributions of distributions; a numerical check follows below. Benford's law has been used to detect fraud in insurance, accounting or expenses data, where people forging numbers tend to distribute their digits uniformly.
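The "distributions of distributions" result lends itself to the promised numerical check. A minimal Python sketch (assuming NumPy; the choice of distribution family and the parameter ranges are illustrative): repeatedly pick a random distribution, draw one number from it, and tally the first digits against log10(1 + 1/d).

```python
# Approximate check of Benford's law for a mixture of distributions:
# each sample is drawn from an exponential whose scale is itself random
# (log-uniform over several orders of magnitude -- an illustrative choice).
import numpy as np

rng = np.random.default_rng(7)

def first_digit(v):
    v = abs(v)
    while v >= 10:
        v /= 10
    while 0 < v < 1:
        v *= 10
    return int(v)

samples = []
for _ in range(100_000):
    scale = 10 ** rng.uniform(0, 6)        # a randomly chosen distribution...
    samples.append(rng.exponential(scale))  # ...and one draw from it

digits = [first_digit(s) for s in samples if s > 0]
for d in range(1, 10):
    observed = digits.count(d) / len(digits)
    print(f"digit {d}: observed {observed:.3f}  Benford {np.log10(1 + 1/d):.3f}")
```

The observed frequencies land close to the 30.1%, 17.6%, … sequence quoted above; drawing all samples from one fixed distribution instead would generally not.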
There is an interesting observation, or conjecture, to be made from the Metaphysics Map in the post what can we know?, concerning the nature of infinity.

The Finite

Many observations reveal a finite nature of reality:

• Energy comes in finite parcels (quantum mechanics)
• The knowledge one can have about quanta is a fixed value (uncertainty)
• Energy is conserved in the universe
• The speed of light has the same constant value for all observers (special relativity)
• The age of the universe is finite
• Information is finite and hence can be coded into a binary language

Newer and more radical theories propose:

• Space comes in finite parcels
• Time comes in finite parcels
• The universe is spatially finite
• The maximum entropy in any given region of space is proportional to the region's surface area and not its volume (this leads to the holographic principle, stating that our three-dimensional universe is a projection of physical processes taking place on a two-dimensional surface surrounding it)

So finiteness appears to be an intrinsic feature of the Outer Reality box of the diagram. There is in fact a movement in physics subscribing to the finiteness of reality, called Digital Philosophy. Indeed, this finiteness postulate is a prerequisite for an even bolder statement, namely, that the universe is one gigantic computer (a Turing-complete cellular automaton), where reality (thought and existence) is equivalent to computation. As mentioned above, the self-organizing, structure-forming evolution of the universe can be seen to produce ever more complex modes of information processing (e.g., storing data in DNA, thoughts, computations, simulations and perhaps, in the near future, quantum computations). There is also an approach to quantum mechanics focussing on information, stating that an elementary quantum system carries (is?) one bit of information. This can be seen to lead to the notions of quantisation, uncertainty and entanglement.

The Infinite

It should be noted that zero is infinity in disguise. If one lets the denominator of a fraction go to infinity, the result is zero. Historically, zero was discovered in the 3rd century BC in India and was introduced to the Western world by Arabian scholars in the 10th century AD. As ordinary as zero appears to us today, the great Greek mathematicians didn't come up with such a concept. Indeed, infinity is something intimately related to formal thought systems (mathematics). Irrational numbers have an infinite number of digits. There are two measures of infinity: countability and uncountability. The former refers to infinite sequences such as 1, 2, 3, … Whereas for the latter measure, starting from 1.0 one can't even reach 1.1, because there are infinitely many numbers in the interval between 1.0 and 1.1. In geometry, points and lines are idealizations of dimension zero and one, respectively. So it appears as though infinity resides only in the Inner Reality box of the diagram.

The Interface

If it should be true that we live in a finite reality, with infinity residing only within the mind as a concept, then there should be some problems if one tries to model this finite reality with an infinity-harboring formalism. Perhaps this is indeed so. In chaos theory, the sensitivity to initial conditions (butterfly effect) can be viewed as the problem of measuring numbers: the measurement can only have a finite degree of accuracy, whereas the numbers have, in principle, infinitely many decimal places.
In quantum gravity (the, as yet, unsuccessful merger of quantum mechanics and gravity) many of the inherent problems of the formalism could be bypassed when a theory was proposed (string theory) that replaced (zero-dimensional) point particles with one-dimensionally extended objects. Later incarnations, called M-theory, allow for multidimensional objects. In the above-mentioned information-based view of quantum mechanics, the world appears quantised because the information retrieved by our minds about the world is inevitably quantised. So the puzzle deepens. Why do we discover the notion of infinity in our minds while all our experiences and observations of nature indicate finiteness?

medical studies

medical studies often contradict each other. results claiming to have "proven" some causal connection are confronted with results claiming to have "disproven" the link, or vice versa. this dilemma affects even reputable scientists publishing in leading medical journals. the topics are diverse:

• high-voltage power supply lines and leukemia [1],
• salt and high blood pressure [1],
• heart diseases and sport [1],
• stress and breast cancer [1],
• smoking and breast cancer [1],
• praying and higher chances of healing illnesses [1],
• the effectiveness of homeopathic remedies and natural medicine,
• vegetarian diets and health,
• low frequency electromagnetic fields and electromagnetic hypersensitivity [2],
…

basically, this is understood to happen for three reasons:

• i.) the bias towards publishing positive results,
• ii.) incompetence in applying statistics,
• iii.) simple fraud.

i.) publish or perish. in order to guarantee funding and secure the academic status quo, results are selected by their chance of being published. an independent analysis of the original data used in 100 published studies exposed that roughly half of them showed large discrepancies between the aims originally stated by the researchers and the reported findings, implying that the researchers simply skimmed the data for publishable material [3]. this proves fatal in combination with ii.), as every statistically significant result can occur (per definition) by chance in an arbitrary distribution of measured data. so if you only look long enough for arbitrary results in your data, you are bound to come up with something [1]. often, due to budget reasons, the numbers of test persons for clinical trials are simply too small to allow for statistical relevance. ref. [4] showed, next to other things, that the smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

ii.) statistical significance - often evaluated by some statistics software package - is taken as proof without considering the plausibility of the result. many statistically significant results turn out to be meaningless coincidences after accounting for the plausibility of the finding [1]. one study showed that one third of frequently cited results fail a later verification [1]. another study documented that roughly 20% of the authors publishing in the magazine "nature" didn't understand the statistical method they were employing [5].

iii.) a.) two thirds of the clinical biomedical research in the usa is supported by the industry - twice as much as in 1980 [1]. it was shown that in 1000 studies done in 2003, the nature of the funding correlated with the results: 80% of industry-financed studies had positive results, whereas only 50% of the independent research reported positive findings.
it could be argued that the industry has a natural propensity to identify effective and lucrative therapies. however, the authors show that many impressive results were only obtained because they were compared with weak alternative drugs or placebos. [6]

iii.) b.) quoted from:

"Andrew Wakefield (born 1956 in the United Kingdom) is a Canadian trained surgeon, best known as the lead author of a controversial 1998 research study, published in the Lancet, which reported bowel symptoms in a selected sample of twelve children with autistic spectrum disorders and other disabilities, and alleged a possible connection with MMR vaccination. Citing safety concerns, in a press conference held in conjunction with the release of the report Dr. Wakefield recommended separating the components of the injections by at least a year. The recommendation, along with widespread media coverage of Wakefield's claims, was responsible for a decrease in immunisation rates in the UK. The section of the paper setting out its conclusions, known in the Lancet as the "interpretation" (see the text below), was subsequently retracted by ten of the paper's thirteen authors. In February of 2004, controversy resurfaced when Wakefield was accused of a conflict of interest. The London Sunday Times reported that some of the parents of the 12 children in the Lancet study were recruited via a UK attorney preparing a lawsuit against MMR manufacturers, and that the Royal Free Hospital had received £55,000 from the UK's Legal Aid Board (now the Legal Services Commission) to pay for the research. Previously, in October 2003, the board had cut off public funding for the litigation against MMR manufacturers. Following an investigation of The Sunday Times allegations by the UK General Medical Council, Wakefield was charged with serious professional misconduct, including dishonesty, due to be heard by a disciplinary board in 2007. In December of 2006, the Sunday Times further reported that in addition to the money given to the Royal Free Hospital, Wakefield had also been personally paid £400,000 which had not been previously disclosed by the attorneys responsible for the MMR lawsuit."

wakefield had always only expressed his criticism of the combined triple vaccination, supporting single vaccinations spaced in time. the british tv station channel 4 exposed in 2004 that he had applied for patents for the single vaccines. wakefield dropped his subsequent slander action against the media company only in the beginning of 2007. as mentioned, he now awaits charges for professional misconduct. however, he has left britain and now works for a company in austin, texas. it has been uncovered that other employees of this us company had received payments from the same attorney preparing the original lawsuit. [7]

should we be surprised by all of this? next to the innate tendency of human beings to be incompetent and unscrupulous, there is perhaps another level that makes this whole endeavor special. the inability of scientists to conclusively and reproducibly uncover findings concerning human beings is maybe better appreciated if one considers the nature of the subject under study. life, after all, is an enigma, and the connection linking the mind to matter is elusive at best (i.e., the physical basis of consciousness). the body's capability to heal itself, i.e., the placebo effect and the resulting need for double-blind studies, is indeed very bizarre.
however, there are studies questioning if the effect exists at all;-)

taken from (consult also for the corresponding links to the sources cited below):

[1] This article in the magazine issued by the Neue Zürcher Zeitung, by Robert Matthews
[2] C. Schierz; Projekt NEMESIS; ETH Zürich; 2000
[3] A. Chan (Centre for Statistics in Medicine, Oxford) et al.; Journal of the American Medical Association; 2004
[4] J. Ioannidis; "Why Most Published Research Findings Are False"; University of Ioannina; 2005
[5] R. Matthews, E. García-Berthou and C. Alcaraz, as reported in this "Nature" article; 2005
[6] C. Gross (Yale University School of Medicine) et al.; "Scope and Impact of Financial Conflicts of Interest in Biomedical Research"; Journal of the American Medical Association; 2003
[7] H. Kaulen; "Wie ein Impfstoff zu Unrecht in Misskredit gebracht wurde" ("How a vaccine was wrongly discredited"); Deutsches Ärzteblatt; Jg. 104; Heft 4; 26 January 2007

in a nutshell

Science, put simply, can be understood as working on three levels:

• i.) analyzing the nature of the object being considered/observed,
• ii.) developing the formal representation of the object's features and its dynamics/interactions,
• iii.) devising methods for the empirical validation of the formal representations.

To be precise, level i.) lies more within the realm of philosophy (e.g., epistemology) and metaphysics (i.e., ontology), as notions of origin, existence and reality appear to transcend the objective and rational capabilities of thought. The main problem being: "Why is there something rather than nothing? For nothingness is simpler and easier than anything."; [1].

In the history of science the above-mentioned formulation made the understanding of at least three different levels of reality possible:

• a.) the fundamental level of the natural world,
• b.) inherently random phenomena,
• c.) complex systems.

While level a.) deals mainly with the quantum realm and cosmological structures, levels b.) and c.) are comprised mostly of biological, social and economic systems.

a.) Fundamental

Many natural sciences focus on a.i.) fundamental, isolated objects and interactions, use a.ii.) mathematical models which are a.iii.) verified (falsified) in experiments that check the predictions of the model - with great success: "The enormous usefulness of mathematics in the natural sciences is something bordering on the mysterious. There is no rational explanation for it."; [2].

b.) Random

Often the nature of the object b.i.) being analyzed is in principle unknown. Only statistical evaluations of sets of outcomes of single observations/experiments can be used to estimate b.ii.) the underlying model, and b.iii.) test it against more empirical data. This is often the approach taken in the fields of the social sciences, medicine and business.

c.) Complex

Moving to c.i.) complex, dynamical systems, and c.ii.) employing computer simulations as a template for the dynamical process, unlocks a new level of reality: mainly the complex and interacting world we experience at our macroscopic length scales in the universe. Here two new paradigms emerge:

• the shift from mathematical (analytical) models to algorithmic computations and simulations performed in computers,
• simple rules giving rise to complex behavior: "And I realized, that I had seen a sign of a quite remarkable and unexpected phenomenon: that even from very simple programs behavior of great complexity could emerge."; [3].

However, things are not as clear anymore.
What is the exact methodology, how does it relate to underlying concepts of ontology and epistemology, and what is the nature of these computations per se? Or, within the formulation given above, i.e., iii.c.), what is the “reality” of these models: what do the local rules determining the dynamics in the simulation have to say about the reality of the system c.i.) they are trying to emulate?

There are many coincidences that enabled the structured reality we experience on this planet to have evolved: exact values of fundamental constants (initial conditions), emerging structure-forming and self-organizing processes, the possibility of (organic) matter to store information (after being synthesized in supernovae!), the right conditions on earth for harboring life, the emergent possibilities of neural networks to establish consciousness and sentience above a certain threshold, …

Interestingly, there are also many circumstances that allow the observable world to be understood by the human mind:
• the mystery allowing formal thought systems to map to patterns in the real world,
• the development of the technology allowing for the design and realization of microprocessors,
• the bottom-up approach to complexity identifying a micro level of simple interactions of system elements.

So it appears that the human mind is intimately interwoven with the fabric of reality that produced it. But where is all this leading?

There exists a natural extension to science which fuses the notions from levels a.) to c.), namely
• information and information processing,
• formal mathematical models,
• statistics and randomness.

Notably, it comes from an engineering point of view, deals with quantum computers, and comes full circle back to level i.), the question about the nature of reality: “[It can be shown] that quantum computers can simulate any system that obeys the known laws of physics in a straightforward and efficient way. In fact, the universe is indistinguishable from a quantum computer.”; [4].

At first blush the idea of substituting reality with a computed simulation appears rather ad hoc, but in fact it does have potentially falsifiable notions:
• the discreteness of reality, i.e., the notion that continuity and infinity are not physical,
• the reality of the quantum realm should be contemplated from the point of view of information, i.e., the only relevant reality subatomic quanta manifest is that they register one bit of information: “Information is physical.”; [5].

[1] von Leibniz, G. W., “Principes de la nature et de la grâce”, 1714
[2] Wigner, E. P., “Symmetries and Reflections”, MIT Press, Cambridge, 1967
[3] Wolfram, S., “A New Kind of Science”, Wolfram Media, p. 19, 2002
[4] Lloyd, S., “Programming the Universe”, Random House, pp. 53-54, 2006
[5] Landauer, R., Nature, 335, 779-784, 1988

See also: “The Mathematical Universe” by M. Tegmark. Related posts: laws of nature.

what can we know?

Put bluntly, metaphysics asks simple albeit deep questions:
• Why do I exist?
• Why do I die?
• Why does the world exist?
• Where did everything come from?
• What is the nature of reality?
• What is the meaning of existence?
• Is there a creator or omnipotent being?

Although these questions may appear idle and futile, they seem to represent an innate longing for knowledge of the human mind. Indeed, children can and often do pose such questions, only to be faced with the resignation or impatience of adults.
To make things simpler and tractable, one can focus on the question “What can we know?”. When you wake up in the morning, you instantly become aware of your self, i.e., you experience an immaterial inner reality you can feel and probe with your thoughts. Upon opening your eyes, a structured material outer reality appears. These two insurmountable facts are enough to sketch a small metaphysical diagram.

Focussing on the outer reality or physical universe, there exists an underlying structure-forming and self-organizing process starting with an initial singularity or Big Bang (an extremely low entropy state, i.e., high order, giving rise to the arrow or direction of time). Due to the exact values of physical constants in our universe, this organizing process yields structures eventually giving birth to stars, which, at the end of their lifecycle, explode (supernovae), allowing for nuclear reactions to fuse heavy elements. One of these heavy elements brings with it novel bonding possibilities, resulting in a new pattern: organic matter. Within a couple of billion years, the structure-forming process gave rise to a plethora of living organisms. Although each organism would die after a short lifespan, the process of life as a whole continued to live in a sustainable equilibrium state and survived a couple of extinction events (some of which eradicated nearly 90% of all species).

The second law of thermodynamics states that the entropy of the universe is increasing, i.e., the universe is becoming an ever more unordered place. It would seem that the process of life creating stable and ordered structures violates this law. In fact, complex structures spontaneously appear where there is a steady flow of energy from a high-temperature source (the sun) through a system (the earth) to a low-temperature external sink (ultimately cold outer space). So pumping a system with energy leads it to a state far from thermodynamic equilibrium which is characterized by the emergence of ordered structures.

Viewed from an information processing perspective, the organizing process suddenly experienced a great leap forward. The brains of some organisms had reached a critical mass, allowing for another emergent behavior: consciousness.

The majority of people in industrialized nations take a rational and logical outlook on life. Although one might think this is an inevitable mode of awareness, it actually is a cultural imprinting, as there exist other civilizations putting far less emphasis on rationality. Perhaps the divide between Western and Eastern thinking illustrates this best. Whereas the former is locked in continuous interaction with the outer world, the latter focuses on the experience of an inner reality. A history of meditation techniques underlines this emphasis on the nonverbal experience of one's self. Thought is either totally avoided, or the mind is focused on repetitive activities, in effect deactivating it.

Recall from fundamental that there are two surprising facts to be found. On the one hand, the physical laws dictating the fundamental behavior of the universe can be mirrored by formal thought systems devised by the mind. And on the other hand, real complex behavior can be emulated by computer simulations following simple laws (the computers themselves are an example of technological advances made possible by the successful modelling of nature by formal thought systems). This conceptual map allows one to categorize a lot of stuff in a concise manner. Also, the interplay between the outer and inner realities becomes visible.
However, the above-mentioned questions remain unanswered. Indeed, more puzzles appear. So as usual, every advance in understanding just makes the question mark bigger… Continued here: infinity?

invariant thinking…

Arguably the most fruitful principle in physics has been the notion of symmetry. Covariance and gauge invariance - two simply stated symmetry conditions - are at the heart of general relativity and the standard model (of particle physics). This is not only aesthetically pleasing, it also illustrates a basic fact: in coding reality into a formal system, we should only allow the most minimal reference to be made to this formal system. I.e., reality likes to be translated into a language that doesn’t explicitly depend on its own peculiarities (coordinates, number bases, units, …). This is a pretty obvious idea and allows for physical laws to be universal.

But what happens if we take this idea to the logical extreme? Will the ultimate theory of reality demand: I will only allow myself to be coded into a formal framework that makes no reference to itself whatsoever? Obviously a mind twister. But the question remains: what is the ultimate symmetry idea? Or: what is the ultimate invariant? Does this imply “invariance” even with respect to our thinking? How do we construct a system that supports itself out of itself, without relying on anything external? Can such a magical feat be performed by our thinking? Taken from this newsgroup message. See also: fundamental.

While physics has had amazing success in describing most of the observable universe in the last 300 years, the formalism appears to be restricted to the fundamental workings of nature. Only solid-state physics attempts to deal with collective systems. And only thanks to the magic of symmetry is one able to deduce fundamental analytical solutions. In order to approach real-life complex phenomena, one needs to adopt a more systems-oriented focus. This also means that the interactions of entities become an integral part of the formalism. Some ideas should illustrate the situation:
• Most calculations in physics are idealizations and neglect dissipative effects like friction
• Most calculations in physics deal with linear effects, as non-linearity is hard to tackle and is associated with chaos; however, most physical systems in nature are inherently non-linear
• The analytical solution for three gravitating bodies in classical mechanics, given their initial positions, masses, and velocities, cannot be found; it turns out to be a chaotic system which can only be simulated in a computer; however, there are an estimated hundred billion galaxies in the universe

Systems Thinking

Systems theory is an interdisciplinary field which studies relationships of systems as a whole. The goal is to explain complex systems which consist of a large number of mutually interacting and interwoven parts in terms of those interactions.
A timeline:
• Cybernetics (50s): Study of communication and control, typically involving regulatory feedback, in living organisms and machines
• Catastrophe theory (70s): Phenomena characterized by sudden shifts in behavior arising from small changes in circumstances
• Chaos theory (80s): Describes the behavior of non-linear dynamical systems that under certain conditions exhibit a phenomenon known as chaos (sensitivity to initial conditions, regimes of chaotic and deterministic behavior, fractals, self-similarity)
• Complex adaptive systems (90s): The “new” science of complexity which describes emergence, adaptation and self-organization, employing tools such as agent-based computer simulations

In systems theory one can distinguish between three major hierarchies:
• Suborganic: Fundamental reality, space and time, matter, …
• Organic: Life, evolution, …
• Metaorganic: Consciousness, group dynamical behavior, financial markets, …

However, it is not understood how one can traverse the following chain: bosons and fermions -> atoms -> molecules -> DNA -> cells -> organisms -> brains. I.e., how to understand phenomena like consciousness and life within the context of inanimate matter and fundamental theories. e.g., systems view

Category Theory

The mathematical theory called category theory is a result of the “unification of mathematics” in the 40s. A category is the most basic structure in mathematics and is a set of objects and a set of morphisms (maps). A functor is a structure-preserving map between categories. This dynamical systems picture can be linked to the notion of formal systems mentioned above: physical observables are functors, independent of a chosen representation or reference frame, i.e., invariant and covariant.

Object-Oriented Programming

This paradigm of programming can be viewed in a systems framework, where the objects are implementations of classes (collections of properties and functions) interacting via functions (public methods). A programming problem is analyzed in terms of objects and the nature of communication between them. When a program is executed, objects interact with each other by sending messages. The whole system obeys certain rules (encapsulation, inheritance, polymorphism, …). Some advantages of this integral approach to software development:
• Easier to tackle complex problems
• Allows natural evolution towards complexity and better modeling of the real world
• Reusability of concepts (design patterns) and easy modifications and maintenance of existing code
• Object-oriented design has more in common with natural languages than other (i.e., procedural) approaches

Algorithmic vs. Analytical

Perhaps the shift of focus in this new worldview can be understood best when one considers the paradigm of complex systems theory:
• The interaction of entities (agents) in a system according to simple rules gives rise to complex behavior: emergence, structure-formation, self-organization, adaptive behavior (learning), …

This allows a departure from the equation-based description to models of dynamical processes simulated in computers. This is perhaps the second miracle involving the human mind and the understanding of nature. Not only does nature work on a fundamental level akin to formal systems devised by our brains, the hallmark of complexity appears to be coded in simplicity (“simple sets of rules give complexity”), allowing computational machines to emulate its behavior.
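The closing point, that “simple sets of rules give complexity”, is easy to demonstrate. As a minimal editorial sketch (not from the original post), here is Wolfram's elementary cellular automaton rule 30, whose one-line update rule produces a famously irregular pattern:

```python
# Elementary cellular automaton "rule 30": each cell is updated from its
# (left, center, right) neighborhood; the new state is bit number
# 4*left + 2*center + right of the binary expansion of the rule number 30.
RULE, WIDTH, STEPS = 30, 31, 15

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start from a single "on" cell

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    row = [(RULE >> (4 * row[(i - 1) % WIDTH]
                     + 2 * row[i]
                     + row[(i + 1) % WIDTH])) & 1
           for i in range(WIDTH)]
```

A rule that fits in a few lines, yet its output defies any simple closed-form description: exactly the shift from analytical models to algorithmic computation described above.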
complex systems

It is very interesting to note that in this paradigm the focus is on the interaction, i.e., the complexity of the agent can be ignored. That is why the formalism works for chemicals in a reaction, ants in an anthill, humans in social or economic organizations, … In addition, one should also note that simple rules - the epitome of deterministic behavior - can also give rise to chaotic behavior. The emerging field of network theory (an extension of graph theory, yielding results such as scale-free topologies, small-world phenomena, etc., observed in a stunning variety of complex networks) is also located at this end of the spectrum of the formal descriptions of the workings of nature.

Finally, to revisit the analytical approach to reality, note that in the loop quantum gravity approach, space-time is perceived as a causal network arising from graph updating rules (spin networks, which are graphs associated with group theoretic properties), where particles are envisaged as ‘topological defects’ and geometric properties of reality, such as dimensionality, are defined solely in terms of the network’s connectivity pattern. list of open questions in complexity theory.

What is science?
• Science is the quest to capture the processes of nature in formal mathematical representations

So “math is the blueprint of reality” in the sense that formal systems are the foundation of science. In a nutshell:
• Natural systems are a subset of reality, i.e., the observable universe
• Guided by thought, observation and measurement, natural systems are “encoded” into formal systems
• Using logic (rules of inference) in the formal system, predictions about the natural system can be made (decoding)
• Checking the predictions with the experimental outcome gives the validity of the formal system as a model for the natural system

Physics can be viewed as dealing with the fundamental interactions of inanimate matter. For a technical overview, go here.

math models
• Mathematical models of reality are independent of their formal representation

This leads to the notions of symmetry and invariance. Basically, this requirement gives rise to nearly all of physics.

Classical Mechanics

Symmetry, understood as the invariance of the equations under temporal and spatial transformations, gives rise to the conservation laws of energy, momentum and angular momentum. In layman's terms this means that the outcome of an experiment is unchanged by the time and location of the experiment and the motion of the experimental apparatus. Just common sense…

Mathematics of Symmetry

The intuitive notion of symmetry has been rigorously defined in the mathematical terms of group theory.

Physics of Non-Gravitational Forces

The three non-gravitational forces are described in terms of quantum field theories.
These in turn can be expressed as gauge theories, where the parameters of the gauge transformations are local, i.e., differ from point to point in space-time. The Standard Model of elementary particle physics unites the quantum field theories describing the fundamental interactions of particles in terms of their (gauge) symmetries.

Physics of Gravity

Gravity is the only force that can’t be expressed as a quantum field theory. Its symmetry principle is called covariance, meaning that in the geometric language of the theory describing gravity (general relativity) the physical content of the equations is unchanged by the choice of the coordinate system used to represent the geometrical entities.

To illustrate, imagine an arrow located in space. It has a length and an orientation. In geometric terms this is a vector, let's call it a. If I want to compute the length of this arrow, I need to choose a coordinate system, which gives me the x-, y- and z-axis components of the vector, e.g., a = (3, 5, 1). So starting from the origin of my coordinate system (0, 0, 0), if I move 3 units in the x direction (left-right), 5 units in the y direction (forwards-backwards) and 1 unit in the z direction (up-down), I reach the end of my arrow. The problem now is that, depending on the choice of coordinate system - meaning the orientation and the size of the units - the same arrow can look very different: a = (3, 5, 1) = (0, 23.34, -17). However, every time I compute the length of the arrow in meters, I get the same number, independent of the chosen representation. In general relativity the vectors are somewhat like multidimensional equivalents called tensors, and the common-sense requirement that calculations involving tensors do not depend on how I represent the tensors in space-time is covariance.

It is quite amazing, but there is only one more ingredient needed in order to construct one of the most aesthetic and accurate theories in physics. It is called the equivalence principle and states that the gravitational force is equivalent to the forces experienced during acceleration. This may sound trivial, but it has very deep implications.

Physics of Condensed Matter

This branch of physics, also called solid-state physics, deals with the macroscopic physical properties of matter. It is one of physics' first ventures into many-body problems in quantum theory. Although the employed notions of symmetry do not act at such a fundamental level as in the above-mentioned theories, they are a cornerstone of the theory. Namely, the complexity of the problems can be reduced using symmetry in order for analytical solutions to be found. Technically, the symmetry groups enter as boundary conditions of the Schrödinger equation. This leads to the theoretical framework describing, for example, semiconductors and quasi-crystals (interestingly, they have fractal properties!). In the superconducting phase, the wave function becomes symmetric.

The Success

It is somewhat of a miracle that the formal systems the human brain discovers/devises find their match in the workings of nature. In fact, there is no reason for this to be the case, other than that it is the way things are. The following two examples should underline the power of this fact, where new features of reality were discovered solely from the requirements of the mathematical model:
• In order to unify electromagnetism with the weak force (two of the three non-gravitational forces), the theory postulated two new elementary particles: the W and Z bosons.
Needless to say, these particles were hitherto unknown, and it took 10 years for technology to advance sufficiently in order to allow their discovery.
• The fusion of quantum mechanics and special relativity led to the Dirac equation, which demands the existence of an, up to then, unknown flavor of matter: antimatter. Four years after the formulation of the theory, antimatter was experimentally discovered.

The Future…

Despite this success, modern physics is still far from being a unified, paradox-free formalism describing all of the observable universe. Perhaps the biggest obstacle lies in the last missing step to unification. In a series of successes, forces appearing to be independent phenomena turned out to be facets of the same formalism: electricity and magnetism were united in the four Maxwell equations; as mentioned above, electromagnetism and the weak force were merged into the electroweak force; and finally, the electroweak and strong force were united in the framework of the standard model of particle physics. These forces are all expressed as quantum (field) theories. There is only one observable force left: gravity.

The efforts to quantize gravity and devise a unified theory have taken a strange turn in the last 20 years. The problem is still unsolved; however, the mathematical formalisms engineered for this quest - namely string/M-theory and loop quantum gravity - have had a twofold impact:
• A new level in the application of formal systems is reached. Whereas before, physics relied on mathematical branches that were developed independently from any physical application (e.g., differential geometry, group theory), string/M-theory is actually spawning new fields of mathematics (namely in topology).
• These theories tell us very strange things about reality:
  • Time does not exist on a fundamental level
  • Space and time per se become quantized
  • Space has more than three dimensions
  • Another breed of fundamental particles is needed: supersymmetric matter

Unfortunately no one knows if these theories are hinting at a greater reality behind the observable world, or if they are “just” math. The main problem is the fact that any kind of experiment to verify the claims appears to be out of reach of our technology…
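Returning to the arrow example from the covariance passage above: the invariance of length under a change of coordinates can be checked numerically. A small illustrative sketch (the rotation angle is arbitrary, chosen only for the demonstration):

```python
import numpy as np

a = np.array([3.0, 5.0, 1.0])    # the arrow a = (3, 5, 1) from the example

theta = 0.7                      # arbitrary rotation of the coordinate axes (about z)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

b = R @ a                        # the same arrow in the rotated coordinates
print(b)                         # components look nothing like (3, 5, 1) ...
print(np.linalg.norm(a), np.linalg.norm(b))  # ... but the length is identical
```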
Wednesday 31 August 2016

Even More Teaching Time in Mathematics!

The Riksdag has approved the Government's proposal to increase the total teaching time in mathematics in compulsory school by a further 105 hours, from 1020 to 1125 hours, this after the time was already increased by 120 hours in 2013. The total teaching time in all subjects is 6785 hours, which means that every sixth school day, or almost one whole day each week, is to be devoted to mathematics throughout all 9 years of compulsory school. The draft bill behind the decision argues as follows:
1. Mathematics is one of three subjects required for eligibility to all national programmes in upper secondary school.
2. Basic knowledge of mathematics is also a prerequisite for coping with many higher education programmes.
3. For the individual pupils it is of great importance that they acquire the knowledge of mathematics they will need in working life or for further studies.
4. That they have such knowledge is important for society at large as well.
5. Much indicates, however, that Swedish pupils' knowledge of mathematics has deteriorated during the 2000s.
6. As reported in the memorandum, there is international research supporting the connection between increased teaching time and learning outcomes.
7. No change of the syllabus and the knowledge requirements in mathematics is intended on account of the increase in teaching time.

The logic seems to be that if yet more time is devoted to a syllabus/teaching with documented poor results, the results will improve. Who can have come up with such a preposterous proposal? Sverker Lundin offers an explanation in Who wants to be scientific, anyway?: mathematics (or science) has become the new religion of modernity now that the old one has lain down to die, a religion that no adult really believes in and very few practise, but one that it has become respectable and politically correct to profess in the name of modernity, though only in "interpassive" form, with defenceless schoolchildren as recipients of the sermon. In this mummery there are never enough rituals for displaying one's firm faith, and so the number of mathematics hours will continue to increase, while the results continue to sink, and it only becomes more and more important, both for the individual pupils and for society at large, that the knowledge of mathematics needed in school is also taught in school.

The new 105 hours are to be allocated primarily to the middle years of compulsory school, while the 120 hours added in 2013 were aimed mainly at the early years. This reflects a widespread notion that something fundamental has gone wrong in early mathematics teaching, though it is unclear what, and that if only this early mistake, unclear what, is avoided or quickly corrected with extra hours, everything will go so much better. But a one-sided hunt for avoiding the first mistake, unclear which one it is, will of course mean that not much time is left for advancement in later grades, but perhaps that does not matter much...
Monday 15 August 2016

New Quantum Mechanics 19: 1st Excitation of He

Here are results for the first excitation of the Helium ground state into a 1S2S state, with excitation energy = 0.68 = 2.90 - 2.22, to be compared with the observed 0.72.

Sunday 14 August 2016

New Quantum Mechanics 18: Helium Ground State Revisited

Concerning the ground state and ground state energy of Helium, the following observations can be made: Standard quantum mechanics describes the ground state of Helium as $1S^2$ with a 6d wave function $\psi (x1,x2)$ depending on two 3d Euclidean space coordinates $x1$ and $x2$ of the form
• $\psi (x1,x2) = C \exp(-Z\vert x1\vert )\exp (-Z\vert x2\vert )$,       (1)
with $Z = 2$ the kernel charge and $C$ a normalising constant. This describes two identical spherically symmetric electron distributions as the solution of a reduced Schrödinger equation without electronic repulsion potential, with a total energy $E = -4$, way off the observed $-2.903$. To handle this discrepancy between model and observation, the following corrections in the computation of total energy are made, while keeping the spherically symmetric form (1) of the ground state as the solution of a reduced Schrödinger equation:
1. Including the Coulomb repulsion energy of (1) gives $E=-2.75$.
2. Changing the kernel attraction to $Z = 2 - 5/16$, claiming screening, gives $E=-2.85$.
3. Changing the Coulomb repulsion by inflating the wave function to depend on $\vert x1-x2\vert$ can give at best $E=-2.903724...$, to be compared with the precise observation according to the NIST atomic data base, $-2.903385$, thus with a relative error of $0.0001$. Here the dependence on $\vert x1-x2\vert$ of the inflated wave function upon integration with respect to $x2$ reduces to a dependence on only the modulus of $x1$. Thus the inflated non-spherically-symmetric wave function can be argued to anyway represent two spherically symmetric electronic distributions.

We see that a spherically symmetric ground state of the form (1) is attributed correct energy by suitably modifying the computation of energy so as to give a perfect fit with observation. This kind of physics has been very successful and convincing (in particular to physicists), but it may be that it should be subject to critical scientific scrutiny. The ideal in any case is a model with a solution which ab initio, in direct computation, has the correct energy, not a model with a solution which has the correct energy only if the computation of energy is changed by some ad hoc trick until it matches.

The effect of the fix according to 3. is to introduce a correlation between the two electrons, to the effect that they tend to appear on opposite sides of the kernel, thus avoiding close contact. Such an effect can be introduced by angular weighting in (1), which can reduce the electron repulsion energy, but at the expense of increasing the kinetic energy by angular variation of wave functions with global support, and then seemingly without sufficient net effect. With the local support of the wave functions of the new model, meeting with a homogeneous Neumann condition (more or less vanishing kinetic energy), such an increase of kinetic energy is not present and a good match with observation is obtained.
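The numbers in corrections 1. and 2. above can be reproduced from the textbook variational expression for the product ansatz (1): with trial exponent $a$, the expectation value of the Helium Hamiltonian, including Coulomb repulsion, is $\langle H\rangle(a) = a^2 - 2Za + \frac{5}{8}a$ in Hartree units. The following sketch is an editorial addition, not part of the original post:

```python
# Variational energy (in Hartree) of the product ansatz
# psi = exp(-a|x1|) exp(-a|x2|) for a two-electron atom with kernel
# charge Z, using the standard closed form <H>(a) = a^2 - 2*Z*a + (5/8)*a.
Z = 2.0  # Helium

def energy(a):
    return a**2 - 2 * Z * a + (5.0 / 8.0) * a

print(energy(Z))                       # a = Z = 2    -> -2.75  (correction 1.)
a_screened = Z - 5.0 / 16.0            # minimizer of energy(a)
print(a_screened, energy(a_screened))  # a = 27/16    -> about -2.848 (correction 2.)
```

Minimizing $\langle H\rangle(a)$ gives exactly $a = Z - 5/16$, which is the origin of the "screened" kernel charge in correction 2.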
Friday 12 August 2016

New Quantum Mechanics 17: The Nightmare of the Multi-Dimensional Schrödinger Equation

Once Schrödinger had formulated his equation for the Hydrogen atom with one electron, and with great satisfaction observed an amazing correspondence to experimental data, he faced the problem of generalising his equation to atoms with many electrons. The basic problem was the generalisation of the Laplacian to the case of many electrons, and here Schrödinger took the easy route (in the third of Four Lectures on Wave Mechanics delivered at the Royal Institution in 1928) of a formal generalisation introducing a set of new independent space coordinates and an associated Laplacian for each new electron, thus ending up with a wave function $\psi (x1,...,xN)$ for an atom with $N$ electrons depending on $N$ 3d spatial coordinates $x1$,...,$xN$. Already Helium, with a Schrödinger equation in 6 spatial dimensions, then posed a severe computational problem, which Schrödinger did not attempt to solve.

With a resolution of $10^2$ for each coordinate, an atom with $N$ electrons then gives a discrete problem with $10^{6N}$ unknowns, which already for Neon with $N=10$ amounts to $10^{60}$, vastly beyond the reach of any conceivable computer (and, with only a few more electrons, bigger than the total number of atoms in the universe). The easy generalisation thus came with the severe side effect of giving a computationally hopeless problem, and thus, from a scientific point of view, a meaningless model.

To handle the absurdity of the $3N$ dimensions, rescue steps were then taken by Hartree and Fock to reduce the dimensionality by restricting wave functions to be linear combinations of products of one-electron wave functions $\psi_j(xj)$ with global support:
• $\psi_1(x1)\times\psi_2(x2)\times ....\times\psi_N(xN)$,
to be solved computationally by iterating over the one-electron wave functions. The dimensionality was further reduced by ad hoc postulating that only fully symmetric or anti-symmetric wave functions (in the variables $(x1,...,xN)$) would describe physics, adding ad hoc a Pauli Exclusion Principle along the way to help the case. But the dimensionality was still large, and to get results in correspondence with observations required ad hoc trial-and-error choice of one-electron wave functions in the Hartree-Fock computations setting the standard.

We thus see an easy generalisation into many dimensions followed by a very troublesome rescue operation stepping back from the many dimensions. It would seem more rational not to give in to the temptation of easy generalisation, and in this sequence of posts we explore such a route.

PS In the second of the Four Lectures Schrödinger argues against an atom model in terms of charge density by comparing with the known Maxwell's equations for electromagnetics in terms of electromagnetic fields, which work so amazingly well, with the prospect of a model in terms of energies, which is not known to work.

Thursday 11 August 2016

New Quantum Mechanics 16: Relation to Hartree and Hartree-Fock

The standard computational form of the quantum mechanics of an atom with N electrons (Hartree or Hartree-Fock) seeks solutions to the standard multi-dimensional Schrödinger equation as linear combinations of wave functions $\psi (x1,x2,...,xN)$ depending on $N$ 3d space coordinates $x1$,...,$xN$ as a product:
• $\psi (x1,x2,...,xN)=\psi_1(x1)\times\psi_2(x2)\times ....\times\psi_N(xN)$,
where the $\psi_j$ are globally defined electronic wave functions depending on a single space coordinate $xj$.
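Before continuing with the new model, a quick back-of-envelope check of the $10^{6N}$ count claimed in the post above (an editorial sketch):

```python
# Grid count for an N-electron wave function discretized with 10^2
# points per coordinate and 3N coordinates: (10^2)^(3N) = 10^(6N).
for N in (1, 2, 10):   # hydrogen, helium, neon
    print(f"N = {N:2d}: 10^{6 * N} grid points")

# N = 10 already gives 10^60 unknowns; for comparison, the number of
# atoms in the observable universe is commonly estimated at ~10^80.
```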
The new model takes the form of a non-standard free boundary Schrödinger equation in a wave function $\psi (x)$ as a sum:
• $\psi (x)=\psi_1(x)+\psi_2(x)+....+\psi_N(x)$,
where the $\psi_j(x)$ are electronic wave functions with local support on a common partition of 3d space with common space coordinate $x$.

The difference between the new model and Hartree/Hartree-Fock is evident and profound. A big trouble with electronic wave functions having global support is that they overlap and demand an exclusion principle and a new physics of exchange energy. The wave functions of the new model do not overlap, and there is no need of any exclusion principle or exchange energy.

PS Standard quantum mechanics comes with new forms of energy such as exchange energy and correlation energy. Here correlation energy is simply the difference between the experimental total energy and the total energy computed with Hartree-Fock, and thus is not a physical form of energy as suggested by the name, but simply a computational/modeling error.

Wednesday 10 August 2016

New Quantum Mechanics 15: Relation to "Atoms in Molecules"

Atoms in Molecules, developed by Richard Bader, is a charge density theory based on basins of attraction of atomic kernels, with boundaries characterised by a vanishing normal derivative of the charge density. This connects to the homogeneous Neumann boundary condition identifying the separation between electrons in the new model under study in this sequence of posts. Atoms in Molecules is focussed on the role of atomic kernels in molecules, while the new model primarily (so far) concerns electrons in atoms.

New Quantum Mechanics 14: $H^-$ Ion

Below are results for the $H^-$ ion with two electrons and a proton. The ground state energy comes out as -0.514, slightly below the energy -0.5 of $H$, which means that $H$ is slightly electro-negative and thus, by acquiring an electron into $H^-$, may react with $H^+$ to form $H2$ (with ground state energy -1.17), as one possible route to the formation of $H2$. Another route is covered in this post, with two H atoms being attracted to form a covalent bond. The two electron wave functions of $H^-$ occupy half-spherical domains (depicted in red and blue) and meet at a plane with a homogeneous Neumann condition satisfied on both sides.

Sunday 7 August 2016

New Quantum Mechanics 13: The Trouble with Standard QM

Standard quantum mechanics of atoms is based on the eigenfunctions of the Schrödinger equation for a Hydrogen atom with one electron, named "orbitals", which are the elements of the Aufbau or build of many-electron atoms in the form of s, p, d and f orbitals of increasing complexity, see below. These "orbitals" have global support, which has led to the firm conviction that all electrons must have global support and so have to be viewed as always being everywhere and nowhere at the same time (as a basic mystery of qm beyond the conception of human minds). To handle this strange situation Pauli felt forced to introduce his exclusion principle, while strongly regretting ever having come up with such an idea, even in his Nobel Lecture:
• Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions.
• I had always the feeling and I still have it today, that this is a deficiency.
• Of course in the beginning I hoped that the new quantum mechanics, with the help of which it was possible to deduce so many half-empirical formal rules in use at that time, will also rigorously deduce the exclusion principle.
• Instead of it there was for electrons still an exclusion: not of particular states any longer, but of whole classes of states, namely the exclusion of all classes different from the antisymmetrical one.
• The impression that the shadow of some incompleteness fell here on the bright light of success of the new quantum mechanics seems to me unavoidable.

In my model electrons have local support and occupy different regions of space, and thus have physical presence. Besides, the model seems to fit with observations. It may be that this is the way it is.

The trouble with (modern) physics is largely the trouble with standard QM, the rest of the trouble being caused by Einstein's relativity theory. Here is recent evidence of the crisis of modern physics: The LHC "nightmare scenario" has come true. Here is a catalogue of "orbitals" believed to form the Aufbau of atoms, and the Aufbau of the periodic table, which is filled with ad hoc rules (Pauli, Madelung, Hund, ...) and exceptions from these rules.

Saturday 6 August 2016

New Quantum Mechanics 12: H2 Non Bonded

Here are results for two hydrogen atoms forming an H2 molecule at kernel distance R = 1.4 at a minimal total energy of -1.17, and a non-bonded molecule for larger distances approaching full separation for R larger than 6-10 at a total energy of -1. The results fit quite well with the table data listed below. The computations were made (on an iPad) in cylindrical coordinates in rotational symmetry around the molecule axis, on a mesh of 2 x 400 along the axis and 100 in the radial direction. The electrons are separated by a plane perpendicular to the axis through the molecule center, with a homogeneous Neumann boundary condition for each electron half-space Schrödinger equation. The electronic potentials are computed by solving a Poisson equation in full space for each electron.

PS To capture the approach of the energy to -1 as R becomes large, in particular the (delicate) $R^{-6}$ dependence of the van der Waals force, requires a (second order) perturbation analysis, which is beyond the scope of the basic model under study, with its $R^{-1}$ dependence of kernel and electronic potential energies.

TABLE II: Born-Oppenheimer total energies E of the ground state of the hydrogen molecule (L. Wolniewicz, J. Chem. Phys.
99, 1851 (1993)), for two hydrogen atoms separated by a distance R:

R (bohr)   E (Hartree)
0.20        2.197803500
0.30        0.619241793
0.40       -0.120230242
0.50       -0.526638671
0.60       -0.769635353
0.80       -1.020056603
0.90       -1.083643180
1.00       -1.124539664
1.10       -1.150057316
1.20       -1.164935195
1.30       -1.172347104
1.35       -1.173963683
1.40       -1.174475671
1.45       -1.174057029
1.50       -1.172855038
1.60       -1.168583333
1.70       -1.162458688
1.80       -1.155068699
2.00       -1.138132919
2.20       -1.120132079
2.40       -1.102422568
2.60       -1.085791199
2.80       -1.070683196
3.00       -1.057326233
3.20       -1.045799627
3.40       -1.036075361
3.60       -1.028046276
3.80       -1.021549766
4.00       -1.016390228
4.20       -1.012359938
4.40       -1.009256497
4.60       -1.006895204
4.80       -1.005115986
5.00       -1.003785643
5.20       -1.002796804
5.40       -1.002065047
5.60       -1.001525243
5.80       -1.001127874
6.00       -1.000835702
6.20       -1.000620961
6.40       -1.000463077
6.60       -1.000346878
6.80       -1.000261213
7.00       -1.000197911
7.20       -1.000150992
7.40       -1.000116086
7.60       -1.000090001
7.80       -1.000070408
8.00       -1.000055603
8.50       -1.000032170
9.00       -1.000019780
9.50       -1.000012855
10.00      -1.000008754
11.00      -1.000004506
12.00      -1.000002546

Wednesday 3 August 2016

New Quantum Mechanics 11: Helium Mystery Resolved

The modern physics of quantum mechanics, born in 1926, was a towering success for the Hydrogen atom with one electron, but already Helium with two electrons posed difficulties, which have never been resolved (truth be told). The result is that prominent physicists always pride themselves on stating that quantum mechanics cannot be understood, only be followed to the benefit of humanity, like a religion:
• I think I can safely say that nobody understands quantum mechanics. (Richard Feynman, in The Character of Physical Law (1965))

Text books and tables list the ground state of Helium as $1S^2$, with two spherically symmetric electrons (the S) with opposite spin in a first shell (the 1), named parahelium. The energy of a $1S^2$ state according to basic quantum theory is equal to -2.75 (Hartree), while the observed ground state energy is -2.903. To handle this apparent collapse of basic quantum theory, the computation of energy is changed by introducing a suitable perturbation away from spherical symmetry which delivers the wanted result of -2.903, while maintaining that the ground state still is $1S^2$. Of course, this does not make sense, but since quantum mechanics is not "anschaulich" or "visualisable" (as required by Schrödinger) and therefore cannot be understood by humans, this is not a big deal. By a suitable perturbation the desired result can be reached, and we are not allowed to ask any further questions, following the dictate of Dirac: Shut up and calculate.

New Quantum Mechanics resolves the situation as follows: The ground state is predicted to be a spherically (half-)symmetric continuous electron charge distribution, with each electron occupying a half-space, and the electrons meeting at a plane (free boundary) where the normal derivative of each electron charge distribution vanishes. The result of ground state energy computations according to earlier posts shows close agreement with the observed -2.903. Notice the asymmetric electron potential and the resulting slightly asymmetric charge distribution with polar accumulation. The model shows a non-standard electron configuration, which may be the true one (if there is anything like that).
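To read off the bond length and well depth from the Wolniewicz table in the H2 post above, one can fit a parabola through the tabulated points around the minimum. An editorial sketch:

```python
import numpy as np

# (R, E) points from the table near the minimum (bohr, Hartree)
R = np.array([1.30, 1.35, 1.40, 1.45, 1.50])
E = np.array([-1.172347104, -1.173963683, -1.174475671,
              -1.174057029, -1.172855038])

c2, c1, c0 = np.polyfit(R, E, 2)   # E ~ c2*R^2 + c1*R + c0
R_min = -c1 / (2 * c2)             # vertex of the fitted parabola
E_min = np.polyval([c2, c1, c0], R_min)
print(R_min, E_min)  # about 1.40 bohr and -1.1745 Hartree, matching the
                     # kernel distance R = 1.4 and energy -1.17 quoted above
```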
Sixty Years of Quantum Physics By Edward U. Condon Washington University, St. Louis, Missouri [Read before the Society December 2, 1960] I was invited to speak on the occasion of the 1500th Regular Meeting of the Society, and of course am delighted to be able to come and do it. But those who conveyed the invitation could not refrain from reminding me that I owed the Society a retiring presidential address. I was President, in 1951, and it was in the fall of that year that I departed hastily to go to Corning Glass Works to be Director of Research. That was a very interesting experience, and I am still connected with the glass business, though I am also doing professing. I started my career in experimental physics and lasted one day. When I started work on a doctoral thesis at the University of California in 1925, I had to set up a vacuum system. All experimental physicists in those days had to get a Cenco pump on the floor and glass tubing up to something that was on the table. I started out like all the rest but broke so much glass the first day that they suggested I go into theoretical physics. I told this story at Corning after I became their Director of Research. Mr. Amory Houghton, Chairman of the Board, who is now our Ambassador to France, said, "Isn't it good that at last you are in a place where you can't possibly break enough glass to make any difference." Looking back over the various possibilities of things that might be suitable to talk about this evening, I thought it would be interesting to review the historical development of what I now would like to call quantum physics, rather than quantum mechanics, because it has grown and expanded in such a way that it permeates all of modern physics. In fact it is extremely difficult to think of any actively cultivated part of physics that is not directly involved with Planck's quantum constant "h". The basic discovery by Planck was made within a week or two of exactly 60 years ago, so I thought it might be interesting to discuss this subject. The subject of quantum physics started with the statistical theory of the distribution of energy in the black-body spectrum. The spectrum of radiated energy in equilibrium with matter in an enclosure is commonly called black-body radiation because it is the kind of radiation that would be emitted by a perfect absorber. The active problem in 1900 was the explanation of the distribution of energy in the spectrum. It is interesting to realize that the subject has quite an ancient history. The first application of thermodynamics to black-body radiation goes back to 1859 when Kirchhoff first developed the ideas of radiative exchanges, and the connections between emission and absorption, rules according to which a good emitter is a good absorber, and a poor emitter is a poor absorber. In 1884 the discovery had been made of what we now call the Stefan-Boltzmann law, that the total radiation goes up as the fourth power of the absolute temperature. It was discovered by Stefan experimentally and interpreted theoretically by Boltzmann, making one of the earliest applications of thermodynamics to radiation after those first ideas of Kirchhoff's. In 1894 came the discovery by Wien of the displacement law, which tells how the distribution of energy over various wavelengths changes with the absolute temperature. The big problem at that time was to try to understand the reason for this distribution. 
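For reference, in modern notation (an editorial addition, not part of the lecture as delivered), the two laws just mentioned read

\[R = \sigma T^4, \qquad \lambda_{\max} T = b,\]

with $\sigma \approx 5.67\times 10^{-8}\,\mathrm{W\,m^{-2}\,K^{-4}}$ the Stefan-Boltzmann constant and $b \approx 2.90\times 10^{-3}\,\mathrm{m\,K}$ Wien's displacement constant.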
Contrary to the general belief, which has become true in the last 30 years or so, that all physics is really done by young men in their 20's, the discovery of Planck was done when he was at the advanced age of 42. In 1900 he had already put a part of his career of research work behind him and was a professor in the University of Berlin, so that his work on quantum physics was done 21 years after he had received his doctorate for a thesis on the second law of thermodynamics. His thesis, it is interesting to note, was done under Kirchhoff and Helmholtz at Berlin. In his autobiography he says that he is quite confident neither of them ever read it.

Thermodynamics was Planck's first love, his principal love throughout physics. In fact there are many indications that he was rather annoyed with his discovery of the Planck constant of action and did his best for about 15 years or so, on up to about 1915, to find ways of evading his own discovery and reconciling the theory that he had discovered with classical theory. This resembles somewhat the story that I used to hear from Professor Ladenburg at Princeton, about Roentgen. Everybody knows about the great consequences of Roentgen's discovery of the Roentgen rays, or x rays. Ladenburg was a student of Roentgen. He said that Roentgen was annoyed with his x rays because he did not understand what they were and much preferred classical subjects. So the upshot of it was that Ladenburg did a doctoral thesis under Roentgen just a few years after Roentgen had discovered x rays, on the subject of the correction to Stokes' law for a body falling through a viscous medium in a cylindrical tube, allowing for the finite diameter of the tube and the wall effect. They had a long pipe filled with castor oil, which is the traditional viscous material. It reached from the top floor of the laboratory to the basement. He said nothing ever gave Roentgen quite as much pleasure as to see the steel ball arrive down at the basement just when the calculation said it ought to. You can tell by a great deal of Planck's writings and readings that he felt much the same way about classical physics in relation to the modern developments.

Lord Rayleigh had published a theory that was based on the equipartition of energy doctrine, that goes back to Maxwell, Waterston, and Boltzmann, whereby every degree of freedom in the radiation field should have had the energy kT. He knew it did not, because that would have given an infinite or divergent result. But nevertheless that was where the theoretical thinking of that time led, and this served to point up the importance of the quantum modifications that had to be made. One of the things that I found interesting in looking back in the history of this theory is that it has always been referred to as the Rayleigh-Jeans law, and I had supposed that Rayleigh and Jeans had worked together on it. In point of fact Rayleigh derived it and made a mistake by a factor of 8 and Jeans corrected that in a letter to Nature, so that dividing the original Rayleigh formula by 8 was Jeans' contribution. It was an essential contribution because it is a mistake that we all might make very readily. In counting up the degrees of freedom in the radiation field that are associated with frequencies between $\nu$ and $\nu + d\nu$ one has to calculate how many integers there are whose squares add up to a certain value, and it is natural to take the volume of a sphere of a certain radius.
But in fact one takes only an octant out of this sphere because the integers, all three of them, have to be positive, and that is where Rayleigh went wrong.

The radiation measurements that served to inspire Planck were being made at the Physikalische-Technische Reichsanstalt by some of the great names of early days of radiation measurement work: Lummer, Pringsheim, and Rubens. So that the problem of distribution of energy in the spectrum was very much to the fore and very good measurements were being made. It was on October 19, 1900, that Planck presented his radiation formula to the German Physical Society at a meeting in Berlin, strictly as an empirical interpolation formula between the Rayleigh-Jeans law, which is valid at long wavelengths, and the Wien law, which is valid at short wavelengths. By interpolating in between he had been able to find a simple formula that extended across the whole regime. But at that time he had no theoretical basis for it whatever. That night Rubens took the data to which he had access and made a very careful comparison with Planck's formula—a more careful one than Planck himself had made at that time—and found that it represented the data with extraordinary accuracy, much better than an empirical formula usually does. He called on Planck the next morning with a strong conviction that there was some real fundamental truth in the formula and not just an accidental agreement.

Planck then set to work to find a theoretical basis for this formula and worked very hard for quite a while. In his biography he speaks of this as the most difficult period of his whole life. Then, just within less than two months, on December 14, 1900—so we are just 12 days ahead of the 60th Anniversary—he presented a paper to the Physical Society of Berlin in which he took the decisive step. By applying the Boltzmann principle for the connection of entropy with probability—which up to that time had hardly been used at all—he was able to work out the spectral distribution of energy that would be in equilibrium with a system of electrical oscillators. In order to get the desired result he had to suppose that the energy of each oscillator was built up in finite steps of energy, whereas in all of physics hitherto energy had been a continuous variable. To agree with the Wien displacement law he then had to assume that the finite size of these steps was proportional to the frequency, and so the energy quanta were $h\nu$. In that way he arrived at the famous formula

\[U_\nu = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/kT} - 1}\]

for the density of the energy in the spectral frequency range between $\nu$ and $\nu + d\nu$ in black-body radiation at absolute temperature $T$. As is readily seen, in the limit of $h\nu/kT$ small compared with 1 this formula transforms into the Jeans formula, and in the limit of $h\nu$ large compared with $kT$ it becomes the Wien formula and represents the data with great accuracy in between. Additional measurements of the same sort were later made with great precision at the National Bureau of Standards by W. W. Coblentz.

One of the most extraordinary things about this is the accuracy with which Planck was able to get these fundamental constants. At that time there was no good value available for Avogadro's number, or for the charge on the electron; and the values that Planck was able to get out of his work were much closer than is usually appreciated.
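The two limiting cases just described are easy to verify symbolically. A small editorial sketch using sympy:

```python
import sympy as sp

h, nu, c, k, T = sp.symbols("h nu c k T", positive=True)
planck = 8 * sp.pi * h * nu**3 / (c**3 * (sp.exp(h * nu / (k * T)) - 1))

# h*nu/kT small: Planck's law approaches the Rayleigh-Jeans form 8*pi*nu^2*k*T/c^3
rayleigh_jeans = 8 * sp.pi * nu**2 * k * T / c**3
print(sp.limit(planck / rayleigh_jeans, h, 0))   # -> 1

# h*nu/kT large: Planck's law approaches the Wien form 8*pi*h*nu^3*exp(-h*nu/(k*T))/c^3
wien = 8 * sp.pi * h * nu**3 * sp.exp(-h * nu / (k * T)) / c**3
print(sp.limit(planck / wien, nu, sp.oo))        # -> 1
```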
When he first represented the data, in order to obtain a fit with the old black-body data he had to assume that h was $6.885\times 10^{-27}$ erg sec, and k, which we now call the Boltzmann constant, $1.429\times 10^{-16}$ erg/°K. The present best value for the first number is $6.6252\times 10^{-27}$ instead of $6.885\times 10^{-27}$ and for the second number, $1.3804\times 10^{-16}$ instead of $1.429\times 10^{-16}$. At that very first time Planck got Planck's constant only about 4.4 percent too high, and Boltzmann's constant about 3.5 percent too high relative to the best modern values. This was actually the first time that the Boltzmann constant had been evaluated.

Let me just remind you of its relation with the other basic constants that have so much importance. The gas constant, R, as we ordinarily know it, per gram mole, is equal to the Avogadro number, N, times k; and the Faraday, F, the amount of charge needed to plate out a gram mole of univalent ions, is equal to Avogadro's constant times the charge of the electron. That is, R = Nk and F = Ne. These molar quantities, R and F, were well known, and good values for them were available in those days, but what was not known was the Avogadro number N. However, if you know any one of these quantities you can get the others. So, as it turns out, the getting of the Boltzmann constant, k, enabled one to get N by the first equation, and then, using that N in combination with the knowledge of the Faraday, F, one was able to get the charge on the electron.

The electron had only been recognized about three years earlier by J. J. Thomson, and while the ratio of its charge to mass was known, its charge by itself was not well known. You will find in the literature of that time values published for e, the charge on the electron, ranging all the way from $1.29\times 10^{-10}$ electrostatic units on up to $6.5\times 10^{-10}$ electrostatic units, which was given by J. J. Thomson, and a little while later revised back down to $3.4\times 10^{-10}$. In other words, at that time one only knew the charge on the electron to a factor of about 5 or 6. On the other hand, if you take the value of the Faraday and the value of k and solve for N from the gas constant and then solve for e, you find, surprisingly enough, that e equals $4.69\times 10^{-10}$ electrostatic units, which is only 2.3 percent below the presently recognized value.

Thus in the space of just a month or two Planck first found an empirical formula which to this day gives the most accurate representation of the spectral distribution of the radiant energy; second, he found a derivation of that formula. In order to get the derivation he had to introduce the extraordinary idea of energy quantization into physics. Third, he obtained an excellent value for the charge on the electron, which everybody was trying to do at that time. You might expect that this would cause a great deal of excitement among physicists at that time, but it did not. If you search through the journals you find that practically nothing is said about Planck in the years 1900 through 1904. I was very much intrigued, therefore, when just before this meeting Mr. Marton recalled that a search of the records of this Society indicated that in 1902 Arthur L. Day gave a report on Planck's work. Thus The Philosophical Society of Washington was one of the earliest to pay attention to it.

The first real extension of Planck's work came with Einstein's famous paper of 1905, the paper for which he got the Nobel prize. (It is important to realize that Einstein did not get the Nobel prize for the theory of relativity.
They might give it to him now if he were around, but they did not in those days.) Planck wrote only one other paper on the subject in that period between 1900 and 1905 and this was mainly an expository paper. There is one brief mention by Burbury, another paper by van der Waals, Jr., and that is all. In those days Planck was almost completely ignored.

In Planck's own autobiography he tells of his own attitude toward the Planck constant, and I thought it would be interesting to read his own words on that, of course translated into English. He said:

While the significance of the quantum of action for the interrelation between entropy and probability was thus conclusively established, the great part played by this new constant in the uniform regular occurrence of physical processes still remained an open question. I therefore tried immediately to weld the elementary quantum of action, h, somehow into the framework of classical theory. But in the face of all such attempts the constant showed itself to be obdurate. So long as it could be regarded as infinitesimally small, i.e., dealing with higher energies and longer periods of time, everything was in perfect order. But in the general case difficulties would arise at one point or another, difficulties which became more noticeable as higher frequencies were taken into consideration. The failure of every attempt to bridge that obstacle soon made it evident that the elementary quantum of action plays a fundamental part in atomic physics and that its introduction opened up a new era in natural science, for it heralded the advent of something entirely unprecedented and was destined to remodel basically the physical outlook and thinking of man which, ever since Leibniz and Newton laid the ground work for infinitesimal calculus, were founded on the assumption that all causal interactions are continuous.

He goes on in a more personal vein to say:

My futile attempts to fit the elementary quantum of action somehow into the classical theory continued for a number of years [actually until 1915] and they cost me a great deal of effort. Many of my colleagues saw in this something bordering on a tragedy. But I feel differently about it, for the thorough enlightenment I thus received was all the more valuable. I now knew for a fact that the elementary quantum of action played a far more significant part in physics than I had originally been inclined to suspect, and this recognition made me see clearly the need for the introduction of totally new methods of analysis and reasoning in the treatment of atomic problems.

In spite of Jeans' intimate association with this problem, you find no reference whatever to the Planck black-body law in the first edition of his Dynamical Theory of Gases, which was published in 1904, four years after Planck's work. In the Landolt-Bornstein Tables, published in 1905, we find an extraordinary thing, namely, that it gives widely different values for what is often called the Loschmidt number, the number of molecules in one cubic centimeter of various gases under standard conditions. Of course this value should be the same for all gases. But they solemnly give you a table with $2.1\times 10^{19}$ for air, $4.2\times 10^{19}$ for nitrogen, $7.3\times 10^{19}$ for hydrogen, and so on. Apparently Landolt and Bornstein did not believe in the Avogadro number. Planck got $2.76\times 10^{19}$ for this number, which as we have seen is a good value.

Josiah Willard Gibbs was America's first great theoretical physicist.
He was elected, I find, to membership in the Washington Academy of Sciences in 1900. He died in 1903 at the age of 64. There is no indication in any of his publications, or notes that he left behind, that he paid any attention to Planck's work. He had puzzled over the problem of the specific heat of polyatomic gases, which everybody was puzzled about at that time, because it has too low a value to correspond with the equipartition law. There is some indication that he found these difficulties with the equipartition law, revealed in the specific heat of gases, somewhat depressing, and I find an indication of that perhaps in an interesting paragraph from the preface to his famous work on statistical mechanics, published in 1902, a year before Gibbs' death:

In the present state of science it seems hardly possible to frame a dynamic theory of molecular action which shall embrace the phenomena of thermodynamics, of radiation and of the electrical manifestations which accompany the union of atoms. Yet any theory is obviously inadequate that does not take account of all these phenomena. [There is a wonderful sentence at the end of this paragraph which I think we all ought to realize was written by Gibbs in 1902.] Certainly one is building on an insecure foundation who rests his work on hypotheses concerning the constitution of matter.

Lord Kelvin's Baltimore lectures, which were delivered at The Johns Hopkins University in 1884 but were not published until 1904, underwent a great deal of revision in the intervening years. The preface to these is very interesting to those who have anything to do with editing or getting things through the press. He admits that he had been working on the revision for all of the 19 years. I can well imagine he was a popular fellow around the print shop. That work includes as its appendix B the famous lecture to which I am sure you have all heard allusions, "Nineteenth Century Clouds over the Dynamical Theory of Heat and Light." That lecture was delivered in April 1900, some months before Planck's work, and it was published originally in the Philosophical Magazine of July 1901. It makes no reference to the black-body radiation or to Planck's work, although cloud 2—he had his clouds numbered—was this same concern about the failure of equipartition of energy as evidenced by the specific heats of gases, the same problem that was troubling Gibbs. Lord Rayleigh's publication of what we now call the Rayleigh-Jeans law was in the Philosophical Magazine in 1900. He did not return to the subject again until 1905, when he wrote several notes in Nature in which he concedes or agrees with the comment that Jeans had made about being wrong by a factor of 8. It is interesting that he says that he "has not succeeded in following Planck's reasoning." That is how Planck's work was received by Lord Rayleigh, one of the greatest British physicists. Rayleigh actually published papers actively through 1919, but he seems to have had no more to say on black-body radiation than what he said in that one 1905 paper. A search through his published papers reveals two more items relating to modern quantum physics. In the 1906 Philosophical Magazine he comments on the classical radiative properties of the atom models that resemble J. J. Thomson's. However, he goes beyond them in that he regards the negative charge as distributed more like a continuous fluid and studies it as a normal-mode-of-vibration problem.
In an editorial note that he adds in his collected papers, written in 1911, he refers back to some old work of an 1897 paper. An interesting thing to me appears in this note when he comments that all kinds of models of normal modes of vibrations of continuous systems always come up with formulas in which the square of the frequency is written additively as the sum of contributions coming from the different degrees of freedom—from what we would now call the different quantum numbers. Rayleigh was wedded to a classical vibration theory model where the squares of the frequencies get in because of the second derivative with regard to the time based on Newton's law of mechanics. Nowadays when we lecture on quantum mechanics we just quietly make the Schrödinger equation contain the first time derivative so we will not have this trouble. That is the advantage of making up your equations as you go along as compared with getting them from Newton or somebody like that. Steeped in acoustics as he was, Rayleigh does say: "A partial escape from these difficulties might be found in regarding the actual spectrum lines as due to difference tones from primaries of much higher pitch." That is a well-known device that gives physicists license to pass from a square term to a linear term. That is to say, a small change in the square is linear. There is still something else in the 1906 paper which intrigues me. Rayleigh devotes a paragraph to the problem of the sharpness of spectral lines despite the random character of the conditions of excitation and concludes with a paragraph that sounds very modern. I will quote that: It is possible, however, that the conditions of stability or of exemption from radiation may, after all, demand this definiteness, notwithstanding that in the comparatively simple cases treated by Thomson, the angular velocity is open to variation. According to this view, the frequencies observed in the spectrum may not be frequencies of disturbance or of oscillation in the ordinary sense at all, but rather form an essential part of the original constitution of the atom as determined by conditions of stability. Maybe one reads into that one's present knowledge of the later developments of quantum theory, but I found that very interesting as a foreshadowing of the way we look at it now. Even as late as 1911 we find Lord Rayleigh worrying about Kelvin's cloud 2, the specific heat difficulty, although Einstein had really put that difficulty away in 1907. In 1911 Rayleigh wrote to Walter Nernst to express his concern: If we begin by supposing an elastic body to be rather stiff, the vibrations have their full share of kinetic energy [that is the equipartition law] and this share cannot be diminished by increasing the stiffness…. We all know that increasing the stiffness makes the interval between the vibration quantum levels greater, so that they do not take part practically in the equipartition law simply because they cannot get enough energy to be even excited to the first state. However, Rayleigh goes on: Perhaps this failure might be invoked in support of the views of Planck and his school that the laws of dynamics as hitherto understood cannot be applied to the smallest parts of the bodies. But I must confess that I do not like this solution of the puzzle . . . I have a difficulty in accepting it as a picture of what actually takes place. We do well I think to concentrate attention on the diatomic gaseous molecule. 
Under the influence of collisions the molecule freely and rapidly acquires rotation. [He knows this from the specific heat.] Why does it not also acquire vibration along the line joining the two atoms? If I rightly understand, the answer of Planck is that in consideration of the stiffness of the union, the amount of energy that should be acquired at each collision falls below the minimum possible and that therefore none at all is acquired [this is of course exactly what we know]—an argument which certainly sounds paradoxical. This is the end of it for Rayleigh. So we can see that the acceptance of these ideas was something that came very, very slowly. The examples I have chosen illustrate that very little was said about the subject at all from 1900 to 1905, and even after that you find the great men of the period hesitant and unwilling to build it into their thinking. Let us now turn to Einstein's famous 1905 paper, which I must confess I had not read until I got to thinking over the preparation for this lecture. It is one of the papers we all hear about in school and worship, but do not read. One of the odd things about this paper is that "h" is not in it, believe it or not. In the paper Einstein denotes by the letter b what we now would call h/k, and then he writes R/N for what we call k, and thus you find in that paper that the energy of a light quantum is not hν at all. It is Rbν/N, which certainly takes a bit of getting used to.
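With the identifications just mentioned—b = h/k and k = R/N—one line of algebra shows that Einstein's expression is the familiar quantum energy in disguise:

\[ \frac{Rb\nu}{N}=k\cdot\frac{h}{k}\cdot\nu=h\nu. \]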
The title of his paper is an interesting one: "Heuristic Viewpoint Concerning the Emission and Transformation of Light," indicating I think that he meant that there is something in the paper, but he does not quite know what. At least that is what I mean when I say "heuristic." Einstein might, of course, have meant something else. He says:

The energy of a ponderable body cannot be divided into indefinitely many indefinitely small parts, whereas the energy emitted by a point light source is regarded on the Maxwell theory or more generally according to every wave theory as continuously spread over a continuously increasing volume. Such wave theories of light have given a good representation of purely optical phenomena and will surely not be replaced by any other theory. [He was right in that. They have not been replaced.]

He continues:

It is to be remembered, however, that the optical observations referred to time mean values, not to instantaneous values, and it is quite conceivable that, in spite of complete success in dealing with diffraction, reflection, refraction, dispersion, et cetera, such a theory of continuous fields could lead to contradictions with experience when applied to phenomena of light emission and absorption.

After a little more discussion he makes the key declaration which played such a decisive role in all subsequent developments, in which, in one sentence, he says:

According to the supposition here considered, the energy in the light propagated in rays from a point is not smeared out continuously over larger and larger volumes, but rather consists of a finite number of energy quanta localized at space points, which move without breaking up and which can be absorbed or emitted only as wholes.

Oddly enough, though everybody quotes this paper purely for the photoelectric effect in the discussion nowadays, the photoelectric effect is only one section, paragraph 8, of the whole paper. The first six paragraphs are concerned entirely with another way of looking at the details of the statistical distribution of the black-body radiation law, and the entropy of radiation along the lines of the quantized wave theory. Finally, paragraph 7 is an interpretation of the Stokes' rules for photoluminescence. In ordinary fluorescence and phosphorescence, Stokes' law, which goes way back to 1860, says that the wavelength of the fluorescent light is always greater than the wavelength of the exciting light, or nearly so. There is some radiation, called anti-Stokes radiation, for which the wavelength is a little shorter. That puzzled me: why so little stress in Einstein's paper on the photoelectric effect compared with these other things? Then comes section 8, which deals with the photoelectric effect. So I asked Professor A. L. Hughes, one of the pioneers of photoelectric work, who is at our place—he is emeritus professor at Washington University in St. Louis—how that could be. He told me how very primitive the knowledge of the photoelectric effect was at that time. No vacuum work on photoelectricity had been done—even what had been done was done with very poor vacuums, under very poor conditions. In point of fact no effort had been made to find out the retarding potential required to stop the photocurrent in a definite circuit. What had been observed was that if you insulate a metal object and shine light on it, it will build up to a certain potential and then stay at that potential. That is, it builds up its own retarding potential and finally prevents the escape of further electrons just out in the air. Different metals differed in regard to what potential would be built up by certain light, and it was found that as one went to more and more violet light one got a higher potential. But there were only very crude measurements indeed, so crude that one would hardly think that there was any possibility of a fundamental understanding being involved there. That perhaps is the reason why the photoelectric effect was so little stressed in that paper. The Stokes' law argument was a much more direct experimental one and, conversely, it seems rather odd to me as I think about it, that Stokes' law is not more stressed today in teaching the subject. It was in 1907 that the specific heat work of Einstein clarified the problem of low-temperature specific heat. It is fascinating to look up some of the historical information that is available in the literature. I do not mean that one has to go to ancient history, just the history of the last century. For example, in 1904, when the great St. Louis World's Fair was held, various distinguished visitors were giving lectures. Lord Kelvin gave a speech there that was rather suggestive of a sort of inverse neutrino theory. The thing that was bothering him at that time was—this is a little off the subject of quantum theory, but I think it is interesting—the measurement just made by the Curies of the amount of energy being given off by radium per unit time. They had not measured the half life, and the energy given off did not show any signs of weakening, and you know that physicists are great on extrapolation. They said that radium gives off energy perpetually—that was the word, perpetually. So the question was, how could anything radiate perpetually at this tremendous rate, unheard of when expressed in terms of energies of usual chemical reactions.
Kelvin had an idea, which he propounded at this talk, that perhaps there was some kind of energy that we were not able to detect—like the neutrinos—that was floating around in space, and that radium had the property of absorbing it in that form and then reconverting it, like a fountain, and shooting it out, and that is what we observe. Even in those days people were perfectly willing to balance the books on conservation of energy in ways like that. The next major historical event was the development of the Bohr atom model in 1913. At this point, since we are just talking a little bit of anecdotal material about the history of our subject, I will tell a story that I learned from George Gamow. The young Bohr—he was about 26 at that time—came to England from Copenhagen to work in the Cavendish Laboratory. The great J. J. Thomson was at the height of his powers. Bohr came to the great center to study fundamental atomic physics; but within a few months he left the Cavendish Laboratory and went up to Manchester to work under a relatively unknown fellow named Rutherford. The question is why did he do that? According to Gamow, Bohr had gotten into trouble with "J. J." because he was a little critical of the Thomson atom model, and "J. J." had politely indicated to him that it might be nice if he left Cambridge and went to work with Rutherford. That is how Bohr went to work for Rutherford, which was advantageous, I think, for all. It was not so good for the Thomson model but it was fine for the future development of physics. To bring this story up to date, Gamow told me that in 1928, when he worked on the alpha particle tunneling paper, the basic work which Gurney and I did simultaneously in Princeton, Rutherford sent Gamow to see Bohr and to tell him about this exciting new development. He also wrote him a letter—which Gamow said he still has a copy of—saying, "Please pay attention to this fellow; there is something in it. It isn't cockeyed. You remember how it was with you when you went to 'J. J.' and he wouldn't listen; so now you listen to Gamow." I do not know whether there is any truth in that or not, but at any rate Bohr did listen to Gamow. Of course, the most exciting immediate experimental consequence of Bohr's work—besides the direct interpretation of the spectrum of hydrogen that was well known at that time, I mean the facts of the Balmer series, which went way back into the 19th century—was the interpretation of spectroscopic term values as being energy levels, with the associated implication that controlled electron impact would produce controlled excitation of atoms and molecules. This was the work that was immediately followed up by James Franck and Gustav Hertz, and for which they received the Nobel prize in physics in 1926. That work was very quickly taken up here in Washington, in the pioneer work of Paul Foote and F. L. Mohler. At the National Bureau of Standards the accountants were rather stuffy, and had rather sharp lines about appropriations and budgets, so that all the work on critical potentials for which the Bureau of Standards became famous was carried on under a budget number which had something to do with improving pyrometric methods. I am not quite sure whether this work helped pyrometry much, but it certainly was a great addition to the development of fundamental science.
The period of the second decade of our subject was also characterized by the very first extension of the idea of quantized energy levels to the interpretation of band spectra, rotation and vibration spectra, and the infrared. A curious thing about the atom model work of Bohr, prior to 1923 or 1924, was that if you look at the current papers you get the impression that everybody in the world was terrifically excited about the Bohr model and believed in it hook, line, and sinker, including the atom orbits as they are used in the ads for the atomic age nowadays. Bohr, on the other hand, was constantly making remarks, speeches, and admonitions to the effect that this is temporary and we ought to be looking for a way to do it right. The great breakthrough, as the saying goes nowadays, came about 1924, 1925, and 1926, when the idea of waves accompanying electrons was first published by Prince Louis de Broglie as a doctor's thesis, and was again ignored. I do not know of anybody who read that paper until a year or two later. Erwin Schrödinger then founded the great discoveries of wave mechanics on de Broglie's work; these were published in the spring of 1926 in a series of papers in the Annalen der Physik. Just before Schrödinger's work, in late 1925, Born, Jordan, and Heisenberg had developed the matrix mechanics methods. For about a year they were thought of as two rival and distinct theories, until Schrödinger and Carl Eckart, a young physicist in Chicago, who is now in La Jolla, recognized the mathematical identity of the two theories. I had the good fortune to get my doctorate in the summer of 1926 when all these things were at their highest peak of excitement, and went to Göttingen to work with Born. There was a young graduate student there named Robert Oppenheimer whom I got acquainted with at that time. This was an extremely difficult period because the rate of advance was so great, and the whole subject was so obscure to all of us, that it was extremely hard to keep up with matters. I remember that that fall David Hilbert was lecturing on quantum theory, although he was in very poor health at the time. He had anemia, and liver extract was not available at that time; he was eating a vast quantity of liver every day and saying he would rather not live than eat that much liver. His life was saved by the fact that liver extract was discovered just about that time. But that is not the point of my story. What I was going to say is that Hilbert was having a great laugh on Born and Heisenberg and the Göttingen theoretical physicists, because when they first discovered matrix mechanics they were having, of course, the same kind of trouble that everybody else had trying to solve problems and manipulate and really do things with matrices. So they went to Hilbert for help, and Hilbert said the only times he had ever had anything to do with matrices was when they came up as a sort of by-product of the eigenvalues of the boundary-value problem of a differential equation, so if you look for the differential equation which has these matrices you can probably do more with that. They thought that was a goofy idea, and that Hilbert did not know what he was talking about. So he was having a lot of fun pointing out to them that they could have found out Schrödinger's wave mechanics six months earlier if they had paid a little more attention to him.
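The point behind Hilbert's remark can be stated in one line: expanded in any orthonormal set of functions φ₁, φ₂, …, a differential operator H acts entirely through the numbers

\[ H_{mn}=\int \varphi_m^*(x)\, H\, \varphi_n(x)\, dx, \]

which form precisely a matrix; conversely, a matrix eigenvalue problem of this kind is a differential boundary-value problem written out in a basis. Finding "the differential equation which has these matrices" is finding Schrödinger's equation.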
I mention some of the things that were occurring during those years because I do not believe that anybody who did not live through that period has quite an appreciation of what a tremendous number of new discoveries were made at that time that are still of fundamental importance, and have blossomed out into whole new areas of physics that are subjects for courses in themselves. In 1926 we had the whole wave mechanics as we know it, and the whole matrix mechanics, formulated. And just a little before that, we had the discovery of the electron spin and of the Pauli exclusion principle. In 1927 came the whole theory of the chemical valence bond as a perturbation problem in quantum mechanics, with correlations over electron pairs with their spins antiparallel. Then came, almost simultaneously with that, the whole development of Fermi-Dirac statistics and its clarification of the problems of metal theory. A few months after that came the Dirac papers on the quantization of the electromagnetic field, which explained at last the difference between spontaneous and induced emission and put the two together in a unified theory. Soon after that came the whole Dirac relativistic theory of the electron, which later led to the prediction of the positron. It was a little after that, in 1928, that the interpretation of natural alpha radioactivity came as a consequence of the barrier-leakage idea, also an essential element of quantum mechanics and an essential element of its statistical or probability interpretation. I think it is fair to say that the barrier-leakage idea was the opening of the modern period of the application of quantum mechanics to nuclear physics. Nuclear physics, in terms of real specific models, has never had a classical past. Nobody tried in those days to develop specific models of the structure of a nucleus. Another big year for discoveries was 1932: the year in which heavy hydrogen was discovered, which from a nuclear point of view means the deuteron; the year in which the first nuclear reaction produced by artificially accelerated particles was accomplished by Cockcroft and Walton; and the year in which the positron was discovered, the antiparticle associated with the electron, as we call it nowadays. In that same decade, a few years later, in 1934, came the development of the Fermi theory of beta decay, based on the neutrino hypothesis that had been introduced by Pauli, in almost a joking way, a few years earlier. I remember in the summer of 1937, when we had a conference on beta decay theory at Cornell University and a lot of us were having trouble worrying about it, Fermi was in the audience sitting in the back row just smiling and smiling as he usually did. People tried to get him to comment, and he said, "I have always been surprised that people take that theory so seriously." But, of course, as we know, it has turned out to be remarkably correct; that is, the basic formalism which Fermi developed then for accounting for the four-fermion interactions has stood, even in spite of the great crisis it went through in 1957 with the discovery of nonconservation of parity. The basic formalism, as Fermi first introduced it, has beautifully stood the test of time. Of course in the year 1936 you have work which is important to us here because it was done by prominent people in Washington.
I refer to the work of Hafstad, Heydenburg, and Tuve in the first real studies of proton-proton scattering, which gave direct evidence of forces between protons other than the Coulomb forces, that is, short-range nuclear forces between protons. The theoretical interpretation of those results was largely done by Gregory Breit and myself up in Princeton, associated with Richard Present, who is now at the University of Tennessee. That gave the first evidence we had of what is now called the charge independence of nuclear forces, because the additional short-range force that was revealed in this way turned out quantitatively to be very closely the same as the force between a proton and a neutron that is revealed in the normal state of the deuteron. From about 1932 on we have the whole field of nuclear physics coming into being in a big way, with deuterons available and with machines available, both cyclotrons and Van de Graaff machines. In the latter part of that decade we began to have the first theories of Bethe and Marshak on the application of specific models of nuclear reactions to finding satisfactory sources of stellar energy. I think perhaps I must give up at this point, because the last two decades have seen such an overwhelmingly rapid and vast amount of progress, spreading out into a great many different fields, that one could not possibly in the short time remaining do more than just mention it. We had, in the decade from 1940 to 1950, the whole development of the modern point of view on quantum electrodynamics. It came rather late in the decade, with the discovery of the Lamb shift and the experimental confirmation of the anomalous magnetic moment of the electron, the fact that it was somewhat off from the original Dirac theory. We had at last the clarification of the puzzling features of the mesons in cosmic rays, whereby it turned out that there were two kinds, the pi mesons and the mu mesons, the pi mesons decaying into the mu mesons. The latter part of the decade represented the beginning of public knowledge of fission, and of the engineering and political uses of fission. At the same time, going off in quite another direction, what has turned out to be of equal importance has been the whole wide development of the application of Fermi statistics to electrons in solids, resulting first in the major classification of the properties of metals, then of semiconductors, and then finally of really modern tailored effects that led to devices such as transistors and so on. The decade just passed has corresponded to an enormous further development along these same lines. We have the study of nuclear reactions going on up to higher energies of some hundreds of millions of volts, with predominant interest in the study of polarization effects in nuclear reactions as another way of getting at points of detail; the recognition of the nonconservation of parity; the experimental discovery of the neutrino; the recognition that the Fermi interaction that applies in weak interactions is more general than simply beta decay and also applies to muon decay and other related processes; and the discovery of the strange particles. And then finally, as a roundup of things that we do not have time to talk about, there are the extraordinarily fine extensions that have been made in the last five years of broad, modern perturbation-theory methods for dealing with the many-body problem.
That has involved not only the better calculation of nuclear models but also, at last, after many years of effort, is beginning to provide a real understanding of superfluids. In conclusion I should like to emphasize again that all this scientific activity started almost exactly sixty years ago—barring two weeks—on December 14, 1900, when Planck's constant was first introduced into physics. In the sixty years that have intervened it has become almost impossible to find a paper in physics which does not deal directly or indirectly with phenomena that are fully and basically conditioned by the existence of that one universal constant.
Communications on Pure & Applied Analysis
2016, Volume 15, Issue 3

Nonexistence of positive solutions for polyharmonic systems in $\mathbb{R}^N_+$
Yuxia Guo and Bo Li
2016, 15(3): 701-713. doi: 10.3934/cpaa.2016.15.701
In this paper, we study the monotonicity and nonexistence of positive solutions for polyharmonic systems $\left\{\begin{array}{rlll} (-\Delta)^m u&=&f(u, v)\\ (-\Delta)^m v&=&g(u, v) \end{array}\right.\;\hbox{in}\;\mathbb{R}^N_+.$ By using the Alexandrov-Serrin method of moving planes combined with integral inequalities and Sobolev's inequality in a narrow domain, we prove the monotonicity of positive solutions for semilinear polyharmonic systems in $\mathbb{R}_+^N$. As a result, the nonexistence of positive weak solutions to the system is obtained.

On Compactness Conditions for the $p$-Laplacian
Pavel Jirásek
2016, 15(3): 715-726. doi: 10.3934/cpaa.2016.15.715
We investigate the geometry and validity of various compactness conditions (e.g. the Palais-Smale condition) for the energy functional \begin{eqnarray} J_{\lambda_1}(u)=\frac{1}{p}\int_\Omega |\nabla u|^p \ \mathrm{d}x- \frac{\lambda_1}{p}\int_\Omega|u|^p \ \mathrm{d}x - \int_\Omega fu \ \mathrm{d}x \end{eqnarray} for $u \in W^{1,p}_0(\Omega)$, $1 < p < \infty$, where $\Omega$ is a bounded domain in $\mathbb{R}^N$, $f \in L^\infty(\Omega)$ is a given function and $-\lambda_1<0$ is the first eigenvalue of the Dirichlet $p$-Laplacian $\Delta_p$ on $W_0^{1,p}(\Omega)$.

Well-posedness and ill-posedness results for the Novikov-Veselov equation
Yannis Angelopoulos
2016, 15(3): 727-760. doi: 10.3934/cpaa.2016.15.727
In this paper we study the Novikov-Veselov equation and the related modified Novikov-Veselov equation in certain Sobolev spaces. We prove local well-posedness in $H^s (\mathbb{R}^2)$ for $s > \frac{1}{2}$ for the Novikov-Veselov equation, and local well-posedness in $H^s (\mathbb{R}^2)$ for $s > 1$ for the modified Novikov-Veselov equation. Finally we point out some ill-posedness issues for the Novikov-Veselov equation in the supercritical regime.

A weighted $L_p$-theory for second-order parabolic and elliptic partial differential systems on a half space
Kyeong-Hun Kim and Kijung Lee
2016, 15(3): 761-794. doi: 10.3934/cpaa.2016.15.761
In this article we consider parabolic systems and $L_p$ regularity of the solutions. With zero boundary condition the solutions experience bad regularity near the boundary. This article addresses a possible way of describing the regularity nature. Our space domain is a half space and we adapt an appropriate weight into our function spaces. In this weighted Sobolev space setting we develop a Fefferman-Stein theorem, a Hardy-Littlewood theorem and sharp function estimates. Using these, we prove uniqueness and existence results for second-order elliptic and parabolic partial differential systems in weighted Sobolev spaces.

Wenbo Cheng, Wanbiao Ma and Songbai Guo
2016, 15(3): 795-806. doi: 10.3934/cpaa.2016.15.795
A class of virus dynamic model with inhibitory effect on the growth of uninfected T cells caused by infected T cells is proposed. It is shown that the infection-free equilibrium of the model is globally asymptotically stable if the reproduction number $R_0$ is less than one, and that the infected equilibrium of the model is locally asymptotically stable if the reproduction number $R_0$ is larger than one. Furthermore, it is also shown that the model is uniformly persistent, and some explicit formulae for the lower bounds of the solutions of the model are obtained.

A Liouville-type theorem for higher order elliptic systems of Hénon-Lane-Emden type
Frank Arthur and Xiaodong Yan
2016, 15(3): 807-830. doi: 10.3934/cpaa.2016.15.807
We prove there are no positive solutions with slow decay rates to the higher order elliptic system \begin{eqnarray} \left\{ \begin{array}{c} \left( -\Delta \right) ^{m}u=\left\vert x\right\vert ^{a}v^{p} \\ \left( -\Delta \right) ^{m}v=\left\vert x\right\vert ^{b}u^{q} \end{array} \text{ in }\mathbb{R}^{N}\right. \end{eqnarray} if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right)$ satisfies $\frac{1+\frac{a}{N}}{p+1}+\frac{1+\frac{b}{N}}{q+1}>1-\frac{2m}{N}$ and \begin{eqnarray} \max \left( \frac{2m\left( p+1\right) +a+bp}{pq-1},\frac{2m\left( q+1\right) +aq+b}{pq-1}\right) >N-2m-1. \end{eqnarray} Moreover, if $N=2m+1$ or $N=2m+2,$ this system admits no positive solutions with slow decay rates if $p\geq 1,$ $q\geq 1,$ $\left( p,q\right) \neq \left( 1,1\right)$ satisfies $\frac{1}{p+1}+\frac{1}{q+1}>1-\frac{2m}{N}.$

Well-posedness and scattering for fourth order nonlinear Schrödinger type equations at the scaling critical regularity
Hiroyuki Hirayama and Mamoru Okamoto
2016, 15(3): 831-851. doi: 10.3934/cpaa.2016.15.831
In the present paper, we consider the Cauchy problem of fourth order nonlinear Schrödinger type equations with derivative nonlinearity. In the one dimensional case, the small data global well-posedness and scattering for the fourth order nonlinear Schrödinger equation with the nonlinear term $\partial _x (\overline{u}^4)$ are shown in the scaling invariant space $\dot{H}^{-1/2}$. Furthermore, we show that the same result holds for $d \ge 2$ and derivative polynomial type nonlinearity, for example $|\nabla | (u^m)$ with $(m-1)d \ge 4$, where $d$ denotes the space dimension.

A class of generalized quasilinear Schrödinger equations
Yaotian Shen and Youjun Wang
2016, 15(3): 853-870. doi: 10.3934/cpaa.2016.15.853
We establish the existence of nontrivial solutions for the following quasilinear Schrödinger equation with critical Sobolev exponent: \begin{eqnarray} -\Delta u+V(x) u-\Delta [l(u^2)]l'(u^2)u= \lambda u^{\alpha 2^*-1}+h(u),\ \ x\in \mathbb{R}^N, \end{eqnarray} where $V(x):\mathbb{R}^N\rightarrow \mathbb{R}$ is a given potential and $l,h$ are real functions, $\lambda\geq 0$, $\alpha>1$, $2^*=2N/(N-2)$, $N\geq 3$. Our results cover two physical models, $l(s)=s^{\frac{\alpha}{2}}$ and $l(s) = (1+s)^{\frac{\alpha}{2}}$ with $\alpha\geq 3/2$.

Traveling waves for a diffusive SEIR epidemic model
Zhiting Xu
2016, 15(3): 871-892. doi: 10.3934/cpaa.2016.15.871
In this paper, we propose a diffusive SEIR epidemic model with saturating incidence rate. We first study the well-posedness of the model, and give the explicit formula of the basic reproduction number $\mathcal{R}_0$. We then show that if $\mathcal{R}_0>1$, there exists a positive constant $c^*>0$ such that for each $c>c^*$ the model admits a nontrivial traveling wave solution, and that if $\mathcal{R}_0\leq1$ and $c\geq 0$ (or $\mathcal{R}_0>1$ and $c\in[0,c^*)$), the model has no nontrivial traveling wave solutions. Consequently, we confirm that the constant $c^*$ is indeed the minimal wave speed. The proof of the main results is mainly based on the Schauder fixed point theorem and the Laplace transform.

Qualitative properties of solutions to an integral system associated with the Bessel potential
Lu Chen, Zhao Liu and Guozhen Lu
2016, 15(3): 893-906. doi: 10.3934/cpaa.2016.15.893
In this paper, we study a differential system associated with the Bessel potential: \begin{eqnarray}\begin{cases} (I-\Delta)^{\frac{\alpha}{2}}u(x)=f_1(u(x),v(x)),\\ (I-\Delta)^{\frac{\alpha}{2}}v(x)=f_2(u(x),v(x)), \end{cases}\end{eqnarray} where $f_1(u(x),v(x))=\lambda_1u^{p_1}(x)+\mu_1v^{q_1}(x)+\gamma_1u^{\alpha_1}(x)v^{\beta_1}(x)$, $f_2(u(x),v(x))=\lambda_2u^{p_2}(x)+\mu_2v^{q_2}(x)+\gamma_2u^{\alpha_2}(x)v^{\beta_2}(x)$, $I$ is the identity operator and $\Delta=\sum_{j=1}^{n}\frac{\partial^2}{\partial x^2_j}$ is the Laplacian operator in $\mathbb{R}^n$. Under some appropriate conditions, this differential system is equivalent to an integral system of the Bessel potential type. By the regularity lifting method developed in [4] and [18], we obtain the regularity of solutions to the integral system. We then apply the moving planes method to obtain radial symmetry and monotonicity of positive solutions. We also establish a uniqueness theorem for radially symmetric solutions. Our nonlinear terms $f_1(u(x), v(x))$ and $f_2(u(x), v(x))$ are quite general, and our results substantially extend the earlier ones even in the case of a single equation.

On the differentiability of the solutions of non-local Isaacs equations involving $\frac{1}{2}$-Laplacian
Imran H. Biswas and Indranil Chowdhury
2016, 15(3): 907-927. doi: 10.3934/cpaa.2016.15.907
We derive a $C^{1,\sigma}$-estimate for the solutions of a class of non-local elliptic Bellman-Isaacs equations. These equations are fully nonlinear and are associated with infinite horizon stochastic differential game problems involving jump-diffusions. The non-locality is represented by the presence of a fractional order diffusion term, and we deal with the particular case of the $\frac{1}{2}$-Laplacian, where the order $\frac{1}{2}$ is known as the critical order in this context. More importantly, these equations are not translation invariant, and we prove that the viscosity solutions of such equations are $C^{1,\sigma}$, making the equations classically solvable.

Oscillatory integrals related to Carleson's theorem: fractional monomials
Shaoming Guo
2016, 15(3): 929-946. doi: 10.3934/cpaa.2016.15.929
Stein and Wainger [21] proved the $L^p$ bounds of the polynomial Carleson operator for all integer-power polynomials without linear term. In the present paper, we partially generalise this result to all fractional monomials in dimension one. Moreover, the connections with Carleson's theorem, the Hilbert transform along vector fields or (variable) curves, and a polynomial Carleson operator along the paraboloid are also discussed in detail.

Layer solutions for an Allen-Cahn type system driven by the fractional Laplacian
Yan Hu
2016, 15(3): 947-964. doi: 10.3934/cpaa.2016.15.947
We study entire solutions in $R$ of the nonlocal system $(-\Delta)^{s}U+\nabla W(U)=(0,0)$, where $W:R^{2}\rightarrow R$ is a double-well potential. We seek solutions $U$ which are heteroclinic in the sense that they connect at infinity a pair of global minima of $W$ and are also global minimizers. Under some symmetry assumptions on the potential $W$, we prove the existence of such solutions for $s>\frac{1}{2}$, and give their asymptotic behavior as $x\rightarrow\pm\infty$.

Infinitely many solutions for nonlinear Schrödinger system with non-symmetric potentials
Weiwei Ao, Liping Wang and Wei Yao
2016, 15(3): 965-989. doi: 10.3934/cpaa.2016.15.965
Without any symmetry conditions on the potentials, we prove that the following nonlinear Schrödinger system \begin{eqnarray} \left\{\begin{array}{ll} \Delta u-P(x)u+\mu_1u^3+\beta uv^2=0, \quad &\mbox{in} \quad R^2\\ \Delta v-Q(x)v+\mu_2v^3+\beta vu^2=0, \quad &\mbox{in} \quad R^2 \end{array} \right. \end{eqnarray} has infinitely many non-radial solutions, provided the potentials $P(x)$ and $Q(x)$ have a suitable decay rate at infinity. This continues the work of [8]. In particular, when $P(x)$ and $Q(x)$ are symmetric, this result was proved in [18].

Ground state solutions for fractional Schrödinger equations with critical Sobolev exponent
Kaimin Teng and Xiumei He
2016, 15(3): 991-1008. doi: 10.3934/cpaa.2016.15.991
In this paper, we establish the existence of ground state solutions for fractional Schrödinger equations with a critical exponent. The methods used here are based on the $s$-harmonic extension technique of Caffarelli and Silvestre, the concentration-compactness principle of Lions, and methods of Brezis and Nirenberg.

Global regular solutions to two-dimensional thermoviscoelasticity
Jerzy Gawinecki and Wojciech M. Zajączkowski
2016, 15(3): 1009-1028. doi: 10.3934/cpaa.2016.15.1009
A two-dimensional thermoviscoelastic system of Kelvin-Voigt type with strong dependence on temperature is considered. The existence and uniqueness of a global regular solution is proved without small data assumptions. The global existence is proved in two steps. First, a global a priori estimate is derived by applying the theory of anisotropic Sobolev spaces with a mixed norm. Then local existence, proved by the method of successive approximations for a sufficiently small time interval, is extended step by step in time. By a two-dimensional solution we mean that all its quantities depend on two space variables only.

Inversion of the spherical Radon transform on spheres through the origin using the regular Radon transform
Sunghwan Moon
2016, 15(3): 1029-1039. doi: 10.3934/cpaa.2016.15.1029
A spherical Radon transform whose integral domain is a sphere has many applications in partial differential equations as well as tomography. This paper is devoted to the spherical Radon transform which assigns to a given function its integrals over the set of spheres passing through the origin. We present a relation between this spherical Radon transform and the regular Radon transform, and we provide a new inversion formula for the spherical Radon transform using this relation. Numerical simulations were performed to demonstrate the suggested algorithm in dimension 2.

Bogdanov-Takens bifurcation of codimension 3 in a predator-prey model with constant-yield predator harvesting
Jicai Huang, Sanhong Liu, Shigui Ruan and Xinan Zhang
2016, 15(3): 1041-1055. doi: 10.3934/cpaa.2016.15.1041
Recently, we (J. Huang, Y. Gong and S. Ruan, Discrete Contin. Dynam. Syst. B 18 (2013), 2101-2121) showed that a Leslie-Gower type predator-prey model with constant-yield predator harvesting has a Bogdanov-Takens singularity (cusp) of codimension 3 for some parameter values. In this paper, we prove analytically that the model undergoes Bogdanov-Takens bifurcation (cusp case) of codimension 3. To confirm the theoretical analysis and results, we also perform numerical simulations for various bifurcation scenarios, including the existence of two limit cycles, the coexistence of a stable homoclinic loop and an unstable limit cycle, supercritical and subcritical Hopf bifurcations, and homoclinic bifurcation of codimension 1.

Traveling wave solutions in a nonlocal reaction-diffusion population model
Bang-Sheng Han and Zhi-Cheng Wang
2016, 15(3): 1057-1076. doi: 10.3934/cpaa.2016.15.1057
This paper is concerned with a nonlocal reaction-diffusion equation of the form \begin{eqnarray} \frac{\partial u}{\partial t}=\frac{\partial^{2}u}{\partial x^{2}}+u\left\{ 1+\alpha u-\beta u^{2}-(1+\alpha-\beta)(\phi\ast u) \right\}, \quad (t,x)\in (0,\infty) \times \mathbb{R}, \end{eqnarray} where $\alpha$ and $\beta$ are positive constants, $0<\beta<1+\alpha$. We prove that there exists a number $c^*\geq 2$ such that the equation admits a positive traveling wave solution connecting the zero equilibrium to an unknown positive steady state for each speed $c>c^*$. At the same time, we show that there are no such traveling wave solutions for speeds $c<2$. For sufficiently large speed $c>c^*$, we further show that the steady state is the unique positive equilibrium. Using the lower and upper solutions method, we also establish the existence of monotone traveling wave fronts connecting the zero equilibrium and the positive equilibrium. Finally, for the specific kernel function $\phi(x):=\frac{1}{2\sigma}e^{-\frac{|x|}{\sigma}}$ ($\sigma>0$), we show by numerical simulations that the traveling wave solutions may connect the zero equilibrium to a periodic steady state as $\sigma$ is increased. Furthermore, by a stability analysis we explain why and when a periodic steady state can appear.
Friday, April 29, 2011

Fun with an Argon Atom

[Image: Photon-recoil bilocation experiment at Heidelberg]

A recent experiment on Argon atoms by Jeri Tomkovic and five collaborators at the University of Heidelberg has demonstrated once again the subtle and astonishing reality of the quantum world. Erwin Schrödinger, who devised the Schrödinger equation that governs quantum behavior, also demonstrated the preposterousness of his own equation by showing that under certain special conditions quantum theory seemed to allow a cat (Schrödinger's Cat) to be alive and dead at the same time. Humans can't yet do this to cats, but clever physicists are discovering how to put larger and larger systems into a "quantum superposition" in which a single entity can comfortably dwell in two distinct (and seemingly contradictory) states of existence. The Heidelberg experiment with Argon atoms (explained popularly here, in the physics arXiv here and published in Nature here) dramatically demonstrates two important features of quantum reality: 1) if it is experimentally impossible to tell whether a process went one way or the other, then, in reality, IT WENT BOTH WAYS AT ONCE (like a Schrödinger Cat); 2) quantum systems behave like waves when not looked at--and like particles when you look.

The Heidelberg physicists looked at laser-excited Argon atoms which shed their excitation by emitting a single photon of light. The photon goes off in a random direction and the Argon atom recoils in the opposite direction. Ordinary physics so far. But Tomkovic and pals modified this experiment by placing a gold mirror behind the excited Argon atom. Now (if the mirror is close enough to the atom) it is impossible for anyone to tell whether the emitted photon was emitted directly or bounced off the mirror. According to the rules of quantum mechanics then, the Argon atom must be imagined to recoil IN BOTH DIRECTIONS AT ONCE--both towards and away from the mirror. But this paradoxical situation is present only if we don't look. Like Schrödinger's Cat, who will be either alive or dead (if we look) but not both, the bilocal Argon atom (if we look) will always be found to be recoiling in only one direction--towards the mirror (M) or away from the mirror (A) but never both at the same time.

To prove that the Argon atom was really in the bilocal superposition state we have to devise an experiment that involves both motions (M and A) at once. (Same to verify the Cat--we need to devise a measurement that looks at both LIVE and DEAD cat at the same time.) To measure both recoil states at once, the Heidelberg guys set up a laser standing wave by shining a laser directly into a mirror and scattered the bilocal Argon atom off the peaks and troughs of this optical standing wave. Just as a wave of light is diffracted off the regular peaks and troughs of a matter-made CD disk, so a wave of matter (Argon atoms) can be diffracted from a regular pattern of light (a laser shining into a mirror). When an Argon atom encounters the regular lattice of laser light, it is split (because of its wave nature) into a transmitted (T) and a diffracted (D) wave. The intensity of the laser is adjusted so that the relative proportion of these two waves is approximately 50/50. In its encounter with the laser lattice, each state (M and A) of the bilocated Argon atom is split into two parts (T and D), so now THE SAME ARGON ATOM is traveling in four directions at once (MT, MD, AT, AD). Furthermore (as long as we don't look) these four distinct parts act like waves.
This means they can constructively and destructively interfere depending on their "phase difference". The two waves MT and AD are mixed and the result sent to particle detector #1. The two waves AT and MD are mixed and sent to particle detector #2. For each atom only one count is recorded--one particle in, one particle out. But the PATTERN OF PARTICLES in each detector will depend on the details of the four-fold experience each wavelet has encountered on its way to a particle detector. This hidden wave-like experience is altered by moving the laser mirror L which shifts the position of the peaks of the optical diffraction grating. In quantum theory, the amplitude of a matter wave is related to the probability that it will trigger a count in a particle detector. Even though the unlooked-at Argon atom is split into four partial waves, the looked-at Argon particle can only trigger one detector. The outcome of the Heidelberg experiment consists of counting the number of atoms detected in counters #1 and #2 as a function of the laser mirror position L.

The results of this experiment show that, while it was unobserved, a single Argon atom was 1) in two places at once because of the mirror's ambiguisation of photon recoil, then 2) four places at once after encountering the laser diffraction grating, 3) then at last, only one place at a time when it is finally observed by either atom counter #1 or atom counter #2. The term "Schrödinger Cat state" has come to mean ANY MACROSCOPIC SYSTEM that can be placed in a quantum superposition. Does an Argon atom qualify as a Schrödinger Cat? Argon is made up of 40 nucleons, each consisting of 3 quarks. Furthermore each Argon atom is surrounded by 18 electrons for a total of 138 elementary particles--each "doing its own thing" while the atom as a whole exists in four separate places at the same time. Now a cat surely has more parts than a single Argon atom, but the Heidelberg experiment demonstrates that, with a little ingenuity, a quite complicated system can be coaxed into quantum superposition. Today's physics students are lucky. When I was learning quantum physics in the 60s, much of the quantum weirdness existed only as mere theoretical formalism. Now in 2011, many of these theoretical possibilities have become solid experimental fact. This marvelous Heidelberg quadralocated Argon atom joins the growing list of barely believable experimental hints from Nature Herself about how She routinely cooks up the bizarre quantum realities that underlie the commonplace facts of ordinary life.

Saturday, April 23, 2011

Philip Wagner

[Photo: Phil Wagner at Ben Lomond Art Center]

Colorful denizen of the Late BistroScene, founder of the Santa Cruz Emerald Street Poets, moral philosopher and intellectual sparring partner, Irish fiddle player and Santa Cruz good old boy, Phil Wagner shares here a tale from his youth.

On Sunday, the A-bomb doesn't fall but rain does, drops the shape of grapes that burst themselves on the city. On the kitchen wall, a photo of uncle Jim in uniform waving from a pontoon bridge over the Kumho River on his way to shoot Koreans. On Monday, I go into the third grade to open page one of the Catechism, "Who is God?" then the picture book about how to hide when the A-bomb falls and how God and President Truman would save our town and me, under my desk repeating "Our Father, whoartinheaven . . ." until I die or until President Truman sounds the all-clear siren so I can walk home and study page two of the Catechism.
On the way home I wade the edge of the creek that rages near the school grounds that this same water can find its way into a grape, then a tear and, after I push a large stone into the stream, can so easily leap out of its trench to wash away a playground. How fragile history must be when a single well-timed and placed stone can kill a giant or re-route a river. Tuesday, and no bombs blast in spite of our sins but more rain arrives and a photo of "Uncle Jim holding his ground." Dad explains, "Wave after wave of Koreans attack him and his machine gun." I go to school anyway. "God loves me," I sing, as my friend Zimmer leads the charge, splashing into the re-routed river that washes away layer after layer of what remains of the school grounds. "How much ground was lost?" Father McLaughlin's eyes screw shut, ". . . one maybe two years of instruction." he tells Sister Gerard, "The devil just got loose." Zimmer's mom goes into the principal's office and comes out holding Zimmer by the hand. He waves good-bye and I'll never see my friend again-- nor Uncle Jim. On Thursday more rain. We turn to page three of the Catechism.

Sunday, April 17, 2011

Quantum Tantra At The Tannery

[Photo: Nick befriending some atoms]

The Oldest Tannery in the West (Salz Tannery in Santa Cruz, CA) closed its doors in 2001 and was converted into a space for artists to live, work and perform. The former tannery is located on Highway 9, just inside Santa Cruz's northern limits on the San Lorenzo River. Old-time residents will remember that, during the transformation from leather factory to artist's lofts, the site was enlivened by dozens of plywood cows. The Tannery Arts Center has been up and running for several years and has now invited local physicist Nick Herbert to appear on April 21 (Room 117, 7-9 PM) as part of their Thursday Evening Lecture Series. Nick will talk about the deepest unsolved problem in physics, the "quantum reality question"--the awkward situation that the most powerfully predictive theory humans have ever possessed comes at the price of renouncing a picture of reality. We cannot really say what atoms really look like. Nor can we say "what happens" when we measure an atom. But we can predict (sometimes with an accuracy of 11 decimal places) the results of these not-very-well-understood measurements. Physicists do not fully understand what they are doing when they do quantum mechanics, but whatever it is they are doing they do it very successfully. In the second part of his presentation Nick will introduce "quantum tantra", his latest obsession: an attempt to develop a more intimate way of connecting with nature using clues and technologies from quantum physics. The Tannery Arts Center is located near the intersection of Highway 9 and Highway 1 in Santa Cruz (close to Costco). Admission is free. Copies of Nick's Quantum Reality and Physics On All Fours will be available for sale and signing. Works by Stephen Lynch and Diana Hobson will also be on exhibit.

Friday, April 15, 2011

A Frog Named Jagger

In alternate universe K-9, a rogue artificial intelligence was successfully violating the Second Law of Thermodynamics through quantum currency manipulation. As part of his plan to bring this silicon-based outlaw to justice, Ferdinand Feghoot was posing as the manager of a walk-in bank on the planet Jeffbezos.
The chief teller of Feghoot's bank was an Earth I female named Patricia Whack, a distant relative of the Whacks who run Whack and Crusher, the demolition company that changed history by unmasking the real scoundrels responsible for the destruction of America's World Trade Center. Just before closing time, a frog walks into Feghoot's bank and approaches Patricia Whack's desk. "Ms Whack," he says, "I'd like to get a $30,000 loan to take a holiday." Very confused, Patty explains that she'll have to consult with the bank manager and disappears into the back office. She finds Feghoot behind his desk playing Romulan solitaire and says, "There's a frog called Kermit Jagger out there who claims to know you and wants to borrow $30,000, and he wants to use this as collateral." Ferdinand Feghoot looks back at her and says: "It's a knickknack, Patty Whack. Give the frog a loan. His old man's a Rolling Stone."

Tuesday, April 12, 2011

The Bologna E-Cat

[Photos: Four Bologna E-Cats, nearest one covered in insulation; Honda Eu6500 Portable Generator]

Monday, April 11, 2011

Lunar Backside

[Image: Lunar Farside — Astronomy Picture of the Day (APOD)]

Mendeleev, Landau, Lebedev,
Lomonosov, Lobachevski,
Popov, Chebyshev, Evdokimov.
Why so many Slavic names
On hidden lunar farside craters?
Cause Soviet cameras saw them first
NASA's lenses got there later.

Gagarin, Feoktisov, Golitsyn,
Zhukovski, Kurchatov, Numerov,
Rasumnov, Tsiolkovski.
Fair planetoid that shapes our tides
Twenty-two hundred miles wide
Here's where they're immortalized
Russians on the moon's backside.
Zlatan Stanojevic

Zlatan Stanojevic studied at the Technische Universität Wien where he received the BSc degree in electrical engineering and the degree of Diplomingenieur in Microelectronics in 2007 and 2009, respectively. He is currently working at the Institute for Microelectronics at the Technische Universität Wien. His research interests include semi-classical modeling of carrier transport, thermoelectric and optical effects in low-dimensional structures.

Convergent Modeling of Optoelectronic Nanostructures

The Mid-InfraRed (MIR) and TeraHertz (THz) regions of the electromagnetic spectrum have a vast range of potential applications. These include security (surveillance, detection of explosives), safety (gas sensing, detection of hazardous compounds) as well as a variety of scientific and industrial applications, such as chemical analysis, thermography, medical imaging, and non-destructive testing. The key to entering these potential applications is the ability to design optoelectronic devices specifically tailored to the particular application at hand. The design of such devices turns out to be a challenging task because it involves several physical systems that have to be grasped, modeled and simulated simultaneously, the two most prominent being the electronic and the optical system. The trend in modern optoelectronics is to exploit the same mechanisms for both carriers and light, such as spatial confinement or superlattices. Thus it seems a natural approach to treat optics and carrier dynamics on equal grounds. By doing so we gain a clearer picture of the structural similarities of seemingly unrelated physical phenomena, which in turn allows us to exploit synergies. Effects that are well known in one particular field may be found and used in another, potentially leading to the discovery of new devices. One famous example of the synergistic approach is photonic crystals. Well known principles from solid state physics, such as the Bloch theorem or the Kronig-Penney model, serve as a basis for novel optical devices, featuring nonlinear dispersion, photonic bandgaps, etc. Another example is the use of absorbing boundary conditions (perfectly matched layers) to simulate quasi-open quantum systems, a concept borrowed from electromagnetic simulation. Our simulation framework embeds this way of thinking. One can simply change the governing equation from the Schrödinger equation to the Maxwell equations and vice versa, or even investigate a different domain under the same conditions, e.g. lattice vibrations. Thus, the governing equation appears merely as one degree of freedom in the modeling process. This makes our framework a particularly useful tool in basic research where rapid model prototyping in conjunction with experimental efforts helps unveil new physical phenomena.
By providing unified model components, such as governing equations, boundary conditions, or numerical solution procedures, we can cover a broad range of problems encountered in research on optoelectronic nanostructures and even anticipate new problems that have yet to appear.

Figure: Bandstructure of a 2D periodic photonic crystal; the lowest two transverse electric and transverse magnetic modes are shown.
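As an illustration of treating the governing equation as just one degree of freedom, here is a toy sketch of the idea (not the actual framework described above; the geometry and parameters are invented for the example). The same one-dimensional Bloch/plane-wave machinery computes electronic bands from the Schrödinger equation or photonic bands from Maxwell's wave equation, depending on a single parameter:

```python
import numpy as np
from scipy.linalg import eigh

a = 1.0                                   # lattice period (arbitrary units)
n = np.arange(-15, 16)                    # plane-wave indices
G = 2 * np.pi * n / a                     # reciprocal lattice vectors

def coeff_matrix(profile, M=2048):
    """Toeplitz matrix C[i, j] = c_{n_i - n_j} of a periodic profile's Fourier coefficients."""
    x = np.linspace(0.0, a, M, endpoint=False)
    c = np.fft.fft(profile(x)) / M
    return np.array([[c[(ni - nj) % M] for nj in n] for ni in n])

def bands(k, equation, profile, nbands=4):
    if equation == "schrodinger":
        # [(k+G)^2 / 2] delta_GG' + V(G-G'): eigenvalues are energies
        H = np.diag(0.5 * (k + G) ** 2) + coeff_matrix(profile)
        return eigh(H, eigvals_only=True)[:nbands]
    if equation == "maxwell":
        # (k+G)^2 E_G = (w/c)^2 sum_G' eps(G-G') E_G': a generalized eigenproblem
        A = np.diag((k + G) ** 2).astype(complex)
        B = coeff_matrix(profile)                 # permittivity matrix, eps(x) > 0
        return np.sqrt(np.abs(eigh(A, B, eigvals_only=True)[:nbands]))  # w*a/c
    raise ValueError(equation)

eps = lambda x: np.where(x < 0.5 * a, 1.0, 12.0)   # air / dielectric layers
pot = lambda x: np.where(x < 0.5 * a, 0.0, -5.0)   # square-well potential
for k in np.linspace(0.0, np.pi / a, 5):           # sweep the Brillouin zone
    print(k, bands(k, "maxwell", eps), bands(k, "schrodinger", pot))
```

Swapping "schrodinger" for "maxwell" changes only which matrix the material profile enters and what the eigenvalue means; the basis, the Brillouin-zone sweep, and the solver call are shared, which is precisely the synergy argued for above.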
Tuesday, October 20, 2015

Quantum field theory obeys all postulates of quantum mechanics
In particular, QFT can't contradict the superposition principle

A beginner who tries to learn quantum field theory, a user named Darkblue, asked a question on the Physics Stack Exchange, Is a single photon always circularly polarized? Darkblue has promoted – and is still promoting – the thesis that only circular polarizations and not linear polarizations may exist for an individual photon. To strengthen his point, he also refers to a (zero-citation) preprint by two crackpots that proposed to experimentally "test" a similar nonsensical claim. (Yes, these two authors – proud senior janitors at IEEE – and Darkblue clearly belong to the Šmoitian sect of morons who feel very self-confident whenever they say "we want to make an experiment", regardless of how stupid the experiment and especially its interpretation is.) Among other gems, the paper claims that a linearly polarized photon moving in the \(z\)-direction should have \(J_z=0\hbar\). Well, the expectation value \(\langle J_z\rangle =0\hbar\) may be zero but the vanishing value is strictly forbidden as a result of a measurement (zero isn't among the eigenvalues): there are no longitudinal or scalar photons.

I have posted an answer which I won't repost here. It says that we must decide what we measure and if we measure the circular polarization of one photon (left or right), which is a maximum package of information we may measure about an individual photon's polarization, we always get one of these two results. So the circular polarization always exists but typically, the predicted circular polarization isn't certain. The observables "which linear polarization" and "which circular polarization" don't commute with each other, just like \(x\) and \(p\) don't, so if we know one of those, we can't know about the other and vice versa. In other words, we work with superpositions of states. Linearly polarized photons are the most "balanced" superpositions of the circular ones and they're equally possible to emerge in Nature. I wrote something about the classical electromagnetic waves – states with many coherent photons, too. For one photon, a linear polarization and a circular polarization are not strictly mutually exclusive; for the state of many coherent photons in the same state, the linear and circular polarization become mutually exclusive.

Darkblue has been annoyingly stubborn and much of his or her stupidity is idiosyncratic in character. Hopeless beginners like that sometimes decide to get stuck in a certain kind of a stupid delusion that no one else shares. But I wrote this blog post because of the "deja vu" effect. Some of Darkblue's argumentation seems very similar to some widespread myth that hasn't been discussed too often on this blog, if ever. The myth is the following:

Myth: Quantum field theory is a modified description of Nature, different from quantum mechanics in the same sense in which quantum mechanics differs from classical mechanics. So QM and QFT may disagree in their statements.

Well, this opinion is completely wrong. Quantum field theories are a subclass of quantum mechanical theories. All quantum mechanical theories obey the universal postulates of quantum mechanics, the axioms of the QM framework, such as

• All information about the properties of the objects (the state of the physical system) is measured through the so-called observables.
• Observables are represented by linear operators.
• Allowed values resulting from a measurement are eigenvalues of the measured operator (the spectrum).
• These operators form an associative but non-commutative algebra.
• A complex Hilbert space is the minimal nontrivial representation of the algebra of these operators. All elements of this vector space may emerge as states after a measurement of a complete/maximal set of commuting observables (the superposition postulate) because each state is an eigenstate of an appropriate observable.
• Expectation values of projection operators are physically interpreted as the probabilities of the corresponding outcomes of the measurements. The probabilities are basically everything one can predict in physics. In terms of pure states from the minimum representation (the Hilbert space), the probabilities are squared absolute values of the complex amplitudes (Born's rule).

Quantum field theories agree with all of those things. They just have a particular type of the operator algebras – the algebras of field operators. In QFT, we assume that the operators may be organized in terms of things like \(\hat\phi_i(x,y,z,t)\) and the Heisenberg equations governing these operators (more precisely, operator distributions) are supposed to be Lorentz-covariant (relativistic). The Hilbert spaces in QFT have a rich structure and we must get used to some mathematical effects not present in the case of finite-dimensional (and simple enough infinite-dimensional) Hilbert spaces, such as the need for renormalization etc. But if you haven't gotten to the topics of renormalization at all, I can assure you that your opinion that QFT is "something different" than a QM theory is a complete misunderstanding. QFTs are strictly a subset of QM theories.

I believe that there exists a simple sociological reason why some beginners tend to think that QFT is a "different kind of a theory" than QM, and it's the following: The organization of knowledge into many subjects and courses makes the logical structure of physics look more complex and less unequivocal than it is. What do I mean? Students learn classical mechanics and then electromagnetism and then quantum mechanics, and so on. These subjects mostly cover "non-overlapping" theories. So people assume that the same thing holds for quantum mechanics and quantum field theory and other pairs. But this segregation of the physics knowledge into many boxes or courses completely misses the key point about the "basic logical kinds of theories" we have in physics. The key point is the following one:

There only exist two types of theories in all of physics:
1. Classical physics
2. Quantum mechanics
3. There is simply no third way.

I added the third option in order to emphasize that it doesn't exist. Up to 1924, all state-of-the-art theories assumed the framework of classical physics. According to classical physics, there is an objective "state of the world". The possible states of the world form a set which is physically referred to as the "phase space". Because physics wants to deal with continuous evolution (evolution in continuous time), the phase space is a smooth manifold, possibly an infinite-dimensional one. Equations of motion are usually deterministic and may be expressed by arrows along lines on the phase space that determine where a given state evolves after \(\Delta t\). Since 1925, all state-of-the-art theories in physics had to be quantum mechanical.
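To make the postulates above concrete, here is a minimal numerical illustration (a sketch with numpy only; the two-level "observables" are Pauli matrices standing in for any pair of incompatible measurements, such as the two polarization questions discussed in this post):

```python
import numpy as np

# Two observables that do not commute; their eigenvalues are the allowed outcomes.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

print(sz @ sx - sx @ sz)         # nonzero commutator => incompatible observables

vals, vecs = np.linalg.eigh(sx)  # spectrum of sx: the only possible measured values
print(vals)                      # [-1., 1.]

# Each sx eigenstate is an equal-weight superposition of sz eigenstates,
# so Born's rule gives probability 1/2 for each sz outcome:
psi = vecs[:, 0]
print(abs(psi[0])**2, abs(psi[1])**2)   # 0.5 0.5
```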
Schrödinger designed his non-relativistic quantum (or wave) mechanics for one electron in the atom. Multiparticle generalizations soon followed, as did the relativistic Dirac equation. The latter two were extended to Quantum Electrodynamics, the most accurately tested example of a quantum field theory. Many quantum field theories – relativistic and non-relativistic ones – were studied by particle physicists and condensed matter physicists. And string theory is the only new "kind" of a theory that may be, at least in a certain sense, considered to go "beyond" quantum field theory (although in other senses, QFT and ST are the same thing or in the same universality class). QFT and ST are quantum mechanical theories. They respect the general QM framework – they just choose different operator algebras or Heisenberg equations or the "technical details". Even hopeless proposed theories such as loop quantum gravity are fully quantum mechanical ones.

It's as simple as that. There is no "third way". The number of truly qualitatively different logical frameworks in physics is just two, a much smaller number than the number of physics courses in a college. The courses differ in the laws of physics, which are distinguished only by the choice of the degrees of freedom and the equations they obey, not in the basic logic through which the mathematics is connected to the observations!

Darkblue, the user at the Physics Stack Exchange, decided to "only allow" circular polarizations partly because he or she wanted to interpret the decomposition of the electromagnetic field too dogmatically. Darkblue has probably seen an equation like\[ A_\mu(x^\alpha) = \int \frac{d^3 k}{\sqrt{2E(\vec k)}}\,\sum_\lambda \epsilon^{(\lambda)}_\mu (\vec k)\, \hat a_{\vec k, \lambda}\, e^{-ik_\nu x^\nu} + \text{Hermitian conjugate} \] where the sum over polarizations was immediately said to go over \(\lambda\in\{L,R\}\). So Darkblue concluded that only the circular polarizations may exist or may be allowed.

But the equation doesn't say anything of the sort. An equation is a way to rewrite something; an equation never explicitly or implicitly says that it is the only way. When we write \(15=10+5\), it doesn't prevent other identities such as \(15=7+8\) from being true as well. (Just to be sure, it's also true that if you write two random enough long expressions and the \(=\) sign in between, the equation will probably be wrong. But the equality may still hold for infinitely many choices of the right hand side.) And be sure that the summation over the circular polarizations \(L,R\) is exactly as good as the summation e.g. over the linear polarizations \(x,y\). Well, the axes \(x,y\) are good for a photon moving in the \(z\)-direction while the two transverse axes would have to be called differently for a general direction of \(\vec k\). There is no "canonical" way to choose the two axes replacing \(x,y\) as a function of \(\vec k\). But this is not a "special" disadvantage of the linear polarizations. In fact, even if we use the circular polarizations, there is a problem: there is no "canonical" choice of the phases of the complex vectors \(\epsilon_{L/R}(\vec k)\). In fact, these two problems – the absence of a "canonical" choice of the two axes for the linear polarization; and the absence of a "canonical" choice of the phase of the complex vectors for the circular polarization – are actually just one and the same problem expressed in two different variables!
It's because the linearly polarized photon is a superposition of the circularly polarized ones such as\[ \ket x = \frac{\ket L + \ket R}{\sqrt 2}, \qquad \ket y = \frac{\ket L - \ket R}{i\sqrt 2} \] Note that the absolute values of the coefficients in front of \(\ket L\) and \(\ket R\) are the same in both cases. But the relative phase is different – and the relative phase is what determines the axis of the linear polarization. Just to be sure, the inverse relationships look analogous:\[ \ket L = \frac{\ket x + i\ket y}{\sqrt 2}, \qquad \ket R = \frac{\ket x - i\ket y}{\sqrt 2} \] The Hilbert space of one photon moving in the direction of \(\vec k\) with the frequency \(|\vec k|\) is two-dimensional and \( \{\ket L,\ket R\} \) is as good a basis as \( \{\ket x,\ket y\} \). There is nothing better about the circular polarization basis relative to the linear polarization basis – or vice versa. Any complex linear superposition is equally "allowed". It may be equally "real" if you prepare the photon in the appropriate way.

Now, Darkblue wanted to believe that this superposition principle is just some idiosyncrasy of the course on (non-relativistic) quantum mechanics and it may or should be forgotten in quantum field theory. But this is a total nonsense as well, of course. First of all, the photons have nothing to do with non-relativistic quantum mechanics per se. Photons are particles of light and consequently move at the speed of light, \(c\). At speeds that are this high, you need relativity to study the motion. So the correct description of the motion of photons is unavoidably a relativistic theory. A non-relativistic theory of quantum mechanics just isn't good enough for photons. Second, in practice, when we talk about the (relativistic, as I just explained) quantum mechanical theory describing the photons, we can't use a theory that is "completely analogous" to the theories of the non-relativistic type. We actually need something like a quantum field theory. Quantum field theory is the simplest framework (I say "simplest" because string theory may be sometimes said to be a "more advanced framework") in which the behavior of the photons may be quantitatively described at all. So when I talked about the observables encoding the polarization of a photon and the measurements done to determine this piece of information, I actually meant quantum field theory: nothing simpler can work.

But just because we talk about QFT doesn't mean that we should forget about the superposition principle and the "democracy" between all choices of the bases. The Hilbert space of QFT is a linear vector space, just like in QM. QFT is not just analogous to QM: QFT is a special case of quantum mechanics! As long as you think that the Hilbert space of a QFT is something "very different" from the Hilbert space of non-relativistic quantum mechanics describing the same or similar physical system, you must have completely misunderstood something essential about at least one of these theories. In reality, the non-relativistic QM is a limit, the non-relativistic limit, of QFT. So if you extract the relevant Hilbert space for a given physical system assuming low speeds and the dynamical equations (e.g. the Heisenberg or Schrödinger equations governing these systems), you must get exactly the same thing up to subleading terms that (relatively) vanish in the \(v/c\to 0\) limit. You may need to rename things a bit and change the notation while you're relating your QM and QFT description of the hydrogen atom, for example.
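A quick numerical check of these basis relations (a sketch: the kets are just 2-component complex vectors in the \(\{\ket L,\ket R\}\) basis, nothing QFT-specific):

```python
import numpy as np

L = np.array([1, 0], dtype=complex)   # |L>
R = np.array([0, 1], dtype=complex)   # |R>

x = (L + R) / np.sqrt(2)              # |x>
y = (L - R) / (1j * np.sqrt(2))       # |y>

# Born's rule: a circular-polarization measurement on a linearly polarized photon
print(abs(np.vdot(L, x))**2, abs(np.vdot(R, x))**2)   # 0.5 0.5

# A linear polarization rotated by theta still has equal |L| and |R| weights;
# theta shows up only in the relative phase of the two amplitudes.
theta = 0.3
lin = np.cos(theta) * x + np.sin(theta) * y
amp_L, amp_R = np.vdot(L, lin), np.vdot(R, lin)
print(abs(amp_L)**2, abs(amp_R)**2)   # 0.5 0.5
print(np.angle(amp_R / amp_L))        # 2*theta = 0.6
```

The rotation angle of the linear polarization lives entirely in the relative phase of the two circular amplitudes, which is exactly the "one and the same problem expressed in two different variables" point made above.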
But if you can't manage to carry out this reduction, then you haven't understood QFT yet! Because \(\ket x,\ket y\) are as good ket vectors for one photon as \(\ket L,\ket R\) – the superposition principle or the linearity of the vector space rules out any "preferred bases" that would be qualitatively better than others – and because in QFT, these photons may be created by creation operators, it follows that the creation operators \(a^\dagger_x,a^\dagger_y\) creating photons with a linear polarization are as good as their circularly polarized counterparts \(a^\dagger_L,a^\dagger_R\), too. After all, these creation operators are linear operators acting on the Hilbert space of the QFT (the Fock space, at least in the case of a free QFT). And one thing you are encouraged to do with linear operators is to define their (arbitrary) linear combinations. The linear combinations of \(a^\dagger_x,a^\dagger_y\) needed to get \(a^\dagger_L,a^\dagger_R\) or vice versa mimic the structure of the linear combinations we wrote for the one-photon states. It's no coincidence. (When you define linear combinations of linear operators acting on the same Hilbert space, you want the individual terms to have the same units, the same grading, and belong to the same superselection sector etc. because it's impolite to add apples and oranges. But clearly, all these conditions are satisfied when we talk about linear and circular polarizations of photons. All the coefficients that we need are basically multiples of powers of \(i\) and powers of \(\sqrt{2}\).)

Maybe the reduction of quantum field theory – e.g. a Dirac field – to the non-relativistic quantum mechanics in the \(v\ll c\) limit isn't sufficiently well explained in the basic courses of QFT if it is explained at all. But at the end, I do think that the main reason why Darkblue and other beginners tend to invent nonsensical fairy-tales such as "QFT bans linear polarizations" is nothing else than the notorious anti-quantum zeal. In an exchange – that was obviously a waste of time because Darkblue is self-evidently a mentally impotent moron – the poster wrote:

@LubošMotl: Thank you, I think I get my mistake. What I thought was QFT, is not QFT. The true QFT picture evolves linear complex combination of pure Fock states. My mental picture evolves pure Fock states with phases. By pure Fock States I mean a state with an integer number of energy quantum and an integer number of LH spin and an integer number of RH spin. (In my mental picture without superposition principle the linearly polarized creation operator is invalid because it's not pure). Conceptually I like my picture more. The paper's experiment could tell if it's wrong. I guess I'll wait.

Darkblue wrote "I think I get it" and then he repeats exactly the same "more likeable" stupidity about the ban on the superpositions which is the root of this whole question he posted and everything surrounding it. The idiotic fallacious would-be argument "a holy experiment may make me right and make you wrong" is added as a bonus. Sorry, it can't. The experiment, its proponents, and its proposed interpretations are as stupid as Darkblue. Darkblue wants to ban the superpositions and only allow some (basis) states – in this case, the occupation number eigenstates in the Fock space – that may be considered "elements of the phase space" i.e. the possible objective states of the world.
The whole quantum mechanics would reduce down to classical physics if \(0\) and \(1\) were the only probabilities (and probability amplitudes) that the theory allows. BTW the simplest experiments with linear polarizers and circular polarizers easily falsify all theories that would propose that one of these kinds of polarizations of photons doesn't exist. It's flabbergastingly stupid to propose that a "new experiment" could reverse or undo this conclusion.

Also, when I say that people believing that one may ban superpositions in quantum mechanics are imbeciles with a putrefying brain, you may have noticed that Gerard 't Hooft recently attempted to propose theories where the superpositions were banned. Does it mean that Gerard 't Hooft currently belongs to the unprestigious group that I have described? The answer is that my statements speak for themselves.

It's the very point of quantum mechanics that it allows and needs all the complex superpositions. Operators don't commute – the "which polarization" operators don't commute; \(x,p\) don't commute; no non-constant observable commutes with the Hamiltonian \(H\), and so on. And that's why the eigenstates of some operators – which may result from a measurement – are nontrivial linear superpositions of eigenstates of other operators – eigenstates that may result from another measurement. You can't do any physics while denying this basic omnipresent fact.

Certain people are religiously obsessed anti-quantum zealots who hate everything that makes quantum mechanics quantum. And they're looking for every conceivable and inconceivable opportunity to throw away all the postulates of quantum mechanics such as the superposition principle. The transition from non-relativistic QM to QFT is just another opportunity for them – so they hope that the picture that QFT ultimately creates has to be basically classical, without any superpositions of allowed states and without any uncertainty. They can't ever be satisfied. Quantum mechanics will never go away because non-quantum ("classical") physics has been falsified and the act of falsification is irreversible. Quantum field theory doesn't undo quantum mechanics; string theory doesn't do such a thing, either. The newer theories don't return physics any closer to the classical way of thinking. And the classical framework for physics has simply been proven wrong for 90 years. You can't learn any modern physics if you're incapable of understanding these fundamental facts and the reasons why competent physicists are certain about them. If you're determined to keep your psychological problem with the very notion of a superposition of two quantum states, to believe that the Universe is prettier without superpositions, you should give up physics. You have no chance to penetrate it. Get employed as a chimp in your local zoo, as a feminist in the Department of Women's Studies, or as something else that is much closer to apes than to modern physicists. Thank me very much for my wise advice. You are welcome.

P.S.: What really drives me up the wall about these Darkblue idiots is their verbal pride about "experiments" even though they are obviously retarded when it comes to the relevant experiments, too. When I was 6 years old, I played with the linear polarization filter on top of LCD displays from calculators. In effect, as a very small kid, I've done lots of experiments equivalent to this one: Now, I could probably find similar videos with the calculator displays, too.
Or this longer video showing how the polarizing filters may remove unwanted layers from photographs. But this simpler one above is a "cooler" version of what I did. You can see that the light coming from the display is polarized. If you add another filter, you can send the total amount of light that gets through to zero, depending on the orientation of the extra filter. You may verify that the percentage of the light that gets through is proportional to \(\cos^2 \gamma\) where \(\gamma\) is the angle of the rotation in between the two adjacent filters. So the video above also plays with a pair of filters on top of each other, rotated relatively to each other etc. You can make numerous observations. Even if you don't measure the "cosine" quantitatively, you may derive many facts about the conditions that are needed for the light intensity to be basically unchanged; or for it to drop to zero. Because he or she claims that the linear polarization can't exist, this Darkblue idiot clearly couldn't have done any experiments like that, experiments that many kids are familiar with and that are standard at high schools, sometimes at basic schools. But he or she extracts tons of self-confidence from talking about experiments, anyway. I simply can't stand pompous fools of this kind. They are exactly like W*it or Sm*lin – obnoxious caricatures of scientists and piles of stinky šit.
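To put numbers on the \(\cos^2\gamma\) law mentioned above, here is a small sketch using Jones calculus. It treats each ideal linear polarizer as a 2×2 projection matrix; the angles are arbitrary examples, not taken from the videos:

```python
import numpy as np

def polarizer(angle):
    """Jones matrix of an ideal linear polarizer: projector onto `angle`."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c * c, c * s], [c * s, s * s]])

def transmitted(angles, field=np.array([1.0, 0.0])):
    """Intensity left after passing an x-polarized beam through a filter stack."""
    for a in angles:
        field = polarizer(a) @ field
    return float(np.sum(np.abs(field) ** 2))

g = np.deg2rad(30)
print(transmitted([0.0]))                        # 1.0: beam is already x-polarized
print(transmitted([0.0, g]))                     # cos^2(30 deg) = 0.75 (Malus's law)
print(transmitted([0.0, np.pi / 2]))             # 0.0: crossed polarizers block everything
print(transmitted([0.0, np.pi / 4, np.pi / 2]))  # 0.25: a 45-degree filter in between
```

The last line is the classic surprise: inserting an intermediate filter between crossed polarizers restores transmission, which is just the superposition arithmetic from the \(\ket x\), \(\ket L\) discussion above in classical-optics clothing.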
Chemical Principles/Electronic Structure and Atomic Properties

"I have been asked sometimes how one can be sure that elsewhere in the universe there may not be further elements, other than those in the periodic system. I have tried to answer by saying that it is like asking how one knows that elsewhere in the universe there may not be another whole number between 4 and 5. Unfortunately, some persons think that is a good question, too." — George Wald, 1964

We now know the wave functions and energy levels for a hydrogen atom. With this information and the aufbau (or buildup) process, we can go on to determine the electronic structures for atoms of all the elements. These structures lead directly to the periodic table of Figures 7-3 and 7-4. As we shall see, the structures explain the stability of eight-electron shells in noble gases and the trends in ionization energies and electron affinities of the elements.

Buildup of Many-Electron Atoms

Although we cannot solve the Schrödinger equation exactly for many-electron atoms, we can show that no radically new features are expected as the atomic number increases. There are the same quantum states, the same four quantum numbers (n, l, m, and s), and virtually the same electronic probability functions or electron-density clouds. The energies of the quantum levels are not identical for all elements; rather, they vary in a regular fashion from one element to the next.

In studying the electronic structure of a many-electron atom, we shall assume the existence of a nucleus and the required number of electrons. We shall assume that the possible electronic orbitals are hydrogenlike, if not identical to the hydrogen orbitals. Then we shall build the atom by adding electrons one at a time, placing each new electron in the lowest-energy orbital available. In this way we shall build a model of an atom in its ground state, or the state of lowest electronic energy. Wolfgang Pauli (1900 - 1958) first suggested this treatment of many-electron atoms, and called it the aufbau, or buildup, process. The aufbau process involves three principles:

1. No two electrons in the same atom can be in the same quantum state. This principle is known as the Pauli exclusion principle. It means that no two electrons can have the same n, l, m, and s values. Therefore, one atomic orbital, described by n, l, and m, can hold a maximum of two electrons: one of spin +1/2 and one of spin −1/2. We can represent an atomic orbital by a circle and an electron by an arrow: File:Chemical Principles Equation 9.1.png When two electrons occupy one orbital with spins +1/2 and −1/2, we say that their spins are paired. A paired spin is represented as follows: File:Chemical Principles Equation 9.2.png

2. Orbitals are filled with electrons in order of increasing energies. The s orbital can hold a maximum of 2 electrons. The three p orbitals can hold a total of 6 electrons, the five d orbitals can hold 10, and the seven f orbitals can hold 14. We must decide on the order of increasing energies of the levels before we can begin the buildup process. For atoms with more than one electron, in the absence of an external electric or magnetic field, energy depends on n and l (the size and shape quantum numbers) and not on m, the orbital-orientation quantum number.
3. When electrons are added to orbitals of the same energy (such as the five 3d orbitals), one electron will enter each of the available orbitals before a second electron enters any one orbital. This follows Hund's rule, which states that in orbitals of identical energy electrons remain unpaired if possible. This behavior is understandable in terms of electron-electron repulsion. Two electrons, one in a px orbital and one in a py orbital, remain farther apart than two electrons paired in the same px orbital (Figure 8-22).

A consequence of Hund's rule is that a half-filled set of orbitals (each orbital containing a single electron) is a particularly stable arrangement. The sixth electron in a set of five d orbitals is forced to pair with another electron in a previously occupied orbital. The mutual repulsion of negatively charged electrons means that less energy is required to remove this sixth electron than to remove one of the five in a set of five half-filled d orbitals. Similarly, the fourth electron in a set of three p orbitals is held less tightly than the third.

Relative Energies of Atomic Orbitals

File:Chemical Principles Fig 9.1.png Figure 9-1 Radial distribution functions for electrons in the 3s, 3p, and 3d atomic orbitals of hydrogen. These curves are obtained by spinning the orbital in all directions around the nucleus to smear out all details that depend on direction away from the nucleus, and then by measuring the smeared electron probability as a function of distance from the nucleus. The 3s orbital, which is already spherically symmetrical without the smearing operation, has a most probable radius at 13 atomic units and two minor peaks close to the nucleus. The 3p orbital has a maximum density near r = 12 atomic units, one spherical node at r = 6 atomic units, and a density peak close to the nucleus. The 3d orbital has only one density peak, which occurs very close to the Bohr orbit radius of 9 atomic units. The shapes of the three orbitals before the spherical smearing process are shown to the right of each curve.

An electron in the hydrogen atom with n = 2 will be in the neighborhood of r = 4 atomic units. The scale of distances changes in many-electron atoms, but relative distances in different orbitals in the atom are the same as in H. An electron in a 3s orbital is more stable than one in a 3p or 3d orbital because it has a greater probability of being inside the orbitals of the n = 2 electrons, where it experiences a greater attraction from the nucleus. The 3p orbital is similarly more stable than the 3d.

The 3s, 3p, and 3d orbitals in the hydrogen atom have the same energy but differ in the closeness of approach of the electron to the nucleus (Figure 9-1). The energy of an electron in an orbital depends on the attraction exerted on it by the positively charged nucleus. Electrons with low principal quantum numbers will lie close to the nucleus and will screen some of this electrostatic attraction from electrons with higher principal quantum numbers. In the Li⁺ ion, the effective nuclear charge beyond 1 or 2 atomic units from the nucleus is not the true nuclear charge of +3, but a net charge of +1 produced by the nucleus plus the two 1s electrons. Similarly, the lone n = 3 electron in sodium experiences a net nuclear charge of approximately +1 rather than the full nuclear charge of +11. If the net charge from the nucleus and the filled inner orbitals were concentrated at a point at the nucleus, then the energies of the 3s, 3p, and 3d orbitals would be the same.
But the screening electrons extend over an appreciable volume of space. The net attraction that an electron with a principal quantum number of 3 experiences depends on how close it comes to the nucleus, and whether it penetrates the lower screening electron clouds. As in Sommerfeld's elliptical-orbit model, the s orbital comes closer to the nucleus and is somewhat more stable than the p, and the p is more stable than the d. This is the reason for the variation of the l energy levels in the lithium energy-level diagram in Figure 8-13. For a given value of the principal quantum number, n, the order of increasing energy is s, p, d, f, g, . . . . It is less easy to decide whether and when the high-l orbitals of one n overtake the low-l orbitals of the next: for example, whether a 4f orbital has a higher energy than a 5s, or a 3d a higher energy than a 4s. The question was originally settled empirically by choosing the order of overlap that accounted for the observed structure of the periodic table. The energies have since been calculated theoretically, and (fortunately for quantum mechanics) they agree with the observed order of levels. The sequence of energy levels is shown in Figure 9-2.

Orbital Configurations and First Ionization Energies

File:Chemical Principles Fig 9.2.png Figure 9-2 Idealized diagram of the energy levels of the hydrogenlike atomic orbitals during the buildup of many-electron atoms. On each level are written the symbols of those elements that are completed with the addition of electrons on that level. Note the nearly equal energies of 4s and 3d, of 5s and 4d, of 6s, 4f, and 5d, and finally of 7s, 5f, and 6d. The near equivalence of energies is reflected in some irregularity in the order of filling levels in the transition metals and inner transition metals. Elements with such irregularities are circled. For example, in lanthanum and actinium, once the 6s and 7s orbitals are filled, the next electron goes into a d rather than an f orbital. See Figure 9-3 for details.

File:Chemical Principles Fig 9.3.png Figure 9-3 "Superlong" form of the periodic table. The column head indicates the last electron to be added in the Pauli buildup process. Those elements whose electronic structures in the ground state differ from this simple buildup model are in color. In Gd, Cm, Cr, Mo, Cu, Ag, and Au, this difference arises from the extra stability of half-filled (f⁷, d⁵) or completely filled (d¹⁰) shells. Other deviations arise from the extremely small energy differences between d and s, or d and f levels. These deviations are less important to us now than the overall patterns of buildup and the way they account for the structure of the periodic table. He is placed over Be in Group IIA since the second electron is added to complete the s orbital in each of these elements. In the usual periodic table (inside front cover), He is placed over Ne, Ar, and the other noble gases to indicate that the entire valence shell is filled in these elements.

We shall build up the electronic structures of the atoms in the periodic table by adding electrons to the hydrogenlike orbitals in order of increasing energy, and by increasing the nuclear charge by one at each step. During this process we shall pay particular attention to the relationship between the orbital electronic configurations of atoms and their first ionization energies.
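The idealized buildup order is compact enough to write down as code. Here is a sketch (my own illustration, not part of the text) that generates configurations from the Madelung n + l ordering implicit in Figure 9-2:

```python
def madelung_order(max_n=7):
    """Orbitals sorted by n + l, ties broken by smaller n (idealized aufbau)."""
    orbitals = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(orbitals, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(Z):
    """Idealized ground-state configuration for atomic number Z."""
    parts, remaining = [], Z
    for n, l in madelung_order():
        if remaining == 0:
            break
        take = min(2 * (2 * l + 1), remaining)   # each subshell holds 2(2l+1)
        parts.append(f"{n}{'spdfg'[l]}{take}")
        remaining -= take
    return " ".join(parts)

print(configuration(9))    # F:  1s2 2s2 2p5
print(configuration(17))   # Cl: 1s2 2s2 2p6 3s2 3p5
print(configuration(24))   # idealized Cr: ... 4s2 3d4 (the real ground state is 3d5 4s1)
```

As the circled elements in Figure 9-3 warn, this is the idealized pattern only; Cr, Cu, La, and others deviate from it.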
The first ionization energy (IE₁) of an atom is the energy required to remove one electron:

atom(g) + energy(IE₁) → positive ion(g) + e⁻

Numerical IE₁ values are given in Table 9-1.* Use the periodic table in Figure 9-3 as an aid as you follow this building process.

A hydrogen (H) atom has only one electron, which in the ground state must go in the 1s orbital. So we write 1s¹ (the superscript represents the number of electrons in the orbital), and illustrate the electronic configuration as follows:

*It is common to refer to IE₁ simply as IE; this is done in Table 9-1.

File:Chemical Principles Table 9.1.png File:Chemical Principles Equation 9.3.png

In helium (He), the second electron also can be in the 1s orbital if its spin is paired with that of the first electron. In spite of electron-electron repulsion, this electron is more stable in the 1s orbital than in the higher-energy 2s orbital: File:Chemical Principles Equation 9.4.png

Because of the electron-electron repulsion, the first ionization energy of He is less than we might have expected for an atom with a nuclear charge of +2. A simple calculation illustrates this point. If electron-electron repulsion were not important, each electron would feel the full force of the +2 nuclear charge, and the first ionization energy could be calculated from the one-electron formula:

IE₁ = −E₁ = Z²k/n² = (2)²(1312 kJ mole⁻¹)/(1)² = 5248 kJ mole⁻¹

However, the experimental value of IE₁ for He is much less, 2372 kJ mole⁻¹. Although the strong attraction of a 1s electron to the +2 He nucleus is partially counterbalanced by the electron-electron repulsion, the IE₁ is still very large, showing how tightly each electron is bound in He.

Lithium (Li) begins the next period of the periodic table. Two electrons fill the 1s orbital; the third electron in Li must, by the Pauli exclusion principle, occupy the next lowest-energy orbital, namely, the 2s: File:Chemical Principles Equation 9.5.png

The fourth electron in beryllium (Be) fills the 2s orbital, and the fifth electron in boron (B) must occupy one of the higher-energy 2p orbitals: File:Chemical Principles Equation 9.6.png

For B, the first ionization energy is less than that of Be because its outermost electron is in a less stable (higher-energy) orbital. In carbon (C), two of the three 2p orbitals contain an electron. As Hund's rule predicts, in nitrogen (N) the three p electrons are found in all three 2p orbitals, instead of two being paired in one: File:Chemical Principles Equation 9.7.png

The fourth 2p electron in an oxygen (O) atom is held less tightly than the first three because of the electron-electron repulsion with the other electron in one of the 2p orbitals. The first ionization energy of O is accordingly low. The general trend across this period is for each new electron to be held more tightly because of the increased charge on the nucleus. Because the other 2s and 2p electrons are approximately the same distance from the nucleus, they do not shield the new electron from the steadily increasing charge. This increased charge overcomes the electron repulsion as the fifth 2p electron is added in fluorine (F). Therefore, the fifth electron is held very tightly in F, and the first ionization energy increases again.
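The one-electron estimate used for He above takes a single line to check (k = 1312 kJ mole⁻¹ is the hydrogen value quoted in the text):

```python
def one_electron_ie(Z, n, k=1312.0):
    """IE = Z^2 k / n^2, valid only when electron-electron repulsion is absent."""
    return Z**2 * k / n**2

print(one_electron_ie(2, 1))   # 5248.0 kJ/mol: the no-repulsion estimate for He
# The measured IE1 of He is 2372 kJ/mol; repulsion cuts the estimate roughly in half.
```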
The most stable configuration results when the sixth 2p electron is added to complete the n = 2 shell with the noble gas neon (Ne): File:Chemical Principles Equation 9.8.png

The complete n = 1 shell of two electrons is often given the symbol K, and the complete n = 2 shell of eight electrons is given the symbol L. A briefer representation of the Ne atom then is Ne: KL

For all but a few atoms, writing the complete orbital electronic structure is a tedious procedure. It is also unnecessary, because only the outer electrons are important in chemical reactions. We call the chemically important outer electrons the valence electrons. The valence electrons of an atom are the electrons in the s and p orbitals beyond the closed-shell configurations. For example, the two 1s electrons in Li form the same closed shell as in He, and they are chemically unreactive. Thus we say that the valence electronic structure of Li is 2s¹. Similarly, the valence electronic structure of Be is 2s²; of B, 2s²2p¹; of C, 2s²2p²; of N, 2s²2p³; of O, 2s²2p⁴; and of F, 2s²2p⁵.

The buildup of the third period of the periodic table proceeds exactly as that of the preceding period did. Each new electron is bound more firmly because of the increasing nuclear charge, except for the fluctuations at aluminum (Al) and sulfur (S) produced by the filling of 3s in magnesium (Mg) and the half-filling of 3p in phosphorus (P): File:Chemical Principles Equation 9.9.png

The outermost electron for each element in this period is bound less firmly than the outermost electron in the corresponding element of the previous period because the n = 3 electrons are farther from the nucleus. Therefore, the first ionization energies for the n = 3 elements are smaller than for the corresponding n = 2 elements. With the completion of the 3s and 3p orbitals, we have again reached a particularly stable electronic configuration with the noble gas argon (Ar).

Something unusual happens in the fourth period. The 4s orbital penetrates closer to the nucleus than does the 3d orbital, and at this point in the buildup process the 4s has slightly lower energy than the 3d. Hence, the one and two electrons that are added to form potassium (K) and calcium (Ca) go into the 4s orbital before the 3d orbital is filled in the elements scandium (Sc) through zinc (Zn). If we assume a constant inner electronic configuration of KL 3s²3p⁶, the valence electronic configurations for the 4s and 3d elements are:

K   3d⁰4s¹     Mn  3d⁵4s²
Ca  3d⁰4s²     Fe  3d⁶4s²
Sc  3d¹4s²     Co  3d⁷4s²
Ti  3d²4s²     Ni  3d⁸4s²
V   3d³4s²     Cu  3d¹⁰4s¹
Cr  3d⁵4s¹     Zn  3d¹⁰4s²

There are two anomalies in this order of filling. The half-filled (d⁵) and filled (d¹⁰) levels are particularly stable; therefore, the chromium (Cr) and copper (Cu) atoms have only one 4s electron each.

Example 1
The valence electronic configuration for the ground state of chromium is 3d⁵4s¹. Predict the configuration of the first (i.e., lowest-energy) excited state of chromium.

The aufbau process predicts 3d⁴4s² for the ground state, but the extra stability of a half-filled level makes the 3d⁵4s¹ configuration slightly lower in energy than 3d⁴4s². The latter configuration thus becomes the first excited state. For those elements whose ground-state configurations differ from those predicted by the aufbau process, the predicted configuration is that of an excited state usually only slightly higher in energy than the ground state.
Although the 4s orbital penetrates closer to the nucleus than the 3d and therefore has a lower energy, the majority of the probability density of the 4s orbital is farther from the nucleus than that of the 3d. An electron in a 4s orbital is simultaneously farther from the nucleus, on the average, than a 3d electron and more stable, because of the small but not negligible probability that it will be very close to the nucleus. In chemical bonding, the energies of electrons in such closely spaced levels in atoms are not as significant as the distances of the electrons from the nucleus. Therefore, the 4s electrons have more of an effect on chemical properties than the relatively buried 3d electrons.

With the exception of Cr and Cu, all the elements from Ca through Zn have the same outer electronic structure: two 4s electrons. The chemical properties of this series of elements will vary less rapidly than those in a series in which s or p electrons are being added. This is the reason for the relatively unchanging properties of the transition metals. After the 3d orbitals are filled, the 4p orbitals fill, in a straightforward manner, to form the representative elements from gallium (Ga), 3d¹⁰4s²4p¹, to the noble gas krypton (Kr), 3d¹⁰4s²4p⁶. The first ionization energy, which had risen with increasing nuclear charge in the transition metals, plummets at Ga when the next electron is placed in the less stable 4p orbital.

The fifth period repeats the same pattern: first the filling of the 5s orbitals, then an interruption while the buried 4d orbitals are filled in another series of transition metals, and finally the filling of the 5p orbitals, ending with the noble gas xenon (Xe), 4d¹⁰5s²5p⁶. The common feature of all noble gases is the outermost electronic arrangement s²p⁶. This is the origin of the stable eight-electron shells that we mentioned in Chapter 7. The late filling of the d orbitals (and f orbitals) produces the observed lengths of the periods of the periodic table: first 2, then 8, then only 8 instead of 18 for n = 3, then only 18 instead of 32 for n = 4.

According to the energy diagram in Figure 9-2, the 6s orbital is more stable than the 5d, although the difference is small and there are exceptions. The idealized filling pattern is for the 6s orbital to fill in cesium (Cs) and barium (Ba), followed by the deeply buried 4f orbitals in the 14 inner transition elements lanthanum (La) through ytterbium (Yb). There are minor deviations from this pattern, as shown in Figure 9-3. The most important of these deviations is that the first electron after Ba goes into the 5d orbital in La and not into the 4f. Lanthanum is more properly a transition metal than an inner transition metal. It is more relevant to understand the idealized filling pattern, however, than to worry about the individual exceptions to it.

The chemical properties of the inner transition metals cerium (Ce) to lutetium (Lu) vary even less than the properties of the transition metals, because successive electrons are in the deeply buried 4f orbitals. After the 4f orbitals are filled, the balance of the third transition-metal series, hafnium (Hf) to mercury (Hg), occurs with the filling of the 5d orbitals. The representative elements thallium (Tl) through radon (Rn) are formed as the 6p orbitals fill. The seventh and last period begins in the same way.
First the 7s orbital fills, then the inner transition metals from actinium (Ac) to nobelium (No) - with the irregularities shown in Figure 9-3 - and finally the beginning of a fourth transition-metal series with lawrencium (Lr). There are more deviations from this simple f-first, d-next filling pattern in the actinides than in the lanthanides (Figure 9-3), and consequently the first few actinide elements show a greater diversity of chemical properties than do the lanthanides.

In summary, the idealized sequence of filling of orbitals across a period is as follows:

1. For period n, the ns orbital is filled first with two electrons. These elements are the alkali metals (Group IA) and the alkaline earths (Group IIA) and are classed with the representative elements.

2. The very deeply buried (n − 2)f orbitals are filled next. They exist only for (n − 2) greater than 3, or for Periods 6 and 7. These elements, which have virtually identical outer electronic structures and therefore virtually identical chemical properties, are the inner transition metals.

3. The less deeply buried (n − 1)d orbitals are then filled, if they exist. They exist only for (n − 1) greater than 2, or for Period 4 and greater. These elements are similar to one another, but not as similar as the inner transition metals. They are called the transition metals (B groups).

4. Finally, the three np orbitals are filled to form the remaining representative elements (Groups IIIA-VIIA) and to conclude each period with the outermost s²p⁶ configuration of the noble gases.

We can now explain many of the facts that we presented in Chapter 7. The structure of the periodic table, with its groups and periods, can be seen to be a consequence of the order of energy levels (Figure 9-2). Elements in the same group have similar chemical properties because they have the same outer electronic structure in the s and p orbitals. The outer valence electrons that are so important in chemistry are these s and p electrons. The closed, inert shell of the noble gases is the completely filled s²p⁶ configuration. We can understand the mechanism of formation of the transition metals and the inner transition metals in terms of the filling of inner d and f orbitals. We can see the reasons for general trends across a period or down a group, and for local fluctuations within a period.

Electron Affinities

Another atomic property that depends strongly on the orbital electronic configuration is the electron affinity (EA), which is the energy change that accompanies the addition of an electron to a gaseous atom to form a negative ion:

atom(g) + e⁻ → negative ion(g)

If energy is released when an atom adds an electron to form a negative ion, the EA has a positive value. If energy is required, the EA is negative. (Values for the known atomic electron affinities are given in Table 9-1.)

Within a period, the halogens have the highest electron affinities because, after the effect of screening electrons in lower quantum levels has been accounted for, the net nuclear charge is greater for a halogen than for any other element in the period. The noble gases have negative electron affinities because the new electron must be added to the next higher principal quantum level in each atom. Not only would the added electron be farther from the nucleus than the other electrons, it also would receive the full screening effect from all the others.
Lithium and sodium have moderate electron affinities; beryllium has a negative electron affinity, and magnesium has a near-zero electron affinity. In Be and Mg the valence s orbital is full, and the added electron must go into a higher-energy p orbital. Nitrogen and phosphorus have low electron affinities because an added electron must pair with an electron in one of the half-filled p orbitals.

Example 2
Write the ground-state orbital electronic configurations of Cl⁺, Cl, and Cl⁻. What are the valence electronic configurations of these species?

The atomic number of Cl is 17. Therefore the positive ion Cl⁺ has 16 electrons, Cl has 17, and Cl⁻ has 18. The ground-state orbital electronic configurations are as follows:

Cl⁺: 1s²2s²2p⁶3s²3p⁴ or KL 3s²3p⁴
Cl: 1s²2s²2p⁶3s²3p⁵ or KL 3s²3p⁵
Cl⁻: 1s²2s²2p⁶3s²3p⁶ or KL 3s²3p⁶

The valence electronic configurations are 3s²3p⁴ for Cl⁺, 3s²3p⁵ for Cl, and 3s²3p⁶ for Cl⁻.

Example 3
The first ionization energy of P, 1063 kJ mole⁻¹, is greater than that of S, 1000 kJ mole⁻¹. Explain this difference in terms of the valence orbital electronic configurations of P and S atoms.

The valence orbital electronic configuration of P is 3s²3p³; of S, 3s²3p⁴. The 3p shell is exactly half-filled in a P atom, whereas there is one extra electron in S that is forced to pair with another 3p electron: File:Chemical Principles Equation 9.10.png

As a result of the added electron-electron repulsion of the paired 3p electrons in atomic S, the normal trend of increasing first ionization energies with increasing atomic number in a given period is reversed, with the IE₁ of P being greater than the IE₁ of S. This effect illustrates the special stability associated with a half-filled p shell. After the half-filled p shell is disrupted (p³ to p⁴), the electron-electron repulsions associated with the addition of the fifth and sixth p electrons in Cl and Ar are not large enough to override the attractive effect of the increasing positive nuclear charge. Thus the ionization energies of S, Cl, and Ar increase in the usual order (S < Cl < Ar).

Example 4
In each of the following orbital electronic configurations, does the configuration represent a ground state, an excited state, or a forbidden state (that is, a configuration that cannot exist): (a) 1s²2s²2p²4s¹; (b) 1s¹2s²2p¹; (c) 1s²2s²2p⁶; (d) 1s²2s²2p⁵3s³?

(a) The ground-state configuration for an atom or ion with 7 electrons is 1s²2s²2p³. If it is an atom, then it must be N (atomic number 7). The configuration 1s²2s²2p²4s¹ represents a state in which a 2p electron has been excited to a 4s orbital. Therefore the configuration represents an excited state. (b) The configuration 1s¹2s²2p¹ represents an excited state (for a 4-electron atom or ion, 1s²2s² is the ground state; if it is an atom, it is Be). (c) The configuration 1s²2s²2p⁶ represents a ground state (F⁻, Ne, Na⁺, Mg²⁺). (d) The ground-state configuration for an atom or ion with 12 electrons is 1s²2s²2p⁶3s² (magnesium atom). In the configuration 1s²2s²2p⁵3s³, 3 electrons are placed in the 3s orbital, which can take only 2 (one with s = +1/2 and one with s = −1/2). The configuration with three 3s electrons violates the Pauli principle and cannot exist. It represents a forbidden state.

Example 5
The electron affinity of Si, 138 kJ mole⁻¹, is much larger than that of P, 75 kJ mole⁻¹. Explain why this is so in terms of valence-orbital electronic configurations. The valence-orbital configuration of Si is 3s²3p²; of P, 3s²3p³.
Adding one electron to Si gives a half-filled 3p shell (Si⁻ is 3s²3p³); adding one electron to P disrupts a half-filled 3p shell (P⁻ is 3s²3p⁴). The special stability of the half-filled shell of Si⁻, taken with the extra electron-electron repulsion in the 3s²3p⁴ configuration of P⁻, accounts for the fact that the EA of Si is larger than the EA of P.

Types of Bonding

File:Chemical Principles Fig 9.4.png Figure 9-4 The two hydrogen 1s orbitals overlap to form an electron-pair covalent bond in H₂.

A covalent bond forms between combining atoms that have electrons of similar, or equal, valence-orbital energies. For example, two atoms of hydrogen are joined by a covalent bond in the H₂ molecule. The energy required to separate two bonded atoms is known as the bond energy. For H₂, the bond energy (corresponding to the process H₂ → H + H) is 432 kJ mole⁻¹. The two electrons in H₂ are shared equally by the two hydrogen 1s orbitals. This, in effect, gives each hydrogen atom a stable, closed-shell (helium-type) configuration. An orbital representation of the covalent electron-pair bond in H₂ is shown in Figure 9-4.

An ionic bond is formed between atoms with very different ionization energies and electron affinities. This situation allows one atom in a two-atom pair to transfer one or more valence electrons to its partner. An atom of Na is so different from an atom of Cl, for example, that it is not possible for atoms in NaCl to share their electrons equally. The Na atom has a relatively low IE₁ of 498 kJ mole⁻¹ and a small EA of 117 kJ mole⁻¹. Therefore, it will readily form Na⁺ in the presence of an atom with a high EA. The chlorine atom has an EA of 356 kJ mole⁻¹ and an IE₁ of 1255 kJ mole⁻¹. Rather than lose an electron, a Cl atom has a strong tendency to gain one. The result is that in diatomic NaCl an ionic bond is formed, Na⁺Cl⁻, in which the 3s valence electron of Na is transferred to the one vacancy in the 3p orbitals of Cl.

Atomic Radii

The separation of the nuclei of two atoms that are bonded together (as in H₂ or Na⁺Cl⁻) is called the bond distance. In the hydrogen molecule, H₂, the bond distance is 0.74 Å. Each hydrogen atom in H₂ may therefore be assigned an atomic radius of 0.37 Å. The average radii of atoms of some representative elements, shown in the periodic-table arrangement of Figure 9-5, were determined from experimentally observed bond distances in many molecules. The atomic radius in most cases is compared with the size of the appropriate closed-shell positive or negative ion.

You will notice (Figure 9-5) that the atomic radii become smaller across a given row (or period) of the periodic table. This shrinkage occurs because in any given period the s and p orbitals acquire additional electrons, which are not able to shield each other effectively from the increasing positive nuclear charge. The result is an increase in the effective nuclear charge and thereby a decrease in the effective atomic radius. This is why a Be atom, for example, is smaller than a Li atom. From H to Li there is a large increase in effective atomic radius; the third electron in a Li atom is in an orbital that has a much larger effective radius than the H 1s orbital has. According to the Pauli principle, the third electron in Li must be in an orbital with a larger principal quantum number, namely the 2s orbital. Seven more electrons can be added to the 2s and 2p orbitals, which have approximately the same radii.
However, these electrons do not effectively shield each other from the positive nuclear charge as it increases, and the result is an increase in the effective nuclear charge and a corresponding decrease in radii in the series Li (Z = 3) through Ne (Z = 10). After Ne, additional electrons cannot be accommodated by the n = 2 level. Thus an eleventh electron must go into the n = 3 level, specifically, into the 3s orbital. Since the effective radii increase from the n = 1 to n = 2 to n = 3 valence orbitals, the effective size of an atom also increases with increasing atomic number within each group of the periodic table.

File:Chemical Principles Fig 9.5.png Figure 9-5 Relative atomic radii of some elements compared with the radii of the appropriate closed-shell ions. Radii are in angstroms. Solid spheres represent atoms and dashed circles represent ions. Notice that positive ions are smaller than their neutral atoms and negative ions are larger. Why is this so?

Example 6
Predict the order of decreasing atomic radii for S, Cl, and Ar. Predict the order for Ca²⁺, Cl⁻, Ar, K⁺, and S²⁻.

The order S > Cl > Ar is correct for the radii of these atoms, because the nuclear charge increases by one unit from S to Cl and by one unit from Cl to Ar. The valence electrons are attracted more strongly to the nuclei with higher positive charges in any given period, so the atomic radii decrease correspondingly. For isoelectronic atomic and ionic species (those having the same number of electrons), radii decrease as the nuclear charge (atomic number) increases, again because of increasing electron-nucleus attraction. Thus the correct order for the isoelectronic species is S²⁻ > Cl⁻ > Ar > K⁺ > Ca²⁺.

Most bonds in molecules fall somewhere between the extremes of the covalent and ionic types. The bond in the HF molecule, for example, is neither purely covalent nor purely ionic. Just how unequal is the sharing of electrons in HF? And which atom in HF is able to attract the greater share of the bonding electrons, H or F? To answer the second question, and to provide a qualitative guide to the first, Linus Pauling (b. 1901) defined a quantity called electronegativity, χ (the Greek letter chi), in 1932; two years later R. S. Mulliken (b. 1896) showed that electronegativity could be related to the average of the electron affinity and the ionization energy of an atom.

Pauling obtained electronegativity values by comparing the energy of a bond between unlike atoms, AB, with the average energies of the A₂ and B₂ bonds. If HF formed a covalent bond as in H₂ and F₂, then we would expect the bond energy in HF to be close to the average (say, the arithmetic mean or the geometric mean) of the bond energies in H₂ and F₂. However, in molecules such as HF, the bonds are stronger than predicted from such averages. The bond energy of HF is 565 kJ mole⁻¹, whereas the bond energies of H₂ and F₂ are 432 and 139 kJ mole⁻¹, respectively. The geometric mean of the last two values is (139 × 432)^(1/2) = 245 kJ mole⁻¹, which is much less than the observed bond energy of HF. This "extra" bond energy (designated Δ) in an AB molecule is assumed to be a consequence of the partial ionic character of the bond, due to the electronegativity difference between atoms A and B. The electronegativity difference between A and B may be defined as

χA − χB = 0.102Δ^(1/2)     (9-1)

in which χA and χB are the electronegativities of atoms A and B, and Δ is the extra bond energy in kilojoules per mole.
The extra bond energy is calculated from the equation

\[\Delta = DE_{AB} - \left[(DE_{A_2})(DE_{B_2})\right]^{1/2}\]

in which DE is the appropriate bond dissociation energy. In equation 9-1 the square root of Δ is used because it gives a more consistent set of atomic electronegativity values. Since only differences are obtained from equation 9-1, one atom must be assigned a specific electronegativity value, and then the values for the other atoms can be calculated easily. In a widely adopted electronegativity scale, the most electronegative atom, F, is assigned a value of 3.98. (Electronegativity values based on this assignment are given in Table 9-1.)
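To make the arithmetic behind equation 9-1 concrete, here is a minimal sketch in Python (the function names are our own, purely illustrative) that reproduces the HF numbers quoted above:

import math

def extra_bond_energy(de_ab, de_a2, de_b2):
    # Delta: the observed A-B bond energy minus the geometric mean
    # of the A2 and B2 bond energies; all values in kJ mole^-1.
    return de_ab - math.sqrt(de_a2 * de_b2)

def electronegativity_difference(delta):
    # Equation 9-1: chi_A - chi_B = 0.102 * Delta^(1/2)
    return 0.102 * math.sqrt(delta)

# HF example from the text: DE(HF) = 565, DE(H2) = 432, DE(F2) = 139 kJ mole^-1
delta = extra_bond_energy(565, 432, 139)
print(round(delta))                                   # ~320 kJ mole^-1
print(round(electronegativity_difference(delta), 2))  # ~1.82

The computed difference of about 1.8 is consistent with the tabulated values (χ = 3.98 for F and roughly 2.2 for H).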
Viewpoint: A view from the edge
H. A. Fertig, Department of Physics, Indiana University, Bloomington, IN 47405
Published February 23, 2009 | Physics 2, 15 (2009) | DOI: 10.1103/Physics.2.15

Observation of Chiral Heat Transport in the Quantum Hall Regime
G. Granger, J. P. Eisenstein, and J. L. Reno
Published February 23, 2009

Figure 1 (Illustration: Alan Stonebraker): (a) Energy spectrum of a clean integer quantum Hall system as a function of guiding center position X. States near the edges disperse and carry current. By filling them to different Fermi energies EF(L) and EF(R), a net current flows down the sample, which yields a quantized Hall resistance RH. (b) Interpretation in terms of classical orbits: in the bulk, electrons orbit around a fixed position and do not carry current. At the edges, specular reflection induces current-carrying motion.

Charged particles tend to undergo circular motion in a magnetic field. The trapping of such particles by magnetic field lines plays a key role in a host of physical phenomena, from protecting the planet from harmful radiation to the confinement of plasmas in fusion experiments. At much lower energies than these examples, quantum mechanics introduces a remarkable range of phenomena, particularly for electrons confined to two dimensions in a perpendicular magnetic field. Here the classical orbits described above become quantized, with microscopically small radii of approximately 10 nm in a 10 T magnetic field. With electrons pinned to such small areas, one might expect that such systems would be rather poor conductors. However, the most basic observation in this system is the quantum Hall effect: for sufficiently clean and cold samples one may pass a current I between two contacts and observe a voltage drop VH essentially perpendicular to the current flow, such that the Hall resistance RH = VH/I takes the form h/Ne², with N an integer (the integer quantized Hall effect) or a rational fraction (the fractional quantized Hall effect), independent of the details of the sample geometry. In such systems edges play a crucial role: persistent local currents there lead to the quantization of the Hall effect. This extremely precise quantization, however, limits how much may be learned from charge transport at the edge. But now a new type of measurement involving heat transport at the edge, reported in Physical Review Letters by G. Granger, J. P. Eisenstein, and J. L. Reno at Caltech, may provide new insight [1].
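The scales just quoted are easy to verify numerically. Here is a small sketch of our own (not from the paper) that computes the quantized Hall resistance h/Ne² for the first few integers, and the magnetic length l_B = (ħ/eB)^1/2, which sets the radius scale of the quantized orbits:

import math

h = 6.62607015e-34      # Planck constant, J s
hbar = h / (2 * math.pi)
e = 1.602176634e-19     # elementary charge, C

def hall_resistance(N):
    # Quantized Hall resistance R_H = h / (N e^2), in ohms
    return h / (N * e**2)

def magnetic_length(B):
    # l_B = sqrt(hbar / (e B)): the radius scale of the quantized orbits, in meters
    return math.sqrt(hbar / (e * B))

for N in (1, 2, 3):
    print(N, round(hall_resistance(N), 1))    # 25812.8, 12906.4, 8604.3 ohms

print(round(magnetic_length(10.0) * 1e9, 1))  # ~8.1 nm at B = 10 T

The N = 1 value, h/e² ≈ 25.8 kΩ, is the von Klitzing constant, and l_B ≈ 8 nm at 10 T matches the "approximately 10 nm" orbit radius mentioned above.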
A resolution of the seeming paradox of the existence of persistent currents despite the pinning of electrons by the magnetic field was introduced by Bert Halperin years ago [2]. In an idealized situation where a sample is very clean, one can solve the Schrödinger equation for the system in the presence of an edge. The wave functions are extended parallel to the edge but are confined around a positional quantum number X, called the guiding center coordinate, perpendicular to the edge. States far from the edge have a simple harmonic oscillator spectrum with discrete energy levels called Landau levels. The separation between allowed values of X is tiny, so there is a huge number of allowed states for each Landau level n. The energies of states with guiding center closer to the edge deviate from the bulk value, generally increasing as X approaches the edge [see Fig. 1a]. Classically, this corresponds to skipping trajectories involving specular reflection of the electrons [Fig. 1b], allowing their transport down the length of the system. Thus each edge carries a current, even in equilibrium, and these currents are chiral; they have a unique direction determined only by the relative orientation of the magnetic field and the edge itself.

The edge currents have a number of remarkable properties. Because there is a large gap between the Fermi energy and states in the bulk, disorder cannot admix the edge states with states deeper in the sample such that current might leak away from an edge. Moreover, the chirality of the edge states guarantees that neither disorder nor roughness at the edge will backscatter edge electrons, as would happen in more standard one-dimensional systems [3]. Thus the Hall currents, and the quantized Hall effect, are very robust, provided the Fermi energy is well inside a gap between Landau levels in the bulk [2]. Clear evidence for these chiral edge currents has been found in experiment, for example, in careful studies of the Hall currents as a function of sample width [4], or by detecting the magnetic moment associated with them [5].

Edge states play a crucial role in the thermodynamics of the quantum Hall system at low temperatures: they are the only gapless excitations available in the system, at least in the clean limit. These excitations are formed by exciting electrons from just below the Fermi energy at an edge to just above [see Fig. 1a], and can have arbitrarily low energy, so they are important at any temperature, no matter how small. The actual occupation of the guiding center states for integer quantum Hall states follows a Fermi-Dirac distribution in equilibrium. This is very difficult to detect through charge transport because of the robustness of the total current, which is nearly unaffected for temperatures well below the bulk gap.

Granger and coauthors have developed a method for probing this property of the edge states. Using very small contacts laid out around the system edges, they injected electrons at a higher temperature than the system bulk, and measured the local electron temperature a distance away from the injection point. The result was that the electrons were warmer than the bulk electrons on only one side of the injection point, in the direction down which the edge current should carry the hot electrons. This constitutes one of the most direct experimental demonstrations of chiral edge currents to date.

This new method is promising in that it probes properties near the quantum Hall edge that are inaccessible in charge transport experiments. The cooling of the electrons along the edge must involve inelastic processes, including coupling with lattice phonons and localized electron states in the bulk. For the N=1 quantized Hall effect there are thought to be exotic, gapless collective excitations involving spin textures [6], to which charge transport is insensitive but heat transport may couple. There are also situations in the fractional quantized Hall regime in which several edge states may carry currents in different directions at the same edge; such counterpropagating modes have not been observed in charge transport, most likely due to the effects of disorder when states of opposite chirality are proximate to one another [3, 7]. The fractional quantum Hall edge is more complicated than its integer counterpart; it cannot be described in terms of single-particle physics, and is best described in terms of a correlated state known as a Luttinger liquid [8].
One possible explanation of the missing counterpropagating modes is that disorder restructures the edge modes into a single charged forward-propagating mode and one or more neutral counterpropagating modes [9]. Experiments that measure charge transport do not appear to be sensitive to the latter. But neutral modes carry energy, and may be detectable in thermal transport. In principle such experiments could open a long-sought window on fractional quantum Hall states. These highly correlated states have subtle topological properties that can be distinguished by their edge state structure [10]. In real disordered samples these are likely to carry only a single charged mode [7], so that detection of neutral modes will be needed to learn about their edge state structure. Such measurements could also provide insight into fractional quantum Hall states involving pairing [11] or larger clustering of electrons that are at the heart of proposals to realize topological quantum computation [12].

While the robustness of charge transport at the quantum Hall edge yields up the beauty of precise quantization of the Hall resistance, it limits what can be learned from it. The observation of chiral heat transport offers a new and potentially exciting window into the low-energy physics of the quantum Hall system.

1. G. Granger, J. P. Eisenstein, and J. L. Reno, Phys. Rev. Lett. 102, 086803 (2009).
2. B. I. Halperin, Phys. Rev. B 25, 2185 (1982).
3. T. Giamarchi, Quantum Physics in One Dimension (Oxford, New York, 2004).
4. B. E. Kane, D. C. Tsui, and G. Weimann, Phys. Rev. Lett. 59, 1353 (1987).
5. E. Yahel, D. Orgad, A. Palevski, and H. Shtrikman, Phys. Rev. Lett. 76, 2149 (1996).
6. R. Côté et al., Phys. Rev. Lett. 78, 4825 (1997).
7. C. L. Kane and M. P. A. Fisher, Phys. Rev. B 51, 13449 (1995).
8. X. G. Wen, Phys. Rev. B 41, 12838 (1990).
9. C. L. Kane, M. P. A. Fisher, and J. Polchinski, Phys. Rev. Lett. 72, 4129 (1994).
10. X. G. Wen, Adv. Phys. 44, 405 (1995).

About the Author
H. A. Fertig: Herbert Fertig is a Professor of Physics at Indiana University in Bloomington. He received his undergraduate degree from Princeton University and his Ph.D. from Harvard University under the direction of Bert Halperin in 1988. After a postdoctoral fellowship with Sankar Das Sarma at the University of Maryland, he joined the faculty of the University of Kentucky in 1991, moving to Indiana in 2004. Professor Fertig was elected a Fellow of the American Physical Society in 2001, was chosen as a Lady Davis Fellow for 2006–7, and in 2008 he was recognized as an Outstanding Referee for the Physical Review journals. His research focuses mostly on low-dimensional electron systems, with particular interests in quantum Hall effects, and recently on graphene.
Superfluid vacuum theory
From Wikipedia, the free encyclopedia

Superfluid vacuum theory (SVT), sometimes known as the BEC vacuum theory, is an approach in theoretical physics and quantum mechanics in which the fundamental physical vacuum (non-removable background) is viewed as a superfluid or as a Bose–Einstein condensate (BEC). The microscopic structure of this physical vacuum is currently unknown and is a subject of intensive study in SVT. An ultimate goal of the approach is to develop scientific models that unify quantum mechanics (which describes three of the four known fundamental interactions) with gravity, making SVT a candidate for a theory of quantum gravity that would describe all known interactions in the Universe, at both microscopic and astronomic scales, as different manifestations of the same entity, the superfluid vacuum.

The concept of a luminiferous aether as a medium sustaining electromagnetic waves was discarded after the advent of the special theory of relativity. The aether, as conceived in classical physics, leads to several contradictions; in particular, an aether having a definite velocity at each space-time point would exhibit a preferred direction. This conflicts with the relativistic requirement that all directions within a light cone are equivalent. However, as early as 1951, P. A. M. Dirac published two papers in which he pointed out that we should take into account quantum fluctuations in the flow of the aether.[1][2] His arguments involve the application of the uncertainty principle to the velocity of the aether at any space-time point, implying that the velocity will not be a well-defined quantity; in fact, it will be distributed over various possible values. At best, one could represent the aether by a wave function representing the perfect vacuum state, for which all aether velocities are equally probable. These works can be regarded as the birth point of the theory.

Inspired by Dirac's ideas, K. P. Sinha, C. Sivaram and E. C. G. Sudarshan published in 1975 a series of papers that suggested a new model for the aether, according to which it is a superfluid state of fermion and anti-fermion pairs, describable by a macroscopic wave function.[3][4][5] They noted that particle-like small fluctuations of the superfluid background obey the Lorentz symmetry even if the superfluid itself is non-relativistic. Nevertheless, they decided to treat the superfluid as relativistic matter, by putting it into the stress–energy tensor of the Einstein field equations. This did not allow them to describe relativistic gravity as a small fluctuation of the superfluid vacuum, as subsequent authors have noted.

Since then, several theories have been proposed within the SVT framework. They differ in how the structure and properties of the background superfluid are described. In the absence of observational data which would rule out some of them, these theories are being pursued independently.

Relation to other concepts and theories[edit]

Lorentz and Galilean symmetries[edit]

According to the approach, the background superfluid is assumed to be essentially non-relativistic, whereas the Lorentz symmetry is not an exact symmetry of Nature but rather an approximate description valid only for small fluctuations.
An observer who resides inside such a vacuum and is capable of creating or measuring the small fluctuations would observe them as relativistic objects, unless their energy and momentum are sufficiently high to make the Lorentz-breaking corrections detectable.[6] If the energies and momenta are below the excitation threshold, then the superfluid background behaves like an ideal fluid; therefore, Michelson–Morley-type experiments would observe no drag force from such an aether.[1][2]

Further, in the theory of relativity the Galilean symmetry (pertinent to our macroscopic non-relativistic world) arises as an approximate one, valid when particles' velocities are small compared to the speed of light in vacuum. In SVT one does not need to go through Lorentz symmetry to obtain the Galilean one: the dispersion relations of most non-relativistic superfluids are known to exhibit non-relativistic behavior at large momenta.[7][8][9] To summarize, the fluctuations of the vacuum superfluid behave like relativistic objects at "small"[nb 1] momenta (a.k.a. the "phononic limit") and like non-relativistic ones at large momenta. The yet-unknown nontrivial physics is believed to be located somewhere between these two regimes.

Relativistic quantum field theory[edit]

In relativistic quantum field theory the physical vacuum is also assumed to be some sort of non-trivial medium to which one can associate a certain energy. This is because the concept of absolutely empty space (or "mathematical vacuum") contradicts the postulates of quantum mechanics. According to QFT, even in the absence of real particles the background is always filled by pairs of virtual particles that are constantly created and annihilated. However, a direct attempt to describe such a medium leads to the so-called ultraviolet divergences. In some QFT models, such as quantum electrodynamics, these problems can be "solved" using the renormalization technique, namely, replacing the diverging physical values by their experimentally measured values. In other theories, such as quantum general relativity, this trick does not work, and a reliable perturbation theory cannot be constructed. According to SVT, this is because in the high-energy ("ultraviolet") regime the Lorentz symmetry starts failing, so theories that depend on it cannot be regarded as valid for all scales of energies and momenta. Correspondingly, while Lorentz-symmetric quantum field models are obviously a good approximation below the vacuum-energy threshold, in its close vicinity the relativistic description becomes more and more "effective" and less and less natural, since one will need to adjust the expressions for the covariant field-theoretical actions by hand.

Curved space-time[edit]

According to general relativity, gravitational interaction is described in terms of space-time curvature, using the mathematical formalism of Riemannian geometry. This is supported by numerous experiments and observations in the regime of low energies. However, attempts to quantize general relativity have led to various severe problems; therefore, the microscopic structure of gravity is still ill-defined. There may be a fundamental reason for this: the degrees of freedom on which general relativity is based may be only approximate and effective.
The question of whether general relativity is an effective theory has been raised for a long time.[10] According to SVT, the curved space-time arises as the small-amplitude collective excitation mode of the non-relativistic background condensate.[6][11] The mathematical description of this is similar to the fluid–gravity analogy which is also used in analog gravity models.[12] Thus, relativistic gravity is essentially a long-wavelength theory of the collective modes whose amplitude is small compared to that of the background. Outside this requirement, the curved-space description of gravity in terms of Riemannian geometry becomes incomplete or ill-defined.

Cosmological constant[edit]

The notion of the cosmological constant makes sense only in a relativistic theory; therefore, within the SVT framework this constant can refer at most to the energy of small fluctuations of the vacuum above a background value, but not to the energy of the vacuum itself.[13] Thus, in SVT this constant does not have any fundamental physical meaning, and related problems, such as the vacuum catastrophe, simply do not occur in the first place.

Gravitational waves and gravitons[edit]

According to general relativity, the conventional gravitational wave is: (1) a small fluctuation of curved spacetime which (2) has been separated from its source and propagates independently. Superfluid vacuum theory brings into question the possibility that a relativistic object possessing both of these properties exists in nature.[11] Indeed, according to the approach, the curved spacetime itself is a small collective excitation of the superfluid background; therefore, property (1) would mean that the graviton is in fact a "small fluctuation of a small fluctuation", which does not look like a physically robust concept (as if somebody tried to introduce small fluctuations inside a phonon, for instance). As a result, it may be not just a coincidence that in general relativity the gravitational field alone has no well-defined stress–energy tensor, only a pseudotensor one.[14] Therefore, property (2) cannot be completely justified in a theory with exact Lorentz symmetry, which general relativity is. Nevertheless, SVT does not a priori forbid the existence of non-localized wave-like excitations of the superfluid background, which might be responsible for the astrophysical phenomena currently attributed to gravitational waves, such as the Hulse–Taylor binary. However, such excitations cannot be correctly described within the framework of a fully relativistic theory.

Mass generation and Higgs boson[edit]

The Higgs boson is the spin-0 particle that was introduced in electroweak theory to give mass to the weak bosons. The origin of the mass of the Higgs boson itself is not explained by electroweak theory. Instead, this mass is introduced as a free parameter by means of the Higgs potential, which thus makes it yet another free parameter of the Standard Model.[15] Within the framework of the Standard Model (or its extensions), theoretical estimates of this parameter's value are possible only indirectly, and the results differ from each other significantly.[16] Thus, the usage of the Higgs boson (or any other elementary particle with predefined mass) alone is not the most fundamental solution of the mass generation problem but only its reformulation ad infinitum.
Another known issue of the Glashow–Weinberg–Salam model is the wrong sign of the mass term in the (unbroken) Higgs sector for energies above the symmetry-breaking scale.[nb 2] While SVT does not explicitly forbid the existence of the electroweak Higgs particle, it has its own idea of the fundamental mass generation mechanism: elementary particles acquire mass due to the interaction with the vacuum condensate, similarly to the gap generation mechanism in superconductors or superfluids.[11][17] Although this idea is not entirely new (one could recall the relativistic Coleman–Weinberg approach[18]), SVT gives meaning to the symmetry-breaking relativistic scalar field as describing small fluctuations of the background superfluid, which can be interpreted as an elementary particle only under certain conditions.[19] In general, two scenarios are allowed:

• Higgs boson exists: in this case SVT provides the mass generation mechanism which underlies the electroweak one and explains the origin of the mass of the Higgs boson itself;
• Higgs boson does not exist: then the weak bosons acquire mass by directly interacting with the vacuum condensate.

Thus, the Higgs boson, even if it exists, would be a by-product of the fundamental mass generation phenomenon rather than its cause.[19] Also, some versions of SVT favor a wave equation based on the logarithmic potential rather than on the quartic one. The former potential has not only the Mexican-hat shape necessary for spontaneous symmetry breaking, but also some other features which make it more suitable for the description of the vacuum.

Logarithmic BEC vacuum theory[edit]

In this model the physical vacuum is conjectured to be a strongly-correlated quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode, whereas relativistic elementary particles can be described by the particle-like modes in the limit of low energies and momenta.[17] The essential difference of this theory from others is that in the logarithmic superfluid the maximal velocity of fluctuations is constant in the leading (classical) order. This allows one to fully recover the relativity postulates in the "phononic" (linearized) limit.[11]

The proposed theory has many observational consequences. They are based on the fact that at high energies and momenta the behavior of the particle-like modes eventually becomes distinct from the relativistic one: they can reach the speed-of-light limit at finite energy.[20] Among the other predicted effects are superluminal propagation and vacuum Cherenkov radiation.[21] The theory advocates a mass generation mechanism which is supposed to replace or alter the electroweak Higgs one. It was shown that masses of elementary particles can arise as a result of interaction with the superfluid vacuum, similarly to the gap generation mechanism in superconductors.[11][17] For instance, a photon propagating in the average interstellar vacuum acquires a tiny mass, which is estimated to be about 10⁻³⁵ electronvolt. One can also derive an effective potential for the Higgs sector which is different from the one used in the Glashow–Weinberg–Salam model, yet it yields mass generation and is free of the imaginary-mass problem[nb 2] appearing in the conventional Higgs potential.[19]
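The crossover described above — relativistic-looking, phonon-like fluctuations at small momenta turning non-relativistic at large momenta — can be illustrated with a generic Bogoliubov-type dispersion E(p) = [(cp)² + (p²/2m)²]^1/2, the standard form for a weakly interacting condensate. The following is a sketch of that generic behavior only, not the specific dispersion of the logarithmic superfluid, with arbitrary illustrative parameters:

import math

c, m = 1.0, 1.0  # illustrative units: c = small-fluctuation sound speed, m = constituent mass

def bogoliubov_energy(p):
    # Generic Bogoliubov-type dispersion: linear ("phononic") for p << m*c,
    # quadratic (non-relativistic, p^2/2m) for p >> m*c.
    return math.sqrt((c * p) ** 2 + (p ** 2 / (2 * m)) ** 2)

for p in (0.01, 0.1, 1.0, 10.0, 100.0):
    e = bogoliubov_energy(p)
    print(f"p={p:7.2f}   E/(c*p)={e / (c * p):9.4f}   E/(p^2/2m)={e / (p**2 / (2 * m)):9.4f}")

The first ratio tends to 1 at small momenta (the "phononic limit"), while the second tends to 1 at large momenta, with the nontrivial physics sitting in between, as the article notes.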
Notes[edit]
1. ^ The term "small" refers here to the linearized limit; in practice the values of these momenta may not be small at all.
2. ^ a b If one expands the Higgs potential, then the coefficient of the quadratic term appears to be negative. This coefficient has the physical meaning of the squared mass of a scalar particle.

References[edit]
1. ^ a b Dirac, P. A. M. (24 November 1951). "Is there an Æther?". Nature. 168 (4282): 906–907. Bibcode:1951Natur.168..906D. doi:10.1038/168906a0.
2. ^ a b Dirac, P. A. M. (26 April 1952). "Is there an Æther?". Nature. 169 (4304): 702. Bibcode:1952Natur.169..702D. doi:10.1038/169702b0.
3. ^ K. P. Sinha, C. Sivaram, E. C. G. Sudarshan, Found. Phys. 6, 65 (1976).
4. ^ K. P. Sinha, C. Sivaram, E. C. G. Sudarshan, Found. Phys. 6, 717 (1976).
5. ^ K. P. Sinha and E. C. G. Sudarshan, Found. Phys. 8, 823 (1978).
6. ^ a b G. E. Volovik, The Universe in a Helium Droplet, Int. Ser. Monogr. Phys. 117 (2003) 1–507.
7. ^ N. N. Bogoliubov, Izv. Acad. Nauk USSR 11, 77 (1947).
8. ^ N. N. Bogoliubov, J. Phys. 11, 23 (1947).
10. ^ A. D. Sakharov, Sov. Phys. Dokl. 12, 1040 (1968). This paper was reprinted in Gen. Rel. Grav. 32, 365 (2000) and commented on in: M. Visser, Mod. Phys. Lett. A 17, 977 (2002).
11. ^ a b c d e K. G. Zloshchastiev, "Spontaneous symmetry breaking and mass generation as built-in phenomena in logarithmic nonlinear quantum theory", Acta Phys. Polon. B 42 (2011) 261–292. arXiv:0912.4139.
12. ^ M. Novello, M. Visser, G. Volovik, Artificial Black Holes, World Scientific, River Edge, USA, 2002, p. 391.
13. ^ G. E. Volovik, Int. J. Mod. Phys. D 15, 1987 (2006). arXiv:gr-qc/0604062.
14. ^ L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields (1951), Pergamon Press, chapter 11.96.
15. ^ V. A. Bednyakov, N. D. Giokaris and A. V. Bednyakov, Phys. Part. Nucl. 39 (2008) 13–36. arXiv:hep-ph/0703280.
16. ^ B. Schrempp and M. Wimmer, Prog. Part. Nucl. Phys. 37 (1996) 1–90. arXiv:hep-ph/9606386.
17. ^ a b c A. V. Avdeenkov and K. G. Zloshchastiev, "Quantum Bose liquids with logarithmic nonlinearity: Self-sustainability and emergence of spatial extent", J. Phys. B: At. Mol. Opt. Phys. 44 (2011) 195303. arXiv:1108.0847.
18. ^ S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7, 1888 (1973).
19. ^ a b c V. Dzhunushaliev and K. G. Zloshchastiev (2013). "Singularity-free model of electric charge in physical vacuum: Non-zero spatial extent and mass generation". Cent. Eur. J. Phys. 11 (3): 325–335. Bibcode:2013CEJPh..11..325D. arXiv:1204.6380. doi:10.2478/s11534-012-0159-z.
20. ^ K. G. Zloshchastiev, "Logarithmic nonlinearity in theories of quantum gravity: Origin of time and observational consequences", Grav. Cosmol. 16 (2010) 288–297. arXiv:0906.4282.
21. ^ K. G. Zloshchastiev, "Vacuum Cherenkov effect in logarithmic nonlinear quantum theory", Phys. Lett. A 375 (2011) 2305–2308. arXiv:1003.0657.
Saturday, April 30, 2016

Is Civility Lost On The Internet?

Maintaining a blog makes me a potential target, especially considering that I have an international readership and currently run over 50,000 hits per month. Over the years I've had some pretty hateful things said about me and my opinions, and I've allowed the vast majority of those comments to be posted as I believe in free discourse and open speech. I've only deleted comments a handful of times even if they strongly opposed me, and those deletions were because of inappropriate language (kids read this blog) or racist comments.

One of the points of contention in the past has been the fact that very few vets take payment plans. This was a big discussion on a post I made back in 2012, though it has cropped up in some other blogs I've written. The 2012 post had several rather strong comments, including some rather hateful ones directed at me and others of my profession. I ended up closing it to further discussion because of the direction the conversations were taking.

A few weeks ago I received a brief email from m_michaels17. Here it is in its entirety.

You should not be a vet. Your views are twisted and vile. Payment plans don't work? They work fine at my clinic. They work in every other business and practice. There are two professions that only except cash... drug dealer and veterinarian.

The personal attack didn't really bother me too much. I have a pretty thick skin and realize that people like this are in the wrong. In fact, it somewhat amused me. What did bother me was that this person felt that it was worth their time to send a personal email comparing me and my whole profession to drug dealers and making a rather hostile attack. Who feels this strongly, and believes that it's okay to spend their time reaching out and directly lambasting a person whom they do not know?

Unfortunately I see this far too often on the internet. It is extremely easy to post something hateful and disrespectful from the anonymity of your own home, things that the same person would likely never say in person. Vehemence and polarization are becoming the rule in our modern society, and I think it has trended this way in large part because of the internet. These attitudes are especially prevalent in the current US election cycle. People are losing friendships because of Facebook posts. Democrats are demonizing Republicans, and Republicans are doing the same back across the aisle. Some of this is done at protests and rallies, but much of it is conducted online through blogs, Twitter, Facebook, and other social media. It is really easy to compare your opponent to Hitler and call his supporters vile names when you aren't face-to-face with them.

Now don't get me wrong. Hateful, irresponsible comments did not begin in the 21st century. One of the dirtiest political campaigns in US history was between two founding fathers, Adams and Jefferson, in 1800. There were some rather incredible statements made about each candidate, none of which would be said in 2016 (seriously, it's rather interesting to read about). But I think that the internet has made these comments easier to find and faster to spread.

Whatever happened to civil debate? What happened to disagreement without animosity? Why is it that so many people go on personal attacks when they find something they don't like? And why do so many people go off on rants without actually knowing the facts or doing any research?
For example, let's look at m_michaels17 again. Let's break down those comments with reason and a lack of vitriol.

You should not be a vet.

Why not? Because you disagree with me? Because you think my viewpoints are wrong? I have a successful practice, my clients often comment about how I care about them and their pets (we do surveys), I am good at what I do, and I help families every day and have saved many lives over my career. I have volunteered my time to go to schools and children's museums to educate kids about pet care and veterinary medicine. I have mentored numerous veterinary students and newly graduated vets. But because I don't take payment plans I should get out of the profession. Right?

Your views are twisted and vile.

So it's twisted and vile to not accept a credit risk when companies who do it professionally won't extend credit to a person? It's twisted and vile to expect payment from clients for the care I provide so I can pay my staff, order drugs, pay utilities, fix equipment, and support my family? It's twisted and vile to expect people to give back for services rendered? Here we see a great example of a logical fallacy called argumentum ad hominem, which happens when someone avoids the actual topic by directing an attack at their opponent. Rather than presenting any evidence that payment plans work in veterinary medicine, or countering my argument that they don't work, m_michaels17 simply attacks me directly.

Payment plans don't work? They work fine at my clinic.

Congratulations on your clinic being one of the very, very few that actually does this successfully! Again, we have a logical fallacy. Forget the fact that I have 18 years of experience as a vet and a total of 32 years in the profession. Forget the fact that most veterinary consultants recommend against in-house payment plans. Forget the fact that most vets who have done payment plans say that they rarely get paid in full on these bills. Because it works at the clinic m_michaels17 goes to, it must be able to work at every veterinary practice. I'm also skeptical that this clinic does well with their payment plans. However, because I don't know the facts on that situation I'm going to avoid making assumptions.

They work in every other business and practice. There are two professions that only except cash... drug dealer and veterinarian.

Well, we certainly accept more than cash. We also accept Mastercard, Visa, Discover, American Express, and Care Credit. Every single one of those is a credit system where a client can repay the debt over time. Hey, there is your payment plan! See, we do accept payment plans! We just let the client pay through someone else's credit system, and let that company accept the risks. If someone cannot qualify for a credit line through a company that does this professionally, why should we be responsible for taking that risk? If a person can't get a credit card or Care Credit, they are a very high risk.

Oh, and every other profession works on payment plans? So when you go to the grocery store and want to purchase $400 worth of groceries, that store will allow you to pay them back over a few months. Right? What about Walmart, Olive Garden, and your local jeweler? When you go to your pharmacy to pick up a $300 prescription they don't expect full payment, do they? I'm sure a band at a wedding accepts payment over the course of a year after they've played at a ceremony. So only drug dealers and veterinarians accept up-front payment? No other profession does? When I recently went to my ophthalmologist they certainly expected full payment for my new glasses before they would order them.
And I think that most reasonable people would realize that the businesses and professions that only accept full payment far outnumber those who take payments. Some stores may allow layaway, though that is not common, and even if this is the case the person must pay in full before they pick up their product.

If m_michaels17 had wanted a discussion, then they would have been able to break down my viewpoint as I did theirs. But they weren't interested in doing so. They didn't even want to try and convince me of the error of my ways. All they were concerned about was telling me how horrible and evil I am personally, and then lumping my entire profession into this basket. They are a perfect example of what is so wrong about communication on the internet.

Where is civility nowadays? On the internet it's certainly hard to come by. But it doesn't have to be.

Be sure to look for my next post, where I get into this issue with m_michaels17 in more detail.

Wednesday, April 27, 2016

When Helping People Gets Unethical

I came across an interesting article in one of my veterinary magazines, DVM 360. You can read the full article here, but let me quickly summarize it for you.

A relief vet was working at a long-established clinic. He discovered that the owner would do surgeries at no cost to needy clients, but was using recently expired drugs to do so. The relief vet (Dr. Han) refused to do that and talked to the practice owner (Dr. Keets) about the issue. Here are some quotes from that article.

Dr. Han was diplomatic—after all, he wanted to maintain a good working relationship with Dr. Keets. He said that he understood that Dr. Keets was well-intentioned but that substandard care of indigent patients was unacceptable. Dr. Keets replied that the care was not substandard. All his patients were monitored during and after surgery. If any animals showed signs of pain or inadequate anesthesia, this was addressed immediately. He went on to say that offering charitable services required realistic monetary considerations. If he could not use recently outdated medications, he could not afford to offer these much-needed services. He went on to say that Dr. Han traveled from practice to practice assisting veterinarians and pets on a short-term basis. He, on the other hand, had a responsibility to a clientele that day-in and day-out needed services they could not afford. As a result, he had to be creative in order to assist them. A bit frustrated, Dr. Han finally said that Dr. Keets' practices were a violation of practice statutes. Dr. Keets' reply? "I've never had a complaint, and I have scores of grateful pets and pet owners."

This is a difficult situation. Dr. Keets was doing what he thought was best for the community and was sincerely trying to help people out. To be honest, drugs don't suddenly go bad or become dangerous at midnight on the day of expiration. And most expired drugs would lose efficacy rather than become toxic. However, those expiration dates are there for a reason, and they need to be followed.

I agree with Dr. Keets where he said "offering charitable services required realistic monetary considerations." That's very true. Veterinary practices can't routinely give away services for free and still expect to stay in business while maintaining high medical standards. If he didn't use those medications, "he could not afford to offer these much-needed services". Again, that's true. The only way he could afford to give away these surgeries was to use drugs that were no longer valid.
If he used drugs that were still within their expiration date, he would have lost money and not been able to provide these services. I've written many times about how it isn't realistic to expect veterinarians to give away services, especially surgeries, and not have their business suffer. So bravo to Dr. Keets for recognizing this.

However, was what he did ethical? No. And it wasn't even legal. His reply of "I've never had a complaint, and I have scores of grateful pets and pet owners" is not a good defense. It is an attempt at justifying an unethical behavior. Just because someone doesn't complain about a behavior doesn't make that behavior right. Here is an analysis from the author of the article.

It is absolutely true that the use of expired medications is a violation of the veterinary practice act in every state. Dr. Keets was aware of this but chose to help those in need and also manage any complications that may have arisen from the use of the expired medications. There is no doubt Dr. Keets was well-intentioned. But he could have solved his medication issues in other ways. Advising vendors of his charitable efforts and asking them to participate would have been an option, as well as soliciting his more affluent clients and enlisting them in an effort to help his good works. Rules and laws exist to prevent abuse and protect our patients. Dr. Keets gets an "A" for effort but does not pass the profession's ethical standards test.

Should a veterinarian violate ethical standards and state laws in order to help people? Personally I don't think so. While I'm absolutely not a "big government" sort of person, I also believe in trying to uphold the letter and spirit of the law. It is wrong to break the law because it seems convenient or helpful to do so. It also puts that doctor on very shaky ground with his license and business. Let's imagine for a moment that something went wrong when he was using these expired drugs. The pet had severe complications and died, in part because the drugs were not effective and they were not able to administer proper medications in time. The client learns that expired drugs were used and brings a lawsuit against Dr. Keets, as well as reporting him to the state board. Because he knowingly used these drugs against state law he would have no chance of winning the lawsuit and would be facing big fines from the state veterinary board, and possibly even be in danger of losing his license to practice.

To me it is not worth risking my ability to work and support my clients and family. While I admire Dr. Keets' desire to help people, he is going about it the wrong way. Hopefully this gives pet owners some insight into the challenges of trying to help those who have few funds and are in need. I'm not saying that vets should never make the attempt, but they need to make sure they are following legal and ethical standards. If they have to break the law or violate ethics in order to help people, they shouldn't do so.

Sunday, April 24, 2016

"But He's Still Hungry"

Every day I have discussions with my clients about how much food to feed their pet. Something that often confuses people is that they are following the amount recommended on the package, but their dog or cat still wants more. "They're still hungry," the client says. And because we don't want our pets to be hungry, people tend to give them more food.

Stop and think about how many times you, the pet owner, eat when you're not hungry.
When you go to the movies, are you really so hungry that you need that popcorn and candy? When you're sitting at home watching Netflix and munching on that bag of chips, is it really because you have missed a meal? When you've finished that meal at the restaurant, are you looking at the dessert menu because you feel hunger pangs?

People and animals eat for two reasons. First, they feel the sensation of hunger. Second, they like the taste. I'm sure that every person reading this blog has eaten something when they weren't truly hungry but just had a craving for that particular food or snack. And I'm sure every person has continued to eat until far past the point of hunger sensations having stopped. Have you ever reached the end of a meal and been so full that you think back and realize that you shouldn't have eaten so much? But you didn't think about that until after you were finished and your pants were too tight. I've been in every one of these situations myself, so I speak from experience.

Animals are often the same way. Yes, they eat because they are hungry. But when that hunger has passed they will also eat because they like the taste of the food. Since they are continuing to seek out food, that can make the owners mistakenly think that they are actually hungry, when in fact they may feel quite full. There is also an instinct that is retained in many pets where they will gorge themselves, since they don't have a good long-term memory to know that they will get fed again tomorrow.

You cannot determine how much food to give a dog or cat based on whether or not they will continue to eat it. If you try this method, you will end up with obese, unhealthy pets. Instead, use the package directions as a starting point, and then consult with your vet on what your pet's body condition score is. A highly active pet may need more than what is on the package. Dogs and cats who are basically couch potatoes may need considerably less than that amount. When I see a pet I look at their proportions and how much body fat they have on them. If they are normal then I will let the owner know that they are feeding the right amount, even if the pet still acts "hungry". If they are overweight I'll tell them to switch to a lower calorie food and decrease the amount fed, even if they are following the package and the pet wants to eat more. Most people don't realize how many overweight or obese pets we veterinarians see who always seem "hungry". If they weren't getting enough food, they wouldn't be overweight!

Pay attention to your pet and actually measure out the amount of food you are giving. It's going to be much better for your pet's health.

Thursday, April 21, 2016

Critical Thinking About Pet Protector

Recently a reader asked my opinion about Pet Protector, a product designed for protecting against fleas and ticks. I had never heard of it so I looked into it a bit. From what I can see there seems to be a lot of rather bogus science behind it.

I went to the company's website to try and learn about it. The product is a metal disc that is worn on a dog's or cat's collar, and which gives protection against fleas and ticks for four years. All without chemicals. Sounds pretty amazing, right? Here are some quotes from the company on how it works.

The Pet Protector Disc uses advanced technology to emit Magnetic and Scalar waves, creating a protective shield around your pets' body and repelling all external parasites.
Produces Scalar waves and creates an impenetrable, protective shield around the animal's body

Officially tested and proven

The Pet Protector Disc is made of high quality steel alloys. It is charged with a specific combination of Magnetic and Scalar waves, which after being triggered by the animal's movement (blood circulation), produce an invisible energy field around the entire animal's body. Pet Protector's Scalar waves are totally harmless to people and animals (they go absolutely undetected by humans and animals alike) and they are only effective against external parasites, repelling them from the shielded area. Therefore, the Pet Protector Disc acts preventatively; it drives fleas, ticks and mosquitoes away before they get the chance to infest your pet, versus all other anti-parasite products, which kill external parasites after they have already infested your pet.

Now that sounds like a pretty high-tech product, doesn't it? And not having to use chemicals is so much better!

But let's not take this at face value or even just look at the testimonials on the website (which are always hand-picked for the best ones). Let's spend some time looking into this product and its claims. And above all, let's use actual critical thinking (as we always should).

First, what the heck are "Scalar waves"? I did a quick Google search and learned a few things. These kinds of waves have been researched since around the time of Nikola Tesla, and nowadays are firmly in the camp of pseudoscience. When you find people supporting the idea of scalar waves you find them talking about conspiracy theories, ultimate healing, super weapons, weather control, and similar crackpot ideas. Here are some choice quotes from some forums and websites.

In physics, a quantity described as "scalar" only contains information about its magnitude. In contrast, a "vector" quantity contains information both about its magnitude and about its direction. By this definition, a "scalar wave" in physics would be defined as any solution to a "scalar wave equation". In reality, this definition is far too general to be useful, and as a result the term "scalar wave" is used exclusively by cranks and peddlers of woo. The main current proponent of scalar wave pseudophysics is zero-point energy advocate Thomas E. Bearden, who has concocted an entire pseudoscientific "scalar field theory" unrelated to anything in actual physics of that name. Bearden was pushing the medical effects of scalar waves as early as 1991. He specifically attributed their powers to cure AIDS, cancer and genetic diseases to their quantum effects and their use in "engineering the Schrödinger equation." They are also useful in mind control.

What is a "scalar wave" exactly? Scalar wave (hereafter SW) is just another name for a "longitudinal" wave. The term "scalar" is sometimes used instead because the hypothetical source of these waves is thought to be a "scalar field" of some kind, similar to the Higgs Field for example. Because the concept of an all-pervasive, material Ether was discarded by most scientists, the thought of vortex-like electric and/or magnetic waves existing in free space, without the support of a viscous medium, was thought to be impossible. However later experiments carried out by Dayton Miller, Paul Sagnac, E.W. Silvertooth, and others have contradicted the findings of Michelson & Morley.
More recently Italian Mathematician-Physicist Daniele Funaro, American Physicist-Systems Theorist Paul LaViolette, and British Physicist Harold Aspden have all conceived of (and mathematically formulated) models for a free space Ether that is dynamic, fluctuating, self-organizing, and allows for the formation & propagation of SW/LW.

I try to imagine what physics would be like without mathematics. I think it would be like this "scalar wave" business. A lot of guys coming up with ideas and swapping lies 'cause math is hard. A scalar is just a number. A wave is a repetitive variation in that number. For example the altitude of each point in Wisconsin forms a scalar wave. Or sound waves, all you can hear is the intensity of the superimposed tones; the intensity is just a number (yeah, maybe a complex number) and it varies repetitively (i.e. the cycles of the tones).

You've asked about Bearden before and the answer is the same: while Greer is a second order crackpot, Bearden may well be certifiably insane - he is, at the very least, a liar and a fraud.

Tom Bearden is a notorious crackpot. Has been for years. References available upon request. I kinda hate to go through this exercise again, but, if you are really interested in facts, I don't mind. He is a fraud, charlatan and temple priest of bad science. I hope I am not sugar coating this too much.

It seems that most reputable physicists don't believe in the various scalar wave applications that are touted by the fringes of science and medicine. So to me this is one of the biggest strikes against Pet Protector, as it is the primary reason why it is supposed to work.

But for a moment let's assume that scalar waves really do exist in the way that they're stated. Would this product work, and is it backed up by studies?

Let's first look at one of the primary statements made by Pet Protector: Pet Protector's Scalar waves are totally harmless to people and animals (they go absolutely undetected by humans and animals alike) and they are only effective against external parasites, repelling them from the shielded area. Does that make scientific sense? No, not really. I can find no information on the website on exactly why it affects parasites but not the host. With typical topical chemicals, a product works by affecting neurotransmitters found in insects and arachnids that are not found in mammals. They are considered safe for most pets because they affect things that the hosts don't have. I can't find anything about scalar waves that would cause them to be unnoticed by dogs and cats but not fleas or ticks.

Here is more from the website:

1. The Pet Protector Disc does not have the ability to eliminate existing parasites or their larvae
2. The Pet Protector Disc can only repel new parasites from inhabiting your pet
3. The Pet Protector Disc needs 7 to 20 days (depending on the pet's size) to create a strong enough Scalar Wave field around your pet's whole body, protecting it from fleas and ticks successfully.

This is what I find interesting. The premise behind the disc is that it actually and literally creates an invisible force-field around your pet. Stop and say that out loud. It sounds rather odd, doesn't it? Somehow the disc creates an invisible bubble that doesn't actually touch the pet. If it did, it would repel the parasites that already exist on the pet. How does the disc do that? Electromagnetic waves are supposed to emanate in a straight line from the origin source, and should spread out in all directions.
Magnets and gravity can change the direction of these waves, but you have to have pretty powerful equipment to make a noticeable difference. Somehow a disc that looks like an ID tag has the power and ability to not pass through the pet but instead make a sphere around it. Do you realize how strange that sounds? And there is nothing on the website that gives details on how this might actually happen, or links to the science behind it. You basically just have to trust the company that what they say is true.

Okay, so now let's assume that a product like this actually works and there are ones on the market that perform exactly as expected. Does Pet Protector show evidence of actually repelling parasites? For this we can go to the "Official Product Testing" part of the website.

The study was conducted over 4 years in the US, Argentina, Spain, and Australia. The dogs and cats were selected randomly and were in homes with owners. There were 22 pets selected in each geographical location, for a total of 88 over the study. The animals were determined to be "100% free of any external parasites", had the disc attached to their collar, and were isolated for 15 days to give the disc time to fully activate. On the 16th day they were released back to their normal environment and the owners were told not to do anything different. The pets were examined weekly for four years, with only an occasional tick found during that entire time.

All of that sounds good, and if you look at the study document you'll see "Official" stamped in the corner of every page. It certainly sounds convincing and scientific. But this is far from being a true study of efficacy. There are numerous unanswered questions, and this so-called study would be laughed at by any peer-reviewed scientific journal.

• How were the pets determined to be parasite-free? What methods were used and what was the expertise level of those doing the exams?
• What were the baseline parasite levels in the various locations? I don't know about the non-US locations, but in America the study was performed in California, which has one of the lower rates of fleas and mosquitoes in the country. If they wanted to do a real study they should have come to the southeastern states. Here in Georgia I never have a month go by where I don't see pets with fleas, even in the dead of winter.
• Did the lifestyles of the pets allow them access to parasites? A cat that is strictly indoors is never going to have a tick, so making a claim of "see, our product prevented ticks" is rather pointless. Dogs that are hunting or camping are going to have a higher risk of fleas and ticks than a toy breed that only goes outside a few minutes per day to use the potty. A pet owner who is doing routine treatment of the yard against insects is going to have a lower risk of fleas and ticks than one who isn't.
• Did any of the pets chosen have a history of fleas or ticks being seen? Even here in Georgia I have dog owners who aren't using any form of flea or tick control and yet we never see those parasites on their pets. I routinely have clients who say "Oh, I've never seen any fleas so I don't need prevention", and despite my skepticism I can't find a single flea on the pet. If one of these clients was using a Pet Protector the company would say "see, no parasites!" Yet the pet never had them in the past, so why would they have them now?
• Who was doing the weekly exams? If it was the owners, I don't believe them.
I've had many, many situations opposite to the one I just mentioned, where owners insist there are no fleas at all, yet I glance at the pet and find a half dozen very easily. Pet owners may not know how to examine the pet, may miss something, or may not easily recognize a parasite.

• Where are the controls? Here is one of the biggest problems with the Pet Protector data. There are no controls. If we wanted to test true efficacy we would have dogs and cats of similar breed in the same environment who used just a metal tag rather than the Pet Protector disc, and the owners wouldn't know which was which. Having this kind of "blind" study with controls removes bias from the people doing the routine exams. You also have more validity in the data, because if the control animals had fleas but the study ones didn't, you could say that the product was protective. But if the control animals also didn't have any fleas, then the lack of parasites had nothing to do with the product. Pet Protector simply doesn't have this information.

Do you know how most flea and tick products are tested? It is generally in a laboratory with research animals. They are certified parasite-free by the researchers, who are usually specialists in parasitology. A specific number of fleas or ticks are placed on the pet (usually 100), and the same number are placed on every animal. Counts are regularly made to see how many of those parasites placed are remaining, as well as the numbers on the control animals (who get the same parasites but not the product). In some studies a new set of parasites is placed on the pet periodically to determine the duration of efficacy. Can you see how this method is much more precise and valid than the one used by Pet Protector?

Hopefully you can see the incredibly numerous things wrong with this product, from the pseudoscience premise to the lack of anything that could be called a true scientific study. There are many statements made by the company and their "study", none of which have solid science behind them. While this product is almost certainly harmless, I can't believe that it would have any real efficacy, and it would be a waste of the consumer's money. I would not recommend buying it.

Monday, April 18, 2016

Parent, Guardian, Or Owner?

I recently read an article on the Veterinary Information Network questioning current terminology such as "pet parent" for those who have animals in their home. It was something I hadn't given much thought about, but Dr. Chiara Switzer made some interesting points. VIN is a subscription-only service so I can't link to the full article, but here are some quotes from it.

The terms "pet parent" and "fur baby" that are so in vogue these days bring the division to the fore. Some people love the terms, referring to themselves as the mom or dad of their pet and rejecting the concept of being owners or even caregivers of their beloved animals. Other people find the term offensive because of its implication that animals would have equal status to human beings, or the suggestion that they are unemotional if they don't consider their dog or cat to be like their child. The division can intensify if one side tries to impose its philosophy on the other; for example, if people who consider their pets as children criticize as uncaring those who don't treat their animals as family.
There’s also something that strikes me as rather manipulative about it — when someone tells me that I became a “pet parent” when I got my puppy, it seems to me as if they are trying to define the relationship they think I should have with my dog, rather than the relationship I want to have with my dog (let alone the relationship my dog will choose to have with me, which unfortunately doesn’t always match our plans). I also wonder if those who call themselves “pet parents” are just using a trendy term, or whether they truly have the same relationship with their pet as they do (or did, or will) with their children. Or do they imagine that’s the relationship they would have had with their children, had they had any? I hope not — I think it does a disservice to animals to treat them like children, and it does a disservice to children to treat them like pets.

Personally, I like the term “guardian.” It implies looking after something living and sentient, specifying my responsibility without specifying an emotional relationship. I do know that I’m not my pet's parent, even though I care for my pup and want to help her to grow up well, happy and safe. My relationship with my pet might change as we each age and grow, but she’ll never be my fur baby and I’ll never be her mom.

I'm old-fashioned enough that I still refer to my clients as "owners".  This is the term that has seen the most use over my lifetime and what I've become accustomed to.  I think that most of my clients are used to that term and don't think about it otherwise.  The term stems from the fact that in the US animals are considered a special form of property, as if a couch, TV, or car were alive.  For better or worse most laws are based around this issue of pets as property, hence the tendency to say "owners".

But does that really properly classify or define the relationship?  Probably not.  A century ago people looked at dogs and cats more like they did livestock, though there has been a long tradition of keeping them as pets rather than as working animals.  Nowadays people have much different relationships with their pets, letting them sleep in their beds, buying clothing for them, taking them to "day care" and play dates, and otherwise treating them like a special kind of child or part of the family.  I'll admit that I do that in my own home to some degree.  There are really no limits on where our pets sleep, and we snuggle with them every day.  However, I don't think I'd consider myself a "parent", as I absolutely look at them differently than I do my own children.  As much as I love my pets, I would choose my children over them without hesitation if the need called for it.

I don't know that I personally like the term "guardian", as my relationship with them is more than just that of a caretaker.  I have a truly emotional relationship with my dogs and cats, and being simply a guardian seems to take that out of the equation.  While Dr. Switzer likes the term because it doesn't define any kind of emotions, I think that those emotions play an important part in having a pet.

But "owner" seems somewhat cold and unemotional as well.  I'm used to the term and will likely continue to use it, but having a pet is more than merely owning them.  I feel a much closer bond to my pets than I do to my laptop, yet I "own" both.  As I've been writing this I realize that to me none of these terms really properly defines what most people, myself included, feel about their pets.
While I've had some clients that really do treat their pets similar to children, I have others that tell me "it's just a dog".  I don't think that one term properly encompasses everyone who has a pet, and I don't think the ones we've been using are completely adequate to describe what is happening.

I'm curious as to what you think.  For the first time in years I'm putting up a poll, and will leave it up for the next month.  I'm interested in how my readers define themselves.  I would also love to see comments on this topic, as it's one that hits right in the heart.

Friday, April 15, 2016

Canine Influenza Moves To Cats

By now most dog owners in the US know about the risks of canine influenza, especially people in the Chicago and Atlanta areas.  The most recent strain of "dog flu", H3N2, has proven to be a highly contagious disease.  Thankfully it is rare for a dog to die from it, though some can become quite sick.  But now it may not be just dogs that are at risk.

Here's text from a recent article about some cats at a shelter in Indiana who contracted H3N2, tested by researchers in Wisconsin.  It's short, so I'm going to copy it here, but the original article is at this link.

Sandra Newbury, a clinical assistant professor and director of the Shelter Medicine Program at the University of Wisconsin School of Veterinary Medicine, tested multiple cats at an animal shelter in northwest Indiana, according to the release.  The cats tested positive for the H3N2 canine influenza virus.

"Suspicions of an outbreak in the cats were initially raised when a group of them displayed unusual signs of respiratory disease," Newbury said in the release. "While this first confirmed report of multiple cats testing positive for canine influenza in the U.S. shows the virus can affect cats, we hope that infections and illness in felines will continue to be quite rare."

Right now that's pretty much all we have.  From what I can see this is rare right now.  However it shows that cross-species transmission is possible, and that worries me.  Last year the US veterinary community was taken by surprise when H3N2 influenza made such a rapid spread across the country.  I live and practice near Atlanta and I absolutely didn't expect what happened here.  So when we say that cats catching this virus is rare, I take that with a grain of salt.  H3N2 was unheard of in the US until a little over a year ago, and it made quite the impact.  Could that happen in cats?

I know that I'll be watching this development with interest, and will be watching out for any signs like this in cats in my area.  I don't want to be blindsided like we were last year.

Tuesday, April 12, 2016

Pollen Season

Saturday, April 9, 2016

Why Superman Is Super.....Poignancy In Comic Books

It is a great time to be a comic book fan.  Movies and TV shows with our favorite characters are breaking records and gaining ratings.  But it still goes back to the printed comics, the artists and writers.  Some writers are better than others, and we die-hard fans can talk about the stories of various people like David, Claremont, McFarlane, Miller, and Johns (if you're a true fan you'll immediately recognize those names).  It seems that some writers "get" a character better than others.

For example, let's look at Superman.  He is arguably the single most recognized superhero on the planet.  While he's certainly not my favorite character, I admire some things about him when he is written correctly.
Many people think that the power of Superman comes from his invulnerability, strength, vision, speed, or any of his other powers.  But that's not the point of his character.  Though he is an orphaned alien and one of the most powerful beings in comics, his true power comes from his humble upbringing in Smallville, Kansas.  It's the humanity and morality instilled in him by the Kents that make him such an amazing character.  This background makes him a true hero, and gives him the drive to keep control over his powers.

I want to share some panels of a Superman comic that is really powerful and illustrates this point clearly.  This is the kind of hero we need, not the gritty, dark tone set in recent movies.  (click here for a larger version)

Wednesday, April 6, 2016

The Truly Important Things In Life

Last month I blogged about some thoughts regarding how I wanted to be remembered at the end of my life.  At my current age and stage of my career, I am realizing that a pursuit of advancement as a vet isn't really that important to me.  It only matters as it relates to how I can support my family and our preferred lifestyle.  My life is really defined by my wife, children, friends, and how I relate to them.

One of my favorite comic strips of all time is Calvin & Hobbes.  Recently I came across this wonderful strip by the creator, Bill Watterson.  It resonated with me as it quite accurately sums up my recent feelings.  I hope you enjoy it as much as I have.  (if you have a hard time reading it below, here is a link to the post where I found it)

Sunday, April 3, 2016

Kidney Failure And Cancer. Treat or Not?

I recently got this email from Amanda.....

I would greatly appreciate it if you would be willing to give me your opinion on my situation.  Now the funds are not the main reason, I will find the money if needed, but my husband says our beloved Sammy is his best friend and wouldn't want him to suffer for one more day than necessary.

My 10 year old cat has a huge (orange size) mass in his stomach about where the left kidney would be.  I have only had a blood test done and he is in renal failure.  Doctors want to take x-rays and do an ultrasound to see if the mass can be operated on or to see if he has cancer and if it has spread to his lungs or other organs.  Then we would see if he has cancer or if the mass could even be operated on because of how large it is.  Mind you my cat is a medium size 12 lb cat with an orange-size tumor in his stomach, which cannot be comfortable for him.

I noticed a decline in his play around 6 months ago but I attributed that to us just moving to a new place.  He is thirsty all the time and practically lives in the bathroom now cause he always wants water from the faucet.  Every time we go to the bathroom he's in there asking for us to turn on the water.  He plays less and less with me and doesn't like to be touched at all around his stomach area.

So I'm at a crossroads.  Do I do the x-rays and ultrasound to see if it's cancer or to see if it can be operated on?  But either way my husband doesn't want to put our cat through any surgery.  The mass is so large I couldn't even imagine how much they would have to cut him open just to remove it.  This is horrible to think about putting him through this just so he could live a few more years for my own selfish reasons...  Or am I being stupid by not going through with it, cause he could have a few more years?  Any advice would be greatly appreciated.
I don't believe he is suffering too badly yet, but I do believe that he is not his regular self: he is always drinking water, he is always peeing, he has some good days and some bad ones.  But if I could stop his suffering I will, I just don't know when the best time would be.  My heart is breaking at just the thought of having to put him down or even having to have him operated on and him dying on the operating table.

There is never an easy decision in situations like this.  Even if you know it's the best thing for your pet you still feel bad about ending their life (no matter how peaceful it may be).  I'll help as best as I can, but ultimately this is a decision between you and your own vet.

The first thing that concerns me about your story is the fact that he is in renal failure.  An animal has to lose 2/3 of its kidney function before you will see any abnormalities and 3/4 of kidney function before the pet acts sick.  The kidneys don't regenerate, which is why there is so much redundancy.  Once part of the kidney is damaged, it stays that way.  We can do great with only 50% of our kidneys, which is why we can donate one and not need any treatment.  But we can't function with much less than that.

If a cat is showing signs of renal failure this essentially means that one kidney is completely non-functional and the other one is only half working.  If you remove the non-functioning kidney you still have the problems with the remaining one.  If only one kidney was damaged due to cancer and the other one was normal, you actually wouldn't see blood abnormalities, and you could remove the bad kidney without problems.  So just from the lab tests alone this sounds like a bad situation that wouldn't respond well to therapy and would carry a high surgical risk.

If you wanted to pursue diagnostics you could probably start with just either chest x-rays or an ultrasound, not necessarily both.  If tumors show up on chest radiographs the cancer has spread and surgery isn't going to help.  If an abdominal ultrasound shows both kidneys affected (which I would suspect) or tumors throughout the abdomen, surgery isn't going to be a solution.  Sometimes it's worthwhile doing additional diagnostic tests for the peace of mind of absolutely knowing that there aren't any other options.

Based on what you've shared, Amanda, I would not hold out hope that surgery would help.  However, definitely talk to your own vet about that, as I don't know other details of the case.  This is a situation where I wouldn't rely only on my advice since I'm not close to the case.  Your vet may know some aspects that I don't, and that may change the decision.

If surgery isn't an option due to cost, inability to help, or simply not wanting to do it, then you have to look at overall quality of life.  I've often believed that it's better to euthanize one day too early rather than one day too late.  If you know what the eventual outcome will be in a terminal case, I don't like waiting until the pet is actually starting to suffer.  If I know that they will be suffering at some point, I want to help them pass on before the suffering starts.  But that's not a clear-cut point in many cases, and it's often a very subjective decision.

This past week I saw a cat who was being treated for hyperthyroidism.  We had her on medication for several months, even increasing the dosage beyond the typically recommended amount, but we still couldn't get her thyroid levels down to normal.
She wasn't herself, was continuing to slowly lose weight, and overall was unregulated.  The only other option was to refer her to a specialist for radioactive iodine therapy.  While this is a great treatment, it is very expensive and therefore out of reach for many cat owners.  These clients simply couldn't afford this option, so we were left in a tough situation.  Medical therapy wasn't working, the cat was slowly worsening, and they couldn't go to the next step.  As hard as it was for them, they decided that the best thing for their beloved cat was to euthanize her.

Amanda, I don't know if any of this helped, but it may give you some different thoughts.  Go over this with your own vet and look at what is going to be best for your family and pet.

Friday, April 1, 2016

A New Adventure...Nuevo Atlantis

I have an exciting announcement to make.  It's one that I've kept quiet for a long time until everything was finalized and set in stone.  But it's a pretty life-changing one and I'm glad to be able to share it with the world here (my family has already been informed).

I'm going to be a colonist.  An undersea colonist.

A few years ago an amazing project was announced, Nuevo Atlantis.  This is a new, international expedition into settling the sea floor in order to help with land-based overpopulation and to make farming and resource mining easier.  The ocean is a biome rich in minerals and biological resources, ripe for colonization.  And yes, it's actually become technologically possible.

Articles have been written on this issue over the last few years, and it's pretty exciting (here's one from the BBC, and one from Discover Magazine).  Heck, this has been a dream in science fiction for a long, long time!  Now the time has come to make this dream a reality.

Nuevo Atlantis is being spearheaded by Dr. Gabe Farraige, a very smart man with PhDs in both engineering and oceanography.  He has gathered a team of 20 experts, including biologists, astrophysicists (there are a lot of similarities between survival in space and underwater), psychologists, and others.  I had a chance to meet him last year and he blew me away with his charisma and intelligence.  I couldn't help but want to join the project.

Why me?  Why a veterinarian?

Personally I've always been fascinated by ocean animals and their modifications for living in that environment.  That's strange because I actually don't like the beach and would rather be in the mountains.  But animals such as sharks, whales, octopuses, and even starfish have always interested me.  I briefly toyed around with an education in oceanology, but decided against it.  I've wondered if I made a mistake, though I'm generally happy with my career choice.  Now is a chance to rectify that.

A veterinarian actually makes a lot of sense!  We are trained scientists and have special knowledge of animal biology.  Did you know that there have actually been veterinarian astronauts?  Both Dr. Richard Linnehan and Dr. Alex Dunlap have been part of NASA (Dr. Dunlap as the Chief Veterinarian), and Dr. Linnehan has flown on shuttle missions.  Besides being a scientist on an underwater expedition, Dr. Farraige is bringing families into the colony to make it as livable as possible.  And that means family pets!  Somebody is going to need to take care of those pets, and that somebody is me.

And above all, I am a huge fan of the superhero Aquaman, so how could I pass this up?

Nuevo Atlantis is currently being constructed off the western coast of northern Africa.
A recent topographic study shows  some of the early traces of the project. Here are some concept pictures, showing what the colony will eventually look like. Here is an early photo of workers laying the framework. Part of the colony has already been built.  Here are some pictures of those already starting to live there. Doesn't that look incredibly cool???  Can you see why I'm excited to be moving???  My wife and children are also pretty wound up, though my son is nervous about having hundreds of feet of water pushing down on us.  I think he'll adjust. Nuevo Atlantis is projected to be completed in late 2017.  By then there will be schools, businesses, living quarters, restaurants, and everything needed for a sustainable life underwater.  I will be moving in early 2017, likely March.  Before that I will be going through more education and training, learning the risks and techniques of living in a hostile environment. Nuevo Atlantis is still accepting applications, though the remaining spaces are very limited.  If you want to apply, or simply to learn about this amazing project, click on this link. As time gets closer I'll give more updates.  And once we move I will keep up this blog, as I'm sure everyone would want.
Consider standard quantum mechanics, but forget about the collapse of the wavefunction. Instead, use decoherence through interaction with the environment to bring the evolving quantum state into an eigenstate (resp. arbitrarily close to one).

Question: Can this theory be fundamentally deterministic?

If one takes into account that the variables of the environment are not known, then the evolution is of course 'undetermined' in a probabilistic sense, but that isn't the question. The question is whether quantum mechanics with environmentally induced decoherence can fundamentally be deterministic. Note that I'm not saying it has to be. I might be mistaken, but it seems to me decoherence could still be followed by an actual non-deterministic process, so the decoherence alone doesn't settle the question of determinism or non-determinism. The question is whether one still needs a non-deterministic ingredient.

Update: Please note that I asked whether the evolution can fundamentally be deterministic, or whether it has to be non-deterministic. It is clear to me that for all practical purposes it will appear non-deterministic. Note also that my question does not refer to the prepared state after tracing out the environmental degrees of freedom, but to the full evolution of system and environment. Does one need a non-deterministic ingredient to reproduce quantum mechanics, or can it, with the help of decoherence, be only apparently non-deterministic yet fundamentally deterministic?

What definition of determinism are you using? – Holowitz Jan 21 '11 at 14:45
If you know the state at time t_1, you can in principle calculate everything that's going to happen, or did happen, at time t_2. – WIMP Jan 23 '11 at 10:32
You need to rephrase your question if both myself and Matt have misunderstood your intention. Are you now asking if you can deterministically collapse to a particular eigenstate? The answer to that is no, because it violates linearity. – Joe Fitzsimons Jan 23 '11 at 10:56
@Joe: I just updated the question, hope it's clearer now? You don't need to collapse exactly to a particular eigenstate, just arbitrarily close by (as I already stated in my original question). – WIMP Jan 23 '11 at 11:15
I've posted an updated answer to answer the question as I now understand it. – Joe Fitzsimons Jan 23 '11 at 12:22

5 Answers

Accepted answer (+3):

Short answer to "Can this theory be fundamentally deterministic?": No. Decoherence is the diagonalization of the density matrix in a preferred basis, with the off-diagonals vanishing at late times. Since you can get the same final diagonal matrix from several possible initial pure states of the system under consideration, there's a necessary loss of information and irreversibility. (I'm guessing this is what you meant by non-deterministic.)

A bit more detail: Decoherence proceeds by the rapid establishment of entanglement-induced correlations between the system and the infinite degrees of freedom of the decohering environment. The second law prevents this process from being reversible (since S has to always increase, and S is zero for the pure state, while it is greater than zero for the decohered mixed state). If you take the second law to be fundamental, then the non-determinism here is fundamental too.

UPDATE: The updated question now refers to the full evolution of system + environment, in other words, the entire universe.
Since there's nothing else for the universe to entangle with, it will remain in a pure state and evolve deterministically forever if it always was in a pure state. I however don't know if the universe is in a pure state or a mixed state. Anyone?

I would answer the same thing, so plus one point. ;-) – Luboš Motl Jan 21 '11 at 19:29
Another way of saying this is that the process is not unitary. – Lawrence B. Crowell Jan 22 '11 at 3:34
This isn't technically correct. Decoherence can be the result of an entirely deterministic (and unitary) process, since you only care about the reduced density matrix for the system in question. Open quantum systems do decohere, even though the entire process is unitary. The larger wavefunction of the entire system is still pure, but the state of the local system becomes mixed. – Joe Fitzsimons Jan 22 '11 at 6:26
My understanding is that the evolution turns non-unitary once you've traced out the environmental degrees of freedom. That's not what I'm asking for. Also, time evolution doesn't need to be unitary to be deterministic. – WIMP Jan 23 '11 at 10:34
I concur with the criticisms of this answer: you are making the mistake of only considering the subsystem in question, not the total system. The subsystem loses information, yes, but it's lost to the total system, which may well be reversible as a whole. – Greg Graviton Jan 23 '11 at 14:19

I don't disagree with the other answers, but I want to try to use different words: Evolution of a quantum state is deterministic in the sense that it is given by the Hamiltonian. Quantum mechanics doesn't need anything beyond unitary evolution. So in that sense, the answer is deterministic.

However, decoherence means that eventually a quantum state may evolve into a superposition of very nearly orthogonal states, which for large enough systems will resemble, to arbitrarily high precision, the answer you might get from assuming that there is a nondeterministic, nonunitary process of "wavefunction collapse." For all practical purposes, since we are large classical observers, we can observe only one such nearly-orthogonal combination, and the interference with other possible outcomes will be unmeasurably small. So, in this sense the outcome is "nondeterministic."

If this seems counterintuitive to you, consider something analogous about classical statistical mechanics and thermodynamics. If I start with a collection of gas molecules all bunched up in one corner of the room, it is a very atypical (low-entropy) state under any natural coarse-graining of phase space. Now, by entirely reversible interactions, it can become a typical (high-entropy) state with molecules scattered all over the room. This process appears to have lost information, in the sense that I would have to do many, many difficult measurements to ascertain that a short time before this was a very special state indeed. But really, the underlying physics is deterministic, so in principle the final state remembers where it came from, although for all practical purposes if I tried to evolve it backwards I would never discover the right answer.

(To be clear, I'm not claiming a very sharp analogy here. But I'm saying that the notion that microscopically deterministic evolution can be consistent with apparent or "for all practical purposes" loss of determinism in real observations is something that might be more intuitive in this context.)

Good answer!
– Joe Fitzsimons Jan 22 '11 at 6:28
It doesn't seem counterintuitive to me, but that wasn't my question. I'm not asking "for all practical purposes." I know that for all practical purposes it will appear non-deterministic in the sense that we won't be able to predict what's going to happen -- too many variables in the environment. I'm asking if the time evolution of the system is fundamentally 'in principle' deterministic. That of course can only be before you've averaged over the variables of the environment. – WIMP Jan 23 '11 at 10:41
Then I would say yes, it is "in principle" deterministic; the Hamiltonian specifies how everything evolves. Most of my answer was just trying to explain how to reconcile this with the observation that quantum effects look nondeterministic. – Matt Reece Jan 23 '11 at 15:41
Thanks. Yes, that was also my understanding. – WIMP Jan 24 '11 at 7:40
+1 for the thermodynamics mention; I think some day decoherence and the 2nd law will shake hands. – HDE Jan 19 '12 at 0:16

UPDATE: It seems that we have not been answering the question WIMP intended. Here is an updated answer to deal with what I now understand to be the question: Given any unknown quantum state $|\psi\rangle$, can there be any deterministic process which will make it collapse onto a particular state $|\phi \rangle$, if $\langle \phi|\psi\rangle \neq 0$?

The answer to this question is no, because it violates the linearity of quantum mechanics, allowing us to distinguish between non-orthogonal states. This is trivial, because states orthogonal to $|\phi \rangle$ will have zero probability of collapsing onto it. This may not seem like a big deal, but it turns out that linearity is fundamental to quantum mechanics on many levels. If we remove this constraint, then entanglement can be used to signal, and hence create problems with causality. No signalling seems one of the most fundamental features of physics, showing up in many independent theories (electromagnetism, quantum mechanics, relativity, etc.).

To see how this can be done, consider an entangled state $\frac{1}{\sqrt{2}}(|01\rangle - |10\rangle)$. This is the anti-symmetric state: for any basis $\sigma$, a measurement resulting in outcome $m$ will leave the other qubit in the opposite eigenstate of $\sigma$. Thus, if you could deterministically collapse onto the state $|0\rangle$, then you could be sure that your half of the EPR pair was not left in state $|1\rangle$ after the measurement on the other half. So, for Alice to communicate with Bob, she need only choose to measure in the $X$ or $Z$ basis. Measuring in $X$ will mean Bob receives the output $|0\rangle$ with probability 1, whereas measuring in $Z$ will return the result $|1\rangle$ with probability $\frac{1}{2}$. Although this is probabilistic, you can repeat the process arbitrarily many times to get exponentially close to perfect communication. This instantaneous communication breaks causality.

If you allow all states to collapse to the target state, then the only solution is a channel which swaps the state with another ancilla system. Systems which can perform such deterministic collapse can always be used to signal, as well as allowing all sorts of additional weirdness like efficient solutions to PSPACE-complete problems in computation and time travel. As a result, this is totally impossible within the current framework of physical theories, and there are very substantial reasons to believe that it is a feature of any physical theory that is valid in our world.
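The signalling argument above can be made concrete with a short numerical sketch. The code below is illustrative only: the singlet state and the two measurement bases are taken from the answer, while the hypothetical deterministic-collapse device is modeled by my reading of the rule stated above, namely "output |0> whenever the input is not orthogonal to |0>":

```python
import numpy as np

# Singlet state (|01> - |10>)/sqrt(2); qubit order: Alice (x) Bob.
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)

Z_basis = np.eye(2)                                  # columns |0>, |1>
X_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # columns |+>, |->

def bob_ensemble(basis):
    """Alice measures her qubit in `basis`; return (prob, Bob's pure state) pairs."""
    out = []
    for k in range(2):
        a = basis[:, k]
        # Project Alice's qubit onto |a><a| (x) I.
        proj = np.kron(np.outer(a, a.conj()), np.eye(2))
        phi = proj @ psi
        p = np.vdot(phi, phi).real
        bob = a.conj() @ phi.reshape(2, 2)           # contract Alice's index
        out.append((p, bob / np.sqrt(p)))
    return out

def device_output_prob_zero(ensemble):
    """Hypothetical device: any input not orthogonal to |0> collapses to |0>."""
    return sum(p for p, bob in ensemble if abs(bob[0]) > 1e-12)

print(device_output_prob_zero(bob_ensemble(X_basis)))  # 1.0: Alice measured X
print(device_output_prob_zero(bob_ensemble(Z_basis)))  # 0.5: Alice measured Z
```

The statistics of the device output (always |0>, versus |1> half the time) depend on which basis Alice chose at the remote site, which is exactly the faster-than-light signal described in the answer.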
The answer is no, if by deterministic you mean possessing a local hidden variable interpretation. This follows directly from the observed violations of Bell's inequality, whichever interpretation of quantum mechanics you choose (what you are referring to is known as the Everett interpretation of quantum mechanics).

Bell's inequality works as follows: given two possible local measurement operators ($A_i$ and $B_i$) at each of two locations $i \in \{1,2\}$, what is the maximum value of the expectation value $\langle A_1 B_1 + A_1 B_2 + A_2 B_1 - A_2 B_2\rangle$? What Bell showed was that this can take on a value of at most 2 for any local hidden variables theory. However, quantum mechanics allows it to take on values up to $2\sqrt{2}$, and many experiments have recorded violations of this inequality, showing values in the range $2 < v \leq 2\sqrt{2}$. This essentially rules out a local hidden variable model.

If, however, you mean can the unitary interaction of two particles give rise to decoherence, then the answer is yes, as follows: Imagine two particles, each initially in the state $1/\sqrt{2}(|0\rangle + |1\rangle)$. Now imagine they interact via an Ising interaction. After a suitable time (evolution under $e^{-i\frac{\pi}{4}Z\otimes Z}$), they will be in the joint state $\frac{1}{2}(|00\rangle + i|01\rangle + i|10\rangle + |11\rangle)$, up to a global phase. This is still a pure state, and so no decoherence has occurred. However, imagine one of these particles moves off far away (into the environment). If we only have access to one of these particles, then its reduced density matrix will be $1/2(|0\rangle \langle 0|+|1\rangle \langle 1|)$, which is simply a classical random distribution over the two orthogonal states, the same as would occur due to a collapse of the wavefunction. (See the numerical sketch at the end of this thread.)

Thanks for the reply. Even though you've correctly interpreted the meaning of deterministic, you haven't answered my question. You're saying the theory can't be deterministic because that would be in conflict with experimental tests of Bell's inequality. That's not correct. The theory could also be non-local instead; you just need to violate one of the assumptions of the theorem. – WIMP Jan 23 '11 at 10:38
@WIMP: The first line of my answer reads: "if by deterministic you mean possessing a local hidden variable interpretation". Certainly global hidden variables can be made to work, but then they always can. – Joe Fitzsimons Jan 23 '11 at 10:42
Yes, I know. But that non-local hidden variable theories are not excluded by Bell's theorem isn't an answer to my question. If you want to pursue that line of thought, the question would then be whether decoherence, resp. the entanglement with the environment, is a sort of non-locality that spoils the assumptions of Bell's inequality and thus isn't excluded by experiment. Also, I was wondering about the possibility of deterministic evolution from a theoretical rather than an experimental point of view. – WIMP Jan 23 '11 at 10:53
@WIMP: Can you please revise the question to make it clear exactly what you are asking? It's currently not clear either from the question or subsequent comments. – Joe Fitzsimons Jan 23 '11 at 10:59
@Joe: Thanks for the update. Two things: First, you don't need to collapse exactly into |0>, you just need to get close enough so it would 'for all practical purposes' appear to be an eigenstate, see question & update. Second, that the evolution is fundamentally deterministic doesn't mean that you could in practice deterministically collapse.
(Besides this, I didn't ask if it leads to instantaneous messaging as you seem to think. Also, instantaneous messaging doesn't necessarily cause problems with causality, but that's a different point.) – WIMP Jan 24 '11 at 9:13

Not sure if you've ever heard of the De Broglie-Bohm pilot wave interpretation of QM, but it is a fundamentally deterministic interpretation.

Yes, I've heard of it. But my question wasn't if there is any deterministic interpretation of QM, but if the standard interpretation with decoherence instead of collapse can be deterministic. – WIMP Jan 24 '11 at 7:45

My answer is NO. There is a problem in many QM considerations: we have an isolated system obeying the (deterministic!) Schrödinger equation that is subject to mystical "measurements" which introduce non-deterministic behavior. But one can't make a measurement and leave the system isolated (indeed, saying that a system is isolated is always an approximation in QM, and a much heavier one than in classical mechanics). In fact, measurement is an act of introducing interactions with the measuring apparatus, the measurer, the coffee the measurer drinks, and so on -- so the measurement result can in theory be calculated, but that would involve inaccessible and enormous amounts of information. This makes it practically non-deterministic, but fundamentally it is no better than classical chaos.

Actually, you are arguing that the answer is yes, rather than no. I haven't asked whether it is practically non-deterministic, but whether it can be fundamentally deterministic though appear non-deterministic (much like chaos indeed). – WIMP Jan 24 '11 at 7:48
My answer to "Does decoherence need non-determinism?" is No (-; – mbq Jan 24 '11 at 10:19
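The decoherence mechanism described in Fitzsimons' answer (unitary evolution of the pair, mixed reduced state for one particle) is easy to check numerically. Below is a minimal numpy/scipy sketch; the Ising coupling Z(x)Z and the pi/4 evolution time follow the example in that answer, while everything else (variable names, the purity check) is illustrative:

```python
import numpy as np
from scipy.linalg import expm

# Two qubits, each prepared in |+> = (|0> + |1>)/sqrt(2).
plus = np.array([1.0, 1.0]) / np.sqrt(2)
psi0 = np.kron(plus, plus)

# Ising coupling H = Z (x) Z; evolve for t = pi/4 (hbar = 1).
Z = np.diag([1.0, -1.0])
U = expm(-1j * (np.pi / 4) * np.kron(Z, Z))
psi = U @ psi0                                  # still a pure state of the pair

rho = np.outer(psi, psi.conj())
# Reduced state of qubit A: trace out qubit B.
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(np.round(rho_A, 10))                      # 0.5 * identity: maximally mixed
print(np.trace(rho_A @ rho_A).real)             # purity 0.5, fully decohered locally
```

The pair as a whole stays pure and evolves deterministically, yet the purity Tr(rho_A^2) of the reduced state drops from 1 to 1/2, the signature of complete local decoherence.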
Nano Express (Open Access)

Weak and strong confinements in prismatic and cylindrical nanostructures

Received: 16 April 2012; Accepted: 19 June 2012; Published: 5 July 2012

© 2012 Vorobiev et al.; licensee Springer.

Background

Nanostructures (NS) of different kinds have been actively studied during the last two decades, both theoretically and experimentally. Special interest has focused on quasi-one-dimensional NS such as nanowires, nanorods, and elongated pores that not only modify the main material's parameters but are also capable of introducing totally new characteristics such as optical and electrical anisotropy, birefringence, etc. In particular, the existence of nanoscale formations on the surface (or embedded into a semiconductor) results in quantum confinement effects. As the motion of the carriers (or excitons) becomes restricted, their energy spectra change, moving the permitted energy levels towards higher energies as a consequence of confinement. In experimental measurements, such modification would be noticed as a blueshift of energy-related characteristics, such as, for example, the edge of absorption. This paper is dedicated to the theoretical investigation of the confined-particle problem, aiming to explain the available experimental data based on the geometry of the corresponding nanoparticles present in the particular material. Here, we focus on elongated NS that can be approximated as prisms or cylinders with different shapes of cross section.

The theoretical treatment of NS is based on the solution of the Schrödinger equation, usually within the effective mass approximation [1-4], although for small NS such an approach can be questioned, because the symmetry describing a nanoparticle may not inherit its shape symmetry but would rather depend on the atomistic symmetry [5]. In addition, at small scale it becomes necessary to take into account atomic relaxation and piezoelectric phenomena [6] that may strongly influence the energy states of confined particles and split their energy levels. The detailed consideration of these phenomena can be accounted for using the pseudopotential method [7] introduced by Zunger's group, which, after a decade, became a standard energy level model for the detailed description of quantum dots. However, in cases when the dimensions of nano-objects are large enough to validate the effective mass approximation, it is possible to obtain an analytical solution to the problem of a particle confined within a quantum dot.

An important element of the quantum mechanical description is the boundary conditions; the traditional impenetrable wall conditions (1) are not always realistic and (2) in many cases (depending on the shape of the NS) cannot be written in a simple analytical form, thus complicating further analysis. To overcome these problems, we proposed to use mirrorlike boundary conditions [8-10], assuming that the electron confined in an NS is specularly reflected by its walls acting as mirrors. In addition to a significant simplification of the problem solution, this method favors the effective mass approximation. Within the same framework, one can study pores as 'inverted' nanostructures (i.e., a void surrounded by semiconductor material), considering the 'reflection' of the particle's wave function from the surfaces limiting the pore. Thus, one will obtain essentially the same solution of the Schrödinger equation (and the energy spectrum) for both a pore and an NS of the same geometry and size.
A previous attempt to treat the walls of a quantum system as mirrors in the quantum billiard problem [11] yielded quite a complicated analytical form of the boundary conditions that made the solution of the Schrödinger equation considerably more difficult. In our treatment of the NS boundary as a mirror, the boundary condition equates the absolute values of the particle's Ψ function at an arbitrary point inside the NS and at the corresponding image point with respect to a mirror-reflective wall. Thus, depending on the sign of the equated Ψ values, one will obtain even and odd mirror boundary conditions. For the case of odd mirror boundary conditions (OMBC), the Ψ functions at a real point and its images should have opposite signs, which means that the incident and reflected de Broglie waves cancel each other at the boundary. This case is equivalent to impenetrable walls with a vanishing Ψ function at the boundary, representing a 'strong' confinement case. However, some experimental data (see, e.g., [4]) show evidence that a particle may penetrate the barrier, later returning into the confined volume. Thus, the wave function will not vanish at the boundary, and the system should be considered a 'weak' confinement case as long as the particle flux through the boundary is absent. This case corresponds to even mirror boundary conditions (EMBC), when the Ψ function at a real point and at its images is the same. Below, we analyze solutions of the Schrödinger equation for several cylindrical structures, using mirror boundary conditions of both types and comparing the energy spectra obtained with experimental data found in the literature.

Methods

We start with the simplest case that can be easily treated on the basis of the traditional approach: an NS shaped as a rectangular prism with a square base (with the sides a = b oriented along the axes x and y; the side c > a is set along the z direction). Assuming, as is usually done in the literature, the absence of a potential inside the NS and separating the variables, we look for the solution of the stationary Schrödinger equation $\Delta\Psi + k^2\Psi = 0$ (where $k^2 = 2mE/\hbar^2$ and $m$ is the particle's effective mass) as the product of plane waves propagating in both directions along the coordinate axes:

\[\Psi(x,y,z)=\left(A_1 e^{ik_x x}+B_1 e^{-ik_x x}\right)\left(A_2 e^{ik_y y}+B_2 e^{-ik_y y}\right)\left(A_3 e^{ik_z z}+B_3 e^{-ik_z z}\right)\qquad(1)\]

For this case, the even mirror boundary conditions are as follows [10]:

\[\Psi(x,y,z)=\Psi(-x,y,z)=\Psi(2a-x,y,z),\]
and similarly for the y direction (mirrors at y = 0 and y = a) and the z direction (mirrors at z = 0 and z = c). \qquad(2)

That renders the following solution (Equation 3) of the Schrödinger equation:

\[\Psi(x,y,z)=A\cos(k_x x)\cos(k_y y)\cos(k_z z)\qquad(3)\]

with wave vector components

\[k_x=\frac{\pi n_x}{a},\quad k_y=\frac{\pi n_y}{a},\quad k_z=\frac{\pi n_z}{c},\qquad n_x,n_y,n_z=1,2,3,\ldots\qquad(4)\]

It gives the following energy spectrum:

\[E=\frac{\pi^2\hbar^2}{2m}\left(\frac{n_x^2+n_y^2}{a^2}+\frac{n_z^2}{c^2}\right)\qquad(5)\]

The odd mirror boundary conditions are obtained from Equation 2 by inverting the sign of the left-hand-side function. The solution will then be as follows:

\[\Psi(x,y,z)=A\sin(k_x x)\sin(k_y y)\sin(k_z z)\qquad(6)\]

The wave vector components will be the same as those presented in Equation 4, yielding the same energy spectrum (Equation 5). Using the traditional impenetrable wall boundaries, one will also obtain the solution in the form (Equation 6) that coincides with the OMBC solution, which has a vanishing Ψ function at the boundary. Therefore, the energy spectrum is the same for both types of mirror boundary conditions and for the impenetrable wall boundary, although the solutions themselves are not equal. In [10], we demonstrated that for NS of spherical shape, the energy spectrum found with EMBC (weak confinement) is different from that corresponding to impenetrable wall conditions. From Equation 5, it is evident that the energy spectrum of a prismatic (cylindrical) NS is a sum of the spectra corresponding to the two-dimensional cross-section NS (a square with side length a) and the one-dimensional wire of length c.
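As a quick numerical illustration of Equation 5, the following sketch evaluates the prism levels in SI units (the 5 × 5 × 40 nm dimensions and the free-electron mass are illustrative choices, not values from the paper):

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # kg, free electron mass
EV   = 1.602176634e-19   # J

def prism_level(nx, ny, nz, a, c, m=M0):
    """Eq. (5): level energy of a square-based prism (sides a, a, height c), in eV."""
    return np.pi**2 * HBAR**2 / (2 * m) * ((nx**2 + ny**2) / a**2 + nz**2 / c**2) / EV

print(prism_level(1, 1, 1, 5e-9, 40e-9))  # ground state of a 5 x 5 x 40 nm prism, ~0.030 eV
```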
In a similar manner, the spectrum for cylinders with other cross-section shapes can be constructed using the solutions for the two-dimensional triangular or hexagonal structures analyzed previously [8,9]. Below, we present the analysis of cylindrical NS. Let us consider a nanostructure with a circular cross section of diameter a and cylinder height c. The solution of the problem using the traditional approach can be found in [12,13]. In our case, we make the variable separation in cylindrical coordinates:

\[\Psi(r,\varphi,z)=F(r)\,e^{ip\varphi}\left(Be^{ik_z z}+Ce^{-ik_z z}\right)\qquad(7)\]

We note that the value of p defines the angular momentum: L = pħ. In the case of EMBC, one can apply mirror reflection from the base, which gives B = C, resulting in the following wave function:

\[\Psi(r,\varphi,z)=A\,F(r)\,e^{ip\varphi}\cos(k_z z)\qquad\text{(7A)}\]

Strong confinement (OMBC) gives B = −C, which introduces sin(k_z z) instead of cos(k_z z) in Equation 7A. The radial function F(r) is the solution of the following radial equation:

\[\frac{d^2F}{dr^2}+\frac{1}{r}\frac{dF}{dr}+\left(k^2-\frac{p^2}{r^2}\right)F=0\qquad(8)\]

It is Bessel's differential equation in the variable kr, the solution of which is given by the cylindrical Bessel function of integer order |p|, $J_{|p|}(kr)$, with $k=\hbar^{-1}(2mE_n)^{1/2}$. Here, m is the effective mass of the particle, and E_n is the quantized kinetic energy corresponding to the motion in the two-dimensional circular quantum well. The total energy consists of the energy contributions for the motion within the cross-section plane and along the vertical axis z: E = E_n + E_z. The energy E_n depends on the values of k and is obtained using the boundary conditions. In the traditional case of impenetrable walls, the Ψ function vanishes at the boundary, so that the energy values are determined by the roots (nodes) of the cylindrical Bessel function (see Figure 1 for different order numbers n, and also Table 1). The same situation will take place for OMBC, yielding a zero wave function at the boundary, so that the nodes q_{|p|i} of the Bessel function will define the energy values.

Figure 1. Cylindrical Bessel functions J_n(x). Curve numbers correspond to the order n.

Table 1. Argument values at nodes and extremes of the cylindrical Bessel function

If the EMBC are used, the situation becomes different, since the function values at points approaching the boundary of the nanostructure should match those at the image points, making the boundary correspond to the extremes of the Bessel function (which was strictly proved for spherical quantum dots (QDs) [10]). Table 1 gives several values of the Bessel function argument kr corresponding to the function nodes (q_{|p|i}) and extremes (t_{|p|i}) calculated for function orders 0, 1, 2, and 3. At the boundary, r = a/2; therefore, the corresponding value of k is 2q_{|p|i}/a for OMBC and 2t_{|p|i}/a for EMBC. The energy spectrum for a particle confined in a circular-shaped quantum well is as follows:

\[E_n=\frac{2\hbar^2 s_{|p|i}^2}{m a^2}\qquad(9)\]

Here, the parameter s_{|p|i} takes the values of q_{|p|i} for OMBC (strong confinement) and t_{|p|i} for EMBC (weak confinement). The quantization along the z axis for both boundary condition types will be $k_z=\pi n_z/c$, yielding the total energy

\[E=\frac{2\hbar^2 s_{|p|i}^2}{m a^2}+\frac{\pi^2\hbar^2 n_z^2}{2mc^2}\qquad(10)\]

In the case of EMBC, the ground state (GS) energy will be obtained with t₁₁ = 1.625:

\[E_{GS}=\frac{2\hbar^2 (1.625)^2}{m a^2}+\frac{\pi^2\hbar^2}{2mc^2}\qquad(11)\]
In the OMBC case, the GS will be determined by the smallest q value of 2.4:

\[E_{GS}=\frac{2\hbar^2 (2.4)^2}{m a^2}+\frac{\pi^2\hbar^2}{2mc^2}\qquad\text{(11A)}\]

Equations 10, 11, and 11A can be used for the analysis of optical processes in the NS discussed. In particular, the blueshift of the exciton ground state can be found from Equations 11 and 11A if one substitutes the reduced exciton mass in place of the particle mass m. Using Equation 10, it is possible to obtain in a similar way the energies corresponding to the higher excited states. For long NS with sufficiently large c, the second term in the energy has a negligible effect on the GS. Thus, the solution for cylindrical NS based on even mirror boundary conditions (EMBC, weak confinement) gives a GS shift due to quantum confinement that is $(2.4/1.625)^2 = 2.18$ times smaller than the value obtained for the strong confinement case. In the case of a spherical QD [10], the difference was fourfold. It is reasonable that for strong confinement the blueshift exceeds that obtained for the weak confinement case. To illustrate this, we present in Figure 2 the comparison of the ground state energy obtained with OMBC and EMBC (using Equations 11 and 11A) as a function of NS diameter for a cylindrical quantum well with the parameters of silicon (effective mass 0.26 for an electron and 0.49 for a hole, which corresponds to a reduced exciton mass of 0.17; the bandgap is 1.1 eV at 300 K). As one can see from the figure, the difference in the exciton bandgap decreases as the NS diameter increases, with invariably higher values for the strong confinement case described by OMBC.

Figure 2. Dependence of ground state energy on diameter of a cylindrical nanostructure. The plot shows the data obtained with odd and even mirror boundary conditions for an NS with the parameters of silicon.

The choice of OMBC or EMBC has to be made taking into account the probability of electron tunneling through the walls forming the nanostructure. One can expect that in the case of an isolated NS the strong confinement (OMBC) approximation will be more appropriate, whereas for an NS surrounded by another solid or liquid medium (core-shell QDs [10] and pores in semiconductor media), weak confinement with EMBC should be used.

Results and discussion

Considerable scientific interest has been attracted to semiconductor nanorods (nanowires) and cylindrical pores. Let us mention here publications dealing with arrays of cylindrical pores in sapphire [14], ZnO nanorods grown within these pores [15], as well as CuS and In2O3 nanowires. Usually, the experiments report on relatively large structures measuring 30 nm or more in diameter. As one can see from Equations 11 and 11A, in these cases the expected blueshift will be about 0.01 eV or less for both weak and strong confinement. Nevertheless, there exist literature data referring to nanorods of sufficiently small diameter for a pronounced confinement effect.

A paper [16] reports on CdS nanorods with a diameter of 5 nm and a length of 40 nm embedded into a liquid crystal. The authors study the optical anisotropy caused by the alignment of the nanorods. To determine it, they measure the polarization of photoluminescence due to electron-hole recombination, reporting that the spectral maximum of luminescence is located at 485 nm (2.56 eV), which exceeds the bandgap of bulk CdS by 0.14 eV. Taking the electron effective mass in CdS [17] as 0.16 m0 and the hole effective mass as 0.53 m0, one can find the reduced mass μ = 0.134 m0 and, using Equation 11, a blueshift of 0.12 eV, which agrees reasonably with the experiment. As the CdS nanostructure is surrounded by the liquid crystal medium, we were using the EMBC, or weak confinement, approximation.

Another study [18] focuses on the optical properties of CuS nanorods measuring 6 to 8 nm in diameter and 40 to 60 nm in length; the authors report a definite blueshift of the fundamental absorption edge.
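The CdS estimate above is easy to reproduce. The sketch below evaluates Equations 11 and 11A with the parameters quoted in the text (a = 5 nm, c = 40 nm, μ = 0.134 m0, t₁₁ = 1.625, q = 2.4); only the function and variable names are my own:

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # kg, free electron mass
EV   = 1.602176634e-19   # J

T11, Q01 = 1.625, 2.4    # boundary parameters used in the text (Table 1)

def ground_state_shift(a, c, mu, s):
    """Eq. (11)/(11A): confinement energy (eV) for a cylinder of diameter a,
    height c, reduced mass mu (in units of m0), and boundary parameter s."""
    m = mu * M0
    return (2 * HBAR**2 * s**2 / (m * a**2)
            + np.pi**2 * HBAR**2 / (2 * m * c**2)) / EV

# CdS nanorods of [16]: weak confinement (EMBC), since the rods sit in liquid crystal.
print(ground_state_shift(5e-9, 40e-9, 0.134, T11))  # ~0.12 eV, as quoted in the text
# Same rod under strong confinement (OMBC): first term is (2.4/1.625)^2 ~ 2.18x larger.
print(ground_state_shift(5e-9, 40e-9, 0.134, Q01))  # ~0.26 eV
```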
Alas, we found no data on the effective masses for CuS, so it was not possible to make a numerical comparison with the theory.

A particular example of cylindrical QDs is presented by quasi-circular organic molecules like coronene C24H12 (see Figure 3). In this case c << a, which makes the second term in Equations 10, 11, and 11A very large even for n_z = 1, meaning that it has no contribution to the optical properties of the molecule in visible light, because the transitions between states with different n_z will correspond to radiation in the deep ultraviolet. Therefore, the spectrum is defined by the first term in Equations 10 and 11, which essentially replicates the solution obtained for the case of a long cylinder.

Figure 3. Coronene molecule: (a) formula and (b) computer-rendered three-dimensional image.

Another paper [19] presents experimental data concerning the optical properties of coronene molecules in tetrahydrofuran (THF) solution. Since the molecules are immersed in a medium, we expect that weak confinement (EMBC) will be most appropriate for the solution of the problem. Strong absorption lines were registered at photon energies of 4.1 to 4.3 eV, with weaker absorption down to 3.5 eV. To use our methodology, one should first determine the diameter a of a circle embracing the molecule with its 12 outermost atoms of carbon (Figure 3). The C-C bond length in coronene is d = 1.4 Å, which corresponds to the side of a hexagon. Thus, one would have $a = 2\sqrt{7}\,d = 0.741$ nm. Taking m in Equation 11 as the free electron mass and using only the first term, we obtain the ground state energy E_GS = 0.73 eV. The higher energy states (Equation 10) will be defined by the values of s_{|p|i} = t_{|p|i} equal to 2.92, 3.713, 4.30, etc. The corresponding energies are 2.353, 3.805, and 5.1 eV, which result in transition energies of 1.62, 3.1, and 4.37 eV. The first value is out of the spectral range investigated in [19]; the other two could reasonably fit the absorption observed. If we attempt to treat the case on the basis of the strong confinement approximation (OMBC), one should use the q_{|p|i} values in the formulas (Equations 10 and 11A), yielding a ground state of 1.591 eV and excited states at 3.78, 7.21, and 8.35 eV. The transition energies would therefore be 2.19, 5.62, and 6.76 eV, which have nothing in common with the experimental values, proving that the previous conclusion to use EMBC, based on the fact that the coronene molecules are embedded in the THF medium, was the right one.

Yet another paper [20] is devoted to studying coronene-like nitride molecules with the composition N12X12H12, where X can be B, Al, Ga, or In. Depending on X, the bond length will vary, giving different values of the well diameter a. The authors of [20] give the transition energies between the ground state and the first excited state, corresponding to the HOMO-LUMO transition energy E_HL. For these isolated molecules, the strong confinement case (OMBC) is expected to be appropriate. The bond lengths and E_HL values reported in [20] are listed in Table 2, together with the values of a calculated from the bond length and the transition energies ΔE found using Equation 10 with the corresponding q values. One can see that the ΔE values are reasonably close to the experimental E_HL.
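The coronene estimates can be reproduced in the same way. The sketch below uses the bond length, the $2\sqrt{7}$ geometry factor, and the Table 1 parameters quoted above; small differences in the last digits relative to the printed values presumably reflect rounding in the original:

```python
import numpy as np

HBAR = 1.054571817e-34; M0 = 9.1093837015e-31; EV = 1.602176634e-19

d = 0.14e-9                      # C-C bond length in coronene, m
a = 2 * np.sqrt(7) * d           # diameter of the embracing circle, ~0.741 nm

def E_2d(s, a, m=M0):
    """First term of Eq. (10): in-plane energy 2*hbar^2*s^2/(m*a^2), in eV."""
    return 2 * HBAR**2 * s**2 / (m * a**2) / EV

t = [1.625, 2.92, 3.713, 4.30]   # EMBC (weak confinement) parameters from Table 1
levels = [E_2d(s, a) for s in t]
print(np.round(levels, 2))       # ~[0.73 2.37 3.83 5.14] eV; text: 0.73, 2.353, 3.805, 5.1
print(np.round([e - levels[0] for e in levels[1:]], 2))
# transition energies ~[1.63 3.1 4.4] eV; text: 1.62, 3.1, 4.37
```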
Solution of the same problem using weak confinement (EMBC) results in large discrepancies and fails to explain the experimental data, confirming the correctness of the decision to choose OMBC for isolated molecules.

Table 2. The lowest transition energies in coronene-like molecules

Conclusions

A theoretical description of prismatic and cylindrical nanostructures (including pores in semiconductors) is made using two types of mirror boundary conditions for the solution of the Schrödinger equation, resulting in a simple analytical procedure for obtaining wave functions that offers a reasonably good description of the optical properties of nanostructures of various shapes. The expressions for the energy spectra are defined by the geometry and dimensions of the nanostructures. The even mirror boundary conditions correspond to weak confinement, which is applicable in cases when the nanostructure is embedded in another medium (especially true for a pore), enabling tunneling through the boundary of the nanostructure. In contrast, odd mirror boundary conditions are more appropriate in the treatment of isolated nanostructures, where strong confinement exists. Both cases are illustrated with experimental data, demonstrating the good applicability of the corresponding type of boundary conditions.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

YVV and VRV performed calculations and drafted the manuscript. BM helped in drafting the manuscript. PPH and JG-H provided helpful discussions and improvement for the manuscript. All authors read and approved the final manuscript.

Acknowledgements

The authors thank the FCT Projeto Estratégico PEst-OE/FIS/UI0091/2011 (Portugal) and CONACYT Basic Science Project 129269 (Mexico).

References

1. Efros AL, Efros AL: Interband absorption of light in a semiconductor sphere. Sov Phys Semicond 1982, 16(7):772-775.
3. Liu JL, Wu WG, Balandin A, Jin GL, Wang KL: Intersubband absorption in boron-doped multiple Ge quantum dots. Appl Phys Lett 1999, 74:185-187.
4. Dabbousi BO, Rodriguez-Viejo J, Mikulec FV, Heine JR, Mattoussi H, Ober R, Jensen KF, Bawendi MG: (CdSe)ZnS core-shell quantum dots: synthesis and characterization of a size series of highly luminescent nanocrystallites. J Phys Chem B 1997, 101:9463-9475.
5. Bester G, Zunger A: Cylindrically shaped zinc-blende semiconductor quantum dots do not have cylindrical symmetry: atomistic symmetry, atomic relaxation, and piezoelectric effects. Phys Rev B 2005, 71:045318.
6. Bester G, Wu X, Vanderbilt D, Zunger A: Importance of second-order piezoelectric effects in zinc-blende semiconductors. Phys Rev Lett 2006, 96:187602.
7. Zunger A: Pseudopotential theory of semiconductor quantum dots. Phys Stat Sol B 2001, 224:727-734.
Phys Stat Sol C 2008, 5:3802-3805.
Science in China Series E: Technological Sciences 2009, 52:15-18.
11. Liboff RL, Greenberg J: The hexagon quantum billiard. J Stat Phys 2001, 105:389-402.
12. Robinett RW: Visualizing the solutions for the circular infinite well in quantum and classical mechanics. Am J Phys 1996, 64(4):440-446.
13. Mel'nikov LA, Kurganov AV: Model of a quantum well rolled up into a cylinder and its applications to the calculation of the energy structure of tubelene. Tech Phys Lett 1997, 23(1):65-67.
14.
Choi J, Luo Y, Wehrspohn RB, Hillebrand R, Schilling J, Gösele U: Perfect two-dimensional porous alumina photonic crystals with duplex oxide layers. J Appl Phys 2003, 94(4):4757-4762.
15. Zheng MJ, Zhang LD, Li GH, Shen WZ: Fabrication and optical properties of large-scale uniform zinc oxide nanowire arrays by one-step electrochemical deposition technique. Chem Phys Lett 2002, 363:123-128.
16. Wu K-J, Chu K-C, Chao C-Y, Chen YF, Lai C-W, Kang CC, Chen C-Y, Chou P-T: CdS nanorods embedded in liquid crystal cells for smart optoelectronic devices. Nano Lett 2007, 7(1):1908-1913.
18. Freeda MA, Mahadevan CK, Ramalingom S: Optical and electrical properties of CuS nanorods. Archives of Physics Research 2011, 2(3):175-179.
19. Xiao J, Yang H, Yin Z, Guo J, Boey F, Zhang H, Zhang Q: Preparation, characterization and photoswitching/light-emitting behaviors of coronene nanowires. J Mater Chem 2011, 21:1423-1427.
20. Chigo Anota E, Salazar Villanueva M, Hernández Cocoletzi H: Electronic properties of group III-A nitride sheets by molecular simulation. Phys Status Solidi C 2010, 7:2252-2254.
Journal Club Theme of February 2009: Finite Element Methods in Quantum Mechanics

Attachment: pask_review2005.pdf (1.36 MB)

Welcome to the February 2009 issue. In this issue, we will discuss the use of finite elements (FEs) in quantum mechanics, with specific focus on the quantum-mechanical problem that arises in crystalline solids. We will consider the electronic structure theory based on the Kohn-Sham equations of density functional theory (KS-DFT): in real-space, Schrödinger and Poisson equations are solved in a parallelepiped unit cell with Bloch-periodic and periodic boundary conditions, respectively. The planewave pseudopotential approach is the method of choice in such quantum-mechanical simulations, but there has been growing interest in recent years in the use of various real-space mesh approaches, of which finite elements are gaining increasing prominence. Most of us are very well aware of the use of finite elements in solid and structural mechanics applications, but might be less familiar with their place or potential in quantum-mechanical calculations. To bridge this gap, I provide a brief introduction to the topic and sketch the main ingredients, supplemented by links to three review articles for further details. Parts of these journal articles would be very accessible to readers of iMechanica who are familiar with FE and other grid-based methods.

In the September 15, 2008 Journal Club issue, the computational challenges in electronic-structure calculations were outlined by Vikram Gavini; please take a look at the nice overview of DFT that he has written. The solution of the equations of DFT provides a means to determine material properties completely from quantum-mechanical first principles (ab initio), without any experimental input or tunable parameters. This facilitates fundamental understanding and robust predictions for a wide range of properties across the gamut of material systems.

The quantum-mechanical problem in a crystalline solid consists of determining the electronic charge density (corresponding wavefunction and effective potential) for a system consisting of positively charged nuclei surrounded by negatively charged electrons. In the all-electron problem, the Coulomb potential Z/r diverges (the solution has cusps and oscillates rapidly near the nuclear positions), and hence is not readily amenable to accurate numerical calculations for even moderate system sizes. For first-principles computations in crystalline solids, the pseudopotential approach is widely adopted in most quantum molecular dynamics codes. The electrons in the inner shells (core electrons) are tightly bound and do not contribute to any valence bonding. In the pseudopotential approach, the core electrons are frozen in their atomic state, and the divergent Coulomb potential is replaced by a modified potential (pseudopotential) such that the valence states close to the core are less oscillatory but do not change outside the core region. Only pseudoatomic wavefunctions for the valence (outermost shell) states are solved for in the Schrödinger equation, which makes it numerically tractable via planewaves or with real-space mesh techniques.

On using the pseudopotential approach, the KS-DFT equations consist of the solution of single-particle-like Schrödinger equations, which are coupled to a Poisson equation. The single-particle Schrödinger equation (atomic units used) for the ith state is:

\[\left(-\frac{1}{2}\nabla^2+V_{\text{eff}}\right)\psi_i=\varepsilon_i\psi_i,\]

subject to boundary conditions consistent with Bloch's theorem.
In the above equation, ψ_i and ε_i are the pseudoatomic wavefunction and energy eigenvalue, respectively. The total effective potential V_eff consists of a local ionic part, a non-local ionic part, the electronic (Hartree) potential, and the exchange-correlation potential (due to many-body interactions and Pauli's exclusion principle). Typically, to enable predictive capabilities, energy eigenvalues need to be computed to within 1 mHa (chemical) accuracy (1 Hartree ~ 27.21 eV). The single-particle pseudowavefunctions are squared and summed to form the charge density, which is used in the Poisson equation to solve for the Coulomb potential (local ionic and electronic contributions). The effective potential V_eff is then formed anew, and the process is repeated until self-consistency is attained (the charge density and effective potential no longer change). The total energy, forces, etc., can then be computed to enable quantum molecular dynamics simulations.

The solution of the Poisson equation scales linearly with the number of degrees of freedom N, but the Schrödinger solution varies as the cube of N (see this plot). Since the Poisson and Schrödinger equations are solved repeatedly to reach self-consistency, and in excess of 1000 eigenfunctions may be required for large systems (~100 atoms or more), the solution of the Schrödinger equation is the limiting step in the solution of the equations of DFT. Unlike linear equations, which are relatively easy to solve, accurate eigensolutions for thousands of eigenpairs are computationally demanding and algorithmically challenging, since the eigenfunctions also have to meet the orthogonality constraint.
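To make the self-consistent cycle described above concrete, here is a minimal toy sketch in Python. It is emphatically not a real DFT code: a single "electron" in one dimension, a finite-difference Laplacian, and a crude local mean-field term standing in for the Hartree and exchange-correlation potentials. The grid, the softened external potential, and the mixing factor are all illustrative choices of mine.

```python
import numpy as np

# Toy 1D self-consistent field (SCF) loop, schematic only.
n = 200
x = np.linspace(-10, 10, n)
h = x[1] - x[0]
# Finite-difference kinetic operator: -1/2 d^2/dx^2 (Dirichlet boundaries)
T = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * h**2)
v_ext = -1.0 / np.sqrt(x**2 + 1.0)      # softened "ionic" potential
rho = np.zeros(n)

for it in range(100):
    v_eff = v_ext + 0.5 * rho           # crude local mean-field term
    H = T + np.diag(v_eff)
    eps, psi = np.linalg.eigh(H)        # the O(N^3) eigensolve noted above
    rho_new = psi[:, 0]**2 / h          # density from the lowest state
    if np.max(np.abs(rho_new - rho)) < 1e-6:
        break                           # self-consistency attained
    rho = 0.5 * rho + 0.5 * rho_new     # simple density mixing

print(f"SCF iterations: {it}, lowest eigenvalue: {eps[0]:.6f}")
```

In a real code the eigensolve runs over hundreds or thousands of states rather than one, which is exactly why it dominates the cost.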
From Planewaves to Real-Space Mesh Techniques

As the preceding discussion indicates, the efficient solution of the Schrödinger equation in DFT computations is of paramount importance. This need is especially pronounced for metallic systems with heavier atoms and under extreme conditions (high temperature and/or pressure), whose pseudopotentials are deep and sharply localized. In such instances, planewaves (Fourier bases) are less than ideal since they have the same resolution everywhere, and hence real-space approaches, with their ability to have variable (adaptive) resolution in space, become more attractive. More importantly, for large-scale electronic-structure calculations, compactly supported basis sets such as finite elements or wavelets lead to structured sparse system matrices and hence are more amenable to iterative solution schemes and to implementation on massively parallel computational platforms. As a variational method, finite elements are systematically improvable, and convergence of the energy eigenvalues is from above (min-max theorem). Furthermore, boundary conditions (periodic, Bloch, Dirichlet, or a combination of these) are easily incorporated within the weak formulation, which makes FEs attractive for modeling crystalline solids, molecules, clusters, or surfaces. These attributes make finite elements particularly promising for density functional calculations; hence my optimism that wider interest in finite element methods for quantum-mechanical computations will emerge in the coming years. I close with a reading list and a short summary of each review article.

1. T. L. Beck (2000), "Real-Space Mesh Techniques in Density-Functional Theory," Reviews of Modern Physics, Vol. 72, No. 4, pp. 1041–1080. [arXiv] [Journal]

Beck's review discusses finite-difference and finite element formulations in DFT. Emphasis is placed on the Poisson and nonlinear Poisson-Boltzmann equations in electrostatics and on solutions of the Hartree-Fock and KS equations (eigenvalue problems) of DFT. First, Beck provides context for real-space mesh techniques by describing some of the prominent developments in electronic structure methods to date (planewaves, Gaussians, LCAO, etc.). Then the theory behind KS-DFT and classical electrostatics is described. Second-order FD is briefly discussed, and then a higher-order finite-difference method is used to solve the Poisson equation with a singular charge density. The finite element formulation for the Poisson equation is presented. Multigrid solvers are attractive for real-space methods, and the main features of a multigrid method are discussed. Beck presents energy-minimization techniques to solve the Schrödinger eigenproblem, and provides a historical perspective on the developments in finite-difference and finite element methods for self-consistent calculations. Lastly, attention is given to time-dependent DFT calculations.

2. J. E. Pask and P. A. Sterne (2005), "Finite Element Methods in Ab Initio Electronic Structure Calculations," Modelling and Simulation in Materials Science and Engineering, Vol. 13, pp. R71–R96. [Journal]

A PDF of this paper is attached above (courtesy of Pask). Pask and Sterne review finite element bases and their use in the self-consistent solution of the Kohn-Sham equations of DFT. The solution of the Schrödinger and Poisson equations is discussed, with particular attention to the imposition of the required Bloch-periodic and periodic boundary conditions, respectively. The use of these solutions in the self-consistent solution of the Kohn-Sham equations and the computation of the DFT total energy is then discussed, and applications are given. To impose Bloch-periodic boundary conditions, the wavefunction is rewritten in terms of a periodic function u(x), and the variational (weak) form of the Schrödinger equation in terms of u(x) is derived. The weak form within the unit cell and expressions for the contributions of the non-local part of the pseudopotential are presented. In general, the trial and test functions can now be complex-valued functions, and the overlap matrix S and the local part of the Hamiltonian H are Hermitian. The band structure for a Si pseudopotential is presented, and the optimal sextic convergence in energy for cubic finite elements is demonstrated. In the context of crystalline calculations, particular attention is given to the handling of the long-range Coulomb interaction via the use of neutral charge densities in the Poisson solution. Self-consistent finite element solutions for the band structure of GaAs are shown, and uniform convergence with mesh refinement to the exact self-consistent solution is demonstrated.

3. T. Torsti et al. (2006), "Three Real-Space Discretization Techniques in Electronic Structure Calculations," Physica Status Solidi (b), Vol. 243, No. 5, pp. 1016–1053. [arXiv] [Journal]

In this paper, Torsti and co-workers compare and contrast the performance of finite differences, finite elements, and wavelets in electronic-structure calculations. The computational problems to be solved are the single-particle Schrödinger equation for the pseudowavefunction and the Poisson equation for the Coulomb potential. Basic theory on FD, FE, and wavelets is first presented; hierarchical finite element bases are also touched upon.
Solution approaches for linear-equation solvers and for generalized eigenproblem solvers are discussed in significant detail. Applications of finite differences to quantum dots, surface nanostructures, and positron calculations are presented, whereas finite element solutions are presented for all-electron calculations on molecules. Finally, the similarities and differences between the different methods of discretization are nicely summarized.

Thanks for introducing a very interesting theme this month. Naively, it would seem to me that wavelets might be better suited to this class of problems than Lagrange-interpolant-based finite elements. Can you comment further on the differences/relative advantages?

N. Sukumar: Yes, good point, and I do not have a definitive answer. Wavelets and FE in DFT calculations are contemporaries (similar starting times), and in recent years the use of wavelets (Daubechies/interpolatory) has also been on the rise. Goedecker in Basel and co-workers have over the past 2-3 years published quite a few nice papers showing all-electron calculations for the Poisson equation, and also a recent one on DFT calculations using wavelets, in which the promise of the approach is revealed (I have posted the links to two of the refs at the bottom). Unlike FD, FE and wavelets are both basis-set approaches, and the variational arguments also hold for both. On the plus side, wavelets are localized and also orthogonal in real and Fourier space (simpler matrix structures), and they do have the adaptive nature built in (via scaling) in the construction. For FEM, typically up to cubic FE (serendipity elements) have been used so far in DFT calculations, but it would be interesting to see what spectral FE can provide for these problems. Unlike most structural/solid applications, where the mesh can become complicated, here the FE meshes are either regular or parallelepiped, and hence the basis functions are piecewise-polynomial. The bottom line is the need to reduce the number of degrees of freedom required to solve the Schrödinger equation (at an acceptable accuracy), and hence codes with similar capabilities (e.g., wavelets and Lagrange/spectral FE) need to be available so that head-to-head comparisons can be made.

Genovese et al. (2007), "Efficient and accurate three-dimensional Poisson solver for surface problems," J. Chem. Phys., 127, 054704.
Genovese et al. (2008), "Daubechies wavelets as a basis set for density functional pseudopotential calculations," J. Chem. Phys., 129, 014109.

The main advantages of wavelets relative to FE are orthogonality (for some types) and ease of local refinement (hierarchical). The main disadvantages (the price paid) are lesser locality (thus potentially less efficient massively parallel implementation), more rapid oscillations and thus more difficult matrix-element and other integrals, and lesser simplicity (being non-polynomial, in some cases defined only by recursion). Goedecker's recent outstanding paper is the culmination of a decade or so of work on wavelets in DFT. Other major work in the area has been by Arias. See, e.g., T. D. Engeness and T. A. Arias, "Multiresolution analysis for efficient, high precision all-electron density-functional calculations," Physical Review B, Vol. 65, No. 16, 165106 (2002). A close reading of these should make the issues enumerated above clearer.
Which basis's (FE, wavelet, or other) mix of advantages and disadvantages will prevail in the large-scale DFT context in the coming decade or so will become clearer as the parallel implementations of each are brought to fruition. As of now, it is very much an open question. My own sense, however, as evidenced by my continued work in the area, is that the ultimate simplicity and locality of FE will prevail in the large-scale, parallel context. Looking forward, especially interesting in this regard are hierarchical, spectral, and partition-of-unity FE, the last of which I am pursuing presently.

Vikram Gavini: Suku, thanks for the interesting post. John, thanks for clarifying the pros and cons of wavelets vs. finite elements. I am curious whether anyone has tried max-ent functions as a basis for solving the Kohn-Sham equations, or if you have any thoughts in that direction? They are smooth and can capture the wave functions with fewer basis functions than standard FE. At the same time, they offer the desirable coarse-graining features of FE.

N. Sukumar: Thanks for the remarks and questions. I haven't tried max-ent for the three-dimensional Schrödinger operator, but here are a few related points. Accuracy considerations when solving the Schrödinger eigenproblem are a lot more restrictive and demanding than in most typical FE applications. Hence, robustness and systematic improvability (e.g., with mesh refinement/more basis functions) are especially germane. Trilinear finite elements provide very slow convergence for the Schrödinger eigenproblem, and hence Pask chose cubic FE. A general higher-order max-ent basis might open some doors in this regard (Arroyo/Ortiz have come up with quadratically smooth max-ent). With max-ent, and meshfree methods in general, numerical integration of the weak-form integral also needs to be carefully addressed, since numerical integration errors can limit convergence in the energy eigenvalues when very high precision is required. Of course, smooth bases are very desirable for solving the KS equations; a compactly supported, higher-order-precision smooth basis set that can be integrated (very accurately) and which facilitates easy imposition of periodic/Bloch-periodic boundary conditions would be a preferred choice. On the face of it, given that the mesh for electronic-structure calculations is simple enough, element-centric basis functions might prove to be useful since they render the numerical integration issue less difficult. Possibly smooth bases like NURBS or higher-order max-ent on elements might be able to fit within this slot. Setting aside the smoothness property, piecewise-polynomial FE bases have all the other desired attributes, and one also has a good understanding of what is required to obtain variational convergence (avoid `variational crimes') with them, which makes them attractive.

Henry Tan: Thanks for the interesting topic. I am trying to understand the method. What does "particle" represent here?

N. Sukumar: I presume you are referring to the mention of "single-particle" just before the Schrödinger equation. This is a consequence of the Kohn-Sham ansatz used to get to the electronic charge density: the interacting system of electrons is replaced by an equivalent one of non-interacting "electrons" in a mean (potential) field. Each single electron (single particle) obeys the Schrödinger equation: the wavefunctions are pseudowavefunctions, and their sole purpose is to construct the charge density when squared and summed.
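In symbols (the standard Kohn-Sham construction; the occupation numbers f_i and the notation here are my addition, not Sukumar's):

\[ \rho(\mathbf{r}) = \sum_i f_i \,\lvert \psi_i(\mathbf{r}) \rvert^2 \]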
Pradeep Sharma: Dear Suku, thanks for the informative discussion on this interesting topic. Recently, non-equilibrium problems have acquired increased prominence in the context of many applications, e.g., nanocapacitors for energy storage and quantum dots, among others. For example, a nanocapacitor (metal-dielectric-metal thin-film supercell) under electric field conditions cannot be handled by DFT. Time-dependent DFT is a possible solution, but convergence is often a question mark, and the problem certainly becomes computationally intensive. Are finite-element-based methods also likely to offer advantages in such classes of problems?

N. Sukumar: Sorry, I am not aware of the FE literature in time-dependent DFT. Here is a reference that Pask suggested that might be of interest: P. Havu, V. Havu, M. J. Puska, et al., "Finite-element implementation for electron transport in nanostructures," Journal of Chemical Physics, Vol. 124, No. 5, 054707 (2006).

Dear all, we just submitted an article to J. Chem. Phys. about DFT and TDDFT with finite elements. Here is the preprint: Our implementation uses hierarchical finite elements on unstructured meshes. It is still at quite an early stage of development, but it works and the results are promising. We are working on a parallel implementation, and we have some ideas for reducing the computational cost significantly.

N. Sukumar: Thanks for sharing your work, which is very interesting. Nice to read about how higher-order tets, advanced mesh generation algorithms for molecules, and all-electron (not pseudopotential) computations for DFT/TDDFT can all come together. In the all-electron case using finite elements, how is the 1/r Coulomb singularity handled in the numerical computations (is a sufficiently large finite value used)? I'd welcome your comments and perspectives on the pros/cons of FEs vis-a-vis existing all-electron methods. For those interested, here is a short description on TDDFT, and an article on the basics of TDDFT.

Coulomb singularity

We haven't done any detailed analysis, but the Coulomb singularity does not seem to be a big problem. Our mesh generation scheme produces highly nonuniform meshes. The mesh has a vertex at each nucleus (the source of the singularity), and then we create (approximately spherically) symmetric layers of tetrahedra around each nucleus. The radius of the layers increases geometrically (r_i = q r_{i-1}, where, e.g., q = sqrt(2)) until the radius is of the order of the interatomic distance. Finally, these atomic meshes are merged together. The largest-to-smallest edge ratio of the mesh is of the order of 10^3-10^4. The Coulomb singularity could be a problem only when assembling the matrix element (of the Hamiltonian) ∫ χ_i(r) (1/|r|) χ_i(r) dr, which is associated with the vertex basis function at the nucleus, because the other basis functions are zero at this point. As we use Gauss-Legendre quadrature, 1/r is never evaluated at the nucleus (i.e., at r = 0). So, in practice, no special treatment is required. However, maybe some kind of special quadrature scheme would be more efficient and would reduce the number of elements required near the nuclei. I would appreciate any comments about this.
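To illustrate why interior quadrature nodes tame the singularity, here is a small radial toy in Python (my own illustration, not taken from the paper under discussion): in spherical coordinates the volume element contributes r², so the integrand of ⟨χ|1/r|χ⟩ behaves like r·χ(r)² and is finite at every Gauss-Legendre node. The hydrogen-like orbital, cutoff radius, and node count are arbitrary choices.

```python
import numpy as np

# Gauss-Legendre nodes lie strictly inside the interval, so 1/r is
# never evaluated at r = 0.
R = 20.0                                    # radial cutoff (arbitrary)
t, w = np.polynomial.legendre.leggauss(40)  # nodes/weights on [-1, 1]
r = 0.5 * R * (t + 1.0)                     # map nodes to (0, R)
w = 0.5 * R * w

chi = np.exp(-r)                            # hydrogen-like 1s, unnormalized
num = np.sum(w * r * chi**2)                # ~ integral of r^2 chi^2 / r
den = np.sum(w * r**2 * chi**2)             # ~ integral of r^2 chi^2

print(num / den)   # -> 1.0, the exact <1/r> for the 1s orbital in a.u.
```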
FEM vs. other all-electron methods

At first, I have to say that it is way too early to draw any conclusions, but a few things come to mind... At the moment, I can recall only three other types of all-electron methods. (1) The linear combination of atomic orbitals (LCAO), either with numerical atomic orbitals (FHI-aims, DMol) or with Gaussians (Gaussian, TurboMole, Dalton, ...), is the most widely used one. (2) The linearized augmented planewave (LAPW) method and (3) the projector augmented wave (PAW) method are the other two. PAW can be used together with planewaves or finite differences. LAPW and PAW use "atomic spheres" - nonoverlapping spheres around the atoms, inside which a special treatment is applied to the all-electron wavefunctions. So, one has a finite-difference or planewave part plus all-electron atomic spheres.

Pros vs. LCAO:
1) Real-space and local basis functions => easy and efficient parallelization via domain decomposition
2) Different boundary conditions (cluster/molecule, periodic, mixed) are easy to implement
3) Strong external (especially time-dependent) potentials (e.g., electric or magnetic fields) are not a problem

Cons vs. LCAO:
1) Huge number of degrees of freedom (DOFs): LCAO 10-100/atom vs. FEM 1000-10000/atom
2) LCAO has tens of years of history behind it => strong experience

Pros vs. LAPW & PAW:
1) All (core and valence) electrons are treated on an equal footing (i.e., no atomic spheres) => simplicity; adding new features is easy
2) Nonuniform discretization => sparse material can be handled without wasting DOFs

Cons vs. LAPW & PAW:
1) Performance??? A DOF in FEM is more expensive than a finite-difference or planewave DOF

To summarize, I would say that, for large-scale adoption of FEM, what matters is speed. A FEM-based method should be able to perform a basic DFT calculation for a relatively large and dense system (e.g., carbon dioxide on a metal surface/slab, ~100 atoms) in roughly the same time as a PAW- or LAPW-based method. At the moment, my FEM implementation is one order of magnitude slower than the FD+PAW method, but we have some ideas for reducing the computational cost significantly. There also exist a few special systems, for example time-dependent systems beyond the linear-response regime, where FEM has significant advantages over the above methods. But again, it's quite early to draw any conclusions. Within a year, I should have much more insight into this question.

N. Sukumar: Thanks for your detailed response, and for placing FEs within the context of widely used all-electron methods. Clearly, fewer DOFs and improved speed are two important issues for finite elements, and as you allude to, further developments and maturity in FE codes are needed for the gap to potentially close, which will take time. For the Coulomb singularity, the need for a geometric mesh in 3D with element edge ratios of the order of 10^3-10^4 renders the mesh generation (element quality) an important research problem unto itself, which you have solved. Insofar as the quadrature goes, tailored quadrature rules would seem to be beneficial, and there is quite a bit of experience in the boundary element community in treating singular and hypersingular integrands. We also have some recent experience in developing Gauss-like quadratures for polynomial and non-polynomial (singular) integrands for finite elements. First impressions are that some of these approaches, or variants of them, might have a place in improving the efficiency of evaluating the Hamiltonian with the 1/r singularity.

Pradeep Sharma: Lauri, thanks... this is a very useful summary.

Jarson Zhang: It's a really interesting and useful topic, but it's too hard! Thank you too!
Dashen-Frautschi Fiasco

On April 29, at the 1965 spring meeting of the American Physical Society in Washington, Freeman J. Dyson of the Institute for Advanced Study (Princeton) presented an invited talk entitled "Old and New Fashions in Field Theory," and the content of his talk was published in the June issue of Physics Today, on pages 21-24. This paper contains the following paragraph.

The first of these two achievements is the explanation of the mass difference between neutron and proton by Roger Dashen, working at the time as a graduate student under the supervision of Steve Frautschi. The neutron-proton mass difference has for thirty years been believed to be electromagnetic in origin, and it offers a splendid experimental test of any theory which tries to cover the borderline between electromagnetic and strong interactions. However, no convincing theory of the mass-difference had appeared before 1964. In this connection I exclude as unconvincing all theories, like the early theory of Feynman and Speisman, which use one arbitrary cut-off parameter to fit one experimental number. Dashen for the first time made an honest calculation without arbitrary parameters and got the right answer. His method is a beautiful marriage between old-fashioned electrodynamics and modern bootstrap techniques. He writes down the equations expressing the fact that the neutron can be considered to be a bound state of a proton with a negative pi meson, and the proton a bound state of a neutron with a positive pi meson, according to the bootstrap method. Then into these equations he puts electromagnetic perturbations, the interaction of a photon with both nucleon and pi meson, according to the Feynman rules. The calculation of the resulting mass difference is neither long nor hard to understand, and in my opinion, it will become a classic in the history of physics.

Dyson was talking about the paper by R. F. Dashen and S. C. Frautschi published in Phys. Rev. 135, B1190 and B1196 (1964). They use the S-matrix formalism for bound states. Later in the same year, Steve Adler and Roger Dashen became full professors at the Institute for Advanced Study. Naturally, they were admired by their colleagues, and many young physicists studied Dashen's paper on the neutron-proton mass difference. I was one of those who studied the paper carefully during the summer of 1965. I then published a paper in the Physical Review [142, 1150 (1966)] pointing out that their calculation was wrong.

How could those two distinguished physicists make this kind of mistake? If you solve the Schrödinger equation for negative-energy states, there are two solutions: "good" and "bad." We then impose a localization condition to eliminate the bad wave functions. This results in a discrete spectrum of bound-state energy levels. Thus a slight departure from a given energy level will result in a bad wave function. Click here for a recent article on this subject.

Dashen and Frautschi use the S-matrix formalism, where the bound states appear as poles in the complex energy plane. A slight mislocation will lead to the inclusion of the bad wave function. This is precisely the cause of the so-called "Dashen-Frautschi fiasco."

I am indeed fortunate to have had an excellent education in physics. I took my first-year quantum mechanics course when I was a senior (1957-58) at Carnegie Tech (now called Carnegie Mellon University). Michel Baranger was the professor; he taught me why bound-state energy levels are discrete. I still remember those good and bad wave functions he drew on the blackboard.
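A one-line sketch of that blackboard picture (my notation; a one-dimensional potential that vanishes at large x is assumed): outside the potential, a negative-energy solution behaves as

\[ \psi(x) \;\sim\; A\,e^{-\kappa x} + B\,e^{+\kappa x}, \qquad \kappa = \frac{\sqrt{-2mE}}{\hbar}, \quad E < 0. \]

The growing term is the "bad" piece. Localization demands B = 0, which can be arranged only at discrete values of E; shift the energy slightly off such a level and the growing term reappears. This is exactly why a slightly mislocated pole in the complex energy plane drags a bad wave function into the calculation.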
In 1997, I attended his 70th birthday celebration held at MIT. My wife and I posed with him in this photo. We were very happy!

Toward a New Research Program

After the publication of my paper, many voiced objections based on the belief that Princeton could not have made this kind of mistake or misjudgment. There were also many who knew Dashen's calculation was wrong, but they were not sympathetic to me. Their assumption was that I would disappear from the physics world. One prominent Princeton professor told me that wave functions have nothing to do with physics and that everything should come from the S-matrix and current algebra.

I am standing in front of Feynman's portrait at the entrance of Fermilab's Feynman Computing Center (June 2003).

Since then, I have been devoted to wave functions. In 1970, I was very fortunate to find a very important person who shared the same ideology as mine. His name was Richard P. Feynman. People these days ask me what connection I have with Feynman. This was the very beginning of my Feynman connection. Yet it takes time to transform an ideology into concrete results in physics. For this, I lived in isolation for fifteen or twenty years (depending on how you count). It was like living in prison. The person who pulled me out of prison was Eugene Paul Wigner, and I am forever grateful to him.

As for the wave functions, I was particularly interested in their localization property, as you can see from the "good" and "bad" wave functions. The burning issue was, and still is, whether the hydrogen wave function localized in one Lorentz frame appears localized to observers in other Lorentz frames. This is a well-defined problem, and I have enjoyed working on it in the past. I enjoy giving invited talks on this subject under titles such as symmetries of extended particles, covariance of Feynman's parton picture, Feynman's rest of the universe, squeezed states, Feynman's decoherence, Wigner's little groups, and other trademarks of current interest. You may visit my Scope of Research page to find out what I am talking about.
New straightforward approach to teaching quantum mechanics (scottaaronson.com)
155 points by georgecmu on July 31, 2012 | 55 comments

I took 2 courses on quantum mechanics during my undergrad and I'm taking the Coursera course now as well, for fun. My biggest quibble with how QM is presented is that it is, paradoxically, never tied into physics. Quantum physics is really linear algebra plus statistics, but with an extension to complex numbers. In my experience it is just another math course and there is no physics anywhere. There's always talk of measuring states, applying Hadamard gates and writing down their decomposition in the eigenbasis, but it's abstract and meaningless. What does a particle with some particular wave function look like? As it evolves in time it "smears" out according to the time-evolution equations, but how fast is the process? Does it occur on a scale of nanoseconds? Seconds? Hours? How is it suspended, or operated on? What exactly is an example of a measurement? There are so many tangible questions that would help with intuition, but they are never addressed. This would be my approach to teaching quantum mechanics: connect it to a concrete physical system, explore it in detail going back and forth between experiment and math, and, best of all, maybe even simulate the system somehow. Some time ago I made an effort to try to simulate what I learned and this was the result (as an example): http://www.youtube.com/watch?v=a88GlrUmI9Y&feature=plcp It's ugly and it's probably wrong, but it's tangible and the best I could do, because finding this kind of quantum mechanics, as opposed to a lot of talk about measuring things, is very hard.
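In the spirit of that demo, here is a minimal sketch of the kind of simulation being described: free evolution of a Gaussian wave packet via the Fourier (split-step) method, which answers the "how fast does it smear out?" question numerically. Units with ħ = m = 1, and all parameters are my own illustrative choices, not taken from the video.

```python
import numpy as np

# Free evolution of a Gaussian wave packet (hbar = m = 1). With a purely
# kinetic Hamiltonian, one step in Fourier space is exact.
n = 1024
x = np.linspace(-50, 50, n, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)          # momentum grid

psi0 = np.exp(-x**2 / 4)                         # |psi|^2 has width 1
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * dx)

for t in (0.0, 1.0, 5.0):
    psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))
    prob = np.abs(psi_t)**2
    width = np.sqrt(np.sum(x**2 * prob) * dx)    # width of |psi|^2
    print(f"t = {t}: width = {width:.2f}")       # grows as sqrt(1 + (t/2)^2)
```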
Cool demo. Interesting questions. There are Nobel prizes waiting for the answers to some of them. Warning: the following is not precisely targeted at your post. The core of some of your questions is the kind answerable only with mu. Essentially, you are asking for some physical intuitions to relate the quantum world up to the world as we see it. But the thing is that there is very little in our macroscopic reality that relates to the quantum world. All analogies are broken. Here's the key thing to realize: quantum mechanics is hard not because it is complex =). Far from it. It's hard because we have no mental basis with which to represent its concepts. The opposite should hold too: a quantum intuition would find our world bizarre, very hard to understand and - unlike how we feel about QM - justifiably complex. But if one takes a multicultural-appreciation approach to how systems evolve, QM becomes a bit less offensive to one's sensibilities. The only path to approximate intuition is to drill the math and think about the concepts. Simulations are another great option. They allow one to create a rope bridge from the math to something that feels a bit more concrete. Still not intuitive, but better than nothing. The idea is to get to the point where you can use the math as a map. So you won't ever be able to feel as comfortable with it as with Galilean relativity, but you can form questions, think about reality, and use the map to guide your mind. I am the opposite of you - I think a lot of the physical details and incidentals of experiments are useless baggage for building that map. What does a particle with some particular wave function look like? At this point it is not useful to think about how things look. Focusing on what things "look" like would again be just a rough analogy, possibly misleading (similar in spirit to focusing too much on tangent lines for derivatives), and might create a false crutch by waylaying the brain from becoming more comfortable with the abstract surroundings. And then there is the question: what is the wave function? Does it make sense to think of it as something physical? (I don't think so.) Does it occur on a scale of nanoseconds? Seconds? Hours? While kinda opposite in direction to your questions, work on decoherence is answering some questions of timing. But nothing will shed light on entanglement, coherence, and aspects of measurement better than quantum computers. Let's hope they get invented soon or, less preferably, proven not to be possible. Either way, we would learn a lot.

There is an advantage to the original formulations of QM, and they are precisely these. It's true that in quantum field theory you don't see these Bell inequality ideas, and I've seen people working in quantum information theory who struggle to prove that multiplying a wavefunction by an arbitrary phase factor is an unobservable change. QFT has real current statistics and Lagrangian densities and Feynman diagrams, which give you a much more tangible feel for what physics you're describing. Heisenberg equations of motion are a good place to start. The original way we stumbled upon quantum mechanics was due to Heisenberg, who noticed that a lot of the wavy stuff people wanted to explain could be explained if Hamilton's equations of motion df/dt = {f, H} + ∂f/∂t were generalized by treating x(t) and p(t) as matrices and insisting that they do not commute, leaving instead [x, p] = i ħ as a matrix version of an "uncertainty principle." The corresponding quantum equation for an observable Â is dÂ/dt = (i/ħ)[Ĥ, Â], which allows you to start (most famously) with a harmonic oscillator Ĥ and derive the Hamilton equations dx/dt = p/m, dp/dt = -k x, precisely due to the failure of x and p to commute.
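For concreteness, the two-line version of that harmonic-oscillator derivation (standard textbook material; my notation), using [x̂, p̂] = iħ so that [p̂², x̂] = -2iħp̂ and [x̂², p̂] = 2iħx̂:

\[ \hat{H}=\frac{\hat{p}^2}{2m}+\frac{k\hat{x}^2}{2}, \qquad \frac{d\hat{x}}{dt}=\frac{i}{\hbar}[\hat{H},\hat{x}]=\frac{\hat{p}}{m}, \qquad \frac{d\hat{p}}{dt}=\frac{i}{\hbar}[\hat{H},\hat{p}]=-k\hat{x} \]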
So you get this direct connection between known physics equations and the quantum theory, and often the same thing that is responsible for driving the uncertainty relation also drives all of classical physics. What does a particle with some particular wave function look like? |Psi|^2 in the appropriate basis, I should say. In quantum mechanics there is a discrete energy level spacing ΔE, and the answer is that no observable can change much faster than ħ/ΔE. For the harmonic oscillator, for example, nothing can change much faster than 1/Ω by this criterion. What exactly is an example of a measurement? This can get a little touchy, but a good example is a photon hitting a photomultiplier tube and generating a click - especially if you start to play with the polarization of the photon to generate entanglement and so forth. Edit: with all that said, I highly recommend watching Feynman's New Zealand lectures, which explain, among other things, why CDs show rainbow patterns (in a time, sadly, before people had CDs, so Feynman kind of just says "I wish I could have brought you an example."): http://vega.org.uk/video/subseries/8

You're right. And it's well explained despite the subtle jabs at quantum information theory =P. There are definite advantages to the physics-focused approach. As a physicist (which I am not), in your practice of QM you get familiarish concepts like spin, momentum, oscillators, and Lagrangians. But despite the wild successes of QFT, when it comes to explaining things it is extremely hard to do without either using broken analogies or talking about Feynman diagrams, Lagrangians, and Hamiltonians. To realistically depict what is going on, you are replacing one branch of math with another, more complex branch, with the main advantage being that someone who has learned the math of classical mechanics can have a slightly stronger physical intuition. The advantage of QIT is that, because it is relatively simple already, a slightly simplified form is still easier to understand and more representative of the real thing than a highly simplified explanation of QFT. The other advantage is that the simplicity allows the raw structure to be exposed and tackled much more readily. If I were to bet, I would say answers to questions like what exactly the wave function is, how the macroscopic universe arises from the cloudy quantum picture, what is really going on in measurement, etc. will come from QIT. I get the impression that many physicists don't have much respect for foundations, but these questions are still worth answering and would have a practical effect on our world right away. QIT is young yet; there are advantages to being able to take multiple viewpoints of the same thing. The correct viewpoint can vastly simplify a problem. To quote Egan: "Everything becomes clearer, once you express it in the proper language."

Oh, I don't mean to demean the field of quantum information. Especially, I find it really useful to run through the double-slit experiment by labelling one slit as |0> and one slit as |1>, so that going through both slits comes out as sqrt(1/2)[ |0> + |1> ] = |+>, which has certain "off-diagonal terms" in its "density matrix." If all that formalism is built up, you can have fun working through when these off-diagonal terms exist and when they do not, especially in cases where you take a new qubit as |0> and then entangle it into the system with a CNOT gate, where you get |00> + |11>. If folks are confused by the above, I have begun trying to explain it here: https://github.com/drostie/essay-seeds/blob/master/physics/d... It's really kind of rough (in particular I'd like to use proper HTML subscripts rather than Unicode subscripts eventually) but it should be intelligible to a bright student who wants to know the basic ideas of QM.
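That off-diagonal story is easy to check numerically. A small sketch of my own, using NumPy: build the density matrix of |+>, entangle it with a fresh |0> qubit through a CNOT, and trace out the second qubit - the off-diagonal (interference) terms vanish.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

# Density matrix of |+>: off-diagonal terms (coherences) are 0.5
rho = np.outer(plus, plus)
print(rho)

# Entangle with a fresh |0> via CNOT: |+>|0> -> (|00> + |11>)/sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
psi = CNOT @ np.kron(plus, ket0)

# Reduced state of the first qubit: trace out the second qubit
rho_full = np.outer(psi, psi)
rho_A = rho_full.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_A)   # identity/2: the off-diagonal terms have vanished
```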
I suppose that most of the time you are drawing the probability density (|phi|^2), but in a few cases you show Re(phi) and Im(phi). It looks a little too wavy. For example, if the initial state of a free particle is a (real) Gaussian distribution, after a time it should still be a (wider) Gaussian distribution, with the wave function phi modulated by a complex phase. I think that this is due to a problem in the numerical method. In spite of this, the general behavior of the simulation looks right. For example, in the harmonic oscillator, if the initial configuration is a well-localized state, after half a period the configuration should be a well-localized distribution on the other side of the parabola, and this is exactly what the simulation shows.

Yeah, but this is more general than QM - it's amusing to ask (other) physicists how fast electrons move in typical wires (a purely classical number, but most won't have a clue).

It's not exactly clear what you mean by "velocity" in that question. Drift velocity? Fermi velocity? (And others.) These two differ by multiple orders of magnitude, so it's easy to wrongly conclude that another physicist doesn't have a clue. Intuitive explanation: drift velocity is the overall streaming velocity of the moving electrons; Fermi velocity is the velocity of a single electron. The reason these are different is that electrons move back and forth heavily, so the Fermi velocity is high. But under normal voltages they move just slightly more in one direction than the other on average, and that is the drift velocity. Of course this is highly simplified, and the real story is quantum mechanical.

maybe i'm being over-sensitive here, but it really feels like you're making excuses and trying to hide behind details. surely it's obvious i mean drift velocity from the context (how can you call fermi velocity classical?). i don't want to drop names, or pull rank, or argue from authority, but this comment is based on memories of a happy afternoon chatting with other students. none of them said "oh, i don't understand, do you mean drift or fermi or one of the many other velocities i can think of?" instead, to a man or woman, they said some random large number, then stared, then did the maths, and then burst out laughing. they were smart people. and i admit i was one of the dumbest (and i didn't come up with the question - i can't remember who did). and none of them felt the need to make excuses or smokescreens about learning something new.

I didn't read the "(a purely classical number, but most won't have a clue)" as being part of the question, but rather as commentary about people answering the question. Also, "how fast do electrons move" really does suggest the velocity of an electron and not the net average velocity... Anyway, calculating drift velocity is easy. From the current and the charge of an electron, calculate how many electrons are passing through the wire per second. Then calculate the number of electrons in the wire from the material properties. The answer is the ratio between these two, multiplied by the length of the wire. Since this contains many numbers that most people (even physicists) don't know off the top of their head (density and atomic weight of copper, Avogadro's number, electron charge), this is a bit hard to guesstimate to within an order of magnitude without looking things up, but it comes out surprisingly tiny - a fraction of a millimeter per second for typical currents.
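As a sanity check, here is that back-of-the-envelope calculation in Python. The current, wire cross-section, and one-conduction-electron-per-atom assumption are illustrative choices of mine:

```python
# Drift velocity of conduction electrons in a copper wire.
# Assumptions (mine): 1 A current, 1 mm^2 cross-section,
# one conduction electron per copper atom.
e = 1.602e-19          # electron charge, C
N_A = 6.022e23         # Avogadro's number, 1/mol
density = 8.96e3       # copper, kg/m^3
molar_mass = 63.55e-3  # copper, kg/mol

n = density / molar_mass * N_A   # conduction electrons per m^3 (~8.5e28)
I = 1.0                          # current, A
A = 1e-6                         # cross-sectional area, m^2

v_drift = I / (n * e * A)
print(f"drift velocity ~ {v_drift:.1e} m/s")   # ~7e-5 m/s, about 0.07 mm/s
```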
This sounds a lot like what I started reading at LessWrong: http://lesswrong.com/lw/r5/the_quantum_physics_sequence/ One line that particularly stuck with me is: "Dragging a modern-day student through all this may be a historically realistic approach to the subject matter, but it also ensures the historically realistic outcome of total bewilderment. Talking to aspiring young physicists about 'wave/particle duality' is like starting chemistry students on the Four Elements."

In physics, sometimes an approximate theory/model is useful even though it is incorrect. The problem is that the Schrödinger/Heisenberg quantum states of a particle are a lie. If you have an electron, it doesn't follow the Schrödinger/Heisenberg equation. It can emit a photon and reabsorb it a little time later. This is not part of that equation, and it has a very easy-to-measure effect, the Lamb shift: http://en.wikipedia.org/wiki/Lamb_shift#Lamb_shift_in_the_hy... And the electron can do even crazier things, but they are luckily more difficult to measure. This is the reason to use quantum field theory.

* If you are only going to buy a lens to put in front of your photodetector, then probably the wave/particle duality is a good enough approximation (in spite of the fact that it doesn't make sense).
* If you are doing quantum chemistry, then use the Schrödinger equation (even though it doesn't work for strong electric fields and is not compatible with special relativity).
* If you work near a big particle accelerator, you should at least try to use the Standard Model (even though renormalization doesn't make sense).
* And I hope that you never have to use string theory to explain an experiment.

It's nothing like the quantum physics sequence. Yudkowsky starts out making up numbers from nowhere and asserts things for purely philosophical reasons, whereas Aaronson actually has proofs (albeit left to the reader).

Sure, the numbers in "Quantum Explanations" are made up, but (1) the experiments are real, and (2) everything besides the numbers is accurate (as far as I know). Plus, the goal of the sequence was never to actually explain quantum physics. It was to explain why a realist perspective (the wave function is all there is, and the math says it doesn't collapse, so it really doesn't) is by far the most probably correct, despite the fact that it makes no new predictions compared to previous interpretations. That, plus answering some philosophical questions with physics. Aaronson's explanation definitely is a step in the right direction. I still have a quibble, however: he keeps mentioning "probability" as an analogy to amplitude. That confuses his explanation, in my opinion. I'd rather have a straight explanation of the QM math, then an explanation of its similarities with probability theory. And the mixed-state paragraph seems to conflate subjective probability and the actual distribution of amplitude. Ick. Why couldn't I read this 5 years ago?

When Yudkowsky says configuration space in that sequence, he doesn't mean a separable Hilbert space, because he doesn't believe (for philosophical reasons) that that's the correct setting for QM. But…

Of course it's not the correct setting for QM. Before even talking about Turing computability and infinite-set atheism (which rule out a continuous, infinite configuration space), configuration space is folded on itself around the identity axis. Unless you think (a,b) is not the same configuration as (b,a), even though their amplitudes would add up before we have access to their square at the experimental level? Evidence towards "it's the same configuration" looks quite overwhelming. Or, could a "permutable" space, where (here with 2 dimensions) (x,y)=(y,x) for all x and y, be a Hilbert space as well?

Overall, I'm not sure what you're talking about. Can you be more explicit, or provide some links?

I don't know how you expect me to respond to this. Infinite-set atheism is basically Yudkowsky's reason for denying Hilbert space, so I don't know why we should talk before it. Read the comments on "The Quantum Arena" -- Yudkowsky didn't even know whether the thing he was railing against as an uncountably infinite set was indeed infinite! (Presumably he has updated by now.) Well, it depends on the situation. I assume you're talking about the configuration space of the positions of two indistinguishable particles, in which case of course I think they're the same configuration (that's what "indistinguishable" means) and you're just beating down a straw man. If wavefunctions in general are members of a Hilbert space, then so are symmetric wavefunctions.
All of the wavefunctions for two particles described within are elements of L^2(R^2); the subset of physically realizable wavefunctions forms a subspace which is also a Hilbert space (answering your question about "permutable" spaces). TL;DR: Don't try to learn QM from EY.

Thanks for the link.

> Don't try to learn QM from EY.

Well… I agree. But then again, I don't think he really was trying to teach it. The way I see it, he just lifted the confusions you would have if you started to really learn QM.

> Two other perfect examples of "obvious-in-retrospect" theories are evolution and special relativity. Admittedly, I don't know if the ancient Greeks, sitting around in their togas, could have figured out that these theories were true. But certainly -- certainly! -- they could've figured out that they were possibly true: that they're powerful principles that would've at least been on God's whiteboard when She was brainstorming the world.

Actually, people studying the Pre-Socratics like to point out that Anaxagoras (IIRC) theorized something eerily close to evolution: that at the beginning there were all sorts of random creatures, and only the ones which did well survived and created more creatures like them. I don't know why no one followed up on it (in contrast to something like heliocentrism, where we know why the Greeks abandoned it, for what were excellent and unobjectionable reasons at the time).

I find the references to "God" (e.g., "...why did God choose to do it that way and not some other way?") to be both distracting and unhelpful when working through the text itself. It signals to my brain (possibly incorrectly) that either (1) the author is trying to inject an unwarranted religious idea into an otherwise potentially helpful explanation in a devious way, or (2) the author has sadly mis-chosen a loaded term that doesn't add anything helpful to the explanation at all, and instead detracts from working through it because it creates the nagging question in my head of, "Is he really suggesting that God (whatever that may be in the reader's mind) chose to do this?". A better option (for just the chosen example) would be: "...why does the Universe do it that way and not some other way?".

This is exceedingly common language in the mathematical community, particularly for the generation of Scott Aaronson and the one before him.

Oh, okay. Thank you so much for pointing that out to me. I somewhat assumed as much, and I honestly wasn't intending to create anything remotely ad hominem in my critique. Since it's hardly common in anything I typically read, it just stuck out strangely (I am only just digging into understanding QM/QP). Again, sincere thanks.

Are they referring to a God in the religious sense, or using it as a shorthand for "nature" or "the universe"? Some of them are no doubt serious; I seem to remember a study showing that mathematicians are more likely to be theist than atheist.

In my experience, "Why did God pick X?" and the like is shorthand for "Is there a classification/uniqueness theorem that says X is the only possible solution to the problem?", which is how it is used here.

Again, thanks for drawing out the shorthand. I personally prefer reading "is there a classification/uniqueness theorem that says..." over "why did God choose to...". It requires zero cycles to process the intent of the first version, while the second makes me stop each time and wonder which one the author means.

I actually find it very illustrative.
It is a way of saying that something is just what it is, and we can't really say why. It is both succinct and rooted in culture, and thus rather easily understandable. Using such a metaphor is similar to writing poetry: you can compress a pack of thoughts and emotions into just a few words. And you can replace "God" with whatever you want if you're that biased.

This is actually quite a projection on your end. Saying "God" != "something is just what it is, and we can't really say why." It is typically, from an investigatory and cultural-historical standpoint, a curiosity-stopper, not an explanation (or even a signifier of an explanation). It is equivalent to stating "magic" (ignoring for just this moment the generational and historical bit offered as explanation for the term's usage). Including references to "God" in an attempted explanation of quantum mechanics is not "easily understandable". Moreover, its rootedness in culture brings with it myriad histories that impact the way a reader understands what a writer intends. Explaining quantum mechanics by anthropomorphising a God-construct who "chose" to make something one way versus some other way is distracting to readers who do not inject a God-construct into their own explanations, specifically those who avoid it so as not to make understanding more difficult for the reader. I can accept that this is a practice from the author's field and generation and still find it distracting and unhelpful, without that equating to me being "that biased." God does not elicit a mentally neutral concept that compresses "a pack of thoughts and emotions in just a few words" when discussing quantum mechanics. It would still be better for the author not to use "God", insofar as doing so eliminates mysterious language from his explanation, thus reducing the overhead of readers needing to substitute a non-loaded term (especially when dealing with the sciences, where injecting the term "God" has a history, rooted in Western culture, of not being too helpful or enlightening). I can accept it being a practice among the community. That, however, does not imply it is an easily understood, necessary, or sufficient shorthand.

I'm sold, although I'd still start by sitting them down with a copy of Feynman's QED. When doing math about a thing, it helps tremendously if you already have some sort of intuition about what the math is describing. Also, the historical stuff is fun and interesting. Physics students should probably be taught this stuff regardless, but maybe not as part of a class on theory. On the other hand, Feynman's most important lesson is thoroughly modern: the surest way to understand something "intuitively" is to create it.

Cool idea. Screw learning QM - let's rediscover it on our own!

> the surest way to understand something "intuitively" is to create it.

And there's no reason the best way to create something has to be the way it was created the first time.

I find historical narrative to be an effective way to learn, but just reading names, dates, and important events is boring. You have to pretend like you have a time machine. An effective historical lesson takes you back in time and puts you right at the scene of the development, almost allowing you to have a conversation with the characters and follow along with their thought process. That being said, I appreciate this straightforward approach just as much, and find that having a good understanding of the core concepts first makes adjusting the settings on the "time machine" much smoother.
IMO, the simple approach is dangerous. Saying "here is the theory, have at it" is fine if you want to teach someone how to design transistors, but the point is not that QM is correct - it's just the best approximation yet, and in no way the "holy word" on how the universe operates. The single most important thing to teach science students is to discard existing theories when reality does not support them, and to require any new theory to also be supported by the body of existing experimental evidence.

> the simple approach is dangerous

Definitely. However, it also happens to be the approach used from kindergarten up through high school, and in most universities. Whatever the field, as far as I know, students learn real science only when they start their PhD. Sometimes even later. I would love to see a real science course where students come up with theories and discard them, again and again, until they come up with something probably correct (or not). That would teach them to accept being wrong (I hope). Today, it's hard to do for anything but open questions, which tend to be pretty difficult.

I recently did one of my Wikipedia curiosity dives along a similar route, starting with cathode ray tubes in the 1800s, through the discovery of electrons and photons, and then into the discovery of quantum effects, trying to understand which questions were raised by each experiment and how they led to the next set of experiments and theories. It was very enlightening, though Wikipedia's dry style and inconsistent depth on each subject make it hard to follow the narrative sometimes.

Yes, someone really needs to do a fun Bill-and-Ted-style video series (or, even better, a full-blown video game franchise) on a wide variety of topics. I would pay good money for entertaining, interactive, and highly educational trips such as: Life at Bell Labs with Dennis Ritchie, Finding Mario with Shigeru Miyamoto, Building and Selling Your First Computer with Steve Wozniak and Steve Jobs. So many great stories, and so much untapped storytelling potential... Wikipedia, the History Channel, and random documentaries on Netflix aren't good enough!

Which were muddled, confused, and, pretty much by definition, partially incorrect. Historical development necessarily involves recapitulating the confusions of the people who finally got confused enough to throw over their original ideas. Even if you trim the history down to only the path that actually led to modern theory, you still have a lot of confusion to get through before you can actually introduce the main idea of the topic in all its glory. And that's in addition to the confusion students have naturally.

Very true. Though there is still some value in the journey through history, aside from learning the main idea. It can serve as inspiration for your own quest, and also challenge you to question modern theory as those who came before did. Being equipped with some fundamentals first makes exploring the history less confusing. I think Charles Petzold's book "Code" is a great example of blending historical narrative with core concepts - I highly recommend it.

Perhaps this is a good approach to quantum mechanics for a course about quantum computers. (Maybe it has too much emphasis on the probability part.) Quantum mechanics has a little set of clear rules that are easy to follow and that get the correct result.
So if your plan is to have some little black quantum boxes and combine them to get quantum computations, perhaps it is a good idea to jump directly to the rules and forget about the history part. The problem arises when you really want to understand how the little black quantum boxes work, or how this scales to bigger systems and to the real world. The correspondence principle says that if the system is big enough, the quantum effects should almost disappear, and the system should be correctly approximated by Newtonian physics. (There are some macroscopic effects that depend on the quantum nature of the world; the clearest is superfluidity.) The biggest problem is the correspondence: how to make the connection between the real world that you see and quantum physics. The pseudo-historic path gives at each step a more complex model, but the jumps are smaller. (For example, Schrödinger's model is usually explained before Heisenberg's model because it is more intuitive, even though Heisenberg's work came a few years before Schrödinger's.) In this approach, there is no reference to the method that you should use to transform a classical model into a quantum model. The main examples are the harmonic oscillator and the hydrogen-like atom, which are used as the base to explain the properties of matter.

By completely random occurrence, I was watching this video yesterday: It's by the fellow who does Minute Physics on YouTube. Towards the end of his talk, he specifically talks about this topic. He makes a comparison between mathematics and physics, where in mathematics you are taught the math without a ton of history. No scaffolding. In physics, you are taught the entire history of physics, i.e. scaffolding, from the ground up. He also advocates directly teaching modern theory and leaving most of the history out. Make learning physics more like learning math.

As a PhD student in mathematics, I have to say that history is valuable in mathematics because it explains why people are interested in certain problems. When I was a master's-level student, I always got annoyed by advanced courses in algebra (things like homological algebra) where it was often unclear why one should be interested in the problems stated there. Prompting the profs to give some short historical overview can be very enlightening. I do believe you can leave the vast majority of the history out. Advances in notation etc. were made for a reason. But a little bit of history can be quite important for context.

History may be valuable in mathematics, but it is demonstrably skipped. Few know much about it. When I was in grad school in math, I made the interesting discovery that if p and q are polynomials over a commutative ring, any polynomial that is symmetric in the roots of p and q is actually a polynomial, over the original ring, in the coefficients of p and q. (The construction works whether or not the ring can be embedded in a field where said roots actually exist.) Using this observation it is trivial, for instance, to write down in fully factored form a polynomial that has sqrt(2) + cube_root(3) as a root.
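The commenter doesn't name a specific algorithm, but one classical route to this kind of construction (my hedged illustration) is the resultant, which is itself symmetric in the roots and therefore, by exactly the observation above, a polynomial in the coefficients. The factored form is the product of (x - root) over all conjugate combinations; eliminating y below expands that product. A quick check in Python with SymPy:

```python
from sympy import symbols, resultant, expand

x, y = symbols('x y')
# y stands for sqrt(2), a root of y^2 - 2; then x - y stands for
# cube_root(3), a root of (x - y)^3 - 3. The resultant eliminates y,
# leaving an integer polynomial that vanishes at x = sqrt(2) + cbrt(3).
p = resultant(y**2 - 2, (x - y)**3 - 3, y)
print(expand(p))   # x**6 - 6*x**4 - 6*x**3 + 12*x**2 - 36*x + 1
```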
At his encouragement I went to the library, and picked up an algebra book from the 1800s. My construction was taught, and there was a whole chapter full of problems where students were expected to use it to come up with polynomials with specific roots. Furthermore, as I looked into it, both of the professors that I mentioned before worked in areas whose history dated back to the observation that I mentioned. It was used in the original proof that the algebraic integers form a ring, and that construction was the original reason that people were interested in symmetric polynomials.

For another demonstration of how little of their own history mathematicians know, ask anyone why the notation for the second derivative is d^2y/dx^2. Then ask them where the f' notation comes from. Then ask them what Cauchy was trying to do that led to Cauchy sequences. Most will draw a blank on all three. Don't read on until you're satisfied that you don't know the answers.

In the original infinitesimal notation, d was an operator. It could be defined by d(y) = y(x + dx) - y(x). And you'd calculate a slope as dy/dx (drop any infinitesimal bits). Well, when you work out d(dy/dx)/dx it turns out that you get d(d(y))/(dx * dx), which is more compactly written d^2y/dx^2.

The f' notation was introduced by Lagrange in an attempt to get rid of infinitesimals by defining differentiation as a formal algebraic operation on polynomials and power series. This fell apart when Fourier demonstrated that apparently well-behaved trigonometric series could be used to construct pathological things like step functions.

Cauchy came up with Cauchy sequences while attempting to define infinitesimals rigorously. His approach fell apart on the seemingly trivial example of how you rigorously prove the chain rule when the derivative of the inner thing is 0. (He was trying to avoid 0/0, but in that special case you get 0/0 all over the place.)

Motivation is essential, but there are motivations other than the ones that historically led to the creation of the field. In number theory, one of my favorite examples, the motivations that lead people to be interested in it now come from modern cryptography, which flatly didn't exist when the field was founded but provides more interesting and relevant examples than the obsessions of century-old mathematicians.

That's not true for a lot of subjects. For example, every math program I've looked at teaches integration of complex functions the way it was arrived at historically, which involves increasingly complex shapes in the plane that eventually lead to the general case - some programs spend half a semester going through this construction. However, if you're familiar with vector calculus, it is an immediate result of Green's theorem in the plane - doesn't even take 5 minutes of work. Engineering programs often go that route. And if you structured the material in the "right" way, it would be even simpler - Green's theorem in the plane is itself a special case of Stokes' theorem. However, "simpler" is relative - Stokes' theorem is much more abstract. So it's "simpler" in the sense that you can prove much, much less. It's more complex in the sense that you have to grok all those higher-level abstractions without a mental picture. Sort of like software engineering - LISP is a better fundamental system. But it is too abstract for most people in the field. And so are APL/J/K, although in a different way.

No, no, no, no. Instinctively I recoil at that idea. I don't quite know how to explain why, but I'll give it a go...
I'm good with computers because I was there in the early days when it was pretty simple. Everything since has built upon those simple blocks. So, I can always work it back as it were. It kinda means I can simplify current complexities down to simple fundamentals. So, it leaves me with an ability to be presented something new, or different, and very quickly I can understand it. Does that make any sense? I assume many people here are exactly like that, or recognise what I'm trying to say. Pathetic I know, but if anyone can put it better, please do!!!! Anyway, all I know is that I am so much better off knowing how it all came together than people who don't. I don't understand how anyone can claim to understand any subject without understanding its journey as it were. I make a distinction here between knowing how to use something and understanding it. If all one wants to do is "use", then understanding isn't absolutely necessary. You know, plenty of people drive cars without understanding anything about how they work.

No one is saying that knowledge of the history isn't useful. But it's not the most efficient way to learn the concepts. For example, it would be crazy to start learning basic programming by first studying the physics behind computers. Learn the history behind your subject, but separate it from the pedagogy. Otherwise you'll have a much harder time learning knowledge that you can actually apply.

Technically, a truly historical development in terms of teaching programming would involve plugboards and specialized hardware design freshman year, punch cards sophomore year, teletypes junior year, and then 'intelligent' terminals senior year. We simply wouldn't have time to teach anything similar to modern networking, which only really came about in the 1980s, let alone GUI design or Web programming.

> I was there in the early days when it was pretty simple.

I think you have an interesting idea of 'simple': A lot of software becomes orders of magnitude simpler to design and reason about once you have enough RAM to organize it in the obvious way. For example, for many parsing tasks, a recursive-descent parser is the obvious way (a minimal sketch follows below), but you can only design your software that way once you have a language that can directly express recursion and a computer that can grow a big enough call stack (on the stack or in the heap) to allow the parser to juggle nontrivial sentences. Otherwise, you're left with the laborious, non-obvious, but ultimately pointless task of turning your design into a program your tools can implement.

Honestly, I think that says a lot more about your intelligence than your history. That's actually a pretty fair definition of intelligence, in fact.

How about this: Teaching networking now is teaching TCP/IP and everything that TCP/IP rests upon, such as Ethernet and Wi-Fi. Theory naturally flows from practice, and that is the practice. The OSI Model? Gone. Out. Forget it. Nobody actually implements all seven layers; at most, we have four. NCP? What? If you know what NCP stands for, congratulations, you know something entirely useless. "Jeopardy!" would love you. So why teach the stuff we know is useless? It actively discourages students. It makes them think they're just wasting their time, primarily because they are. The history of computer networking is a valid topic. It should be taught in its own course, it deserves its own course. It does not deserve to be shoved into a course on how networking actually works.
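To make the recursive-descent point above concrete, here is a minimal sketch in Python for a toy grammar of "+", "*", parentheses and integers. The grammar and all the names are invented for illustration; the design is just "one small function per grammar rule, with recursion handling the nesting":

    # Grammar: expr   := term ("+" term)*
    #          term   := factor ("*" factor)*
    #          factor := integer | "(" expr ")"
    def parse(src: str) -> int:
        pos = 0  # single cursor into src, shared by the rule functions

        def peek():
            return src[pos] if pos < len(src) else None

        def eat(ch):
            nonlocal pos
            assert peek() == ch, f"expected {ch!r} at position {pos}"
            pos += 1

        def factor():
            nonlocal pos
            if peek() == "(":          # recursion: a parenthesized sub-expression
                eat("(")
                v = expr()
                eat(")")
                return v
            start = pos                # otherwise, read an integer literal
            while peek() is not None and peek().isdigit():
                pos += 1
            return int(src[start:pos])

        def term():
            v = factor()
            while peek() == "*":
                eat("*")
                v *= factor()
            return v

        def expr():
            v = term()
            while peek() == "+":
                eat("+")
                v += term()
            return v

        return expr()

    print(parse("1+2*(3+4)"))  # 15

Each rule is one short recursive function - which is only the "obvious" design once recursion and a generous call stack are cheap, exactly as the comment says.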
In chemistry class (where we learned particle physics in high school) I really hated having to unlearn all the crap we were quizzed on in the last section to learn some new crap that I'd be tested on and then expected NOT to use later because it was obsolete! /rant

Anyway, if you really want to teach the history of a subject like math, you should probably do it after teaching the modern understanding of the subject. In math class our teacher explained some of the controversy over the invention of calculus, and the origin of the different notations. It gave us some appreciation for the unintuitive and tricky nature of the subject, which looks very simple now.

> the controversy over the invention of calculus

... which led to the less-intuitive epsilon-delta proof framework replacing the much more intuitive infinitesimal framework that calculus was originally founded on, until the 1960s when infinitesimals were reformulated as part of nonstandard analysis. (Unless the controversy you're talking about is the utterly uninteresting one about Leibniz vs Newton.) Another example where a historical development would lead students through a completely pointless diversion (epsilon-delta proofs) simply because Robinson was born in 1918 instead of 1618.

I was trying to play devil's advocate but I honestly couldn't think of a good reason to teach Calculus using epsilon-delta vs. Nonstandard Analysis. The way Calculus is taught today is already unrigorous until you get to Analysis, so there's no strong reason to teach students using epsilon-delta first.

Right, because students struggling with "for all P there exists Q" won't have any problem with higher-order logic? It's always seemed to me a more sensible road to both simpler and more rigorous calculus courses would somehow involve reducing the scope of functions under consideration, since most of the exercises involve analytic functions anyway. Then, when students are ready for analysis, it can be more about "how to reduce nasty cases to problems you know how to solve, and how to recognize the truly pathological specimens where you can't" rather than "everything you thought you knew is wrong."

You don't need higher-order logic to present a nonstandard analysis approach at the same level of rigour as a standard limits-based calculus course. Keisler wrote a great infinitesimal-based Calc book (http://www.math.wisc.edu/~keisler/calc.html).

In terms of pedagogy I've found that there's a huge leap that students make between Math focused on computations and anything involving proofs. The reason it's difficult to make Calculus rigorous is not because you have to address a ton of cases - it's because understanding proofs is really hard. This is why students who take Algebra and then Analysis or vice versa tend to do much better in their second course - because they're already used to proofs. So I don't really think it's possible to make a first-year Calculus course more rigorous by sticking to analytic functions, because you still have to get over the proof barrier.

I did mean Leibniz vs. Newton but I remember infinitesimals being glossed over and wishing we would get into them more.
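For readers who haven't seen the infinitesimal style under discussion, here is the standard worked example in Keisler's manner (an illustration, not taken from any poster above): differentiate y = x^2 by computing with an infinitesimal dx and discarding the infinitesimal remainder,

\[
\frac{dy}{dx} = \frac{(x+dx)^2 - x^2}{dx} = \frac{2x\,dx + (dx)^2}{dx} = 2x + dx \approx 2x.
\]

The last step takes the "standard part" of the result, which is exactly the move that epsilon-delta limits were later invented to justify.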
12:00 AM
So what are all 4 numbers and does the equality hold?
got it
they are equal
thank you Ted!
You're welcome. Always remember to play with examples!
ok, thanks. Have to go, bye :)
12:06 AM
Hey chat!
heya @Lucas
how you're doing @Ted?
Quite well, thanks, and you?
hlo folk
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
12:12 AM
Heya @Eric. How is the dimension of a simplicial complex defined?
@user193319: Where did you use $k$?
Not sure. I just wrote the problem verbatim.
Oh, I see.
how goes
12:19 AM
@TedShifrin life's giving me a break, so better now. thanks :)
uni is the easiest part of going to uni
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Sounds too easy for you.
@user193319: So can you show there are no $k$-simplices for $k>n$?
you mean I might be underestimating it?
No, @Lucas. I think the opposite.
12:23 AM
No, but I can try. So, the dimension has to do with the number of $k$-simplices that can be fit into $X_n(\Bbb{Z})$?
well, thank you then. :)
@Lucas what's the content of a math degree in Brazil
By dimension they mean the highest-dimensional simplex you can have.
@ÉricoMeloSilva hold on
Ah, okay. Thanks, I'll think about what you've said.
12:26 AM
@Erico, you can find it here
however, Unicamp's structure is designed to have fewer obligatory classes and give you more free time to do scientific research
iirc you must do at least 2 in a year after you get into the math course itself (the first 3 semesters are with the physics course so you can choose which one you like the most after this period)
cool cool
@Eric: So Korean or what for dinner tonight?
"period of time". lmao non-native speakers suck. :p
maybe taiwanese porridge
@Lucas are you planning on trying to go to grad school?
my dream is to study at IMPA and Berkeley
12:33 AM
It takes a strong person to succeed at Berkeley.
so lift weights
yeah they kind of bury u w teaching
Oh, most places do that, @Eric. That wasn't my issue at all.
@LucasHenrique i decided not to apply to IMPA for various reasons but I was just at berkeley and met a few cool ppl
@TedShifrin i guess so, among my options it was by far the heaviest teaching load
Huge state school, so sure. Princeton = spoiled brats.
12:34 AM
lots of people need lots of teachers
uh oh am i gonna be a spoiled brat
I think so.
@TedShifrin the lack of :P ending that sentence is unsettling
I was referring to undergrads, mostly, but, yeah. I aim to unsettle.
12:37 AM
my impression of berkeley from visiting their grad open house was that it's very easy to get lost there
Yeah, that was more the issue I was speaking to. One has to be a very aggressive graduate student to make sure to get attention.
At UGA we pampered the grad students, by comparison.
seems to be a problem with large state schools...
And many of us on the faculty tried to make sure no one slipped through the cracks.
@TedShifrin that's rlly good
12:51 AM
Anybody know if there's a question similar to this out there? or resources that may help
https://math.stackexchange.com/q/3156156/445911
I checked around but couldn't find anything (among the many "recommended" posts) that quite fit.
Anyone who is familiar with Latex, can you tell me I get this?
When I do $\frac{u_{max} - u_{min}}{x_{max} - x_{min}}$
Why does the numerator decide to be on the left instead of top?
Looks like you messed up somewhere, the first 'u' is not in math mode
No, you don't want \frac.
Maybe a copy/paste error? Sometimes copying things into a latex editor can mess it up.
When I removed the $, it started working...
12:56 AM
Weird.
@TedShifrin why wouldn't you want \frac ? /curious
You want $a_b$, where $b$ has an overline over the expression.
@TedShifrin I think DCL wants the corrected version, where it is actually a fraction. Unless I'm mistaken
Presumably "can you tell me I get this?" should've been "can you tell me why I get this?"
Ah, I didn't try to psychoanalyze the English.
I just wanted the numerator to be on top of the denominator, instead of hanging around on the left side.
@DemCode: What are you trying to do?
Oh, OK. So you do want \frac.
12:59 AM
That should've read "why I get this". Typo!
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier.
What's the full expression you're trying to get? If it's just the frac, then your code should be fine
It was just the fraction.
Interesting. I don't know why this isn't compiling.
Hm, weird. Are you editing it in stackexchange? Have a link to the question/answer?
1:03 AM
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$, right?
@DemCodeLines I still find it odd that the first u is not in math mode but the rest is... maybe if you figure out why that's occurring it may solve the problem
Yeah, that's not my problem, though.
Oh, I left out a {.
@abuchay: What is $A$? How do you know the subgroup has to be free?
@ryan Yup, working on it
@TedShifrin sorry, $A$ is a proper subgroup of $\mathbb{Z}^n$.
So you need to prove it's free abelian.
1:09 AM
@DemCodeLines I'm not quite sure why it's not working for you. However, if you post the answer/question and put the link to it in here, someone will certainly propose an edit that fixes it for you. Anyway, hope you get it resolved. I'm off now
I literally copied your text and pasted it, @DemCode, and that's what I got.
For mine, I put the subscripts in text mode, because I hate the math mode with text.
There was a \[ missing earlier somewhere in the document, but \] is in there later on. Adding that \[ fixed it (started working with $). So one dollar sign was missing, basically.
@TedShifrin correct me if i am wrong; isn't $A$ a free abelian group since it is a subgroup of $\mathbb{Z}^n$ and it can be spanned by ${e_i}_{i \in I}$.
It is a theorem, @abuchay, that a subgroup of a free abelian group must be free. It is non-obvious.
1:21 AM
@TedShifrin Alright, i can prove that. Thanks.
Heya chat.
sup @Fargle
Nothing much, how have you been?
mostly good, agonizing over grad decisions but that's not a real problem. hbu?
Wow, it's a @Fargle.
1:26 AM
Just hanging around. Been reading mostly.
Actual literature?
mr. shifrin
Q: why is the isomorphism $T_p(T_pM)\cong T_pM$ natural
You're just asking about $T_x(V) \cong V$ for any vector space $V$. You have a global chart.
Yes, but how is that iso natural
How is it not?
1:32 AM
Doesn't it depend on the basis you use?
Hell no.
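As an aside on the \frac exchange above: the failure mode diagnosed there (an unclosed math environment earlier in the file, so the next $ toggles math mode off) disappears in a balanced, self-contained example. This fragment is only a sketch of correct usage, not the asker's actual document:

    \documentclass{article}
    \begin{document}
    % Display form:
    \[ \frac{u_{max} - u_{min}}{x_{max} - x_{min}} \]
    % Inline form:
    $\frac{u_{max} - u_{min}}{x_{max} - x_{min}}$
    \end{document}

As long as every \[ is paired with \] and every $ with a matching $, \frac typesets the numerator on top as expected.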
@TedShifrin No, math. :P
Blah, @Fargle :P
I do read other things. Just haven't lately.
1:33 AM
the iso literally looks like (p, v) maps to v
what could be more natural
The proof I know of $T_pV \cong V$ for a vector space $V$ begins with "take a basis of $V$..."
One definition of natural would be that if you have an iso $\phi\colon V\cong W$ it induces an iso $T_xV \cong T_{\phi(x)}W$?
Let's actually look at the definition of natural.
pulls out Mac Lane
For me, I'd show $T_p(V) \cong T_0(V) = V$. As Eric suggests, I don't see why you need any basis.
@Fargle pshhhh nerd
1:37 AM
I suppose it depends on which definition of tangent space you use. Which one are you using?
I use whichever one I feel like. Here I'll use equivalence classes of curves.
@ÉricoMeloSilva :( me too tho
Is it not natural for any definition? I don't see why it would matter. All definitions are equivalent, maybe one is easier?
1:38 AM
I think you are using a naïve definition of "natural," i.e., independent of choices. But there is a definition more like the one I said above. I.e., it corresponds right under morphisms.
Alright, but I just want to be sure that there is no possible way for my linearisation to be anything but the one I define it to be. So I want there to be no choice whatsoever for the identification of $T_pM$ with the tangent space to the fibre. However, it's not clear to me how or that I can do this.
I've made several remarks. This is what "natural" was meaning to me.
Well, no one will protest that $T_p(V) \cong V$ is not natural. You should just learn it.
I am struggling to show it for myself.
1:43 AM
For four proper fractions $a, b, c, d$, X writes $a + b + c > 3(abc)^{1/3}$. Y also added that $a + b + c > 3(abcd)^{1/3}$. Z says that the above inequalities hold only if $a, b, c$ are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Translate curves through $p$ to curves through $0$.
Normally I would set up a chart which is just an isomorphism $V\cong\mathbb R^n$ (so pick a basis).
Someone please help me with this.
@TedShifrin so just take $\beta(t) := \gamma(t) - p$
Yes, of course.
@MrAP: I disqualify the question entirely on the grounds that "neither" in (d) is totally incorrect.
1:45 AM
Why is that?
Because "neither" refers to precisely two.
that's what ted said about showing the natural isomorphism to $T_0V$
I was thinking it would be either (a) or (c).
You can't rule out $Y$ if $d=1$.
(a) seems more likely to me.
1:46 AM
"more likely" isn't mathematics.
So that shows that any curve through $p$ with tangent vector $v$ at $p$ corresponds to a curve through $0$ with tangent vector $v$ at $0$.
Then you have to give an example where they're not all positive.
So translation is an isomorphism
Oh, I can give lots of examples where they're not all positive.
Well, the first you suggested.
1:49 AM
BTW, @MrAP, don't you know that the GM/AM inequality can be an equality sometimes?
Yes. I know.
Anyhow ... I'm out of here. :(
ted i luv u
@TedShifrin tchau tchau
@TedShifrin, will it be (d)? If all variables are substituted with 1, the inequalities do not hold. And I think the "neither" in (d) should be "none".
2:04 AM
@MrAP as Ted said, if it is (d), then you should be able to find a counterexample
It is 1
If all the variables are 1 then the inequalities do not hold.
>proper fraction
A proper fraction is one of the form $a/b$ where $a < b$ and $a,b\in\mathbb Z$
Oh I forgot about that.
It's 1/2
If all the variables are 1/2, then the inequalities do not hold. I think.
Do better than just think: calculate.
In fact, if all the variables are equal proper fractions then the inequalities do not hold.
I had calculated before writing that. The first inequality becomes $3/2>6$ which is false and the second inequality becomes $3/2>3(16)^{1/3}$ which is false.
2:17 AM
well you are right that the inequalities do not hold, but your calculation is wrong. Set $a=b=c=d$ and re-write your inequalities.
Oops. I am totally out of my mind. It would be $3/2>3/2$ and $3/2>3/(16)^{1/3}$.
1/16 in the last cube root
So it's option (d).
If you say so.
What do you mean by that?
2:24 AM
I mean that if that is what you have shown, then that's what it is. Not to cast you into doubt.
You don't say so?
No, I haven't looked at the question
Like the other options
2:38 AM
Ok. Fine.
1 hour later…
3:57 AM
Hi, what is meant when you say "a group $H$ embeds into $Aut(C_{p}^2)=GL(2,p)$", where $C_{p}^2$ denotes the (non-cyclic) abelian group of order $p^2$ and $p$ is a prime? Can we think about the structure of $H$ by this, if we know the order of $H$ too?
4:40 AM
Well, you know (or can find) the order of $GL(2,p)$, so the order of $H$ would need to divide that.
4:55 AM
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$. Like can we think $H=C_q \times C_r$ or something like that from the given data? When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
5:13 AM
Please help me with this question. Thanks a lot in advance.
3 hours later…
8:39 AM
Man, how did it become so popular to group together windows from the same application that desktops stopped offering the option to not do that?
9:20 AM
When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/\phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$. And when $|F|=pr$: in this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$. In this case how can I write $G$ using notations/symbols? Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
9:33 AM
First question: Then it is $G= F \rtimes (C_p \times C_q)$. But how do we write $F$? Do we have to think of all the possibilities for $F$ of order $pr$ and write $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
9:52 AM
So why does the Schrödinger equation look the way it does
(Also anyone else notice that ö looks like a shocked face)
$\vec F_{\Bbb Q^2}=(x,y)$
$\vec F : \Bbb Q^2 \to \Bbb Q^2$
10:51 AM
I am having a hard time understanding 'unique up to associates' in the definition of unique factorization domain. I am using this definition: please help!
You want to say that $\Bbb Z$ is a UFD even though $6=(-2)(-3)=2\times 3$ has two distinct factorizations, because they are different in a way which shouldn't matter
Hi @Balarka
Hey @Alessandro @Balarka et al
Hey @ÍgjøgnumMeg
11:04 AM
Hi @ÍgjøgnumMeg! How are you?
Hey @Alessandro, I'm alright, working atm and trying to prepare for my interview lol
How about you?
When will it be?
This monday :(
@ÍgjøgnumMeg I finished my exams so now I'm free until the new semester begins in April, I'm looking forward to it
Nice :)
11:06 AM
Oh, that's soon! What are you interviewing for? You're ditching alg. geo?
It's for a full DAAD Scholarship which I've been shortlisted for
@Alessandro Teach me geometric group theory lmao
@AlessandroCodenotti Thank you
@ÍgjøgnumMeg Yeah, I wanted to go to the lectures without doing the exam at the end, but it overlaps with a logic course
now that ur free i mean
11:07 AM
Ah, that's great, good luck for the scholarship!
Good luck from me as well
I think you already know more GGT than me :P
I don't know anything precisely
I'm free as in having no lectures and no exams, I'm still preparing a seminar talk for the next semester
Oh what's it on
11:09 AM
Kunen's inconsistency theorem and the large cardinals I1, I2, I3 (they ran out of cool names I guess)
on the verge of inconsistency
runs away
Have you looked at the proof that the word problem is solvable for hyperbolic groups? Sounds up your alley
Kunen's inconsistency basically says that there is no nontrivial elementary embedding of the universe $V$ into itself
Yep, you have to show that they admit Dehn presentations
What's a Dehn presentation
11:16 AM
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check if it has some $u_i$ as a subword, in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
Let me see if I can interpret the Dehn presentation in terms of the van Kampen diagram or something. It's some kind of cancellation condition; if my word in $F(S)$ is trivial in $G$ then there is some large subword $u_i$...
Well, if $w$ contains $u_i$ as a subword then in $G$ the length of $w$ gets reduced
Because I just replace $u_i$ by $v_i$ in $w$, as $u_i = v_i$ in $G$.
Oh it's some shortening procedure for geodesics
Yeah the proof that hyperbolic groups have Dehn presentations is basically that you can take shortcuts with local geodesics but the details are quite ugly, they're in the paper I linked above (also this is something I know nothing about but hyperbolic groups are automatic groups as well which is a much bigger class of groups with solvable word problem)
I'm leaving for lunch now, bye!
Cya! I'll think about this a bit
@AlessandroCodenotti This is equivalent to finding a geodesic representative for the word, right?
The "final word" after all these reductions would be a length minimizer 11:35 AM @Alessandro thank you :) I am just relearning a lot of the stuff from my dissertation so I can waffle about it So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$ It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$. Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative I don't know how to interpret this coarsely in $\pi_1(S)$ Say we take the pairs (0,100), (1,99), etc, (50,50) Pairs of integers adding to 100 Let's do the sieve of Eratosthenes We're left with (1,99), (3,97), (5,95), etc, (49,51) Now 3 Notice that (1,99) dies ('cause of the 99). (3,97) doesn't die just because we don't sieve out the prime itself, but (7,93) dies and (9,91) dies Does that make sense so far? So we started with 51 pairs, then went to 24 and now we have (3,97), (5,95), (11,89), (17,83), etc, (47,53) so that's 9 pairs I realize 1 never gets sieved out by Eratosthenes, so if instead of 100 we had some number that was 1 more than a prime, it would never get sieved out @BalarkaSen Am I making sense? I'm doing Goldbach here So we have a lower bound of 50(1/2)((3-2)/3) pairs surviving I think (which works 'cause that's 8⅓ and we have 9 pairs) Then what, we sieve out the 5s (5,95) and (35,65) die and I think everything else stays We're left with (3,97), (11,89), (17,83), (23,77), (29,71), (41,59), and (47,53) That's 7 pairs and we can get a lower bound of 50(1/2)((3-2)/3)((5-2)/5)=5 which still works @BalarkaSen So I have someone on Reddit who is really bad at English (and communicating in general) who is convinced he has a proof of the Goldbach conjecture and his argument is basically, do the sieve of Eratosthenes on the pairs $(k,2n-k)$, give a lower bound for the surviving pairs at each step, and show that the lower bound is never 0 like I started doing above So I haven't really thought it 100% through yet but right now my task is to figure out why this doesn't work 2 hours later… 1:39 PM hi everyone, a quick question: is there a name for the "inverse" of the commutant? i.e. the problem of having a sub algebra and trying to find if it can be seen as the commutant of another algebra? 
@BalarkaSen OK so I see the main problem: once we've sieved out all the multiples of 2,3,…,$p_{k-1}$ in the range $[0,n]$, we assume that roughly $1/p_k$ of the remainder are multiples of $p_k$ and, through numerical experiments, that seems to be true
but the problem is, the numbers in the range $[0,n]$ that aren't multiples of 2,3,…,$p_{k-1}$ aren't necessarily evenly distributed
so it's theoretically possible that a much higher than expected proportion of them is a multiple of $p_k$
2:12 PM
I don't understand in the above proof why $p$ must be an associate of one of the irreducibles occurring either in the factorization of $a$ or in the factorization of $b$.
Hi! I was looking for books on analysis and set theory. I saw this link but I'm not able to figure out which book this chapter is a part of. Does anyone know?
@ParasKhosla, sorry to derail the discussion. Which book are you using for complex analysis?
@Silent I'm not using any book. I took a course from Coursera offered by Wesleyan University. I'm currently looking to research what book to use. I found this chapter of a book which seems to be good but I cannot seem to figure out which book it is
@Silent I have yet to find a really good book on complex analysis
@anakhro Is there no link for a hardcopy? maybe on Amazon or elsewhere
2:25 PM
But Visual Complex Analysis is probably one of the better.
@ParasKhosla afaik these are strictly online notes.
Oh. There's a different feeling while studying from an actual book. I guess I'll have to look for other resources. Although thanks a lot
You could just print it out. Probably cheaper than buying a book
@ParasKhosla OK! Thank you very much for suggesting this! seems that it covers many topics
Yeah that's still an option
What? @Silent the course?
@anakhro in india, it is way cheaper to buy an indian edition of a book, better, if you buy a used copy! :)
2:29 PM
@Silent I am sure there are budget printers who allow you to print even cheaper. They have to make money off the books somehow.
@ParasKhosla yeah. it covers conformal mapping, residue theorem etc! the core and cool topics
Yeah it's really great. Anyways are you a math student in India? @Silent
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Oh cool @Silent
@ParasKhosla, so that book does not seem to be either on set theory or analysis! Which subject does it cover?
2:36 PM
Basic algebra
Yeah perhaps abstract algebra. rings and groups in further chapters
@anakhro oh!
@Silent did you do an undergrad in math?
@anakhro, it's my first time seeing graphs in an algebra text. I knew that graphs are closely related to (equivalent to?) topology. How does algebra relate to graphs?
@anakhro well, not formally. (I regret that.)
Graphs are largely a combinatorial object. Here they seem to be just adding an additional topic of graphs at the end, despite it not really pertaining that closely to the rest of the material. That being said, algebra sees many applications in graph theory.
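For reference, the pair sieve from the Goldbach discussion above is easy to script. This is a sketch of the procedure as described in the chat (strike a pair when either member is divisible by a prime p ≤ √n, without striking p itself); n = 100 reproduces the hand counts:

    def surviving_pairs(n: int):
        assert n % 2 == 0
        primes = [p for p in range(2, int(n ** 0.5) + 1)
                  if all(p % q for q in range(2, p))]
        pairs = [(k, n - k) for k in range(n // 2 + 1)]
        for p in primes:
            pairs = [(a, b) for a, b in pairs
                     if all(x == p or x % p != 0 for x in (a, b))]
            print(f"after sieving {p}: {len(pairs)} pairs")
        return pairs

    print(surviving_pairs(100))
    # after sieving 2: 25 pairs; after 3: 9; after 5: 7; after 7: 6.
    # Every surviving member is 1 or prime, since any composite <= n
    # has a prime factor <= sqrt(n).

This also makes the objection above easy to test numerically: compare the actual count after each prime with the heuristic lower bound built from the (1 - 2/p) factors.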
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. == Branches of algebraic graph theory == === Using linear algebra === The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course. For example, orbits and studying the automorphism groups of graphs.
2:43 PM
@anakhro I have heard really good things about Palka. Also, if you do not worry about a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, this book contains all the solutions in an appendix!
Can a triangle on a surface
@anakhro Thank you very much for all this information!
@anakhro How did you guess that? :)
automorphism groups are basically where group theory developed from. Automorphisms, and then symmetries. Graph theory makes heavy use of both of these concepts.
@ParasKhosla, how can i increase speed in coursera video? can i see those vids in youtube, as we can do in edx?
Can you tell the geometry of a surface based on how a triangle is curved on it
2:50 PM
Got a simple question: I gotta find the kernel of the linear transformation $F(P)=xP''(x) + (x+1)P'''(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give the zero polynomial in this case. am i right?
3:04 PM
hi chat
@Ultradark you will have to define "geometry".
Hi @vzn
hi all
really jazzed about this new discovery, anyone into dynamical systems theory? has a lot of neat new math, looks breakthru or even revolutionary :)
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) \implies G = C(1+x)e^{-x}$, which is not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials?
Sure. but... yeah, idk why i did this.
3:15 PM
@chandx the kernel is all degree 1 polynomials, si. But you can just calculate it easily. Let $P(x) = a + bx + cx^2 + dx^3$, find $P''$ and $P'''$, and then find $F(P)$. Then set $F(P) = 0$ and solve for $a,b,c,d$.
@Silent Yes you can speed up the video
look at the panel on the bottom, right side maybe has the tool to modify speed
Thank you so much!
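The kernel computation at the end of the chat is easy to verify with a computer algebra system. A sketch using sympy (the variable names are arbitrary):

    # For F(P) = x*P'' + (x+1)*P''' on polynomials of degree <= 3,
    # solve F(P) = 0 as a polynomial identity: every coefficient must vanish.
    from sympy import symbols, diff, Poly, solve

    x, a, b, c, d = symbols("x a b c d")
    P = a + b*x + c*x**2 + d*x**3
    F = x*diff(P, x, 2) + (x + 1)*diff(P, x, 3)

    sol = solve(Poly(F, x).all_coeffs(), [a, b, c, d], dict=True)
    print(sol)  # [{c: 0, d: 0}] -> kernel is {a + b*x}, i.e. degree <= 1

This matches the two arguments given in the chat: the ODE argument and the direct coefficient computation.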
Quantum theory of observation/Fundamental concepts and principles

The principles of quantum physics

Four principles (Dirac 1930, von Neumann 1932, Cohen-Tannoudji, Diu & Laloë 1973, Weinberg 2012) are enough:

• The space of states of a quantum system is a complex Hilbert space, that is, a complex vector space (cf. 1.1) equipped with a scalar product and complete for the norm defined by this product.
• The evolution between two instants of an isolated system is determined by a unitary operator.
• The state space of a composite system is the tensor product of the spaces of its components.
• A last principle, the Born Rule, says how to calculate the probabilities of measurement results from the state vector of the observed system. It will be explained below (cf. 2.4). It gives a physical meaning to the scalar product in the Hilbert space (cf. 2.6).

The postulate of evolution has been formulated in its integral form. In its differential form, it is the Schrödinger equation, $i\hbar\,\frac{d}{dt}|\psi(t)\rangle = H|\psi(t)\rangle$. It will not be used in this book because the integral form is better suited to the theory of observation.

The third principle can be considered as a consequence of the first. It is not a strictly logical consequence, but as soon as we accept the first principle, and as we conceive that a system may be composed of parts, which may be in different states, we are obliged to accept the third principle.

The first principle was stated in its current and slightly incorrect form. More precisely, it states that a physical state must be identified with a ray - a subspace of collinear vectors - of the Hilbert space, or, if probabilities are calculated, with the set of unit vectors of such a ray. The null vector, which is of null length, is therefore not a state vector. In practice, the difference between vector and ray does not pose difficulties in identifying quantum states.

In the first principle, the clause of completeness is necessary to reason on state spaces of infinite dimension. In general, a clause of countability is added for the basis states. These clauses are not necessary when we reason, as in this book, on complex vector spaces of finite dimension, because they are always complete.

It is often added to these principles that physical quantities must be represented by observables, that is to say, Hermitian operators on the state space of the observed system (cf. 5.2). This addition is not necessary (Zeh, in Joos, Zeh &... 2003).

Another principle, the postulate of state vector (or wave function) collapse, is often considered a quantum principle. It contradicts the principle of unitary evolution and Schrödinger's equation. It cannot therefore be part of quantum physics, otherwise the theory would be contradictory. This postulate is, however, often considered necessary to give physical meaning to quantum mathematics, but Everett (1957) showed that it is not (cf. 4.4 and 4.5).
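For concreteness, the first three principles can be illustrated numerically with qubits. This is only a sketch; the particular state and unitary below are arbitrary choices, not taken from the book:

    import numpy as np

    psi = np.array([1, 1j]) / np.sqrt(2)          # a unit vector in C^2
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # a unitary operator (Hadamard)

    print(np.allclose(H.conj().T @ H, np.eye(2)))   # U†U = 1
    print(np.allclose(np.linalg.norm(H @ psi), 1))  # unitary evolution preserves norm

    phi = np.array([1, 0])                # a second system
    composite = np.kron(psi, phi)         # tensor product: state of the composite
    print(composite.shape)                # (4,) : C^2 ⊗ C^2 = C^4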
Ideal measurements

A measurement is determined by a basis of states $|P_i\rangle$ of the measuring device: the $|P_i\rangle$ are the pointer states. $i$ indexes the possible results of the measurement. When a measurement is ideal (von Neumann 1932), there exists an orthonormal basis of states $|s_{ij}\rangle$ of the observed system such that the interaction between the detected system and the detector is described by:

$U(|s_{ij}\rangle \otimes |P_0\rangle) = |s_{ij}\rangle \otimes |P_i\rangle$ for all $i$ and $j$,

where $U$ is the evolution operator between the initial time, before the measurement, and the final time, when the measurement is complete, $|P_0\rangle$ is the initial state of the detector, and the $|s_{ij}\rangle$ are the eigenstates associated with the result $i$. $j$ indexes the eigenstates associated with the same result. The eigenstates of a measurement result are those for which observation certainly leads to this result. When a measurement is ideal, and only in this case, if the observed system is in an eigenstate then it remains in the same state; it is not disturbed by the measurement process. When a single eigenstate is associated with a measurement result, it can be said that it is detected, or pointed to, by the measurement. $|s_{ij}, P_i\rangle$ is an abbreviated form of $|s_{ij}\rangle \otimes |P_i\rangle$, where $\otimes$ represents the tensor product of two vectors (cf. 1.7).

The existence theorem of multiple destinies

According to the principle of unitary evolution, if the initial state of the observed system is $\sum_{ij} c_{ij}|s_{ij}\rangle$, the final state after the measurement must be:

$U\Big(\sum_{ij} c_{ij}|s_{ij}\rangle \otimes |P_0\rangle\Big) = \sum_{ij} c_{ij}\,|s_{ij}\rangle \otimes |P_i\rangle$

The $|P_i\rangle$ correspond to different measurement results. The final state is therefore a superposition of measurement results. The fundamental theorem of quantum measurement is thus obtained directly from the principles of quantum physics: if the observed system is initially in a superposition of measurement eigenstates, the complete system (observed system plus measuring device) is in a superposition of measurement results.

This theorem is very surprising. At the end of a measurement, a single result is observed. A superposition of measurement results is not a measurement result. How can we understand that quantum physics predicts the existence of such a superposition? Hugh Everett III (1957) proposed the following answer: $\sum_{ij} c_{ij}|s_{ij}\rangle \otimes |P_i\rangle$ describes a superposition of the observer's destinies. An observer obtains a single measurement result because he knows of himself only one of his destinies. The other measurement results are also obtained, but in the observer's other destinies. Quantum physics describes a universe in which beings have very numerous destinies. The world as known by an observer is only a tiny part of quantum reality, a destiny among myriads of others. The fundamental theorem of quantum measurement can thus also be called the existence theorem of multiple destinies. It is a direct consequence of the principles of quantum physics. It has been stated in the particular case of ideal measurements but remains valid for all forms of quantum measurement. To deny it, one must deny that quantum physics correctly describes reality. It is possible that a new physics surpasses quantum physics and proves that these other destinies do not exist, but until now quantum physics has always proved its validity. No experimental results have ever disproved it.

The existence theorem of multiple destinies is empirically verifiable, but in a severely limited way. We will show later, in the chapter on quantum entanglement, that an observer cannot observe his other destinies, but that a second observer can in principle observe them. It is in principle possible to observe that, after an observation, an observer system has several destinies, with experiments of the "Schrödinger's cat" type. But this conclusion is limited to reversible observation processes, because observation destroys information. Since the processes of life are irreversible, the simultaneous existence of the multiple destinies of a living being cannot be observed.
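In the smallest possible case — one qubit observed by a one-qubit detector — the CNOT gate plays the role of $U$: it realizes $U(|i\rangle\otimes|P_0\rangle) = |i\rangle\otimes|P_i\rangle$, and by linearity it sends a superposition of eigenstates to a superposition of measurement results. A numerical sketch (the qubit example is a standard toy case, not from the book):

    import numpy as np

    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])  # control: observed qubit, target: detector

    ket0, ket1 = np.array([1, 0]), np.array([0, 1])
    P0 = ket0                                    # detector ready state

    for s in (ket0, ket1):                       # eigenstates are undisturbed
        print(CNOT @ np.kron(s, P0))             # |0>|P_0> and |1>|P_1>

    plus = (ket0 + ket1) / np.sqrt(2)            # superposition of eigenstates
    print(CNOT @ np.kron(plus, P0))              # (|0>|P_0> + |1>|P_1>)/sqrt(2)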
The destruction of information by observation

The observation of a quantum superposition of states necessarily destroys the information carried by each of the superposed states. Let $|P_i\rangle$ be a basis of pointer states, $|P_0\rangle$ the initial state of the measuring device, $|s_i\rangle$ a basis of states of the observed system, and $U$ the evolution operator for the measurement. For the superposition $\sum_i c_i|s_i\rangle$ to be observed, there should be a pointer state $|P_k\rangle$ such that

$U\Big(\sum_i c_i|s_i\rangle \otimes |P_0\rangle\Big) = |s'\rangle \otimes |P_k\rangle$,

where $|s'\rangle$ is a state of the observed system. If the observation does not destroy the information carried by each of the $|s_i\rangle$, it must be possible to know the state of the observed system before the measurement from its state after the measurement, so we must have states $|s'_i\rangle$, all orthogonal to each other, such that $U(|s_i\rangle \otimes |P_0\rangle) = |s'_i\rangle \otimes |Q_i\rangle$ for each $i$, where the $|Q_i\rangle$ are states of the measuring device. These two conditions imply that all the $|Q_i\rangle$ must be equal to $|P_k\rangle$. This means that the measuring apparatus measures nothing, since it only jumps from $|P_0\rangle$ to $|P_k\rangle$ for any state of the observed system. Hence a measuring apparatus which observes a superposition of states necessarily destroys the information carried by each of these states. This simple theorem can be extended to a more realistic model of a measuring apparatus where the initial and pointer states are in statistical ensembles.

This theorem shows under what conditions the existence theorem of multiple destinies is empirically verifiable. If the measurement is irreversible, if the gathered information cannot be erased, then the quantum superposition cannot be observed. But this does not forbid verifying the existence theorem of multiple destinies with reversible measurements. We can and do verify it on microscopic measuring systems: atoms, trapped photons or ions ... A quantum computer is a way of verifying the existence of the multiple destinies of the qubits that constitute it.

The Born Rule

The complex numbers $c_{ij}$ in the superposition are conceived as probability amplitudes. The probability of observing the result $i$ is $\sum_j |c_{ij}|^2$. It can be admitted as a principle: if the initial state of the observed system is $\sum_{ij} c_{ij}|s_{ij}\rangle$, where the $|s_{ij}\rangle$ are an orthonormal basis of measurement eigenstates, then the probability of the result $i$ is $p_i = \sum_j |c_{ij}|^2$. To apply this rule, the state must be normalized: $\sum_{ij} |c_{ij}|^2 = 1$.

One can try to prove this fourth principle from the first three (Everett 1957, Zurek 2003 ...). These proofs are controversial and will not be discussed here. The Born rule was stated only for ideal measurements. It will be shown that it can be generalized to all forms of measurement (cf. 5.1 and 5.3).
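Numerically, the Born rule is a one-liner. A sketch (the amplitude matrix below is an arbitrary normalized example with two results $i$ and two eigenstates $j$ per result):

    import numpy as np

    rng = np.random.default_rng(0)
    c = np.array([[3, 4], [12, 0]]) / 13.0   # c[i, j]; sum of |c_ij|^2 is 1
    p = (abs(c) ** 2).sum(axis=1)            # p_i = sum_j |c_ij|^2
    p = p / p.sum()                          # guard against float rounding
    print(p)                                 # [25/169, 144/169]

    samples = rng.choice(len(p), size=100_000, p=p)
    print(np.bincount(samples) / len(samples))  # empirical frequencies ≈ p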
Can we observe quantum states?

In order to observe, the state of the observing system (the detector, the measuring instrument) after the measurement must provide information on the state of the observed system before the measurement. An observation is perfect if one can deduce exactly the state of the detected system from the result of the measurement. Quantum measurements are never perfect. If the initial state of the detected system is not known in advance, the final state of the detector is never sufficient to know the state of the detected system, because many different states of the detected system can lead to the same result. It is sufficient that they have a non-zero probability of producing this result. The only information given by the observation is that the state of the observed system did not have a zero probability of producing this result. If the result $i$ has been obtained, all we know about the initial state $|\psi\rangle$ of the observed system is that $p_i = \sum_j |\langle s_{ij}|\psi\rangle|^2 \neq 0$.

How then can we know the state vector of a quantum system? One cannot detect the quantum state of an initially unknown system, produced spontaneously by Nature. On the other hand, material systems can be prepared in such a way that they are found in a single quantum state. If there is a measurement result whose eigenstate is unique, then it can be verified that this quantum state actually exists. Simply repeat the preparation a lot of times and verify that you always get the same measurement result. Observation alone is not enough to know the quantum states. It is necessary to act on matter to prepare it in the states that one wishes to observe. An ideal measurement is one way to prepare a state, when the measurement results each have a single eigenstate. If the result of the measurement is $i$, then it is known with certainty that the observed system is in the state $|s_i\rangle$ just after the measurement. This can be verified by repeating the measurement on the system just prepared.

In order to escape the existence theorem of multiple destinies, many physicists assert that state vectors are only theoretical tools for calculating probabilities, and that they do not really represent reality (Peres 1995). But when the observed system has been properly prepared, one can know its state vector with certainty; one can check it without any doubt being allowed. Physicists now know how to prepare, manipulate and observe the state vectors they imagine (for example, Haroche & Raimond 2006). Is it not sufficient to assert that the state vector really represents the physical state of the observed system?

Orthogonality and incomplete discernibility of quantum states

It is sometimes said, improperly, that when a material being is in a superposition of states like $\frac{1}{\sqrt 2}(|1\rangle + |2\rangle)$, it is at the same time in the states $|1\rangle$ and $|2\rangle$. For example, when commenting on Young's experiment, it is said that the photon passes through both slits at the same time. That sounds absurd. If the photon is in a slit, it cannot be in the other. To say that it is in both simultaneously is therefore a contradiction. It is said that it passes through the two slits only to say that if one were to detect its passage, one would find it in one slit or the other. But it will never be found in both at the same time. If a being is in the state $|1\rangle$ it is not in the state $|2\rangle$ and vice versa. When it is in the state $\frac{1}{\sqrt 2}(|1\rangle + |2\rangle)$, it is not in the state $|1\rangle$ or in the state $|2\rangle$, but in a third state, different from the previous two (Griffiths 2004). If a being is prepared in the state $\frac{1}{\sqrt 2}(|1\rangle + |2\rangle)$ it cannot be detected immediately after preparation in the state $\frac{1}{\sqrt 2}(|1\rangle - |2\rangle)$, because these two states are orthogonal, i.e. their scalar product is zero. On the other hand, if a being is prepared in the state $\frac{1}{\sqrt 2}(|1\rangle + |2\rangle)$, there is a probability one half of detecting it in the state $|1\rangle$ and the same probability to detect it in the state $|2\rangle$, because $\big|\langle 1|\,\tfrac{1}{\sqrt 2}(|1\rangle + |2\rangle)\big|^2 = \tfrac{1}{2}$.

When two states are orthogonal, they are completely discernible, because they can both be eigenstates of the same measuring instrument. When they are not orthogonal, their difference is partially blurred, especially if their scalar product is close to 1. They are not completely discernible, in the sense that it is not possible to make a measurement which would enable one to distinguish them, because they cannot be eigenstates of the same measuring instrument associated with different values. It is sometimes said that $|\langle\phi|\psi\rangle|^2$ is the probability that a being in the state $|\psi\rangle$ is in the state $|\phi\rangle$.
This sounds like an absurdity, since if $|\psi\rangle$ and $|\phi\rangle$ are different, a being cannot be in both states at the same time. But if we hear it charitably, we understand that it only means that a being prepared in the state $|\psi\rangle$ has a probability $|\langle\phi|\psi\rangle|^2$ to be detected in the state $|\phi\rangle$. This is why we are tempted to say that if $|\phi\rangle$ is not orthogonal to $|\psi\rangle$, a being in the state $|\psi\rangle$ is partially in the state $|\phi\rangle$.

The incompatibility of quantum measurements

If $|z^+\rangle$ and $|z^-\rangle$ are eigenstates of a measurement, they can be said to be observed or pointed states. On the other hand, superpositions of $|z^+\rangle$ and $|z^-\rangle$ are not states which can be observed by such a measurement. Let $|x^+\rangle = \frac{1}{\sqrt 2}(|z^+\rangle + |z^-\rangle)$ and $|x^-\rangle = \frac{1}{\sqrt 2}(|z^+\rangle - |z^-\rangle)$ be two new basis vectors (the names $x$ and $z$ come from the spin 1/2 theory). $|x^+\rangle$ and $|x^-\rangle$ can also be eigenstates of a measurement, and hence the states observed by this measurement. The measurements of $\{|z^+\rangle, |z^-\rangle\}$ on the one hand, and of $\{|x^+\rangle, |x^-\rangle\}$ on the other hand, are incompatible. There is no state of the observed system which gives a certain result for both, because no quantum state is an eigenstate of both measurements. If one measurement is made immediately after the other, random results will always be found. It is not possible to prepare the observed system in a state which provides a certain result for the two successive measurements. If the result of the first measurement is certain, the result of the second cannot be.

When two measurements have a common basis of eigenstates, they are said to be compatible. If they are ideal measurements, they do not disturb each other. One can be done just before the other without affecting its result. The existence of incompatible measurements is an immediate consequence of the principle of quantum superposition. It is a typically quantum effect which has no equivalent in classical physics.

What can be said about the reality of $|z^+\rangle$ and $|z^-\rangle$ when the observed system is in the state $|x^+\rangle$? The system is in a superposition of $|z^+\rangle$ and $|z^-\rangle$; neither is real. It is only their superposition which is real.

Heisenberg showed that the position and momentum measurements of a quantum particle are incompatible (Heisenberg 1930). They can only disrupt each other. It is therefore impossible to attribute simultaneously exactly defined position and momentum to the same particle. This incompatibility is mathematically translated by the relation $\Delta x \, \Delta p \geq \hbar/2$, where $\Delta x$ is the indeterminacy of the position measurement, $\Delta p$ the indeterminacy of the momentum measurement, and $\hbar$ the Planck constant divided by $2\pi$. If this relation is not respected, no quantum state can be an eigenstate of both measurements. It must be called Heisenberg's relation of indeterminacy rather than a relation of uncertainty, because the latter expression suggests that we do not know simultaneously the position and the momentum of a particle but that they could be known. For there to be uncertainty, there must be something to know that we do not know. But the incompatibility of quantum measurements does not say that there is more to know than what we observe. On the contrary, it says that there is no real state for which position and momentum are exactly defined. Such states cannot be observed because they do not exist. Heisenberg's relation does not come from our ignorance, or from our uncertainty, but from the indeterminacy of reality. The quantum states cannot be simultaneously eigenstates of all possible measurements, because of their incompatibility.
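The spin-1/2 case above can be checked directly. A sketch, using the standard Pauli matrices $\sigma_z$ and $\sigma_x$ as the two incompatible observables:

    import numpy as np

    z_plus, z_minus = np.array([1, 0]), np.array([0, 1])
    x_plus = (z_plus + z_minus) / np.sqrt(2)
    x_minus = (z_plus - z_minus) / np.sqrt(2)

    # Prepared in |z+>, an x measurement is maximally random:
    print(abs(np.vdot(x_plus, z_plus)) ** 2,
          abs(np.vdot(x_minus, z_plus)) ** 2)   # 0.5 0.5

    # No common eigenbasis exists: sigma_z and sigma_x do not commute.
    sz = np.diag([1, -1])
    sx = np.array([[0, 1], [1, 0]])
    print(sz @ sx - sx @ sz)                    # nonzero commutator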
Uncertainty and density operators

A density operator is used to describe situations where the quantum state of a system is not known accurately. It is defined from a set of states $|\psi_i\rangle$, supposed to be normalized, but not necessarily orthogonal, each assigned a probability $p_i$. By definition,

$\rho = \sum_i p_i\, |\psi_i\rangle\langle\psi_i|$,

where $|\psi_i\rangle\langle\psi_i|$ is the orthogonal projection on the state $|\psi_i\rangle$. If the $|\psi_i\rangle$ are orthogonal, then $p_i$ is the weight of $|\psi_i\rangle$ in $\rho$. With a slight abuse of language, we can say that $p_i$ is the density of presence in the state $|\psi_i\rangle$. It is said of $\rho$ that it describes a mixture of states, or a mixed state. The same density operator can be defined from different mixtures, but we shall show later (4.3) that such mixtures cannot be distinguished by observation. A density operator contains all available information on the state of the observed system.

If all the $p_i$ are equal to zero except one, the state vector is known with exactness. The system is then said to be in a pure state. In this case $\rho = |\psi\rangle\langle\psi|$. The formalism of density operators can therefore be applied to both mixed states and pure states.

The trace of a density operator is always equal to one: $\mathrm{Tr}(\rho) = \sum_i p_i = 1$.

If a system is prepared in the mixed state $\rho$, the probability that it is detected in a pure state $|\phi\rangle$ is $\langle\phi|\rho|\phi\rangle$. As a density operator determines the probabilities of detection of all quantum states, it determines the probabilities of all the results of all possible measurements. In this sense, it completely determines the physical state of the system.
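A numerical sketch of these definitions, including the fact mentioned above that different mixtures can define the same density operator (the qubit mixtures below are a standard toy example):

    import numpy as np

    def rho(states, probs):
        return sum(p * np.outer(s, s.conj()) for s, p in zip(states, probs))

    z_plus, z_minus = np.array([1, 0]), np.array([0, 1])
    x_plus = (z_plus + z_minus) / np.sqrt(2)
    x_minus = (z_plus - z_minus) / np.sqrt(2)

    rho_z = rho([z_plus, z_minus], [0.5, 0.5])   # 50/50 mixture of z states
    rho_x = rho([x_plus, x_minus], [0.5, 0.5])   # 50/50 mixture of x states
    print(np.allclose(rho_z, rho_x))             # True: indistinguishable mixtures

    print(np.trace(rho_z))                       # trace is always 1
    phi = x_plus
    print(phi.conj() @ rho_z @ phi)              # detection probability <phi|rho|phi> = 0.5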
DOI: 10.14704/nq.2015.13.3.797
Some Solutions of Extensive Quantum Equations in Biology, Formation of DNA and Neurobiological Entanglement
Yi-Fang Chang
Based on the extensive quantum theory of biology and NeuroQuantology, the Schrödinger equation with a linear potential may become the Bessel equation. Its solutions are Bessel functions, and may form the double helical structure of DNA in three-dimensional space. From this model we may predict the discrete bound energy spectrum of DNA. Moreover, we discuss some solutions of quantum mechanics and their meaning. Further, we study the entangled state in neurobiology using the extensive quantum method and nonlinear theory. New experiments have shown that the quantum entangled state should be a new fifth interaction; neuroscience will possibly play a very important role in its verification.
biology; quantum mechanics; DNA; Schrödinger equation; Bessel function; entangled state; neuroscience
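The abstract's link between a linear potential and Bessel functions can be made concrete with the textbook identity (not necessarily the paper's own construction): in suitable units the stationary Schrödinger equation with a linear potential reads $\psi'' = x\psi$, whose decaying solution is the Airy function, expressible through the modified Bessel function of order 1/3. A numerical check:

    # Verify Ai(x) = (1/pi) * sqrt(x/3) * K_{1/3}((2/3) x^{3/2}) for x > 0.
    import numpy as np
    from scipy.special import airy, kv

    x = np.linspace(0.5, 5, 10)
    ai = airy(x)[0]                        # airy() returns (Ai, Ai', Bi, Bi')
    bessel_form = np.sqrt(x / 3) / np.pi * kv(1/3, 2 * x**1.5 / 3)
    print(np.allclose(ai, bessel_form))    # True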
Search results for “Meaning of quantum cryptography ppt”

What is QUANTUM CRYPTOGRAPHY? (The Audiopedia)
Source: Wikipedia.org article, adapted under the https://creativecommons.org/licenses/by-sa/3.0/ license.
Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best-known example is quantum key distribution, which offers an information-theoretically secure solution to the key-exchange problem. Currently popular public-key encryption and signature schemes (e.g., RSA and ElGamal) can be broken by quantum adversaries. The advantage of quantum cryptography is that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical (i.e. non-quantum) communication. For example, it is impossible to copy data encoded in a quantum state, and the very act of reading data encoded in a quantum state changes it; this is used to detect eavesdropping in quantum key distribution.
History: quantum cryptography rests on Heisenberg's uncertainty principle, formulated in 1927, and on the no-cloning theorem, first articulated by Wootters and Zurek, and by Dieks, in 1982. Werner Heisenberg discovered one of the fundamental principles of quantum mechanics: "At the instant at which the position of the electron is known, its momentum therefore can be known only up to magnitudes which correspond to that discontinuous change; thus, the more precisely the position is determined, the less precisely the momentum is known, and conversely" (Heisenberg, 1927: 174–5). This simply means that observing quanta changes their behavior: measuring a quantum's velocity affects it and thereby changes its position, and finding its position forces a change in its velocity. We therefore cannot measure a quantum system's characteristics without changing it (Clark, n.d.), and we cannot record all characteristics of a quantum system before those characteristics are measured. The no-cloning theorem demonstrates that it is impossible to create a copy of an arbitrary unknown quantum state. This makes unobserved eavesdropping impossible, because it will be quickly detected, greatly improving the assurance that the communicated data remains private.
Quantum cryptography was first proposed by Stephen Wiesner, then at Columbia University in New York, who in the early 1970s introduced the concept of quantum conjugate coding. His seminal paper, "Conjugate Coding", was rejected by the IEEE Information Theory Society but was eventually published in 1983 in SIGACT News (15:1, pp. 78–88). In it he showed how to store or transmit two messages by encoding them in two "conjugate observables", such as linear and circular polarization of light, so that either, but not both, may be received and decoded; he illustrated the idea with a design for unforgeable bank notes. In 1984, building on this work, Charles H. Bennett of IBM's Thomas J. Watson Research Center and Gilles Brassard of the Université de Montréal proposed a method for secure communication based on Wiesner's conjugate observables, now called BB84. In 1991 Artur Ekert developed a different approach to quantum key distribution, based on the peculiar quantum correlations known as quantum entanglement.
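To see why measurement disturbance makes eavesdropping detectable, here is a minimal classical simulation of the BB84 idea in Python (a sketch of the statistics, not a faithful physical model): Alice encodes random bits in random bases, Bob measures in random bases, and they keep only the positions where their bases match. An eavesdropper who measures and resends in a random basis unavoidably introduces errors into that sifted key, which the simulation counts.

```python
import random

def measure(bit, prep_basis, meas_basis):
    # Same basis: the bit is read faithfully; different basis: result is random
    return bit if prep_basis == meas_basis else random.randint(0, 1)

n = 2000
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]
eve_bases   = [random.choice("+x") for _ in range(n)]
bob_bases   = [random.choice("+x") for _ in range(n)]

errors = sifted = 0
for bit, a, e, b in zip(alice_bits, alice_bases, eve_bases, bob_bases):
    intercepted = measure(bit, a, e)      # Eve measures, then resends in her basis
    bob_bit = measure(intercepted, e, b)  # Bob measures Eve's resent photon
    if a == b:                            # sifting: keep only matching bases
        sifted += 1
        errors += (bob_bit != bit)

# Without Eve the sifted key is error-free; with Eve, ~25% of it is corrupted
print(f"error rate in sifted key: {errors / sifted:.2%}")
```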
Random rotations of the polarization by both parties (usually called Alice and Bob) have been proposed in Kak's three-stage quantum cryptography protocol. In principle, this method can be used for continuous, unbreakable encryption of data if single photons are used, and the basic polarization-rotation scheme has been implemented. The BB84 method is the basis of most quantum key distribution methods. Companies that manufacture quantum cryptography systems include MagiQ Technologies, Inc. (Boston, Massachusetts, United States), ID Quantique (Geneva, Switzerland), QuintessenceLabs (Canberra, Australia) and SeQureNet (Paris, France).

Quantum Cryptography in 6 Minutes (Up and Atom)
Quantum cryptography explained simply, through its simplest case: quantum key distribution, which uses the Heisenberg uncertainty principle to prevent eavesdroppers from cracking the code.

Quantum Cryptography Explained (Physics Girl)
With recent high-profile security decryption cases, encryption is more important than ever. Much of your browser usage and your smartphone data is encrypted. But what does that process actually entail? And when computers get smarter and faster due to advances in quantum physics, how will encryption keep up?

What is Quantum cryptography? (Keep Going)
A short overview of quantum cryptography, intended as source material for presentations.

Adversarial neural cryptography research - simple explanation (johnnyg88)
A data-mining class presentation on adversarial neural cryptography. The research paper is available at https://www.openreview.net/pdf?id=S1HEBe_Jl
quantum cryptography (Paul Francis)
Introduction to the Quantum Cryptography lab.

What is POST-QUANTUM CRYPTOGRAPHY? (The Audiopedia)
Source: Wikipedia.org article, adapted under the https://creativecommons.org/licenses/by-sa/3.0/ license.
Post-quantum cryptography refers to cryptographic algorithms (usually public-key algorithms) that are thought to be secure against attack by a quantum computer. This is not true of the most popular public-key algorithms: their security relies on one of three hard mathematical problems (the integer-factorization problem, the discrete-logarithm problem, or the elliptic-curve discrete-logarithm problem), and all of these can be solved efficiently on a sufficiently powerful quantum computer running Shor's algorithm. Even though current, publicly known, experimental quantum computers are too small to attack any real cryptographic algorithm, many cryptographers are designing new algorithms to prepare for a time when quantum computing becomes a threat. This work has gained attention through the PQCrypto conference series since 2006 and, more recently, through several workshops on Quantum Safe Cryptography hosted by the European Telecommunications Standards Institute (ETSI) and the Institute for Quantum Computing. In contrast to the threat quantum computing poses to current public-key algorithms, most current symmetric cryptographic algorithms and hash functions are considered relatively secure against quantum attack: while Grover's algorithm does speed up attacks against symmetric ciphers, doubling the key size effectively blocks them, so post-quantum symmetric cryptography does not need to differ significantly from current symmetric cryptography.
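The Grover claim (a quadratic speedup, countered by doubling the key size) can be made concrete with a small state-vector toy in NumPy, not a circuit-level implementation: it finds one marked item among N = 2^n in about (π/4)√N oracle calls, which is why a quantum attacker effectively halves a symmetric key's bit strength and why doubling the key size restores it.

```python
import numpy as np

def grover_search(n_bits, marked):
    N = 2 ** n_bits
    psi = np.full(N, 1 / np.sqrt(N))   # uniform superposition over N items
    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        psi[marked] *= -1              # oracle: flip the marked amplitude
        psi = 2 * psi.mean() - psi     # diffusion: inversion about the mean
    return iterations, abs(psi[marked]) ** 2

its, p = grover_search(10, marked=123)
# ~25 oracle calls over 1024 items instead of ~512 classical probes on average
print(f"{its} iterations; success probability ~ {p:.3f}")
```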
Quantum Computers Explained – Limits of Human Technology (Kurzgesagt – In a Nutshell)
Where are the limits of human technology? And can we somehow avoid them? This is where quantum computers become very interesting.

Cryptography PPT (MhdAsim)

Quantum Computers Explained in Detail | Future of Computing (Technical Guruji)
In Hindi: an introduction to quantum computers and quantum computing. Your mobile phone is itself a computer, and you may have heard of supercomputers, but quantum computing will bring a new kind of computing and will greatly help with many complex calculations.

Visual criptography (explained with an example) (Ali One Informatica)
Explains what visual cryptography is, using an example. The phases analyzed are: (1) how to generate the key; (2) encryption of the chosen image with the key; (3) visual decryption by overlapping the key with the encrypted image.

How Does a Quantum Computer Work? (Veritasium)

Quantum Cryptography | CaltechX and DelftX on edX | Course About Video (edX)
Learn how quantum communication provides security that is guaranteed by the laws of nature.
Take this course free on edX: https://www.edx.org/course/quantum-cryptography-caltechx-delftx-qucryptox#!
How can you tell a secret when everyone is able to listen in? In this course you will learn how to use quantum effects, such as quantum entanglement and uncertainty, to implement cryptographic tasks with levels of security that are impossible to achieve classically. This interdisciplinary course is an introduction to the exciting field of quantum cryptography, developed in collaboration between QuTech at Delft University of Technology and the California Institute of Technology. By the end of the course you will:
- be armed with a fundamental toolbox for understanding, designing and analyzing quantum protocols;
- understand quantum key distribution protocols;
- understand how untrusted quantum devices can be tested;
- be familiar with modern quantum cryptography, beyond quantum key distribution.
The course assumes a solid knowledge of linear algebra and probability at the level of an advanced undergraduate. Basic knowledge of elementary quantum information (qubits and simple measurements) is also assumed, but additional videos are provided for those completely new to quantum information. Topics include security definitions, the min-entropy, privacy amplification, protocols and proofs of security for quantum key distribution, the basics of device-independent quantum cryptography, and modern quantum cryptographic tasks and protocols.

CryptoGraphy Presentation (Allen Joseph)
A presentation on quantum cryptography.

Quantum Encryption Explained (Daniel Liu)
Explains what quantum entanglement is and how it works.

Prime Numbers & Public Key Cryptography (Simon Pampena)
A simple explanation of how prime numbers are used in public-key cryptography, from the ABC1 science program Catalyst.
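The prime-number connection can be shown in a few lines of Python. This is the standard textbook RSA example with tiny primes (illustrative only; real keys use primes hundreds of digits long, and the security rests on how hard it is to recover p and q from n):

```python
p, q = 61, 53            # two secret primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # Euler's totient 3120, computable only if p and q are known
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent 2753 (modular inverse; Python 3.8+)

message = 42
cipher = pow(message, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == message  # decrypt with the private key (d, n)
print(cipher, pow(cipher, d, n))
```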
[Hindi] What is Cryptography ? | Kya hai cryptography ? | Explained in simple words (Technical Sagar)
In Hindi: what cryptography is and where it is used. It concerns delivering data or messages safely, with added security, so that no unauthorized party can access them along the way.

Quantum cryptography demonstration (UofCVideos)
Researchers at the University of Calgary and SAIT Polytechnic demonstrate sending information between the two institutions using unbreakable codes derived using quantum physics, a significant milestone in Canada that paves the way for commercial applications producing highly secure information networks.

What is a Quantum Internet? | QuTech Academy
Do you want to learn more about quantum computers and the quantum internet? View the complete course at https://www.edx.org/course/quantum-internet-quantum-computers-how-delftx-qtm1x and more courses at http://qutech.nl/edu/

What is Encryption? Public Key Encryption? Explained in Detail (Technical Guruji)
In Hindi: what encryption and decryption are, including SSL and TLS encryption, public-key and private-key encryption, and WhatsApp's recently launched end-to-end encryption, and how encryption is used in email and online banking.

Quantum Teleportation and Cryptography (Quantum Computing)
Topics covered: cryptography, OTP and QKD, physical qubits, quantum coin flipping, quantum cloning circuit, Bell state circuit, quantum teleportation circuit.
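A Bell-state and teleportation circuit like the ones listed above can be sketched in a few lines of Qiskit (assuming the qiskit package is installed; here the usual mid-circuit measurements and classical corrections are replaced by their deferred-measurement equivalents, controlled gates, so the circuit stays unitary until the final readout):

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 1)

qc.ry(0.8, 0)     # arbitrary state to teleport on qubit 0 (angle chosen arbitrarily)
qc.h(1)           # Bell pair shared between qubit 1 (Alice) and qubit 2 (Bob)
qc.cx(1, 2)
qc.cx(0, 1)       # Alice's Bell-basis rotation on qubits 0 and 1
qc.h(0)
qc.cx(1, 2)       # deferred-measurement corrections applied to Bob's qubit
qc.cz(0, 2)
qc.measure(2, 0)  # measuring qubit 2 now reproduces the input state's statistics

print(qc.draw())
```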
QKD - BB84 Protocol - Sarah Croke - QCSYS 2011 (Institute for Quantum Computing)
Sarah Croke, a postdoctoral fellow at Waterloo's Perimeter Institute for Theoretical Physics, lectures at the Institute for Quantum Computing on the physics behind the BB84 protocol in quantum key distribution. The lecture was part of the Quantum Cryptography School for Young Students (QCSYS) 2011.

What is Post Quantum Cryptography? (Security Innovation)
What if all "secured" websites could no longer be trusted to keep your data safe? The impact on eCommerce, banking, and the other websites we use every day would be devastating. Learn about quantum computing and why this is a very real risk not too far away. Guide: https://web.securityinnovation.com/what-is-post-quantum-cryptography

Chem 131A. Lec 04. Quantum Principles: Complementarity, Quantum Encryption, Schrodinger Equation (UCI Open)
UCI Chem 131A Quantum Principles (Winter 2014), instructor A.J. Shaka, Ph.D. View the complete course: http://ocw.uci.edu/courses/chem_131a_quantum_principles.html (License: Creative Commons BY-NC-SA; terms of use: http://ocw.uci.edu/info). The course provides an introduction to quantum mechanics and principles of quantum chemistry, with applications to nuclear motions and the electronic structure of the hydrogen atom. It also examines the Schrödinger equation and how it describes the behavior of very light particles, compares the quantum description of rotating and vibrating molecules with the classical description, and studies the quantum description of the electronic structure of atoms. Index of topics: 0:00:20 Localized Wavefunctions; 0:11:30 Fourier Series; 0:13:21 Quantum Cryptography; 0:28:32 Time Evolution; 0:47:22 A Free Particle.

Symmetric Key and Public Key Encryption (itfreetraining)
Modern-day encryption is performed in two different ways: using the same key (symmetric encryption), or using a pair of keys called the public and private keys. This video looks at how these systems work and how they can be used together.
Encryption is the process of scrambling data so it cannot be read without a decryption key; it prevents data from being read by a third party that intercepts it.
Symmetric-key encryption uses the same key to encrypt data as to decrypt it, and is generally quite fast compared with public-key encryption. To protect the data, the key must be secured: a third party that gained access to the key could decrypt anything encrypted with it, so a secure channel is required to transfer the key between two points. For example, if you encrypt data on a CD and mail it to another party, the key must also be transferred to the second party, often by e-mail or telephone; sending the data by one method and the key by another is frequently enough protection, since an attacker would need to get both.
Public-key encryption uses two keys: one to encrypt data and the other to decrypt it. The public key can be downloaded by anyone, and anyone with it can encrypt data that can only be decrypted using the private key, so the public key does not need to be secured; only the private key must be kept in a safe place. Since the private key never needs to be transferred to the other party, there is no risk of it being intercepted. Public-key encryption is slower than symmetric-key encryption, so it is not suitable for every application. The math used is complex, but to put it simply, it relies on the modulus, or remainder, operator.
For example, if you wanted to solve x mod 5 = 2, the possible solutions would be 2, 7, 12 and so on; the private key provides additional information which allows the problem to be solved easily. The real math is more complex and uses much larger numbers, but public- and private-key encryption fundamentally rely on the modulus operator.
There are two reasons to combine the two methods. First, communication is often broken into a key exchange followed by a data exchange: the key used for data exchange is itself encrypted with public-key encryption, which, although slower, ensures the key cannot be accessed by a third party in transit; once the key has been transferred over this secure channel, the faster symmetric encryption is used for the data. (When data exchange is done with public-key encryption directly, a small key size is often used to reduce processing time.) Second, a symmetric key sometimes needs to be provided to multiple users: the Encrypting File System (EFS), for example, allows multiple users, including recovery users, to access the same file by storing several copies of the same symmetric key in the file, each encrypted with the public key of a user who requires access.
References: "Public-key cryptography" http://en.wikipedia.org/wiki/Public-k... and "Encryption" http://en.wikipedia.org/wiki/Encryption
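Here is what that key-exchange/data-exchange split looks like in code, a minimal sketch using the third-party Python cryptography package (the package and parameter choices are mine, not from the video): a fast symmetric key encrypts the payload, and only that small key is wrapped with slow public-key RSA.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Receiver's key pair: the public half may be shared with anyone
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Data exchange: fast symmetric encryption of the (potentially large) payload
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(b"a large message ...")

# Key exchange: wrap only the small symmetric key with slow RSA
wrapped_key = public_key.encrypt(sym_key, oaep)

# Receiver unwraps the symmetric key, then decrypts the payload
recovered = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert recovered == b"a large message ..."
```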
What is Cryptography - Introduction to Cryptography - Lesson 1 (Learn Math Tutorials)
Explains the fundamental concepts of cryptography: encryption, decryption, plaintext, cipher text, and keys.

Google Chrome is experimenting with Post-Quantum Cryptography
Quantum computers are a fundamentally different sort of computer that take advantage of quantum physics to solve certain sorts of problems dramatically faster than conventional computers can. While quantum computers will be very useful, they can also create problems: if large quantum computers can be built, they may be able to break the asymmetric cryptographic primitives currently used in TLS, the security protocol behind HTTPS. Quantum computers exist today but, for the moment, they are small and experimental, containing only a handful of quantum bits, and it is not even certain that large machines will ever be built, although Google, IBM, Microsoft, Intel and others are working on it. (Adiabatic quantum computers, like the D-Wave machine Google operates with NASA, can have large numbers of quantum bits but currently solve fundamentally different problems.) However, a hypothetical future quantum computer would be able to retrospectively decrypt any internet communication recorded today, and many types of information need to remain confidential for decades, so even the possibility of a future quantum computer is something we should think about now. The study of cryptographic primitives that remain secure even against quantum computers is called "post-quantum cryptography". Google has announced an experiment in Chrome in which a small fraction of connections between desktop Chrome and Google's servers use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm that would typically be used. By adding a post-quantum algorithm on top of the existing one, Google can experiment without affecting user security: if the post-quantum algorithm turns out to be breakable even with today's computers, the elliptic-curve algorithm still provides the best security today's technology can offer; if it turns out to be secure, it will protect the connection even against a future quantum computer. Google's aims are to highlight an area of research it believes to be important and to gain real-world experience with the larger data structures post-quantum algorithms will likely require. Of the many post-quantum algorithms available, Google selected one named "New Hope" for this experiment. News source: https://security.googleblog.com/2016/07/experimenting-with-post-quantum.html

What is Cryptography In Hindi II Cryptography Details In Hindi II Cryptography Explained In Hindi (Hindi Mein Jaankari)
In Hindi: what cryptography is, what it does, and its advantages. When you send an email or message, or share data you want kept private, the data is encrypted so that no one can steal it between you and the server, and the intended recipient decrypts and reads it. Websites whose address bar shows https (in green) are SSL-protected and store your information encrypted so it cannot be stolen. Cryptography is divided into two parts: symmetric cryptography and asymmetric cryptography.
Shoucheng Zhang: "Quantum Computing, AI and Blockchain: The Future of IT" (Talks at Google)

What is ID-BASED CRYPTOGRAPHY? (The Audiopedia)
Source: Wikipedia.org article, adapted under the https://creativecommons.org/licenses/by-sa/3.0/ license.
Identity-based cryptography is a type of public-key cryptography in which a publicly known string representing an individual or organization, such as an email address, domain name, or physical IP address, is used as a public key. The first implementation of identity-based signatures and an email-address-based public-key infrastructure (PKI) was developed by Adi Shamir in 1984; it allowed users to verify digital signatures using only public information such as the user's identifier. Under Shamir's scheme, a trusted third party delivers the private key to the user after verifying the user's identity, with verification essentially the same as that required for issuing a certificate in a typical PKI. Shamir similarly proposed identity-based encryption, which appeared particularly attractive since there was no need to acquire an identity's public key prior to encryption. However, he was unable to come up with a concrete solution, and identity-based encryption remained an open problem for many years. The first practical implementations were finally devised by Sakai in 2000, and by Boneh and Franklin in 2001; these solutions were based on bilinear pairings. Also in 2001, a solution was developed independently by Clifford Cocks.
Identity-based systems allow any party to generate a public key from a known identity value such as an ASCII string. A trusted third party, called the private key generator (PKG), generates the corresponding private keys. To operate, the PKG first publishes a master public key and retains the corresponding master private key (referred to as the master key). Given the master public key, any party can compute the public key corresponding to an identity by combining the master public key with the identity value; to obtain the corresponding private key, the party authorized to use that identity contacts the PKG, which uses the master private key to generate it. Identity-based systems have a characteristic problem in operation. Suppose Alice and Bob are users of such a system.
Since the information needed to find Alice's public key is completely determined by Alice's ID and the master public key, it is not possible to revoke Alice's credentials and issue new ones without either (a) changing Alice's ID (usually a phone number or an email address, which will appear in a corporate directory), or (b) changing the master public key and re-issuing private keys to all users, including Bob. This limitation may be overcome by including a time component (e.g. the current month) in the identity.
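The key-derivation flow (though not the pairing-based math that real schemes like Boneh–Franklin require) can be illustrated with a deliberately naive stand-in: a trusted PKG holds one master secret and deterministically derives a per-identity key, so an identity string plus a time component is all a sender needs to name a key. This toy uses an HMAC and is NOT real identity-based encryption; it only mimics the PKG workflow, including the coarse revocation trick described above.

```python
import hmac, hashlib

class ToyPKG:
    """Stand-in for a private key generator: one master secret, per-ID keys."""
    def __init__(self, master_key: bytes):
        self._master = master_key

    def extract(self, identity: str) -> bytes:
        # Deterministic derivation: the same ID (+ time component) -> same key
        return hmac.new(self._master, identity.encode(), hashlib.sha256).digest()

pkg = ToyPKG(b"master secret held only by the PKG")

# Including a validity period in the identity enables coarse revocation:
# next month the same email maps to a fresh key
alice_id = "alice@example.com|2019-07"
alice_key = pkg.extract(alice_id)  # delivered to Alice after an identity check
print(alice_key.hex())
```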
Microsoft Quantum Development Kit: Introduction and step-by-step demo (Microsoft Mechanics)
Krysta Svore, principal researcher at Microsoft, demonstrates the Microsoft Quantum Development Kit, now in preview. The kit includes a native, quantum-focused programming language called Q#; local and Azure-hosted simulators for testing Q# solutions; and sample Q# code and libraries to help you get started. In the demo she walks through code examples, explains where quantum principles like superposition and entanglement apply, shows how quantum communication works using teleportation as a "Hello World"-inspired first program, and moves on to more complex computations with molecular hydrogen. Get started at https://www.microsoft.com/quantumdevkit. Documentation: https://docs.microsoft.com/en-us/quantum/index?view=qsharp-preview. Code samples: https://github.com/microsoft/quantum

Quantum Computing: A Threat to Leading Financial Players (Vern Brownell) - Exponential Finance 2014
As the former CTO of Goldman Sachs, Vern Brownell understands the technical challenges facing large financial institutions. He shares his unique perspective as the head of quantum computer manufacturer D-Wave to provide an understanding of quantum computing and its radical impact on present and future computing.

[HINDI] What is Cryptography? | Simple Practical in Android | Cryptography क्या होता है? (Dhrubajyoti Dey)
In Hindi: what cryptography is, with a simple practical demonstration on Android.
What is VISUAL CRYPTOGRAPHY? (The Audiopedia)
Source: Wikipedia.org article, adapted under the https://creativecommons.org/licenses/by-sa/3.0/ license.
Visual cryptography is a cryptographic technique which allows visual information (pictures, text, etc.) to be encrypted in such a way that decryption can be performed by sight, without computation. One of the best-known techniques is credited to Moni Naor and Adi Shamir, who developed it in 1994. They demonstrated a visual secret-sharing scheme in which an image is broken up into n shares so that only someone with all n shares can decrypt the image, while any n - 1 shares reveal no information about the original. Each share is printed on a separate transparency, and decryption is performed by overlaying the shares; when all n shares are overlaid, the original image appears. There are several generalizations of the basic scheme, including k-out-of-n visual cryptography. Using a similar idea, transparencies can be used to implement a one-time-pad encryption, where one transparency is a shared random pad and another transparency acts as the ciphertext. Normally there is an expansion of the space requirement in visual cryptography, but if one of the two shares is structured recursively, the efficiency can be increased to 100%. Some antecedents of visual cryptography appear in patents from the 1960s; others appear in work on perception and secure communication. Visual cryptography can be used to protect biometric templates, where decryption does not require any complex computations.
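The 2-out-of-2 Naor–Shamir scheme described above is easy to implement. In the Python sketch below (using NumPy; the pixel-expansion pattern is one of several valid choices), each secret pixel becomes a pair of subpixels: the two shares get identical pairs for a white pixel and complementary pairs for a black one. Each share alone is uniformly random, while stacking (OR-ing) the transparencies renders black pixels fully dark and white pixels only half dark.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_shares(secret):
    """secret: 2-D array, 1 = black, 0 = white. Returns two shares, width doubled."""
    h, w = secret.shape
    s1 = np.zeros((h, 2 * w), dtype=np.uint8)
    s2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            pattern = rng.permutation([0, 1])  # random subpixel pair
            s1[y, 2*x:2*x+2] = pattern
            # white pixel: identical pair; black pixel: complementary pair
            s2[y, 2*x:2*x+2] = pattern ^ secret[y, x]
    return s1, s2

secret = np.array([[1, 0, 1],
                   [0, 1, 0]], dtype=np.uint8)
a, b = make_shares(secret)
stacked = a | b  # overlaying transparencies = pixelwise OR
print(stacked)   # black pixels -> both subpixels dark; white -> one dark, one light
```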
Asymmetric encryption - Simply explained (Savjee)
How does public-key cryptography work? What is a private key and a public key? Why is asymmetric encryption different from symmetric encryption? All explained in plain English.

The Enigma Machine Explained (World Science Festival)
As technology increases, so do the methods of encryption and decryption we have at our disposal. World War II saw wide use of various codes, from substitution ciphers to Navajo code talkers in the Pacific theater. Here, science journalist and author Simon Singh demonstrates the German Enigma machine, a typewriter-like device used to encrypt communications; he demonstrates not only its operation but both the strength and the fatal flaws of its method. Full program: https://youtu.be/nVVF8dgKC38 (original program date: June 4, 2011).
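The machine's defining property, and its famous flaw, is that the reflector makes every key setting a self-inverse, fixed-point-free substitution: enciphering twice returns the plaintext, and no letter ever encrypts to itself. A single-rotor toy shows both properties (this is a sketch of the principle using the historical Rotor I and Reflector B wirings, stepping once per keypress, not a full three-rotor Enigma with plugboard):

```python
ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
ROTOR = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"      # Enigma I, Rotor I wiring
REFLECTOR = "YRUHQSLDPXNGOKMIEBFZCWVJAT"  # Reflector B (a fixed-point-free involution)

def encipher(text):
    out = []
    for step, c in enumerate(text):
        o = (step + 1) % 26                              # rotor steps before each letter
        x = ALPHA.index(c)
        x = (ALPHA.index(ROTOR[(x + o) % 26]) - o) % 26  # forward through the rotor
        x = ALPHA.index(REFLECTOR[x])                    # bounce off the reflector
        x = (ROTOR.index(ALPHA[(x + o) % 26]) - o) % 26  # back through the rotor
        out.append(ALPHA[x])
    return "".join(out)

msg = "ATTACKATDAWN"
ct = encipher(msg)
print(ct)
print(encipher(ct))                          # self-inverse: recovers ATTACKATDAWN
assert all(p != c for p, c in zip(msg, ct))  # the flaw: a letter never maps to itself
```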
Safe and Sorry – Terrorism & Mass Surveillance (Kurzgesagt – In a Nutshell)
An examination of terrorism and mass surveillance. The original description consists of detailed source links for the video's claims, covering the terrorist surveillance program and its results, the Boston marathon bombing, the Apple–FBI dispute, abuse of anti-terrorism laws, and the effects of surveillance on society.

What is QUANTUM LITHOGRAPHY? (The Audiopedia)
Source: Wikipedia.org article, adapted under the https://creativecommons.org/licenses/by-sa/3.0/ license.
Quantum lithography is a type of photolithography which exploits non-classical properties of photons, such as quantum entanglement, to achieve performance superior to ordinary classical lithography. It is closely related to the fields of quantum imaging, quantum metrology, and quantum sensing. The effect exploits the quantum mechanical state of light called the NOON state. Quantum lithography was invented in Jonathan P. Dowling's group at JPL and has been studied by a number of groups.
Of particular importance, quantum lithography can beat the classical Rayleigh criterion for the diffraction limit. Classical photolithography has an optical imaging resolution that cannot be smaller than the wavelength of the light used. For example, when photolithography is used to mass-produce computer chips, it is desirable to produce smaller and smaller features, which classically requires moving to shorter and shorter wavelengths (ultraviolet and x-ray), at exponentially greater cost for the optical imaging systems at these extremely short wavelengths. Quantum lithography exploits the entanglement between specially prepared photons in the NOON state to achieve smaller resolution without requiring shorter wavelengths: a beam of red photons, entangled ten at a time in the NOON state, would have the same resolving power as a beam of x-ray photons. The field is in its infancy; although experimental proofs of principle have been carried out using the Hong–Ou–Mandel effect, it is still a long way from commercial application.
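The resolution claim follows from the fact that an N-photon NOON state accumulates phase N times faster than a single photon, so its interference fringes are N times narrower. A few lines of NumPy make the comparison (an idealized, lossless model; the wavelength is chosen arbitrarily for illustration):

```python
import numpy as np

wavelength = 633e-9                  # red light, in metres
phi = np.linspace(0, 2 * np.pi, 7)  # sample interferometer phases

def fringe(phi, n_photons):
    # N-photon NOON-state detection probability: (1 + cos(N * phi)) / 2
    return (1 + np.cos(n_photons * phi)) / 2

print(fringe(phi, 1))   # ordinary single-photon fringes
print(fringe(phi, 10))  # 10-photon NOON state: ten times finer fringes

# Effective feature scale: lambda / (2N) instead of lambda / 2
for n in (1, 10):
    print(f"N = {n:2d}: minimum feature ~ {wavelength / (2 * n) * 1e9:.0f} nm")
```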
IOTA tutorial 1: What is IOTA and some terminology explained

Update: in this video I mentioned Curl and a vulnerability found in this algorithm; however, it seems that this is NOT correct. Please read: https://blog.iota.org/official-iota-foundation-response-to-the-digital-currency-initiative-at-the-mit-media-lab-part-1-72434583a2

This is part 1 of the IOTA tutorial. In this video series different topics will be explained which will help you to understand IOTA. It is recommended to watch each video sequentially, as I may refer to certain IOTA topics explained earlier. IOTA is not an acronym for Internet of Things (IoT); an iota is just something very small. David Sønstebø, Sergey Ivancheglo, Dominik Schiener and Serguei Popov founded IOTA in 2015. The IOTA Foundation's main focus is the Internet of Things and the machine economy, but this technology is well suited for payments between humans as well. The IOTA white paper can be found at: https://iota.org/IOTA_Whitepaper.pdf All IOTAs which will ever exist have already been created. The total IOTA supply is 2,779,530,283,277,761 IOTAs.

IOTA features:
- Scalability: the network becomes stronger when the number of transactions increases; IOTA can achieve high transaction throughput.
- Decentralisation: IOTA has no miners. Every transaction maker is also a transaction validator, which means every transaction maker actively participates in the consensus.
- No transaction fees: this means IOTA can be used for micropayments.
- Quantum computing protection: quantum computers will be able to crack current data encryption methods much faster than current classical computers. IOTA uses the Winternitz one-time signature scheme, which is a quantum-resistant algorithm. See: https://eprint.iacr.org/2011/191.pdf

IOTA is a 3rd-generation public permissionless distributed ledger, based on a directed acyclic graph (DAG). IOTA calls this DAG the tangle. The tangle is NOT the same as the blockchain. A tangle is a data structure based on a directed acyclic graph (DAG). Each transaction always validates 2 previous non-validated transactions. Directed means the graph points in one direction. Tips are the unconfirmed transactions in the tangle graph. Height is the length of the longest oriented path to the genesis. Depth is the length of the longest reverse-oriented path to some tip.

Making a transaction is a 3-step process:
- Signing: your node (computer / mobile) creates a transaction and signs it with your private key.
- Tip selection: your node chooses two other unconfirmed transactions (tips) using the Random Walk Monte Carlo (RWMC) algorithm.
- Proof of work: your node checks that the two transactions are not conflicting. Next, the node must do some proof of work (PoW) by solving a cryptographic puzzle (hashcash). Hashcash works by repeatedly hashing the same data with a tiny variation until a hash is found with a certain number of leading zero bits (see the sketch after this section). This PoW is there to prevent spam and Sybil attacks.

The goal of the Random Walk Monte Carlo algorithm is to generate fair samples from some difficult distribution. The RWMC algorithm is used in two ways: to choose two other unconfirmed transactions (tips) when creating a transaction, and to determine if a transaction is confirmed. To determine the confirmation level of your transaction, we choose a depth to start from and execute the Random Walk Monte Carlo algorithm N times; the probability of your transaction being accepted is then M of N, M being the number of times you land on a tip that has a path to your transaction. If you execute RWMC 100 times, and 60 tips have a path to your transaction, then your transaction is 60% confirmed. It is up to the merchant to decide whether to accept the transaction and exchange goods, just as with Bitcoin, where you want to wait for at least 6 blocks for high-value transactions. Transactions with bigger depths take longer to be validated.

An IOTA reference implementation (IRI), wallet and libraries are available at: https://github.com/iotaledger To set up a full node you need to tether with neighbours by exchanging your IP address with theirs. Once you have sent a transaction from an address, you should never use this address again. A tangle can branch off and merge back into the network; this is called partitioning. The Coordinator, or 'Coo' for short, is a set of full nodes scattered across the world, run by the IOTA Foundation. It creates zero-value transactions called milestones which full nodes reference. The presentation used in this video tutorial can be found at: https://www.mobilefish.com/developer/iota/iota_quickguide_tutorial.html
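Here is a minimal Python sketch of the hashcash idea mentioned above. This is an illustration only: IOTA's actual proof of work operates on trits with the Curl hash function, whereas this sketch uses SHA-256 purely to show the "leading zero bits" mechanism.

```python
import hashlib
from itertools import count

def hashcash(data: bytes, bits: int) -> int:
    """Find a nonce so that sha256(data + nonce) has `bits` leading zero bits."""
    for nonce in count():
        digest = hashlib.sha256(data + str(nonce).encode()).digest()
        # Treat the digest as a big integer; the top `bits` bits must be zero.
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

nonce = hashcash(b"transaction-bytes", 16)   # ~2**16 hashing attempts on average
print(nonce)
```

Verifying the puzzle takes a single hash, while finding the nonce takes about 2^bits attempts on average; that asymmetry is exactly what makes spam and Sybil attacks expensive.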
Simple introduction to Shamir's Secret Sharing and Lagrange interpolation

A presentation that was part of the cryptography course at Chalmers University, 2015. Starting with simple examples, we introduce Shamir's secret sharing scheme and how Lagrange interpolation fits in. I may have misspoken at some points, but that's to keep you alert :) Solution and follow-up in https://youtu.be/rWPZoz0aux4
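A minimal Python sketch of the scheme the talk introduces (my own illustration; the prime and parameter names are arbitrary choices): the secret is the constant term of a random polynomial of degree k-1 over a prime field, each share is a point on the polynomial, and Lagrange interpolation at x = 0 recovers the secret from any k shares.

```python
import random

PRIME = 2**127 - 1   # a Mersenne prime; all arithmetic is over this prime field

def make_shares(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    poly = lambda x: sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation evaluated at x = 0 yields the constant term."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, -1, PRIME) is the modular inverse (Python 3.8+).
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(secret=12345, k=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares give back 12345
```

With fewer than k shares the polynomial is completely undetermined, so k-1 shares reveal nothing about the secret; that is the whole point of the threshold scheme.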
Digital Signatures Explained - Keep yours Safe!!!

(Translated from Hindi.) Greetings friends, in this video I have talked to you about digital signatures. You have all heard about them many times, and in daily life you all use signatures, but a digital signature is a completely different concept and is also quite important. I hope you will like this video about digital signatures.
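For readers who want to try the concept in code rather than video form, here is a minimal sketch using the third-party Python `cryptography` package and Ed25519 signatures (my own example, not from the video): signing requires the private key, while anyone holding the public key can verify that the message was not altered.

```python
# pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # keep this secret
public_key = private_key.public_key()        # publish this freely

message = b"I agree to the contract."
signature = private_key.sign(message)        # only the key holder can produce this

try:
    public_key.verify(signature, message)    # raises if message or signature changed
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```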
DOI: 10.14704/nq.2012.10.3.539 Brain Photons as the Quanta of the Quantum String. Miroslaw Kozlowski, Janina Marciak-Kozlowska. In this paper a new Schrödinger equation for brain waves is proposed and solved for a quantum well with infinite boundaries. The spectra of the alpha, beta, theta, delta and gamma photons are calculated, and the agreement between the calculated spectra and human electroencephalography is rather good. The width of the quantum well (the source of the brain waves) is of the order of 10⁻⁶ nm, i.e., of the order of the nuclear radius. The brain photons are emitted as quanta of the quantum string, i.e., with angular frequency ω_n, where the quantum number n = 1, 2, 3, 4, 5 respectively. NeuroQuantology | September 2012 | Volume 10 | Issue 3 | Pages 453-461. Keywords: consciousness; quantum string; brain waves; quantons
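For reference (the abstract does not reproduce the equations): the textbook spectrum of a particle of mass $m$ in an infinite quantum well of width $L$, which appears to be the paper's starting point, is

$$E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}, \qquad \omega_n = \frac{E_n}{\hbar}, \qquad n = 1, 2, 3, \ldots$$

so the quoted angular frequencies $\omega_n$ for $n = 1, \ldots, 5$ would follow directly from the chosen well width.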
Approximate Methods

The problems discussed in the previous section (harmonic oscillator, rigid rotator, etc.) are some of the few quantum mechanics problems which can be solved analytically. For the vast majority of chemical applications, the Schrödinger equation must be solved by approximate methods. The two primary approximation techniques are the variational method and perturbation theory.

David Sherrill 2006-08-15
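As a toy illustration of the variational method (my own sketch, not part of the original notes): for the harmonic oscillator with $\hbar = m = \omega = 1$ and the Gaussian trial function $\psi_\alpha(x) = e^{-\alpha x^2}$, the energy expectation works out analytically to $E(\alpha) = \alpha/2 + 1/(8\alpha)$, and minimizing over the parameter recovers the exact ground-state energy 1/2.

```python
import numpy as np

# Variational method for the 1D harmonic oscillator (hbar = m = omega = 1).
# For the trial function psi_alpha(x) = exp(-alpha x^2) one finds analytically
#   <H>(alpha) = alpha/2 + 1/(8 alpha),
# which is an upper bound on the true ground-state energy for every alpha.
alphas = np.linspace(0.05, 2.0, 2000)
energies = alphas / 2 + 1 / (8 * alphas)

best = np.argmin(energies)
print(f"optimal alpha ~ {alphas[best]:.3f}, E ~ {energies[best]:.4f}")
# -> alpha ~ 0.5, E ~ 0.5: here the trial family contains the exact ground state.
```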
Macs in Chemistry Insanely great science Mac OS X Applications J-M

Jamberoo:- Molecular Editor Jamberoo (formerly JMolEditor) is a Java program for displaying, analyzing, editing, converting, and animating molecular systems. Supported file types: Gromacs Gro files, Mopac 2002 log files, xyz files.

JAMMING:- 3D Protein analysis Although several approaches based on physics principles (e.g., electrostatics) have been reported to identify critical residues from the three-dimensional structure of proteins, there is limited access to software aimed at identifying critical residues from protein structures. Network analysis, on the other hand, has recently been shown to be successful and complementary to protein sequence-based approaches. JAMMING provides a means to identify critical residues from protein 3D structures. Efficient identification of critical residues based only on protein structure by network analysis, Cusack M, Thibert B, Bredesen DE and del Rio G. 2007. PLoS ONE 2(5):e421

JChem:- Java chemoinformatics toolkit JChem Modules: JChem Base: adds a chemical interface to corporate databases, which can be applied for combined SQL and structural queries; imports/exports molecules, substructures, or reactions in standard formats (Molfile, SD file, RD file, SMILES, SMARTS, etc.). JChem Cartridge: adds chemical knowledge to the Oracle platform, giving automatic access to Oracle's security, scalability, and replication features. Standardizer: structure canonicalization tool converting molecules from different formats into a standard representation. Screen: screening based on pharmacophore or chemical fingerprints or other descriptors. Reactor: generating reaction products from reaction equations and reactants. Fragmenter: generating building blocks based on Recap rules from molecule libraries. Serial Molecule Generator: transforming molecules by a sequence of user-defined transformations. Chemical Term Evaluator: evaluating chemical expressions. JKlustor: clustering and diversity calculations based on molecular fingerprints or other properties. Also a review of Instant JChem. JChem 3.2.1 has been released.

J-ICE:- Interface for Crystallographic and Electronic Properties J-ICE is a Jmol Interface for Crystallographic and Electronic Properties. J-ICE can deal with CASTEP, CRYSTAL09 (as well as 06, 03 and 98), Gaussian09 (G03), GROMACS, QUANTUM ESPRESSO, VASP, Wien2k, FHI-aims, CIF, PDB and many other formats. You can try it out online. J-ICE: a new Jmol interface for handling and visualizing crystallographic and electronic properties, P. Canepa, R.M. Hanson, P. Ugliengo, M. Alfredsson, J. Appl. Cryst. (2011), 44 [doi]

Jmol:- Molecule viewer Jmol is a Java molecular viewer for three-dimensional chemical structures. Features include reading a variety of file types and output from quantum chemistry programs, and animation of multi-frame files and computed normal modes from quantum programs. Also available as an applet. Jmol is a fully compatible replacement for RasMol/Chime in that it will use RasMol/Chime scripts.

JSDraw:- JavaScript chemistry library JSDraw™ is a chemical structure editor/viewer built in 100% JavaScript, running on all platforms, including Windows, Mac, Linux, iPad/iPhone, Android tablets/phones and Chromebooks. Now includes a spreadsheet.

jVisualizer:- NMR analysis jVisualizer is for analysing and visualising first-order NMR coupling patterns.

Ketcher:- Chemical drawing Ketcher is a JavaScript-based chemical drawing package.
Standalone mode: Ketcher supports a standalone mode in which no server support is required. In this mode, SMILES loading and automatic layout are not available. Scalable Vector Graphics (SVG) for rendering: Ketcher uses SVG to achieve the highest-quality in-browser chemical structure rendering. The SVG standard is supported by most modern browsers and provides smooth and lightweight drawing. Note: Internet Explorer through version 8 does not support SVG; IE rendering is based on VML (Vector Markup Language) instead. SMILES strings: SMILES is a compact format for chemical structure representation. Ketcher provides the ability to load and save structures in this useful format. Automatic layout (clean up): the server-side structure layout algorithm is developed in C++ as part of the Indigo toolkit. It provides fast 2D structure representations that satisfy common chemical drawing standards. Other features: Hotkeys. For a more rapid and convenient way of structure drawing, Ketcher offers a variety of hotkeys. See Editing Tips for the list of hotkeys. Stereochemistry. Ketcher provides complete stereochemistry support during the editing, loading, and saving of chemical structures. Undo/Redo. Ketcher stores a full history of performed actions, and the user can roll back to any previous state. In-place atom editing. Ketcher allows for the direct input of atom labels and charges. See Editing Tips for details. Molfile support. In addition to SMILES strings, Ketcher also supports Molfile saving and loading.

Kinemage:- Protein structure analysis Kinemage offers a variety of very useful bits of software for investigating protein structure: Mage (a 3D display program), KiNG, Reduce, Probe, Dang.

Kinetiscope:- Method for the accurate simulation of chemical reactions Kinetiscope is a scientific software tool that provides the bench scientist with an easy-to-use, rapid, interactive method for the accurate simulation of chemical reactions. Kinetiscope does not integrate sets of coupled differential equations to predict the time history of a chemical system. Instead, it uses a general, rigorously accurate stochastic algorithm to propagate a reaction. The stochastic method for kinetics simulations was first published in the 1970s by Professor D. L. Bunker (1) (University of California at Irvine) and by Dr. D. Gillespie (2) (Naval Weapons Center, China Lake). The stochastic method is comparable in efficiency to integration for simple kinetic schemes, and significantly faster for stiff systems. It is superior for modeling reactions which depend on sporadic events and fluctuations, such as explosions and nucleation, and easily handles complex situations such as reactions in continuously changing volumes. A particularly useful feature of stochastic simulation methods is that they offer enhanced flexibility in setting up reaction mechanisms, since they do not require explicit checking for conservation of mass and energy. The library includes simulations of gas phase, solution phase and solid state reactions such as co- and terpolymerization ... radical chain-initiated polymerization (including a sample spreadsheet for extracting molecular weight distributions) ... kinetic resolution of enantiomeric mixtures ... chemistry in supercritical media ... pH-dependent model enzyme kinetics ... thermogravimetric analysis ... temperature programmed desorption ... smog chemistry ... silane chemistry in a chemical vapor deposition reactor ... model batch and flow catalytic reactors ... curing of polymers with significant volume shrinkage ... synthetic protocols for preparation of a photosensitizer ... chemical oscillators ... electrochemical reactions studied by cyclic voltammetry ... photochemical reactions from a pulse light source ... pharmacokinetics of drug dosing ... and imaging chemistry in photoresist materials. 1. D. L. Bunker, B. Garrett, T. Kliendienst and G.S. Long III, Combustion and Flame 23, 373 (1974). 2. D. T. Gillespie, Journal of Computational Physics 22, 403 (1976).
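To show what "propagating a reaction stochastically" means in the simplest case, here is a minimal Python sketch of Gillespie's direct method (reference 2 above) for the reversible isomerization A <-> B. This is my own illustration of the published algorithm, not Kinetiscope's actual engine, and the rate constants are arbitrary.

```python
import math
import random

k_f, k_r = 1.0, 0.5        # forward and reverse rate constants (arbitrary)
a, b = 1000, 0             # initial molecule counts of A and B
t, t_end = 0.0, 10.0

while t < t_end:
    rates = [k_f * a, k_r * b]                     # propensities of A->B and B->A
    total = sum(rates)
    if total == 0:
        break
    t += -math.log(1.0 - random.random()) / total  # exponential waiting time
    if random.random() * total < rates[0]:         # choose a reaction by propensity
        a, b = a - 1, b + 1
    else:
        a, b = a + 1, b - 1

print(f"t = {t:.2f}: A = {a}, B = {b}")  # fluctuates around B/A = k_f/k_r = 2
```

No differential equations are integrated: each step draws a random waiting time and a random reaction, which is why the method copes naturally with fluctuation-driven chemistry such as nucleation.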
KiSThelP:- Kinetic and Statistical Thermodynamical Package KiSThelP is a cross-platform, free, open-source program developed to estimate molecular and reaction properties from electronic structure data. To date, three computational chemistry software formats are supported (Gaussian, GAMESS, NWChem). Some key features are: • gas-phase molecular thermodynamic properties (offering a hindered rotor treatment) • thermal equilibrium constants • transition state theory rate coefficients (TST, VTST) including one-dimensional tunnelling effects (Wigner and Eckart) • RRKM rate constants, for elementary reactions with well-defined barriers. KiSThelP is intended as a working tool both for the general public and for more expert users. It provides graphical front-end capabilities designed to facilitate calculations and the interpretation of results. KiSThelP lets users change input data and simulation parameters directly through the GUI and visually probe how this affects results. Users can access results in the form of graphs and tables. The graphical tool offers customization of 2D plots and exporting of images and data files.

KNIME:- Data exploration platform KNIME is a modular data exploration platform that enables the user to visually create data flows (often referred to as pipelines), selectively execute some or all analysis steps, and later investigate the results through interactive views on data and models.

KnowledgeMiner:- A self-organising data mining tool.

LabCal:- iPhone app to calculate molarity The Laboratory Calculator is a utility to calculate molarity, to convert grams and moles, and to compute dilutions of stock solutions.

LabCalPro:- Enhanced version of LabCal LabCalPro is the enhanced version of LabCal.

LabView:- Data acquisition and analysis LabView is a graphical programming environment supporting more than 80 measurement devices, with driver software for data acquisition and instrument control, for developing custom measurement and automation systems based on Mac OS X.

LAMMPS:- Molecular Dynamics LAMMPS has potentials for soft materials (biomolecules, polymers) and solid-state materials (metals, semiconductors) and coarse-grained or mesoscopic systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the atomic, meso, or continuum scale. LAMMPS runs on single processors or in parallel using message-passing techniques and a spatial decomposition of the simulation domain. The code is designed to be easy to modify or extend with new functionality. LAMMPS is distributed as open source code under the terms of the GPL.

LaTeX Chemistry Packages Chemistry packages for use with the document typesetting system LaTeX.
These include packages for reactions, molecular formulae, R and S codes, chemistry journals, symmetry elements, and even for drawing chemical structures.

Libint:- Functions to compute two-body integrals The Libint library is used to evaluate the traditional (electron repulsion) and certain novel two-body matrix elements (integrals) over Cartesian Gaussian functions used in modern atomic and molecular theory. The idea of the library is to let the computer write optimized code for computing such integrals. There are two primary advantages to this: much less human effort is required to write code for computing new integrals, and code can be optimized specifically for a particular computer architecture (e.g., a vector processor). Libint has been used to implement methods such as Hartree-Fock (HF) and Kohn-Sham density functional theory (KS DFT), second-order Moller-Plesset perturbation theory (MP2), the coupled cluster singles and doubles (CCSD) method, as well as explicitly correlated R12 methods. From the compilation notes: MacOS X (versions 10.1 and 10.2) seems to set the stacksize limit too low for the library generators to run properly; use unlimit to increase the limit and then proceed with compilation. On Intel Xeon and Pentium 4 systems we obtain significantly higher performance with Intel C/C++ compilers. In the latest test we compared the Intel C/C++ compiler (version 7.1) against the GNU gcc compiler (versions 3.2 and 3.3); libint performs at least 10% better with the former compiler. The recommended compiler options are -O3 -xW -tpp7.

LibMCS:- Clusters based on maximum common substructure LibraryMCS clusters a set of chemical structures on a structural basis. Structures that share a common substructure are clustered together. The common substructure is identified by the clustering program, and it is always the largest one among all substructures found in the structure set. Such a substructure is called the Maximum Common Substructure (MCS). No predefined fragments are applied in finding the MCS.

LigandScout:- LigandScout is a software tool that allows one to rapidly and transparently derive 3D pharmacophores from structural data of macromolecule/ligand complexes in a fully automated and convenient way.

LigBuilder:- Build ligands for protein active sites I recently came across a program called LigBuilder, developed at the Molecular Design Laboratory; it is a multi-purpose program written for the structure-based drug design procedure. Based on the three-dimensional structure of the target protein, it can automatically build ligand molecules within the binding pocket and subsequently screen them. To quote from the website: "(1) The program analyzes the binding pocket of the target protein and derives the key interaction sites. A pharmacophore model is suggested and it could be applied to 3D database searching. (2) The user can choose either a growing strategy or a linking strategy to develop ligand molecules. (3) Molecules are constructed by using fragments as building blocks. Various kinds of structural manipulation are provided, such as growing, linking, and mutation. On-the-fly minimization of conformation is performed during the building-up procedure. While the target protein is kept rigid, flexibility of the ligand molecules is considered. (4) Molecules are evolved by a Genetic Algorithm. The fitness score of a molecule is evaluated by considering its chemical viability as well as binding affinity. (5) Chemical rules for judging "drug-likeness" are applied to screen the resultant molecules.
Chemical stability, synthesis feasibility, and toxicity can also be taken into account by defining "forbidden structure" libraries. (6) All the input and output molecules are in popular formats, i.e. protein in PDB format and ligand in Sybyl Mol2 format. The program is very easy to use and maintain." LigBuilder is written in ANSI C++ and has been tested on UNIX and LINUX but not MacOSX; I downloaded the source and, with minor modifications, got it to run under MacOSX. For a more detailed description of LigBuilder, please refer to: Wang, R.; Gao, Y.; Lai, L. "LigBuilder: A Multiple-Purpose Program for Structure-Based Drug Design", J. Mol. Model., 2000, 6, 498-516.

Instructions for running LigBuilder under MacOSX: Download LigBuilder. A folder called LigBuilderv1.2 will be created in 'Downloads'; in that folder you should find: bin/ (default working directory), pocket/ (source code of the POCKET module), grow/ (source code of the GROW module), link/ (source code of the LINK module), process/ (source code of the PROCESS module), parameter/ (necessary parameters), example/ (test examples), manual/ (user manual in HTML), fragment.mdb/ (building-block fragment library), forbidden.mdb/ (forbidden substructure library), toxicity.mdb/ (toxic substructure library). LigBuilder has four main modules, i.e. POCKET, GROW, LINK, and PROCESS; you need to compile them separately. You can do this by simply entering each subdirectory, i.e. "pocket/", "grow/", "link/", and "process/", and typing "make" to run the Makefile scripts. The scripts will compile the source code automatically and generate the executables. The default C++ compiler assigned in the Makefile script is the SGI "CC" compiler. To compile under MacOSX, open the "Makefile" in a text editor (I used BBEdit) and modify the first line to "CC = g++" before compiling. You need to do this in each of the POCKET, GROW, LINK, and PROCESS folders. Now open a terminal window and cd to the LigBuilderv1.2 folder:

>cd /Downloads/LigBuilderv1.2/

Now cd to each of the POCKET, GROW, LINK, and PROCESS folders in turn, run make, and then copy the resulting executable to /usr/local/bin (you will need the admin password):

>cd pocket/
g++ -c -o main.o main.c
g++ -c -o parameter.o parameter.c
g++ -c -o protein.o protein.c
g++ -c -o ligand.o ligand.c
g++ -c -o pocket.o pocket.c
g++ -c -o misc.o misc.c
g++ main.o parameter.o protein.o ligand.o pocket.o misc.o -o pocket -lm
>sudo cp pocket /usr/local/bin/pocket
>cd ..
>cd grow/
g++ -c -o main_grow.o main_grow.c
g++ -c -o basic.o basic.c
g++ -c -o parameter.o parameter.c
g++ -c -o forcefield.o forcefield.c
g++ -c -o fraglib.o fraglib.c
g++ -c -o pocket.o pocket.c
g++ -c -o check.o check.c
g++ -c -o misc.o misc.c
g++ -c -o ligand.o ligand.c
g++ -c -o logp.o logp.c
g++ -c -o grow.o grow.c
g++ -c -o mutate.o mutate.c
g++ -c -o score.o score.c
g++ -c -o search.o search.c
g++ -c -o ga_grow.o ga_grow.c
g++ main_grow.o basic.o parameter.o forcefield.o fraglib.o pocket.o check.o misc.o ligand.o logp.o grow.o mutate.o score.o search.o ga_grow.o -o grow -lm
>sudo cp grow /usr/local/bin/grow
>cd ..
>cd link/
g++ -c -o main_link.o main_link.c
g++ -c -o basic.o basic.c
g++ -c -o parameter.o parameter.c
g++ -c -o forcefield.o forcefield.c
g++ -c -o fraglib.o fraglib.c
g++ -c -o pocket.o pocket.c
g++ -c -o check.o check.c
g++ -c -o misc.o misc.c
g++ -c -o ligand.o ligand.c
g++ -c -o logp.o logp.c
g++ -c -o link.o link.c
g++ -c -o mutate.o mutate.c
g++ -c -o score.o score.c
g++ -c -o search.o search.c
g++ -c -o ga_link.o ga_link.c
g++ main_link.o basic.o parameter.o forcefield.o fraglib.o pocket.o check.o misc.o ligand.o logp.o link.o mutate.o score.o search.o ga_link.o -o link -lm
>sudo cp link /usr/local/bin/link
>cd ..
>cd process/
g++ -c -o main_process.o main_process.c
g++ -c -o parameter.o parameter.c
g++ -c -o basic.o basic.c
g++ -c -o ligand.o ligand.c
g++ -c -o population.o population.c
g++ -c -o check.o check.c
g++ -c -o misc.o misc.c
g++ main_process.o parameter.o basic.o ligand.o population.o check.o misc.o -o process -lm
>sudo cp process /usr/local/bin/process

LSD:- Logic for Structure Determination Logic for Structure Determination (LSD) finds all possible molecular structures of an organic compound that are compatible with its spectroscopic data. Structure building relies on connectivity data found in 2D NMR spectra, without any reference to a chemical shift database. Molecular structures containing up to 50 non-hydrogen atoms have been investigated by means of the LSD program. The measurement protocol required by LSD includes the recording of 1D 1H and 13C as well as 2D COSY, HSQC and HMBC spectra. The status of each atom must be defined: it includes the atom symbol, the hybridization state (sp3, sp2 or sp), the number of attached hydrogen atoms, and the electric charge. This part of the data set is most often easily deduced by the user from elementary chemical shift knowledge. The status of the heteroatoms is deduced from the elemental molecular formula. Carbon-carbon bonds are deduced from COSY and HSQC data, while HMBC and HSQC data provide connectivity relationships through one or two bonds for non-hydrogen atoms. The constraints imposed by atom status and 2D NMR data may be enforced by other atom neighborhood relationships; for example, it is possible to force a carbon atom to be bound only to carbon atoms. The user is responsible for such supplementary data. Contradictory constraints lead LSD to fail in the search for a solution structure. The low resolution of HMBC and HSQC spectra in the C-13 chemical shift domain causes peak assignment ambiguities. It is possible to define groups of resonances and to assign an HMBC correlation peak to a group; this means that the correlation is caused by at least one member of the group. The solutions may be selected using a substructure or a combination of substructures; those violating Bredt's rule are also discarded. The input to LSD is coded by the user as a text file, according to the instructions in the MANUAL_ENG.html document. A program named OUTLSD reads the generated solutions and converts them into various formats: bond lists, 2D coordinates, fancy 3D coordinates (fancy, due to the lack of stereochemical information), and SMILES chains. The 2D coordinates can be converted to Postscript drawings and to .mol (SDF) files. Execution of the LSD program may be controlled by specific instructions for output formatting such as: single-step execution, search for the biggest found fragment (for debugging purposes), report writing, verbosity level, substructure search.
Tens of structures have been investigated by means of the LSD program, essentially in the field of natural product chemistry, and especially for terpenes and alkaloids.

Lucas:- Molecule Viewer and Editor.

Lumo:- Molecular Orbital Visualisation Lumo accelerates the visualization of molecular orbitals from electronic structure calculations by harnessing the power of the graphics processing unit in modern Macs. Lumo currently reads formatted checkpoint files from Gaussian03/09 calculations, and there is preliminary support for Orca output files. Lumo was designed to speed up the slow part of looking at molecular orbitals and making molecular orbital diagrams. Lumo eliminates several steps along the way by reading in the output of programs like Gaussian, quickly visualizing the orbitals, and creating pictures of the essential orbitals in seconds. Lumo requires Mac OS 10.6 or higher, a 64-bit processor, and an OpenCL-capable compute device. Lumo is routinely run on MacBook Pros and MacBook Airs. For analysis of larger systems, it is recommended to have at least 4GB of system RAM. There is a movie of Lumo in action on the website.

MacChess:- Crystallography MacChess is a macromolecular crystallographic facility.

MacMolPlt:- GAMESS viewer Now renamed wxMacMolPlt, this is a cross-platform (Mac OS X, Linux and Windows) GUI for preparing, submitting and visualizing input and output for the GAMESS quantum chemistry package. Features include a graphical molecule builder, GAMESS input generation, animation of output and visualization of molecules, normal modes, orbitals and other properties.

Macro MW:- iPhone app for calculating the molecular weight of your DNA/RNA or protein.

Marvin:- Structure drawing/viewing Marvin is a recently updated Java application/applet for displaying and editing chemical structures; it can be used to render different structure formats including SMILES/SMARTS and MOL files, and it can also be used as a drawing package to generate a variety of file formats. There is a more detailed review here.

massXpert:- Mass spectrometry of polymers 1. With the XpertDef module you define brand new polymer chemistries (what the atoms are, what the monomers are that make up the polymer, what chemical modifications you might need to simulate biological or synthetic chemical reactions, the different ways you might need to cut a polymer sequence into pieces (chemical or enzymatic), the different ways a small oligomer might fragment in the mass spectrometer's gas phase, and so on). 2. With the XpertCalc module, you get a desktop calculator that understands your polymer chemistry definitions as defined in XpertDef. The calculator allows any kind of chemical reaction and is infinitely programmable. Any calculation is recorded in a logbook that is exportable to put in the lab book. 3. With the XpertEdit module, you get a sophisticated polymer sequence editor and a chemical center where a huge number of simulations may be performed. Anything mass-related is virtually feasible in XpertEdit. 4. With the XpertMiner module, you will get (it is being implemented) a data mining center. You'll be able to drag and drop data from mass spectra (in the form of m/z lists) and data from the simulations performed in the XpertEdit module. Once there, all the data will be available for comparison and arbitrary calculations like, say: "this list of m/z values should have the following mass increase applied" or "this list of m/z values should have this reaction applied: -H+Na".
The possibilities should be infinite.

Mathematica:- Data analysis Mathematica 8. Almost any area of science requires computational analysis, and as Mathematica has evolved it has supported more and more areas of science. Chemistry is very well supported, with detailed physical, chemical, and molecular properties of more than 34,000 compounds. Simulate your chemical processes with ready-to-deploy, fully interactive models using a combination of powerful computation, statistics and optimization, instant interactivity, and built-in chemical data. One system, one integrated workflow. A number of demonstrations and case studies are available.

MathMagic:- Equation editor MathMagic is a WYSIWYG math editor with a graphical user interface, with support for MathML, LaTeX, MS Equation Editor, and more.

Maud:- Material Analysis Using Diffraction MAUD stands for Material Analysis Using Diffraction. It is a general diffraction/reflectivity analysis program mainly based on the Rietveld method.

MayaChemTools:- CompChem Perl Scripts MayaChemTools is a growing collection of Perl scripts, modules, and classes to support day-to-day computational discovery needs. The current release of MayaChemTools provides command line scripts for the following tasks: manipulation of SD, CSV/TSV, Sequence/Alignments, and PDB files; analysis of data in SD, CSV/TSV, and Sequence/Alignments files; information about data in SD, CSV/TSV, Sequence/Alignments, PDB, and fingerprints files; exporting data from Oracle and MySQL tables into text files; properties of periodic table elements, amino acids, and nucleic acids; elemental analysis; generation of fingerprints corresponding to path lengths, MACCS keys, extended connectivity, atom neighborhoods, topological atom pairs, topological atom torsions, topological pharmacophore atom pairs, and topological pharmacophore atom triplets; calculation of similarity matrices using a variety of similarity and distance coefficients.

McQSAR:- QSAR equations using the genetic function approximation paradigm McQSAR, a Multiconformational Quantitative Structure-Activity Relationship Engine Driven by Genetic Algorithms, is an extension of the traditional GA approach to deriving QSARs. McQSAR is able to use descriptors for multiple representations per compound, such as different conformers, tautomers, or protonation forms. Test runs show that the algorithm converges to a set of representations that describe the binding mode of the set of input molecules to a reasonable resolution, provided that suitable descriptors based on the three-dimensional structure are used. Mikko J. Vainio and Mark S. Johnson (2005) McQSAR: A Multiconformational Quantitative Structure-Activity Relationship Engine Driven by Genetic Algorithms. J. Chem. Inf. Model. 45, 1953-1961 DOI

Mercury:- Crystal structure viewer Mercury, updated to version 2.0 (Jan 2008), is now available for MacOSX from CCDC. Mercury offers a comprehensive range of tools for structure visualisation and the exploration of crystal packing. Its features include: Input of hit-lists from ConQuest, or other format files such as CIF, PDB, MOL2 and MOLfile. A full range of structure display styles, including displacement ellipsoids (please note that displacement ellipsoids can be displayed only for CIFs or SHELX res files which contain Uequiv or Uij values). The ability to measure and display distances, angles and torsion angles involving atoms, centroids and planes. The ability to create and display centroids, least-squares mean planes and Miller planes.
The ability to display unit cell axes, the contents of any number of unit cells in any direction, or a slice through a crystal in any direction. Location and display of intermolecular and/or intramolecular hydrogen bonds, short nonbonded contacts, and user-specified types of contacts. The ability to build and visualise a network of intermolecular contacts. The ability to show extra information about the structure on display, such as the chemical diagram (if available) and the atomic coordinates. The ability to calculate, display and save the powder diffraction pattern for the structure on view. The ability to save displays.

MestReNova:- NMR processing software MestReNova is the natural evolution of the popular application MestReC. In addition to all the functionality available in MestReC, MestReNova incorporates a wealth of additional features: • Full WYSIWYG with different zoom in/out levels • Powerful undo/redo mechanism • Powerful drawing tools with advanced text editing capabilities • Anti-aliasing for improved drawing quality • Cutting tool to exclude non-interesting regions from the spectrum • Automatic processing capabilities • Powerful scripting engine • Molecular viewer • Peak-to-atom assignment module • Prediction of 1H and 13C NMR from chemical structure • Simulation of spin systems with any number of spin particles • Automatic fitting of experimental to predicted spectrum

Millsian:- Molecular Modeling Millsian is a new approach to molecular modeling. According to Millsian theory, atoms and bonds are made up of discrete surfaces of negative charge, not probability-density clouds. This gives Millsian the ability to calculate and render the exact charge distribution profiles for molecules of any size and complexity. Further, electrons are localized in molecules to specific regions, i.e. functional groups, which act as building blocks, or independent units, in larger structures. Using two basic equations, Millsian has solved the important functional groups of chemistry, allowing molecules of arbitrary size and complexity to be modeled trivially and almost instantly on a personal computer. To learn about how Millsian translates the underlying theory into a molecular modeling product, please consult the papers: Millsian 2.0: A Molecular Modeling Software for Structures, Charge Distributions and Energetics of Biomolecules, W. Xie, R.L. Mills, W. Good, A. Makwana, B. Holverstott, N. Hogle - 08/18/09. Total Bond Energies of Exact Classical Solutions of Molecules Generated by Millsian 1.0 Compared to Those Computed Using Modern 3-21G and 6-31G* Basis Sets - R.L. Mills, B. Holverstott, W. Good, N. Hogle, A. Makwana - 07/23/09. The Nature of the Chemical Bond Revisited and an Alternative Maxwellian Approach - R.L. Mills, Physics Essays, Vol. 17, No. 3, September (2004), pp. 342-389. View accompanying spreadsheets.

mMass:- Open Source Mass Spectrometry Tool mMass is an open-source and multi-platform package of many tools for mass spectrometric data analysis, mainly in proteomics.

Mnova:- Spectroscopic and analytical software Mnova is a multipage, multivendor, multitechnique and multiplatform analytical chemistry software suite designed as a container for our NMR & MS plugins.

MOE:- Molecular modelling Chemical Computing Group have released MOE for MacOSX. The Molecular Operating Environment is a comprehensive piece of software providing extensible tools for molecular modelling, bioinformatics, and computer-aided molecular design, all built using the Scientific Vector Language (SVL).
This programming language allows the rapid construction of novel tools, many examples of which are available via SVL Exchange. Read a review of MOE I wrote for MacResearch.

MODELLER:- Protein modelling MODELLER is used for homology or comparative modeling of protein three-dimensional structures. The user provides an alignment of a sequence to be modeled with known related structures, and MODELLER automatically calculates a model containing all non-hydrogen atoms. MODELLER implements comparative protein structure modeling by satisfaction of spatial restraints and can perform many additional tasks, including de novo modeling of oligopeptides, optimization of various models of protein structure with respect to a flexibly defined objective function, multiple alignment of protein sequences and/or structures, clustering, searching of sequence databases, comparison of protein structures, etc.

ModelFree:- A program for optimizing "Lipari-Szabo model free" parameters to heteronuclear relaxation data.

MoFa:- Chemoinformatics MoFa is a program for finding frequent, discriminative molecular substructures in a set of molecules. The name MoFa is an acronym for Molecular Fragment Miner.

MOLCAS:- Quantum chemistry MOLCAS is quantum chemistry software developed by scientists to be used by scientists. It is not primarily a commercial product and it is not sold in order to produce a fortune for its owner (Lund University). The authors have tried in MOLCAS to assemble their collected experience and knowledge in computational quantum chemistry. MOLCAS is a research product and it is used as a platform by the Lund quantum chemistry group in their work to develop new and better computational tools in quantum chemistry.

Molconn:- QSAR Molconn is the standard program for generation of Molecular Connectivity, Shape, and Information Indices for Quantitative Structure-Activity Relationship (QSAR) analyses.

MOLDEN:- Molecular density display MOLDEN is a package for displaying molecular density. It is tuned to the ab initio packages GAMESS and GAUSSIAN.

Molecula Numerica:- Molecular Dynamics Molecula Numerica is software categorized as a molecular dynamics simulator. The software is designed to give priority to the visualization of the atoms/molecules, so you can see what happens in the atomic world in a real-time sense. Of course, the actual time scale of phenomena in the atomic world is extremely small; we mean that you can watch the computation results in real time. The software simulates not only translational motions but also rotational motions of atoms/molecules. Multi-atom molecules are treated as rigid bodies, so the software has a limitation: high-frequency bond vibrations are neglected. But time integration is made correspondingly fast by this simple model. To solve the rotational motion the simulator adopts quaternion-based rotational equations; for time stepping, a leap-frog scheme is adopted.

Molecule:- Molecular editor Molecule for Macintosh is a program for generating, editing and displaying molecular structures. A native Mac OSX version is underway.

Molegro Data Modeller:- Chemoinformatics data analysis Molegro Data Modeller offers a high-quality modelling tool based on state-of-the-art data mining techniques.
Highlights of Molegro Data Modeller:
- Regression: multiple linear regression, support vector machines, and neural networks
- Feature selection and cross-validation are simple to set up and use (using the built-in wizards)
- Principal Component Analysis (PCA)
- Visualization: histograms, 2D scatter plots, and 3D plots
- Clustering: K-means clustering and density-based clustering
- Built-in algebraic data transformation tool
- Outlier detection
- Sophisticated subset creation: create diverse subsets by sampling from n-dimensional grids

Molegro Viewer:- Molecule Viewer Molegro Molecular Viewer is a free cross-platform application for visualization of molecules and Molegro Virtual Docker results. Features at a glance: • Free! • Cross-platform: Windows, Linux, and Mac OS X are supported. • Share and view results from Molegro Virtual Docker docking runs. • Imports and exports PDB, SDF, Mol2, and MVDML files. • Built-in raytracer for high-quality images. • Automatic preparation of molecules. • Molecular surface and backbone visualization. • Labels, sequence viewer and biomolecule generator. • Cropping of molecules and clipping planes. • Structural protein alignment. • Support for KNIME workflows.

Molegro Virtual Docker:- Protein-ligand docking Molegro Virtual Docker is an integrated platform for predicting protein-ligand interactions. Molegro Virtual Docker handles all aspects of the docking process, from preparation of the molecules to determination of the potential binding sites of the target protein and prediction of the binding modes of the ligands. New features in version 5.5: * A new 'Energy Maps' tool provides volumetric visualization of protein force fields. This makes it possible to understand why a compound interacts with a given receptor, and may provide insights on how to improve the binding. * We also added a new execution mode in the Docking Wizard: 'Run Docking in Multiple Processes'. This makes it possible to run medium-sized jobs on a local machine, while utilizing multiple CPU cores and even multiple GPU graphics cards. For large jobs on multiple machines, Molegro Virtual Grid should still be used. * The ray-tracer has been improved to more closely match the live 3D view output. This makes it possible to create high-resolution renderings of the 3D view.

Molegro Virtual Grid:- Grid controller for docking Molegro Virtual Grid creates an infrastructure for distributing docking runs over multiple machines. By simply installing the MVG agent on a computer, its resources can be used transparently by the grid controller. Virtual Grid support is built into Molegro Virtual Docker: for instance, to dock a library of compounds against a receptor, simply set up a compound data source and select 'start job on Virtual Grid' in the Docking Wizard. Molegro Virtual Grid is multi-core aware and can be installed on any platform: Linux, Windows, and Mac. The machines in the grid do not need to run the same operating system.

Molecules:- A molecular viewer for the iPhone and iPod Touch Molecules is an application from Sunset Lake Software for the iPhone and iPod Touch that allows you to view three-dimensional renderings of molecules and manipulate them using your fingers. The molecules can be downloaded from the Protein Data Bank. You can rotate the molecules by moving your finger across the display, zoom in or out by using two-finger pinch gestures, or pan the molecule by moving two fingers across the screen at once. Molecules is free and its source code is available under the BSD license.
Molekel:- Molecular visualiser Molekel is a multiplatform molecular visualization program being developed at the Swiss National Supercomputing Centre (CSCS). Molekel was developed at the University of Geneva and CSCS/ETH Zurich in the early nineties, and is currently being rewritten under an open source GPL license.

Molinspiration:- Chemoinformatics Molinspiration specializes in the development of cheminformatics software in Java. Molinspiration tools are therefore platform independent and may be run on any PC, Mac, UNIX or LINUX machine. The software is distributed in the form of toolkits, which may be used as stand-alone computational engines, used to power web-based tools, or easily incorporated into larger in-house Java applications.

MOLMOL:- Molecule viewer MOLMOL is a molecule viewer ported to Darwin.

Moloc:- Molecular Design Software Suite Moloc, a molecular design software suite, includes: small molecule modeling, matching utilities, conformational analysis, peptide and protein modeling, pharmacophore modeling, similarity concepts and database mining, diversity analysis and similarity models, dynamics (trajectory generation and evaluation), X-ray facilities, and display features.

MolPro:- Ab initio calculations Molpro is a complete system of ab initio programs for molecular electronic structure calculations, designed and maintained by H.-J. Werner and P. J. Knowles, and containing contributions from a number of other authors. As distinct from other commonly used quantum chemistry packages, the emphasis is on highly accurate computations, with extensive treatment of the electron correlation problem through the multiconfiguration-reference CI, coupled cluster and associated methods. The recently developed explicitly correlated coupled-cluster methods yield CCSD(T) results with near basis set limit accuracy already with double-ζ or triple-ζ basis sets, thus reducing the computational effort for calculations of this quality by two orders of magnitude. Using local electron correlation methods, which significantly reduce the increase of the computational cost with molecular size, accurate ab initio calculations can be performed for much larger molecules than with most other programs.

MolView X:- Molecule viewer MolView X is the OSX version of MolView.

MolWeight:- iPhone app for calculating molecular weight MolWeight is a simple tool for scientists and students that allows calculation of the molecular weight and other key properties of peptides and oligonucleotides from their sequence. In addition, a calculator is provided for determining the molecular weight of any substance from its chemical formula.

MolWorks:- Chemoinformatics MolWorks is a platform for chemical information software written in Java.

MOPAC:- Semi-empirical quantum chemistry MOPAC2012, a practical quantum chemistry tool for modeling biological systems and co-crystals. MOPAC2012™ brings major improvements in the prediction of intermolecular interactions and hydrogen bonding. This significantly improves geometries and energies of proteins, crystals, co-crystals, metal clusters, inorganics and other condensed-phase systems.

Moscito:- Simulation software for molecular dynamics (MD) simulation Moscito is designed for condensed phase and gas phase MD simulations of molecular aggregates. Standard molecular mechanics force fields such as AMBER, OPLS, CHARMM and GROMOS can be employed.
Simulations can be carried out in different ensembles such as NVE, NVT or NPT using the weak coupling scheme. (Smooth Particle Mesh) Ewald summation is used for long-range electrostatic interactions. Moscito is quite fast on Intel/AMD architectures since some essential code has been written in assembler. A parallel version (MPI) of the MD code is part of the distribution. The distribution comes with a number of tools for setting up and analysing MD simulation runs.

MOSFLM:- CCD analysis MOSFLM is for processing image plate and CCD data.

MoSS:- Molecular Substructure Miner MoSS is mainly a program to find frequent molecular substructures and discriminative fragments in a database of molecule descriptions. It can be used in the context of drug discovery and synthesis prediction for the purpose of analyzing the outcome of screening tests. Given a database of graphs, MoSS finds all (closed) frequent substructures, that is, all substructures that appear with a user-specified minimum frequency in the database (and do not have super-structures that occur with the same frequency).

Motofit:- Refining X-ray data Motofit co-refines neutron and X-ray reflectometry data, using the Abeles matrix / Parratt recursion and least-squares fitting (genetic algorithm or Levenberg-Marquardt). It works in the IGOR Pro environment (TM Wavemetrics).

MPQC:- Parallel quantum chemistry MPQC is the Massively Parallel Quantum Chemistry Program. It computes properties of atoms and molecules from first principles using the time-independent Schrödinger equation. It runs on a wide range of architectures, ranging from individual workstations to symmetric multiprocessors to massively parallel computers. Its design is object oriented, using the C++ programming language, and it has been ported to G5/MacOSX.

MultiSeq 2.0:- A unified bioinformatics analysis environment MultiSeq is a unified bioinformatics analysis environment that allows one to organize, display, and analyze both sequence and structure data for proteins and nucleic acids. Special emphasis is placed on analyzing the data within the framework of evolutionary biology. MultiSeq is included with VMD. A paper describing MultiSeq has been published (Elijah Roberts, John Eargle, Dan Wright, and Zaida Luthey-Schulten. MultiSeq: Unifying sequence and structure data for evolutionary analysis. BMC Bioinformatics, 2006, 7:382.)

Multiwfn:- Wavefunction analysis Multiwfn is capable of plotting curve maps and plane maps, generating grid data and displaying isosurfaces, and performing topology analysis and basin analysis for the ELF (electron localization function), as well as ELF-pi, ELF-sigma and the localized orbital locator (LOL).
I am working through Griffiths' Introduction to Quantum Mechanics. In chapter 1, he attempts to impose a condition such that $$\frac{d}{dt}\int_{-\infty}^\infty\left|\psi(x,t)\right|^2dx=0$$ so that the normalization of a solution to the Schrödinger equation is independent of time. He derives that $$\frac{d}{dt}\int_{-\infty}^\infty\left|\psi(x,t)\right|^2dx=\frac{i\hbar}{2m}\left.\left(\psi^\star\frac{\partial\psi}{\partial x}-\frac{\partial\psi^\star}{\partial x}\psi\right)\right|_{-\infty}^\infty.$$ Griffiths then concludes that a wave function must satisfy $$\psi\rightarrow 0\qquad\textrm{as}\qquad x\rightarrow\pm\infty.$$ 1. Is this condition really enough? 2. For example, are there square-integrable functions $\psi$ such that $$\psi\rightarrow 0,\qquad\frac{\partial\psi^\star}{\partial x}\rightarrow\infty\qquad\textrm{as}\qquad x\rightarrow\pm\infty?$$ 3. What would be a necessary condition to impose on $\psi$ to ensure that the above quantity is zero? 4. The question "Must the derivative of the wave function at infinity be zero?" suggests that having a function with compact support is sufficient. Is this necessary?

Comment: I don't think you are describing Griffiths' argument correctly. He does not claim that $\psi \rightarrow 0$ as $x \rightarrow \infty$ as a consequence of the time independence of the normalization of $\psi$. He claims $\psi \rightarrow 0$ as a consequence of $\psi$ being normalizable. However, he then has a horrible footnote warning about good mathematicians with pathological counterexamples and saying that in physics the wave function always goes to zero at infinity. Except of course he later uses a formalism for scattering problems where $\psi(x,t)$ behaves as $e^{ikx}$ at infinity. ;) – Jeff Harvey, Dec 24 '12

Accepted answer: 1. No, this is not enough. You were given counterexamples on the web site you mention in 4. 2. Yes, there are such functions, for example $(\sin x^3)/x$. 3. It is hard to tell what is necessary, besides the trivial condition $\psi\psi'\to 0$. 4. Compact support is sufficient but not necessary. 5. You wrote the formula incorrectly: your RHS is $0$. 6. Physicists frequently do not state their conditions precisely; you should accept this when you read physics literature.

Comment: @5. I corrected the typo. – james, Dec 24 '12
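As a quick check of the counterexample in item 2 (my own verification, not part of the original thread): for $\psi(x) = \sin(x^3)/x$ one has $|\psi| \le 1/|x| \to 0$ and $\int |\psi|^2 dx < \infty$, yet

$$\psi'(x) = 3x\cos(x^3) - \frac{\sin(x^3)}{x^2}$$

is unbounded as $x \to \pm\infty$. Moreover,

$$\psi\psi' = 3\sin(x^3)\cos(x^3) - \frac{\sin^2(x^3)}{x^3}$$

keeps oscillating with order-one amplitude, so the boundary term in Griffiths' derivation need not vanish even though $\psi \to 0$.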
string theory FAQ

This page is to provide non-technical or maybe semi-technical discussion of the nature and role of the theory of fundamental physics known as string theory. For more technical details and further pointers see at string theory. What is string theory? What is called perturbative string theory is a variant of perturbation theory in quantum field theory (QFT). It is a definition of an "S-matrix" of all scattering amplitudes of quantum objects which is similar to that obtained from quantum field theory, but crucially different. One way that the S-matrix in quantum field theory is defined in perturbative field theory is – in worldline formalism – as a formal power series summing over labelled graphs (called Feynman diagrams) of the correlators/n-point functions of a 1-dimensional QFT defined on these graphs (the worldline theory of the particles/virtual particles whose scattering amplitudes are to be computed). As a (natural) variant of this, the string perturbation series is defined to be a similar series, but instead of over 1-dimensional graphs now over 2-dimensional surfaces ("worldsheets"), and instead of summing correlators of a 1d QFT, summing correlators of a 2-dimensional QFT, specifically a 2d CFT. The premise of perturbative string theory as a theory about the observable world is that fundamental scattering processes such as observed in particle accelerator experiments, which are to good approximation described by the Feynman perturbation series (the S-matrix) of the standard model of particle physics, are more accurately described by such a string perturbation series. More conceptually, the premise is that 1. the standard model of particle physics and gravity is just an effective field theory (this is generally expected, independently of string theory) 2. that a string perturbation series can provide its UV-completion – the string scattering amplitudes/S-matrix are finite at each loop, hence are already renormalizations of the underlying effective field theory amplitudes (the higher massive string oscillations serve as natural counterterms to the massless interactions of the effective low energy theory). While the string perturbation series is a well-defined expression analogous to the Feynman perturbation series, by itself it lacks a conceptual property of the latter: the Feynman perturbation series is known, in principle, to be the approximation to something, namely to the corresponding complete hence non-perturbative quantum field theory. The idea is that the string perturbation series is similarly the approximation to something, to something which would then be called non-perturbative string theory, but that something has not been identified. This situation is analogous to the following simple setup: the theory of smooth functions on the real line can be approximated by the theory of Taylor series. Now the notion of Taylor series has some variants, say to the theory of species. But then the question arises: what is to species as smooth functions are to Taylor series? There are a host of educated guesses of what non-perturbative string theory might be, if anything, but it remains unknown. At some point the term M-theory had been established for whatever that non-perturbative theory is, but even though it already has a name, it still remains unknown.
(Or rather: its full incarnation remains unknown. What is well defined is 11-dimensional supergravity with some M-brane effects and gauge enhancement included, and that is what is presently being studied under the name "M-theory", see for instance at M-theory on G2-manifolds; and see also at F-theory.) Therefore if the qualification "perturbative"/"non-perturbative" is suppressed, then the term "string theory" is quite ambiguous and has frequently led to misunderstanding. Perturbative string theory is a well defined and formally suggestive variant of established perturbation theory in QFT. Non-perturbative string theory on the other hand is a hypothetical refinement of this perturbative theory of which there are maybe some hints, but which by and large remains mysterious, if it exists at all.

Then why not consider perturbative $p$-brane scattering for any $p$? The above motivation of perturbative string theory as the evident result of replacing the definition of an S-matrix perturbation theory, via 1-dimensional Feynman diagrams encoding worldlines of particles, by 2-dimensional diagrams (Riemann surfaces) encoding worldsheets of strings, raises the evident question: why stop at strings? Why not consider an S-matrix built as the sum of the correlators of a worldvolume field theory for each $(p+1)$-dimensional manifold, encoding the propagation of a membrane for $p = 2$, and generally of (what is called) a p-brane? To answer this, it is again crucial to distinguish between the perturbation theory and the non-perturbative theory. On the one hand, study of the string perturbation theory shows that strings do indeed interact with and give rise to $p$-branes for many different values of $p$. But the dynamics of all these higher dimensional branes itself seems to be intrinsically non-perturbative. What does not seem to exist is a sensible perturbation series for $p$-brane scattering with $p \gt 1$. The reason is that it is hard and seems impossible to make sense of this. There are two technical problems: 1. for $p \gt 1$ the standard worldvolume action functionals (Nambu-Goto action) are not renormalizable; 2. for $p \gt 1$ the moduli spaces of $(p+1)$-manifolds are not controllable. So for $p \gt 1$ one a) does not know how to define the "Feynman amplitudes" and b) even if one did, one does not know against what to integrate them. Each of these two problems in itself makes a $p$-brane perturbation theory for $p \gt 1$ hard to come by. That is, incidentally, the very reason for the term "M-theory". First there had been the observation that the super 1-brane in 10d target spacetimes is accompanied by a 2-brane in 11d target spacetime, now called the M2-brane (for the history see Mike Duff, The World in Eleven Dimensions). This suggested the evident idea that there ought to be a perturbation theory for 2-branes – called membranes – hence that there ought to be "membrane theory" in direct analogy with "string theory". But the above two problems make a direct such analogy unlikely. Nevertheless, since there might be a less obvious, more sophisticated kind of analogy, Edward Witten proposed to say "M-theory" as an abbreviation, not committing himself to what exactly might really be going on, and leaving open for the future whether "M" is for "membrane" or for something else: M stands for magic, mystery, or membrane, according to taste (Witten 95).

What are the equations of string theory? All local field theories in physics are prominently embodied by key equations, their equations of motion.
That is, incidentally, the very reason for the term "M-theory". First there had been the observation that the super 1-brane in 10d target spacetimes is accompanied by a 2-brane in 11d target spacetime, now called the M2-brane (for the history see Mike Duff, The World in Eleven Dimensions). This suggested the evident idea that there ought to be a perturbation theory for 2-branes – called membranes – hence that there ought to be a "membrane theory" in direct analogy with "string theory". But the above two problems make a direct such analogy unlikely. Nevertheless, since there might be a less obvious, more sophisticated kind of analogy, Edward Witten proposed to say "M-theory" as an abbreviation, so as not to commit himself to what exactly might really be going on, leaving open for the future whether "M" is for "membrane" or for something else:

M stands for magic, mystery, or membrane, according to taste (Witten 95)

What are the equations of string theory?

All local field theories in physics are prominently embodied by key equations, their equations of motion. For instance classical gravity (general relativity) is essentially the theory of Einstein's equations, quantum mechanics is governed by the Schrödinger equation, and so forth.

But perturbative string theory is not a local field theory. Instead it is an S-matrix theory (see What is string theory?). Therefore, instead of being given by an equation that picks out the physical trajectories, it is given by a formula for how to compute scattering amplitudes.

That formula is the string perturbation series: it says that the probability amplitude for $n_{in}$ asymptotic states of strings coming in (into a particle collider experiment, say), scattering, and $n_{out}$ other asymptotic string states emerging (and hitting a detector, say) is a sum over all Riemann surfaces with $(n_{in}, n_{out})$-punctures of the n-point functions of the given 2d CFT that defines the scattering vacuum.

More in detail, a string background is equivalently a choice of 2d SCFT of central charge 15 (a "2-spectral triple"), and in terms of this the formula for the S-matrix element/scattering amplitude for a bunch of asymptotic string states $\psi^1_{in}, \cdots, \psi^{n_{in}}_{in}$ coming in, and a bunch of states $\psi^1_{out}, \cdots, \psi^{n_{out}}_{out}$ coming out, is schematically of the form

\[
  S_{\psi^1_{in}, \cdots, \psi^{n_{in}}_{in},\; \psi^1_{out}, \cdots, \psi^{n_{out}}_{out}}
  \;=\;
  \sum_{g \in \mathbb{N}} \lambda^g
  \underset{\substack{\text{moduli space of} \\ (n_{in}, n_{out})\text{-punctured} \\ \text{super Riemann surfaces } \Sigma^{n_{in}, n_{out}}_g \\ \text{of genus } g}}{\int}
  \Big( \text{SCFT correlator over } \Sigma \text{ of the states } \psi^1_{in}, \cdots, \psi^{n_{in}}_{in}, \psi^1_{out}, \cdots, \psi^{n_{out}}_{out} \Big)
\]

expressing the S-matrix element (scattering amplitude) shown on the left as a formal power series in the string coupling constant $\lambda$, with coefficients the integrals over the moduli space of super Riemann surfaces of the worldsheet correlators ($n$-point functions) for the given incoming and outgoing string states.

With more technical details filled in, this formula reads as follows: for the bosonic string, as found in Polchinski 01, volume 1, equation (5.3.9); and for the superstring, as found in Polchinski 01, volume 2, equation (12.5.24).

This is the equation that defines perturbative string theory. And it is of just the same form as the Feynman diagram perturbation series in local quantum field theory, the only difference being that the latter is more complicated: there one has to sum over Feynman diagrams with labellings for all intermediate particles (virtual particles) and with some arbitrary "cutoff" to make the integrals well defined, whereas here one simply sums over all super Riemann surfaces. The different intermediate virtual particles as well as the renormalization counterterms are all taken care of by the higher string modes, encoded in the worldsheet CFT correlators.

There was a time in the 1960s when quantum field theorists around Geoffrey Chew proposed that precisely such formulas for S-matrix elements should be exactly what defines a quantum field theory, this and nothing else.
The idea was to do away with an explicit concept of spacetime and local interactions, and instead declare that all there is to be said about physics is what is seen by particles that probe the physics by scattering through it. This is an intrinsically quantum approach, where there need not be any classical action functional defined in terms of spacetime geometry. Instead, all there is is a formula for the outcome of scattering experiments.

Historically, this radical perspective fell out of fashion for a while with the success of QCD and the quark model in its formulation as a local field theory coming from an action functional: Yang-Mills theory. But fashions come and go, and the original idea of Geoffrey Chew and the S-matrix approach continues to make sense in itself, and it is this form of a physical theory that perturbative string theory is an example of.

Ironically, more recently the S-matrix perspective has also become fashionable again in Yang-Mills theory itself, with people noticing that scattering amplitudes, at least in super Yang-Mills theory, have good properties that are essentially invisible when expressing them as vast sums of Feynman diagram contributions as obtained from the action functional. For more on this see at amplituhedron.

On the other hand, there is also an analog of a second-quantized field-theory-with-equations for string scattering: this is called string field theory, and this again is given by equations of motion. For instance the equations of motion of closed string field theory are of the form

\[
  Q \Psi + \tfrac{1}{2}\, \Psi \star \Psi + \tfrac{1}{6}\, \Psi \star \Psi \star \Psi + \cdots \;=\; 0 \,,
\]

where $\Psi$ is the string field, $Q$ is the BRST operator and $\star$ is the string field star product. (For the bosonic string this is due to Zwiebach 92, equation (4.46); for the superstring this is in Sen 15, equation (2.22).)

The string field $\Psi$ has infinitely many components, one for each excitation mode of the string. Its lowest excitations are the modes that correspond to massless fundamental particles, such as the graviton. Expanding the equations of motion of string field theory in mode expansions ("level expansion") does reproduce the equations of motion of these fields, as a perturbation series around a background solution and together with higher curvature corrections.

Why is string theory controversial?

As a theory of the observable universe that is supposed to be checked by experiment, and which predicts that all fundamental particles of the standard model of particle physics are secretly, if one probes at high enough energy, excitations of superstrings, string theory is an unproven hypothesis.

Current particle accelerator technology (notably the LHC) is about 15 orders of magnitude (hence far, far) away from the energy scale at which these strings would manifest themselves directly (at least for many models in string phenomenology).

However, in principle and possibly in practice there are indirect effects of string theory that would be probeable with current experiments. Notably, one standard scenario is that string theory is realized in a Kaluza-Klein compactification model, and if so, then the properties of the fiber space (traditionally but not necessarily taken to be a Calabi-Yau manifold) on which the KK-reduction takes place determine the species and masses and interactions of the fundamental particles that we do see in experiments.
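Schematically, the Kaluza-Klein ansatz underlying this scenario splits the higher dimensional metric as (a sketch, suppressing warp factors, gauge fields and moduli dependence):

\[
  ds^2_{10} \;=\; g_{\mu\nu}(x)\, dx^\mu dx^\nu \;+\; g_{mn}(y)\, dy^m dy^n \,,
\]

where $x$ ranges over the 4 macroscopic dimensions and $y$ over the compact 6-dimensional fiber space. Expanding all higher dimensional fields in harmonics on the fiber produces towers of 4-dimensional fields whose spectrum – and hence the apparent particle content at low energy – is determined by the geometry of the fiber.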
Since the standard model of particle physics is pretty baroque in its field content, the hope here would be that a choice of fiber space could explain, for instance, why there are three generations of particles, if not even explain their couplings and masses.

For decades there has been the more or less implicit idea that the number of choices of fiber spaces is small. That would mean – or would have meant – that string theory can make predictions not just as quantum field theory does, namely about particle scattering given the nature of the particles, but even about that nature itself, hence that it could, in the extreme case, for instance predict the masses of the elementary particles.

Notice that if so, this would be quite remarkable. For all we know, the interaction terms and masses of the fundamental particles could be just as coincidental as the number of planets in our solar system and their distance from the sun (both of which were once speculated to follow from first principles of mathematics, which of course they do not).

While there was never any real indication that such a prediction would be possible, later the idea became widespread that it is indeed rather unlikely. If so, then we are back to the first point, namely 15 orders of magnitude and hence impractically far away from testing string theory directly.

Then there are two possible standpoints, and they account for the controversy:

1. If you want to learn right now about the fine detail of the standard model of particle physics and just that, for instance if you are a genuine particle physicist, then string theory may give you exactly no useful information whatsoever, and all time spent on it is plain wasted. The frustration over the fact that there has been the claim that string theory may directly help with GUT building and other standard model physics, and that none of this turns out to work, is responsible for much of the criticism. And, by all accounts, rightly so.

2. If however you feel more generally theoretically inclined, say because you are a theoretical physicist wondering about the possibilities of quantum field theory in general, or if you are even a mathematician, happy to handle theories that are not supposed to have direct relation to experiment, then the situation may be very different. String theory might be right as an assumption about the fundamental nature of the observable universe, and might at the same time still be entirely useless for experimentally based particle physics in the present age. In that case, if you are interested in experimental particle physics you should not expect much help in your lifetime. But if you are a theoretical physicist or a mathematician interested in broader conceptual questions, then it may be unwise to ignore the theory.

Does string theory make predictions? How?

String theory makes predictions much as quantum field theory does: the theory predicts observables once a model in the theory has been chosen. Most every theory and model in physics has parameters and makes predictions only after sufficiently many parameters have been fixed (i.e., specified) by measuring them in experiment.

For instance, Newton's theory of gravity says that the gravitational force of a pointlike mass is proportional to the inverse square of the distance from that mass. This is the theory; the proportionality factor (now called Newton's constant) is the free parameter that has to be fixed by experiment.
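In formulas:

\[
  F \;=\; G\, \frac{m_1\, m_2}{r^2} \,,
\]

where the inverse-square form of the force law is what the theory predicts, while the value of the constant $G$ is not predicted but measured (historically in Cavendish-type experiments).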
In string theory, as in any other theory of physics, it is the same general principle, only that the theory is much richer.

There are lots of models in string theory that make very detailed statements about the resulting physics. Moreover, many of these have good general agreement with presently observed data. One speaks of "semi-realistic models"; see at string phenomenology the discussion at Semi-realistic models in string theory.

(The division line between "semi-realistic" and "realistic" here runs through very technical territory involving the fine details of string mathematics and standard model physics, and hence becomes a bit subtle. This is the reason why one sees some authors worrying about not finding a single "realistic" model while others are worried about already having found too many of them to feel comfortable with…)

The remaining problem is the following, and it is not specific to string theory but faced by any theory that provides a UV-completion of the standard model plus gravity (quantum gravity): after parameters have been fixed this way by finding a model that reproduces the standard model reasonably accurately, all the remaining properties of the model, hence the predictions of the model, tend to be at high energies ("Planck scale") and hence not within reach of present experiments such as the LHC.

This is a very general aspect of present particle physics: while theoretically it is fairly clear that the standard model plus gravity must have a UV-completion by something, at presently available experimental energies the standard model works rather perfectly.

While this is a general fact of particle physics and model building, not special to string theory, a sociological aspect of string theory is that in the 1980s many theoreticians started to believe and claim that string theory would be better than ordinary model building, in that, when fully understood, it would admit only very few models, such that even the parameters measured in the standard model would be predicted by the theory in terms of some more basic parameters. More recently this hope has vanished, and much of what should be an absolute assessment of string theory is instead a perception shaped by the negative gradient of this hope curve.

But one technical specialty of string theory over QFT model building exists in either case: what in the standard model are external parameters put into a QFT Lagrangian are in string theory models all dynamical fields of the theory, called moduli.

The simple familiar example to compare this to is the cosmological constant in Einstein gravity: one can either consider it as an external parameter, a constant real number coefficient in front of the volume form summand of the Einstein-Hilbert Lagrangian, or else one can consider Einstein gravity coupled to a scalar field with some potential and consider those solutions to the equations of motion where this field is almost constant to good approximation. In such a case the field itself serves as an effective cosmological constant. (This is the mechanism behind the theory of cosmic inflation, see there for more details.)

Hence the theory has one less external parameter (the "cosmological constant" is not fundamentally really a constant), which has instead been replaced by a field.
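In formulas, the contrast is between the two Lagrangians (a schematic sketch; signs and normalizations depend on conventions):

\[
  S \;=\; \frac{1}{2\kappa^2} \int d^4 x\, \sqrt{-g}\, \big( R - 2 \Lambda \big)
  \qquad \text{versus} \qquad
  S \;=\; \int d^4 x\, \sqrt{-g}\, \Big( \frac{R}{2\kappa^2} - \tfrac{1}{2} (\partial \phi)^2 - V(\phi) \Big) \,,
\]

where in the first case $\Lambda$ is an external constant, while in the second case a scalar field $\phi$ sitting near a value $\phi_0$ with $V(\phi_0) \neq 0$ mimics an effective cosmological constant $\Lambda_{eff} \simeq \kappa^2 V(\phi_0)$.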
In string theory this happens with all the parameters. (Except for one single constant: the string coupling constant. From the perspective of "M-theory" even that disappears. See at string theory – scales.)

There is no external choice of parameter, but there remains the choice of studying "solutions to the equations of motion" (which in string theory means: choices of 2d CFTs) which might model observed physics. That is why in string theory, instead of adjusting parameters, one searches for solutions. Since these are also called "vacua", one searches for vacua. The infamous term "landscape of string theory vacua" refers to attempts to understand the space of possibilities here more globally. But very little is actually known to date.

In summary: models built in string theory make predictions just as any other model in theoretical physics does. The situation is actually better in that in principle the choice of model in string theory is constrained by the theory. While the standard model of particle physics is just written to paper, when one reproduces approximations to it in string theory one has to check that various consistency conditions are satisfied, which guarantee that the parameters assumed indeed do arise as configurations of the fields in the theory.

Notice that there is no a priori reason in quantum field theory that, of all of the huge space of possible field theories (all local Lagrangians), the one that describes our world is a Yang-Mills gauge theory coupled to gravity – an "Einstein-Yang-Mills-Dirac-Higgs theory". This is not a prediction of QFT but a theoretical assumption based on experimental observation, and only after this assumption is made, and only after a few dozen further parameters are fixed by hand, does the standard model of particle physics start to make any predictions at all.

On the other hand, string theory does predict this general form of action functionals: the effective field theories obtained in string theory are generically gauge theories coupled to gravity. This fact is what originally led to the strong interest in string theory.

Nevertheless, in spite of this higher predictivity of string theory in principle, in practice it has not yet led to much insight that would actually affect particle physics model building.

Aside: How do physical theories generally make predictions, anyway?

It may be good to compare to established and (essentially) uncontroversial theories. A good example to think of, in particular in comparison to string theory, is the theory of Einstein gravity, also known as the theory of general relativity.

Today there is a standard model of cosmology, which is a model built in the theory of general relativity that has been experimentally tested to high accuracy (see also at cosmic inflation – Experimental evidence). This model cannot be predicted by the theory of Einstein gravity. It is one of many possible solutions, one point in a vastly infinite-dimensional space of solutions of general relativity.

It is in turn based on the class of models known as FRW models, which enforce strong constraints on the parameters of models in general relativity. In fact, in its plain vanilla version, the FRW model describes the whole universe by just three numbers: the density and pressure of the homogeneous matter filling the universe, and the value of the cosmological constant. (See the Friedmann equation below.)

This model with this choice of constrained parameters is entirely a man-made assumption, not predicted in any way from theory. But the point is that once this model has been postulated, then one can use the theory to see what it predicts about the remaining parameters, such as here the fluctuations of the cosmic microwave background radiation in a universe described by this model.
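For definiteness, the dynamics of such a plain FRW model is governed by the Friedmann equation (in one common convention):

\[
  \left( \frac{\dot a}{a} \right)^2 \;=\; \frac{8 \pi G}{3}\, \rho \;-\; \frac{k}{a^2} \;+\; \frac{\Lambda}{3} \,,
\]

with scale factor $a(t)$, energy density $\rho$ (the pressure $p$ enters through the continuity equation $\dot\rho = - 3 \tfrac{\dot a}{a} (\rho + p)$), spatial curvature $k$ and cosmological constant $\Lambda$.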
This is the general pattern of how predictions are made in physical theories:

1. posit a theory;

2. specify some of the parameters of the theory – "build a model";

3. check what the theory demands for the remaining parameters – these are the predictions of that model in that theory;

4. if experiment disagrees with the predictions, then

   1. see if this can be repaired by modifying the model a little, hence by varying some of the originally fixed parameters;

   2. if this is impossible or gets to be too complicated, to the degree of feeling "unnatural", then either look for an entirely different model or, if all fails, modify or eventually abandon the theory and find a new one.

Then repeat.

That this model building process is a matter of trial and error is well witnessed, for instance, by the early history of cosmology. Right after Einstein had found general relativity and the Einstein equations, he tried a cosmological model with non-vanishing cosmological constant, because back then he thought that would fit observation. Some later observations suggested that there is no cosmological constant, and the constant was set to zero again. Einstein, referring to his original cosmological model, spoke of his "biggest blunder". But a few decades later, new observations showed that the cosmological constant is small but non-vanishing after all, and these days we put it back into our "standard model".

Clearly the theory does not help us predict this, otherwise there would be no reason for this repeated process of trial and error. On the other hand, once we do fix these global parameters, the theory does predict plenty of further observations in the model, which can be and have been checked.

But curiously, the modern version of the standard model of cosmology, with its cosmological constant ("dark energy") and dark matter component, asserts that the vast majority of all constituents of the observable universe are in fact unobservable, except for their gravitational effect. This means: the parameters of the model can be made to fit observation, but only with the rather noteworthy consequence that the model now models something which to a large extent is not what it set out to model.

At this point there are two options. Either one can take it that the standard model of cosmology predicts the existence of vast amounts of dark matter and dark energy, because only with this assumption is the model consistent with observations (but with this assumption it is very well consistent with observation). On the other hand, one might feel that with all this dark matter assumed, the model has become too contrived to be "natural", and that instead this is a hint that the theory of Einstein gravity is not quite right. This is sociologically currently a minority position, but it is certainly intellectually a possible standpoint; it goes by the name MOND, see there for more discussion.

Similar descriptions can be given of the standard model of particle physics. This, too, makes predictions that have been tested to fantastic accuracy. But it does so only after lots and lots of parameters have been chosen. To start with, there is nothing in quantum field theory that demands that the theory of fundamental particle physics has to be a Yang-Mills gauge theory coupled to Einstein gravity, an Einstein-Yang-Mills theory. Then there is no specific reason why the gauge group of that theory is what is observed (though there are constraints on the possible gauge groups, from quantum anomaly cancellation).
There is no reason in general QFT that matter appears in the fundamental particle representations that it does (electrons, quarks, neutrinos), that it appears in three "generations" of similar structure but different mass, and no reason for all of the coupling constants in the model, such as the Yukawa couplings. All these parameters have been and are being adjusted by hand, whenever observations suggest so. The latest change was the introduction of mass terms (Higgs couplings) for neutrinos and then the confirmation that there is indeed a Higgs boson sector. None of this could be derived from first principles of quantum field theory. Even for the Higgs mechanism there are plenty of potential theoretical alternatives (e.g. technicolor). All of this is part of the model building and not predicted by quantum field theory.

This is why one speaks of the standard model of particle physics and of the standard model of cosmology, and not of a "standard theory". Because the theory is fixed in both cases: Yang-Mills gauge theory and Einstein gravity. What one needs, however, in order to make any predictions in these theories is a choice of model.

In string theory it is just like this. One can write down models that look like the standard model of particle physics coupled to gravity at low energy, and then see what the theory predicts as further observations. The difference here is that there are many more constraints on models in string theory (string theory vacua) than there are on models in quantum field theory (for which one can write down essentially any old local Lagrangian).

On the other hand, a model of string theory that fits the standard model of particle physics reasonably well (such as for instance the G2-MSSM) typically predicts new phenomena only at energy scales not testable by present experiment. This is however necessarily so for any theory of quantum gravity, and hence a fact that one has to live with if one is going to be interested in "beyond the standard model" model building in the first place. All other proposals for "beyond the standard model" physics share this problem. Necessarily.

However, what has often been considered attractive in string theory is that despite this, string theory does reasonably constrain the vast possibility space of models in quantum field theory. While there is no reason in QFT that our world is described by Yang-Mills gauge theory and Einstein gravity, this is precisely what models in string theory generically predict. This may not seem like a practically useful prediction, given that we have already assumed this since 1915 and the 1960s, respectively, by building it into our "standard models"; but from the standpoint of conceptual understanding over pure empirical measurement it might be regarded as a suggestive aspect that in string theory the existence of Einstein-Yang-Mills theory can be derived from a more fundamental theory, within the vast space of other possible local Lagrangians.

Of course the latter can also be said for other proposals, such as the spectral action principle. What string theory offers on top of such other proposals is that it derives Einstein-Yang-Mills theory and provides a UV-completion for it. The trouble is just that, by the nature of UV-completions, these concern the behaviour at higher energy scales, and in this case at higher energy scales than can conceivably be measured in the near future. This is a general problem of quantum gravity phenomenology and as such not specific to string theory.
However, string theory has potential effects on at least the conceptual understanding of quantum field theory beyond direct measurement of high energy effects. See at string theory results applied elsewhere for more on this.

Is string theory testable?

This question overlaps with the question How does string theory make predictions?, but it maybe deserves its own answer.

In both QFT as well as string theory one "builds models" within the general theory and tests these, as far as their predictions are about available experiments. Loads of string-theoretic models and non-string-theoretic models were excluded by LHC data in 2012, when the experiment ruled out more and more of the possible parameter space for global spacetime supersymmetry somewhere at the electroweak symmetry breaking scale (see also Does string theory predict supersymmetry? below). So all this was testable, has been tested, and turned out to be wrong.

So model building in string theory is much as in QFT. When a model is ruled out, it does not necessarily mean that string theory is ruled out or that QFT is ruled out, but it means that the possibilities for adjusting the free parameters in the theory are being reduced.

To see that this is a common scientific process, it may help to look at some important historical examples.

For instance, shortly after Einstein proposed the theory of gravity now named after him, he proposed a cosmological model within that theory. Since he thought back then that the observable universe was static, he chose a free parameter of his theory, the cosmological constant, to take just such a value that the resulting equations of motion fitted his expectations. But just shortly afterwards it became clear that this is wrong, that instead the observable universe is expanding. Was Einstein's theory wrong? No, his model within the theory was wrong. (He famously called it his "biggest blunder", but it is common for models to be ruled out. It is fundamentally a trial and error process, after all.)

The model was discarded and quickly a new model was "built", the now standard FRW model, still within Einstein's theory. That has nicely fitted all data since, with slight adjustments, and so we are fond of it and call it the standard model of cosmology.

This kind of model-building process happens within string theory, too: by iteration the models are being tested, discarded or adjusted, tested again, etc.

Actually, string theory model building is more constrained than plain QFT model building, due to the fact that at the heart of it there are no free parameters, since all parameters are instead fields of the theory, as mentioned before in How does string theory make predictions?. So ultimately it gives more, not fewer, reasons to discard a model already on theoretical grounds.

There are QFT models which cannot be realized in string theory, because the constraints on the parameters are stronger in string theory: string theory is not just any old effective field theory that can be further adjusted as the energy scale is increased, but is already a UV-completion. It either makes sense at all energies, or not at all.

(A practical problem here is that in computations usually lots of approximations are introduced which are not always guaranteed to be viable. For instance, nobody really has a good theoretical handle on whether all the points in the alleged landscape of string theory vacua really are solutions to the theory. They have all been checked to be so only in some approximation.)
An example of how string theory is more constrained in its model building than effective QFT has kept the community busy since 1998: then it was observed experimentally that the cosmological constant of the observable universe these days is small but positive. But accommodating a positive cosmological constant into string-theoretic models is harder than a negative cosmological constant. So this observation tested a large region of string model building and found it to be wrong.

But as for the example of Einstein's "biggest blunder" above, even with all these models ruled out, the theory is still not ruled out (maybe one day with more observations it will be!). Instead, as in the historical example, the failure of the favorite models led to new theoretical activity in understanding the theory and its remaining possible models. All the talk about "metastable vacua" etc. in string theory originates in this experimental observation in 1998.

On the other hand, no other theoretical framework was equally tested by this astronomical observation. In all other existing theoretical frameworks you simply take a pen and change the sign of the cosmological constant by hand. The theory does not control it; it is a free parameter.

So it seems justifiable to say that string theory is actually more testable than other existing theoretical frameworks. Of course, to appreciate what this means one has to pay attention to what it means to test a theory by testing all its parameter space of models, as illustrated by the Einstein "biggest blunder" example.

Is string theory causal, given that it is not local on the string scale?

In an S-matrix theory such as perturbative string theory (see above at What is string theory?) the property of causality is embodied by the fact that the S-matrix has certain analyticity features. (Therefore the S-matrix approach to quantum field theory is often referred to as "the analytic S-matrix".)

Since, as opposed to a fundamental particle, the string is extended, at the string scale string theory is not given by a local field theory. This superficially seems to suggest that at such scales causality might also be violated in string theory. However, computation shows that the string scattering S-matrix comes out suitably analytic and causal (e.g. Martinec 95). A detailed analysis of how this comes about has been given in (Erler-Gross 04). They write:

Perhaps then it comes as a surprise that critical string theory produces an analytic S-matrix consistent with macroscopic causality. In absence of any other known theoretical mechanism which might explain this, despite appearances one is lead to believe that string interactions must be, in some sense, local.

We find that string theory avoids problems with nonlocality in a surprising way. In particular, we find that the Witten vertex is "local enough" to allow for a nonsingular description of the theory which is completely local along a single null direction.

unlike lightcone string field theory, it is clear that cubic string field theory at least has a local limit where all spacetime coordinates are taken to the midpoint. We investigate this limit with a careful choice of regulator and show that at any stage the theory is nonsingular but arbitrarily close to being local and manifestly causal. We believe that the existence of this limit, though singular, must account for the macroscopic causality of the string S-matrix.
Thus, string theory is local enough to avoid the inconsistencies of a theory which is acausal and nonlocal in time, but is nonlocal enough to make string theory different from quantum field theory.

Does string theory predict supersymmetry?

To understand this question and its answer, it is important to know that, in general, symmetries in physics (and in mathematics) come in a local and in a global flavor.

For instance the theory of gravity is a theory which has as a local symmetry the Lorentz group. But any model of the theory – a spacetime – may or may not have global Lorentz symmetry. In fact, the generic solution to the Einstein equations has no global Lorentz symmetry left. A global Lorentz (more precisely: Poincaré) symmetry of spacetime with all matter and force fields in it would mean that the world looks the same if we arbitrarily translate in some direction, or arbitrarily rotate in some plane. It would be rather bizarre to live in a spacetime with such a property!

(Mathematically the distinction is this: given a coset space $G/H$, the corresponding Klein geometry has global $G$-symmetry, while the corresponding Cartan geometries have local $G$-symmetry.)

Now supersymmetry refers to a super Lie group extension of the Lorentz group. Here the theory of supergravity has local supersymmetry just as Einstein gravity has local Lorentz symmetry, but the general model in supergravity has no global supersymmetry left. This is just as unlikely as, and directly analogous to, a spacetime having a global translation symmetry.

(Mathematically this is now the situation of super Cartan geometry.)
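Concretely, the extension in question is generated by odd ("fermionic") symmetry generators $Q$ whose characteristic bracket is, schematically (here in 4d two-component spinor notation; a standard textbook formula):

\[
  \{ Q_\alpha, \bar Q_{\dot\beta} \} \;=\; 2\, \sigma^\mu_{\alpha \dot\beta}\, P_\mu \,,
\]

saying that two supersymmetry transformations compose to a spacetime translation $P_\mu$; a global supersymmetry of a model is then an action of this super Poincaré algebra on its fields.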
Contrary to that, local supersymmetry is rather generic in low dimensions. For instance the worldline theory of any spinning particle – such as an electron – is locally supersymmetric. (See the references here.) So local supersymmetry has been experimentally verified for the observable universe ever since fermions were first observed to exist, which is since the Stern-Gerlach experiment.

Similarly, if one proceeds from the worldline formalism for spinning particles to the worldsheet theory of spinning strings, the most natural choice of Lagrangian for fermions on the worldsheet automatically satisfies local supersymmetry. This is how supersymmetry was found historically: people wrote down the obvious action functional for spinning strings, which was just intended to produce fermions, and then it was noticed that this exhibits a curious extra "super"-symmetry. Ever since, the spinning string is called the superstring.

The miracle that then happens is that while the second quantization of spinning particles (which is fermionic quantum field theory) does not necessarily itself exhibit local supersymmetry, the second quantization of spinning strings (which is string theory with fermions) does itself also exhibit local supersymmetry: the effective field theory induced by spinning strings is not just Einstein-Yang-Mills-Dirac theory, but is locally supersymmetric in that it is higher dimensional supergravity.

So: string theory implies that if there are fermions at all, then there is local supersymmetry, hence supergravity:

\[
  \text{strings} \;\;\&\;\; \text{fermions} \;\;\;\Rightarrow\;\;\; \text{supergravity}
\]

On the other hand, any model in string theory may or may not retain global supersymmetry at some energy scale, after spontaneous symmetry breaking. In fact the generic model has no reason to preserve any global supersymmetry at all, just as the generic solution of Einstein's equations does not preserve any Lorentz group symmetry.

The condition that a Kaluza-Klein compactification of 10-dimensional supergravity to 4d exhibits precisely one global supersymmetry is equivalent to the compactification space being a Calabi-Yau manifold. While this is a famous condition that has been extensively studied (see at supersymmetry and Calabi-Yau manifolds), nothing in the theory requires KK-compactification on Calabi-Yau manifolds. These are (or were) only considered because for phenomenological reasons it is (or was) expected that our observed world exhibits global low-energy supersymmetry. The role of local supersymmetry (supergravity), which superstring theory does predict, is quite different from that of low-energy global supersymmetry.

Notice that as soon as there is a fermion in the world, it follows that spacetime is described by supergeometry, since fermion fields are necessarily formalized by anticommuting functions in the Lagrangian defining any prequantum field theory containing them. Now supergeometry alone does not mean that there is an action of the super Poincaré Lie algebra on anything, which is what is called supersymmetry. On the other hand, in low worldvolume dimension this tends to be inevitable. For instance the worldline theory of any spinning particle (such as electrons) necessarily has worldline supersymmetry, see at spinning particle – Worldline supersymmetry and at spinning particle – Worldline supersymmetry – References. Similarly, the standard sigma-model worldsheet action functional of the spinning string was not originally intended to be supersymmetric, but just so happens to come out this way. The special phenomenon, as opposed to the superparticle, is that the worldsheet supersymmetry of the string implies that its second-quantized effective field theory is also locally supersymmetric, hence is a theory of supergravity.

How/why does string theory depend on "backgrounds"?

To answer this question one needs – as discussed at What is string theory? – to distinguish between perturbative string theory and non-perturbative string theory.

Perturbative string theory, being a variant of traditional perturbation theory in quantum field theory, is by construction a perturbation about a background, just as any perturbation series is. To stick with the (close) analogy to Taylor series mentioned before in What is string theory?: if the full object of study is a smooth function on the real line, then a "perturbative" approximation to this by a Taylor series involves a choice of point on the real line around which the Taylor series is developed. The series itself represents the original function restricted to the formal neighbourhood of that point.

This example also serves to illustrate in which sense a perturbation series "depends" on a choice of background: for a given smooth function and for two points on the real line that each lie within the convergence radius of the Taylor series of that function around the respective other point, the expansion does not depend on the "choice of background", in the sense that the value of one series evaluated at a given point equals the value of the other series evaluated at the same point. The only restriction to this statement is that some points may lie outside of the convergence radius.
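In formulas, for an analytic function $f$ and two expansion points $a$ and $b$ whose convergence disks overlap:

\[
  \sum_{n = 0}^{\infty} \frac{f^{(n)}(a)}{n!} (x - a)^n
  \;=\;
  f(x)
  \;=\;
  \sum_{n = 0}^{\infty} \frac{f^{(n)}(b)}{n!} (x - b)^n
\]

for all $x$ within both radii of convergence: the two "background choices" $a$ and $b$ yield different series but the same function.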
For QFT perturbation series we have essentially the same situation: the correlators of the theory are expressed as formal power series in the coupling constants of the theory and in Planck's constant, which are the formal approximations to the true non-perturbative correlators, expanded about zero coupling and vanishing Planck's constant. At that 0-point the theory is non-interacting and "classical", meaning that this is the point of a solution to the classical equations of motion of the non-interacting theory. Such a solution is also called a vacuum or a background of the quantum theory. Notably in theories containing gravity (such as string theory), such a background involves a solution to Einstein's equations, hence involves fixing a spacetime manifold on which all further perturbative quantum fields propagate.

By the above logic, while the specific perturbation series depends on the choice of this classical solution, hence on the choice of "background" or of vacuum, this dependency is not a property of the underlying non-perturbative theory but is a defining property of what it means to consider a perturbative approximation. (The subtlety being that for all QFTs of interest the radius of convergence of that formal series is necessarily 0, see the discussion at Isn't it fatal that the string perturbation series does not converge? below.)

By design, all this applies also to perturbative string theory. As mentioned before, there is the idea that perturbative string theory is indeed the perturbative approximation to an as-yet unknown non-perturbative string theory. To the extent that this is true, the dependence of the string perturbation series on the choice of "background" should be of the same superficial nature as it is for traditional perturbative QFT. But this remains a conjecture. Consistency arguments for this speculation have been given in (Witten xy). A theoretical framework for formalizing these questions is string field theory, in the context of which much of this has been formalized (…).

For more on what perturbative string theory around a given such background looks like, see also the question How is string theory related to the theory of gravity?

Did string theory provide any insight relevant in experimental particle physics?

Yes. One curious aspect of string theory is that, independently of its role as a source for models in particle physics, it provides connections in the space of all possible quantum field theories: lots of different quantum field theories (many of them highly unrealistic as phenomenology goes, but interesting for theoretical investigations) appear as different limits and special cases inside string theory, and their embedding into a single framework this way explains many unexpected relations between them.

One of these connections has led to recent progress simply in the computational tools for perturbation series. LHC physicists claimed that without the insight from string theory, the evaluation software used at the LHC could not have been precise enough to see the Higgs in the data. Details on this are linked to at string theory results applied elsewhere.

This is maybe the example most directly related to experimental physics, but there are various other relations ("dualities") between QFTs learned from or better understood with string theory. (Seiberg duality for instance; a long list will go here…)

Does string theory tell us anything about cosmology, such as the Big Bang or cosmic inflation?

No. Looking back at the answer to What is string theory?, we have that the only thing really defined is perturbative string theory – and quantum perturbation theory is clearly insufficient for the discussion of strongly coupled phenomena such as cosmological phenomena (as opposed to those tiny excitations studied in particle accelerators).
More precisely: given that perturbative string theory does describe classical gravitational backgrounds with small quantum fluctuations about them, and given that this is already all that traditional cosmology considers, certainly all of traditional cosmology can sensibly be modeled in string theory. But perturbative string theory cannot say much about the long-expected strong quantum corrections to gravitational effects in quantum gravity.

This hasn't stopped people from making lots of speculations, and there is something like a "field" called "string cosmology", in that you find authors writing about this, giving talks and lectures about it. But this is a bit like a soccer match with a pingpong ball. What would be needed here is an understanding of non-perturbative string theory.

On the other hand, in theories of supergravity, such as string theory, there are some observables that are independent of the coupling constant (called "protected"): the BPS states. If one can compute such observables in perturbation theory, and then has an idea of what these correspond to as the coupling constant is set to a finite value, then one can know the value of the corresponding observable in the non-perturbative theory without even knowing that non-perturbative theory itself.

This is the mechanism by which perturbative string theory does make statements about black hole entropy, where black holes by their very nature are strongly coupled gravitational configurations. More on how this works is at black holes in string theory.

Since the singularity involved in black holes is a similar kind of singularity as that involved in the big bang, one might think that some analogous method is useful in the latter case. But if so, it has not surfaced so far.

What is the relationship between string theory and quantum field theory?

Recall from some of the previous answers:

1. Perturbative string theory is defined to be the asymptotic perturbation series which is obtained by summing the correlators/n-point functions of a 2d superconformal field theory of central charge 15 over all genera and moduli of (punctured) super Riemann surfaces.

2. Perturbative quantum field theory is defined to be the asymptotic perturbation series which is obtained by applying the Feynman rules to a local Lagrangian – which equivalently, by the worldline formalism, means: obtained by summing the correlators/n-point functions of 1d field theories (of particles) over all loop orders of Feynman graphs.

So the two are different. But for any perturbation series one can ask if there is a (possibly non-renormalizable) local Lagrangian such that its Feynman rules reproduce the given perturbation series at sufficiently low energy. If so, one says this Lagrangian is the effective field theory of the theory defined by the original perturbation series (which, if renormalized, is conversely then a "UV-completion" of the given effective field theory).

Now one can ask which effective quantum field theories arise this way as approximations to string perturbation series. It turns out that only rather special ones do. For instance, those that arise all look like quantum anomaly-free Einstein-Yang-Mills-Dirac theory (consistent quantum gravity plus Yang-Mills gauge fields plus minimally coupled fermions).
Not like $\phi^4$ theory, not like the Ising model, etc. (Sometimes these days it is forgotten that QFT is much more general than the gauge theory plus gravity plus fermions that is seen in what is just the standard model of particle physics. QFT alone has no reason to single out gauge theories coupled to gravity and spinors in the vast space of all possible anomaly-free local Lagrangians.)

On the other hand, within the restricted area of Einstein-Yang-Mills-Dirac theories, it currently seems that by choosing suitable worldsheet 2d CFTs one can obtain a large portion of the possible flavors of these theories in the low-energy effective approximation. Lots of kinds of gauge groups, lots of kinds of particle content, lots of kinds of coupling constants. There are still constraints as to which such QFTs are effective QFTs of a string perturbation series, but they are not well understood. (Sometimes people forget what it takes to define a full 2d CFT. It is more than just conformal invariance and modular invariance, and even that is often checked only to low order in those "landscape" surveys.)

In any case, one can come up with heuristic arguments that exclude some Einstein-Yang-Mills-Dirac theories as possible candidates for low-energy effective quantum field theories approximating a string perturbation series. The space of them has been given a name (before really being understood, in good tradition…) and that name is, for better or worse, the "Swampland".

Isn't it fatal that the string perturbation series does not converge?

The string scattering amplitudes computed in string perturbation theory are thought to be term-wise (loop-wise, hence for each genus of the Riemann surface worldsheets) finite. Nevertheless, the sum over all these contributions diverges. Does this mean that perturbative string theory is unrealistic from the get-go?

No. On the contrary, the perturbation series of any interesting QFT is supposed to have radius of convergence equal to 0. Because the expansion parameter is the coupling constant, if the series had a finite radius of convergence there would also be negative values of the coupling for which the correlators converge. See at non-perturbative effect for more. (A standard toy example making this concrete is shown below.)

So the non-convergence of the string perturbation series is just as it should be in QFT: a perturbation series in quantum field theory is generally an asymptotic series, meaning that it diverges, but every finite truncation of it produces a sum that approximates the "actual" value (the actual correlation functions) in a controlled way (as explained at asymptotic expansion).

The important property of the string scattering amplitudes is rather that they (plausibly) come out termwise finite (proven so for bosonic string theory and the first orders of superstring theory), which means that the series is already renormalized: the higher string modes provide the natural counterterms for the renormalization (see also at How do strings model massive particles?).

So the convergence/divergence behaviour of string perturbation theory is of the same kind as, for instance, in the QCD that appears in the standard model of particle physics. For more on this phenomenon see at perturbation theory – divergence/convergence, and especially see also the references there and see at non-perturbative effect.
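The standard toy example (a zero-dimensional "path integral", found in many QFT textbooks) makes the argument explicit:

\[
  Z(g) \;=\; \int_{-\infty}^{\infty} e^{- x^2/2 \,-\, g\, x^4}\, dx
\]

exists only for $g \geq 0$; expanding the integrand in powers of $g$ and integrating term by term gives coefficients that grow factorially, so the resulting series in $g$ has zero radius of convergence. Any nonzero radius would imply convergence for some $g \lt 0$, where the integral itself does not exist. Still, finite truncations of the series approximate $Z(g)$ well for small $g \gt 0$ – an asymptotic series.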
How do strings model massive particles?

In string phenomenology, each fundamental particle species corresponds to a different excitation mode of one single species of fundamental string.

However, a subtle aspect to keep in mind here is that all fundamental particles are fundamentally massless and receive a – comparatively low – mass only via a Higgs mechanism. In string phenomenology, therefore, all fundamental particles correspond to ground state excitations of strings. Indeed, the ground state excitations of the superstring are massless (while that of the bosonic string in fact has negative mass squared and hence models a tachyon particle). This is a nontrivial statement due to the nature of quantum harmonic oscillators and follows from a careful analysis of quantum effects.

For instance, in simple situations the gauge field particles of spin 1, such as the photon, are modeled by the ground state excitations of the open string. Due to quantum effects even the ground state oscillates a little, and the orientation of this oscillation accounts for the polarization degrees of freedom of the photon.

Every excited mode of a string, however, corresponds to a particle of comparatively huge mass, not meant to be close to any particle mass ever observed. So the infinite tower of higher string excitations is not meant to be directly observable in string phenomenology. Nevertheless, these massive particles have a crucial role to play: their appearance as virtual particles in string scattering amplitudes is what renders these probability amplitudes loopwise finite: they are the counterterms that exhibit perturbative string theory as a renormalized perturbation theory.

See also the discussion at Isn't it fatal that the string perturbation series does not converge? above.

How is string theory related to the theory of gravity?

The technical statement is that the effective quantum field theory defined by the string scattering amplitudes is a supergravity version of Einstein-Yang-Mills theory (namely heterotic supergravity or type II supergravity). This means the following.

The string scattering amplitudes between given incoming and outgoing asymptotic quantum states of free strings are probability amplitudes abstractly defined by summing the correlation function of a 2-dimensional SCFT over all super Riemann surfaces with given punctures. One tends to think of this sum heuristically as computing the superposition of probability amplitudes of all possible ways that the incoming strings can come in, interact with each other in a given background spacetime, and come out of this scattering process again.

But this heuristic idea can be formalized: we can ask if there is an ordinary perturbative quantum field theory such that, when we compute its scattering amplitudes by the usual Feynman diagram rules of expansion about a solution to the equations of motion (e.g. Einstein's equations), the result coincides with the string scattering amplitudes at low energy. If one finds such a quantum field theory, one calls it the effective quantum field theory which effectively approximates the string dynamics at low energy. (See also above at How/why does string theory depend on "backgrounds"?.)

In words this means: the "stringy" process defined by the string perturbation series looks, in good approximation at low energy, like such-and-such interactions of such-and-such particles.

And if one now computes what this effective quantum field theory defined by the string scattering amplitudes is, then one finds that it generically is gravity coupled to Yang-Mills theory, in a supergravity version of Einstein-Yang-Mills theory. (A schematic form of such an effective action is sketched below.)
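Schematically, for the universal NS-NS sector common to the perturbative superstring theories, the low-energy effective action takes a form like (a sketch; conventions and field content vary between the various string theories):

\[
  S_{eff} \;\sim\; \frac{1}{2\kappa^2} \int d^{10}x\, \sqrt{-g}\; e^{-2\phi} \Big( R \,+\, 4\, \partial_\mu \phi\, \partial^\mu \phi \,-\, \tfrac{1}{12}\, H_{\mu\nu\rho} H^{\mu\nu\rho} \Big) \;+\; \cdots \,,
\]

with metric $g$, dilaton $\phi$ and B-field strength $H = d B$; the ellipsis stands for gauge, fermionic and RR-sector terms as well as an infinite series of higher curvature corrections suppressed by powers of $\alpha'$.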
Notice that an effective quantum field theory (and this one is no exception) is in general not renormalizable, meaning that for a given energy scale one has to add higher order terms (counterterms) to make it well defined. Here it is precisely the full string perturbation series that provides these counterterms: the higher-energy interactions of the string provide the renormalization of its effective low-energy theory of gravity + Yang-Mills theory. One says that the string perturbation series provides a UV-completion for Einstein-Yang-Mills theory (hence for gravity, or rather for supergravity).

By the design of what scattering amplitudes are, all these are statements in perturbation theory. Notice that this is not some secret bug of string theory, but is so by the very definition of perturbative string theory and of perturbation theory.

The fact that perturbative string theory is a UV-completion of perturbative quantum gravity may of course make one want to find something like non-perturbative string theory. While there are some hints as to what this might be (these days there are many hopes that the AdS-CFT correspondence provides a non-perturbative extension of the description of quantum gravity by string theory), this non-perturbative description remains unclear, in stark contrast to perturbative string theory, which is rather well-defined and conceptually well understood.

Do the extra dimensions lead to instability of 4-dimensional spacetime?

The parameters of size and shape of the compactified dimensions in string theory, and in fact in any Kaluza-Klein compactification, are called "moduli". Since they are part of the higher dimensional metric, they are components of the higher dimensional field of gravity and hence are dynamical fields that evolve. The problem of their stability, hence the question whether there are dynamical mechanisms that make, for instance, the size of the compactified space remain stably at a given value, is famous as the problem of moduli stabilization in string theory.

This problem used to be open until around 2002. Then it was realized that vacuum expectation values (VEVs) of the higher form fields ("fluxes") present in string theory generically induce effective potentials that may stabilize the moduli, at least for fluctuations that preserve the given special holonomy (Calabi-Yau compactifications of string theory or G2-holonomy compactifications for M-theory). For type IIB string theory/F-theory this was argued in the influential article KKLT 03. An analogous moduli stabilization mechanism was also argued for M-theory on G2-manifolds by Acharya 02.

It is the counting of all the many possible ways of stabilizing moduli via fluxes in type IIB that led to the now infamous discussion of the landscape of type II string theory vacua (see also below). In any case, there seems to be no lack of solutions to the stability problem.

What does it mean to say that string theory has a "landscape of solutions"?

Being a perturbation theory (see What is string theory? above), there are perturbative string scattering amplitudes for each solution of the equations of motion of the effective background theory of (super-)Einstein-Yang-Mills-Dirac-Higgs theory (see also How/why does string theory depend on "backgrounds"? above).
In fact, more precisely, a perturbative background for string theory is a choice of super 2d CFT of central charge 15, and each of them induces such an effective background (by a mechanism indicated at 2-spectral triple). This imposes considerably more constraints than one has to solve to find a solution to just the Einstein-Yang-Mills equations (for instance "modular invariance" of the 2d theory, etc.). As a result, it is considerably harder to find a background vacuum for string theory than for its non-UV-complete, non-renormalized effective Einstein-Yang-Mills-Dirac-Higgs theory.

Due to these strong constraints, there had for many decades been a widespread hope among some (many) string theorists that these constraints are in fact so strong as to maybe essentially uniquely single out one small class of vacua. And since the vacua in a KK-mechanism theory like string theory determine the fundamental particle content in the low dimensional effective QFT, that would have meant that, under maybe mild assumptions, string theory would maybe even predict the particle content of the standard model of particle physics, to some extent. There had never been a formal argument for this hope, but this hope has influenced both the community and its public perception.

Only in the 1990s did string theorists begin to solve some of these extra constraints that string theory imposes on consistent vacua. Among these constraints is notably the phenomenological constraint that the "moduli" of the KK-compactification, hence the dilaton field and its many cousins, become massive. For if they were fundamentally massless, as the main particles in the standard model of particle physics are, then they would have shown up in accelerator experiments (such as these days the LHC), which however they do not.

One way to deal with this is to consider models in type II string theory which have nontrivial RR-field configurations in the internal space. It turns out that the periods of the corresponding field strengths over cycles in the compact space induce a potential energy for some or all of the moduli fields, hence a mass for them in perturbation theory, so that by choosing these "fluxes" appropriately "all moduli can be stabilized".

By a kind of Dirac charge quantization condition, the periods of these "flux fields" have to be integers and hence form a discrete space, as opposed to the usual continuous spaces of solutions of field theories. Moreover, due to some other constraints the potential energy which they induce cannot be too large, so that the number of admissible choices of periods for a flux field over some internal cycle is finite. Hence, given some internal manifold, the number of choices of models with fluxes is the product of some finite number of possibilities, taken over each of the (still finitely many) classes of cycles.

Traditionally one considered KK-compactification on real 6-dimensional Calabi-Yau manifolds (see at supersymmetry and Calabi-Yau manifolds). Since these are generically topologically complicated, they have comparatively large numbers of cycles. In a hand-waving estimate of the numbers to expect here, somebody assumed that the number of possible values of flux for each cycle is maybe about 10, and that the typical Calabi-Yau manifold has maybe about 500 classes of cycles. With these numbers accepted, there would be $10^{500}$ many ways to choose the flux fields in such a flux compactification of string theory.
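Spelled out, that back-of-the-envelope count is just the product of the (assumed) number of flux choices per cycle over the (assumed) number of cycles:

\[
  \underbrace{10 \times 10 \times \cdots \times 10}_{\sim 500 \;\text{cycles}} \;=\; 10^{500} \,.
\]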
While this number is large, for appreciating it it is important to notice that the spaces of solutions to a physical theory, such as Einstein-Yang-Mills-Dirac-Higgs theory, typically not only have infinitely many points, but are highly infinite-dimensional continuous spaces. In view of this, even a large finite number of solutions still makes a very small space of solutions, compared to the generic spaces of solutions that one traditionally deals with in physics. Nevertheless, due to that latent and long-held hope that maybe the full constraints on string theory vacua are so strong as to leave only a handful, this number (obtained with plenty of assumptions and hand-waving and estimating, as it were) gave rise to a widespread feeling that the space of vacua of string theory is "vast", and the term landscape of string theory vacua became popular and accepted for this space – for what among sober mathematicians would just have been called the moduli space of 2d SCFTs. Ever since, the public discussion has shown a strong preference towards being worried about a "landscape problem" of string theory. One might argue that instead of worrying much about a hand-waving statement which even at face value is "better" than what one sees in generic field theories, energy might more fruitfully be invested into getting a better technical handle on what the moduli space of 2d SCFTs – and hence the landscape of string theory vacua – actually is. For instance the authors of one of the few research programs these days to actually study the subtle nature of string theory backgrounds write on page 9 of their careful study of orientifold backgrounds: We hope that our formulation of orientifold theory can help clarify some aspects of and prove useful to investigations in orientifold compactifications, especially in the applications to model building and the "landscape." In particular, our work suggests the existence of topological constraints on orientifold compactifications which have not been accounted for in the existing literature on the landscape. In summary then, the landscape of string theory vacua is essentially the moduli space of 2d SCFTs. As such it is a mathematically rich and subtle object about which almost nothing precise is known at the moment. The hand-waving arguments about the nature of corners of this space, whether correct or not, do not actually point to a problem in string model building that would be worse than in model building in other theories. On the contrary, insofar as parts of the "landscape" are indeed finite sets, then no matter how large that finite number is, it is still vastly smaller than the cardinality of the infinite-dimensional spaces of solutions of a typical theory of physics.

Is string theory mathematically rigorous?

The core of perturbative string theory has a mathematically rigorous formulation. In fact much of mathematical physics and mathematical insight into quantum field theory as such has been gained from the study of the low-dimensional QFTs that constitute the worldvolume theories of the string and the various branes. For instance the axiomatization of QFT in the "FQFT" flavor (roughly dual to the AQFT picture) historically originates in insights gained in the study of the (topological) string (namely the Moore-Seiberg axioms).
On the other hand, the attempted implementations and applications of core string theory are vast and numerous, and when it finally comes to string phenomenology the usual level of rigor is just that common among practicing quantum field theorists. On the far end, deep aspects of string theory that are felt by many researchers to be of metaphysical relevance, such as the "landscape of string theory vacua" (see above), have led and are leading to speculations that are no longer backed up by any disciplined reasoning. More in detail: The quantization of the string sigma-model may be obtained cleanly via the mathematically sound process of geometric quantization; see the references at string – Symplectic geometry and Geometric quantization. The famous Weyl anomaly of the string is formally understood in terms of anomalous action functionals, see for instance (Freed 86, section 2). Various other obstructions to quantization (quantum anomalies) in the background fields for the string sigma-model, such as notably the Freed-Witten-Kapustin anomaly, have been understood in fine detail in terms of obstructions in differential cohomology, see for instance (Distler-Freed-Moore 09). Particularly well analyzed are two special sectors of first-quantized string theory: that of rational conformal field theory, which contains the example of strings propagating on Lie group manifolds (the Wess-Zumino-Witten model), as well as the example of topological strings. Rational conformal field theories indeed stand out as one non-trivial and rich class of QFTs which have been subject to complete mathematical classification (in the same sense in which mathematicians for instance do the classification of finite simple groups). For details on this classification see at FRS formalism. For the topological string much more is true. The topological string has effectively become a subject in pure mathematics, with its rigorous axiomatization via the TCFT version of the cobordism hypothesis-theorem, its formulation as mathematical homological mirror symmetry, its relation to geometric Langlands duality, etc. Here it is maybe noteworthy that all this mathematical insight into string physics rests on homotopy theory and higher category theory (the cobordism hypothesis, to wit, which governs the topological string, is a theorem in pure (infinity,n)-category theory). See also the question How does string theory involve homotopy theory, higher geometry and higher category theory? But the FQFT-axiomatics that serves to mathematically formalize the topological string is not restricted to the topological sector; it also applies to the physical string. For instance Huang's theorem shows that the familiar description of the physical string via vertex operator algebras is an instance of the FQFT-formalization. Indeed, in the FRS formalism these two formalizations, vertex operator algebras (via their modular tensor categories of representations) and TQFT, are combined via the rigorous AdS3-CFT2 / CS-WZW correspondence to give the classification of rational CFT. (In particular this says that in this low dimension holography and AdS-CFT duality are rigorous; of course this is far, far from true in higher dimensions.) This means that the rigorous quantization of the string in these approaches is "holographically" encoded in the quantization of Chern-Simons theory: the space of quantum states of 3d Chern-Simons theory on a given surface is identified with the space of conformal blocks (correlators) of the string's WZW-model on that surface.
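As one concrete illustration of how computable this holographic dictionary is (a standard fact about rational CFTs, added here for orientation rather than taken from the text above): the dimension of the space of conformal blocks on a closed surface of genus g is given by the Verlinde formula, purely in terms of the modular S-matrix of the rational CFT,

\[ \dim V(\Sigma_g) \;=\; \sum_{j} \left(S_{0j}\right)^{2-2g} , \]

where the sum runs over the primary fields (equivalently, over the simple objects of the modular tensor category) and 0 labels the vacuum. For g = 1 this reduces to the number of primaries, i.e. to the number of characters of the chiral algebra.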
Here the quantization of Chern-Simons theory is one of the best analyzed quantizations of a non-trivial, rich QFT. In summary, this is a level of rigour in the understanding of the worldsheet 2d QFT of the string that is well beyond what one typically encounters for non-trivial interacting (non-free) QFTs. And this is full non-perturbative quantum field theory (on the worldsheet!), not just the approximation in perturbation theory. From here on, also string field theory (its action functional, that is) has a completely rigorous formulation in terms of operads and L-infinity algebras (Lie n-algebras for n → ∞). Of course what does not have a rigorous formulation, not even a good non-rigorous formulation, is any conjectured non-perturbative (in spacetime!) completion of string theory (M-theory). But this is clear from the answer above at What is string theory? A snapshot of the state of the art of rigorous foundations of string theory as of 2011 is in Mathematical Foundations of Quantum Field and Perturbative String Theory.

How does string theory involve homotopy theory, higher geometry and higher category theory?

Most of the major frameworks of theoretical physics go along with a certain field of mathematics that naturally formalizes them. For instance classical mechanics today is understood as being described by the mathematics of symplectic geometry, Einstein gravity is described by differential geometry, and gauge theory by what is called Chern-Weil theory. It turns out that string theory involves all these physical and mathematical ingredients, but in a "higher dimensional" way. This "higher dimensional mathematics" is known as homotopy theory and higher category theory, and its geometric incarnation as higher geometry or derived geometry. At the heart of it, this increase in "mathematical dimension" is simply a reflection of the fact that a string worldsheet is one dimension higher than a particle worldline. This has to do with a general fact not just of string theory but of quantum field theory: the mathematical formulation of a local field theory of dimension n is naturally captured by a higher dimensional mathematical structure known as an n-category. (The precise formulation of this is a theorem known as the cobordism hypothesis.) For instance where in quantum mechanics one has a 1-category of spaces of states, in string theory one has a 2-category of 2-modules. But string theory is really about the second quantization of these 2-dimensional worldsheet field theories, and it turns out that under this passage other worldvolume theories of all kinds of dimensions appear, too: the various "branes". Accordingly, string theory involves n-categories for various values of n. Much of this relation of string theory to higher category theory was unclear when string theory developed in the late 20th century. In fact the theory of n-categories did not really exist yet either, just a vague feeling that such a theory ought to exist. What was prevalent in string theory then was a feeling that some kind of new mathematics would be necessary to capture the nature of the theory. "Back in the early '70s, the Italian physicist Daniele Amati reportedly said that string theory was part of 21st-century physics that fell by chance into the 20th century. I think it was a very wise remark." (Edward Witten, Nova interview 2003, also American Scientist Astronomy Issue 2002) For the moment, see at higher category theory and physics and Higher Prequantum Geometry for more.
Seminars of the year 2013

PHYSTAT-SUD Federation seminar: Ludwik Leibler
Ludwik Leibler, Matière Molle et Chimie, ESPCI, Paris
Glass-workers shape marvelous objects without using moulds or precise temperature control because glass is a unique material that transforms from liquid to solid in a very progressive way. Can we imagine other compounds that offer similar opportunities? I will present the concept of solidification by freezing of molecular-network topology and introduce vitrimers, organic materials that behave just like glass. Besides opening new perspectives in the chemistry and physics of glass formation and catalytically controlled exchange reactions, the discovery of vitrimers could profoundly affect many industries that rely on polymers and composites.

LPTMS seminar: J. Bonart
Impurities in trapped one-dimensional quantum liquids
Julius Bonart, LPTHE
Recent cold-atom experiments on the dynamics of impurities immersed in trapped 1D quantum liquids have revealed an interesting interplay between the dynamic polaronic impurity mass shift and the renormalization of the optical potential. We show that the influence of the external trap on the Bose gas leads to a steeper effective potential for the impurity. We propose a framework in which this potential renormalization and the mass shift can be quantitatively understood by using a semi-classical theory of density wave excitations in the Luttinger liquid. We then present more rigorous results on the impurity dynamics, obtained via the non-equilibrium formalism of a quantum Brownian particle. We show that the theoretical results reproduce recent experimental data well.

HDR defense: Alberto Rosso
The physics of extremes: from anomalous diffusion to disordered systems
Alberto Rosso

Physics-Biology interface seminar: Jean-François Joanny
Properties of the actin cortex and dynamics of cytokinesis
Jean-François Joanny (Institut Curie - Paris)

LPTMS seminar: Christian Hagendorf
Spin chains with dynamical lattice supersymmetry
Christian Hagendorf, Université de Genève
Starting from simple observations about the spectrum of the spin-1/2 XXZ and XYZ Heisenberg chains at a particular anisotropy, I will show how to construct a representation of the N=(2,2) supersymmetry algebra on the lattice. The supercharges are dynamical: they change the number of sites. The construction generalises to higher spin. In particular, it allows one to identify lattice precursors of the N=(2,2) superconformal minimal models. The supersymmetry allows one to prove the existence of zero-energy ground states which often have rich properties related to enumerative combinatorics. I will discuss in detail the case of the Fateev-Zamolodchikov chain at spin one, whose ground-state components are related to the weighted enumeration of alternating sign matrices.

Special LPTMS seminar: David Saad
David Saad (Aston University)
Polymer physics for route optimisation on the London underground
Optimizing paths on networks is crucial for many applications, from subway traffic to Internet communication. As global path optimization that takes account of all path-choices simultaneously is computationally hard, most existing routing algorithms optimise paths individually, thus providing sub-optimal solutions. This work includes two different aspects of routing.
In the first [1], we employ the cavity approach to study analytically the routing of nodes on a graph of given topology to predefined network routers and devise the corresponding distributive optimisation algorithm. In the second [2], we employ the physics of interacting polymers and disordered systems (the replica method) to analyse macroscopic properties of generic path-optimisation problems between arbitrarily selected communicating pairs; we also derive a simple, principled, generic and distributive routing algorithm capable of considering simultaneously all individual path choices. Two types of nonlinear interactions are considered, with different objectives: 1) alleviate traffic congestion in both cyber and real space and provide better route planning; and 2) save resources by powering down non-essential and redundant routers/stations at minimal cost. This saves energy and man-power, and alleviates the need for investment in infrastructure. We show that routing becomes more difficult as the number of communicating nodes increases and exhibits interesting physical phenomena such as ergodicity breaking. The ground state of such systems reveals non-monotonic complex behaviours in average path-length and algorithmic convergence, depending on the network topology and the densities of communicating nodes and routers.
[1] C. H. Yeung, D. Saad, The Competition for Shortest Paths on Sparse Graphs, Phys. Rev. Lett. 108, 208701 (2012).
[2] C. H. Yeung, D. Saad and K. Y. M. Wong, From the Physics of Interacting Polymers to Optimizing Routes on the London Underground, submitted (2012).

LPTMS seminar: A. de Luca
Insights into the many-body localization transition from the structure of wave functions
Andrea de Luca, LPT ENS
The possibility of combining interactions and disorder in a quantum system has recently become more concrete under the name of the many-body localization transition. With a combination of old and new techniques, we explore the two sides of the transition from the point of view of wave-function statistics, pointing out analogies and differences with the usual Anderson transition. Many-body localization seems to be characterized more by ergodicity breaking than by concrete localization in Fock space.

Physics-Biology interface seminar: Carsten Janke
Encoding functional information into the microtubule cytoskeleton
Carsten Janke, Institut Curie - Orsay

LPTMS seminar: V. Bapst
On the Quantum Annealing of Quantum Mean-Field Models
Victor Bapst, LPT ENS
In this talk I will present analytical results on the quantum annealing of quantum mean-field models. I will first explain the connection between static and dynamic properties of the model using results obtained on a toy model, the fully-connected ferromagnetic p-spin model. For p=2 this corresponds to the quantum Curie-Weiss model, which exhibits a second-order phase transition, while for p>2 the transition is first order. Focusing on the latter case, I will detail the link between phase transitions, metastable states and residual excitation energy on both finite and divergent time-scales. Then, I will present a study of the thermodynamics of a more realistic optimization problem subject to quantum fluctuations: the quantum coloring problem on random regular graphs.
Using the quantum cavity method, which allows one to solve such models in the thermodynamic limit, we determined the order of the quantum phase transition that occurs when the transverse field is varied and unveiled the rich structure of the quantum spin-glass phase. In particular, I will explain why the quantum adiabatic algorithm would fail to solve typical instances of this problem efficiently because of entropy-induced crossings within the quantum spin-glass phase.

LPTMS journal club: Kabir Ramola
Watersheds are Schramm-Loewner Evolution Curves
Kabir Ramola, post-doc LPTMS

LPTMS seminar: Michel Bauer
Non-demolition measurements in quantum mechanics, martingales and exchangeable variables
Michel Bauer, IPhT Saclay
More than 80 years after the discovery of the laws of quantum mechanics, these laws continue to cause astonishment. The notion of measurement, in particular, remains very mysterious, and the term nowadays covers a wide variety of processes. We will describe recent, very simple and beautiful experiments which allow one to lift a corner of the veil. After a few remarks on measurements in quantum mechanics, we will show why the interpretation of these experiments is closely tied to pillars of probability theory, and in particular to the martingale convergence theorem. In conclusion, we will show that de Finetti's theorem on exchangeable random variables is one route towards explaining the robustness of measurement in quantum mechanics, and that it suggests links with the notion of "elements of reality".

Physics-Biology interface seminar: Stefan Hell
Nanoscopy with focused light
Joint seminar with Laboratoire Aimé Cotton, hosted by François Treussart

LPTMS journal club: Ricardo Marino
Temporal Effects in the Growth of Networks
Ricardo Marino, PhD student LPTMS
This article explores a very interesting random-graph model of preferential attachment with time-decaying properties, suited to problems in which "popularity" decays over time (for instance, article citations).
Matúš Medo, Giulio Cimini, and Stanislao Gualdi
Abstract of the article: We show that to explain the growth of the citation network by preferential attachment (PA), one has to accept that individual nodes exhibit heterogeneous fitness values that decay with time. While previous PA-based models assumed either heterogeneity or decay in isolation, we propose a simple analytically treatable model that combines these two factors. Depending on the input assumptions, the resulting degree distribution shows an exponential, log-normal or power-law decay, which makes the model an apt candidate for modeling a wide range of real systems.

LPTMS seminar: Serguei Nechaev
From elongated spanning trees to vicious random walks
Serguei Nechaev, LPTMS
Given a spanning forest on a large square lattice, we consider by combinatorial methods a correlation function of k paths (k odd) along branches of trees or, equivalently, k loop-erased random walks. Starting and ending points of the paths are grouped such that they form a k-leg watermelon. For large distance r between the groups of starting and ending points, the ratio of the number of watermelon configurations to the total number of spanning trees behaves as r^{-nu} log r, with nu = (k^2-1)/2.
Considering the spanning forest stretched along the meridian of this watermelon, we show that the two-dimensional k-leg loop-erased watermelon exponent nu converts into the scaling exponent for the reunion probability (at a given point) of k (1+1)-dimensional vicious walkers, tilde{nu} = k^2/2. Some consequences and generalizations of this result will also be discussed.

LPTMS seminar: Charles Grenier
Characterization and decoherence of minimal excitations
Charles Grenier, CPhT Ecole Polytechnique
Recent years have seen the development of quantum optics experiments with electrons propagating along the edge channels of two-dimensional electron gases in the quantum Hall regime [1]. In this context, the realization of single electron sources is necessary to observe analogues of the fundamental results of quantum optics, such as the Hong-Ou-Mandel effect. In the electronic context, these experiments would provide unprecedented insights into the behavior of electron gases at the single-particle level [2]. In this talk, I will present a theoretical study of the single electron coherence properties of two different types of voltage pulses [3,4]. By combining bosonization and the Floquet scattering approach, the effect of interactions on a periodic source of voltage pulses is computed exactly [5]. When such excitations are injected into one of the channels of a system of two copropagating quantum Hall edge channels, they fractionalize into pulses whose characteristics reflect the properties of the interactions. We show that the dependence of fractionalization-induced electron/hole pair production on the pulse amplitude contains clear signatures of the fractionalization of the individual excitations. We propose an experimental setup combining a source of Lorentzian pulses [3] and a Hanbury Brown and Twiss interferometer to measure interaction-induced electron/hole pair production and, more generally, to reconstruct the single electron coherence of these excitations before and after their fractionalization.
[1] Y. Ji, et al., Nature 422, 415 (2003)
[2] P. Degiovanni, et al., Phys. Rev. B 80, 241307(R) (2009)
[3] L. Levitov, et al., J. Math. Phys. 37, 4845 (1996); D. A. Ivanov, et al., Phys. Rev. B 56, 6839 (1997); J. Keeling, et al., Phys. Rev. Lett. 97, 116403 (2006)
[4] J. Dubois, et al., preprint arXiv:1212.3921
[5] Ch. Grenier, et al., arXiv:1301.6777

PHYSTAT-SUD Federation seminar: Aris Moustakas
Power optimization on a random wireless network: a statistical physics approach
Aris Moustakas (National and Capodistrian University of Athens, Dept. of Physics)
The efficient use of transmitted power is important in modern-day wireless networks. Not only is the battery life of transmitting devices thus extended, but the interference caused to neighboring transmitters is also minimized. There are iterative methods to obtain the optimal power per node in a network; however, it is known that, depending on the strength of the interference, the problem may be infeasible. Moreover, the properties of the optimal power vector in a random network, as well as the convergence to it, are not well understood. In this work we study power optimization in a simple network with randomness. We show that the problem can be mapped to an Anderson model, and we analyze its properties, such as the average power per transmitter, its probability of failure, the tails of its power distribution and its dynamics, using ideas from the theory of disordered metals.
Coffee/tea will be served at 2:00 pm.

Physics-Biology interface seminar: Jean-François Allemand
In vitro & in vivo single molecule approaches to DNA replication
Jean-François Allemand (École normale supérieure)

PhD defense: Jason Sakellariou
Inverse inference in the asymmetric Ising model
In recent years, new experimental methods have made possible the acquisition of an overwhelming amount of data for a number of biological systems, such as assemblies of neurons, genes and proteins. Typically, these systems consist of a large number of interacting components and can be described by high-dimensional models such as the well-known Ising model from statistical physics. The nature of the data acquired from experiments makes it necessary to develop methods that are able to infer the parameters of the model and thus predict the pattern of the interactions between the components of such systems. In this thesis I have studied the particular case of the Ising model with asymmetric interactions, which is arguably the most relevant case when dealing with neural networks and could be generalized to fit other biological systems as well. I will present a new mean-field inference method based on a simple application of the central limit theorem, able to infer exactly the parameters of the asymmetric Ising model from data in a computationally efficient way. I will also discuss some results of numerical simulations where the performance of our new method can be evaluated and compared with other existing methods. Finally, I will also show how the method can be better adapted to the case of sparse networks where, additionally, the amount of data used in the inference is low compared to the size of the system.

LPTMS journal club: Dario Villamaina
Dario Villamaina, post-doc LPTMS
I will present some recent applications of non-equilibrium fluctuation theory to one-dimensional diffusive models.
Spontaneous Symmetry Breaking at the Fluctuating Level (Phys. Rev. Lett. 107, 180601 (2011))
Abstract: Phase transitions not allowed in equilibrium steady states may happen, however, at the fluctuating level. We observe for the first time this striking and general phenomenon measuring current fluctuations in an isolated diffusive system. While small fluctuations result from the sum of weakly correlated local events, for currents above a critical threshold the system self-organizes into a coherent traveling wave which facilitates the current deviation by gathering energy in a localized packet, thus breaking translation invariance. This results in Gaussian statistics for small fluctuations but non-Gaussian tails above the critical current. Our observations, which agree with predictions derived from hydrodynamic fluctuation theory, strongly suggest that rare events are generically associated with coherent, self-organized patterns which enhance their probability.

LPTMS seminar: Kabir Ramola
Columnar Order and Ashkin-Teller Criticality in the Hard Square Lattice Gas
Kabir Ramola, LPTMS Orsay
The lattice gas model of particles with nearest- and next-nearest-neighbour exclusion on a square lattice (hard squares) undergoes a transition from a fluid phase at low density to a columnar ordered phase at high density. The high-activity series for this model is a singular perturbation series in powers of 1/z, where z is the fugacity associated with each particle. We show that the different terms of the series need to be regrouped to get a Mayer-like series for a polydisperse system of interacting vertical rods.
We also analyse the nature of the phase transition as a function of density in this system. We argue that the critical properties of the model are those of a more general Ashkin-Teller model. We employ Monte Carlo simulations to study the correlations between various quantities in the system and obtain a fairly precise estimate of the position of the transition on the Ashkin-Teller critical line.

LPTMS seminar: David Mukamel
Long-range correlations in driven, non-equilibrium systems
David Mukamel, Weizmann
Systems driven out of thermal equilibrium often reach a steady state which, under generic conditions, exhibits long-range correlations. As a result, these systems sometimes share some common features with equilibrium systems with long-range interactions, such as the existence of long-range order and spontaneous symmetry breaking in one dimension, ensemble inequivalence and other properties. Some models of driven systems will be presented, and features resulting from the existence of long-range correlations will be discussed.

PHYSTAT-SUD Federation seminar: Peter Sollich
Unified study of glass and jamming rheology in soft particle systems
Peter Sollich, King's College London
Authors: Atsushi Ikeda, Ludovic Berthier, and Peter Sollich
Abstract: We explore numerically the shear rheology of soft repulsive particles at large volume fraction. The interplay between viscous dissipation and thermal motion results in multiple rheological regimes encompassing Newtonian, shear-thinning and yield stress regimes near the 'colloidal' glass transition when thermal fluctuations are important, crossing over to qualitatively similar regimes near the 'jamming' transition when dissipation dominates. In the crossover regime, glass and jamming sectors coexist and give complex flow curves. Although the glass and jamming limits are characterized by similar macroscopic flow curves, we show that they occur over distinct time and stress scales and correspond to distinct microscopic dynamics. We propose a simple rheological model describing the glass to jamming crossover in the flow curves, and discuss the experimental implications of our results. Time permitting, a systematic comparison of the model to data for a number of paradigmatic experimental systems will be described.

LPTMS journal club: Pierpaolo Vivo
Pierpaolo Vivo, CNRS researcher LPTMS
I present the results of the two papers listed below on an interesting real-life system, namely bus transportation in the Mexican city of Cuernavaca. In Cuernavaca there is no covering company responsible for organizing the city transport. Consequently, constraints such as a timetable that represents external influence on the transport do not exist. Moreover, each bus is the property of its driver. The drivers try to maximize their income and hence the number of passengers they transport. This leads to competition among the drivers and to their mutual interaction. In paper (1) below, the statistics of the empirical inter-arrival times of buses was found to conform to the spacing distribution of the Gaussian Unitary Ensemble (GUE) of random matrices. In paper (2), a simple but rich model of self-avoiding walks with Poissonian rates is put forward to explain the empirical data, whose connection with the GUE can then be elucidated analytically.
2. J. Baik, A. Borodin, P. Deift and T. Suidan, A model for the bus system in Cuernavaca (Mexico), J. Phys. A: Math. Gen. 39, 8965 (2006)
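For readers who want to see the GUE spacing statistics mentioned in paper (1) emerge numerically, here is a minimal sketch (my own illustration, not code from the cited papers). It samples 2x2 GUE matrices, for which the spacing distribution is exactly the Wigner surmise p(s) = (32/pi^2) s^2 exp(-4 s^2/pi), and checks the second moment against the prediction <s^2> = 3*pi/8.

```python
# Minimal GUE spacing demo: compare sampled level spacings of 2x2 GUE
# matrices with the Wigner-surmise moment <s^2> = 3*pi/8 ~ 1.178.
import numpy as np

rng = np.random.default_rng(0)

def gue_spacing(n=2):
    """Central eigenvalue spacing of an n x n GUE matrix."""
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    h = (a + a.conj().T) / 2        # Hermitian matrix with GUE statistics
    ev = np.linalg.eigvalsh(h)
    return ev[n // 2] - ev[n // 2 - 1]

spacings = np.array([gue_spacing() for _ in range(20000)])
s = spacings / spacings.mean()      # normalize to unit mean spacing

print(np.mean(s**2), 3 * np.pi / 8)  # both ~1.178 up to sampling error
```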
LPTMS seminar: Giulio Biroli
Giulio Biroli, IPhT Saclay
Random Matrix Theory was initially developed to explain the eigen-energy distribution of heavy nuclei. It has become clear by now that its domain of application is much broader and extends to very different fields such as number theory and quantum chaos, just to cite a few. In particular, it has been conjectured (and proved or verified in some special cases) that quantum ergodic (or chaotic) systems are characterized by eigen-energy statistics in the same universality class as random matrices and by eigen-functions that are delocalized over configuration space. On the contrary, non-ergodic quantum systems, such as integrable models, are expected to display Poisson statistics of energy levels and localized wave-functions. Starting from Anderson's pioneering papers, similar properties have also been studied for electrons hopping in a disordered environment. Remarkably, also in this case, similar features of the energy-level statistics have been found. All this has led to the conjecture that delocalization in configuration space, ergodicity and level statistics are intertwined properties.

Physics-Biology interface seminar: Andrea De Martino
MicroRNAs: a selective channel of communication between competing RNAs
Andrea De Martino (Sapienza Università di Roma)

LPTMS seminar: Camille Aron
Strongly Correlated Electrons Driven by an Electric Field
Camille Aron, Rutgers University
Strongly correlated many-body systems driven out of equilibrium are attracting a lot of interest, motivated by new experimental realizations in non-linear transport in devices, heterostructures and cold atoms. Theoretically, the challenge is to understand the interplay between the strong correlations and the finite drive. In this talk, I will focus on the Hubbard model driven by a constant electric field. First, I will introduce some theoretical tools to address the non-equilibrium steady states, bypassing the transient dynamics. At the dynamical mean-field level, I will describe how the lattice problem can be self-consistently mapped to a multi-lead Anderson impurity model. Afterwards, I will discuss the fate of Mott physics in the non-linear regime by detailing two key far-from-equilibrium phenomena: the dimensional crossover towards a lower-dimensional equilibrium system, and the dielectric breakdown of the Mott insulator.
C. Aron, G. Kotliar, C. Weber, PRL 108, 086401 (2012)
C. Aron, PRB 86, 085127 (2012)
C. Aron, C. Weber, G. Kotliar, arXiv:1210.4926 (2012)

LPTMS seminar: Chase Broedersz
Actively stressed marginal networks
Chase Broedersz, Princeton University
The mechanical properties of cells are regulated in part by internal stresses generated actively by molecular motors in the cytoskeleton. Experiments on reconstituted intracellular F-actin networks with myosin motors show that such active contractility dramatically affects network elasticity. There is also experimental evidence that cytoskeletal networks in living cells may be unstable or only marginally stable in the absence of motor activity. We study the impact of active stresses on the mechanics of disordered, marginally stable networks using a simple model of networks of fibers with linear bending and stretching elasticity. Motor activity controls the elasticity in an anomalous fashion close to the point of marginal stability by coupling to critical network fluctuations.
In addition, such motor stresses can stabilize initially floppy networks, extending the range of critical behavior to a broad regime of network connectivities below the marginal point. Away from this regime, or at high stress, motors give rise to a linear increase in stiffness with stress. The remarkable mechanical response of these actively stressed networks is captured by a simple constitutive scaling relation, which highlights the important role of nonaffine strain fluctuations as a susceptibility to motor stress.

PHYSTAT-SUD Federation seminar: Tomohiro Sasamoto
Fluctuations in the 1D Kardar-Parisi-Zhang equation and discrete analogues
Tomohiro Sasamoto (Chiba University, Japan)
The Kardar-Parisi-Zhang (KPZ) equation is a non-linear stochastic partial differential equation which describes surface growth phenomena. Recently its one-dimensional version has attracted much attention because of several exciting experimental and theoretical developments [1]. In this talk we discuss the replica analysis of the KPZ equation and its discrete analogues. Starting from the basics of the KPZ equation and the replica analysis, we explain how one can utilize it for the analysis of the KPZ equation [2], where the tricky parts are, and how they are regularized in discrete models like q-TASEP [3].
[1] K. A. Takeuchi, M. Sano, T. Sasamoto and H. Spohn, Growing interfaces uncover universal fluctuations behind scale invariance, Sci. Rep. 1 (2011) 34.
[2] T. Imamura and T. Sasamoto, Exact solution for the stationary KPZ equation, Phys. Rev. Lett. 108 (2012) 190603.
[3] A. Borodin, I. Corwin and T. Sasamoto, From duality to determinants for q-TASEP and ASEP, arXiv:1207.5035.

Physics-Biology interface seminar: Chase Broedersz
Chase Broedersz (Princeton University)

LPTMS journal club: Andrey Lokhov
Andrey Lokhov, PhD student LPTMS
Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models
M. Ekeberg, C. Lovkvist, Y. Lan, M. Weigt, E. Aurell, Phys. Rev. E 87, 012707 (2013)
In this paper, inference methods of inverse statistical mechanics are applied to the 21-state Potts model for predicting amino-acid contacts in proteins. This problem is important in biology since a successful estimation of contacts is essential for predicting a protein's 3D structure and, therefore, its biological function. See also PNAS December 6, 2011, vol. 108, no. 49, E1293-E1301, and PNAS 106(1), 67-72 (2009).
Abstract of the paper: Spatially proximate amino acids in a protein tend to coevolve. A protein's 3D structure hence leaves an echo of correlations in the evolutionary record. Reverse engineering 3D structures from such correlations is an open problem in structural biology, pursued with increasing vigor as more and more protein sequences continue to fill the data banks. Within this task lies a statistical inference problem, rooted in the following: correlation between two sites in a protein sequence can arise from firsthand interaction, but can also be network-propagated via intermediate sites; observed correlation is not enough to guarantee proximity. To separate direct from indirect interactions is an instance of the general problem of inverse statistical mechanics, where the task is to learn model parameters (fields, couplings) from observables (magnetizations, correlations, samples) in large systems. In the context of protein sequences, the approach has been referred to as direct-coupling analysis. Here we show that the pseudolikelihood method, applied to 21-state Potts models describing the statistical properties of families of evolutionarily related proteins, significantly outperforms existing approaches to the direct-coupling analysis, the latter being based on standard mean-field techniques. This improved performance also relies on a modified score for the coupling strength. The results are verified using known crystal structures of specific sequence instances of various protein families. Code implementing the new method can be found at http://plmdca.csc.kth.se/.
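To make the idea of the pseudolikelihood method concrete, here is a schematic sketch (my own toy reduction, not the paper's plmDCA code): it replaces the 21-state Potts model by a 2-state (Ising) one, so that maximizing the pseudolikelihood of each site given the others becomes exactly a per-site logistic regression. The random stand-in data, learning rate and iteration count are illustrative choices only.

```python
# Schematic per-site pseudolikelihood maximization for a 2-state (Ising)
# reduction of the Potts inference problem described above.
import numpy as np

rng = np.random.default_rng(1)
N, M = 8, 5000
samples = rng.choice([-1, 1], size=(M, N))   # stand-in for aligned sequence data

def plm_fit_site(r, X, lr=0.05, steps=300):
    """Fit couplings J_{r,j} by gradient ascent on the log-pseudolikelihood
    of column r given all other columns (i.e. a logistic regression)."""
    others = np.delete(X, r, axis=1)
    y = X[:, r].astype(float)
    w = np.zeros(others.shape[1])
    for _ in range(steps):
        # Ising conditional: P(y | rest) = sigmoid(2 * y * (others @ w))
        p = 1.0 / (1.0 + np.exp(-2.0 * y * (others @ w)))
        grad = ((1.0 - p) * 2.0 * y) @ others / len(y)
        w += lr * grad
    return w

couplings = np.array([plm_fit_site(r, samples) for r in range(N)])
print(couplings.shape)  # (N, N-1): row r holds the couplings of site r
```

On the random stand-in data the inferred couplings simply scatter around zero; on real sequence data, the large (suitably scored) couplings are the predicted contacts.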
LPTMS seminar: P. Soulé and P.-E. Larré

Wave pattern generated by an obstacle moving in a one-dimensional polariton condensate
Pierre-Elie Larré, LPTMS
Motivated by recent experiments on the generation of wave patterns in polariton condensates, we analyze the superfluid and dissipative characteristics of the one-dimensional flow of a nonresonantly-pumped polariton condensate past a localized obstacle. We consider the response of the condensate flow in the weak-perturbation limit, but also by means of the Whitham averaging theory in the nonlinear regime. One of the results of this work is the identification of a new time-dependent regime separating two types of stationary flows (a mostly viscous one and another dominated by Cherenkov radiation). I will also present results obtained by including polarization effects in the description of the polariton condensate, and I will argue that similar effects in the presence of an acoustic horizon offer possibilities for demonstrating Hawking-like radiation in polariton condensates.

Many-body study of a quantum point contact in the fractional quantum Hall regime
Paul Soulé, LPTMS
Fractional Quantum Hall (FQH) fluids have a low-energy effective theory localized at their edges. The edge states are expected to form a very special one-dimensional strongly interacting electronic system, a chiral Luttinger liquid. Depending on the FQH phase, a variety of 1D models has been proposed, but only poor experimental agreement has been found so far. Using exact diagonalizations in the cylinder geometry, we identify the edge modes in the presence of a parabolic confining potential at Landau level filling factors v=1/3 and v=5/2. If we change the sign of the potential we can access both the tunneling through the bulk of the fluid and the tunneling between spatially separated droplets, and measure numerically the Luttinger parameters in both cases. These phenomena are at the basis of tunneling experiments in quantum point contact devices of two-dimensional electron gases.
P. Soulé, T. Jolicoeur, Phys. Rev. B 86, 115214 (2012)
P. Soulé, T. Jolicoeur, in preparation

Physics-Biology interface seminar: Emmanuèle Helfer

LPTMS journal club: Dmitry Petrov
Dmitry Petrov, CNRS researcher LPTMS
Title: Quantum annealing with more than one hundred qubits
Authors: Sergio Boixo, Troels F. Ronnow, Sergei V. Isakov, Zhihui Wang, David Wecker, Daniel A. Lidar, John M. Martinis, and Matthias Troyer
The paper is not publicly available as it is submitted to Nature, but I explained to Matthias our Journal Club idea and he sent me the preprint, asking us to keep it for our private use. I will put the preprint on our coffee table together with the D-Wave Systems Wikipedia page. Enjoy.
At a time when quantum effects start to pose limits to further miniaturisation of devices and to the exponential performance increase expressed by Moore's law, quantum technology is maturing to the point where quantum devices, such as quantum communication systems, quantum random number generators and quantum simulators, may be built with capabilities exceeding the performance of classical computers. A quantum annealer, in particular, finds solutions to hard optimisation problems by evolving a known initial configuration towards the ground state of a Hamiltonian that encodes the optimisation problem. Here, we present results from experiments on a 108-qubit D-Wave One device based on superconducting flux qubits. The correlations between the device and a simulated quantum annealer demonstrate that the device performs quantum annealing: unlike classical thermal annealing, it exhibits a bimodal separation of hard and easy problems, with small-gap avoided level crossings characterizing the hard problems. To assess the computational power of the quantum annealer we compare it to optimised classical algorithms. We discuss how quantum speedup could be detected on devices scaled to a larger number of qubits, where the limits of classical algorithms are reached. See also

LPTMS seminar: Sébastien Balibar
The giant plasticity of a quantum crystal
Sébastien Balibar, Laboratoire de Physique Statistique de l'ENS
We have discovered [1] that helium-4 crystals exhibit a giant plasticity at very low temperature once all their impurities are removed. They offer practically no resistance to shear in one particular direction, even under extremely small stress (1 nanobar) and in the vicinity of absolute zero (0.01 Kelvin). This phenomenon is a spectacular example of "plasticity", as it is a consequence of the free motion of dislocations in a direction that we have identified, which reduces one of the elastic coefficients by about 80%, even under extremely small stresses (1 nanobar). Note, however, that unlike classical plasticity, this is a reversible effect. It disappears as soon as traces of impurities attach to the dislocations, or if the temperature increases, which induces collisions between the moving dislocations and thermal phonons. This latter phenomenon has allowed us [2] to measure the density of these dislocations and their free length, which is very large (50 to 200 microns), proving that the dislocations are weakly connected to one another, most probably aligned parallel to each other. All these recent observations [1,2] demonstrate that these crystals are most likely not "supersolid" as had been proposed in 2004, and that, on the contrary, their apparent rotation anomalies are artifacts due to the giant plasticity we have uncovered.
[1] A. Haziot, X. Rojas, A. Fefferman, J. Beamish and S. Balibar, Phys. Rev. Lett. 110, 035301 (2013)
[2] A. Haziot, A. Fefferman, J. Beamish, and S. Balibar, Phys. Rev. B 90, 060509(R) (2013).

LPTMS seminar: Christophe Texier
Distribution of the Wigner time delay in chaotic cavities
Christophe Texier, LPTMS
The Wigner time delay captures the temporal aspect of a scattering process. Due to its relation with the density of states of the open system (here a chaotic cavity), it is also of great interest for the study of coherent electronic transport in mesoscopic devices.
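For orientation, here are the standard definitions behind this talk (textbook conventions, assumed rather than quoted from the abstract). From the energy-dependent N x N scattering matrix S(E) of the cavity one builds the Wigner-Smith matrix

\[ Q \;=\; -\,i\hbar\, S^{\dagger}\,\frac{\partial S}{\partial E} , \]

whose eigenvalues \tau_1, \dots, \tau_N are the proper time delays. The Wigner time delay is their sum, often taken in the normalized convention

\[ \tau_{\mathrm{W}} \;=\; \frac{1}{N}\sum_{i=1}^{N} \tau_i \;=\; \frac{1}{N}\,\mathrm{Tr}\,Q , \]

and the relation to the density of states mentioned above is the Krein-Friedel-type formula \rho(E) = \mathrm{Tr}\,Q / (2\pi\hbar).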
Using the joint distribution of the proper time delays of a chaotic cavity derived by Brouwer, Frahm & Beenakker [Phys. Rev. Lett. 78, 4737 (1997)], we obtain, in the limit of a large number of channels N, the large deviation function for the distribution of the Wigner time delay (the sum of proper times) by a Coulomb gas method. We show that the existence of a power-law tail originates from narrow resonance contributions, related to a (second order) freezing transition in the Coulomb gas.

Physics-Biology interface seminar: Karen Perronet
Translation kinetics of individual ribosomes followed by fluorescence microscopy
Karen Perronet (Laboratoire Charles Fabry, Institut d'Optique)

LPTMS seminar: Edouard Hannezo
Epithelial sheet morphologies
E. Hannezo, Institut Curie
Understanding morphogenesis during the development of the embryo requires an understanding of how mechanical forces coordinate to generate the macroscopic shape of organs. Although it is acknowledged that adhesion and mechanical forces play a crucial role in determining the morphology of a cell, a precise model remains elusive. We propose a simple theoretical model based on adhesion and acto-myosin generated forces to study the morphology of epithelial cells and the 3D topology of epithelial sheets.

PHYSTAT-SUD Federation seminar: Jesper Jacobsen
Critical manifolds for percolation and Potts models from graph polynomials
Jesper Jacobsen (Laboratoire de Physique Théorique de l'École Normale Supérieure, Paris)
The first parameter to be fixed when studying a phase transition is the critical temperature. Somewhat surprisingly, this parameter is only known analytically for the simplest two-dimensional models (Ising model), or for more complicated models (Potts and O(n) vector models) on the simplest possible lattices. The known critical temperatures are invariably given by simple algebraic curves. Some of these results have very recently been proved mathematically by the technique of discrete holomorphicity. For Potts and (bond or site) percolation models on any desired two-dimensional lattice, we define a graph polynomial whose roots turn out to give very accurate approximations to the critical temperatures, or even yield the exact result in the exactly solvable cases. This polynomial depends on a basis (unit cell) and its embedding into the infinite lattice. As the size of the basis is increased, the approximation becomes increasingly accurate. On the one hand, this gives strong evidence that the critical temperatures for lattices with no known analytical solution may not be algebraic numbers, and that conformal invariance will not have any counterpart in finite size (discrete holomorphicity). On the other hand, the method determines the critical temperature to unprecedented accuracy, typically 12-13 significant digits for bond percolation thresholds. It also shows that the phase diagram of the Potts model in the antiferromagnetic regime has an intricate and highly lattice-dependent structure.

LPTMS seminar: Marie-Hélène Genest
Supersymmetry and dark matter at the LHC
Marie-Hélène Genest, LPSC Grenoble
The presence of Cold Dark Matter in the universe has been evinced by a number of astrophysical experiments. Direct and indirect detection experiments are looking for evidence of the presence of this elusive new particle.
In a completely complementary approach, one can look for the production of such weakly interacting massive particles (WIMPs) at the LHC, either by looking for the direct production of WIMPs or by looking for supersymmetry, an appealing theory beyond the Standard Model of particle physics which provides a natural Cold Dark Matter candidate.

Physics-Biology interface seminar: Claire Wilhelm
Claire Wilhelm (Université Paris 7)

Special LPTMS seminar: Andrey Yanovsky
Effect of winding edge currents
Andrey Yanovsky (B. Verkin Institute for Low Temperature Physics and Engineering of the NAS of Ukraine)
Some degrees of freedom "interact" with the velocity winding of trajectories, e.g. spin. We discuss the connection between spintronics and winding, namely its relation to the spin Hall effect and Thomas precession. Further, we discuss persistent currents for particles with internal degrees of freedom. The currents arise because of winding properties essential for the chaotic motion of the particles in a confined geometry. The currents do not change the particle concentrations or thermodynamics, similar to skipping orbits in a magnetic field. One possible realization is persistent currents under the conditions of the "extrinsic" spin Hall effect. The effect is classical, in contrast to similar currents predicted for conditions of the quantum "intrinsic" spin Hall effect. An analytical solution has been found for an Ornstein-Uhlenbeck process. A weak dependence of the winding currents on the form of the confining potential has been uncovered. The results can be of interest beyond the scope of phenomena associated with the spin Hall effect.
S. Ouvry, L. Pastur, A. Yanovsky, Physics Letters A 377 (2013), pp. 804-809

LPTMS seminar: Véronique Terras
Algebraic Bethe Ansatz approach to correlation functions of integrable systems: a review
Véronique Terras, ENS Lyon
This is a review of recent results concerning the computation, in the algebraic Bethe Ansatz framework, of correlation functions of integrable systems. On the simple example of the XXZ spin-1/2 Heisenberg chain, I will explain how to obtain exact representations for the form factors and correlation functions on the lattice, both in finite volume and in the thermodynamic limit. I will then explain how these representations can be used to derive the large-distance asymptotic behavior of the two-point functions in the thermodynamic limit and in the critical regime of the chain. I will finally discuss how the method can be applied to more complicated models, such as the exactly solvable solid-on-solid model.

LPTMS journal club: François Landes
François Landes (PhD student LPTMS)
She, Z., Aurell, E. & Frisch, U., The inviscid Burgers equation with initial data of Brownian type, Communications in Mathematical Physics 148, 623–641 (1992). Freely available at: http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.cmp/1104251047
Title: The inviscid Burgers equation with initial data of Brownian type. Authors: Zhen-su She, Erik Aurell and Uriel Frisch.
In my presentation (aimed mainly at the students) I will introduce Burgers' equation, its link with Navier-Stokes and the phase-diffusion equation, and show how its inviscid limit can be simply solved using a Cole-Hopf transform (a minimal worked version is recalled below). In this limit, at long times, the velocity field develops shocks (singularities), which have interesting statistical properties (e.g. they are scale-free distributions).
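As a reminder of the transform just mentioned (standard conventions, sketched here for the viscous equation before taking the inviscid limit): writing

\[ u \;=\; -2\nu\,\partial_x \ln\psi \]

maps the Burgers equation \partial_t u + u\,\partial_x u = \nu\,\partial_x^2 u to the heat equation \partial_t \psi = \nu\,\partial_x^2 \psi. Solving the heat equation with \psi(y,0) = e^{-\Phi_0(y)/2\nu}, where u(y,0) = \Phi_0'(y), and evaluating the resulting integral by Laplace's method as \nu \to 0 gives

\[ u(x,t) \;=\; \frac{x - y^*(x,t)}{t}, \qquad y^*(x,t) \;=\; \operatorname*{arg\,min}_{y}\left[ \frac{(x-y)^2}{2t} + \Phi_0(y) \right] ; \]

the shocks are precisely the points x where the minimizer y^* jumps.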
I will motivate the use of non-smooth, random initial conditions, then present a few results obtained with this kind of initial condition.

Physics-Biology interface seminar: David Dulin
David Dulin (Delft University of Technology)

LPTMS seminar: Bernard Helffer
Aharonov-Bohm operators and minimal spectral partitions
Bernard Helffer (Département de Mathématiques, Université Paris-Sud)
While the Aharonov-Bohm effect was originally an experiment in an unbounded, non-simply-connected domain interpreted through scattering, the study of the spectral properties of the Aharonov-Bohm operator in a bounded domain appears in many other questions, in particular in the study of diamagnetism and paramagnetism. A very particular situation corresponds to the case where the fluxes created at the poles are odd multiples of Pi. We will discuss this case in connection with the question of minimal spectral partitions, a question which appears as a limiting situation in population dynamics.

PHYSTAT-SUD Federation seminar: Rémi Monasson
Cross-talk and transitions between spatial charts in a neuronal model of the hippocampus
Rémi Monasson (LPTENS, Paris)
Understanding the mechanisms by which space gets represented in the brain is a fundamental problem in neuroscience. The experimental discovery of so-called "place cells" and "grid cells", encoding specific positions in space, provides essential elements in this context. How may a spatial chart, i.e. a relation between different points in space, be built and memorized? In this talk I will present a model of a neural network capable of storing several spatial charts. This model may be solved with the help of the techniques of the statistical physics of disordered systems. We will discuss the different possible phases of the system and the essential features of its dynamics (activated diffusion within one chart and transitions between charts), in relation with recent experiments.

Physics-Biology interface seminar: Michael Murrell
Mechanical Force Generation and Turnover in the Cell Cytoskeleton
Michael Murrell (University of Wisconsin, Madison)

LPTMS seminar: Rodolfo Jalabert
Scattering Phase of Quantum Dots: Emergence of Universal Behavior
Rodolfo Jalabert, Institut de Physique et Chimie des Matériaux de Strasbourg

LPTMS journal club: Guillaume Roux
"Quantum quenches through experimental examples"
Guillaume Roux (LPTMS)
The subject of quantum quenches is introduced through two recent experimental results: http://fr.arxiv.org/abs/1111.0776, Nature 481, 484 (2012); http://fr.arxiv.org/abs/1211.0545, Science 339, 52 (2013). The program will be to briefly recall the physics of cold atoms loaded in an optical lattice, to focus on some general questions concerning the time-evolution of closed quantum many-body systems (thermalization, relaxation and non-equilibrium physics), and to explain in more detail the results obtained in these two experiments.

LPTMS seminar: Max Atkin
From Instantons to Large Deviations in Hermitian Random Matrices
Max Atkin (University of Bielefeld)
The typical fluctuations of the largest eigenvalue of a Hermitian random matrix about its mean have been known for a long time to be given by the Tracy-Widom distribution. More recently, interest has focused on the 'atypical' fluctuations in which an eigenvalue appears very far from the bulk spectrum. This problem has been attacked using saddle point methods, loop equations and, to a much lesser extent, orthogonal polynomials.
In this talk we review recent progress in the orthogonal polynomial approach, which makes contact with instanton effects in the string theory literature. We use this framework to derive the distribution of large deviations in the case of multi-critical potentials.

Physics-Biology interface seminar: Abdul Barakat
Mechanotransduction in Vascular Endothelial Cells: Mechanisms and Implications
Abdul Barakat (École Polytechnique)

LPTMS seminar: M. Kardar
Levitation by Casimir forces in and out of equilibrium
Mehran Kardar, MIT

LPTMS journal club: Olivier Giraud
Orbiting walkers
Olivier Giraud (LPTMS)
"Trajectory eigenmodes of an orbiting wave source", E. Fort and Y. Couder, EPL 102, 16005 (2013)
"Level splitting at macroscopic scale", A. Eddi, J. Moukhtar, S. Perrard, E. Fort and Y. Couder, PRL 108, 264503 (2012)
Liquid surfaces subjected to a periodic vertical vibration can sustain the bouncing of liquid droplets for long times. These droplets can acquire horizontal motion through their interaction with the wave they generate at each bounce, thus turning into "walkers". Various analogies with the behavior of quantum particles have been observed (quantization of orbits, tunneling). We will recall the basic principles of these experiments and present recent results on the motion and properties of walkers subjected to a central force.

LPTMS seminar: Anupam Kundu
Exact distributions of the number of distinct and common sites visited by N independent random walkers
Anupam Kundu (LPTMS)

Physics-Biology interface seminar: Christoph Cremer
Fluorescence Microscopy of Biostructures @ Molecular Optical Resolution
Seminar hosted by Olivier Acher & Guillaume Dupuis

LPTMS "Quantum fluids" seminar
Dopon-spinon confinement in the t-J model
Alvaro Ferraz (Department of Theoretical and Experimental Physics, Federal University of Rio Grande do Norte, Brazil)

Special LPTMS seminar: Adélaïde Raguin
Traffic regulated by junctions: a statistical model motivated by cytoskeletal transport
Adélaïde Raguin, Université Montpellier 2
Active intracellular transport by motor proteins along the cytoskeleton is an important mechanism, essential to accomplish many biological functions in eukaryotic cells. For instance, motor protein transport is involved in cell migration, feeding of the cell, and cell contractions. Importantly, several neurodegenerative diseases are a consequence of the malfunctioning of motor protein transport. Motor protein transport also poses a challenge in statistical physics: it consists in the study of a non-equilibrium transport process along a complex and dynamic network. The totally asymmetric simple exclusion process (TASEP) can be considered a minimal model for studying motor protein transport (a minimal simulation sketch of the single-segment case is given below). Along a single segment this model has been well studied, but how active particles organize along complex networks has not been studied extensively. Motivated by recent experimental work based on tracer experiments of motor proteins moving along junctions of intersecting cytoskeletal filaments, we formulate the TASEP along a junction. The first part of my talk will deal with how the microscopic traffic dynamics at a junction can modulate the transport on the whole network. In the second part of my talk I will introduce a more realistic model of motor protein transport at the junctions.
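The single-segment building block referred to above could be simulated as follows (an illustrative toy, not the speaker's junction model): a random-sequential TASEP with open boundaries, whose bulk density in the low-density phase should come out close to the entry rate alpha.

```python
# Minimal single-lane TASEP with open boundaries (random-sequential updates).
# Rates and sizes are illustrative; the talk's junction model generalizes this.
import numpy as np

rng = np.random.default_rng(2)
L, alpha, beta = 100, 0.3, 0.7      # sites, entry rate, exit rate
lattice = np.zeros(L, dtype=int)    # 1 = site occupied by a motor

def sweep(lattice):
    """One Monte Carlo sweep: L+1 randomly chosen update attempts."""
    for _ in range(L + 1):
        i = rng.integers(-1, L)     # -1 stands for the entry reservoir
        if i == -1:                 # injection at the left boundary
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1
        elif i == L - 1:            # extraction at the right boundary
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0
        elif lattice[i] == 1 and lattice[i + 1] == 0:
            lattice[i], lattice[i + 1] = 0, 1   # hard-core forward hop

for _ in range(5000):               # relax towards the steady state
    sweep(lattice)
print("bulk density ~", lattice[L // 4 : 3 * L // 4].mean())  # ~alpha here
```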
Using the tools developed earlier, I characterize the various stationary states of motors moving along complex junctions such as those arising in living cells.

Séminaire exceptionnel du LPTMS: Paolo Malgaretti
Molecular motors: velocity speed-up, bidirectional motion and confinement-induced rectification
Paolo Malgaretti (University of Barcelona)
The energy-consuming motion of molecular motors is responsible for many cellular tasks, ranging from cargo transport to cellular signalling and cell division. The performance of single motors is strongly affected by the environment in which they work. Variations in ATP concentration, the viscosity of the cytoplasm due to molecular crowding, and motor-motor interactions can strongly affect the overall performance of molecular motors. Here I will focus on how the medium in which molecular motors move affects single-motor as well as collective dynamics. On the one hand, I will show that, due to their small sizes and velocities, molecular motors move in the low-Reynolds-number regime, where the long range of the hydrodynamic interaction leads to collective behaviour not captured by the rigid-coupling picture. This interaction leads to a large velocity speed-up, of up to two orders of magnitude compared to the single-motor case [1]. Hydrodynamic coupling can also lead to spontaneous symmetry breaking when motors pulling in opposite directions act on the same cargo [2]. On the other hand, the high concentration of suspended molecules and proteins in the cytoplasm, as well as the local geometrical confinement imposed by organelles or large vesicles in suspension, can affect molecular motor dynamics. I will show that the effective space motors can explore while moving along a filament leads to a local bias which, coupled to the motors' intrinsic stepping mechanism, provides an alternative route to controlling molecular motor velocity [3].
[1] P. Malgaretti, I. Pagonabarraga, D. Frenkel, "Running faster together: huge speed-up of thermal ratchets due to hydrodynamic coupling", Phys. Rev. Lett. 109, 168101 (2012)
[2] P. Malgaretti, I. Pagonabarraga, J.-F. Joanny, in preparation
[3] P. Malgaretti, I. Pagonabarraga, J. M. Rubi, "Confined Brownian ratchets", J. Chem. Phys. 138, 194906 (2013)

Thesis defense: Arthur Lavarélo
Frustration and disorder in quantum spin chains and ladders
Arthur Lavarélo
In quantum spin systems, frustration and low dimensionality generate quantum fluctuations and give rise to exotic quantum phases. This thesis studies a spin-ladder model with frustrating couplings along the legs, motivated by experiments on the cuprate BiCu2PO6. First, we present an original variational method to describe the low-energy excitations of a single frustrated chain. Then, the phase diagram of two coupled chains is computed with numerical methods. The model exhibits a quantum phase transition between a dimerized phase and a resonating valence bond (RVB) phase. The physics of the RVB phase is studied numerically and by a mean-field treatment. In particular, the onset of incommensurability in the dispersion relation, structure factor and correlation functions is discussed in detail. We then study the effects of non-magnetic impurities on the magnetization curve and the Curie law at low temperature. A low-energy effective model is derived within linear response theory and is used to explain the behaviour of the magnetization and the Curie constant. Finally, we study the effect of bond disorder on a single frustrated chain.
The variational method, introduced in the non-disordered case, gives a low-disorder picture of the instability of the dimerized phase, which consists of the formation of Imry-Ma domains delimited by localized spinons.

Séminaire du LPTMS: Giulia Foffano
Stokes' law violation and negative drag in active fluids
Giulia Foffano (University of Edinburgh)
Active fluids are suspensions of particles that absorb energy from their surroundings or from an internal reservoir in order to do work. The main examples are found in biological contexts, e.g. bacterial suspensions and the actomyosin solution that constitutes the cytoskeleton. I will present simulations where a small constant force is applied to a spherical colloid embedded in an active liquid crystal. In the linear regime, the dissipative force acting on a particle embedded in a simple fluid is described by Stokes' law. Here, instead, we find a strong non-linearity with respect to the size of the probe and, strikingly, a regime where the colloid reaches a steady state of negative drag.

Physics-Biology interface seminar: Matthieu Caruel
Muscle power-stroke as a collective mechanical phenomenon
Matthieu Caruel (Inria and École Polytechnique)

Séminaire du LPTMS: Maxim Imakaev
Three-dimensional architecture of human chromosomes: from data analysis to polymer models
Maxim Imakaev (MIT)
The three-dimensional and physical organization of genomes within cells plays critical roles in regulating chromosomal processes, including gene regulation, DNA replication, and genome stability. A recently developed method, Hi-C, provides comprehensive whole-genome information about physical contacts between genomic regions. Analysis of these data has begun to reveal determinants of 3D genomic organization. Chromosomes can be understood as long polymers. We use a combination of data analysis and polymer modelling to pinpoint the main determinants of 3D genomic organization in several studied organisms. We build polymer models based on known or suggested principles of chromosomal organization and validate them against the Hi-C data. In particular, we have shown that yeast chromosomal organization can be well described by a star-polymer model. The model we developed is consistent with the yeast Hi-C data, microscopy observations and diffusion measurements of yeast chromosomes. We also study the folding of human chromosomes during metaphase, which is vital for the completion of cell division. Here, polymer modelling allowed us to discriminate between two long-studied models of metaphase chromosomes: the "bottle-brush" model of loops emanating from a central scaffold, and a hierarchical folding principle.

Séminaire du LPTMS: Antonio Prados
A general class of dissipative models: fluctuating hydrodynamics and large deviations
A. Prados (Universidad de Sevilla)
We consider a general class of models, described at the mesoscopic level by a fluctuating balance equation for the local energy density. This balance equation has a diffusive term, with a current that fluctuates around its average behaviour given by Fourier's law, and a dissipation term which is a general function of the local energy density. The latter does not include a fluctuating term, as the dissipation fluctuations are enslaved to those of the density due to the (assumed) quasi-elasticity of the underlying microscopic dynamics. This quasi-elasticity of the microscopic dynamics is compatible with the existence of a finite dissipation over the diffusive time scale which is relevant at the mesoscopic level.
This general fluctuating hydrodynamic picture [1], together with an "additivity conjecture" [2], makes it possible to write the functional giving the probability of large deviations of the dissipated energy from the average behaviour. The functional has the same form as in the non-dissipative case, due to the subdominant role played by the dissipation noise. The above hydrodynamic description is shown to emerge from a general class of models, with stochastic dissipative dynamics at the microscopic level, in the large-system-size limit. Both the average macroscopic behaviour and the noise properties of the hydrodynamic fields are obtained from the microscopic dynamics. Finally, this general scheme is applied to the simplest dissipative version of the so-called KMP model [3] for heat transport. The theoretical predictions are compared to extensive numerical simulations, and excellent agreement is found [4-6].
[1] L. Bertini, A. De Sole, D. Gabrielli, G. Jona-Lasinio and C. Landim, Phys. Rev. Lett. 87, 040601 (2001); Phys. Rev. Lett. 94, 030601 (2005); J. Stat. Mech. P07014 (2007); J. Stat. Phys. 135, 857 (2009).
[3] C. Kipnis, C. Marchioro and E. Presutti, J. Stat. Phys. 27, 65 (1982).
[4] A. Prados, A. Lasanta and P. I. Hurtado, Phys. Rev. Lett. 107, 140601 (2011).
[5] A. Prados, A. Lasanta and P. I. Hurtado, Phys. Rev. E 86, 031134 (2012).
[6] P. I. Hurtado, A. Lasanta and A. Prados, Phys. Rev. E 88, 022110 (2013).

Physics-Biology interface seminar: Marie Doumic-Jauffret
Marie Doumic-Jauffret (INRIA Rocquencourt)

Séminaire du LPTMS: Tom Witten
A colloidal dance: synchronizing motion of Brownian particles
Thomas A. Witten (University of Chicago)

Séminaire exceptionnel du LPTMS: Srikanth H. Sastry
Dynamical transition and memory in cyclically deformed amorphous solids
Srikanth H. Sastry (TIFR Centre for Interdisciplinary Sciences, Hyderabad)
Amorphous solids are microscopically disordered materials that arise in diverse contexts, with polymeric glasses, soft glassy materials such as gels, and metallic glasses being typical examples. The mechanical response of amorphous solids is distinct in microscopic detail from that of crystalline solids, and is of practical interest because of the relevance of the elastic and plastic response and rheology of these substances to a variety of processes and applications. A computational investigation of the response of a model amorphous solid to oscillatory deformation is presented, after a brief overview of current approaches to understanding the mechanical response of amorphous solids. A dynamical transition is observed as the amplitude of the deformation is varied: for large values of the amplitude the system exhibits diffusive behavior and loss of memory of the initial conditions, whereas localization is observed for small amplitudes. The formation of memory of single and multiple inputs in such systems is also described.

Thesis defense: Pierre-Elie Larré
Quantum fluctuations and nonlinear effects in Bose-Einstein condensates: from dispersive shock waves to acoustic Hawking radiation
Pierre-Elie Larré
This thesis is devoted to the study of the analogue of Hawking radiation in Bose-Einstein condensates. In a first part, we present new configurations in which the flow of an atomic condensate realizes the acoustic equivalent of a gravitational black hole.
In each case we give an analytical description of the flow profile, of the associated quantum fluctuations, and of the spectrum of the Hawking radiation. The analysis of two-body correlations in position and momentum space provides signatures characterizing the occurrence of Hawking radiation. In a second part, we analyse the superfluid and/or dissipative properties of polariton condensates flowing past an obstacle. The study of polarization waves in these systems finally allows us to propose promising signatures for the observation of the Hawking effect in polariton gases and in two-component atomic condensates.

Séminaire exceptionnel du LPTMS: Shimshon Bar-Ad
Transonic flow of light, sonic horizons and analogue gravity
Shimshon Bar-Ad (Tel Aviv University)
In 1981 W. G. Unruh predicted that a thermal spectrum of sound waves would be emitted from the sonic horizon in transonic fluid flow, in analogy to black-hole evaporation. Based on this idea, extended to the realm of nonlinear optics, we explore an optical analog of the Laval nozzle, in which light propagation through a suitably shaped waveguide, filled with a self-defocusing nonlinear medium, mimics the transonic acceleration of a real fluid expanding through a propulsive exhaust nozzle. Experimental demonstrations of transonic flow in prototype optical nozzles will be presented, and the prospect of observing fluctuations that are classical analogs of Hawking radiation will be discussed.

Séminaire du LPTMS: Oleg Borisov
Poisson-Boltzmann theory of polyelectrolyte brushes
Oleg Borisov (IPREM, Université de Pau)
A polyelectrolyte brush results when long flexible ionic polymer chains are attached by one end to an impermeable solid surface, or to the surface of a colloidal particle, and immersed in a solvent. Understanding the structure-property relation in polymer and polyelectrolyte brushes is important for their application to surface modification, in particular in the field of biomaterial engineering. Functional polymer brush-like structures are found in biological systems: examples of natural biopolyelectrolyte brushes include extracellular polysaccharides on bacterial surfaces, neurofilaments and microtubules with associated proteins, aggrecan macromolecules in articular cartilage, casein micelles, etc. The possibility of triggering conformational transitions in surface-attached polymeric layers by external physical (temperature, electrical voltage, etc.) or chemical (pH, salinity, solvent composition, etc.) fields opens perspectives for the design of stimuli-responsive interfaces. A theory of the equilibrium structural properties of polyelectrolyte brushes is developed within the self-consistent-field Poisson-Boltzmann approximation. We analyze the influence of the ionic strength and (in the case of weak polyacid or polybase brushes) of the pH of the solution on the chain conformations in the brush. The localization of counterions inside the polyelectrolyte brush is the most essential effect governing the brush properties. The repulsive forces acting between surfaces or colloidal particles with grafted polyelectrolytes are calculated. Furthermore, we discuss the interaction of polyelectrolyte brushes with multivalent counterions and with globular proteins, and the effect of their accumulation in the brushes.
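For context only (this is a standard textbook relation, not material from the abstract above): the mean-field equation underlying such self-consistent-field descriptions is the Poisson-Boltzmann equation for the electrostatic potential $\psi(\mathbf{r})$ created by mobile ions of valences $z_i$ and bulk concentrations $c_i^0$ in a medium of permittivity $\varepsilon$,

\[\nabla^2\psi(\mathbf{r})=-\frac{e}{\varepsilon}\sum_i z_i\,c_i^0\,\exp\!\left(-\frac{z_i e\,\psi(\mathbf{r})}{k_B T}\right),\]

which, in brush theories of the kind described above, is solved together with the self-consistent-field equations for the polymer density profile.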
Physics-Biology interface seminar: Timo Betz
Timo Betz (Institut Curie, Paris)

Physics-Biology interface seminar: Daniel Axelrod
Molecule motion inside secretory granules before and during exocytosis
Daniel Axelrod (University of Michigan)

Séminaire du LPTMS: Denis Basko
Weak chaos and Arnold diffusion in the strongly disordered nonlinear Schrödinger chain
Denis Basko (LPMMC, Université Joseph Fourier)
I will discuss the long-time equilibration dynamics of a strongly disordered one-dimensional chain of coupled, weakly anharmonic classical oscillators, one of the simplest models allowing one to study the effect of a classical nonlinearity on Anderson localization. The system exhibits chaotic behavior, and it is shown that chaos in this system has a very particular spatial structure: it can be viewed as a dilute gas of chaotic spots. Each chaotic spot corresponds to a stochastic pump which drives the Arnold diffusion of the oscillators surrounding it, thus leading to their relaxation and thermalization. The most important mechanism of relaxation at long distances is provided by the random migration of the chaotic spots along the chain, which bears an analogy to variable-range hopping of electrons in strongly disordered solids.

Tri-Séminaire de Physique Statistique: Sergey Nazarenko
Condensation and BKT transition in the 2D defocusing nonlinear Schrödinger equation
Sergey Nazarenko (Mathematics Institute, University of Warwick)
I will consider a 2D nonlinear Schrödinger equation model in a periodic box, truncated in Fourier space, and study the condensation phenomenon, the Berezinskii-Kosterlitz-Thouless transition, and their links with wave turbulence theory.

Séminaire du LPTMS: Yasar Atas et François Landes
Distribution of the ratio of level spacings in random matrix ensembles
Yasar Atas (LPTMS)
Initially introduced by Wigner as a description of the energy levels of heavy atomic nuclei, Random Matrix Theory (RMT) is nowadays an active field of theoretical physics with ramifications in various disciplines such as number theory, quantum chaos and finance, to cite just a few. In RMT, the distribution of level spacings plays a very important role: it has been widely used since the inception of the theory and is considered the "reference" measure of spectral statistics. Since different models may and do have different level densities, one has to perform a procedure on the spectrum, called unfolding, in order to obtain the spacing distribution. This procedure is not unique, and the choice of the unfolding procedure is quite arbitrary. It then seems natural to search for another measure which is independent of the level density. This was done quite recently by Oganesyan and Huse [1]: instead of looking at the spacings, they look at the ratio of two consecutive level spacings. This quantity has the advantage that it does not require any unfolding. In this talk, I will make a few remarks on random matrix theory and the unfolding procedure. I will then derive simple expressions for the probability distribution of the ratio of two consecutive spacings for the classical ensembles of random matrices [2]. These expressions, which were lacking in the literature, will be compared to numerical data from a quantum many-body lattice model and from the zeros of the Riemann zeta function. (A small numerical illustration of this ratio statistic is sketched after the entries below.)
[1] V. Oganesyan and D. A. Huse, Phys. Rev. B 75, 155111 (2007).
[2] Y. Y. Atas, E. Bogomolny, O. Giraud and G. Roux, Phys. Rev. Lett. 110, 084101 (2013).
Avalanche statistics in disordered visco-elastic interfaces
François Landes (LPTMS)
Many complex systems respond to a continuous input of energy by an accumulation of stress over time, punctuated by sudden energy releases called avalanches. Recently, it has been pointed out that several basic features of avalanche dynamics are induced, at the microscopic level, by relaxation processes usually neglected by conventional models. I will present a minimal model with relaxation, its mean-field treatment, and a quick snapshot of the finite-dimensional results. In mean field, our model yields periodic behavior (with a new, emergent time scale), with events that span the whole system. In finite dimension (2D), the system-sized mean-field events become local, and numerical simulations give qualitative and quantitative results similar to earthquakes observed in reality.

Physics-Biology interface seminar: Roberto Toro
Role of mechanical constraints on the establishment of neocortical organisation
Roberto Toro (Institut Pasteur)

Journal-Club: Vincent Michal
Dynamics of a quantum phase transition: exact solution of the quantum Ising model
Vincent Michal (postdoc, LPTMS)
The talk will be based on the paper "Dynamics of a quantum phase transition: exact solution of the quantum Ising model" by Jacek Dziarmaga, PRL 95, 245701 (2005). Abstract of the paper: The quantum Ising model is an exactly solvable model of a quantum phase transition. This Letter gives an exact solution when the system is driven through the critical point at a finite rate. The evolution goes through a series of Landau-Zener level anticrossings when pairs of quasiparticles with opposite pseudomomenta get excited with a probability depending on the transition rate. The average density of defects excited in this way scales like the square root of the transition rate. This scaling is the same as that obtained when the standard Kibble-Zurek mechanism of thermodynamic second-order phase transitions is applied to the quantum phase transition in the Ising model.

Thesis defense: Olga Valba
Statistical analysis of networks and biophysical systems of complex architecture
Olga Valba (LPTMS)
Complex organization is found in many biological systems. For example, biopolymers can possess a hierarchical structure, which underlies their functional peculiarities. Artificially constructed biological networks are another common object of statistical physics with rich functional properties. The aim of this thesis is to develop methods for studying statistical systems of complex architecture with essential biological significance. The first part addresses the statistical analysis of random biopolymers. Beyond the evolutionary context, our study covers more general problems of planar topology that appear in the description of various systems, ranging from gauge theory to biophysics. In the second part of this work we focus our investigation on the statistical properties of artificial and real networks. The importance of the obtained results for applied biophysics is discussed. The formation of stable patterns of motifs in random networks under selective evolution, in the context of the creation of islands of "superfamilies", is also considered.
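As announced in the Atas-Landes entry above, here is a small numerical illustration of the spacing-ratio statistic. It is a sketch of our own (not code from the talk): it draws Gaussian Orthogonal Ensemble (GOE) matrices and computes the ratios r~_n = min(s_{n+1}/s_n, s_n/s_{n+1}) of consecutive spacings, which require no unfolding; the mean should come out near the known large-matrix GOE value of about 0.5307 (a Poisson spectrum gives 2 ln 2 - 1 ≈ 0.3863).

```python
import numpy as np

def mean_spacing_ratio_goe(dim=200, samples=50, seed=0):
    """Mean of r~_n = min(s_{n+1}/s_n, s_n/s_{n+1}) over GOE spectra.

    The ratio of consecutive spacings is insensitive to the local level
    density, so no unfolding of the spectrum is needed.
    """
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(samples):
        a = rng.normal(size=(dim, dim))
        h = (a + a.T) / 2                    # real symmetric (GOE) matrix
        levels = np.linalg.eigvalsh(h)
        s = np.diff(levels)                  # consecutive level spacings
        s = s[dim // 4 : -dim // 4]          # keep the bulk, avoid edge effects
        r = s[1:] / s[:-1]
        ratios.append(np.minimum(r, 1.0 / r))
    return float(np.concatenate(ratios).mean())

print(mean_spacing_ratio_goe())  # ~0.53 for GOE; ~0.39 for a Poisson spectrum
```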
Physics-Biology interface seminar: Emmanuel Margeat
Looking at transcription mechanisms with single-molecule FRET
Emmanuel Margeat (Centre de Biochimie Structurale, Montpellier)
Exceptional seminar hosted by Karen Perronet

Séminaire exceptionnel du LPTMS: Eugene Sukhorukov
Fermi Edge Singularity in Quantum Hall Systems far from Equilibrium
Eugene Sukhorukov (Geneva University)
We study non-equilibrium one-dimensional physics through the example of a minimal setup, inspired by a recent experiment, based on a Mach-Zehnder interferometer composed of quantum Hall edge channels at integer filling factor, one of which is coupled by the Coulomb interaction to an artificial impurity. The fluctuations of the impurity charge due to tunneling transitions lead to a resonant suppression of the coherence in the channel and of the visibility of the interference pattern. We consider the regime where the transitions are induced by the non-equilibrium partitioning noise created in the interferometer itself by the beam splitter, and describe the strong effects of both the orthogonality catastrophe and the non-equilibrium noise, which are revealed in the shape of the transition rates, in the asymmetry of the visibility dip, and in the non-trivial dependence of the dip position on the transparency.

Journal-Club: Pierre Ronceray
Improving the Density of Jammed Disordered Packings Using Ellipsoids
A. Donev, I. Cisse, D. Sachs, E. A. Variano, F. H. Stillinger, R. Connelly, S. Torquato and P. M. Chaikin, Science 303, 990-993 (2004).
Packing problems, such as how densely objects can fill a volume, are among the most ancient and persistent problems in mathematics and science. For equal spheres, it has only recently been proved that the face-centered cubic lattice has the highest possible packing fraction f = Pi/sqrt(18) ≈ 0.74. It is also well known that certain random (amorphous) jammed packings have f ≈ 0.64. Here, we show experimentally and with a new simulation algorithm that ellipsoids can randomly pack more densely, up to f ≈ 0.68-0.71 for spheroids with an aspect ratio close to that of M&M's Candies, and even approach f ≈ 0.74 for ellipsoids with other aspect ratios. We suggest that the higher density is directly related to the higher number of degrees of freedom per particle, and thus the larger number of particle contacts required to mechanically stabilize the packing. We measured the number of contacts per particle, Z ≈ 10 for our spheroids, as compared to Z ≈ 6 for spheres. Our results have implications for a broad range of scientific disciplines, including the properties of granular media and ceramics, glass formation, and discrete geometry.

Séminaire du LPTMS: Fabio Deelan Cunden
Polarized ensembles of random pure states
Fabio Deelan Cunden (Dipartimento di Matematica, Università di Bari)
In recent years many researchers have been investigating the typical properties of random pure states, i.e. unit vectors drawn at "random" from the Hilbert space associated with a quantum system. This subject has attracted attention from several directions, and some important results have been achieved. The standard ensemble which has been intensively investigated is that of random pure states with the measure induced by the Haar measure on the unitary group. This ensemble, being the maximally symmetric one, implements in a natural way the case of minimal knowledge about a quantum state. We recently presented a new family of polarized ensembles of random pure quantum states.
Our idea is to move beyond the unbiased ensemble by using a natural operation at hand in the Hilbert space, namely the superposition of vector states. These ensembles are quite manageable and manifestly show that the unitarily invariant measures interact nicely with the operation of linear superposition of states. Our approach has been oriented towards the study of typical bipartite entanglement between subsystems, as measured by the local purity. This strategy yields an efficient and simple sampling of random pure states with a fixed value of the purity, and paves the way to further explorations and a deeper characterization of the geometry of isopurity manifolds.
F. D. Cunden, P. Facchi and G. Florio, J. Phys. A: Math. Theor. 46, 315306 (2013).

Séminaire du LPTMS: Anders Carlsson
Self-Organizing Waves and Patterns of Proteins in Biological Cells
Anders Carlsson (Washington University)
Self-organized patterns, such as traveling waves and symmetry-breaking distributions of chemical species, demonstrate how simple laws of motion can lead to complex behaviors in physical systems. Such patterns result from a combination of positive and negative feedback effects with different time scales. The talk will discuss the example of waves of the protein actin in biological cells. Actin, the most abundant intracellular protein in mammals, occurs either as an isolated protein in solution or in the form of filaments which have mechanical rigidity and are crosslinked into a gel. Recent experiments have shown that filamentous actin forms spontaneous waves that cause protrusion of the cell membrane. Such waves may serve as a "clock" that helps a cell explore its environment. A theoretical model of actin waves will be presented, based on the three-dimensional structure of the gel formed by actin filaments, interacting reciprocally with proteins in the cell membrane. Implementation of this model shows that positive feedback is inherent in the growth process of the actin gel, while negative feedback arises from the actin-membrane interaction. As the concentration of key proteins in the cell is varied, the model predicts transitions between different phases, including traveling waves or patches and static symmetry-breaking polarization of the cell.

Physics-Biology interface seminar: Aurélien Bancaud
Aurélien Bancaud (Laboratoire d'analyse et d'architecture des systèmes, Toulouse)

Séminaire du LPTMS: Claire Lemarchand
Molecular composition and mechanical properties of bitumen: a molecular dynamics study
Claire Lemarchand (Roskilde University)
Bitumen is one of the essential components of roads. Its mechanical properties need to be controlled in order to reduce the rolling resistance developed between the tyre and the road as a vehicle travels on it. In this presentation, I will speak about the link between the molecular composition and the mechanical properties of "Cooee" bitumen. This model bitumen is composed of four realistic molecule types: saturated hydrocarbon, resinous oil, resin, and asphaltene. It is studied with molecular dynamics (MD) simulations. Asphaltene molecules are large, flat, aromatic molecules. The MD simulations are able to reproduce the aggregation of asphaltene molecules into nanoaggregates. The size of the nanoaggregates was estimated in the simulations and shown to be comparable to experimental results. The dynamics of the nanoaggregates was precisely quantified in MD, giving new insights into their formation.
Finally, the influence of these nanoaggregates on the mechanical properties and overall dynamics of bitumen was investigated. This last part of the work enables us to formulate some rules on the chemical composition of bitumen for lowering its viscosity.

Séminaire "Fluides quantiques": Matteo Zaccanti
Ultracold 6Li-40K Fermi mixtures with resonant interactions
Matteo Zaccanti (LENS, Florence)

Tri-Séminaire de Physique Statistique: David Holcman
Analysis of superresolution data using the Langevin equation: Brownian receptor trafficking on neuronal cells
David Holcman (Institut de Biologie, École Normale Supérieure - IBENS)
What is the basis of learning and memory in the brain? Neurons are the main players, and synapses, the micro-contacts between them, play a critical role. How synapses are organized at the molecular level, and what defines their synaptic strength at that level, remains unclear, although it results from the combined effect of many molecules acting together. Thus the number of receptors and molecules must be well regulated. I will present here an integrative modeling approach to synaptic transmission that accounts for receptor dynamics, and a novel method to extract local biophysical properties from thousands of individual trajectories obtained from superresolution microscopy data. This talk summarizes our long-standing effort to integrate the key parameters involved in regulating synaptic transmission and plasticity.

Séminaire "Fluides quantiques": Dmytro Fil
Superfluidity of electron-hole pairs in bilayers
Dmytro Fil (Institute for Single Crystals, NASU, Kharkov)

Journal-Club: Hao Lee
Multi-orbital and density-induced tunneling of bosons in optical lattices
Hao Lee (PhD student, LPTMS)
Paper by Dirk-Sören Lühmann, Ole Jürgensen and Klaus Sengstock, New Journal of Physics 14 (2012) 033021.
We show that multi-orbital and density-induced tunneling have a significant impact on the phase diagram of bosonic atoms in optical lattices. Off-site interactions lead to density-induced hopping, the so-called bond-charge interactions, which can be identified with an effective tunneling potential and can reach the same order of magnitude as conventional tunneling. In addition, interaction-induced higher-band processes also give rise to strongly modified tunneling, on-site and bond-charge interactions. We derive an extended occupation-dependent Hubbard model with multi-orbitally renormalized processes and compute the corresponding phase diagram. It substantially deviates from the single-band Bose-Hubbard model and predicts strong changes of the superfluid-to-Mott-insulator transition. In general, the presented beyond-Hubbard physics plays an essential role in bosonic lattice systems and has an observable influence on experiments with tunable interactions.

Séminaire du LPTMS: Thomas Barthel
Algebraic versus exponential decoherence in dissipative many-particle systems
Thomas Barthel (LPTMS)
Ultimately, every quantum system of interest is coupled to some form of environment, which leads to decoherence. Until our recent study, it was assumed that, as long as the environment is memory-less (i.e. Markovian), the temporal coherence decay is always exponential, to the degree that this behavior was synonymously associated with decoherence. However, the situation can change if the system itself is a many-body system. In this case, the interplay between dissipation and internal interactions gives rise to a wealth of novel phenomena. In particular, the coherence decay can change to a power law.
After recapitulating the mathematical framework and basic notions of decoherence, I will discuss an open XXZ chain for which the decoherence time diverges in the thermodynamic limit. The coherence decay is then algebraic instead of exponential. In contrast, decoherence in the open transverse-field Ising model is found to be always exponential. In this case, the internal interactions can both facilitate and impede the environment-induced decoherence. The results are based on quasi-exact simulations using the time-dependent density matrix renormalization group (tDMRG) and are explained on the basis of perturbative treatments. Reference: Z. Cai and T. Barthel, PRL 111, 150403 (2013).

Séminaire du LPTMS: Giovanni Acquaviva
Semi-classical and quantum effects in curved metrics
Giovanni Acquaviva (University of Trento)
Since the work of Hartle and Hawking, black-hole spacetimes have constituted an excellent playground for the study of quantum effects in curved metrics: in this context a great variety of theoretical tools has been employed in order to unveil the connections between general relativity and the quantum realm. In the seminar I will present two approaches that allow one to highlight this aspect: i) the semi-classical tunneling method, and ii) the quantum field-theoretical modeling of Unruh-DeWitt detectors. One of the open questions in this field regards the ever-present role of thermodynamics: does it just constitute a parallel arising from a fundamentally statistical treatment, or is there some more profound link?

Physics-Biology interface seminar: Pascal Martin
The hair-cell bundle as a mechanosensor and amplifier for hearing
Pascal Martin (Institut Curie, Paris)

Séminaire du LPTMS: Jon Keating
Random Matrix Theory and quantum spin chains
Jon Keating (Bristol University)
I will discuss a random matrix model for understanding statistical features of the spectrum and excited states of certain families of quantum spin chains. In the case of the states, this provides information about entanglement properties.

Tri-Séminaire de Physique Statistique: Denis Bernard
Real-Time Imaging of Quantum and Thermal Fluctuations: a Detour into Quantum Noise
Denis Bernard (LPTENS)
In the last decade, progress has been achieved in realising and manipulating stable and controllable quantum systems, and this has made it possible to experimentally study fundamental questions posed in the early days of quantum mechanics. We shall theoretically discuss recent cavity QED experiments on non-demolition quantum measurements. While they nicely illustrate the possibility of implementing efficient quantum state manipulations, these experiments pose a few questions, such as: What does it mean to observe a progressive wave function collapse in real time? How can it be described? What do we learn from it? Their analysis will allow us, on the one hand, to link these experiments to basic notions of probability and information theory and, on the other hand, to touch upon notions of quantum noise. As an illustration, we shall also look at quantum systems in contact with a heat bath and describe the main physical features of thermally activated quantum jumps.

Comité AERES

Physics-Biology interface seminar: François Nédélec
Optimal Design of Elongating Yeast Spindles
François Nédélec (EMBL Heidelberg)
Joint seminar with LEBS Gif-sur-Yvette

Journal Club: Clélia de Mulatier
B. Houchmandzadeh, Phys. Rev. E 80, 051920 (2009): Theory of neutral clustering for growing populations
Clélia de Mulatier (PhD student, LPTMS)
The spatial distribution of most species in nature is nonuniform. We have shown recently [B. Houchmandzadeh, Phys. Rev. Lett. 101, 078103 (2008)], on an experimental ecological community of amoebae, that the most basic facts of life, birth and death, are enough to cause considerable aggregation, which cannot be smoothed out by random movements of the organisms. This clustering, termed neutral and always present, is independent of external causes and social interaction. We develop here the theoretical groundwork of this phenomenon by explicitly computing the pair-correlation function and the variance-to-mean ratio of the above neutral model, and compare them to numerical simulations. See also: W. R. Young, A. J. Roberts and G. Stuhne, Nature 412, 328-331 (19 July 2001).

Séminaire du LPTMS: Haggai Landa
Solitons and nonlinearity with trapped ions
Haggai Landa (LPTMS)
Solitons are ubiquitous in many areas of physics and the natural sciences. The quantum dynamics of solitons, however, has proven experimentally challenging to measure. A system in which state-of-the-art quantum control over individual degrees of freedom has been demonstrated is the ion trap, where, due to the time-dependent trapping fields and the Coulomb interaction, trapped ions manifest rich nonlinear dynamics. I will present theoretical work and collaborations with experimental groups from recent years, focusing in particular on the study of discrete solitons with trapped ions and, more generally, on other nonlinear phenomena, both classical and quantum mechanical.

Séminaire du LPTMS: Eduardo Sanz
Crystallization in molecular and colloidal systems by means of computer simulations
Eduardo Sanz (Universidad Complutense de Madrid)
Crystallization is often used in industrial separation and purification processes. Most drugs, for instance, are stored and delivered in crystalline form. Moreover, climate change is strongly influenced by the crystallization of water in tropospheric clouds. It is therefore important to understand the kinetics of this phase transition in depth, and computer simulation is a very suitable tool for this purpose. The reason is that crystallization starts with a nucleation step consisting of the emergence of a small embryo of the crystal phase in the bulk of the parent fluid phase. In molecular systems, such an embryo is rather small (it contains of the order of 10^2-10^3 molecules) and short-lived (it lasts for ~10^-9 seconds), and it therefore cannot be directly visualized experimentally. In this talk I will discuss several numerical studies of crystallization. First, I will discuss a case study in which several crystal phases (polymorphs) compete to nucleate in a metastable fluid of oppositely charged colloids. Then, I will present a computational study of homogeneous ice nucleation at low supercooling. Finally, I will present a study of the mechanism by which a colloidal glass becomes crystalline.

Physics-Biology interface seminar: Yang Si
Fluorescent Nano-objects for Bioimaging Applications
Yang Si (ENS Cachan)
Special seminar: poster prize from the NOMBA workshop
Atomic orbital

An atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The functions may serve as a three-dimensional graph of an electron's likely location. The term may thus refer directly to the physical region defined by the function where the electron is likely to be.[2] Specifically, atomic orbitals are the possible quantum states of an individual electron in the collection of electrons around a single atom, as described by the orbital function.

Despite the obvious analogy to planets revolving around the Sun, electrons cannot be described as solid particles, and so atomic orbitals rarely, if ever, resemble a planet's elliptical path. A more accurate analogy might be that of a large and often oddly shaped atmosphere (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this atmosphere only when a single electron is present in an atom. When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus, so that the resulting collection (sometimes termed the atom's "electron cloud"[3]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

[Figure: Electron atomic and molecular orbitals. The chart of orbitals (left) is arranged by increasing energy (see Madelung rule). Atomic orbitals are functions of three variables (two angles and the distance r from the nucleus). The images are faithful to the angular component of each orbital, but not entirely representative of the orbital as a whole.]

The idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued in 1913 by Niels Bohr,[4] and the Japanese physicist Hantaro Nagaoka had published an orbit-based hypothesis for electronic behavior as early as 1904.[5] However, it was not until 1926 that the solution of the Schrödinger equation for electron waves in atoms provided the functions for the modern orbitals.[6] Because of the difference from classical mechanical orbits, the term "orbit" for electrons in atoms has been replaced with the term orbital, first coined by the chemist Robert Mulliken in 1932.[7]

Atomic orbitals are typically described as "hydrogen-like" (meaning one-electron) wave functions over space, categorized by the n, l, and m quantum numbers, which correspond to the electron's energy, its angular momentum, and an angular momentum direction, respectively. Each orbital, defined by a different set of quantum numbers and containing a maximum of two electrons, is also known by the classical names used in electron configurations. These classical orbital names (s, p, d, f) are derived from the characteristics of their spectroscopic lines: sharp, principal, diffuse, and fundamental, the rest being named in alphabetical order.[8][9]
From about 1920, even before the advent of modern quantum mechanics, the aufbau principle (construction principle), according to which atoms are built up of pairs of electrons arranged in simple repeating patterns of increasing odd numbers (1, 3, 5, 7, ...), had been used by Niels Bohr and others to infer the presence of something like atomic orbitals within the total electron configuration of complex atoms. In the mathematics of atomic physics, it is also often convenient to reduce the electron functions of complex systems into combinations of the simpler atomic orbitals. Although each electron in a multi-electron atom is not confined to one of the "one-or-two-electron atomic orbitals" of the idealized picture above, the electron wave function may still be broken down into combinations which bear the imprint of atomic orbitals; as though, in some sense, the electron cloud of a many-electron atom were still partly "composed" of atomic orbitals, each containing only one or two electrons. The physicality of this view is best illustrated by the repetitive nature of the chemical and physical behavior of the elements, which results in the natural ordering known since the 19th century as the periodic table of the elements. In this ordering, the repeating periodicity of 2, 6, 10, and 14 elements in the periodic table corresponds to the total number of electrons which occupy a complete set of s, p, d and f atomic orbitals, respectively.

Orbital names

Orbitals are given names of the form $X\,\mathrm{type}^{y}$, where X is the energy level corresponding to the principal quantum number n, "type" is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular quantum number l, and y is the number of electrons in that orbital. For example, $2p^4$ denotes the subshell with n = 2 and l = 1 (the p subshell of the second shell), occupied by four electrons.

Formal quantum mechanical definition

In quantum mechanics, the state of an atom, i.e. an eigenstate of the atomic Hamiltonian, is expanded (see configuration interaction expansion and basis (linear algebra)) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one also considers their spin component, one speaks of atomic spin orbitals.) In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labelled by a set of quantum numbers summarized in the term symbol, and are usually associated with particular electron configurations, i.e. with occupation schemes of atomic orbitals (e.g. 1s² 2s² 2p⁶ for the ground state of neon; term symbol ¹S₀).

Connection to the uncertainty relation

In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atomic number n of each orbital became known as an n-sphere in a three-dimensional atom and was pictured as the mean energy of the probability cloud of the electron's wave packet surrounding the atom. Although Heisenberg used infinite sets of positions for the electron in his matrices, this does not mean that the electron could be anywhere in the universe. Rather, there are several laws showing that the electron must be in one localized probability distribution. An electron is described by its energy in Bohr's atom, a description which was carried over to matrix mechanics. Therefore, an electron in a certain n-sphere had to be within a certain range from the nucleus, depending upon its energy. This restricts its location.
Hydrogen-like atoms

The simplest atomic orbitals are those that occur in an atom with a single electron, such as the hydrogen atom. In this case the atomic orbitals are the eigenstates of the hydrogen Hamiltonian. They can be obtained analytically (see hydrogen atom). An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take the same form.

A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, l, and ml. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of a hydrogen-like atom are its atomic orbitals. However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See the linear combination of atomic orbitals molecular orbital method.

The quantum number n first appeared in the Bohr model. It determines, among other things, the distance of the electron from the nucleus; in the Bohr picture, all electrons with the same value of n lay at the same distance. Modern quantum mechanics confirms that these orbitals are closely related. For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of l are even more closely related, and are said to comprise a "subshell".

Qualitative characterization

Limitations on the quantum numbers

An atomic orbital is uniquely identified by the values of the three quantum numbers, and each set of the three quantum numbers corresponds to exactly one orbital, but the quantum numbers only occur in certain combinations of values. The rules governing their possible values are as follows: the principal quantum number takes positive integer values, n = 1, 2, 3, ...; for each n, the angular quantum number ranges over l = 0, 1, ..., n - 1; and for each l, the magnetic quantum number ranges over ml = -l, ..., 0, ..., +l. Thus for n = 1 the only orbital is (l = 0, ml = 0), while for n = 2 there are l = 0 (ml = 0) and l = 1 (ml = -1, 0, 1), and so on. (A short sketch enumerating these combinations is given at the end of this article.)

The shapes of orbitals

[Figure: The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wavefunction phase.]

The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (sometimes referred to as a dumbbell). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective values of ml.

Orbitals table

[Table of orbital images (not reproduced): columns s; pz, px, py; dz², dxz, dyz, dxy, dx²-y²; fz³, fxz², fyz², fxyz, fz(x²-y²), fx(x²-3y²), fy(3x²-y²); rows n = 1 and up.]

Orbital energy

The position of each subshell in the overall energy ordering (the Madelung sequence) is:

n\l    s    p    d    f    g
1      1
2      2    3
3      4    5    7
4      6    8   10   13
5      9   11   14   17   21
6     12   15   18   22   26
7     16   19   23   27   31
8     20   24   28   32   36

References

1. Milton Orchin, Roger S. Macomber, Allan Pinhas, and R. Marshall Wilson (2005), "Atomic Orbital Theory".
3. Feynman, Richard; Leighton; Sands (2006), The Feynman Lectures on Physics, The Definitive Edition, Vol. 1, lecture 6, p. 11. Addison Wesley. ISBN 0-8053-9046-4.
5. Nagaoka, Hantaro (May 1904), "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity", Philosophical Magazine 7: 445-455. http://www.chemteam.info/Chem-History/Nagaoka-1904.html
7. Mulliken, Robert S. (July 1932), "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations", Phys. Rev. 41 (1): 49-71. doi:10.1103/PhysRev.41.49. http://prola.aps.org/abstract/PR/v41/i1/p49_1

Further reading

- Tipler, Paul; Ralph Llewellyn (2003). Modern Physics (4th ed.). New York: W. H. Freeman and Company. ISBN 0-7167-4345-0.
- Scerri, Eric (2007). The Periodic Table, Its Story and Its Significance. New York: Oxford University Press. ISBN 978-0-19-530573-9.
Simple English summary

In chemistry and nuclear physics, the electron cloud is a way to describe where electrons are when they go around the nucleus of an atom. The electron cloud model is different from the older model by Niels Bohr. Bohr talked about electrons going around the nucleus in fixed circles, the same way that planets go around the Sun. The electron cloud model says that we cannot know exactly where an electron is, but that the electron is more likely to be in specific areas of the atom. It is the most modern and accepted model of the atom.
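To make the quantum-number rules above concrete, here is a short, self-contained sketch (our own illustration, not part of the article). It enumerates the allowed (n, l, ml) triples, recovers the 2n² electrons-per-shell capacities, and reproduces the Madelung energy ordering tabulated above (fill in order of increasing n + l, breaking ties by increasing n).

```python
# Illustrative sketch: enumerate hydrogen-like quantum numbers and
# reproduce the shell capacities and the Madelung energy ordering.

SUBSHELL = "spdfghik"  # letter codes for l = 0, 1, 2, ...

def orbitals(n_max):
    """Yield every allowed (n, l, ml) triple for n = 1..n_max."""
    for n in range(1, n_max + 1):
        for l in range(n):                   # l = 0 .. n-1
            for ml in range(-l, l + 1):      # ml = -l .. +l
                yield n, l, ml

# Each (n, l, ml) orbital holds at most two electrons -> 2*n^2 per shell.
for n in range(1, 5):
    count = sum(1 for triple in orbitals(n) if triple[0] == n)
    print(f"shell n={n}: {count} orbitals, {2 * count} electrons")

# Madelung rule: subshells fill in order of increasing n + l,
# and, for equal n + l, in order of increasing n.
subshells = sorted({(n, l) for n, l, _ in orbitals(8)},
                   key=lambda nl: (nl[0] + nl[1], nl[0]))
print(" < ".join(f"{n}{SUBSHELL[l]}" for n, l in subshells[:10]))
# -> 1s < 2s < 2p < 3s < 3p < 4s < 3d < 4p < 5s < 4d
```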
Theoretical background

Classical and quantum description of a particle

In classical mechanics, the description of the dynamical state of a particle at a given time t is based on the specification of six parameters: the components of the position $\mathbf{r}(t)$ and of the linear momentum $\mathbf{p}(t)$ of the particle. All the dynamical variables (energy, angular momentum, etc.) are determined by the specification of $\mathbf{r}(t)$ and $\mathbf{p}(t)$. Newton's laws enable us to calculate $\mathbf{r}(t)$ through the solution of second-order differential equations with respect to time. Consequently, they fix the values of $\mathbf{r}(t)$ and $\mathbf{p}(t)$ for any time t once they are known at the initial time.

Quantum mechanics uses a more complicated description of phenomena. The dynamical state of a particle at a given time is characterised by a wave function $\Psi(\mathbf{r},t)$, which contains all the information it is possible to obtain about the particle. The state no longer depends on six parameters, but on an infinite number of parameters: the values of the wave function $\Psi(\mathbf{r},t)$ at all points $\mathbf{r}$ of coordinate space. For the classical idea of a trajectory (the succession in time of the various states of the classical particle) we must substitute the idea of the propagation of the wave associated with the particle. $\Psi(\mathbf{r},t)$ is interpreted as the probability amplitude of the particle's presence, and $|\Psi(\mathbf{r},t)|^2$ as the probability density of finding the particle, at time t, in a volume element $d^3r$ situated at the point $\mathbf{r}$. The equation describing the evolution of the wave function $\Psi(\mathbf{r},t)$ is the Schrödinger equation.

The result of a measurement of an arbitrary dynamical variable must belong to the set of eigenvalues of the operator representing that variable. With each eigenvalue is associated an eigenstate, the eigenfunction of the operator belonging to the particular eigenvalue. If a measurement yields a particular eigenvalue, the corresponding eigenfunction is the wave function of the particle immediately after the measurement. The predictions of the measurement results are only probabilistic: they yield the probability of obtaining a given result in the measurement of a dynamical variable.

The Schrödinger equation

When a particle of mass m is subjected to the influence of a scalar potential $V(\mathbf{r},t)$, the Hamiltonian operator of the particle in the Schrödinger representation can be expressed as $H = T + V(\mathbf{r},t)$, where T is the kinetic energy operator,

\[T=\frac{\mathbf{p}^2}{2m}=-\frac{\hbar^2}{2m}\nabla^2.\]

The Schrödinger equation

\[i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t)=H\,\Psi(\mathbf{r},t),\qquad(1)\]

which governs the time evolution of the physical system, is of first order in t. From this it follows that, given the initial state $\Psi(\mathbf{r},t_0)$, the final state $\Psi(\mathbf{r},t)$ at any subsequent time t is determined. There is no indeterminacy in the time evolution of a quantum system. Indeterminacy appears only when a physical quantity is measured, the state function then undergoing an unpredictable modification. However, between two measurements, the state function evolves in a perfectly deterministic way in accordance with equation (1). The Schrödinger equation is linear and homogeneous; its solutions are linearly superposable, which leads to wave effects.

Wave packet solution of the Schrödinger equation

Consider a particle whose potential energy is zero (or has a constant value) at every point in space. The particle is not subjected to any force; it is said to be free.
When $V(\mathbf{r},t)=0$, the Schrödinger equation becomes

\[i\hbar\,\frac{\partial}{\partial t}\Psi(\mathbf{r},t)=-\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},t),\qquad(2)\]

and a plane wave of the type

\[\Psi(\mathbf{r},t)=A\,e^{i(\mathbf{k}\cdot\mathbf{r}-\omega t)}\qquad(3)\]

(where A is a constant) is a solution of it, on the condition that k and ω satisfy the dispersion relation for a free particle:

\[\omega=\frac{\hbar k^2}{2m}.\qquad(4)\]

Since $|\Psi(\mathbf{r},t)|^2=|A|^2$, a plane wave of this type represents a particle whose probability of presence is uniform throughout all space. From the principle of superposition it follows that every linear combination of plane waves satisfying (4) will also be a solution of the Schrödinger equation. Such a superposition can be written as

\[\Psi(\mathbf{r},t)=\frac{1}{(2\pi)^{3/2}}\int g(\mathbf{k})\,e^{i(\mathbf{k}\cdot\mathbf{r}-\omega(k)t)}\,d^3k.\qquad(5)\]

Here $d^3k$ represents, by definition, the infinitesimal volume element in k-space, and $g(\mathbf{k})$, which can be complex, must be sufficiently regular to allow differentiation inside the integral. A wave function such as (5), a superposition of plane waves, is called a three-dimensional wave packet.

A plane wave, whose modulus is constant throughout all space, is not square-integrable; therefore, rigorously, it cannot represent a physical state of the particle. On the other hand, a superposition of plane waves can be square-integrable, and it can be shown that any square-integrable solution can be written in the form (5). The form of the wave packet at a given instant of time, if we choose this instant as the time origin, is

\[\Psi(\mathbf{r},0)=\frac{1}{(2\pi)^{3/2}}\int g(\mathbf{k})\,e^{i\mathbf{k}\cdot\mathbf{r}}\,d^3k.\qquad(6)\]

It can be seen that $g(\mathbf{k})$ is the Fourier transform of $\Psi(\mathbf{r},0)$:

\[g(\mathbf{k})=\frac{1}{(2\pi)^{3/2}}\int \Psi(\mathbf{r},0)\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^3r.\qquad(7)\]

Consequently, the validity of formula (6) is not limited to the case of the free particle: whatever the potential, $\Psi(\mathbf{r},0)$ can always be written in this form.

In the general case, where the potential $V(\mathbf{r})$ is arbitrary, formula (4) is not valid. It is then useful to introduce the three-dimensional Fourier transform $g(\mathbf{k},t)$ of the function $\Psi(\mathbf{r},t)$ by writing

\[\Psi(\mathbf{r},t)=\frac{1}{(2\pi)^{3/2}}\int g(\mathbf{k},t)\,e^{i\mathbf{k}\cdot\mathbf{r}}\,d^3k.\qquad(8)\]

The time dependence of $g(\mathbf{k},t)$ is brought in, and determined, by the potential $V(\mathbf{r})$. The wave function's representation in coordinate space, $\Psi(\mathbf{r},t)$, and its representation $g(\mathbf{k},t)$ in k-space form a Fourier transform pair:

\[g(\mathbf{k},t)=\frac{1}{(2\pi)^{3/2}}\int \Psi(\mathbf{r},t)\,e^{-i\mathbf{k}\cdot\mathbf{r}}\,d^3r.\qquad(9)\]

The velocity of propagation of the wave packet (of the envelope of the wave packet) is the group velocity; for a packet centred at $\mathbf{k}_0$,

\[\mathbf{v}_g=\left.\nabla_{\mathbf{k}}\,\omega\right|_{\mathbf{k}_0}=\frac{\hbar\mathbf{k}_0}{m},\qquad(10)\]

where the last equality holds for the free-particle dispersion relation (4).

Gaussian wave packet in one dimension

Consider, in a one-dimensional model, a free particle [$V(x)=0$]. A normalised Gaussian wave packet can be obtained by superposing plane waves $e^{ikx}$ with the coefficients

\[g(k,0)=\left(\frac{a^2}{2\pi}\right)^{1/4} e^{-\frac{a^2}{4}(k-k_0)^2}.\qquad(11)\]

Expression (11) corresponds to a Gaussian function centred at $k=k_0$ (multiplied by a numerical coefficient which normalises the wave function). The wave function at time t = 0 is

\[\Psi(x,0)=\frac{1}{\sqrt{2\pi}}\int g(k,0)\,e^{ikx}\,dk=\left(\frac{2}{\pi a^2}\right)^{1/4} e^{ik_0x}\,e^{-x^2/a^2},\qquad(12)\]

which shows that the Fourier transform of a Gaussian function is also Gaussian. Therefore, at time t = 0, the probability density of the particle is given by

\[|\Psi(x,0)|^2=\sqrt{\frac{2}{\pi a^2}}\;e^{-2x^2/a^2}.\qquad(13)\]

The centre of the wave packet is at x = 0. It is convenient to define the width of the wave function by $\Delta x$, the root-mean-square deviation of x; thus $\Delta x = a/2$. Since $|g(k,0)|^2$ is also a Gaussian function, we can calculate its width in a similar way; this gives $\Delta k = 1/a$.

The wave function of the free particle at time t is

\[\Psi(x,t)=\frac{1}{\sqrt{2\pi}}\int g(k,0)\,e^{i(kx-\omega(k)t)}\,dk,\qquad(14)\]

where $\omega(k)=\hbar k^2/2m$ is given by the dispersion relation (4). By performing the integral in expression (14) it can be shown that at any time t the envelope of the wave packet remains Gaussian, but it spreads out in time. The width of the wave packet, $\Delta x$, is a function of time:

\[\Delta x(t)=\frac{a}{2}\sqrt{1+\frac{4\hbar^2t^2}{m^2a^4}}.\qquad(15)\]

The height of the wave packet also varies: it decreases as the wave packet spreads out, so that the norm of $\Psi(x,t)$ remains constant. The properties of the function $g(k,t)$, the Fourier transform of $\Psi(x,t)$, are completely different. From equation (14) it follows that

\[g(k,t)=g(k,0)\,e^{-i\omega(k)t}.\qquad(16)\]
Equation (16) shows that $g(k,t)$ has the same modulus as $g(k,0)$; therefore the average momentum of the wave packet and its momentum dispersion do not vary in time. This arises from the fact that momentum is a constant of the motion for a free particle: since the particle encounters no obstacle, its momentum distribution cannot change.

The solution of the Schrödinger equation

To follow the evolution of the state of the system, one has to solve the quantum-mechanical equation of motion, the time-dependent Schrödinger equation. An analytical solution exists only in some oversimplified cases. In one dimension, one can find the solution for an arbitrary potential V(x) by numerical integration of the time-dependent Schrödinger equation. This is performed as follows: first, the effect of the Hamiltonian on $\Psi(x,t_0)$, the state function at the initial instant $t_0$, is calculated. This gives the time rate of change of the state function at $t_0$. From this one gets the change of the state function over the time interval $\Delta t$, and thus the state function $\Psi(x,t)$ at the instant $t=t_0+\Delta t$. By choosing short time intervals and closely spaced values of the x coordinate, the method provides a good approximation of the evolution of the state function. To obtain the evolution of the wave function by the outlined method in two or three dimensions is rather hopeless, mainly because of computational time limitations. These limitations are largely removed by an efficient numerical technique based on the splitting of the evolution operator and the Fourier transform technique.

The evolution operator and the split-operator Fourier transform method

The transformation of $\Psi(\mathbf{r},t_0)$ (the state function at an initial instant $t_0$) into $\Psi(\mathbf{r},t)$ (the state function at an arbitrary instant t) is linear. Therefore there exists a linear operator $U(t,t_0)$ such that

\[\Psi(\mathbf{r},t)=U(t,t_0)\,\Psi(\mathbf{r},t_0).\qquad(17)\]

The operator $U(t,t_0)$ is, by definition, the evolution operator of the system. $U(t,t_0)$ is a unitary operator: it conserves the norm of the vectors on which it acts. Unitarity expresses the conservation of probability. In the case of conservative systems, when the operator H does not depend on time, the equation of motion can easily be integrated and we obtain

\[U(t,t_0)=e^{-\frac{i}{\hbar}H\,(t-t_0)},\qquad(18)\]

where $H=T+V(\mathbf{r})$. Since the Hamiltonian H contains both the scalar function $V(\mathbf{r})$ and, through T, a differential operator, it is difficult to evaluate $U(t,t_0)$ directly in a numerical solution. Evaluating the exponential by expressing it as a power series and truncating it would lead to the loss of unitarity of the evolution operator, that is, to non-conservation of probability, and should be avoided. The splitting of the evolution operator into the product of two exponentials, one containing only derivative operators and the other the scalar function $V(\mathbf{r})$,

\[U(t_0+\Delta t,t_0)\approx e^{-\frac{i}{\hbar}T\,\Delta t}\;e^{-\frac{i}{\hbar}V\,\Delta t},\qquad(19)\]

is more rewarding, because if both factors can be evaluated exactly it preserves unitarity. But because the kinetic energy operator T and the potential energy operator $V(\mathbf{r})$ do not commute, the splitting of the evolution operator into the product of two exponentials is only an approximation. According to Glauber's formula, if two operators A and B commute with their commutator [A,B], the relation

\[e^{A+B}=e^{A}\,e^{B}\,e^{-\frac{1}{2}[A,B]}\qquad(20)\]

is valid, and the error introduced in approximating the evolution operator by (19) is of order $(\Delta t)^2$. The approximation can be made considerably better by a slight modification, decomposing the evolution operator symmetrically:

\[U(t_0+\Delta t,t_0)\approx e^{-\frac{i}{2\hbar}T\,\Delta t}\;e^{-\frac{i}{\hbar}V\,\Delta t}\;e^{-\frac{i}{2\hbar}T\,\Delta t},\qquad(21)\]

which leads to the reduction of the error to order $(\Delta t)^3$.
Note that if the potential V is constant, T and V commute, and thus the error introduced by the splitting of the evolution operator into a product of exponentials vanishes. This means that equation (21) treats the motion of a free particle exactly. According to equation (21), the evaluation of the action of the evolution operator on the wave function Ψ(r,t) is split into three consecutive steps. The potential energy operator of the system is a scalar function in coordinate space, therefore evaluating the effect of the operator e^{-iVΔt/ħ} on the wave function is simply a multiplication by it. Evaluating the effect of the operator e^{-iTΔt/2ħ} is more difficult, as the kinetic energy operator T contains a differential operator in coordinate space. To evaluate its action on the wave function one can use the property of the Fourier transform that differentiation of a function in coordinate space is equivalent to multiplication of that function's representation in the Fourier transform space by the Fourier transform variable k conjugate to the coordinate. This means that the kinetic energy operator T is simply a function of the wave vector k in momentum space, T = ħ²k²/2m. Thus the action of the exponential containing the kinetic energy operator on the wave function can be evaluated as

\[ e^{-\frac{i}{\hbar} T \frac{\Delta t}{2}}\, \Psi(\mathbf{r},t) = F^{-1}\!\left[\, e^{-\frac{i\hbar k^2 \Delta t}{4m}}\; F\big[\Psi(\mathbf{r},t)\big] \right], \qquad (22) \]

where F denotes the Fourier transform and F⁻¹ the inverse Fourier transform. The evolution of the wave function over a time increment Δt is then calculated in a straightforward way: first equation (22) is applied, then the result is multiplied by e^{-iVΔt/ħ}, and finally equation (22) is applied again. Thus the exact evolution is approximated by the product of a free-particle evolution for one half of the time increment, a potential-only evolution for the full time increment, and a final free-particle evolution for another half of the time increment. Convergence towards the exact result can be obtained by using a small time increment.
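The three-step scheme maps directly onto a few lines of code. Below is a minimal sketch in Python/NumPy, in atomic units (ħ = m = 1); the harmonic potential, grid and time step are illustrative choices rather than anything prescribed above. Because each factor of (21) is a pure phase in the basis in which it is diagonal, the norm is conserved up to floating-point roundoff in the FFTs.

```python
import numpy as np

# Split-operator Fourier propagation, atomic units (hbar = m = 1).
N, L, dt = 2048, 80.0, 0.005
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = L / N
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * x**2                                   # illustrative: harmonic potential
expV = np.exp(-1j * V * dt)                      # potential step, full dt
expT = np.exp(-1j * (k**2 / 2.0) * (dt / 2.0))   # kinetic step, dt/2, cf. eq. (22)

def step(psi):
    """One symmetrically split step, eq. (21): free flight, kick, free flight."""
    psi = np.fft.ifft(expT * np.fft.fft(psi))    # T for dt/2, evaluated in k-space
    psi = expV * psi                             # V for dt, a multiplication in x-space
    return np.fft.ifft(expT * np.fft.fft(psi))   # T for dt/2 again

psi = (2.0 / np.pi)**0.25 * np.exp(-(x - 2.0)**2)    # displaced Gaussian start state
for _ in range(2000):
    psi = step(psi)
print("norm after 2000 steps:", np.sum(np.abs(psi)**2) * dx)   # stays at 1
```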
System of units used in calculations

In the calculations the atomic system of units is used, in which ħ = m = e = 1, where ħ is the reduced Planck constant, e is the elementary charge and m is the mass of the electron. The units and conversion factors are:

Mass unit: 1 au[mass] = m = 9.10953 × 10⁻³¹ kg
Length unit: 1 Bohr = ħ²/(m e²) = 5.28446 × 10⁻¹¹ m
Time unit: 1 au[time] = ħ³/(m e⁴) = 2.41220 × 10⁻¹⁷ s
Energy unit: 1 Hartree = m e⁴/ħ² = 4.37189 × 10⁻¹⁸ J
1 Bohr = 0.0528446 nm
1 Hartree = 27.28700 eV

Stationary wave packet in one dimension

The results of the calculation for a one-dimensional wave packet corresponding to a free particle are displayed at different time instants. The time instants were chosen as integer multiples (in the range from 0 to 5) of the time required to double the initial width of the wave packet. The wave packet, located at t = 0 at x = 0, has zero average momentum (k₀ = 0) and an initial width Δx(0) = a/2. On the diagrams the unit of x is the Bohr radius; the unit of the wave number k is Bohr⁻¹.

Fig. a. Evolution of the real part of the wave function Ψ(x,t).
Fig. b. Evolution of the probability density |Ψ(x,t)|² of the wave packet.

The change in time of the phase of the wave function causes the oscillations in the real part of the wave function. As k₀ = 0, the wave packet does not propagate; it only spreads out, its width Δx increasing with time t as given by formula (15).

Fig. c. Real part of the Fourier transform g(k,t) of the wave function.
Fig. d. Square of the modulus |g(k,t)|² of the Fourier transform of the wave function versus the wave number k, at the same instants as in Fig. a and Fig. b. At t = 0 the functions are centred at k₀ = 0 and have a width Δk = 1/a.

Fig. d clearly demonstrates the fact that the shape of |g(k,t)|², the square of the modulus of the Fourier transform of Ψ(x,t), does not change in time for a free particle. While Δx spreads out in coordinate space, the wave function's modulus does not change in the Fourier transform (k) space: the dispersion Δk remains the same. The function is not shrinking, showing that the Heisenberg uncertainty relation is an inequality. The oscillations in the real part of g(k,t) are due to the phase factor e^{-iħk²t/2m} in equation (16).
Talk:Laplace operator

To-do list for Laplace operator:
• Give an explanation of what the Laplacian actually is and/or does in the introduction.
• Quick example of the Laplacian operator on a simple function.

Add a reference explaining the confusing (but universally used) notation?

(Disclaimer: I'm neither an experienced Wikipedia editor nor a mathematician.) The notation universally used for the Laplacian, ∇², seems to me unfortunate, since it is neither the square of anything nor a function composed with itself. I think a really good aside on this anomalous notation is here: . Would it be appropriate to add a footnote linking to this, or an aside in the article? Ma-Ma-Max Headroom (talk) 16:57, 28 July 2009 (UTC)
• It's not that bad really; no worse than anything else the physicists come up with. ∇ is just a vector of operators, and ∇² really is that squared, under the appropriate product (the vector dot product). The author of your link gets a bit attached to his idea of thinking of his notation for operators in terms of what they return, not what they are themselves. So, he states that ∇· is "never" written with a bold ∇, although mathematicians tend to use that notation (assuming they are bothering to mark any vectors specially). The rationale is that ∇ is a column vector of the partial derivative operators, so it should always be emboldened if we are doing that to vectors, even though as an operator ∇· returns a scalar. Either way, this is not confusing, as the symbols are just grungy shorthand notation, and what they mean is obvious and unambiguous. You can add a footnote if you want, but I can't see a particular need. — Kan8eDie (talk) 22:31, 28 July 2009 (UTC)
Does Wikipedia have any standard on this? We see the triangle-up (Δ) a lot (an instance on this very page), but the triangle-down squared (∇²) is also there (wave equation, for example). I like the nabla squared, and have a particular dislike for the box representing the d'Alembertian (notably absent in wave equation). — Preceding unsigned comment added by (talk) 17:00, 13 January 2013 (UTC)
There is a tendency for physicists to use \nabla^2 and mathematicians to use \Delta (a tendency only; I've seen counterexamples on both sides). I think people should be aware that there are two different notations, and it's reasonable for the article to be neutral about the choice, but I think it's unnecessarily confusing how the article seems to randomly jump back and forth between the two notations. Kirisutogomen (talk) 16:30, 1 May 2015 (UTC)

Section removed

I removed the following from the article. It seems to have no focus whatsoever, and is of only peripheral relevance to the article. The "identity" listed is an obvious one, and the spherical harmonic context is not terribly compelling. Spherical harmonics are already mentioned elsewhere in the article. Here is the offending section. Sławomir Biały (talk) 14:13, 2 September 2009 (UTC)
• If f and g are functions, then the Laplacian of the product is given by
\[ \Delta(fg) = f\,\Delta g + 2\,\nabla f\cdot\nabla g + g\,\Delta f. \]
Note the special case where f is a radial function f(r) and g is a spherical harmonic Y_{ℓm}(θ,φ). One encounters this special case in numerous physical models.
The gradient of f(r) is a radial vector, and the gradient of an angular function is tangent to the sphere and hence orthogonal to the radial vector, therefore
\[ \nabla f(r)\cdot\nabla Y_{\ell m}(\theta,\varphi) = 0. \]
In addition, the spherical harmonics have the special property of being eigenfunctions of the angular part of the Laplacian in spherical coordinates.

Could someone add information about the boundedness of the Laplacian as a linear operator? As I understand it, it is unbounded when defined on functions with a bounded domain in Rⁿ, but according to bounded operator it is bounded when the domain is Rⁿ itself (and I presume other unbounded domains). Some mention of Sobolev spaces would be nice too. I will try to add what I know about these myself. —Preceding unsigned comment added by Slimeknight (talk • contribs) 21:31, 15 May 2010 (UTC)
This is wrong. As an operator on, e.g., L²(Ω) (of any domain Ω whatsoever: bounded, unbounded, or all of Rⁿ), the Laplacian is an unbounded, densely defined operator. However, as an operator from the Sobolev space H² to L² it is bounded. But this statement has very little content: it is simply how the norm on the Sobolev space H² is defined. Sławomir Biały (talk) 12:15, 16 May 2010 (UTC)
Ah, okay, I was confused about which norm was being talked about, I see. That makes a lot more sense to me, as I originally made the comment after being confused as to why it wasn't always unbounded. Thanks very much. slimeknight (talk) 19:04, 16 May 2010 (UTC)

Azimuth and zenith

Since there is persistent confusion over the meaning of the words "zenith" and "azimuth", allow me to clarify, to prevent further erroneous edits to the section on spherical coordinates. The angle φ measured with respect to the north pole is known as the zenith angle; the z axis itself is the zenith axis. The angle θ made between the positive x axis and the orthogonal projection of the point into the xy plane is known as the azimuth angle, where the x axis itself is the azimuth axis. Now, the article at present says that
\[ \Delta f = \frac{1}{r^2}\frac{\partial}{\partial r}\!\left(r^2\frac{\partial f}{\partial r}\right) + \frac{1}{r^2\sin\varphi}\frac{\partial}{\partial\varphi}\!\left(\sin\varphi\,\frac{\partial f}{\partial\varphi}\right) + \frac{1}{r^2\sin^2\varphi}\frac{\partial^2 f}{\partial\theta^2}. \]
To see that this is correct for our conventions, consider the Laplacian of z² = r²cos²φ. In Cartesian coordinates, the Laplacian is 2. Working this out in spherical coordinates also gives 2, as expected. However, if instead we were to misunderstand the meaning of "zenith" and "azimuth" and considered the function r²cos²θ, then
\[ \Delta(r^2\cos^2\theta) = 6\cos^2\theta - \frac{2\cos 2\theta}{\sin^2\varphi}, \]
which is not equal to 2. Sławomir Biały (talk) 11:07, 27 July 2010 (UTC)
Comment: that is inconsistent with the usage in Spherical coordinate system, where φ is used for azimuth. ...ah, but reading on, I see that "some authors (including mathematicians) use φ for inclination (or elevation) and θ for azimuth", which "provides a logical extension of the usual polar coordinates notation". Some authors may also list the azimuth before the inclination (or elevation), and/or use ρ instead of r for radial distance. So, as so often, it's a case of "standards are wonderful, you can't have too many of them". Regards, JohnCD (talk) 16:31, 27 July 2010 (UTC)
Up until quite recently, we used the other convention. But I changed to this one because of this very issue. People were still getting them mixed up, even in the convention consistent with our spherical coordinate system article. I think the current conventions are preferable, because we should use the same letter to designate the azimuth in the polar, cylindrical, and spherical coordinate systems. I believe that this letter is almost always θ for the polar and cylindrical systems, and so we should use the convention for the spherical coordinate system in which it is also θ.
It would seem to me to be far too confusing to have θ as an azimuth in one paragraph, and then as a zenith in the very next part. I thought changing to the "mathematical" convention would resolve the confusion, but apparently not. Sławomir Biały (talk) 17:50, 27 July 2010 (UTC)

varphi vs phi

Starting from the convention that theta is azimuthal and phi is latitude... I noticed you had two different versions of the equation, and one used \varphi for the latitude angle. I think these must have been intended to be the same angle (else there is no definition for the two), so I changed it. -Theanphibian (talk • contribs) 12:32, 14 May 2013 (UTC)

'Spheres centred at'...

Firstly, it's not clear whether it's talking about the surface or the solid (I know that it's the surface technically, but I don't know if the editor knew this). Secondly, it shouldn't really be in the intro, followed by zero follow-up. It should probably be in the article somewhere, with at least some discussion of its importance or derivation; I'm okay at vector calculus but I've never heard this interpretation before. — Preceding unsigned comment added by (talk) 17:00, 12 July 2011 (UTC)
As usual in mathematics, "sphere" means the surface of the ball. This characterisation was chosen for the lead paragraph since it seemed to be the simplest way to say what the Laplacian is for a general audience (no assumption of vector calculus). Sławomir Biały (talk) 17:53, 12 July 2011 (UTC)
Let's revisit this. The average of a function f over the sphere of radius r centred at the origin in R³ is
\[ \bar f(r) = \frac{1}{4\pi r^2}\int_{|x|=r} f\, dS. \]
Apply a change of variables to convert to spherical coordinates:
\[ \bar f(r) = \frac{1}{4\pi}\int_{|\omega|=1} f(r\omega)\, d\omega. \]
Differentiating with respect to r gives
\[ \bar f'(r) = \frac{1}{4\pi}\int_{|\omega|=1} \omega\cdot\nabla f(r\omega)\, d\omega = \frac{1}{4\pi r^2}\int_{|x|=r} \frac{\partial f}{\partial n}\, dS. \]
Applying the divergence theorem gives
\[ \bar f'(r) = \frac{1}{4\pi r^2}\int_{|x|\le r} \Delta f\, dV. \]
So the derivative of the spherical average is equal to r/3 times the average value of Δf over the ball of radius r. Sławomir Biały (talk) 12:10, 13 July 2011 (UTC)
So... the introductory statement is incorrect? I can't see how the above is relevant. -- (talk) 16:04, 13 July 2011 (UTC)
No, the precise statement is:
\[ \Delta f(p) = \lim_{r\to 0}\frac{6}{r^2}\left(\bar f_p(r) - f(p)\right). \]
Sławomir Biały (talk) 03:07, 14 July 2011 (UTC)
...okay, for now, I'm removing the statement. The above in no way shows equality to 'the rate at which the average value of ƒ over spheres centred at p deviates from ƒ(p) as the radius of the sphere grows', and as such is extremely unhelpful. — Preceding unsigned comment added by (talk) 14:43, 19 July 2011 (UTC)
I've rephrased the statement. Sławomir Biały (talk) 16:42, 19 July 2011 (UTC)
Rephrase it again. It is not even roughly equal 'up to a factor'; there is an r² term. The true statement is now so lacking in intuitive meaning and consequence that it seems to me totally bizarre to have it in the beginning as if it's either important or helpful. — Preceding unsigned comment added by (talk) 15:57, 20 July 2011 (UTC)

Invariance under orthogonal coordinate transformations

I think one important property worth mentioning is that if two coordinate charts are related as x′ = Rx, where R is orthogonal (in general, unitary), then ∇′² = ∇². This is conceptually important when transitioning from a differential-equation interpretation to an operator-field interpretation, for example in the QM-to-QFT transition: we need operators that are invariant under unitary coordinate transformations so that the differential equation governing the scalar/vector fields does not depend on the choice of the inertial frame.
We thus have ∇² in the Schrödinger equation (or even the heat equation), but a first-order operator such as ∇ does not satisfy a similar property until we consider Cl1,3(C)-valued fields, as in the Dirac equation (where it is satisfied with some gauge invariance). - Subh83 (talk | contribs) 19:15, 1 April 2015 (UTC)
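As a concrete sanity check of the invariance claimed above, one can compare the Laplacian of a rotated function with the rotated Laplacian of the function. A minimal sympy sketch in two dimensions (the test polynomial is an arbitrary choice):

```python
import sympy as sp

x, y, a = sp.symbols('x y a', real=True)

def laplacian(expr):
    return sp.diff(expr, x, 2) + sp.diff(expr, y, 2)

f = x**3 * y + y**2                      # arbitrary test function
# rotate the chart: (x, y) -> (x cos a - y sin a, x sin a + y cos a)
rot = {x: sp.cos(a) * x - sp.sin(a) * y, y: sp.sin(a) * x + sp.cos(a) * y}

lhs = laplacian(f.subs(rot, simultaneous=True))   # Laplacian of the rotated f
rhs = laplacian(f).subs(rot, simultaneous=True)   # rotated Laplacian of f
print(sp.simplify(lhs - rhs))                     # 0 for every angle a
```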
We now have two books freely available to view and read on the Quantum Mind website: "Quantum Physics In Consciousness Studies" and "Consciousness, Biology And Fundamental Physics". See below. Text is always free on the Quantum Mind website.

Quantum Physics In Consciousness Studies

Our latest book is available to view and download as a PDF by clicking here. Please note that by clicking this link a download dialogue box will open in your browser to download the book to your device. This is a 6 MB file, so depending on the speed of your connection it should take no more than 20-30 seconds to download completely. The first few chapters of our new book are also available to view directly on our website by clicking here.

Consciousness, Biology And Fundamental Physics

The site's online book "Consciousness, Biology and Fundamental Physics" is now available on Amazon, both as a paperback and as a Kindle book. New paperbacks are currently priced from £9.05 (Click here to Buy); Kindle books from £2.71 (Click here to Buy). Text remains free on this site.

In writing something of this kind, it is difficult to know what level to pitch it at and what degree of detail to bring in. On the one hand, experts in particular fields may ridicule the superficial nature of the description and arguments here, while at the other extreme some would-be readers may find even the opening sentences baffling. I have two recommendations for dealing with these problems. Firstly, I would advocate a pick-and-mix approach to the offerings here. For instance, those not particularly inclined to wade through user-unfriendly material relating to physics, biology and neuroscience might prefer to go straight to the final section, rather arrogantly entitled 'a theory of consciousness'. This gives the main conclusions as to how consciousness arises and what its function is. If this looks at all interesting, it is then possible to go back and see how I have attempted to substantiate the proposals made in this section. The same general approach can be applied to the other chapters, skipping over things that are either too difficult or too well known to need revisiting. There is perhaps a word of caution relative to this approach. The section on physics emphasises the problem areas in quantum physics, which may be played down in more mainstream discussions. The sections on both quantum biology and neuroscience emphasise research work of very recent years that can be argued to have reversed some assumptions that are still common in science and in consciousness studies.

The main inspiration for this attempt at a consciousness theory is the ideas of Roger Penrose (1. & 2.). Unfortunately, I have, over more than twenty years, come to form the opinion that the vast majority of modern consciousness studies is profoundly misguided, and that in time Penrose may come to be seen as having stood alone as a deep thinker on the subject in our rather benighted period. This book attempts an amendment and simplification of the Orch OR scheme, and also, to some extent, an updating in line with very recent developments in biology. It is tentatively suggested that a less complex approach to the function of consciousness than that provided by the Gödel theorem can be attempted, and similarly that, in the brain, quantum consciousness might be based on shorter-lived quantum coherence in individual neurons, rather than the longer-lived and spatially distributed proposal put forward by Hameroff.
The possible need to amend the original concepts is the reason for moving from merely commenting on quantum consciousness topics to outlining a version of the theory.

Definition:  "Consciousness is defined here as our subjective experience of the external world, our physical bodies, our thinking and our emotions."

Consciousness is also defined in terms of it 'being like something' to have experiences. It is like something to be alive, like something to have a body and like something to experience the colour red. In contrast, it is assumed to be not like something to be a table or a chair. Further to being like something, consciousness also gives us the experience of choice. In philosophy, this opens up the controversial topic of freewill, but at a more mundane level there is something it is like to choose between types of beer, or between a small amount of benefit now and a more substantial benefit in the future. A special characteristic of subjective consciousness is privacy, in the sense that we have no way of knowing that our experience of the colour red is the same as someone else's, and no way of conveying the exact nature of our experience of redness. These subjective experiences are referred to as qualia. The problem of qualia, or phenomenal consciousness, is here viewed as the sole problem of consciousness and the whole of the problem of consciousness.

The problem we have to address here is how consciousness, subjective experience or qualia arise in the physical matter of the brain. Even this simple question raises some queries as to whether consciousness does in fact arise from the brain, although the arguments in favour of this position do look strong. The classic argument is that things done to the brain, such as particular injuries or the application of anaesthetics, can remove consciousness. The main challenge to the 'brain produces consciousness' hypothesis is dualism: the idea that there is a separation between a spirit stuff and a physical stuff that together make up the universe, with consciousness being part of the spirit stuff, but inhabiting a physical brain and body. This had probably been the most popular idea since ancient times, but it was formalised by Descartes in the seventeenth century. The idea has a certain beguiling simplicity, since at a stroke it gets rid of the need to worry about how the physical brain can produce consciousness, or all the difficulties this gives rise to in terms of biology and physics. Unfortunately, the problems of dualism appear to be of the serious kind. This is principally the question of how the physical stuff and the spirit stuff can interact. If the spirit stuff is to interact with the physical stuff it would appear to need to have some physical qualities, in which case it would not be true spirit stuff. The same applies in the opposite direction, in that the physical stuff would seem to need some spirit qualities in order to interact with the spirit stuff, and would therefore not conform to conventional physics. We are thus left with the problem of how the physical stuff as described by science can produce consciousness. The philosopher David Chalmers (3.) labelled this the 'hard problem'. The problem here is really a problem of specialness.
The brains of humans, and possibly of animals, are the only places in the universe where consciousness has been observed, so the question really is 'what is special about the brain?'. The answer tends to be that there is nothing special about the brain, because it is made of exactly the same type of stuff and obeys the same physical laws as the rest of the universe. The brain comprises the same carbon, oxygen, hydrogen and other atoms that are found in the stars and planets and the objects of the everyday world around us. At first sight this might not seem too much of a problem. The brain is considered to be the most complex thing in the universe, and surely something in such a system can manage to produce consciousness. Unfortunately, this does not appear to be the case. In a conventional neuroscience textbook, which will emphasise the fluctuation of electrical potential in the neurons (brain cells) and the resulting movement of neurotransmitters between neurons, we are presented with a causally closed information system, which does not require consciousness in order to function, and which offers no physical mechanism by which consciousness could be produced.

Since consciousness ceased to be a taboo subject for academic research twenty or so years ago, several theories that seek to explain consciousness arising within the confines of classical/macroscopic physics have been advanced. It would take many hundreds of pages to discuss these adequately, so I will here summarise the main ideas, and where they look to fail. For those who find any of them plausible, there is a huge and expanding literature out there working to reinforce these theories.

Possibly the most plausible attempt to explain how the brain could produce consciousness within the concepts of classical and macroscopic physics is the idea that consciousness is an emergent property of the brain's physical matter. Emergent properties are an established concept in physics. The classic example is that liquidity is an emergent property of water. The individual hydrogen and oxygen atoms, or their sub-atomic components, do not have the property of liquidity. However, when the atoms are combined into molecules and a sufficient number are brought together within a particular temperature range, the property of liquidity emerges. The problem with the emergent property idea, when it comes to dealing with consciousness, is that where emergent properties do arise in nature, physics can trace them back to the component particles and the forces that bind them. Thus the liquidity of water can be explained by the electromagnetic force acting on hydrogen and oxygen atoms. But in many years of the emergent property idea being promoted by parts of the conventional consciousness studies community, no one has been able to propose a micro-scale emergent mechanism in the brain comparable to the explanation of how liquidity emerges from water.

In much of the late twentieth century, consciousness studies was dominated by functionalism. This theory proposes that consciousness is a function of the brain's information processing system, and that the biological matter of the brain is irrelevant to consciousness. This means that any system that processes information in the same way as the brain will be conscious, regardless of what it is made of. Therefore a silicon computer of sufficient complexity would flip into consciousness at some point, and future systems using still other materials would do likewise.
This is because the system, rather than the stuff from which it is made, is seen as being the thing that produces consciousness. The underlying weakness of functionalism is that it does not actually explain the mechanism by which consciousness arises in the brain's systems in the first place, nor how it might physically arise from silicon computers or other machines. This is a crucial problem regardless of whether the brain or system in question is made of biological tissue, silicon or anything else. It is generally agreed that the computer on the desk is not conscious, but that brains are conscious. The question we are left with is what changes between the computer on the desk and the brain, and similarly between the computer on the desk and any future supercomputer that might actually become conscious. There may be a vague assumption that more and more of the same initial complexity does it. But the physical world doesn't work like that. The problem of butter not cutting steel is not resolved by adding lots more butter, but by finding something with different properties from butter.

Identity theory is similar in tone to functionalism. It says that consciousness is identical to the brain, or at least to parts of it. The problem with an identity theory is that it needs to specify a particular object, or more plausibly a particular process, in the brain that is physically identical to consciousness. It is not enough to show that the axons of neurons spike, or that there is a gamma oscillation between the cortex and the thalamus, when conscious processing occurs. These things are correlated with consciousness, but that is a different thing from saying they are identical to consciousness. The distinction between identity and correlation is crucial here. Thunder and lightning are correlated, but they are not the same physical process, even though they have the same ultimate cause. In contrast, the morning star and the evening star are identical, because they are both names given to the planet Venus, a single physical object. Astronomy has conclusively demonstrated this identity, because the behaviour of a point of light in the morning and evening sky can be completely explained by the behaviour of the planet Venus. However, neuroscience has not demonstrated any particular physical process in the brain that is identical to, or can completely explain the behaviour of, consciousness, as opposed to being merely correlated with it. In addition, more recent neuroscience has at least qualified identity theory. Expositions of identity theory tended to be rather simplistic in applying to the whole brain, while recent neuroscience has demonstrated consciousness as correlated with both particular neuronal assemblies and single neurons, albeit on a temporary basis, with activity correlated to consciousness shifting from place to place in the brain.

Higher-order theories:  The basic idea here appears to be that a level, or perhaps levels, of the brain observe another level or levels, and the interaction of the two somehow generates consciousness. We are asked to believe that because one system monitors another, it will become conscious. This suggestion bears little relation to the technological world, where it is commonplace for one non-conscious automated system to monitor another and make some automatic response to changes in it, without any requirement for, or evidence of, consciousness.

In the present century, the concept of conscious embodiment has come to the fore.
It is suggested that a brain or a computer by itself cannot be conscious, but that the brain, and possibly the computer, can become conscious when attached to a body or some comparable extension. The recognition that brain and body are interactive was in itself an advance on twentieth century notions of the brain as an isolated computer and the body as a mere automaton. That said, there appear to be two problems with this approach as an explanation of consciousness. Firstly, it carries the rather implausible notion that the body has some consciousness-generating process that does not exist in the brain. There is a complete absence of explanation as to what this might actually be. Admittedly, most touch and pain signals are transmitted from the body to the brain, and visual and auditory inputs to the brain are fed forward to the viscera, but this does not explain why signals going through the body should generate something different from incoming signals through the brain.

This theory also looks difficult to square with what has now become known about the organisation of brain processing. While bodily touch and pain can certainly be seen to play a role, it is hard to see why all visual and auditory inputs, and the results of cognitive processing, should have to wait on the laborious responses of the viscera, especially as it is the reward-assessment areas of the brain that signal the viscera in the first place. If bodily generated emotion were the whole story, the emotional evaluation regions of the orbitofrontal cortex and amygdala would seem to be in a state of suspended activity between sending a signal to the autonomic system and getting signals back from the viscera. In the specific case of rapid phobic reactions in the amygdala, the idea fails completely. Recent expositions of the theory indicate an over-emphasis on the body's movement and relations to the external world, perhaps because these are more compatible with the theory, at the expense of the other senses and, more especially, at the expense of thinking and emotion-related evaluation.

A further objection to this theory is that bodily arousal does not provide a sufficient range of responses to match the range of human emotional responses. Emotional research, which often means animal research, has tended to focus on the easy target of fear, which produces very definite bodily responses, whereas cognitive processing, or visual and auditory sensations not related to immediate danger, can produce a much less marked bodily response and a wider and subtler range of emotional responses. The more plausible view is that visceral responses are one aspect of many responses that are integrated in the orbitofrontal cortex and other evaluation processes. Further to this, evolution seems to have altered the response system to visceral inputs when it came to primates. The visceral inputs no longer go via the pons structure in the brain stem, and this is argued to suggest a less automatic response to visceral inputs in humans and other primates. It seems more likely that, in line with most brain processes, there is a complex feed-forward and feedback between all parts of the system, including the viscera and the orbitofrontal cortex. The body-only theory looks to depend on a simple feed-forward mechanism, which is alien to how brain processing functions.

Attempts to classify consciousness as a form of information can be seen as another attempt to explain consciousness in classical terms.
This idea also looks to encounter insuperable problems. There are innumerable examples of information processing and communication that do not involve consciousness, especially when we look at modern technologies. Further to this, we lack a description of a physical process that would distinguish conscious information from non-conscious information. There is a core difference between information and reality, in that information involves only what we happen to know about something, while a knowledge of reality requires a full description of its make-up and a full explanation of its behaviour. The only information available to a hunter-gatherer in ancient Africa glancing up at the sun is the intensity of glare and heat and the changing position of the light in the sky. It required the complexities of modern science to unravel everything that is involved in the sun producing light, the light getting to our eyes, and the brain states this produces.

Epiphenomenalism:  This theory proposes that consciousness is a by-product of neural processing, which has no function or significance. There are three main problems here. In the first place, like some other modern consciousness theories, it is actually a non-explanation. Even if consciousness has no function, we still need to know how it is physically instantiated, and this is never attempted when this theory is proposed. The suspicion is that the proponents of this theory are unconsciously closet Cartesians, with an underlying assumption that consciousness is 'non-physical' or 'immaterial'. If it can be categorised in this surprising manner, it can be dismissed as non-functional, and relegated to the smallest possible footnote in any scientific study. This is contradictory, in that the proponents are invariably non-dualists, who believe that there is no such thing as the non-physical. The second problem is that consciousness has to be linked to the rest of the brain and the physical universe, because the very fact of conscious experience indicates that we are dealing with the reception of some form of incoming signal, and anything receiving incoming signals is likely to be able to emit them in some form of response, which will have physical consequences. Some writers have suggested an escape route here, which allows consciousness a trivial influence. This is feasible up to a point, but it hints at problems in defining what is trivial, and would erode the position of the modern orthodoxy that argues for complete determinism and no freewill at all in behaviour. A further problem for epiphenomenalism is that it conflicts with evolutionary theory. If this by-product consciousness is physical, as the scientific paradigm demands, it needs energy to produce and maintain it, and given that the brain is very energy-intensive, this could involve quite a large amount of energy. It would be maladaptive for evolution to select for something that ties up energy with no benefit to the organism. It might be argued that neural processing was such that some by-product was essential, but this would require a demonstration that neural processing produces this something else. However, in the physical description of the matter and energy involved in the brain, as described by standard neuroscience, there is no sign of such a process.

New mysterians, or sometimes just mysterians, take the view that just as dogs cannot understand calculus, humans will never be able to understand consciousness.
This may in the end turn out to be true, but to accept this view as final at this stage in the proceedings seems unduly defeatist. The human mind has proved capable of understanding the mechanisms of the physical universe so far, and it is reasonable to hope that the rather narrow scope of thinking in conventional consciousness studies may not have exhausted all possible explanatory routes. Where the mysterian approach is advocated, there is usually a 'no nonsense' implication that, having established this point, consciousness is no longer a threat to a view of the mind dominated by classical physics and slightly old-fashioned textbook neuroscience. On further reflection, however, the exact opposite is true. Humans have been able to understand the physical law. If they cannot understand consciousness, then consciousness lies outside the physical law or any logical extension of it. This, if anything ever does, opens the sluice gates to the dark tide of the occult, necessitating that consciousness is something akin to a spirit stuff lying outside of, and able to act outside of, the physical law.

Much of the scientific, philosophical and psychological community never internalised the revolution in physics that produced quantum theory early in the last century. There seems to be an assurance that this was an abstruse special case that need not bother day-to-day thinking. The theory was more or less censored out of general education and even basic scientific education. In mainstream consciousness studies, there is an apparent determination not to move beyond nineteenth century macroscopic physics, which proposes a billiard-ball world, where everything is explained in terms of objects bumping into one another. This is despite the fact that it has been known for a century that this is a convenient approximation for studying the human-scale world, but is not how the underlying physics works.

Neuroscience's approach to consciousness is even more mired in nineteenth century concepts. The discovery of individual neurons and their connections at the end of that century allowed the idea of the neuron as a simple switch, with no further complications, to become entrenched. Not long after this discovery, what is sometimes called 'the long night of behaviourism' descended on consciousness studies, decreeing that consciousness was irrelevant to behaviour and not a proper subject of study. Although behaviourism as such dropped out of favour in the latter decades of the twentieth century, subsequent theories have sought to justify the same general conclusion by marginalising consciousness. Behaviourism is dead. Long live behaviourism. In fact, one curious consequence of the functionalist and identity approaches is that much of consciousness studies has paid remarkably little attention to the brain or to advances in neuroscience in recent decades. The assumption has been that all that was needed was a particular system that could run on any material, and there was no need to inquire any further into the detailed biology of the brain. Information about binding and the gamma synchrony, or about consciousness in individual neurons and the distinction between conscious and non-conscious neural areas, are footnotes, while the functional role of subjectivity in orbitofrontal valuation is never mentioned, or perhaps not even known about.

1.10:  Why 21st century consciousness studies will fail

Consciousness studies has gone off in a different direction from neuroscience.
Much of it is dominated by philosophers or psychologists who deal more in abstractions than in what is going on in the physical brain. In addition, they have tended to see themselves as under-labourers supporting a nineteenth century Newtonian world view, while at the same time discussing consciousness in very abstract terms that take limited account of advances in neuroscience research. Neuroscientists, meanwhile, seem to have been persuaded to treat consciousness as not really part of their remit, and to defer to philosophers whenever they felt it necessary to mention consciousness, even when the views of the philosophers appeared to conflict with the neuroscientists' own recent findings. For this reason, it seems possible to predict that consciousness studies will come to the end of the 21st century without having achieved consensus on a theory that has any useful explanatory value.

The above discussions might seem to bring us to an impasse, where we think that consciousness can derive neither from a separate spirit stuff nor from the material that comprises the brain, the body and the universe. Luckily, there is an escape route from this. Physics does not explain everything. The arrow of explanation heads for ever downwards, but it does at last strike bottom. There is a level beyond which there is no further reduction or explanation. The quanta or sub-atomic particles have properties of mass, charge and spin, and are bound by the particular strengths of the forces of nature. These are fundamentals, primitives or given properties of the universe that simply have to be accepted. If we ask what the charge on the electron is, not what it does but what it is, the answer is a resounding silence, because it is a given property of the universe, and comes without explanation. If we had a scientific culture that did not accept that quanta could be electrically charged, and that other quanta could mediate the electromagnetic force, this might develop into another hard problem like the one we have with consciousness. We would go round and round trying to pin electrical charge onto other, probably macroscopic, physical features, or we might even decide, as sometimes happens in consciousness studies, that charge did not really exist, that it was a product of something else, or an illusion. No doubt experimental psychologists could devise cunning tests that showed how subjects confabulated the idea of electrical charge.

If we accept that fundamental properties do exist, and that they cannot be explained by other means, and also that it is impossible to explain consciousness in terms of classical physics, then it would seem reasonable to suggest that consciousness is one of this small group of fundamentals. Thinkers such as David Bohm (4.) and Roger Penrose have made such proposals, but the response has been generally hostile, although the reasons for this may be cultural or even metaphysical rather than scientific. Just having a concept of consciousness as a fundamental of physics is not by any means enough. Fundamental physics may be a possible gate to consciousness, but to substantiate this we need some concept of how consciousness might be integrated into what is known about fundamental physics. In the first place, it might help to have at least a very simplified idea of quantum theory and some recent ideas about spacetime. Quantum theory is the fundamental theory of energy and matter as it exists behind the appearances of the classical or macroscopic world.
Suppose one were to ask for a scientific description of your hand. Biology could describe it in terms of skin, bone, muscles, nerves, blood etc., and this might seem a completely satisfactory description. However, if you were just a bit more curious, you might ask what the muscle and blood etc. were made of. Here you would descend to a chemical explanation in terms of molecules of protein, water etc. and the reactions and relations between these. If you were still not satisfied, then beyond this you would have to descend into the quantum world. At this level, the solidity and continuity of matter dissolves. The molecules of protein etc. are made up of atoms, but the atoms themselves are mainly vacuum. Most of the mass of the atom lies in a small nucleus, comprised of protons and neutrons, which are themselves made up of smaller particles known as quarks. The rest of the mass of the atom resides in a cloud of electrons around the nucleus.

The fundamental particles are bound together by the four forces of nature, which are electromagnetism, the strong and the weak nuclear forces, and gravity. The quanta can be divided into two main classes: the fermions, which possess mass, and the bosons, which convey energy or the forces of nature. In contrast to the nuclear forces, gravity and electromagnetism are conceived of as extending over infinite distance, but with their strength diminishing according to the inverse square law. That is, if you double your distance from an object, its gravitational attraction will be four times as weak. The strong nuclear force binds together the particles in the nucleus of the atom, and acts only over a very short range. Gravity is a long-range force that mediates the mutual attraction of all objects possessing mass. The electromagnetic force, also a long-range force, is perhaps the force most apparent in everyday life. We are familiar with it in the form of light, radio, microwaves and X-rays. It holds the atom together through the attraction of the opposite electrical charges of the electron and the proton. It also governs the interactions between molecules. Van der Waals forces, a weak form of the electromagnetic force, are vital to the conformation of proteins and thus to the process of life itself.

2.2:  Quantum waves, superpositions and a problem of the serious kind

The quantum particles, or quanta, are unlike any particles or objects that are encountered in the large-scale world. When isolated from their environment, they are conceived of as having the property of waves, but when they are brought into contact with the environment, there is a process referred to as decoherence or wave function collapse, in which the wave form collapses into a particle located in a specific position. The wave form of the quanta is different from waves of matter in the large-scale world, such as the familiar waves in the sea. These involve energy passing through matter. By contrast, the quantum wave can be viewed as a wave of the probability of finding a particle in a specific position. This probability wave also applies to other states of the quanta, such as momentum. While the quanta remain in wave form, they are described as being in a superposition of all the possible positions that the particle could occupy. At the peak of the wave, where the amplitude is greatest, there is the highest probability of finding a particle. However, the choice of position for each individual particle is completely random, representing an effect without a cause.
This acausal result comprises the first serious conceptual problem in quantum theory. The physicist Richard Feynman said that the two-slit experiment contained all the problems of quantum theory. In the early nineteenth century, an experiment by Thomas Young showed that when a light source shone through two slits in a screen, and then onto a further screen, a pattern of light and dark bands appeared on the further screen, indicating that the light was in some places intensified and in others reduced or eliminated. Where two waves of ordinary matter, for instance waves in water, come into contact, an interference pattern forms, by which the waves are either doubled in size or cancelled out. The appearance of this phenomenon in Young's experiment demonstrated that light had the characteristics of a wave.

2.4:  The experiment refined

Later, the experiment was refined. It could now be performed with one or two slits open. If there was only one slit open, the photons or light quanta, or any other quanta used in the experiment, behaved like particles. They passed through the one open slit, interacted with the screen beyond, and left an accumulation of marks on that screen, signifying a shower of particles rather than a wave. But once the second slit was opened, the traditional interference pattern, indicating interaction between two waves, reappeared on the screen. The ability to generate the behaviour of either particles or waves, simply according to how the experiment was set up, showed that the quanta had a perplexing wave/particle duality. It might seem that the best way to understand what was happening here would be to place photon counters at the two slits in order to monitor what the photons were up to. However, as soon as a photon is registered by a counter, it collapses from being a wave into being a particle, and the wave-related interference pattern is lost from the further screen. The most plausible way to look at it may be to say that the wave of the photon passes through both slits, or possibly that it tries out both routes.

2.5:  There was worse to come

The wave/particle duality was shocking enough, but there was worse to come. Technology advanced to the point where photons could be emitted one at a time, and therefore impacted the screen one at a time. What is remarkable is that with two slits open, but the photons emitted one at a time, the pattern on the screen formed itself into the light and dark bands of an interference pattern. The question arose as to how the photons emitted later in time 'knew' how to arrange themselves relative to the earlier photons in such a way that there was a pattern of light and dark bands. The ability of quanta to arrange themselves in this non-random way over time, despite initially choosing random positions, can be considered the second big problem of quantum theory. Einstein disliked the inherent randomness involved in the collapse of the wave function, despite having himself contributed to the foundation of quantum theory. He sought repeatedly to show that quantum theory was flawed, and in 1935 he seemed to have delivered a masterstroke in the form of the EPR (Einstein, Podolsky, Rosen) experiment. At the time this was only a 'thought experiment', a mental simulation of how a real experiment might proceed, but in recent decades it has become possible to perform it as a real experiment. Two quanta that have been closely connected can be in a state where they will always have a particular relationship to one another.
This is known as being entangled. For instance, electrons have a property of spin, and can be in a state of spin-up or spin-down. Two entangled electrons can be in a state where their spins will always be opposite. This applies however distant they become from one another. However, while the electrons (or other quanta) are in the form of the wave, both electrons are superpositions of spin-up and spin-down, so entanglement only really manifests itself when there is decoherence or wave function collapse. The EPR experiment proposed that two quanta, which have remained sufficiently isolated from their environments to be conceived of as waves or superpositions, are moved apart from one another. This could be a few metres along a laboratory bench or to the other side of the universe. The relevant consideration is that the two locations should be out of range of a signal travelling at the speed of light within the timescales of any readings that are taken. Both particles are a superposition of two possible states, but if an observation is made on one of the particles, its wave function collapses, and it acquires a defined spin, let's say spin-up in this case. Now when an observation is made on the other particle, it will always be found to have the opposite spin. This defies the normal expectation of classical physics that a random choice of spins would produce approximately 50% the same spin and 50% different. Therefore there is seen to be some non-local connection between the two particles, although it is not possible to describe or detect this in terms of a physical transfer of energy or matter. In fact, the entanglement influence is shown to be instantaneous, whereas energy and matter are thought to be constrained by the speed of light. This quantum relationship between particles is called entanglement, and can be regarded as the third big problem in quantum theory.

Recent debate suggests that the different interpretations of quantum theory are becoming more distinct and more entrenched, rather than showing any sign of moving towards any kind of consensus (5.). In particular, six types of approach can be distinguished:

1. Everett many-worlds theories.
2. Post-Copenhagen theories based only on our information about quantum states.
3. Theories in which coherence remains, with hidden superpositions within macroscopic objects.
4. Bohmian-type pilot-wave theories.
5. Wave function collapse theories.
6. The suggestion that none of these is satisfactory, and that quantum theory will only be explained in terms of a deeper level of physics.

The interpretation of quantum theory has an unhappy history. In the 1920s there was for a short time a unity of purpose in trying both to understand and to apply quantum theory. Thereafter a premature notion that the interpretative debates had been settled took hold, and in the period after World War II academic institutions discouraged foundational research. The physicist Antony Valentini argues that quantum theory got off to this bad start because it was philosophically influenced by Kant's idea that physics reflected the structure of human thought rather than the structure of the world. The introduction of the observer into physics allowed a drift away from the idea of finding out about what exists and how what exists behaves. It was not until the 1970s and 1980s that new interpretations of quantum theory started to become academically acceptable. The philosopher Tim Maudlin contrasts two intellectual attitudes in the approach to quantum theory.
Einstein, Schrödinger and Bell wanted to understand what existed and how it worked, while many who came after them were more incurious, and happy with a calculational system that worked, giving the so-called 'shut up and calculate' approach. Maudlin suggests that what is traditionally referred to as the 'measurement problem' in quantum theory is really the problem of what reality is. He sees the aim of physics as being to tell us what exists and the laws governing the behaviour of what exists. Maudlin argues that quantum theory describes the movement of existing objects in spacetime, while the wave function plays a role in determining how objects behave locally. He suggests that there are many problems for theories that deny the existence of real objects or the reality of the wave function. Lee Smolin, a physicist at the Perimeter Institute, remarks that bundles of ideas in quantum theory and related areas tend to go together. Believers in Everett many worlds tend also to support strong artificial intelligence, allowing classical computers to become conscious, and to support the anthropic principle. Disagreement with these three ideas also seems to go together.

2.8:  Everett many-worlds

The philosopher Arthur Fine puts the fashionable 'many worlds' theory, originally proposed by Everett, at the bottom of 'anyone's list of what is sensible'. He criticises proponents of the theory for concentrating on narrow technical issues rather than thinking about what it means for universes to split. I think the difficulty of many worlds is even greater than Fine suggests. The splitting of worlds demands that a huge number of new universes come into existence all the time, apparently implying that the energy of entire new universes is being created the whole time. Explanations never seem to go beyond asserting that this is, for some reason, not a problem. Christopher Fuchs, another Perimeter Institute physicist, criticises philosophers who support the Everett theory for not looking for some physical explanation. The theory did not receive much support when it was originally propounded in the 1960s. Its current popularity may look like an attempt to preserve classical assumptions, even at the cost of asserting a fantastical sci-fi idea.

Sheldon Goldstein, who spans maths, physics and philosophy, criticises information-based theories for their failure to deal with the two-slit experiment. He asks how the different paths of the wave function in the two-slit experiment could lead to a wave interference pattern if nothing physical were involved. He considers that the wave function is objective and physical, and that it is neither some form of subjective experience nor simply the information that we happen to have. He sees the notion of information more in terms of a brain state connected to human needs and desires, rather than as an objective aspect of the external world. Goldstein discusses a refined version of the double-slit experiment, in which the quanta are sent into the system one by one, and an interference pattern gradually emerges. The emerging pattern is seen as an entirely objective phenomenon, not resulting from any limitation of our knowledge of the system. Tim Maudlin appears to agree with this, arguing that in the two-slit experiment the sensitivity to whether one or two slits are open indicates the response of something physical, rather than the experimenters' ignorance about the location of the particle.
Maudlin points to the holistic nature of the two-slit experiment, and suggests that the same thing is apparent in non-locality. The philosopher David Wallace also takes the view that states in physics are facts about the physical world, and not just our knowledge of the physical world. He rejects the view that sees the quantum state as a mixture of our information and our ignorance, because in practice physicists measure particular physical processes. The physicist Antony Valentini poses the question as to how the definite states of classical physics arise from the indefinite states of quantum physics. He argues that it is impossible to have a continuous transition or emergent process moving from one to the other. The problem of measurement, or of reality at the quantum level, is therefore argued to be a real problem, and to require some physical theory, such as pilot waves or collapse theories, to explain it. The physicist Ghirardi, a member of the trio of physicists responsible for the GRW collapse theory, views information theory as having played a negative role in terms of evading the need to deal with foundational problems in quantum theory. He sees it as a backward step to go from being concerned about what exists to merely considering our limited information. John Bell, whose inequalities theorem sparked off the modern interest in entanglement, asked in response to this approach what the information was about. Proponents of information theory denounce this as a metaphysical question, which seems illogical for physicists, who are thus apparently withdrawing from the attempt to produce a physical description of nature. As in some reaches of consciousness studies, we seem to be seeing the modern mind retreating into a mysterian view, possibly as a last-ditch way of defending classical physics, or perhaps we should say metaphysics. Tim Maudlin similarly finds the notion of information theory puzzling. In his view, the physical reality exists before we start to get information about it, and it is meaningful to reverse the process.

Decoherence theory:  Tim Maudlin uses a reductio ad absurdum to argue against decoherence theory. Buckyballs (molecules of 60 carbon atoms) have been put into superposition, and from there it has been suggested that larger and larger superpositions are possible without limit, so that decoherence never actually occurs and superpositions can be hidden within macroscopic objects. From this, Maudlin argues that solid macroscopic objects such as bowling balls would then be capable of being put through a two-slit experiment and producing an interference pattern. Similarly, Ghirardi says that he would be willing to give up his collapse interpretation of quantum theory if macroscopic superpositions could be demonstrated. Some philosophers object to the residual approximation and the lack of explanation for superposition in the decoherence approach. Ghirardi is also critical of the theoretical basis of decoherence theory, because the claimed superpositions of macroscopic objects cannot be detected by existing technology.

Collapse model:  Wave function collapse models developed by Ghirardi and others are yet another interpretation of quantum theory. These theories require a modification of the Schrödinger equation, so that the evolution of the wave function described by the Schrödinger equation can collapse to an outcome in the form of a particle with a particular position and other properties.
In Ghirardi’s theory of wave function collapse, the wave function can be viewed as the quantity that determines the nature of our physical world and the spatial arrangements of objects. The wave function governs the localisation in space and time of individual particles. He prefers collapse theories that assume a process of random localisation of particles alongside the standard Schrödinger quantum evolution. Such localisation occurs only rarely for individual quanta, but the process of localisation is seen as defining the difference between quantum and classical processes. As of November 2011, collapse theories look to have received a degree of support from researchers, Pusey et al, at Imperial College London, who have devised a theorem claiming to prove the physical reality of the wave function (arXiv:1111.3328v1 [quant-ph], 14 Nov 2011). The authors claim to have shown that the view that quantum states are only mathematical abstractions is inconsistent with the predictions of quantum theory. The theorem indicates that quantum states in an experiment must be physical systems, or an experiment will have results not predicted by quantum mechanics. They also claim that the theorem can be tested by experiment.

The physicist, Lee Smolin, thinks that space and the related concept of locality should be thought of as emerging from a network of relationships between the quanta that are regarded as fundamental. He argues that the existence of non-locality shows that spacetime is not fundamental, in that non-locality does not accord with the conventional view of spacetime. He proposes that spacetime emerges from a more connected structure. The approaches of many modern physicists tend to view spacetime as a discrete network or web, and this in turn hints at a structure which could carry some form of pattern or code fundamental to conscious experience. A century ago, Einstein showed that spacetime was not a fixed absolute background or rigid theatre against which life could be acted out. Instead it could be conceived of as dynamic in response to changes in matter. Relativity describes the behaviour of the universe on the large scale, while quantum theory describes a scale at which gravity can be ignored. Although both theories have been exhaustively tested over the last century, they are not compatible. The smooth continuous curvature of spacetime in relativity conflicts with the discreteness of the quanta, while the dynamism of spacetime in relativity contrasts with the fixed background of quantum theory.

Loop Quantum Gravity:
Loop quantum gravity is one attempt to reconcile relativity and quantum theory. Space is viewed as an emergent property based on something that is discrete rather than continuous. Field lines are viewed as describing the geometry of spacetime. Areas and volumes come in discrete units. There is a suggestion that knots and links in the network code for particles, with different knotting of the network coding for different particles. This suggests that space may have the energy-transmitting properties of a superconductor, based on the quantum vacuum being full of oscillating particles. The vacuum fluctuations are here seen as transmitters of force. An alternative is that the quantised force lines binding together the quarks which make up the protons of the nucleus are themselves fundamental entities. Here again there is the idea of a non-continuous structure in spacetime.
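The discreteness referred to here is usually expressed through the area operator of loop quantum gravity. On the standard account (quoted for orientation; it is not derived in this document), a surface crossed by spin-network edges carrying spins j_i has an area

    \[A = 8\pi\gamma\,\ell_P^2 \sum_i \sqrt{j_i(j_i+1)}\]

where \ell_P is the Planck length and \gamma the Barbero-Immirzi parameter, so areas come in discrete steps of the order of the Planck area.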
In loop quantum gravity the geometry of spacetime is expressed in loops, which may be the loops of the colour force binding the quarks, and whose interrelation defines space. The area of any surface comes in discrete multiples of units, the smallest unit being the Planck area, the square of the Planck length. The geometry of spacetime changes as a result of the movement of matter. The geometry here is the relationships of the edges, areas and volumes of the network, while the laws of physics govern how the geometry evolves. Most of the information needed to construct the geometry of spacetime comprises information about its causal structure. The fact that the universe is seen as a causal structure means that even terms such as ‘things’ or ‘objects’ are not strictly correct, because they are really processes, and as such causal structures that are creating spacetime.

Penrose spin network:
The discrete units of loop quantum gravity relate to the spin network concept earlier developed by Roger Penrose as a version of quantum geometry. The network is a graph labelled with integers, representing the spins that particles have in quantum theory. The spin networks provide a possible quantum state for the geometry of space. The edges of the network correspond to units of area, while the nodes where edges of the spin network meet correspond to units of volume. The spin network is suggested to follow from combining quantum theory and relativity. The network can evolve in response to changes and relates to the development of light cones. Penrose sees understanding and consciousness as being embedded in the geometry of spacetime.

Black holes and their significance:
Light cannot escape from black holes, thus creating a hidden region behind the horizon of the black hole. The entropy of the black hole is proportional not to its volume but to the area of the event horizon, and is given as a quarter of the area of the horizon divided by ħ times the gravitational constant. The horizon can be conceived as a computer screen with one pixel for every four Planck areas, which gives the amount of information hidden in the black hole. In fact, all observers are seen as having hidden regions bounded by horizons. The horizon marks the boundary from beyond which they will never receive light signals. The situation of an observer on a spaceship accelerating close to the speed of light is considered in relation to this. There is a region behind the ship from which light will never catch up, constituting a hidden region for the observer. At the same time, the observer on the spaceship would see heat radiation coming towards them from in front. The uncertainty principle dictates that space is filled with virtual particles jumping in and out of existence, but the energy of an accelerating spaceship, or its equivalent in the form of extreme gravity close to the event horizon of a black hole, would convert these virtual particles into real particles.

Experimental evidence:
A recent experiment by Chris Wilson serves to substantiate the prediction that energy in the form of photons could be created out of empty space. In this, an electron was accelerated to a quarter of the speed of light, and its kinetic energy was sufficient to turn virtual photons into real photons. The significance of this is to suggest that spacetime is not an abstraction but something real that is capable of producing particles, and also possibly capable of containing the configurations of consciousness and understanding.
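The entropy statement above is the Bekenstein-Hawking formula. Written out in full, with the constants restored (a standard result, included only to make the ‘quarter of the horizon area’ claim precise), it reads

    \[S_{BH} = \frac{k_B c^3 A}{4\hbar G}\]

which is one quarter of the horizon area A when lengths are measured in Planck units, and which motivates the picture of the horizon as a screen carrying a fixed amount of hidden information per few Planck areas.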
A further suggestion here is that quantum randomness is not really random on the basis of the whole universe, but is a measure of the missing information about particles which lie beyond the event horizon, but with which a particle is non-locally correlated. In contrast to Smolin, some other physicists regard spacetime as the fundamental aspect of the universe, with the quanta seen as merely disturbances of this underlying spacetime.

In this section, we discuss recent research indicating the existence of quantum coherence in organic matter and the implications of this for neurons. First, however, we take an excursion into some well-established organic chemistry which is relevant to the systems discussed later.

Pi electrons:
We start by discussing the role of electrons around atoms. The overlap of the atomic orbitals forms bonds between atoms, and thus creates molecules, and also determines the shape of the molecules. The term ‘n’ is used to describe the energy level of each orbital. Each value of ‘n’ defines a shell, which can contain a number of orbitals at different energy levels. The first shell, n=1, can contain only one type of orbital, the second shell, n=2, can contain two types of orbital, and so on.

Angular momentum:
Another quantum number ‘L’ relates to the angular momentum of an electron in an orbital. The value of ‘L’ runs from 0 up to one less than the value of ‘n’. For the first two shells the values of ‘L’ are therefore 0 and 1, conventionally written as ‘s’ and ‘p’. So an electron can be labelled as 2p, denoting the second shell, n=2, and an angular momentum quantum number of 1.

Electron wave function:
The electron orbital is viewed as being a wave function. The wavelength, or equivalently the frequency, is related to the energy level of the individual electron.

Spheres and lobes:
The probability of an electron being at a particular point in space can be referred to as its density plot. For an ‘s’ orbital, the density plot is spherical, but with ‘p’ electrons, the shape of the density plot is two lobes with a nodal area in between, where there is no electron density. The wave functions of these two lobes are out of phase. A further quantum number ‘ml’ relates to the spatial orientation of the orbital angular momentum. The ‘s’ orbitals have ml = 0, because a sphere does not have an orientation in space. For ‘p’ orbitals there are three possibilities of -1, 0 and +1, written as px, py and pz, that can be related to the mutually perpendicular ‘x’, ‘y’ and ‘z’ axes in geometry.

Structure of an atom:
The structure of an atom involves having electrons in the lowest energy orbital and working up from there. Hydrogen has one electron located in the lowest energy orbital, and helium has two electrons, both in the lowest orbital. Two electrons render an orbital full. An orbital can be full with two electrons, half full with one electron, or empty. With lithium, which has three electrons, the third electron has to be placed in the second shell. With carbon there are six electrons, two in the first, n=1, shell, while in the second, n=2, shell there is one full orbital with two ‘s’ electrons and two half-full orbitals, each with one ‘p’ electron.

Structure of molecules:
Atoms are the basis for molecules. The orbitals of atoms are wave functions, and if these waves are in phase their amplitudes are added together. When this happens the increased amplitude works against the repulsive force acting between the positively charged nuclei of neighbouring atoms, and thus acts to bind the atoms together.
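Before continuing with molecular orbitals, the counting rules for ‘n’, ‘L’ and ‘ml’ set out above can be made concrete in a few lines of code. This is a simple enumeration of the allowed combinations (the s, p, d labels are the usual convention), showing why the shells hold 2, 8 and 18 electrons once the two spin states per orbital are included.

```python
# Enumerate the orbitals allowed by the quantum-number rules described above.
LABELS = "spdf"

for n in range(1, 4):                      # shells n = 1, 2, 3
    orbitals = []
    for l in range(n):                     # 'L' runs from 0 up to n - 1
        for ml in range(-l, l + 1):        # spatial orientations of the orbital
            orbitals.append(f"{n}{LABELS[l]}(ml={ml:+d})")
    # two spin states per orbital give a shell capacity of 2 * n**2
    print(f"n={n}: {len(orbitals)} orbitals, capacity {2 * len(orbitals)} electrons")
    print("     " + ", ".join(orbitals))
```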
The in-phase combination just described is referred to as a bonding molecular orbital. When the orbitals are out of phase, they are on the far sides of the atomic nuclei, which continue to repel one another. This is known as the anti-bonding molecular orbital. The two are collectively known as MOs. The anti-bonding MOs usually have higher energy than the bonding MOs. Energy applied to an atom can promote a low-energy bonding orbital to a higher-energy anti-bonding orbital, and this can break the bond between the atoms. When ‘s’ orbitals combine, the MOs are symmetrical; this is referred to as sigma symmetry and is described as a sigma bond. When there is a combination of ‘p’ orbitals, there is a possibility of three different ‘p’ orbitals on axes that are perpendicular to one another. One of these can overlap end-on with an orbital in another atom, and these two orbitals are described as 2psigma and 2psigma*. Two further types of orbital can overlap with those in other atoms side-on, and these will not be symmetrical about the nuclear axis. These are described as pi orbitals, and they form pi bonds. Further to that, ‘p’ electrons must have the right orientation, so px electrons can only interact with other px electrons, and so on. In discussing bonding, only the electrons in the outermost shell of the atom are usually relevant. For example, in a nitrogen molecule only the electrons in the second shell are involved in bonding. Nitrogen atoms have seven electrons, of which five are in the outer shell. Two of these outer-shell electrons are ‘s’ electrons, leaving the bonding work to the three ‘p’ electrons in the outer shell. When two nitrogen atoms are bound into a nitrogen molecule they form two pi bonds and one sigma bond. Molecular bonding can also occur between different types of atoms, but there is a requirement that the energy difference is not too great.

Hybridisation:
Hybridisation is an important factor in the formation of molecular bonds. The ‘s’ and ‘p’ orbitals are the most important in organic chemistry, as with the bonding of carbon, oxygen, nitrogen, sulphur and phosphorus. Hybridised orbitals are viewed as ‘s’ and ‘p’ orbitals superimposed on one another.

3.2:
In its ground state, the carbon atom has two electrons in the first shell, and this shell is not normally involved in bonding. In its second and outer shell it has two ‘s’ electrons filling an orbital, and two ‘p’ electrons, one px and one py, each in a half-filled orbital. If the carbon atom is excited, say by the positive charge attraction of the nucleus of a nearby hydrogen atom, an ‘s’ electron in the outer shell can be excited into a ‘p’ orbital, so that the outer shell now has one ‘s’ electron and three ‘p’ electrons, one each in an x, y and z orientation. The four outer-shell electrons are now deemed to be not distinct ‘s’ and ‘p’ electrons but four ‘sp’ electrons, here described as sp3, because the configuration is one quarter ‘s’ electron and three quarters ‘p’ electrons. The arrangement allows the formation of four σ covalent bonds. Carbon atoms can also use sp2 hybridisation, where one ‘s’ electron and two ‘p’ electrons in the outer shell are hybridised. There is also ‘sp’ hybridisation, where the ‘s’ orbital mixes with just one of the ‘p’ orbitals. With the C=O double bond, the two atoms in the double bond are sp2 hybridised. The carbon atom uses all three orbitals in the sp2 arrangement to form σ bonds with other orbitals, but the oxygen atom uses only one of these. In addition a ‘p’ electron from each atom forms a π bond.
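The in-phase and out-of-phase combinations running through this section are conventionally written as the symmetric and antisymmetric sums of the two atomic orbitals (the standard LCAO form, stated here for clarity rather than taken from the text):

    \[\psi_{\pm} = \frac{1}{\sqrt{2}}\left(\phi_A \pm \phi_B\right)\]

with \psi_+ the lower-energy bonding MO, whose increased amplitude between the nuclei binds the atoms together, and \psi_- the higher-energy anti-bonding MO, which has a node between the nuclei.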
3.3:  Delocalisation and conjugation
The joining together, or conjugation, of double bonds is important for organic structures. π bonds can form into a framework over a large number of atoms, and are seen to account for the stability of some compounds. The structure of benzene is relevant in this respect. Benzene is based on a ring of six carbon atoms. The carbon atoms are sp2 hybridised, leaving one ‘p’ electron per carbon atom free, or six electrons altogether. These six electrons are spread equally over the six carbon atoms of the ring. These are π bonds delocalised over all six atoms in the carbon ring, rather than being localised in particular double bonds. Delocalisation emphasises the spatial spread of the electron waves, and occurs over the whole of the conjugated system. This is sometimes referred to as resonance. Sequences of double and single bonds also occur as chains rather than rings. Conjugation refers to the sequence of single and double bonds that form either a ring or a chain. Double bonds between carbon and oxygen can be conjugated in the same way as double bonds between carbon atoms. Conjugation requires there to be only one single bond between each double bond. Two double bonds together also do not permit conjugation. These ‘rules’ relate to the need to have ‘p’ orbitals available to delocalise over the system. In both rings and chains every carbon atom is sp2 hybridised, leaving a third ‘p’ electron to overlap with its neighbours and form an uninterrupted chain. The double bonds that are conjugated with single bonds are seen to have different properties from double bonds not arranged in this way. Here again conjugation leads to significantly different chemical behaviour.

Chlorophyll, the pigment molecule in plants, is a good example of a conjugated ring of single and double bonds, and the colour of all pigments and dyes depends on conjugation. The colour involved depends on the length of the conjugated chain. Each additional conjugated bond increases the wavelength of the light absorbed. With fewer than eight bonds, light is absorbed in the ultraviolet. The colours of objects and materials around us are a function of the interaction of light with pigments. Pigments are characterised by having a large number of double bonds between atoms. The pigment lycopene, responsible for the red in tomatoes and some berries, comprises a long chain of alternating double and single bonds, allowing the molecule to form π bonds. An extensive network of π bonds across a large number of atoms is involved in the chemistry of many compounds. It is responsible for the high degree of stability in aromatic compounds such as benzene. The compound ethylene (CH2=CH2) has all its atoms in the same plane, and is therefore described as planar. In this molecule, the two carbon atoms are joined by a double bond. Hybridisation involves mixing the 2s orbital on each carbon atom with two out of the three ‘p’ orbitals on each carbon atom to give three sp2 orbitals. The third ‘p’ orbital on each atom overlaps with the ‘p’ orbital of the other atom to form a π bond. The ‘p’ orbitals of the two atoms also have to be parallel to one another in order to form a π bond. This bond prevents rotation about the double bond between the carbon atoms. However, sufficient energy, such as that of ultraviolet light, can break the π bond, and thus allow the double bond to rotate. An important feature of benzene is its ability to preserve its ring structure through a variety of chemical reactions.
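The relation between conjugation length and absorbed wavelength noted above is often illustrated with the free-electron (‘particle in a box’) model of a conjugated chain. The sketch below uses that textbook model with an assumed average bond length of 1.4 Å; it is an order-of-magnitude illustration of the trend, not a description of any calculation in the sources discussed here.

```python
# Free-electron ("particle in a box") estimate of a conjugated chain's absorption.
H, C, M_E = 6.626e-34, 2.998e8, 9.109e-31     # Planck constant, speed of light, electron mass

def absorption_wavelength(n_double_bonds, bond_length=1.4e-10):
    n_electrons = 2 * n_double_bonds          # two pi electrons per double bond
    box = 2 * n_double_bonds * bond_length    # rough length of the conjugated chain
    # HOMO -> LUMO gap for N electrons filling levels k = 1 .. N/2:
    gap = H**2 * (n_electrons + 1) / (8 * M_E * box**2)
    return H * C / gap                        # wavelength of the absorbed light

for nb in (3, 5, 8, 11):
    print(f"{nb:>2} double bonds -> absorption near {absorption_wavelength(nb)*1e9:.0f} nm")
```

The model is crude, but it reproduces the qualitative behaviour described above: short conjugated chains absorb in the ultraviolet, and each additional double bond pushes the absorption towards longer wavelengths.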
Benzene and other compounds that preserve their ring structure in this way are termed aromatic. In looking at these structures, the important feature is not the number of conjugated atoms, but the number of electrons involved in the π system. The six π electrons of benzene leave all its bonding molecular orbitals fully occupied in a closed shell, and account for its stability. A closed shell of electrons in bonding orbitals is a definition of aromaticity. In benzene, the lowest energy ‘p’ orbitals comprise electron density above and below the plane of the molecule. These electron orbitals are spread over, delocalised over or conjugated over all six carbon atoms in the benzene ring. The delocalised ‘p’ orbitals can themselves be thought of as a ring. Expressed another way, this type of delocalisation is an uninterrupted sequence of double and single bonds, and it is this which is described as conjugation. The properties of this type of system are seen to be different from those of its component parts. Benzene has six π electrons, and in consequence all its bonding orbitals are full, giving the molecule a closed structure, which is often not the case for quite similar molecules with a lot of double bonds. This is what is meant by a molecule being aromatic. The general rule is that there has to be a low-energy bonding orbital with the ‘p’ orbitals in phase. In aromatic systems there is a closed shell giving greater stability, where two ‘p’ orbitals form a π bond together with four other electrons.

Carbon and oxygen bonds
It is not essential in these systems to have carbon-to-carbon bonds. Carbon and oxygen also often form double bonds, separated by just one single bond. Here too the behaviour of the double-bonded system is quite different from the behaviour of the component parts. These structures are special in the sense of only arising where there are ‘p’ orbitals on different atoms available to overlap with one another. In many other molecules, there is a similarity in terms of a large number of double bonds, but they are insulated from one another by the lack of ‘p’ orbitals available to overlap with one another.

Amide groups, amino acids and protein
The amide group is crucial to protein, and therefore to living systems as a whole, in that it forms the links between the amino acid molecules that in turn make up protein, the basic building block of life. The amino group on one amino acid molecule combines with the carboxylic acid group on another amino acid molecule to give an amide group. When a chain of this kind forms it is a peptide or polypeptide, and longer chains are classed as proteins. Conjugation arises from a lone pair in a ‘p’ orbital, and this is vital in stabilising the link between the amino acids, and making it relatively difficult to disrupt the amino acid chains that make up protein.

3.4:  Structure of molecules
The structure of the individual atom is also the basis for the structure of molecules. Atomic orbitals are wave functions, and the orbital wave functions of different atoms are like waves, in that if they are in phase, their amplitudes are added together. When this happens, the increased amplitude of the wave function works against the mutual repulsion of the positively charged atomic nuclei of different atoms, and works to bond the atoms together. This is referred to as a bonding molecular orbital.
When the orbitals are out of phase, they are on the far sides of the atomic nuclei, which continue to repel one another due to their like positive electric charges, and this arrangement is known as the anti-bonding molecular orbital. Collectively the two types of molecular orbital are referred to as MOs. The anti-bonding MOs usually have higher energy than the bonding MOs. Energy applied to an atom can promote a low-energy bonding orbital to a higher-energy anti-bonding orbital, and this process can break the bond between two atoms. When ‘s’ orbitals combine, the MOs are symmetrical, and this type of orbital overlap has sigma (σ) symmetry and is described as a sigma (σ) bond. When there is a combination of ‘p’ orbitals, there is a possibility of three different ‘p’ orbitals on axes that are perpendicular to one another. One of these can overlap end-on with an orbital in another atom, and these two orbitals are described as 2pσ and 2pσ*. Two other orbitals can overlap with those on other atoms side-on, and will not be symmetrical about the nuclear axis. These are described as π orbitals and form π bonds.

In discussing bonding, only the electrons in the outermost shell of the atoms are usually relevant. For example, in a nitrogen molecule formed by the bonding of two nitrogen atoms, only the electrons in the second, n=2, shell are involved in bonding. The nitrogen atom has seven electrons, so there are fourteen on the two atoms that bond to form a nitrogen molecule. Two electrons in the inner shell of each atom are not involved, leaving five on each atom, and ten altogether, in the second shells. The 2s electrons on each atom cancel out, and are described as lone pairs. The bonding work thus devolves on three electrons in each atom, or six in the whole molecule. These form one σ bond and two π bonds. This is described as a triple-bonded structure. Orbitals overlap better when they are in the same shell of their respective atoms. So electrons in the second shell will overlap more readily with other second-shell electrons than with third- or fourth-shell electrons. Further to that, ‘p’ electrons must have the right orientation, and px electrons can only interact with other px electrons and so on, because the x, y and z orbitals are perpendicular, or orthogonal, to one another. Molecular bonding also applies to molecules that are formed out of different types of atoms, as distinct from molecules formed from atoms of the same element, such as the nitrogen molecule discussed above. If the atomic orbitals of different atoms are very different, they cannot combine, and the atoms cannot form covalent bonds (the sharing of electrons between two atoms). Instead an electron can transfer from one atom to another, transforming the first atom into a positive ion, and the second atom into a negative ion, with the molecule now held together by the attraction between the oppositely charged ions. This is known as ionic bonding. Covalent bonds with overlapping orbitals can only be formed when the difference in energy is not too great.

Hybridisation:
Hybridisation is an important factor in the formation of molecular bonds. The ‘s’ and ‘p’ orbitals are those most important for organic chemistry, and for the bonding of atoms such as carbon, oxygen, nitrogen, sulphur and phosphorus. Hybridised orbitals are viewed as ‘s’ and ‘p’ orbitals superimposed on one another.
The key argument against quantum states having a practical role in neural processing is that in the conditions of the brain quantum decoherence would happen too rapidly for the states to be relevant. This view was crystallised by the Tegmark paper (9. Tegmark, 2000) published in the prestigious journal, Physical Review E. The paper itself was not remarkable. For reasons that have never been properly explained, it used a model of quantum processing that has never been proposed elsewhere, and it failed to discuss, or even mention, arguments for the shielding of quantum processing in the brain. Nevertheless, it succeeded in confirming, in a prestigious way, the views of the numerous opponents of quantum consciousness. The situation remained like that between 2000 and 2007, after which the debate over quantum states in biological systems was moved to a new stage by the discovery that quantum coherence has a functional role in the transfer of energy within photosynthetic organisms (10. Engel et al, 2007). This moved the discussion of what sort of coherent biological features could support consciousness on from a phase of pure theorising to a phase in which ideas can be related to features that have been shown to exist in biological matter.

3.6:  The Engel study
The Engel et al paper studied photosynthesis in green sulphur bacteria. The photosynthetic complexes (chromophores) in the bacteria are tuned to capturing light and transmitting its energy to long-term storage areas. It should be stressed that in this system, photons (the light quanta) only provide the initial excitation, and the coherence and entanglement discussed here involve electrons in biological systems. The Engel study documented the dependence of energy transport on the spatially extended properties of the wave function of the photosynthetic complexes. In particular, the timescale of the quantum coherence observed was much longer than would normally be predicted for a biological environment, with a duration of at least 660 femtoseconds (1 femtosecond = 10^-15 seconds), nearly three times as long as the classically predicted time of 250 femtoseconds. In the latter case, rapid destruction of coherence would prevent it from influencing the system. The wavelike process noted by Engel was suggested to account for the efficiency of the system, at 98% compared to the 60-70% predicted for a classical system.

3.7:  Limited dephasing
Another researcher in this area, Martin Plenio, argues that where temperatures are relatively high, there is likely to be some dephasing of the quanta, but contrary to the popular view that this would be the end of quantum processing, the efficiency of energy transportation could actually be enhanced by this limited dephasing. Referring to a quantum experiment with beam splitters and detectors, he suggests that partial dephasing might actually allow a wider and therefore more efficient exploration of the system.

3.8:  Cheng & Fleming – the protein environment
In a paper by Cheng & Fleming published in ‘Science’ (11.), a study of long-lived quantum coherence in photosynthetic bacteria demonstrates strong correlations between chromophore molecules. One experiment looked at two chromophore molecules. The system provided near-unity efficiency of energy transfer, and also demonstrates energy transfer between the chromophores. The experiment also shows that the time for dephasing of these molecules is substantially longer than would have been traditionally estimated.
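Plenio’s suggestion that limited dephasing can enhance transport can be reproduced in a toy model. The sketch below is my own minimal construction, not the model used in any of the papers cited: a two-site exciton with mismatched site energies, an irreversible trap on the second site, a slow loss channel, and a Lindblad pure-dephasing term, all with arbitrary illustrative rates. With no dephasing the excitation stays stuck on the off-resonant first site and is mostly lost; moderate dephasing markedly improves the fraction delivered to the trap; very strong dephasing suppresses transport again.

```python
import numpy as np
from scipy.linalg import expm

# Levels: 0 = site 1, 1 = site 2 (detuned), 2 = sink (trap), 3 = lost.
# All rates are in arbitrary units; the values are illustrative only.
J, DELTA, KAPPA, LOSS = 1.0, 8.0, 1.0, 0.05
H = np.zeros((4, 4), dtype=complex)
H[0, 1] = H[1, 0] = J          # coherent hopping between the two sites
H[1, 1] = DELTA                # site 2 is detuned from site 1

def jump(i, j, rate):
    L = np.zeros((4, 4), dtype=complex)
    L[i, j] = np.sqrt(rate)
    return L

def efficiency(gamma, t=400.0):
    ops = [jump(2, 1, KAPPA),                    # irreversible trapping from site 2
           jump(3, 0, LOSS), jump(3, 1, LOSS),   # excitation loss from both sites
           np.sqrt(gamma) * np.diag([1, 0, 0, 0]).astype(complex),  # pure dephasing
           np.sqrt(gamma) * np.diag([0, 1, 0, 0]).astype(complex)]
    I = np.eye(4)
    # Liouvillian acting on the row-major vectorised density matrix.
    lv = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for L in ops:
        LdL = L.conj().T @ L
        lv += np.kron(L, L.conj()) - 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T))
    rho0 = np.zeros((4, 4), dtype=complex)
    rho0[0, 0] = 1.0                             # excitation starts on site 1
    rho_t = (expm(lv * t) @ rho0.reshape(-1)).reshape(4, 4)
    return rho_t[2, 2].real                      # population delivered to the sink

for g in (0.0, 0.5, 2.0, 50.0):
    print(f"dephasing rate {g:>4}: transfer efficiency {efficiency(g):.2f}")
```

In this toy model the delivered fraction peaks at an intermediate dephasing rate and falls off again in the strong-noise regime, echoing the dephasing-assisted exploration Plenio describes.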
The traditional approach to such estimates ignored the coherence between donor and acceptor states. The adaptive advantage of this coherence lies in the efficiency of the search for the electron donor. The longer time to dephasing of one as compared to the other of the experimental chromophores was taken to indicate a strong correlation of the energy fluctuations of the two molecules. This meant that the two molecules were embedded in the same protein environment. Another study by Fleming et al, which also observed long-lasting coherence in a photosynthetic complex, indicated that this could be explained by correlations between protein motions that modulate the transition energies of neighbouring chromophores. This suggests that the protein environment works to preserve electronic coherence in photosynthetic complexes, and thus optimise excitatory energy transfer.

Chains of polymers
Elisabetta Collini and Gregory Scholes conducted an experiment, also reported in ‘Science’ (12.), that observed quantum coherence dynamics in relation to electronic energy transfer. The experiment examined polymer samples with different chain conformations at room temperature, and recorded intrachain, but not interchain, coherent electronic energy transfer. It is pointed out that natural photosynthetic proteins and artificial polymers organise light-absorbing molecules (chromophores) to channel photon energy. The excitation energy from the absorbed light can be shared quantum mechanically among the chromophores. Where this happens, electronic coupling predominates over the tendency towards quantum decoherence (loss of coherence due to interaction with the environment), and is viewed as comprising a standing wave connecting donor and acceptor paths, with the evolution of the system entangled in a single quantum state. Within chains of polymers there can be conformational subunits 2 to 12 repeat units long, which are the primary absorbing units or chromophores. Neighbouring chromophores along the backbone of a polymer have quite a strong electronic coupling, and electronic transfer between these is coherent at room temperature.

3.9:  Quantum entanglement considered – Sarovar et al (2009)
In a 2009 paper, Sarovar et al (13.) examined the subject of possible quantum entanglement in photosynthetic complexes. The paper starts by discussing quantum coherence between the spatially separated chromophore molecules found in these systems. Modelling of the system showed that entanglement would rapidly decrease to zero, but then resurge after about 600 femtoseconds. Entanglement could in fact survive for considerably longer than coherence, with a duration of five picoseconds at 77K, falling to two picoseconds at room temperature. The entanglement examined here is the non-local correlation between the electronic states of spatially separated chromophores. Coherence is a necessary and sufficient condition for entanglement to exist.

Ishizaki and Fleming (2009)
This paper (14.) developed an equation that allows modelling of the photosynthetic systems discussed above. Where this deals with the sites to be excited by the light energy, the initial entanglement rapidly decreases to zero, but then increases again after about 600 femtoseconds. This is thought to be a function of the entanglement of the initial sites being transported and localised at other sites, but remaining coherent at these other sites, from which further entanglement can subsequently resurge.
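The statement that coherence is necessary and sufficient for entanglement has a simple quantitative form in the single-excitation regime these studies work in (this is the standard relation used in such analyses, stated here for clarity): for one excitation shared between chromophores i and j, the concurrence, a standard entanglement measure, is

    \[C_{ij} = 2\,|\rho_{ij}|\]

so any non-zero off-diagonal element of the density matrix between two sites is itself a direct measure of the entanglement between them, which is why a resurgence of coherence appears simultaneously as a resurgence of entanglement.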
Other studies appear to confirm the existence of picosecond timescales for entanglement in chromophores. It is not clear to the authors that entanglement is actually functional in chromophores. Coherence appears to be sufficient for very efficient transport of energy, and entanglement may be only a by-product of coherence. This looks to remain an area of scientific debate. Earlier studies such as Engel’s were performed at low temperatures, whereas quantum coherence becomes more fragile at higher temperatures, because of the higher amplitude of environmental fluctuations. In the Ishizaki and Fleming paper, the equation supplied by the authors suggests that coherence could persist for several hundred femtoseconds even at the physiological temperature of 300 Kelvin. This study deals with the Fenna-Matthews-Olson (FMO) pigment-protein complex found in low-light-adapted green sulphur bacteria. The FMO complex is situated between the chlorosome antenna and the reaction centre, and its function is to transport energy harvested from sunlight by the antenna to the reaction centre. The FMO complex is a trimer of identical sub-units, each comprising seven bacteriochlorophyll (BChl) molecules, and its structure has been extensively studied. BChl 1 and 6 are orientated towards the chlorosome antenna, and are the initially excited pigments, while BChl 3 and 4 are orientated towards the reaction centre. Even at physiological temperatures, quantum coherence can be observed for up to 350 femtoseconds in this structure. This suggests that long-lived electronic coherence is sustained among the BChls, even at physiological temperatures, and may play a role in the high efficiency of excitation energy transfer (EET) in photosynthetic proteins.

BChl 1 and 6 are seen as capturing and conveying onward the initial electronic energy excitation. Quantum coherence is suggested to allow rapid sampling of pathways to BChl 3, which connects to the reaction centre. If the process were entirely classical, trapping of energy in subsidiary minima would be inevitable, whereas quantum delocalisation can avoid such traps, and aid the capture of excitation by pigments BChl 3 and 4. BChl 6 is strongly coupled to BChl 5 and 7, which are in turn strongly coupled to BChl 4, ensuring transfer of excitation energy. Delocalisation of energy over several of the molecules allows exploration of the lowest energy site in BChl 3. The study predicts that quantum coherence could be sustained for 350 femtoseconds, but if the calculation is adjusted for a possible longer phonon relaxation time, this could extend to 550 femtoseconds, still at physiological temperatures.

Cai et al (2008) – resetting entanglement
A 2008 paper from Cai et al (15.) also looked at the possibility of quantum entanglement in the type of system studied in the Engel paper. Cai takes the view that entanglement can exist in hot biological environments. Cai says traditional thinking on biological systems is based on the assumption of thermal equilibrium, whereas biological systems are far from thermal equilibrium. He points out that the conformation of protein involves interactions at the quantum level. These are usually treated classically, but Cai wonders whether a proper understanding of protein dynamics does not require quantum mechanics. It is said not to be clear whether or not entanglement is generated during the motions of protein, but it is possible that entanglement could have important implications for the functioning of protein.
The model studied in the Cai et al paper suggests that while a noisy environment, such as that found in biological matter, can destroy entanglement, it can also set up fresh entanglement. It is argued that entanglement can recur in the case of an oscillating molecule, in a way that would not be possible in the absence of this oscillation. The molecule has to oscillate at a certain rate relative to the environment to become entangled. This process allows entanglement to emerge, but it would normally also disappear quickly. Something extra is needed for entanglement to recur or persist. It is suggested here that the environment, which is normally viewed as the source of decoherence, can play a constructive role in resetting entanglement when combined with classical molecular motion. Environmental noise in combination with molecular motion provides a reset mechanism for entanglement. According to the authors’ calculations, entanglement can persistently recur in an oscillating molecule, even if the environment is too hot for static entanglement. The oscillation of the molecule combined with the noise of the environment may repeatedly reset entanglement.

3.10:  The FMO complex and entanglement
A paper by K. Birgitta Whaley, Mohan Sarovar and Akihito Ishizaki published in 2010 (16.) discusses recent studies of photosynthetic light-harvesting complexes. The studies are seen as having established the existence of quantum entanglement in biologically functional systems that are not in thermal equilibrium. However, this does not necessarily mean that entanglement has a biological function. The authors point out that the modern discussion of entanglement has moved on from simple arrangements of particles to entanglement in larger-scale systems. Measurements of excitonic energy transport in photosynthetic light-harvesting complexes show evidence of quantum coherence in these systems. A particular focus of research has been the Fenna-Matthews-Olson (FMO) complex in green sulphur bacteria. The FMO complex serves to transport electronic energy from the light-harvesting antenna to the photosynthetic reaction centre. Coherence is present here at up to 300K. The authors draw attention to the relationship between electronic excitations in the chromophores and those in the surrounding protein. The electronic excitations in the chromophores are coupled to the vibrational modes of the surrounding protein scaffolding. One study (Scholak et al, 2010) shows a correlation between the extent of entanglement and the efficiency of the energy transport. The study went on to claim that efficient transport requires entanglement, although the authors of this paper query such a definite assertion. The pigment-protein dynamics generates entanglement across the entire FMO complex in only 100 femtoseconds, followed by oscillations that damp out over several hundred femtoseconds, with a subsequent longer contribution continuing beyond that for up to about five picoseconds. This more persistent entanglement can be at between a third and a half of the initial value, and 15% of the maximum possible value. Long-lived entanglement takes place between four or five of the seven chromophores. The most extended entanglement is between chromophores one and three, which are also two of the most widely separated chromophores. Studies also show that this entanglement is quite resistant to temperature increase, with only a 25% reduction when the temperature rises from 77K to 300K.
Overall, studies indicate long-lived entanglement of as much as five picoseconds between excitations on spatially separated pigment molecules. This is described as long-lived because energy transfer through the FMO complex takes place over a span of a few picoseconds, meaning that the up to five picoseconds of entanglement seen between the chromophores represents a functional timescale. However, the authors do not consider this by itself to be a conclusive argument for entanglement being functional in the FMO complex.

Light-harvesting complex II (LHCII)
This paper also looks at light-harvesting complex II (LHCII), which is also shown to have long-lived electronic coherence. LHCII is the most common light-harvesting complex in plants. The system comprises three subunits, each of which contains eight chlorophyll ‘a’ molecules and six chlorophyll ‘b’ molecules. A study by two of the authors (Ishizaki & Fleming, 2010) indicates that only one of the chlorophyll molecules would be initially excited by photons, and this molecule would then become entangled with other chlorophyll molecules. Entanglement decreases at first, but then persists at a significant proportion of the maximum possible value. This is also an important feature of the FMO complex. In both these complexes entanglement is seen to be generated by the passage of electronic excitation through the light-harvesting complexes, and to be distributed over a number of chromophores. Entanglement persists over a longer time, and is more resistant to temperature increase, than might previously have been expected. A functional biological role is suggested by the persistence of entanglement over the same timescale as the energy transfer within the light-harvesting complexes. Light-harvesting complexes (LHCs) are densely packed molecular structures involved in the initial stages of photosynthesis. These complexes capture light, and the resulting excitation energy is transferred to reaction centres, where chemical reactions are initiated. LHCs are particularly efficient at transporting excitation energy in disordered environments. Simulations of the dynamics of particular LHCs predict that quantum entanglement will persist over observable timescales. Entanglement here would mean that there are non-local correlations between spatially separated molecules in the LHCs. The molecules in the LHCs, referred to as chromophores, are close enough together for considerable dipole coupling, leading to coherent interaction over observable timescales. The existence of coherence between molecules in these systems has been recognised for a decade or more. This condition is seen as the basis for entanglement. Coherence in the site basis, as it is known, is necessary and sufficient for entanglement; any coherence in this basis will lead to entanglement, and can be viewed in experiments as a signature of entanglement. The authors base part of their study on a description of the dynamics of a molecule in a protein in an LHC. This model indicates the coupling of some pairs of molecules due to proximity and favourable dipole orientation, effectively forming dimers. The wave function of the system is delocalised across these dimers. Using this model, the interaction of the LHC with light energy leads to a rapid increase in entanglement for a short time, followed by a decay punctuated by varying amounts of oscillation. The initial rapid increase reflects the coherent coupling of some parts of the LHC system.
This entanglement decreases again as the excitation comes into contact with other parts of the protein. Some of the entanglement seen is not between immediately neighbouring molecules, but between more distant parts of the LHC. Entanglement in the LHC is estimated to continue until the excitation reaches the reaction centre. The authors view this as a remarkable conclusion, since it shows that entanglement between several particles can persist in a non-equilibrium condition, despite a decoherent environment.

3.11:  Entanglement and efficiency
A paper by Francesca Fassioli and Alexandra Olaya-Castro (17.) suggests that electronic quantum coherence amongst distant donors could allow precise modulation of the light-harvesting function. Photosynthesis is remarkable for the near 100% efficiency with which the energy of sunlight is transferred to a reaction centre. The spatial arrangement of the pigment molecules and their electronic interactions is known to relate to this efficiency. Recent experimental studies of photosynthetic protein have shown that it can sustain quantum coherence for longer than previously expected, and that this can happen at the normal temperature of biological processes. This has been taken to imply that quantum coherence may affect light-harvesting processes. Some studies point to very efficient energy transport as the optimal result of the interplay of quantum coherent and decoherent mechanisms. Roles proposed for quantum coherence vary between avoidance of energy traps that are not at the overall lowest energy level, and actual searches for the overall lowest energy level. In this paper, it is suggested that the function of quantum coherence goes beyond efficiency of energy transport, and includes the modulation of the photosynthetic antennae complexes to deal with variations in the environment.

3.12:  The role of quantum entanglement
There is some debate as to whether quantum entanglement plays a role in the functioning of the light-harvesting complexes, or is just a by-product of quantum states. The authors here argue that entanglement may be involved in the efficiency of the system, and they use the FMO protein in green sulphur bacteria as the basis of their study. They suggest that entanglement could play a role in light harvesting by allowing precise control of the rate at which excitations are transferred to the reaction centre. Long-range quantum correlations have been suggested to be important as a mechanism helping quantum coherence to survive at the high temperatures sustained in light-harvesting antennae. This paper claims to show that in the FMO complex long-lived quantum coherence is spatially distributed in such a way that entanglement between pairs of molecules controls the efficiency profile needed to cope with variations in the environment. The ability to control energy transport under varying environmental conditions is seen as crucial for the robustness of photosynthetic systems. A mechanism involving quantum coherence and entanglement might be effective in controlling the response to different light intensities.

3.13:  Room temperature: moving the debate forward
A paper by Elisabetta Collini et al published in ‘Nature’ in 2010 (18.)
moved the debate forward in an important way by demonstrating the existence of room-temperature quantum coherence in organic matter. This paper describes spectroscopic studies of two types of marine cryptophyte algae that show long-lasting excitation oscillations, and correlations and anti-correlations, symptomatic of quantum coherence even at ambient temperature. Distant molecules within the photosynthetic protein are thought to be connected by quantum coherence, and to produce efficient light harvesting as a result. The cryptophytes can photosynthesise in low-light conditions, suggesting a particularly efficient transfer of energy within the protein. According to the traditional theory, this would imply only a small separation between chromophores, whereas the actual separation is unusually large. In this study, performed at room temperature, the antenna protein received a laser pulse, resulting in a coherent superposition. The experimental data of the study show that the superposition persists for 400 femtoseconds and over a distance of 2.5 nanometres. Quantum coherence occurs in a complex mix of quantum interference between electronic resonances, and decoherence is caused by interaction with the environment. The authors think that long-lived quantum coherence facilitates efficient energy transfer across protein units. The authors remain uncertain as to how quantum coherence can persist for hundreds of femtoseconds in biological matter. One suggestion is that the expected rate of decoherence is slowed by shared or correlated motions in the surrounding environment. Where light-harvesting chromophores are covalently bound to the protein backbone, it is suggested that this may strengthen correlated motions between the chromophores and the protein. Covalent binding to the protein backbone is speculated to make coherence longer lasting.

3.14:  Widespread in nature
In addition to the discovery of quantum coherence in biological systems at room temperature, studies now also show that coherence is present in multicellular green plants. Calhoun et al, 2009 (19.) studied this kind of organism. These two discoveries, coherence at room temperature and coherence in green plants, have removed the initial possibility that coherence in organisms was an outlier confined to extreme conditions, rather than something widespread in nature. The question arises as to whether quantum coherence and entanglement in plants has any relevance to animal life, and in particular to brains. A brief talk by Travis Craddock of the University of Alberta at a 2011 consciousness conference suggested that it could. Craddock stressed that light-absorbing chromophore molecules involved in light harvesting use dipoles to provide 99% efficiency in energy transfer from the light-harvesting antennae to the reaction centre. The studies show that instead of quantum coherence being destroyed by the environment within the organism, a limited amount of noise in the environment acts to drive the system.

3.16:  Tryptophan
Craddock indicates that any system of dipoles could work like this. He is particularly interested in the role of the amino acid tryptophan. Similar models can be used for chromophores in photosynthetic systems and for tryptophan, an aromatic amino acid that is one of the 20 standard amino acids making up protein, including the microtubular protein, tubulin. There are eight tryptophan molecules extending over the length of the tubulin protein dimer, and tryptophan possesses strong transition dipoles.
Excitons over this network are not localised, but are shared between all the tryptophan molecules, in the same way that excitons are delocalised in the photosynthetic light-harvesting structures. Photosynthesis absorbs light in the red and infrared. These forms of light are not available to tryptophan in proteins, but tryptophan is able to use ultraviolet light emitted by the mitochondria. In fact, tryptophan is sometimes referred to as chromophoric because of its ability to absorb UV light. Craddock implies that the same system that gives rise to quantum coherence in light-harvesting complexes could also give rise to it within the protein of neurons.

Functional quantum states in the brain:
Following the recent papers discussed above, the debate on quantum coherence in living tissues has moved to a new stage. We now have definite evidence of functional quantum coherence in living matter, and also of the existence of quantum entanglement, which may also be functional. When this evidence is added to the similarities between the coherent structures in photosynthetic organisms and tryptophan, an amino acid that is common within neurons, we look to be moving into a zone where functional quantum states in the brain begin to look perfectly feasible.

3.17:  Quantum and classical interaction
The biologist, Stuart Kauffman, based at the University of Vermont and Tampere University, Finland (20.), is sceptical about ideas of consciousness based on classical and macroscopic physics. He proposes instead that consciousness is related to the border area between quantum and classical processing, where the non-algorithmic aspect of the quantum and the non-random aspect of the classical may be mixed. This is termed the ‘poised realm’, and is seen as applying to systems that include biomolecules, and by extension brain systems.

The poised realm:
In rejecting the classical basis of mainstream consciousness studies, Kauffman instead proposes the idea of the ‘poised realm’, essentially the border of quantum and classical rules, which he suggests may support processing that is non-algorithmic, but at the same time non-random. This resembles the earlier non-algorithmic scheme proposed by Penrose. Kauffman puts forward the notion of a distinction between ‘res potentia’, the realm of the possible, or the quantum world, and ‘res extensa’, the realm of what actually exists, or the classical world. His proposal examines the meaning of the unmeasured or uncollapsed Schrödinger wave, and the question as to whether consciousness can participate at this level. Kauffman discusses the modern quantum theory approach that distinguishes between an open quantum system and its environment. The open quantum system can be seen as the superposition of many possible quantum particles oscillating in phase. The information of the in-phase quanta can be lost through interaction with the environment, in the process known as decoherence. The information about the peaks and troughs of the Schrödinger wave, and the familiar interference pattern, disappears, leading towards a classical system. The process of decoherence takes time, on a scale of the order of a femtosecond. There is a problem regarding the physics of this, because while the mathematical description of the Schrödinger wave is time-reversible, decoherence has traditionally been treated as a time-irreversible dissipative process.
Recoherence:
However, it has in recent years become apparent that recoherence and the creation of a new coherent state is possible, with systems decohering to the point of being effectively classical, and then recohering. Classical information can itself produce recoherence. The Shor quantum error correction theorem shows that in a quantum computer with partially decoherent qubits, a measurement that injects information can bring the qubits back to coherence. Kauffman, in collaboration with Gabor Vattay, a physicist at Eötvös University, Budapest, and Samuli Niiranen, a computer scientist at Tampere University, worked out the concept of the ‘poised realm’ between quantum coherence and classical behaviour. It is in this poised region that Kauffman suggests non-random, but also non-deterministic, processes could arise. Between the open quantum system of the Schrödinger wave and classicality, there is an area that is neither algorithmic nor deterministic, and which is also acausal, and therefore unlike a classical computer. It is suggested that systems can hover between quantum and classical behaviour, this state being what Kauffman refers to as the ‘poised realm’. The non-deterministic processing in the ‘poised realm’ influences the otherwise deterministic processing of the classical sphere, which can in its turn alter the remaining quantum sphere. There is a two-way interaction between the quantum and classical regions. The fact that this process deriving from the classical region is non-random introduces a non-random element into any remaining decoherence in the quantum system. Further, classical parts of the system can recohere, and inject classical information into the quantum system, thus introducing a degree of control into the superpositions of the quanta. In particular, which amplitudes become the largest, and thus have the greatest probability of decohering, can be altered, thus altering the nature of particular classical outcomes. This leads Kauffman on to discuss the recent discoveries in quantum biology, where quantum coherence and entanglement have been demonstrated in living photosynthetic organisms. The suggestion is that biomolecules are included among the systems that can hover between the quantum and the classical regions, and further that this could apply not only to photosynthetic biomolecules, but also to biomolecules within neurons. Thus brain systems could be allowed to recohere, introducing further acausality into the system. Kauffman views consciousness as a participation in res potentia and its possibilities. The presence of consciousness in the res potentia is also suggested to explain the lack of an apparent spatial location for consciousness. Qualia are suggested to be related to quantum measurement, in which the possible becomes actual. However, Kauffman admits that all this still contains no real explanation of sensory experience. Kauffman acknowledges that he is looking for something similar to Penrose, but thinks it may be located in the poised realm rather than in Penrose’s objective reduction. Where the earlier scheme of Penrose still has the advantage is in the rounding-off proposition that his objective reduction gives access to consciousness at the level of the fundamental spacetime geometry. Presumably Kauffman assumes something of the kind.
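The error-correction point made at the start of this section can be made concrete. Below is a minimal sketch of the three-qubit bit-flip code, the simplest relative of the Shor code referred to above, written for the qiskit library; it shows redundancy plus a correcting operation restoring a qubit after a decoherence-like error. It is only an illustration of the principle, not of Kauffman’s poised-realm mechanism.

```python
# Three-qubit bit-flip code: encode, suffer an error, correct (requires qiskit).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)
qc.h(0)                  # put qubit 0 into the superposition (|0> + |1>)/sqrt(2)
qc.cx(0, 1)              # encode across three qubits:
qc.cx(0, 2)              # (|000> + |111>)/sqrt(2)

qc.x(1)                  # a bit-flip 'decoherence' event strikes qubit 1

qc.cx(0, 1)              # decode: qubits 1 and 2 now hold the error syndrome
qc.cx(0, 2)
qc.ccx(1, 2, 0)          # majority vote: flip qubit 0 back only if both fired

print(Statevector.from_instruction(qc))   # qubit 0 is back in (|0> + |1>)/sqrt(2)
```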
There is no particular reason why either quanta or classical structures or some mixture of them should be conscious, but we know that the quanta relate to fundamental properties such as charge and spin and to spacetime, and it seems reasonable on the same basis to look for consciousness as a fundamental property at this level. Roger Penrose is one of the very few thinkers to consider how consciousness could arise from first principles, rather than merely trying to shoehorn it into nineteenth-century physics, and his ideas appear to be a good starting point from which to try to understand consciousness as a fundamental.

Gödel’s theorem:
Penrose’s approach was a counter-attack on the functionalism of the late 20th century, which claimed that computers and robots could be conscious. He approached the question of consciousness from the direction of mathematics. The centrepiece of his argument is a discussion of Gödel’s theorem. Gödel demonstrated that any formal system or any significant system of axioms, such as elementary arithmetic, cannot be both consistent and complete. There will be statements that are undecidable: although they can be seen to be true, they are not provable in terms of the axioms.

Penrose’s controversial claim:
The Gödel theorem as such is not controversial in relation to modern logic and mathematics, but the argument that Penrose derived from it has proved to be highly controversial. Penrose claimed that the fact that human mathematicians can see the truth of a statement that is not demonstrated by the axioms means that the human mind contains some function that is not based on algorithms, and therefore could not be replicated by a computer. This is because the functioning of computers is based solely on algorithms (systems of calculations). Penrose therefore claimed that Gödel had demonstrated that human brains could do something that no computer was able to do.

Arguments against Penrose’s position:
Some critics of Penrose have suggested that while mathematicians could go beyond the axioms, they were in fact using a knowable algorithm present in their brains. Penrose contests this, arguing that all possible algorithms are defeated by the Gödel problem. In respect of arguments as to whether computers could be programmed to deal with Gödel propositions, Penrose accepts that a computer could be instructed as to the non-stopping property of Turing’s halting problem. Here, a proposition that goes beyond the original axioms of the system is put into a computation. However, this proposition is not part of the original formal system, but instead relies on the computer being fed with human insights, so as to break out of the difficulty. So apparently non-algorithmic insights are required to supplement the functioning of the computer in this instance.

An unknowable algorithm:
Penrose further discusses the suggestion of an unknowable algorithm that enables mathematicians to perceive the truth of statements. He argues that there is no escape from the knowability of algorithms. An unknowable algorithm would be one whose specification could never be achieved. But any algorithm is in principle knowable, because it depends on the natural numbers, which are knowable. Further, it is possible to specify natural numbers that are larger than any number needed to specify the algorithmic action of an organism, such as a human or a human brain.
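For reference, the incompleteness result Penrose relies on can be stated as follows (a standard formulation rather than Penrose’s own wording): for any consistent, effectively axiomatised formal system F strong enough for elementary arithmetic, there is a sentence G_F with

    \[F \nvdash G_F \qquad \text{and} \qquad F \nvdash \neg G_F\]

yet, assuming F is sound, G_F is true. It is this gap between what is provable within F and what can be seen to be true from outside F that Penrose’s argument exploits.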
Mathematical robots: Penrose says that with a mathematical robot, it would not be practical to encode all the possible insights of mathematicians. The robot would have to learn certain truths by studying the environment, which in its turn is assumed to be based on algorithms. But to be a creative mathematician, the robot will need a concept of unassailable truth, that is, a concept that some things are obviously true. This involves the mathematical robot having to perceive that a formal system ‘H’ implies the truth of its Gödel proposition, while at the same time perceiving that the Gödel proposition cannot be proved within the formal system ‘H’. It would perceive that the truth of the proposition follows from the soundness of the formal system, yet the fact that the proposition cannot be proved from the axioms also derives from the formal system. This would involve a contradiction for the robot, since it would have to believe something outside the formal system that encapsulated its beliefs.

4.2: Solomon Feferman

Amongst experts in this area who do not entirely reject Penrose's argument, Solomon Feferman (21.) has criticised Penrose's detailed argument, but is much closer to his position than to that of mainstream consciousness studies. Feferman makes common cause with Penrose in opposing the computational model of the mind, and in considering that human thought, and in particular mathematical thought, is not achieved by the mechanical application of algorithms, but rather by trial and error, insight and inspiration, in a process that machines will never share with humans. Feferman finds numerous flaws in Penrose's work, but at the end he informs his readers that Penrose's case would not be altered by putting right the logical flaws that Feferman has spent much time discovering. Feferman regards it as ridiculous to think that mathematics is performed by the mechanical application of algorithms. Trial-and-error reasoning, insight and inspiration, based on prior experience but not on general rules, are seen as the basis of mathematical success. A more mechanical approach is only appropriate after an initial proof has been arrived at; it can then be used for mechanical checking of something initially arrived at by trial and error and insight. He thus views mathematical thought as non-mechanical, and agrees with Penrose that understanding is essential to mathematical thought, and that it is just this area of mathematical thought that machines cannot share with us.

Penrose's search for a non-algorithmic feature: Penrose went on to ask what it was in the human brain that was not based on algorithms. Physical law is described by mathematics, so it is not easy to come up with a process that is not governed by algorithms. The only plausible candidate that Penrose could find was the collapse of the quantum wave function, where the choice of the position of a particle is random, and therefore not the product of an algorithm. However, he considered that the very randomness of the wave collapse disqualifies it as a useful basis for the mathematical judgement or understanding in which he was initially interested.

The wave function: In respect of consciousness, it is Penrose's attitude to the reality of quantum wave function collapse that is the important area. In particular, he disagrees with the traditional Copenhagen interpretation, which says that the theory is just an abstract calculational procedure, and that the quanta only achieve objective reality when a measurement has been made.
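The two dynamics at issue can be stated compactly, in the notation used earlier for the quantum arrow. The Schrödinger equation evolves the state deterministically and linearly,

    \[i\hbar\frac{\partial}{\partial t}\ket{\psi(t)}=H\ket{\psi(t)}\]

while measurement is the only point at which randomness enters, the state jumping to an outcome \ket{i} with Born-rule probability

    \[P(i)=|\langle i|\psi\rangle|^2\]

It is the relation between these two rules that the rival interpretations discussed below are trying to settle.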
Thus in the Copenhagen approach reality somehow arises from the unreal, or from abstraction, giving a dualist quality to the theory. The discussion of quantum theory repeatedly comes back to the theme that Penrose regards the quantum world and the uncollapsed wave function as having objective existence. In Penrose's view, the objective reality of the quantum world allows it to play a role in consciousness. Penrose emphasises that the evolution of the wave function portrayed by the Schrödinger equation is both deterministic and linear. This aspect of quantum theory is not random; randomness only emerges when the wave function collapses and forces the choice of a particular position or other property for a particle.

Penrose discusses the various positions physicists have taken on wave function collapse. Some would like everything to depend on the Schrödinger equation, but Penrose rejects this idea, because it is impossible to see how the mechanism of this equation could produce the transformation from the superposition of alternatives, as found in the quantum wave, to the random choice of a single alternative. He also discusses the suggestion that the probabilities of the quantum wave that emerges into macroscopic existence arise from uncertainties in the initial conditions, and that the system is analogous to chaos in macroscopic physics. This does not satisfy Penrose, who points out that chaos is based on non-linear developments, whereas the Schrödinger equation is linear.

4.3: Important distinction between Penrose and Wigner

Penrose also disagrees with Eugene Wigner's suggestion that it is consciousness that collapses the wave function, on the basis that consciousness is only manifest in special corners of spacetime. Penrose himself advances the exact opposite proposal: that the collapse of a special (objective) type of wave function produces consciousness. It is important to stress this difference between the Penrose and Wigner positions, as some commentators mix up Wigner's idea with Penrose's propositions on quantum consciousness, and then advance a refutation of Wigner wrongly believing it to be a refutation of Penrose. Penrose is also dismissive of the ‘many worlds’ version of quantum theory, which would have an endless splitting into different universes with, for instance, Schrödinger's cat alive in one universe and dead in another. Penrose objects to the lack of economy and the multitude of problems that might arise from attempting such a solution, and in addition argues that the theory does not explain why the splitting has to take place, nor why it is not possible to be conscious of superpositions.

Penrose instead argues for some new physics, and in particular an additional form of wave function collapse. If the superpositions described by the quantum wave extended into the macroscopic world, we would in fact see superpositions of large-scale objects. As this does not happen, it is argued that something that is part of objective reality must take place to produce the reality that we actually see. This requirement for new physics is often criticised as unjustified. However, these criticisms tend to ignore the fact that while quantum theory provides many accurate predictions, there has never been satisfactory agreement about its interpretation, nor has its conflict with relativity been resolved.

4.5: Consciousness, spacetime, the second law & gravity

Penrose sees consciousness as not only related to the quantum level but also to spacetime.
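His discussion leans on the standard splitting of spacetime curvature into a Ricci part, fixed locally by matter through Einstein's equations, and a Weyl part, the ‘free’ gravitational field. For reference, in four dimensions the Riemann tensor decomposes as

    \[R_{abcd}=C_{abcd}+g_{a[c}R_{d]b}-g_{b[c}R_{d]a}-\tfrac{1}{3}R\,g_{a[c}g_{d]b}\]

where C_{abcd} is the Weyl tensor, R_{ab} the Ricci tensor, R the Ricci scalar, and square brackets denote antisymmetrisation.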
He discusses the spacetime curvature described in general relativity, looking at the effect of singularities relative to two spacetime curvature tensors, Weyl and Ricci. Weyl represents the tidal effect of gravity, by which the part of a body nearest to the gravitational source falls fastest, creating a tidal distortion in the body. Ricci represents the inward pull on a sphere surrounding the gravitational source. In a black hole singularity, the tidal distortion of Weyl predominates over Ricci, and Weyl goes to infinity at the singularity. However, in the early universe expanding from the Big Bang, the tidal distortion is absent, so Weyl = 0, while it is the inward pressure of Ricci that predominates. So the early universe is seen to have had low entropy, with Weyl close to zero. Weyl is related to gravitational distortions, and Weyl close to zero indicates a lack of gravitational clumping, just as Weyl at infinity indicates the gravitational collapse into a black hole. Weyl close to zero and low gravitational clumping therefore indicate low entropy at the beginning of the universe. The fact that Weyl is constrained to zero is seen by Penrose as a function of quantum gravity. The whole theory is referred to as the Weyl curvature hypothesis.

The question that Penrose now asks is why initial spacetime singularities have this structure. He thinks that quantum theory has to help with the problem of the infinities at singularities. This would be a quantum theory of the structure of spacetime, or in other words a theory of quantum gravity. Penrose regards the problems of quantum theory in respect of the disjuncture between the Schrödinger equation's deterministic evolution and the randomness in wave function collapse as fundamental. He thinks in terms of a time-asymmetric quantum gravity, because the universe is time-asymmetric from low to high entropy. He argues that the conventional process of collapse of the wave function is time-asymmetric. He describes an experiment where light is emitted from a source and strikes a half-silvered mirror, with a resulting 50% probability that the light reaches a detector and 50% that it hits a darkened wall. This experiment cannot be time-reversed, because if the original emitter now detects an incoming photon, there is not a 50% probability that it was emitted by the wall, but instead a 100% probability that it was emitted by the other detecting/emitting device.

Penrose relates the loss of information that occurs in black holes to the quantum mechanical effects of the black hole radiation described by Stephen Hawking. This connects the Weyl curvature that is seen to apply in black holes with quantum wave collapse. As Weyl curvature is related to the second law of thermodynamics, this is taken to show that quantum wave reduction is related to the second law and to gravity. He proposes that in certain circumstances there could be an alternative form of wave function collapse, which he called objective reduction (OR). He suggests that as a result of the evolution of the Schrödinger wave, the superpositions of the quanta grow further apart. According to Penrose's interpretation of general relativity, each superposition of the quanta is conceived to have its own spacetime geometry. The separation of the superpositions, each with its own spacetime geometry, constitutes a form of blister in spacetime.
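Penrose attaches a definite timescale to the decay of such a blister. In his published formulation of OR, a superposition whose branches differ by gravitational self-energy E_G (the energy cost of the difference between the two mass distributions) collapses in a time of order

    \[\tau\approx\frac{\hbar}{E_G}\]

so that large, widely separated mass distributions collapse almost instantly, while microscopic superpositions can persist for very long times.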
However, once the blister or separation grows to more than the Planck length of 10⁻³⁵ metres, the separation begins to be affected by the gravitational force, the superposition becomes unstable, and it soon collapses under the pressure of its gravitational self-energy. As it does so, it chooses one of the possible spacetime geometries for the particle. This form of wave function collapse is proposed to exist in addition to the more conventional forms of collapse.

Evidence for non-computational spacetime: In support of this, he points out that when the physicists Geroch and Hartle studied quantum gravity, they ran up against a problem in deciding whether two spacetimes were the same. The problem was solvable in two dimensions, but intractable in the four dimensions of the spacetime in which the superposition of quantum particles needs to be modelled. It has been shown that there is no algorithm for solving this problem in four dimensions. Earlier, the mathematician A. Markov had shown that there was no algorithm for such a problem, and that if such an algorithm did exist, it could solve the Turing halting problem, for which it had already been shown that there is no algorithm. The possibly non-computable nature of the structure of four-dimensional spacetime is deemed to open up the possibility that wave function collapses could give access to this non-computable feature of fundamental spacetime.

Testing Penrose's objective reduction: A long-term experiment is underway to test Penrose's hypothesis of objective reduction. This experiment is being run by Dirk Bouwmeester at the University of California, Santa Barbara, and involves mirrors only ten micrometres across and weighing only a few trillionths of a kilogram, and the measurement of their deflection by a photon. The experiment is expected to take ten years to complete. This means that theories of consciousness based on objective reduction are likely to remain speculative for at least that length of time. However, the ability to run an experiment that could look to falsify objective reduction at least qualifies it as a scientific theory.

4.6: Significance for consciousness

The significance of this for the study of consciousness is that, in contrast to the conventional idea of wave function collapse, this form of collapse is suggested to be non-random, and instead driven by a non-computable function at the most fundamental level of spacetime. Penrose argues that, in contrast to the conventional form of collapse, there are indications that in this case there is a decision process that is neither random nor computationally/algorithmically based, but is more akin to the ‘understanding’ by which Penrose claims the human brain goes beyond what can be achieved by a computer. When Penrose first proposed his ideas on consciousness, he had no significant suggestion as to how this could be physically instantiated in the brain. Subsequently, Stuart Hameroff proposed a scheme by which Penrose's concept of objective reduction might be instantiated in neurons, giving rise to the theory of orchestrated objective reduction (Orch OR).

4.8: Single-cell organisms, neurons

Hameroff emphasises that single-cell organisms have no nervous system, but can perform complicated tasks which could only be achieved by means of some form of internal processing. He surmised that the same form of processing could exist in brain cells. Thus Hameroff viewed each neuron as a computer.
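The flavour of ‘neuron as computer’ can be conveyed with a toy version of the microtubule lattice described in the sections below: a cylinder of two-state tubulin dimers, each updated by the ‘dipole’ states of its neighbours, in other words a cellular automaton. The sketch is purely illustrative; the real 13-protofilament helical geometry is simplified to a plain cylindrical grid, and the majority rule is our own choice:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy microtubule: 100 rings of 13 tubulin dimers on a cylinder,
# each dimer in one of two conformational states (+1 / -1).
ROWS, COLS = 100, 13
lattice = rng.choice([-1, 1], size=(ROWS, COLS))

def step(lat):
    """Synchronous update: each dimer aligns with the net 'dipole field'
    of six neighbours (a crude stand-in for the hexagonal lattice)."""
    up, down = np.roll(lat, 1, axis=0), np.roll(lat, -1, axis=0)
    left, right = np.roll(lat, 1, axis=1), np.roll(lat, -1, axis=1)  # wraps round the cylinder
    diag1, diag2 = np.roll(up, 1, axis=1), np.roll(down, -1, axis=1)  # the helical skew
    field = up + down + left + right + diag1 + diag2
    return np.where(field != 0, np.sign(field), lat)

for _ in range(50):
    lattice = step(lattice)

# Ordered domains emerge from random initial conditions; patterns can
# propagate along the cylinder, which is the (classical) sense in which
# such a lattice is said to be suitable for information processing.
print("net polarisation:", abs(lattice.mean()))
```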
Within the neuron, a number of areas such as the ion channels and parts of the synapses were considered as possible sites for information processing and ultimately consciousness. However, another candidate, the cytoskeleton, came to be viewed as the component of the neuron best suited to information processing. The cytoskeleton comprises a protein scaffolding that provides structural support for all living cells, including neurons.

4.9: Microtubules

Microtubules are the major element of the cytoskeleton. As well as providing structural support for the cell, they are important for internal transport, including the transport of neurotransmitter vesicles to synapses in neurons. Hameroff suggested that microtubules were suitable for information processing, and in addition that they could support quantum coherence and the objective reduction looked for in Penrose's theory. Microtubules are composed of the protein tubulin, a dimer made up of an alpha and a beta monomer. A microtubule is formed of 13 filamentous tubulin chains, skewed so that the filaments run down its cylinder in a helical form; the lattice is hexagonal in that each tubulin dimer has six neighbours. Each turn of this helix is formed by thirteen dimers, creating a slightly skewed hexagonal lattice considered to be suitable for information processing. The intersections of the windings of the protofilaments are also the attachment sites for microtubule-associated proteins (MAPs) that help to bind the cytoskeleton together. The nature and activity of microtubules in neurons is markedly different from that in other body cells. Neuronal microtubules are denser and more stable than those in other cells. In neurons, microtubules are also more important for linking parts of the cell, such as taking synaptic vesicles from the Golgi apparatus in the cell body down to the axon terminal, and carrying protein and RNA to the dendritic spines.

4.10: Suitability for information processing

It is the geometry of this lattice based on tubulin sub-units that is considered to have a potential for information processing. Within the cylindrical lattice of the microtubule, each tubulin is in a hexagonal relationship, by virtue of being in contact with six neighbouring tubulins. Each dimer would be influenced by the polarisation of its six neighbours, giving rise to effective rules for the conformation of the tubulins, which in turn makes them suitable for the transmission of signals. Tubulin switches between two conformations, and it is suggested that tubulin conformational states could interact with neighbouring tubulins by means of dipole interactions, the dipole-coupled conformation of each tubulin being determined by the six surrounding tubulins. The geometry of such a lattice could be suitable for quantum error correction, which, along with the pumping of energy, might delay decoherence. This latter view is consistent with the recent studies of photosynthetic systems.

Hameroff describes protein conformation as a delicate balance between countervailing forces. Proteins are chains of amino acids that fold into three-dimensional conformations. Folding is driven by van der Waals forces between hydrophobic amino-acid groups. These groups can form hydrophobic pockets in some proteins.
These pockets are critical to the folding and regulation of proteins. Amino-acid side groups in these pockets interact by van der Waals forces.

4.11: Dendrites and consciousness

Hameroff related consciousness not to the axons of neurons, which allow forward communication with other neurons, but to the dendrites that receive inputs from other neurons. The cytoskeleton of the dendrites is distinct both from that found in cells outside the brain and from the cytoskeleton found in the axons of neurons. The microtubules in dendrites are shorter than those in axons and have mixed, as opposed to uniform, polarity. This appears a sub-optimal arrangement from a structural point of view, and it is suggested that, in conjunction with microtubule-associated proteins (MAPs), this arrangement may be optimal for information processing. These microtubule/MAP arrangements are connected to synaptic receptors on the dendrite membrane by a variety of calcium and sodium influxes, actin and other inputs. Alterations in the microtubule/MAP networks in the dendrites correlate with the rearrangement of dendritic synaptic receptors. Hameroff points out that changes in dendrites can lead to increased synaptic activity. The changes in dendrites involve the number and arrangement of receptors and the arrangement of dendritic spines and dendrite-to-dendrite connections. The main function of dendrites is seen to be the handling of signal input into the neuron, which may eventually result in an axon spike.

4.12: Dendritic spines, the dendritic cytoskeleton & information transmission

Neurons receive inputs through dendrites and dispatch signals through axons. Dendritic spines are the points at which signals from other neurons enter the dendrites. There is evidence for interactivity between dendritic spines and the dendritic cytoskeleton, although the connection between the membrane and the cytoskeleton has tended to be ignored. Actin filaments are concentrated in dendritic spines and near to axon terminals. These bind to scaffolding proteins and interact with signalling molecules. There are also interactions between ion channels and the cytoskeleton, especially actin filaments. Experimental work suggests that the cytoskeleton, and actin filaments in particular, can regulate ion channels that are part of basic neural processing. Recent studies indicate cross-linker proteins between actin filaments and microtubules, in addition to MAP2 and tau, which are known to bind to actin filaments. The dendritic spines can be modulated by actin, indicating that cytoskeletal proteins can influence synaptic plasticity. The spines receive glutamate inputs by means of NMDA and AMPA receptors. Actin holds signal transduction molecules close to the NMDA receptors, and this links these receptors to signal cascades within the neuron. Actin is also important for anchoring ion channels and congregating them in clusters. Actin filaments are known to control the excitability of some ion channels, such as the K+ channel, and actin also binds to the Na+ and Ca2+ ion channels. Scaffolding proteins, such as the post-synaptic density protein PSD95 and gephyrin, a GABA receptor scaffolding protein, secure the membrane receptors in the dendritic spine, and attach them to protein kinases and also to actin filaments that constitute part of the cytoskeleton. Gephyrin concentrates GABA receptors at post-synaptic sites, while actin filaments support the movements of gephyrin complexes.
Actin filaments are concentrated immediately below the neuronal membrane, but also penetrate into the rest of the cytoskeleton, and are heavily concentrated in dendritic spines. The actin filaments are shown to be involved in the reorganisation of dendritic spines following stimulation. They also hold in place receptors, ion channels and transduction molecules.

Where Hameroff moves on to discussing π electron clouds, he comes closer to the type of functional quantum coherence identified in photosynthetic systems. There has been much discussion over the last two decades as to how microtubules, or any other structure in neurons, could sustain quantum states for long enough for them to be relevant to neural processing. In the light of more recent studies of quantum coherence in photosynthetic systems, it looks most likely that any quantum coherence in microtubules would relate to π electron clouds. Mainstream research moved away from the idea of quantum processes in living organisms during the second half of the 20th century, although a few physicists such as Fröhlich kept the idea alive. Fröhlich proposed that biochemical energy could pump quantum coherent dipole states in geometrical arrays of non-polar, π electron delocalised clouds. Such electron clouds are now known to be isolated from water and ions, and present in cells within membranes, microtubules and organelles. These electron clouds can use London forces, involving interactions between instantaneously forming dipoles in different electron clouds, to govern the conformation of biomolecules, including proteins.

4.14: Aromatic rings

Life is based on carbon chemistry, and notably on carbon ring molecules such as benzene, which has delocalised electron clouds in which London forces are active. Carbon has four electrons in its outer shell, able to form four covalent bonds with other atoms. In some cases two of the electrons form a double bond with another atom, and the mobile electrons of such double bonds are known as π electrons. In benzene there are three double bonds between six carbon atoms, such that all six carbon atoms are involved in a bond. The ring structure into which these atoms are formed famously came to its discoverer, August Kekulé, in a dream of a snake biting its tail. There are varying configurations of the bonds and the π electrons, and the molecule is delocalised between these configurations. Benzene rings and the more complex indole rings are referred to as aromatic rings, and make up several of the amino-acid side groups that are attached to proteins.

Protein folding and π electron clouds: Proteins constitute the driving machinery of living systems, since it is they which open and close ion channels, bind molecules to enzymes and receptors, make alterations within cells, and govern the bending and sliding of muscle filaments. The organisation of protein is still poorly understood. Proteins are formed from 20 different amino acids, with an enormous number of possible sequences. Van der Waals forces are involved in the folding of proteins into different conformations, with a huge number of possible patterns of attraction and repulsion between the side groups of the protein. During the protein folding process there are non-local interactions between aromatic rings, which has been seen as suggestive of quantum mechanical sampling of possible foldings. Once formed, a protein structure can be stabilised by outwardly facing polar groups and by regulation from non-polar regions within.
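The London force invoked throughout this account has a standard quantitative form: an attraction between mutually induced dipoles that falls off as the sixth power of the separation. In London's approximate formula, for two groups with polarisabilities α₁, α₂ and ionisation energies I₁, I₂,

    \[V(r)\approx-\frac{3}{2}\,\frac{I_1 I_2}{I_1+I_2}\,\frac{\alpha_1\alpha_2}{r^6}\]

and it is this steep r⁻⁶ dependence that confines the decisive influence of these forces to the sub-nanometre separations found inside hydrophobic pockets.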
The coalescence of non-polar amino-acid side groups, such as two aromatic rings, can result in extended electron clouds constituting hydrophobic pockets. Protein conformation represents a delicate balance between forces such as chemical and ionic bonds, and as a result London forces driven by π electrons in hydrophobic pockets can tip the balance and thus govern the conformation of a protein.

4.15: Hydrophobic pockets and entanglement

The more solid parts of cells include protein structures, and these have within them hydrophobic areas containing hydrophobic, oil-like molecules with delocalised π electron clouds. In water, non-polar oily molecules such as benzene, which are hydrophobic, are pushed together, attract each other by London forces, and eventually aggregate into stable regions shielded from interaction with water. London forces can govern the configurations of protein in these regions. Such regions occur as pockets in proteins. In the repetitive structures of the tubulin dimer, π electron clouds may be separated by less than two nanometres, and this is seen as conducive to entanglement, electron tunnelling or exciton hopping between dimers, and to connections between the electron clouds extending down the length of the neuron. Tubulin has a dimer form, with an alpha and a beta monomer joined by a ‘hinge’. The tubulin has a large non-polar region in the beta monomer just below the ‘hinge’. Other, smaller non-polar regions with π electron rich indole rings are distributed throughout the tubulin, with distances of about two nanometres between them. The positioning of π electron clouds within about two nanometres of one another is suggested to allow the electrons to become entangled. This entanglement could spread through the microtubule and to other microtubules in the same dendrite.

Following recent research, it has become possible to compare the situation in microtubules to quantum coherence and entanglement in photosynthetic organisms, something unknown when researchers such as Tegmark argued against the possibility of functionally relevant quantum coherence in the brain. In photosynthetic systems, light-harvesting chromophore molecules use dipoles to provide 99% efficiency in energy transfer from the light-harvesting antennae to the reaction centre. The studies show that instead of quantum coherence being destroyed by the environment within the organism, a limited amount of noise in the environment acts to drive the system.

4.16: Tryptophan

Photosynthesis absorbs light in the red and infrared. These forms of light are not available to tryptophan in proteins, but tryptophan is able to use ultraviolet light emitted by the mitochondria. In fact tryptophan is sometimes referred to as chromophoric because of its ability to absorb UV light. It is becoming feasible to suggest that the same system that gives rise to quantum coherence in light-harvesting complexes could also give rise to it within the protein of neurons.

4.17: Penrose & Hameroff 2011

In their latest joint paper, published as a chapter in Consciousness and the Universe (2011) (22.), Penrose and Hameroff deal with aromatic rings and proposed hydrophobic channels within microtubules that could be crucial for a quantum theory of consciousness. They point to unexpected discoveries in biology. The most important change since Penrose and Hameroff first propounded their ideas in the 1980s and 1990s is the recent discoveries in biology relative to higher-temperature quantum activity.
In 2003, Ouyang & Awschalom showed that quantum spin transfer in phenyl rings (an aromatic ring molecule like those found in protein hydrophobic pockets) increases at higher temperatures. In 2005, Bernroider and Roy (23.) researched the possibility of quantum coherence in K+ neuronal ion channels. A more crucial discovery came in 2007, when it was demonstrated that quantum coherence was functional in efficiently transferring energy within photosynthetic organisms (Engel et al, 2007). Subsequent papers showed functional quantum coherence in multicellular plants, and also at room temperature. In 2011, papers by Gauger et al and by Luo and Lu dealt with higher-temperature coherence in bird navigation and in protein folding. Work by Anirban Bandyopadhyay with single animal microtubules showed eight resonance peaks correlated with helical pathways round the cylindrical microtubule lattice, allowing ‘lossless’ electrical conductance.

The authors also make a direct reply to one critic in particular (McKemmish et al, 2010). McKemmish claimed that switching between two states of the tubulin protein in the microtubules would involve conformational changes requiring GTP hydrolysis, which in turn would involve an impossible energy requirement. The authors, however, claim that electron cloud dipoles (van der Waals London forces) are sufficient to achieve switching without large conformational changes.

Where the Hameroff version of quantum consciousness remains ambitious relative to existing scientific knowledge is in the proposed link to the global gamma synchrony, the brain's most obvious correlate of consciousness. He proposes that coherence within dendrites connects via gap junctions to other neurons, and thus to the neuronal assemblies involved in the global gamma synchrony. He thus proposes the existence of quantum coherence over large areas of the brain, sometimes including multiple cortical areas and both hemispheres. Hameroff pointed to gap junctions as an alternative to synapses for connections between neurons. Neurons that are connected by gap junctions depolarise synchronously. Cortical inhibitory neurons are heavily studded with gap junctions, possibly connecting each cell to 20 to 50 others. The axons of these neurons form inhibitory GABA chemical synapses on the dendrites of other interneurons. Studies show that gap junctions mediate the gamma synchrony. On this basis, Hameroff suggested that cells connected by gap junctions may in fact constitute a cell assembly, with the added advantage of synchronous excitation. In this scheme, computations are suggested to persist for 25 ms, thus linking them to the 40 Hz gamma synchrony.

The attempt to extend a proposal for quantum features from single neurons out to neuronal assemblies of millions of neurons resurrects the nay-sayers' objections about time to decoherence. The photosynthetic coherent states that have been demonstrated persist only over femtosecond and picosecond timescales. Where the decoherence argument still stands up is in dealing with a system that needs to be sustained for 25 milliseconds. Further to this, Hameroff's gamma-wide theory involves difficult arguments about the ability of coherence to pass from neuron to neuron via the gap junctions. Danko Georgiev, a researcher at Kanazawa University, also criticises Hameroff's requirement for microtubules to be quantum coherent for 25 ms.
This has been generally regarded as an ambitious timescale for quantum coherence, and Georgiev objects on the grounds that enzymatic functions in proteins take place on a very much quicker 10–15 picosecond timescale. Georgiev wants to base his version of OR consciousness on this 10–15 picosecond timescale. Such a rapid form of objective reduction would also remove the necessity for the gel-sol cycle to screen microtubules from decoherence, as it does in the Hameroff version of objective reduction.

Axons, dendrites and synapses: Georgiev also criticises Hameroff's emphasis on conscious processing as being concentrated in the dendrites. He claims that Hameroff does not allow any consciousness in axons, and that this creates a problem in explaining the unreliable firing of synapses. Only 15-30% of axon spikes result in a synapse firing, and it is not clear what determines whether or not a synapse fires. He discusses the probabilistic nature of neurotransmitter release at the synapses, and the possible connection this has with quantum activity in the brain. The probability of the synapse firing in response to an electrical signal is estimated at only around 25%. Georgiev points out that an axon forms synapses with hundreds of other neurons, and that if the firing of all these synapses was random, the operation of the brain could prove chaotic. He suggests instead that the choice of which synapses will fire is connected to consciousness, and that consciousness acts within neurons. Each synapse has about 40 vesicles holding neurotransmitters, but only one vesicle fires at any one time. Again, the choice of vesicle seems to require some form of ordering. The structure of the grid in which the vesicles are held is claimed to be suitable to support vibrationally assisted quantum tunnelling.

Georgiev emphasises the onward influence of solitons (quanta propagating as solitary waves) from the microtubules to the presynaptic scaffold protein, from where, via quantum tunnelling, they are suggested to influence whether or not synapses fire in response to axon spikes. Jack et al (1981) suggested an activation barrier restricting the docking of vesicles and the release of neurotransmitters. The control of presynaptic proteins is suggested to overcome this barrier, and to regulate the vesicles that hold neurotransmitters in the axon terminals. This is suggested to be the process that decides whether a synapse will fire in response to an axon spike (a probability of only about 25%), and if it does, which of a choice of 40 or so vesicles will release its neurotransmitters. The system he describes involves the neuronal cytoskeleton, and particularly the pre- and post-synaptic scaffold proteins. Here, it is suggested that consciousness arises from the objective reduction of the wave function within these structures. The timescale of the system is argued to be defined by changes in tubulin conformations within the cytoskeleton and by the enzyme action in the scaffold proteins, which involves a timescale of 10–15 picoseconds, and thus implies a decoherence time on the same scale. Georgiev points out that it is much easier to suppose a decoherence time of this length in the brain than the 25 ms demanded by the Hameroff proposals.

4.19: Ion Channels and Consciousness – Gustav Bernroider

As an aside from microtubules, Gustav Bernroider at Salzburg University has proposed a quantum information system in the brain that is driven by the entangled ion states in the voltage-gated ion channels of the membranes of neurons.
These ion channels, situated in the neuron's membrane, are a crucial component of the conventional neuroscience description of axon spiking leading to neurotransmitter release at the synapses. The ion channels allow the influx and efflux of ions from the cell, driving the fluctuation of electrical potential along the axon, which in turn provides the necessary signal to the synapse. The work concentrates attention on the potassium (K+) channel, and in particular on the configuration of this channel when it is in the closed state. This channel is traditionally seen as having the function of resetting the membrane potential from a firing to a resting state, achieved by positively charged potassium (K+) ions flowing out of the neuron through the channel.

Recent progress in atomic-level spectroscopy of the membrane proteins that constitute the ion channels, and the accompanying molecular dynamics simulations, indicates that the organisation of the membrane proteins carries a logical coding potency, and also implies quantum entanglement within ion channels, and possibly also between different ion channels. An increasing number of studies show that the proteins surrounding membrane lipids are associated with the probabilistic nature of the gating of the ion channels (Doyle, 1998; Zhou, 2001; Kuyucak, 2001). This work draws particularly on the work of MacKinnon and his group, notably his crystallographic X-ray work. The study shows that ions are coordinated by carboxyl-based oxygen atoms or by water molecules. An ion channel can be in either a closed or an open state, and in the closed state there are two ions confined in the permeation path. This closed-gate arrangement is regarded as the essential feature with regard to their research work. The open gate presents very little resistance to the flow of potassium ions, but the closed gate is a stable ion-protein configuration. The ion channel serves two functions: selecting K+ ions as the ones that will be given access through the membrane, and then voltage-gating the flow of the permitted K+ ions.

In the authors' view, recent studies also require a change in views both of ion permeation and of the voltage-gating process. A charge transfer carried by amino acids is involved in the gating process. In the traditional model the charges were completely independent, whereas in the new model there is coupling with the lipids that lie next to the channel proteins. This view, which came originally from MacKinnon, is now supported by other more recent studies. The authors think that the new gating models are more likely to support computational activity than were the traditional models. Three potassium ions are involved in the ion channel's closed configuration, two of which are trapped in the permeation path of the protein when the channel gate is closed. The filter region of the ion channel is indicated by the recent studies to have five binding pockets, in the form of five sets of four carboxyl-related oxygen atoms. Each of the two trapped potassium ions is bound to eight of the oxygen atoms, i.e. to two of the five binding pockets. The authors' calculations predict that the trapped ions will oscillate many times before the channel re-opens, and the calculations also suggest an entangled state between the potassium ions and the binding oxygen atoms. This structure is seen as being delicately balanced and sensitive to small fluctuations in the external field.
This sensitivity is viewed as possibly being able to account for the observed variations in cortical responses. The theory also relates the results of recent studies of the potassium channel and its electrical properties to the requirements for quantum computing. There have been schemes for quantum computers involving ion traps, based on electrostatic interactions between ions held in microscopic traps, that bear a resemblance to Bernroider's interpretation of the possible quantum state of the K+ channel.

The authors deny that the rapid decoherence of quantum states in the brain calculated by Tegmark applies to their model. They argue that the ions are not freely moving in the ion filter area of the closed potassium channel, but are held in place by the surrounding electrical charges and the external field. The ions are particularly insulated within the carboxyl binding pockets, and it is suggested that decoherence could be avoided for the whole of the gating period of the channel, which is of the order of 10⁻¹³ seconds. The authors also raise the question of whether, given quantum coherence in the ion channel, it is possible for the channel states to be communicated to the rest of the cell membrane. This could include connections to other ion channels in the same membrane, possibly by means of quantum entanglement.

Bernroider's work might not be considered a fully-fledged separate quantum consciousness theory. In the early part of the decade, Bernroider seemed to associate himself with David Bohm's implicate order, but the lack of much specific neuroscience in Bohm's version makes it hard to make any definite connection between it and the type of detailed neuroscientific argument offered by Bernroider. In the light of the advances in biology, and the potential for coherence to be supported by aromatic molecules within microtubules, it might be feasible to suggest that quantum coherence in the ion channels works together with coherence in other parts of the neuron.

We have looked at the question of consciousness as a fundamental in terms of the quanta and spacetime, and we have looked at the possibility of quantum states in the brain. This brings us to the further question of how consciousness is related to larger-scale brain processing. Twentieth-century consciousness studies tended to be very insistent that consciousness was the product of neural systems as described in textbooks, which made no distinction between conscious and non-conscious processing. Any system that did what neurons did would produce consciousness; alternatively, consciousness was what it was like to have a brain, without any distinction between different parts of the brain. More recent research is indicative of consciousness arising in specific areas of the brain, and that on a transient basis. While this cannot be said to disprove the twentieth-century claims, it does at least suggest a more careful and discriminating approach to understanding what gives rise to consciousness.

The researchers Goodale and Milner (24.) point to a ventral stream in the brain that produces conscious visual perception, and a separate dorsal stream supporting non-conscious visuomotor orientations and movements. Goodale and Milner refer to a patient who had suffered brain damage as a result of an accident. She had difficulty in separating an object from its background, which is a crucial step in the process of visual perception.
She could manipulate images in her mind, but could not perceive them directly on the basis of signals from the external world, thus demonstrating that images generated by thought use a different process from direct perceptions of the external world. In modern neuroscience, perception is not viewed as a purely bottom-up process resulting from the analysis of patterns of light, but is seen as also requiring a top-down analysis based on what we already know about the world. The patient's visual problems seemed paradoxical. If a pencil was held out in front of her, she couldn't tell what it was, but she could reach out and position her hand correctly to grasp it. This contrast between what she could perceive and what she could do was apparent in a number of other instances. The researchers saw the patient's state as indicative of the existence of two partly independent visual systems in the brain, one producing conscious perception, and the other producing unconscious control of actions. They point to instances of patients with the opposite of their patient's problems, who are able to perceive objects, their size and location, but are unable to translate this into effective action. These patients may be able to accurately estimate the size of an object, but are unable to scale their grip in taking hold of it, despite having no underlying problem in their movement ability. These patients, with exactly the opposite problems from the first patient, are taken to suggest partly independent brain systems, one supporting perception and the other supporting vision-based action.

It has been found that even in more primitive organisms there can be separate systems for catching prey and for negotiating obstacles, with distinct input and output paths. This modularity is also found in the visual systems of mammals. The retina projects to a number of different regions in the brain. In humans and other mammals, the two most important pathways are to the lateral geniculate nucleus in the thalamus and to the superior colliculus in the midbrain. The path to the superior colliculus is the more ancient in evolutionary terms, and is already present in more primitive organisms. The pathway to the geniculate nucleus is more prominent in humans. The geniculate nucleus projects in turn to the primary visual cortex, or V1. The mechanisms that generate conscious visual representations are recent in evolutionary terms, and are seen as distinct from the visuomotor systems, which are the only system available to more primitive organisms. The perceptual system is not seen as being specifically linked to motor outputs. The perceptual representation may additionally be shaped by emotions and memories, as well as by the immediate light signals from the environment. By contrast, visuomotor activity may be largely bottom-up, drawing on the analysis of light signals, and is not accessible to conscious report. As such, it appears little different from the systems used by primitive organisms, while perception is a product of later evolution. Perceptual representations of the external world have meaning, and can be used for planning ahead, but they do not have a direct connection to the motor system. Earlier research by Ungerleider & Mishkin argued for two separate pathways within the cortex: the dorsal visual pathway leads to the posterior parietal region, while the ventral visual pathway leads to the inferior temporal region.
The authors relate these two basic streams to the concepts of vision-for-action and vision-for-perception respectively. Studies have shown that damage to the dorsal stream results in deficits in actions such as reaching, while the ability to distinguish perceived visual images remains intact. Other studies have shown that damage to the ventral stream creates difficulties with recognising objects, but does not impair vision-based actions such as grasping objects. An interesting study shows that the attempts of patients with dorsal damage to point to images actually improved if they delayed pointing until after the image had been removed. It was surmised that, with the image gone, patients started to rely on a memory based on the intact ventral stream.

Neurons in the primary visual cortex fire in response to the position and orientation of particular edges, colours or directions of movement. Beyond the primary visual cortex, neurons code for more complicated features, and in the inferior temporal cortex they can code for something as specific as faces or hands. However, while neurons may respond only to quite specific features, they can respond to these across a variety of viewpoints or lighting conditions. By contrast, neurons in the dorsal stream usually fire only when the subject responds to the visual signal, such as when they reach out to grasp an object. The ventral stream neurons appear to be moving the signals towards perception, whereas the dorsal stream neurons are moving signals towards producing action. The visuomotor areas of the parietal cortex are closely linked to the motor cortex. The authors suggest that the dorsal stream may also be responsible for shifts in attention, at least those made by the eyes. The ventral stream has no direct connections to the motor region, but has close connections with regions related to memory and emotions, providing information as to the function of objects. Even within the ventral/perception stream there are separate visual modules. Damage related to any one module can result in localised deficits, such as in the recognition of faces or of landmarks in the spatial environment. A cluster of visual areas in the inferior temporal lobe is responsible for most of these modules. This aspect of the research looks to point to the importance of a changing population of a small number of neurons, or even single neurons, in producing consciousness.

5.2: Blindsight

With the phenomenon of ‘blindsight’, patients have damage in the primary visual cortex, V1. If V1 is not functioning, the relevant visual cells in the inferior temporal cortex remain silent regardless of what is presented to the eye. However, Larry Weiskrantz (25.), an Oxford neuropsychologist, showed that patients with this condition could nonetheless move their gaze towards objects that they could not consciously see, and later studies showed that they could scale their grip and rotate their wrist to grasp such objects. These blindsight abilities are mainly visuomotor. It is suggested that in these cases signals from the eyes could go direct to the superior colliculus, a midbrain structure that predates the evolution of the cortex, and thence to the dorsal stream. It is suggested that while the ventral stream depends entirely on activity in V1, there may be an alternative route for the dorsal stream. Studies have shown the dorsal stream to be active even when V1 is inactive.
The patient discussed earlier is considered to be similar to blindsight patients, although in her case V1 is still active, which accounts for her still having conscious vision, albeit with impairments. In this research, visual perception provided by the ventral stream is seen as allowing us to plan and envisage the consequences of actions, and to file representations in long-term memory for future use. Motor control, on the other hand, requires immediate, accurate information using a metric that correlates with the external world. In perception the metric is relative rather than absolute. Thus in a picture or film the actual size of the image doesn't matter, and we judge scale by the relative size of objects such as people or buildings. The computation used for the absolute external accuracy of the motor system therefore needs to be different from the relative computation of visual perception.

Probably the most established correlate of consciousness is the global gamma synchrony (53. & 54.). Here it is important to repeat the distinction between correlation and identity. The fact that the gamma synchrony is correlated to consciousness, and that we understand how the synchrony arises, does not of itself mean that we have explained consciousness. However, it seems reasonable to think that exploration of the gamma synchrony and its role might lead us towards an understanding of consciousness.

The binding problem: One important problem in consciousness studies is the so-called binding problem: why processing in spatially separated parts of the brain and in different modalities is experienced as a single unified consciousness. On the one hand there is the unity of consciousness, and on the other the fact that the brain comprises a number of specialised although connected processing areas. In order for consciousness to become unified, it has to overcome the problem of being represented in different modalities. Further, it is generally agreed that there is no single central processing area in the brain. Only a small part of the brain's total processing is conscious, and much of the brain supports both conscious and unconscious processing. The studies of the gamma synchrony discussed below accord with the idea that it is this synchrony that creates the unity of consciousness.

The processing of neurons uses a fluctuation in electrical potential referred to as firing, spiking or action potentials. The electrical potentials in individual neurons reach an axon terminal, which releases a neurotransmitter to a receptor on a neighbouring neuron. Axon spiking oscillates at particular frequencies. The gamma frequency band of about 30-90 Hz, though mostly its lower half, is the most important so far as consciousness is concerned. Numbers of neuronal assemblies can become synchronised to oscillate at this frequency. Studies suggest that local gamma processing is unconscious, whereas large-scale, or global, activity, such as reciprocal signalling between spatially separate neural assemblies, is a correlate of consciousness. Research indicates a close correlation between global gamma synchronisation and conscious processing (26. Lucia Melloni et al, 2007). Activity related to conscious responses is more synchronised, but not more vigorous. In human subjects, conscious processing has been related to phase-locked gamma oscillations in widely distributed cortical areas, whereas unconscious processing produces only local gamma activity.
This is argued to be a so-called ‘small worlds’ system, in which local and long-range networks coexist. In the brain, it is suggested that the local networks are between neurons only a few hundred micrometres apart within layers of the cortex, while the long-distance networks run mainly through the white matter and link spatially separated areas of the cortex. It is these latter that can establish a global synchrony that is correlated to consciousness.

5.4: Experimentation

Melloni et al suggest that masking is a good way of studying consciousness, because it allows the same stimuli to be either conscious or unconscious. In a study run by the authors, words could be perceived in some trials but not in others. Local synchronisation was similar in both cases, but with consciously perceived words there was a burst of long-distance gamma synchrony between the occipital, parietal and frontal cortices. Subsequent to this burst, there was activity that could have indicated a transfer of information to working memory, while an increase in frontal theta oscillations may have indicated material being held in working memory. Words processed at the unconscious level could lead to an increase in power in the gamma frequency range, but only conscious stimuli produced increases in long-distance synchronisation. This, plus possibly the theta oscillation, looks to be a requirement for consciousness. In another study, long-distance synchronisation in the beta as well as the gamma range was observed. Recent studies suggest a nesting of different frequencies of theta and gamma oscillations when there is conscious processing. Long-distance synchronisation therefore looks to be a requirement for consciousness, with conscious stimuli associated with phase-locking of gamma oscillations across spatially distributed regions of the cortex, and with increases in synchrony without increases in neuronal firing rate.

A further study (27. Wolf Singer, 2010) discusses the rhythmic modulation of neuronal activity. During processing in the cortex, the brain increasingly selects for the relationships between objects. This involves interactions between different parts of the cortex. There is a requirement to cope with the ambiguity of the external world. The environment may contain objects with contours that overlap or are partly hidden, and these conflicting signals have to be resolved in the cortex. Further to this, some objects are encoded in different sensory modalities. Evidence suggests that this process involves not only individual neurons but also assemblies of neurons (28. Singer, 1999) (29. Tsunoda et al, 2001). The possible conjunctions in perception are too numerous to be dealt with by individual neurons; they are instead handled by assemblies of neurons, with each neuron relating to particular aspects of the unified consciousness. There appear to be two stages to this process. First, there is a signal to indicate that certain features are present. This operates on a ‘rate code’ basis, where a higher discharge frequency codes for a greater probability of a particular feature being present. The cortex is organised into neuronal columns extending vertically through the layers of the cortex. Synchronisation is related to connections linking these neuronal columns, which are thought to encode linked features. The inferior temporal cortex is regarded as the likely site for the production of visual objects, and object-related assemblies are associated with synchronisation.
Oscillations are driven by inhibitory neurons through both synapses and gap junctions (30. Kopell et al, 2000) (31. Whittington et al, 2001). Inhibitory inputs to pyramidal cells favour discharges at depolarising peaks, and this allows synchrony in firing. Locally synchronised oscillations can become phase-locked with others that are spatially separated. Synchronisation also allows better control of interactions between neurons. Excitatory inputs are effective if they arrive at the depolarising slope of an oscillation cycle, and ineffective at other times. This means that groups of neurons that oscillate in synchrony will be able to signal to one another, while groups that are out of synchrony will be ignored. This mechanism can function both within neural assemblies and between separated assemblies. The frequency and phase of oscillation can alter so as to influence signalling.

Facial recognition: In one study, neurons responding to eyes, noses and faces were shown to synchronise in recognising a face. If the individual components were scrambled into a non-face arrangement, synchrony did not arise; however, the scrambling did not alter the discharge rate, only the synchrony. Focusing attention on objects also caused increased synchrony in the beta and gamma bands. Here again, synchronisation does not necessarily relate to increased discharge rates. A further point of interest is the relationship between the global gamma synchrony and consciousness-related firing in single neurons. There is at present a tension between evidence relating subjective perception to the activity of large neuronal assemblies linked by the global gamma synchrony, and other studies relating it to the activity of much smaller numbers of neurons. Since the correlations with consciousness appear strong in both cases, it seems likely that consciousness will be found to involve both types of process. Studies relating to small numbers of neurons show that neurons are selective for particular images or categories of image, and that most neurons will be inactive in relation to most objects.

Face recognition and localised hot spots: A recent study by Rafael Malach (32.-35.) indicates that while perception involves widespread cortical processing, the emergence of an actual percept sometimes involves only a small number of localised hot spots, in which there is intense and persistent gamma activity. Malach used the area of face recognition to clarify this concept. Studies indicate the existence of so-called totem cells (a reference to totem poles with carved faces) that are able to recognise a number of faces. The hot spots are suggested to involve intense activity between several of these totem neurons, resulting in a sort of vote: if the same face is recognised by a majority of the neurons, the face is consciously recognised. The presumption seems to be that this would apply to most forms of perception, and not just face recognition. Malach's studies hint at important possibilities. Firstly, they raise the game for the individual neuron, from a simple switch to something probably involved in more sophisticated processing. If we accept a conscious processing role for the individual neuron, it puts a different light on the global gamma synchrony, as a possibly classical structure that simply coordinates the activity of a number of hot-spot neurons, in order to produce the unity of consciousness.
In Malach’s example, moreover, face recognition is not the end of the problem, because we do not usually perceive faces in isolation but as part of an environment. This suggests that the gamma synchrony could ensure that face recognition is coordinated with other hot-spot neurons that recognise clothing, furniture, a room or a surrounding landscape.

5.6:  ‘All or nothing’ neurons

A further study, also involving Malach, looked at the response of single neurons in the medial temporal lobe while subjects looked at pictures of familiar faces or landmarks. The response of the neurons studied correlated with conscious perceptions reported by the subjects of the study. Visual perception is processed by the ventral visual pathway, which goes from the primary visual cortex to the medial temporal lobe. Recent studies have shown that neurons in the medial temporal lobe fire selectively to images of individual people. In some trials, the duration of stimuli was right on the boundary of the time needed for conscious recognition of an object, so that it was possible to compare the behaviour of the neurons when an object was and was not recognised by the subject.

One finding of this study was the ‘all-or-nothing’ nature of the neuronal response. There was no spectrum involved. Either the neuron fired strongly, in correlation with the subject reporting recognition, or there was very little activity. The responses were not correlated with the duration of the stimuli, because the responses of the neurons lasted considerably longer than the stimuli. In one trial, a single neuron was shown to respond selectively to a picture of the subject’s brother, but not to other people well known to the subject. Particularly noted is the marked difference in the firing of the neuron when the subject’s brother was recognised and not recognised. The stimulus duration of 33 ms meant that half the time the image was recognised, and half the time not recognised. The neuron was nearly silent when the image was not recognised, but fired at nearly 50 Hz when there was conscious recognition, indicating an ‘all-or-nothing’ response from the neuron, correlated to the subjective report of recognition. The response exceeded the duration of the stimulus, and it was shown that the range of signal durations had little influence on the neuron’s response. In another test, a single neuron went from baseline to 10 spikes per second when the subject recognised a picture of the World Trade Centre, but showed little response to all other images that were presented. Again the neuron fired in an ‘all-or-nothing’ fashion, depending on whether there was conscious recognition. In five trials not resulting in conscious recognition, this neuron did not fire a single spike. In yet another trial, the firing of a single neuron jumped from 0.05 Hz to 50 Hz when the subject reported recognition of an individual.

The overall conclusion from these trials is that there is a significant relationship between the firing of neurons in the medial temporal region and the conscious perceptions of subjects. Further to this, the activity of the neurons lasted for substantially longer than the stimuli, and had only a marginal correlation with the stimuli. In particular, it is noted that at a stimulus duration where exactly the same image was recognised in some cases but not in others, there was an entirely different (all-or-nothing) response from the neuron, according to whether or not the subject consciously recognised the image (a toy version of this bimodal response is sketched below).
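The bimodal character of the response can be illustrated with a toy Poisson spiking model. Only the two rates (a 0.05 Hz baseline against roughly 50 Hz on recognition) come from the trials described above; the spike generation itself is an invented sketch, not the authors’ analysis.

```python
# Illustrative only: 'all-or-nothing' firing as a two-rate Poisson process.
import numpy as np

rng = np.random.default_rng(1)

def spike_count(recognised, duration_s=1.0):
    """Near-silent baseline unless the image is consciously recognised."""
    rate_hz = 50.0 if recognised else 0.05
    return rng.poisson(rate_hz * duration_s)

for recognised in (False, True):
    counts = [spike_count(recognised) for _ in range(5)]
    label = "recognised" if recognised else "not recognised"
    print(f"{label}: spike counts over five 1 s trials = {counts}")
```

The point of the sketch is that there are no intermediate outcomes: the counts cluster around zero or around fifty, with no spectrum in between.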
Other neurons near to the medial temporal neurons studied were shown to respond to different stimuli from those that activated the studied neurons. These findings are stated to agree with earlier single-cell studies, including studies involving the inferior temporal cortex and the superior temporal sulcus. This study serves to refute one of the popular arguments of twentieth century consciousness studies, to the effect that consciousness was ‘just what it was like to have a brain or neural processing’. The study demonstrates that neural processing is completely distinct for exactly the same signal: a stimulus with a duration that placed it on the boundary of being consciously recognised or not recognised produced almost no response if it was not consciously recognised, but a vigorous response if it was.

A study by Rafael Malach et al shows a correlation between consciousness and a jump from baseline to 50 Hz spiking in single neurons. Rather similar experiments show a correlation between global gamma synchrony and conscious experience. The problem here is to discover the link, if any, between these two correlations. The authors ask to what extent the spiking activity of individual neurons is related to the gamma local field potential. Earlier studies had shown a confusing variation in the degree of correlation between neuronal spiking and gamma activity, with some studies showing a strong correlation and others showing only a weak correlation. The authors here think that they have a resolution to the arguments that have arisen around this confusing data. Their study demonstrates that most of the variability in the data can be explained in terms of whether or not the activity of individual neurons is correlated to the activity of neighbouring neurons. A relationship with gamma synchrony is apparent where there is correlated activity in neighbouring neurons. The link between individual neurons that are associated with other active neurons and the gamma synchrony is apparent both when the brain is receiving sensory stimulation and when activity is more introspective. The gamma synchrony is considered to arise from the dendritic activity of a large number of neurons over an extensive area of the cortex. The relation between the activity of an individual neuron and gamma thus correlates with the extent to which the activity of the neuron is linked to the firing rate of its neighbouring neurons, establishing a relationship between gamma activity and a large number of individual neurons distributed over a region of the cortex.

In the study discussed here, subjects watched a film. During this, scanning showed a high correlation between the spiking of individual neurons and gamma activity that arose at the same time. But this did not happen in all cases. It was found that the main factor determining whether or not neuronal spiking related to gamma activity was the degree of correlation in spiking between neighbouring neurons. This study was based on recording the activity of several individual neurons. It was shown that the correlation between the spiking of an individual neuron and gamma synchrony could be predicted from the level of correlation with the activity of neighbouring neurons. When neurons were not correlated with their neighbours, gamma activity was at a low level (a toy demonstration of this relationship is sketched below).
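The statistical point, that coupling to gamma tracks how correlated a neuron is with its neighbours, can be mimicked with random signals. In this Python sketch a ‘gamma’ proxy is simply a shared slow drive; neurons either follow it or fire independently. Everything here is invented for illustration; no parameter comes from the study itself.

```python
# Illustrative only: neighbour correlation predicting spike/gamma coupling.
import numpy as np

rng = np.random.default_rng(2)
t = 5000
gamma = rng.standard_normal(t).cumsum()        # shared drive as a gamma proxy
gamma = (gamma - gamma.mean()) / gamma.std()

def cell(follows_shared):
    """A toy activity trace: shared drive plus noise, or pure noise."""
    noise = rng.standard_normal(t)
    return gamma + noise if follows_shared else noise

for follows in (True, False):
    cells = [cell(follows) for _ in range(10)]
    neighbour = np.mean([np.corrcoef(cells[i], cells[j])[0, 1]
                         for i in range(10) for j in range(i + 1, 10)])
    coupling = np.mean([np.corrcoef(c, gamma)[0, 1] for c in cells])
    print(f"neighbour corr {neighbour:.2f} -> gamma coupling {coupling:.2f}")
```

Cells that share the common drive show both high neighbour correlation and high coupling to the gamma proxy (about 0.5 and 0.7 here), while independent cells show neither, which is the shape of the reported finding.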
Consciousness and the sensory cortex:  Rafael Malach again argues that, at least in some cases, conscious perception does not require any form of ‘observer’ in the prefrontal area, but needs only activation in the sensory cortex. This claim is based on fMRI studies performed by Malach and colleagues. In one study, where subjects had their brains scanned while watching a film, there was a widespread activation of the sensory cortex in the rear of the brain, coinciding with relatively little activity in the frontal areas, where a significant degree of inhibition was apparent. Malach pointed out that the use of a film contrasted with the more normal brain scanning procedure, in which stationary objects are presented in isolation, without being embedded in a background and without other modalities such as sound. With subjects watching a film there was synchronisation across 30% of the cortical surface. This synchronisation extended beyond both visual and auditory sensory cortices into association and limbic areas. Emotional scenes in the film were correlated with widespread cortical activation. The study appears interesting in terms of the ability to synchronise more than one sensory modality plus the emotional areas of the brain.

It was further shown that the more engaging the film, the less activity there was in the frontal areas. Malach suggests that the role of the frontal areas is not to create perceptual consciousness but to deliberate on the significance of the sensory experience and to make it reportable. When introspective or deliberative activity is in process, it is accepted that both sensory and prefrontal areas may be activated. If we accept this approach, it becomes impossible to explain consciousness entirely in terms of the self, and the easy let-out of deconstructing the self and then claiming to have explained consciousness is closed off.

One study (Hasson et al, 2004) also scanned brain activation in subjects viewing a film. In general, the rear part of the brain, which is orientated towards the external environment, demonstrated widespread activation. In contrast, the front of the brain and some areas of the rear brain showed little activation. These less active areas are referred to as the ‘intrinsic system’ that deals with introspection, and the ‘first person’ or ‘self’ aspects of the mind. Reportability is presumed to arise in this part of the brain. This network shows a major reduction in activation at the times that perception is most absorbing. This observation is exactly the reverse of any notion that perception and reporting should work in tandem. Malach suggests that conceptually there could be an axis running from, firstly, introspective activity in the prefrontal, through, secondly, attention to external-world material such as a film, which can activate much of the sensory cortex while inhibiting prefrontal activity, to, thirdly and finally, experiences such as Zen meditation, which can be seen as pure perception without any residual awareness of the self. This type of pure perception/absence-of-self experience is reported as being associated with other forms of altered states of consciousness.
On a more everyday level, Malach suggests that when subjects are sufficiently absorbed by their sensory perceptions, they ‘lose themselves’ in the sense of not having any introspection about what they are perceiving. A typical example is an interesting film in which the viewer is absorbed by the drama and suspends any personal introspection or attempts to report what they are experiencing. It is also stressed here that consciousness arises of its own accord in the sensory cortex, without being dependent on the frontal cortices supposed to be related to the sense of self. This looks to undermine attempts to dismiss the problem of consciousness by conflating it with the self, and then deconstructing the self. On the other hand, it would probably be going too far in the other direction to say that consciousness does not arise at all in the frontal areas. In particular, some activity in the orbitofrontal cortex can be correlated to conscious perception rather than the strength of the signal, in much the same way that Malach has indicated occurs with visual perceptions.

Malach speculates that these experimental findings support the idea that subjective experience arises in the areas where sensory processing occurs, rather than having to be referred on to any type of higher-order read-out or some form of separate ‘self’. In this view, sensory perceptions are seen as arising in a group of neurons. Studies show that high neuronal firing rates over an extended duration and dense local connectivity of neurons are associated with consciousness. Malach thinks that studies of brain processing can differentiate conscious perception from the process of reporting the perceptions, and that conscious perception does not require some higher-order read-out system or some form of self, but can be handled by groups of neurons, within which individual neurons provide the perceptual read-out or subjective experience. He also argues that this supports the view that consciousness arises in each of a number of single neurons in a network, rather than having to refer to some higher structure. The perception arises when all the neurons in a particular network are informed about the state of the others in the network. Thus the perception is suggested to be both assembled by, and read out or subjectively experienced by, the same set of neurons. Each active neuron is suggested to be involved in both creating and experiencing the perception.

This view of conscious perception has some important implications for consciousness theory as a whole. In the first place, it makes it possible to consider looking for the process by which consciousness arises in individual neurons rather than brain-wide assemblies. This is more easily consistent with the recent findings that quantum coherence and possibly entanglement are functional in individual living cells. A further point is that the idea of consciousness in neurons or small high-density areas undermines the attempt by some consciousness theorists to conflate consciousness and self-consciousness, and then claim that a deconstruction of the self has explained consciousness.

Ambiguous images:  Similar evidence emerges from studies of the well-known Rubin ambiguous vase-face illusion. High fMRI activity correlates with the emergence of a face perception, although this emergence into consciousness does not involve any alteration in the external signal (Hasson et al, 2001).
This is another demonstration that brain activity can correlate to conscious perceptions rather than the nature of external signals. The authors consider that consciousness is correlated with non-linear increases in neural activity, here described as ‘neuronal explosions’ and occurring in sensory areas. Other fMRI studies have distinguished two types of fMRI reading. Sensory activity is marked by rapid but short bursts of neuronal firing, while rest activity in neurons involves slow, low-amplitude activity.

5.7:  Further selective response studies

Quiroga et al (36.) emphasise that studies over the last decade have shown that some neurons in the medial temporal lobe respond selectively to complex visual stimuli. The studies suggest a hierarchical organisation along the ventral visual pathway. Neurons in V1 code for basic visual features, whereas at the stage of the inferior temporal cortex neurons can code selectively for complex shapes or even faces. The inferior temporal cortex projects to the medial temporal cortex, where neurons are found to be selectively responsive to categories such as animals, faces and houses, as well as to the degree of novelty of images. Activity in the medial temporal lobe is thought to be linked to creating memories rather than actual recognition, a process that seems to be more closely linked to the inferior temporal lobe.

In a study by the authors, a hippocampal neuron fired in response to the image of a particular actor. Recording of the activity of a handful of neurons could be used to predict which of a number of images a subject was viewing, at an accuracy far above chance. About 40% of medial temporal lobe neurons were found to be selective in this way, although some could fire selectively in response to more than one image. However, when this was the case the images were often connected, such as two actresses in the same soap opera, or two famous towers in Europe. In fact it is estimated that selectively responding cells would respond to between 50 and 150 images. The authors are not trying to revive the idea of the ‘grandmother cell’, where one and only one neuron could respond to a particular image, for instance the image of the subject’s grandmother. Rather, the authors have estimated that out of one billion cells in the medial temporal lobe, two million could be responsive to specific percepts. These cells respond to percepts that are built up in the ventral pathway rather than to detailed information falling on the retina.

5.8:  Distinction between physical input and conscious percepts

Kreiman, Fried & Koch (37.) demonstrated that the same environmental input to the retina can give rise to two quite different conscious visual percepts. In this study, the responses of individual neurons were recorded. Two-thirds of the visually selective medial temporal lobe neurons recorded showed changes in activity that correlated with the shifts in what was subjectively perceived, rather than with the retinal input. Flash suppression is an experimental technique by which an image is sent to one eye and then a different image to the other eye. The newer image will suppress the first input. Neurons that select for the initial input and not the input to the second eye will be inactive when the first input is suppressed in this way, although the first image is still physically present on the retina. In visual illusions such as the Necker cube, the same retinal input can produce two different subjective perceptions.
There is a distinction here between what happens in the primary visual cortex and in the later visual areas. Activity in the primary areas correlates to the retinal input, rather than to any subjective perception. In this study, performed in the US in 2002, a neuron in a subject’s amygdala responded selectively to the image of President Clinton, while failing to respond to 49 other test images presented. In the case of Clinton’s image, the neuron’s firing rate jumped from a baseline of 2.8 spikes a second to 15.1 spikes per second. However, the neuron did not react when the initial image of Clinton was suppressed by an image for which the neuron was not selective. Another amygdala neuron increased its firing in response to some faces, but was inactive when an image it did not select for was flashed to the other eye. A neuron in the medial temporal lobe increased its firing in response to pictures of spatial layouts and not to other stimuli. Here again, the activity did not occur when a different image was flashed to the other eye. In all these cases, the physical input to the first eye was continuing, but was not getting into conscious perception. Out of 428 neurons studied in the medial temporal lobe, 44 responded selectively to particular categories and 32 to specific images. None of these neurons were active when the images or categories they were selective for were part of the input on the retina but were suppressed from subjective experience by a second image to the other eye. However, they could be active when both images were present but the image they selected for was dominant. In the experimental subjects, two out of three medial temporal lobe neurons changed their firing in line with subjective perceptions, but activity did not change if an input was present on the retina but not subjectively experienced because of retinal input to the other eye.

This study could be seen as laying to rest two favourite ideas of twentieth century consciousness studies. The first was the idea that consciousness was non-physical. This approach is not really coherent within a scientific paradigm in any case, but experiments now demonstrate a correlation between subjective perceptions and physical levels of activity in individual neurons. Similarly, the mind-brain identity concept seemed to propose that in some mysterious way consciousness was identical to the whole operation of the brain, whereas this and other experiments clearly relate consciousness to the activity of individual neurons and specific neuronal assemblies, albeit both the neurons and the assemblies involved are constantly changing.

5.9:  Object recognition

Kalanit Grill-Spector discusses studies with fMRI that have shown that activation in particular brain regions correlates with the recognition of objects and also of faces. Some regions are involved in both face and object recognition. Object recognition occurs in a number of regions in the occipital and temporal cortex collectively referred to as the lateral occipital complex (LOC). These regions respond more strongly when the subjects are viewing objects. The involvement of the LOC is thought to be subsequent to the early visual areas (V1-V4) and in the ventral stream, responding selectively to objects and shapes and showing less response to contrasts and positions. There are object-selective regions in the dorsal stream, but these do not correlate with object perception, and are suggested to be involved with guiding action towards objects.
The LOC is responsive to objects without reference to how the object is defined, i.e. it does not differentiate between a photograph and a silhouette. The LOC responds to shapes rather than surfaces, and it responds even if part of the shape is missing. It is suggested that a pooled response across a population of neurons allows a response to objects that does not vary according to the position of an object. This could be taken to indicate a role for individual neurons. Each neuron’s response varies according to the position of the object. It appears that for any given position in the visual field, each neuron’s response is greater for one object than for all other objects presented.

Apart from the LOC, other regions in the ventral stream have been shown to respond more to particular categories of object. One region showed more response to letters; several foci responded more to faces than objects, including the fusiform face area; while other areas responded more to buildings and places than to faces. Nancy Kanwisher et al have suggested that the ventral temporal cortex contains modules for the recognition of particular categories such as faces, places or parts of the body. However, it is suggested that the processing of faces is extended to a more sophisticated level, given the requirements for social interaction. There may be a distinction between processing to recognise individual faces and processing to recognise categories, such as horses or dogs as a category. However, it is suggested that very expert recognition of categories, such as an ornithologist recognising a bird, may involve a process similar to face recognition. Rafael Malach et al suggest that category recognition may respond more to peripheral input, while face and letter recognition depends more on central stimuli.

5.10:  Gamma, neurons & consciousness

This could suggest that the conscious response in a single cell is linked to, or dependent on, global gamma synchrony. However, it would appear unnecessary for the whole collection of neuronal assemblies to come into consciousness, but only for the synchrony to trigger consciousness in the individual neuron. This might make it possible to invert Hameroff’s proposal for quantum coherence in neurons driving consciousness in the gamma synchrony. The opposite case, of the synchrony triggering consciousness in single neurons, would be more compatible with the type of quantum coherence that is functional in photosynthetic organisms. This tends to look like pieces of a jigsaw puzzle, and unfortunately one that we may not get much help in assembling. We know that the global gamma synchrony correlates to consciousness. We know that a jump to 50 Hz spiking in individual neurons correlates to consciousness. We also now know that the spiking of individual neurons correlates to gamma if the spiking of the individuals correlates to that of their neighbours. From the point of view of recent findings relative to quantum coherence in organic matter, it has become most plausible to think in terms of consciousness arising within individual neurons, but the road there may involve feed-forward and feedback, as is often the case in brain processing. Processing in one neuron as a result of external signals may set off other neighbouring neurons, which ultimately broaden into a neuronal assembly oscillating as a local gamma synchrony. Longer-range signals to other neuronal assemblies would set up global gamma synchrony.
It might only be at that point that signals went back to individual neurons, triggering quantum coherent activity within the neuron. This might account for the 500 ms time lag for signals to come into consciousness (the Libet half second), while at the same time being compatible with the femto- and picosecond timescales of functional quantum activity in biological systems. This is very speculative, but perhaps it at least provides a starting point or rational framework for thinking about the consciousness problem.

In recent years, the most important neuroscientific research has arguably involved the role of emotion or emotional evaluation in the brain. This was a previously very neglected area, due to various biases and misconceptions in twentieth century neuroscience. Our attention here is focused on how the orbitofrontal cortex assigns reward/punisher values to representations projected from other cortices, and how the basal ganglia integrate these subjectively-based values with inputs from the other parts of the cortex and the limbic system.

5.12:  Subjective emotion, choice & a common neural currency

We are conscious of emotions, and they allow us to assess the reward values of actions. Without the emotion-based assessment of rewards, rational processing is not by itself adequate to deliver normal behaviour. While reasoning can be seen as working with or without consciousness, the subjective experience of emotion is closely entwined with the subjective assessment of current or future rewards. In fact, this ability to have a subjective preference or choice can be argued to be the real distinction between conscious and non-conscious systems, the difference between an automated one-to-one response and the conscious but unpredictable preference of one thing over another. Emotion, anticipation of rewards and enjoyment of the same are all here seen to be based on subjective experience, and the key importance of these factors for behaviour suggests that subjective emotion is a common neural currency underlying the determination of behaviour. It is hard to distinguish a purely algorithmic basis for this processing, since the weighting of two subjective experiences seems to require the injection of initially arbitrary weights, suggesting a non-computable or non-algorithmic element.

Rewards and punishers:  Modern descriptions of emotional processing in the brain revolve round a framework of ‘rewards’ and ‘punishers’, together referred to as ‘reinforcers’, with subjects working to gain rewards and to avoid punishers. Some stimuli are primary reinforcers, so called because they do not have to be learned. Other stimuli are initially neutral, but become secondary reinforcers because, through learning, they become associated with pleasant and unpleasant stimuli. Reward assessment is argued to be implemented in the orbitofrontal region of the prefrontal cortex and in the amygdala, a part of the subcortical limbic system. Emotions are thus viewed as states produced by reinforcers. The amygdala, the orbitofrontal and the cingulate cortex are seen as the brain areas most involved with emotions. Emotional states are usually initiated by reinforcing stimuli present in the external environment. The decoding of a stimulus in the orbitofrontal and amygdala is needed to determine which emotion will be felt.

5.13:  Neutral representations

In respect of emotions, the brain is envisaged as functioning in two stages.
To take the best known example, that of the visual system, input from the eyes is processed in the rear (occipital) area of the brain, and then progressively assembled into a conscious image arising in the inferior temporal cortex. At this stage, these representations are neutral in terms of reward value. Thus visual representations in the inferior temporal, or analogous touch representations in the somatosensory cortex, are shown to be neutral in terms of reward value until they have been projected to the amygdala and the orbitofrontal. The brain is organised first to process a stimulus to the object level, and only after that to access its reward value. Thus reward/punisher values are learned in respect of perceived objects produced by the later stages of processing, rather than the pixels and edges produced by the earlier stages of processing.

5.14:  Orbitofrontal cortex – subjective experience over strength of signal

The orbitofrontal region of the prefrontal cortex is seen as the most important region for determining the value of rewards or punishers (55.). Objects are first represented in the visual, somatosensory and other areas of the cortex, without having any aspect of reward value. This only arises in the orbitofrontal and the amygdala. Studies show that orbitofrontal activity correlates to the subjective pleasantness of sensory inputs, rather than the actual strength of the signal. The orbitofrontal projects to the basal ganglia, which appear to integrate a variety of cortical and limbic inputs in order to drive behaviour. Thus subjective emotional assessment, occurring mainly in the orbitofrontal, would appear to play an important part in determining behaviour. The orbitofrontal cortex receives input from the visual, auditory, somatosensory and other association cortex, allowing it to sample the entire sensory range and to integrate this into an assessment of reward values. In the orbitofrontal, some neurons are specialised in dealing with primary reinforcers such as pain, while others are specialised in dealing with secondary reinforcers. Orbitofrontal neurons can reflect relative preferences for different stimuli. The subjective experience of one signal can also be altered by another from a different modality: the top-down modulatory impact of words can influence the subjective impression of an odour, and colour is thought to influence olfactory judgement. There is seen to be a triangular system involving association cortex, amygdala and orbitofrontal.

5.15:  Experimentation: Correlation with subjective experience

An important study looked at the activation produced in the somatosensory cortex by the touch of a piece of velvet and the touch of a piece of wood, and at the activation of the orbitofrontal produced by the same touches (38-40.). This trial compared the pressure of a piece of wood with the perceived pleasant pressure of a piece of velvet. It was demonstrated that the pressure of the wood produced a higher level of activity in the somatosensory cortex than the pressure of the velvet. However, in the orbitofrontal the same pressure from velvet produced a higher level of activation, with the difference between velvet and wood being correlated to the different subjective appreciation of the two pressures.
The less intense but reward-value-positive stimulus produced more activation in the orbitofrontal than a more intense but reward-value-neutral stimulus. Similarly, a reward-value-negative stimulus also produced more activation in the orbitofrontal than a neutral stimulus that was registered as stronger by the somatosensory cortex. Researchers are clear in their conclusion that the orbitofrontal registers emotionally positive or negative aspects of an input, rather than other aspects such as intensity of signal. Thus the subjective pleasantness of the velvet touch relates directly to the activation level of the orbitofrontal cortex, demonstrating a connection between subjective appreciation and the core mechanisms for decision taking and behaviour. The orbitofrontal, and to a lesser extent the anterior cingulate cortex, are seen here as being adaptive in registering the emotional or reward-value aspects of the initially reward-neutral somatosensory stimulation.

Studies suggest that the orbitofrontal deals with a variety of types of reward value. It has been suggested that the brain has a common neural currency for comparing very different reward values. Apart from the velvet/wood study, other studies show that the level of orbitofrontal activity correlates to the subjective pleasure of the sensation, rather than the strength of the signal being received. Activation in response to taste is seen to be in proportion to the subjective pleasantness of the taste, and in responding to faces, activity increases in line with the subjective attractiveness of the face. With taste, the orbitofrontal can represent the reward value of a particular taste, and this activation relates to subjective pleasantness. In humans the subjectively reported pleasantness of food is represented in the orbitofrontal. Studies of taste in particular are seen as evidence that aspects of emotion are represented in the orbitofrontal. With faces, the activation of the orbitofrontal has been found to correlate to the subjective attractiveness of a face. This subjective ability enables flexibility in behaviour. If there is a choice of carrots or apples, carrots might be preferred, and the top preference signal in the brain would correlate to carrots. However, if the range of choice was subsequently expanded to include bananas, the top preference signal could switch to bananas. This reaction looks to require some form of preferred qualia, referring to a previous subjective experience of bananas.

5.16:  Different types of reward – money v. sex

One study attempted to compare the brain’s processing of monetary rewards with its processing of rewards in terms of erotic images. Monetary rewards were shown to use the anterior lateral region of the orbitofrontal cortex, while erotic images activated the posterior part of the lateral orbitofrontal cortex and also the medial orbitofrontal cortex. Brain activity in these orbitofrontal regions increased with the intensity of reward, but only for types of reward in which those areas were specialised. By contrast, activity increased for both monetary and erotic rewards in the ventral striatum, the anterior cingulate cortex, the anterior insula and the midbrain. Other studies using rewards such as pleasant tastes have suggested a similar distinction between the posterior and anterior regions of the orbitofrontal.
The bilateral amygdala was the only subcortical area to be activated in reward assessment, and it was only activated by primary rewards such as erotic images and not by abstract rewards such as money. This area is more strongly connected to the posterior and medial orbitofrontal than to the anterior orbitofrontal. One distinction that is argued to emerge is between immediate reward and the more abstract quality of a monetary reward that can only be enjoyed over time. The authors argue that studies suggest it is not the actual delay in benefiting from the monetary reward, but its abstract nature, that leads to it being processed in a different area. It was also found that patients with damage to the anterior orbitofrontal have difficulty with assessing indirect consequences as distinct from immediate consequences.

5.17:  Adaptive advantage of flexibility and response to change

The adaptive advantage of the emotional system is that responses to situations do not have to be pre-specified by the genes, but can be learned from experience. If evolution had attempted to specify fixed responses for every possible stimulus, there would have been an unmanageable explosion of programmes. The reinforcer defines a particular goal, but does not specify any particular action. The orbitofrontal is also suggested to be involved in amending responses to stimuli that used to be associated with rewards, but are no longer linked to these. Three groups of neurons in the orbitofrontal provide computation as to whether reinforcements formerly associated with particular stimuli are still being obtained. These neurons are involved in altering behavioural responses. The orbitofrontal computes mismatches between stimuli that are expected and stimuli that are obtained, and changes reward representations in accordance with this. This rapid reversal of response carries through from the orbitofrontal to the basal ganglia. Damage to the orbitofrontal impairs the ability to respond to such changes, and is associated with irresponsible and impulsive behaviour, and with difficulty in learning which stimuli are rewarding and which are not. Patients who have suffered damage to the orbitofrontal have difficulty in establishing new and more appropriate preferences, and in daily life they tend to manifest socially inappropriate behaviour. In particular, there is greater difficulty in dealing with indirect or longer-term consequences of actions than with direct and immediate consequences.

5.18:  Visceral responses and emotions

The orbitofrontal and amygdala act on the autonomic and the endocrine systems when stimuli appear to have significance in terms of emotion or danger. Visceral responses resulting from this signalling are fed back to the brain. Studies suggest that visceral responses are integrated into goal-directed behaviour via the ventromedial prefrontal cortex (VMPFC). The insula and the orbitofrontal are also thought likely to map visceral responses, with feedback from the viscera influencing reward assessment via levels of comfort or discomfort. There is considerable support for the idea that the body is the basis of all emotion. However, this looks difficult to square with the actual structure and nature of brain processing.
While the bodily responses can certainly be seen to play a role, it is hard to see why all visual and auditory inputs, and the results of cognitive processing, should have to wait on the laborious responses of the viscera, especially as it is the reward assessment areas of the brain that signal the viscera in the first place. If bodily emotion were the whole story, the orbitofrontal and amygdala would seem to be in a state of suspended activity between sending a signal to the autonomic system and getting signals back from the viscera. Conventional thinking may here have been biased by the emphasis in experimentation on fear in animal subjects, where bodily reactions are pronounced, rather than the more evaluative emotional activity emphasised above. In the specific case of rapid phobic reactions in the amygdala, the idea fails completely. The more plausible view is that visceral responses are one aspect of many responses that are integrated in the orbitofrontal. It seems more likely that, in line with most brain processes, there is a complex feed-forward and feedback between all parts of the system, including the viscera and the orbitofrontal. The body-only theory seems to depend on a simple feed-forward mechanism, which is alien to how brain processing works.

5.19:  Dorsolateral prefrontal

The orbitofrontal projects not only to the basal ganglia but also to the dorsolateral prefrontal, which is responsible for executive functions, planning many steps ahead to obtain rewards, and such decisions as deferring a short-term reward in favour of a higher-value but longer-term reward. Where dorsolateral activity reflects preferences, it is found that the orbitofrontal has reflected them first, and these preferences have been projected from the orbitofrontal to the dorsolateral, where they can be utilised for planning or for deciding whether or not to defer short-term rewards. In these instances, the reward-assessing functions of the orbitofrontal and the integrative role of the basal ganglia play an important role. It has been argued that ‘moral-based’ knowledge generated by rewards and punishers cannot arise without the orbitofrontal. Ethically based rewards for good or appropriate behaviour that are decided on by the dorsolateral are seen to be influenced by processing in the orbitofrontal.

In the basal ganglia, the emotional evaluation of the orbitofrontal is combined with inputs from other cortices and the limbic areas (41.). The basal ganglia can be viewed as a sort of mixer tap for the wide spread of inputs from the cortex and limbic system, and as such select or gate for material processed by the cortex, including the orbitofrontal. The basal ganglia comprise a region of the brain with strong projections from most parts of the cortex and also the limbic system. Modern brain theory views the basal ganglia as important for the choice of behaviours and movements, as regards both activation and inhibition of these. The striatum, which includes the nucleus accumbens, is the largest component of the basal ganglia, receiving projections from much of the cortex, and also receiving dopamine projections from the midbrain. The basal ganglia are sensitive to the reward characteristics of the environment, and operate within a reward-driven system based on dopamine. The region is seen as integrating sensory input, generating motivation and also releasing motor output. Incoming stimuli from the environment to the brain are always excitatory.
The thalamus receives the incoming signals, and sends them forward to the cortex for processing. This is also primarily excitatory, as are further projections to the frontal cortex. The basal ganglia are seen as important for inhibition. Cortical-subcortical-cortical loops are widespread in the brain. In these loops, the cortical inputs are always excitatory, with the subcortical for the most part inhibitory. The subcortical areas are seen to project back to the cortex, and to modulate the cortical inputs. They are indicated to have a role in deciding what information is returned to the cortex. Each loop originates in a particular area of the cortex, such as the orbitofrontal and the anterior cingulate. Inhibitory output going back via the thalamus assists the focusing of attention and action. The basal ganglia gate or select for elements of the processed information used by the cortex. Novel problem solving requires interaction between the prefrontal cortex, other parts of the cortex and the basal ganglia.

Striosomes, matrisomes & TAN cells:  Striosomes are the area of the basal ganglia involved in modulating emotional arousal. The basal ganglia include the striatum, which contains neurochemically specialised sections called striosomes that receive inputs mainly from limbic system structures, such as the amygdala, and project to dopamine-containing neurons in the substantia nigra. This is seen as giving them a role in dealing with the input of emotional arousal into the basal ganglia (Graybiel, 1995). Certain regions in the cortex, notably areas involved with emotion such as the orbitofrontal cortex, the paralimbic regions and the amygdala, all project to the striosomes (Eblen & Graybiel, 1995). This is seen as constituting a limbic-basal ganglia circuit. The role of the striatum may be to balance out a variety of sometimes conflicting inputs from different parts of the prefrontal and the limbic areas, and to switch behaviour in response to these inputs.

5.21:  TANs (tonically active neurons)

In the mid 1990s, researchers discovered highly specialised neurons referred to as tonically active neurons (TANs), situated where matrisomes and striosomes meet, and therefore well placed to integrate emotional and rational input. Cortical areas involved with anticipation and planning project to areas in the striatum known as matrisomes. These are often found in close proximity to the striosomes. This is taken to suggest a link between the planning-related matrisomes and the limbic-related striosomes. TANs can be seen as a form of mixer tap for combining planning and emotional assessment inputs in the basal ganglia. TANs respond strongly to reward-linked stimuli, and they also respond when a previously neutral stimulus becomes associated with a reward. TANs are thought to be involved in the development of habits, with particular environmental cues having emotional meaning and producing particular behaviour. These cells have a distinct pattern of firing when rewards are delivered during behavioural conditioning (Aosaki et al, 1995). It is suggested that changes in TAN activity could be a way of redirecting information flow in the striatum.

5.22:  Dopamine

The neurotransmitter dopamine is involved in delivering the rewards that the orbitofrontal and other areas act to predict (a toy sketch of such reward prediction follows below).
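The textbook account of this prediction, commonly associated with midbrain dopamine signalling, is a reward-prediction error: dopamine activity reflects the difference between the reward obtained and the reward predicted. The following minimal delta-rule sketch in Python is illustrative only; the learning rate and reward values are invented and are not taken from the work cited here.

```python
# Illustrative only: a dopamine-like reward-prediction-error signal.
def update_value(value, reward, learning_rate=0.1):
    """Return the updated prediction and the prediction error
    (the 'dopamine-like' teaching signal)."""
    prediction_error = reward - value
    return value + learning_rate * prediction_error, prediction_error

value = 0.0
for trial in range(5):
    value, delta = update_value(value, reward=1.0)
    print(f"trial {trial}: prediction {value:.2f}, error signal {delta:.2f}")
```

As the prediction approaches the actual reward, the error signal shrinks, which is the standard picture of why fully predicted rewards evoke little dopamine response.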
The largest concentrations of dopamine in the brain are found in the prefrontal cortex and the basal ganglia. The dopamine system is based in the ventral tegmental area of the midbrain. The dopamine-producing neurons in the midbrain appear to be influenced by the size and probability of rewards, presumably based on information from areas such as the orbitofrontal and the amygdala. Dopamine projections are mainly to the nucleus accumbens, the amygdala and the frontal cortex. This is the brain’s reward circuitry. The ventral striatum is highly active in anticipation of reward, and remains active during the reward. It is believed to modulate motivation, attention and cognition. Impairment of this area creates a wide range of problems. Within the striatum, learning is influenced by dopamine acting on medium spiny neurons, reducing inhibition and releasing or increasing output of activity. By contrast, reduced levels of dopamine lead to increased inhibition and reduced activity.

Reward/pleasure centre – nucleus accumbens:  The nucleus accumbens is part of the ventral striatum and constitutes the reward/pleasure centre of the brain. The orbitofrontal and anterior cingulate both project to the nucleus accumbens. Dopamine-based activity in the nucleus accumbens is related to seeking reward and avoiding pain. Addictions are found to be related to a lack of natural activity in this area, with drugs of addiction working to enhance otherwise depressed activity. It has further been suggested that the use of neuromodulators by-passes the need always to rely on cognitive computation in the cortex. From the point of view of consciousness studies, it is apparent that these dopamine rewards are registered in subjective consciousness, so, as with the orbitofrontal, there is again a weighting of different subjective impulses. The orbitofrontal would look to base its predictions on the previous subjective experience of the delivery of dopamine rewards.

5.23:  Free will

The nature of emotional evaluation in the brain discussed above leads on to the vexed question of free will. Conventional consciousness studies have been almost unanimous in rejecting the concept of freewill in favour of human behaviour being completely deterministic. We do not have to look very far to find the explanation for this counter-intuitive notion. Conventional thinking about consciousness is based on classical/macroscopic physics, which is entirely deterministic and has no place for anything outside a direct sequence of cause and effect. The high degree of confidence expressed in deterministic explanations rests on this assumption. Once we begin to think that classical physics might not have the full explanation for consciousness, the assurance of determinism looks to be shaky. This in itself may partly explain the furious resistance to the involvement of non-classical physics in brain processing. Recent studies of the processing of emotion in the brain do not accord well with the deterministic thesis, albeit not many have yet come to terms with this. The workings of the emotional brain provide something that can only be experienced in terms of subjective scenarios, not apparently reducible to specific weightings or to algorithms that give precise and deterministic predictions. Another way of approaching the problem of deciding between two alternative courses of action is to look at what happens when we make a list of points in favour of both courses of action.
While this may somewhat clarify the mind, we are still likely to find that something is missing, something which will ultimately need to be bridged by an emotional evaluation. Whether such subjectively based decisions or influences can be described as ‘free’ is hard to say. They certainly look to lie outside the classical-based neuroscience which is the usual diet provided for us, but whether they represent ‘free’ agency is another matter. As something not derived from algorithms, they can however be seen as deriving from the same fundamental level of the universe that can, over some extended period of time, give a pattern to the apparently randomly arising positions of particles. It is perhaps beyond us at the moment to say what it is that takes this sort of decision, but what such decisions have in common with emotional evaluation is that they cannot be described by an algorithm. In the discussion below, we look at various studies that disagree with the conventionally deterministic working of the brain.

The psychiatrist Jeffrey Schwartz (42-45.) argues that the exercise of the conscious will can overcome or reduce the problems of obsessive-compulsive disorder (OCD). This disorder leads to repetitive behaviour, for example repeated unnecessary hand-washing. The patient is aware that their behaviour is unnecessary, but has a compulsive urge to persist with it. This behaviour is related to changes in brain function in the orbital frontal cortex (OFC), anterior cingulate gyrus and basal ganglia, all areas related to emotional processing (Schwartz 1997 a&b), (Graybiel et al, 1994), (Saint-Cyr et al, 1995), (Zald & Kim, 1996a&b). The patients are able to give clear subjective accounts of their experience that can be related to cerebral changes as revealed by scanning. Thus the sight of a dirty glove can cause increased activity in the orbital frontal and anterior cingulate gyrus. There is also increased activity in the caudate nucleus, a part of the basal ganglia that modulates the orbital frontal and the anterior cingulate (Schwartz, 1997a, 1998a). The basal ganglia have been shown to be implicated in OCD (Rauchen, Whalen et al, 1998).

As noted above, the striatum, which is part of the basal ganglia, contains neurochemically specialised sections called striosomes that receive inputs mainly from limbic system structures, such as the amygdala, and project to dopamine-producing neurons in the substantia nigra. The prefrontal cortex, which is seen as the prime area for assessing environmental inputs, also projects to this area. The densest projections come from the orbitofrontal and the anterior cingulate (Eblen & Graybiel, 1995). At the same time, brain regions involved in anticipation, reasoning and planning project to areas in the striatum known as matrisomes. These are often found in close proximity to the striosomes. This is taken to suggest a link between the prefrontal-related matrisomes and the limbic-system-related striosomes. Highly specialised cells located at striosome/matrisome borders, known as tonically active neurons (TANs), look to be a kind of mixer tap balancing the inputs from cognition, emotion and most parts of the cortex. These cells have a distinct pattern of firing when rewards are delivered during behavioural conditioning (Aosaki et al, 1995). It is suggested that changes in TAN activity could be a way of redirecting information flow in the striatum. TANs could produce new striatal activity in response to new information.
Schwartz states that studies of patients who learn how to alter their behaviour by the apparent exercise of their conscious will showed significant changes in the activity of the relevant brain circuits. Anticipation and planning by the patient can be used to overcome the compulsions experienced in OCD. Patients are able to learn to change their behaviour while the OCD compulsions still occur. Successful patients are active and purposeful, not passive, during the process of their therapy. The actual feel of the OCD compulsions does not usually change during the early stages of treatment. The underlying brain mechanisms have probably not changed, but the response of the patient has begun to change. The patient is learning to control the response to the compulsive experience. To make a change requires mental effort by the patient at every stage. New patterns of brain activity have to be created for the patients to be aware that they are getting a faulty message when they get an urge to carry out some compulsive behaviour. At the same time, the patients have to refocus on some more useful behaviour. If this is done regularly, it is suggested that the gating in the basal ganglia will be altered. It is suggested that the response contingencies of the TANs alter as a result of the patient frequently resisting the compulsive urges. This presumably reflects projections from parts of the cortex to the TAN cells within the basal ganglia.

What are we to make of this study, in the light of what recent neuroscience is telling us about the emotional brain? In the first place, the disorder is related to problems in the emotional areas of the brain, in the form of the orbitofrontal and the anterior cingulate. These are at least in part the areas that push the patient towards hand-washing or some other repetitive behaviour. However, the orbitofrontal at any rate is capable both of changing its assessment and of evaluating the choice between conflicting rewards. As a purely speculative suggestion, rational-based inputs, most likely from the dorsolateral, could change the emotional weighting of particular actions, so that the subjective feel-good factor of another hand-washing might be balanced out by the feel-good factor of overcoming the compulsion. Projections from the orbital frontal and other regions could in turn shift the balance of inputs to the TAN cells, at the rational/emotional juncture between striosomes and matrisomes.

Studies by the psychologist Carol Dweck suggest that subjects who believe they can influence their academic performance (referred to as incremental theorists) perform better than students who are convinced that their performance is preordained (entity theorists) (Dweck & Molden, 2005; Molden & Dweck). Entity theorists tend to withdraw effort and avoid tasks once they have failed. Incremental theorists attempt an improved approach to a problem task. In a study (Blackwell et al, 2007), in which entity and incremental theorists started a high school maths course with the same standards, the incremental students soon pulled ahead, with the gap continuing to widen over the duration of the course. This distinction was related to the incrementalists’ willingness to renew efforts after a setback. A further study (Robins and Pals, 2002) showed that during their college years, entity theorists had a steady decline in their feeling of self-worth, relative to incremental theorists.
Other studies (Baer, Grant & Dweck, 2005) linked some cases of depression to self-critical rumination on supposedly fixed traits by entity theorists, and suggested that incremental theorists had greater resilience to obstacles, were more conscientious in their work, and were more willing to attempt challenging tasks. This suggests a role for conscious will or effort acting in a causal way on brains that initially had the same quality of rational problem solving, leaving them with different qualities by the end of a period of study. The essential distinction in the academic performance is that when the incrementalists suffered a setback, they did not accept this as the final judgement on their performance. This looks to point to a subjective assessment of two scenarios: the easy but disappointing scenario of giving up, and the demanding but more satisfying strategy of trying again. The second is absent in the entity theorists, because they ‘know’ that they can’t achieve more than a modest performance.

The psychologist Roy Baumeister (47.) examines the reason for the scientific and psychological consensus against the existence of freewill. He suggests a metaphysical element in this, with some scientists feeling that rejection of freewill is part of being a scientist. The fact that Libet and similar experiments have shown that actual movements of the body are not driven by free will is acknowledged, but Baumeister points to researchers such as Gollwitzer (1. 1999), who distinguishes between the decision to act and the action or movement itself. It is suggested that free will may have a role in the deliberative stage. For instance, free will could govern the decision to go for a walk, but the actions of getting up, going out of the door and putting one foot in front of the other would be unconsciously driven. Self-control, such as the ability to resist short-term benefits in favour of long-term goals, and rational choice based on deliberative thinking, are here seen as two of the most important factors associated with freewill. Baumeister argues that reasoning entails at least a limited degree of freewill, in that people can alter their behaviour on the basis of reasoning. Similarly, self-control equates to the ability to alter behaviour in line with some goal. Decisions such as these can certainly be related to emotional evaluation in the orbital frontal and other regions.

Baumeister cautions that the ability of modern technology to study periods of milliseconds may have blinded some researchers to the importance of processes that take extended periods of time. He wonders why people agonise over decisions if they actually have no influence on them, and also suffer negative stress effects in situations where they lack control over their lives. The implication is that the use of time and energy on such a process should have been selected out by evolution if it had no relevance. The author argues that while researchers such as Wegner have shown that people are sometimes not aware of the causes of their actions, that is very different from saying that they never determine their actions. The consensus against freewill has set the bar as high as possible, in denying that freewill ever has any influence or exists at all. Its supporters would have to show that none of the apparent occurrences of freewill are real, rather than just producing scattered examples of freewill being an illusion, some involving rather contrived conditions. Baumeister argues for the efficacy of freewill.
In particular, studies show that the processes of both self-control and rational choice deplete glucose in the bloodstream, leading to a deterioration in subsequent performance. It appears unlikely that evolution would have selected for such a high-energy process if it was not efficacious. Consciousness is closely associated with freewill, and these studies therefore carry a strong implication that consciousness itself is also a physical thing or process involving energy and being efficacious.

In Baumeister's own experimental studies, he found that the performance of self-control tasks deteriorated if there had been previous self-control tasks. The implication of this is that some resource is used up during the exercise of self-control. The exercise of choice seems to have the same effect. Subsequent to the exercise of either self-control or choice, attempts to exercise further self-control saw performance deteriorate, in a way that did not occur when participants were just thinking or answering questions. This suggests that self-control and rational choice both draw on some form of energy. Gailliot et al (2007) (47.) found that self-control caused reductions of glucose in the bloodstream, and that low levels of glucose were correlated with poor self-control. This finding has important implications for the freewill argument. If free choice was only some form of illusion, it is not clear why it would be adaptive for evolution to select for something that consumed a lot of energy but had no influence on behaviour. There is a rather convoluted suggestion that we have the illusion of freewill because that makes us think that others have freewill and should therefore be punished if they do not make choices that are favourable to the group. There are two problems with this approach. First, the Baumeister study showed that the same depletion of energy that occurred with the exercise of free will in the sense of self-control also occurred with the exercise of choice not requiring any particular restraint on impulses. The physical process of choosing, often concerning individual or private matters, looks to go far beyond simple approval of the actions of others. Further to this, if freewill is really just a charade, it is surprising that it should require such a noticeable amount of energy. In fact, the assessment of the positive or negative effects of the actions of other members of the group looks to be more easily accessible to an algorithm-based process. There is perhaps a deeper implication, not discussed in these articles, that consciousness, which is closely related to the experience of free choice, is itself a physical thing or process requiring energy. This should not be a surprise given the nature of the physical laws, but at the moment it looks to be contrary to the scientific consensus. The high energy cost of freewill suggested here also serves to explain why conscious, as distinct from unconscious, processing is used only sparingly, and that is one reason why we rely on unconscious responses for much of our activities.

5.24: Free won’t

An area of the basal ganglia known as the subthalamic nucleus (STN) is important from the point of view of the freewill debate. Benjamin Libet (48-51.), whose experiments indicated that some minor ‘voluntary’ movements were initiated before subjects were consciously aware of wishing to move, postulated that there could be a ‘free won’t’ mechanism that blocked actions that began unconsciously, but were later determined to be inappropriate by the conscious mind.
Recent studies show that the subthalamic nucleus does have an inhibitory role in stopping behaviours whose execution has already begun. The scientific consensus against freewill has created some anxiety that, as this ‘knowledge’ gradually leaks from the laboratory into the popular mind, there will be a deterioration in public behaviour. Ingenious arguments have been advanced against this, but studies suggest that we should fear such a deterioration. Vohs & Schooler (2008) (52.) found that participants who had read a study advocating the non-existence of freewill were more likely than controls to take advantage of an opportunity to cheat in a subsequent maths test. Other studies by Baumeister et al showed that participants encouraged not to believe in freewill were more aggressive and less helpful towards others.

At this stage, we might think we have covered enough ground to try to put together a theory of consciousness that has explanatory power, and is not obviously at variance with what we know about physics, neuroscience or evolution. We have tried to define consciousness as our subjective experience, or as the fact of it ‘being like something’ to experience things. Consciousness also involves our awareness of the real or apparent ability to subjectively envisage future scenarios, and to use these for our choice of actions. I have further suggested that there is only one problem with consciousness, the problem of how qualia or subjective experience arises, and that we have to address this, and essentially only this, in discussing consciousness. We have examined theories of consciousness that operate within the context of classical physics, and always come up against essentially the same explanatory gap. Classical physics gives a full explanation of the relationships of macroscopic matter, without any need for consciousness, and also without any ability to generate consciousness. This creates a problem as to how the brain can generate consciousness, given that neuroscience describes the brain in terms of macroscopic matter made up of carbon, hydrogen, oxygen and other atoms, the relationships of which can be described without either requiring or generating consciousness.

The failure to find a theory with satisfactory explanatory power within classical physics pushes us towards identifying consciousness as a fundamental or given property of the universe. What does this really mean? Explanation in science works by breaking things down into their components and the forces or processes that make them function. But this downward arrow of explanation does reach a floor. Mass, charge, spin and the particular strengths of the forces of nature are given properties of the universe that are not reducible to anything else and come without any explanation. Because consciousness has a similar lack of explanation, it is similarly suggested to be a fundamental property. This is only a start. In itself it tells us nothing about how such a fundamental manifests in the brain. Rather than having a solution, we are only at the beginning of a very difficult journey towards something with explanatory value. Not only do we have to discover some system that is truly fundamental but, given the lack of apparent consciousness in the rest of the universe, we need a process that is unique in operating only in brains, and not in other physical systems. Quantum consciousness is really a misnomer for the sort of system that we are looking for.
The philosopher David Chalmers was correct in pointing out that there was no more reason for consciousness to arise from quanta than there was for it to arise from classical structures. Both permeate the universe outside of the brain without producing consciousness. The quanta and their behaviour are only of interest if they can allow the brain access to a fundamental property not apparent in other matter. This brings us also to the question of what really is fundamental. There are two sides to this question: the quanta and spacetime. The quanta are the fundamental particles/waves of energy, which also equate to the mass of physical objects. Some quanta, such as the proton and the neutron, are composed of other quanta, so are not truly fundamental or elementary. The quarks that make up the protons and neutrons of the nucleus of the atom, and the force-carrying particles such as photons, appear to be the most fundamental quanta. But the quanta cannot be understood in isolation. They must be seen as having some form of relationship to spacetime, and that’s a more difficult area than might appear at first sight. Neither quantum theory nor relativity, which is our theory of spacetime, has ever been falsified, but they are, nevertheless, incompatible with one another. Many physicists are coming round to the notion that spacetime is not an abstraction but a real thing, and also something that is not continuous, but discrete, and perhaps best conceived in the form of a web or network. They are divided as to whether the quanta create spacetime, or spacetime generates the quanta, or the third possibility that the two are expressions of something more fundamental. However, whatever form it is conceived to take, the concept of a real and discrete structure also allows the possibility of some form of pattern or information capable of decision making, and this is the level of the universe where we need to look for an explanation of consciousness.

There are two routes leading to the conclusion that consciousness has to derive from such a fundamental level of the universe. In addition to the view that classical physics simply can’t cut it in respect of consciousness, there is the Penrose approach via the function of consciousness. As described earlier, he proposed that the Gödel theorem meant that human understanding or consciousness could perform tasks that no algorithm-based system such as a computer could perform. This led to an arcane dispute with logicians and philosophers which few lay people can follow. However, I think it unnecessary to penetrate into such an arcane area. At a much more mundane level, the process of choosing between alternative forms of behaviour or courses of action by means of subjective scenarios of the future looks also to invoke a process that cannot be decided by algorithms. This suggestion is now supported by recent studies showing that in the orbitofrontal region of the brain some activity correlates with subjective appreciation rather than with the strength of a signal, whereas in other parts of the brain not involved with preferences, activity correlates with the strength of this same signal. So while Penrose provides the original inspiration for the idea of an aspect of the universe that could not be derived from a system of calculations, it seems possible to simplify or streamline the original inspiration in a manner that is compatible with recent brain research and not open to the same sort of attacks from logicians and philosophers.
In a similar way, it may be possible to simplify Penrose’s proposal of a special type of quantum wave function collapse as the gateway to conscious understanding, seen here as an aspect of fundamental spacetime geometry. Penrose dismissed the randomness of the conventional wave function collapse as irrelevant to the mathematical understanding in which he was initially interested, and instead proposed a special form of objective wave function collapse, which was neither random nor deterministic, but accessed the fundamental spacetime geometry. His proposal as to wave function collapse is currently the subject of experimental testing, although this is a procedure that is likely to take up to a decade. Again the question is whether it is necessary to go to such lengths. Might there be a way around the apparent randomness that led Penrose to dismiss conventional wave function collapse? Might not the more conventional wave function collapse, or alternatively decoherence, equally well provide access to the fundamental and conscious level of the universe?

There are queries as to how random the randomness is. In one form of the famous two-slit experiment, single photons arrive at a screen over some extended period of time. The initial photons register on the screen in apparently random positions, but as later photons arrive the familiar light and dark bands form. Somehow the later photons, or perhaps the earlier photons, ‘know’ where to put themselves. There is a suggestion that this puzzle links to one of the other puzzles of quantum theory, namely entanglement, by which the quantum properties of particles can be altered instantaneously over any distance. In this suggestion, the photons in the two-slit experiment are entangled with other distant quanta. Whatever it is that decides the position of these particles in this scheme has no apparent explanation in terms of algorithms or systems of rules for calculating, and this is something that it holds in common with choice by emotional valuation.

But how could such a mechanism related to the fundamentals of distant space arise within our brains? Penrose’s collaborator, Stuart Hameroff, proposed a scheme by which quantum coherence arose within individual neurons and then spread throughout neuronal assemblies. Most commentators on consciousness believe that this theory can be straightforwardly refuted because of the rapid time to collapse or decoherence for quantum states in the conditions of the brain. However, this simplistic approach has in effect been partly refuted by the discovery of functional quantum coherence in biological systems during the last few years, initially in simple organisms subsisting at low temperatures, but most recently at room temperature and in multicellular organisms. Moreover, it is now apparent that the structures of aromatic molecules within the amino acids of individual neurons are similar to those within photosynthetic organisms now known to use quantum coherence. The structures that support quantum states in photosynthetic systems rely on the pi electron clouds discussed in earlier sections, and in microtubules the amino acid tryptophan supports the same structure of pi electron clouds, which thus looks potentially capable of sustaining quantum coherence and entanglement through significant sections of a neuron. The mechanisms by which quantum coherence could subsist in neurons look here to be within our grasp or understanding.
But as with the original Penrose proposal, Hameroff’s scheme may be more ambitious, and therefore more open to criticism, than it needs to be. Where quantum states have been shown to be functional, they subsist for only femtoseconds or picoseconds, whereas the Hameroff scheme requires quantum coherence to be sustained for an ambitious 25 ms; moreover, it has to be sustained over possibly billions of neurons spread across the brain. This lays it open to attack from many angles. It looks much more feasible to work from the basis of quantum coherence that exists in other biological systems and to look for similar short-lived single-cell processes in the brain. The known systems of functional quantum states that subsist within individual cells elsewhere in biology look to have the potential to exist within neurons. For this reason, it is much more feasible, in the absence of countervailing evidence, to work on the basis of consciousness arising within individual neurons. This effectively inverts the Hameroff scheme. Rather than neurons feeding into the global gamma synchrony, the synchrony, which is certainly correlated with consciousness, may be a trigger to conscious activity in neurons.

Recent studies give credibility to the idea of consciousness in single neurons. Experimentation has shown that increased activation in single neurons is correlated to particular percepts. Some neurons are selective in only responding to particular images, and activity in these is correlated to the conscious experience of those images. Of course it isn’t as simple as that. With 100 billion neurons in the brain, and perhaps a good percentage of these selecting for particular images, there has to be some way of coordinating their activity. It is initially puzzling that the same type of experiments that show a correlation between consciousness and individual neurons also show a correlation between the global gamma synchrony and consciousness. So which of these produces consciousness, the individual neurons or the gamma synchrony? Recent studies suggest that activity in individual neurons correlates with the gamma synchrony when a number of the neuron’s neighbours are also active. This agrees with studies showing ‘hot spots’ of activity in the brain also correlated with consciousness. Here we are perhaps left with the concept that the brain is a gate to the fundamental level of the universe, in the literal sense of a mechanism that allows or prevents entry. All of this may seem very speculative, but against this has to be set the lack of explanation in classical physics for the ‘something it is like’, or the ability to have choice or preference, that we find in consciousness.
Scientific Laws

As I have told you earlier, my guest is very sceptical about our scientific achievements. What follows are the notes I took when he gave me a short summary of what he considers ‘our strategy’.

In the modern understanding of science, the fundamental laws seem to be consequences of various symmetries of quantities like time, space or similar objects. To make this idea more precise, scientists often use mathematical arguments, choosing some set {X} as state space encoding all necessary information on the considered system. The system then is thought to evolve in time on a differentiable {n}-dimensional path {x_i(t)\in X} for all {t\in\mathbb{R}} and {1\leq i \leq n\in\mathbb{N}}. Quite frequently there is a so-called Lagrange function {L} on the domain {X^n \times X^n \times \mathbb{R}} and a constraint function {W} on the same domain. The path {x(\cdot)} is required to minimize or maximize the integral

\displaystyle \int_0^T L\left(x(s),\dot{x}(s),s\right)ds

under the constraint

\displaystyle W\left(x(s),\dot{x}(s),s\right)=0.

Under some technical assumptions, a path does exactly that if it satisfies the Euler-Lagrange equations

\displaystyle \frac{d}{dt}\frac{\partial L}{\partial \dot{x}_i}-\frac{\partial L}{\partial x_i}=\lambda \frac{\partial W}{\partial \dot{x}_i}

for some function {\lambda} on {X^n \times X^n \times \mathbb{R}}. Define {y_i:=\frac{\partial L}{\partial \dot{x}_i}} and observe that (under suitable assumptions) this transformation is invertible, i.e. the {\dot{x}_i} can be expressed as functions of {x_i, y_i} and {t}. Next, define the Hamilton operator

\displaystyle H(x,y,t) = \sum_{i=1}^n \dot{x}_i(x,y,t) y_i - L(x,\dot{x}(x,y,t),t)

as the Legendre transform of {L}. The Legendre transformation is (under some mild technical assumptions) invertible. Now, under less mild assumptions, namely holonomic constraints, two things happen. The canonical equations

\displaystyle \frac{d x_i}{d t} = \frac{\partial H}{\partial y_i} \left(=[x_i, H]\right), \quad \frac{d y_i}{d t} = -\frac{\partial H}{\partial x_i}\left(=[y_i, H]\right), \quad \frac{d H}{dt} = -\frac{\partial L}{\partial t}

are equivalent to the Euler-Lagrange equations. Here {[\cdot,\cdot]} denotes the commutator bracket {[a,b]:= ab-ba}. Furthermore, if {L} does not explicitly depend on time, then {H} is a constant. That is the aforementioned symmetry: {H}, the energy, is invariant under time translations. Given all that, the solution of the minimisation or maximisation problem can then be given either in the Heisenberg picture as

\displaystyle x(t) = e^{t H} x(0) e^{-t H}, \quad y(t) = e^{t H} y(0) e^{-t H}

or, in the (in this case equivalent) Schrödinger picture, as an equation on the state space

\displaystyle u(t)= e^{t H}u(0).

This description is equivalent (under mild technical assumptions) to the following initial value problem:

\displaystyle \dot{u}(t)=H u(t), \quad u(0) = u_0\in X,

where the operator {H} is the ‘law’. More technically, the law is the generator of a strongly continuous (semi-)group of (in this case linear and unitary) operators acting on (the Hilbert space) {X}. As an example of this process he mentioned the Schrödinger equation governing quantum mechanical processes. His conclusion was that the frequently appearing ‘technical assumptions’ in the above derivation make it highly unlikely for laws to exist even for systems with, what he calls, no emergent properties.
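To make the machinery in these notes concrete, here is a minimal sketch of my own (not part of the guest's summary): SymPy derives the Euler-Lagrange equation for a harmonic oscillator, and then the Legendre transform produces the familiar Hamiltonian. The names m, w, q, y are illustrative choices.

import sympy as sp

t = sp.symbols('t')
m, w = sp.symbols('m w', positive=True)
x = sp.Function('x')(t)

# Lagrangian of a harmonic oscillator: kinetic minus potential energy
L = m*sp.diff(x, t)**2/2 - m*w**2*x**2/2

# Euler-Lagrange equation: d/dt(dL/d(xdot)) - dL/dx = 0
EL = sp.diff(sp.diff(L, sp.diff(x, t)), t) - sp.diff(L, x)
print(sp.simplify(EL))   # m*w**2*x(t) + m*Derivative(x(t), (t, 2))

# Legendre transform: y = dL/d(xdot) = m*xdot, so xdot = y/m
q, y = sp.symbols('q y')
xdot = y/m
H = sp.simplify(xdot*y - (m*xdot**2/2 - m*w**2*q**2/2))
print(H)                 # y**2/(2*m) + m*w**2*q**2/2

With this H, the canonical equations give dq/dt = y/m and dy/dt = -m w^2 q, recovering the same equation of motion.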
‘If that was true’, I thought, ‘then … bye bye theory of everything!’ He explained further that under no reasonable circumstances is it possible to extrapolate these laws to the emergent situation. I am not sure whether I understand completely what he means by that, but his summary of how we find scientific laws is, in my opinion, way too simple. It can’t be true, and I told him so. With just a couple of ink strokes he derived the commutation relations for exchange markets from microeconomic theory. That left me speechless, since I always thought that there cannot be ‘market laws’. Markets are in principle unpredictable! They are, aren’t they?
I asked a variety of people in different careers about the use of math in their lives. They answered any or all of the following questions:

1. Do you use math in your current profession? What type of math do you use? What is an example of the math you use?
2. What do you wish you had paid better attention to in math class? What has been the most useful information, or skill, taken from the math classes?
3. Do you utilize math skills to maintain your home or personal life? Finances? Budget?
4. Do you use the logic or problem solving skills learned in math class to work through situations in your professional or personal life?

Please look over their responses. They are alphabetical by profession.

Alicia, Business and Chemistry

Math in current profession… I have used math in every field I've worked in. When I worked in a lab, I used everything up to and including calc 2 (integrals). In marketing I actually used math MORE than when I worked in the lab! Lots of algebra, probability, and statistics. Spreadsheet skills are KEY - you will use spreadsheets in just about any career.

Skills that were useful or wish I had paid more attention to… The single most important thing I learned in math (all of school really) is CRITICAL THINKING. If you can develop good critical thinking skills you will be MILES ahead of the competition when you are applying for jobs. I wish I still had my stats book.

Math to maintain your home/personal life?... All the time! Surface area and volume of simple and complex objects (like, say, a wall or rectangular room vs. molding with rounded parts and/or steps, or a two-level room). I do a lot of math keeping track of our finances. Taking the time to do financial math can save you tens of thousands of dollars over the life of a home loan, make you money with the best possible saving vehicles, and save you from predatory lenders (who will loan you way more than you can afford to pay back, then take your car/house/what have you).

Logic or problem solving skills?... Constantly. Everything from trying to repair things in the home to navigating workplace politics to arguing with my husband.

Ryan, Engineer

The best math classes I ever took were physics 1 and 2, hands down. The math instruction I've had (in real math classes) has mostly sucked. The curriculum is probably to blame more than the instructors, but way too much class time was spent on procedural rather than "connective" work - in other words, that which fosters our ability to connect what we were supposed to be learning with other things we had learned before. If someone asks me why math is important, I tell them one reason it's important is because it's the foundation upon which all science education rests. Anyone who is considering a career that has anything to do with science will have a much easier time of it if their math is solid.

Robley, English Instructor

Math in current profession… I primarily use very basic math in my job - things like calculating student grades and percentages, and weighing certain projects more heavily than others. Frequently I encounter students who have no idea how to do this basic addition and averaging to determine their current or final grades. This is actually really problematic for students because they are unable to make decisions about their standing in the class, their participation in the class, and how to reach the desired grade in the course.
Skills that were useful or wish I had paid more attention to… Geometry. I HATED geometry; it was surprisingly hard for me. But I find this is a tool I use frequently when fixing my house - when I'm building a garden in my back yard and need to calculate the supplies I need, or what angle to cut the wood at to create a certain shape (I have a six-sided feature we created in our yard that took FOREVER to figure out).

Logic or problem solving skills?... Many of the courses that are required in college (and high school) can seem pointless while we're in the class. It's often difficult to understand the "use" of a certain piece of history or set of skills. One thing that I'm frequently struck by, however, is how much this knowledge does add value to my life. Even things I don't "use" in my daily life (for example, understanding the history of the Panama canal) add value to my life. I find this knowledge allows me to be better engaged in the world around me. I'm more prepared to understand the things I read (literature, newspapers, blogs, FB status updates even); I'm more able to engage in conversations with people that I meet about a variety of subjects; I'm able to understand or question how current events (in my life and in the world) will affect the future; in short, I'm more prepared to participate in the world. The knowledge, critical thinking skills and technical skills (like math and writing) that you are learning while you are in college serve not only to prepare you for a specific career, but to prepare you to be an engaged participant in the world. This gives you a level of control over the things that are happening around you that you can't gain in any other way. My experience has been that much of this knowledge developed a "purpose" long after I left college, and I am continually surprised at the variety of purposes I find for this knowledge.

Tim, Finance

Math in current profession… We use calculus and linear algebra every day as part of option value calculations and numerical procedures we code up to calculate the fair value of a financial instrument.

Skills that were useful or wish I had paid more attention to… Too often, math beyond algebra gets too abstract. For example, linear algebra is immensely useful, but most courses I've seen don't drive home the real-world applications. So I think I paid attention, but the teaching was so math focused, it was hard to see how I would use it until much later.

Math to maintain your home/personal life?... I use math to calculate different budget scenarios at home. For example, if I pay $50 more on my mortgage, how much sooner will I pay off my mortgage?

Logic or problem solving skills?... I use stats every day at work, including probability theory to help make decisions.

Travis, Chemist

Math in current profession… calculate mole-to-mole ratios for chemical synthesis.

Skills that were useful or wish I had paid more attention to… quantitative data for chemical analysis.

Angela, Chemistry, B.S.

Math in current profession… For chemistry math problems I'll have to use math to calculate concentration/molarity, volumes, and fractions of molecules included in a square area for lattice structures. We use logs to calculate the pH, and we use basic math to calculate total energy/entropy/enthalpy required in a process. Not to mention all the calculus we use to calculate the probability of an electron's location at any given moment, as well as the time-dependent Schrödinger equation for a wave function! YUCK.

Math to maintain your home/personal life?...
I use the "counting money back" technique every time I purchase something to be sure I receive the right amount of change. I use the 10% +half of the 10% to calculate tips! Every time I explain to people that's how I do it, they're always like "oh.. that's so easy". Chris, Computational Biologist Math in current profession… I'm a scientist who uses computers to help understand why we get cancer and how we might treat it more effectively. I use math every day, especially statistics. When we look at the DNA of a tumor, we see thousands of mutations. Statistics helps us decide which ones are most likely to be helping that cancer grow, and which ones might be good targets for new drugs. Skills that were useful or wish I had paid more attention to… I was never a great math student in high school and college. Because of that, I've had to go back to basics and teach myself a lot of things later on. If I had really learned the fundamentals of Algebra earlier, it would have made my life easier and really saved me a lot of time! Math to maintain your home/personal life?... We're saving up for a house and planning for our first kid. I need to understand concepts like compound interest to understand where my money goes and how much I need to save. Logic or problem solving skills?...  Math and science have a lot in common.  Both can be hard sometimes, but by stepping through problems logically and trying different approaches, you can usually figure out the answers. Erin, Geologist Math in current profession… I do use math all the time.  The main thing that we try to do is find where we think oil is accumulated below the earth's surface.  Once we have found an interesting area, we need to determine the volume of oil that could potentially be in there.  So we do volume calculations - There are a lot of factors involved in that - but essentially we need to know the volume of the entire rock body that would have the oil in it, then the porosity of the rock, and how much oil would actually fit in that pore space.  All of the factors involved in determining the volumes are imprecise, so we have to assign a range of the possibilities and then simulate the range of volume outcomes for that We interpret seismic data - which is like an ultrasound of the earth - sound waves are sent down into the earth, and recording their reflections back to the surface allow us to see the layers below the earth's surface (up to 8 or so km!)  So we have to know about wave theory (maybe that is more physics...) Skills that were useful or wish I had paid more attention to… I wish I had paid more attention to the principles of statistics.  Or, more accurately, that I had taken a statistics class...  Math to maintain your home/personal life?... Yes!  I use math in maintaining a budget and keeping track of where I am financially.  I report all my expenses and income and keep that up to date monthly. Logic or problem solving skills?...  Right now I live in Denmark and their currency is the Danish crown. When I go shopping, I convert the price to US dollars so I can get a sense of how expensive things are.  I use logic in planning my projects and managing my time throughout the day.  For example, what time and other resources do I have, what is my due date, what kind of product can I produce in that amount of time?  Also, when i read the news, my knowledge of statistics helps me (I do have some....) to recognize sloppy reporting and know which news articles to take seriously. 
Skye, Greenhouse Manager

Math in current profession… I have to calculate man-hours for planting times at my greenhouse, e.g. 8 people planting for 3 hours is 24 hours of work.

Marilyn, Health Insurance Agent

Math in current profession… I use math all the time in comparing plans and their rates. It is mostly simple math, some percentages, etc. I also use math in figuring tax savings.

Skills that were useful or wish I had paid more attention to… I wish I had listened to my Personal Finance teacher more. This class was all about math. Now that I know how important it is to begin saving at a young age, I wish I had started earlier. As I mentioned above, the financial calculations have been a great skill to have over the years.

Math to maintain your home/personal life?... I use math in my personal life while shopping and figuring out if I should refinance my home, but most importantly in budgeting. It is mostly simple math. I use geometry and other math occasionally when I am fixing things around the house. I often use a financial calculator that allows me to see what the future value of my investments will be given the amount and number of payments I make at a given rate of return. (Everyone should know how to do this to inspire them to save money!!!) I use math all the time with my home budgeting and finances. I calculate how much I need to save each month to pay taxes, or insurance premiums that come once a year. I've even set up a very simple budget that allows me to spend wisely and save money for the future. With math I am also able to see how stupid it is to use credit cards/get into debt. I recently used math to show my daughter the advantages of driving a 'paid for' used car rather than leasing a new car. We figured that in just 9 years she would be at least $25,000 ahead if she drove a used car.

Logic or problem solving skills?... When I solve problems I love to use math because it usually gives me a clear-cut answer. You've heard the saying "the numbers don't lie", and that is the truth. Still, sometimes there is more than just the numbers that needs to be considered - for example, do we take the higher-paying job if it means more stress and sacrificing time with family? So you can't rely just on the math, but it does help you weigh your options. I always check the numbers. I am really into SIMPLE budgeting. Budgeting seems to scare a lot of people off, but it really does not have to be that hard, and it is so important. Without a money plan or a budget, people can get into major financial problems, which in turn can lead to a lot of stress and pain. Regardless of your career, budgeting and saving money are vital (math intensive) skills to have. If I could give advice to young college students I would encourage them to learn math and start budgeting and saving now!!

Emlyn, Homemaker & Part-time curatorial assistant at Museum of Natural History

Math in current profession… I use math for both professions. At the museum, we do a significant amount of measuring in taking care of our objects. We use both metric and American measurement types, and so must have a basic understanding of the comparison between the two. We also use computer math functions, such as categorical numbering in our database as well as Excel spreadsheet columns. And yes, we even use math to fill in the correct hours on our timecards!
Skills that were useful or wish I had paid more attention to… The most useful information from math classes is basic math (addition, subtraction, multiplication, division) WITHOUT using a calculator. I wish I had paid better attention so I could do all of that in my head quickly!

Math to maintain your home/personal life?... At home I use math every day. I am a mom of two, a 2-year-old and a 4-year-old. Meals are often mini fights about fair portions... I use division and reasoning to make sure everyone gets their sandwich cut into 4 pieces. I use measurement and math when making recipes, even basic macaroni and cheese! Budgeting and finances require addition and subtraction all the time. Balancing the checkbook to make sure we haven't overspent on our debit card, and matching that against the bank's records, is a very important skill.

Logic or problem solving skills?... Logic and problem solving get used in my professional life at the museum when we are trying to find the best places to store and display objects. Measurements are crucial, as well as logic: two big baskets probably won't fit on the same shelf! At home, logic and problem solving are again crucial, as with the even sandwiches. :)

Kyle, Lawyer

Math in current profession… I am self-employed and I use math in my bookkeeping, profit and loss, cost vs. time evaluation, interest calculations, etc. Also, I always have to problem solve and apply logic.

Nadya, Marketing

Math in current profession… Marketing is actually full of math. I've used math to compute how much food to order for an event or how many fun giveaways to order; how much money we have in the budget and where it can be best used; how many visitors to the website actually looked around; how many people read emails and responded; effectiveness of promotions; how many extra newsletters to order to account for mailing run-off (basically the errors the mailhouse might make); how many bags of candy to buy to hand out at Halloween in the branches; and much, much more. Marketing involves a lot more than just making things pretty or fun!

Math to maintain your home/personal life?... Definitely used in budgeting. Also used in figuring out which can of green beans is the best price, or other groceries (figuring per-oz prices), and estimating how much my grocery bill should be while shopping so I can tell if an error is made when being rung up.

Logic or problem solving skills?... I do use logic quite a bit, but I'm not sure if I learned it in math class. Occasionally proofs have come in handy, though, when trying to illustrate to other people that a process will work.

Danny, Neuroscience Researcher (Graduate Student)

Math in current profession… I use algebra to make solutions needed for my experiments. I must calculate the mass of the drug that I want in my solution for a given volume of saline to reach the concentration that I need for my experiment. I also use algebra/arithmetic to convert different measures of the concentration of a solution to discuss the experiment with other scientists. When administering drugs to live animals, the dose of the drug is described as mass of drug per mass of the animal (e.g. 5 mg/kg). However, if we are doing experiments in brain slices, we express the dosage of a drug as its molarity (20 µM). I also do a lot of statistics on the results of the experiments. I write Matlab scripts to analyze my data, which could basically be written as long, complicated algebraic equations.
I also use built-in Matlab functions to do a statistical multivariate, repeated-measures analysis of variance (ANOVA) to determine if the results for different experimental conditions are different enough to be statistically significant, which we can then report in scientific journals.

Skills that were useful or wish I had paid more attention to… I paid pretty good attention in my math classes, since I liked math. My problem has mostly been forgetting certain concepts since I haven't been in a math class in over 10 years. A solid understanding of algebra is probably the most important for everyday life. Most problems can be solved with back-of-the-napkin type math using algebra and simple arithmetic.

Math to maintain your home/personal life?... Mostly I would say this consists of just some quick mental arithmetic to make sure some company didn't make a mistake and charge me too much or something.

Logic or problem solving skills?... Logic and problem solving are hugely helpful in my job as a research scientist. Without them you cannot do science.

Janice, Nurse (Medical Intensive Care Unit)

Math in current profession… I use math all the time in my job as a nurse: estimating when the IV bag will be finished; basic adding and subtracting for intake and output totals (24-hour fluid balance); and algebra to calculate drug administration, setting IV pumps, etc. Everything is in the metric system, so it is important to understand that.

Math to maintain your home/personal life?... In my everyday life, I use it to do my checkbook/bank statement, determine the price of something I am buying (if it is a good price or not) and how much I will save using coupons, 20% off, etc. I also calculate if I am getting good gas mileage with my car (miles/gallon of gas). I use measurements in cooking and adjusting recipes, i.e. making half the recipe or doubling, etc. I use it when buying material for crafts/sewing. The list goes on! I am sure I use it even more than that!

Jessica, Office Manager & Landscape Designer

Math in current profession… I use math a lot, which is a surprise for me because it was a real struggle for me through school. I use it as an office manager, entering the finances for the theatre I work for and keeping the books up to date. I also use it as a landscape designer: square footage, volume for things like compost or gravel, dimensions, etc.

Kevin, Petroleum Engineer

Math in current profession… I use algebra for multivariate statistics, I use geometry for wellbore calculations, and I use calculus for finding rates of change, inflection points, and completed work (area under the curve).

Skills that were useful or wish I had paid more attention to… I wish I had paid closer attention to statistics. The most important skill learned is solving algebraic equations. Solutions to real-world problems come about by solving equations, and this has contributed to my project at Shell.

Christine, Statistician & Medical Researcher (Graduate Student)

Math in current profession… I do research on statistical methods to infer biological networks. For example, inside each of our cells, there are reactions occurring constantly. In a diseased state, this network of reactions is altered in ways that lead to a loss of cellular function. Magnetic resonance spectroscopy can be used to profile tissue samples, and from this we can identify which molecules are present and estimate their concentrations.
We can then use statistical methods to infer the reaction network by looking at the covariance of the concentrations of the molecules. This is a difficult problem since the data is noisy (i.e. there is random variation in the measurement), the networks are complex, and we often have small sample sizes. We are developing statistical methods that allow us to incorporate prior knowledge on the degree of connectivity of the network and known chemical reactions. These techniques can improve the reliability of our inference. The resulting identification of the reactions under diseased conditions can increase scientific understanding of the mechanisms of disease and also guide the development of future treatments.

Skills that were useful or wish I had paid more attention to… I still wish that I could explain mathematical concepts better! It's really nice to be able to help out other students, and you can feel sure that you understand something once you are able to explain it to someone else. Right now, I work a lot with medical researchers, and I often have to explain statistical methods to them. I am still trying to get better at how I communicate these ideas.

Erin, Swimming Coach

Math in current profession… I add up the amount of yardage that my swimmers are swimming and calculate how much yardage is needed for a mile.

Math to maintain your home/personal life?... I balance a checkbook.

Ben, TV Production

Math in current profession… I do base-60 time math daily (e.g. adding and subtracting the number of minutes in an hour), subtracting one time from another to find the difference between the two. Also, I use math in trying to calculate time cues for production, which is tough since I have to do it backwards in real time.

Alicia, Veterinarian

Math in current profession… Yes, I use math every day in my profession. I use basic algebra and stoichiometry for fluid rate calculations, drug dosage calculations, basic unit conversions, etc. I love trigonometry and know I will get to use some of it when I go into my residency for sports med/rehab and have to study/use biomechanics; basic trig is also used a lot in orthopedic surgeries.

Skills that were useful or wish I had paid more attention to… I wish I understood physics better, which has a lot of math in it, because then I would understand ultrasounds better and be able to use my machine better. The most useful skill I learned in math class is basic math, like fractions. I am constantly surprised by how many people can’t understand that ¼ of a tablet is the same as 0.25 of a tablet…

Math to maintain your home/personal life?... Yes, making a monthly budget and sticking to it has been important given the amount of educational debt I have. Also, calculating how much interest accrues on my loans is important to know each year.

Andrea, Walmart Customer Service Manager and Supervisor

Math in current profession… I use it every day to calculate the ad matching we do. For example, say you are buying 10.29 pounds of oranges and Rancho Market's ad is 8 pounds for 99 cents; you need to calculate how their rates compare with Walmart's. At Walmart we sell oranges by unit, so we have to figure out the price per orange, because not all our produce is by the pound. The reason I am giving you this example is that I hear all day long from the younger cashiers that they hate math and don't know how to figure it out. I show them all how to do this so they can not only handle the ad matches but also not feel stupid.
I sympathize with a lot of them because math was not my best subject. I also use math in my profession to figure out the difference I need to give customers as a credit if they paid a higher rate of tax than in Utah.

Skills that were useful or wish I had paid more attention to… Basic math, math that you need in the real world, and how to do it in your head without a calculator. And know that, if you have struggled with math in the past, you can get better. For example, I used to struggle with arithmetic, but now I can do it in my head. Find someone to help you break it down so that you can understand. I am very grateful that I had teachers throughout my schooling who took time to do this for me.

Math to maintain your home/personal life?... I use it to figure out how to cut fabric, to double-check the machine on prices, and for gas mileage. The desire to learn and gain knowledge (being able to solve math problems like discounts, or being able to use the computer, etc.) has promoted me from a temporary employee to a manager.

Jim, Web Developer

Math in current profession… I definitely use math in programming. Simple arithmetic is everywhere in programming, but also many of the concepts of programming are based on math. For example, a key concept of programming is understanding variables (assigning labels to data), which is the basis for algebra. But most of the actual work of programming is more like solving proofs in geometry, at least conceptually. The idea of tying together a series of steps to produce a specific result is exactly the work programmers do.

Skills that were useful or wish I had paid more attention to… I wish I had taken a statistics class. I think some of the concepts and algorithms would be useful, if for no other reason than to have a clearer idea of how to express what sort of algorithm I am looking for. The most useful thing from math for me was discovering that I enjoyed doing geometry proofs. I think "math" is a huge field and most people actually like certain parts of it, even if they "hate math". You don't have to like everything, but I think everyone should try to expose themselves to as many fields of mathematics as they can.

Math to maintain your home/personal life?... I do use math for finances, but not in a straightforward way. I think having a strong understanding of probabilities and percentages can be very helpful for understanding things, such as the fact that if you buy a stock and it loses 50% of its value, the stock will have to gain 100% to be worth its original amount.

Logic or problem solving skills?... All the time. I think developing good problem solving skills is useful in almost every aspect of life and business, but the real key is being able to identify and translate real-world problems into solvable problems. I always hated solving word problems in math, but I think that's an important skill to master.
Cornu’s spiral

Cornu’s spiral is the curve parameterized by the Fresnel integrals:

x(t) = C(t) = \int_0^t \cos(\pi s^2/2) \, ds, \qquad y(t) = S(t) = \int_0^t \sin(\pi s^2/2) \, ds

Here’s the Python code used to make the plot. (Note that scipy.special.fresnel returns the pair (S, C), and linspace lives in NumPy rather than SciPy.)

from scipy.special import fresnel
from numpy import linspace
import matplotlib.pyplot as plt

# fresnel(t) returns (S(t), C(t)); plot C on the x-axis, S on the y-axis
t = linspace(-7, 7, 1000)
y, x = fresnel(t)
plt.plot(x, y)
plt.show()

Energy in frequency modulated signals

In an earlier post we proved that if you modulate a cosine carrier by a sine signal you get a signal whose sideband amplitudes are given by Bessel functions. Specifically:

\cos( 2\pi f_c t + \beta \sin(2\pi f_m t) ) = \sum_{n=-\infty}^\infty J_n(\beta) \cos(2\pi(f_c + nf_m)t)

When β = 0, we have the unmodulated carrier, cos(2π f_c t), on both sides. When β is positive but small, J_0(β) is near 1, and so the frequency component corresponding to the carrier is only slightly diminished. Also, the sideband amplitudes, the values of J_n(β) for n ≠ 0, are small and decay rapidly as |n| increases. As β increases, the amplitude of the carrier component decreases, the sideband amplitudes increase, and the sidebands decay more slowly.

We can be much more precise: the energy in the modulated signal is the same as the energy in the unmodulated signal. As β increases, more energy transfers to the sidebands, but the total energy stays the same. This conservation of energy result applies to more complex signals than just pure sine waves, but it’s easier to demonstrate in the case of a simple signal.

To prove the energy stays constant, we show that the sum of the squares of the coefficients of the cosine components is the same for the modulated and unmodulated signal. The unmodulated signal is just cos(2π f_c t), and so the only coefficient is 1. That means we have to prove

\sum_{n=-\infty}^\infty J_n(\beta)^2 = 1

This is a well-known result. For example, it is equation 9.1.76 in Abramowitz and Stegun. We’ll show how to prove it from first principles. We’ll actually prove a more general result, the Neumann-Schläfli addition formula, then show our result follows easily from that.

Neumann-Schläfli addition formula

Whittaker and Watson define the Bessel functions by their generating function:

\exp\left(\frac{z}{2}\left(t - \frac{1}{t}\right)\right) = \sum_{n=-\infty}^\infty t^n J_n(z)

This means that when you expand the expression on the left as a power series in t, whatever is multiplied by t^n is J_n(z) by definition. (There are other ways of defining the Bessel functions, but this way leads quickly to what we want to prove.)

We begin by factoring the Bessel generating function applied to z + w:

\exp\left(\frac{z+w}{2}\left(t - \frac{1}{t}\right)\right) = \exp\left(\frac{z}{2}\left(t - \frac{1}{t}\right)\right) \exp\left(\frac{w}{2}\left(t - \frac{1}{t}\right)\right)

Next we expand both sides as power series:

\sum_{n=-\infty}^\infty t^n J_n(z+w) = \sum_{j=-\infty}^\infty t^j J_j(z) \sum_{k=-\infty}^\infty t^k J_k(w)

and look at the terms involving t^n on both sides. On the left this is J_n(z + w). On the right, we multiply two power series. We will get a term containing t^n whenever we multiply terms t^j and t^k where j and k sum to n:

J_n(z+w) = \sum_{j+k = n} J_j(z) J_k(w) = \sum_{m=-\infty}^\infty J_m(z) J_{n-m}(w)

The equation above is the Neumann-Schläfli addition formula.

Sum of squared coefficients

To prove that the sum of the squared sideband coefficients is 1, we apply the addition formula with n = 0, z = β, and w = -β:

1 = J_0(\beta - \beta) = \sum_{m=-\infty}^\infty J_m(\beta) J_{-m}(-\beta) = \sum_{m=-\infty}^\infty J_m(\beta)^2

This proves what we were after:

\sum_{n=-\infty}^\infty J_n(\beta)^2 = 1

We used a couple of facts in the last step that we haven’t discussed. The first was that J_0(0) = 1. This follows from the generating function by setting z to 0: the left side is then identically 1, so the coefficient of t^0 is 1 and all other coefficients vanish. The second was that J_{-m}(-β) = J_m(β). You can see this from the generating function, since negating z has the same effect as swapping t and 1/t.
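As a quick numerical sanity check (my addition, not part of the original post), the identity can be verified with SciPy. Truncating the sum at |n| ≤ 200 is an arbitrary but generous cutoff for moderate β:

from scipy.special import jv

# Sum of squared sideband coefficients should be 1 for any beta
beta = 5.0
total = sum(jv(n, beta)**2 for n in range(-200, 201))
print(total)  # 1.0 up to floating point error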
Analyzing an FM signal

Frequency modulation combines a signal with a carrier wave by changing (modulating) the carrier wave’s frequency. Starting with a cosine carrier wave with frequency f_c Hz and adding a signal with amplitude β and frequency f_m Hz results in the combination

\cos(2\pi f_c t + \beta \sin(2\pi f_m t))

The factor β is known as the modulation index. We’d like to understand this signal in terms of cosines without any frequency modulation. It turns out the result is a set of cosines weighted by Bessel functions of β:

\cos(2\pi f_c t + \beta \sin(2\pi f_m t)) = \sum_{n=-\infty}^\infty J_n(\beta) \cos(2\pi(f_c + nf_m)t)

We will prove the equation above, but first we’ll discuss what it means for the amplitudes of the cosine components.

Component amplitudes

For small values of β, Bessel functions decay quickly, which means the first cosine component will be dominant. For larger values of β, the Bessel function values increase to a maximum then decay like one over the square root of the index. To see this, compare the coefficients for modulation index β = 0.5 and β = 5.0. (The original post plots the coefficients for each case; the code sketch after this post reproduces them.) For fixed β and large n we have

J_n(\beta) \approx \frac{\beta^n}{2^n \, n!}

and so the sideband amplitudes eventually decay very quickly.

Update: See this post for what the equation above says about energy moving from the carrier to sidebands.

To prove the equation above, we need three basic trig identities

\cos(A + B) = \cos A \cos B - \sin A \sin B
2\cos A \cos B = \cos(A-B) + \cos(A+B)
2\sin A \sin B = \cos(A-B) - \cos(A+B)

and three Bessel function identities

\cos( z \sin \theta) = J_0(z) + 2\sum_{k=1}^\infty J_{2k}(z) \cos(2k\theta)
\sin( z \sin \theta) = 2\sum_{k=0}^\infty J_{2k+1}(z) \sin((2k+1)\theta)
J_{-n}(z) = (-1)^n J_n(z)

The Bessel function identities above can be found in Abramowitz and Stegun as equations 9.1.42, 9.1.43, and 9.1.5.

And now the proof. We start with

\cos(2\pi f_c t + \beta \sin(2\pi f_m t))

and apply the sum identity for cosines to get

\cos(2\pi f_c t) \cos(\beta \sin(2\pi f_m t)) - \sin(2\pi f_c t) \sin(\beta \sin(2\pi f_m t))

Now let’s take the first term and apply one of our Bessel identities to expand it to

J_0(\beta) \cos(2\pi f_c t) + \sum_{k=1}^\infty J_{2k}(\beta) \left\{ \cos(2\pi (f_c - 2k f_m)t) + \cos(2\pi(f_c + 2k f_m)t) \right\}

which can be simplified to

\sum_{n \,\, \mathrm{even}} J_n(\beta) \cos(2\pi(f_c + nf_m)t)

where the sum runs over all even integers, positive and negative.

Now we do the same with the second half of the cosine sum. We expand it to

\sum_{k=0}^\infty J_{2k+1}(\beta) \left\{ \cos(2\pi (f_c + (2k+1) f_m)t) - \cos(2\pi(f_c - (2k+1) f_m)t) \right\}

which simplifies to

\sum_{n \,\, \mathrm{odd}} J_n(\beta) \cos(2\pi(f_c + nf_m)t)

where again the sum is over all (odd this time) integers. Combining the two halves gives our result:

\cos(2\pi f_c t + \beta \sin(2\pi f_m t)) = \sum_{n=-\infty}^\infty J_n(\beta) \cos(2\pi(f_c + nf_m)t)

Related post: Visualizing Bessel functions
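Here is the sketch referred to above (my addition, not from the original post): it computes and plots the sideband coefficients J_n(β) for the two modulation indices, enough to see the contrast between β = 0.5 and β = 5.0.

import numpy as np
from scipy.special import jv
import matplotlib.pyplot as plt

n = np.arange(-10, 11)
for beta in (0.5, 5.0):
    # Sideband amplitudes J_n(beta) at harmonics n of the modulating frequency
    plt.bar(n, jv(n, beta), alpha=0.6, label=f"beta = {beta}")
plt.xlabel("n")
plt.ylabel("J_n(beta)")
plt.legend()
plt.show()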
Connection between hypergeometric distribution and series

A hypergeometric function is defined by a pattern in its power series coefficients. The hypergeometric function F(a, b; c; x) has the power series

F(a, b; c; x) = \sum_{n=0}^\infty \frac{(a)_n \, (b)_n}{(c)_n} \frac{x^n}{n!}

where (k)_n denotes the rising factorial k(k + 1) ··· (k + n − 1), and so the ratio of consecutive coefficients is a rational function of the index n. The hypergeometric distribution gets its name from the same pattern: the ratio of consecutive probabilities in its mass function is also a rational function of the index.

Dilogarithm, polylogarithm, and related functions

The functions dilogarithm, trilogarithm, and more generally polylogarithm are meant to be generalizations of the logarithm. I first came across the dilogarithm in college when I was evaluating some integral with Mathematica, and they've paid a visit occasionally ever since.

Unfortunately polylogarithms are defined in several slightly different and incompatible ways. I'll start by following An Atlas of Functions and then mention differences in A&S, SciPy, and Mathematica. According to Atlas,

Polylogarithms are themselves special cases of Lerch's function. Also known as Jonquière's functions (Ernest Jean Philippe Fauque de Jonquières, 1820–1901, French naval officer and mathematician), they appear in the Feynman diagrams of particle physics.

The idea is to introduce an extra parameter ν in the power series for natural log:

\mbox{polyln}_\nu(x) = -\sum_{j=1}^\infty \frac{(1-x)^j}{j^\nu}

When ν = 1 we get the ordinary logarithm, i.e. polyln_1 = log. Then polyln_2 is the dilogarithm diln, and polyln_3 is the trilogarithm triln. One advantage of the definition in Atlas is that the logarithm is a special case of the polylogarithm. Other conventions don't have this property.

Other conventions

The venerable A&S defines dilogarithms in a way that's equivalent to the negative of the definition above and does not define polylogarithms of any other order. SciPy's special function library follows A&S. SciPy uses the name spence for the dilogarithm for reasons we'll get to shortly.

Mathematica has the function PolyLog[ν, x] that evaluates to

\mbox{Li}_\nu(x) = \sum_{j=1}^\infty \frac{x^j}{j^\nu}

So polyln_ν(x) above corresponds to -PolyLog[ν, 1 - x] in Mathematica. Matlab's polylog is the same as Mathematica's PolyLog.

Relation to other functions

Spence's integral is the function of x given by the integral

\int_1^x \frac{\log t}{t-1}\, dt

and equals diln(x). Note that the SciPy function spence returns the negative of the integral above.

The Lerch function mentioned above is named for Mathias Lerch (1860–1922) and is defined by the integral

\Phi(x, \nu, u) = \frac{1}{\Gamma(\nu)} \int_0^\infty \frac{t^{\nu-1} \exp(-ut)}{1 - x\exp(-t)} \, dt

The connection with polylogarithms is easier to see from the series expansion:

\Phi(x, \nu, u) = \sum_{j=0}^\infty \frac{x^j}{(j+u)^\nu}

The connection with polylogarithms is then

\Phi(x, \nu, 1) = -\frac{1}{x} \mbox{polyln}_\nu(1-x)

Note that the Lerch function also generalizes the Hurwitz zeta function, which in turn generalizes the Riemann zeta function. When x = 1, the Lerch function reduces to ζ(ν, u).

Related: Applied complex analysis
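To make the sign-and-argument conventions concrete, here's a sketch (not from the post) checking that SciPy's spence(x) equals the series ∑_{j≥1} (1−x)^j / j², i.e. the negative of polyln_2(x):

from scipy.special import spence

x = 0.3
series = sum((1 - x)**j / j**2 for j in range(1, 200))
print(spence(x), series)  # the two values agree to high precision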
Hypergeometric bootstrapping: implement one, get seven free

Suppose you have software to compute one hypergeometric function. How many more hypergeometric functions can you compute from it? Hypergeometric functions satisfy a lot of identities, so you can bootstrap one such function into many more. That's one reason they're so useful in applications. For this post, I want to focus on just three formulas, the so-called linear transformation formulas that relate one hypergeometric function to another. (There are more linear formulas that relate one hypergeometric function to a combination of two others. I may consider those in a future post, but not today.)

Linear transformations

The classical hypergeometric function has three parameters. The linear transformations relate the function with parameters (a, b, c) to those with parameters

• (c − a, c − b, c)
• (a, c − b, c)
• (b, c − a, c)

Here are the linear transformations in detail:

\begin{eqnarray*} F(a, b; c; z) &=& (1-z)^{c -a-b} F(c-a, c-b; c; z) \\ &=& (1-z)^{-a} F\left(a, c-b; c; \frac{z}{z-1}\right) \\ &=& (1-z)^{-b} F\left(b, c-a; c; \frac{z}{z-1}\right) \end{eqnarray*}

How many more?

The three transformations above are the ones listed in Handbook of Mathematical Functions by Abramowitz and Stegun. How many more transformations can we create by combining them? To answer this, we represent each of the transformations with a matrix. The transformations correspond to multiplying the matrix by the column vector (a, b, c).

A &=& \left( \begin{array}{rrr} -1 & 0 & \phantom{-}1 \\ 0 & -1 & 1 \\ 0 & 0 & 1 \end{array} \right) \\ B &=& \left( \begin{array}{rrr} \phantom{-}1 & 0 & 0 \\ 0 & -1 & \phantom{-}1 \\ 0 & 0 & 1 \end{array} \right) \\ C &=& \left( \begin{array}{rrr} 0 & \phantom{-}1 & 0 \\ -1 & 0 & \phantom{-}1 \\ 0 & 0 & 1 \end{array} \right)

The question of how many transformations we can come up with can be recast as asking what is the order of the group generated by A, B, and C. (I'm jumping ahead a little by presuming these matrices generate a group. Conceivably they don't have inverses, but in fact they do. The inverses of A, B, and C are A, B, and AC respectively.)

A little experimentation reveals that there are eight transformations: I, A, B, C, D = AB, E = AC, F = BC, and G = CB. So in addition to the identity I, the do-nothing transformation, we have found four more: D, E, F, and G. These last four correspond to taking (a, b, c) to

• (c − a, b, c)
• (c − b, a, c)
• (b, a, c)
• (c − b, c − a, c)

This means that if we have software to compute the hypergeometric function with one fixed set of parameters, we can bootstrap that into computing the hypergeometric function with up to seven more sets of parameters. (Some of the seven new combinations could be repetitive if, for example, a = b.)

Identifying the group

We have uncovered a group of order 8, and there are only 5 groups that size. We should be able to find a familiar group isomorphic to our group of transformations. The five groups of order 8 are Z8 (cyclic group), Z4×Z2 (direct product), E8 (elementary Abelian group), D8 (dihedral group), and the quaternion group. The first three are Abelian, and our group is not, so our group must be isomorphic to either the quaternion group or the dihedral group. The quaternion group has only 1 element of order 2, and the dihedral group has 5 elements of order 2. Our group has five elements of order 2 (A, B, D, F, and G) and so it must be isomorphic to the dihedral group of order 8, the rotations and reflections of a square. (Some authors call this D8, because the group has eight elements. Others call it D4, because it is the group based on a 4-sided figure.)

(Figure: the dihedral group of order 8, the symmetries of a square.)

Related: Applied complex analysis
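SciPy exposes the classical hypergeometric function as scipy.special.hyp2f1, which makes the three linear transformations easy to spot-check numerically (a sketch; the parameter values are arbitrary):

from scipy.special import hyp2f1

a, b, c, z = 0.3, 0.7, 1.9, 0.4
print(hyp2f1(a, b, c, z))
print((1 - z)**(c - a - b) * hyp2f1(c - a, c - b, c, z))
print((1 - z)**(-a) * hyp2f1(a, c - b, c, z/(z - 1)))
print((1 - z)**(-b) * hyp2f1(b, c - a, c, z/(z - 1)))
# all four printed values agree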
Special function resources

This week's resource post: some pages of notes on special functions. See also blog posts tagged special functions.

Uses for orthogonal polynomials

When I interviewed Daniel Spielman at this year's Heidelberg Laureate Forum, we began our conversation by looking for common mathematical ground. The first thing that came up was orthogonal polynomials. (If you're wondering what it means for two polynomials to be orthogonal, see here.)

JC: Orthogonal polynomials are kind of a lost art, a topic that was common knowledge among mathematicians maybe 50 or 100 years ago, and now they're obscure.

DS: The first course I taught, I spent a few lectures on orthogonal polynomials because they kept coming up as the solutions to problems in different areas that I cared about. Chebyshev polynomials come up in understanding solving systems of linear equations, such as if you want to understand how the conjugate gradient method behaves. The analysis of error correcting codes and sphere packing has a lot of orthogonal polynomials in it. They came up in a course in multi-linear algebra I had in grad school. And they come up in matching polynomials of graphs, which is something people don't study much anymore. … They're coming back. They come up a lot in random matrix theory. … There are certain things that come up again and again and again, so you got to know what they are.

Multiple zeta

The Riemann zeta function is defined by

\zeta(s) = \sum_n \frac{1}{n^s}

where the sum is over all positive integers n. Euler also introduced a multivariate generalization of the zeta function,

\zeta(s_1, \ldots, s_k) = \sum_{n_1 > n_2 > \cdots > n_k \geq 1} \frac{1}{n_1^{s_1} \cdots n_k^{s_k}}

where the sum runs over decreasing k-tuples of positive integers.

Relating Airy and Bessel functions

The Airy functions are the solutions of the differential equation

y'' - xy = 0

The Airy functions Ai and Bi can be related to Bessel functions as follows. With ζ = (2/3) |x|^{3/2},

Ai(x) = \tfrac{1}{3}\sqrt{x}\,\left[ I_{-1/3}(\zeta) - I_{1/3}(\zeta) \right], \qquad Bi(x) = \sqrt{x/3}\,\left[ I_{-1/3}(\zeta) + I_{1/3}(\zeta) \right]

for x > 0, and

Ai(x) = \tfrac{1}{3}\sqrt{|x|}\,\left[ J_{-1/3}(\zeta) + J_{1/3}(\zeta) \right], \qquad Bi(x) = \sqrt{|x|/3}\,\left[ J_{-1/3}(\zeta) - J_{1/3}(\zeta) \right]

for x < 0. Here J denotes a Bessel function of the first kind and I a modified Bessel function of the first kind.

from scipy.special import airy, jv, iv
from numpy import sqrt, abs, where

def Ai(x):
    ai, aip, bi, bip = airy(x)
    return ai

def Bi(x):
    ai, aip, bi, bip = airy(x)
    return bi

def Ai2(x):
    # Ai via Bessel functions; avoid x = 0, where the
    # formulas only hold as limits
    third = 1.0/3.0
    z = abs(x)
    zeta = 2.0*z**1.5/3.0
    return where(x > 0,
                 third*sqrt(z)*(iv(-third, zeta) - iv(third, zeta)),
                 third*sqrt(z)*(jv(-third, zeta) + jv(third, zeta)))

def Bi2(x):
    # Bi via Bessel functions
    third = 1.0/3.0
    z = abs(x)
    zeta = 2.0*z**1.5/3.0
    return where(x > 0,
                 sqrt(z/3.0)*(iv(-third, zeta) + iv(third, zeta)),
                 sqrt(z/3.0)*(jv(-third, zeta) - jv(third, zeta)))

Fibonacci numbers and orthogonal polynomials

The famous Fibonacci numbers are defined by the initial conditions F_0 = 0, F_1 = 1 and the recurrence relation F_n = F_{n-1} + F_{n-2} for n > 1. The Fibonacci polynomials are defined by analogous initial conditions, F_0(x) = 0 and F_1(x) = 1, but a slightly different recurrence relation, F_n(x) = x F_{n-1}(x) + F_{n-2}(x), for n > 1.

Several families of orthogonal polynomials satisfy a similar recurrence relationship

p_n(x) = a(x)\, p_{n-1}(x) + b_n\, p_{n-2}(x)

with initial values p_0 and p_1, as the following table shows.

Family       | a(x) | b_n    | p_0 | p_1
Fibonacci    | x    | 1      | 0   | 1
Chebyshev T  | 2x   | −1     | 1   | x
Chebyshev U  | 2x   | −1     | 1   | 2x
Hermite      | 2x   | 2 − 2n | 1   | 2x
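Here's a small sketch (mine, assuming SymPy is available) that generates the first few polynomials for the constant-b rows of the table; Hermite, whose b_n = 2 − 2n varies with n, would need the obvious modification:

import sympy as sp

x = sp.symbols('x')

def recurrence(a, b, p0, p1, n):
    # p_k = a(x) p_{k-1} + b p_{k-2}, returning p_0 .. p_n
    ps = [p0, p1]
    for _ in range(n - 1):
        ps.append(sp.expand(a*ps[-1] + b*ps[-2]))
    return ps

print(recurrence(x, 1, sp.Integer(0), sp.Integer(1), 5))   # Fibonacci polynomials
print(recurrence(2*x, -1, sp.Integer(1), x, 5))            # Chebyshev T
print(recurrence(2*x, -1, sp.Integer(1), 2*x, 5))          # Chebyshev U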
Giant components and the Lambert W-function

The Lambert W-function is the function w(z) implicitly defined by w exp(w) = z. When I first saw this, I thought that I'd seen something like this come up many times and that it would be really handy to know that this function has a name and has many software implementations¹. Since then, however, I've hardly ever run into the W-function in application. But here's an application I ran into recently.

A "giant component" in a random network is a component whose size grows in proportion to the size of the network. For a Poisson random network, let c = p(n − 1) be the expected number of vertices connected to any given vertex. If c > 1, then as n goes to infinity, there will be a giant component with probability 1. S, the proportion of nodes inside the giant component, satisfies the equation

S = 1 − exp(−cS).

S plays a role similar to that of w in the definition of the W-function, which suggests the W-function might come in handy. And in fact, this book gives this solution for S:

S = 1 + w(−c exp(−c))/c.

There are a couple of issues. First, it's not obvious that the solution is correct. Second, we need to be more careful about how we define w(z) when z is negative.

Given some value of c, let S = 1 + w(−c exp(−c))/c. Then w(−c exp(−c)) = −c(1 − S). Apply the function f(x) = x exp(x) to both sides of the equation above. By the definition of the W-function, we have

−c exp(−c) = −c(1 − S) exp(−c(1 − S)).

From there it easily follows that S = 1 − exp(−cS).

Now let's look more carefully at the definition of the W-function. For positive real z, w exp(w) = z has a unique real solution defining w. But in the calculations above, we have evaluated w at −c exp(−c), which is negative since c > 1. For negative arguments, we have to pick a branch of w. Which branch guarantees that S = 1 − exp(−cS)? Any branch². No matter which solution to w exp(w) = z we pick, the resulting S satisfies the giant component equation. However, the wrong branch might result in a meaningless solution. Since S is a proportion, S should be between 0 and 1. Let's go back and say that, for our purposes, we will take w(z) to mean the real number no less than −1 satisfying w exp(w) = z. Then for c > 1, the solution S = 1 + w(−c exp(−c))/c is between 0 and 1. You can show that S = 1 − exp(−cS) has a unique solution for 0 < S < 1, so any other branch would not give a solution in this interval.

¹ For example, in Python the Lambert W-function is scipy.special.lambertw. The function takes two arguments: z and an integer denoting the branch. By default, the branch argument is set to 0, giving the branch discussed in this post.

² For −1/e < z < 0, there are two real branches of w(z), but there are also infinitely many complex branches.
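Putting the formula and the footnote together, here's a sketch of computing S in Python; branch 0 of lambertw is exactly the branch chosen above.

import numpy as np
from scipy.special import lambertw

c = 2.5
S = 1 + lambertw(-c*np.exp(-c), 0).real / c
print(S)                 # proportion of nodes in the giant component
print(1 - np.exp(-c*S))  # check: equals S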
Addition formulas for Bessel functions

The first addition theorem says that

J_n(x + y) = \sum_m J_m(x) J_{n-m}(y)

where the sum is over all integer values of m. A second, more general addition theorem (a form of Graf's addition theorem) expresses Bessel functions of a combined argument z as

J_0(z) = \sum_m J_m(x) J_m(y)\, w^m, \qquad J_n(z) = v^{-n} \sum_m J_{n+m}(x) J_m(y)\, w^m

where the quantities z, v, and w encode the geometric relation between the arguments x and y; see standard references for their definitions.

Math tools for the next 20 years

Igor Carron commented on his blog that …

Related post: Doing good work with bad tools

Castles and quantum mechanics

How are castles and quantum mechanics related? One connection is rook polynomials. The rook is the chess piece that looks like a castle, and used to be called a castle. It can move vertically or horizontally, any number of spaces. A rook polynomial is a polynomial whose coefficients give the number of ways rooks can be arranged on a chess board without attacking each other. The coefficient of x^k in the polynomial R_{m,n}(x) is the number of ways you can arrange k rooks on an m by n chessboard such that no two rooks are in the same row or column.

The rook polynomials are related to the Laguerre polynomials by

R_{m,n}(x) = n!\, x^n\, L_n^{m-n}(-1/x)

where L_n^k(x) is an "associated Laguerre polynomial." These polynomials satisfy Laguerre's differential equation

x y'' + (k + 1 - x) y' + n y = 0,

an equation that comes up in numerous contexts in physics. In quantum mechanics, these polynomials arise in the solution of the Schrödinger equation for the hydrogen atom.

Related: Relations between special functions
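As a quick check of the rook-Laguerre identity (a sketch, assuming SymPy is available): on a 2 × 2 board there is 1 way to place no rooks, 4 ways to place one, and 2 ways to place two, so R_{2,2}(x) = 1 + 4x + 2x².

import sympy as sp

x = sp.symbols('x')
n, m = 2, 2
L = sp.assoc_laguerre(n, m - n, -1/x)          # associated Laguerre polynomial
print(sp.expand(sp.factorial(n) * x**n * L))   # -> 2*x**2 + 4*x + 1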
ISRN Optics, Volume 2013 (2013), Article ID 783865, 51 pages

Review Article

Universal Dynamical Control of Open Quantum Systems

Weizmann Institute of Science, 76100 Rehovot, Israel

Received 25 March 2013; Accepted 24 April 2013

Academic Editors: M. D. Hoogerland, D. Kouznetsov, A. Miroshnichenko, and S. R. Restaino

Due to increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness. Their consequence is always a deviation (error) of the quantum-state evolution from the unitary evolution expected in their absence. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by strategies known as dynamical decoupling, or the more general dynamical control by modulation developed by us. The underlying dynamics must be Zeno-like, yielding suppressed coupling to the bath. There are, however, tasks which cannot be implemented by unitary evolution, in particular those involving a change of the system's state entropy. Such tasks necessitate efficient coupling to a bath for their implementation. Examples include the use of measurements to cool (purify) a system, to equilibrate it, or to harvest and convert energy from the environment. If the underlying dynamics is anti-Zeno-like, enhancement of this coupling to the bath will occur and thereby facilitate the task, as discovered by us. A general task may also require state and energy transfer, or entanglement of noninteracting parties via shared modes of the bath, which calls for maximizing the shared (two-partite) couplings with the bath but suppressing the single-partite couplings. For such tasks, a more subtle interplay of Zeno and anti-Zeno dynamics may be optimal. We have therefore constructed a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given "score" that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy, by dynamical modification of the system-bath coupling spectrum on demand.

1. Introduction

Due to the ongoing trends of device miniaturization, increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects, which hamper quantumness [1–70]. These may have different physical origins, such as coupling of the system to an external environment (bath), noise in the classical fields controlling the system, or population leakage out of a relevant system subspace. Their consequence is always a deviation (error) of the quantum-state evolution from the unitary evolution expected in their absence. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these effects are detrimental and must be suppressed by dynamical control. The underlying dynamics must be Zeno-like, yielding suppressed coupling to the bath.
Environmental effects generally hamper or completely destroy the "quantumness" of any complex device. Particularly fragile against environment effects is quantum entanglement (QE) in multipartite systems. This fragility may disable quantum information processing and other forthcoming quantum technologies: interferometry, metrology, and lithography. Commonly, the fragility of QE rapidly mounts with the number of entangled particles and the temperature of the environment (thermal "bath"). This QE fragility has been the standard resolution of the Schrödinger-cat paradox: the environment has been assumed to preclude macrosystem entanglement. In-depth study of the mechanisms of decoherence and their prevention is therefore an essential prerequisite for applications involving quantum information processing or communications [3]. The present paper is aimed at furthering our understanding of these formidable issues. It is based on progress by our group, as well as others, towards a unified approach to the dynamical control of decoherence and disentanglement. This unified approach culminates in universal formulae allowing design of the required control fields.

Most theoretical and experimental methods aimed at assessing and controlling (suppressing) decoherence of qubits (two-level systems that are the quantum mechanical counterparts of classical bits) have focused on one of two particular situations: (a) single qubits decohering independently, or (b) many qubits collectively perturbed by the same environment. Thus, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71, 72]. Entangled qubits that reside at the same site or at equivalent sites of the system, for example, atoms in optical lattices, have likewise been assumed to undergo identical decoherence. By contrast, more general problems of decay of nonlocal mutual entanglement of two or more small systems are less well understood. This decoherence process may occur on a time scale much shorter than the time for either body to undergo local decoherence, but much longer than the time each takes to become disentangled from its environment. The disentanglement of individual particles from their environment is dynamically controlled by interactions on non-Markovian time-scales, as discussed below. Their disentanglement from each other, however, may be purely Markovian [73–75], in which case the present non-Markovian approach to dynamical control/prevention is insufficient.

1.1. Dynamical Control of Single-Particle Decay and Decoherence on Non-Markovian Time Scales

Quantum-state decay to a continuum or changes in its population via coupling to a thermal bath is known as amplitude noise (AN). It characterizes decoherence processes in many quantum systems, for example, spontaneous emission of photons by excited atoms [76], vibrational and collisional relaxation of trapped ions [1], and the relaxation of current-biased Josephson junctions [77]. Another source of decoherence in the same systems is proper dephasing or phase noise (PN) [78], which does not affect the populations of quantum states but randomizes their energies or phases. For independently decohering qubits, a powerful approach for the suppression of decoherence appears to be the "dynamical decoupling" (DD) of the system from the bath [79–92].
The standard "bang-bang" DD, that is, π-phase flips of the coupling via strong and sufficiently frequent resonant pulses driving the qubit [82–84], has been proposed for the suppression of proper dephasing [93]. This approach is based on the assumption that during these strong and short pulses there is no free evolution; that is, the coupling to the bath is intermittent with control fields. These π-pulses hence serve as a complete phase reversal, meaning that the evolution after the pulse negates the deleterious effects of dephasing prior to the pulse, similar to the spin-echo technique [94]. However, some residual decoherence remains and increases with the interpulse time interval, and thus, in order to combat decoherence effectively, the pulses should be very frequent. While standard DD has been developed for combating first-order dephasing, several extensions have been suggested to further optimize DD under proper dephasing, such as multipulse control [89], continuous DD [88], concatenated DD [90], and optimal DD [95, 96]. DD has also been adapted to suppress other types of decoherence couplings, such as internal state coupling [91] and heating [84].

Our group has proposed a universal strategy of approximate DD [97–103] for both decay and proper dephasing, by either pulsed or continuous wave (CW) modulation of the system-bath coupling. This strategy allows us to optimally tailor the strength and rate of the modulating pulses to the spectrum of the bath (or continuum) by means of a simple universal formula. In many cases, the standard π-phase "bang-bang" (BB) is then found to be inadequate or nonoptimal compared to dynamic control based on the optimization of the universal formula [104].

Our group has set out to substantially expand the arsenal of decay and decoherence control. We have presented a universal form of the decay rate of unstable states into any reservoir (continuum), dynamically modified by perturbations with arbitrary time dependence, focusing on non-Markovian time-scales [97, 99, 100, 102, 105]. An analogous form has been obtained by us for the dynamically modified rate of proper dephasing [100, 101, 105]. Our unified, optimized approach reduces to the BB method in the particular case of proper dephasing or decay via coupling to spectrally symmetric (e.g., Lorentzian or Gaussian) noise baths with limited spectral width (see below). The type of phase modulation advocated for the suppression of coupling to phonon or photon baths with frequency cutoff [103] is, however, drastically different from the BB method. Other situations to which our approach applies, but not the BB method, include amplitude modulation of the coupling to the continuum, as in the case of decay from quasibound states of a periodically tilted washboard potential [99]: such modulation has been experimentally shown [106] to give rise to either slowdown of the decay (Zeno-like behavior) or its speedup (anti-Zeno-like behavior), depending on the modulation rate. The theory has been generalized by us to finite temperatures and to qubits driven by an arbitrary time-dependent field, which may cause the failure of the rotating-wave approximation [100]. It has also been extended to the analysis of multilevel systems, where quantum interference between the levels may either inhibit or accelerate the decay [107].
Our general approach [99] to dynamical control of states coupled to an arbitrary "bath" or continuum has reaffirmed the intuitive anticipation that, in order to suppress their decay, we must modulate the system-bath coupling at a rate exceeding the spectral interval over which the coupling is significant. Yet our analysis can serve as a general recipe for optimized design of the modulation aimed at an effective use of the fields for decay and decoherence suppression or enhancement.

1.2. Control of Symmetry-Breaking Multipartite Decoherence

Control of multiqubit or, more generally, multipartite decoherence is of even greater interest, because it can help protect the entanglement of such systems, which is the cornerstone of many quantum information processing applications. However, it is very susceptible to decoherence, decays faster than single-qubit coherence, and can even completely disappear in finite time, an effect dubbed entanglement sudden death (ESD) [73, 74, 108–113]. Entanglement is effectively protected in the collective decoherence situation, by singling out decoherence-free subspaces (DFS) [114], wherein symmetrically degenerate many-qubit states, also known as "dark" or "trapping" states [78], are decoupled from the bath [87, 115–117]. Symmetry is a powerful means of protecting entangled quantum states against decoherence, since it allows the existence of a decoherence-free subspace or a decoherence-free subsystem [77, 78, 80–87, 102, 114–120]. In multipartite systems, this requires that all particles be perturbed by the same environment. In keeping with this requirement, quantum communication protocols based on entangled two-photon states have been studied under collective depolarization conditions, namely, identical random fluctuations of the polarization for both photons [71].

Entangled states of two or more particles, wherein each particle travels along a different channel or is stored at a different site in the system, may present more challenging problems insofar as combating and controlling decoherence effects are concerned: if their channels or sites are differently coupled to the environment, their entanglement is expected to be more fragile and harder to protect. To address these fundamental challenges, we have developed a very general treatment. Our treatment does not assume the perturbations to be stroboscopic, that is, strong or fast enough, but rather to act concurrently with the particle-bath interactions. This treatment extends our earlier single-qubit universal strategy [97, 99, 100, 104, 121, 122] to multiple entangled systems (particles) which are either coupled to partly correlated (or uncorrelated) finite-temperature baths or undergo locally varying random dephasing [107, 123–126]. Furthermore, it applies to any difference between the couplings of individual particles to the environment. This difference may range from the large-difference limit of completely independent couplings, which can be treated by the single-particle dynamical control of decoherence via modulation of the system-bath coupling, to the opposite zero-difference limit of completely identical couplings, allowing for multiparticle collective behavior and decoherence-free variables [86, 87, 115–117, 127–130].
The general treatment presented here is valid anywhere between these two limits and allows us to pose and answer the key question: under what conditions, if any, is local control by modulation, addressing each particle individually, preferable to global control, which does not discriminate between the particles? We show that in the realistic scenario, where the particles are differently coupled to the bath, it is advantageous to locally control each particle by individual modulation, even if such modulation is suboptimal for suppressing the decoherence of a single particle. This local modulation allows synchronizing the phase-relation between the different modulations and eliminates the cross coupling between the different systems. As a result, it allows us to preserve the multipartite entanglement and reduces the multipartite decoherence problem to the single-particle decoherence problem. We show the advantages of local modulation, over global modulation (i.e., identical modulation for all systems and levels), as regards the preservation of arbitrary initial states, preservation of entanglement, and the intriguing possibility of entanglement increase compared to its initial value.

The experimental realization of a universal quantum computer is widely recognized to be difficult due to decoherence effects, particularly dephasing [1, 131–133], whose deleterious effects on entanglement of qubits via two-qubit gates [134–136] are crucial. To help overcome this problem, we put forth a universal dynamical control approach to the dephasing problem during all the stages of quantum computations [125, 137], namely, (i) storage, wherein the quantum information is preserved in between gate operations, (ii) single-qubit gates, wherein individual qubits are manipulated, without changing their mutual entanglement, and (iii) two-qubit gates, which introduce controlled entanglement. We show that in terms of reducing the effects of dephasing, it is advantageous to concurrently and specifically control all the qubits of the system, whether they undergo quantum gate operations or not. Our approach consists in specifically tailoring each dynamical quantum gate, with the aim of suppressing the dephasing, thereby greatly increasing the gate fidelity. In the course of two-qubit entangling gates, we show that cross dephasing can be completely eliminated by introducing additional control fields. Most significantly, we show that one can increase the gate duration, while simultaneously reducing the effects of dephasing, resulting in a total increase in gate fidelity. This is at odds with the conventional approaches, whereby one tries to either reduce the gate duration or increase the coherence time.

A general task may also require state and energy transfer [138], or entanglement [139] of noninteracting parties via shared modes of the bath [123, 140], which calls for maximizing the shared (two-partite) couplings with the bath but suppressing the single-partite couplings. It is therefore desirable to have a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given "score" that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy, by dynamical modification of the system-bath coupling spectrum on demand. The goal of this work is to develop such a framework.
1.3. Dynamical Protection from Spontaneous Emission

Schemes of quantum information processing that are based on optically manipulated atoms face the challenge of protecting the quantum states of the system from decoherence, or fidelity loss, due to atomic spontaneous emission (SE) [1, 141, 142]. SE becomes the dominant source of decoherence at low temperatures, as nonradiative (phonon) relaxation becomes weak [4, 5]. SE suppression cannot be achieved by frequent modulations or perturbations of the decaying state, because of the extremely broad spectrum of the radiative continuum ("bath") [76, 97]. A promising means of protection from SE is to embed the atoms in photonic crystals (three-dimensionally periodic dielectrics) that possess spectrally wide, omnidirectional photonic bandgaps (PBGs) [6]: atomic SE would then be blocked at frequencies within the PBG [6–8]. Thus far, studies of coherent optical processes in a PBG have assumed fixed values of the atomic transition frequency [9]. However, in order to operate quantum logic gates, based on pairwise entanglement of atoms by field-induced dipole-dipole interactions [10, 143, 144], one should be able to switch the interaction on and off, most conveniently by AC Stark-shifts of the transition frequency of one atom relative to the other, thereby changing its detuning from the PBG edge. The question then arises: should such frequency shifts be performed adiabatically, in order to minimize the decoherence and maximize the quantum-gate fidelity? The answer is expected to be affirmative, based on the existing treatments of adiabatic entanglement and protection from decoherence [11, 12, 129] and on the tendency of nonadiabatic evolution to spoil fidelity and promote transitions to the continuum [13]. Surprisingly, our analysis (Section 6) demonstrates that only an appropriately phased sequence of "sudden" (strongly nonadiabatic) changes of the detuning from the PBG edge may yield higher fidelity of qubit and quantum gate operations than their adiabatic counterparts. This unconventional nonadiabatic protection from decoherence is valid for qubits that are strongly coupled to the continuum edge [14, 145], as opposed to the weak-coupling approach in Sections 2–5.

1.4. Outline

In this paper we develop, step by step, the framework for universal dynamical control by modulating fields of multilevel systems or qubits, aimed at suppressing or preventing their noise, decoherence, or relaxation in the presence of a thermal bath. Its crux is the general master equation (ME) of a multilevel, multipartite system, weakly coupled to an arbitrary bath and subject to arbitrary temporal driving or modulation. The present ME, derived by the technique of [146, 147], is more general than the ones obtained previously in that it does not invoke the rotating wave approximation and therefore applies at arbitrarily short times or for arbitrarily fast modulations. Remarkably, when our general ME is applied to either AN or PN, the resulting dynamically controlled relaxation or decoherence rates obey analogous formulae provided that the corresponding density-matrix (generalized Bloch) equations are written in the appropriate basis. This underscores the universality of our treatment. It allows us to present a PN treatment that does not describe noise phenomenologically, but rather dynamically, starting from the ubiquitous spin-boson Hamiltonian.
In Sections 2 and 3, we present a universal formula for the control of single-qubit zero-temperature relaxation and discuss several limits of this formula. In Sections 4 and 5, we extend this formula to multipartite or multilevel systems. In Section 6, dynamical control in the strong-coupling regime is considered. In Section 7, the treatment is extended to the control of finite-temperature relaxation and decoherence and culminates in single-particle Bloch equations with dynamically modified decoherence rates that essentially obey the universal formula of Section 3. We then discuss in Section 7.4 the possible modulation arsenal for either AN or PN control. In Section 8, we discuss the extensions of the universal control formula to entangled multipartite systems. The formalism is applicable in a natural and straightforward manner to such systems [123]. It allows us to focus on the ability of symmetries to overcome multipartite decoherence [87, 114–117]. There we also discuss the implementation of the universal formula in multipartite quantum computation. Section 9 discusses some general aspects of multipartite dynamical control. We develop a general optimization strategy for performing a chosen unitary or nonunitary task on an open quantum system. The goal is to design a controlled time-dependent system Hamiltonian by variationally minimizing or maximizing a chosen function of the system state, which quantifies the task success (score), such as fidelity, purity, or entanglement. If the time dependence of the system Hamiltonian is fast enough to be comparable to or shorter than the response time of the bath, then the resulting non-Markovian dynamics is shown to optimize the chosen task score to second order in the coupling to the bath. This strategy can not only protect a desired unitary system evolution from bath-induced decoherence but also take advantage of the system-bath coupling so as to realize a desired nonunitary effect on the system. Section 10 summarizes our conclusions, whereby this universal control can effectively protect complex systems from a variety of decoherence sources.

2. Modulation-Affected Control of Decay into Continua and Zero-Temperature Baths: Weak-Coupling Theory

2.1. Framework

Consider the decay of a state via its coupling to a bath, described by the orthonormal basis , which forms either a discrete or a continuous spectrum (or a mixture thereof). The total Hamiltonian is

Here is the dynamically modulated Hamiltonian of the system, with being the energy of . The time-dependent frequency can be attributed to the controllable dynamically imposed Stark shift, or to proper dephasing (uncontrolled, random fluctuation). The term is the time-dependent Hamiltonian of the bath, with being the energies of . The time-dependent frequencies , like , may arise from proper dephasing or dynamical Stark shifts. Finally denotes the off-diagonal coupling of with the continuum/bath, with being the dynamical modulation function and the system-bath coupling matrix elements. We write the wave function of the system as

with the initial condition being

A one-level system which can exchange its population with the bath states represents the case of autoionization or photoionization. However, the above Hamiltonian also describes a qubit, which can undergo transitions between the excited and ground states and , respectively, due to its off-diagonal coupling to the bath. The bath may consist of quantum oscillators (modes) or two-level systems (spins) with different eigenfrequencies.
Typical examples are spontaneous emission into photon or phonon continua. In the rotating-wave approximation (RWA), which is alleviated in Section 7, the present formalism applies to a relaxing qubit, under the substitutions

3. Single-Qubit Zero-Temperature Relaxation

To gain insight into the requirements of decoherence control, consider first the simplest case of a qubit with states and energy separation relaxing into a zero-temperature bath via off-diagonal () coupling, Figure 1(a). The Hamiltonian is given by

the sum extending over all bath modes, where are the annihilation and creation operators of mode , respectively, with and denoting the bath vacuum and the single excitation of mode , respectively, with being the corresponding transition matrix element, and H.c. denoting the Hermitian conjugate. We have also taken the rotating wave approximation (RWA). The general time-dependent state can be written as

The Schrödinger equation results in the following coupled equations [83]:

One can go to the rotating frame, define , , and get

where is the bath response/correlation function, expressible in terms of a sum over all transition matrix elements squared oscillating at the respective mode frequencies .

Figure 1: (a) Schematic drawing of a two-level system with off-diagonal coupling to a continuum or a bath. (b) Schematic drawing of a bath comprised of many harmonic oscillators with different frequencies, whose temporal dephasing after correlation time renders the system-bath interaction practically irreversible.

It is the spread of oscillation frequencies that causes the environment response to decohere after a (typically short) correlation time (Figure 1(b)). Hence, the Markovian assumption that the correlation function decays to instantaneously, , is widely used: it is in particular the basis for the venerated Lindblad master equation describing decoherence [148]. It leads to exponential decay of at the Golden Rule (GR) rate [76, 78] as

We, however, are interested in the extremely non-Markovian time scales, much shorter than , on which all bath-mode excitations oscillate in unison and the system-bath exchange is fully reversible. How does one probe, or, better still, maintain the system in a state corresponding to such time scales? To this end, we assume modulations of and , that result in the time-dependent modulation function , which has two components, namely, an amplitude modulation and a phase modulation . The modulation function is related to in (3) via

with

This modulation may pertain to any intervention in the system-bath dynamics: (i) measurements that effectively interrupt and completely dephase the evolution, describable by stochastic [149]; (ii) coherent perturbations that describe phase modulations of the system-bath interactions [99, 124]. For any , the exact equation (12) is then rewritten as

We now resort to the crucial approximation that varies slower than either or . This approximation is justifiable in the weak-coupling regime (to second order in ), as discussed below. Under this approximation, (18) is transformed into a differential equation describing relaxation at a time-dependent rate as

where is the instantaneous time-dependent relaxation rate and is the Lamb shift due to the coupling to the bath. One can separate the spectral representation of into the real and imaginary parts

which satisfy the Kramers-Kronig relations

with denoting the principal value.
Henceforth, we shall concentrate on the relaxation rate, as it determines the excited-state population,

where is the average relaxation rate. It is advantageous to consider the frequency domain, as it gives more insight into the mechanisms of decoherence. For this purpose, we define the finite-time Fourier transform of the modulation function as

The average time-dependent relaxation rate can be rewritten, by using the Fourier transforms of and , in the following form:

where is the spectral-response function of the bath, and is the finite-time spectral intensity of the (random or coherent) intervention/modulation function,

where the factor comes about from the definition of the decoherence rate averaged over the interval. The relaxation rate described by (25)–(27) embodies our universal recipe for dynamically controlled relaxation [99, 124], which has the following merits: (a) it holds for any bath and any type of interventions, that is, coherent modulations and incoherent interruptions/measurements alike; (b) it shows that in order to suppress relaxation we need to minimize the spectral overlap of , given to us by nature, and , which we may design to some extent; (c) most importantly, it shows that in the short-time domain, only broad (coarse-grained) spectral features of and are important. The latter implies that, in contrast to the claim that correlations of the system with each individual bath mode must be accounted for if we are to preserve coherence in the system, we actually only need to characterize and suppress (by means of ) the broad spectral features of , the bath response function. The universality of (25)–(27) will be elucidated in what follows, by focusing on several limits.

3.1. The Limit of Slow Modulation Rate

If corresponds to sufficiently slow rates of interruption/modulation , the spectrum of is much narrower than the interval of change of around , the resonance frequency of the system. Then can be replaced by , so that the spectral width of plays no role in determining , and we may as well replace by a spectrally finite, flat (white-noise) reservoir; that is, we may take the Markovian limit. The result is that (25) coincides with the Golden Rule (GR) rate, (14) (Figure 2(a)):

Namely, slow interventions do not affect the onset and rate of exponential decay.

Figure 2: Frequency-domain representation of the dynamically controlled decoherence rate in various limits (Section 7). (a) Golden Rule limit. (b) Quantum Zeno effect (QZE) limit. (c) Anti-Zeno effect (AZE) limit. Here, and are the modulation and bath spectra, respectively, and are the interval of change and width of , respectively, and is the interruption rate.

3.2. The Limit of Frequent Modulation

Frequent interruptions, intermittent with free evolution, are represented by a repetition of the free-evolution modulation spectrum,

the repetition being set by the time interval between consecutive interruptions. If describes extremely frequent interruptions or measurements, is much broader than . We may then pull out of the integral, whereupon (25) yields

This limit is that of the quantum Zeno effect (QZE), namely, the suppression of relaxation as the interval between interruptions decreases [150–152]. In this limit, the system-bath exchange is reversible and the system coherence is fully maintained (Figure 2(b)). Namely, the essence of the QZE is that sufficiently rapid interventions prevent the excitation escape to the continuum, by reversing the exchange with the bath.
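The frequency-domain recipe lends itself to a quick numerical illustration. The following sketch is not from the paper: it assumes a Lorentzian bath spectrum G of unit width and the unit-area sinc²-shaped spectral filter F produced by projective interruptions at intervals tau, and evaluates the overlap formula of (25) directly.

import numpy as np

def rate(tau, omega_a=0.0, omega_0=0.0, gamma=1.0):
    # decay rate as 2*pi * integral of G(omega) * F(omega):
    # G is a Lorentzian bath spectrum centered on omega_0;
    # F is the sinc^2 filter of interruptions at intervals tau,
    # centered on the system frequency omega_a
    w = np.linspace(-400.0, 400.0, 400001)
    G = (gamma/np.pi) / ((w - omega_0)**2 + gamma**2)
    F = (tau/(2*np.pi)) * np.sinc((w - omega_a)*tau/(2*np.pi))**2
    return 2*np.pi*np.trapz(G*F, w)

# On resonance the Golden Rule rate is 2*pi*G(omega_a) = 2; more
# frequent interruptions (smaller tau) only lower the rate: the QZE.
for tau in (30.0, 3.0, 0.3):
    print(tau, rate(tau))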
3.3. Intermediate Modulation Rate

In the intermediate time-scale of interventions, where the width of is broader than the width of (so that the Golden Rule is violated) but narrower than the width of (so that the QZE does not hold), the overlap of and grows as the rate of interruptions, or modulations, increases. This brings about the increase of relaxation rates with the rate of interruptions, marking the anti-Zeno effect (AZE) [85, 102, 153] (Figure 2(c)). On such time-scales, more frequent interventions (in particular, interrupting measurements) enhance the departure of the evolution from reversibility. Namely, the essence of the AZE is that if you do not intervene in time to prevent the excitation escape to the continuum, then any intervention only drives the system further from its initial state. We note that the AZE can only come about when the peaks of and do not overlap, that is, the resonant coupling is shifted from the maximum of . If, by contrast, the peaks of and do coincide, any rate of interruptions would result in the QZE (Figure 2(b)). This can be understood by viewing as an averaging kernel of around . If is the maximum of the spectrum, any averaging can only be lower than this maximum, which is the Golden Rule decay rate. Hence, any rate of interruptions can only decrease the decay rate with respect to the Golden Rule rate, that is, cause the QZE.
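The anti-Zeno regime can be seen with the same sketch by detuning the qubit frequency from the bath peak (again my illustration, not the paper's code):

# Reuse rate() from the previous sketch with the bath peak at 0 and the
# qubit at omega_a = 5: the Golden Rule value is 2*pi*G(5), about 0.077.
# Moderate interruption rates push the filter onto the bath peak and
# raise the rate above that value (AZE); only far more frequent
# interruptions finally suppress it (QZE).
for tau in (30.0, 3.0, 0.3, 0.03):
    print(tau, rate(tau, omega_a=5.0))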
3.4. Quasiperiodic Amplitude and Phase Modulation (APM)

The modulation function can be either random or regular (coherent) in time, as detailed below. Consider first the most general coherent amplitude and phase modulation (APM) of the quasiperiodic form,

Here () are arbitrary discrete frequencies with the minimum spectral distance . If is periodic with the period , then and become the Fourier components of . For a general quasiperiodic , one obtains

Here equals the average of over a period of the order of , , and , whereas is a bell-like function of normalized to 1. For a sufficiently long time, the function becomes narrower than the respective characteristic width of around , and one can set

Thus, when

where is the effective correlation (memory) time of the reservoir, (25) is reduced to

For the validity of (37), it is also necessary that

This condition is well satisfied in the regime of interest, that is, weak coupling to essentially any reservoir, unless (for some harmonic ) is extremely close to a sharp feature in , for example, a band edge [145], a case covered by Section 6. Otherwise, the long-time limit of the general decay rate (25) under the APM is a sum of the GR rates, corresponding to the resonant frequencies shifted by , with the weights . Formula (37) provides a simple general recipe for manipulating the decay rate by APM. Its powerful generality allows for the optimized control of decay, not only for a single level but also for a band characterized by a spectral distribution (e.g., inhomogeneous or vibrational spectrum). We can then choose and in (37) so as to minimize the decay convoluted with . In what follows, various limits of (37) will be analyzed.

3.5. Coherent Phase Modulation (PM)

3.5.1. Monochromatic Perturbation

Let

Then

where is a frequency shift, induced by the ac Stark effect (in the case, e.g., of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress relative to . It provides the maximal variation of achievable by an external perturbation, since it does not involve any averaging (smoothing) of incurred by the width of : the modified can even vanish, if the shifted frequency is beyond the cut-off frequency of the coupling, where . Conversely, the increase of due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, ac Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used.

3.5.2. Impulsive Phase Modulation

Let the phase of the modulation function periodically jump by an amount at times . Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed frequency shifts . Now

where is the integer part. One then obtains

The decay, according to (22), then has the form (at )

where is defined by (25). For sufficiently long times,

For small phase shifts, , the peak dominates, whereas

In this case, one can retain only the term in (37) (unless is changing very fast). Then the modulation acts as a constant shift

With the increase of , the difference between the and peak heights diminishes, vanishing for . Then

that is, for contains two identical peaks symmetrically shifted in opposite directions (the other peaks decrease with as , totaling 0.19). The above features allow one to adjust the modulation parameters for a given scenario to obtain an optimal decrease or increase of . The phase-modulation (PM) scheme with a small is preferable near the continuum edge, since it yields a spectral shift in the required direction (positive or negative). The adverse effect of peaks in then scales as and hence can be significantly reduced by decreasing . On the other hand, if is near a symmetric peak of , is reduced more effectively for , as in [80, 81], since the main peaks of at and then shift more strongly with than the peak at does for .

3.6. Amplitude Modulation (AM)

Amplitude modulation (AM) of the coupling arises, for example, for radiative-decay modulation due to atomic motion through a high-Q cavity or a photonic crystal [154, 155], or for atomic tunneling in optical lattices with time-varying lattice acceleration [106, 156]. Let the coupling be turned on and off periodically, for the time and , respectively, that is, (). Now [157]

so that (see (43))

where is given by (25) and (50). This case is also covered by (37) and (38), where the parameters are now found to be

with

It is instructive to consider the limit wherein and is much greater than the correlation time of the continuum; that is, does not change significantly over the spectral intervals . In this case, one can approximate the sum (37) by the integral (25) with characterized by the spectral broadening ~1. Then (25) for reduces to that obtained when ideal projective measurements are performed at intervals [97]. Thus the AM scheme can imitate measurement-induced (dephasing) effects on quantum dynamics, if the interruption intervals exceed the correlation time of the continuum. The decay probability , calculated for parameters similar to [106], completely coincides with that obtained for ideal impulsive measurements at intervals [97, 98, 101] and demonstrates either quantum Zeno effect (QZE) or anti-Zeno effect (AZE) behavior, depending on the rate of modulation. Since the Hamiltonian for atoms in accelerated optical lattices is similar to the Leggett Hamiltonian for current-biased Josephson junctions [77], the present theory has been extended to describe effects of current modulations on the rate of macroscopic quantum tunneling in Josephson junctions in [100].
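The spectral shifts produced by impulsive phase modulation (Section 3.5.2) are also easy to visualize numerically. This sketch (my construction, not the paper's) samples the modulation function ε(t) = exp(−iφ⌊t/τ⌋) on a grid, Fourier transforms it, and reports the location of the dominant spectral peak: approximately φ/τ for a small phase jump φ, and ±π/τ for φ = π.

import numpy as np

def filter_peak(phi, tau=1.0, T=200.0, dt=0.01):
    # spectral intensity of the phase-jump modulation function
    t = np.arange(0, T, dt)
    eps = np.exp(-1j*phi*np.floor(t/tau))
    F = np.abs(np.fft.fftshift(np.fft.fft(eps)))**2
    w = 2*np.pi*np.fft.fftshift(np.fft.fftfreq(len(t), dt))
    return abs(w[np.argmax(F)])

print(filter_peak(0.3))    # ~ 0.3 = phi/tau: a single shifted peak
print(filter_peak(np.pi))  # ~ pi: two symmetric peaks at +/- pi/tau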
Projective measurements at an effective rate , whether impulsive or continuous, usually result in a broadened (to a width ) modulation function , without a shift of its center of gravity [97, 98, 101, 158, 159],

This feature was shown in [97] to be responsible for either the standard quantum Zeno effect, whereby scales as , or the anti-Zeno effect, whereby grows with . In contrast, a weak and broadband chaotic field, such that

where is the mean intensity, is the bandwidth, and is the effective polarizability (electric or magnetic, depending on the system), would give rise to a Lorentzian dephasing function

with a substantial shift

This shift would have a much stronger effect on than the QZE or AZE, which are associated with the rate , since

4. Multipartite Decay Control

4.1. Multipartite PN Control by Resonant Modulation

One can describe phase noise, or proper dephasing, by a stochastic fluctuation of the excited-state energy, , where is a stochastic variable with zero mean, and is the second moment. For multipartite systems, where each qubit can undergo different proper dephasing, , one has an additional second moment for the cross dephasing, . A general treatment of multipartite systems undergoing this type of proper dephasing is given in [107]. Here we give the main results for the case of two qubits.

Let us take two TLS, or qubits, which are initially prepared in a Bell state. We wish to obtain the conditions that will preserve it. In order to do that, we change to the Bell basis, which is given by

For an initial Bell-state , where , one can then obtain the fidelity, , as

where

where is the amplitude of the resonant field applied on qubit , , and the corresponds to and to . Expressions (61)–(67) provide our recipe for minimizing the Bell-state fidelity losses. They hold for any dephasing time-correlations and arbitrary modulation. One can choose between two modulation schemes, depending on our goals. When one wishes to preserve an initial quantum state, one can equate the modified dephasing and cross-dephasing rates of all qubits, . This results in complete preservation of the singlet only, that is, , for all , but reduces the fidelity of the triplet state. On the other hand, if one wishes to equate the fidelity for all initial states, one can eliminate the cross-dephasing terms by applying different modulations to each qubit (Figure 3), causing for all . This requirement can be important for quantum communication schemes.

Figure 3: Cross decoherence as a function of local modulation. Here two qubits are modulated by continuous resonant fields, with amplitudes . The cross decoherence decays as the two qubits' modulations become increasingly different. The bath parameters are , where is the correlation time, and .

5. Dynamical Control of Zero-Temperature Decay in Multilevel Systems

5.1. General Formalism

Here we discuss in detail a model for dynamical decay modifications in a multilevel system. The system with energies , , is coupled to a zero-temperature bath of harmonic oscillators with frequencies .
Using the factorized coupling defined in Section 2.1, the corresponding Hamiltonian is found to be as in (1), where

where now each level has a different modulation and a different coupling to the bath, and denotes a gate operation. The system evolution is divided into two phases, one of storage without gate operations and a gate operation of finite duration

The full wave function is given by

Similarly to what was said in Section 2.1, one can consider two types of situations. The above equations (68)–(72) were written for an -level system which can exchange its population with the reservoir. In addition, one can consider an -level system, where transitions are possible between any level and a lower level , the reservoir consisting of quantum systems, as described in Section 2.1. The theory in Section 5 holds for both situations, with the minor difference that one should substitute as in (70) and (72) and perform a similar substitution in (76) below.

In order to find the solution, one has to diagonalize the system Hamiltonian by introducing a matrix that rotates the amplitudes as

such that, by defining , one gets

where are the eigenvalues of the new rotated system. Thus the transformed wave function becomes

Using these rotated state amplitudes and a procedure similar to that used for one level, one finds that they obey the following integrodifferential equations, assuming slowly varying :

Here, the and matrices are given by

with and being the modulation and reservoir-response matrices, respectively, given by

where

During the storage phase, one has , and , and during the gate-operation phase, , , and . The solution to (77) is of the form

To simplify the analysis, one can define the fluence and the modulation spectral matrices as

The relevant imaginary parts of the spectral response of the reservoir can be expressed, analogously to (20) and (21), by the Kramers-Kronig relations

Defining

we shall now represent in different regimes (phases). (i) As a reference, it is important to consider the decoherence effects with no modulations at all, that is, . In this case, one obtains a diagonal decoherence matrix

This means that interference of decaying levels and cancels out in the long-time limit, and the decoherence is without cross relaxation. (ii) During the storage phase, (84) results in

One can easily see that, for the off-diagonal terms, a simple separation into decay rates and energy shifts is inapplicable in this formulation. (iii) During gate operations, (84) assumes the form

In a more compact and enlightening form, one can rewrite this equation as , where is given in (86).

6. The Strong-Coupling Regime: Decay Control Near Continuum Edge by Nonadiabatic Interference

The analysis expounded thus far has been based on a perturbative treatment of the system-bath coupling. Here, we address the regime of strong system-bath coupling, as in the case of a resonance frequency very near the continuum edge, a situation that may be encountered in atomic excitation near the ionization energy, vibrational excitation frequency in a solid near the Debye cutoff, or an atomic excitation in a photonic crystal near a photonic bandgap. In the strong-coupling regime, it is advantageous to work in the combined basis of the system (qubit) and field (bath) states that incorporate the system-bath interaction. Dynamical control of the decay can then be analysed by exact solution of the Schrödinger equation in this basis.
6. The Strong-Coupling Regime: Decay Control Near Continuum Edge by Nonadiabatic Interference

The analysis expounded thus far has been based on a perturbative treatment of the system-bath coupling. Here, we address the regime of strong system-bath coupling, as in the case of a resonance frequency very near to the continuum edge, a situation that may be encountered in atomic excitation near the ionization energy, vibrational excitation frequency in a solid near the Debye cutoff, or an atomic excitation in a photonic crystal near a photonic bandgap. In the strong-coupling regime, it is advantageous to work in the combined basis of the system (qubit) and field (bath) states that incorporate the system-bath interaction. Dynamical control of the decay can then be analysed by exact solution of the Schrödinger equation in this basis.

Analytical expressions are obtainable for alternating static evolutions with different parameters (e.g., resonant frequency), the dynamical control resulting from their interference. Specifically, we shall consider optical manipulations of atoms embedded in photonic crystals with atomic transition frequencies near a photonic bandgap (PBG), that is, near the edge of the photonic mode continuum, where the qubit is strongly coupled to the continuum and spontaneous emission (SE) is only partially blocked, because an initially excited atom then evolves into a superposition of decaying and stable states, the stable state representing photon-atom binding [14, 145]. In what follows we shall demonstrate the ability of appropriately alternating sudden changes of the detuning to augment the interference of the emitted and back-scattered photon amplitudes, thereby increasing the probability amplitude of the stable (photon-atom bound) state. As a result, phase-gate operations effected by dipole-dipole interactions can be performed with higher fidelity than in the case of adiabatic frequency change.

6.1. Hamiltonian and Equations of Motion

We consider a two-level atom with excited and ground states and , coupled to the field of a discrete (or defect) mode and to the photonic band structure (PBS) in a photonic crystal. The Hamiltonian of the system in the rotating-wave approximation assumes the form [145] . Here, is the energy of the atomic transition frequency, and are, respectively, the creation and annihilation operators of the field mode at frequency , is the mode density of the PBS, and and are the coupling rates to the atomic dipole of a mode from the continuum and the discrete mode, respectively.

Let us first consider the initial state obtained by absorbing a photon from the discrete mode, , where is the vacuum state of the field. Then the evolution of the wavefunction has the general form , where we have denoted by and the single-photon states of the relevant modes. The Schrödinger equation then leads to the set of coupled differential equations . This evolution reflects the interplay between the off-resonant Rabi oscillations of and , at the driving rate , and the partly inhibited oscillatory decay from to via coupling to the continuum . This decay depends on the detuning of from the continuum edge at (the upper cutoff of the PBG). For a spectrally steep edge (see below), we are in the regime of strong coupling to the mode continuum (as in a high-Q cavity [8]), which allows for the existence of an oscillatory, nondecaying component of , associated with a photon-atom bound state [7, 145].

6.2. Periodic Sudden Changes of the Detuning

Let us now introduce abrupt changes of , that is, of the detuning from the upper cutoff, , of the PBG (by fast AC-Stark modulations, as discussed below), at intervals . In the sudden-change approximation for , the amplitudes of the excited state, the discrete mode and the continuum still evolve according to (91), except that from to the atomic transition frequency is , that is, the detuning , while for , we have , that is, . This dynamics leads to the relation . Here, and are solutions of (91) with a static (fixed) atomic transition frequency, or . However, the initial condition at the instant of the frequency change from to is no longer the excited state (89) but the superposition . In other words, the dynamics is equivalent to two successive static evolutions, the second one starting from initial conditions .
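Equations (91) themselves did not survive reproduction here. Purely as a hedged sketch, amplitude equations of this general type (excited state, discrete defect mode, and continuum, in a rotating frame) are often written in the form below, where the couplings \(\kappa\) and \(g(\omega)\), the mode density \(\rho(\omega)\), and the amplitude symbols are our notation and not necessarily that of [145]:

    \[i\,\dot{c}_{e}=\omega_{a}\,c_{e}+\kappa\,c_{d}+\int d\omega\,\rho(\omega)\,g(\omega)\,c_{\omega},\qquad i\,\dot{c}_{d}=\omega_{d}\,c_{d}+\kappa\,c_{e},\qquad i\,\dot{c}_{\omega}=\omega\,c_{\omega}+g^{*}(\omega)\,c_{e}.\]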
Using the Laplace transform of the system (91) with the initial condition (93), it is possible to express the dynamic amplitude of the excited state after the sudden change as , where we have used the initial conditions and the solution of (91) for the initial condition (89). There is an advantageous feature to the sudden change: since the time dependence of in (92) arises from the static amplitudes , , and at the shifted time , a consequence of the sudden change is to revive the excited-state population oscillations, which tend to disappear at long times in the static case. Hence, by applying several successive sudden changes, we should be able to maintain large-amplitude oscillations of the coherence between and .

The scenario leading to the largest amplitude consists in periodic shifts of the energy detuning from to . When the initial detuning is large and we first reduce it to before it increases to , the dynamic population and the coherence, thanks to the revival of oscillations, are periodically larger than the static ones. This remarkable result occurs unexpectedly: it implies that successive abrupt changes can reverse the decay to the continuum, even though they cannot be associated with the Zeno effect: they occur at intervals much longer than the correlation (Zeno) time of the radiative continuum, which is utterly negligible ( s) [97], or even longer than the static-oscillation half-period. The fact that this happens only for the rather “counter-intuitive” ordering of detuning values (from large to small then back again) is a manifestation of interference between successive static evolutions: their relative phases determine the beating between the emitted and reabsorbed (back-scattered) photon amplitudes and thereby the oscillation of .

Let us now consider the initial superposition and a nonnegligible coupling constant . In this case, the periodic dynamic population of the excited state also strongly exceeds the static one. Most importantly, the instantaneous dynamic fidelity is periodically enhanced as compared to the static one, as demonstrated numerically.

In order to use these results for quantum logic gates, let us consider the example of the dipole-dipole induced control-phase gate, which consists in shifting the phase of the target-qubit excited state by via interaction with the control qubit [10, 143, 144]. The phase shift must be accumulated gradually, to preserve the coherence of the system. We have found that ten or twenty sudden shifts of or , respectively, alternating with appropriate detuning changes, can keep the fidelity high, with little decoherence. The system begins to evolve following the “counter-intuitive” detuning sequence discussed above (not to be confused with the adiabatic STIRAP method [11, 12, 129]). As soon as two sudden changes of the detuning have been performed, the conditional phase shift of or takes place and the process is further repeated. The total gate operation is completed within the time interval of maximum fidelity.

The fidelity of the system relative to its initial state during the realization of a control-phase gate, with alternating detunings, is perhaps our most impressive finding. We find that the fidelity is increased using the “counter-intuitive” sequence of detunings (solid line) as compared to the static (fixed) choice of maximal detuning (long-dashed line), or compared to the dynamically enhanced fidelity obtained without gate operations (dot-dashed line).
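To see the flavour of this revival numerically, the toy sketch below integrates a crudely discretized band-edge model while switching the detuning abruptly between two values, first large and then small, in the "counter-intuitive" order described above. It is not the authors' computation: the couplings, the edge profile, the switching interval, and every other number are arbitrary assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Crude discretization of the band-edge continuum (all values assumed)
N, wc = 300, 10.0
wk = wc + np.linspace(0.0, 4.0, N)                 # modes above the edge wc
gk = 0.2 * (wk - wc + 1e-3) ** 0.25 / np.sqrt(N)   # edge-like coupling profile
kappa, wd = 0.25, wc + 0.4                         # discrete-mode coupling, freq

def rhs(t, y, wa):
    e, d, cont = y[0], y[1], y[2:]
    return np.concatenate((
        [-1j * (wa * e + kappa * d + gk @ cont)],
        [-1j * (wd * d + kappa * e)],
        -1j * (wk * cont + gk * e)))

y = np.zeros(N + 2, dtype=complex)
y[0] = 1.0                                         # atom initially excited
tau, detunings = 3.0, [1.0, 0.1]                   # large -> small -> large ...
for seg in range(8):
    wa = wc + detunings[seg % 2]                   # sudden detuning change
    sol = solve_ivp(rhs, (0, tau), y, args=(wa,), max_step=0.02)
    y = sol.y[:, -1]
    print(f"segment {seg}: |c_e|^2 = {abs(y[0]) ** 2:.3f}")
```

With the switching disabled (a single fixed detuning) the printed population settles monotonically, whereas the alternation keeps it oscillating, which is the qualitative point of this section.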
6.3. Comparison with the Weak-Coupling Regime

We have compared the results of this method, which allows for possibly strong coupling of with the continuum edge, with those of the universal formula (25) of Section 2, which expresses the decay rate of by the convolution of the modulation spectrum and the PBS coupling spectrum. We find good agreement with this formula only in the regime of weak coupling to the PBG edge, when the dimensionless detuning parameter , as expected from the limitations of the theory in Section 2.

6.4. Experimental Scenario

The following experimental scenario may be envisioned for demonstrating the proposed effect: pairs of qubits are realizable by two species of active rare-earth dopants [17, 18] or quantum dots in a photonic crystal. The transition frequency of one species is initially detuned by from the PBG edge, with coupling constant , and by ~3 MHz from the resonance of the other species. This is abruptly modulated by nonresonant laser pulses which exert ~3 MHz AC Stark shifts. Between successive shifts, the qubits are near-resonant with their neighbours and therefore become dipole-dipole coupled, thus effecting the high-fidelity phase-control gate operation [10, 143, 144]. The required pulse rate is , much lower than the pulse rate stipulated under similar conditions by previously proposed strategies [81, 99, 118].

7. Finite-Temperature Relaxation and Decoherence Control

So far we have treated the case of an empty (zero-temperature) bath. In order to account for finite-temperature situations, where the bath state is close to a thermal (Gibbs) state, we resort to a master equation (ME) for any dynamically controlled reduced density matrix of the system [100, 124] that we have derived using the Nakajima-Zwanzig formalism [70, 146, 147, 160]. This ME becomes manageable and transparent under the following assumptions. (i) The weak-coupling limit of the system-bath interaction prevails, corresponding to the neglect of terms. This is equivalent to the Born approximation, whereby the back effect of the system on the bath and their resulting entanglement are ignored. (ii) The system and the bath states are initially factorisable. (iii) The initial mean value of vanishes. We present the general form of the Nakajima-Zwanzig formalism and resort to the aforementioned assumptions only when necessary. Hence, the formalism may seem cumbersome, yet it can be simplified greatly if the assumptions are made from the outset (see [70]).

7.1. Explicit Equations for Factorisable Interaction Hamiltonians

We now wish to write the ME explicitly for time-dependent Hamiltonians of the following form [100]: , where and are the system and bath Hamiltonians, respectively, and , the interaction Hamiltonian, is the product of operators and , which act on the system and bath, respectively. Finally, defining the correlation function for the bath, we obtain the ME for in the Born approximation as .

We focus on two regimes: a two-level system coupled to either an amplitude- or phase-noise (AN or PN) thermal bath. The bath Hamiltonian (in either regime) will be explicitly taken to consist of harmonic oscillators, linearly coupled to the system: . Here are the annihilation and creation operators of mode , respectively, and is the coupling amplitude to mode .
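For concreteness, a bath correlation function of this kind can be evaluated numerically for a simple model. In the sketch below, the Ohmic spectral density, the cutoff, and the coupling strength are our own assumptions, not parameters taken from [100, 124]:

```python
import numpy as np

def bath_correlation(t, beta, eta=0.1, wc=5.0, n=4000):
    """Finite-temperature oscillator-bath correlation function for an
    (assumed) Ohmic spectral density J(w) = eta * w * exp(-w/wc):
    Phi(t) = (1/pi) * int_0^inf dw J(w) [coth(beta w/2) cos(wt) - i sin(wt)].
    Computed by a plain Riemann sum over a truncated frequency grid."""
    w = np.linspace(1e-4, 12 * wc, n)
    dw = w[1] - w[0]
    J = eta * w * np.exp(-w / wc)
    f = J * (np.cos(w * t) / np.tanh(0.5 * beta * w) - 1j * np.sin(w * t))
    return np.sum(f) * dw / np.pi

for t in (0.0, 0.2, 1.0):
    print("Phi(", t, ") =", bath_correlation(t, beta=2.0))
```

The decay of Phi(t) with t sets the bath memory time that recurs throughout the rest of this section.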
7.1.1. Amplitude-Noise Regime

We first consider the AN regime of a two-level system coupled to a thermal bath. We will use off-resonant dynamic modulations, resulting in AC-Stark shifts. The Hamiltonians then assume the following form: , where is the dynamical AC-Stark shift, is the time-dependent modulation of the interaction strength, and is the Pauli matrix .

7.1.2. Phase-Noise Regime

Next, we consider the PN regime of a two-level system coupled to a thermal bath via the operator. To combat it, we will use near-resonant fields with time-varying amplitude as our control. The Hamiltonians then assume the following forms: , where is the time-dependent resonant field, with real envelope , is the time-dependent modulation of the interaction strength, and . Since we are interested in dephasing, phases due to the (unperturbed) energy difference between the levels are immaterial.

7.2. Universal Master Equation

To derive a universal ME for both amplitude- and phase-noise scenarios, we move to the interaction picture and rotate to the appropriate diagonalizing basis, where the appropriate basis for the AN case of (100) is , while for the PN case of (102) the basis is . In this rotated and tilted frame, , where is the phase modulation due to the time-dependent control in the system Hamiltonian. Allowance for arbitrary time-dependent intervention in the system and interaction dynamics, and , respectively, yields the following universal ME for a dynamically controlled decohering system [100, 124]: . Here is the modulated interaction operator, where denotes the rotated and tilted frame, and . The modulation function is given by for both AN and PN. It is important to note that is a function of (not of ): this convolutionless form of the ME is fully non-Markovian to second order in , as proven exactly in [124].

7.3. Universal Modified Bloch Equations

The resulting modified Bloch equations, in the appropriate diagonalizing basis (see (104) for AN and (105) for PN), are given by . The time-dependent relaxation rates are real, and the only difference between them is the complex conjugation of the combined modulation function, . They can be very different for a complex correlation function. One can derive the corresponding time-averaged relaxation rates of the upper and lower states as . For both AN (see (100)) and PN (see (102)), , where is the zero-temperature bath spectrum, and are the frequency-dependent density of bath modes and the transition matrix element, respectively, is the temperature-dependent bath-mode population, and is the inverse temperature. Also, is the Heaviside function; that is, the zero-temperature bath spectrum is defined only for positive frequencies . Hence, the first right-hand-side term of (114) is nonzero for positive frequencies and the second right-hand-side term is nonzero for negative frequencies.

For either AN or PN, we may control the decoherence by either off-resonant or near-resonant modulations, respectively. The modulation spectrum has the same form for both (see Section 7.4), namely , where the modulation function is given in (107) and (109). The time-dependent modulation phase factor is obtained for AN in the form of an AC-Stark shift, time-integrated over , where is the Rabi frequency of the control field and is the detuning. The corresponding phase factor for PN is the integral of the Rabi frequency , that is, the pulse area of the resonant control field, (107) (Figure 4).

Figure 4: Schematic drawing of system and bath. (a) Amplitude noise (AN) (red) combatted by AC-Stark-shift modulation (green). (b) Phase noise (PN) (red) combatted by resonant-field modulation (green).
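The overlap structure of the resulting rates (a finite-time modulation spectrum convolved with the bath spectrum) is easy to prototype. The following sketch is in that spirit only; the normalization convention, the Lorentzian bath spectrum, and the function name decay_rate are assumptions of ours:

```python
import numpy as np

def decay_rate(eps, t, G, w):
    """Overlap-type average decay rate R = 2*pi * sum_w G(w) F_T(w) dw,
    with F_T(w) = |int_0^T dt eps(t) e^{i w t}|^2 / (2*pi*T) the
    finite-time spectrum of the modulation function eps(t)."""
    dt, T = t[1] - t[0], t[-1]
    amp = np.exp(1j * np.outer(w, t)) @ eps * dt   # finite-time transform
    F = np.abs(amp) ** 2 / (2 * np.pi * T)
    return 2 * np.pi * np.sum(G * F) * (w[1] - w[0])

t = np.linspace(0, 50, 4000)
w = np.linspace(-6, 6, 801)
G = 0.05 / np.pi / ((w - 1.0) ** 2 + 0.05 ** 2)    # Lorentzian bath spectrum

print("unmodulated      :", decay_rate(np.ones_like(t) + 0j, t, G, w))
print("shifted onto peak:", decay_rate(np.exp(-1j * 1.0 * t), t, G, w))
print("shifted off peak :", decay_rate(np.exp(-1j * 5.0 * t), t, G, w))
```

Shifting the modulation onto the bath peak enhances the rate (anti-Zeno-like), while shifting it away suppresses the rate, anticipating the modulation arsenal of Section 7.4.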
Hence, upon making the appropriate substitutions, the Bloch equations (110) have the same universal form for either AN or PN. An arbitrary combination of AN and PN requires a more detailed treatment, yet the universal form is maintained.

7.3.1. Dynamically Modified Decay Rates

Since we are interested here in dynamical control of relaxation, we shall concentrate on the transition rates rather than the level shifts. The average rate of the transition and its counterpart are given by . Here the upper (lower) sign corresponds to the subscript , and can be shown [161] to be nonnegative, with , and vanishes for at : . For the oscillator bath, one finds that , where and is the average number of quanta in the oscillator (bath mode) with frequency .

We apply (118) to the case of coherent modulation of quasiperiodic form (see (31)). Without loss of generality, we can assume that . We then find, using (118), that the rates tend to the long-time limits , where or . Equation (121) shows that is given by the overlap of the modulation spectrum with the bath-CF spectrum . The limits (123) are approached when and . Here is the bath memory (correlation) time, defined as the inverse of , the spectral interval over which changes around the relevant frequencies.

Had we used the standard dipolar RWA Hamiltonian in the case of an oscillator bath, dropping the antiresonant terms in , we would have arrived at the transition rates , wherein the integration is performed from 0 to , rather than from to , as in (121). This means that the RWA transition rates hold for a slow modulation, when at , being peaked near . However, whenever the suppression of requires modulation at a rate comparable to , the RWA is inadequate. For instance, (120) and (124) imply that, at , the rate vanishes identically, irrespective of , in contrast to the true upward-transition rate in (121), which may be comparable to for ultrafast modulation. The difference between the RWA and non-RWA decay rates stems from the fact that the RWA implies that a downward (upward) transition is accompanied by emission (absorption) of a bath quantum, whereas the non-RWA (negative-frequency) contribution to in (121) allows for just the opposite: downward (upward) transitions that are accompanied by absorption (emission). The latter processes are possible since the modulation may cause level to be shifted below .

The validity of the (decohering) qubit model in the presence of modulation at a rate is now elucidated: it requires that , being the effective transition rate from level to any other level , and, in particular, . If are strongly suppressed by the modulation, the TLS model holds for long times.

7.3.2. Dynamically Modified Proper Dephasing

We turn now to proper dephasing when it dominates over decay. The random frequency fluctuations are typically characterized by a (single) correlation time , with ensemble mean . When the field is used only for gate operations, we assume that it does not affect proper dephasing. The ensemble average over results in , with the dephasing rate . The dephasing CF is the counterpart of the bath CF .
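A quick Monte Carlo check of the motional-narrowing picture behind these dephasing formulas is given below, with an assumed Ornstein-Uhlenbeck model for the random frequency fluctuations; every parameter value is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, tau_c, Delta = 1e-3, 5.0, 0.05, 3.0   # all values assumed
n_traj, n_steps = 2000, int(T / dt)

# Ornstein-Uhlenbeck frequency noise with <d(t)d(t')> =
# Delta^2 * exp(-|t - t'| / tau_c); d is started in its stationary state.
d = Delta * rng.standard_normal(n_traj)
a = np.exp(-dt / tau_c)
s = Delta * np.sqrt(1 - a * a)
phase = np.zeros(n_traj)
for _ in range(n_steps):
    d = a * d + s * rng.standard_normal(n_traj)
    phase += d * dt
coh = np.abs(np.mean(np.exp(1j * phase)))    # ensemble-averaged coherence

print("measured dephasing rate :", -np.log(coh) / T)
print("Delta^2 * tau_c         :", Delta ** 2 * tau_c)
```

In the narrowing regime (Delta * tau_c well below 1) the two printed numbers agree, which is the long-time dephasing rate that the modulation schemes below set out to suppress.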
At , the decoherence rate and shift approach their asymptotic values . For the validity of (127), it is necessary that . We assume the secular approximation, which holds if . By analogy with (118), one can obtain , where is given by (117) with . As follows from (131), is a symmetric function, . The proper dephasing rate associated with is . In the presence of a constant [cw ] field, it is modified into . For a sufficiently strong field, the dephasing rate can be suppressed by the factor . This suppression reflects the ability of strong, near-resonant Rabi splitting to shift the system out of the randomly fluctuating bandwidth, or to average out its effects. Quantum gate operations may be performed by slight modulations of the control field, which can flip the qubit without affecting proper dephasing.

By comparison, the “bang-bang” (BB) method involving -periodic -pulses [2, 82, 84] is an analog of the above “parity kicks.” Using the analog of (121), such pulses can be shown to suppress approximately according to (135) with . This BB method requires pulsed fields with Rabi frequencies , that is, much stronger fields than the cw field in (135). Using s, cw Rabi frequencies exceeding 1 MHz achieve a significant dephasing suppression.

7.4. Modulation Arsenal

Any modulation with a quasi-discrete, finite spectrum is deemed quasiperiodic, implying that it can be expanded as , where are arbitrary discrete frequencies such that , where is the minimal spectral interval. One can define the long-time limit of the quasi-periodic modulation, when , where is the bath-memory (correlation) time, defined as the inverse of the largest spectral interval over which and change appreciably near the relevant frequencies . In this limit, the average decay rate is given by (Figure 5(a)) .

Figure 5: Spectral representation of the bath coupling, , and the modulation, . (a) General quasi-periodic modulation, with peaks at . (b) On-off modulation, with repetition rate for . (c) Impulsive phase modulation ( -pulses), . (d) Monochromatic modulation, or impulsive phase modulation with small phase shifts, , and repetition rate .

7.4.1. Phase Modulation (PM) of the Coupling

Monochromatic Perturbation. Let . Then , where is a frequency shift, induced by the AC Stark effect (in the case of atoms) or by the Zeeman effect (in the case of spins). In principle, such a shift may drastically enhance or suppress relative to the Golden Rule decay rate, that is, the decay rate without any perturbation, . Equation (140) provides the maximal change of achievable by an external perturbation, since it does not involve any averaging (smoothing) of incurred by the width of : the modified can even vanish, if the shifted frequency is beyond the cutoff frequency of the coupling, where (Figure 5(d)). This would accomplish the goal of dynamical decoupling [81–87, 118, 162]. Conversely, the increase of due to a shift can be much greater than that achievable by repeated measurements, that is, the anti-Zeno effect [97, 98, 101, 102]. In practice, however, AC Stark shifts are usually small for (cw) monochromatic perturbations, whence pulsed perturbations should often be used, resulting in multiple shifts, as per (139).
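A toy numerical reading of this monochromatic-shift statement is sketched below; the coupling spectrum, the cutoff, and all numbers are invented for illustration only.

```python
import numpy as np

wc = 2 * np.pi * 1e9              # coupling-spectrum cutoff (assumed)
wa = 0.9 * wc                     # bare transition frequency (assumed)

def G(w):
    # Toy coupling spectrum, vanishing sharply above the cutoff wc
    return np.where((w > 0) & (w < wc), 1e3 * (w / wc) ** 3, 0.0)

for delta in (0.0, 0.05 * wc, 0.2 * wc):   # AC-Stark / Zeeman shift
    R = 2 * np.pi * G(wa + delta)          # Golden-Rule rate at shifted freq
    print(f"shift = {delta / wc:4.2f} wc  ->  R = {R:10.1f} s^-1")
# Once wa + delta exceeds the cutoff wc the rate vanishes: the
# dynamical-decoupling limit. A shift toward a spectral peak would
# instead enhance the decay, i.e., the anti-Zeno effect.
```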
Dynamical Decoupling. Dynamical decoupling (DD) is one of the best-known approaches to combating decoherence, especially dephasing [79–92, 95, 96]. A full description of this approach is beyond the scope of this work, but we present its most essential aspects and how it can be incorporated into the general framework described above.

7.4.2. Standard DD

DD is based on the notion that the phase-modulation control fields are short and strong enough that the free evolution can be neglected during these pulses. Hence, the propagator can be decomposed into the free propagator, followed by the control-field propagator, the free propagator again, and so forth. The control fields used result in the periodic accumulation of -phases; that is, each pulse has a total area of , whose effects are similar to time reversal or the spin-echo technique [94]. Thus, the free-evolution propagator after the control -pulse negates the effects of the free-evolution propagator prior to the control fields, up to first order of the noise in the Magnus expansion.

While the formalism of dynamical decoupling is quite different from the formalism presented here, it can easily be incorporated into the general framework of universal dynamical decoherence control by introducing impulsive phase modulation. Let the phase of the modulation function periodically jump by an amount at times . Such modulation can be achieved by a train of identical, equidistant, narrow pulses of nonresonant radiation, which produce pulsed AC Stark shifts of . When , this modulation corresponds to dynamical-decoupling (DD) pulses. For sufficiently long times (see (138)), one can use (139), with . For small phase shifts, , the peak dominates, whereas . In this case, one can retain only the term in (139), unless is changing very fast with frequency. Then the modulation acts as a constant shift (Figure 5(d)), . As increases, the difference between the and peak heights diminishes, vanishing for .
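Finally, the impulsive phase modulation just described can be inspected directly: for pi jumps the modulation function is a ±1 square wave whose spectrum peaks at odd multiples of pi/tau, which is what relocates the sampling of the bath spectrum. A minimal sketch (grid sizes and normalization are our own choices):

```python
import numpy as np

tau, T = 1.0, 40.0
t = np.linspace(0, T, 4000, endpoint=False)
dphi = np.pi                                   # pi phase jumps: standard DD
eps = np.exp(1j * dphi * np.floor(t / tau))    # impulsive phase modulation

w = np.linspace(-8, 8, 801)
dt = t[1] - t[0]
F = np.abs(np.exp(1j * np.outer(w, t)) @ eps * dt) ** 2 / (2 * np.pi * T)
print("spectrum peaks near |w| =", abs(w[np.argmax(F)]), " (pi/tau =", np.pi / tau, ")")
```

Reducing dphi below pi moves weight back into the unshifted peak, reproducing the small-phase-shift (effective constant shift) regime mentioned above.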
Let's suppose I have a Hilbert space $K = L^2(X)$ equipped with a Hamiltonian $H$ such that the Schrödinger equation with respect to $H$ on $K$ describes some boson I'm interested in, and I want to create and annihilate a bunch of these bosons. So I construct the bosonic Fock space $$S(K) = \bigoplus_{i \ge 0} S^i(K)$$ where $S^i$ denotes the $i^{th}$ symmetric power. (Is this "second quantization"?) Feel free to assume that $H$ has discrete spectrum.

What is the new Hamiltonian on $S(K)$ (assuming that the bosons don't interact)? How do observables on $K$ translate to $S(K)$? I'm not entirely sure this is a meaningful question to ask, so feel free to tell me that it's not and that I have to postulate some mechanism by which creation and/or annihilation actually happens. In that case, I would love to be enlightened about how to do this.

Now, various sources (Wikipedia, the Feynman lectures) inform me that $S(K)$ is somehow closely related to the Hilbert space of states of a quantum harmonic oscillator. That is, the creation and annihilation operators one defines in that context are somehow the same as the creation and annihilation operators one can define on $S(K)$, and maybe the Hamiltonians even look the same somehow. Why is this? What's going on here? Assume that I know a teensy bit of ordinary quantum mechanics but no quantum field theory.

Hello Qiaochu, welcome to physics.SE! Nice question and I hope we can expect many more :-) – Marek Jan 8 '11 at 3:31
What is $S^i(K)$ @Qiaochu ? – user346 Jan 8 '11 at 5:10
@space_cadet: the $i^{th}$ symmetric power, i.e. the Hilbert space of states of $i$ identical bosons. – Qiaochu Yuan Jan 8 '11 at 13:14
Ah ok. In the physics literature $H$, almost always, denotes the Hamiltonian and $S$ the action. – user346 Jan 8 '11 at 13:25
$H$ on $Sym^2(K)$ is really $H\otimes 1 + 1 \otimes H$ and likewise for $a$ and $a^\dagger$. So for example the energy is the sum of the (uncoupled) energies. You might have expected $H\otimes H$, for example, but $H$ generates an infinitesimal translation in time. Exponentiating gives the expected result on the propagator $U = \exp(tH)$ as $U\otimes U.$ – Eric Zaslow Jan 8 '11 at 20:20

3 Answers

Let's discuss the harmonic oscillator first. It is actually a very special system (the only one of its kind in the whole of QM), itself being already second quantized in a sense (this point will be elucidated later).

First, a general talk about the HO (skip this paragraph if you already know it inside-out). It's possible to express its Hamiltonian as $H = \hbar \omega(N + 1/2)$ where $N = a^{\dagger} a$ and $a$ is a linear combination of the momentum and position operators. By using the commutation relation $[a, a^{\dagger}] = 1$ one obtains a basis $\{ \left| n \right> \mid n \in {\mathbb N} \}$ with $N \left | n \right > = n \left | n \right >$. So we obtain a convenient interpretation that this basis counts the number of particles in the system, each carrying energy $\hbar \omega$, and that the vacuum $\left | 0 \right >$ has energy ${\hbar \omega \over 2}$.

Now, the above construction was actually the same as yours for $X = \{0\}$. Fock's construction (also known as second quantization) can be understood as introducing particles, $S^i$ corresponding to $i$ particles (so the HO is a second quantization of a particle with one degree of freedom).
In any case, we obtain position-dependent operators $a(x), a^{\dagger}(x), N(x)$ and $H(x)$ which are, for every $x \in X$, isomorphic to the HO operators discussed previously, and we also obtain a basis $\left | n(x) \right>$ (though I am actually not sure this is a basis in the strict sense of the word; these affairs are not discussed much in field theory by physicists). The total Hamiltonian $H$ will then be an integral $H = \int H(x) dx$. The generic state in this system looks like a bunch of particles scattered all over, and this is in fact the particle description of a free bosonic field.

I realize I left your original Hamiltonian $H$ out of the discussion. I'll add that to the answer later. For now note that $x$ is in no way special in the above; we could have used another "basis" of $K$, like momentum, and in particular the energy basis of $H$. In that case the relevant states for $S(K)$ become $\left | n_0 n_1 \cdots \right>$ with $n_i$ telling us how many particles are in the state with energy $E_i$. – Marek Jan 8 '11 at 4:07
@Marek: thanks! I would definitely appreciate some pointers about exactly what to do with the original Hamiltonian. Some follow-up questions: are the creation and annihilation operators observables? Is number going to turn out to be a conserved quantity in the general case? – Qiaochu Yuan Jan 8 '11 at 14:05
@Marek: and one more question. Given an observable A on K, what's the corresponding observable on S(K)? I can think of a few different possibilities and I'm not sure which one physicists actually use. – Qiaochu Yuan Jan 8 '11 at 14:27
@Qiaochu: true, but I thought you were asking how to promote observables from $K$ to $S(K)$. $N(\lambda)$ are completely new operators that need the structure of $S(K)$ to be defined. As for interactions: well, that is a topic for a one-semester course in quantum field theory so I recommend you ask this as a separate question. But in short: in general any $H_I$ is possible. But physical ones need to conserve energy, momentum and in fact complete Poincaré symmetry. So one uses representations of Poincaré group to restrict possible choices of $H_I$. – Marek Jan 8 '11 at 15:24
@Qiaochu: (cont.) in the end it turns out that it's really ineffective to work in this way and one is forced to pass to the language of fields. One can quantize classical fields (again enforcing Poincaré and perhaps other, gauge symmetries) by usual means (canonical quantization, path-integral, etc.) and in the end one can decompose the Hilbert space into particles (in the Fock sense) and $H_I$ falls out. In any case, there is still a lot of room for possible interactions and to get a taste, see e.g. QED Lagrangian. – Marek Jan 8 '11 at 15:30
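(A concrete illustration of the first paragraph of this answer, added editorially: the ladder-operator algebra can be played with numerically in a truncated Fock space. The truncation dimension is arbitrary, and the failure of the commutation relation in the last retained level is an expected artifact of the cutoff.)

```python
import numpy as np

d = 8                                        # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation: a|n> = sqrt(n)|n-1>
adag = a.conj().T                            # creation operator
N = adag @ a                                 # number operator
H = 1.0 * (N + 0.5 * np.eye(d))              # H = hbar*omega*(N + 1/2), hbar*omega = 1

print(np.round(np.diag(H), 2))               # energies (n + 1/2)
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(d - 1)))  # [a, a+] = 1 away from cutoff
```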
The above state is not normalized until multiplied by factor of the form $\prod_{k=0}^N \frac{1}{\sqrt{k+1}}$. If your excitations are bosonic you are done, because the commutator of the ladder operators $[a^+_i,a_j] = \delta_{ij}$ vanishes for $i\ne j$. However if the statistics of your particles are non-bosonic (fermionic or anyonic) then the order, in which you act on the vacuum with the ladder operators, matters. Of course, to construct a Fock space $\mathbf{F}$ you do not need to specify a Hamiltonian. Only the ladder operators with their commutation/anti-commutation relations are needed. In usual flat-space problems the ladder operators correspond to our usual fourier modes $ a^+_k \Rightarrow \exp ^{i k x} $. For curved spacetimes this can procedure can be generalized by defining our ladder operators to correspond to suitable positive (negative) frequency solutions of a laplacian on that space. For details, see Wald, QFT in Curved Spacetimes. Now, given any Hamiltonian of the form: $$ H = \sum_{k=1}^{N} T(x_k) + \frac{1}{2} \sum_{k \ne l = 1}^N V(x_k,x_l) $$ with a kinetic term $T$ for a particle at $x_k$ and a pairwise potential term $V(x_k,x_l)$, one can write down the quantum Hamiltonian in terms of matrix elements of these operators: $$ H = \sum_{ij} a^+_i \langle i \vert T \vert j \rangle a_i + \frac{1}{2}a^+_i a^+_j \langle ij \vert V \vert kl \rangle a_l a_k $$ where $|i\rangle$ is the state with a single excited quantum corresponding the action of $a^+_i$ on the vacuum. (For details, steps, see Fetter & Walecka, Ch. 1). I hope this helps resolves some of your doubts. Being as you are from math, there are bound to be semantic differences between my language and yours so if you have any questions at all please don't hesitate to ask. share|improve this answer Can you explain the notation in that last formula? What are the b_i? –  Qiaochu Yuan Jan 8 '11 at 18:00 @qiaochu that was a typo. Its fixed now. –  user346 Jan 8 '11 at 20:29 As recently as 10 years ago Welecka was still teaching as William & Mary. It's worth taking his course. Any course. Or even going to see a talk. Really. –  dmckee Jan 8 '11 at 23:00 Suppose, as you do, that $K$ is the space of states of a single boson. Then the space of states of a combined system of two bosons is not $K\otimes K$ as it would be if the two bosons were distinguishable, it is the symmetric subspace which you are denoting as $S^2$. Your sum over all $i$, which you denote $S$, is then a HIlbert space (state space) of a new system whose states contain the states of one-boson system, a two-boson system, a three-boson system, etc. except not an infinite number of bosons. (that is not included in the space $S$). And your space $S$ includes superpositions, for example if $v_1$ is an element of $S$ (a state of one boson) and if $v_3 \in S^3$ (a state of a three boson system) then $0.707 v_1 - -.707 v_3$ is a state which has a fifty per cent. probability of being one boson, if the number of particles is measured, and a fifty per cent. probability of being found to be three bosons. That is the physical meaning of Fock space. It is the state space on which the operators of a quantum field act. As already remarked by Eric Zaslow, if $H$ is the Hamiltonian of the h.o. $K$, then by definition, $H\otimes I + I \otimes H$ is the Hamiltonian on $S^2$, etc. on each $S^i$. Then one sums them all up to get a Hamiltonian on the direct sum $S$. 
Unless this Hamiltonian is perturbed, the number of particles is constant, obviously, since it preserves each subspace $S^i$ of $S$. So there will be no creation or annihilation of pairs of particles. If this field comes into interaction with an extraneous particle, the Hamiltonian will be perturbed, of course.

It is connected with second quantisation as follows: if you have a classical h.o. and quantise it, you get $K$. If you now second quantise $K$, you get $S$, which can be regarded as a quantum field. Sir James Jeans showed, before the quantum revolution, that the classical electromagnetic field could be obtained from the classical-mechanics h.o. as a limit of more and more classical h.o.'s not interacting with each other, and this procedure of second quantisation is a quantum analogue. It is not the same procedure as if you start with a classical field and then quantise it. But it is remarkable that you can get the same answer either way, as Jeans noticed in the classical case. That is, you started with a quantum one-particle system and passed to Fock space and got the quantum field theory corresponding to that system. But we could have started with a classical field and quantised it, and gotten the quantum field that way.
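(Another concrete illustration of this answer's tensor-sum statement, added editorially: one can build $H\otimes I + I \otimes H$ and restrict it to the symmetric subspace $S^2(K)$ explicitly. The one-boson energies below are arbitrary; the pair energies come out as sums $E_i + E_j$, and the unperturbed Hamiltonian visibly never leaves the two-boson sector.)

```python
import numpy as np
from itertools import combinations_with_replacement

E = np.array([0.5, 1.3, 2.1])        # one-boson energies (assumed values)
H1 = np.diag(E)
d = len(E)

# Two distinguishable particles live on K (x) K:
H2 = np.kron(H1, np.eye(d)) + np.kron(np.eye(d), H1)

# Orthonormal basis of the bosonic (symmetric) subspace S^2(K):
basis = []
for i, j in combinations_with_replacement(range(d), 2):
    v = np.zeros(d * d)
    v[i * d + j] += 1.0
    v[j * d + i] += 1.0
    basis.append(v / np.linalg.norm(v))
P = np.array(basis)                  # rows project onto symmetric states

H_S2 = P @ H2 @ P.T                  # Hamiltonian restricted to S^2(K)
print(np.round(np.diag(H_S2), 2))    # pair energies E_i + E_j
```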
Fundamental particles in the SU(2)xU(1) part of the Standard Model Above: the Standard Model particles in the existing SU(2)xU(1) electroweak symmetry group (a high-quality PDF version of this table can be found here).  The complexity of chiral symmetry – the fact that only particles with left-handed spins (Weyl spinors) experience the weak force – is shown by the different effective weak charges for left and right handed particles of the same type.  My argument, with evidence to back it up in this post and previous posts, is that there are no real ‘singlets': all the particles are doublets apart from the gauge bosons (W/Z particles) which are triplets.  This causes a major change to the SU(2)xU(1) electroweak symmetry.  Essentially, the U(1) group which is a source of singlets (i.e., particles shown in blue type in this table which may have weak hypercharge but have no weak isotopic charge) is removed!  An SU(2) symmetry group then becomes a source of electric and weak hypercharge, as well as its existing role in Standard Model as a descriptor of the isotopic spin.  It modifies the role of the ‘Higgs bosons': some such particles are still be required to give mass, but the mainstream electroweak symmetry breaking mechanism is incorrect. There are 6 rather than 4 electroweak gauge bosons, the same 3 massive weak bosons as before, but 2 new charged massless gauge bosons in addition to the uncharged massless ‘photon’, B.  The 3 massless gauge bosons are all massless counterparts to the 3 massive weak gauge bosons.  The ‘photon’ is not the gauge boson of electromagnetism because, being neutral, it can’t represent a charged field.  Instead, the ‘photon’ gauge boson is the graviton, while the two massless gauge bosons are the charged exchange radiation (gauge bosons) of electromagnetism.  This allows quantitative predictions and the resolution of existing electromagnetic anomalies (which are usually just censored out of discussions). It is the U(1) group which falsely introduces singlets.  All Standard Model fermions are really doublets: if they are bound by the weak force (i.e., left-handed Weyl spinors) then they are doublets in close proximity.  If they are right-handed Weyl spinors, they are doublets mediated by only strong, electromagnetic and gravitational forces, so for leptons (which don’t feel the strong force), the individual particles in a doublet can be located relatively far from another (the electromagnetic and gravitational interactions are both long-range forces).  The beauty of this change to the understanding of the Standard Model is that gravitation automatically pops out in the form of massless neutral gauge bosons, while electromagnetism is mediated by two massless charged gauge bosons, which gives a causal mechanism that predicts the quantitative coupling constants for gravity and electromagnetism correctly.  Various other vital predictions are also made by this correction to the Standard Model. Fundamental vector boson charges of SU(2)  Above: the fundamental vector boson charges of SU(2).  For any particle which has effective mass, there is a black hole event horizon radius of 2GM/c2.  If there is a strong enough electric field at this radius for pair production to occur (in excess of Schwinger’s threshold of 1.3*1018 v/m), then pairs of virtual charges are produced near the event horizon.  
If the particle is positively charged, the negatively charged particles produced at the event horizon will fall into the black hole core, while the positive ones will escape as charged radiation (see Figures 2, 3 and particularly 4 below for the mechanism for propagation of massless charged vector boson exchange radiation between charges scattered around the universe). If the particle is negatively charged, it will similarly be a source of negatively charged exchange radiation (see Figure 2 for an explanation of why the charge is never depleted by absorbing radiation from nearby pair production of opposite sign to itself; there is simply an equilibrium of exchange of radiation between similar charges which cancels out that effect). In the case of a normal (large) black hole or neutral dipole charge (one with equal and opposite charges, and therefore neutral as a whole), as many positive as negative pair production charges can escape from the event horizon and these will annihilate one another to produce neutral radiation, which produces the right force of gravity. Figure 4 proves that this gravity force is about 10⁴⁰ times weaker than electromagnetism. Another earlier post calculates the Hawking black hole radiation rate and proves it creates the force strength involved in electromagnetism.

(For a background to the elementary basics of quantum field theory and quantum mechanics, like the Schroedinger and Dirac equations and their consequences, see the earlier post on The Physics of Quantum Field Theory. For an introduction to symmetry principles, see the previous post.)

The SU(2) symmetry can model electromagnetism (in addition to isospin) because it models two types of charges, hence giving negative and positive charges without the wrong method U(1) uses (where it specifies there are only negative charges, so positive ones have to be represented by negative charges going backwards in time). In addition, SU(2) gives 3 massless gauge bosons: two charged ones (which mediate the charge in electric fields) and one neutral one (which is the spin-1 graviton, that causes gravity by pushing masses together). In addition, SU(2) describes doublets, matter-antimatter pairs. We know that electrons are not produced individually, only in lepton-antilepton pairs. The reason why electrons can be separated a long distance from their antiparticle (unlike quarks) is simply the nature of the binding force, which is long range electromagnetism instead of a short-range force.

Quantum field theory, i.e., the standard model of particle physics, is based mainly on experimental facts, not speculation. The symmetries of baryons give SU(3) symmetry, those of mesons give SU(2) symmetry. That’s experimental particle physics.

The problem in the standard model SU(3)xSU(2)xU(1) is the last component, the U(1) electromagnetic symmetry. In SU(3) you have three charges (coded red, blue and green) and form triplets of quarks (baryons) bound by 3² − 1 = 8 charged gauge bosons mediating the strong force. For SU(2) you have two charges (two isospin states) and form doublets, i.e., quark-antiquark pairs (mesons) bound by 2² − 1 = 3 gauge bosons (one positively charged, one negatively charged and one neutral).

One problem comes when electromagnetism is represented by U(1) and added to SU(2) to form the electroweak unification, SU(2)xU(1). This means that you have to add a Higgs field which breaks the SU(2)xU(1) symmetry at low energy, by giving masses (at low energy only) to the 3 gauge bosons of SU(2).
At high energy, the masses of those 3 gauge bosons must disappear, so that they are massless, like the photon assumed to mediate the electromagnetic force represented by U(1). The required Higgs field adds mass in the right way for electroweak symmetry breaking to work in the Standard Model, but it adds complexity and isn’t very predictive.

The other, related, problem is that SU(2) only acts on left-handed particles, i.e., particles whose spin is described by a left-handed Weyl spinor. U(1) only has one electric charge, carried by the electron. Feynman represents positrons in this scheme as electrons going backwards in time, and this makes U(1) work, but it has many problems, and a massless version of SU(2) is the correct electromagnetism-gravitational model.

So the correct model for electromagnetism is really SU(2), which has two types of electric charge (positive and negative) and acts on all particles regardless of spin, and is mediated by three types of massless gauge bosons: negative ones for the fields around negative charges, positive ones for positive fields, and neutral ones for gravity.

The question then is, what is the corrected Standard Model? If we delete U(1), do we have to replace it with another SU(2) to get SU(3)xSU(2)xSU(2), or do we just get SU(3)xSU(2), in which SU(2) takes on new meaning, i.e., there is no symmetry breaking?

Assume the symmetry group of the universe is SU(3)xSU(2). That would mean that the new SU(2) interpretation has to do all the work and more of SU(2)xU(1) in the existing Standard Model. The U(1) part of SU(2)xU(1) represented both electromagnetism and weak hypercharge, while SU(2) represented weak isospin.

We need to dump the Higgs field as a source for symmetry breaking, and replace it with a simpler mass-giving mechanism that only gives mass to left-handed Weyl spinors. This is because the electroweak symmetry breaking problem has disappeared.

We have to use SU(2) to represent isospin, weak hypercharge, electromagnetism and gravity. Can it do all that? Can the Standard Model be corrected by simply removing U(1) to leave SU(3)xSU(2) and having the SU(2) produce 3 massless gauge bosons (for electromagnetism and gravity) and 3 massive gauge bosons (for weak interactions)? Can we, in other words, remove the Higgs mechanism for electroweak symmetry breaking and replace it by a simpler mechanism in which the short range of the three massive weak gauge bosons distinguishes electromagnetism (and gravity) from the weak force? The mass-giving field only gives mass to gauge bosons that normally interact with left-handed particles.

What is unnerving is that this compression means that one SU(2) symmetry is generating a lot more physics than in the Standard Model, but in the Standard Model U(1) represented both electric charge and weak hypercharge, so I don’t see any reason why SU(2) shouldn’t represent weak isospin, electromagnetism/gravity and weak hypercharge. The main thing is that because it generates the 3 massless gauge bosons, only half of which need to have mass added to them to act as weak gauge bosons, it has exactly the right field mediators for the forces we require. If it doesn’t work, the alternative replacement to the Standard Model is SU(3)xSU(2)xSU(2), where the first SU(2) is isospin symmetry acting on left-handed particles and the second SU(2) is electrogravity.

Mathematical review

Following from the discussion in previous posts, it is time to correct the errors of the Standard Model, starting with the U(1) phase or gauge invariance.
The use of unitary group U(1) for electromagnetism and weak hypercharge is in error, as shown in various ways in the previous posts here, here, and here. The maths is based on a type of continuous group defined by Sophus Lie in 1873. Dr Woit summarises this very clearly in Not Even Wrong (UK ed., p47):

‘A Lie group … consists of an infinite number of elements continuously connected together. It was the representation theory of these groups that Weyl was studying.

‘A simple example of a Lie group together with a representation is that of the group of rotations of the two-dimensional plane. Given a two-dimensional plane with chosen central point, one can imagine rotating the plane by a given angle about the central point. This is a symmetry of the plane. The thing that is invariant is the distance between a point on the plane and the central point. This is the same before and after the rotation. One can actually define rotations of the plane as precisely those transformations that leave invariant the distance to the central point. There is an infinity of these transformations, but they can all be parametrised by a single number, the angle of rotation.

[Illustration: Argand diagram showing rotation by an angle on the complex plane. Illustration credit: based on Fig. 3.1 in Not Even Wrong.]

‘If one thinks of the plane as the complex plane (the plane whose two coordinates label the real and imaginary part of a complex number), then the rotations can be thought of as corresponding not just to angles, but to a complex number of length one. If one multiplies all points in the complex plane by a given complex number of unit length, one gets the corresponding rotation (this is a simple exercise in manipulating complex numbers). As a result, the group of rotations in the complex plane is often called the ‘unitary group of transformations of one complex variable’, and written U(1).

‘This is a very specific representation of the group U(1), the representation as transformations of the complex plane … one thing to note is that the transformation of rotation by an angle is formally similar to the transformation of a wave by changing its phase [by Fourier analysis, which represents a waveform of wave amplitude versus time as a frequency spectrum graph showing wave amplitude versus wave frequency by decomposing the original waveform into a series which is the sum of a lot of little sine and cosine wave contributions]. Given an initial wave, if one imagines copying it and then making the copy more and more out of phase with the initial wave, sooner or later one will get back to where one started, in phase with the initial wave. This sequence of transformations of the phase of a wave is much like the sequence of rotations of a plane as one increases the angle of rotation from 0 to 360 degrees. Because of this analogy, U(1) symmetry transformations are often called phase transformations. …

‘In general, if one has an arbitrary number N of complex numbers, one can define the group of unitary transformations of N complex variables and denote it U(N). It turns out that it is a good idea to break these transformations into two parts: the part that just multiplies all of the N complex numbers by the same unit complex number (this part is a U(1) like before), and the rest. The second part is where all the complexity is, and it is given the name of special unitary transformations of N (complex) variables and denoted SU(N).
Part of Weyl’s achievement consisted in a complete understanding of the representations of SU(N), for any N, no matter how large.

‘In the case N = 1, SU(1) is just the trivial group with one element. The first non-trivial case is that of SU(2) … very closely related to the group of rotations in three real dimensions … the group of special orthagonal transformations of three (real) variables … group SO(3). The precise relation between SO(3) and SU(2) is that each rotation in three dimensions corresponds to two distinct elements of SU(2), or SU(2) is in some sense a doubled version of SO(3).’

Hermann Weyl and Eugene Wigner discovered that Lie groups of complex symmetries represent quantum field theory. In 1954, Chen Ning Yang and Robert Mills developed a theory of photon (spin-1 boson) mediator interactions in which the spin of the photon changes the quantum state of the matter emitting or receiving it via inducing a rotation in a Lie group symmetry. The amplitude for such emissions is forced, by an empirical coupling constant insertion, to give the measured Coulomb value for the electromagnetic interaction. Gerard ‘t Hooft and Martinus Veltman in 1970 argued that the Yang-Mills theory is renormalizable, so the problem of running couplings having no limits can be cut off at effective limits to make the theory work (Yang-Mills theories use non-commutative algebra, usually called non-commutative geometry). The photon Yang-Mills theory is U(1). Equivalent Yang-Mills interaction theories of the strong force SU(3) and the weak force isospin group SU(2), in conjunction with the U(1) force, result in the symmetry group SU(3) x SU(2) x U(1), which is the Standard Model. Here the SU(2) group must act only on left-handed spinning fermions, breaking the conservation of parity.

Dr Woit’s Not Even Wrong at pages 98-100 summarises the problems in the Standard Model. While SU(3) ‘has the beautiful property of having no free parameters’, the SU(2)xU(1) electroweak symmetry does introduce two free parameters: alpha and the mass of the speculative ‘Higgs boson’. However, from solid facts, alpha is not a free parameter but the shielding ratio of the bare core charge of an electron by virtual fermion pairs being polarized in the vacuum and absorbing energy from the field to create short range forces:

“This shielding factor of alpha can actually be obtained by working out the bare core charge (within the polarized vacuum) as follows. Heisenberg’s uncertainty principle says that the product of the uncertainties in momentum and distance is on the order of h-bar. The uncertainty in momentum p = mc, while the uncertainty in distance is x = ct. Hence the product of momentum and distance, px = (mc).(ct) = Et where E is energy (Einstein’s mass-energy equivalence). Although we have had to assume mass temporarily here before getting an energy version, this is just what Professor Zee does as a simplification in trying to explain forces with mainstream quantum field theory (see previous post). In fact this relationship, i.e., the product of energy and time equalling h-bar, is widely used for the relationship between particle energy and lifetime. The maximum possible range of the particle is equal to its lifetime multiplied by its velocity, which is generally close to c in relativistic, high energy particle phenomenology.
Now for the slightly clever bit: px = h-bar implies (when remembering p = mc, and E = mc²): x = h-bar/p = h-bar/(mc) = h-bar*c/E, so E = h-bar*c/x. When using the classical definition of energy as force times distance (E = Fx): F = E/x = (h-bar*c/x)/x = h-bar*c/x².

“So we get the quantum electrodynamic force between the bare cores of two fundamental unit charges, including the inverse square distance law! This can be compared directly to Coulomb’s law, which is the empirically obtained force at large distances (screened charges, not bare charges), and such a comparison tells us exactly how much shielding of the bare core charge there is by the vacuum between the IR and UV cutoffs. So we have proof that the renormalization of the bare core charge of the electron is due to shielding by a factor of alpha. The bare core charge of an electron is 137.036… times the observed long-range (low energy) unit electronic charge. All of the shielding occurs within a range of just 1 fm, because by Schwinger’s calculations the electric field strength of the electron is too weak at greater distances to cause spontaneous pair production from the Dirac sea, so at greater distances there are no pairs of virtual charges in the vacuum which can polarize and so shield the electron’s charge any more.

“One argument that can superficially be made against this calculation (nobody has brought this up as an objection to my knowledge, but it is worth mentioning anyway) is the assumption that the uncertainty in distance is equivalent to real distance in the classical expression that work energy is force times distance. However, since the range of the particle given, in Yukawa’s theory, by the uncertainty principle is the range over which the momentum of the particle falls to zero, it is obvious that the Heisenberg uncertainty range is equivalent to the range of distance moved which corresponds to force by E = Fx. For the particle to be stopped over the range allowed by the uncertainty principle, a corresponding force must be involved. This is more pertinent to the short range nuclear forces mediated by massive gauge bosons, obviously, than to the long range forces.
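The shielding-ratio comparison above (bare-core force h-bar*c/x² versus Coulomb’s law) amounts to the statement that the ratio of the two forces is 1/alpha. That arithmetic can be checked directly; the snippet below is only a check of the numerical ratio, not of the surrounding interpretation.

```python
from scipy.constants import hbar, c, e, epsilon_0, pi

x = 1e-15                      # any distance: it cancels in the ratio
F_bare = hbar * c / x**2       # heuristic 'bare core' force from the text
F_coulomb = e**2 / (4 * pi * epsilon_0 * x**2)   # observed long-range force
print(F_bare / F_coulomb)      # ~137.036 = 1/alpha, the claimed shielding
```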
“Experimental evidence: “In particular: As for the ‘Higgs boson’ mass that gives mass to particles, there is evidence of its value. On page 98 of Not Even Wrong, Dr Woit points out: ‘Another related concern is that the U(1) part of the gauge theory is not asymptotically free, and as a result it may not be completely mathematically consistent.’ He adds that it is a mystery why only left-handed particles experience the SU(2) force, and on page 99 points out that: ‘the standard quantum field theory description for a Higgs field is not asymptotically free and, again, one worries about its mathematical consistency.’ Another thing is that the 9 masses of quarks and leptons have to be put into the Standard Model by hand, together with 4 mixing angles to describe the interaction strength of the Higgs field with different particles, adding 13 numbers to the Standard Model which one wants to be explained and predicted.

Important symmetries:

1. ‘electric charge rotation’ would transform quarks into leptons and vice-versa within a given family: this is described by unitary group U(1). U(1) deals with just 1 type of charge: negative charge, i.e., it ignores positive charge, which is treated as a negative charge travelling backwards in time (Feynman’s fatally flawed model of a positron or anti-electron), and it deals with solitary particles (which don’t actually exist, since particles are always produced and annihilated as pairs). U(1) is therefore false when used as a model for electromagnetism, as we will explain in detail in this post. U(1) also represents weak hypercharge, which is similar to electric charge.

2. ‘isospin rotation’ would switch the two quarks of a given family, or would switch the lepton and neutrino of a given family: this is described by symmetry unitary group SU(2). Isospin rotation leads directly to the symmetry unitary group SU(2), i.e., rotations in imaginary space with 2 complex co-ordinates generated by 3 operations: the W+, W-, and Z0 gauge bosons of the weak force. These massive weak bosons only interact with left-handed particles (left-handed Weyl spinors). SU(2) describes doublets, matter-antimatter pairs such as mesons and (as this blog post is arguing) lepton-antilepton charge pairs in general (electric charge mechanism as well as weak isospin).

3. ‘colour rotation’ would change quarks between colour charges (red, blue, green): this is described by symmetry unitary group SU(3). Colour rotation leads directly to the Standard Model symmetry unitary group SU(3), i.e., rotations in imaginary space with 3 complex co-ordinates generated by 8 operations, the strong force gluons. There is also the concept of ‘flavor’, referring to the different types of quarks (up and down, strange and charm, top and bottom). SU(3) describes triplets of charges, i.e. baryons.

U(1) is a relatively simple phase-transformation symmetry which has a single group generator, leading to a single electric charge. (Hence, you have to treat positive charge as electrons moving backwards in time to make it incorporate antimatter! This is false because things don’t travel backwards in time; it violates causality, because we can use pair-production – e.g. electron and positron pairs created by the shielding of gamma rays from cobalt-60 using lead – to create positrons and electrons at the same time, when we choose.) Moreover, it also only gives rise to one type of massless gauge boson, which means it fails to predict the strength of electromagnetism and the causal mechanism of electromagnetism (attractions between dissimilar charges, repulsions between similar charges, etc.). SU(2) must be used to model the causal mechanism of electromagnetism and gravity: two charged massless gauge bosons mediate electromagnetic forces, while the neutral massless gauge boson mediates gravitation. Both the detailed mechanism for the forces and the strengths of the interactions (as well as various other predictions) arise automatically from SU(2) with massless gauge bosons replacing U(1).

Fig. 1: The imaginary U(1) gauge invariance of quantum electrodynamics (QED) simply consists of a description of the interaction of a photon with an electron (e is the coupling constant, the effective electric charge after allowing for shielding by the polarized vacuum if the interaction is at high energy, i.e., above the IR cutoff).
When the electron’s field undergoes a local phase change, a gauge field quantum called a ‘virtual photon’ is produced, which keeps the Lagrangian invariant; this is how gauge symmetry is supposed to work for U(1).

This doesn’t adequately describe the mechanism by which electromagnetic gauge bosons produce electromagnetic forces! It’s just too simplistic: the moving electron is viewed as a current, and the photon (field phase) affects that current by interacting with the electron. There is nothing wrong with this simple scheme, but it has nothing to do with the detailed causal, predictive mechanism for electromagnetic attraction and repulsion, and to make this virtual-photon-as-gauge-boson idea work for electromagnetism, you have to add two extra polarizations to the normal two polarizations (electric and magnetic field vectors) of ordinary photons. You might as well replace the photon by two charged massless gauge bosons, instead of adding two extra polarizations! You have so much more to gain from using the correct physics than from adding extra epicycles to a false model to ‘make it work’.

This is Feynman’s explanation in his book QED, Penguin, 1990, p. 120:

‘Photons, it turns out, come in four different varieties, called polarizations, that are related geometrically to the directions of space and time. Thus there are photons polarized in the [spatial] X, Y, Z, and [time] T directions. (Perhaps you have heard somewhere that light comes in only two states of polarization – for example, a photon going in the Z direction can be polarized at right angles, either in the X or Y direction. Well, you guessed it: in situations where the photon goes a long distance and appears to go at the speed of light, the amplitudes for the Z and T terms exactly cancel out. But for virtual photons going between a proton and an electron in an atom, it is the T component that is the most important.)’

The gauge bosons of the mainstream electromagnetic model U(1) are therefore supposed to consist of photons with 4 polarizations, not 2. However, U(1) has only one type of electric charge: negative charge. Positive charge is antimatter and is not included. But in the real universe there is as much positive as negative charge around!

We can see this error of U(1) more clearly when considering the SU(3) strong force: the 3 in SU(3) tells us there are three types of colour charges, red, blue and green. The anti-charges are anti-red, anti-blue and anti-green, but these anti-charges are not included in the count of 3. Similarly, U(1) only contains one electric charge, negative charge. To make it a reliable and complete theory, predicting everything, it should contain 2 electric charges, positive and negative, and 3 gauge bosons: positively charged massless photons for mediating positive electric fields, negatively charged massless photons for mediating negative electric fields, and neutral massless photons for mediating gravitation. The way this correct SU(2) electrogravity unification works was clearly explained in Figures 4 and 5 of the earlier post.

Basically, photons are neutral because if they were charged as well as being massless, the magnetic field generated by their motion would produce infinite self-inductance. The photon has two charges (a positive electric field and a negative electric field), which each produce magnetic fields with opposite curls, cancelling one another and allowing the photon to propagate:
Fig. 2: Charged gauge boson mechanism for electromagnetism, as illustrated by the Catt-Davidson-Walton work on charging up transmission lines like capacitors and checking what happens when you discharge the energy through a sampling oscilloscope. They found evidence, discussed in detail in previous posts on this blog, that the existence of an electric field is represented by two opposite-travelling (gauge boson radiation) light-velocity field quanta: while overlapping, the electric fields of each add up (reinforce), but the magnetic fields disappear, because the curls of the magnetic field components cancel once there is equilibrium of the exchange radiation going along the same path in opposite directions. Hence, electric fields are due to charged, massless gauge bosons with Poynting vectors, being exchanged between fermions. Magnetic fields are cancelled out in certain configurations (such as that illustrated), but in other situations, where you send two gauge bosons of opposite charge through one another (in the figure the gauge bosons modelled by electricity have the same charge), you find that the electric field vectors cancel out to give an electrically neutral field, while the magnetic field curls can then add up, explaining magnetism.

The evidence for Fig. 2 is presented near the end of Catt’s March 1983 Wireless World article called ‘Waves in Space’ (typically unavailable on the internet, because Catt won’t make available the most useful of his papers for free): when you charge up x metres of cable to v volts, you do so at light speed, and there is no mechanism for the electromagnetic energy to slow down when the energy enters the cable. The nearest page Catt has online about this is here: the battery terminals of a v volt battery are indeed at v volts before you connect a transmission line to them, but that’s just because those terminals have been charged up by field energy which is flowing in all directions at light velocity, so only half of the total energy, corresponding to v/2 volts, is going one way and half is going the other way. Connect anything to that battery and the initial (transient) output at light speed is only half the battery potential; the full battery potential only appears in a cable connected to the battery when the energy has gone to the far end of the cable at light speed and reflected back, adding to further in-flowing energy from the battery on the return trip, and charging the cable to v/2 + v/2 = v volts.

Because electricity is so fast (light speed for the insulator), early investigators like Ampère and Maxwell (who candidly wrote in the 1873 edition of his Treatise on Electricity and Magnetism, 3rd ed., Article 574: ‘… there is, as yet, no experimental evidence to shew whether the electric current… velocity is great or small as measured in feet per second. …’) had no idea whatsoever of this crucial evidence, which shows what electricity is all about. So when you discharge the cable, instead of getting a pulse at v volts coming out with a length of x metres (i.e., taking a time of t = x/c seconds), you instead get just what is predicted by Fig. 2: a pulse of v/2 volts taking 2x/c seconds to exit. In other words, the half of the energy already moving towards the exit end exits first. That gives a pulse of v/2 volts lasting x/c seconds. Then the half of the energy initially going the wrong way has had time to go to the far end, reflect back, and follow the first half of the energy. This gives the second half of the output: another pulse of v/2 volts, lasting for another x/c seconds and following straight on from the first pulse. Hence, the observer measures an output of v/2 volts lasting for a total duration of 2x/c seconds. This is experimental fact.
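A minimal numerical sketch of this discharge prediction (illustrative cable length and voltage, assuming the text’s light-speed propagation in the insulator):

```python
# Sketch: discharge of a transmission line of length x, charged to v volts.
# The claim described above: the output is a pulse of v/2 volts lasting 2x/c,
# rather than the naive v volts lasting x/c.
c_ins = 3e8    # assumed propagation speed in the insulator, m/s (light speed, per the text)
x = 10.0       # cable length, metres (illustrative)
v = 10.0       # charge-up potential, volts (illustrative)

naive = (v, 1e9 * x / c_ins)             # (volts, duration in nanoseconds)
observed = (v / 2, 1e9 * 2 * x / c_ins)  # halved voltage, doubled duration
print("naive prediction:     %.1f V for %.1f ns" % naive)
print("Catt et al. observed: %.1f V for %.1f ns" % observed)
```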
It was Oliver Heaviside – who translated Maxwell’s 20 long-hand differential equations into the four vector equations (two divs, two curls) – who experimentally discovered the first evidence for this when solving problems with the Newcastle-Denmark undersea telegraph cable in 1875, using ‘Morse Code’ (logic signals). Heaviside’s theory is physically flawed because he treated rise times as instantaneous, a flaw inherited by Catt, Davidson, and Walton, which blocks a complete understanding of the mechanisms at work. The Catt, Davidson and Walton history is summarised here. [The original Catt-Davidson-Walton paper can be found here (first page) and here (second page), although it contains various errors. My discussion of it is here. For a discussion of the two major awards Catt received for his invention of the first ever practical wafer-scale memory to come to market – a £16 million British government and foreign sponsored 160 MB ‘chip’ wafer back in 1988 – despite censorship such as the New Scientist of 12 June 1986, p. 35, quoting anonymous sources who called Catt ‘either a crank or visionary’, see this earlier post and the links it contains. Note that the editors of New Scientist are still vandals today. Jeremy Webb, current editor of New Scientist, graduated in physics and solid state electronics, so he has no good excuse for finding this stuff – physics and electronics – over his head. The previous editor to Jeremy was Dr Alum M. Anderson, who on 2 June 1997 wrote to me the following insult to my intelligence: ‘I’ve looked through the files and can assure you that we have no wish to suppress the discoveries of Ivor Catt nor do we publish only articles from famous people. You should understand that New Scientist is not a primary journal and does not publish the first accounts of new experiments and original theories. These are better submitted to an academic journal where they can be subject to the usual scientific review. New Scientist does not maintain the large panel of scientific referees necessary for this review process. I’m sure you understand that science is now a gigantic enterprise and a small number of scientifically-trained journalists are not the right people to decide which experiments and theories are correct. My advice would be to select an appropriate journal with a good reputation and send Mr Catt’s work there. Should Mr Catt’s theories be accepted and published, I don’t doubt that he will gain recognition and that we will be interested in writing about him.’ Both Catt and I had already sent Dr Anderson abstracts from Catt’s peer-reviewed papers, such as IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67, and Proc. IEE, June 83 and June 87, as well as a summary of the book Digital Hardware Design by Catt et al., published by Macmillan in 1979. I wrote again to Dr Anderson with this information, but he never published it; Catt on 9 June 1997 published his response on the internet, which he carbon copied to the editor of New Scientist. Years later, when Jeremy Webb had taken over, I corresponded with him by email. The first time Jeremy responded was on an evening in Dec 2002, and all he wrote was a tirade about his email box being full when writing a last-minute editorial.
I politely replied that time, and then sent him by recorded delivery a copy of the Electronics World January 2003 issue with my cover story about Catt’s latest invention for saving lives. He never acknowledged it or responded. When I called the office politely, his assistant was rude and said she had thrown it away unread, without him seeing it! I sent another, but yet again Jeremy wasted time and didn’t publish a thing. According to the Daily Telegraph, 24 Aug. 2005: ‘Prof Heinz Wolff complained that cosmology is “religion, not science.” Jeremy Webb of New Scientist responded that it is not religion but magic. … “If I want to sell more copies of New Scientist, I put cosmology on the cover,” said Jeremy.’ But even when Catt’s stuff was applied to cosmology in Electronics World Aug. 02 and Apr. 03, it was still ignored by New Scientist. Helene Guldberg has written a ‘Spiked Science’ article called Eco-evangelism about Jeremy Webb’s bigoted policies and sheer rudeness, while Professor John Baez has publicised the decline of New Scientist due to the junk they publish in place of solid physics. To be fair, Jeremy was polite to Prime Minister Tony Blair, however. I should also add that Catt is extremely rude in refusing to discuss facts. Just because he has a few new solid facts which have been censored out of mainstream discussion even after peer-reviewed publication, he incorrectly thinks that his vast assortment of more half-baked speculations is equally justified. For example, he refuses to discuss or co-author a paper on the model here. Catt does not understand Maxwell’s equations (he thinks that if you simply ignore 18 of the 20 long-hand Maxwell differential equations, and observe that – since the remaining 2 equations in one spatial dimension contain two vital constants – reducing the number of spatial dimensions from 3 to 1 changes little, that means Maxwell’s equations are ‘shocking … nonsense’; and he refuses to accept that he is talking complete rubbish in this empty argument), and since he won’t discuss physics he is not a general physics authority, although he is an expert in experimental research on logic signals, e.g., his paper in IEEE Trans. on Electronic Computers, vol. EC-16, no. 6, Dec. 67.]

Fig. 3: Coulomb force mechanism for electrically charged massless gauge bosons. The SU(2) electrogravity mechanism. Think of two flak-jacket protected soldiers firing submachine guns towards one another, while from a great distance other soldiers (who are receding from the conflict) fire bullets in at both of them. They will repel, because of the net outward force on them, due to successive impulses both from bullet strikes received on the sides facing one another, and from recoil as they fire bullets. The bullets hitting their backs have relatively smaller impulses, since they are coming from large distances and so, due to drag effects, their force will be nearly spent upon arrival (analogous to the redshift of radiation emitted towards us by the bulk of the receding matter, at great distances, in our universe). That explains the electromagnetic repulsion physically. Now think of the two soldiers as comrades surrounded by a mass of armed savages, approaching from all sides. The soldiers stand back to back, shielding one another’s back, and fire their submachine guns outward at the crowd.
In this situation, they attract, because of a net inward acceleration on them, pushing their backs towards one another, both due to the recoils of the bullets they fire, and from the strikes each receives from bullets fired in at them. When you add up the arrows in this diagram, you find that attractive forces between dissimilar unit charges have equal magnitude to repulsive forces between similar unit charges. This theory holds water!

This predicts the right strength of gravity, because the charged gauge bosons will cause the effective potential of those fields, in radiation exchanges between similar charges throughout the universe (drunkard’s walk statistics), to multiply up the average potential between two charges by a factor equal to the square root of the number of charges in the universe. This is so because any straight-line summation will on average encounter similar numbers of positive and negative charges, as they are randomly distributed, so such a linear summation of the charges between which gauge bosons are exchanged cancels out. However, if the paths of gauge bosons exchanged between similar charges are considered, you do get a net summation.

Fig. 4: Charged gauge bosons mechanism and how the potential adds up, predicting the relatively intense strength (large coupling constant) for electromagnetism relative to gravity according to the path-integral Yang-Mills formulation. For gravity, the gravitons (like photons) are uncharged, so there is no adding up possible. But for electromagnetism, the attractive and repulsive forces are explained by charged gauge bosons. Notice that massless charged electromagnetic radiation (i.e., charged particles going at light velocity) is forbidden in electromagnetic theory (on account of the infinite amount of self-inductance created by the uncancelled magnetic field of such radiation!) only if the radiation is going solely in one direction, and this is obviously not the case for Yang-Mills exchange radiation, where the radiant power of the exchange radiation from charge A to charge B is the same as that from charge B to charge A (in situations of equilibrium, which quickly establish themselves). Where you have radiation going in opposite directions at the same time, the handedness of the curl of the magnetic field is such that it cancels the magnetic fields completely, preventing the self-inductance issue. Therefore, although you can never radiate a charged massless radiation beam in one direction, such beams do radiate in two directions while overlapping. This is of course what happens with the simple capacitor consisting of conductors with a vacuum dielectric: electricity enters as electromagnetic energy at light velocity and never slows down. When the charging stops, the trapped energy in the capacitor travels in all directions, in equilibrium, so magnetic fields cancel and can’t be observed. This is proved by discharging such a capacitor and measuring the output pulse with a sampling oscilloscope.

The price of the random walk statistics needed to describe such a zig-zag summation (avoiding opposite charges!)
is that the net force is not approximately 10⁸⁰ times the force of gravity between a single pair of charges (as it would be if you simply added up all the charges in a coherent way, like a line of aligned charged capacitors, with linearly increasing electric potential along the line), but is the square root of that multiplication factor on account of the zig-zag inefficiency of the sum, i.e., about 10⁴⁰ times gravity. Hence, the fact that equal numbers of positive and negative charges are randomly distributed throughout the universe makes electromagnetism only 10⁴⁰/10⁸⁰ = 10⁻⁴⁰ as strong as it would be if all the charges were aligned in a row, like a row of charged capacitors (or batteries) in a series circuit. Since there are around 10⁸⁰ randomly distributed charges, electromagnetism – as multiplied up by the fact that charged massless gauge bosons are Yang-Mills radiation being exchanged between all charges (including all charges of similar sign) – is 10⁴⁰ times gravity.

You could picture this summation by the physical analogy of a lot of charged capacitor plates in space, with the vacuum as the dielectric between the plates. If the capacitor plates come with two opposite charges and are all over the place at random, the average addition of potential works out as that between one pair of charged plates multiplied by the square root of the total number of pairs of plates. This is because of the geometry of the addition. Intuitively, you may incorrectly think that the sum must be zero because on average it will cancel out. However, it isn’t, and it is like the diffusive drunkard’s walk, where the average distance travelled is equal to the average length of a step multiplied by the square root of the number of steps. If you average a large number of different random walks, because they will all have random net directions, the vector sum is indeed zero. But for an individual drunkard’s walk, there is the statistical result that a net displacement does occur; this is the basis for diffusion. (A numerical illustration of this square-root-of-N summation is sketched below.) On average, gauge bosons spend as much time moving away from us as towards us while being exchanged between the charges of the universe, so the average effect of divergence is exactly cancelled by the average convergence, simplifying the calculation. This model also explains why electromagnetism is attractive between dissimilar charges and repulsive between similar charges. For some of the many quantitative predictions and tests of this model, see previous posts such as this one.

SU(2), as used in the SU(2)xU(1) electroweak symmetry group, applies only to left-handed particles. So it’s pretty obvious that half the potential application of SU(2) is somehow being missed out in SU(2)xU(1). SU(2) is fairly similar to U(1) in Fig. 1 above, except that SU(2) involves 2² − 1 = 3 types of charges (positive, negative and neutral), which (by moving) generate 2 types of charged currents (positive and negative currents) and 1 neutral current (i.e., the motion of an uncharged particle produces a neutral current, by analogy to the process whereby the motion of a charged particle produces a charged current), requiring 3 types of gauge boson (W+, W-, and Z0). For weak interactions we need the whole of SU(2)xU(1), because SU(2) models weak isospin by using electric charges as generators, while U(1) is used to represent weak hypercharge, which looks almost identical to Fig. 1 (which illustrates the use of U(1) for quantum electrodynamics).
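Here is a minimal Monte Carlo sketch of the √N claim (my own illustration, not from the original argument): summing N unit potentials with random signs gives a typical (root-mean-square) net magnitude of √N, not 0 and not N.

```python
# Sketch: drunkard's-walk summation. Adding N contributions of random sign
# gives a typical (RMS) net magnitude of sqrt(N).
import random, math

def rms_net_sum(n_charges, trials=2000):
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n_charges))
        total += s * s
    return math.sqrt(total / trials)  # RMS of the net sum over many walks

for n in (100, 10000):
    print(n, rms_net_sum(n), math.sqrt(n))  # RMS net sum tracks sqrt(N)
```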
The SU(2) isospin part of the weak interaction SU(2)xU(1) applies only to left-handed fermions, while the U(1) weak hypercharge part applies to both types of handedness, although the weak hypercharges of left- and right-handed fermions are not the same (see the earlier post for the weak hypercharges of fermions with different spin handedness).

It is interesting that the correct SU(2) symmetry predicts massless versions of the weak gauge bosons (W+, W-, and Z0). The mainstream then goes to a lot of trouble to make them massive by adding some kind of speculative Higgs field, without considering whether the massless versions really exist as the proper gauge bosons of electromagnetism and gravity. A lot of the problem is that the self-interaction of charged massless gauge bosons is a benefit in explaining the mechanism of electromagnetism (since two similarly charged electromagnetic energy currents flowing through one another cancel out each other’s magnetic fields, preventing infinite self-inductance, and allowing charged massless radiation to propagate freely so long as it is exchange radiation in equilibrium, with equal amounts flowing from charge A to charge B as flow from charge B to charge A; see Fig. 5 of the earlier post here). Instead of seeing how the mutual interactions of charged gauge bosons allow exchange radiation to propagate freely without complexity, the mainstream opinion is that this might (it can’t) cause infinities because of the interactions. Therefore, the mainstream (false) consensus is that weak gauge bosons have to have a great mass, simply in order to remove an enormous number of unwanted complex interactions! They simply are not looking at the physics correctly.

U(2) and unification

Dr Woit has some ideas on how to proceed with the Standard Model: ‘Supersymmetric quantum mechanics, spinors and the standard model’, Nuclear Physics, v. B303 (1988), pp. 329-42; and ‘Topological quantum theories and representation theory’, Differential Geometric Methods in Theoretical Physics: Physics and Geometry, Proceedings of NATO Advanced Research Workshop, Ling-Lie Chau and Werner Nahm, Eds., Plenum Press, 1990, pp. 533-45. He summarises the approach in:

‘… [the theory] should be defined over a Euclidean signature four dimensional space since even the simplest free quantum field theory path integral is ill-defined in a Minkowski signature. If one chooses a complex structure at each point in space-time, one picks out a U(2) [is a proper subset of] SO(4) (perhaps better thought of as a U(2) [is a proper subset of] Spin^c (4)) and … it is argued that one can consistently think of this as an internal symmetry. Now recall our construction of the spin representation for Spin(2n) as Λ*(C^n) applied to a ‘vacuum’ vector.

‘Under U(2), the spin representation has the quantum numbers of a standard model generation of leptons… A generation of quarks has the same transformation properties except that one has to take the ‘vacuum’ vector to transform under the U(1) with charge 4/3, which is the charge that makes the overall average U(1) charge of a generation of leptons and quarks to be zero.
The above comments are … just meant to indicate how the most basic geometry of spinors and Clifford algebras in low dimensions is rich enough to encompass the standard model and seems to be naturally reflected in the electro-weak symmetry properties of Standard Model particles…’

The SU(3) strong force (colour charge) gauge symmetry

The SU(3) strong interaction – which has 3 colour charges (red, blue, green) and 3² − 1 = 8 gauge bosons – is again virtually identical to the U(1) scheme in Fig. 1 above (except that there are 3 charges and 8 spin-1 gauge bosons called gluons, instead of the alleged 1 charge and 1 gauge boson in the flawed U(1) model of QED, and the 8 gluons carry colour charge, whereas the photons of U(1) are uncharged). The SU(3) symmetry is actually correct because it is an empirical model based on observed particle physics, and the fact that the gauge bosons of SU(3) do carry colour makes it a proper causal model of short-range strong interactions, unlike U(1). For an example of the evidence for SU(3), see the illustration and history discussion in this earlier post.

SU(3) is based on an observed (empirical, experimentally determined) particle physics symmetry scheme called the eightfold way. This is pretty solid experimentally, and summarised all the high energy particle physics experiments from about the end of WWII to the late 1960s. SU(2) describes the mesons, which were originally studied in natural cosmic radiation (pions were the first mesons discovered, and they were found in cosmic radiation from outer space in 1947, at Bristol University). A type of meson, the pion, is the long-range mediator of the strong nuclear force between nucleons (neutrons and protons), which normally prevents the nuclei of atoms from exploding under the immense Coulomb repulsion of having many protons confined in the small space of the nucleus. The pion was accepted as the gauge boson of the strong force predicted by the Japanese physicist Yukawa, who in 1949 was awarded the Nobel Prize for predicting that meson right back in 1935. So there is plenty of evidence for both SU(3) colour forces and SU(2) isospin. The problems all arise from U(1).

To give an example of how SU(3) works well with charged gauge bosons, gluons, remember that this property of gluons is responsible for the major discovery of asymptotic freedom of confined quarks. What happens is that the mutual interference of the 8 different types of charged gluons with pairs of virtual quarks and virtual antiquarks at very small distances between particles (high energy) weakens the colour force. The gluon-gluon interactions screen the colour charge at short distances because each gluon carries two colour charges (a colour and an anticolour). If each gluon contained just one colour charge, like the virtual fermions in pair production in QED, then the screening effect would be most significant at large, rather than short, distances. Because the effective colour charge diminishes at very short distances, over a particular range of distances this fall in colour charge as you get closer offsets the inverse-square force law effect (the divergence of effective field lines), so the quarks are completely free – within given limits of distance – to move around within a neutron or a proton. This is asymptotic freedom, an idea from SU(3) that was published in 1973 and resulted in Nobel prizes in 2004.
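As an aside, the running of the SU(3) coupling that underlies asymptotic freedom can be illustrated with the standard one-loop QCD formula, α_s(Q) = 12π/[(33 − 2n_f) ln(Q²/Λ²)] (textbook QCD, not part of the argument above); the Λ ≈ 0.2 GeV scale and the active flavour numbers are assumed for illustration, and the outputs can be compared with the experimental values quoted below.

```python
# Sketch: standard one-loop QCD running coupling,
# alpha_s(Q) = 12*pi / ((33 - 2*n_f) * ln(Q^2 / Lambda^2)).
# Lambda ~ 0.2 GeV and the flavour numbers n_f are assumptions for illustration.
import math

def alpha_s(Q_GeV, n_f, Lambda=0.2):
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q_GeV**2 / Lambda**2))

print(alpha_s(2, n_f=4))    # ~0.33, vs the quoted 0.35 at 2 GeV
print(alpha_s(7, n_f=5))    # ~0.23, vs the quoted 0.2 at 7 GeV
print(alpha_s(200, n_f=5))  # ~0.12, vs the quoted 0.1 at 200 GeV
```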
Although colour charges are confined in this way, some strong force ‘leaks out’ as virtual hadrons like neutral pions and rho particles, which account for the strong force on the scale of nuclear physics (a much larger scale than is the case in fundamental particle physics): the mechanism here is similar to the way that atoms which are electrically neutral as a whole can still attract one another to form molecules, because there is a residual of the electromagnetic force left over. The strong interaction weakens exponentially, in addition to the usual fall in potential (1/distance) or force (inverse square law), so at large distances compared to the size of the nucleus it is effectively zero. Only electromagnetic and gravitational forces are significant at greater distances. The weak force is very similar to the electromagnetic force but is short-ranged because the gauge bosons of the weak force are massive. The massiveness of the weak force gauge bosons also reduces the strength of the weak interaction compared to electromagnetism.

The mechanism for the fall in colour charge coupling strength due to interference of charged gauge bosons is not the whole story. Where is the energy of the field going, when the effective charge falls as you get closer to the middle? Obvious answer: the energy lost from the strong colour charges goes into the electromagnetic charge. Remember, short-range field charges fall as you get closer to the particle core, while electromagnetic charges increase; these are empirical facts. The strong charge decreases sharply from about 137e at the greatest distances it extends to (via pions) to around 0.15e at 91 GeV, while over the same range of scattering energies (which are approximately inversely proportional to the distance from the particle core), the electromagnetic charge has been observed to increase by 7%. We need to apply a new type of continuity equation to the conservation of gauge boson exchange radiation energy of all types, in order to deduce vital new physical insights from the comparison of these figures for charge variation as a function of distance. The suggested mechanism in a previous post is:

‘We have to understand Maxwell’s equations in terms of the gauge boson exchange process for causing forces and the polarised vacuum shielding process for unifying forces into a unified force at very high energy. If you have one force (electromagnetism) increase, more energy is carried by virtual photons at the expense of something else, say gluons. So the strong nuclear force will lose strength as the electromagnetic force gains strength. Thus simple conservation of energy will explain and allow predictions to be made on the correct variation of force strengths mediated by different gauge bosons. When you do this properly, you learn first that stringy supersymmetry isn’t needed and second that it is quantitatively plain wrong. At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so. So the strong force falls off in strength as you get closer via higher-energy collisions, while the electromagnetic force increases!
Conservation of gauge boson mass-energy suggests that energy being shielded from the electromagnetic force by polarized pairs of vacuum charges is used to power the strong force, allowing quantitative predictions to be made and tested, debunking supersymmetry and existing unification pipe dreams.’

Force strengths as a function of distance from a particle core

I’ve written previously that the existing graphs showing U(1), SU(2) and SU(3) force strengths as a function of energy are pretty meaningless; they do not specify which particles are under consideration. If you scatter leptons at energies up to those which have so far been available for experiments, they don’t exhibit any strong force SU(3) interactions.

What should be plotted is effective strong, weak and electromagnetic charge as a function of distance from particles. This is easily deduced, because the distance of closest approach of two charged particles in a head-on scatter reaction is easily calculated: as they approach with a given initial kinetic energy, the repulsive force between them increases, which slows them down until they stop at a particular distance, and they are then repelled away. So you simply equate the initial kinetic energy of the particles with the potential energy of the repulsive force as a function of distance, and solve for distance. The initial kinetic energy is radiated away as radiation as they decelerate. There is some evidence from particle collision experiments that the SU(3) effective charge really does decrease as you get closer to quarks, while the electromagnetic charge increases. Levine and Koltick published in PRL (v. 78, 1997, no. 3, p. 424) in 1997 that the electron’s charge increases from e to 1.07e as you go from low energy physics to collisions of electrons at an energy of 91 GeV, i.e., a 7% increase in charge. At low energies, the experimentally determined strong nuclear force coupling constant, which is a measure of effective charge, is alpha = 1, which is about 137 times the Coulomb law, but it falls to 0.35 at a collision energy of 2 GeV, 0.2 at 7 GeV, and 0.1 at 200 GeV or so.

The full investigation of running couplings and the proper unification of the corrected Standard Model is the next priority for detailed investigation. (Some details of the mechanism can be found in several other recent posts on this blog, e.g., here.)

‘The observed coupling constant for W’s is much the same as that for the photon – in the neighborhood of j [Feynman’s symbol j is related to alpha by alpha = j² = 1/137.036…]. Therefore the possibility exists that the three W’s and the photon are all different aspects of the same thing. [This seems to be the case, given how the handedness of the particles allows them to couple to massive particles, explaining masses, chiral symmetry, and what is now referred to in the SU(2)xU(1) scheme as ‘electroweak symmetry breaking’.] Steven Weinberg and Abdus Salam tried to combine quantum electrodynamics with what’s called the ‘weak interactions’ (interactions with W’s) into one quantum theory, and they did it. But if you just look at the results they get you can see the glue [Higgs mechanism problems], so to speak.
It’s very clear that the photon and the three W’s [W+, W-, and W0/Z0 gauge bosons] are interconnected somehow, but at the present level of understanding, the connection is difficult to see clearly – you can still see the ‘seams’ [Higgs mechanism problems] in the theories; they have not yet been smoothed out so that the connection becomes … more correct.’ [Emphasis added.] – R. P. Feynman, QED, Penguin, 1990, pp. 141-142.

Mechanism for loop quantum gravity with spin-1 (not spin-2) gravitons

Peter Woit gives a discussion of the basic principle of LQG in his book. I watched Lee Smolin’s Perimeter Institute lectures, ‘Introduction to Quantum Gravity’, and he explains that loop quantum gravity is the idea of applying the path integrals of quantum field theory to quantize gravity by summing over interaction history graphs in a network (such as a Penrose spin network) which represents the quantum mechanical vacuum through which vector bosons such as gravitons are supposed to travel in a Standard Model-type, Yang-Mills, theory of gravitation. This summing of interaction graphs successfully allows a basic framework for general relativity to be obtained from quantum gravity.

It’s pretty evident that the quantum gravity loops are best thought of as being the closed exchange cycles of gravitons going between masses (or other gravity field generators, like energy fields), to and fro, in an endless cycle of exchange. That’s the loop mechanism: the closed cycle of Yang-Mills exchange radiation being exchanged from one mass to another, and back again, continually. According to this idea, the graviton interaction nodes are associated with the ‘Higgs field quanta’ which generate mass. Hence, in a Penrose spin network, the vertices represent the points where quantized masses exist. Some predictions from this are here.

Professor Penrose’s interesting original article on spin networks, Angular Momentum: An Approach to Combinatorial Space-Time, published in Quantum Theory and Beyond (Ted Bastin, editor), Cambridge University Press, 1971, pp. 151-80, is available online, courtesy of Georg Beyerle and John Baez.

Update (25 June 2007): Lubos Motl versus Mark McCutcheon’s book The Final Theory

Seeing that there is some alleged evidence that mainstream string theorists are bigoted charlatans, string theorist Dr Lubos Motl, who is soon leaving his Assistant Professorship at Harvard, made me uneasy when he attacked Mark McCutcheon’s book The Final Theory. Motl wrote a blog post attacking McCutcheon’s book by saying that: ‘Mark McCutcheon is a generic arrogant crackpot whose IQ is comparable to chimps.’ Seeing that Motl is a stringer, this kind of abuse coming from him sounds like praise to my ears. Maybe McCutcheon is not so wrong? Anyway, at lunch time today, I was in Colchester town centre and needed to look up a quotation in one of Feynman’s books. Directly beside Feynman’s QED book, on the shelf of Colchester Public Library, was McCutcheon’s chunky book The Final Theory. I found the time to look up what I wanted and to read all the equations in McCutcheon’s book. Motl ignores McCutcheon’s theory entirely, and Motl is being dishonest when claiming: ‘his [McCutcheon’s] unification is based on the assertion that both relativity as well as quantum mechanics is wrong and should be abandoned.’ This sort of deception is easily seen through, because it has nothing to do with McCutcheon’s theory!
McCutcheon’s The Final Theory is full of boring controversy or error, such as the sort of things Motl quotes, but the core of the theory is completely different and takes up just two pages: 76 and 194. McCutcheon claims there’s no gravity because the Earth’s radius is expanding at an accelerating rate equal to the acceleration of gravity at Earth’s surface, g = 9.8 m/s². Thus, in one second, Earth’s radius (in McCutcheon’s theory) expands by (1/2)gt² = 4.9 m.

I showed in an earlier post that there is a simple relationship between Hubble’s empirical redshift law for the expansion of the universe (which can’t be explained by tired light ideas, and so is a genuine observation) and acceleration:

Hubble recession: v = HR = dR/dt, so dt = dR/v, hence outward acceleration a = dv/dt = d[HR]/[dR/v] = vH = RH².

McCutcheon instead defines a ‘universal atomic expansion rate’ on page 76 of The Final Theory, which divides the increase in radius of the Earth over a one-second interval (4.9 m) into the Earth’s radius (6,378,000 m, or 6.378×10⁶ m). I don’t like the fact that he doesn’t specify a formula properly to define his ‘universal atomic expansion rate’. McCutcheon should be clear: he is dividing (1/2)gt² into the radius of Earth, R_E, to get his ‘universal atomic expansion rate’, X_A:

X_A = (1/2)gt²/R_E, which is a dimensionless ratio.

On page 77, McCutcheon honestly states: ‘In expansion theory, the gravity of an object or planet is dependent on its size. This is a significant departure from Newton’s theory, in which gravity is dependent on mass.’ At first glance, this is a crazy theory, requiring Earth (and all the atoms in it, for he makes the case that all masses expand) to expand much faster than the rate of expansion of the universe. However, on page 194, he argues that the outward acceleration of an atom of radius R is:

a = X_A·R.

Now, the first thing to notice is that acceleration has units of m/s² and R has units of m. So this equation is dimensionally false if X_A = (1/2)gt²/R_E. The only way to make a = X_A·R dimensionally accurate is to change the definition of X_A by dropping the t² from the dimensionless ratio (1/2)gt²/R_E, giving the ratio:

X_A = (1/2)g/R_E, which has the correct units of s⁻².

So we end up with this accurate version of McCutcheon’s formula for the outward acceleration of an atom of radius R (we will use the average radius of orbit of the chaotic electron path in the ground state of a hydrogen atom for R, which is 5.29×10⁻¹¹ m):

a = X_A·R = [(1/2)g/R_E]R,

which can be equated to Newton’s formula for the acceleration due to mass m, which is 1.67×10⁻²⁷ kg:

a = [(1/2)g/R_E]R = mG/R².

Hence, McCutcheon on page 194 calculates a value for G by rearranging these equations:

G = (1/2)gR³/(R_E·m) = (1/2)(9.81)(5.29×10⁻¹¹)³/[(6.378×10⁶)(1.67×10⁻²⁷)] = 6.82×10⁻¹¹ m³/(kg·s²),

which is only 2% higher than the measured value of G = 6.673×10⁻¹¹ m³/(kg·s²). After getting this result on page 194, McCutcheon remarks on page 195: ‘Recall … that the value for X_A was arrived at by measuring a dropped object in relation to a hypothesized expansion of our overall planet, yet here this same value was borrowed and successfully applied to the proposed expansion of the tiniest atom.’

We can compress McCutcheon’s theory: what he is basically saying is the scaling ratio a = (1/2)g(R/R_E), which, when set equal to Newton’s law mG/R², rearranges to give G = (1/2)gR³/(R_E·m). However, McCutcheon’s own formula is just his guessed scaling law: a = (1/2)g(R/R_E). (The page-194 arithmetic is reproduced in the sketch below.)
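A minimal sketch reproducing McCutcheon’s page-194 numbers as quoted above (nothing more than the stated arithmetic):

```python
# Sketch: reproduce McCutcheon's page-194 estimate G = (1/2) g R^3 / (R_E m).
g = 9.81          # surface gravity, m/s^2
R_E = 6.378e6     # Earth's radius, m
R = 5.29e-11      # hydrogen ground-state orbit radius, m
m = 1.67e-27      # hydrogen atom mass, kg

G = 0.5 * g * R**3 / (R_E * m)
print(G)  # ~6.82e-11, about 2% above the measured 6.673e-11 m^3/(kg s^2)
```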
Although this quite accurately scales the acceleration of gravity at Earth’s surface (g at R_E) to the acceleration of gravity at the ground-state orbit radius of a hydrogen atom (a at R), it is not clear if this is just a coincidence, or if it really has anything to do with McCutcheon’s expanding matter idea. He did not derive the relationship; he just defined it by dividing the increased radius into the Earth’s radius, and then used this ratio in another expression which is again defined without a rigorous theory underpinning it. In its present form, it is numerology. Furthermore, the theory is not universal: the basic scaling law that McCutcheon obtains does not predict the gravitational attraction of the two balls Cavendish measured; instead it only relates the gravity at Earth’s surface to that at the surface of an atom, and then seems to be guesswork or numerology (although it is an impressively accurate ‘coincidence’). It doesn’t have the universal application of Newton’s law. There may be another reason why a = (1/2)g(R/R_E) is a fairly accurate and impressive relationship.

Since I regularly oppose censorship based on fact-ignoring consensus and other types of elitist fascism in general (fascism being best defined as the primitive doctrine that ‘might is right’, i.e., that whoever speaks loudest or has the biggest gun is deemed scientifically correct), it is only right that I write this blog post to clarify the details that really are interesting. Maybe McCutcheon could make his case better to scientists by putting the derivation and calculation of G on the front cover of his book, instead of a sunset. Possibly he could justify his guesswork idea to crackpot string theorists by some relativistic obfuscation invoking Einstein, such as: ‘According to relativity, it’s just as reasonable to think of the Earth zooming upwards to hit you when you jump off a cliff, as to think that you are falling downward.’ If he really wants to go down the road of mainstream hype and obfuscation, he could maybe do even better by invoking the popular misrepresentation of Copernicus: ‘According to Copernicus, the observer is at “no special place in the universe”, so it is as justifiable to consider the Earth’s surface accelerating upwards to meet you, as vice-versa. Copernicus travelled all throughout the entire universe on a spaceship or a flying carpet to confirm the crackpot modern claim that we are not at a special place in the universe, you know.’ The string theorists would love that kind of thing (i.e., assertions that there is no preferred reference frame, based on lies), seeing that they think spacetime is 10 or 11 dimensional, based on lies.

My calculation of G is entirely different, being due to a causal mechanism of graviton radiation, and it has detailed empirical (non-speculative) foundations and a derivation which predicts G in terms of the Hubble parameter and the local density ρ: G = (3/4)H²/(ρπe³). It also predicts a lot of other things about cosmology, including predicting in 1996 the expansion rate of the universe at long distances (two years before it was confirmed by Saul Perlmutter’s observations in 1998). However, this is not necessarily incompatible with McCutcheon’s theory. There are such things as mathematical dualities, where completely different calculations are really just different ways of modelling the same thing.
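One way to see what G = (3/4)H²/(ρπe³) asserts is my own rearrangement below (a sketch only, with an illustrative Hubble parameter assumed): solving for ρ and dividing by the standard critical density ρ_c = 3H²/(8πG) gives the pure number 2/e³ ≈ 0.1, independent of H.

```python
# Sketch: rearrange G = (3/4) H^2 / (rho * pi * e^3) to rho = 3 H^2 / (4 pi G e^3)
# and compare with the standard critical density rho_c = 3 H^2 / (8 pi G).
import math

H = 70 * 1000 / 3.086e22   # assumed Hubble parameter: 70 km/s/Mpc in SI units (1/s)
G = 6.673e-11              # measured G, m^3/(kg s^2)

rho = 3 * H**2 / (4 * math.pi * G * math.e**3)
rho_c = 3 * H**2 / (8 * math.pi * G)
print(rho, rho_c, rho / rho_c)  # ratio = 2/e^3 ~ 0.0996, whatever H is assumed
```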
McCutcheon’s book is not just the interesting sort of calculation above, sadly. It also contains a large amount of drivel (particularly in the first chapter) about an alleged flaw in the equation W = Fd, i.e., work energy = force applied × distance moved in the direction that the force operates. McCutcheon claims that there is a problem with this formula, and that work energy is being used continuously by gravity, violating conservation of energy. On page 14 (2004 edition) he claims falsely: ‘Despite the ongoing energy expended by Earth’s gravity to hold objects down and the moon in orbit, this energy never diminishes in strength…’ The error McCutcheon is making here is that no energy is used up unless gravity is making an object move. So the gravity field is not depleted of a single Joule of energy when an object is simply held in one place by gravity. For orbits, the gravitational force acts at right angles to the direction the moon is moving in its orbit, so gravity is not using up energy in doing work on the moon. If the moon were falling straight down to Earth, then yes, the gravitational field would be losing energy to the kinetic energy that the moon would gain as it accelerated. But it isn’t falling: the moon is not moving towards us along the lines of gravitational force; instead it is moving at right angles to those lines of force. McCutcheon does eventually get to this explanation on page 21 of his book (2004 edition). But this just leads him to write several more pages of drivel about the subject: by drivel, I mean philosophy.

On a positive note, McCutcheon near the end of the book (pages 297-300 of the 2004 edition) correctly points out that where two waves of equal amplitude and frequency are superimposed (i.e., travel through one another) exactly out of phase, their waveforms cancel out completely due to ‘destructive interference’. He makes the point that there is an issue for conservation of energy where such destructive interference occurs. For example, Young claimed that destructive interference of light occurs at the dark fringes on the screen in the double-slit experiment. Is it true that two out-of-phase photons really do arrive at the dark fringes, cancelling one another out? Clearly, this would violate conservation of energy! Back in February 1997, when I was editor of Science World magazine (ISSN 1367-6172), I published an article by the late David A. Chalmers on this subject. Chalmers summed the Feynman path integral for the two slits and found that if Young’s explanation were correct, then half of the total energy would be unaccounted for in the dark fringes. The photons are not arriving at the dark fringes; instead, they arrive in the bright fringes.
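Chalmers’ energy bookkeeping can be illustrated with the standard two-slit phasor sum (a sketch of the textbook calculation, not his actual Science World article): averaging the intensity |A₁ + A₂|² over the screen gives exactly the sum of the two individual intensities, because the energy ‘missing’ from the dark fringes reappears in the bright fringes.

```python
# Sketch: two equal-amplitude paths with phase difference delta give intensity
# |1 + exp(i*delta)|^2 = 2 + 2*cos(delta). Averaged over the screen this is 2,
# exactly the sum of the two individual intensities: interference redistributes
# energy into the bright fringes, it does not destroy it.
import cmath, math

n = 100000
total = 0.0
for k in range(n):
    delta = 2 * math.pi * k / n          # phase difference across the screen
    total += abs(1 + cmath.exp(1j * delta))**2

print(total / n)  # ~2.0 = 1 + 1, so no energy is lost in the dark fringes
```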
Feynman quotation

The Feynman quotation I located is this:

Compare that to:

Just as the Copenhagen Interpretation was supported by lies (such as von Neumann’s false ‘disproof’ of hidden variables in 1932) and fascism (such as the way Bohm was treated by the mainstream when he disproved von Neumann’s ‘proof’ in the 1950s), string ‘theory’ (it isn’t a theory) is supported by similar tactics, which are political in nature and have nothing to do with science:

‘String theory has the remarkable property of predicting gravity.’ – Dr Edward Witten, M-theory originator, Physics Today, April 1996.

‘The critics feel passionately that they are right, and that their viewpoints have been unfairly neglected by the establishment. … They bring into the public arena technical claims that few can properly evaluate. … Responding to this kind of criticism can be very difficult. It is hard to answer unfair charges of élitism without sounding élitist to non-experts. A direct response may just add fuel to controversies.’ – Dr Edward Witten, M-theory originator, Nature, Vol. 444, 16 November 2006.

‘Superstring/M-theory is the language in which God wrote the world.’ – Assistant Professor Lubos Motl, Harvard University, string theorist and friend of Edward Witten, quoted by Professor Bert Schroer (p. 21).

‘The mathematician Leonhard Euler … gravely declared: “Monsieur, (a + bn)/n = x, therefore God exists!” … peals of laughter erupted around the room …’

‘… I do feel strongly that this is nonsense! … I think all this superstring stuff is crazy and is in the wrong direction. … I don’t like it that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation – a fix-up to say “Well, it still might be true”. For example, the theory requires ten dimensions. Well, maybe there’s a way of wrapping up six of the dimensions. Yes, that’s possible mathematically, but why not seven? … In other words, there’s no reason whatsoever in superstring theory that it isn’t eight of the ten dimensions that get wrapped up … So the fact that it might disagree with experiment is very tenuous, it doesn’t produce anything; it has to be excused most of the time. … All these numbers … have no explanations in these string theories – absolutely none!’ – Richard P. Feynman, in Davies & Brown, Superstrings, 1988, pp. 194-195. [Quoted by Tony Smith.]

Feynman predicted today’s crackpot-run world in his 1964 Cornell lectures (broadcast on BBC2 in 1965 and published in his book Character of Physical Law, pp. 171-3): ‘The inexperienced, and crackpots, and people like that, make guesses that are simple, but [with extensive knowledge of the actual facts rather than speculation] you can immediately see that they are wrong, so that does not count. … There will be a degeneration of ideas, just like the degeneration that great explorers feel is occurring when tourists begin moving in on a territory.’ In the same book Feynman states:

Sent: 02/01/03 17:47
Subject: Your_manuscript LZ8276 Cook {gravity unification proof}
Physical Review Letters does not, in general, publish papers on alternatives to currently accepted theories…. Yours sincerely, Stanley G. Brown, Editor, Physical Review Letters

‘If you are not criticized, you may not be doing much.’ – Donald Rumsfeld.

The Standard Model, which Edward Witten has done a lot of useful work on (before he went into string speculation), is the best tested physical theory. Forces result from radiation exchange in spacetime. The big bang matter’s speed ranges from 0 to c across a spacetime of 0 to 15 billion years, so the outward force is F = ma = 10⁴³ N (a rough sanity check on this figure is sketched below). Newton’s 3rd law implies an equal inward force, which from the Standard Model possibilities will be carried by gauge bosons (exchange radiation), predicting current cosmology, gravity and the contraction of general relativity, other forces and particle masses.

‘A fruitful natural philosophy has a double scale or ladder ascendant and descendant; ascending from experiments to axioms and descending from axioms to the invention of new experiments.’ – Francis Bacon, Novum Organum.

This predicts gravity in a quantitative, checkable way, from other constants which are being measured ever more accurately and will therefore result in more delicate tests.
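The 10⁴³ N figure can be sanity-checked with round numbers (both inputs below are assumptions for illustration: a mass of order 3×10⁵² kg for the observable universe, and the 15-billion-year age used above):

```python
# Sketch: outward force F = m*a, with a ~ c/t (velocities 0 to c over ages 0 to t).
c = 3e8                      # m/s
t = 15e9 * 3.156e7           # assumed age ~15 billion years, in seconds
m = 3e52                     # assumed mass of the universe, kg (illustrative)

a = c / t                    # ~6e-10 m/s^2
F = m * a
print(a, F)                  # F ~ 2e43 N, i.e. of order 10^43 N as stated
```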
As for the mechanism of gravity: the dynamics here, which predict gravitational strength and various other observable and further checkable aspects, are consistent with LQG and with Lunsford’s gravitational-electromagnetic unification, in which there are 3 dimensions describing contractable matter (matter contracts due to its properties of gravitation and motion) and 3 expanding time dimensions (the spacetime between matter expands due to the big bang according to Hubble’s law). So the vacuum simply isn’t full of annihilation-creation loops (they only extend out to 1 fm around particles). The LQG loops are entirely different (exchange radiation) and cause gravity, not cosmological constant effects. Hence no dark energy mechanism can be attributed to the charge creation effects in the Dirac sea, which exists only close to real particles.

‘By struggling to find a mathematically precise formulation, one often discovers facets of the subject at hand that were not apparent in a more casual treatment. And, when you succeed, rigorous results (“Theorems”) may flow from that effort.

‘But, particularly in more speculative subjects, like Quantum Gravity, it’s simply a mistake to think that greater rigour can substitute for physical input. The idea that somehow, by formulating things very precisely and proving rigorous theorems, correct physics will eventually emerge simply misconstrues the role of rigour in Physics.’ – Professor Jacques Distler, blog entry on The Role of Rigour.

‘[Unorthodox approaches] now seem the antithesis of modern science, with consensus and peer review at its very heart. … The sheer number of ideas in circulation means we need tough, sometimes crude ways of sorting…. The principle that new ideas should be verified and reinforced by an intellectual community is one of the pillars of scientific endeavour, but it comes at a cost.’ – Editorial, p. 5 of the 9 Dec 06 issue of New Scientist.

Far easier to say anything else is crackpot. String isn’t, because it’s mainstream, has more people working on it, and has a large number of ideas connecting one another. No ‘lone genius’ can ever come up with anything more mathematically complex and amazingly technical than string theory ideas, which are the result of decades of research by hundreds of people. Ironically, the core of a particle is probably something like a string – albeit not the M-theory 10/11 dimensional string, just a small loop of energy which acquires mass by coupling to an external mass-giving bosonic field. It isn’t the basic idea of string which is necessarily wrong, but the way the research is done, and the idea that if you build a very large number of interconnected buildings on quicksand, it becomes absurd to think that disaster could overcome the result, despite it having no solid foundations.

In spacetime, you can equally well interpret the recession of stars as a variation of velocity with time past as seen from our frame of reference, or as a variation of velocity with distance (the traditional ‘tunnel-vision’ due to Hubble). Some people weirdly think Newton had a theory of gravity which predicted G, or that because Witten claimed in Physics Today magazine in 1996 that his stringy M-theory has the remarkable property of ‘predicting gravity’, he can do it. The editor of Physical Review Letters seemed to suggest this to me when claiming falsely that the facts above, leading to a prediction of gravity etc., are an ‘alternative to currently accepted theories’. Where is the theory in string? Where is the theory in M-‘theory’ which predicts G?
It only predicts a spin-2 graviton mode for gravity, and the spin-2 graviton has never been observed. So I disagree with Dr Brown. This isn’t an alternative to a currently accepted theory. It’s tested and validated science, contrasted with a currently accepted religious non-theory explaining an unobserved particle by using unobserved extra-dimensional guesswork. I’m not saying string should be banned, but I don’t agree that science should be so focussed on stringy guesswork that the hard facts are censored out in consequence!

There is some dark matter, in the form of the mass of neutrinos and other radiations which will be attracted around galaxies and affect their rotation, but it is bizarre to try to use discrepancies in false theories as ‘evidence’ for unobserved ‘dark energy’ and ‘dark matter’, neither of which has been found in any particle physics experiment or detector in history. The ‘direct evidence of dark matter’ seen in photos of distorted images doesn’t say what the ‘dark matter’ is, and we should remember that Ptolemy’s followers were rewarded for claiming that direct evidence of the earth-centred universe was apparent to everyone who looked at the sky. Science requires evidence and facts, not a faith-based religion which ignores or censors out the evidence and the facts. The reason for the current popularity of M-theory is precisely that it claims to not be falsifiable, so it acquires a religious or mysterious allure to quacks, just as Ptolemy’s epicycles, phlogiston, caloric, Kelvin’s vortex atom and Maxwell’s mechanical gear-box aether did in the past.

Dr Peter Woit explains the errors and failures of mainstream string theory in his book Not Even Wrong (Jonathan Cape, London, 2006, especially pp. 176-228): using the measured weak SU(2) and electromagnetic U(1) forces, supersymmetry predicts the SU(3) force incorrectly high by 10-15%, when the experimental data is accurate to a standard deviation of about 3%. By claiming to ‘predict’ everything conceivable, string theory predicts nothing falsifiable at all and is identical to quackery, although it might contain some potentially useful spin-offs such as science fiction and some mathematics (similarly, Ptolemy’s epicycles theory helped to advance maths a little, and certainly Maxwell’s mechanical theory of aether led ultimately to a useful mathematical model for electromagnetism; Kelvin’s false vortex atom also led to some ideas about perfect fluids which have been useful in some aspects of the study of turbulence and even general relativity). Even if you somehow discovered gravitons, superpartners, or branes, these would not confirm the particular string theory model any more than a theory of leprechauns would be confirmed by discovering small people. Science needs quantitative predictions. Dr Imre Lakatos explains the way forward in his article ‘Science and Pseudo-Science’:

Really, there is nothing more anyone can do after making a long list of predictions which have been confirmed by new measurements, but which are censored out of mainstream publications by the mainstream quacks of stringy elitism. Prof Penrose expressed this depressing conclusion well in 2004 in The Road to Reality, so I’ll quote some pertinent bits from the British (Jonathan Cape, 2004) edition:
Like writing equations that make no checkable predictions… Of course if the equations are impossible to solve (like due to having a landscape of 10^500 solutions that nobody can handle), it's impressive, and some believe it. A winning theory is one that sells the most books.'

Path integrals for gauge boson radiation versus path integrals for real particles, and Weyl's gauge symmetry principle

The previous post, plus a re-reading of Professor Zee's Quantum Field Theory in a Nutshell (Princeton, 2003), suggests a new formulation for quantum gravity, the mechanism and mathematical predictions of which were given two posts ago. The sum over histories for real particles is used to work out the path of least action, such as the path of a photon of light which takes the least time to bounce off a mirror. You can do the same thing for the path of a real electron, or the path of a drunkard's walk. The integral tells you the effective path taken by the particle, or the probability of any given path being taken, from many possible paths.

For gauge bosons or vector bosons, i.e., force-mediating radiation, the role of the path integral is no longer to find the probability of a path being taken or the effective path. Instead, gauge bosons are exchanged over many paths simultaneously. Hence there are two totally different applications of path integrals we are concerned with:

• Applying the path integral for real particles involves evaluating a lot of paths, most of which are not actually taken (the real particle takes only one of those paths, although as Feynman said, it uses a 'small core of nearby space' so it can be affected by both of two slits in a screen, provided those slits are close together, within a transverse wavelength or so, so the small core of paths taken overlaps both slits).

• Applying the path integral for gauge bosons involves evaluating a lot of paths which are all actually being taken, because the extensive force field is composed of lots of gauge bosons being exchanged between charges, really going all over the place (for long-range gravity and electromagnetism).

In both cases the path taken by a given real particle or a single gauge boson must be composed of straight lines in between interactions (see Fig. 1 of the previous post), because the curvature of general relativity appears to be a classical approximation to a lot of small discrete deflections due to discrete interactions with field quanta (sometimes curves are used in Feynman diagrams for convenience, but according to quantum field theory all mechanisms for curvature actually involve lots of little deflections by the quanta of fields).

The calculations of quantum gravity, two posts ago, use geometry to evaluate these straight-line gauge boson paths for gravity and electromagnetism. Presumably, translating the simplicity of the calculations based on geometry in that post into a path integral formulation will appeal more to the stringy mainstream. Loop quantum gravity methods of summing up a lot of interaction graphs will be used to do this. What is vital are directional asymmetries, which transform a perfect symmetry of gauge boson exchanges in all directions into a force, represented by the geometry of Fig. 1 (below). One way to convert that geometry into a formula is to consider the inward-outward travelling isotropic graviton exchange radiation by using the divergence operator. I think this can be done easily because there are two useful physical facts which make the geometry even simpler than appears from Fig. 1:
first, the shield area x in Fig. 1 is extremely small, so the asymmetry cone can never have a large base for any practical situation; second, by Newton's proof, the inverse-square gravity force from a lot of little particles spread out in the Earth is the same as you get by mathematically assuming that all the little masses (fundamental particles) are not spread throughout a large planet but are all at the centre. So a path integral formulation for the geometry of Fig. 1 is simple.

Fig. 1: Mechanism for quantum gravity (a tiny falling test mass is located in the middle of the universe, which experiences isotropic graviton radiation – spin-1 gravitons which cause attraction by simply pushing things, as this allows predictions, as proved in the earlier post – from all directions except that where there is an asymmetry produced by the mass which shields that radiation). By Newton's 3rd law the outward force of the big bang has an equal inward force, and gravity is equal to the proportion of that inward force covered by the shaded cone in this diagram:

\[ F_{\text{gravity}} = F_{\text{inward}} \times \frac{x}{4\pi R^2}, \qquad x = A_{\text{shield}} \times \frac{R^2}{r^2}, \]

i.e., (force of gravity) = (total inward force) × (cross-sectional area of the shield projected out to radius R, namely the area x of the base of the cone, which is the product of the shield's cross-sectional area and the ratio R^2/r^2) / (total spherical area with radius R). (Full proof here.)

Weyl's gauge symmetry principle

A symmetry is anything that doesn't change as the result of a transformation. For example, the colour of a plastic pen doesn't change when you rotate it, so the colour is a symmetry of the pen when the transformation type is a rotation. If you transform the plastic pen by burning it, colour is not a symmetry of the pen (unless the pen was the colour of carbon in the first place).

A gauge symmetry is one where scalable quantities (gauges) are involved. For example, there is a symmetry in the fact that the same amount of energy is required to lift a 1 kg mass up by a height of 1 metre, regardless of the original height of the mass above sea level. (This example is not completely true, but it is almost true because the fall in gravitational acceleration with height is small: gravity is only 0.3% weaker at the top of the tallest mountain than it is at sea level.)

The mathematician Emmy Noether in 1915 proved a great theorem which states that any continuous symmetry leads to a conservation law, e.g., the symmetry of physical laws over time (due to these laws remaining the same while time passes) leads to the principle of conservation of energy!
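To make the time-symmetry case concrete, here is the standard textbook statement of it (ordinary classical mechanics, nothing specific to the model proposed here): for a Lagrangian L(q, q̇) with no explicit time dependence, the energy

\[ E = \dot{q}\,\frac{\partial L}{\partial \dot{q}} - L \quad\text{obeys}\quad \frac{dE}{dt} = -\frac{\partial L}{\partial t} = 0, \]

so invariance of the laws under time translation is exactly what forces energy to be conserved.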
This particularly impressive example of Noether's theorem does not strictly apply to forces over very long time scales because, as proved, fundamental force coupling constants (relative charges) increase in direct proportion to the age of the universe. However, the theorem is increasingly accurate as the time scale involved is reduced, and the inaccuracy becomes trivial when the time considered is small compared to the age of the universe.

At the end of Quantum Field Theory in a Nutshell (at page 457), Zee points out that Maxwell's equations unexpectedly contained two hidden symmetries, Lorentz invariance and gauge invariance: 'two symmetries that, as we now know, literally hold the key to the secrets of the universe.' He then argues that Maxwell's long-hand differential equations masked these symmetries and it took Einstein's genius to uncover them (special relativity for Lorentz invariance, general relativity for the tensor calculus with the repeated-indices summation convention, e.g., mathematical symbol compressions by defining notation such as F_{ab} = 2∂_{[a}A_{b]} = ∂_a A_b − ∂_b A_a). This is actually a surprisingly good point to make.

Zee, judging from what his Quantum Field Theory in a Nutshell book contains, does not seem to be aware how useful Heaviside's vector calculus is (Heaviside compressed Maxwell's 20 equations into 4 field equations plus a continuity equation for conservation of charge, while Einstein merely compressed the 4 field equations into 2, a less impressive feat but one leading to less intuitive equations; the divergence and curl equations in vector calculus describe simple divergence of radial electric field lines which you can picture, and simple curling of electric or magnetic field lines which again are easy to picture; the four field equations are written out after the list below). In addition, the way relativity comes from Maxwell's equations is best expressed non-mathematically, just because it is so simple: if you move relative to an electric charge you get a magnetic field; if you don't move relative to an electric charge you don't see the magnetic field.

Zee adds: 'it is entirely possible that an insightful reader could find a hitherto unknown symmetry hidden in our well-studied field theories.' Well, he could start with the insight that U(1) doesn't exist, as explained in the previous post. There are no single charged leptons about, only pairs of them. They are created in pairs, and are annihilated as pairs. So really you need some form of SU(2) symmetry to replace U(1). Such a replacement as a bonus predicts gravity and electromagnetism quantitatively, giving the coupling constants for each and the complete mechanism for each force. Just to be absolutely lucid on this, so that there can be no possible confusion:

• SU(2) correctly asserts that quarks form quark-antiquark doublets due to the short-range weak force mediated by massive weak gauge bosons.

• U(1) falsely asserts that leptons do not form doublets due to the long-range electromagnetic force mediated by mass-less electromagnetic gauge bosons.

The correct picture to replace SU(2)xU(1) is based on the same principle as SU(2), but with a replacement of U(1) by another effect of SU(2):

• SU(2) also correctly asserts that leptons form lepton-antilepton doublets (although since the binding force is long-range electromagnetism instead of short-range massive weak gauge bosons, the lepton-antilepton doublets are not confined in a small place, because the range over which the electromagnetic force operates is simply far greater than that of the weak force).
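For reference, here are the four Heaviside-Maxwell field equations in the vector-calculus form mentioned above, plus the charge-continuity equation (standard SI-unit notation; this is textbook electromagnetism, quoted only to make the compression explicit):

\[ \nabla\cdot\mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla\cdot\mathbf{B} = 0, \qquad \nabla\times\mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla\times\mathbf{B} = \mu_0\mathbf{J} + \mu_0\varepsilon_0\frac{\partial \mathbf{E}}{\partial t}, \]

\[ \nabla\cdot\mathbf{J} = -\frac{\partial \rho}{\partial t}. \]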
Solid experimentally validated evidence for this SU(2) doublet picture (including mechanisms and predictions of gravity and electromagnetism strengths, etc., from massless SU(2) gauge boson interactions which automatically explain gravity and electromagnetism): here. Sheldon Glashow's early expansion of the original Yang-Mills SU(2) gauge interaction symmetry to unify electromagnetism and weak interactions is quoted here. More technical discussion on the relationship of leptons to quarks implied by the model: here.

However, innovation of a checkable sort is now unwelcome in mainstream stringy physics, so maybe Zee was joking, and maybe he secretly doesn't want any progress (unless of course it comes from mainstream string theory). This suggestion is made because Zee on the same page (p457) adds that the experimentally-based theory of electromagnetic unification (unification of electricity and magnetism) was a failure to achieve its full potential because those physicists 'did not possess the mind-set for symmetry. The old paradigm "experiments -> action -> symmetry" had to be replaced in fundamental physics by the new paradigm "symmetry -> action -> experiments," the new paradigm being typified by grand unified theory and later by string theory.' (Emphasis added.)

Problem is, string theory has proved an inedible, stinking turkey (Lunsford both more politely and more memorably calls string 'a vile and idiotic lie' which 'has managed to slough itself along for 20 years, leaving a shiny trail behind it'). I've explained politely why string theory is offensive, insulting, abusive, dictatorial ego-massaging, money-laundering pseudoscience at my domain.

Zee needs to try reading Paul Feyerabend's book, Against Method. Science actually works by taking the route that most agrees with nature, regardless of how unorthodox an idea is, or how crazy it superficially looks to the prejudiced who don't bother to check it objectively before arriving at a conclusion on its merits; 'science', when it does occasionally take the popular route that is a total and complete moronic failure, e.g., mainstream string, temporarily becomes a religion. String theorists are like fanatical preachers, trying to dictate to the gullible what nature is like ahead of any evidence, the very error Bohr alleged Einstein was making in 1927. Actually there is a strong connection between the speculative Copenhagen Interpretation propaganda of Bohr in 1927 (Bohr in fact had no solid evidence for his pet theory of metaphysics, while Einstein had every causal law and mechanism of physics on his side; today we all know from high-energy physics that virtual particles are an experimental physics fact and they cause indeterminacy in a simple mechanical way on small distance scales), and string. Both rely on exactly the same mixture of lies, hype, coercion, ridicule of factual evidence, etc. Both are religions. Neither is a science, and no matter how much physically vacuous mathematical obfuscation they use, their failure to cover up the gross incompetence in basic physics remains as perfectly transparent as the Emperor's new clothes. Unfortunately, most people see what they are told to see, so this farce of string theory continues.

Feynman diagrams in loop quantum gravity, path integrals, and the relationship of leptons to quarks
Fig. 1: Comparison of a Feynman-style diagram for general relativity (smooth curvature of spacetime, i.e., smooth acceleration of an electron by gravitational acceleration) with a Feynman diagram for a graviton causing acceleration by hitting an electron (see the previous post for the mechanism and quantitative checked prediction of the strength of gravity). If you believe string theory, which uses spin-2 gravitons for 'attraction' (rather than pushing), you have to imagine the graviton not pushing rightwards to cause the electron to deflect, but somehow pulling from the right hand side: see this previous post for the maths of how the bogus (vacuous, non-predictive) spin-2 graviton idea works in the path integrals formulation of quantum gravity. (Basically, spin-1 gravitons push, while spin-2 gravitons suck. So if you want a checkable, predictive, real theory of quantum gravity that pushes forward, check out spin-1 gravitons. But if you merely want any old theory of quantum gravity that well and truly sucks, you can take your pick from the 'landscape' of 10^500 stringy theories of mainstream sucking spin-2 gravitons.)

In general relativity, an electron accelerates due to a continuous smooth curvature of spacetime, due to a spacetime 'continuum' (spacetime fabric). In mainstream quantum gravity ideas (at least in the Feynman diagram for quantum gravity), an electron accelerates in a gravitational field because of quantized interactions with some sort of graviton radiation (the gravitons are presumed to interact with the mass-giving Higgs field bosons surrounding the electron core). As explained in the discussion of the stress-energy curvature in the previous post, in addition to the gravity mediators (gravitons) presumably being quantized, rather than a continuous or continuum curved spacetime, there is the problem that the sources of fields, such as discrete units of matter, come in quantized units at locations in spacetime. General relativity only produces smooth curvature (the acceleration curve in the left hand diagram of Fig. 1) by smoothing out the true discontinuous (atomic and particulate) nature of matter through the use of an averaged density to represent the 'source' of the gravitational field. The curvature of the line in the Feynman diagram for general relativity is therefore due to the smoothness of the assumed source of gravity in spacetime, resulting from the way that the presumed source of curvature – the stress-energy tensor in general relativity – averages the discrete, quantized nature of mass-energy per unit volume of space.

Quantum field theory suggests that the correct Feynman diagram for any interaction is not a continuous, smooth curve, but instead a number of steps due to discrete interactions of the field quanta with the charge (i.e., gravitational mass). However, the nature of the 'gravitons' has not been observed, so there are some uncertainties remaining about their nature. Fig. 1 (which was inspired – in part – by Fig. 3 in Lee Smolin's Trouble with Physics) is designed to give a clear idea of what quantum gravity is about and how it is related to general relativity. The previous post predicts gravity and cosmology correctly; the basic mechanism was published (by Electronics World) in October 1996, two years ahead of the discovery that there's no gravitational retardation. More important, it predicts gravity quantitatively, and doesn't use any ad hoc hypotheses, just experimentally validated facts as input.
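As a toy numerical illustration of the left-hand versus right-hand diagrams of Fig. 1 (just arithmetic, not the quantitative gravity calculation, which is in the earlier post): replace a smooth acceleration g with N equal discrete impulses delivering the same total momentum, and the staircase converges on the smooth curve as N grows.

```python
# Toy illustration (not the blog's quantitative model): velocity of a falling
# electron under (a) smooth classical acceleration g, and (b) N discrete
# graviton-style impulses carrying the same total momentum. As N grows,
# the staircase approaches the smooth general-relativity curve of Fig. 1.
def velocity_smooth(g, t):
    return g * t

def velocity_discrete(g, t, total_time, n_impulses):
    # each impulse adds g * total_time / n_impulses to the velocity
    kicks_so_far = int(t / (total_time / n_impulses))
    return kicks_so_far * g * total_time / n_impulses

g, T = 9.81, 1.0
for n in (5, 50, 5000):
    # maximum gap between the staircase and the smooth curve shrinks as 1/n
    gap = max(abs(velocity_smooth(g, t) - velocity_discrete(g, t, T, n))
              for t in (i * T / 1000 for i in range(1001)))
    print(f"{n:5d} impulses: max deviation {gap:.4f} m/s")
```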
I've used that post to replace the earlier version of the gravity mechanism discussion here, here, etc., to improve clarity. I can't update the more permanent paper on the CERN document server here because, as Tony Smith has pointed out, '… CERN's Scientific Information Policy Board decided, at its meeting on the 8th October 2004, to close the EXT-series. …' The only way you can update a paper on the CERN document server is if it is a mirror copy of one on arXiv; update the arXiv paper and CERN's mirror copy will be updated. This is contrary to scientific ethics, whereby the whole point of electronic archives is that corrections and updates should be permissible. Professor Jacques Distler, who works on string theory and is a member of arXiv's advisory board, despite being warmly praised by me, still hasn't even put Lunsford's published paper on arXiv, which was censored by arXiv despite having been peer-reviewed and published.

Path integrals of quantum field theory

The path integral for the incorrect spin-2 idea was discussed at the earlier post here, while as stated the correct mechanism, with accurate predictions confirming it, is at the post here. Let's now examine the path integral formulation of quantum field theory in more depth. Before we go into the maths below, by way of background, Wiki has a useful history of path integrals, mentioning:

'The path integral formulation was developed in 1948 by Richard Feynman. … This formulation has proved crucial to the subsequent development of theoretical physics, since it provided the basis for the grand synthesis of the 1970s called the renormalization group which unified quantum field theory with statistical mechanics. If we realize that the Schrödinger equation is essentially a diffusion equation with an imaginary diffusion constant, then the path integral is a method for the enumeration of random walks. For this reason path integrals had also been used in the study of Brownian motion and diffusion before they were introduced in quantum mechanics.'

As Fig. 1 shows, according to Feynman, 'curvature' is not real and general relativity is just an approximation: in reality, graviton exchange causes accelerations in little jumps. If you want to get general relativity out of quantum field theory, you have to sum over the histories or interaction graphs for lots of little discrete quantized interactions. The summation process is what we are about to describe mathematically. By way of introduction, we can remember the random walk statistics mentioned in the previous post. If a drunk takes n steps of approximately equal length x in random directions, he or she will travel an average distance of x·n^{1/2} from the starting point, in a random direction! The reason why the average distance gone is proportional to the square root of the number of steps follows from the statistics of diffusion: the mean square displacement grows in proportion to the number of steps, so the root-mean-square displacement grows as the square root of that number. (If this were not the case, there would be no diffusion, because molecules hitting each other at random would just oscillate around a central point without any net movement.) This result is just a statistical average for a great many drunkard's walks. You can derive it statistically, or you can simulate it on a computer, add up the mean distance gone after n steps for lots of random walks, and take the average (a quick simulation of this kind is sketched below). In other words, you take the path integral over all the different possibilities, and this allows you to work out what is most likely to occur. Feynman applied this procedure to the principle of least action.
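Here is that drunkard's-walk check as a few lines of Python (a sanity check of the n^{1/2} statistic only; nothing here is specific to the gravity model):

```python
# Monte Carlo check of the drunkard's-walk statistic quoted above:
# after n unit steps in random directions, the RMS distance from the
# starting point is close to sqrt(n).
import math, random

def rms_distance(n_steps, n_walks=2000):
    total_sq = 0.0
    for _ in range(n_walks):
        x = y = 0.0
        for _ in range(n_steps):
            theta = random.uniform(0.0, 2.0 * math.pi)  # random direction
            x += math.cos(theta)
            y += math.sin(theta)
        total_sq += x * x + y * y
    return math.sqrt(total_sq / n_walks)

for n in (10, 100, 1000):
    print(f"n = {n:4d}: RMS distance = {rms_distance(n):6.2f}, sqrt(n) = {math.sqrt(n):6.2f}")
```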
One simple way to illustrate the principle of least action is the discussion of how light reflects off a mirror. Classically, the angle of incidence is equal to the angle of reflection, which is the same as saying that light takes the quickest possible route when reflecting. If the angle of incidence were not equal to the angle of reflection, then light would obviously take longer to arrive after being deflected than it actually does (i.e., the sum of the lengths of the two congruent sides in an isosceles triangle is smaller than the sum of the lengths of two dissimilar sides for a triangle with the same altitude line perpendicular to the reflecting surface).

The fact that light classically seems always to go where the time taken is least is a specific instance of the more general principle of least action. Feynman explains this with path integrals in his book QED (Penguin, 1990). Physically, path integrals are the mathematical summation of all possibilities. Feynman crucially discovered that all possibilities have the same magnitude, but that the phase, or effective direction (the argument of the complex number), varies for different paths. Because each path is a vector, the differences in directions mean that the different histories will partly cancel each other out.

To get the probability of event y occurring, you first calculate the amplitude for that event. Then you calculate the path integral for all possible events, including event y. Then you divide the first result (that for just event y) by the path integral for all possibilities. The result of this division is the absolute probability of event y occurring in the probability space of all possible events! Easy.

What is pretty important to note is that, contrary to some popular hype by people who should know better (Dr John Gribbin being such an example of someone who won't correct errors in his books when I email the errors), the particle doesn't actually travel on all of the paths integrated over in a specific interaction! What happens is just one interaction, and one path. The other paths in the path integral are considered so that you can work out the probability of a given path occurring, out of all possibilities. (You can obviously do other things with path integrals as well, but this is one of the simplest things. For example, instead of calculating the probability of a given event history, you can use path integrals to identify the most probable event history, out of the infinite number of possible event histories. This is just a matter of applying simple calculus!)

However, the nature of Feynman's path integral does allow a little interaction between nearby paths! This doesn't happen with Brownian diffusion! It is caused by the phase interference of nearby paths, as Feynman explains very carefully. The Wiki article explains:

'In the limit of action that is large compared to Planck's constant h-bar, the path integral is dominated by solutions which are stationary points of the action, since there the amplitudes of similar histories will tend to constructively interfere with one another. Conversely, for paths that are far from being stationary points of the action, the complex phase of the amplitude calculated according to postulate 3 will vary rapidly for similar paths, and amplitudes will tend to cancel. Therefore the important parts of the integral—the significant possibilities—in the limit of large action simply consist of solutions of the Euler-Lagrange equation, and classical mechanics is correctly recovered.
'Action principles can seem puzzling to the student of physics because of their seemingly teleological quality: instead of predicting the future from initial conditions, one starts with a combination of initial conditions and final conditions and then finds the path in between, as if the system somehow knows where it's going to go. The path integral is one way of understanding why this works. The system doesn't have to know in advance where it's going; the path integral simply calculates the probability amplitude for a given process, and the stationary points of the action mark neighborhoods of the space of histories for which quantum-mechanical interference will yield large probabilities.'

I think this last bit is badly written: interference is only possible in the 'small core' of paths that the size of the photon or other particle takes. The paths which are not taken are not eliminated by interference: they only occur in the path integral so that you know the absolute probability of a given path actually occurring.

Similarly, to calculate the probability of a die landing with a particular face up, you need to know how many sides the die has. So on one throw the probability of one particular side landing facing upwards is 1/6 if there are 6 sides per die. But the fact that the number 6 goes into the calculation doesn't mean that the die actually lands with every side facing up. Similarly, a photon doesn't arrive along routes where there is perfect cancellation! No energy goes along such routes, so nothing at all physical travels along any of them. Those routes are only included in the calculation because they were possibilities, not because they were paths taken.

In some cases, such as the probability that a photon will be reflected from the front of a block of glass, other factors are involved. For the block of glass, as Feynman explains, Newton discovered that the probability of reflection depends on the thickness of the block of glass as measured in terms of the wavelength of the light being reflected. The mechanism here is very simple. Consider the glass before any photon even approaches it. A normal block of glass is full of electrons in motion and vibrating atoms. The thickness of the glass determines the number of wavelengths that can fit into the glass for any given wavelength of vibration. Some of the vibration frequencies will be cancelled out by interference. So the vibration frequencies of the electrons at the surface of the glass are modified in accordance with the thickness of the glass, even before the photon approaches the glass. This is why the exact thickness of the glass determines the precise probability of light of a given frequency being reflected. It is not determined when the photon hits the electron, because the vibration frequencies of the electron have already been determined by the interference of certain frequencies of vibration in the glass.

The natural frequencies of vibration in a block of glass depend on the size of the block of glass! These natural frequencies then determine the probability that a photon is reflected. So there is the two-step mechanism behind the dependency of photon reflection probability upon glass thickness. It's extremely simple. Natural frequency effects are very easy to grasp: take a trip on an old school bus, and the windows rattle with substantial amplitude when the engine revolutions reach a particular frequency. Higher or lower engine frequencies produce less window rattle.
The frequency where the windows shake the most is the natural frequency. (Obviously for glass reflecting photons, the oscillations we are dealing with are electron oscillations, which are much smaller in amplitude and much higher in frequency, and in this case the natural frequencies are determined by the thickness of the glass.)

The exact way that the precise thickness of a sheet of glass affects the ability of electrons on the surface to reflect light is easily understood by reference to Schroedinger's original idea of how stationary orbits arise with a wave picture of an electron. Schroedinger found that where an integer number of wavelengths of the electron fits into the orbit circumference, there is no interference. But when only a fractional number of wavelengths would fit into that distance, then interference would be caused. As a result, only quantized orbits were possible in that model, corresponding to Bohr's quantum mechanics. In a sheet of glass, when an integer number of wavelengths of light for a particular frequency of oscillation fits into the thickness of the glass, there is no interference in vibrations at that specific frequency, so it is a natural frequency. However, when only a fractional number of wavelengths fits into the glass thickness, there is destructive interference in the oscillations. This influences whether the electrons are resonating in the right way to admit or reflect a photon of a given frequency. (There is also a random element involved, when considering the probability of individual photons chancing to interact with individual electrons on the surface of the glass in a particular way.)

Virtual pair-production can be included in path integrals by treating antimatter (such as positrons) as matter (such as electrons) travelling backwards in time (this was one of the conveniences of Feynman diagrams which initially caused Feynman a lot of trouble, but it's just a mathematical convenience for making calculations). For more mathematical detail on path integrals, see Richard Feynman and Albert Hibbs, Quantum Mechanics and Path Integrals, as well as excellent briefer introductions such as Christian Grosche, An Introduction into the Feynman Path Integral, and Richard MacKenzie, Path Integral Methods and Applications. For other standard references, scroll down this page. For Feynman's problems and hostility from Teller, Bohr, Dirac and Oppenheimer in 1948 to path integrals, see quotations in the comments of the previous post.

Feynman was extremely pragmatic. To him, what matters is the validity of the physical equations and their predictions, not the specific model used to get the equations and predictions. If you can get the right equations even from a false model, you have done something useful, as Maxwell did. However, you might still want to search for the correct model, as Feynman explained.

The perturbative expansion is a simple example of the application of path integrals. There are several ways that the electron can move, each corresponding to a unique Feynman diagram. The electron can go along a direct path from spacetime location A to spacetime location B. Alternatively, it can be deflected by a virtual particle en route, and travel by a slightly longer path. Another alternative is that it could be deflected by two virtual particles. There are, of course, an infinite number of other possibilities.
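Schematically (this is standard QED bookkeeping, not something specific to this blog's model), the path-integral amplitude is expanded in powers of the fine-structure constant, with each extra power corresponding to diagrams containing one more virtual-particle deflection:

\[ A = A_0 + \alpha A_1 + \alpha^2 A_2 + \dots, \qquad \alpha \approx \frac{1}{137}, \]

so each additional deflection suppresses a diagram's contribution by roughly two orders of magnitude, which is why the first couple of diagrams dominate the sum.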
Each of these possibilities has a unique Feynman diagram, and to calculate the most probable outcome you need to average them all in accordance with Feynman's rules.

For the case of calculating the magnetic moment of leptons, the original calculation came from Dirac and assumed in effect the simplest Feynman diagram situation: that the electron interacts with a virtual (gauge boson) 'photon' from a magnet in the simplest way possible. This is what contributes 98.85% of the total (average) magnetic moment of leptons, according to path integrals for lepton magnetic moments. The next Feynman diagram is the second highest contributor and accounts for over 1% of interactions. This correction is the situation evaluated by Schwinger in 1947, and is represented by a Feynman diagram in which a lepton emits a virtual photon before it interacts with the magnet. After interacting with the magnet, it re-absorbs the virtual photon it emitted earlier. This is odd, because if an electron emits a virtual photon, it briefly (until the virtual photon is recaptured) loses energy. How, physically, can this Feynman diagram explain how the magnetic moment of the electron is increased by 0.116% as a result of losing the energy of a virtual photon for the duration of the interaction with a magnet? If this mechanism were the correct story, maybe you'd have a reduced magnetic moment result, not an increase? Since virtual photons mediate electromagnetic charge, you might expect them to reduce the charge/magnetism of the electromagnetism by being lost during an interaction. Obviously, the loss of a non-virtual photon from an electron has no effect on the charge energy at all; it merely decelerates the electron (so kinetic energy and mass are slightly reduced, not electromagnetic charge).

There are two possible explanations for this:

1) The Feynman diagram for Schwinger's correction is physically correct. The emission of the virtual photon occurs in such a way that the electron gets briefly deflected towards the magnet for the duration of the interaction between electron and magnet. The reason why the magnetic moment of the electron is increased as a result of this is simply that the virtual 'photon' that is exchanged between the magnet and the electron is blue-shifted by the motion of the electron towards the magnet for the duration of the interaction. After the interaction, the electron re-captures the virtual 'photon' and is no longer moving towards the magnet. The blue-shift is the opposite of red-shift. Whereas red-shift reduces the interaction strength between receding charges, blue-shift (due to the approach of charges) increases the interaction strength, because the photons have an energy that is directly proportional to their frequency (E = hf). This mechanism may be correct, and needs further investigation.

2) The other possibility is that there is a pairing between the electron core and a virtual fermion in the vacuum around it, which increases the magnetic moment by a factor which depends on the shielding factor of the field from the particle core. This mechanism was described in the previous post. It helped inspire the general concept for the mass model discussed in the previous post, which is independent of this magnetic moment mechanism, and makes checkable predictions of all observable lepton and hadron masses.
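For reference, the standard first-order result behind the 0.116% figure quoted above (Schwinger's correction, textbook QED):

\[ \frac{g}{2} = 1 + \frac{\alpha}{2\pi} + \dots \approx 1 + \frac{1}{2\pi \times 137.036} \approx 1.00116, \]

i.e. the first loop diagram adds about 0.116% to Dirac's value, with the higher-order diagrams adding the remaining, much smaller, corrections.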
The relationship of leptons to quarks and the perturbative expansion

As mentioned in the previous post (and comments number 13, 14, 22, 24, 25, 26, 27, 28 and 31 of that post), the number one priority now is to develop the details of the lepton-quark relationship. The evidence that quarks are pairs or triads of confined leptons with some symmetry transformations was explained in detail in comment 13 to the previous post and is known as universality. This was first recognised when the lepton beta decay event

muon -> electron + electron antineutrino + muon neutrino

was found to have similar detailed properties to the quark beta decay event

neutron -> proton + electron + electron antineutrino.

Nicola Cabibbo used such evidence that quarks are closely related to leptons (I've only given one of many examples above) to develop the concept of 'weak universality', which involves a similarity in the weak interaction coupling strength between different generations of particles. As stated in comment 13 of the previous post, I'm interested in the relationship between electric charge Q, weak isospin charge T and weak hypercharge Y:

Q = T + Y/2,

where Y = −1 for left-handed leptons (+1 for antileptons) and Y = +1/3 for left-handed quarks (−1/3 for antiquarks). The minor symmetry transformations which occur when you confine leptons in pairs or triads to form 'quarks' with strong (colour) charge and fractional apparent electric charge are physically caused by the increased strength of the polarized vacuum, and by the ability of the pairs of short-ranged virtual particles in the field to move between the nearby individual leptons, mediating new short-ranged forces which would not occur if the leptons were isolated. The emergence of these new short-ranged forces, which appear only when particles are in close proximity, is the cause of the new nuclear charges, and these charges add extra quantum numbers, explaining why the Pauli exclusion principle isn't violated. (The Pauli exclusion principle simply says that in a confined system, each particle has a unique set of quantum numbers.) Peter Woit's Not Even Wrong summarises what is known in Figure 7.1 on page 93: 'Under SU(2), the [left-handed] particles in the middle row are doublets (and are left-handed Weyl-spinors under Lorentz transformations), the other [right-handed] particles are invariant (and are right-handed Weyl-spinors under Lorentz transformations).' This makes it easier to understand: the QCD colour force of SU(3) controls triplets of particles ('quarks'), whereas SU(2) controls doublets of particles ('quarks').

The issue of the fine detail in the relationship of leptons and quarks, how the transformation occurs physically, and all the details you can predict from the new model suggested in the previous post, is very interesting and, as stated, is the number one priority. For a start, to study the transformation of a lepton into a quark, we will consider the conversion of electrons into downquarks. First, the conversion of a left-handed electron into a left-handed downquark will be considered, because the weak isospin charge is the same for each (T = -1/2):

eL -> dL

The left-handed electron, eL, has a weak hypercharge of Y = -1 and the left-handed downquark, dL, has a weak hypercharge of Y = +1/3. Therefore, this transformation incurs a fall in observable electric charge by a factor of 3 and an accompanying increase in weak hypercharge of +4/3 units (from -1 to +1/3). (These charge assignments, and those in the following paragraphs, are checked arithmetically in the short sketch below.)
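Here is that arithmetic check: the standard Gell-Mann-Nishijima relation Q = T + Y/2 applied to the textbook (T, Y) assignments used in this discussion (this only verifies the bookkeeping of the charges, not the proposed transformation mechanism):

```python
# Consistency check of the charge assignments quoted in the text, using
# Q = T + Y/2 (T = weak isospin, Y = weak hypercharge). Values are the
# standard left/right-handed assignments; expected charges in comments.
particles = {
    "e_L":  (-0.5, -1.0),   # (T, Y) -> expect Q = -1
    "e_R":  ( 0.0, -2.0),   # expect Q = -1
    "nu_L": ( 0.5, -1.0),   # expect Q = 0
    "nu_R": ( 0.0,  0.0),   # expect Q = 0
    "d_L":  (-0.5,  1/3),   # expect Q = -1/3
    "d_R":  ( 0.0, -2/3),   # expect Q = -1/3
    "u_L":  ( 0.5,  1/3),   # expect Q = +2/3
    "u_R":  ( 0.0,  4/3),   # expect Q = +2/3
}
for name, (T, Y) in particles.items():
    print(f"{name:5s}: Q = T + Y/2 = {T + Y / 2:+.3f}")
```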
Now, if the vacuum shielding mechanism suggested has any heuristic validity, the right-handed electron should transform into a right-handed downquark by way of a similar fall in electric charge by a factor of 3 and an accompanying increase in weak hypercharge of +4/3 units:

eR -> dR

The weak isospin charges are the same for right-handed electrons and right-handed downquarks (T = 0 in each case). The transformation of a right-handed electron to a right-handed downquark involves the same reduction in electric charge by a factor of 3 as for left-handed electrons, while the weak hypercharge changes from Y = -2 to Y = -2/3. This means that the weak hypercharge increases by +4/3 units, just the same amount as occurred with the transformation of a left-handed electron to a left-handed downquark. So there is a consistency to this model: the shielding of a given amount of electric charge by the polarized vacuum causes a consistent increase in the weak hypercharge.

If we ignore for the moment the possibility that antimatter leptons may get transformed into upquarks, and just consider matter, then the symmetry transformations required to change right-handed neutrinos into right-handed upquarks, and left-handed neutrinos into left-handed upquarks, are:

vL -> uL

vR -> uR

The first transformation involves a left-handed neutrino, vL, with Y = -1, Q = 0, and T = 1/2, becoming a left-handed upquark, uL, with Y = 1/3, Q = 2/3, and T = 1/2. We notice that Y gains 4/3 in the transformation, while Q gains 2/3. The second transformation involves a right-handed neutrino with Y = 0, Q = 0 and T = 0 becoming a right-handed upquark with Y = 4/3, Q = 2/3 and T = 0. We can immediately see that the transformation has again resulted in Y gaining 4/3 while Q gains 2/3. Hence, the concept that a given change in electric charge is accompanied by a given change in hypercharge remains valid. So we have accounted for the conversion of the four leptons in one generation of particle physics (the left- and right-handed electron and the left- and right-handed neutrino) into the four quarks in the same generation of particle physics (left- and right-handed versions of two quark flavors).

These transformations are obviously not normal reactions at low energy. The first two make checkable, falsifiable predictions about unification, to replace supersymmetry speculation about the unification of running couplings: the relative charges of the electromagnetic, weak and strong forces as a function of either collision energy (e.g., electromagnetic charge increases at higher energy, while strong charge falls) or distance (e.g., electromagnetic charge increases at small distances, while strong charge falls); the standard one-loop formula for the electromagnetic case is given below.

If we review the symmetry transformations suggested for a generation of leptons into a generation of quarks,

eL -> dL

eR -> dR

vL -> uL

vR -> uR

it is clear that the last two reactions are in difficulty, because the conversion of neutrinos into upquarks (in this example of a generation of quarks) is a potential problem for the suggested physical mechanism in the previous (and earlier) posts. The physical mechanism for the first two of the four transformations is relatively straightforward to picture: try to collide leptons at enormous energy, and the overlap of the polarized vacuum veils of polarizable fermions should shield some of the long-range (observable low energy) electric charge, with this shielded energy used instead to power short-range weak hypercharge interactions mediated by weak gauge bosons, and colour charges for the strong force.
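For the electromagnetic case just mentioned, the standard one-loop QED running (textbook physics, quoted here for comparison rather than derived from the mechanism above; natural units, with Q the collision energy scale and m_e the electron mass) is:

\[ \alpha(Q^2) \approx \frac{\alpha}{1 - \dfrac{\alpha}{3\pi}\,\ln\!\frac{Q^2}{m_e^2}}, \]

so the effective electric charge seen in collisions grows logarithmically at energies above the electron-mass scale, i.e. as the collisions probe inside more of the polarized vacuum.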
Because we know exactly how much energy is 'lost' from the electric charge in the first two transformations due to the increased shared polarized vacuum shield, we can quantitatively check this physical mechanism by setting this lost energy equal to the energy gained by the weak force, and seeing if the predictions are accurate. This mechanism might not apply directly to the last two transformations, since neutrinos do not carry a net electric charge. It is also necessary to investigate the possibilities for the transformation of positrons into upquarks. The issue of why there is little antimatter might be resolved if positrons were converted into upquarks at high energy in the big bang, by the mechanism suggested for the first two transformations.

However, the polarized vacuum shielding mechanism might still apply in some circumstances to neutral particles, depending on the geometry. Neutrinos may be electrically neutral as observed at low energy or large distances, while actually carrying equal and opposite electric charges. (Similarly, atoms often appear to be neutral, but if we smash them to pieces we get observable electric charges arising. The apparent electrical neutrality of atoms is a masking effect of the fact that atoms usually carry equal positive and negative charge, which cancel as seen from a distance. A photon of light similarly carries positive electric field energy and negative electric field energy in equal quantities; the two cancel out overall, but the electromagnetic fields of the photon can interact with charges.) Charge is only manifested by way of the field created by a charge, since nobody has ever seen the core of a charged particle, only the field. A confined field of a given charge is therefore indistinguishable from a charge. The only reason why an electron appears to be a negative charge is because it has a negative electric field around it. As shown in Fig. 5 of the previous post, there is a modification necessary to the U(1) symmetry of the standard model of particle physics: negative gauge bosons to mediate the fields around negative charges, and positive gauge bosons to mediate the fields around positive charges.

So a 'neutral' particle which is neutral because it contains equal amounts of positive and negative electric field may be able to induce electric polarization of the vacuum for the short-ranged (uncancelled) electric field. The range of this effect is obviously limited to the distance between the centre of the positive part of the particle and the centre of the negative part. (In the case of a photon, for example, this distance is the wavelength.)

If we replace the existing electroweak SU(2)xU(1) symmetry by SU(2)xSU(2), maybe with each SU(2) having a different handedness, then we get four charged bosons (two charged massive bosons for the weak force, and two charged massless bosons for electromagnetism) and two neutral bosons: a massless gravity-mediating gauge boson, and a massive weak neutral-current producing gauge boson.

Let's try the transformation of a positron into an upquark. This has two major advantages over the idea that neutrinos are transformed into upquarks. First, it explains why we don't observe much antimatter in nature (tiny amounts arise from radioactive decays involving positron emission, but these quickly annihilate with matter into gamma rays). In the big bang, if nature was initially symmetric, you would expect as much matter as antimatter.
The transformation of free positrons into confined upquarks would sort out this problem. Most of the universe is hydrogen, consisting of a proton containing two upquarks and a downquark, plus an orbital electron. If the upquarks come from a transformation of positrons, while downquarks come from a transformation of electrons, the matter-antimatter balance is resolved.

Secondly, the transformation of positrons to upquarks has a simple mechanism by vacuum polarization shielding of the electric charge, causing the electric charge to drop from +1 unit for a positron to +2/3 units for an upquark. (This occurs because you get two positive upquarks and one downquark in a proton.) The transformation is

e+L -> uL

The positron on the left hand side has Y = +1, Q = +1 and T = +1/2. The upquark on the right hand side has Y = +1/3, Q = +2/3 and T = +1/2. Hence, there is a decrease of Y by 2/3, while Q decreases by 1/3. Hence the amount of change of Y is twice that of Q. This is impressively identical to the situation in the transformation of electrons into downquarks, where an increase of Q by 2/3 units is accompanied by an increase of Y by twice 2/3, i.e., by 4/3, for the transformation

eL -> dL

There are only two ways that quarks can group: in pairs, and in triplets or triads. The pairs of quarks sharing the same polarized vacuum are known as mesons, and mesons are the SU(2) symmetry pairs of a left-handed quark and a left-handed anti-quark, which both experience the weak nuclear force (no right-handed particle can participate in the weak nuclear force, because the right-handed neutrino has zero weak hypercharge). The SU(3) symmetry triplets of quarks are called baryons.

Because only left-handed particles experience the weak force (i.e., parity is broken), it is vital to explain why this is so. This arises from the way the vector bosons gain mass. In the basic standard model, everything is massless. Mass is added to the standard model by a separate scalar field (such as that which is speculatively proposed by Philip Anderson and Peter Higgs, and called the Higgs field), which gives all the massive particles (including the weak force vector bosons) their mass. The quanta of the scalar mass field are named 'Higgs bosons', but these have never been officially observed, and mainstream speculations do not predict the Higgs boson mass unambiguously.

The model for masses in the previous post predicts composite (meson and baryon) particle masses to be due to an integer number of 91 GeV building blocks of mass, which couple weakly due to the shielding factor of the polarized vacuum around a fermion. The 91 GeV energy is the equivalent of the rest mass of the uncharged neutral weak gauge boson, the Z.

The SU(3), SU(2) and U(1) gauge symmetries of the standard model describe triplets (baryons), doublets (mesons) and single particle cores (leptons), dominated by strong, weak and electromagnetic interactions, respectively. The problem is located in the electroweak SU(2)xU(1) symmetry. Most of the papers and books on gauge symmetry focus on the technical details of the mathematical machinery, and simple mechanisms are looked at askance (as is generally the case in quantum mechanics and general relativity). So you end up learning, say, how to drive a car without knowing how the engine works, or you learn how the engine works without any knowledge of the territory which would enable you to plan a useful journey.
This is the way some complex mathematical physics is traditionally taught, mainly to get away from useless speculations. Feynman's analogy of the chess game is fairly good. (Deduce some of the rules of the game by watching the game being played, and use these rules to make some accurate predictions about what may happen, without having the complete understanding necessary for confident explanation of what the game is about. Then make do by teaching the better known predictive rules, which are technical and accurate, but don't always convey a complete understanding of the big picture.)

A serious problem with the U(1) symmetry is that you can't really ever get single leptons in nature. They all arise naturally from pair production, so they usually arrive in doublets, contradicting U(1); examples: in beta decay, you get a beta particle and an antineutrino, while in pair production you may get a positron and an electron.

This is part of the reason why SU(2) deals with leptons in the model proposed in the previous post. Whereas pairs of left-handed quarks are confined in close proximity in mesons, a lepton-antilepton pair is not confined in a small space, but it is still a type of doublet and can be treated as such by SU(2) using massless gauge bosons (take the masses away from the Z, W+ and W- weak bosons, and you are left with a massless Z boson that mediates gravity, and massless W+ and W- bosons which mediate electromagnetic forces). Because a version of SU(2) with massless gauge bosons has infinite-range inverse-square law fields, it is ideal for describing the widely separated lepton-antilepton pairs created by pair production, just as SU(2) with massive gauge bosons is ideal for describing the short-range weak force in left-handed quark-antiquark pairs (mesons).

The electroweak chiral symmetry arises because only left-handed particles can interact with massive SU(2) gauge bosons (the weak force), while all particles can interact with massless SU(2) gauge bosons (gravity and electromagnetism). The reason why this is the case is down to the nature of the way mass is given to SU(2) gauge bosons by a mass-giving Higgs-type field. Presumably a Higgs boson coupled with a massless weak gauge boson gives a composite particle which only interacts with left-handed particles, while the nature of the massless weak gauge bosons is that, in the absence of Higgs bosons, they can interact equally with left- and right-handed particles.

To summarise: quarks are probably electrons and antielectrons (positrons) with the symmetry transformation modifications you get from close confinement of electrons against the exclusion principle (e.g., such electrons acquire new charges and short-range interactions). Downquarks are electrons trapped in mesons (pairs of quarks containing quark-antiquark, bound together by the SU(2) weak nuclear force, so they have short lifetimes and undergo beta radioactive decay) or in baryons, which are triplets of quarks bound by the SU(3) strong nuclear force. The confinement of electrons in a small space reduces their electric charge, because they are all close enough in the pair or triplet to share the same overlapping polarized vacuum, which shields part of the electric field. Because this shielding effect is boosted, the electron charge per electron observed at long range is reduced to a fraction.
The idealistic model is 3 electrons confined in close proximity, giving a polarized vacuum 3 times stronger, which reduces the observable charge per electron by a factor of 3, giving the e/3 downquark charge. This is a bit too simplistic of course, because in reality you get mainly stable combinations like protons (2 upquarks and 1 downquark). The energy lost from the electric charge, due to the absorption in the polarized vacuum, powers the short-ranged nuclear forces which bind the quarks in mesons and baryons together.

Upquarks would seem to be trapped positrons. This is neat because most of the universe is hydrogen, with one electron in orbit and 2 upquarks plus 1 downquark in the proton nucleus. So one complete hydrogen atom is formed by 2 electrons and 2 positrons. This explains the absence of antimatter in the universe: the positrons are all here, but trapped in nuclei as upquarks.

Only particles with left-handed Weyl spin undergo weak force interactions. Possibly the correct electroweak-gravity symmetry group is SU(2)L x SU(2)R, where SU(2)L is a left-handed symmetry and SU(2)R is a right-handed one. The left-handed version couples to massive bosons which give mass to particles and vector bosons, creating all the massive particles and weak vector bosons. The right-handed version presumably does not couple to massive bosons. The result here is that the right-handed version, SU(2)R, produces only mass-less particles, giving the gauge bosons needed for long-range electromagnetic and gravitational forces (the proposed boson content is summarised below). If that works in detail, it is a simplification of the SU(2)xU(1) electroweak model, which should make the role of the mass-giving field clearer, and predictions easier.

The mainstream SU(2)xU(1) model requires a symmetry-breaking Higgs field which works by giving mass to weak gauge bosons only below a particular energy, or beyond a particular distance from a particle core. The weak gauge bosons are supposed to be mass-less above that energy, where electroweak symmetry exists; electroweak symmetry breaking is supposed to occur below the Higgs expectation energy due to the fact that 3 weak gauge bosons acquire mass at low energy, while photons don't acquire mass at low energy. This SU(2)xU(1) model mimics a lot of correct physics, without being the correct electroweak unification. How far has the idea that weak gauge bosons lose mass above the Higgs expectation value been checked? (I don't think it has been checked at all yet.) Presumably this is linked to ongoing efforts to see evidence for a Higgs boson.

The electroweak theory correctly unifies the weak force (dealing with neutrinos, beta decay and the behaviour of mesons) with Maxwell's equations at low energy, and the electroweak unification SU(2)xU(1) predicted the W and Z massive weak gauge bosons detected at CERN in 1983. However, the existence of three massive weak gauge bosons is the same in the proposed replacement for SU(2)xU(1). I think that the suggested replacement of U(1) by another SU(2) makes quite a lot of changes to the untested parts of the standard model (in particular the Higgs mechanism), besides the obvious benefits of introducing gravity and causal electromagnetism.
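To lay out the proposed SU(2)L x SU(2)R gauge boson content explicitly (this summarises the conjecture described above, not established physics):

\[ \underbrace{W^{+},\; W^{-},\; Z^{0}}_{\text{SU(2)}_{L}:\ \text{massive (weak force, left-handed particles only)}} \qquad \underbrace{W^{+},\; W^{-}}_{\text{SU(2)}_{R}:\ \text{massless (positive/negative electromagnetism)}} \qquad \underbrace{Z^{0}}_{\text{SU(2)}_{R}:\ \text{massless (gravity)}} \]

Each SU(2) contributes three gauge bosons (one per generator), giving the four charged and two neutral bosons counted in the earlier paragraph.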
Spherical symmetry of Hubble recession

I'd like to thank Bee and others at the Backreaction blog for patiently explaining to me that the statement that radial distance elements are equal for the Hubble recession in all directions around us,

\[ H = \frac{dv}{dr} = \frac{dv}{dx} = \frac{dv}{dy} = \frac{dv}{dz} \approx \frac{1}{t} \quad (t \text{ being the age of the universe}), \]

\[ \frac{1}{H} = \frac{dr}{dv} = \frac{dx}{dv} = \frac{dy}{dv} = \frac{dz}{dv}, \]

\[ \frac{dv}{H} = dr = dx = dy = dz, \]

for the spherically symmetrical recession of stars around us (in directions x, y, z, where r is the general radial direction that can point any way), appears superficially to be totally 'wrong' to people who are accustomed only to the elementary equations for spherical geometry and metrics written for the general case of non-symmetric spatial dimensions, which simply don't apply here. Hopefully, 'critics' will grasp the point that equation A does not disprove equation B just because you have seen equation A in some textbook, and not equation B.

For example, some people repeatedly and falsely claim that H = dv/dr = dv/dx = dv/dy = dv/dz and the resulting equality dr = dx = dy = dz is total rubbish, and is 'disproved' by the existence of metrics and non-symmetrical spherical geometrical equations. They ignore all explanations that this equality of gradient elements has nothing to do with metrics or spherical geometry, and is due to the spherical symmetry of the cosmic expansion we observe around us.

Another way to look at H = dv/dr = dv/dx = dv/dy = dv/dz is to remember that 1/H is a way to measure the age of the universe. If the universe were at critical density and being gravitationally slowed down, with no cosmological constant to offset this gravity effect by providing repulsive long-range force and an outward acceleration to cancel out the gravitational inward deceleration assumed by the mainstream (i.e., the belief until 1998), then the age of the universe would be (2/3)/H, where 2/3 is the compensation factor for gravitational retardation. However, since 1998 there has been good evidence that gravity is not slowing down the expansion; instead there is either something opposing gravity by causing repulsion at immense distance scales and outward acceleration (so-called 'dark energy', giving a small positive cosmological constant), or else there is a partial lack of gravity at long distances due to graviton redshift and/or the geometry of a quantum gravity mechanism (depending on whether you are assuming spin-2 gravitons or not), which is substantially more predictive and less ad hoc, since it was predicted via Electronics World, Oct. 1996, years before being confirmed by observation (see comment 11 on the previous post).

Therefore, let's use 1/H as the age of the universe, time! Then we find:

\[ \frac{1}{H} = t = \frac{dr}{dv} = \frac{dx}{dv} = \frac{dy}{dv} = \frac{dz}{dv}. \]

This proves that dr/dv = dx/dv = dy/dv = dz/dv. Now multiply this out by dv, and what do you get? You get: dr = dx = dy = dz.

As Fig. 2 shows, it is a fact that the Hubble parameter can be expressed as H = dv/dr = dv/dx = dv/dy = dv/dz, where the equality of numerators means that the denominators are similarly equal: dr = dx = dy = dz. This is fact, not an opinion or guess.

Fig. 2: Illustration of the reason why the Hubble law gives H = dv/dr = dv/dx = dv/dy = dv/dz: because of the isotropy (i.e. the Hubble law is the same in every direction we look, as far as observational evidence can tell), the numerators in the fractions are all equal to dv, so the denominators are all equal to each other too: dr = dx = dy = dz.
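A trivial numerical restatement of the isotropy point (toy illustration only; the value of H is just an approximate observed figure): for a pure Hubble flow v = Hr, the gradient of recession speed is the same constant H along whichever axis you step.

```python
# Toy check: with a pure Hubble flow v = H*r, the recession-speed gradient
# is the same constant H along any direction, which is the whole argument.
H = 2.3e-18  # s^-1, roughly the observed Hubble parameter in SI units

def recession_speed(x, y, z):
    r = (x * x + y * y + z * z) ** 0.5
    return H * r

dr = 1.0e22  # metres, step size
grad_x = (recession_speed(2 * dr, 0, 0) - recession_speed(dr, 0, 0)) / dr
grad_y = (recession_speed(0, 2 * dr, 0) - recession_speed(0, dr, 0)) / dr
grad_z = (recession_speed(0, 0, 2 * dr) - recession_speed(0, 0, dr)) / dr
print(grad_x, grad_y, grad_z)  # all equal H: dv/dx = dv/dy = dv/dz = dv/dr
```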
Beware, everyone: this has nothing whatsoever to do with metrics, with general relativity, or with the general case in spherical geometry (where the origin of coordinates need not in general be the centre of the spherical symmetry)! So if your textbook has a formula which 'contradicts' dr = dx = dy = dz, or if you think that dr = dx = dy = dz should in your opinion be replaced by a metric with the squares of line elements all added up, or by a general formula for spherical geometry which applies where the recession varies with direction, then you are wrong. As one commentator on this blog has said (I don't agree with most of what he writes), it is true that new ideas which have not been investigated before often look 'silly'. People who do not check the physics and instead just pick out formulae, misunderstand them, and then ridicule them are not 'critics'. They are not criticising the work; they are criticising their own misunderstandings. So any resulting ridicule and character assassination should be taken with a large pinch of salt. It's best to try to see the funny side when this occurs!

One of the very interesting things about dr = dx = dy = dz is what you get for time dimensions. Because the age of the universe (if there is no gravitational deceleration, as was shown to be the case in 1998) is 1/H, and because we look back in time with increasing distance according to r = x = y = z = ct, it follows that there is an equivalent time-like dimension for each of the spatial dimensions. This makes spacetime easier to understand and allows a new unification scheme! The expanding universe has three orthogonal expanding time-like dimensions (we usually refer to astronomical distances in time units like 'light-years' anyway, since we are observing the past with increasing distance, owing to the travel time of light) in addition to three spacetime dimensions describing matter. Surely this contradicts general relativity? No, because all three time dimensions are usually equal, and so can be represented by a single time element, dt, or its square. To see this, take dr = dx = dy = dz and convert each distance element into its time-like equivalent by dividing by c, giving:

(dr)/c = (dx)/c = (dy)/c = (dz)/c,

which can be written as:

dt_r = dt_x = dt_y = dt_z.

So, because the age of the universe (ascertained from the Hubble parameter) is the same in all directions, all the time dimensions are equal! This is why we need only one time to describe the expansion of the universe. If the Hubble expansion rate were found to be different in directions x, y and z, then the age of the universe would appear to be different in different directions. Fortunately, the age of the universe derived from the Hubble recession seems to be the same (within observational error bars) in all directions: time appears to be isotropic! This is quite a surprising result, as some hostility to this new idea from traditionalists shows.

But the three time dimensions which are usually hidden by this isotropy are vitally important! Replacing the Kaluza-Klein theory, Lunsford has a 6-dimensional unification of electrodynamics and gravitation which has 3 time-like dimensions and appears to be what we need. It was censored off arXiv after being published in a peer-reviewed physics journal: 'Gravitation and Electrodynamics over SO(3,3)', International Journal of Theoretical Physics, Volume 43, Number 1, January 2004, pages 161-177, which can be downloaded here.
The mass-energy (i.e., matter and radiation) has 3 spacetime dimensions which are different from the 3 cosmological spacetime dimensions: the cosmological dimensions are expanding, while the 3 spacetime dimensions describing matter are bound together but are contractable in general relativity. For example, in general relativity the Earth's radius is contracted by 1.5 millimetres.

In addition, as was shown in detail in the previous post, this sorts out 'dark energy' and predicts the strength of gravity accurately within experimental error bars, because when we rewrite the Hubble recession in terms of time rather than distance, we get an acceleration, which by Newton's 2nd empirical law of motion (F = ma) implies an outward force of receding matter, which in turn implies, by Newton's 3rd empirical law of motion, an inward reaction force which – it turns out – is the mechanism behind gravity:

'To find out what the acceleration is, we remember that velocity is defined as v = dR/dt, which rearranges to give dt = dR/v; this can be substituted into the definition of acceleration, a = dv/dt, giving a = dv/(dR/v) = v·dv/dR, into which we can insert Hubble's empirical law v = HR, giving a = HR·d(HR)/dR = H²R.' (From the previous post.)

Deriving the relationship between the FitzGerald contraction and the gravitational contraction

Feynman finds that whereas lengths contract in the direction of motion at velocity v by the ratio (1 − v²/c²)^(1/2), gravity contracts lengths by the amount (1/3)MG/c² = 1.5 mm for the contraction of Earth's radius by gravity. It is of interest that this result can be obtained simply, throwing light on the relationship between the equivalence of mass and energy in 'special relativity' (which is at best just an approximation) and the equivalence of inertial mass and gravitational mass in general relativity.

To start with, recall Dr Love's derivation of Kepler's law from the equivalence of the kinetic energy of a planet to its gravitational potential energy, given in a previous post. This is very simple. If a body's average velocity in space (outside the atmosphere) is just over the escape velocity, it will eventually escape and will therefore be unable to orbit endlessly. If it is just under that velocity, it will eventually fall back to Earth, so again it will not orbit endlessly. Like Goldilocks and the porridge, orbit is very fussy: the average orbital velocity must exactly match the escape velocity – neither more nor less – in order to achieve a stable orbit. Dr Love points out the consequences: a body in orbit must have an average velocity equal to the escape velocity v = (2GM/r)^(1/2), which implies that its kinetic energy must be equal to its gravitational potential energy:

(1/2)mv² = GMm/r.

This permits him to derive Kepler's law. It is also very important because it gives the stability condition for orbits:

average kinetic energy = gravitational potential energy.

Einstein's equivalence of inertial and gravitational mass in E = mc² then allows us to use this equivalence of inertial kinetic energy and gravitational potential energy to derive the equivalence principle of general relativity, which states that inertial mass equals gravitational mass, at least for orbiting bodies. Another physically justified argument is that gravitational potential energy is the gravity energy that would be released in the case of collapse.
If you allowed the object to fall, thereby picking up that gravitational potential energy, the latter would be converted into the kinetic energy of the object. This is why the two energies are equivalent. It's a rigorous argument!

Now test it further. Take the FitzGerald-Lorentz contraction of length due to inertial motion at velocity v, in which objects are compressed by the ratio (1 − v²/c²)^(1/2). Using the equivalence of average kinetic energy to gravitational potential energy, you can place the escape velocity v = (2GM/r)^(1/2) into the contraction formula and expand the result to two terms using the binomial expansion. You find that the radius of a gravitational mass would be reduced by the amount GM/c² = 4.5 mm for Earth's radius, which is three times as big as Feynman's formula for the gravitational compression of Earth's radius. The factor of three comes from the fact that the FitzGerald-Lorentz contraction is in one dimension only (the direction of motion), while the gravitational field lines radiate in three dimensions, so the same amount of contraction is spread over three times as many dimensions, giving a reduction in radius of (1/3)GM/c² = 1.5 mm! (There is also a rigorous mathematical discussion of this on the page here, if you have the time to scroll down and find it.)

Unusually, Feynman makes a confused mess of this effect in the relevant volume of his Lectures on Physics (chapter 42, p. 6), where he correctly gives equation 42.3 for the excess radius as predicted radius minus measured radius (i.e., he claims that the predicted radius is the bigger one in the equation), but then on the same page falsely and confusingly writes in the text: '... actual radius exceeded the predicted radius ...' (i.e., he claims in the text that the predicted radius is the smaller).
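As a numeric check of the figures above (a sketch of my own; the post quotes no constants, so standard values for G, c and the Earth's mass are assumed, and the post's 4.5 mm and 1.5 mm are rounded values):

    # Minimal numeric check of the contraction figures quoted above.
    # The escape velocity v = sqrt(2GM/r) is inserted into the
    # FitzGerald-Lorentz factor (1 - v^2/c^2)^(1/2); its binomial
    # expansion to first order gives a radial shrinkage of about
    # GM/c^2, and dividing by 3 spreads that one-dimensional
    # contraction over three dimensions.
    G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8         # speed of light, m/s
    M_earth = 5.972e24  # kg

    contraction_1d = G * M_earth / c**2    # ~4.4 mm
    contraction_3d = contraction_1d / 3.0  # ~1.5 mm

    print(f"GM/c^2       = {contraction_1d * 1e3:.2f} mm")
    print(f"(1/3) GM/c^2 = {contraction_3d * 1e3:.2f} mm")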
Electrons in atoms

Learning objectives

• Explain the difference between a continuous spectrum and a line spectrum.
• Explain the difference between an emission and an absorption spectrum.
• Use the concept of quantized energy states to explain atomic line spectra.
• Given an energy level diagram, predict wavelengths in the line spectrum, and vice versa.
• Define and distinguish between shells, subshells, and orbitals.
• Explain the relationships between the quantum numbers.
• Use quantum numbers to label electrons in atoms.
• Describe and compare atomic orbitals given the n and ℓ quantum numbers.
• List a set of subshells in order of increasing energy.
• Write electron configurations* for atoms in either the subshell or orbital box notations.
• Write electron configurations of ions.
• Use electron configurations to predict the magnetic properties of atoms.

Lecture outline

The quantum theory was used to show how the wavelike behavior of electrons leads to quantized energy states when the electrons are bound or trapped. In this section, we'll use the quantum theory to explain the origin of spectral lines and to describe the electronic structure of atoms.

Emission spectra

• experimental key to atomic structure: analyze light emitted by high-temperature gaseous elements
• experimental setup: spectroscopy
• atoms emit a characteristic set of discrete wavelengths – not a continuous spectrum!
• the atomic spectrum can be used as a "fingerprint" for an element
• hypothesis: if atoms emit only discrete wavelengths, maybe atoms can have only discrete energies
• an analogy: a turtle sitting on a ramp can have any height above the ground, and so, any potential energy; a turtle sitting on a staircase can take on only certain discrete energies
  • energy is required to move the turtle up the steps (absorption)
  • energy is released when the turtle moves down the steps (emission)
  • only discrete amounts of energy are absorbed or released (energy is said to be quantized)
• energy staircase diagram for atomic hydrogen
  • the bottom step is called the ground state
  • higher steps are called excited states
• computing line wavelengths using the energy staircase diagram
• computing energy steps from wavelengths in the line spectrum
• summary: line spectra arise from transitions between discrete (quantized) energy states

The quantum mechanical atom

• Electrons in atoms have quantized energies
• Electrons in atoms are bound to the nucleus by electrostatic attraction
• Electron waves are standing matter waves
  • standing matter waves have quantized energies, as with the "electron on a wire" model
• Electron standing matter waves are 3-dimensional
  • the electron-on-a-wire model was one-dimensional; one quantum number was required to describe the state of the electron
  • a 3D model requires three quantum numbers
• A three-dimensional standing matter wave that describes the state of an electron in an atom is called an atomic orbital
• The energies and mathematical forms of the orbitals can be computed using the Schrödinger equation
  • quantization isn't assumed; it arises naturally in the solution of the equation
  • every electron adds 3 variables (x, y, z) to the equation; it's very hard to solve equations with lots of variables
  • energy-level separations computed with the Schrödinger equation agree very closely with those computed from atomic spectral lines

Quantum numbers

• Think of the quantum numbers as addresses for electrons
• the principal quantum number, n
  • determines the size of an orbital (bigger n = bigger orbitals)
  • largely determines the energy of the orbital (bigger n = higher energy)
  • can take on integer values n = 1, 2, 3, ...
  • all electrons in an atom with the same value of n are said to belong to the same shell
  • spectroscopists use the following names for shells:

Spectroscopist's notation for shells*.

n  shell name
1  K
2  L
3  M
4  N
5  O
6  P
7  Q

• the azimuthal quantum number, ℓ
  • designates the overall shape of the orbital within a shell
  • affects orbital energies (bigger ℓ = higher energy)
  • all electrons in an atom with the same value of ℓ are said to belong to the same subshell
  • only integer values between 0 and n−1 are allowed
  • sometimes called the orbital angular momentum quantum number
  • spectroscopists use the following notation for subshells:

Spectroscopist's notation for subshells*.

ℓ  subshell name
0  s
1  p
2  d
3  f

• the magnetic quantum number, mℓ
  • determines the orientation of orbitals within a subshell
  • does not affect orbital energy (except in magnetic fields!)
  • only integer values between −ℓ and +ℓ are allowed
  • the number of mℓ values within a subshell is the number of orbitals within the subshell

The number of possible mℓ values determines the number of orbitals* in a subshell.
ℓ  possible values of mℓ     number of orbitals in this subshell
0  0                         1
1  −1, 0, +1                 3
2  −2, −1, 0, +1, +2         5

• the spin quantum number, ms
  • several experimental observations can be explained by treating the electron as though it were spinning
  • spin makes the electron behave like a tiny magnet
  • spin can be clockwise or counterclockwise
  • the spin quantum number can have values of +1/2 or −1/2

Electron configurations of atoms

• a list showing how many electrons are in each orbital or subshell in an atom or ion
• subshell notation: list subshells in order of increasing energy, with the number of electrons in each subshell as a superscript
• examples
  • 1s2 2s2 2p5 means "2 electrons in the 1s subshell, 2 electrons in the 2s subshell, and 5 electrons in the 2p subshell"
  • 1s2 2s2 2p6 3s2 3p3 is an electron configuration with 15 electrons total; 2 electrons have n=1 (in the 1s subshell); 8 electrons have n=2 (2 in the 2s subshell, and 6 in the 2p subshell); and 5 electrons have n=3 (2 in the 3s subshell, and 3 in the 3p subshell)
• ground state* configurations fill the lowest energy orbitals first

Electron configurations of the first 11 elements, in subshell notation. Notice how configurations can be built by adding one electron at a time.

atom  Z   ground state electronic configuration
H     1   1s1
He    2   1s2
Li    3   1s2 2s1
Be    4   1s2 2s2
B     5   1s2 2s2 2p1
C     6   1s2 2s2 2p2
N     7   1s2 2s2 2p3
O     8   1s2 2s2 2p4
F     9   1s2 2s2 2p5
Ne    10  1s2 2s2 2p6
Na    11  1s2 2s2 2p6 3s1

Writing electron configurations

• strategy: start with hydrogen, and build the configuration one electron at a time (the Aufbau principle*); see the short script after these notes
  1. fill subshells in order by counting across periods of the periodic table, from hydrogen up to the element of interest (Figure: filling order of subshells from the periodic table.)
  2. rearrange subshells (if necessary) in order of increasing n and ℓ
• examples: give the ground state electronic configurations for Al, Fe, Ba, and Hg
• watch out for d- and f-block elements; orbital interactions cause exceptions to the Aufbau principle
  • half-filled and completely filled d and f subshells have extra stability

Know these exceptions to the Aufbau principle in the 4th period. (There are many others at the bottom of the table, but don't worry about them now.)

exception  configuration predicted by the Aufbau principle  true ground state configuration
Cr         1s2 2s2 2p6 3s2 3p6 3d4 4s2                      1s2 2s2 2p6 3s2 3p6 3d5 4s1
Cu         1s2 2s2 2p6 3s2 3p6 3d9 4s2                      1s2 2s2 2p6 3s2 3p6 3d10 4s1

Electron configurations including spin

• unpaired electrons give atoms (and molecules) special magnetic and chemical properties
• when spin is of interest, count unpaired electrons using orbital box diagrams

Examples of ground state electron configurations in the orbital box notation that shows electron spins. (The orbital box diagrams themselves are images and are not reproduced here.)

• drawing orbital box diagrams
  1. write the electron configuration in subshell notation
  2. draw a box for each orbital; remember that s, p, d, and f subshells contain 1, 3, 5, and 7 degenerate* orbitals, respectively, and that an orbital can hold 0, 1, or 2 electrons only – if there are two electrons in an orbital, they must have opposite (paired) spins (Pauli principle*)
  3. within a subshell (depicted as a group of boxes), spread the electrons out and line up their spins as much as possible (Hund's rule*)
• the number of unpaired electrons can be counted experimentally
  • configurations with unpaired electrons are attracted to magnetic fields (paramagnetism*)
  • configurations with only paired electrons are weakly repelled by magnetic fields (diamagnetism*)

Core and valence electrons

• chemistry involves mostly the shell* with the highest value of the principal quantum number*, n, called the valence shell*
• the noble gas core* under the valence shell is chemically inert
• simplify the notation for electron configurations by replacing the core with a noble gas symbol in square brackets:

Examples of electron configurations written in the core/valence notation.

atom  full configuration        core  valence configuration  full configuration using core/valence notation
O     1s2 2s2 2p4               He    2s2 2p4                 [He] 2s2 2p4
Cl    1s2 2s2 2p6 3s2 3p5       Ne    3s2 3p5                 [Ne] 3s2 3p5
Al    1s2 2s2 2p6 3s2 3p1       Ne    3s2 3p1                 [Ne] 3s2 3p1

• electrons in d and f subshells outside the noble gas core are called pseudocore electrons

Examples of electron configurations containing pseudocore electrons.

atom  core  pseudocore  valence   full configuration
Fe    Ar    3d6         4s2       [Ar] 3d6 4s2
Sn    Kr    4d10        5s2 5p2   [Kr] 4d10 5s2 5p2
Hg    Xe    4f14 5d10   6s2       [Xe] 4f14 5d10 6s2
Pu    Rn    5f6         7s2       [Rn] 5f6 7s2
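The counting procedure above is mechanical enough to automate. Here is a short illustrative script (not part of the original notes) that fills subshells in the usual order (increasing n + ℓ, ties broken by smaller n) and prints a ground-state configuration in subshell notation; it deliberately ignores the Cr/Cu-type exceptions listed above.

    # Sketch: build a ground-state electron configuration by the Aufbau
    # principle. Subshells fill in order of increasing n + l (ties broken
    # by smaller n), each holding at most 2(2l + 1) electrons.
    # Exceptions like Cr and Cu are NOT handled here.
    SUBSHELL_LETTERS = "spdf"

    def aufbau_configuration(electrons: int) -> str:
        # generate (n, l) pairs in filling (Madelung) order
        subshells = sorted(
            ((n, l) for n in range(1, 8) for l in range(min(n, 4))),
            key=lambda nl: (nl[0] + nl[1], nl[0]),
        )
        parts = []
        for n, l in subshells:
            if electrons <= 0:
                break
            capacity = 2 * (2 * l + 1)      # s=2, p=6, d=10, f=14
            filled = min(capacity, electrons)
            parts.append(f"{n}{SUBSHELL_LETTERS[l]}{filled}")
            electrons -= filled
        return " ".join(parts)

    print(aufbau_configuration(11))  # Na -> 1s2 2s2 2p6 3s1
    print(aufbau_configuration(15))  # P  -> 1s2 2s2 2p6 3s2 3p3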
Quantum Monte Carlo

Quantum Monte Carlo is a large class of computer algorithms that simulate quantum systems with the idea of solving the quantum many-body problem. They use, in one way or another, the Monte Carlo method to handle the many-dimensional integrals that arise. Quantum Monte Carlo allows a direct representation of many-body effects in the wave function, at the cost of statistical uncertainty that can be reduced with more simulation time. For bosons, there exist numerically exact and polynomial-scaling algorithms. For fermions, there exist very good approximations and numerically exact exponentially scaling quantum Monte Carlo algorithms, but none that are both.

In principle, any physical system can be described by the many-body Schrödinger equation as long as the constituent particles are not moving "too" fast; that is, they are not moving near the speed of light. This includes the electrons in almost every material in the world, so if we could solve the Schrödinger equation, we could predict the behavior of any electronic system, which has important applications in fields from computers to biology. This also includes the nuclei in Bose–Einstein condensates and superfluids such as liquid helium. The difficulty is that the Schrödinger equation involves a wave function with three coordinates per particle (3N dimensions for N particles) and is difficult to solve in a reasonable amount of time, even using parallel computing technology. Traditionally, theorists have approximated the...
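To make the statistical idea concrete, here is a toy variational Monte Carlo sketch (an illustration of mine, not from the article above): it estimates the ground-state energy of a one-dimensional harmonic oscillator (with ħ = m = ω = 1) using a Gaussian trial wave function psi(x) = exp(−αx²), sampling |psi|² with a Metropolis walk. The exact ground-state energy, 0.5, is recovered at α = 0.5, where the local energy is constant; other values of α give higher variational estimates.

    # Toy variational Monte Carlo for a 1D harmonic oscillator (hbar=m=omega=1).
    # Trial wave function psi(x) = exp(-alpha * x^2); the local energy is
    # E_L(x) = -psi''/(2 psi) + x^2/2 = alpha + x^2 * (1/2 - 2*alpha^2).
    import random, math

    def local_energy(x, alpha):
        return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

    def vmc_energy(alpha, steps=200_000, step_size=1.0, seed=1):
        rng = random.Random(seed)
        x, total = 0.0, 0.0
        for _ in range(steps):
            x_new = x + rng.uniform(-step_size, step_size)
            # Metropolis acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
            ratio = math.exp(-2.0 * alpha * (x_new**2 - x**2))
            if rng.random() < ratio:
                x = x_new
            total += local_energy(x, alpha)
        return total / steps

    for alpha in (0.3, 0.5, 0.8):
        print(f"alpha={alpha}: E ~ {vmc_energy(alpha):.3f}")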
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics principally attributed to Niels Bohr and Werner Heisenberg.[1][2][3] It is one of the oldest of numerous proposed interpretations of quantum mechanics, as features of it date to the development of quantum mechanics during 1925–1927, and it remains one of the most commonly taught.[4][5][6]

There is no definitive historical statement of what the Copenhagen interpretation is. There are some fundamental agreements and disagreements between the views of Bohr and Heisenberg.[7][8] For example, Heisenberg emphasized a sharp "cut" between the observer (or the instrument) and the system being observed,[9]: 133  while Bohr offered an interpretation that is independent of a subjective observer or measurement or collapse, relying instead on an "irreversible" or effectively irreversible process, which could take place within the quantum system.[10]

Features common to Copenhagen-type interpretations include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties which cannot all be observed or measured simultaneously.[11] Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement. Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' mental arbitrariness.[12]: 85–90 

Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the apparent subjectivity of requiring an observer, the difficulty of defining what might count as a measuring device, and the seeming reliance upon classical physics in describing such devices.

Main article: Old quantum theory

Starting in 1900, investigations into atomic and subatomic phenomena forced a revision of the basic concepts of classical physics. However, it was not until a quarter-century had elapsed that the revision reached the status of a coherent theory. During the intervening period, now known as the time of the "old quantum theory", physicists worked with approximations and heuristic corrections to classical physics. Notable results from this period include Max Planck's calculation of the blackbody radiation spectrum, Albert Einstein's explanation of the photoelectric effect, Einstein and Peter Debye's work on the specific heat of solids, Niels Bohr and Hendrika Johanna van Leeuwen's proof that classical physics cannot account for diamagnetism, Bohr's model of the hydrogen atom, and Arnold Sommerfeld's extension of the Bohr model to include relativistic effects.
From 1922 through 1925, this method of heuristic corrections encountered increasing difficulties; for example, the Bohr–Sommerfeld model could not be extended from hydrogen to the next simplest case, the helium atom.[13] The transition from the old quantum theory to full-fledged quantum physics began in 1925, when Werner Heisenberg presented a treatment of electron behavior based on discussing only "observable" quantities, meaning to Heisenberg the frequencies of light that atoms absorbed and emitted.[14] Max Born then realized that in Heisenberg's theory, the classical variables of position and momentum would instead be represented by matrices, mathematical objects that can be multiplied together like numbers with the crucial difference that the order of multiplication matters. Erwin Schrödinger presented an equation that treated the electron as a wave, and Born discovered that the way to successfully interpret the wave function that appeared in the Schrödinger equation was as a tool for calculating probabilities.[15]

Quantum mechanics cannot easily be reconciled with everyday language and observation, and has often seemed counter-intuitive to physicists, including its inventors.[note 1] The ideas grouped together as the Copenhagen interpretation suggest a way to think about how the mathematics of quantum theory relates to physical reality.

Origin and use of the term

The Niels Bohr Institute in Copenhagen

The term refers to the city of Copenhagen in Denmark, and was apparently coined during the 1950s.[16] Earlier, during the mid-1920s, Heisenberg had been an assistant to Bohr at his institute in Copenhagen, where they helped originate quantum mechanical theory.[17][18] At the 1927 Solvay Conference, a dual talk by Max Born and Heisenberg declared "we consider quantum mechanics to be a closed theory, whose fundamental physical and mathematical assumptions are no longer susceptible of any modification."[19][20]

In 1929, Heisenberg gave a series of invited lectures at the University of Chicago explaining the new field of quantum mechanics. The lectures then served as the basis for his textbook, The Physical Principles of the Quantum Theory, published in 1930.[21] In the book's preface, Heisenberg wrote:

On the whole, the book contains nothing that is not to be found in previous publications, particularly in the investigations of Bohr. The purpose of the book seems to me to be fulfilled if it contributes somewhat to the diffusion of that 'Kopenhagener Geist der Quantentheorie' [Copenhagen spirit of quantum theory] if I may so express myself, which has directed the entire development of modern atomic physics.
The term 'Copenhagen interpretation' suggests something more than just a spirit, such as some definite set of rules for interpreting the mathematical formalism of quantum mechanics, presumably dating back to the 1920s.[22][23][24] However, no such text exists, and the writings of Bohr and Heisenberg contradict each other on several important issues.[8] It appears that the particular term, with its more definite sense, was coined by Heisenberg around 1955,[16] while criticizing alternative "interpretations" (e.g., David Bohm's[25]) that had been developed.[26][27] Lectures with the titles 'The Copenhagen Interpretation of Quantum Theory' and 'Criticisms and Counterproposals to the Copenhagen Interpretation', that Heisenberg delivered in 1955, are reprinted in the collection Physics and Philosophy.[28] Before the book was released for sale, Heisenberg privately expressed regret for having used the term, due to its suggestion of the existence of other interpretations, that he considered to be "nonsense".[29] In a 1960 review of Heisenberg's book, Bohr's close collaborator Léon Rosenfeld called the term an "ambiguous expression" and suggested it be discarded.[30] However, this did not come to pass, and the term entered widespread use.[16][27]

There is no uniquely definitive statement of the Copenhagen interpretation.[8][31][32][33] The term encompasses the views developed by a number of scientists and philosophers during the second quarter of the 20th century.[34] Bohr and Heisenberg never totally agreed on how to understand the mathematical formalism of quantum mechanics,[35] and Bohr distanced himself from what he considered Heisenberg's more subjective interpretation.[7] Bohr offered an interpretation that is independent of a subjective observer, or measurement, or collapse; instead, an "irreversible" or effectively irreversible process causes the decay of quantum coherence which imparts the classical behavior of "observation" or "measurement".[10][36][37][38]

Different commentators and researchers have associated various ideas with the term.[20] Asher Peres remarked that very different, sometimes opposite, views are presented as "the Copenhagen interpretation" by different authors.[note 2] N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced.[40][41] Mermin described the Copenhagen interpretation as coming in different "versions", "varieties", or "flavors".[42]

Some basic principles generally accepted as part of the interpretation include the following:[7]

1. Quantum mechanics is intrinsically indeterministic.
2. The correspondence principle: in the appropriate limit, quantum theory comes to resemble classical physics and reproduces the classical predictions.
3. The Born rule: the wave function of a system yields probabilities for the outcomes of measurements upon that system.
4. Complementarity: certain properties cannot be jointly defined for the same system at the same time. In order to talk about a specific property of a system, that system must be considered within the context of a specific laboratory arrangement. Observable quantities corresponding to mutually exclusive laboratory arrangements cannot be predicted together, but considering multiple such mutually exclusive experiments is necessary to characterize a system.
Hans Primas and Roland Omnès give a more detailed breakdown that, in addition to the above, includes the following:[12]: 85 

1. Quantum physics applies to individual objects. The probabilities computed by the Born rule do not require an ensemble or collection of "identically prepared" systems to understand.
2. The results provided by measuring devices are essentially classical, and should be described in ordinary language. This was particularly emphasized by Bohr, and was accepted by Heisenberg.[note 3]
3. Per the above point, the device used to observe a system must be described in classical language, while the system under observation is treated in quantum terms. This is a particularly subtle issue on which Bohr and Heisenberg came to differing conclusions. According to Heisenberg, the boundary between classical and quantum can be shifted in either direction at the observer's discretion. That is, the observer has the freedom to move what would become known as the "Heisenberg cut" without changing any physically meaningful predictions.[12]: 86  On the other hand, Bohr argued that a complete specification of the laboratory apparatus would fix the "cut" in place. Moreover, Bohr argued that at least some concepts of classical physics must be meaningful on both sides of the "cut".[8]
4. During an observation, the system must interact with a laboratory device. When that device makes a measurement, the system's wave function collapses, irreversibly reducing to an eigenstate of the observable that is registered. The result of this process is a tangible record of the event, made by a potentiality becoming an actuality.[note 4]
5. Statements about measurements that are not actually made do not have meaning. For example, there is no meaning to the statement that a photon traversed the upper path of a Mach–Zehnder interferometer unless the interferometer were actually built in such a way that the path taken by the photon is detected and registered.[12]: 88 
6. Wave functions are objective, in that they do not depend upon personal opinions of individual physicists or other such arbitrary influences.[12]: 509–512 

Another issue of importance on which Bohr and Heisenberg disagreed is wave–particle duality. Bohr maintained that the distinction between a wave view and a particle view was defined by a distinction between experimental setups, whereas Heisenberg held that it was defined by the possibility of viewing the mathematical formulas as referring to waves or particles. Bohr thought that a particular experimental setup would display either a wave picture or a particle picture, but not both. Heisenberg thought that every mathematical formulation was capable of both wave and particle interpretations.[44][45]

One difficulty in discussing the philosophical position of "the Copenhagen interpretation" is that there is no single, authoritative source that establishes what the interpretation is. Another complication is that the philosophical background familiar to Einstein, Bohr, Heisenberg, and contemporaries is much less so to physicists and even philosophers of physics in more recent times.[13]

Nature of the wave function

A wave function is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior.
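As a concrete illustration of this "catalog of probabilities" reading (via the Born rule, discussed below), here is a minimal sketch, assuming a simple three-outcome state with arbitrary example amplitudes; it is not drawn from the article itself:

    # Sketch: Born-rule probabilities and "reduction" for a finite
    # quantum state. The amplitudes below are an arbitrary example.
    import random, math

    amplitudes = [complex(1, 0), complex(0, 1), complex(1, 1)]  # unnormalized

    norm = math.sqrt(sum(abs(a) ** 2 for a in amplitudes))
    state = [a / norm for a in amplitudes]

    # Born rule: P(k) = |<k|psi>|^2
    probs = [abs(a) ** 2 for a in state]
    print("probabilities:", [round(p, 3) for p in probs])  # [0.25, 0.25, 0.5]

    # One "measurement": pick an outcome with those probabilities, then
    # replace the state by the corresponding eigenstate (the reduction).
    outcome = random.choices(range(len(state)), weights=probs)[0]
    state = [1.0 if k == outcome else 0.0 for k in range(len(state))]
    print("outcome:", outcome, "post-measurement state:", state)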
Generally, Copenhagen-type interpretations deny that the wave function provides a directly apprehensible image of an ordinary material body or a discernible component of some such,[46][47] or anything more than a theoretical concept.

Probabilities via the Born rule

Main article: Born rule

The Born rule is essential to the Copenhagen interpretation.[48] Formulated by Max Born in 1926, it gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a particle at a given point, when measured, is proportional to the square of the magnitude of the particle's wave function at that point.[note 5]

Main article: Wave function collapse

A common perception of "the" Copenhagen interpretation is that an important part of it is the "collapse" of the wave function.[7] In the act of measurement, it is postulated, the wave function of a system can change suddenly and discontinuously. Prior to a measurement, a wave function involves the various probabilities for the different potential outcomes of that measurement. But when the apparatus registers one of those outcomes, no traces of the others linger. Heisenberg spoke of the wave function as representing available knowledge of a system, and did not use the term "collapse", but instead termed it "reduction" of the wave function to a new state representing the change in available knowledge which occurs once a particular phenomenon is registered by the apparatus.[53] According to Howard and Faye, the writings of Bohr do not mention wave function collapse.[16][7]

Because they assert that the existence of an observed value depends upon the intercession of the observer, Copenhagen-type interpretations are sometimes called "subjective". This term is rejected by many Copenhagenists because the process of observation is mechanical and does not depend on the individuality of the observer.[54] Wolfgang Pauli, for example, insisted that measurement results could be obtained and recorded by "objective registering apparatus".[9]: 117–123 

In the 1970s and 1980s, the theory of decoherence helped to explain the appearance of quasi-classical realities emerging from quantum theory,[55][56][57] but it was insufficient to provide a technical explanation for the apparent wave function collapse.[58]

Completion by hidden variables?

Main article: Hidden-variable theory

In metaphysical terms, the Copenhagen interpretation views quantum mechanics as providing knowledge of phenomena, but not as pointing to 'really existing objects', which it regards as residues of ordinary intuition. This makes it an epistemic theory. This may be contrasted with Einstein's view that physics should look for 'really existing objects', making itself an ontic theory.[59] The metaphysical question is sometimes asked: "Could quantum mechanics be extended by adding so-called 'hidden variables' to the mathematical formalism, to convert it from an epistemic to an ontic theory?" The Copenhagen interpretation answers this with a strong 'No'.[60] It is sometimes alleged, for example by J.S. Bell, that Einstein opposed the Copenhagen interpretation because he believed that the answer to that question of "hidden variables" was "yes".
By contrast, Max Jammer writes "Einstein never proposed a hidden variable theory."[61] Einstein explored the possibility of a hidden variable theory, and wrote a paper describing his exploration, but withdrew it from publication because he felt it was faulty.[62][63]

Acceptance among physicists

During the 1930s and 1940s, views about quantum mechanics attributed to Bohr and emphasizing complementarity became commonplace among physicists. Textbooks of the time generally maintained the principle that the numerical value of a physical quantity is not meaningful or does not exist until it is measured.[64]: 248  Prominent physicists associated with Copenhagen-type interpretations have included Lev Landau,[64][65] Wolfgang Pauli,[65] Rudolf Peierls,[66] Asher Peres,[67] Léon Rosenfeld,[8] and Ray Streater.[68]

Throughout much of the 20th century, the Copenhagen tradition had overwhelming acceptance among physicists.[64][69] According to a very informal poll (some people voted for multiple interpretations) conducted at a quantum mechanics conference in 1997,[70] the Copenhagen interpretation remained the most widely accepted label that physicists applied to their own views. A similar result was found in a poll conducted in 2011.[71]

The nature of the Copenhagen interpretation is exposed by considering a number of experiments and paradoxes.

1. Schrödinger's cat

The Copenhagen interpretation: The wave function reflects our knowledge of the system. The wave function (|dead⟩ + |alive⟩)/√2 means that, once the cat is observed, there is a 50% chance it will be dead, and a 50% chance it will be alive.[67]

2. Wigner's friend

Wigner puts his friend in with the cat. The external observer believes the system is in the state (|dead⟩ + |alive⟩)/√2. However, his friend is convinced that the cat is alive, i.e. for him, the cat is in the state |alive⟩. How can Wigner and his friend see different wave functions?

The Copenhagen interpretation: The answer depends on the positioning of the Heisenberg cut, which can be placed arbitrarily (at least according to Heisenberg, though not to Bohr[8]). If Wigner's friend is positioned on the same side of the cut as the external observer, his measurements collapse the wave function for both observers. If he is positioned on the cat's side, his interaction with the cat is not considered a measurement.

3. Double-slit diffraction

The Copenhagen interpretation: Light is neither simply a particle nor simply a wave. A particular experiment can demonstrate particle (photon) or wave properties, but not both at the same time (Bohr's complementarity principle). The same experiment can in theory be performed with any physical system: electrons, protons, atoms, molecules, viruses, bacteria, cats, humans, elephants, planets, etc. In practice it has been performed for light, electrons, buckminsterfullerene,[73][74] and some atoms. Due to the smallness of Planck's constant it is practically impossible to realize experiments that directly reveal the wave nature of any system bigger than a few atoms; but in general, quantum mechanics considers all matter as possessing both particle and wave behaviors. Larger systems (like viruses, bacteria, cats, etc.) are considered "classical" ones, but only as an approximation, not exactly.

4. Einstein–Podolsky–Rosen paradox

Entangled "particles" are emitted in a single event. Conservation laws ensure that the measured spin of one particle must be the opposite of the measured spin of the other, so that if the spin of one particle is measured, the spin of the other particle is now instantaneously known.
Because this outcome cannot be separated from quantum randomness, no information can be sent in this manner, and there is no violation of either special relativity or the Copenhagen interpretation.

The Copenhagen interpretation: Assuming wave functions are not real, wave-function collapse is interpreted subjectively. The moment one observer measures the spin of one particle, they know the spin of the other. However, another observer cannot benefit until the results of that measurement have been relayed to them, at less than or equal to the speed of light.

Incompleteness and indeterminism

Niels Bohr and Albert Einstein, pictured here at Paul Ehrenfest's home in Leiden (December 1925), had a long-running collegial dispute about what quantum mechanics implied for the nature of reality.

Einstein was an early and persistent critic of the Copenhagen school. Bohr and Heisenberg advanced the position that no physical property could be understood without an act of measurement, while Einstein refused to accept this. Abraham Pais recalled a walk with Einstein when the two discussed quantum mechanics: "Einstein suddenly stopped, turned to me and asked whether I really believed that the moon exists only when I look at it."[75] While Einstein did not doubt that quantum mechanics was a correct physical theory in that it gave correct predictions, he maintained that it could not be a complete theory. The most famous product of his efforts to argue the incompleteness of quantum theory is the Einstein–Podolsky–Rosen thought experiment, which was intended to show that physical properties like position and momentum have values even if not measured.[note 6] The argument of EPR was not generally persuasive to other physicists.[64]: 189–251 

Carl Friedrich von Weizsäcker, while participating in a colloquium at Cambridge, denied that the Copenhagen interpretation asserted "What cannot be observed does not exist". Instead, he suggested that the Copenhagen interpretation follows the principle "What is observed certainly exists; about what is not observed we are still free to make suitable assumptions. We use that freedom to avoid paradoxes."[31]

Einstein was likewise dissatisfied with the indeterminism of quantum theory. Regarding the possibility of randomness in nature, Einstein said that he was "convinced that He [God] does not throw dice."[80] Bohr, in response, reputedly said that "it cannot be for us to tell God, how he is to run the world".[note 7]

The "shifty split"

Much criticism of Copenhagen-type interpretations has focused on the need for a classical domain where observers or measuring devices can reside, and the imprecision of how the boundary between quantum and classical might be defined. John Bell called this the "shifty split".[10] As typically portrayed, Copenhagen-type interpretations involve two different kinds of time evolution for wave functions: the deterministic flow according to the Schrödinger equation, and the probabilistic jump during measurement, without a clear criterion for when each kind applies. Why should these two different processes exist, when physicists and laboratory equipment are made of the same matter as the rest of the universe?[81] And if there is somehow a split, where should it be placed? Steven Weinberg writes that the traditional presentation gives "no way to locate the boundary between the realms in which [...]
quantum mechanics does or does not apply."[82] The problem of thinking in terms of classical measurements of a quantum system becomes particularly acute in the field of quantum cosmology, where the quantum system is the universe.[83][84] How does an observer stand outside the universe in order to measure it, and who was there to observe the universe in its earliest stages?

Advocates of Copenhagen-type interpretations have disputed the seriousness of these objections. Rudolf Peierls noted that "the observer does not have to be contemporaneous with the event"; for example, we study the early universe through the cosmic microwave background, and we can apply quantum mechanics to that just as well as to any electromagnetic field.[66] Likewise, Asher Peres argued that physicists are, conceptually, outside those degrees of freedom that cosmology studies, and applying quantum mechanics to the radius of the universe while neglecting the physicists in it is no different from quantizing the electric current in a superconductor while neglecting the atomic-level details.[85] You may object that there is only one universe, but likewise there is only one SQUID in my laboratory.[85]

E. T. Jaynes,[86] an advocate of Bayesian probability, argued that probability is a measure of a state of information about the physical world, and so regarding it as a physical phenomenon would be an example of a mind projection fallacy. Jaynes described the mathematical formalism of quantum physics as "a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature—all scrambled up together by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble".[87]

Further information: Interpretations of quantum mechanics

The ensemble interpretation is similar; it offers an interpretation of the wave function, but not for single particles. The consistent histories interpretation advertises itself as "Copenhagen done right".[88] More recently, interpretations inspired by quantum information theory like QBism[89] and relational quantum mechanics[90] have attracted support.[91][92]

Under realism and determinism, if the wave function is regarded as ontologically real, and collapse is entirely rejected, a many-worlds theory results. If wave function collapse is regarded as ontologically real as well, an objective collapse theory is obtained. Bohmian mechanics shows that it is possible to reformulate quantum mechanics to make it deterministic, at the price of making it explicitly nonlocal. It attributes not only a wave function to a physical system, but in addition a real position, that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation; there is never a collapse of the wave function.[93] The transactional interpretation is also explicitly nonlocal.[94]

Some physicists espoused views in the "Copenhagen spirit" and then went on to advocate other interpretations. For example, David Bohm and Alfred Landé both wrote textbooks that put forth ideas in the Bohr–Heisenberg tradition, and later promoted nonlocal hidden variables and an ensemble interpretation respectively.[64]: 453  John Archibald Wheeler began his career as an "apostle of Niels Bohr";[95] he then supervised the PhD thesis of Hugh Everett that proposed the many-worlds interpretation.
After supporting Everett's work for several years, he began to distance himself from the many-worlds interpretation in the 1970s.[96][97] Late in life, he wrote that while the Copenhagen interpretation might fairly be called "the fog from the north", it "remains the best interpretation of the quantum that we have".[98]

Other physicists, while influenced by the Copenhagen tradition, have expressed frustration at how it took the mathematical formalism of quantum theory as given, rather than trying to understand how it might arise from something more fundamental. This dissatisfaction has motivated new interpretative variants as well as technical work in quantum foundations.[69][99] Physicists who have suggested that the Copenhagen tradition needs to be built upon or extended include Rudolf Haag and Anton Zeilinger.[84][100]

Notes

1. ^ As Heisenberg wrote in Physics and Philosophy (1958): "I remember discussions with Bohr which went through many hours till very late at night and ended almost in despair; and when at the end of the discussion I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?"

2. ^ "There seems to be at least as many different Copenhagen interpretations as people who use that term, probably there are more. For example, in two classic articles on the foundations of quantum mechanics, Ballentine (1970) and Stapp (1972) give diametrically opposite definitions of 'Copenhagen.'"[39]

3. ^ "Every description of phenomena, of experiments and their results, rests upon language as the only means of communication. The words of this language represent the concepts of ordinary life, which in the scientific language of physics may be refined to the concepts of classical physics. These concepts are the only tools for an unambiguous communication about events, about the setting up of experiments and about their results."[43]: 127 

4. ^ "It is well known that the 'reduction of the wave packets' always appears in the Copenhagen interpretation when the transition is completed from the possible to the actual. The probability function, which covered a wide range of possibilities, is suddenly reduced to a much narrower range by the fact that the experiment has led to a definite result, that actually a certain event has happened. In the formalism this reduction requires that the so-called interference of probabilities, which is the most characteristic phenomena [sic] of quantum theory, is destroyed by the partly undefinable and irreversible interactions of the system with the measuring apparatus and the rest of the world."[43]: 125 

5. ^ While Born himself described his contribution as the "statistical interpretation" of the wave function,[49][50] the term "statistical interpretation" has also been used as a synonym for the ensemble interpretation.[51][52]

6. ^ The published form of the EPR argument was due to Podolsky, and Einstein himself was not satisfied with it. In his own publications and correspondence, Einstein used a different argument to insist that quantum mechanics is an incomplete theory.[76][77][78][79]

7. ^ Bohr recollected his reply to Einstein at the 1927 Solvay Congress in his essay "Discussion with Einstein on Epistemological Problems in Atomic Physics", in Albert Einstein, Philosopher–Scientist, ed. Paul Arthur Schilpp, Harper, 1949, p. 211: "[I]n spite of all divergencies of approach and opinion, a most humorous spirit animated the discussions.
On his side, Einstein mockingly asked us whether we could really believe that the providential authorities took recourse to dice-playing ("ob der liebe Gott würfelt"), to which I replied by pointing at the great caution, already called for by ancient thinkers, in ascribing attributes to Providence in everyday language." Werner Heisenberg, who also attended the congress, recalled the exchange in Encounters with Einstein, Princeton University Press, 1983, p. 117: "But he [Einstein] still stood by his watchword, which he clothed in the words: 'God does not play at dice.' To which Bohr could only answer: 'But still, it cannot be for us to tell God, how he is to run the world.'" 1. ^ Przibram, K., ed. (2015) [1967]. Letters on Wave Mechanics: Correspondence with H. A. Lorentz, Max Planck, and Erwin Schrödinger. Translated by Klein, Martin J. Philosophical Library/Open Road. ISBN 9781453204689. the Copenhagen Interpretation of quantum mechanics, [was] developed principally by Heisenberg and Bohr, and based on Born's statistical interpretation of the wave function. 2. ^ Buckley, Paul; Peat, F. David; Bohm; Dirac; Heisenberg; Pattee; Penrose; Prigogine; Rosen; Rosenfeld; Somorjai; Weizsäcker; Wheeler (1979). "Leon Rosenfeld". In Buckley, Paul; Peat, F. David (eds.). A Question of Physics: Conversations in Physics and Biology. University of Toronto Press. pp. 17–33. ISBN 9781442651661. JSTOR 10.3138/j.ctt15jjc3t.5. The Copenhagen interpretation of quantum theory, ... grew out of discussions between Niels Bohr and Werner Heisenberg... 3. ^ Gbur, Gregory J. (2019). Falling Felines and Fundamental Physics. Yale University Press. pp. 264–290. doi:10.2307/j.ctvqc6g7s.17. S2CID 243353224. Heisenberg worked under Bohr at an institute in Copenhagen. Together they compiled all existing knowledge of quantum physics into a coherent system that is known today as the Copenhagen interpretation of quantum mechanics. 4. ^ Siddiqui, Shabnam; Singh, Chandralekha (2017). "How diverse are physics instructors' attitudes and approaches to teaching undergraduate level quantum mechanics?". European Journal of Physics. 38 (3): 035703. Bibcode:2017EJPh...38c5703S. doi:10.1088/1361-6404/aa6131. 5. ^ Stapp, Henry Pierce (1997). "The Copenhagen Interpretation". The Journal of Mind and Behavior. Institute of Mind and Behavior, Inc. 18 (2/3): 127–54. JSTOR 43853817. led by Bohr and Heisenberg ... was nominally accepted by almost all textbooks and practical workers in the field. 6. ^ Bell, John S. (1987), Speakable and Unspeakable in quantum Mechanics (Cambridge: Cambridge University Press) 7. ^ a b c d e Faye, Jan (2019). "Copenhagen Interpretation of Quantum Mechanics". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University. 8. ^ a b c d e f Camilleri, K.; Schlosshauer, M. (2015). "Niels Bohr as Philosopher of Experiment: Does Decoherence Theory Challenge Bohr's Doctrine of Classical Concepts?". Studies in History and Philosophy of Modern Physics. 49: 73–83. arXiv:1502.06547. Bibcode:2015SHPMP..49...73C. doi:10.1016/j.shpsb.2015.01.005. S2CID 27697360. 9. ^ a b Pauli, Wolfgang (1994) [1958]. "Albert Einstein and the development of physics". In Enz, C. P.; von Meyenn, K. (eds.). Writings on Physics and Philosophy. Berlin: Springer-Verlag. 10. ^ a b c Bell, John (1990). "Against 'measurement'". Physics World. 3 (8): 33–41. doi:10.1088/2058-7058/3/8/26. ISSN 2058-7058. 11. ^ Omnès, Roland (1999). "The Copenhagen Interpretation". Understanding Quantum Mechanics. 
Soliton

An isolated wave that propagates without dispersing its energy over larger and larger regions of space. In most of the scientific literature, the requirement that two solitons emerge unchanged from a collision is also added to the definition; otherwise the disturbance is termed a solitary wave.

There are many equations of mathematical physics which have solutions of the soliton type. Correspondingly, the phenomena which they describe, be it the motion of waves in shallow water or in an ionized plasma, exhibit solitons. The first observation of this kind of wave was made in 1834 by John Scott Russell, who followed on horseback a soliton propagating in the windings of a channel. In 1895, D. J. Korteweg and H. de Vries proposed an equation for the motion of waves in shallow water which possesses soliton solutions, and thus established a mathematical basis for the study of the phenomenon. Interest in the subject, however, lay dormant for many years, and the major body of investigations began only in the 1950s. Research by analytical methods, and by numerical methods made possible with the advent of computers, gradually led to a complete understanding of solitons.

Eventually, the fact that solitons exhibit particlelike properties, because the energy is at any instant confined to a limited region of space, received attention, and solitons were proposed as models for elementary particles. However, it is difficult to account for all of the properties of known particles in terms of solitons. More recently it has been realized that some of the quantum fields which are used to describe particles and their interactions also have solutions of the soliton type. The solitons would then appear as additional particles, and may have escaped experimental detection because their masses are much larger than those of known particles. In this context the requirement that solitons emerge unchanged from a collision has been found too restrictive, and particle theorists have used the term soliton where traditionally the term solitary wave would be used. See Elementary particle, Quantum field theory.

A hydrodynamic soliton is simply described by the equation of Korteweg and de Vries, which includes a dispersive term and a term to represent nonlinear effects. Easily observed in a wave tank, a bell-shaped solution of this equation balances the effects of dispersion and nonlinearity, and it is this balance that is the essential feature of the soliton phenomenon. Tidal waves in the Firth of Forth were found by Scott Russell to be solitons, as are internal ocean waves and tsunamis. At an even greater level of energy, it has been suggested that the Great Red Spot of the planet Jupiter is a hydrodynamic soliton.

The most significant technical application of the soliton is as a carrier of digital information along an optical fiber. The optical soliton is governed by the nonlinear Schrödinger equation, and again expresses a balance between the effects of optical dispersion and a nonlinearity due to the electric-field dependence of the refractive index in the fiber core. If the power is too low, nonlinear effects become negligible, and the information spreads (or disperses) over an ever increasing length of the fiber. At a pulse power level of about 5 milliwatts, however, a robustly stable soliton appears and maintains its size and shape in the presence of disturbing influences.
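This balance is easy to see numerically. The following sketch (Python with NumPy; the normalization u_t + 6uu_x + u_xxx = 0 and the sech² profile are the standard textbook choices, not anything taken from this entry) evaluates the one-soliton solution of the Korteweg-de Vries equation and checks by finite differences that the dispersive term cancels against the nonlinear one:

```python
import numpy as np

# One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0:
# u(x, t) = (c/2) * sech^2( (sqrt(c)/2) * (x - c*t) ), propagation speed c.
c = 1.5
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

def u(x, t):
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t)) ** 2

# Finite-difference residual of the PDE at t = 0.
dt = 1e-5
u0 = u(x, 0.0)
u_t = (u(x, dt) - u(x, -dt)) / (2 * dt)        # central time derivative
u_x = np.gradient(u0, dx)
u_xxx = np.gradient(np.gradient(u_x, dx), dx)

residual = u_t + 6 * u0 * u_x + u_xxx
print(np.max(np.abs(u_xxx)), np.max(np.abs(residual)))
# The residual sits at the finite-difference error level, orders of magnitude
# below the individual terms: dispersion and nonlinearity cancel against u_t.
```

Refining the grid shrinks the residual further, which is the numerical signature of the exact dispersion-nonlinearity balance.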
Present designs for data transmission systems based on the optical soliton have a data rate of 4 × 10⁹ bits per second.

A carefully studied soliton system is the transverse electromagnetic (TEM) wave that travels between two strips of superconducting metal separated by an insulating layer thin enough (about 2.5 nanometers) to permit transverse Josephson tunneling. Since each soliton carries one quantum of magnetic flux, it is also called a fluxon if the magnetic flux points in one direction, and an antifluxon if the flux points in the opposite direction. Oscillators based on this system reach into the submillimeter-wave region of the electromagnetic spectrum (frequencies greater than 10¹¹ Hz). See Josephson effect.

The all-or-nothing action potential, or nerve impulse, that carries a bit of biological information along the axon of a nerve cell shares many properties with the soliton. Both are solutions of nonlinear equations that travel with fixed shape at constant speed, but the soliton conserves energy, while the nerve impulse balances the rate at which electrostatic energy is released from the nerve membrane against the rate at which it is consumed by the dissipative effects of circulating ionic currents. The nerve process is much like the flame of a candle.

More briefly, a soliton may be defined as a solution of a nonlinear differential equation that propagates with a characteristic constant shape, or, in fiber optics, as a laser pulse that retains its shape in a fiber over long distances. By generating the pulse at a certain frequency and at a certain power level, the pulse takes advantage of competing dispersion effects: as it travels, the pulse is lengthened and then shortened back to its original size.
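The fluxon mentioned above is the kink solution of the sine-Gordon equation, and it admits the same kind of numerical check. Below is a sketch in dimensionless units (the equation and kink profile are the standard textbook forms, not parameters of any particular junction):

```python
import numpy as np

# Sine-Gordon equation (dimensionless): phi_tt - phi_xx + sin(phi) = 0.
# Moving kink (one fluxon): phi = 4 * arctan( exp( (x - v*t) / sqrt(1 - v^2) ) ).
v = 0.6                                  # kink velocity, |v| < 1
x = np.linspace(-15.0, 15.0, 3001)
dx = x[1] - x[0]

def phi(x, t):
    return 4.0 * np.arctan(np.exp((x - v * t) / np.sqrt(1.0 - v ** 2)))

dt = 1e-4
phi_tt = (phi(x, dt) - 2.0 * phi(x, 0.0) + phi(x, -dt)) / dt ** 2
phi_xx = np.gradient(np.gradient(phi(x, 0.0), dx), dx)

residual = phi_tt - phi_xx + np.sin(phi(x, 0.0))
print(np.max(np.abs(residual)))  # small: the kink moves without changing shape
# The total phase winding phi(+inf) - phi(-inf) = 2*pi corresponds to the
# single quantum of magnetic flux carried by the fluxon.
```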
ANTIMATTER: who ordered that?

The existence of antimatter became known following Dirac's formulation of relativistic quantum mechanics, but this incredible development was not anticipated. These days conjuring up a new particle or field (or perhaps even new dimensions) to explain unknown observations is pretty much standard operating procedure, but it was not always so. The famous "who ordered that" statement of I. I. Rabi was made in reference to the discovery of the muon, a heavy electron whose existence seemed a bit unnecessary at the time; in fact it was the harbinger of a subatomic zoo.

The story of Dirac's relativistic reformulation of the Schrödinger wave equation, and the subsequent prediction of antiparticles, is particularly appealing; the story is nicely explained in a recent biography of Dirac (Farmelo 2009). As with Einstein's theory of relativity, Dirac's relativistic quantum mechanics seemed to spring into existence without any experimental imperative. That is to say, nobody ordered it! The reality, of course, is a good deal more complicated and nuanced, but it would not be inaccurate to suggest that Dirac was driven more by mathematical aesthetics than experimental anomalies when he developed his theory.

The motivation for any modification of the Schrödinger equation is that it does not describe the energy of a free particle in a way that is consistent with the special theory of relativity. At first sight it might seem like a trivial matter to simply re-write the equation to include the energy in the necessary form, but things are not so simple. In order to illustrate why this is so it is instructive to briefly consider the Dirac equation, and how it was developed. For explicit mathematical details of the formulation and solution of the Dirac equation see, for example, Griffiths 2008.

The basic form of the Schrödinger wave equation (SWE) is

\[\left(-\frac{\hbar^2}{2m}\nabla^2+V\right)\psi = i\hbar \frac{\partial}{\partial t}\psi. \qquad (1)\]

The fundamental departure from classical physics embodied in eq (1) is the quantity \psi, which represents not a particle but a wavefunction. That is, the SWE describes how this wavefunction (whatever it may be) will behave. This is not the same thing at all as describing, for example, the trajectory of a particle. Exactly what a wavefunction is remains to this day rather mysterious. For many years it was thought that the wavefunction was simply a handy mathematical tool that could be used to describe atoms and molecules even in the absence of a fully complete theory (e.g., Bohm 1952). This idea, originally suggested by de Broglie in his "pilot wave" description, has been disproved by numerous ingenious experiments (e.g., Aspect et al. 1982). It now seems unavoidable to conclude that wavefunctions represent actual descriptions of reality, and that the "weirdness" of the quantum world is in fact an intrinsic part of that reality, with the concept of "particle" being only an approximation to that reality, only appropriate to a coarse-grained view of the world.

Nevertheless, by following the rules that have been developed regarding the application of the SWE, and quantum physics in general, it is possible to describe experimental observations with great accuracy. This is the primary reason why many physicists have, for over 80 years, eschewed the philosophical difficulties associated with wavefunctions and the like, and embraced the sheer predictive power of the theory.
We will not discuss quantum mechanics in any detail here; there are many excellent books on the subject at all levels (e.g., Dirac 1934, Shankar 1994, Schiff 1968). In classical terms the total energy E of a particle can be described simply as the sum of the kinetic energy (KE) and the potential energy (PE) as

\[KE+PE=\frac{p^2}{2m}+V=E \qquad (2)\]

where p = mv represents the momentum of a particle of mass m and velocity v. In quantum theory such quantities are described not by simple formulae, but rather by operators that act on the wavefunction. We describe momentum via the operator -i\hbar\nabla and energy by i\hbar\,\partial/\partial t, and so on. The operator acting on \psi on the left-hand side of eq (1) represents the total energy of the system, and is also known as the Hamiltonian, H. Thus, the SWE may be written as

\[H\psi=i\hbar\frac{\partial\psi}{\partial t}=E\psi \qquad (3)\]

The reason why eq (3) is non-relativistic is that the energy-momentum relation in the Hamiltonian takes the well-known non-relativistic form. As we know from Einstein, however, the total energy of a free particle does not reside only in its kinetic energy; there is also the rest mass energy, embodied in what may be the most famous equation in all of physics:

\[E=mc^2. \qquad (4)\]

This equation tells us that a particle of mass m has an equivalent energy E, with c^2 being a rather large number, illustrating that even a small amount of mass can, in principle, be converted into a very large amount of energy. Despite being so famous as to qualify as a cultural icon, the equation E = mc^2 is, at best, incomplete. In fact the total energy of a free particle (i.e., V = 0) as prescribed by the theory of relativity is given by

\[E^2=m^2c^4 +p^2c^2. \qquad (5)\]

Clearly this will reduce to E = mc^2 for a particle at rest (i.e., p = 0): or will it? Actually, we shall have E = \pm mc^2, and in some sense one might say that the negative solutions to this energy equation represent antimatter, although, as we shall see, the situation is not so clear cut.

In order to make the SWE relativistic, then, one need only replace the classical kinetic energy E = p^2/2m with the relativistic energy E = [m^2c^4+p^2c^2]^{1/2}. This sounds simple enough, but the square root sign leads to quite a lot of trouble! This is largely because when we make the "quantum substitution" p \rightarrow -i\hbar\nabla we find we have to deal with the square root of an operator, which, as it turns out, requires some mathematical sophistication. Moreover, in quantum physics we must deal with operators that act upon complex wavefunctions, so that negative square roots may in fact correspond to a physically meaningful aspect of the system, and cannot simply be discarded as might be the case in a classical system.

To avoid these problems we can instead start with eq (5) interpreted via the operators for momentum and energy, so that eq (3) becomes

\[\left(- \frac{1}{c^2}\frac{\partial^2}{\partial t^2} + \nabla^2\right)\psi=\frac{m^2 c^2}{\hbar^2}\psi. \qquad (6)\]

This equation is known as the Klein-Gordon equation (KGE), although it was first obtained by Schrödinger in his original development of the SWE. He abandoned it, however, when he found that it did not properly describe the energy levels of the hydrogen atom.
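It is worth making the two energy branches of the KGE explicit before going on. The sketch below (Python with SymPy; a one-dimensional free-particle plane wave is assumed purely for illustration) substitutes \psi = \exp[i(px - Et)/\hbar] into eq (6) and solves the resulting condition for E:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, m, c, hbar = sp.symbols('p m c hbar', positive=True)
E = sp.Symbol('E', real=True)   # deliberately unconstrained in sign

# Free-particle plane wave
psi = sp.exp(sp.I * (p * x - E * t) / hbar)

# Klein-Gordon operator of eq (6) applied to psi, minus the right-hand side
residual = (-sp.diff(psi, t, 2) / c**2 + sp.diff(psi, x, 2)
            - m**2 * c**2 / hbar**2 * psi)

# Dividing out psi leaves an algebraic condition on E
condition = sp.simplify(residual / psi)
print(sp.solve(sp.Eq(condition, 0), E))
# -> [-c*sqrt(c**2*m**2 + p**2), c*sqrt(c**2*m**2 + p**2)]  (up to factoring)
#    i.e. E = +/- sqrt(m^2 c^4 + p^2 c^2): eq (5) with BOTH signs of the root
```

Both signs of the square root in eq (5) come out of the algebra, and neither can be discarded.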
It subsequently became clear that when applied to electrons this equation also implied two things that were considered to be unacceptable: negative energy solutions and, even worse, negative probabilities. We now know that the KGE is not appropriate for electrons, but it does describe some massive spin-zero particles when interpreted in the framework of quantum field theory (QFT); neither mesons nor QFT were known when the KGE was formulated.

Some of the problems with the KGE arise from the second-order time derivative, which is itself a direct result of squaring everything to avoid the intractable mathematical form of the square root of an operator. The fundamental connection between time and space at the heart of relativity leads to a similar connection between energy and momentum, a connection that is overlooked in the KGE. Dirac was thus motivated by the principles of relativity to keep a first-order time derivative, which meant that he had to confront the difficulties associated with using the relativistic energy head on. We will not discuss the details of its derivation but will simply consider the form of the resulting Dirac equation:

\[(c\,\boldsymbol{\alpha} \cdot \mathbf{p}+\beta mc^2)\psi=i\hbar \frac{\partial\psi}{\partial t}. \qquad (7)\]

This equation has the general form of the SWE, but with some significant differences. Perhaps the most important of these is that the Hamiltonian now includes both the kinetic energy and the electron rest mass, and the coefficients \alpha_i and \beta have to be 4 × 4 matrices to satisfy the equation. That is, the Dirac equation is really a matrix equation, and the wavefunction it describes must be a four-component wavefunction.

Although there are no problems with negative probabilities, the negative energy solutions seen in the KGE remain. These initially seemed to be a fatal flaw in Dirac's work, but were overlooked because in every other respect the equation was spectacularly successful. It reproduced the hydrogen atomic spectrum perfectly (at least, as perfectly as it was known at the time) and even included small relativistic effects, as a proper relativistic wave equation should. For example, when the electromagnetic interaction is included the Dirac equation predicts an electron magnetic moment

\[\mu_e = \frac{\hbar e}{2m} = \mu_B \qquad (8)\]

where \mu_B is known as the Bohr magneton. This expression is also in agreement with experiment, almost: it was later discovered that the magnetic moment of the electron differs from the value predicted by eq (8) by about 0.1% (Kusch and Foley 1948). The fact that Dirac's theory was able to predict these quantities was considered to be a triumph, despite the troublesome negative energy solutions.

Another intriguing aspect of the Dirac equation was noticed by Schrödinger in 1930. He realised that interference between positive and negative energy terms would lead to oscillations of the wavepacket of an electron (or positron) about some central point at the speed of light. This fast motion was given the name zitterbewegung (German for "trembling motion"). The underlying physical mechanism that gives rise to the zitterbewegung effect may be interpreted in several different ways, but one way to look at it is as an interaction of the electron with the zero-point energy of the (quantised) electromagnetic field.
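A quick machine check makes the matrix statement above concrete. The sketch below (Python with NumPy, written in the standard Dirac-Pauli representation, which is an illustrative choice; the post itself does not fix a representation) verifies the algebra the \alpha_i and \beta must satisfy so that squaring eq (7) reproduces eq (5): each matrix squares to the identity, and any two distinct ones anticommute.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# Dirac-Pauli representation: alpha_i has sigma_i off-diagonal, beta = diag(I, -I)
alphas = [np.block([[Z2, s], [s, Z2]]) for s in (sx, sy, sz)]
beta = np.block([[I2, Z2], [Z2, -I2]])

def anticommutator(A, B):
    return A @ B + B @ A

I4 = np.eye(4)
assert np.allclose(beta @ beta, I4)                  # beta^2 = 1
for i, a in enumerate(alphas):
    assert np.allclose(a @ a, I4)                    # alpha_i^2 = 1
    assert np.allclose(anticommutator(a, beta), 0)   # {alpha_i, beta} = 0
    for b in alphas[i + 1:]:
        assert np.allclose(anticommutator(a, b), 0)  # {alpha_i, alpha_j} = 0
print("H^2 = c^2 p^2 + m^2 c^4 follows from these relations")
```

No set of 2 × 2 matrices can satisfy all four anticommutation relations at once (the three Pauli matrices are already the maximal mutually anticommuting set), which is why four components, and with them the extra negative-energy degrees of freedom, are unavoidable.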
These zitterbewegung oscillations have not been directly observed, as they occur at a very high frequency (~10^{21} Hz), but since zitterbewegung also applies to electrons bound to atoms, this motion can affect atomic energy levels in an observable way. In a hydrogen atom the zitterbewegung acts to "smear out" the electron charge over a larger area, lowering the strength of its interaction with the proton charge. Since S states have a non-zero wavefunction at the origin, the effect is larger for these than it is for P states. The splitting between the hydrogen 2S_{1/2} and 2P_{1/2} states, which are degenerate in the Dirac theory, is known as the Lamb shift (Lamb 1947). This shift, which amounts to ~1 GHz, was observed in an experiment by Willis Lamb and his student Robert Retherford (not to be confused with Ernest Rutherford!). The need to explain this shift, which requires a proper description of the electron interacting with the electromagnetic field, gave birth to the theory of quantum electrodynamics, pioneered by Bethe, Tomonaga, Schwinger and Feynman.

The solutions to the SWE for free particles (i.e., neglecting the potential V) are of the form

\[\psi = A\, \mathrm{exp}(-iEt / \hbar). \qquad (9)\]

Here A is some function that depends only on the spatial properties of the wavefunction (i.e., not on t). Note that this wavefunction represents two electron states, corresponding to the two separate spin states. The corresponding solutions to the Dirac equation may be represented as

\[\psi_1 = A_1 \mathrm{exp}(-iEt / \hbar), \qquad \psi_2 = A_2 \mathrm{exp}(+iEt / \hbar). \qquad (10)\]

Here \psi_2 represents the negative energy solutions that have caused so much trouble. The existence of these states is central to the theory: they cannot simply be labelled as "unphysical" and discarded. The complete set of solutions is required in quantum mechanics, in which everything is somewhat "unphysical". More properly, since the wavefunction is essentially a complex probability amplitude that yields a real result when its absolute value is squared, the negative energy solutions are no less physical than the positive energy solutions; it is in fact simply a matter of convention as to which states are positive and which are negative. However you set things up, you will always have some "wrong" energy states that you can't get rid of.

Thus, Dirac was able to eliminate the negative probabilities and produce a wave equation that was consistent with special relativity, but the negative energy states turned out to be a fundamental part of the theory and could not be eliminated, despite many attempts to get rid of them. After his first paper in 1928 ("The quantum theory of the electron") Dirac had established that his equation was a viable relativistic wave equation, but the negative energy aspects remained controversial. He worried about this for some time, and tried to develop a "hole" theory to explain their seemingly undeniable existence.

A serious problem with the negative energy solutions is that one would expect all electrons to decay into the lowest energy state available, which would be a negative energy state. Since this would not be consistent with observations there must, so Dirac reasoned, be some mechanism to prevent it.
He suggested that the states were already filled with an infinite "sea" of electrons, so that the Pauli exclusion principle would prevent such decay, just as it prevents more than two electrons from occupying the lowest energy level in an atom. (Note that this scheme does not work for bosons, which do not obey the exclusion principle.) Such an infinite electron sea would have no observable properties, as long as the underlying vacuum has a positive "bare" charge to cancel out the negative electron charge. Since only changes in the energy density of this sea would be apparent, we would not normally notice its presence. Moreover, Dirac suggested that if a particle were missing from the sea the resulting hole would be indistinguishable from a positively charged particle, which he speculated was a proton, protons being the only positively charged subatomic particles known at the time. This idea was presented in a paper in 1930 ("A Theory of Electrons and Protons", Dirac 1930).

The theory was less than successful, however, and its deficiencies served only to undermine confidence in the entire Dirac theory. Attempts to identify holes as protons only made matters worse; it was shown independently by Heisenberg, Oppenheimer and Pauli that the holes must have the electron mass, but of course protons are almost 2000 times heavier. Moreover, the instability of electrons against falling into holes completely ruled out stable atomic states made from these entities (bad news for hydrogen, and all other atoms). Eventually Dirac was forced to conclude that the negative energy solutions must correspond to real particles with the same mass as the electron and a positive charge. He called these anti-electrons ("Quantised Singularities in the Electromagnetic Field", Dirac 1931). This almost reluctant conclusion was based not on a full understanding of what the negative energy states were, but rather on the fact that the entire theory, which was so beautiful in other ways that it was hard to resist, depended on them.

It turns out that to properly understand the negative energy solutions requires the formalism of quantum field theory (QFT). In this description particles (and antiparticles) can be created or destroyed, so it is no longer necessarily appropriate to consider these particles to be the fundamental elements of the theory. If the total number of particles in a system is not conserved then one might prefer to describe that system in terms of the entities that give rise to the particles rather than the particles themselves. These are the quantum fields, and the standard model of particle physics is at its heart a QFT. By describing particles as oscillations in a quantum field not only do we have an immediate mechanism by which they may be created or destroyed, but the problem of negative energies is also removed, as this simply becomes a different kind of variation in the underlying quantum field.

Dirac didn't explicitly know this at the time, although it would be fair to say that he essentially invented QFT when he produced a quantum theory that included quantised electromagnetic fields (Dirac 1927, "The Quantum Theory of the Emission and Absorption of Radiation"). This led, eventually, to what would become known as quantum electrodynamics. Dirac would undoubtedly have been able to make much more use of his creation had he not been so appalled by the notion of renormalization.
Unfortunately this procedure, which in some ways can be thought of as subtracting infinite quantities from each other to leave a finite quantity, was incompatible with his sense of mathematical aesthetics.

So, despite his initial struggles with the interpretation of his theory, there can be no question that Dirac did indeed explicitly predict the existence of the positron before it was experimentally observed. The observation came almost immediately, in cloud chamber experiments conducted by Carl Anderson in California (Anderson 1932). Curiously, however, Anderson was not aware of the prediction, and the proximity of the observation was apparently coincidental. We will discuss this remarkable observation in a later post.

*This post is adapted from an as-yet unpublished book chapter by D. B. Cassidy and A. P. Mills, Jr.

References

Griffiths, D. (2008). Introduction to Elementary Particles, 2nd edition. Wiley-VCH.

Farmelo, G. (2009). The Strangest Man: The Hidden Life of Paul Dirac, Mystic of the Atom. Basic Books, New York.

Dirac, P. A. M. (1927). The quantum theory of the emission and absorption of radiation. Proc. R. Soc. Lond. A 114, 243.

Dirac, P. A. M. (1928). The quantum theory of the electron. Proc. R. Soc. Lond. A 117, 610.

Dirac, P. A. M. (1930). A theory of electrons and protons. Proc. R. Soc. Lond. A 126, 360.

Dirac, P. A. M. (1931). Quantised singularities in the electromagnetic field. Proc. R. Soc. Lond. A 133, 60.

Anderson, C. D. (1932). The apparent existence of easily deflectable positives. Science 76, 238.

Aspect, A., Dalibard, J. and Roger, G. (1982). Experimental test of Bell's inequalities using time-varying analyzers. Phys. Rev. Lett. 49, 1804.

Kusch, P. and Foley, H. M. (1948). The magnetic moment of the electron. Phys. Rev. 74, 250.
semiclassical approximation

To some extent, quantum mechanics and quantum field theory are a deformation of classical mechanics and classical field theory, with the deformation parameterized by Planck's constant \hbar. The semiclassical approximation, or quasiclassical approximation, to quantization/quantum mechanics is the restriction of this deformation to just first order (or some finite order) in \hbar. The correspondence runs, order by order in \hbar:

- classical mechanics: \mathcal{O}(\hbar^0); classical states and classical observables
- semiclassical approximation: \mathcal{O}(\hbar^1); semiclassical states
- formal deformation quantization: \mathcal{O}(\hbar^n)
- quantum mechanics: \mathcal{O}(\hbar^\infty); quantum states and quantum observables

Applied to path integral quantization, the semiclassical approximation is meant to approximate the path integral

\[\int_{\phi \in \mathbf{Fields}} D\phi\; F(\phi)\, e^{iS(\phi)/\hbar}\]

by an expansion in \hbar about the critical points of the action functional S (hence the solutions of the Euler-Lagrange equations, hence the classical trajectories of the system). As usual for the path integral in physics, this often requires work to make precise, but at a heuristic level the idea is famous as the rotating phase approximation: in regions of field space where S varies fast as measured in units of Planck's constant, the complex phases of the integrand \exp(iS/\hbar) tend to cancel each other in the integral, so that substantial contributions to the integral come only from the vicinity of the critical points of S (the classical trajectories).

But semiclassical approximations can be applied to most other formulations of quantum physics, where they often lead to precise and powerful mathematical tools. Notably, in the Schrödinger picture of quantum evolution, solutions to the Schrödinger equation

\[i \hbar \frac{d}{d t} \psi = \hat H \psi\]

(which characterizes quantum states given by wave functions \psi for Hamiltonian dynamics induced by a Hamilton operator \hat H) are usefully considered to first (or any finite) order in \hbar. This method, known after (some of) its inventors as the WKB method, amounts to expressing the wave function in the form \psi = \exp(S), where S is a slowly varying function, and solving the equation for S. Globally consistent such solutions to first order lead to what are called Bohr-Sommerfeld quantization conditions. For the formalization of this method in symplectic geometry/geometric quantization see at semiclassical state.

The WKB method makes sense for a more general class of wave equations. For instance in wave optics it yields the short-wavelength limit, the geometrical optics approximation; here S is called the eikonal. Multidimensional generalizations of the WKB method appear to be rather nontrivial; they were pioneered by Victor Maslov, who introduced a topological invariant, called the Maslov index, to remove ambiguities of the naive version of the method.
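As a concrete illustration of the Bohr-Sommerfeld condition, the sketch below (Python with SciPy; the harmonic oscillator and the units \hbar = m = \omega = 1 are purely illustrative choices) computes the phase-space action \oint p\, dx numerically and solves \oint p\, dx = 2\pi\hbar(n + 1/2) for the energy levels:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

hbar = m = omega = 1.0   # illustrative units

def action(E):
    """Phase-space area  oint p dx  for H = p^2/(2m) + m omega^2 x^2 / 2."""
    a = np.sqrt(2.0 * E / (m * omega**2))   # classical turning point
    p = lambda x: np.sqrt(max(2.0 * m * (E - 0.5 * m * omega**2 * x**2), 0.0))
    return 2.0 * quad(p, -a, a)[0]          # out and back across the well

# Bohr-Sommerfeld condition:  oint p dx = 2 pi hbar (n + 1/2)
for n in range(5):
    target = 2.0 * np.pi * hbar * (n + 0.5)
    E_n = brentq(lambda E: action(E) - target, 1e-9, 50.0)
    print(n, E_n)   # 0.5, 1.5, 2.5, ... = hbar * omega * (n + 1/2)
```

For the harmonic oscillator the first-order condition happens to be exact; for a generic potential it becomes accurate in highly excited states, in keeping with the nature of the \hbar-expansion.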
Equivariant localization

In some special cases (most often in the presence of supersymmetry) the main contribution (the first term in the expansion) amounts to the true result; the quantum correction sometimes leads, however, to an overall scalar factor. This is the case of so-called localization (related directly in some cases to equivariant localization in cohomology and Lefschetz-type fixed point formulas). Most well-known examples of integrable systems and TQFTs lead to localization.

Large N limit in gauge theories

The large N limit of gauge theories, which is of importance in collective field theory and in the study of the relation between gauge theories and string theories, is formally very similar to a semiclassical expansion, where the role of the Planck constant is played by 1/N^2.

In radiation theory

In the theory of radiation there is a different meaning of semiclassical treatment: one considers particles in a surrounding electromagnetic field, and the particles are treated as in finite-dimensional quantum mechanics, with the electromagnetic field as an external classical field coupled to the particles via an interaction term.

Borel summability may make sense of the semiclassical expansion to all orders; this approach is sometimes called the exact WKB method. The relation to quantum integrable systems is developed in a series of works of Vũ Ngọc. The large N limit compared to the semiclassical expansion, and the semiclassical method in superstring theory, are treated in the further literature.
Is the nature of quantum chaos classical?

K. N. Yugay and S. D. Tvorogov
Omsk State University, General Physics Department, pr. Mira 55-A, 644077 Omsk, Russia
Institute of Atmosphere Optics of the Russian Academy of Sciences

Discussions about what quantum chaos is have not abated recently [1-16]. Some authors call into question the very existence of quantum chaos in nature [8]. The main reason for this doubt is that the quantum mechanical equations of motion for the wave function or the density matrix are linear, whereas dynamical chaos can arise only in nonlinear systems. In this sense dynamical chaos in quantum systems, i.e. quantum chaos, cannot exist. However, a number of experimental facts allow us to state with confidence that quantum chaos does exist. Evidently this contradiction is connected with the fact that our traditional description of nature is not quite adequate to it.

Reflecting on this problem, one cannot help but pay attention to the following. i) Two regions exist - the pure quantum one (QR) and the pure classical one (CR) - in which the descriptions differ essentially. The quantum and classical descriptions are not merely two different levels of description; there seems to be something more to it, and the problem of quantum chaos points to this. Since experimental manifestations of quantum chaos exist, one cannot ignore the question of the nature of quantum chaos and of its description. ii) Undoubtedly an intermediate quantum-classical region (QCR) exists between the QR and the CR, which must possess characteristics of both the QR and the CR. Since the term "quasiclassics" is traditionally connected with the corresponding approximate method in quantum mechanics, we shall call this region the quantum-classical one in what follows. It is evident that the QCR is the region of highly excited states of quantum systems.

Below we shall show that the quantum and classical problems are not autonomous in the QCR but are coupled with each other, so that the solution of a quantum problem contains the solution of a corresponding classical problem, but not vice versa. Possible dynamical chaos in the nonlinear classical problem has an effect on the quantum problem, so that one can say that quantum chaos arises from the depths of nonlinear classical mechanics and is completely described in terms of nonlinear dynamics - for example, instability, bifurcation, strange attractors and so on. We shall also show that the connection between the quantum and classical problems is reflected in the phase of the wave function, which, having a quite classical meaning, obeys its own classical equation of motion; when this equation is nonlinear, dynamical chaos is excited in the system.

A splendid example of the role of the wave function phase is the description of dynamical chaos in a long Josephson junction [17-24]. Here the wave function phase of the superconducting condensate (the phase difference across the junction) obeys the nonlinear dynamical sine-Gordon equation. The dynamical chaos arising in a long Josephson junction and described by the sine-Gordon equation is essentially quantum chaos, since the question concerns a phenomenon of an exclusively quantum character.
However, the quantum chaos is described here precisely by a classical nonlinear equation. Below we shall try to show that the description of quantum chaos in the more general case may be carried out just as in a long Josephson junction: in terms of nonlinear classical equations of motion obeyed by the phase of the quantum wave function. In addition, the quantum system must be in the QCR, i.e. in highly excited states.

Let us assume that the Hamiltonian of the system has the standard form of a kinetic term plus a potential-energy operator U(x,t), with

\[U(x,t) = U_0(x) - x f(t),\]

where U_0(x) is the unperturbed potential energy and f(t) is the time-dependent external force. (We examine here a one-dimensional system for simplicity.)

We shall seek the solution of the Schrödinger equation in the form (4), built on the solution of the classical equation of motion, a certain constant, and a time-dependent function s(t) whose meaning will become clear later. We note that the phase function A(x,t) is real. (A representation of the phase A(x,t) in the form (5) was first introduced by Husimi [25].)

Substituting (4) into Eq. (1) and taking into account (5), we obtain Eq. (6). Here the subscripts t and y denote the partial derivatives with respect to the time t and the coordinate y, respectively. On the right of Eq. (6) the expressions in both square brackets are equal to zero because of the following relations: i) the classical equation of motion (7), in which the potential is the same as in (3), and ii) the expression for the classical Lagrange function L(t), so that the corresponding phase function has the sense of an action integral.

In deriving Eq. (6) we made use of a potential-energy expansion of the form (11). It is obvious that the expansion (11) is correct when the classical trajectory is close to the quantum one. Thus we obtain the equation for the reduced wave function in the form (12).

We draw attention here to three key points. 1) Equation (12) is the Schrödinger equation again, but without an external force. 2) We have a system of two equations of motion: the quantum Eq. (12) and the classical Eq. (7). In the general case these equations form a system of coupled equations, because the coefficient k can be a function of the classical trajectory. As we show below, a connection between Eqs. (12) and (7) arises when the classical Eq. (7) is nonlinear. 3) The classical Eq. (7) contains a dissipative term, whose coefficient has the sense of a dissipation coefficient. The appearance of dissipation precisely in the classical equation looks quite natural - dissipation has a classical character.

Let us assume first that U_0(x) is the potential energy of a linear harmonic oscillator. Then we obtain Eqs. (15) and (16), in which the natural frequency of the harmonic oscillator appears; these represent the corresponding equations of the quantum and classical linear harmonic oscillators. We see that Eqs. (15) and (16) are autonomous with respect to each other. Thus, if the classical limit (16) of the corresponding quantum problem (15) is linear, then the classical and quantum solutions are not connected with each other.

Let us now assume that U_0(x) has the form of the potential energy of the Duffing oscillator (17), defined by certain constants. For the potential energy (17) the coefficient k takes the form (18), and we obtain the equations of motion (19) and (20). Equation (20) represents the equation of motion of a nonlinear classical oscillator. It is seen that the quantum (19) and classical (20) equations of motion are coupled with each other.
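The chaotic behaviour invoked here is easy to exhibit for the classical Duffing problem. The sketch below (Python with SciPy; the damping, stiffness and drive values are illustrative choices, not parameters from the paper) integrates a damped, periodically driven Duffing oscillator of the general type of Eq. (20) from two initial conditions differing by 10⁻⁸ and shows the separation growing to order unity, the sensitive dependence on initial conditions characteristic of dynamical chaos:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped, periodically driven Duffing oscillator (double-well form):
#   y'' + delta*y' - alpha*y + beta*y^3 = f0 * cos(w*t)
delta, alpha, beta, f0, w = 0.3, 1.0, 1.0, 0.5, 1.2   # a standard chaotic regime

def rhs(t, state):
    y, v = state
    return [v, -delta * v + alpha * y - beta * y**3 + f0 * np.cos(w * t)]

t_end = 200.0
sol_a = solve_ivp(rhs, (0.0, t_end), [0.10, 0.0],
                  max_step=0.01, dense_output=True)
sol_b = solve_ivp(rhs, (0.0, t_end), [0.10 + 1e-8, 0.0],
                  max_step=0.01, dense_output=True)

# Sensitive dependence: a 1e-8 offset in y(0) grows to order one.
for t in (50.0, 100.0, 150.0, 200.0):
    print(t, abs(sol_a.sol(t)[0] - sol_b.sol(t)[0]))
```

In the authors' picture, it is exactly this kind of classical irregularity that enters the wave-function phase A(x,t) and thence the quantum transition probabilities.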
We return now to the discussion of expansion (11). It seems obvious that the classical and quantum trajectories coexist, and remain close to each other, only in the QCR. In the pure quantum region QR and in the pure classical region CR these trajectories cannot coexist: in the CR a de Broglie wave packet decays quickly as a consequence of dispersion, while in the QR the classical trajectory disappears as a consequence of the uncertainty relations. Thus expansion (11) is correct only in the quantum-classical region QCR, in other words in the quasiclassical region. The QCR becomes essential precisely in those cases when the classical problem proves to be nonlinear.

The transition of a particle from the low-lying states (from the QR) into highly excited states (into the QCR) is described by an expression in which the phase A(x,t), defined by (5), enters. It is easily seen that the probability of this transition depends on the solution of the classical equation of motion. Since the classical problem (20) is nonlinear, dynamical chaos can, as is well known [26], arise in it. This chaos leads to irregularities in the wave function phase A(x,t), and also in the wave function itself, which in turn leads to irregularities in the probabilities of transitions into highly excited states, and also from highly excited states into states of the continuous spectrum. In this way it can be said that, from the point of view of the theory stated here, quantum chaos is the dynamical chaos of the nonlinear classical problem that defines the quantum solutions.

These investigations are supported by the Russian Foundation for Basic Research (project No. 96-02-19321).

References

1. Zaslavsky G.M., Chirikov B.V. Stochastic instability of nonlinear oscillations // Usp. Fiz. Nauk. 1971. V. 105. N. 1. P. 3-29.
2. Chirikov B.V., Izrailev F.M., Shepelyansky D.L. Dynamical stochasticity in classical and quantum mechanics // Sov. Sci. Rev., Sect. C. 1981. V. 2. P. 209-223.
3. Toda M., Ikeda K. Quantum version of resonance overlap // J. Phys. A: Math. and Gen. 1987. V. 20. N. 12. P. 3833-3847.
4. Nakamura K. Quantum chaos. Fundamental problems and application to material science // Progr. Theor. Phys. 1989. Suppl. N. 98. P. 383-399.
5. Flores J.C. Kicked quantum rotator with dynamic disorder. A diffusive behaviour in momentum space // Phys. Rev. A. 1991. V. 44. N. 6. P. 3492-3495.
6. Casati G., Guarneri I., Izrailev F., Scharf R. Scaling b…
From Uncyclopedia, the content-free encyclopedia

Idiotic Table of Elements » Helium
Scientific information: moderate to high
Natural habitat: Sitting motionless in front of the television, worrying about her
Physical properties
Melting point: −272.20 °C, or maybe −272.20 °F
Boiling point: −268.93 °C, or maybe −268.93 °F
Appearance: Colorless gas, exhibiting a red-orange, almost sickly glow when placed in a high-voltage electric field
Electronegativity: No data, as she was visiting relatives in the Horsehead Nebula on the day of Dr. "Pervert" Pauling's dumb exam

It is commonly known that Helium is an element from the first period of the periodic table. But few know of the long, extraordinary and unique personal journey that she has faced over the years. Helium was raised by her Auntie Neutrino, along with her lame brother, Hydrogen.

Helium was well liked as a child, by her peers and teachers. She was considered more noble than Hydrogen, and certainly more popular due to her thermally and calorically perfect nature. Said Argon, Helium's best friend at the time, "She didn't react to any old thing, unlike her idiot brother. Boy, he was the simplest kid in the Rho Ophiuchi cloud complex."

When Helium was a girl, she had no difficulty making friends. However, her friends were always made fun of, especially Iridium, who was a very dense transition metal. Hydrogen was also bullied for only having "s" orbitals, but due to Helium's lack of reaction and calm demeanor, the three tough guys, Lonsdaleite, Boron nitride and Diamond, never bothered her about it. Anyhow, in spite of Helium's qualities, her teacher at the time, Mr. Muon, stated that Helium bothered him greatly because "A closed-form solution to the Schrödinger equation for the helium atom has not been found."

During the end of her time at Electron Cloud High, Helium was encouraged by Auntie Neutrino to change, so she could interact with "cooler elements" such as Chlorine, Nitrogen and Oxygen. Helium created two alter egos - Helium-3 and Helium-4. However, some elements thought this was cover for the fact that she swung both ways, considering that functioning as both a fermion and a boson was covering all the bases for an elementary particle. Despite the fact that Helium-4 was so cool it became superfluid, most of the elements ignored Helium's best efforts. After that didn't work, Helium went around boasting that she was named after a god, but according to Uranium, the element that started the fad, "That was, like, so after that was fashionable."

Early Adulthood

Helium, known for her calm attitude, didn't react after she found out that Fluorine and Neon were together. During her early adulthood, Helium started to become jealous of Hydrogen's thermal conductivity, density and specific heat. She too decided that listening to heavy metal would help her find inner peace, although unlike her brother, she found Platinum a little more attractive than Osmium. But, as the heavy metal grew duller and duller, Helium realised that she was more into thermoacoustic stuff.

It is generally agreed that one day, Helium got chatting to Neon, who was a very bright boy in her VB Theory class. Helium and Neon both had a lot in common - the pair were both calm, noble and had high levels of ionization energy. The duo had a comfortable relationship as they never got angry or antagonistic with each other, but that all changed when a new element arrived on the scene.
Fluorine tore the two apart after telling Neon that "Helium doesn't have eight valence electrons like the rest of group 18, oh no, she only has two, just like that unstable weirdo Radium. You can do so much better." Helium moved on, and after graduating out of high school she got an internship with Beryllium, a down-to-earth, solid older element with a lot more connections. According to Argon, "Beryllium really helped Helium take a closer look into her own crystalline structure, and when Helium found out that she was hexagonal close-packed, she looked so much more... congruent." Helium has often said that she wished the start to their relationship had been more meaningful, and hadn't been formed by the decay of tritium.

An Old Relationship

A long time before her internship, Helium and her brother became very close due to the fact that they had a natural affinity for each other. There was a lot of chemistry between the two, and before too long they had a covalent bond. Once they found out, many other elements looked down on the two, saying that it was wrong to be so close. As ever, Helium responded calmly: "We haven't broken Hess's Law, and let's remember, this was back when we were the only two elements. It's not like we had a Big Bang or anything." Of course, the relationship didn't last. Other elements were coming into existence and Hydrogen's nature always made things uncomfortable. The length of their relationship was 0.772 Å, which Helium described as "adequate." Thanks to her coolness and popularity, the relationship was forgotten.
Title: Development of 3D-printed microwave metamaterial absorbers

This project relates to the development of broadband metamaterial absorbers for mm-wavelength radiation. Such absorber technologies are being considered for next-generation telescopes measuring the polarization of the cosmic microwave background (CMB). The project involves the study of potential plastic resin candidates doped with conductive particles to increase mm-wavelength absorption, as well as the development of code that generates geometries suitable for 3D printing. As part of the project, the student will develop familiarity with a resin 3D printer and test fabrication of absorbing geometries. The project will conclude with measurement of material properties and comparison with expectations.

Supervisor: Jon Gudmundsson

Title: Machine learning algorithms and optics design

The project aims to explore and develop machine learning algorithms to automatically design optical instrumentation for astronomical observatories. High-fidelity astronomy requires designing increasingly complex optical instruments that meet the researcher's demands. Traditional optical systems have consisted only of limited pre-machined convex or concave shapes. In recent years, it has become feasible to produce so-called freeform lenses of arbitrary surface shape, allowing a wider variety of optical tasks to be addressed. This opens the opportunity to design more complex optical instruments that satisfy specified criteria. However, the ability to machine such new lenses comes at the expense of a harder design task. This thesis will explore various machine learning approaches to search for optimal optical designs that satisfy predefined criteria and requirements of scientific instrumentation.

Supervisors: Jens Jasche and Jon Gudmundsson

Title: Method of moments approach to optical modeling for mm-wavelength telescopes

In this project, we will develop an algorithm that makes use of the method of moments approach to electromagnetic scattering (a concept often employed in stealth technology) to study realistic optical systems and their impact on the overall noise budget of our bolometers. Although the method of moments approach to electromagnetic scattering problems has been around for a few decades, it has not seen much use in the modeling of large telescope systems operating at radio and mm wavelengths. Some of the basic concepts are introduced in textbooks by Harrington. We will review the algorithm and implement it on a few simple electromagnetic scattering problems before developing code that applies the algorithm to realistic telescopes used by CMB experiments.

Contact Jón Gudmundsson for more information

Title: Avoided level crossings in cosmology

There is strong observational evidence that most of the matter in the universe is invisible, or 'dark'. It is not known what particles comprise this dark matter, or how they came into being. An "avoided level crossing" is a quantum phenomenon, well known in atomic and neutrino physics, in which two eigenvalues of a time-dependent Hamiltonian become almost degenerate, and transitions between the eigenstates can become "resonant" and unsuppressed; a minimal textbook sketch follows below. This project will explore whether avoided level crossings can explain the production of dark matter in the early universe, and if so, what additional predictions this entails. This project requires an active interest in theoretical quantum physics, astroparticle physics, and cosmology.

Contact: David Marsh
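As a minimal textbook sketch of the phenomenon (a generic two-level Landau-Zener sweep, not the cosmological model to be studied in the project; ħ = 1 and all numbers are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Textbook two-level avoided crossing (Landau-Zener), hbar = 1:
#     H(t) = [[ v*t, Delta],
#             [Delta, -v*t]]
# The diabatic levels +-v*t cross at t = 0; the coupling Delta turns the
# crossing into an avoided one with minimum gap 2*Delta.
v, Delta = 1.0, 0.25

def schrodinger(t, psi):
    H = np.array([[v * t, Delta], [Delta, -v * t]], dtype=complex)
    return -1j * (H @ psi)

# Start far in the past in one diabatic state and sweep through t = 0.
T = 40.0
sol = solve_ivp(schrodinger, (-T, T), np.array([1.0, 0.0], dtype=complex),
                max_step=0.01)
p_stay = abs(sol.y[0, -1])**2          # probability to remain in the diabatic state
p_lz = np.exp(-np.pi * Delta**2 / v)   # Landau-Zener prediction
print(f"numerical: {p_stay:.3f}, Landau-Zener formula: {p_lz:.3f}")
```

The two numbers should roughly agree, illustrating how the coupling Δ controls whether the sweep follows the adiabatic eigenstate or "jumps" the gap.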
Title: Axion-photon conversion and correlation functions

The QCD axion and more general axion-like particles (ALPs) are theoretically very well-motivated extensions of the Standard Model of particle physics. A prediction of this theory is that ALPs mix with photons in the presence of magnetic fields. The mixing equation is classical, but can be written as a Schrödinger equation for a 3-level system. This makes it possible to translate results from quantum mechanics to learn about the predictions of ALP theories. This project will develop the formalism of axion-photon mixing as applied to one of the most promising targets for axion searches: the X-ray emitting intracluster medium of galaxy clusters. This project will include theoretical and numerical work that may contribute to the foundation for future ALP searches by the next generation of satellite missions.

Contact: David Marsh

Title: Geometric destabilisation of cosmological fields

Inflation provides the leading paradigm for the origin of cosmic structure. In this framework, quantum scalar field fluctuations froze into the fabric of space during a hypothetical period of accelerated expansion in the early universe. Inflation can be realised with rather simple ingredients, such as one or more scalar fields slowly "rolling" down a potential. Theories with more than one field are theoretically very well motivated, and include new features such as non-trivial curvature of the field space itself. This project will investigate a potential instability of multifield inflationary theories in which the field-space curvature destabilises certain fluctuations around the inflationary background. After carefully characterising this effect geometrically, the project may investigate whether it can be realised in the type of complex Kähler geometries that are relevant in theories of supergravity. This project involves theoretical work in cosmology and field theory, including aspects of geometry.

Contact: David Marsh

Title: Modified dispersion relations in cosmology

Can we use cosmological observations to observe, e.g., quantum effects of gravity? If we can measure the gravitational lensing effect and/or the group velocity through the light travel time for single sources at different energies, we are able to investigate modified dispersion relations. First, review systems that have observations at different wavelengths. Second, use available data to constrain the energy scale of modifications to the dispersion relation. Last, interpret the results in terms of modifications to general relativity. Is it possible that modified dispersion relations can help ease the tension between the inferred lensing of the cosmic microwave background and the lensing of galaxies observed in optical light? How do these limits compare to time delay constraints in, e.g., https://arxiv.org/abs/2109.07850?

Contact Edvard Mörtsell for more information

Title: Lensed gravitational waves

Cook up a lens that explains the model of lensed gravitational waves in https://arxiv.org/abs/2007.12709. Also, is it possible that many other gravitational wave events are lensed and in fact at much higher redshifts than commonly believed? See https://arxiv.org/abs/2006.13219 and https://arxiv.org/abs/2106.06545.

Contact Edvard Mörtsell for more information

Title: Gravitational waves in modified gravity

In extended gravity theories, e.g., massive graviton theories, the gravitational wave velocity will be energy dependent.
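As a minimal illustration (the textbook dispersion relation for a graviton of mass m_g; generic, and not tied to any particular modified-gravity model), the group velocity follows from

\[E^2=p^2c^2+m_g^2c^4 \quad\Rightarrow\quad \frac{v_g}{c}=\sqrt{1-\left(\frac{m_g c^2}{E}\right)^2}\approx 1-\frac{1}{2}\left(\frac{m_g c^2}{E}\right)^2,\]

so higher-frequency (higher-energy) waves travel closer to the speed of light, and two components emitted together arrive separated by roughly \(\Delta t\simeq \frac{D}{2c}\,(m_g c^2)^2\,(E_1^{-2}-E_2^{-2})\) after traveling a distance D.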
This means that the higher frequency signal from the later stages of a coalescing event may catch up with the early low frequency part, causing a gravitational-wave "sonic boom" at the detector, or even an inverted signal. In principle this could be searched for in archival data. The project aims at deriving possible highly distorted gravitational wave signals, very different from the ones nominally searched for and therefore possibly missed by common detection pipelines.

Contact Edvard Mörtsell for more information

Title: Microlensing bias in gravitationally lensed supernovae and quasars

In cases of multiply imaged supernovae and quasars, we expect the magnification of individual images to be affected by microlensing from stars and possibly compact dark matter in the lens galaxy. In principle, this can be corrected for on a statistical basis. However, the fact that it is easier to detect high-magnification events may cause a bias towards such events in the observed data, possibly invalidating such simple corrections. The project aims at quantifying this bias through the use of simulated events.

Contact Edvard Mörtsell for more information

Title: Modified gravity and rotation curves

Can we fit galactic rotation curves if the acceleration scale of Modified Newtonian Dynamics (MOND, or some similar theory) is mass dependent? An example is bimetric theory, where the so-called Vainshtein radius, within which general relativity is restored, depends on the mass. References: e.g. https://arxiv.org/abs/1401.5619 and https://arxiv.org/abs/1705.02366.

Contact Edvard Mörtsell for more information

Title: The prior dependence in Bayesian model selection

When doing model selection using Bayesian statistics, the assumed prior on the parameters of the model is very important. In particular, the range of the prior can have a large impact on the validity of the model in question. As an example, the case for a non-zero cosmological constant, Λ, is expected to be beyond doubt given how much better the fit to observed data is compared to the case with Λ = 0. However, given that the theoretical prior on the possible value of Λ is huge, it is not obvious that it will be preferred in a strict Bayesian analysis. This, and similar cases, should be investigated in the project. See also https://arxiv.org/abs/2102.10671 and https://arxiv.org/abs/2111.04231.

Contact Edvard Mörtsell for more information

Title: Dark matter direct detection project

The XENONnT experiment is one of the world's most sensitive detectors for measuring direct interactions between potential dark matter candidates and ordinary matter. It is situated at LNGS in Italy, ca. 1.4 km deep under the Abruzzo mountains, and started taking data in 2021. Our group is involved in the data analysis, specifically the high-end statistical analysis and the development of a new statistical framework, and in the operation of the detector, specifically the photosensors used to measure the light and charge signals expected from a dark matter interaction with the detector. In addition, we are involved in the development of a completely new type of photosensor, called ABALONE, for the future DARWIN experiment, which will be even more sensitive than the current XENONnT detector. Potential projects in our group include the setup and operation of cryogenic tests (at −100 °C) for these new photosensors in our lab at AlbaNova, as well as the data analysis of these tests. If you are interested in this topic or other topics our group is involved in, please don't hesitate to contact us (contacts: Jörn and Jan Conrad).
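To give a flavour of the kind of statistical statement such an analysis produces, here is a toy upper-limit calculation for a simple Poisson counting experiment (made-up numbers; not the XENONnT statistical framework):

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

# Toy counting experiment: b expected background events, n observed.
# Find the 90% CL upper limit on a signal mean s, i.e. the largest s for
# which observing <= n events is still at least 10% probable.
b, n = 1.0, 2          # made-up numbers for illustration

def tail_prob(s):
    return poisson.cdf(n, b + s) - 0.10

s_up = brentq(tail_prob, 0.0, 50.0)
print(f"90% CL upper limit on the signal mean: s < {s_up:.2f} events")
```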
Analysis of Supernovae from the Zwicky Transient Facility

Type Ia Supernovae (SNe Ia) are bright explosions of white-dwarf stars that can be used to measure cosmological distances. The accelerated expansion of the Universe was discovered using only ~100 SNe Ia in 1998 (Nobel Prize in 2011), and we now have more than 4000 SNe Ia discovered by the Zwicky Transient Facility (ZTF) to analyse. SNe Ia remain essential for studying the properties of the "dark energy" driving the accelerated expansion of the Universe, but the lack of understanding of the white-dwarf progenitor systems and of the standardization corrections to the light-curve shapes and colors represents a severe limitation for SNe Ia as cosmological probes. One source of uncertainty comes from cosmic dust particles along the line of sight (e.g. in the Milky Way, host galaxies and/or circumstellar environments of the SNe) that affect the observed colours and luminosities of SNe Ia, typically making them redder and fainter.

Related thesis projects: Measuring extinction from dust in the Milky Way, SN host galaxies and the circumstellar environment using the most reddened SNe Ia in ZTF; Analyzing light-curves and spectra of extreme and "weird" SNe Ia, and comparing to different explosion scenarios; Correlations between supernova features and host galaxy properties; Sample analysis of SN Ia "siblings" (SNe sharing the same host galaxy, e.g. Biswas et al. 2021) to self-calibrate the light-curve corrections. (Contacts: Joel Johansson, Ariel Goobar, Steve Schulze)

Cosmology with gravitationally lensed Supernovae and Quasars

For the rare cases of nearly perfect alignment between an observer, an intervening galaxy and a background source, multiple images of a single source can be detected, a phenomenon known as strong gravitational lensing. Multiple images of lensed sources arrive at different times because they travel along different paths and through different gravitational potentials to reach us. For transient phenomena like supernovae (SNe) or quasars (QSOs), strong lensing offers exciting opportunities to directly measure time delays between the images, which can be used to study the distribution of matter in the lensing object and to measure the Hubble constant, H0, which is currently the most hotly contested parameter in cosmology.

The first strongly lensed Type Ia supernova (SN Ia) was recently discovered (iPTF16geu, Goobar et al. 2017), and ZTF is well suited to search for more of these rare transient phenomena. Gravitationally lensed SNe Ia are particularly interesting due to their "standard candle" nature, i.e., all explosions have nearly identical peak luminosity, intrinsic colors and light-curve shapes, making them ideal tools for magnification and time-delay measurements, as well as probes of the lensing matter distribution.

Related thesis projects: Implement machine learning techniques to detect gravitationally lensed SNe in ZTF; Measure time delays from multiply imaged QSOs in ZTF; Time delays from simulated spectral time series of supernovae, see e.g. Johansson et al. 2021. (Contacts: Ana Sagués Carracedo, Remy Joseph, Joel Johansson)

UV data of Superluminous Supernovae

The era of large-scale time-domain astronomical surveys has arrived. Every night, modern all-sky surveys detect hundreds of thousands of extragalactic transients.
In less than 5 years, the Vera Rubin Observatory will increase the nightly discovery rate by a factor of 10. It will also push large-scale time-domain astronomy to the young high-redshift Universe. As we look further back in time (i.e., to higher redshift), telescopes observe not the optical but the UV emission of SNe, redshifted by the expansion of the Universe. However, little is known about the UV emission of SNe and, therefore, about what SNe at high redshift might look like. In the past years, we have collected UV data of a particular SN class, namely superluminous supernovae (SLSNe). SLSNe are 100 times more luminous than regular core-collapse SNe and Type Ia SNe. They have been a focus of SN science ever since their discovery, because of the opportunity they provide to study, for instance, new explosion channels of very massive stars in the distant Universe. A master student will use the UV data of SLSNe to predict the light curves of high-redshift SLSNe and compare them to those of known high-redshift SLSNe. (Contact: Steve Schulze)
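A short worked example of the redshifting at play (generic numbers, not a project result): the observed and rest-frame wavelengths are related by

\[\lambda_{\mathrm{obs}}=(1+z)\,\lambda_{\mathrm{rest}},\]

so rest-frame UV emission at 200 nm from a supernova at z = 2 arrives at 600 nm, squarely in the optical bands covered by surveys such as ZTF and the Vera Rubin Observatory.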
A graph of isotope stability, with some of the magic numbers.

In nuclear physics, a magic number is a number of nucleons (either protons or neutrons, separately) such that they are arranged into complete shells within the atomic nucleus. As a result, atomic nuclei with a 'magic' number of protons or neutrons are much more stable than other nuclei. The seven most widely recognized magic numbers as of 2019 are 2, 8, 20, 28, 50, 82, and 126 (sequence A018226 in the OEIS). For protons, this corresponds to the elements helium, oxygen, calcium, nickel, tin, lead and the hypothetical unbihexium, although 126 is so far only known to be a magic number for neutrons. Atomic nuclei consisting of such a magic number of nucleons have a higher average binding energy per nucleon than one would expect based upon predictions such as the semi-empirical mass formula, and are hence more stable against nuclear decay.

The unusual stability of isotopes having magic numbers means that transuranium elements could theoretically be created with extremely large nuclei and yet not be subject to the extremely rapid radioactive decay normally associated with high atomic numbers. Large isotopes with magic numbers of nucleons are said to exist in an island of stability. Unlike the magic numbers 2–126, which are realized in spherical nuclei, theoretical calculations predict that nuclei in the island of stability are deformed.[1][2][3] Before this was realized, higher magic numbers, such as 184, 258, 350, and 462 (sequence A033547 in the OEIS), were predicted based on simple calculations that assumed spherical shapes: these are generated by the formula n(n² + 5)/3 = 2[C(n+1, 3) + n], where C(n, k) denotes the binomial coefficient. It is now believed that the sequence of spherical magic numbers cannot be extended in this way. Further predicted magic numbers are 114, 122, 124, and 164 for protons as well as 184, 196, 236, and 318 for neutrons.[1][4][5] However, more modern calculations predict 228 and 308 for neutrons, along with 184 and 196.[6]

History and etymology

Maria Goeppert Mayer

While working on the Manhattan Project, the German physicist Maria Goeppert Mayer became interested in the properties of nuclear fission products, such as decay energies and half-lives.[7] In 1948, she published a body of experimental evidence for the occurrence of closed nuclear shells for nuclei with 50 or 82 protons or 50, 82, and 126 neutrons.[8] It had already been known earlier that nuclei with 20 protons or neutrons were stable: that was evidenced by calculations by the Hungarian-American physicist Eugene Wigner, one of her colleagues in the Manhattan Project.[9] Two years later, in 1950, a new publication followed in which she attributed the shell closures at the magic numbers to spin-orbit coupling.[10]

According to Steven Moszkowski (a student of Maria Goeppert Mayer), the term "magic number" was coined by Wigner: "Wigner too believed in the liquid drop model, but he recognized, from the work of Maria Mayer, the very strong evidence for the closed shells.
It seemed a little like magic to him, and that is how the words 'Magic Numbers' were coined."[11] These magic numbers were the bedrock of the nuclear shell model, which Mayer developed in the following years together with Hans Jensen and which culminated in their shared 1963 Nobel Prize in Physics.[12]

Doubly magic

Nuclei which have neutron number and proton (atomic) number each equal to one of the magic numbers are called "doubly magic", and are especially stable against decay.[13] The known doubly magic isotopes are helium-4, helium-10, oxygen-16, calcium-40, calcium-48, nickel-48, nickel-56, nickel-78, tin-100, tin-132, and lead-208. However, only the first, third, fourth, and last of these doubly magic nuclides are completely stable, although calcium-48 is extremely long-lived and therefore naturally occurring, disintegrating only by a very inefficient double beta minus decay process.

Doubly magic effects may allow the existence of stable isotopes which otherwise would not have been expected. An example is calcium-40, with 20 neutrons and 20 protons, which is the heaviest stable isotope made of the same number of protons and neutrons. Both calcium-48 and nickel-48 are doubly magic because calcium-48 has 20 protons and 28 neutrons while nickel-48 has 28 protons and 20 neutrons. Calcium-48 is very neutron-rich for such a light element, but like calcium-40, it is stabilized by being doubly magic. Magic number shell effects are seen in the ordinary abundances of elements: helium-4 is among the most abundant (and stable) nuclei in the universe[14] and lead-208 is the heaviest stable nuclide.

Magic effects can keep unstable nuclides from decaying as rapidly as would otherwise be expected. For example, tin-100 and tin-132 are doubly magic isotopes of tin that are unstable, and represent endpoints beyond which stability drops off rapidly. Nickel-48, discovered in 1999, is the most proton-rich doubly magic nuclide known.[15] At the other extreme, nickel-78 is also doubly magic, with 28 protons and 50 neutrons, a ratio observed only in much heavier elements, apart from tritium with one proton and two neutrons (78Ni: 28/50 = 0.56; 238U: 92/146 = 0.63).[16]

In December 2006, hassium-270, with 108 protons and 162 neutrons, was discovered by an international team of scientists led by the Technical University of Munich; it has a half-life of 9 seconds.[17] Hassium-270 evidently forms part of an island of stability, and may even be doubly magic due to the deformed (American-football- or rugby-ball-like) shape of this nucleus.[18][19] Although Z = 92 and N = 164 are not magic numbers, the undiscovered neutron-rich nucleus uranium-256 may be doubly magic and spherical due to the difference in size between low- and high-angular-momentum orbitals, which alters the shape of the nuclear potential.[20]

Magic numbers are typically obtained by empirical studies; if the form of the nuclear potential is known, then the Schrödinger equation can be solved for the motion of nucleons and the energy levels determined. Nuclear shells are said to occur when the separation between energy levels is significantly greater than the local mean separation. In the shell model for the nucleus, magic numbers are the numbers of nucleons at which a shell is filled. For instance, the magic number 8 occurs when the 1s1/2, 1p3/2, 1p1/2 energy levels are filled, as there is a large energy gap between the 1p1/2 and the next highest 1d5/2 energy levels.
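To make the shell-filling arithmetic concrete, the sketch below fills single-particle levels in a simplified, schematic spin-orbit ordering (the exact ordering of levels within a shell varies between references) and marks where the cumulative nucleon count lands on a magic number:

```python
# Fill single-particle levels (label, occupancy 2j+1) in a schematic
# spin-orbit ordering, up to the N = 126 shell closure.
levels = [
    ("1s1/2", 2),
    ("1p3/2", 4), ("1p1/2", 2),
    ("1d5/2", 6), ("2s1/2", 2), ("1d3/2", 4),
    ("1f7/2", 8),
    ("2p3/2", 4), ("1f5/2", 6), ("2p1/2", 2), ("1g9/2", 10),
    ("1g7/2", 8), ("2d5/2", 6), ("1h11/2", 12), ("2d3/2", 4), ("3s1/2", 2),
    ("1h9/2", 10), ("2f7/2", 8), ("1i13/2", 14), ("2f5/2", 6),
    ("3p3/2", 4), ("3p1/2", 2),
]
# Large energy gaps (shell closures) occur after these cumulative counts:
closures = {2, 8, 20, 28, 50, 82, 126}

total = 0
for label, occ in levels:
    total += occ
    marker = "  <-- magic" if total in closures else ""
    print(f"{label:>6}  cumulative = {total:3d}{marker}")
```

Running it prints the cumulative occupancy after each level, with markers landing exactly at 2, 8, 20, 28, 50, 82 and 126.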
The atomic analog to nuclear magic numbers is those numbers of electrons leading to discontinuities in the ionization energy. These occur for the noble gases helium, neon, argon, krypton, xenon, radon and oganesson. Hence, the "atomic magic numbers" are 2, 10, 18, 36, 54, 86 and 118. As with the nuclear magic numbers, these are expected to be changed in the superheavy region due to spin-orbit coupling effects affecting subshell energy levels. Hence copernicium (112) and flerovium (114) are expected to be more inert than oganesson (118), and the next noble gas after these is expected to occur at element 172 rather than 168 (which would continue the pattern).

In 2010, an alternative explanation of magic numbers was given in terms of symmetry considerations. Based on the fractional extension of the standard rotation group, the ground-state properties (including the magic numbers) for metallic clusters and nuclei were simultaneously determined analytically. A specific potential term is not necessary in this model.[21][22]

References

2. ^ "Nuclear scientists eye future landfall on a second 'island of stability'".
3. ^ Grumann, Jens; Mosel, Ulrich; Fink, Bernd; Greiner, Walter (1969). "Investigation of the stability of superheavy nuclei around Z=114 and Z=164". Zeitschrift für Physik. 228 (5): 371–386. Bibcode:1969ZPhy..228..371G. doi:10.1007/BF01406719.
6. ^ Koura, H. (2011). Decay modes and a limit of existence of nuclei in the superheavy mass region (PDF). 4th International Conference on the Chemistry and Physics of the Transactinide Elements. Retrieved 18 November 2018.
7. ^ Out of the Shadows: Contributions of Twentieth-Century Women to Physics. Byers, Nina. Cambridge: Cambridge University Press. 2006. ISBN 0-521-82197-5. OCLC 255313795.
8. ^ Mayer, Maria G. (1948-08-01). "On Closed Shells in Nuclei". Physical Review. 74 (3): 235–239. doi:10.1103/physrev.74.235. ISSN 0031-899X.
9. ^ Wigner, E. (1937-01-15). "On the Consequences of the Symmetry of the Nuclear Hamiltonian on the Spectroscopy of Nuclei". Physical Review. 51 (2): 106–119. doi:10.1103/PhysRev.51.106.
10. ^ Mayer, Maria Goeppert (1949-06-15). "On Closed Shells in Nuclei. II". Physical Review. 75 (12): 1969–1970. doi:10.1103/PhysRev.75.1969.
11. ^ Audi, Georges (2006). "The history of nuclidic masses and of their evaluation". International Journal of Mass Spectrometry. 251 (2–3): 85–94. arXiv:physics/0602050. Bibcode:2006IJMSp.251...85A. doi:10.1016/j.ijms.2006.01.048.
12. ^ "The Nobel Prize in Physics 1963". NobelPrize.org. Retrieved 2020-06-27.
13. ^ "What is Stable Nuclei - Unstable Nuclei - Definition". Periodic Table. 2019-05-22. Retrieved 2019-12-22.
14. ^ Nave, C. R. "The Most Tightly Bound Nuclei". HyperPhysics.
15. ^ W., P. (October 23, 1999). "Twice-magic metal makes its debut - isotope of nickel". Science News. Archived from the original on May 24, 2012. Retrieved 2006-09-29.
16. ^ "Tests confirm nickel-78 is a 'doubly magic' isotope". Phys.org. September 5, 2014. Retrieved 2014-09-09.
17. ^ Audi, G.; Kondev, F. G.; Wang, M.; Huang, W. J.; Naimi, S. (2017). "The NUBASE2016 evaluation of nuclear properties" (PDF). Chinese Physics C. 41 (3): 030001–134. Bibcode:2017ChPhC..41c0001A. doi:10.1088/1674-1137/41/3/030001.
18. ^ Mason Inman (2006-12-14). "A Nuclear Magic Trick". Physical Review Focus. Vol. 18. Retrieved 2006-12-25.
19. ^ Dvorak, J.; Brüchle, W.; Chelnokov, M.; Dressler, R.; Düllmann, Ch.
E.; Eberhardt, K.; Gorshkov, V.; Jäger, E.; Krücken, R.; Kuznetsov, A.; Nagame, Y.; Nebel, F.; Novackova, Z.; Qin, Z.; Schädel, M.; Schausten, B.; Schimpf, E.; Semchenkov, A.; Thörle, P.; Türler, A.; Wegrzecki, M.; Wierczinski, B.; Yakushev, A.; Yeremin, A. (2006). "Doubly Magic Nucleus ²⁷⁰₁₀₈Hs₁₆₂". Physical Review Letters. 97 (24): 242501. Bibcode:2006PhRvL..97x2501D. doi:10.1103/PhysRevLett.97.242501. PMID 17280272.
20. ^ Koura, H.; Chiba, S. (2013). "Single-Particle Levels of Spherical Nuclei in the Superheavy and Extremely Superheavy Mass Region". Journal of the Physical Society of Japan. 82 (1): 014201. Bibcode:2013JPSJ...82a4201K. doi:10.7566/JPSJ.82.014201.
21. ^ Herrmann, Richard (2010). "Higher dimensional mixed fractional rotation groups as a basis for dynamic symmetries generating the spectrum of the deformed Nilsson-oscillator". Physica A. 389 (4): 693–704. arXiv:0806.2300. Bibcode:2010PhyA..389..693H. doi:10.1016/j.physa.2009.11.016.
22. ^ Herrmann, Richard (2010). "Fractional phase transition in medium size metal clusters and some remarks on magic numbers in gravitationally and weakly bound clusters". Physica A. 389 (16): 3307–3315. arXiv:0907.1953. Bibcode:2010PhyA..389.3307H. doi:10.1016/j.physa.2010.03.033.