How is the following classical optics phenomenon explained in quantum electrodynamics?

• Reflection and Refraction

Are they simply due to photons being absorbed and re-emitted? How do we get to Snell's law, for example, in that case? Split by request: See the other part of this question here.

Nice question! – student Dec 18 '10 at 11:03
Feynman's little pop-sci book on QED addresses these questions very well. I'd recommend it for anyone who doesn't want to muddle through the nitty-gritty of the math. Heck, it's a fun read even if you do want to work through the math. – dmckee Dec 18 '10 at 15:33
By taking the classical limit, it is then explained classically. Correspondence principle. – user1708 Dec 18 '10 at 16:58
@kalle43: that goes without saying. – Sklivvz Dec 18 '10 at 17:01
As I said in the comments below, the question is too broad right now and it would perhaps be wise to split it up. – Marek Dec 19 '10 at 10:30

Accepted answer:

Hwlau is correct about the book, but the answer actually isn't that long, so I think I can try to mention some basic points.

Path integral

One approach to quantum theory, called the path integral, tells you that you have to sum probability amplitudes (I'll assume that you have at least some idea of what a probability amplitude is; QED can't really be explained without this minimal level of knowledge) over all possible paths that the particle can take. Now for photons the probability amplitude of a given path is $\exp(iKL)$, where $K$ is some constant and $L$ is the length of the path (note that this is a very simplified picture, but I don't want to get too technical, so this is fine for now). The basic point is that you can imagine that amplitude as a unit vector in the complex plane. So when doing a path integral you are adding lots of short arrows (this terminology is of course due to Feynman).
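The arrow-summing picture can be sketched numerically. This is my own toy setup, not from the answer (the endpoints, the constant K, and the window widths are all arbitrary assumptions): sum exp(iKL) over a one-parameter family of bent paths through an intermediate plane, and compare the contribution of paths near the straight line with an equally wide off-axis bundle.

```python
import numpy as np

# Toy path-integral sketch: light goes from A = (0, 0) to B = (2, 0) via a
# point (1, y) on an intermediate plane; each detour carries an "arrow"
# exp(i*K*L) where L is the bent path's length.  K is an arbitrary choice.
K = 200.0
ys = np.linspace(-1.0, 1.0, 20001)
L = 2.0 * np.hypot(1.0, ys)            # length of the path A -> (1, y) -> B
arrows = np.exp(1j * K * L)
dy = ys[1] - ys[0]

total = arrows.sum() * dy
near = arrows[np.abs(ys) < 0.1].sum() * dy              # near the straight path
far = arrows[(ys > 0.5) & (ys < 0.7)].sum() * dy        # same-width off-axis bundle

print(abs(near), abs(far))  # the bundle near the extremal path dominates
```

Off-axis arrows point in rapidly rotating directions and nearly cancel, while arrows near the extremal (straight) path share almost the same phase and add up.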
In general, for any given trajectory I can find many slightly shorter and longer paths, so these will interfere destructively (you will be adding lots of arrows that point in random directions). But there can exist some special paths which are either longest or shortest (in other words, extremal) and these will give you constructive interference. This is Fermat's principle.

Fermat's principle

So much for the preparation, and now to answer your question. We will proceed in two steps. First we will give the classical answer using Fermat's principle, and then we will address the other issues that arise.

Let's illustrate this first on the problem of light traveling between points $A$ and $B$ in free space. You can find lots of paths between them, but any path that is not extremal won't actually contribute to the path integral, for the reasons given above. The only one that will contribute is the shortest one, so this recovers the fact that light travels in straight lines. The same argument recovers the law of reflection. For refraction you will have to take into account that the constant $K$ mentioned above depends on the index of refraction (at least classically; we will explain how it arises from microscopic principles later). But again you can arrive at Snell's law using just Fermat's principle.

Now to address the actual microscopic questions. First, the index of refraction arises because light travels slower in materials. And what about reflection? Well, we are actually getting to the roots of QED, so it's about time we introduced interactions. Amazingly, there is actually only one interaction: an electron absorbs a photon. This interaction again gets a probability amplitude, and you have to take this into account when computing the path integral. So let's see what we can say about a photon that goes from $A$, hits a mirror, and then goes to $B$. We already know that the photon travels in straight lines both between $A$ and the mirror and between the mirror and $B$.
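The Fermat's-principle route to Snell's law can be checked numerically: minimize the optical path length over the point where the ray crosses the interface and read off the law. The geometry and the indices below are made up for illustration.

```python
import numpy as np

# Sketch (my own setup): light from A = (0, 1) in a medium with index n1
# crosses the interface y = 0 at (x, 0) and reaches B = (2, -1) in a medium
# with index n2.  Fermat: the realised crossing point minimises the optical
# path length n1*|A -> (x,0)| + n2*|(x,0) -> B|.
n1, n2 = 1.0, 1.5
a, b, d = 1.0, 1.0, 2.0          # heights of A, B and horizontal separation

xs = np.linspace(0.0, d, 200001)
opl = n1 * np.hypot(a, xs) + n2 * np.hypot(b, d - xs)   # optical path length
x = xs[np.argmin(opl)]

sin1 = x / np.hypot(a, x)              # sine of the angle of incidence
sin2 = (d - x) / np.hypot(b, d - x)    # sine of the angle of refraction
print(n1 * sin1, n2 * sin2)            # Snell's law: these agree
```

The minimizing crossing point satisfies n1 sin θ1 = n2 sin θ2 to within the grid resolution, which is exactly Snell's law.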
What can happen in between? Well, the complete picture is of course complicated: the photon can get absorbed by an electron and then re-emitted (note that even if we are talking about "the photon" here, the emitted photon is actually distinct from the original one; but that doesn't matter much). It can then travel for some time inside the material, get absorbed by another electron, be re-emitted again, and finally fly back to $B$. To make the picture simpler we will just consider the case where the material is a perfect mirror (if it were, e.g., glass, you would actually get multiple reflections from all of the layers inside the material, most of which would destructively interfere, leaving you with reflections from the front and back surfaces of the glass; obviously, covering that would make this already long answer twice as long :-)). For mirrors there is only one major contribution: the photon gets scattered (absorbed and re-emitted) directly on the surface layer of electrons of the mirror and then flies back.

Quiz question: what about the process where the photon flies to the mirror and then changes its mind and flies back to $B$ without interacting with any electrons? This is surely a possible trajectory we have to take into account. Is this an important contribution to the path integral or not?

+1, makes sense to me, but what about color? :-) – Sklivvz Dec 18 '10 at 17:47
@Sklivvz: oh, I completely missed that part of the question. Thanks. But my answer is already too long, so I guess I will suggest the OP ask that as a separate question. Actually, @hwlau gives a correct first view of the problem (quantum mechanical), but the question actually deserves a lot more, I think. – Marek Dec 18 '10 at 17:55
@Sklivvz: oh, you are the OP :-D I am so sorry :-D – Marek Dec 18 '10 at 17:55
+1, pretty good with this length.
The difficult part is explaining in a few sentences why the other paths cancel exactly, such as why the refracted path bends toward the normal ;) – hwlau Dec 19 '10 at 10:47

Second answer:

It really deserves a long discussion. You may be interested in the book "QED: The Strange Theory of Light and Matter" by Richard Feynman (or the corresponding video), which gives a comprehensive introduction with almost no numbers or formulas.

As for color: for the solution of the Schrödinger equation of the hydrogen atom, the energy levels are discrete, so its absorption spectrum is also discrete. In this case, only a few colors can be seen. However, in a solid the atoms interact strongly with each other and the resulting absorption spectrum can be very complicated. This interaction depends strongly on the structure and on the outer electrons. Temperature can play an essential role: it can drive a structural change or a phase transition, and the color changes with it. I think there is no easy explanation for the exact absorption spectrum, or color, of a material without doing complicated calculations.
San José State University
Thayer Watkins
Silicon Valley & Tornado Alley

Second Quantization in Quantum Physics Means Something Different than What it is Thought to Mean

Second Quantization is a body of physical analysis in quantum field theory that focuses on the occupation numbers of states rather than the physical states of particular particles. It speaks of operations which create and which annihilate a particle in a field. When this is applied to the quantum analysis of a harmonic oscillator, it is seen that what the term particle refers to in second quantization is entirely different from what particle refers to in other contexts.

Here is what the probability density function looks like for a particular level of energy. The number of peaks of the probability density function is 2n+1, where n is the principal quantum number. What an increase in n adds to the solution is roughly depicted below. It is n which the creation and annihilation operators change. The details will be given later. First the mathematical background will be given.

Mathematical Background

Let V be a vector space of complex elements. A function that maps V onto V is called a transformation. A linear transformation is called an operator. Operators can be added and subtracted. The concatenation of two operators is defined and will be referred to as the multiplication of operators. This multiplication of operators is associative, (AB)C=A(BC), but not commutative; in general AB≠BA. The complex conjugate of an element u of V is denoted as u*. The adjoint of an operator A, denoted A*, is the operator satisfying ⟨A*u, v⟩ = ⟨u, Av⟩ for all u and v in V. If V is finite dimensional then an operator is just a matrix. When V consists of functions it is infinite dimensional.

Let A and B be operators on V.
The commutator [A, B] is defined as

[A, B] = AB − BA

This leads to

AB = BA + [A, B]
BA = AB + [B, A] = AB − [A, B]

Note that the commutator is antisymmetric: [A, B] = −[B, A].

Some General Relationships for Operators on a Function Space

Let A, B and C be operators. Then for the commutator [AB, C]

[AB, C] = (AB)C − C(AB)

and from the associativity of operator multiplication

[AB, C] = A(BC) − (CA)B

and from subtracting and adding A(CB) on the right

[AB, C] = A(BC) − A(CB) + A(CB) − (CA)B

and hence

[AB, C] = A[B, C] + (AC)B − (CA)B

and finally

[AB, C] = A[B, C] + [A, C]B

Now let A be any operator and A* its adjoint. Applying the above identity to [A*A, A] gives

[A*A, A] = A*[A, A] + [A*, A]A

but [A, A]=0, so

[A*A, A] = [A*, A]A

Since the commutator is antisymmetric,

[A*A, A] = −[A, A*]A

Similarly,

[A*A, A*] = A*[A, A*] + [A*, A*]A = A*[A, A*]

Canonical Quantization

Canonical quantization is defined by the condition

[A, A*] = I

where I is the identity operator. If A satisfies the canonical quantization condition then the above two relationships reduce to

[A*A, A] = −A
[A*A, A*] = A*

These can also be expressed as

(A*A)A = A(A*A)−A = A(A*A−I)
(A*A)A* = A*(A*A)+A* = A*(A*A+I)

An Eigenvector for an Operator and its Eigenvalue

For an operator A, any nonzero element v of V such that

Av = αv

is called an eigenvector of A, and α is its eigenvalue.

The Notation of P.A.M. Dirac

Dirac had a brilliant idea for labeling vectors. Instead of using an arbitrary letter to denote a vector, he suggested that it should be labeled by its eigenvalue. Thus if v is an eigenvector of A with the eigenvalue β then v is denoted as |β>. The symbol |..> is part of what is known as Dirac's bra-ket notation.

Let an eigenvalue of A*A be denoted as α.
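These identities can be spot-checked numerically, using random complex matrices as stand-ins for operators on a finite-dimensional V (my own illustration, not part of the original text):

```python
import numpy as np

# Numeric spot-check of the commutator identities above with random
# complex matrices as operators on a finite-dimensional space.
rng = np.random.default_rng(0)

def rand_op(n=4):
    return rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

def comm(X, Y):
    return X @ Y - Y @ X

A, B, C = rand_op(), rand_op(), rand_op()

# Antisymmetry: [A, B] = -[B, A]
assert np.allclose(comm(A, B), -comm(B, A))
# The product rule derived in the text: [AB, C] = A[B, C] + [A, C]B
assert np.allclose(comm(A @ B, C), A @ comm(B, C) + comm(A, C) @ B)
print("identities hold")
```

The assertions pass for any choice of matrices, since the identities are pure algebra; the random matrices are just a convenient witness.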
Expressed in the Dirac notation this means

A*A|α> = α|α>

Not only is |α> an eigenfunction of A*A, but A|α> is also an eigenfunction of A*A, because

(A*A)A|α> = A(A*A−I)|α> = A{α|α> − |α>} = A(α−1)|α>
(A*A)(A|α>) = (α−1)(A|α>)

So (A|α>) is also an eigenfunction of (A*A), but with an eigenvalue that is 1 less than that of |α>. A reapplication of the above shows that A^n|α>, for n over some range, is an eigenfunction of A*A with (α−n) as its eigenvalue. Therefore the eigenfunction A^n|α> is denoted as |(α−n)>. Similarly A*|α> is an eigenfunction of A*A with an eigenvalue of (α+1), and therefore (A*)^n|α> is an eigenfunction of A*A with an eigenvalue of (α+n). It is represented as |(α+n)>.

Relationships between Eigenfunctions

Let |α> be an eigenfunction of A*A. Then A|α> and A*|α> are also eigenfunctions of A*A. Up to normalizing constants,

A|α> = c|(α−1)>
A*|α> = C|(α+1)>

so A steps down the ladder of eigenfunctions of A*A and A* steps up it.

The Integralness of the Eigenvalues of A*A

An eigenfunction of an operator cannot be the zero function. Therefore there must be an integer m such that A^m|α> is an eigenfunction of A*A but A^(m+1)|α> is not. This implies that (α−m) is equal to 0 and hence α is equal to that integer m. Therefore the eigenvalues of A*A are necessarily nonnegative integers.

What we found above is that the assumption of canonical quantization for the commutator is sufficient to assure the existence of the creation, annihilation and number operators without any reference to the physics of the particles.

A Physical System

A harmonic oscillator is a mass m attached to a spring of stiffness coefficient k. The deviation from equilibrium is denoted as x. The total energy E is

E = ½mv² + ½kx²

where v is velocity (dx/dt). The system oscillates sinusoidally at a frequency ω equal to (k/m)^½.
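The ladder argument can be illustrated with explicit matrices. Below is a sketch under the assumption that we truncate to an N-dimensional basis and take A to be the standard lowering matrix with A|n> = √n |n−1>; then A*A is diagonal with integer eigenvalues, and A lowers them by one, exactly as derived above.

```python
import numpy as np

# Truncated ladder-operator sketch: A|n> = sqrt(n) |n-1> in an
# N-dimensional basis, so A*A is the "number" operator diag(0, 1, ..., N-1).
N = 8
A = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering matrix
num = A.conj().T @ A                          # the operator A*A

alpha = 5
v = np.zeros(N); v[alpha] = 1.0               # eigenvector |alpha> of A*A
assert np.allclose(num @ v, alpha * v)

w = A @ v                                     # apply A: proportional to |alpha-1>
assert np.allclose(num @ w, (alpha - 1) * w)
print("A lowered the A*A eigenvalue from", alpha, "to", alpha - 1)
```

Repeating the application of A walks the eigenvalue down 5, 4, 3, ... until it reaches 0, after which A annihilates the vector, which is the finite-dimensional shadow of the integrality argument in the text.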
This means that the total energy may be expressed as

E = ½mv² + ½mω²x²

When this is expressed in terms of momentum p=mv the result is called the Hamiltonian function for the system; i.e.,

H = ½p²/m + ½mω²x²

Quantum Analysis

For the quantum theoretic analysis of the harmonic oscillator, p and x in the Hamiltonian function must be replaced by their operator representations. The operator representation of the deviation x is very simple; it is just multiplication by x. The momentum operator is

p^ = −ih(∂/∂x)

where i is the imaginary unit, the square root of −1, and h is Planck's constant divided by 2π. Thus the Hamiltonian operator H^ for a harmonic oscillator is

H^ = ½(p^)²/m + ½mω²(x^)² = −(h²/2m)(∂²/∂x²) + ½mω²x²

Let φ(x) denote the complex-valued function such that its squared magnitude |φ(x)|² is the probability density function at x. It is called the wave function, and its values are determined as a solution to the time-independent Schrödinger equation

H^φ(x) = Eφ(x)

where E is the total energy of the oscillator. This equation has solutions only for discrete values of E, namely E = (n+½)hω for nonnegative integers n. The integer n for the system is called its principal quantum number. This is the first quantization of a harmonic oscillator.

Second Quantization

Consider the commutator of p^ and x^ applied to a wave function φ:

[p^, x^]φ = p^(x^φ) − x^(p^φ) = −ih ∂(xφ)/∂x + ih x(∂φ/∂x) = −ihφ

and hence

[p^, x^] = −ih
[x^, p^] = ih

Now consider the two operators

α = γ(x^ + βp^)
α* = γ(x^ − βp^)

where β=i/(mω) and γ=(mω/(2h))^½. Now consider [α, α*]:

[α, α*] = [γ(x^ + βp^), γ(x^ − βp^)] = γ²[(x^ + βp^), (x^ − βp^)]

[(x^ + βp^), (x^ − βp^)] = [x^, (x^ − βp^)] + [βp^, (x^ − βp^)]
= [x^, x^] − β[x^, p^] + β[p^, x^] − β²[p^, p^]

But [x^, x^]=0, [p^, p^]=0 and [p^, x^]=−[x^, p^], so

[α, α*] = γ²(−2β)[x^, p^] = γ²(−2β)(ih)

Replacing β and γ by their defined values gives

[α, α*] = (mω/(2h))(−2i/(mω))(ih) = 1^

where 1^ is just the identity operator.
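The commutator [p^, x^] = −ih can also be checked numerically. This is a sketch with h (i.e., ħ) set to 1 and the momentum operator built from central finite differences; the grid and the Gaussian test function are arbitrary choices.

```python
import numpy as np

# Finite-difference check of [p, x] = -i*hbar (hbar = 1 here): build the
# momentum operator as -i d/dx on a grid and apply the commutator to a
# smooth test function that vanishes at the edges.
hbar = 1.0
n, L = 2001, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

D = (np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / (2 * h)
P = -1j * hbar * D            # momentum operator (central differences)
X = np.diag(x)                # position operator: multiplication by x

phi = np.exp(-x**2)           # smooth test function
comm_phi = P @ (X @ phi) - X @ (P @ phi)

# At interior points, [p, x]phi should equal -i*hbar*phi up to O(h^2):
err = np.max(np.abs(comm_phi[1:-1] - (-1j * hbar * phi[1:-1])))
print(err)
```

The residual is of order h², reflecting the finite-difference approximation; in the continuum limit the commutator acts exactly as multiplication by −ih.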
This means that α satisfies the canonical quantization condition, and therefore α is an annihilation operator, α* is a creation operator and α*α is a number (counting) operator. That is to say, according to the conventional presentation of second quantization, α* operating on a field increases the number of particles by one, α decreases the number of particles in a field by one, and α*α counts the number of particles in a field.

But what does the probability density function look like for a harmonic oscillator? Here is the solution for principal quantum number n equal to 30. The number of peaks of the probability density function is 2n+1. What an increase in n adds to the solution is roughly depicted below.

The standard second quantization is, in effect, calling a pair of peaks of the probability density function a particle, even though this does not correspond to a particle in the usual sense of the term. A photon, as a perturbation in an electromagnetic field, would fit the notion of particle as this term is used in second quantization. However, in general, the use of the term particle in second quantization analysis is misleading, very misleading. Peaks in probability density correspond to states which a particle passes through in its periodic path. When a physical system is analyzed, the eigenvalues correspond to energy quanta which may or may not have any correspondence to particles in the usual sense of that term. For example, consider a harmonic oscillator. Its energy is proportional to an integer n, called its principal quantum number. The number of peaks of the probability density function is 2n+1.
Schrodinger's Equation

#1 (Mar 5, 2006):
I was wondering if someone could explain to me why Schrodinger's equation has to be linear?

#2 (Mar 6, 2006):
It has to be linear so that linear combinations of solutions are also solutions of the Schrödinger equation. Quantum states as represented by state vectors (wave functions, in the position representation) can be superposed, i.e., written as linear combinations of vectors with complex coefficients. So the superposition principle requires the equation to be linear.

#3 (Mar 6, 2006):
I can't say anything intelligent about it because I've never studied it, but there IS a non-linear Schrodinger equation. You could probably Google it.
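The superposition point in post #2 can be illustrated numerically: build a discretized Hamiltonian, evolve two states and a linear combination of them, and check that the combination evolves term by term. This is a sketch with ħ = 1; the harmonic potential and the particular states are my own arbitrary choices.

```python
import numpy as np

# Linearity sketch: because the Schrödinger equation is linear, the
# time-evolution operator U = exp(-i H t) acts term by term on any
# linear combination of states (hbar = 1 throughout).
n, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]

# H = -(1/2) d^2/dx^2 + x^2/2  (harmonic oscillator, finite differences)
lap = (np.diag(np.ones(n - 1), 1) - 2 * np.eye(n)
       + np.diag(np.ones(n - 1), -1)) / h**2
H = -0.5 * lap + np.diag(0.5 * x**2)

# U = exp(-i H t) via the eigendecomposition of the (symmetric) H
t = 1.3
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * t)) @ V.conj().T

psi1 = np.exp(-x**2)                  # two arbitrary states
psi2 = x * np.exp(-x**2 / 2)
combo = 0.6 * psi1 + 0.8j * psi2

# Evolving the combination equals combining the evolved states:
err = np.max(np.abs(U @ combo - (0.6 * U @ psi1 + 0.8j * U @ psi2)))
print(err)   # round-off level
```

A nonlinear equation would break this equality, which is why superposition forces linearity.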
Franck Laloë
Do We Really Understand Quantum Mechanics?

Franck Laloë, Do We Really Understand Quantum Mechanics?, Cambridge University Press, 2012, 406pp., $75.00 (hbk), ISBN 9781107025011.

Reviewed by Valia Allori, Northern Illinois University

Do we really need another book on quantum mechanics? Quantum mechanics is one of the greatest scientific achievements, and yet it is still controversial how we should interpret it and what conclusions we are entitled to draw from it. Because of this, it has inevitably attracted many writers, both physicists and philosophers. So, hasn't enough ink been spilled about the subject already? Who needs more? Surprisingly enough, Franck Laloë has shown us that there is still room for another valuable contribution. Laloë provides us with a great addition to the abundant (sometimes even redundant) literature. On the one hand we find technical books (for instance Sakurai, or Messiah), while on the other we have introductory books (Albert, Maudlin, and Ghirardi). The former kind of book is full of mathematical details and focused on formal results, and so of great value for students needing to learn how to do calculations, solve problems and perform experiments. But such a book is totally useless if one wishes to get a realist picture of the quantum world, since it totally ignores the conceptual problems that plague the foundations of quantum mechanics. In contrast, the latter kind of book usually has the aim of introducing the subject to philosophy students, who usually are interested in understanding rather than computing. As a result, this kind of book often disregards technicalities and focuses primarily on the problems of interpreting the formalism and on the possible realist interpretations of the theory.
While this approach is extremely valuable in making the conceptual problems crystal clear, its danger is that the philosopher, when confronted with particularly challenging technical material, might still be unable to get around it and arrive at the correct conclusions, being overwhelmed by the mathematical detail. So, what seems to be missing, indeed, is a book that combines these two extremes: a book that cares about conceptual issues but at the same time provides enough mathematical details to enable the reader to understand and judge for herself even the more densely technical material. Laloë's book (at least partially) fills this gap. His goal is explicitly to understand the foundations of quantum theory, which also, by his admission, is something that has been neglected in the majority of traditional physics books on the topic. Not having a clear grasp of its foundations, writes Laloë, makes quantum mechanics a "colossus with feet of clay" (p. xi): how can we properly understand the theory and its implications if we do not understand what it is grounded on? This admission, made by a physicist, is rare and frankly refreshing. While the importance of conceptual issues has been underlined by philosophers all along, physicists have always given them a smug look. Laloë's book seems to show that finally the situation has started to change. The book has eleven chapters and several appendices, through which the author goes from the history of quantum mechanics to its interpretations, passing through the Schrödinger cat, the Einstein-Podolsky-Rosen (EPR) paradox and Bell's theorem; quantum entanglement and its applications; and a solid mathematical introduction.
More in detail, chapter 1 discusses the history of quantum mechanics with accuracy and balance, from the "prehistory" (Planck's oscillators, Bohr's atomic model, Heisenberg's matrix mechanics), to the "undulatory period" (the contributions of de Broglie, Debye and Schrödinger), to the emergence of the Copenhagen interpretation (the developments of Born, Bohr, Heisenberg, Jordan and Dirac). Particular importance is given to the role and status of the state vector. In this regard, two extreme positions are analyzed: first, the view that the state vector describes the physical properties of a system, and second the view that the state vector represents just the information that an observer has about the system. Laloë rejects both views, calling them "two opposite mistakes" (p. 13). While his reasons for rejecting the latter view are the traditional ones and therefore not controversial, in my opinion his reasons for rejecting the former are too hasty. In fact the first option is dismissed right away by the author as follows: "the difficulties introduced by this view are now so well-known . . . that nowadays few physicists seem to be tempted to support it" (p. 13). Also: "it didn't take long before it became clear that the completely undulatory theory of matter also suffered of serious difficulties, actually so serious that physicists were soon led to abandon it." The main problem identified by the author for this view is that in a many-particle system the state vector would live in configuration space, and this makes it, in the opinion of the author, obviously the wrong candidate to represent matter. While I happen to agree with Laloë's conclusion, the issue cannot be dismissed that quickly. 
There is an ongoing debate within the philosophy of physics community exactly about whether it is possible, if not even advisable, to regard quantum mechanics as a theory about the wave function, intended as a material field on configuration space, and the issue is far from having been settled. Be that as it may, chapter 2 correctly and exhaustively discusses the fundamental conceptual difficulties of quantum theory (from the Schrödinger cat, to Wigner's friend, to the role of decoherence), while chapter 3 is a very informative presentation of the EPR "paradox." The author uses many nice illustrative examples to clarify the main premises, the logic and the conclusion of the EPR argument, making the chapter an incredible resource for both physicists and philosophers. Chapters 4 and 5 are dedicated to Bell's theorem and nonlocality. The premises of the theorem and the logic of the argument are made extremely clear and straightforward, something rarely found in the literature. The theorem is discussed in its many formulations -- from Bell's original 1964 theorem, to the Bell-Clauser-Horne-Shimony-Holt (BCHSH) inequalities, to the formulations of Wigner, Mermin, Greenberger-Horne-Zeilinger (GHZ), Cabello, Hardy, Bell-Kochen-Specker -- and Laloë correctly notes that what is at stake is locality and not determinism. The discussion is so complete that the author even covers some of the attempts to bypass Bell's conclusion that reality is nonlocal, discussing the so-called "free will," "counterfactuality" and "contextuality" assumptions. Chapter 6 is devoted entirely to the purely quantum property of entanglement, including a discussion of the origin of quantum correlations. This is a more technical chapter, in which master equations and density operators are introduced, including a discussion of the evolution of subsystems.
The following chapter is dedicated to the many applications of entanglement: from quantum cryptography to teleportation, from quantum computation, quantum gates and quantum algorithms to quantum error correction codes. Chapter 8 returns to a more technical and formal presentation, discussing the notions of "quantum measurement": from the notions of direct and indirect measurement to those of weak and continuous measurement. The book then presents experiments to illustrate typical quantum properties, such as quantum reduction in real time, a single ion or electron in a trap, the number of photons in a cavity and the spontaneous phase of Bose-Einstein condensates. Finally chapter 10 addresses the issue of the "interpretations," as Laloë (like many others) calls them. He starts off with the (commonsensical but vague) pragmatist approach, continues with the statistical interpretation (the view that the quantum description only applies to ensembles), moves to Rovelli's relational interpretation (the state vector depends on the observer), then to the algebraic approaches and quantum logic, and on to d'Espagnat's veiled reality interpretation (according to which reality is only marginally accessible to us). After a comprehensive and detailed analysis of hidden variables theories like Bohmian and Nelson mechanics, the book continues with a discussion of the modal interpretations, the modified Schrödinger dynamics of Ghirardi, Rimini and Weber (GRW) and of Pearle, Cramer's transactional interpretation (a view which in certain respects reminds one of the Wheeler-Feynman electromagnetic absorber theory), the history interpretations, and concludes with the Everett interpretation. The book closes with a comprehensive and clear mathematical review of the various elements of quantum mechanics. The book is so dense and full of interesting ideas that many comments could be made. In this review, though, I wish to primarily discuss the main strategy the author uses.
Laloë wants to provide a balanced and comprehensive view of the foundations of quantum mechanics; he does not want to give preference to one interpretation or another, or to one strategy over another, but rather he wishes to analyze how each of them relates to the others and what their mutual relations and differences are. In fact he writes that his book provides a "balanced view of the conceptual situation" of quantum mechanics (p. xiv). Many would regard this attitude as a strength of the book, since it provides the reader with all the relevant information and tools to decide the issue for herself without being influenced by the opinion of the author. In contrast, I think this strategy may be the weakest trait of the book; this may just be a question of personal taste, but I always find that "neutral" books like this one leave something to be desired. Assuming that I have the means to understand and judge the material independently (which this book is able to provide), I'd rather read a heavily opinionated and provocative book than one in which all the options are stated with an impartial and unbiased attitude. An author who expresses his own opinion usually ends up being more convincing than one who merely states the possible alternatives. Laloë's book seems no exception to this rule. For instance, when the discussion focuses on the attempts to bypass the conclusion of Bell's theorem, Laloë's treatment of the strategies based on the free will assumption, contextuality and counterfactuality is maybe more charitable than needed and sounds a little artificial. More generally, how can one possibly address all the foundational issues correctly without taking a stand? In other words, taking a particular view to be the case will have consequences: in particular, one view will lead to certain problems, another view to others.
For instance, if one thinks that Bohr's theory is correct, then the notion of measurement will be a crucial part of quantum mechanics. But if instead another "interpretation" is believed to be the correct one, the importance and the role of measurement in this theory will be fundamentally different than in the previous one. How can one discuss, say, the notion of measurement in quantum mechanics in general? When in chapter 8 Laloë claims that the notion of measurement is important in quantum mechanics, what does he have in mind? How can we decide what is being measured if we don't already have a clear idea what the ontology of the theory is? As Einstein once reminded us, it is the theory that decides what is being measured. In other words, how can we make a theory of measurement without a clear understanding of the "interpretation" of the formalism? Therefore, I find it particularly odd that the chapter on interpretations is left to the end of the book. Indeed, I find it misleading that these alternatives are actually called "interpretations"; each of them provides a distinctive picture of reality, and because of this each of them should be called a "theory" instead of an "interpretation." Apart from the terminological point, it seems to me that only once a theory is chosen can one then discuss what concepts are relevant and what problems need to be addressed within the context of that theory. If we leave the question of ontology to the end, one may be led to believe that all theories have the same problems simply because they have the same -- or similar -- mathematics (the state vector, the Schrödinger equation and so on), and this would be a mistake. So, to conclude, while I find Laloë's book to be an extremely valuable contribution to the foundations of quantum mechanics for its completeness and clarity of exposition, I still find unsatisfactory its lack of an opinionated discussion and evaluation of the different alternatives.
Wednesday, September 08, 2010

The Narrow Cosmic Performance Envelope

The Cosmos must have a very particular performance envelope if evolution is going to get anywhere very fast (i.e. 0 to Life in a mere 15 billion years). Brian Charlwood has posted a comment on my blog post Not a Lot of People Know That. As it's difficult to work with those narrow comment columns, I thought I would put my reply here. Brian's comments are in italics.

You say //So evolution is not a fluke process as it has to be resourced by probabilistic biases.// so it is either a deterministic system or it is a random system.

I am not happy with this determinism vs. randomness dichotomy. To appreciate this, consider the tossing of a coin. The average coin gives a random configuration of heads/tails with a fifty/fifty mix. But imagine some kind of "tossing" system where the mix was skewed in favour of heads. In fact, imagine that on average tails only turned up once a year. This system is much closer to a "deterministic" system than it is to the maximally random system of a 50/50 mix. To my mind the lesson here is that the apparent dichotomy of randomness vs. determinism does no justice to what is in fact a continuum.

A deterministic system requires two ingredients: 1/ A state space 2/ An updating rule. For example, a pendulum has as a state space all possible positions of the pendulum, and as updating rules the laws of Newton (gravity, F=ma) which tell you how to go from one state to another, for instance from the pendulum in the lowest position to the pendulum in the highest position on the left.

Fine, I'm not averse to that neat way of modeling general deterministic systems as they develop in time, but for myself I've scrapped the notion of time. I think of applied mathematics as a set of algorithms for embodying descriptive information about the "timeless" structure of systems.
This is partly a result of an acquaintance with relativity, which makes the notion of a strict temporal sequencing across the vastness of space problematical. Also, don't forget that these mathematical systems can also be used to make "predictions" about the past (or post-dictions), a fact which also suggests that mathematical models are "information" bearing descriptive objects rather than being what I can only best refer to here as "deeply causative ontologies".

A random system is a bit more intricate. It can be built up with 1/ A state space 2/ An updating rule. Huh? Looks the same. Yeah, but now it is probabilities that the rule updates. Contrary to deterministic systems, the updating rule does not tell us what the next state is going to look like given a previous state; it only tells us how to update the probability of a certain state. Actually, that is only one possible kind of random system; one could also build updating rules which are themselves random. So you have a lot of possibilities: on the level of probabilities, a random system can look like a deterministic system, but it is really only predicting probabilities. It can also be random on the level of probabilities, requiring a kind of meta-probabilistic description.

If I understand you right then the Schrödinger equation is an example of a system that updates probabilities deterministically. The meta-probabilistic description you talk of is, I think, mathematically equivalent to conditional probabilities. This comes up in the random walk, where steps to the left or right by a given distance are assigned probabilities. But conceivably step sizes could also vary in a probabilistic way, thus superimposing probabilities on probabilities, i.e. conditional probabilities. In the random walk scenario the fascinating upshot of this intricacy is that it has no effect on the general probability distribution as it develops in space.
(See the “central limit theorem”.)

Anyway, these are technical details, but let's look at what happens when we have a deterministic system and we introduce the slightest bit of randomness. Take again the pendulum. What might happen is that we don't know the initial state with certainty; the result is that you still have a deterministic updating rule, but you can now only predict how the probability of having a certain state will evolve. Now, this is still a deterministic system; the probability only creeps in because we have no knowledge of the initial state. But suppose the pendulum was driven by a genuine random system. Say that the initial state of the pendulum is chosen by looking at the state of a radioactive atom. If the atom decayed in a certain time interval, we let the pendulum start on the left; if not, on the right. The pendulum as such is still a deterministic system. But because we have coupled it to a random system, the system as a whole becomes random. This randomness would be irreducible.

This would classify as one of those systems on the deterministic/random spectrum. The mathematics of classical mechanics means that not just any old behavior is open to the pendulum system, and therefore it is not maximally random; the system is constrained by classical mechanics to behave within certain limits. The uncertainty in initial conditions, when combined with the mathematical constraint of classical mechanics, would produce a system that behaves randomly only within a limited envelope of randomness. The important point to note is that it is an envelope, that is, an object with limits, albeit fuzzy limits like a cloud. Limits imply order. Thus, we have here a system that is a blend of order and randomness; think back to that coin tossing system where tails turned up randomly but very infrequently.
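The random-walk remark above (that superimposing probabilities on probabilities leaves the spreading distribution essentially unchanged) can be checked numerically. This sketch is my own illustration of the central limit theorem at work, not from the original exchange; the specific step-size distribution is an arbitrary assumption:

```python
import random
import statistics

def final_position(n_steps, rng, random_step_size=False):
    """One walk: each step goes left or right with probability 1/2;
    optionally the step length is itself drawn at random
    (probabilities superimposed on probabilities)."""
    pos = 0.0
    for _ in range(n_steps):
        length = rng.uniform(0.5, 1.5) if random_step_size else 1.0
        pos += length if rng.random() < 0.5 else -length
    return pos

def spread(random_step_size, n_walks=2000, n_steps=400, seed=1):
    """Standard deviation of final positions over many walks."""
    rng = random.Random(seed)
    finals = [final_position(n_steps, rng, random_step_size)
              for _ in range(n_walks)]
    return statistics.pstdev(finals)

# Both spreads grow like sqrt(n_steps); randomizing the step sizes
# only rescales the width slightly -- the distribution of final
# positions stays near-Gaussian (central limit theorem).
print(round(spread(False), 1), round(spread(True), 1))
```

With 400 unit steps the spread is close to sqrt(400) = 20; making the step lengths themselves random nudges the width but not the Gaussian shape, which is the point of the remark.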
So, if you want to say that there is a part of evolution that is random, the consequence is that the whole of it is random and therefore it is all one big undesigned fluke.

No, I don't believe we can yet go this far. Your randomly perturbed pendulum provides a useful metaphor: relative to the entire space of possibility the pendulum’s behavior is highly organized, its degrees of freedom very limited. Here, once again, the probabilities are concentrated in a relatively narrow envelope of behavior, just as they must be in any working evolutionary system – unless, of course, one invokes some kind of multiverse, which is one (speculative) way of attempting to maintain the “It’s just one big fluke” theory. Otherwise, just how we ended up with a universe that has a narrow probability envelope (i.e. an ordered universe) is, needless to say, the big contention that gets people hot under the collar.
Tuesday, April 29, 2008

Pop goes the housing bubble

Figure via Calculated Risk. Larger version here.

I believe the outlines of the bust are becoming as visible as the bubble itself was to any astute observer a few years ago. But no bottom yet! If I had to guess, I'd say we are going to give back most of the integral over the curve from 1997 to 2007 or so, net of overall inflation during that period (say 20-30%). In other words, extend the blue trend line beyond the early nineties, and integrate your favorite curve minus this trend line from 1997-2007 to get the overvaluation. (Note the graph is of year over year price changes in nominal dollars, not absolute price.) Or, just look at the figure below to see that prices might have to drop 30-40% to return to consistency with the long term trend. As we've discussed before, house price increases tend to be quite modest if measured in real dollars. (Except, of course, for bubbles and special cases :-)

The Case-Shiller national index will probably be off close to 12% YoY (will be released in late May). Currently (as of Q4) the national index is off 10.1% from the peak. The composite 10 index (10 large cities) is off 13.6% YoY (15.8% from peak). The composite 20 index is off 12.7% YoY (14.8% from peak).

A poem for the West

To get a feel for the reaction of average Chinese towards criticism from the West, read the excerpt below from a poem now circulating on the internet. (See also video version at bottom.) If you are not familiar with some of the historical references, you might ask someone like Howard Zinn to explain them to you.

by Anonymous

...When we closed our doors,
You smuggled drugs to open markets.
When we tried to put the broken pieces back together again,
Free Tibet you screamed, It was an Invasion!
When we tried Communism,
You hated us for being Communist.
When we embrace Capitalism,
You hate us for being Capitalist.
When we had a billion people,
You said we were destroying the planet.
When we tried limiting our numbers,
You said we abused human rights.
When we were poor,
You thought we were dogs.
When we loan you cash,
You blame us for your national debts.
When we build our industries,
You call us Polluters.
When we buy oil,
You call it exploitation and genocide.
When you go to war for oil,
You call it liberation.
...

This NYTimes article and this Time magazine blog post are good examples of how poorly the Chinese worldview is understood here.

The poem was erroneously attributed to Dou-Liang Lin, an emeritus professor of physics at SUNY Buffalo. Professor Lin writes that he is not the author and doesn't know who is.

On 4/25/2008 at 7:56 AM, Duoliang Lin wrote:

Dear Friends, Thank you for your enthusiastic praise and support. Several of you have asked for my authorization for translation into Chinese and/or reprinting. Since this was an anonymous poem circulating in the email, I suppose that the author would not mind to be quoted, translated or reprinted. But I was not the author of the poem. Please see below.

This is to clarify that the poem circulated in the email recently was not my work. I received it via email last week. There was no author shown. I read it with great interest and was impressed very much. I then decided to share it with my friends through my email network. Apparently some of them forwarded it to their friends, and in a few days, it has reached a large number of readers. Because my email is set with a signature block, some of the recipients assumed that I was the author. This is a misunderstanding and I should not be credited for its success. I appreciate compliments from many within the last few days, but I must say that I am not the one to be credited. I am trying to trace back the email routes to see if I can find the original author.

I was informed today that it was also quoted in Wall Street Journal: There has been a poem by an anonymous author circulating in the internet recently.
I feel relieved because I was not cited as the author. Thank you for your attention.

Here is a nice observation, originally due to Henry Kissinger, which appeared in the comments below:

...America needs to understand that a hectoring tone evokes in China memories of imperialist condescension and is not appropriate in dealing with a country that has managed 4,000 years of uninterrupted self-government.

Sunday, April 27, 2008

Soros tells it like it is

The Financial Crisis: An Interview with George Soros with Judy Woodruff on Bloomberg TV. According to his estimates (see below), housing starts would have to go to zero (he gives a normal value of 600k per annum, but I think the correct number is about twice that) for several years to compensate for the inventory that will flood the market due to foreclosures. So, no recovery in the near future, and continued pressure on the dollar. He also has some nice things to say about academic economics :-)

Each time, it's the authorities that bail out the market, or organize companies to do so. So the regulators have precedents they should be aware of. But somehow this idea that markets tend to equilibrium and that deviations are random has gained acceptance and all of these fancy instruments for investment have been built on them. There are now, for example, complex forms of investment such as credit-default swaps that make it possible for investors to bet on the possibility that companies will default on repaying loans. Such bets on credit defaults now make up a $45 trillion market that is entirely unregulated. It amounts to more than five times the total of the US government bond market. The large potential risks of such investments are not being acknowledged.

Woodruff: How can so many smart people not realize this?

Soros: In my new book I put forward a general theory of reflexivity, emphasizing how important misconceptions are in shaping history.
So it's not really unusual; it's just that we don't recognize the misconceptions.

Woodruff: Who could have? You said it would have been avoidable if people had understood what's wrong with the current system. Who should have recognized that?

Soros: The authorities, the regulators—the Federal Reserve and the Treasury—really failed to see what was happening. One Fed governor, Edward Gramlich, warned of a coming crisis in subprime mortgages in a speech published in 2004 and a book published in 2007, among other statements. So a number of people could see it coming. And somehow, the authorities didn't want to see it coming. So it came as a surprise.

Woodruff: The chairman of the Fed, Mr. Bernanke? His predecessor, Mr. Greenspan?

Soros: All of the above. But I don't hold them personally responsible because you have a whole establishment involved. The economics profession has developed theories of "random walks" and "rational expectations" that are supposed to account for market movements. That's what you learn in college. Now, when you come into the market, you tend to forget it because you realize that that's not how the markets work. But nevertheless, it's in some way the basis of your thinking.

Woodruff: How much worse do you anticipate things will get?

Soros: Well, you see, as my theory argues, you can't make any unconditional predictions because it very much depends on how the authorities are going to respond now to the situation. But the situation is definitely much worse than is currently recognized. You have had a general disruption of the financial markets, much more pervasive than any we have had so far. And on top of it, you have the housing crisis, which is likely to get a lot worse than currently anticipated because markets do overshoot. They overshot on the upside and now they are going to overshoot on the downside.

Woodruff: You say the housing crisis is going to get much worse.
Do you anticipate something like the government setting up an agency or a trust corporation to buy these mortgages?

Soros: I'm sure that it will be necessary to arrest the decline because the decline, I think, will be much faster and much deeper than currently anticipated. In February, the rate of decline in housing prices was 25 percent per annum, so it's accelerating. Now, foreclosures are going to add to the supply of housing a very large number of properties because the annual rate of new houses built is about 600,000. There are about six million subprime mortgages outstanding, 40 percent of which will likely go into default in the next two years. And then you have the adjustable-rate mortgages and other flexible loans. Problems with such adjustable-rate mortgages are going to be of about the same magnitude as with subprime mortgages. So you'll have maybe five million more defaults facing you over the next several years. Now, it takes time before a foreclosure actually is completed. So right now you have perhaps no more than 10,000 to 20,000 houses coming into the supply on the market. But that's going to build up. So the idea that somehow in the second half of this year the economy is going to improve I find totally unbelievable.

Woodruff: When you talk about currency you have more than a little expertise. You were described as the man who broke the Bank of England back in the 1990s. But what is your sense of where the dollar is going? We've seen it declining. Do you think the central banks are going to have to step in?

Soros: Well, we are close to a tipping point where, in my view, the willingness of banks and countries to hold dollars is definitely impaired. But there is no suitable alternative so central banks are diversifying into other currencies; but there is a general flight from these currencies.
So the countries with big surpluses—Abu Dhabi, China, Norway, and Saudi Arabia, for example—have all set up sovereign wealth funds, state-owned investment funds held by central banks that aim to diversify their assets from monetary assets to real assets. That's one of the major developments currently and those sovereign wealth funds are growing. They're already equal in size to all of the hedge funds in the world combined. Of course, they don't use their capital as intensively as hedge funds, but they are going to grow to about five times the size of hedge funds in the next twenty years.

Are you Gork?

Slide from this talk. Survey questions:

2) If not, why? e.g., I have a soul and Gork doesn't

Decoherence solved all that! See previous post.

Wednesday, April 23, 2008

Feynman and Everett

A couple of years ago I gave a talk at the Institute for Quantum Information at Caltech about the origin of probability -- i.e., the Born rule -- in many worlds ("no collapse") quantum mechanics. It is often claimed that the Born rule is a consequence of many worlds -- that it can be derived from, and is a prediction of, the no collapse assumption. However, this is only true in a particular limit of infinite numbers of degrees of freedom -- it is problematic when only a finite number of degrees of freedom are considered.

Today I noticed a fascinating paper on the arXiv posted by H.D. Zeh, one of the developers of the theory of decoherence:

Feynman's quantum theory
H. D. Zeh
(Submitted on 21 Apr 2008)

A historically important but little known debate regarding the necessity and meaning of macroscopic superpositions, in particular those containing different gravitational fields, is discussed from a modern perspective.

The discussion analyzed by Zeh, concerning whether the gravitational field need be quantized, took place at a relativity meeting at the University of North Carolina in Chapel Hill in 1957.
Feynman presents a thought experiment in which a macroscopic mass (source for the gravitational field) is placed in a superposition state. One of the central points is necessarily whether the wavefunction describing the macroscopic system must collapse, and if so exactly when. The discussion sheds some light on Feynman's (early) thoughts on many worlds and his exposure to Everett's ideas, which apparently occurred even before their publication (see below).

Nowadays no one doubts that large and complex systems can be placed in superposition states. This capability is at the heart of quantum computing. Nevertheless, few have thought through the implications for the necessity of the "collapse" of the wavefunction describing, e.g., our universe as a whole. I often hear statements like "decoherence solved the problem of wavefunction collapse". I believe that Zeh would agree with me that decoherence is merely the mechanism by which the different Everett worlds lose contact with each other! (And, clearly, this was already understood by Everett to some degree.) Incidentally, if you read the whole paper you can see how confused people -- including Feynman -- were about the nature of irreversibility, and the difference between effective (statistical) irreversibility and true (quantum) irreversibility.

Zeh: ... Quantum gravity, which was the subject of the discussion, appears here only as a secondary consequence of the assumed absence of a collapse, while the first one is that "interference" (superpositions) must always be maintained. ...

Because of Feynman's last sentence it is remarkable that neither John Wheeler nor Bryce DeWitt, who were probably both in the audience, stood up at this point to mention Everett, whose paper was in press at the time of the conference because of their support [14]. Feynman himself must have known it already, as he refers to Everett's "universal wave function" in Session 9 – see below. ...
Toward the end of the conference (in the Closing Session 9), Cecile DeWitt mentioned that there exists another proposal that there is one "universal wave function". This function has already been discussed by Everett, and it might be easier to look for this "universal wave function" than to look for all the propagators. Feynman said that the concept of a "universal wave function" has serious conceptual difficulties. This is so since this function must contain amplitudes for all possible worlds depending on all quantum-mechanical possibilities in the past and thus one is forced to believe in the equal reality [sic!] of an infinity of possible worlds.

Well said! Reality is conceptually difficult, and it seems to go beyond what we are able to observe. But he is not ready to draw this ultimate conclusion from the superposition principle that he always defended during the discussion. Why should a superposition not be maintained when it involves an observer? Why “is” there not an amplitude for me (or you) observing this and an amplitude for me (or you) observing that in a quantum measurement – just as it would be required by the Schrödinger equation for a gravitational field? Quantum amplitudes represent more than just probabilities – recall Feynman’s reply to Bondi’s first remark in the quoted discussion. However, in both cases (a gravitational field or an observer) the two macroscopically different states would be irreversibly correlated to different environmental states (possibly including you or me, respectively), and are thus not able to interfere with one another. They form dynamically separate “worlds” in this entangled quantum state. ...

Feynman then gave a resume of the conference, adding some "critical comments", from which I here quote only one sentence addressed to mathematical physicists:

Feynman: "Don't be so rigorous or you will not succeed." (He explains in detail how he means it.)
It is indeed a big question what mathematically rigorous theories can tell us about reality if the axioms they require are not, or not exactly, empirically founded, and in particular if they do not even contain the most general axiom of quantum theory: the superposition principle. It was the important lesson from decoherence theory that this principle holds even where it does not seem to hold. However, many modern field theorists and cosmologists seem to regard quantization as of secondary or merely technical importance (just providing certain "quantum corrections") for their endeavours, which are essentially performed by using classical terms (such as classical fields). It is then not surprising that the measurement problem never comes up for them. How can anybody do quantum field theory or cosmology at all nowadays without first stating clearly whether he/she is using Everett’s interpretation or some kind of collapse mechanism (or something even more speculative)?

Previous posts on many worlds quantum mechanics.

Tuesday, April 22, 2008

Deep inside the subprime crisis

Moody's walks Roger Lowenstein (writing for the Times Sunday magazine) through the construction, rating and demise of a pool of subprime mortgage securities. Some readers may have thought the IMF was exaggerating when it forecast up to $1 trillion in future losses from the credit bubble. After reading the following you will see that it's not an implausible number, and it will be clear why the system is paralyzed in dealing with (marking to market) the complicated securities (CDOs, etc.) that are contaminating the balance sheets of banks, investment banks, hedge funds, pension funds, sovereign wealth funds, etc. around the world.

Here's a quick physicist's calculation: roughly 10 million houses sold per year; assume that 10% of these mortgages are bad and will cost the issuer $100k to foreclose and settle. That means $100B per year in losses.
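The back-of-envelope calculation above is just three factors multiplied together; spelled out as a trivial sketch (the round numbers are the post's own, the variable names are mine):

```python
# Back-of-envelope residential loss estimate, using the post's figures.
homes_sold_per_year = 10_000_000   # roughly 10 million houses sold per year
bad_fraction = 0.10                # assume 10% of those mortgages go bad
loss_per_foreclosure = 100_000     # ~$100k to foreclose and settle

annual_loss = homes_sold_per_year * bad_fraction * loss_per_foreclosure
print(f"${annual_loss / 1e9:.0f}B per year in losses")  # $100B per year in losses
```

Running it over a three-to-five-year bubble reproduces the $300-500B residential total quoted next.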
Over the whole bubble, perhaps $300-500B in losses, which is more or less what the IMF estimates as the residential component of credit bubble losses (the rest of the trillion comes from commercial and corporate lending and consumer credit).

The internet bubble, with irrational investors buying shares of pet food e-commerce companies, was crazy. Read the excerpts below and you'll see that our recent housing boom was even crazier and at an unimaginably larger scale. (Note similar bubbles in the UK, Spain and in China.)

The best predictor, going forward, of mortgage default rates (not just subprime, but even prime mortgages) in a particular region will likely be the decline in home prices in that region. The incentive for a borrower to default on his or her mortgage is the amount by which they are "upside down" on the loan -- the amount by which their indebtedness exceeds the value of the home. Since we can't forecast price declines very well -- indeed, it's a nonlinear problem, with more defaults leading to more price declines, leading to more defaults -- we can't price the derivative securities built from those mortgages. Efficient markets! ;-)

The figure above compares Case-Shiller data on the current bust (magenta) to the bust of the 80s-90s (blue). (Click for larger version.) You can see we have some way to go before all the fun ends.

Wall Street (Oliver Stone):

Monday, April 21, 2008

Returns to elite education

In an earlier post I discussed a survey of honors college students here at U Oregon, which revealed that very few had a good understanding of elite career choices outside of the traditional ones (law, medicine, engineering, etc.). It's interesting that, in the past, elite education did not result in greater average earnings once SAT scores are controlled for (see below). But I doubt that will continue to be the case today: almost half the graduating class at Harvard now head into finance, while the top Oregon students don't know what a hedge fund is.
NYTimes: ...Recent research also suggests that lower-income students benefit more from an elite education than other students do. Two economists, Alan B. Krueger and Stacy Berg Dale, studied the earnings of college graduates and found that for most, the selectivity of their alma maters had little effect on their incomes once other factors, like SAT scores, were taken into account. To use a hypothetical example, a graduate of North Carolina State who scored a 1200 on the SAT makes as much, on average, as a Duke graduate with a 1200. But there was an exception: poor students. Even controlling for test scores, they made more money if they went to elite colleges. They evidently gained something like closer contact with professors, exposure to new kinds of jobs or connections that they couldn’t get elsewhere. “Low-income children,” says Mr. Krueger, a Princeton professor, “gain the most from going to an elite school.”

I predict that, in the future, the returns to elite education for the middle and even upper middle class will resemble those in the past for poor students. Elite education will provide the exposure to new kinds of jobs or connections that they couldn't get elsewhere. Hint: this means the USA is less and less a true meritocracy.

It's also interesting how powerful the SAT (which correlates quite strongly with IQ, which can be roughly measured in a 12 minute test) is in predicting life outcomes: knowing that a public university grad scored 99th percentile on the SAT (or brief IQ test) tells you his or her expected income is equal to that of a Harvard grad (at least that was true in the past). I wonder why employers (other than the US military) aren't allowed to use IQ to screen employees? ;-) I'm not an attorney, but I believe that when DE Shaw or Google ask a prospective employee to supply their SAT score, they may be in violation of the law.

Saturday, April 19, 2008

Trading on testosterone

Take with boulder-sized grain of salt. Cause and effect?
Only an eight day interval? Couldn't that have been an exceptional period over which aggressiveness paid off?

NYTimes: MOVEMENTS in financial markets are correlated to the levels of hormones in the bodies of male traders, according to a study by two researchers from the University of Cambridge (newscientist.com). John Coates, a research fellow in neuroscience and finance, and Joe Herbert, a professor of neuroscience, sampled the saliva of 17 traders on a stock trading floor in London two times a day for eight days. They matched the men’s levels of testosterone and cortisol with the amounts of money the men lost or won on the markets. Men with elevated levels of testosterone, a hormone associated with aggression, made more money. When the markets were more volatile, the men showed higher levels of cortisol, considered a “stress hormone.” But, as New Scientist asked, “which is the cause and which is the effect?”

According to the researchers’ analysis, the men who began their workdays with high levels of testosterone did better than those who did not. “The popular view is that experienced traders can control their emotions,” Mr. Coates told New Scientist. “But, in fact, their endocrine systems are on fire.”

As with anything else, when it comes to hormones, it is possible to have (from a trader’s perspective) too much of a good thing. Excessive testosterone levels can lead a trader to make irrational decisions. New Scientist pointed out that although cortisol can help people make more rational decisions during volatile trading periods, too much of it can lead to serious health problems like heart disease and arthritis, and, over time, diminish brain functions like memory.

If individual traders are affected by their hormone levels, does the same hold true for the markets as a whole? After all, the market is nothing more than an aggregate of the individual actions of traders. Mr.
Coates thinks it is possible that “bubbles and crashes are coming from these steroids,” according to New Scientist. If so, “central banks may lower interest rates only to find that traders still refuse to buy risky assets.” Perhaps, he told New Scientist, “if more women and older men were trading, the markets would be more stable.”

Wednesday, April 16, 2008

Crossfit: cult or ultimate training?

Having played a lot of sports and done a lot of physical training, it's not often that I see something in the gym that shocks me. But recently I came across the Crossfit training system. It's based around short, hyper-intense workouts using basic bodyweight gymnastic moves (pushups, pullups, burpees, rope climbing), olympic and power lifts (cleans, jerks, presses, squats) and track sprints and rowing. The goal is to engage the large muscle groups and push them to both anaerobic and aerobic failure at the same time. For experienced athletes, the idea of using olympic lifts for cardiovascular stress training seems over the top, but anyone who can survive this is going to get very, very fit.

The founder of Crossfit, former gymnast Greg Glassman, is the guru behind this movement. He rails against bodybuilders who lack functional strength, and runners, cyclists and triathletes who are so specialized that they lack overall athleticism. (He doesn't have any bad words for ultimate fighters, though, some of whom use his system :-) The point I think Glassman overlooks is that the traditional training methods are meant to minimize injury and allow regular performance by an average person. It's telling that Glassman, 49, doesn't Crossfit train anymore. (See this NYTimes profile from a few years ago; the followup reader discussion is very good.)

If you have any athletic background at all (endurance training doesn't count -- it's gotta be something with a little explosiveness and testosterone ;-), watch the videos and tell me you are not freaked out.
More video:

Uneven Grace mov wmv (check out the women doing 30 clean and jerks with 85lbs in 5-7 minutes!)

GI Jane mov wmv (pushup, burpee, pullup -- basic, but so brutal. Greg Amundson is a badass!)

Interview: Coach Greg Glassman

CFJ: What’s wrong with fitness training today?

Coach Glassman: The popular media, commercial gyms, and general public hold great interest in endurance performance. Triathletes and winners of the Tour de France are held as paradigms of fitness. Well, triathletes and their long distance ilk are specialists in the world of fitness, and the forces of combat and nature do not favor the performance model they embrace. The sport of competitive cycling is full of amazing people doing amazing things, but they cannot do what we do. They are not prepared for the challenges that our athletes are. The bodybuilding model of isolation movements combined with insignificant metabolic conditioning similarly needs to be replaced with a strength and conditioning model that contains more complex functional movements with a potent systemic stimulus. Sound familiar? Senior citizens and U.S. Marine Combatant Divers will most benefit from a program built entirely from functional movement.

CFJ: What about aerobic conditioning?

Coach Glassman: I know you’re messing with me – trying to get me going. Look, why is it that a 20 minute bout on the stationary bike at 165 bpm is held by the public to be good cardiovascular work, whereas a mixed mode workout keeping athletes between 165-195 bpm for twenty minutes inspires the question, “what about aerobic conditioning?” For the record, the aerobic conditioning developed by CrossFit is not only high-level, but more importantly, it is more useful than the aerobic conditioning that comes from regimens comprised entirely of monostructural elements like cycling, running, or rowing. Now that should start some fires! Put one of our guys in a gravel shoveling competition with a pro cyclist and our guy smokes the cyclist.
Neither guy trains by shoveling gravel, so why does the CrossFit guy dominate? Because CrossFit’s workouts better model high demand functional activities. Think about it – a circuit of wall ball, lunges and deadlift/highpull at max heart rate better matches more activities than does cycling at any heart rate.

What good is happiness if it can't buy money?

This NYTimes article covers recent results in happiness research, which shows that money does buy happiness after all ;-) The new data seem to show a stronger correlation between average happiness and economic development than earlier studies, which had led to the so-called Easterlin paradox. One explanation for the divergence between old and new data is that people around the world are now more aware of how others in developed countries live, thanks to television and the internet. That makes them less likely to be content if their per capita incomes are low (see the hedonic treadmill below). The old data showed surprisingly little correlation between average income and happiness, but 30-50 years ago someone living in Malawi might have been blissfully unaware of what he or she was missing. See the article for links to the research papers and a larger version of the figure. Also see these reader comments from the Times, which range from the "happiness is a state of mind" variety to "money isn't everything, but it's way ahead of whatever is in second place."

In previous posts we've discussed the hedonic treadmill, which is based on the idea of habituation. If your life improves (e.g., move into a nicer house, get a better job, become rich), you feel better at first, but rapidly grow accustomed to the improvement and soon want even more. This puts you on a treadmill from which it is difficult to escape. The effect is especially pernicious if you adjust your perceived peer group as you progress (rivalrous thinking) -- there is always someone else who is richer and more successful than you are!
Note, the hedonic treadmill is not inconsistent with an overall correlation between happiness and income or wealth. It just suggests diminishing returns due to psychological adjustment.

Monday, April 14, 2008

John Wheeler, dead at 96

John Archibald Wheeler, one of the last great physicists of a bygone era, has died. He outlived most of his contemporaries (Bohr, Einstein, Oppenheimer) and even some of his students, like Feynman.

...One particular aspect of Einstein’s theory got Dr. Wheeler’s attention. In 1939, J. Robert Oppenheimer, later the head of the Manhattan Project, and a student, Hartland Snyder, suggested that Einstein’s equations had made an apocalyptic prediction. A dead star of sufficient mass could collapse into a heap so dense that light could not even escape from it. The star would collapse forever while spacetime wrapped around it like a dark cloak. At the center, space would be infinitely curved and matter infinitely dense, an apparent absurdity known as a singularity.

Dr. Wheeler at first resisted this conclusion, leading to a confrontation with Dr. Oppenheimer at a conference in Belgium in 1958, in which Dr. Wheeler said that the collapse theory “does not give an acceptable answer” to the fate of matter in such a star. “He was trying to fight against the idea that the laws of physics could lead to a singularity,” Dr. Charles Misner, a professor at the University of Maryland and a former student, said. In short, how could physics lead to a violation of itself — to no physics?

Dr. Wheeler and others were finally brought around when David Finkelstein, now an emeritus professor at Georgia Tech, developed mathematical techniques that could treat both the inside and the outside of the collapsing star. At a conference in New York in 1967, Dr. Wheeler, seizing on a suggestion shouted from the audience, hit on the name “black hole” to dramatize this dire possibility for a star and for physics.
The black hole "teaches us that space can be crumpled like a piece of paper into an infinitesimal dot, that time can be extinguished like a blown-out flame, and that the laws of physics that we regard as 'sacred,' as immutable, are anything but," he wrote in his 1999 autobiography, "Geons, Black Holes & Quantum Foam: A Life in Physics." (Its co-author is Kenneth Ford, a former student and a retired director of the American Institute of Physics.)

In 1973, Dr. Wheeler and two former students, Dr. Misner and Kip Thorne, of the California Institute of Technology, published "Gravitation," a 1,279-page book whose witty style and accessibility — it is chockablock with sidebars and personality sketches of physicists — belies its heft and weighty subject. It has never been out of print. ...

I mentioned the history of black holes in general relativity in an earlier post on J.R. Oppenheimer:

Perhaps most important was his work in the 1930's on the endpoint of stellar evolution, with his students Volkoff and Snyder at Berkeley. They explored many of the properties of black holes long before the term "black hole" was coined by Wheeler. Oppenheimer and company were interested in neutron star stability, and gave the first general-relativistic treatment of this complicated problem. In so doing, they deduced the inevitability of black hole formation for sufficiently massive progenitors. They also were the first to note that an infalling object hits the horizon after a finite proper time (in its own frame), whereas an observer orbiting the hole never actually sees the object hit the horizon. The work received amazingly little attention during Oppenheimer's life. But, had Oppenheimer lived another few decades, it might have won him a Nobel prize.

Friday, April 11, 2008

Young and Restless in China

Looks like a fascinating documentary, profiling nine young people trying to make it in modern China.
Among those profiled are a US-educated entrepreneur, a hip hop artist, an environmental lawyer and a migrant factory worker. It's meant to be a longitudinal study like Michael Apted's Up series in the UK, so look for future installments.

Interview with the filmmaker on the Leonard Lopate show. (I highly recommend Lopate's podcasts -- he's the sharpest interviewer I've found in arts, literature and contemporary culture. Not exactly your guy for science or economics, though.) More clips from the film.

PS Forget about Tibet. The vast majority of (Han) Chinese consider it part of China. Let's restore the Navajo nation to its pre-European contact independence before lecturing China about Tibet.

Wednesday, April 09, 2008

$1 trillion in losses?

We've had $200B in write-downs so far. The Fed has taken about $300B of shaky debt onto its balance sheet. The IMF is talking about a global bailout of the US economy. It took Japan well over a decade to clean up its banking system after their property bubble burst. I doubt the US is going to take all of its bitter medicine at once. Whither the dollar? The figure below first appeared on this blog in 2005:

Financial Times: IMF fears credit crisis losses could soar towards $1 trillion

Losses from the credit crisis by financial institutions worldwide are expected to balloon to almost $1 trillion (£507 billion), threatening to trigger severe economic fallout, the International Monetary Fund said yesterday. In a grim assessment of the deepening crisis delivered days before ministers from the Group of Seven leading economies meet in Washington, the IMF warns governments, central banks and regulators that they face a crucial test to stem the turmoil. "The critical challenge now facing policymakers is to take immediate steps to mitigate the risks of an even more wrenching adjustment," it says in its twice-yearly Global Financial Stability Report.
The IMF sounds an alert over the danger that banks' escalating losses, along with credit market uncertainties, could prompt a vicious downward spiral as they weaken economies and asset prices, leading to higher unemployment, more loan defaults and still deeper losses. "This dynamic has the potential to be more severe than in previous credit cycles, given the degree of securitisation and leverage in the system," the Fund argues.

It says that it is clear that global financial upheavals are now more than just a shortage of ready funds, or liquidity, but are rooted in "deep-seated fragilities" among banks with too little capital. This "means that its effects are likely to be broader, deeper and more protracted", the report concludes. "A broadening deterioration of credit is likely to put added pressure on systemically important financial institutions," it adds, saying that the risks have increased of a full-blown credit crunch that could undercut economic growth.

The warning came as Kenneth Rogoff, a former chief economist at the IMF and currently Professor of Economics at Harvard University, said that there was a "likely possibility" that the Fund will have to coordinate a global policy package to prop up the US economy. "They [the US] would not go for a conventional bail-out from the IMF. The IMF could not afford it – they have around $200 billion, which the US would burn through in a matter of months. It would be a package where various countries would try and prop up global demand to cushion the US economy." He added: "The US is going to be looking for help to prevent this banking and housing problem from getting worse."

The report also highlights the threat posed by the rapid spread of the credit crisis from its roots in the US sub-prime home loans to more mainstream lending markets worldwide. While banks have so far declared losses and writedowns over the crisis totalling $193 billion, the IMF expects the ultimate toll to reach $945 billion.
Global banks are expected to shoulder about half of the total losses – between $440 and $510 billion – with the rest being borne by insurance companies, pension funds, hedge funds and money market funds, and other institutional investors, predominantly in the US and Europe. Most of the losses are expected to stem from defaults in the US, with $565 billion written off in sub-prime and prime mortgages and a further $240 billion to be lost on commercial property lending. Losses on corporate loans are projected to mount to an eventual $120 billion and those on consumer lending to $20 billion.

Monday, April 07, 2008

The New Math

Alpha magazine has a long article on the current state of quant finance. It may be sample bias, but former theoretical physicists predominate among the fund managers profiled.

I've always thought theoretical physics was the best training for applying mathematical techniques to real world problems. Mathematicians seldom look at data, so are less likely to have the all-important intuition for developing simple models of messy systems, and for testing models empirically. Computer scientists generally don't study the broad variety of phenomena that physicists do, and although certain sub-specialties (e.g., machine learning) look at data, many do not. Some places where physics training can be somewhat weak (or at least uneven) include statistics, computation, optimization and information theory, but I've never known a theorist who couldn't pick those things up quickly.

Physicists have a long record of success in invading other disciplines (biology, computer science, economics, engineering, etc. -- I can easily find important contributions in those fields from people trained in physics, but seldom the converse). Part of the advantage might be pure horsepower -- the threshold for completing a PhD in theoretical physics is pretty high.
However, a colleague once pointed out that the standard curriculum of theoretical physics is basically a collection of the most practically useful mathematical techniques developed by man -- the high points and greatest hits! Someone trained in that tradition can't help but have an advantage over others when asked to confront a new problem.

Having dabbled in fields like finance, computer science and even biology, I've come to consider myself as a kind of applied mathematician (someone who applies mathematical ideas to the real world) who happens to have had most of his training from working on physical systems. I suspect that physicists who have left the field, as well as practitioners of biophysics, econophysics, etc. might feel the same way.

Readers of this blog sometimes accuse me of a negative perspective towards physics. Quite the contrary. Although I might not be optimistic about career prospects within physics, or the current state of the field, I can't think of any education which gives a richer understanding of the world, or a greater chance of contributing to it.

...Finkelstein, who also grew up in Kharkov, has a Ph.D. in theoretical physics from New York University and a master's degree in the same discipline from the Moscow Institute of Physics and Technology. Before joining Horton Point as chief science officer, he was head of quantitative credit research at Citadel Investment Group in Chicago. Most of the 12 Ph.D.s at Horton Point's Manhattan office are researching investment strategies and ways to apply scientific principles to finance. The firm runs what Finkelstein, 54, describes as a factory of strategies, with new models coming on line all the time. "It's not like we plan to build ten strategies and sit on them," he says. "The challenge is to keep it going, to keep this factory functioning."

Along with his reservations about statistical arbitrage, Sogoloff is wary of quants who believe the real world is obliged to conform to a mathematical model.
He acknowledges the difficulty of applying scientific disciplines like genetics or chaos theory — which purports to find patterns in seemingly random data — to finance. "Quantitative work will be much more rewarding to the scientist if one concentrates on those theories or areas that attempt to describe nonstable relationships," he says. Sogoloff sees promise in disciplines that deal with causal relationships rather than historical ones — like mathematical linguistics, which uses models to analyze the structure of language. "These sciences did not exist five or ten years ago," he says. "They became possible because of humongous computational improvements." However, most quant shops aren't exploring such fields because it means throwing considerable resources at uncertain results, Sogoloff says.

Horton Point has found a solution by assembling a global network of academics whose research could be useful to the firm. So far the group includes specialists in everything from psychology to data mining, at such schools as the Beijing Institute of Technology, the California Institute of Technology and Technion, the Israel Institute of Technology. Sogoloff tells the academics that the goal is to create the Bell Labs of finance. To align both parties' interests, Horton Point offers them a share of the profits should their work lead to an investment strategy. Scientists like collaborating with Horton Point because it combines intellectual freedom with the opportunity to test their theories using real data, Sogoloff says. "You have experiments that can be set up in a matter of seconds because it's a live market, and you have the potential for an amazing economic benefit." ...

Friday, April 04, 2008

Credit crisis for pedestrians

Here is a 40-minute discussion of the credit crisis on NPR's Fresh Air. The "expert" is a law professor with a tenuous grasp of finance, a love of regulation and an axe to grind against Wall St. and former Senator Phil Gramm.
Terry Gross, ordinarily an astute interviewer, can't seem to get beyond concepts like big bets at a big casino by unregulated fat cats. 8-/

Tuesday, April 01, 2008

Hsu scholarship at Caltech

I donated a number of shares in my previous startup (SafeWeb, Inc., acquired by Symantec in 2003) to endow a permanent undergraduate scholarship in memory of my father. In the course of setting up the scholarship I had to assemble a brief bio of my dad, which I thought I would post here on the Internet, to preserve for posterity. The first recipient of the scholarship was a student from Shanghai, who had won a gold medal in the International Physics Olympiad. The second recipient was a woman from Romania. I encourage all of my friends in the worlds of technology and finance to give back to the institutions from which they received their educations.

Cheng Ting Hsu Scholarship

This scholarship was endowed on behalf of Cheng Ting Hsu by his son Stephen Hsu, Caltech class of 1986. It is to be awarded in accordance with Institute policies to the most qualified international student each year. Preference is to be given to applicants from Chinese-speaking countries: China (including Hong Kong), Taiwan and Singapore. Also, preference should be given, if possible, to those with outstanding academic qualifications (such as, but not limited to, performance in national-level competitions in math, physics or computer science or other similar distinction). If the recipient is a continuing (rather than incoming) student, academic qualification can be based on GPA at Caltech, or other outstanding performance (such as, but not limited to, performance on competitive exams such as those in computer programming or mathematics, or outstanding research work).

Cheng Ting Hsu was born December 1, 1923 in Wenling, Zhejiang province, China. His grandfather, Zan Yao Hsu, was a poet and doctor of Chinese medicine.
His father, Guang Qiu Hsu, graduated from college in the 1920's and was an educator, lawyer and poet. Cheng Ting was admitted at age 16 to the elite National Southwest Unified University, which was created during WWII by merging Tsinghua, Beijing and Nankai Universities. This university produced numerous famous scientists and scholars, such as the physicists C.N. Yang and T.D. Lee. Cheng Ting studied aerospace engineering (originally part of Tsinghua), graduating in 1944. He became a research assistant at China's Aerospace Research Institute and a lecturer at Sichuan University. He also taught aerodynamics for several years to advanced students at the air force engineering academy. In 1946 he was awarded one of only two Ministry of Education fellowships in his field to pursue graduate work in the United States. In 1946-1947 he published a three-volume book, co-authored with Professor Li Shoutong, on the structures of thin-walled airplanes.

In January, 1948, he left China by ocean liner, crossing the Pacific and arriving in San Francisco. In March of 1948 he began graduate work at the University of Minnesota, receiving his master's degree in 1949 and PhD in 1954. During this time he was also a researcher at the Rosemount Aerospace Research Institute in Minneapolis. In 1958 Cheng Ting was appointed associate professor of aerospace engineering at Iowa State University. He was one of the founding faculty members of the department and became a full professor in 1962. During his career he supervised about 30 Masters theses and PhD dissertations. His research covered topics including jet propulsion, fluid mechanics, supersonic shock waves, combustion, magneto-hydrodynamics, vortex dynamics (tornados) and alternative energy (wind turbines). He published widely, in scientific journals ranging from physics to chemistry and aerodynamics.

Professor Hsu retired from Iowa State University in 1989 due to ill health, becoming Professor Emeritus. He passed away in 1996.
When tackling a physics problem, an engineer will manipulate the axes/coordinate system where a mathematician and/or physicist will use the original coordinate system and math. Why do engineers think differently? I know it's likely because that is how they are taught, but why are they taught that way?

closed as off topic by David Z May 31 '11 at 21:25

Who told you that this is special to engineers? I learned it both ways and I teach it both ways, because the important thing is that each person be able to do it in a way that makes sense to them and still be able to follow when someone else does it another way. – dmckee May 31 '11 at 18:26

My calculus professor made this general statement about engineers, and my engineering professors agree. – Dale May 31 '11 at 18:45

I'm with dmckee. Physicists will seek out coordinate systems to greatly simplify problems. – Jerry Schirmer May 31 '11 at 19:36

This strikes me as a question about engineers, not about physics. – David Z May 31 '11 at 21:26

Such statements are good for jokes, eventually! The span of very different attitudes of engineers (civil, electronics, surveillance, etc) to math and physics shows that this statement on "engineers" is silly. – Georg Jun 1 '11 at 8:58

4 Answers

Choosing an appropriate coordinate system often vastly simplifies a problem. Anyone who wants to solve a problem expediently will try to find a coordinate system that simplifies the problem. If your professors told you that physicists do not do this, then your professors told you a falsehood.
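A tiny numerical illustration of that point (my own, not from the thread; the 30 degree angle and g = 9.81 are arbitrary choices): the block-on-a-frictionless-incline problem solved once in incline-aligned axes and once in lab axes. Both routes give the same answer, but the adapted axes make it a one-liner.

```python
import numpy as np

theta = np.radians(30.0)  # incline angle: arbitrary illustrative value
g = 9.81

# Route 1: axes aligned with the incline. The normal force cancels the
# perpendicular component of gravity, so the acceleration along the plane
# is read off immediately.
a_incline_axes = g * np.sin(theta)

# Route 2: lab axes. Project gravity (0, -g) onto the downhill unit vector
# of the constraint surface.
downhill = np.array([np.cos(theta), -np.sin(theta)])
a_lab_axes = float(np.dot(np.array([0.0, -g]), downhill))

print(a_incline_axes, a_lab_axes)  # identical: about 4.905 in both cases
```

The physics is of course coordinate-independent; the choice of axes only changes how much algebra stands between the problem statement and the answer.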
Engineers and Physicists have different requirements, so they use different tools, and sometimes use the same tools with different approaches.

Engineers usually are after solving differential equations, or doing resonance analysis on some structure, which mostly involves doing Laplace transforms of complicated systems of equations; these equations might become significantly easier to solve in specific coordinate systems. Some coordinate systems are better than others for certain problems.

Physicists also use this for solving equations (think how much easier it is to solve the Schrödinger equation for the hydrogen atom in spherical coordinates rather than, say, Cartesian). However, in theoretical physics one usually does not want to focus on how the equations look in specific coordinates; one actually wants to see what part of an equation does not change (or changes in a prescribed manner) when a coordinate system is changed, since the most interesting theoretical quantities are usually the ones that transform in particularly simple and elegant ways.

Because engineers like making things simple - if it's easier to work in the coordinate system of the aircraft (rather than galactic coordinates) then they will. On the other hand the physicists will redefine all the constants to 1 to simplify the sums.

There are a few reasons. The first is that most engineers do project work, so a coordinate system is usually developed to suit the project, making everything simple and easy to input into calculations. The second reason is that engineers like to look at solutions to problems by comparing the results of calculations and designs with other designs at different stages. It is far easier to compare the dimensions of items when the units and origin of the coordinate system are suited to the problems.
For example, if you had to compare the depth of bridge girders but the bridge girders were measured as offsets from an origin at the support of the bridge rather than simply the depth of the girder, it would be far more difficult to do a simple comparison.

The final reason is simply that one engineering structure often interfaces with another. If you take the example of the London Underground, it has its own coordinate system for x, y and z coordinates. This means that it is easy for a new project to connect to an existing project.

Every particle physics experiment I've worked on had its own coordinate system. Some had multiple systems (i.e. accelerator coordinates, electron spectrometer coordinates, hadron spectrometer coordinates, etc.). Sometimes these were reasonably obvious, other times ... "whaddya mean the z-axis runs 3.4 degrees below the horizontal?!?". Each set was chosen for a good reason. – dmckee Jun 1 '11 at 1:16
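The "interfacing" bookkeeping described above is, in practice, just a rigid transformation between grids. A small sketch (all numbers are hypothetical, not taken from any real project grid) converting a point between two project coordinate systems and checking that distances, the things worth comparing, agree in either grid:

```python
import numpy as np

# Hypothetical survey setup: project grid B is rotated 12 degrees and offset
# relative to project grid A. R and t define the map from A- to B-coordinates;
# all values are made up for illustration.
angle = np.radians(12.0)
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
t = np.array([1500.0, -300.0])

def a_to_b(p):
    return R @ p + t

def b_to_a(p):
    return R.T @ (p - t)  # the inverse of a rotation is its transpose

p = np.array([120.0, 45.0])
print(np.allclose(b_to_a(a_to_b(p)), p))  # True: round trip recovers the point

# Distances are frame-independent, which is what makes comparisons meaningful:
q = np.array([130.0, 49.0])
print(np.linalg.norm(p - q), np.linalg.norm(a_to_b(p) - a_to_b(q)))  # equal up to rounding
```

Real survey grids also involve scale factors and datum shifts, but the principle is the same: agree on the transformation once, then each project can keep its own convenient axes.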
QM: Changing wavefunctions after measurements

1. Oct 16, 2008 #1

Hi all. My question is best illustrated with an example. Please, take a look:

Let's say we have a particle in a stationary state, so [tex]\Psi(x,0)=1\cdot \psi_{1,0}(x)[/tex] with energy E_{1,0}. Now at time t=0 the Hamiltonian of this particle changes, since the particle gains some energy. Thus the wavefunction [tex]\Psi(x,t)[/tex] changes, and it can be written as:

[tex]\Psi(x,t) = \sum\limits_{n = 0}^\infty {c_n } \psi_{2,n}(x)\exp(-iE_{2,n}t/\hbar)[/tex]

where [tex]\psi_{2,n}(x)[/tex] are the new stationary states.

Question: At time t=0 when the particle gains energy and hence [tex]\Psi(x,t)[/tex] changes, is the wavefunction given as:

[tex]\Psi (x,0) = \psi _{1,0} (x) = \sum\limits_{n = 0}^\infty {c_n } \psi _{2,n} (x)[/tex]

So does the wavefunction change immediately or does it evolve in a slow fashion?

3. Oct 16, 2008 #2

The coefficients cn are time dependent here. You can write the wave-function as

[tex]\Psi(x,t) = \sum\limits_{n = 0}^\infty {c_n(t)} \psi_{2,n}(x)[/tex]

4. Oct 16, 2008 #3

Does this follow from the fact that our wave function lives in Hilbert space?

5. Oct 16, 2008 #4

Ok, I am having my doubts about why we are allowed to do this. When the Hamiltonian changes, the particle is no longer in the ground state. So if it is not in the ground state, then why are we even interested in using the following expression?

[tex]\psi _{1,0} (x) = \sum\limits_{n = 0}^\infty {c_n } \psi _{2,n} (x)[/tex]

Last edited: Oct 16, 2008

6. Oct 17, 2008 #5

I don't mean to be impolite, but I really need to understand this. I still cannot see how [itex]\psi_{1,0}[/itex] comes into play, when the particle is not in that state anymore. Of course we can write it as a linear combination of the new stationary states (as you said, nasu), but we could do that with any wavefunction. Why is [itex]\psi_{1,0}[/itex] so interesting in this case? Especially when our particle is not in this state anymore.

7. Oct 17, 2008 #6

At time t=0 the particle *is* still in the state [itex] \psi_{1,0} [/itex].
Now, instead of simply evolving as [itex] \psi_{1,0}\exp(-iE_{1,0}t/\hbar) [/itex] it evolves under the new Hamiltonian with new eigenenergies [itex] E'_n [/itex]. What you do is you first expand the initial state as

[tex]\psi_{1,0}=\sum_n c_n \psi'_{n}[/tex]

in terms of the new eigenfunctions of the Hamiltonian. Then, once you have calculated those coefficients [itex] c_n [/itex] (they are given by the overlap between the initial state and the new eigenfunctions), the state evolves as

[tex]\psi_{1,0}=\sum_n c_n \psi'_{n}\exp(-iE'_n t/\hbar)[/tex]

8. Oct 18, 2008 #7

Hi borgwal. First, thanks for answering me.

The way I have understood it, [itex]\psi_{1,0}[/itex] is the stationary state with a definite energy. So if the particle at time t=0 is in state [itex]\psi_{1,0}[/itex], then doesn't this mean that it has the energy associated with the stationary state [itex]\psi_{1,0}[/itex]?

In your last expression, [itex]\psi_{1,0}=\sum_n c_n \psi'_{n}\exp(-iE'_n t/\hbar)[/itex]: Isn't a time factor [itex]\exp(-iE'_{1,0}t/\hbar)[/itex] missing on the left hand side?

Again, thanks for answering. This forum is the only place I can get help at the moment, so I really appreciate it.

9. Oct 18, 2008 #8

You're changing the Hamiltonian at t=0: so, the *same* state that is stationary with respect to the old Hamiltonian is no longer stationary with respect to the new Hamiltonian.

There is no time factor [itex] \exp(-iE'_{1,0}t/\hbar)[/itex] missing in my equation: in fact there is no new energy [itex] E'_{1,0} [/itex] for the state [itex] \psi_{1,0} [/itex] anymore, because that wavefunction is no longer an energy eigenfunction. The concept of what a stationary state is depends on what Hamiltonian you have! Similarly, the energy "associated with a state" depends on the Hamiltonian.

10. Oct 20, 2008 #9

Perfect borgwal, these are just the answers I needed. I have two final questions (REMARK: I can't get the LaTeX to work properly, so I'll just write the code.
I guess it's just a temporary issue.):

1) Prior to the measurement (i.e. the old Hamiltonian is still valid), our particle is in the stationary state \psi_{1, 0}. What does the time-dependent wavefunction look like for this state? Is it simply \Psi(x, t) = \psi_{1, 0}?

2) So have I understood it correctly when I say that the stationary state of the old Hamiltonian is now a time-dependent state after the Hamiltonian has changed?

Thanks in advance. I really appreciate your effort.

Last edited: Oct 20, 2008

11. Oct 20, 2008 #10

2) yes!

1) no, before t=0 there is still the trivial time-dependence \Psi=\psi_{1,0}\exp(-i E_{1,0}t/\hbar), but I'm sure you knew that already. [And I see I had written that in my previous post, too]

12. Oct 20, 2008 #11

Yes, you actually did. Sorry for not seeing that. But it still confuses me for two reasons:

1) We are in a stationary state. So why would it evolve?

2) The squares of the constants in the linear combination at an arbitrary time t do not sum up to one. I mean, let's for example take t = 2 - then the sum of the squares is not 1?

13. Oct 20, 2008 #12

1) That's just a matter of terminology: expectation values like those of position and momentum do not depend on time. The overall (time-dependent) phase factor is irrelevant for any physical variable.

2) The *absolute* values squared of all coefficients should add up to 1, at any time t.

14. Oct 21, 2008 #13

Ok, my last issues are these:

1) So \psi_{1,0} is no longer a solution to the time-independent Schrödinger equation, because the Hamiltonian has changed. But \psi_{1,0}(x) is not time-dependent (it is only x-dependent), so when the Hamiltonian has changed and we get a linear combination of the new states, the linear combination is somehow constant in time, although it evolves? How is this to be understood?
2) When we calculate the probability of a particle's wavefunction to collapse to an eigenstate, we use the absolute value of the coefficient in front of that eigenstate in the linear combination. At an arbitrary time t, is the exponential term also part of that constant? If yes, it will vanish, right?

Again, I really appreciate it. These are my last questions, I promise.

Last edited: Oct 21, 2008

15. Oct 21, 2008 #14

The coefficients in that linear combination contain the factors exp(-i E2,n t / hbar). You can think of this as causing interference among the various state components, sometimes constructive and sometimes destructive. But the key thing is, the amount of interference changes with time at any given location, so the total wavefunction amplitude at a given x will change in magnitude. That's why the wavefunction as a whole changes after t=0.

I'm not understanding this question (it has been 20 years since I had quantum mechanics, sorry) but perhaps somebody else can address it.

16. Oct 21, 2008 #15

George Jones

I wouldn't say that it vanishes, I would say that it drops out. Why? How, in general, is this probability calculated?

17. Oct 21, 2008 #16

By taking the absolute value, and since the term is a complex phase, it drops out.

This is just a general question: If the Hamiltonian has changed, then \psi_{1,0} is no longer a solution to the Schrödinger equation. Then how can the particle still be in this state?

18. Oct 21, 2008 #17

You're confusing stationary solutions of the S.E., which correspond to solutions to the time-independent S.E. with a fixed energy, with general solutions to the time-dependent S.E. A particle is always in a state that is a solution to the time-dependent S.E., but not necessarily in a stationary state.

19. Oct 25, 2008 #18

Ahh yes, you are correct. Thanks for mentioning that.
Say we are looking at a particle in the infinite square well, where the potential is zero in the interval from 0 to L. The ground state is given by:

[tex]\psi (x) = \sqrt {\frac{2}{L}} \sin \left( {\frac{\pi }{L}x} \right)[/tex]

Now the length of the well increases from 0<x<L to 0<x<2L, and the new eigenstates are given by:

[tex]\psi_{\text{new}} (x) = \sqrt {\frac{1}{L}} \sin \left( {\frac{{n\pi }}{{2L}}x} \right)[/tex]

From our preceding discussion, the particle is still in the state [tex]\psi(x)[/tex], but it is now a linear combination of the new eigenstates [tex]\psi_{\text{new}}(x)[/tex]. So using the facts from our discussion, can we conclude that the probability of finding the particle outside 0<x<L is zero, since the particle is still in the state [tex]\psi(x)[/tex]?

Thanks in advance. Best regards,

20. Oct 25, 2008 #19

Correct, at time t=0 the particle will not be found outside 0<x<L. But then the wavefunction starts evolving away from the original state, and the particle can be found in the region x>L.

21. Oct 25, 2008 #20

Hmm, this is what I do not understand. The new states are:

[tex]\Psi(x,t) = \psi(x)=\sum_n c_n \psi_{new,\,\,n}(x)\exp(-iE_{new,\,\,n}t/\hbar)[/tex]

1) Regardless of what time t it is, [tex]\psi(x)[/tex] is zero for x>L and x=L. But am I supposed to understand it like this: That when we make a new measurement, then the possible states are [tex]\psi_{\text{new,\,\,n}}(x)[/tex], which are not 0 for x greater than or equal to L, and hence the particle can be found in 0<x<2L for times t>0?

2) By the way, does the particle's wavefunction collapse to one of these new eigenstates immediately when the well becomes wider? Or does it only collapse upon a new measurement?
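The sudden-expansion example discussed in this thread is easy to check numerically. The sketch below is my own (units with ħ = m = L = 1 are an arbitrary choice, not from the thread): it expands the old ground state in the new eigenstates, verifies that the squared overlap coefficients sum to one, and shows that the probability of finding the particle at x > L is zero at t = 0 but nonzero once the state evolves under the new Hamiltonian.

```python
import numpy as np

# Sudden expansion of the infinite square well from [0, L] to [0, 2L].
L = hbar = m = 1.0
x = np.linspace(0.0, 2 * L, 4001)
dx = x[1] - x[0]

def integrate(y):
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return float(np.sum((y[1:] + y[:-1]) * 0.5 * dx))

# Old ground state, extended by zero on (L, 2L]
psi_old = np.where(x <= L, np.sqrt(2 / L) * np.sin(np.pi * x / L), 0.0)

# Overlap coefficients c_n with the new eigenstates on [0, 2L]
n = np.arange(1, 201)
psi_new = np.sqrt(1 / L) * np.sin(np.outer(n, np.pi * x / (2 * L)))
c = np.array([integrate(psi_new[k] * psi_old) for k in range(len(n))])

print(np.sum(c**2))  # close to 1: the expansion in new eigenstates is complete
print(c[1] ** 2)     # the n = 2 coefficient squared: 1/2, as a direct integral shows

# Evolve under the new Hamiltonian and watch probability leak past x = L
E = (n * np.pi * hbar) ** 2 / (2 * m * (2 * L) ** 2)

def prob_right(t):
    phases = np.exp(-1j * E * t / hbar)
    Psi = (c[:, None] * phases[:, None] * psi_new).sum(axis=0)
    dens = np.abs(Psi) ** 2
    return integrate(np.where(x > L, dens, 0.0))

print(prob_right(0.0))  # essentially zero, right after the expansion
print(prob_right(1.0))  # clearly nonzero: the state spreads into L < x < 2L
```

This matches the discussion above: at t = 0 the state is unchanged and confined to the left half, and only the subsequent evolution (the n-dependent phase factors dephasing) lets the particle be found in the new region.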
I would like to know what quantization is, I mean I would like to have some elementary examples, some soft nontechnical definition, some explanation about what do mathematicians quantize?, can we quantize a function?, a set?, a theorem?, a definition?, a theory?

Ugh, can someone rewrite this question? – Scott Morrison Nov 20 '09 at 2:42

I fear that the OP might be misinterpreting the meaning of the word "theory" in QFT. – José Figueroa-O'Farrill Nov 20 '09 at 17:46

I rewrote the question. – Kristal Cantwell May 26 '13 at 16:49

11 Answers

As I'm sure you'll see from the many answers you'll get, there are lots of notions of "quantization". Here's another perspective. Recall the primary motivation of, say, algebraic geometry: a geometric space is determined by its algebra of functions. Well, actually, this isn't quite true --- a complex manifold, for example, tends to have very few entire functions (any bounded entire function on C is constant, and so there are no nonconstant entire functions on a torus, say), so in algebraic geometry, they use "sheaves", which are a way of talking about local functions. In real geometry, though (e.g. topology, or differential geometry), there are partitions of unity, and it is more-or-less true that a space is determined by its algebra of total functions. Some examples: two smooth manifolds are diffeomorphic if and only if the algebras of smooth real-valued functions on them are isomorphic. Two locally compact Hausdorff spaces are homeomorphic if and only if their algebras of continuous real-valued functions that vanish at infinity (i.e. for any epsilon there is a compact set so that the function is less than epsilon outside the compact set) are isomorphic. (From a physics point of view, it should be taken as a definition of "space" that it depends only on its algebra of functions.
Said functions are the possible "observables" or "measurements" --- if you can't measure the difference between two systems, you have no right to treat them as different.)

So anyway, it can be useful to recast geometric ideas into algebraic language. Algebra is somehow more "finite" or "computable" than geometry. But not every algebra arises as the algebra of functions on a geometric space. In particular, by definition the multiplication in the algebra is "pointwise multiplication", which is necessarily commutative (the functions are valued in R or C, usually). So from this point of view, "quantum mathematics" is when you try to take geometric facts, written algebraically, and interpret them in a noncommutative algebra. For example, a space is locally compact Hausdorff iff its algebra of continuous functions is a commutative c-star algebra, and any commutative c-star algebra is the algebra of continuous functions on some space (in fact, on its spectrum). So a "quantum locally compact Hausdorff space" is a non-commutative c-star algebra. Similarly, a "quantum algebraic space" is a non-commutative polynomial algebra.

Anyway, I've explained "quantum", but not "quantization". That's because so far there's just geometry ("kinematics"), and no physics ("dynamics"). Well, a noncommutative algebra has, along with addition and multiplication, an important operation called the "commutator", defined by $[a,b]=ab-ba$. Noncommutativity says precisely that this operation is nontrivial. Let's pick a distinguished function H, and consider the operation $[H,-]$. This is necessarily a differential operator on the algebra, in the sense that it is linear and satisfies the Leibniz product rule. If the algebra were commutative, then differential operators would be the same as vector fields on the corresponding geometric space, and thus are the same as differential equations on the space.
In fact, that's still true for noncommutative algebras: we define the "time evolution" by saying that for any function (=algebra element) f, it changes in time with differential [H,f]. (Using this rule on coordinate functions defines the geometric differential equation; in noncommutative land, there does not exist a complete set of coordinate functions, as any set of coordinate functions would define a commutative algebra.) Ok, so it might happen that for the functions you care about, $[a,b]$ is very small. To make this mathematically precise, let's say that (for the subalgebra of functions that do not have very large values) there is some central algebra element $\hbar$, such that $[a,b]$ is always divisible by $\hbar$. Let $A$ be the algebra, and consider the quotient $A/\hbar A$. If $\hbar$ is supposed to be a "very small number", then taking this quotient should only throw away fine-grained information, but some sort of "classical" geometry should still survive (notice that since $[a,b]$ is divisible by $\hbar$, it goes to $0$ in the quotient, so the quotient is commutative and corresponds to a classical geometric space). We can make this precise by demanding that there is a vector-space lift $(A/\hbar A) \to A$, and that $A$ is generated by the image of this lift along with the element $\hbar$. Anyway, so with this whole set up, the quotient $A/\hbar A$ actually has a little more structure than just being a commutative algebra. In particular, since $[a,b]$ is divisible by $\hbar$, let's consider the element $\{a,b\} = \hbar^{-1} [a,b]$. (Let's suppose that $\hbar$ is not a zero-divisor, so that this element is well-defined.) Probably, $\{a,b\}$ is not small, because we have divided a small thing by a small thing, so that it does have a nonzero image in the quotient. This defines on the quotient the structure of a Poisson algebra.
In particular, you can check that $\{H,-\}$ is a differential operator for any (distinguished) element $H$, and so still defines a "mechanics", now on a classical space. Then quantization is the process of reversing the above quotient. In particular, lots of spaces that we care about come with canonical Poisson structures. For example, for any manifold, the algebra of functions on its cotangent bundle has a Poisson bracket. "Quantizing a manifold" normally means finding a noncommutative algebra so that some quotient (like the one above) gives the original algebra of functions on the cotangent bundle. The standard way to do this is to use Hilbert spaces and bounded operators, as I think another answerer described. share|cite|improve this answer Concerning "...a space is locally compact Hausdorff iff its algebra of continuous functions is a commutative c-star algebra": Unless I misunderstand the statement, it is not true: For any topological space $X$, the bounded functions $X\to \mathbb C$ form a commutative $C^*$-algebra. – Rasmus Bentmann Nov 11 '11 at 21:52 @Rasmus: hrm, it's now been a while since c-star-algebra class, and it's not my area. But my understanding is the following. First, when I say "algebra of functions", I never mean the algebra of bounded functions. In the real world, I usually want "all" functions, but when I am working c-star-algebraically, I mean "function that's less than $\epsilon$ outside a compact". Given $X$, the algebra of bounded functions is the algebra of functions on the Stone-Cech completion $\beta X$ of $X$, and it's not surprising for $\beta X$ to have better properties than $X$. – Theo Johnson-Freyd Nov 12 '11 at 6:41 But of course you're right, there's something wrong with the statement, because any indiscrete space has only the constant functions, which clearly form a commutative c-star algebra.
Probably I should have added the word "Hausdorff" somewhere --- there's no chance of recovering non-Hausdorff structure from continuous $\mathbb C$-valued functions. – Theo Johnson-Freyd Nov 12 '11 at 6:43 I don't know what it means for a mathematician to quantize something, but I can give you a rough description, and a few specific examples, from a physicist's point of view. Motivational fluff When quantum mechanics was first discovered, people tended to think of it as a modified version of classical mechanics [1]. In those days, very few quantum systems were known, so people would create quantum systems by "quantizing" classical ones. To quantize a classical system is to come up with a quantum system that "behaves similarly" in some sense. For example, you generally want there to be an intuitive correspondence between the observables of a classical system and the observables of its quantization, and you generally want the expectation values of the quantized observables to obey the same equations of motion as their classical counterparts. Because the goal of quantization is to find a quantum system that's "analogous" in some way to a given classical system, it's not a mathematically well-defined procedure, and there's no unique way of doing it. How you attempt to quantize a system, and how you decide whether or not you've succeeded, depends entirely on your motivation and goals. The harder stuff I've been using the phrase "quantum system" a lot---what do I really mean? In my opinion, one of the best ways to find out is to read Section 16.5 of Probability via Expectation, by Peter Whittle. Roughly speaking, a quantum system has two basic parts: • A complex inner product space $H$, called the state space [2]. Each ray of $H$ represents a possible "pure state" of the system. 
A pure state is somewhat analogous to a probability distribution, in that it tells you how to assign expectation values to "observables"; in particular, it tells you how to assign probabilities to propositions. • A collection of self-adjoint linear maps from $H$ to itself, called observables. An observable is somewhat analogous to a random variable; it represents a property of the system that can be measured and found to have a certain value. The values that an observable can take are given by its eigenvalues (or, in the infinite-dimensional case, its spectrum). Say $A$ is an observable, $a$ is an eigenvalue of $A$, and $v_1, \ldots, v_n \in H$ form an orthonormal basis for the eigenspace of $a$. If the state of the system is the ray generated by the unit vector $\psi \in H$, the probability that the observable $A$ will be found to have the value $a$ is $|\langle v_1, \psi \rangle|^2 + \ldots + |\langle v_n, \psi \rangle|^2$, where $\langle \cdot, \cdot \rangle$ is the inner product. You can then easily show that the expectation value of the observable $A$ is $\langle \psi, A \psi \rangle$. Observables whose only eigenvalues are $1$ and $0$—that is, projection operators on $H$—play a special role, because they correspond to logical propositions about the system. The expectation value of a projection operator is just the probability of the proposition. Most interesting quantum systems have another part, which is often very important: • A set of unitary maps from $H$ to itself, which might be called transformations. These represent "automorphisms" of the system. In physics, many quantum systems have a one-parameter group of transformations, often denoted $U(t)$, that represent time evolution; the idea is that if the state of the system is currently (the ray generated by) $\psi$, the state will be $U(t)\psi$ after $t$ units of time have passed.
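As a small concrete illustration of states, observables, and probabilities (a sketch of my own, not from the answer; the observable and state vector are arbitrary choices), the probability of finding an eigenvalue is the squared modulus of the component of $\psi$ in the corresponding eigenspace, and the expectation value is $\langle \psi, A\psi \rangle$:

```python
import numpy as np

# A self-adjoint observable on C^2 (here Pauli-z) and a unit state vector.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
psi = np.array([np.sqrt(0.3), np.sqrt(0.7)])

# Spectral decomposition: eigenvalues a_k with orthonormal eigenvectors v_k.
eigvals, eigvecs = np.linalg.eigh(A)

# Born rule: P(a_k) = |<v_k, psi>|^2, summed over a degenerate eigenspace.
probs = np.abs(eigvecs.conj().T @ psi) ** 2
assert np.isclose(probs.sum(), 1.0)

# Expectation value <psi, A psi> equals the probability-weighted average.
expectation = psi.conj() @ A @ psi
assert np.isclose(expectation, (probs * eigvals).sum())
```

A projection operator in place of `A` would make `expectation` exactly the probability of the corresponding proposition, as the answer notes.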
Physical systems often have other transformation groups as well; for example, a quantum system that's supposed to have a "spatial orientation" will generally have a group of transformations that form a representation of $SO(3)$. A few examples • Quantum random walks are, as the name suggests, quantized random walks. More generally, you can quantize the idea of a Markov chain. For a great introduction, see the paper "Quantum walks and their algorithmic applications", by Andris Ambainis. • In Sections 2 and 3 of the notes "A Short Introduction to Noncommutative Geometry", Peter Bongaarts describes quantized versions of compact topological spaces and classical mechanical systems. • In Section 4 of the book Noncommutative Geometry (caution---big PDF), Alain Connes introduces a quantized version of calculus. Here, the observables representing complex variables are non-self-adjoint because complex variables can take on complex values. An observable representing a complex variable must therefore be allowed to have complex eigenvalues. I hope this helps! [1] Today, in contrast, most physicists think of classical mechanics as an approximation to quantum mechanics. [2] If $H$ is infinite-dimensional, it's typically a separable Hilbert space. You may even need $H$ to be something fancier, like a rigged Hilbert space. share|cite|improve this answer Just to restate some facts already stated in other answers, quantization can mean a few different things. In deformation quantization, we start with a classical theory given by a Poisson manifold. Then, (by definition) the algebra of functions forms a Poisson algebra. A quantization of this algebra is a noncommutative algebra with operators $X_f$ for $f$ a function. There is also a formal parameter $\hbar$. This algebra satisfies $$ X_f\ X_g = X_{fg} + \mathcal{O}(\hbar)\ . $$ The idea of quantization is that the Poisson bracket becomes a commutator, or $$ [X_f,X_g] = \hbar X_{\lbrace f,g \rbrace} + \mathcal{O}(\hbar^2)\ .
$$ Thus, we have a noncommutative version of classical mechanics. The existence of such an algebra is a theorem of Kontsevich (the case of a symplectic manifold was solved much earlier, but I forget by whom). In mathematics, there are plenty of interesting analogous situations where you have a noncommutative thingie which is, in some sense, a formal deformation of a commutative thingie. You can see the other direction of the above as an example of the following general fact. Given a filtered algebra whose associated graded is commutative, there is a natural Poisson structure on the associated graded. In physics, however, it's not enough to just deform the algebra of functions; we have to now represent things on a Hilbert space. This introduces a whole host of other problems. In geometric quantization, this is split into two steps. Let's say we have a symplectic manifold whose symplectic form is integral. Then we can construct a line bundle with connection whose curvature is that symplectic form. The Hilbert space is the space of $L^2$ sections of this bundle. This is much too large, however, so you have to cut it down (which is step 2). In various cases, well-defined procedures exist, but I don't believe this is well-understood in general. For example, I'm not sure it's possible to represent every function as an operator. It's probably worth pointing out that, from the point of view of physics, quantization is backwards. It is the quantum theory that is fundamental, and the classical theory should arise as some limit of the quantum theory. There's some interesting mathematics there, and also a whole lot of philosophy too. share|cite|improve this answer I believe that the symplectic case was solved independently by De Wilde-Lecomte, Omori-Maeda-Yoshioka and Fedosov. – José Figueroa-O'Farrill Nov 20 '09 at 17:43 The word has many meanings in mathematics, most of them quite vague. 
One general way of describing what quantization is for a mathematician is the following: you have your favorite object $X$, and you find that there is a family of other objects $X_q$ parametrized by a parameter $q$ which varies in some set (or is only a ‘formal parameter’ in the way that the variable in a polynomial ring is ‘formally’ an element in an over-ring of the coefficient ring) such that for a special value $q_0$ of the parameter $q$, or, in the ‘formal’ case, when the parameter degenerates in some specific way, you have that $X_{q_0}$ is your original favorite $X$, and if the objects $X_q$ are in some sense (more) non-commutative than $X$, one says that the family $X_q$ is a quantization of $X$. Very vague, I know. And this is only interesting if your $X$ is interesting, if the $X_q$ themselves are interesting, and if there is some connection between the two. For example, integer numbers are undeniably interesting objects, and they have a ‘quantization’, given by the usual quantum integers (one of a couple of variants), where this is very visible. The thing is, usually, starting from some interesting $X$, there are really not very many ways in which you can do this. For example, if you start with an enveloping algebra of a simple Lie algebra over $\mathbb C$, then there is just one way to do this (up to the appropriate way of ignoring that there are really many ways to do this). share|cite|improve this answer I think you mean quantization is some kind of deformation theory. – Allen Sep 28 '12 at 11:23 There are some good long answers already, so I'm going to try to give as short an answer as possible. A quantization of $X$ is some $X_\hbar$ depending on a parameter $\hbar$ (occasionally $q=e^\hbar$ instead) such that $X=X_0$ and $X_\hbar$ is generically "less commutative" than $X$. This is by analogy with quantum physics where $X_0$ is classical physics and $\hbar$ measures the failure of position and momentum to commute.
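The "less commutative" deformation can be made concrete on the $(x,p)$ phase plane with the Moyal star product truncated at first order in $\hbar$ (a sketch of my own assuming sympy; the full star product has corrections at every order in $\hbar$):

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar')

def poisson(f, g):
    # Canonical Poisson bracket on the (x, p) phase plane.
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def star(f, g):
    # Moyal star product truncated at first order:
    # f * g = fg + (i hbar / 2) {f, g} + O(hbar^2)
    return sp.expand(f * g + sp.I * hbar / 2 * poisson(f, g))

# The star-commutator of x and p recovers [x, p] = i*hbar*{x, p} = i*hbar.
commutator = sp.simplify(star(x, p) - star(p, x))
```

Setting $\hbar = 0$ makes the product plain commutative multiplication again, which is exactly the "$X = X_0$" limit of the short answer above.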
share|cite|improve this answer In mathematics, quantization often refers to some kind of deformation of a classical object. The Heisenberg Uncertainty Principle says that the position and momentum operators do not commute. In fact, $[X,P]=i\hbar$. In the limit as $\hbar\to 0$, these operators commute once again. Technically speaking, this is nonsense as $\hbar$ is a universal constant, but in mathematics, we are free to play with parameters. A couple of examples include: • the noncommutative torus, the universal $C^\ast$-algebra generated by two unitaries satisfying $uv=e^{i\theta} vu$. As $\theta\to 0$, we get $C(\mathbb{T}^2)$, the continuous functions on the $2$-torus. We usually think of the deformed algebra as a quantization of the commutative one. • some quantum groups are deformations of universal enveloping algebras, i.e., we get the universal enveloping algebra as $q\to 1$. share|cite|improve this answer As a physicist who has taken a bunch of Quantum Mechanics and Solid State physics, when we say "quantize your system" it means: You set up your classical Lagrangian $L$ (in terms of kinetic $K$ and potential $U$ energy), given generalized coordinates $q_i$ and conjugate momenta $p_i$ (usually position and momentum, but they could also be, say, angles and angular momenta). You then take the Hamiltonian $H$ of that system, which in most cases becomes $H=K+U$. This is all in terms of your generalized coordinates. Once that is done, "quantizing" the system (or your variables) means to simply set $[q_i,p_j]=i\hbar \delta_{ij}$. The quantum mechanics is now in effect. This is known as $\textit{canonical quantization}$. Quantum Field Theory is an extension of Quantum Mechanics in which one performs a second quantization. For instance, in using electrodynamics in quantum mechanics you simply quantize the atomic motion (which interacts with the $\textbf{E}$-field); this is the "semiclassical approach".
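As a numerical aside (my own illustration, not part of the answer above; `N` and `hbar` are arbitrary choices): representing $q$ and $p$ in a truncated harmonic-oscillator basis shows the canonical commutator $[q,p]=i\hbar$ holding on all but the last basis state, where the finite truncation spoils it:

```python
import numpy as np

N = 8          # truncation dimension (illustrative)
hbar = 1.0

# Annihilation operator a|n> = sqrt(n)|n-1> in the truncated basis.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
q = np.sqrt(hbar / 2) * (a + a.T)            # position
p = 1j * np.sqrt(hbar / 2) * (a.T - a)       # momentum

comm = q @ p - p @ q
# The upper-left (N-1)x(N-1) block is i*hbar*I; only the last diagonal
# entry is wrong, an artifact of cutting the basis off at N states.
block = comm[:N - 1, :N - 1]
assert np.allclose(block, 1j * hbar * np.eye(N - 1))
```

The spoiled corner entry is a reminder that the exact relation $[q,p]=i\hbar$ cannot hold for finite matrices (the trace of a commutator vanishes, while the trace of $i\hbar I$ does not).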
Second quantization further quantizes this electromagnetic field, so that now the light and the atom both have discrete structures. share|cite|improve this answer Consider a theory described by an action $S(\phi)$ with field $\phi \in \mathcal{P}$, where $\mathcal{P}$ is usually the set of sections of a bundle over some manifold $M$. The action admits a set $\mathcal{G}$ of gauge symmetries, $\phi \rightarrow \phi'$ such that $S(\phi) = S(\phi')$. One has quantized this theory when one has calculated, or has an algorithm that can calculate, $\int_{\mathcal{P} / \mathcal{G}} \mathcal{O}(\phi) e^{iS(\phi)/\hbar} \mathcal{D}\phi$ for any function $\mathcal{O}(\phi)$ on $\mathcal{P} / \mathcal{G}$. In the case of quantum field theory $\mathcal{D}\phi$ is usually ill-defined and the integral usually diverges. However, for a certain class of theories, so-called renormalizable theories, one can, more or less, make sense of this integral. An excellent treatment of perturbative renormalization, from a mathematical point of view, is found in Kevin Costello's soon-to-be-published book, Renormalization and effective field theory. share|cite|improve this answer A very basic answer: think about the classical Hamiltonian, $$ a(x,\xi)=\vert \xi\vert^2-\frac{\kappa}{\vert x\vert},\quad \text{$\kappa>0$ parameter}. $$ The classical motion is described by the integral curves of the Hamiltonian vector field of $a$, $$ \dot x=\frac{\partial a}{\partial\xi},\quad \dot \xi=-\frac{\partial a}{\partial x}. $$ The attempt to describe the motion of an electron around a proton by classical mechanics leads to the study of the previous integral curves and is extremely unstable since the function $a$ is unbounded from below. If classical mechanics were governing atomic motion, matter would not exist, or would be so unstable that it could not sustain its observed structure for a long time, with electrons collapsing onto the nucleus.
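The integral curves of this Hamiltonian vector field can be computed numerically; here is a minimal sketch in the plane with illustrative initial data (note $\dot x = 2\xi$, since the kinetic term is $\vert\xi\vert^2$ rather than $\vert\xi\vert^2/2$):

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa = 1.0

def hamilton(t, y):
    # y = (x1, x2, xi1, xi2); Hamilton's equations for a = |xi|^2 - kappa/|x|
    x, xi = y[:2], y[2:]
    r = np.linalg.norm(x)
    return np.concatenate([2 * xi, -kappa * x / r**3])

def energy(y):
    x, xi = y[:2], y[2:]
    return xi @ xi - kappa / np.linalg.norm(x)

y0 = np.array([1.0, 0.0, 0.0, 0.5])   # a bound orbit: a(x, xi) = -0.75
sol = solve_ivp(hamilton, (0.0, 20.0), y0,
                method="DOP853", rtol=1e-10, atol=1e-12)

# The Hamiltonian a(x, xi) is conserved along the integral curves.
assert abs(energy(sol.y[:, -1]) - energy(y0)) < 1e-6
```

With nonzero angular momentum this particular orbit stays bounded; the instability discussed above appears for data that approach the singularity, where $a$ is unbounded from below.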
Now, you change the perspective and you decide, quite arbitrarily, that atomic motion will be governed by the spectral theory of the quantization of $a$, i.e. by the selfadjoint operator $$ -\Delta-\frac{\kappa}{\vert x\vert}=A. $$ It turns out that the spectrum of that operator is bounded from below by some fixed negative constant, and this is a way to explain stability of matter. Moreover, the eigenvalues of $A$ describe with astonishing accuracy the energy levels of an electron around a proton (hydrogen atom). My point is that, although quantization has many various mathematical interpretations, its success is linked to a striking physical phenomenon: matter exists with some stability, and no explanation of that fact has a classical mechanics interpretation. The atomic mechanics should be revisited, and quantization is quite surprisingly providing a rather satisfactory answer. For physicists, it remains a violence that such refined mathematical objects (unbounded operators acting on - necessarily - infinite-dimensional spaces) have so many things to say about nature. It's not only Einstein's "God does not play dice", but also Feynman's "Nobody understands Quantum Mechanics" or Wigner's "Unreasonable effectiveness of Mathematics." share|cite|improve this answer I vote for "Nobody understands Quantum Mechanics". You are not Joking, Mr. Feynman :-) – Patrick I-Z Nov 20 '13 at 1:10 I'm gonna be a bit more down to earth and cover the basics of Weyl quantization (in units where $\hbar = 1$)...
The Hamiltonian is typically introduced first: starting from the de Broglie relation $p = k$ and the Einstein-Planck relation $E = \omega$ we can regard the (Weyl) correspondence principle heuristically as arising by viewing Fourier analysis through the lens of spectral theory for self-adjoint operators: i.e., we have $p \rightarrow -i\partial_x, \quad H \rightarrow i\partial_t$ which leads immediately to the Schrödinger equation, in which the energy levels are associated with eigenvalues of the Hamiltonian. The Euclidean version is obtained by a Wick rotation: $t = -i\tau \Rightarrow \partial_t = \partial_{-i\tau} = i\partial_\tau \Rightarrow H \rightarrow -\partial_\tau.$ The time evolution operator encoding the dynamics is just $U(t) = e^{-iHt}$. The rest is details or field theory. share|cite|improve this answer Here is a link to an article on quantization in physics: The article contains links to other articles on quantization including canonical quantization, geometric quantization, and Weyl quantization. Quantization involves converting classical fields to operators acting on quantum states of the field theory. share|cite|improve this answer
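A minimal check of the Weyl correspondence $p \to -i\partial_x$, $H \to i\partial_t$ mentioned above: substituting a plane wave with $\omega = k^2$ (units $\hbar = 1$, free particle) into the resulting Schrödinger equation (a sketch assuming sympy):

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True)

# Plane wave with de Broglie momentum p = k and energy E = omega = k^2.
psi = sp.exp(sp.I * (k * x - k**2 * t))

lhs = sp.I * sp.diff(psi, t)        # H -> i d/dt acting on psi
rhs = -sp.diff(psi, x, 2)           # p^2 -> (-i d/dx)^2 = -d^2/dx^2

assert sp.simplify(lhs - rhs) == 0  # free Schrodinger equation holds
```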
I'm trying to solve the initial value problem $(i\partial_t+\Delta_x)u(t,x)=0$, $u(0,x)=f(x)$ for the Schrödinger equation ($t\in\mathbb{R}$, $x\in\mathbb{R}^n$, $f$ Schwartz). I know that a fundamental solution is given by $K(t,x)=(4\pi it)^{-n/2}e^{i|x|^2/{4t}}$. How do I interpret $\sqrt{i}$ here? I'm trying to show that if I convolve the above fundamental solution $K$ with the initial data $f$ (convolution in the spatial variable $x$), then I obtain the solution to the initial value problem. Specifically, how do I prove that $K\ast f\rightarrow f$ as $t\rightarrow0$? More generally, what are the differences between this problem and the analogous problem for the heat equation $(\partial_t-\Delta_x)u(t,x)=0$ (here $t>0$)? [I know that the Schrödinger equation and fundamental solution are obtained from their heat counterparts via $t\mapsto it$.] Why is the Schrödinger equation time reversible (i.e. why can it be solved both forwards and backwards in time), while the heat equation isn't? The total integral of the heat kernel (with respect to $x$) is $1$; is the total integral of the "Schrödinger kernel" $K$ also equal to $1$? share|cite|improve this question 1 Answer 1 The two roots differ by a factor of $-1$. For one of the roots $\lim_{t\to 0} K* f = f$, for the other, $\lim_{t\to 0} K*f = -f$. Choose the one that gives the former (the correct one is the one given by the standard square root on $\mathbb{C}$ with branch cut along the negative real axis, so $\sqrt{i} = \exp \pi i / 4$). Convergence as $t\to 0$. If the data $f$ is in Schwartz class, you can just do it via the Fourier transform. But if you want to do it in more general function spaces, and get estimates on pointwise convergence, the issue is actually amazingly delicate (and not fully resolved yet). The relevant papers are that of Sjölin and Vega written simultaneously but independently. Difference from the heat equation.
In the heat equation case, the convolution kernels $H_t$ for $t > 0$ are actually Schwartz functions, and that family actually forms a family of "Good Kernels" (in the terminology of E. Stein), for which one has the general theorem: Theorem If $f$ is an integrable function that is continuous at $x_0$, and $H_t$ is a family of "Good Kernels" (or approximations to the identity), then $\lim_{t\to 0} H_t* f(x_0) = f(x_0)$. The Schrödinger kernel is actually quite far from being a "Good kernel" (the criteria for which are (a) integral 1, (b) absolute integral bounded, (c) restricted away from the origin, the $\lim_{t\to 0}$ of the absolute integral goes to zero; (b) and (c) quite clearly fail for the Schrödinger kernel). And so the above general theorem cannot be applied. But is the total integral 1? Yes. While the absolute integral of the Schrödinger kernel does not converge, its improper integral converges to 1 (one can evaluate this by, for example, taking a contour integral). An indication of this is that its Fourier transform is, by definition, $$ \hat{K}(t,\xi) = \exp (- i t \xi^2) \implies \hat{K}(t,0) = \exp 0 = 1 $$ Then we can note that by definition of the Fourier transform $$ \hat{K}(t,0) = \int_{-\infty}^\infty K(t,x) \mathrm{d}x. $$ So assuming all the relevant quantities converge, the value of its total integral must be 1. (One way to make the argument above still more precise is that for any Schwartz function $f$ one verifies, through the property of $K$ as a Fourier multiplier, that $ \int K* f \mathrm{d}x = \int f \mathrm{d}x $.) Time reversibility? One "meta"-argument that the Schrödinger equation, if it can be solved locally, must be time reversible, lies in the form of the equation.
Observe that the complex conjugation operation can be combined with the time reversal to show time reversibility (if $\Psi$ is a solution to Schrödinger's equation, $\bar{\Psi}$ solves the complex-conjugate equation, which is the same equation as the time-reversed equation). This lies in the fact that there is no canonical choice of which square root of -1 we call $i$ and which we call $-i$. So if the "Wick rotation" of the heat equation were to make sense as an evolution equation, it must be evolvable in both $+i$ and $-i$ imaginary time, and so must be time reversible. Mathematically (in order to illustrate the basic intuition, I shall commit the sin of lying by omission of more difficult details), the difference between the heat and Schrödinger equations is (roughly speaking) the difference between the real and complex exponential functions $e^t$ and $e^{it}$ for $t\in\mathbb{R}$. Up to complex conjugation, the complex exponential is "time symmetric": $e^{i(-t)} = \overline{e^{it}}$. But the function $e^t$ is quite clearly not time symmetric. Now, writing the formal solution to the heat equation $$ \partial_t u = \triangle u $$ using ODE type notation $$ u(t) = e^{t\triangle} u_0 $$ we see that since $\triangle$ is a self-adjoint operator with negative eigenvalues, $e^{t\triangle}$ is a contraction, and like $e^{t(-1)}$ is not time reversible, whereas $$ i\partial_t u = \triangle u $$ is solved by $$ u(t) = e^{-it\triangle} u_0$$ and now, since $\triangle$ is self-adjoint so that its eigenvalues are real, the exponent is purely imaginary. So the solution operator behaves like the complex exponential $e^{-it(-1)}$ which, as discussed above, is time reversible. share|cite|improve this answer I have found your (excellent) answer with Google and while reading it, I encountered a term that I do not understand: "the total integral of a kernel". Since the kernel of Schrödinger's equation has improper integral $1$, I assume that this must mean something else.
What exactly, please? And a second question: is there a known expression for the kernel in presence of a potential $V$? Thank you. – Alex M. Sep 5 '14 at 18:54 @AlexM. regarding your first point, you are absolutely correct. I made a mistake (halfway through writing my brain switched from the improper integral $\int K$ to the absolute integral $\int |K|$). I've fixed it above. For your second: no. There are some known expressions for some specific $V$, but none in general. – Willie Wong Sep 8 '14 at 8:58
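The claim that the improper integral of the Schrödinger kernel equals $1$ can be checked numerically in one dimension by inserting a Gaussian damping factor $e^{-\epsilon x^2}$ and taking $\epsilon$ small; the damped integral is exactly $(1+4i\epsilon t)^{-1/2} \to 1$ as $\epsilon \to 0$. A sketch (the values of $t$, $\epsilon$, and the cutoff are illustrative):

```python
import numpy as np
from scipy.integrate import quad

t, eps = 1.0, 1e-2

def kernel(x):
    # 1D Schrodinger kernel K(t, x) = (4 pi i t)^(-1/2) exp(i x^2 / (4 t)),
    # with the principal branch of the square root, as in the answer.
    return np.exp(1j * x**2 / (4 * t)) / np.sqrt(4j * np.pi * t)

re, _ = quad(lambda x: (kernel(x) * np.exp(-eps * x**2)).real,
             -60, 60, limit=800)
im, _ = quad(lambda x: (kernel(x) * np.exp(-eps * x**2)).imag,
             -60, 60, limit=800)

total = re + 1j * im
# Exact damped value is (1 + 4i*eps*t)^(-1/2), which tends to 1.
assert abs(total - 1) < 0.05
```

The absolute integral, by contrast, grows without bound as the cutoff increases, which is exactly why $K$ fails criterion (b) for a "Good Kernel".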
Monday 31 August 2009 Illusions of Theories of Everything The ultimate dream of theoretical physicists is a Grand Unified Theory GUT or a Theory Of Everything TOE as a mathematical equation in the form of a system of partial differential equations, the solutions of which would represent all there is in the World. We find this dream partially realized in specific areas of mechanics and physics identified by a specific system of partial differential equations, such as • fluid mechanics: Navier-Stokes equations • quantum mechanics: Schrödinger's equation • celestial mechanics: Newton's equations of motion/gravitation. One can argue that in a certain sense all of celestial mechanics, including the motion of the planets, comets, asteroids etc. in our Solar system, is represented as solutions to Newton's equations of motion/gravitation, that all of quantum mechanics is represented as solutions of Schrödinger's equation, and that all of fluid mechanics is represented as solutions of the Navier-Stokes equations. We can thus view Newton's equations, the Navier-Stokes equations and Schrödinger's equation as different forms of TOE, with Everything representing the totality of a certain part of the World, like a continent of the Earth. Newton seemed to be able to describe all of celestial mechanics by his equations of motion and gravitation in his TOE, which made Newton immensely famous, with godlike power attributed to him. Similarly one can argue that all of fluid mechanics can be described by the Navier-Stokes equations, and all of quantum mechanics by Schrödinger's equation, as different forms of TOE.
This can give a physicist in control of a TOE the illusion of superhuman power, but there is a catch: even if the equations can be written down in a couple of lines, like the Navier-Stokes equations, they can be impossible to solve by analytical mathematics representing solutions in terms of elementary functions such as polynomials and trigonometric functions. Analytical solutions of Newton's equations are known only for the two-body problem of one small body like the Earth orbiting a big body like the Sun. Already three bodies are beyond analytical representation, not to speak of the turbulent solutions of the Navier-Stokes equations. If we stop here, a TOE in the form of the Navier-Stokes equations would seem to be rather a theory of nothing than a theory of everything. This would be like a jeweler with diamonds still to be captured from the rock. Is a jeweler without jewels still a jeweler? However, one can compute digital solutions of the Navier-Stokes equations using computers, for specific choices of data, and in this way gain insight case by case using a Computational Theory of Everything. Some diamonds can thus be brought to the surface and put into rings to be admired. But we cannot get full insight in one shot. We cannot capture all diamonds in one day in a true TOE. One can argue that specific knowledge of e.g. fluid mechanics comes from specific computational solutions to the Navier-Stokes equations for specific data, and fluid mechanics is the totality of such specific knowledge. I give examples in my knols on fluid mechanics. See also So even if a GUT or TOE unifying quantum mechanics and gravitation were found in the form of one set of equations, e.g. string theory, the main task of computing and studying specific solutions would remain. We can make a parallel with Darwin's theory of evolution, based on an equation expressing genetic variability + selection by survival of the fittest.
Anybody can formulate this equation, and the non-trivial part of evolution theory is the study of specific solutions. Still, 150 years after Darwin, Richard Dawkins struggles hard to compute specific solutions of Darwin's equation... One can even argue that Darwin's TOE of evolution as variability + selection is trivial in the sense that it can be written down in one line, while the determination of specific solutions such as amoebas and human beings is highly non-trivial. We often hear physicists claim that the atomic electron structure of the periodic table of elements is a consequence of quantum mechanics, but on closer examination we find that the electron structure for atoms with more than one electron, that is all elements except Hydrogen, is unknown as a solution to the Schrödinger equation; see my knols on quantum mechanics. Likewise, one can argue that the secret of turbulence is hidden as solutions to the Navier-Stokes equations, a secret closed to analytical solution but open to exploration by computation, as is the n-body problem of celestial mechanics. The basic differential equations of celestial, fluid and quantum mechanics express basic physical laws of balance or conservation, such as Newton's 2nd law and conservation of mass, momentum and energy. These physical laws are what a blind Nature obeys in its evolution from one moment of time to the next in some form of analog computation, which can be mimicked by digital computation. To evolve according to physical laws does not require any intelligence, just work. In rare cases human intelligence allows shortcuts to analytical solutions, but in general only brute force computation is effective. This is what makes the world go round, whether it is understood by someone or not. Thursday 27 August 2009 Will Mathematicians Save the World, Again?
The free world was saved from the threat of Nazism, fascism and communism because free-world mathematicians were able to compute both how to make nuclear bombs and how to run Star Wars. Today we are told that mathematical climate models predict catastrophic global warming by CO2 emission from burning of fossil fuels, which represent 75% of the total energy production in the World. Based on these mathematical predictions, US President Obama stated at the G8 meeting in L'Aquila in July: Realization of these goals will require a major reorganization of the industrial world and threatens to keep the developing world from development. The necessity of the drastic actions required comes from predictions of mathematical climate models, and the question that the leaders of the world must pose concerns the reliability of these predictions. This is a question for mathematicians. Are mathematicians ready to once again save the World from catastrophe, by once again focussing on the most urgent question facing mankind? Let us see what the International Mathematical Union IMU has to say about global warming. Nothing, it seems. Strange! Are mathematicians not willing to save the World this time? The last International Congress of Mathematicians ICM, organized by IMU in Madrid in 2006, had no section on mathematical climate modeling, and the upcoming ICM in Hyderabad, India, in 2010 seems no better. Why? To Limit or Not to Limit Global Warming to 2 degrees C? Swedish Minister for the Environment Andreas Carlgren, leading the EU delegation to Washington DC, USA, on 23–26 August for climate negotiations, started out optimistically with the following message to the US: • It is vital that the US is involved in the next climate agreement if we are to manage climate issues. • The EU and the US have a common interest and task in helping to fund adaptation measures and technology transfer to developing countries.
This is crucial in order to enable the countries of the world to conclude an agreement in Copenhagen in December.
• The right conclusions must now be drawn for how the temperature rise is to be kept below 2 degrees Celsius.
However, today Carlgren reports pessimistically:
• Some wine, some water... the pace of the negotiations is slow, and they need a kick-start at political level if they are going to be concluded in Copenhagen.
The EU led by Sweden wants to save the world from overheating, but the US, China and India are slow to jump on the wagon. Why? Is it because they are not convinced by IPCC? Or are they convinced, but nevertheless choose to march on towards catastrophe? Is Obama stepping back from his bold plans for his presidency and his promise at the meeting in Aquila in July? Is the Copenhagen meeting collapsing even before starting? Sweden and Carlgren have a tough job to do... Maybe it is not so easy to convince rich people that they have to get poorer and poor people that they have to stay poor...

Listen to Roy Spencer Testimony to the US Senate Environment and Public Works Committee

onsdag 26 augusti 2009

tisdag 25 augusti 2009

New Flight Theory is Taking Off

Our new theory of flight is starting to get appreciated: Diego Gugliotta, professional teacher of aerodynamics to pilot students, expresses:
• I had a look at your Mathematical Theory of Flight, which indeed is very interesting. In my leisure time I'm a glider pilot and I also teach pilot students in aerodynamics. Professionally I'm an engineer educated at Aalborg University (thermodynamics, and a M.Sc. in system engineering).
• After reading your paper I really don't know what to do with my teaching. It is my impression that it is very difficult to know what to rely on when explaining why gliders fly at all, and it's obvious that lesson number one shall be by definition "why does it fly".
The last two years I adopted the Newton-Bernoulli approach, combined with Kutta-Zhukovsky's circulation theory, without really knowing how to explain such a circulation. I also experienced, like you also mentioned in your paper, that not even NASA explained the theory of lift.
• Your theory makes sense, and I'm looking to adopt it as the right theory of lift in my teaching, but now to the 1 million question: How do I explain to a 17 year old glider pilot student with only basic school education the theory of lift? Any good idea?
The reaction of Diego Gugliotta supports our experience that not even NASA can explain why it is possible to fly, as illustrated on my blogs listed under theory of flight, including interviews with NASA Glenn Research Center and my flight expert colleagues at KTH. An answer to the question by Diego can be:
• Redirection of the incoming flow down will give a reaction up = lift. The flow gets redirected if it does not separate on the top of the wing before the trailing edge. Separation is only possible at a stagnation point. Since the flow is only slightly viscous and thus slides along the wing surface with small friction, stagnation cannot occur before the trailing edge. Hence there is lift. OK? Note that it is crucial that the flow has small viscosity: You cannot glide in syrup.
Diego answers:
• As a further comment you may note that I don't believe aerodynamics is anything for pilots. I believe I should explain how, and not why.
• HOW: It's a fact that there is a differential pressure between the upper and the lower part of a wing. It's a fact, and it's very easy to demonstrate even in a classroom, that differential pressure times area ends up with a force.
• WHY: It's a fact as well that Bernoulli holds, and that Newton's 3rd law also holds. However, at least for me, circulation is not a fact, and there is where all my "whys" end up in nonsense. It doesn't necessarily mean that it doesn't hold.
It's just not me the one to disclose this eventual fact, as it requires time, dedication and research; exactly the three parameters you and Johan utilize in your work. You tried to disclose the circulation fact, but in your well documented paper you ended up rejecting this theory. Your work makes sense, and I hope to see soon the reaction of other researchers working in this field, so they can explain the theory of lift to someone that indeed can work out Euler's equations. Thank you for your work.
Thanks Diego. I think our new theory can be presented to pilots and can also be understood and appreciated by pilots, because it is a correct understandable theory, and nothing is more practically useful than a correct understandable theory. Right?

Obama and Reinfelt Saving the World

Obama announced his presidency plans in New Direction on Climate Change:
• Few challenges facing America and the World are more urgent than combating climate change.
• The science is beyond dispute, the facts are clear:
• Sea levels are rising, coast lines are shrinking, record drought, spreading famine and storms that are growing stronger with each passing hurricane season.
• Climate change and our dependence on foreign oil, if left unaddressed, will continue to weaken our economy and threaten our national security.
• We will invest $15 billion each year to catalyze private sector effort to build a clean energy future: We will invest in solar power, wind power and the next generation of biofuels.
• This investment will not only help us reduce our dependence on foreign oil, making the US more secure, and will not only help us bring about a clean energy future saving the planet, but it will also help us transform our industry and steer our country out of the economic crisis by creating 5 million new green jobs that pay well and cannot be outsourced.
Obama seems to believe that the science beyond dispute is represented by chief climate activist James Hansen, NASA Goddard Institute for Space Studies, stating in a presentation in DC in 2007:
• Why should I be speaking out?
• I think there is a huge gap between what is understood about global warming by the relevant scientific community and what is known about global warming by those who need to know, the public and policy makers.
• There is an urgency in the problem because of the large inertia in the systems. We have had in the last 30 years 0.5 degree C of global warming. But there is another half degree in the pipeline because of gases that are already in the atmosphere, and another half degree because of energy infrastructure which is in place.
• Even though the climate change so far is just beginning to be noticeable, there is a lot more in the pipeline.
• If we follow the present course for another 10 years, we will have a different planet: No ice in the arctic, sea level rise of 6 meters and extermination of species. It's an urgent problem to begin to address.
• Greenland and Antarctic ice sheets are decreasing. Sea level is now rising 35 cm per century but the concern is that it is a very nonlinear process which could cause a sea level rise of 5-6 meters over a century. If we continue with business as usual we will get global warming of 2-3 degrees C. We need to get on a different track within the next few years.
• We cannot burn fossil fuels unless we capture the CO2.
Our Prime Minister Fredrik Reinfelt also believes in James Hansen, and Obama, as he prepares for the Copenhagen Climate Council in December:
• I have on behalf of the EU welcomed the new signals and leadership now shown on climate change from the US administration. We are following very closely what they are intending to do and hoping to come together in our efforts.
I think it is extremely important with the incoming EU presidency of Sweden to be very active in talks and in working together between the EU and US on this issue.
But the science of global warming is not beyond dispute, as I have discussed on previous blogs. Suppose Obama gets to know that the science is disputed and that the facts are not clear. What would he then say? And what would Reinfelt then say? Would that change the subject of the talks? But Obama's idea of saving at the same time the US from both the economical crisis and energy security threats, and the World from burning up, is clever, maybe even too clever...

måndag 24 augusti 2009

Reality of the Virtual vs Virtual Reality

Slavoj Zizek suggests complementing the concept of virtual reality as reproduction of reality with the concept of reality of the virtual. Zizek compares virtuality of the real with reality of the virtual with examples from politics, sociology, psychoanalysis and also physics, which connects to my knols Simulation Technology, Simulations of Wittgenstein and Hyperreality in Physics. In his discussion of the concept of reality of the virtual, Zizek uses the Lacanian triad of imaginary-symbolic-real applied to the concepts of virtual and real:
• imaginary virtual
• symbolic virtual
• real virtual
• imaginary real
• symbolic real
• real real
which Zizek characterizes, in short, as:
• imaginary virtual: filtered virtual image of e.g. other people
• symbolic virtual: beliefs which have to be virtual to be operative, like paternal authority, Santa Claus, democracy.
• real virtual: to be defined, the jewel of the collection and, recalling the Lacanian definition of real = that which resists symbolization,
• imaginary real: images too strong to be directly confronted
• symbolic real: scientific formulas like quantum physics, which work but which appear to be meaningless with regard to our ordinary notion of reality
• real real: core of real, obscene shadow of symbolic real, undertext of e.g. Sound of Music and Shortcuts.
Zizek recalls Donald Rumsfeld's decomposition of knowledge into known knowns, known unknowns and unknown unknowns. Zizek then completes with unknown knowns = things we don't know that we know = the unconscious, which he seems to view as a form of reality of the virtual. To explore the relation between reality of the virtual and virtuality of the real, Zizek considers Einstein's theory of gravitation connecting mass to curvature of space, which can be viewed in two ways:
• mass defines curved space = real defines virtual = virtual reality
• curved space defines mass = virtual defines real = reality of the virtual
Similarly, the Newtonian theory of gravitation connecting mass to gravitational potential can be viewed in two ways:
• mass defines potential = real defines virtual = virtual reality
• potential defines mass = virtual defines real = reality of the virtual
as discussed in The Hen and the Egg of Gravitation, with the message that it is not so clear what is most real: mass or gravitational potential. It may depend on our senses. In psychoanalytic terms the connection between trauma and symbolic space can be viewed as
• trauma deforms symbolic space = virtual reality
• deformed symbolic space generates trauma = reality of the virtual
or in fascism/antisemitism
• Jews deform social space into social antagonism = virtual reality
• social antagonism deforms social space into antisemitism = reality of the virtual
Evidently, a relation of cause-effect is represented by the order of real-virtual, with the usual way of thinking being that the real precedes the virtual. But Zizek says that the cause-effect can be turned around, as in the theory of gravitation, in which case the virtual precedes the real. Finally, if the cause-effect relation is unclear or irrelevant, virtual reality = reality of the virtual.
Of course there is a connection to body-soul: The soul is not only a representation of reality, but the soul lives its own life and generates its own reality. Further, there seems to be a connection between reality of the virtual and hyperreality = image without real origin. In the context of a mathematical model like the Navier-Stokes equations expressing physical laws of balance,
• digital computational solutions of the NS equations are representations of reality = virtual reality
• reality is created by analog computational solution of balance laws = reality of the virtual.

lördag 22 augusti 2009

Penguin Logic of IPCC

Vincent Gray summarizes his experience as expert reviewer for the Intergovernmental Panel on Climate Change IPCC in The Triumph of Double-Speak as follows:
• Despite over 20 years of effort and four major Reports, the IPCC has not succeeded in providing any evidence that increases in greenhouse gases are having a measurable effect on the climate. Why is it, then, that so many people believe that they have done so? The answer lies in their subtle use of doublespeak, the technique of creating confusion by manipulation of language. This newsletter shows how they have confused and twisted the meanings of words in such a way as to create triumph out of failure.
If what Gray claims is true, then the Copenhagen Climate Council based on the IPCC reports does not have to open, and the leaders of the world can focus on solving real problems instead of creating real problems by inventing imaginary problems. Let's see if Gray's analysis is correct by going to the documents, focusing on the Fourth Assessment Report AR4 from 2007. In particular, let's check if it represents a form of Science of Penguin Logic or pseudo-science.
AR4 states in the Technical Summary:
• While this report provides new and important policy-relevant information on the scientific understanding of climate change, the complexity of the climate system and the multiple interactions that determine its behaviour impose limitations on our ability to understand fully the future course of Earth's global climate.
• There is still an incomplete physical understanding of many components of the climate system and their role in climate change. Key uncertainties include aspects of the roles played by clouds, the cryosphere, the oceans, land use and couplings between climate and biogeochemical cycles.
• The areas of science covered in this report continue to undergo rapid progress and it should be recognised that the present assessment reflects scientific understanding based on the peer-reviewed literature available in mid-2006.
• Models differ considerably in their estimates of the strength of different feedbacks in the climate system.
• Equilibrium climate sensitivity is likely to be in the range 2°C to 4.5°C with a most likely value of about 3°C, based upon multiple observational and modelling constraints. There is a good understanding of the origin of differences in equilibrium climate sensitivity found in different models. Cloud feedbacks are the primary source of inter-model differences in equilibrium climate sensitivity.
• The overall response of global climate to radiative forcing is complex due to a number of positive and negative feedbacks that can have a strong influence on the climate system radiative balance.
The key quantity is climate sensitivity measuring global warming vs doubling of the CO2 level in the atmosphere: IPCC states that it is likely to be in the range 2° to 4.5° C, with, according to the IPCC Uncertainty Guidance, likely = probability > 66%.
To help interpretation of this statement IPCC informs us:
• Finally we come to the most difficult question of when the detection and attribution of human-induced climate change is likely to occur. The answer to this question must be subjective, particularly in the light of the very large signal and noise uncertainties discussed in this chapter. Some scientists maintain that these uncertainties currently preclude any answer to the question posed above. Other scientists would and have claimed... that confident detection of a significant anthropogenic climate change has already occurred...
This can be interpreted as a reservation that convincing scientific support of the IPCC climate sensitivity estimate is lacking. But using doublespeak it is also interpreted by IPCC as something close to a truth:
• Most of the observed increase in globally averaged temperature since the mid 20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations.
We see that IPCC oscillates between not-knowing: the most difficult question is when detection of human-induced climate change is likely to occur, and knowing: is very likely due to... anthropogenic greenhouse gas. This is an extreme form of doublespeak, which is also practiced by modern theoretical physicists in search of a Theory Of Everything saying nothing about the physics of the world we live in. Knowing everything and nothing at the same time! Let us analyze the logic of the key statement of IPCC:
• Climate sensitivity between 2° and 4.5° C with probability > 66% = likely.
Suppose we compare with the following possible statement by IPCC:
• Climate sensitivity between 1° and 10° C with probability > 95% = extremely likely.
This statement could seem more alarming by threatening with an extreme of 10° C combined with extremely likely. IPCC could take one step further to
• Climate sensitivity between -10° and +20° C with probability > 99% = virtually certain.
which could seem even more alarming.
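The widening-interval rhetoric can be checked with a small numerical sketch. Assume, purely for illustration (IPCC specifies no such distribution), that climate sensitivity follows a normal distribution with mean 3°C and standard deviation 1.5°C; then each wider interval trivially carries a higher probability while saying less:

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    # CDF of a normal distribution, via the error function
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def interval_prob(lo, hi, mu, sigma):
    # Probability that the sensitivity falls in [lo, hi]
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

# Hypothetical distribution: mean 3 C, standard deviation 1.5 C
mu, sigma = 3.0, 1.5

print(interval_prob(2.0, 4.5, mu, sigma))    # the IPCC-style "likely" interval
print(interval_prob(1.0, 10.0, mu, sigma))   # wider, hence "more certain"
print(interval_prob(-10.0, 20.0, mu, sigma)) # nearly certain, nearly content-free
```

The wider the stated interval, the closer its probability gets to certainty, and the emptier the statement becomes.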
We seem to be led to the conclusion that IPCC uses Penguin Logic. What do you think? Compare also Sheep Herd Accuracy.

fredag 21 augusti 2009

Feedback, Sensitivity, Cancellation and Duality

The sensitivity of a mathematical model is a measure of the effect on a certain model output from variation of certain model input data. The sensitivity to errors in data, modeling and computation directly connects to the accuracy of a model. Climate sensitivity is primarily concerned with the effect on the global mean temperature from increasing the CO2 concentration in the atmosphere. Concerning the climate sensitivity of current climate models, IPCC states:
• Spread in model climate sensitivity is a major factor contributing to the range in projections of future climate changes.
• Consequently, differences in climate sensitivity between models have received close scrutiny in all four IPCC reports.
• Climate sensitivity is largely determined by internal feedback processes that amplify or dampen the influence of radiative forcing on climate.
• (A) To assess the reliability of model estimates of climate sensitivity, the ability of climate models to reproduce different climate changes induced by specific forcings may be evaluated.
• (B) An alternative approach, which is followed here, is to assess the reliability of key climate feedback processes known to play a critical role in the models' estimate of climate sensitivity.
Here (A) is a reasonable way of testing climate sensitivity, and gives a large spread shown in Fig 10.2, while (B) boils down to:
• To assess the reliability of model estimates of climate sensitivity, we assess the reliability of key climate feedback processes known to play a critical role in the models' estimate of climate sensitivity.
In other words, assessment of climate model sensitivities is replaced by assessment of the feedback processes built into the model.
But this is an internal check which appears to be circular: You build a certain feedback process into the model and you then test model sensitivity by testing the validity of the feedback process you have put in. But in most cases you cannot isolate and experimentally test the validity of the feedback process you have put in: If you could directly observe climate sensitivity experimentally, then climate models would serve no purpose. But some sensitivities can be observed experimentally, and thus can serve as reliability tests of climate models. This is done in a recent article by Richard Lindzen showing that the radiation sensitivity of current climate models with respect to surface temperature does not fit with observations, as shown in the above figure with ERBE radiation measurements: The climate models show too little radiation. Something is apparently wrong with the climate models, and there are many things that could be wrong... In our exploration of the secret of turbulence by computation, we have studied output sensitivity by duality techniques based on solving associated dual linearized problems, and we have found that local exponential turbulent perturbation growth is controlled by effects of cancellation. In turbulent flow, an important characteristic of climate atmosphere/ocean circulation, cancellation means that the worst combination of effects does not occur: Increase in space-time is balanced by decrease in space-time so that the net effect is smaller than worst case. Duality techniques should be able to offer important information on sensitivity also in climate models, but current models lack this capability and there seems to be room for improvement... will cancellation and duality help save humanity?
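The duality technique mentioned above, computing output sensitivity by solving an associated dual (adjoint) linearized problem, can be sketched on a minimal linear model. The 2×2 system, data and output functional below are hypothetical illustrations, not the actual turbulence computations referred to in the post:

```python
def solve2(A, b):
    # Direct solve of a 2x2 linear system by Cramer's rule
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    x0 = (b[0]*A[1][1] - A[0][1]*b[1]) / det
    x1 = (A[0][0]*b[1] - b[0]*A[1][0]) / det
    return [x0, x1]

def transpose(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

# Primal problem A u = b(p), output J(p) = c . u  (all data illustrative)
A = [[4.0, 1.0], [2.0, 3.0]]
c = [1.0, -1.0]

def b(p):
    return [p, 2.0*p]

def J(p):
    u = solve2(A, b(p))
    return c[0]*u[0] + c[1]*u[1]

# Dual (adjoint) problem: A^T lam = c, then dJ/dp = lam . db/dp
lam = solve2(transpose(A), c)
dbdp = [1.0, 2.0]
dJdp_adjoint = lam[0]*dbdp[0] + lam[1]*dbdp[1]

# Cross-check against a finite-difference derivative of J
eps = 1e-6
dJdp_fd = (J(1.0 + eps) - J(1.0)) / eps
print(dJdp_adjoint, dJdp_fd)
```

One dual solve yields the sensitivity of the chosen output to all data perturbations at once, which is what makes the approach attractive for large models.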
torsdag 20 augusti 2009

Malthus is Back Again

Mathematics can be a powerful tool: In his famous treatise An Essay on the Principle of Population, first published in 1798, Thomas Robert Malthus presented a mathematical analysis predicting exponential population growth in time, while food supply would have a much slower linear growth in time, later referred to as Malthus' Principle of Population. Malthus thus predicted mathematically an inevitable collapse of human civilization if actions were not taken to limit population growth. But the mathematics of Malthus was wrong: populations did not grow exponentially and food supply not linearly: human civilization did not collapse. Not yet at least... Nevertheless, Malthus is today back again: Based on mathematical climate models the UN International Panel of Climate Change IPCC predicts exponential growth of the global temperature caused by burning of carbon-based fuels, which will lead to a collapse of human civilization on an overheated Earth, if actions are not taken to limit CO2 emission, now. Exponential growth is thus feared, but our capitalistic society is driven by dreams of exponential growth at x% per year of
• GNP
• investments
• income
• house prices...
But steady exponential growth is not possible, because it will surpass any limit in finite time: The exponential growth of a financial bubble is eventually followed by a financial crisis until the next bubble can start to grow, exponentially. The overall growth is not exponential because of negative feed-back: The bubble is followed by a compensating crisis. Exponential growth represents positive feed-back: The more it grows, the more rapidly it grows. A dynamical system with positive feed-back exponential growth is unstable and in order to survive without explosion has to develop a different dynamics, somehow curbing the growth by stabilizing negative feed-back. This is the nature of turbulence, which is a fundamental aspect of climate.
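The contrast between pure positive feed-back and growth curbed by negative feed-back can be illustrated by time-stepping the exponential model u' = r·u against the logistic model u' = r·u·(1 - u/K); the parameter values are illustrative only:

```python
# Forward Euler time-stepping of pure exponential growth u' = r*u
# (positive feed-back only) versus logistic growth u' = r*u*(1 - u/K),
# where the (1 - u/K) term is the stabilizing negative feed-back.

def step_exponential(u, r, dt):
    return u + dt * r * u

def step_logistic(u, r, K, dt):
    return u + dt * r * u * (1.0 - u / K)

r, K, dt = 0.05, 100.0, 0.1
u_exp, u_log = 1.0, 1.0
for _ in range(10000):  # integrate to t = 1000
    u_exp = step_exponential(u_exp, r, dt)
    u_log = step_logistic(u_log, r, K, dt)

print(u_exp)  # grows without bound
print(u_log)  # levels off near the carrying capacity K
```

The logistic solution grows nearly exponentially at first, but the negative feed-back term caps it at the carrying capacity K instead of letting it surpass every limit.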
Also compare with the climate feed-back analysis by Richard Lindzen:
• The earth's climate (in contrast to the climate in current climate models) is dominated by a strong net negative feed-back. Climate sensitivity is on the order of 0.3°C, and such warming as may arise from increasing greenhouse gases will be indistinguishable from the fluctuations in climate that occur naturally from processes internal to the climate system itself.
The mathematics of exponential growth can be captured analytically and thus is attractive to a mathematical theoretical mind, but it is too simplistic to capture the dynamics of complex systems such as human populations or turbulence. Similarly, the IPCC mathematical climate models are most likely too simplistic to capture the dynamics of the complex system of global climate. Malthus' Principle of Population and the IPCC mathematical models seem to have the same degree of realism. In the previous blog I noted that global climate and human population now connect on the agenda of the Optimum Population Trust endorsed by Sir David Attenborough:
• World population is projected to rise from today's 6.8 billion to 9.15 billion in 2050. The World Population Clock is ticking. We are rapidly destabilising our climate and destroying the natural world on which we depend for future life.
• The West should provide money to promote contraception in the Third World and poor countries would be denied 'carbon allowances' unless they control their numbers.
• Progress on climate change is being seriously hampered by the widespread refusal to acknowledge the link between total greenhouse emissions and the sheer numbers of emitters.
• It is time we abandoned this crazy taboo.
Is this also the agenda of the upcoming UN Copenhagen Climate Council? To limit the number of emitters according to Malthus' Principle of Population? To deny poor people carbon allowances unless they control their numbers? Is Malthus back again? What do you think?
onsdag 19 augusti 2009

Authority vs Science: Unreason vs Reason

Leading MIT atmospheric physicist and climatologist Richard Lindzen, in a talk on The Politics of Global Warming at the International Conference on Climate Change, New York City, March 8 2009, reminds us about a few simple truths concerning science in general and the science of climate modeling in particular:
• Endorsing global warming as a scientist just makes life easier.
• Most arguments about global warming boil down to science vs authority. For much of the public, authority will win, since they do not want to deal with science.
• The climate alarm movement has control of carrots and sticks; most funding for climate would not be there without alarm.
• What can be done is to better understand science, in particular the logic of science. Actually, science and logic are often not that hard to understand.
• Current climate models have large positive feed-backs, with thermal radiation decreasing under increasing sea surface temperature, while Nature most likely has negative feed-back. Getting people, including many scientists, to understand this is crucial.
• The global warming issue has done much to set back climate science; in particular the notion that climate is one-dimensional, totally described by some fictitious global mean temperature and some single gross forcing a la CO2 level, is grotesque in its oversimplification.
Lindzen tells us something important: Good science and scientific logic can be understood by many. Authority cannot win against science in the long run.
However, in the short run it can, as is illustrated in the previous blog: Evidently Sir David Attenborough has little understanding of the mathematics of climate models, and thus can easily be convinced that the predictions of climate models are the truth: If climate models show global warming of up to 10 degrees Celsius over the next hundred years, because the accuracy is not better than 10 degrees, then we have to take action to prevent a certainly dangerous increase of 10 degrees. But is it reasonable to keep poor people from increasing their standard of living because climate models are inaccurate? Is it? Note that Sir David Attenborough has joined the Optimum Population Trust with the following modest proposal on its agenda:
• It is time we abandoned this crazy taboo.
The idea to limit energy consumption of poor people until they have become rich enough to have few children is amazing in its inhuman Moment 22 stupidity. Is this also a result of climate models? What does Sir David Attenborough say? Maybe it is time for an interview...

måndag 17 augusti 2009

Sir David Attenborough: The Truth About Climate Change

The science of climate modeling predicting global warming by anthropogenic emission of CO2 from carbon-based fuels is nicely summarized by the legendary Sir David Attenborough in The Truth About Climate Change:
• The key question is: How can we distinguish between climate variations induced by natural causes and by CO2 emission?
• The key thing that convinced me was a temperature graph prepared by climate scientist Professor Peter Cox showing that a climate model with CO2 emission included can reproduce the temperature during the 20th century better than without.
• So there you have it: It seems little doubt that this recent rise, this steep rise in temperature, is due to human activity.
• It is clear that without the action of human beings there would have been far less temperature change since the 1970s.
The science of climate change is the science of climate modeling. Sir David Attenborough became convinced by looking at a graph produced by running a certain mathematical climate model with and without a certain greenhouse effect included. But he did not ask the natural question:
• How reliable and accurate are climate models?
Suppose Sir David Attenborough was informed that climate models are not reliable, that their accuracy is unknown; would that change his conviction based on a single graph being the output of a climate model? Suppose he was informed that climate models are constructed so as to give the result of the graphs, a graph which is the result of the modeling activity of human beings. What would he then say? Compare previous blogs on climate simulation.

lördag 15 augusti 2009

Swedes in the Lead of Climate Control

Swedish scientists have played a leading role in global climate modeling: Svante Arrhenius studied the greenhouse effects of CO2. Carl-Gustaf Rossby pioneered computational weather forecasting, sharing Arrhenius' concern about CO2. His student Bert Bolin took the issue one step further in a 500 page report of a 1985 conference leading up to the following warning:
• In the first half of the next century a rise of global mean temperature could occur which is greater than any in man's history.
Bert Bolin was the first chairman of the UN International Panel on Climate Change IPCC 1988-1998, forming the basis of the Kyoto Protocol on control of CO2 emission negotiated in December 1997. Bert Bolin passed away in January 2008, at 82. His successor is Erland Källén. Al Gore told Bert Bolin in a written statement in December 2007:
• Bert, you set up the framework for the IPCC and without your contributions we would not have come to where we are today. Thank you for starting the process.
fredag 14 augusti 2009

Al Gore in the Kingdom of Denmark

• 150 years ago the scientist John Tyndall in the UK discovered for the first time that CO2 intercepts infrared radiation/heat.
• From his discovery followed a great deal of work that led to growing concern that from the rapid accumulation of CO2 in the atmosphere, the build up of heat in the atmosphere and ocean would reach dangerous levels.
• This year an important event will take place in this hall, in this city, in this Kingdom. All nations will gather in an effort to secure a treaty limiting the accumulation of greenhouse gases and the emissions that lead to this accumulation.
• We are now facing three interrelated crises: climate, financial and energy security, all three linked by a common thread to an absurd overdependence on carbon-based fuels. If we grab hold of that thread and pull it, these crises begin to unravel and we hold in our hands the answer to all three:
• A historic shift from expensive, vulnerable, polluting carbon-based fuels to new sources of energy that are free forever: wind, solar and earth. In Denmark now 1/4 comes from wind.
• CO2 is tasteless, odorless, colorless and has no price tag. It does trap heat.
• More increases are in store because of the heat built up in the oceans that will be released into the atmosphere.
• We must put top urgent priority on preventing the catastrophe that would befall us if we did not act.
• The changes that are now needed will require participation and leadership from all parts of civilization.
• It is critically important that we get the rules of the market place correct and that the signals we derive from the market are ones that accurately reflect human values, so that we can make decisions... that will allow us to live our lives in ways that are in keeping with what we know to be right.
• There is a very simple test of what is right where the climate is concerned: If the next generation looks back at this year and sees around them the worsening catastrophes that were foretold if the world did not act... If they look back on us and ask: What were they thinking, why did they sit on their hands, why did they choose not to take action to avoid the horrendous catastrophe that the scientific community spelled out to them, and told them would happen if they did not act.
• If they instead see around them in their world millions of good green jobs, a spirit of renewal, a sense of optimism and hope, a feeling that, yes, we can deal with the problems. If they look back with gratitude, this means we have done our job.
• But there is not much time; we have to do it this year, not next year. The clock is ticking, because mother Nature does not do bail-outs:
• We have already, as predicted, seen increasing droughts, destructive fires, stronger storms, record flooding, spread of tropical diseases...
• But there is good news: The world's business community and leaders are beginning to respond.
• Our policies in the US are changing: President Obama within one month passed the largest green renewable energy stimulus bill in history.
• Every nation and business has a leadership role to play.
In short, Gore first claims that scientists have shown that:
• CO2 emission causes global warming, which will cause horrendous catastrophe,
and then makes a political call:
• The Leaders of the World have to act and limit CO2 emissions.
UN Secretary General Ban Ki-Moon backs up Al Gore's message in his address to the Global Environment Forum, sending the scary message:
• droughts, floods and other natural disasters... as well as mass social unrest and violence... human suffering will be incalculable... if the world's leaders do not seal a deal on climate change... in Copenhagen...
• We have just four months. Four months to secure the future of our planet.
But scientists do not seem to agree on answers to the basic questions:
• How much global warming is caused by CO2 emission?
• What will be the effects of global warming?
• What will be the effects of limits on CO2 emission for the developing world?
In order for the December meeting in Copenhagen to be meaningful, some answers seem to be required... unless everything is just politics for the Leaders of the World... One question naturally presents itself: Does the ambition of the World Leaders of industrial countries to limit the use of carbon-based fuels come from self-interest, to guarantee continued access to these fuels?
Modernity in Physics, Arts and Music
In the previous blog I observed that modern physics is based on assumptions which cannot be directly verified experimentally. It appears that this aspect of modernity has a parallel in non-figurative art and atonal music, which together with modern physics emerged in the beginning of the 20th century, in what can be viewed as a preparation for the collapse of classical Western culture in the First World War followed by the Second. 
An aspect of the modernity of non-figurative art is that it is impossible to compare the painting with what is supposed to be depicted. A non-figurative painting does not depict something explicitly, if anything only implicitly, as in the above painting. 
An aspect of the modernity of atonal music is that it is impossible to decide what the key is. Atonal music does not show tonality directly, if at all only indirectly. 
The modernity of modern physics is that it is impossible to experimentally verify basic assumptions. The basic assumptions thus do not reveal their character directly, only indirectly through traces, as shown in the above picture from a particle experiment. In classical physics basic assumptions such as Hooke's law, Fourier's law and Coulomb's law can be directly checked in experiments. But the quanta, photons and quarks of modern physics cannot be observed directly. 
Modern physics, art and music thus share a common quality of hiding from inspection, of implicitness and indirectness, like the shadows of Plato which only vaguely indicate their origin, if any. We can compare with life in the modern city, where the qualities of the people we meet in the street are hidden from direct inspection, only available indirectly through clothing and appearance, while in the classical village everybody directly knows everything about everybody.
Wednesday, 12 August 2009
Logic of Penguin Science = ??
The statement A implies B means that if A is true, then B is also true. An elementary mistake in logical scientific reasoning is to conclude that if A implies B and B is observed to be true, then A is true. But this is to confuse
A implies B
B implies A
We illustrate: Let 
• A = You bang your head into a wall. 
• B = You have a headache.
We could probably agree that there is theoretical evidence that A implies B: head bang leads to headache, in theory at least. Suppose now that B is true, that is, suppose that you have a headache. Can we then conclude that A is true, that is, that you banged your head into a wall? Not necessarily: You may get a headache from other causes, like drinking too much alcohol. It can even be that the implication that you get a headache from a head bang is incorrect, so that there is no connection at all; you may have an unusually solid skull. Yet this type of logic is a trademark of modern physics/science: 
• If we assume that a gas is in a state of molecular chaos, with the velocities of two molecules before collision being statistically independent, then we can theoretically derive Boltzmann's equation, which has certain solutions which agree with certain observations. Hence the gas is in a state of molecular chaos.
• If we assume that there is a smallest quantum of energy, then we can theoretically derive a formula for the spectrum of black-body radiation, which agrees with observation. Hence there is a smallest quantum of energy.
• If we assume that light consists of particles named photons, then we can theoretically derive a formula for photoelectricity, which agrees with certain observations. Hence light consists of photon particles. 
• If we assume Pauli's exclusion principle, then we can explain certain observed atomic electron configurations. Hence electrons obey Pauli's exclusion principle.
• If we assume that the wave function collapses at observation, then we can theoretically explain certain observed blips on a screen. Hence the wave function collapses at observation.
• If we assume Heisenberg's uncertainty principle for elementary particles, then we can theoretically explain an observed interaction between observer and observed particle. Hence elementary particles obey Heisenberg's uncertainty principle. 
• If we assume that a proton consists of three quarks, then we can theoretically derive a formula for the observed mass of a proton. Hence a proton consists of three quarks.
• If we assume that spacetime observations of different observers are connected by the Lorentz transformation of special relativity, then we can theoretically explain the observation that the speed of light is the same for all observers. Hence spacetime observations of different observers are connected by the Lorentz transformation.
• If we assume that spacetime is curved, then we can theoretically explain observed gravitation. Hence spacetime is curved.
• If we assume there was a Big Bang, then we can theoretically explain the observed expansion of the Universe. Hence there was a Big Bang.
• If we assume there is a black hole at the center of a galaxy, then we can theoretically explain the observed shape of the galaxy. Hence there is a black hole at the center of the galaxy.
• If string theory predicted an observable phenomenon, it would follow that matter consists of tiny vibrating strings.
• If we assume that the Earth rests on four invisible tortoises, then we can theoretically explain why the Earth does not fall down. Hence the Earth rests on four invisible tortoises.
• If we assume that CO2 is a critical greenhouse gas, then we can theoretically explain observed global warming. Hence CO2 is a critical greenhouse gas.
Do you see the possibly incorrect logic in these statements? If so, do you see the potential danger of such possibly incorrect logic? Do you think such possibly incorrect logic represents science or pseudo-science? 
Notice that in all the above cases, the fact that a certain phenomenon is observed, which can be theoretically explained from a certain assumption, is used to argue that the assumption is not just an assumption but a true fact: There is molecular chaos and a smallest quantum of energy, electrons do respect the exclusion principle, the Lorentz transformation must connect different observations, spacetime is curved, light is photons, there was a Big Bang, there is a black hole in the center of a galaxy, a proton is three quarks, the Earth is resting on four tortoises, CO2 is a critical greenhouse gas. 
Notice also that in all cases it is impossible to directly check whether the assumption is valid, which is part of the beauty. The assumption is hidden from inspection and can only be tested indirectly: It is impossible to directly observe molecular chaos, a smallest quantum of energy, a photon, an electron, particle exclusion, wave-function collapse, uncertainty, a quark, spacetime curvature, a black hole, a tortoise, a string... or that CO2 is a critical greenhouse gas. It is therefore impossible to directly disprove their existence... Clever, but there is an obvious drawback, since the existence is also impossible to verify... science or pseudo-science? The argument is that the assumption must be true, because this is the only way a theoretical explanation seems to be possible.
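The fallacy described above — inferring A from "A implies B" together with an observation of B — can be checked mechanically by enumerating all truth assignments; a minimal sketch, using nothing but the head-bang example:

```python
from itertools import product

def implies(p, q):
    # Material implication: "p implies q" is false only when p holds and q fails.
    return (not p) or q

# A = "you bang your head into a wall", B = "you have a headache".
# Look for assignments where (A implies B) holds and B holds, yet A is false.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if implies(a, b) and b and not a
]
print(counterexamples)  # [(False, True)]: a headache without a head bang
```

The single counterexample (A false, B true) is exactly the drinking-too-much-alcohol case: the premises are satisfied, the conclusion is not.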
Our inability to come up with an alternative explanation is thus used as evidence: The more we restrict our creativity and perspective, the more sure we get that we are right. Convincing, or penguin science?
Compare with the same logic in a trial: If we assume X had a reason to kill Y, then we can theoretically explain the observed murder of Y. Hence X had a reason to kill Y. And thus probably did it! What if you were X?
Notice in particular that present climate politics is based on the idea that CO2 is the cause of the observed global warming, with the motivation that certain theoretical climate models show global warming from CO2. But the observed modest global warming during the 20th century of 0.7 degrees Celsius may have natural causes rather than anthropogenic burning of fossil fuels. What do you think? What does a penguin in the Antarctic think? Compare e.g. EIKE.
Tuesday, 11 August 2009
Interview with Erland Källen: Meteorologist
Interview with Erland Källen, Professor of Dynamic Meteorology at Stockholm University.
CJ: What is the accuracy of the climate models used in the IPCC predictions of the effects of greenhouse gases on the global climate?
EK: The error margins are pretty big. The scenario showing 2 degrees warming has an error margin between 1 and nearly 4 degrees, with the upper margin bigger than the lower. All scenarios thus give a warming, with the worst scenario over 6 degrees... 
CJ: Do you really mean that if the upper error margin was 10 degrees, then the worst scenario would be more than 12 degrees, so that a bigger error margin would indicate more warming? Or do we use the term error margin differently?
EK: ??
EK: Over a longer time period there is a connection between increase of CO2 and global temperature... It is impossible that natural variations alone could explain the warming of the last 50 years... From a moral point of view it is very difficult to understand why we in the rich part of the World have a right to demand birth control in developing countries, if we don't at the same time open up to an increased standard of living... To explain the warming of the last 50 years it is difficult to see another main reason than increase of CO2... From computer simulations we draw the conclusion that increased CO2 is the most plausible explanation of observed temperature change...
CJ: Are you gradually changing from impossible... to difficult to see... to most plausible... to...?
EK: ??
Erland Källen does not seem to be willing to be interviewed by me. But the questions remain.
Chill-Out: Climate Change??
Chill-Out -- The Truth about the Climate Bubble (in Swedish) by Lars Bern and Maggie Thauersköld is an important contribution to the Swedish debate on Anthropogenic Global Warming, AGW. Read and think!
The key question is whether the global warming of 0.7 degrees Celsius during the 20th century is due to an increase of CO2 in the atmosphere from 0.028% to 0.038% caused by anthropogenic burning of fossil fuels, and whether therefore strict limitations on CO2 emissions must be imposed to save the World? 
Al Gore says YES! based on the following key statements in the 2007 Synthesis Report of the UN Intergovernmental Panel on Climate Change, IPCC:
• Continued greenhouse-gas/CO2 emissions at or above current rates would cause further warming and induce many changes in the global climate system during the 21st century that would very likely be larger than those observed during the 20th century. 
Chill-Out puts these statements into perspective and in particular points to the fact that the predictions of IPCC are based on computer simulations showing a better fit to measured temperature with anthropogenic warming included than without. 
Note that the IPCC statements are very cautious, which reflects a generally accepted view that the accuracy/reliability of current climate models is questionable, which I have discussed in previous blogs on climate simulation.
The next UN Climate Conference will take place in Copenhagen in December under Swedish chairmanship of the EU. UN Climate chief Yvo de Boer hopes the conference will in particular reach agreements to limit the growth of emissions in developing countries, claimed to be necessary on the basis of predictions of catastrophic global warming. The key question is whether poor people will have to remain poor because of the scientifically vague predictions of IPCC, based on certain computer simulations generally viewed to be unreliable? Fredrik Reinfeldt, Swedish prime minister and current EU president, says YES! calling for immediate global action on climate change at the opening of Nordic Climate Solutions, in Copenhagen November 27 2008:
• We must act today, in order to save tomorrow.
Chill-Out helps you to understand the background and meaning of this statement. Also compare A man-made mortality tale: How the IPCC's fairly sober summary of climate science has been spun to tell a story of Fate, Doom and human folly. Read and think!
Monday, 10 August 2009
Role of Mathematics Education in Society??
Can we learn something about the role of mathematics education in our society from how mathematics departments present their educational programs? The following statements are typical: 
• Princeton University: The mathematician's best work is ART, a high perfect art, as daring as the most secret dreams of imagination. Mathematical genius and artistic genius touch one another. (Gösta Mittag-Leffler)
• MIT: Mathematics provides a language and tools for understanding the physical world around us and the abstract world within us. 
• University of Chicago: One of the wonderful things about the University of Chicago is that EVERYONE has to take mathematics. Most students complete this requirement by taking one of our calculus sequences. 
• Chalmers University of Technology: The mathematical sciences are fundamental and indispensable to a large part of modern science and engineering. Progress in other disciplines is often linked to an increased use of mathematics. Mathematics is however also a subject in itself, and fundamental research is a necessary condition for its many applications.
• University of Washington: Mathematics is both a science and an art. Like any great art, mathematics has an intrinsic beauty and coherence that has attracted practitioners for centuries. Yet, unlike other arts, mathematics is a surprisingly effective tool for describing the natural world. Indeed, mathematics has come to serve as the foundation of modern science, through its language and results. Some mathematical results were initially developed in order to solve internally generated mathematical problems and only later found application in other disciplines; other mathematical results were inspired by the needs of these other disciplines. The two facets of mathematics - tool of science and subject of inquiry for its own sake - have come to be interwoven into a complex fabric. 
• University of Oxford: Mathematics plays a pivotal role in the progress of society and its continued growth relies on the exchange and development of research ideas, the encouragement and teaching of the next generation of mathematical thinkers, and outreach to the public and schools. 
We summarize:
• Mathematics is a form of sublime art, which miraculously has shown itself to be very useful in science and engineering.
• EVERYBODY needs to learn mathematics in order to understand the physical world outside and the abstract world inside ourselves.
We observe that the existence of the computer is not visible in these statements. The message is that mathematics is primarily a form of art which, by developing according to its own inner principles, to which computing does not belong, best serves the needs of society. In this scenario there is little incentive for reform motivated by the computer, which is now changing the society outside mathematics departments and their educational programs.
The Schrödinger equation

\displaystyle i \hbar \partial_t |\psi \rangle = H |\psi\rangle

is the fundamental equation of motion for (non-relativistic) quantum mechanics, modeling both one-particle systems and {N}-particle systems for {N>1}. Remarkably, despite being a linear equation, solutions {|\psi\rangle} to this equation can be governed by a non-linear equation in the large particle limit {N \rightarrow \infty}. In particular, when modeling a Bose-Einstein condensate with a suitably scaled interaction potential {V} in the large particle limit, the solution can be governed by the cubic nonlinear Schrödinger equation

\displaystyle i \partial_t \phi = \Delta \phi + \lambda |\phi|^2 \phi. \ \ \ \ \ (1)

I recently attended a talk by Natasa Pavlovic on the rigorous derivation of this type of limiting behaviour, which was initiated by the pioneering work of Hepp and Spohn, and has now attracted a vast recent literature. The rigorous details here are rather sophisticated; but the heuristic explanation of the phenomenon is fairly simple, and actually rather pretty in my opinion, involving the foundational quantum mechanics of {N}-particle systems. I am recording this heuristic derivation here, partly for my own benefit, but perhaps it will be of interest to some readers. This discussion will be purely formal, in the sense that (important) analytic issues such as differentiability, existence and uniqueness, etc. will be largely ignored.
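As a concrete illustration of equation (1), the cubic NLS can be evolved numerically with a standard split-step Fourier scheme; the sketch below is a minimal version (the grid size, time step and the focusing sign λ = -1 are illustrative choices, not taken from the text), and it checks that the L² mass is conserved by the splitting:

```python
import numpy as np

# Split-step Fourier integrator for the 1D cubic NLS in the convention of
# Eq. (1):  i d/dt phi = Laplacian(phi) + lambda |phi|^2 phi.
L, n = 40.0, 256
dx = L / n
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
lam, dt, steps = -1.0, 1e-3, 1000          # illustrative parameters

phi = np.exp(-x**2).astype(complex)        # initial profile
mass0 = np.sum(np.abs(phi)**2) * dx        # L^2 mass, conserved by the flow

for _ in range(steps):
    # Half step of the nonlinear part: phi -> exp(-i*lam*|phi|^2 dt/2) phi
    # (|phi|^2 is pointwise constant under this substep).
    phi *= np.exp(-1j * lam * np.abs(phi)**2 * dt / 2)
    # Full linear step in Fourier space: Laplacian -> -k^2, phase exp(+i k^2 dt).
    phi = np.fft.ifft(np.exp(1j * k**2 * dt) * np.fft.fft(phi))
    # Second nonlinear half step (Strang splitting).
    phi *= np.exp(-1j * lam * np.abs(phi)**2 * dt / 2)

mass1 = np.sum(np.abs(phi)**2) * dx
print(abs(mass1 - mass0))  # both substeps are unitary, so mass survives to rounding error
```

Both substeps are exactly unitary (pointwise phase factors and an FFT-diagonal phase), so mass conservation holds to rounding error regardless of the step size; the splitting error only affects the shape of the solution.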
Theory Department
Max Planck Institute of Microstructure Physics
Exact factorization of the time-dependent electron-nuclear wavefunction
A. Abedi, F. Agostini, S. K. Min, C. Proetto, Y. Suzuki, F. Tandetzky
The Born-Oppenheimer (BO) approximation is among the most basic approximations in the quantum theory of molecules and solids. It is based on the fact that electrons usually move much faster than the nuclei. This allows us to visualize a molecule or solid as a set of nuclei moving on the potential energy surface generated by the electrons in a specific electronic state. The total wavefunction is then a product of this electronic state, \Phi_R^{BO}(r), and a nuclear wavefunction \chi(R,t) satisfying the Schrödinger equation

i\hbar\,\partial_t \chi(R,t) = \Big[\sum_\nu \frac{-\hbar^2\nabla_\nu^2}{2M_\nu} + \epsilon_{BO}(R)\Big]\,\chi(R,t).

Here, R = (R_1 \ldots R_{N_n}) denotes the nuclear configuration and r = (r_1 \ldots r_{N_e}) represents the set of electronic positions. The concept of the potential energy surface, given in the BO approximation by the electronic eigenvalue \epsilon_{BO}(R), is enormously important in the interpretation of all experiments involving nuclear motion. Likewise, the vector potential

A_\nu(R) = \langle \Phi_R^{BO} | -i\hbar\nabla_\nu\, \Phi_R^{BO} \rangle_r,

and the Berry phase associated with it, provide an intuitive understanding of the behavior of a system near conical intersections. Here and in the following, \langle\,\cdot\,|\,\cdot\,\rangle_r denotes the inner product over all electronic coordinates. Berry-Pancharatnam phases are usually interpreted as arising from an approximate decoupling of a system from "the rest of the world", thereby making the system Hamiltonian dependent on some "environmental" parameters. The best example is the BO approximation, where the electronic Hamiltonian depends parametrically on the nuclear positions; i.e., the stationary electronic Schrödinger equation

\hat{H}_{BO}(r,R)\,\Phi_R^{BO}(r) = \epsilon_{BO}(R)\,\Phi_R^{BO}(r)

is solved for each fixed nuclear configuration R, yielding R-dependent eigenvalues \epsilon_{BO}(R) and eigenfunctions \Phi_R^{BO}(r). 
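The parametric diagonalization underlying the BO picture is easy to illustrate numerically. The sketch below uses a generic two-level avoided-crossing model of our own (not the system studied on this page): a toy electronic Hamiltonian is diagonalized at each fixed nuclear coordinate R, and the R-dependent eigenvalues are the BO potential energy surfaces:

```python
import numpy as np

# Toy electronic Hamiltonian H_el(R): two diabatic states crossing linearly
# at R = 0, coupled by a constant c (all parameters illustrative).
c = 0.1
R_grid = np.linspace(-2.0, 2.0, 401)

def H_el(R):
    return np.array([[R, c], [c, -R]])

# Diagonalize at each fixed R: the eigenvalues are the BO surfaces
# +/- sqrt(R^2 + c^2), with an avoided crossing of gap 2c at R = 0.
surfaces = np.array([np.linalg.eigvalsh(H_el(R)) for R in R_grid])
gap = surfaces[:, 1] - surfaces[:, 0]

i_min = np.argmin(gap)
print(R_grid[i_min], gap[i_min])  # minimum gap 2c = 0.2 at R = 0
```

The same loop with a realistic electronic Hamiltonian (and a grid of nuclear configurations) is conceptually what "solving the stationary electronic Schrödinger equation for each fixed R" means.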
Hence one has to acknowledge the fact that in the traditional treatment of molecules and solids the concepts of the potential energy surface and the Berry phase arise as a consequence of the BO approximation. Yet, this is "just" an approximation, and some of the most fascinating phenomena of condensed-matter physics, like superconductivity, appear in the regime where the BO approximation is not valid. This raises the question: If one were to solve the Schrödinger equation of the full electron-nuclear Hamiltonian exactly (i.e. beyond the BO approximation), do the Berry phase and the potential energy surface survive, and if so, how and where do they show up? Moreover, many interesting phenomena occur when molecules or solids are exposed to time-dependent external fields, such as lasers. One may ask: Can one give a precise meaning to a time-dependent potential energy surface and a time-dependent Berry phase?
In a recent Letter [1] we are able to answer all of the above questions. We prove that the exact solution of the TDSE, \Psi(r,R,t), can be written as a single product

\Psi(r,R,t) = \Phi_R(r,t)\,\chi(R,t),   (1)

where \Phi_R(r,t) satisfies the partial normalization condition \int dr\,|\Phi_R(r,t)|^2 = 1 for any fixed nuclear configuration R, at any time t. An immediate consequence of the identity (1) is that |\chi(R,t)|^2 = \int dr\,|\Psi(r,R,t)|^2 is the probability density of finding the nuclear configuration R at time t. The electronic wavefunction \Phi_R(r,t) satisfies, in a suitably chosen gauge, an electronic equation of motion (2). The nuclear wavefunction obeys the equation

i\hbar\,\partial_t \chi(R,t) = \Big[\sum_\nu \frac{\big(-i\hbar\nabla_\nu + A_\nu(R,t)\big)^2}{2M_\nu} + \hat{V}^n_{ext}(R,t) + \epsilon(R,t)\Big]\,\chi(R,t),   (3)

where \hat{V}^n_{ext} is the external potential acting on the nuclei. Via these exact equations the time-dependent potential energy surface (TDPES)

\epsilon(R,t) = \langle \Phi_R(t) |\, \hat{H}_{BO} + \hat{U}^{coupl}_{en} - i\hbar\partial_t \,| \Phi_R(t) \rangle_r,   (4)

with \hat{H}_{BO} the BO electronic Hamiltonian and \hat{U}^{coupl}_{en} the electron-nuclear coupling operator, and the time-dependent Berry connection

A_\nu(R,t) = \langle \Phi_R(t) | -i\hbar\nabla_\nu\, \Phi_R(t) \rangle_r,   (5)

are defined as rigorous concepts. These two quantities mediate the coupling between the nuclear and the electronic degrees of freedom in a formally exact way. The vector potential can be expressed as

A_\nu(R,t) = \frac{\hbar\,\mathrm{Im}\,\langle \Psi(t) | \nabla_\nu \Psi(t) \rangle_r}{|\chi(R,t)|^2} - \frac{\hbar\,\mathrm{Im}\big(\chi^*\nabla_\nu\chi\big)}{|\chi(R,t)|^2}.   (6)

This equation is interesting in several respects. First, writing \chi = |\chi|e^{iS/\hbar}, the last term on the RHS of Eq. (6) can be represented as \nabla_\nu S, so it can be gauged away. Consequently, any true Berry connection must come from the first term. 
If the exact \Psi(t) is real-valued (e.g. for a non-current-carrying ground state) then the first term on the RHS of Eq. (6) vanishes and hence the exact Berry connection vanishes. Second, since \hbar\,\mathrm{Im}\langle \Psi | \nabla_\nu \Psi \rangle_r / M_\nu is the true nuclear (many-body) current density, Eq. (6) implies that the gauge-invariant current density, \big(\hbar\,\mathrm{Im}(\chi^*\nabla_\nu\chi) + |\chi|^2 A_\nu\big)/M_\nu, that follows from Eq. (3) does indeed reproduce the exact nuclear current density. Hence, the solution of Eq. (3) is, in every respect, the proper nuclear many-body wavefunction: Its absolute-value squared gives the exact nuclear (N-body) density while its phase yields the correct nuclear (N-body) current density.
Fig. 1: Snapshots of the TDPES (blue lines) and nuclear density (black) at the times indicated, for the H2+ molecule subject to the laser field (see text), I1 = 10^14 W/cm2 (dashed line) and I2 = 2.5 × 10^13 W/cm2 (solid line). The circles indicate the position and energy of the classical particle in the exact-Ehrenfest calculation (I1: open, I2: solid). For reference, the ground-state BO surface is shown as the thin red line.
To demonstrate the usefulness of the approach we have calculated the TDPESs for a numerically exactly solvable model: the H2+ molecular ion in 1D, subject to a linearly polarized laser field. The TDPESs, along with the corresponding nuclear density |χ(R,t)|^2, are plotted in Fig. 1 at six snapshots of time. For the stronger field the dissociation of the molecule is dramatically reflected in the exact TDPES. Also in the "exact Ehrenfest" approximation, where the nuclei are treated as classical particles moving on the exact TDPES, the molecule dissociates (open circles in Fig. 1). However, for the weaker field only the exact quantum calculation leads to dissociation, while in the "exact Ehrenfest" calculation the system gets stuck in a local minimum of the TDPES (solid circles), suggesting that tunneling is the leading dissociation mechanism. 
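At a fixed instant, the content of the factorization (1) and the partial normalization condition can be verified numerically: given any full wavefunction on an (r,R) grid, |χ(R)|² is the marginal ∫|Ψ|² dr, and the conditional amplitude Φ_R = Ψ/χ is then normalized in r for every R. The sketch below uses a made-up correlated Gaussian, not the H2+ model of the text, and picks the zero-phase gauge for χ:

```python
import numpy as np

# Grids for one electronic (r) and one nuclear (R) coordinate.
r = np.linspace(-6, 6, 200)
R = np.linspace(-6, 6, 180)
dr, dR = r[1] - r[0], R[1] - R[0]
rr, RR = np.meshgrid(r, R, indexing="ij")   # axis 0: electron, axis 1: nucleus

# A correlated (hence non-factorizable in the naive sense) toy wavefunction.
psi = np.exp(-(rr - 0.5 * RR)**2 - 0.25 * RR**2)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dr * dR)   # normalize the full Psi

nuc_density = np.sum(np.abs(psi)**2, axis=0) * dr  # |chi(R)|^2 = integral |Psi|^2 dr
chi = np.sqrt(nuc_density)                          # zero-phase gauge choice
phi = psi / chi[None, :]                            # conditional amplitude Phi_R(r)

# Partial normalization: integral |Phi_R(r)|^2 dr = 1 for every R.
partial_norms = np.sum(np.abs(phi)**2, axis=0) * dr
print(partial_norms.min(), partial_norms.max())
```

The partial norms equal one identically (up to rounding), which is exactly why |χ|² can be interpreted as the nuclear probability density: the electronic factor carries no R-dependent weight.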
This reveals that the TDPES is a powerful interpretive tool for analyzing and interpreting different types of dissociation processes, such as direct vs. tunneling.
Steps in the exact time-dependent potential energy surface
The concept of a non-adiabatic transition has been widely used to describe various dynamical processes. The root of this concept is the adiabatic treatment of dynamical processes, in which the complete system is approximately decomposed into two parts, based on the assumption that one part of the complete system usually changes on a much shorter time-scale than the rest and can adjust instantaneously to the adiabatic changes of the rest. The "fast" part of the system, within the adiabatic approximation, depends on the rest only via an "environmental" parameter that changes adiabatically compared to the time-scale on which the fast part changes. This implies that the fast part of the system is treated independently for each environmental parameter that represents a specific configuration of the rest. However, this ideal picture may break down in different situations. This is where concepts such as "non-adiabatic coupling" and "non-adiabatic transition" between the adiabatic states come in, to remedy the adiabatic approximation and describe the dynamical processes. For example, in the Born-Oppenheimer (BO) approximation, the "faster" electronic motion is separated from the "slower" nuclear motion and the Hamiltonian that describes the electronic motion has a parametric dependence on the nuclear configuration. However, this approximation may break down, for example when two or more BOPESs come close or even cross for some nuclear configurations. Some of the most fascinating and most challenging molecular processes occur in the regime where the BO approximation is not valid, e.g. ultrafast nuclear motion through conical intersections, radiationless relaxation of excited electronic states, and intra- and inter-molecular electron and proton transfer, to name a few. 
On the other hand, an adiabatic description of dynamical processes, such as the BO approximation, provides a great deal of intuition that is fundamental to our understanding of the processes. Hence, non-adiabatic treatments are usually based on the essential pictures that the adiabatic approximation provides. The standard way of studying and interpreting "non-adiabatic" molecular processes is to expand the full molecular wave-function in terms of the BO electronic states. Within this expansion, non-adiabatic processes can be viewed as a nuclear wave-packet with contributions on several BOPESs, coupled through the non-adiabatic coupling (NAC) terms, which in turn induce transitions between the BOPESs. While this provides a formally exact description, one may nevertheless ask: Is it also possible to study the molecular process using a single PES? This question is particularly relevant if one thinks of a classical or semi-classical treatment of the nuclei, where a well-defined single classical force would be highly desirable. In our previous works, we have introduced an exact time-dependent potential energy surface (TDPES) \epsilon(R,t) that, together with an exact time-dependent vector potential A_\nu(R,t), governs the nuclear motion. These concepts emerge from a novel way to approach the coupled electron-nuclear dynamics via an exact factorization, \Psi(r,R,t) = \Phi_R(r,t)\,\chi(R,t), of the electron-nuclear wave function [1]. The crucial point of this representation of the correlated electron-nuclear many-body problem is that the wave-function \chi(R,t) that satisfies the exact nuclear equation of motion leads to an N-body density and an N-body current density that reproduce the true nuclear N-body density and current density obtained from the full wave-function [2]. In this sense, \chi(R,t) can be viewed as the proper nuclear wave-function. The time evolution of \chi(R,t), on the other hand, is completely determined by the TDPES and the vector potential. Moreover, these potentials are unique up to within a gauge transformation. 
In other words, if one wants a TDSE whose solution yields the true nuclear N-body density and current density, then the potentials appearing in this TDSE are (up to within a gauge transformation) uniquely given by \epsilon(R,t) and A_\nu(R,t); there is no other choice. This also implies that the gradient of this exact TDPES is the only correct force on the nuclei in the classical limit (plus terms arising from the vector potential, if those cannot be gauged away). In our recent work, we have investigated the generic features of the exact TDPES without an external laser but in the presence of strong non-adiabatic couplings. As a major result we observe that the exact TDPES exhibits nearly discontinuous steps connecting different static BOPESs, reminiscent of Tully's surface hopping [R.K. Preston and J.C. Tully, J. Chem. Phys. 54, 4297 (1971)] in the classical limit. To investigate the TDPES in detail, we write it as a sum of two parts, \epsilon(R,t) = \epsilon_{gi}(R,t) + \epsilon_{gd}(R,t). The gauge-invariant part, defined as

\epsilon_{gi}(R,t) = \Big\langle \Phi_R(t) \Big|\, \hat{H}_{BO} + \sum_\nu \frac{\big(-i\hbar\nabla_\nu - A_\nu(R,t)\big)^2}{2M_\nu} \,\Big| \Phi_R(t) \Big\rangle_r,   (7)

is form-invariant under a gauge transformation, whereas the gauge-dependent part, defined as

\epsilon_{gd}(R,t) = \big\langle \Phi_R(t) \big| -i\hbar\partial_t \big| \Phi_R(t) \big\rangle_r,   (8)

depends on the choice of the gauge. We have calculated the TDPES for a numerically exactly solvable model of Shin and Metiu [S. Shin and H. Metiu, J. Chem. Phys. 102, 23 (1995)], whose schematic representation is given in Fig. 2.
Fig. 2: Schematic representation of the Shin-Metiu model system. R and r indicate the coordinates of the moving ion and electron, respectively, in one dimension. L is the distance between the fixed ions.
The parameters of the model were chosen such that there is a very strong coupling between the first two BOPESs. The first four BOPESs are shown in Fig. 3 (left panel), along with the initial nuclear density. The same figure (right panel) presents the time-evolution of the populations of the BO states. We have shown that the exact TDPES exhibits nearly discontinuous steps connecting different static BOPESs, reminiscent of Tully's surface hopping in the classical limit. Fig.
3: Left: the lowest four BO surfaces as functions of the nuclear coordinate. The first (red line) and second (green line) surfaces will be considered in the actual calculations that follow; the third and fourth (dashed black lines) are shown for reference. The squared modulus (reduced ten times and rigidly shifted in order to superimpose it on the energy curves) of the initial nuclear wave-packet is also shown (black line). Right: populations of the BO states along the time evolution. The strong non-adiabatic nature of the model is underlined by the population exchange at the crossing of the coupling region.
For the 1D model system studied in our work, the TDPES is the only potential that governs the dynamics of the nuclear wave-function (the vector potential can be gauged away) and provides us with an alternative way of visualizing and interpreting the non-adiabatic processes. We have shown that (see Fig. 4) the gauge-invariant part of the TDPES, \epsilon_{gi}(R,t), is characterized by two generic features: (i) in the vicinity of the avoided crossing, \epsilon_{gi}(R,t) becomes identical with a diabatic PES in the direction of the wave-packet motion; (ii) far from the avoided crossing, \epsilon_{gi}(R,t), as a function of R, is piecewise identical with different BOPESs and exhibits nearly discontinuous steps in between. The latter feature holds after the wave-packet branches and leaves the avoided crossing. The gauge-dependent part, \epsilon_{gd}(R,t), on the other hand, is piecewise constant in the regions where \epsilon_{gi}(R,t) coincides with different BOPESs. Hence \epsilon_{gd}(R,t) has little effect on the gradient of the total TDPES, but may shift the BOPES pieces of \epsilon_{gi}(R,t) by different constants, causing the exact TDPES to be piecewise parallel to the BOPESs.
Fig. 4: TDPES and nuclear densities at different time-steps, namely t = 0 fs, t = 10.88 fs and t = 26.61 fs. 
The different panels show: (top) the GI part of the TDPES (black dots) and the two lowest BOPESs (first, dashed red line, and second, dashed green line) as reference; (center) the GD part of the TDPES (green dots); (bottom) the nuclear density (dashed black line) and its components |F_l(R,t)|^2 (l=1 red line and l=2 green line) on the different BO surfaces. The gray boxes define the regions in R-space where the energies have been calculated, since there the nuclear density is (numerically) not zero.
The diabatic feature (i) of the TDPES supports the use of diabatic surfaces as the driving potential when a wave-packet approaches a region of strong non-adiabatic coupling. The step feature (ii) is in agreement with the semi-classical picture of non-adiabatic nuclear dynamics provided by Tully's surface hopping scheme, which suggests calculating the classical forces acting on the nuclei from the gradient of only one of the BOPESs. The exact TDPES represented in Fig. 4 can be viewed from a different perspective. The nuclear wave-packet, from a semi-classical point of view, can be represented as an ensemble of classical trajectories, along which point-particles evolve under the action of a classical force which is the gradient of \epsilon_{gi}. According to our observations, on different sides of a step such a force is calculated from different BOPESs. This is reminiscent of Tully's surface hopping approach, which deals with the problem of coupled electron-nuclear dynamics semi-classically. The method introduces stochastic jumps between BOPESs to select the adiabatic surface that, at each point in time, governs the classical nuclear dynamics. The nuclear density is reconstructed from bundles of classical trajectories. Such bundles evolve independently from one another on different adiabatic surfaces and are a semi-classical approximation of the components, labeled |F_l(R,t)|^2 in Fig. 4, of the exact nuclear density. 
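The decomposition of a nuclear wave-packet into BO components F_l(R,t) can be sketched with a two-level toy model (our own construction, not the Shin-Metiu Hamiltonian): given a two-component wave-packet in a diabatic basis, rotating into the R-dependent adiabatic eigenbasis yields components F_l(R), and the resulting BO populations sum to one:

```python
import numpy as np

c = 0.1                                 # diabatic coupling (illustrative)
R = np.linspace(-4, 4, 800)
dR = R[1] - R[0]

# Two-component nuclear wave-packet in the diabatic basis {|1>, |2>}.
psi_diab = np.array([np.exp(-(R + 1.0)**2), 0.5 * np.exp(-(R - 1.0)**2)])
psi_diab /= np.sqrt(np.sum(np.abs(psi_diab)**2) * dR)

# Adiabatic (BO) eigenbasis of H_el(R) = [[R, c], [c, -R]] at each R:
# a rotation of the diabatic basis by the mixing angle theta(R).
theta = 0.5 * np.arctan2(c, R)

# Components F_l(R) = <l;R | Psi(R)>; the precise lower/upper assignment
# depends on phase conventions, but the rotation is exactly unitary.
F = np.array([
    np.cos(theta) * psi_diab[1] - np.sin(theta) * psi_diab[0],
    np.sin(theta) * psi_diab[1] + np.cos(theta) * psi_diab[0],
])

populations = np.sum(np.abs(F)**2, axis=1) * dR   # BO-state populations
print(populations, populations.sum())             # populations sum to 1
```

With a time-dependent wave-packet, evaluating these populations at each step produces precisely the kind of population curves shown in the right panel of Fig. 3.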
The step feature of the TDPES, following from the exact solution of the full TDSE, makes clear that, after the wave-packet splits at the avoided crossing, the motion of its components (the bundles in surface-hopping language) is driven by single adiabatic surfaces and not (as, e.g., in Ehrenfest dynamics) by an average electronic potential.

Non-adiabatic processes via mixed quantum-classical dynamics: A novel perspective from the exact time-dependent potential energy surface

We have started our journey towards a full ab initio treatment of the coupled electron-nuclear dynamics by introducing an exact separation of electronic and nuclear motions [1]. We have derived the equations of motion that govern the dynamics of each subsystem and studied, in a formally exact way, the potentials that mediate the couplings between the two subsystems [2]. In particular, we have studied features of the TDPES in two topically demanding situations: molecules in strong fields [1, 2] and the splitting of a nuclear wave-packet at avoided crossings [3] of BO potential energy surfaces. These studies provide us with the essential elements (fundamental equations of motion and insights into the coupling potentials) for making approximations, especially for the systematic development of semiclassical approximations. Here we report on recent progress towards developing a mixed quantum-classical scheme based on the exact separation of the electronic and nuclear motions [3]. In order to examine the classical approximation of the nuclear dynamics, we first study classical nuclear dynamics on the exact TDPES obtained from the exact solution of the time-dependent Schrödinger equation for an exactly solvable model developed by Shin and Metiu (Fig. 2).
We have studied the classical nuclear motion using the full TDPES and the gauge-invariant component of the exact TDPES by integrating Hamilton's equations, with the classical force calculated as the gradient of the potential (full or GI) at the position of the classical particle. We present some of the results in Fig. 5, where we follow classical dynamics along the exact potential (EX) and along its GI part. Fig. 5: Classical positions (dots) at different times, as indicated in the plots, with the corresponding potentials, ϵGI(R,t) (orange lines) and ϵ(R,t) (blue lines). The nuclear density (dashed black line) is plotted as reference. As expected, the single-trajectory approach implemented here is not able to capture all dynamical details of the quantum evolution. However, some observables can be adequately reproduced by the evolution on the exact TDPES, as shown in Fig. 6. Fig. 6: Classical and mean nuclear positions (left panel) and velocities (right panel) as functions of time. The dashed black line represents the average nuclear values from the quantum calculation; the blue and orange lines are the positions and velocities of the classical particle when it evolves on the exact potential and on the GI part of the potential, respectively. Our results show the importance of the gauge-dependent component of the TDPES, as the results obtained from the classical evolution on the GI part of the potential alone deviate significantly from the exact results. This procedure has been referred to as "exact Ehrenfest" in the study of the dissociation of H2+ in [1]. The Ehrenfest theorem relates the time-derivative of the expectation value of a quantum-mechanical operator to the expectation value of the commutator of that operator with the Hamiltonian, i.e., $\frac{d}{dt}\langle \hat{A}\rangle = \frac{1}{i\hbar}\langle [\hat{A},\hat{H}]\rangle$ (10), where the average is calculated over the full wave-function $\Psi(r,R,t)$.
When the electron-nuclear wave-function is represented in the factorized form, $\Psi(r,R,t)=\Phi_R(r,t)\,\chi(R,t)$, the Ehrenfest theorem can be extended to the nuclear wave-function, $\frac{d}{dt}\langle \hat{A}\rangle_\chi = \frac{1}{i\hbar}\langle [\hat{A},\hat{H}_n]\rangle_\chi$ (11, 12), provided that the nuclear momentum operator is replaced with $-i\hbar\nabla_\nu + A_\nu(R,t)$, and the full Hamiltonian is replaced by the nuclear Hamiltonian, $\hat{H}_n(R,t)=\sum_\nu \frac{\left(-i\hbar\nabla_\nu + A_\nu(R,t)\right)^2}{2M_\nu} + \epsilon(R,t)$ (13), where the average operations in Eqs. (11) and (12) are performed over the nuclear density only. In 1D cases (A(R,t)=0 according to the gauge condition) and for a nuclear density that is infinitely localized at the classical position Rc(t) (the trajectory), |χ(R,t)|²→δ(R−Rc(t)), Eqs. (11) and (12) lead to Hamilton's equations (9). After studying various aspects of the classical nuclear dynamics on the exact TDPES and its gauge-invariant component, we have developed a mixed quantum-classical method based on the exact factorization of the electronic and nuclear motions. Starting from the exact equations of motion for electrons and nuclei, we rigorously take the classical limit of the nuclear motion and derive a mixed quantum-classical scheme. In order to take the classical limit, the nuclear wave-function is expanded in an asymptotic series in powers of ħ, following the proposal of van Vleck [J. H. van Vleck, Proc. Natl. Acad. Sci. 14, 178 (1928)]. If only the zeroth-order terms of the expansion are considered, the nuclear time-dependent Schrödinger equation leads to the Hamilton-Jacobi equation for the classical action associated with the classical nuclear dynamics. The nuclear Hamiltonian generating the evolution can be obtained from the exact nuclear Hamiltonian (13) by replacing the nuclear momentum operator with Pν, the nuclear momentum evaluated along the classical trajectory. Newton's equation can be easily derived from the Hamilton-Jacobi equation.
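As an illustration of this classical propagation step, the following sketch integrates Hamilton's equations with a force obtained as the numerical gradient of a potential. It is a generic velocity-Verlet integrator applied to a model harmonic potential, not code from the work reported here; in the actual scheme the force would come from the gradient of the (time-dependent) TDPES.

```python
import numpy as np

def propagate_classical(V, R0, P0, M, dt, nsteps, dR=1e-4):
    """Integrate Hamilton's equations dR/dt = P/M, dP/dt = -dV/dR
    with velocity Verlet; the force is a centered finite difference
    of the (here static, model) potential V."""
    force = lambda R: -(V(R + dR) - V(R - dR)) / (2.0 * dR)
    R, P = R0, P0
    traj = [(R, P)]
    F = force(R)
    for _ in range(nsteps):
        P_half = P + 0.5 * dt * F
        R = R + dt * P_half / M
        F = force(R)
        P = P_half + 0.5 * dt * F
        traj.append((R, P))
    return np.array(traj)

# Model harmonic potential as a stand-in for one slice of the TDPES
V = lambda R: 0.5 * R**2
traj = propagate_classical(V, R0=1.0, P0=0.0, M=1.0, dt=0.01, nsteps=1000)

# With M = 1 the total energy should be conserved along the trajectory
E = V(traj[:, 0]) + traj[:, 1]**2 / 2.0
print(E.max() - E.min())  # energy drift stays small
```

The symplectic character of velocity Verlet keeps the energy drift bounded, which matters when the trajectory is meant to sample a potential over long times.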
The classical evolution obtained according to the described procedure is coupled to the electronic equation obtained by expanding the exact electronic equation (2) on the adiabatic basis formed by the eigenstates of the BO Hamiltonian, and neglecting, as a first approximation, the spatial dependence of the expansion coefficients. The set of ordinary differential equations for these coefficients is coupled to the classical nuclear evolution equation. The performance of the mixed quantum-classical scheme in comparison with the exact solution of the TDSE is examined by calculating the populations of the BO states and the nuclear kinetic energy. Fig. 7: Left panel: populations of the BO states as functions of time determined by quantum (blue) and mixed quantum-classical (MQC, dashed red) propagation schemes. Right panel: nuclear kinetic energy as a function of time. As seen in Fig. 7, the results of the mixed quantum-classical approach, rigorously derived from the exact Eqs. (2-3), are in very good agreement with the results obtained from the exact solution of the time-dependent Schrödinger equation.

[1] A. Abedi, N. T. Maitra and E. K. U. Gross, Phys. Rev. Lett. 105, 123002 (2010).
[2] A. Abedi, N. T. Maitra and E. K. U. Gross, J. Chem. Phys. 137, 22A530 (2012).
[3] A. Abedi, F. Agostini, Y. Suzuki and E. K. U. Gross, arXiv:1302.1453. Accepted for publication in Phys. Rev. Lett.
The questions

Q1. What are simple ways to think mathematically about the physical meanings of the Planck constant?

Q2. How does the Planck constant appear in the mathematics of quantum mechanics? In particular, quantization is an important notion in mathematical physics and there are various forms of quantization for classical Hamiltonian systems. What is the role of the Planck constant in mathematical quantization?

Q3. How does the Planck constant relate to the uncertainty principle and to mathematical formulations of the uncertainty principle?

Q4. What is the mathematical and physical meaning of letting the Planck constant tend to zero? (Or to infinity, if this ever happens.)

One purpose of this question is for me to try to get better early intuition towards a seminar we are running in the fall. Another purpose is that the Planck constant plays almost no role (and, in fact, is hardly mentioned) in the literature on quantum computation and quantum information, and I am curious about it. Related MO questions: Does quantum mechanics ever really quantize classical mechanics?
• 2 $\begingroup$ In my humble opinion (I'm not a professor and don't know physics) one of the best talks (from an informative point of view) about quantum gravity is from the official YouTube channel Instituto de Física Teórica IFT, with title Gravedad y Mecánica Cuántica (César Gómez), posted on December 3rd, 2014. I believe that it doesn't answer all your questions, and you will need a friend knowing Spanish. The speaker César Gómez has other talks on this channel about what energy is, and also the talk ¿Por qué los agujeros negros NO son negros? (January 29th, 2019). These are the best talks that I know. $\endgroup$ – user142929 Sep 11, 2019 at 11:02 • 3 $\begingroup$ $h$, like the Boltzmann constant $k_B$ and the speed of light $c$, is just a human construct that doesn’t appear in the laws of nature in their simplest form.
It is an artifact of the fact that, historically, humans didn’t know that energy and frequency are “really the same thing”, much like energy and temperature ($E = k_B T$) and length and duration ($x = ct$) are “really the same thing” (the latter in particular mixing under Lorentz boosts). It’s as if we’d decided to use different units for different forms of energy, and had to introduce arbitrary conversion factors in every equation. $\endgroup$ – user76284 Sep 11, 2019 at 20:56 • 60 $\begingroup$ If I had asked this question, it would have been closed. $\endgroup$ – bobuhito Sep 12, 2019 at 3:01 • 22 $\begingroup$ @bobuhito We low-points users ("lusers", for short) just have to hope that the high-point users ask the questions we really want to know. Just like how poor people just have to hope that rich peoples' interests align with their own. It's just how The System works. $\endgroup$ Sep 12, 2019 at 20:13 • 3 $\begingroup$ @Nat both constants are famously equal to 1, of course $\endgroup$ Feb 7, 2020 at 20:41 10 Answers 10 Let's give it a try. Of course, the precise mathematical meaning is perhaps absent, so the answers are sort of heuristic. But if I understand correctly, you want to gain intuition ;) The first observation is that Planck's constant has units, it is not a numerical constant but carries physical dimension. In some sense, this makes all the difference: in your favorite unit system, $\hbar$ has the numerical value $1$. Period. Nothing more to say... Now what is the impact? If you think of e.g. Fourier transform, the phase in your integral is $e^{ikx}$ and for a mathematician, everything is fine with this. Now for a physicist (depending on the application), $x$ has a unit of length. So there is no way to exponentiate a length per se, you can only plug in dimension-less quantities in a transcendental function. This means that the $k$ has a physical dimension of $1/\text{length}$. The physical interpretation is that $k$ is an inverse wave length. 
Now in quantum physics, there comes the time that you want your wave functions in the momentum representation. So you have to replace $k$ by the physical momentum $p$ which has a different unit, namely that of momentum. This requires dividing the product $px$ in your phase by a quantity of dimension momentum times length which is action. So the observation in physics is that there is a universal constant providing a scale for doing exactly that, $\hbar$. The familiar uncertainty relation you know from Fourier transform becomes thereby scaled by $\hbar$ as well. Another, perhaps more important observation is that in quantization in the Schrödinger approach you consider wave functions on configuration space, depending on the position $x$ of dimension length. Now the relevant operators are the momentum operators encoded by a derivative. But derivatives have dimension $1/\text{length}$ instead of momentum. Thus you have to rescale the derivative by a physical constant of dimension action to get a quantity of dimension momentum. Again, $\hbar$ is the one doing the job. From what I said (and much more can be said) it should be clear that $\hbar$ does not tend to zero at all (it's $1$, right?). So what should these statements $\hbar \to 0$ then really mean from a physical point of view? This is in fact a quite subtle and, I guess, ultimately not well understood point. The observation in daily life is that quantum effects do not play any role, the world behaves classically. So classical physics is at least a perfect approximation in many situations. Quantum physics only enters the picture if we perform very precise measurements etc. Now the idea is that the more fundamental theory (quantum) has a certain less fundamental theory (classical) as limit. The interpretation of this limit is subtle. But in many situations, the limit relates to parameters of the system which carry dimensions, typically the dimension of action. 
Now it is the ratio of this system-dependent parameter (say a combination of masses, lengths etc) and $\hbar$ which tells us whether the classical theory is a good approximation or not. The main point is that we need parameters $\alpha$ of the same dimension as $\hbar$ to have a dimension-less ratio $\hbar/\alpha$. Only such a dimension-less parameter can be considered to be "small" or "large". The classical limit is thus better understood as $\hbar/\alpha \to 0$, meaning that we look at many different systems with different values of $\alpha$ (whatever its concrete physical interpretation may be) and get the classical limit as limiting scenario if these parameters assume values much larger than $\hbar$. We can compare them since they have both the same physical dimension. Now why don't we see these arguments in math? The (perhaps pretty obscure) observation is that in the explicit situations we can handle, the mathematical limit $\hbar/\alpha \to 0$ can also be understood as $\hbar \to 0$ while $\alpha$ is fixed. Mathematically this is not a big deal, but physically an absurd interpretation: $\hbar$ is a fundamental constant of nature and we have no $\hbar$-wheel where we can adjust its value. Hmm, lot of blabla, but I hope that this clarifies the physical side of the story a bit. Mathematically, one has several ways to incorporate "dimensional" constants. One nice way is to look at graded algebras where the grading refers to the power of the dimension you are looking at. Then the grading helps to keep track of the correct dimensions. In particular in quantization theory this turned out to be a very useful tool. • 5 $\begingroup$ I really like this answer, $\endgroup$ – Nik Weaver Sep 11, 2019 at 13:03 • $\begingroup$ Dear Stefan, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 11, 2019 at 16:32 • 1 $\begingroup$ Seconded: this is an awesome answer. 
$\endgroup$ – Alon Amit Sep 11, 2019 at 20:14 • 2 $\begingroup$ @JohnJiang --- yes, it's because you cannot "add apples and pears" (you can only add quantities that have the same dimension) $\endgroup$ Sep 13, 2019 at 4:22 • 1 $\begingroup$ “The familiar uncertainty relation you know from Fourier transform”: I wondered for how long this was known, and found that according to Primas (2017, p. 37), it was first proved by Karl Küpfmüller (Einschwingvorgänge in Wellenfiltern, Elektrische Nachrichtentechnik 1 (1924) 141–152) and simultaneously Norbert Wiener (Göttingen seminar, for which he takes credit in (1956, p. 107)). $\endgroup$ Sep 17, 2019 at 5:52 To build intuition for the Planck constant $\hbar$, which I understand is the purpose of the OP, I would start by noting that $\hbar$ is not a dimensionless number: it has dimensions of energy $\times$ time, or of momentum $\times$ position, meaning that it represents an action. (Equivalently, it could represent an angular momentum; I will get back to that at the end.) Q1 & Q2: Mathematically, the action $S=\int_{t_1}^{t_2} Ldt$ is the integral of the Lagrangian $L[q(t),\dot{q}(t)]$, a functional of position $q(t)$ and velocity $\dot{q}(t)$ over time, along a path that is bounded in space and time. The evolution in time of a physical system is given by an integral over all paths of the exponent of $iS$. Since the exponent must be dimensionless, one needs to divide $S$ by some quantity with the dimension of action, and that quantity is the Planck constant. To properly define this path integral is the central problem of the mathematics of quantization. Q4: Since $h$ is not dimensionless, what is meant by "taking the limit $h\rightarrow 0$" is that one first identifies a typical value $L_0$ of the Lagrangian $L$. There is no unique prescription for this. One then takes the limit $\epsilon=h/S_0\rightarrow 0$, where $S_0=(t_2-t_1)L_0$.
In this limit one will find that the path integral of $e^{iS/\hbar}$ is dominated by the paths at which $S$ is extremal, with corrections that vanish as powers of $\epsilon$. The OP also asks about the opposite limit, $h/S_0\rightarrow\infty$, which appears when one considers vanishingly small time intervals $t_2-t_1\rightarrow 0$. In that limit one will find that paths with very large velocity dominate. This is one manifestation of the uncertainty principle. Q3: Even when $\epsilon\approx 0$, so a single path predominantly contributes to the path integral, there remain contributions from close-by paths, so one should actually think of this path as a thin tube. The resulting uncertainty in position and velocity corresponds to an uncertainty in the action $S$ of order $\hbar$. Finally, the OP remarks on the absence of Planck's constant in the literature of quantum computation and quantum information. That is simply a choice (made in the early days of the field) to normalize the action $S$ by twice the angular momentum of the electron spin, which is just $\hbar$. So with that choice $S$ is dimensionless and Planck's constant $\hbar=1$, which is why it vanishes from sight. • $\begingroup$ Dear Carlo, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 11, 2019 at 16:33 To supplement the excellent answers previously given, some more remarks situated in the canonical quantization formalism, which is equivalent (at physicist level) to the path integral formalism referred to by Carlo Beenakker: When we open our eyes, we see that we're given a playground in which we can organize what we see according to position $x$, time $t$, angle $\phi $, etc. Soon afterwards, as inquisitive beings, we try to figure out how we can move around in our playground. We define conjugate objects, which, quantum mechanically, are the generators of these movements. 
The momentum operator $p$ generates a spatial translation by $x_0 $ of the playground via the operation $\exp (-ix_0 p/\hbar )$ on a state vector describing the playground. The Hamiltonian $H$ generates a temporal translation by $t_0 $ via $\exp (-it_0 H/\hbar )$. The angular momentum operator $L$ generates a rotation by angle $\phi_{0} $ via $\exp (-i\phi_{0} L/\hbar )$. These operations form groups - the described structure is very natural from that point of view. So, our inquisitive nature leads us to organize our physical quantities in conjugate pairs, and in each instance, the product of the pair has dimensions of an action - hence, action takes on such a central role. In the case of the angle/angular momentum pair, since angles are dimensionless, angular momentum itself has same the units as an action, and one can swap measuring in units of $\hbar $ with measuring in units of an angular momentum characteristic of the system, as Carlo notes. For any dimensionful quantity, one has to provide a scale. It doesn't really make sense to declare a dimensionful quantity "large" or "small" without stating what one is comparing to. A fly will disagree with me about the size of a penny. $\hbar $ is an important scale for action because it controls whether our system will display quantum behavior (typical action comparable to $\hbar $) or classical behavior (typical action large compared to $\hbar $ - or, $\hbar $ small compared to a typical action, which is where the notion of taking $\hbar \rightarrow 0$ comes from, as described by Stefan Waldmann). One place where this becomes very apparent is, indeed, in uncertainty relations. The standard example is the position/momentum pair. If we define a position operator $x$ to partner the momentum operator $p$, the operator algebra encoding the transformation behavior described above is given by $[x,p]=i\hbar $. The corresponding uncertainty relation, $\Delta x \cdot \Delta p \geq \hbar /2$, follows directly from this. 
If the typical positions and momenta in the system are very large, as in a classical system, the uncertainties forced by this inequality become negligible by comparison, and quantum effects become invisible. (Note that we don't argue in quite the same simple fashion for the other conjugate pairs mentioned above. In the case of time/energy, this is because we don't treat time as a dynamical variable, and hence do not define a time operator - time is an external parameter, rigidly given by a ticking clock. Until we consider relativity. The case of angles is tricky because of the compact nature of an angle - so, also there, we set up the algebra differently). • 1 $\begingroup$ Dear Michael, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 12, 2019 at 6:21 About your Q1: I think that the simplest—and most obvious—way to think mathematically about the physical meaning of Planck's constant $h$ is that it is a kind of quantitative measure of the departure from commutativity: Although the word "quantization" does not have a uniquely defined mathematical meaning, it is almost always connected with the passage of our mathematical description from models built on commutative algebras of functions of (commutative) space-time coordinates to models involving non-commutative algebras of operators acting on Hilbert spaces. In this sense, the most familiar noncommutative object in physics is the algebra of canonical commutation relations CCRs: $[x,p]=i\hbar\mathbf{1}$, which comes in place of the Poisson bracket of canonical coordinates, as a non-commutative analogue of phase-space coordinates. Regarding Q3, Heisenberg's uncertainty relation $\Delta x\Delta p\geq \frac{\hbar}{2}$ is a direct consequence of the CCRs (as has already been mentioned in previous answers). 
So in some sense it might be reasonable to say that the quantitative measure of the departure from commutativity—that is, Planck's constant—provides at the same time a definite lower bound indicating the limits (imposed by physical reality itself and not technological limitations) of our probabilistic knowledge, as this is described in the frame of QM. About your Q2: On the side of physics, the value of $h$ comes from Planck's interpretation of black-body radiation. Planck considered a multiple-oscillator model with discrete energy spectrum (borrowing an older "trick" of Ludwig Boltzmann); he then obtained an expression for the entropy per oscillator and demanded that this expression should be consistent with Wien's law of radiation. This led him to $E=h\nu$. On the mathematics side, things are not so straightforward, mainly because, as mentioned by the OP, there are various forms of quantization for classical Hamiltonian systems. I think the simplest viewpoint is the one provided by (formal) deformation quantization: QM is considered as a deformation of classical mechanics, with the Planck constant being the deformation parameter. The algebra of smooth functions on the Poisson manifold is replaced with the same vector space but equipped with a new noncommutative associative unital product $\star$ (the Moyal product), whose commutator agrees, up to order $\hbar$, with the underlying Poisson bracket. This, combined with the Wigner–Weyl transform, provides the opportunity to consider QM as a "smooth" deformation of the algebra of classical observables rather than a "sharp" or "discontinuous" change of our view of the (mathematical) nature of the observables (functions on a manifold which suddenly become operators on a Hilbert space). Regarding Q4, I understand that the question is intimately related to the Correspondence principle or the so-called classical limit of QM introduced by Niels Bohr.
This roughly states that our formulation of QM should be able to recover classical mechanics via some continuity argument while $\frac{\hbar}{S}\to 0$. In my understanding this is not a rigorous mathematical postulate but rather a heuristic argument (quite common in the development of physical theories). However, its mathematical formulation is quite subtle: in some cases of simple systems and under quite binding assumptions (for example, if we restrict ourselves to coherent states, etc.) this can be interpreted as $\hbar\to 0$, but in general, simply setting $\hbar$ equal to zero produces indefinite limits or physically meaningless results (see arXiv:1201.0150 [quant-ph]): it is the values of various parameters - usually depending on the particular system - that should be taken into account, as has already been outlined in Stefan Waldmann's answer. Attempting to discuss these limiting procedures from a more rigorous mathematical viewpoint, I think it is group contractions with respect to some continuous subgroup - determined by the parameters to be considered together with $\hbar$ - that provide reasonable methods and results. (For example, this is the case in the frame of deformation quantization; the Moyal bracket reduces to the Poisson bracket up to $\hbar^2$ in this setting.) Interesting details on the origin and the motivations of this description can be found in: The classical limit of quantum mechanical correlation functions, K. Hepp, Comm. Math. Phys., 35, 265-277, (1974).
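The statement that the Moyal bracket reduces to the Poisson bracket up to order $\hbar^2$ can be checked symbolically. The sketch below is a generic illustration (not taken from the cited papers): it truncates the Moyal star product at a finite order in $\hbar$ and compares the two brackets for simple monomials in one degree of freedom.

```python
import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)

def star(f, g, order=4):
    """Moyal star product f * g on phase space (x, p), truncated at `order` in hbar."""
    total = sp.expand(f * g)  # n = 0 term
    for n in range(1, order + 1):
        term = sp.Integer(0)
        for k in range(n + 1):
            term += (sp.binomial(n, k) * sp.Integer(-1)**k
                     * sp.diff(f, x, n - k, p, k)
                     * sp.diff(g, x, k, p, n - k))
        total += (sp.I * hbar / 2)**n / sp.factorial(n) * term
    return sp.expand(total)

def moyal_bracket(f, g):
    return sp.expand((star(f, g) - star(g, f)) / (sp.I * hbar))

def poisson_bracket(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

# For quadratics the two brackets coincide exactly:
print(sp.simplify(moyal_bracket(x**2, p**2) - poisson_bracket(x**2, p**2)))
# For cubics an O(hbar^2) correction appears (here -3*hbar**2/2):
print(sp.simplify(moyal_bracket(x**3, p**3) - poisson_bracket(x**3, p**3)))
```

The first difference vanishes identically, while the second is purely of order $\hbar^2$, which is the "reduces to the Poisson bracket up to $\hbar^2$" statement in miniature.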
It is always important to have a broader sense of what we are looking for in the classical limit: Given the probabilistic nature of QM predictions vs the deterministic predictions of classical physics, it is important to keep in mind that apart from special states, it is not reasonable to expect to get the classical trajectories in the classical limit of QM (see for example What is the limit $\hbar\to 0$ of quantum theory); instead it seems more reasonable to expect a more "modern" interpretation of the correspondence principle: In the classical limit, QM probability distributions are expected to be identified with the probability distributions of suitable ensembles of classical trajectories. See for example Classical Limit and the WKB Approximation, Am. J. of Phys., L.S. Brown, 40, 371 (1972), where this last point is analyzed in a quite rigorous and technical manner. Maybe the article: The classical limit of quantum theory, R.F. Werner, arXiv:quant-ph/9504016 might be of some further interest. • $\begingroup$ Dear Konstantinos, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 12, 2019 at 6:21 The previous answers all give good discussions of the physical significance of $\hbar$ and the classical limit "$\hbar \to 0$" (in large quotation marks), but few of them discuss your "motivation" comment that $\hbar$ rarely appears in the study of quantum computing and information. David Mermin dedicates an entire section of his great paper "From Cbits to Qbits" to that question: Like my disapproving colleague, some physicists may be appalled to have finished what purports to be an exposition of quantum mechanics — indeed, of applied (well, gedanken applied) quantum mechanics — without ever having run into Planck’s constant. How can this be? The answer goes back to my first reason why enough quantum mechanics to understand quantum computation can be taught in a mere four hours. We are interested in discrete (2-state) systems and discrete (unitary) transformations. 
But Planck’s constant only appears in the context of continuously infinite systems (position eigenstates) and continuous families of transformations (time development) that act on them. Its role is to relate the conventional units in which we measure space and time, to the units in which it is quantum-mechanically natural to take the generators of the unitary transformations that produce translations in space or time. If we are not interested in location in continuous space and are only interested in global rather than infinitesimal unitary transformations, then $\hbar$ need never enter the story. The engineer, who must figure out how to implement unitary transformations acting over time on Qbits located in different regions of physical space, must indeed deal with $\hbar$ and with Hamiltonians that generate the unitary transformations out of which the computation is built. But the designer of algorithms for the finished machine need only deal with the resulting unitary transformations, from which $\hbar$ has disappeared as a result, for example, of judicious choices by the engineers of the times over which the interactions that produce the unitary transformations act. Deploring the absence of $\hbar$ from expositions of quantum computer science is rather like complaining that the $I$-$V$ curve for a $p$-$n$ junction never appears in expositions of classical computer science. It is to confuse computer science with computer engineering. So basically, quantum algorithms researchers are implicitly setting $\hbar = 1$, or more precisely they're sweeping it up in the various fixed time constants and so on that determine the gates' operation at the microscopic level, and only considering the value of the dimensionless ratios $(\Delta E) T/\hbar$ that arise from the Schrodinger equation. Universal physical constants like $c$ and $h$ have numerical values that are accidents of our choice of units, and often people choose units such that they equal 1. 
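The point about the dimensionless ratio can be made concrete with a tiny numerical sketch (my own illustration, not from Mermin's paper): a gate unitary $\exp(-iHt/\hbar)$ is unchanged if $\hbar$ and the evolution time are rescaled together, because only the combination $(\Delta E)\,t/\hbar$ enters.

```python
import numpy as np

def evolve(H, t, hbar):
    """U = exp(-i H t / hbar) via eigendecomposition (H Hermitian)."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t / hbar)) @ V.conj().T

sx = np.array([[0, 1], [1, 0]], dtype=complex)

dE = 3.0              # energy gap driving the gate (arbitrary units)
H = 0.5 * dE * sx

# Same dimensionless ratio dE*t/hbar -> same gate, whatever hbar "is"
U1 = evolve(H, t=1.0, hbar=1.0)
U2 = evolve(H, t=1e-34, hbar=1e-34)
print(np.allclose(U1, U2))  # True: only (dE*t)/hbar matters
```

The engineer picks $t$ so that $(\Delta E)\,t/\hbar$ hits the desired value; the algorithm designer only ever sees the resulting unitary.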
So the numerical value of Planck's constant has no significance. However, no change of units can make a nonzero number equal zero, so the fact that $h\ne0$ does have physical significance. One way in which this comes up is that in a phase-space plane of $(x,p)$, where $x$ is position and $p$ is momentum, we can't localize a state to an area smaller than $h$. In terms of concrete wavefunctions $\Psi(x)$, you can think of this as a property of Fourier transforms. More generally, it comes about because the position and momentum operators don't commute. In a universe where $h=0$, we would be able to localize states to points in phase space. This has implications for statistical mechanics. The third law of thermodynamics holds because of this discreteness of phase space. There is this whole idea of recovering classical physics by letting $h\rightarrow0$, which gives a correspondence principle between classical and quantum physics. This is not the only way of making the correspondence principle pop out, nor does it always work. Another common way in which this limit occurs is the limit of large numbers of particles. For example, nuclei have ~100 particles in them, so they are in between the classical and quantum limits. Since $h$ can't actually change (remember, it equals 1), a better way of thinking about the $h\rightarrow0$ limit is to instead think of a limit where the other variables describing your system are such that something else becomes big compared to $h$. For example, a spinning basketball has $\sim10^{34}\hbar$ of angular momentum, which is why we don't notice that its process of slowing down consists of quantum jumps. Another purpose is that the Planck constant plays almost no role (and, in fact, is hardly mentioned) in the literature of quantum computation and quantum information and I am curious about it. This is probably symptomatic of the fact that $h$ plays no role at all in the foundations of quantum mechanics. 
If you look at axiomatic formulations of quantum mechanics, they basically are descriptions of a kind of information theory. They don't have anything to do with space or motion. Examples below. Hardy, "Quantum Theory From Five Reasonable Axioms," https://arxiv.org/abs/quant-ph/0101012 Masanes and Mueller, "A derivation of quantum theory from physical requirements," https://arxiv.org/abs/1004.1483 Mackey, The Mathematical Foundations of Quantum Mechanics, 1963 • 1 $\begingroup$ Dear Ben, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 12, 2019 at 18:21 See: "Determination of the Planck constant using a watt balance with a superconducting magnet system at the National Institute of Standards and Technology" (Apr 24 2014), by Stephan Schlamminger, Darine Haddad, Frank Seifert, Leon S Chao, David B Newell, Ruimin Liu, Richard L Steiner, and Jon R Pratt, along with this APS Physics article: "Living with the New SI" (March 25, 2019, Physics 12, 33): "Over the past several decades, researchers have developed two kilogram-to-Planck experiments. The first, called the Kibble balance, works by offsetting the downward force of gravity on a chunk of metal with an upward magnetic force on a coil held in a magnetic field. Researchers tune the magnetic force by running current through the coil, and that current is measured in terms of Planck’s constant. The second experiment, conceived by the International Avogadro Project, involves fabrication of a near-perfect sphere of silicon. Using a combination of x-ray crystallography and optical interferometry, researchers can count the number of atoms in the sphere and connect its mass to Planck’s constant. In 2017, both of these methods returned values of Planck’s constant — based on the standard kilogram — having a precision of 10 parts per billion (ppb). These highly precise demonstrations have now allowed the metrology community to “turn the tables” by making Planck’s constant the defined quantity rather than the kilogram.
As a result, the kilogram inherits the uncertainty that previously appeared in the Planck measurement.". The Planck constant becomes $h = 6.626\,069\,79(30) \times 10^{-34}\ \mathrm{J\,s}$. That increases the uncertainty of the mass of the kilogram by 10 micrograms, a change at the 10 ppb level. A better physical definition of the Planck constant permits normalization. A simple explanation without too much overlap is on the Wikipedia webpage "Natural Units": In physics, natural units are physical units of measurement based only on universal physical constants. For example, the elementary charge $e$ is a natural unit of electric charge, and the speed of light $c$ is a natural unit of speed. A purely natural system of units has all of its units defined in this way, and usually such that the numerical values of the selected physical constants in terms of these units are exactly $1$. If you nondimensionalize your variables and "normalize" the numerical values of certain fundamental constants to $1$ you can simplify and speed up your calculations. You need to be careful not to mix dimensioned with dimensionless quantities when using Planck base units and Hartree atomic units. You also cannot normalize all your constants and need to choose your set carefully. There can also be a loss of precision: Planck units use the gravitational constant $G$, which is measurable in a laboratory only to four significant digits. There are also Planck units based on Lorentz–Heaviside units (instead of on the more conventional Gaussian units); these rationalized Planck units are defined so that $c = 4\pi G = \hbar = \epsilon_0 = k_\text{B} = 1$. In Wikipedia's explanation of derived units it mentions:

• A speed of 1 Planck length per Planck time is the speed of light in a vacuum, the maximum possible physical speed in special relativity; 1 nano-(Planck length per Planck time) is about 1.079 km/h.

With Planck units, the units are defined by properties of quantum mechanics and gravity.
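The conventional (non-rationalized) Planck units mentioned above follow directly from $G$, $\hbar$, and $c$ via the standard definitions $l_P=\sqrt{\hbar G/c^3}$, $t_P=\sqrt{\hbar G/c^5}$, $m_P=\sqrt{\hbar c/G}$; a minimal sketch:

```python
import math

G    = 6.674_30e-11       # m^3 kg^-1 s^-2; as noted above, known to only ~4 digits
hbar = 1.054_571_817e-34  # J*s
c    = 2.997_924_58e8     # m/s (exact in the SI)

l_P = math.sqrt(hbar * G / c**3)   # Planck length, ~1.6e-35 m
t_P = math.sqrt(hbar * G / c**5)   # Planck time,   ~5.4e-44 s
m_P = math.sqrt(hbar * c / G)      # Planck mass,   ~2.2e-8 kg

# A speed of 1 Planck length per Planck time is exactly c:
print(l_P / t_P, c)
```

This also makes the bullet above concrete: 1 nano-(Planck length per Planck time) is $c \times 10^{-9} \approx 0.2998$ m/s, i.e. about 1.079 km/h.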
Not coincidentally, the Planck unit of length is approximately the distance at which quantum gravity effects become important. Likewise, atomic units are based on the mass and charge of an electron, and not coincidentally the atomic unit of length is the Bohr radius describing the "orbit" of the electron in a hydrogen atom. In the Hartree system the numerical values of the following four fundamental physical constants are all unity by definition: the reduced Planck constant $\hbar$, the electron mass $m_e$, the elementary charge $e$, and the Coulomb constant $1/(4\pi\epsilon_0)$. In Hartree atomic units, the speed of light is approximately 137 atomic units of velocity. See the webpage "8.1: Atomic and Molecular Calculations are Expressed in Atomic Units" for an example of how setting the numerical values of four fundamental physical constants to unity permits simplification of the Hamiltonian. In Big Bang cosmology, the Planck epoch or Planck era is the earliest stage of the Big Bang, before the time passed was equal to the Planck time, $t_P$, or approximately $10^{-43}$ seconds. There is no currently available physical theory to describe such short times, and it is not clear in what sense the concept of time is meaningful for values smaller than the Planck time. It is generally assumed that quantum effects of gravity dominate physical interactions at this time scale. At this scale, the unified force of the Standard Model is assumed to be unified with gravitation. Inconceivably hot and dense, the state of the Planck epoch was succeeded by the Grand unification epoch, where gravitation is separated from the unified force of the Standard Model, in turn followed by the Inflationary epoch, which ended after about $10^{-32}$ seconds (or about $10^{10} \; t_P$).

Q2. How does the Planck constant appear in the mathematics of quantum mechanics? In particular, quantization is an important notion in mathematical physics and there are various forms of quantization for classical Hamiltonian systems. What is the role of the Planck constant in mathematical quantization?
See above: the Planck constant is a physical unit of "action" which sets the scale at which effects of quantum physics are genuinely important and physics is no longer well approximated by classical mechanics/classical field theory. See: "Planck's constant in geometric quantization". Any calculation of the position and momentum of an object (at the quantum level) involves some uncertainty, and Heisenberg's uncertainty principle states that complementary properties cannot be observed or measured simultaneously. The value of the Planck constant sets the lower bound in the uncertainty relation: $$\sigma _{x}\sigma _{p}\geq {\frac {\hbar }{2}}\,,~~\text{where ħ is the reduced Planck constant, h/(2π).}$$

Q4. What is the mathematical and physical meaning of letting the Planck constant tend to zero? (Or to infinity, if this ever happens.)

I'll leave you with this answer: When does ℏ→0 provide a valid transition from quantum to classical mechanics? When and why does it fail?

• 2 $\begingroup$ Dear Rob, many thanks for the answer and welcome to MO! $\endgroup$ – Gil Kalai Sep 14, 2019 at 19:24

Waves (electromagnetic, acoustic, etc.) carry energy. Classical physics calculates this energy, or rather the energy flux which, for example, is responsible for heating the surface on which sunlight falls. This energy is always proportional to the square of the wave amplitude. In order to explain an important puzzle with the radiation emitted by a "black body", in 1900 Planck introduced the hypothesis that there are "portions", or quanta, of radiation energy: ℏω, where ω is the frequency and ℏ is the Planck constant. Thus for a given frequency there is a minimal portion of energy that can be transmitted by a wave of any nature. This idea has allowed us to understand (among many other things) the stability of atoms. A hydrogen atom consists of an electron rotating around the proton.
Since a rotating charge should emit electromagnetic radiation, classically the rotating electron should lose energy and finally fall on the proton, and calculations show that this should happen within ~1 nanosecond. According to Planck and quantum mechanics, this does not happen because the quantum becomes larger than the actual energy of the electron: it rotates, but does not have enough energy for emitting a single quantum. Thus our existence, and the existence of the Universe as we know it, is due to the fact that the Planck constant is non-zero! The internal angular momentum (called "spin") of the electron, proton, and neutron is equal to ℏ/2; it is a quantum-mechanical phenomenon. For any quantum formula, the limit ℏ→0 gives the corresponding classical result (which, in some cases, can be zero or infinity, the latter case meaning that quantum mechanics is indispensable). Evidently, there are purely classical phenomena in which the elementary quantum is of no importance (e.g. the frequency of rotation of the wheels of my car is so low that the energy quantum is negligible, compared to the mechanical energy of the rotating wheel). Consequence: all quantum formulas contain the Planck constant, the limit ℏ→0 giving the classical result. In particular, quantum computing cannot exist in this limit. This means that there should be a condition for the possibility of QC of the form A·ℏ > 1. The expression for this A is presently unknown.

• 1 $\begingroup$ Dear Michel, welcome to MO! $\endgroup$ – Gil Kalai Sep 24, 2019 at 17:34

• $\begingroup$ The Planck constant ℏ is fundamental for quantum mechanics, like the speed of light c is fundamental for relativity. The obvious fact that one can use arbitrary units for physical quantities (and, in particular, choose units where ℏ=1, or c=1) is trivial and has no meaningful content, because what matters are the ratios between physical quantities, e.g. the Empire State building is 100 (?)
times taller than me, and this fact does not depend on the chosen units. $\endgroup$ Oct 2, 2019 at 10:03

• $\begingroup$ (cont) The harmonic oscillator with proper frequency ω can be described classically (with energy E being an arbitrary positive value) under the condition that E ≫ ℏω; otherwise it becomes important that the values of energy are discrete with energy difference ℏω. QM also explains why the electron in the hydrogen atom does not lose energy by radiating em waves and eventually fall onto the proton (classical electrodynamics calculates the time for this as 1 nanosecond). Because of QM this does not happen: there is a ground state with minimal energy -me^4/(2ℏ^2). $\endgroup$ Oct 2, 2019 at 10:20

• $\begingroup$ (cont) Thus, in the limit ℏ→0 all quantum mechanics (quantum computing included) disappears. It is rather bizarre that the theory of quantum computing has no use for the standard attributes of quantum mechanics: Planck constant, Energy, Hamiltonian, and Schroedinger equation. The single notion of "entanglement" is by far not sufficient to understand the properties and the evolution of a quantum system, like e.g. a quantum computer. $\endgroup$ Oct 2, 2019 at 10:31

For the Planck constant the rule is simple: any time you have a derivative term $\partial$ then there is also the Planck constant $\bar{h}\partial$. This is true for space $\bar{h}\nabla$ and for time $\bar{h}\partial_t$. For example we have the Schrödinger equation $$i\bar{h}\partial_t \phi = -\frac{\bar{h}^2}{2m}\Delta\phi +V\phi.$$ As mentioned in the other posts, a physicist would say it is a question of dimensions. A mathematician would say the world is a 4-dimensional manifold; some quantities are scalars but others, like $p=\bar{h}\partial$, belong to the tangent space or the cotangent space. These depend on the choice of the charts on the manifold (or just the basis if the manifold is simply a vector space).
So one can see $\bar{h}$ as the constant associated to the usual choice of basis, which in real life has been made once and for all by the International System of Units (at least if you follow the metric system).

Q2: Quantization can be an important and difficult step. The only rule here is to have $$[X,P]=i\bar{h}.$$ The usual choice is $X=x$ (the multiplicative operator) and $P=-i\bar{h}\partial_x$. But of course $X$ and $P$ play symmetric roles and one can choose $X=i\nabla_k$ and $P=\bar{h}k$ (the multiplicative operator). Both are equivalent by the Fourier transform. We usually expect to recover classical mechanics at some point. The first thing to mention here is the Ehrenfest theorem (https://en.wikipedia.org/wiki/Ehrenfest_theorem). With $H(x,p)$ a Hamiltonian depending on $x$ and $p$ (for example $H(x,p)=p^2/2m+V(x)$), we have $$\partial_t\langle X \rangle = \langle\partial_p H (X,P) \rangle ,\quad \partial_t\langle P \rangle = -\langle \partial_x H (X,P) \rangle $$ which are the classical Hamiltonian equations. This follows directly from the Schrödinger equation and the commutation rules of $X$ and $P$. The constant $\bar{h}$ appears if one now considers the evolution of nonlinear terms such as $\partial_t \langle X^2 \rangle$. From a physical point of view, the particle is a wave function diffusing in time.

Q3: From a physicist's point of view, the Heisenberg uncertainty principle played a major role in the quantum revolution. It gives a clear proof that there is no way to keep thinking of particles in the classical way, with position and motion, and one has to see particles also as waves (Schrödinger) or to use matrix mechanics (Heisenberg). From a mathematical point of view this is only an elementary remark: if a function is localized in space, then its Fourier transform is spread out. Another nice interpretation (when we deal with fermions) is that $\bar{h}$ ($\bar{h}^3$ in three dimensions) is the space needed to "park" a particle in the phase space.
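The commutation relation $[X,P]=i\bar{h}$ above can be checked numerically on a grid, approximating $P=-i\bar{h}\,\mathrm{d}/\mathrm{d}x$ by central finite differences. This is an illustrative sketch (the Gaussian test function and grid are arbitrary choices):

```python
import numpy as np

hbar = 1.0                          # work in units where hbar = 1
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2)                 # smooth test function, negligible at the edges

def P(f):
    """Momentum operator P = -i hbar d/dx via central differences."""
    return -1j * hbar * np.gradient(f, dx)

commutator = x * P(psi) - P(x * psi)          # [X, P] psi on the grid
err = np.max(np.abs(commutator - 1j * hbar * psi))
print(err)                                    # small, O(dx^2) discretization error
```

The residual is limited only by the finite-difference accuracy; refining the grid shrinks it quadratically.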
Here we can mention the Weyl law, https://en.wikipedia.org/wiki/Weyl_law.

Q4: Studying quantum systems in the limit $\bar{h}\rightarrow 0$ is a whole mathematical area called the semiclassical regime. Some of the most interesting results reveal a correspondence between classical trajectories and eigenvectors of the Hamiltonian (or usually just the Laplacian) with large eigenvalue (in some cases $\lambda\rightarrow \infty$ is equivalent to $\bar{h}\rightarrow 0$). I would also mention here the buzzword "Quantum Chaos".

Motivations: For quantum computing and quantum information, physicists work with systems that are already quantized, with a few quantum states. $\bar{h}$ doesn't play any role since they have already reduced their system. It only implies that in order to play with a few quantum states one should use only systems made of single atoms or electrons.

• $\begingroup$ Dear Raphael, many thanks for the answer! $\endgroup$ – Gil Kalai Sep 14, 2019 at 19:24

• $\begingroup$ "It only implies that in order to play with a few quantum states one should use only systems made of single atoms or electrons" - ANY system is made of single atoms or electrons! And $\hbar$ DOES play an important role, because a superposition of states with different energies $E_1$ and $E_2$ will oscillate in time with a frequency $\frac {E_1 - E_2} \hbar$ (according to Schroedinger's equation). $\endgroup$ Oct 3, 2019 at 9:58

• $\begingroup$ @RaphaelB4: The correct LaTeX code is \hbar instead of \bar h. $\endgroup$ – Alex M. Oct 3, 2019 at 10:35

There are already good answers for Q1-Q3, so I'll clarify Q4. In physics, quantities typically have a dimension, such as length, mass, time etc., and a unit, such as metre, kilogram, second, etc. Pure mathematical numbers have neither a dimension nor a unit. Let's consider time. A possible mathematical model for time is to use a one-dimensional oriented affine space $\mathbb{T}$ over the real numbers.
By choosing an origin ("time zero") $O \in \mathbb{T}$ and a ("positive") reference time $t_1 \in \mathbb{T}\setminus O$, such that $t_1$ appears after $O$ with respect to the orientation, the pair $(O, t_1)$ provides an oriented affine coordinate base. This coordinate base induces an isomorphism of oriented real affine spaces $\mathbb{T} \to \mathbb{A}_\mathbb{R}^1$, where the real affine space $\mathbb{A}_\mathbb{R}^1$ carries the orientation induced by the real numbers. In particular, we map $O \mapsto 0$ and $t_1 \mapsto 1$. This isomorphism of real affine spaces also induces an isomorphism of the corresponding real vector spaces of oriented time intervals (durations). In $\mathbb{T}$, the vector $T_{ref}$ with $O + T_{ref} = t_1$ gives a reference unit $T_{ref}$ for durations. Given a duration $T$, we obtain a real number $t$ by $t := T/T_{ref}$. In this sense, the reference unit $T_{ref}$ provides a time scale. An equation of time intervals can be made dimensionless by dividing each occurring time by the reference time $T_{ref}$ and also each variable of a duration by $T_{ref}$. Example. If we are given an equation for time events, which is $$2 s + T = 5 s,$$ and if my reference time length is $T_{ref} = 1 s$, then I will declare $t := T/T_{ref}$ and I will obtain in dimensionless form $$2 + t = 5.$$ This equation basically has the same structure and the same look, but it is dimensionless. It is an equation for real numbers only. Now, it is often reasonable to have physical processes, where different choices of a good reference time are appropriate. Say, we are given an atomic process, where a good reference time length is $T_1 := 10^{-12} s$, and a geological process, where a good reference time length is $T_2 := 10^{12} s$. If we make times dimensionless by $t := T/T_1$, then the other option would be to consider $T/T_2 = T_1 / T_2 \cdot T/T_1 = \varepsilon \cdot t$ with a parameter $$\varepsilon := T_1/T_2 \ll 1$$ which is (positive and) very small.
So, if we make atomic times dimensionless by considering $t$, and if we make geological times dimensionless by considering $\varepsilon t$, then the small parameter $\varepsilon$ will occur in our mathematical equations. A possible treatment would perhaps be to apply a perturbation method, where we take formally $\varepsilon \to 0+$ in the equations to obtain a reduced problem, which might be more tractable, and then try an expansion ansatz in $\varepsilon$. The limit $\varepsilon \to 0+$ is only a mathematical limit. As for the physical problem, it means that the quotient of the chosen reference times $T_1/T_2 \to 0+$, because $T_1 \ll T_2$, in order to obtain a proper reduced physical problem. Basically, the same thing happens with "$\hbar \to 0+$." One chooses a reference length, reference time, and reference mass, and one rewrites the equations in a dimensionless form. If we have another good reference time (or reference length or reference mass), then a small parameter $\varepsilon$ will appear, maybe in exactly the same positions where $\hbar$ has occurred in the original physical equations. So one simply writes $\hbar$ instead of $\varepsilon$, to keep the notational form of the original physical equations, but in fact, dimensionless, purely mathematical equations are meant, which now contain an additional small parameter.
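The two-scale construction above can be made concrete in a few lines (the numbers are the ones used in the example):

```python
# Nondimensionalize "2 s + T = 5 s" with the reference duration T_ref = 1 s
T_ref = 1.0                       # seconds, chosen reference duration
t = 5.0 / T_ref - 2.0 / T_ref     # dimensionless form: 2 + t = 5, so t = 3
T = t * T_ref                     # back to a duration: 3 seconds

# Two widely separated reference times give a small parameter epsilon
T1, T2 = 1e-12, 1e12              # atomic vs. geological reference times, in seconds
eps = T1 / T2                     # 1e-24; plays the formal role of "hbar -> 0+"

print(T, eps)
```

The point is only that $\varepsilon$ is a pure number produced by the choice of two scales, not a physical constant that can itself "tend to zero."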
The Copenhagen Interpretation of Quantum Theory (Annotated)

Heisenberg's "paradox" is that we must use the language and concepts of classical physics to describe the results of quantum physics. But Dirac thought new, albeit non-intuitive, concepts might arise from a careful study of quantum physics

The Copenhagen interpretation of quantum theory starts from a paradox. Any experiment in physics, whether it refers to the phenomena of daily life or to atomic events, is to be described in the terms of classical physics. The concepts of classical physics form the language by which we describe the arrangements of our experiments and state the results.
We cannot and should not replace these concepts by any others. Still the application of these concepts is limited by the relations of uncertainty. We must keep in mind this limited range of applicability of the classical concepts while using them, but we cannot and should not try to improve them.

For a better understanding of this paradox it is useful to compare the procedure for the theoretical interpretation of an experiment in classical physics and in quantum theory. In Newton's mechanics, for instance, we may start by measuring the position and the velocity of the planet whose motion we are going to study. The result of the observation is translated into mathematics by deriving numbers for the co-ordinates and the momenta of the planet from the observation. Then the equations of motion are used to derive from these values of the co-ordinates and momenta at a given time the values of these co-ordinates or any other properties of the system at a later time, and in this way the astronomer can predict the properties of the system at a later time. He can, for instance, predict the exact time for an eclipse of the moon.

The uncertainty principle limits the accuracy for the position and velocity of a particle

In quantum theory the procedure is slightly different. We could for instance be interested in the motion of an electron through a cloud chamber and could determine by some kind of observation the initial position and velocity of the electron. But this determination will not be accurate - it will at least contain the inaccuracies following from the uncertainty relations and will probably contain still larger errors due to the difficulty of the experiment. It is the first of these inaccuracies which allows us to translate the result of the observation into the mathematical scheme of quantum theory. A probability function is written down which represents the experimental situation at the time of the measurement, including even the possible errors of the measurement.
Even in classical physics there are errors in position and velocity that can be expressed as a probability. "Observed" means some human observer acquired new knowledge

This probability function represents a mixture of two things, partly a fact and partly our knowledge of a fact. It represents a fact in so far as it assigns at the initial time the probability unity (i.e., complete certainty) to the initial situation: the electron moving with the observed velocity at the observed position; 'observed' means observed within the accuracy of the experiment. It represents our knowledge in so far as another observer could perhaps know the position of the electron more accurately. The error in the experiment does - at least to some extent - not represent a property of the electron but a deficiency in our knowledge of the electron. Also this deficiency of knowledge is expressed in the probability function. In classical physics one should in a careful investigation also consider the error of the observation. As a result one would get a probability distribution for the initial values of the co-ordinates and velocities and therefore something very similar to the probability function in quantum mechanics. Only the necessary uncertainty due to the uncertainty relations is lacking in classical physics.

The Schrödinger equation of motion gives the probabilities for position at later times, but it does not give any specific positions - an actual path of the particle - just all the possible positions, with calculable probabilities for each position

When the probability function in quantum theory has been determined at the initial time from the observation, one can from the laws of quantum theory calculate the probability function at any later time and can thereby determine the probability for a measurement giving a specified value of the measured quantity. We can, for instance, predict the probability for finding the electron at a later time at a given point in the cloud chamber.
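The calculation described above — evolving the probability function from its initial form to a later time — can be sketched for a free particle, where the Schrödinger evolution is exact in Fourier space. This is an illustrative sketch (the units, packet width, and time are arbitrary choices, not values from the text):

```python
import numpy as np

hbar, m = 1.0, 1.0
x = np.linspace(-50.0, 50.0, 4096)
dx = x[1] - x[0]

# Initial probability function: Gaussian wave packet of width sigma, at rest
sigma = 1.0
psi0 = np.exp(-x**2 / (4.0 * sigma**2))
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize

# Free-particle evolution: each Fourier mode picks up exp(-i hbar k^2 t / 2m)
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
t = 5.0
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2.0 * m)))

# |psi(x,t)|^2 gives the probability of finding the particle at each point later;
# the packet spreads: sigma(t)^2 = sigma^2 + (hbar t / (2 m sigma))^2
var = np.sum(x**2 * np.abs(psi_t)**2) * dx
print(np.sqrt(var), np.sqrt(sigma**2 + (hbar * t / (2.0 * m * sigma))**2))
```

Note that the code only propagates probabilities, in line with the text: nothing in it singles out an actual path of the particle.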
It should be emphasised, however, that the probability function does not in itself represent a course of events in the course of time. It represents a tendency for events and our knowledge of events. The probability function can be connected with reality only if one essential condition is fulfilled: if a new measurement is made to determine a certain property of the system. Only then does the probability function allow us to calculate the probable result of the new measurement. The result of the measurement again will be stated in terms of classical physics. Therefore, the theoretical interpretation of an experiment requires three distinct steps: (1) the translation of the initial experimental situation into a probability function; (2) the following up of this function in the course of time; (3) the statement of a new measurement to be made of the system, the result of which can then be calculated from the probability function. For the first step the fulfilment of the uncertainty relations is a necessary condition. The second step cannot be described in terms of the classical concepts; there is no description of what happens to the system between the initial observation and the next measurement. It is only in the third step that we change over again from the 'possible' to the 'actual'.

The next few paragraphs describe Heisenberg's microscope example, which shows how an observation must disturb the particle

Let us illustrate these three steps in a simple ideal experiment. It has been said that the atom consists of a nucleus and electrons moving around the nucleus; it has also been stated that the concept of an electronic orbit is doubtful. One could argue that it should at least in principle be possible to observe the electron in its orbit. One should simply look at the atom through a microscope of a very high resolving power, then one would see the electron moving in its orbit.
Such a high resolving power could, to be sure, not be obtained by a microscope using ordinary light, since the inaccuracy of the measurement of the position can never be smaller than the wave length of the light. But a microscope using γ-rays with a wave length smaller than the size of the atom would do. Such a microscope has not yet been constructed but that should not prevent us from discussing the ideal experiment. Is the first step, the translation of the result of the observation into a probability function, possible? It is possible only if the uncertainty relation is fulfilled after the observation. The position of the electron will be known with an accuracy given by the wave length of the γ-ray. The electron may have been practically at rest before the observation. But in the act of observation at least one light quantum of the γ-ray must have passed the microscope and must first have been deflected by the electron. Therefore, the electron has been pushed by the light quantum, it has changed its momentum and its velocity, and one can show that the uncertainty of this change is just big enough to guarantee the validity of the uncertainty relations. Therefore, there is no difficulty with the first step. At the same time one can easily see that there is no way of observing the orbit of the electron around the nucleus. The second step shows a wave packet moving not around the nucleus but away from the atom, because the first light quantum will have knocked the electron out from the atom. The momentum of the light quantum of the γ-ray is much bigger than the original momentum of the electron if the wave length of the γ-ray is much smaller than the size of the atom. Therefore, the first light quantum is sufficient to knock the electron out of the atom and one can never observe more than one point in the orbit of the electron; therefore, there is no orbit in the ordinary sense. The next observation - the third step - will show the electron on its path from the atom.
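Heisenberg's quantitative claim here — a γ quantum with wavelength much smaller than the atom carries far more momentum than the electron's own — is easy to check. The 0.1 Å wavelength below is an illustrative choice, and the electron momentum is estimated by the standard $\hbar/a_0$ scale for hydrogen:

```python
import math

h    = 6.626_070_15e-34    # J*s, Planck constant (exact in the SI)
hbar = h / (2.0 * math.pi)
a0   = 5.291_772_109e-11   # m, Bohr radius (size scale of the atom)

lam = 1.0e-11              # 0.1 angstrom: gamma-ray wavelength << atom size
p_photon = h / lam         # momentum of one light quantum, p = h/lambda
p_electron = hbar / a0     # typical electron momentum in hydrogen, ~hbar/a0

print(p_photon / p_electron)   # ~33: one quantum easily knocks the electron out
```

The ratio is just $2\pi a_0/\lambda$, so it grows as the wavelength shrinks, which is exactly the trade-off the microscope argument turns on.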
We cannot say the particle has a classical path. Is this an epistemological problem or an ontological problem? Quite generally there is no way of describing what happens between two consecutive observations. It is of course tempting to say that the electron must have been somewhere between the two observations and that therefore the electron must have described some kind of path or orbit even if it may be impossible to know which path. This would be a reasonable argument in classical physics. But in quantum theory it would be a misuse of the language which, as we will see later, cannot be justified. We can leave it open for the moment, whether this warning is a statement about the way in which we should talk about atomic events or a statement about the events themselves, whether it refers to epistemology or to ontology. In any case we have to be very cautious about the wording of any statement concerning the behaviour of atomic particles. Actually we need not speak of particles at all. For many experiments it is more convenient to speak of matter waves; for instance, of stationary matter waves around the atomic nucleus. Such a description would directly contradict the other description if one does not pay attention to the limitations given by the uncertainty relations. Through the limitations the contradiction is avoided. The use of 'matter waves' is convenient, for example, when dealing with the radiation emitted by the atom. By means of its frequencies and intensities the radiation gives information about the oscillating charge distribution in the atom, and there the wave picture comes much nearer to the truth than the particle picture. Bohr says the wave picture and particle picture are complementary. He uses complementarity frequently in describing the proper interpretation of the new quantum theory. Position and momentum are complementary. 
Space-time descriptions (usually waves with positions in time) are complementary to deterministic (i.e., causal) descriptions (usually particles with momentum and energy). The deterministic and continuous evolution of the probability is complementary to the discontinuous observation of an actual position. Therefore, Bohr advocated the use of both pictures, which he called 'complementary' to each other. The two pictures are of course mutually exclusive, because a certain thing cannot at the same time be a particle (i.e., substance confined to a very small volume) and a wave (i.e., a field spread out over a large space), but the two complement each other. By playing with both pictures, by going from the one picture to the other and back again, we finally get the right impression of the strange kind of reality behind our atomic experiments. Bohr uses the concept of 'complementarity' at several places in the interpretation of quantum theory. The knowledge of the position of a particle is complementary to the knowledge of its velocity or momentum. If we know the one with high accuracy we cannot know the other with high accuracy; still we must know both for determining the behaviour of the system. The space-time description of the atomic events is complementary to their deterministic description. The probability function obeys an equation of motion as the coordinates did in Newtonian mechanics; its change in the course of time is completely determined by the quantum mechanical equation, but it does not allow a description in space and time. The observation, on the other hand, enforces the description in space and time but breaks the determined continuity of the probability function by changing our knowledge of the system. Were waves vs. particles a new philosophical dualism, the origin of Bohr's complementarity? 
Generally the dualism between two different descriptions of the same reality is no longer a difficulty since we know from the mathematical formulation of the theory that contradictions cannot arise. The dualism between the two complementary pictures - waves and particles - is also clearly brought out in the flexibility of the mathematical scheme. The formalism is normally written to resemble Newtonian mechanics, with equations of motion for the coordinates and the momenta of the particles. But by a simple transformation it can be rewritten to resemble a wave equation for an ordinary three-dimensional matter wave. Therefore, this possibility of playing with different complementary pictures has its analogy in the different transformations of the mathematical scheme; it does not lead to any difficulties in the Copenhagen interpretation of quantum theory. A real difficulty in the understanding of this interpretation arises, however, when one asks the famous question: But what happens 'really' in an atomic event? It has been said before that the mechanism and the results of an observation can always be stated in terms of the classical concepts. But what one deduces from an observation is a probability function, a mathematical expression that combines statements about possibilities or tendencies with statements about our knowledge of facts. So we cannot completely objectify the result of an observation, we cannot describe what 'happens' between this observation and the next. This looks as if we had introduced an element of subjectivism into the theory, as if we meant to say: what happens depends on our way of observing it or on the fact that we observe it. Before discussing this problem of subjectivism it is necessary to explain quite clearly why one would get into hopeless difficulties if one tried to describe what happens between two consecutive observations. 
Here is the famous two-slit experiment. For this purpose it is convenient to discuss the following ideal experiment: We assume that a small source of monochromatic light radiates toward a black screen with two small holes in it. The diameter of the holes may be not much bigger than the wave length of the light, but their distance will be very much bigger. At some distance behind the screen a photographic plate registers the incident light. If one describes this experiment in terms of the wave picture, one says that the primary wave penetrates through the two holes, there will be secondary spherical waves starting from the holes that interfere with one another, and the interference will produce a pattern of varying intensity on the photographic plate. The blackening of the photographic plate is a quantum process, a chemical reaction produced by single light quanta. Therefore, it must also be possible to describe the experiment in terms of light quanta. If it would be permissible to say what happens to the single light quantum between its emission from the light source and its absorption in the photographic plate, one could argue as follows: The single light quantum can come through the first hole or through the second one. If it goes through the first hole and is scattered there, its probability for being absorbed at a certain point of the photographic plate cannot depend upon whether the second hole is closed or open. The probability distribution on the plate will be the same as if only the first hole was open. If the experiment is repeated many times and one takes together all cases in which the light quantum has gone through the first hole, the blackening of the plate due to these cases will correspond to this probability distribution. 
If one considers only those light quanta that go through the second hole, the blackening should correspond to a probability distribution derived from the assumption that only the second hole is open. The total blackening, therefore, should just be the sum of the blackenings in the two cases; in other words, there should be no interference pattern. But we know this is not correct, and the experiment will show the interference pattern. Therefore, the statement that any light quantum must have gone either through the first or through the second hole is problematic and leads to contradictions. This example shows clearly that the concept of the probability function does not allow a description of what happens between two observations. Any attempt to find such a description would lead to contradictions; this must mean that the term 'happens' is restricted to the observation. Here begins some confusion about the role of the observer. Does "reality" depend on whether we observe it? Now, this is a very strange result, since it seems to indicate that the observation plays a decisive role in the event and that the reality varies, depending upon whether we observe it or not. To make this point clearer we have to analyse the process of observation more closely. To begin with, it is important to remember that in natural science we are not interested in the universe as a whole, including ourselves, but we direct our attention to some part of the universe and make that the object of our studies. In atomic physics this part is usually a very small object, an atomic particle or a group of such particles, sometimes much larger - the size does not matter; but it is important that a large part of the universe, including ourselves, does not belong to the object. Now, the theoretical interpretation of an experiment starts with the two steps that have been discussed. 
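The quantitative content of the two-slit argument can be sketched numerically (all dimensions below - wavelength, hole separation, distance to the plate - are illustrative assumptions, not values from the text): adding the complex amplitudes from the two holes and then squaring gives an interference pattern, while squaring first and then adding, which is the "each quantum went through one definite hole" assumption, gives no fringes at all.

```python
import numpy as np

# Two holes act as secondary point sources.  Compare |a1 + a2|^2
# (amplitudes added, then squared) with |a1|^2 + |a2|^2 (the
# "each quantum went through one definite hole" assumption).
wavelength = 500e-9      # assumed: visible light, m
k = 2 * np.pi / wavelength
d = 50e-6                # assumed hole separation, m
L = 1.0                  # assumed screen-to-plate distance, m

y = np.linspace(-0.05, 0.05, 2001)        # positions on the plate, m
r1 = np.sqrt(L**2 + (y - d / 2)**2)       # path length from hole 1
r2 = np.sqrt(L**2 + (y + d / 2)**2)       # path length from hole 2
a1 = np.exp(1j * k * r1) / r1             # spherical-wave amplitudes
a2 = np.exp(1j * k * r2) / r2

both_open = np.abs(a1 + a2)**2            # what the plate records
one_or_other = np.abs(a1)**2 + np.abs(a2)**2   # no interference term

# Bright fringes reach twice the no-interference intensity and the
# dark fringes drop to almost zero - the interference pattern.
print(both_open.max() / one_or_other.max())
```

The cross term 2·Re(a1·conj(a2)) is exactly what the "one hole or the other" reasoning discards, and it is what the photographic plate actually records.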
In the first step we have to describe the arrangement of the experiment, eventually combined with a first observation, in terms of classical physics and translate this description into a probability function. This probability function follows the laws of quantum theory, and its change in the course of time, which is continuous, can be calculated from the initial conditions; this is the second step. "Possibilities" are perfectly understandable for the lay person. "Tendencies" and Aristotle's "potentia" are unnecessary. For each possibility, quantum mechanics lets us calculate the probability. The probability function combines objective and subjective elements. It contains statements about possibilities or better tendencies ('potentia' in Aristotelian philosophy), and these statements are completely objective, they do not depend on any observer; and it contains statements about our knowledge of the system, which of course are subjective in so far as they may be different for different observers. In ideal cases the subjective element in the probability function may be practically negligible as compared with the objective one. The physicists then speak of a 'pure case'. When we now come to the next observation, the result of which should be predicted from the theory, it is very important to realize that our object has to be in contact with the other part of the world, namely, the experimental arrangement, the measuring rod, etc., before or at least at the moment of observation. This means that the equation of motion for the probability function does now contain the influence of the interaction with the measuring device. Describing the measuring apparatus in classical terms does not mean it is not a quantum object. 
Heisenberg thinks classical terms are necessary for us to communicate knowledge. This influence introduces a new element of uncertainty, since the measuring device is necessarily described in the terms of classical physics; such a description contains all the uncertainties concerning the microscopic structure of the device which we know from thermodynamics, and since the device is connected with the rest of the world, it contains in fact the uncertainties of the microscopic structure of the whole world. These uncertainties may be called objective in so far as they are simply a consequence of the description in the terms of classical physics and do not depend on any observer. They may be called subjective in so far as they refer to our incomplete knowledge of the world. After this interaction has taken place, the probability function contains the objective element of tendency and the subjective element of incomplete knowledge, even if it has been a 'pure case' before. It is for this reason that the result of the observation cannot generally be predicted with certainty; what can be predicted is the probability of a certain result of the observation, and this statement about the probability can be checked by repeating the experiment many times. The probability function - unlike the common procedure in Newtonian mechanics - does not describe a certain event but, at least during the process of observation, a whole ensemble of possible events. Quantum mechanical systems evolve in two ways: first, the wave function deterministically explores all the possibilities for interaction; second, the particle randomly chooses one of those possibilities to become actual. The discontinuous transition from "possible" to "actual" should not be confused with the Heisenberg "cut" or with the transition from quantum to classical. This discontinuity (or "collapse" of probabilities) registers new information first at the quantum level. 
Quantum information is subsequently amplified in the macroscopic apparatus and only later recorded as new knowledge in the mind of the observer. The observation itself changes the probability function discontinuously; it selects of all possible events the actual one that has taken place. Since through the observation our knowledge of the system has changed discontinuously, its mathematical representation also has undergone the discontinuous change and we speak of a 'quantum jump'. When the old adage 'Natura non facit saltus' is used as a basis for criticism of quantum theory, we can reply that certainly our knowledge can change suddenly and that this fact justifies the use of the term 'quantum jump'. Therefore, the transition from the 'possible' to the 'actual' takes place during the act of observation. If we want to describe what happens in an atomic event, we have to realize that the word 'happens' can apply only to the observation, not to the state of affairs between two observations. It applies to the physical, not the psychical act of observation, and we may say that the transition from the 'possible' to the 'actual' takes place as soon as the interaction of the object with the measuring device, and thereby with the rest of the world, has come into play; it is not connected with the act of registration of the result by the mind of the observer. The discontinuous change in the probability function, however, takes place with the act of registration, because it is the discontinuous change of our knowledge in the instant of registration that has its image in the discontinuous change of the probability function. To what extent, then, have we finally come to an objective description of the world, especially of the atomic world? In classical physics science started from the belief - or should one say from the illusion? - that we could describe the world or at least parts of the world without any reference to ourselves. This is actually possible to a large extent. 
We know that the city of London exists whether we see it or not. It may be said that classical physics is just that idealisation in which we can speak about parts of the world without any reference to ourselves. Its success has led to the general ideal of an objective description of the world. Objectivity has become the first criterion for the value of any scientific result. Does the Copenhagen interpretation of quantum theory still comply with this ideal? One may perhaps say that quantum theory corresponds to this ideal as far as possible. John von Neumann's and Eugene Wigner's claims that a "conscious observer" is needed for a quantum process to become actual seem to be no part of the original Copenhagen Interpretation of Heisenberg? It has been stated in the beginning that the Copenhagen interpretation of quantum theory starts with a paradox. It starts from the fact that we describe our experiments in the terms of classical physics and at the same time from the knowledge that these concepts do not fit nature accurately. The tension between these two starting points is the root of the statistical character of quantum theory. Therefore, it has sometimes been suggested that one should depart from the classical concepts altogether and that a radical change in the concepts used for describing the experiments might possibly lead back to a non-statistical, completely objective description of nature. This suggestion, however, rests upon a misunderstanding. The concepts of classical physics are just a refinement of the concepts of daily life and are an essential part of the language which forms the basis of all natural science. Our actual situation in science is such that we do use the classical concepts for the description of the experiments, and it was the problem of quantum theory to find theoretical interpretation of the experiments on this basis. There is no use in discussing what could be done if we were other beings than we are. 
At this point we have to realize, as von Weizsäcker has put it, that 'Nature is earlier than man, but man is earlier than natural science.' The first part of the sentence justifies classical physics, with its ideal of complete objectivity. The second part tells us why we cannot escape the paradox of quantum theory, namely, the necessity of using the classical concepts. We have to add some comments on the actual procedure in the quantum-theoretical interpretation of atomic events. It has been said that we always start with a division of the world into an object, which we are going to study, and the rest of the world, and that this division is to some extent arbitrary. The measuring apparatus could be treated quantum mechanically. It is a quantum object. But the location of the Heisenberg "cut" is arbitrary. We still must use classical concepts (the "paradox"). It should indeed not make any difference in the final result if we, e.g., add some part of the measuring device or the whole device to the object and apply the laws of quantum theory to this more complicated object. It can be shown that such an alteration of the theoretical treatment would not alter the predictions concerning a given experiment. This follows mathematically from the fact that the laws of quantum theory are for the phenomena in which Planck's constant can be considered as a very small quantity, approximately identical with the classical laws. But it would be a mistake to believe that this application of the quantum-theoretical laws to the measuring device could help to avoid the fundamental paradox of quantum theory. The measuring device deserves this name only if it is in close contact with the rest of the world, if there is an interaction between the device and the observer. Therefore, the uncertainty with respect to the microscopic behaviour of the world will enter into the quantum-theoretical system here just as well as in the first interpretation. 
If the measuring device would be isolated from the rest of the world, it would be neither a measuring device nor could it be described in the terms of classical physics at all. It is not arbitrary that we somewhere separate the "object" of study from the "subjective" physicist and the tools made by experimenters. With regard to this situation Bohr has emphasised that it is more realistic to state that the division into the object and the rest of the world is not arbitrary. Our actual situation in research work in atomic physics is usually this: we wish to understand a certain phenomenon, we wish to recognise how this phenomenon follows from the general laws of nature. Therefore that part of matter or radiation which takes part in the phenomenon is the natural 'object' in the theoretical treatment and should be separated in this respect from the tools used to study the phenomenon. This again emphasises a subjective element in the description of atomic events, since the measuring device has been constructed by the observer, and we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning. Our scientific work in physics consists in asking questions about nature in the language that we possess and trying to get an answer from experiment by the means that are at our disposal. In this way quantum theory reminds us, as Bohr has put it, of the old wisdom that when searching for harmony in life one must never forget that in the drama of existence we are ourselves both players and spectators. It is understandable that in our scientific relation to nature our own activity becomes very important when we have to deal with parts of nature into which we can penetrate only by using the most elaborate tools. "The Copenhagen Interpretation of Quantum Theory" was the third of the Gifford Lectures given by Heisenberg in the winter of 1955/56 at St. Andrews University, Scotland. 
The lectures have been published in the book Werner Heisenberg: Physics and Philosophy (Harper & Brothers, New York, USA, 1958).
• Knowledge about quantum processes and events must be expressed in classical language (a paradox?). "Observed" means some new knowledge in the mind of an observer.
• The uncertainty principle adds inaccuracies that are not experimental errors.
• The probability function (a wave) allows us to calculate future possibilities. One possibility discontinuously and randomly becomes an actuality (a particle). We cannot say the particle has a classical path.
• The wave picture and particle picture are complementary. Position and momentum are also complementary. The deterministic and continuous evolution of the probability is complementary to the discontinuous appearance of an actual position.
• Quantum theory does not introduce the mind of the physicist as a part of the atomic event.
• Quantum theory is a statistical theory. It does not describe a certain event but a whole ensemble of possible events. The probabilities are not classical (epistemic) probabilities.
• The measuring apparatus can be treated as a quantum object.
News & Events Professors Newman and Orphan Named MacArthur Fellows Tags: honors research highlights ESE Dianne Newman Victoria Orphan Graduate Student Wins Best Paper Prize Electrical Engineering graduate student Chun-Lin Liu, working with Professor Vaidyanathan, has received the best paper prize for his paper entitled "High Order Super Nested Arrays". The prize was presented to him at the 2016 IEEE Sensor Array and Multichannel Signal Processing Workshop. [Read the paper] Tags: EE honors research highlights P. P. Vaidyanathan Chun-Lin Liu New Breed of Optical Soliton Wave Discovered Tags: APhMS research highlights Kerry Vahala Counting on Grains of Sand José E. Andrade, Professor of Civil and Mechanical Engineering; Executive Officer for Mechanical and Civil Engineering, and colleagues have developed a new method that measures the way forces move through granular materials—one that could improve our understanding of everything from how soils bear the weight of buildings to what stresses are at work deep below the surface of the earth. [Caltech story] Tags: research highlights MCE Jose Andrade The Utility of Instability Professors Dennis M. Kochmann and Chiara Daraio along with colleagues from Harvard have designed and created mechanical chains made of soft matter that can transmit signals across long distances. Because they are flexible, the circuits could be used in machines such as soft robots or lightweight aircraft constructed from pliable, nonmetallic materials. "Engineers tend to shy away from instability." "Though there are many applications, the fundamental principles that we explore are most exciting to me," Kochmann says. "These nonlinear systems show very similar behavior to materials at the atomic scale but these are difficult to access experimentally or computationally. Now we have built a simple macroscale analogue that mimics how they behave." 
[Caltech story] Tags: research highlights Chiara Daraio GALCIT MCE Dennis Kochmann Improving Computer Graphics with Quantum Mechanics The Schrödinger equation, the basic description of quantum mechanical behavior, can be used to describe the motion of superfluids—fluids, supercooled to temperatures near absolute zero, that behave as though they are without viscosity. Professor Peter Schröder and his colleagues realized that the same equation with some small modifications can also be used to describe vorticity-dominated phenomena of fluids at the macroscopic level, from smoke gently rising from a flame to the concentrated vorticity of a twister. [Caltech story & video] Tags: research highlights CMS Peter Schroeder Counting L.A.’s Trees Professor Pietro Perona has developed a method using Google Earth and Google Street View to count the trees in the city of Los Angeles. The process of counting the trees using human tree counters is very expensive and would cost about $3 million today. The last time the city did such counting was more than two decades ago and at the time there were 700,000 street trees. Perona has tested the methodology in a section of Pasadena where the city recently commissioned a sidewalk survey. By comparing the results to the known inventory, he determined that the computer was about 80% accurate. [LA Times story] [KPCC story] Tags: EE research highlights CMS Pietro Perona A Microscopic Glowing Van Gogh Paul Rothemund, Research Professor of Bioengineering, Computing and Mathematical Sciences, and Computation and Neural Systems, and colleagues have developed a technique that allows manmade DNA shapes to be placed wherever desired, to within a margin of error of just 20 nanometers. This technique removes a major hurdle for the large-scale integration of molecular devices on chips. As a demonstration of the technique’s capabilities the group has created one of the world's smallest reproductions of Vincent van Gogh's The Starry Night. 
[Caltech story] Tags: research highlights CMS Paul Rothemund Community Seismic Network Detected Air Pulse From Refinery Explosion The Community Seismic Network’s (CSN) tight network of low-cost detectors is improving the resolution of seismic data gathering and could offer city inspectors crucial information on building damage after a quake. On February 18, 2015, an explosion rattled the ExxonMobil refinery in Torrance, causing ground shaking equivalent to that of a magnitude-2.0 earthquake and blasting out an air pressure wave similar to a sonic boom. Traveling at 343 meters per second the air pressure wave reached a 52-story high-rise in downtown Los Angeles 66 seconds after the blast. The building's seismometers, which are part of the CSN, noted and recorded the motion of each individual floor. "We want first responders, structural engineers, and facilities engineers to be able to make decisions based on what the data say," explained Monica Kohler, Research Assistant Professor of Mechanical and Civil Engineering, and the lead author of a paper detailing the high-rise's response that recently appeared in the journal Earthquake Spectra. [Caltech story] Tags: research highlights MCE Monica Kohler Realtime Camera Planning Yisong Yue, Assistant Professor of Computing and Mathematical Sciences, is working with colleagues at Disney Research to develop machine-learning algorithms to make automated cameras more human-like.  Professor Yue's research group is generally interested in building AI systems that imitate demonstrated behavior, including laboratory animals, basketball players, humans playing video games, etc.  In this recent work with Disney Research, they are developing an automated camera system that learns how best to film sports matches by watching how human camera operators behave at particular moments. Early testing shows that its shots are far smoother than other automated cameras. 
[Learn more about the applications] [Learn more about the theory] [techradar story] [Sports Illustrated story] Tags: research highlights CMS Yisong Yue
In mathematics and physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it travels at constant speed; solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. ("Dispersive effects" refer to dispersion relations, relationships between the frequency and the speed of waves in the medium.) Solitons are found in many physical phenomena, as they arise as the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described by John Scott Russell (1808–1882), who observed a solitary wave in the Union Canal (a canal in Scotland), reproduced the phenomenon in a wave tank, and named it the "Wave of Translation". A single definition of a soliton is difficult to procure. Drazin and Johnson (1989) ascribe three properties to solitons:
1. They are of permanent form;
2. They are localised within a region;
3. They can interact with other solitons, and emerge from the collision unchanged, except for a phase shift.
More formal definitions exist, but they require substantial mathematics. On the other hand, some scientists use the term soliton for phenomena that do not quite have these three properties (for instance, the 'light bullets' of nonlinear optics are often called solitons despite losing energy during interaction). To see how dispersion and non-linearity can interact to produce permanent and localized wave forms, consider a pulse of light traveling in glass. This pulse can be thought of as consisting of light of several different frequencies; since glass shows dispersion, these different frequencies will travel at different speeds and the shape of the pulse will therefore change over time. However, there is also the non-linear Kerr effect: the speed of light of a given frequency depends on the light's amplitude or strength. 
If the pulse has just the right shape, the Kerr effect will exactly cancel the effect of dispersion, and the pulse's shape won't change over time: a soliton. See soliton (optics) for a much more detailed description. Many exactly solvable models have soliton solutions, including the Korteweg-de Vries equation, the nonlinear Schrödinger equation, the coupled nonlinear Schrödinger equation, and the sine-Gordon equation. The soliton solutions are typically obtained by means of the inverse scattering transform and owe their stability to the integrability of the field equations. The mathematical theory of these equations is a broad and very active field of mathematical research. Some types of tidal bore, a wave phenomenon of a few rivers including the River Severn, are 'undular': a wavefront followed by a train of solitons. Other solitons occur as the undersea internal waves, initiated by seabed topography, that propagate on the oceanic pycnocline. Atmospheric solitons also exist, such as the Morning Glory Cloud of the Gulf of Carpentaria, where pressure solitons travelling in a temperature inversion layer produce vast linear roll clouds. The recent and not widely accepted soliton model in neuroscience proposes to explain the signal conduction within neurons as pressure solitons. A topological soliton, or topological defect, is any solution of a set of partial differential equations that is stable against decay to the "trivial solution" due to topological constraints, rather than due to the integrability of the field equations. The constraint arises almost always because the differential equations must obey a set of boundary conditions, and the boundary has a non-trivial homotopy group, preserved by the differential equations. Thus, the solutions of the differential equations can be classified into homotopy classes. 
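The balance between nonlinearity and dispersion can be checked directly for the Korteweg-de Vries equation u_t + 6 u u_x + u_xxx = 0, whose one-soliton solution is u(x, t) = (c/2) sech^2(sqrt(c) (x - c t) / 2). A minimal numerical sketch (the grid and the speed c below are arbitrary choices): the nonlinear term 6 u u_x and the dispersive term u_xxx are each large on their own, but their sum cancels u_t almost exactly.

```python
import numpy as np

# One-soliton solution of the KdV equation  u_t + 6*u*u_x + u_xxx = 0:
#   u(x, t) = (c/2) * sech^2( sqrt(c)/2 * (x - c*t) )
def soliton(x, t, c):
    return 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - c * t))**2

c = 4.0                              # soliton speed (amplitude is c/2)
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
dt = 1e-6

u = soliton(x, 0.0, c)
u_t = (soliton(x, dt, c) - soliton(x, -dt, c)) / (2 * dt)  # time derivative
u_x = np.gradient(u, dx)                                   # spatial derivatives
u_xxx = np.gradient(np.gradient(u_x, dx), dx)

# The three terms cancel: the residual is tiny compared with each term.
residual = u_t + 6 * u * u_x + u_xxx
print(np.max(np.abs(residual)) / np.max(np.abs(u_t)))
```

Note that faster KdV solitons are also taller and narrower: the amplitude c/2 and the width 2/sqrt(c) are tied to the speed c, which is the signature of the nonlinear-dispersive balance.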
There is no continuous transformation that will map a solution in one homotopy class to another; thus the solutions are truly distinct, and maintain their integrity, even in the face of extremely powerful forces. Examples of topological solitons include the screw dislocation in a crystalline lattice, the Dirac string and the magnetic monopole in electromagnetism, the Skyrmion and the Wess-Zumino-Witten model in quantum field theory, and cosmic strings and domain walls in cosmology. In 1834, John Scott Russell described his wave of translation. In 1965 Norman Zabusky of Bell Labs and Martin Kruskal of Princeton University first demonstrated soliton behaviour in media subject to the Korteweg-de Vries equation (KdV equation) in a computational investigation using a finite difference approach. In 1967, Gardner, Greene, Kruskal and Miura discovered an inverse scattering transform enabling analytical solution of the KdV equation. The work of Peter Lax on Lax pairs and the Lax equation has since extended this to solution of many related soliton-generating systems.
Solitons in fiber optics
In 1973, Akira Hasegawa of AT&T Bell Labs was the first to suggest that solitons could exist in optical fibers, due to a balance between self-phase modulation and anomalous dispersion. He also proposed the idea of a soliton-based transmission system to increase performance of optical telecommunications. Solitons in a fiber optic system are described by the Manakov equations. In 1987, P. Emplit, J.P. Hamaide, F. Reynaud, C. Froehly and A. Barthelemy, from the Universities of Brussels and Limoges, made the first experimental observation of the propagation of a dark soliton in an optical fiber. In 1988, Linn Mollenauer and his team transmitted soliton pulses over 4,000 kilometers using a phenomenon called the Raman effect, named for the Indian scientist Sir C. V. Raman who first described it in the 1920s, to provide optical gain in the fiber. 
In 1991, a Bell Labs research team transmitted solitons error-free at 2.5 gigabits per second over more than 14,000 kilometers, using erbium optical fiber amplifiers (spliced-in segments of optical fiber containing the rare earth element erbium). Pump lasers, coupled to the optical amplifiers, activate the erbium, which energizes the light pulses. In 1998, Thierry Georges and his team at France Télécom R&D Center, combining optical solitons of different wavelengths (wavelength division multiplexing), demonstrated a data transmission of 1 terabit per second (1,000,000,000,000 units of information per second). In 2001, the practical use of solitons became a reality when Algety Telecom deployed submarine telecommunications equipment in Europe carrying real traffic using John Scott Russell's solitary wave. (The founders of Algety Telecom were a team of engineers at France Telecom's R&D Center (CNET) who had been working for nearly 10 years to develop soliton technology. In 2000, Corvis Corp signed an agreement to acquire Algety Telecom in an all-share transaction, but later closed the Algety subsidiary. Corvis was then purchased by Broadwing, and Broadwing was subsequently purchased by Level 3. The present status of using optical solitons for communication is not clear and more information is needed.)[citation needed] For some reason, it is possible to observe both positive and negative solitons in optical fibre. However, usually only positive solitons are observed for water waves. The bound state of two solitons is known as a bion.

See also
• Clapotis
• Freak waves may be a related phenomenon.
• Oscillons
• Q-ball, a non-topological soliton
• Soliton (topological)
• Soliton (optics)
• Soliton model of nerve impulse propagation
• Spatial soliton
• Solitary waves in discrete media [1]
• Topological quantum number

References
• N. J. Zabusky and M. D. Kruskal (1965). Interaction of "solitons" in a collisionless plasma and the recurrence of initial states. Phys. Rev. Lett. 15, 240.
• A. Hasegawa and F. Tappert (1973). Transmission of stationary nonlinear optical pulses in dispersive dielectric fibers. I. Anomalous dispersion. Appl. Phys. Lett. 23 (3), 142-144.
• P. Emplit, J. P. Hamaide, F. Reynaud, C. Froehly and A. Barthelemy (1987). Picosecond steps and dark pulses through nonlinear single mode fibers. Opt. Commun. 62, 374.
• P. G. Drazin and R. S. Johnson (1989). Solitons: an introduction. Cambridge University Press.
• N. Manton and P. Sutcliffe (2004). Topological solitons. Cambridge University Press.

This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Soliton". A list of authors is available in Wikipedia.
Phase diagram of hard-core bosons on clean and disordered 2-leg ladders: Mott insulator – Luttinger liquid – Bose glass

François Crépin, Laboratoire de Physique des Solides, Université Paris-Sud, UMR-8502 CNRS, 91405 Orsay, France
Nicolas Laflorencie, Laboratoire de Physique des Solides, Université Paris-Sud, UMR-8502 CNRS, 91405 Orsay, France
Guillaume Roux, Laboratoire de Physique Théorique et Modèles Statistiques, Université Paris-Sud, UMR-8626 CNRS, 91405 Orsay, France
Pascal Simon, Laboratoire de Physique des Solides, Université Paris-Sud, UMR-8502 CNRS, 91405 Orsay, France

May 10, 2022

One-dimensional free-fermions and hard-core bosons are often considered to be equivalent. Indeed, when restricted to nearest-neighbor hopping on a chain the particles cannot exchange positions, and therefore hardly experience their own statistics. Apart from the off-diagonal correlations, which depend on the so-called Jordan-Wigner string, real-space observables are similar for free-fermions and hard-core bosons on a chain. Interestingly, by coupling only two chains, thus forming a two-leg ladder, particle exchange becomes allowed, and leads to a totally different physics between free-fermions and hard-core bosons. Using a combination of analytical (strong coupling, field theory, renormalization group) and numerical (quantum Monte Carlo, density-matrix renormalization group) approaches, we study the apparently simple but non-trivial model of hard-core bosons hopping in a two-leg ladder geometry. At half-filling, while a band insulator appears for fermions at large interchain hopping only, a Mott gap opens up for bosons as soon as $t_{\perp}\neq 0$, through a Kosterlitz-Thouless transition. Away from half-filling, the situation is even more interesting since a gapless Luttinger liquid mode emerges in the symmetric sector with a non-trivial filling-dependent Luttinger parameter $K_s$.
Consequences for experiments in cold atoms, spin ladders in a magnetic field, as well as disorder effects are discussed. In particular, a quantum phase transition is expected at finite disorder strength between a 1D superfluid and an insulating Bose glass phase.

I Introduction

Low-dimensional bosonic systems have attracted an increasing interest during the last few decades Fisher89 ; Nozieres-book ; Greiner02 ; Leggett-book ; Bloch08 . The competition between various ingredients such as interactions, quantum fluctuations, geometrical constraints (low dimensionality/coordinance, frustration), possibly disorder, may lead to a large variety of interesting states of matter, as for instance the enigmatic supersolid phase of $^4$He under pressure Balibar10 . Perhaps the most extreme situation arises for one-dimensional (1D) systems, where interactions are known to induce dramatic effects Giamarchi-book ; Cazalilla11 . While 1D systems have long been considered as purely abstract objects reserved for theoretical studies, they have now become experimentally accessible. In particular, due to a very intense activity, several systems are now available in the lab to achieve 1D bosonic physics in various contexts: Josephson junction arrays Chow98 ; Fazio01 ; Refael07 , superfluid $^4$He in porous media Gordillo00 ; Savard09 ; Taniguchi10 ; DelMaestro10 ; DelMaestro11 , cold atoms Jaksch98 ; Greiner02 ; Paredes04 ; Roati08 ; Billy08 ; Chen11 , quantum antiferromagnets Chaboussant97 ; Mila98 ; Hikihara2001 ; Affleck05 ; Garlea07 ; Klanjsek08 ; Ruegg08 ; Giamarchi08 ; Hong10 ; Bouillot11 . Furthermore, several theoretical studies have recently shown that 1D physics is still (80 years after Bethe's seminal work Bethe31 ) a very active playground for exploring novel and exotic phases of matter Sheng08 ; Sheng09 ; Block11 ; Chen11 . As in most condensed matter problems, only a few competing terms dictate the physics of 1D bosonic systems.
For instance, when kinetic processes dominate over (repulsive) interactions, a delocalized superfluid (SF) phase is favored. Note that despite the absence of a true Bose-condensed phase in 1D systems Hohenberg67 , superfluidity is still possible at zero temperature since it only requires off-diagonal quasi long-range order, the resulting phase being a Luttinger liquid (LL). Conversely, when strong repulsion dominates, a Mott insulator (MI) is expected, either breaking or not lattice symmetries, depending on the commensurability of the particle filling.

Figure 1: (Color online) Schematic picture for the two-leg ladder system.

An interesting case in 1D is the hard-core boson limit (also known as the Tonks-Girardeau gas, as achieved in cold atom experiments Paredes04 ) which, on a lattice, corresponds to infinite on-site repulsion. In such a limit, and in the absence of density-density interactions between neighboring sites, an exact mapping between fermions and hard-core bosons Girardeau60 implies that bosons behave "almost" like free-fermions, at least regarding their real space properties, where a "real space Pauli principle" holds. However, when the lattice geometry allows for particle exchange, the bosonic statistics may play a major role, leading to qualitatively different phase diagrams. It is quite clear in two dimensions Fradkin89 ; Bernardet02 , but for quasi-1D geometries like ladders one may wonder whether a minimal model made of two coupled chains would allow the bosonic statistics to emerge and lead to qualitatively different physics as compared to free-fermions. In this paper, we aim at exploring one of the simplest quasi-1D models where free-fermions and hard-core bosons display qualitatively distinct phase diagrams. The model, defined on a two-leg ladder (depicted in Fig.
1), is governed by the following Hamiltonian:
$$H = -t\sum_{i,\ell}\left(b^{\dagger}_{i,\ell}b^{\phantom{\dagger}}_{i+1,\ell}+{\rm h.c.}\right) - t_{\perp}\sum_{i}\left(b^{\dagger}_{i,1}b^{\phantom{\dagger}}_{i,2}+{\rm h.c.}\right) - \mu\sum_{i,\ell}n_{i,\ell}, \quad (1)$$
where the sites along each leg $\ell=1,2$ are labeled by $i=1,\dots,L$, $L$ being the total length of each chain. The operator $b^{\dagger}_{i,\ell}$ creates a particle on site $(i,\ell)$, and $t$, $t_{\perp}$ are the longitudinal and transverse hopping integrals respectively. $n_{i,\ell}=b^{\dagger}_{i,\ell}b^{\phantom{\dagger}}_{i,\ell}$ is the onsite density operator and $\mu$ is the chemical potential. The filling of the ladder will be denoted by $\rho=N/(2L)$ in the following, with $N$ the total number of bosons. The main phases of (1), discussed in this work, are summarized on Fig. 2 in the cases of free-fermions (left) and hard-core bosons (right). For non-interacting fermions, the phase diagram is trivially obtained by filling two bands: the system is a simple metallic state at all fillings, except when the interchain hopping $t_{\perp}>2t$, where a band insulator appears at half-filling $\rho=1/2$. The situation is very different with hard-core bosons, for which there is no Pauli principle when filling the states in momentum space. Indeed, as demonstrated below, the interchain hopping is shown to be a relevant perturbation at half-filling, opening up a charge gap via a Kosterlitz-Thouless mechanism. Therefore, in contrast with free-fermions, the bosonic insulating state at half-filling, called rung-Mott insulator, extends down to $t_{\perp}\to 0$. For incommensurate fillings, the bosonic system is a single-mode LL (by contrast with the two-band or two-mode Luttinger liquid of free-fermions, see Fig. 2) with a finite superfluid (SF) fraction and, most interestingly, a strongly renormalized Luttinger parameter $K_s$ which varies continuously with the filling. In the rest of the paper, using a combination of analytical and numerical techniques, we discuss in detail all these non-trivial aspects. In section II, after recalling the similarities between free-fermions and hard-core bosons on a chain, we present some well-known exact results for free-fermions on a ladder.
In section III, the two coupled chains problem is studied for hard-core bosons in the two analytically accessible limits: strong and weak interchain couplings, using a perturbative approach, bosonization and renormalization group (RG) techniques. We also compare the physics of this model with the well-known case of quantum spin ladders. We then use exact numerical techniques, quantum Monte Carlo and density-matrix renormalization group, and confront them with analytical predictions in order to provide quantitative results for several quantities of importance: the bosonic densities (normal and SF), the charge gap, correlation functions, correlation lengths, and Luttinger parameters. In particular, section IV focuses on the half-filled insulating state while the LL behavior at incommensurate fillings is studied in great detail in section V, where the global phase diagram is shown. Finally, in section VI we discuss two important issues related to experiments in cold atoms and spin ladder materials, as well as the effect of disorder on this bosonic ladder. Section VII concludes the paper.

Figure 2: (Color online) Phase diagram of the two-leg ladder model Eq. (1) for free-fermions (left) and hard-core bosons (right). The metallic phase for fermions comprises either a one-band LL or a two-band LL, whereas the bosonic SF is a single-mode (symmetric, see the text) Luttinger liquid. For hard-core bosons, the rung-Mott insulating phase extends down to $t_{\perp}\to 0$.

II Free-fermions vs hard-core bosons

II.1 Quantum statistics in 1D systems

The concept of quantum statistics stems from the indistinguishability of identical particles in many-body quantum systems. If $\Psi(x_1,x_2)$ is the wave-function of a system made of two indistinguishable particles, the probability of finding one particle at position $x_1$ and the other at $x_2$ must be invariant under the exchange of particles, that is, $|\Psi(x_1,x_2)|^2=|\Psi(x_2,x_1)|^2$. Therefore $\Psi(x_1,x_2)$ and $\Psi(x_2,x_1)$ are equal, up to a phase factor $e^{i\varphi}$.
In three dimensions, it can be argued that a further exchange of particles is actually equivalent to the identity transformation, imposing that $\varphi$ be an integer multiple of $\pi$. Two situations arise: the wave-function is either symmetric or antisymmetric under the exchange of two particles, thus defining bosonic and fermionic statistics, respectively. Note that the present restriction does not hold in lower dimensions, where so-called fractional statistics are expected to exist. However, even though we will only be interested in 1D and quasi-1D systems, we will focus on bosons and fermions only. This real-space approach of quantum statistics extends to any Hilbert space in that a given many-body state – a Fock state – will either be totally symmetric or totally antisymmetric under the exchange of any two particles. As a consequence, fermions obey Pauli's exclusion principle, while several bosons can indeed occupy the same quantum state – actually a macroscopic number in the phenomenon of Bose-Einstein condensation. There is one situation though, where differences between bosons and fermions are less drastic. Indeed, a 1D gas of impenetrable bosons exhibits similarities with a 1D gas of spinless fermions. The hard-core constraint mimics the exclusion principle, while the 1D character ensures that the particles cannot move around each other. Wave-functions are identical up to a symmetrization factor enforcing the quantum statistics. Therefore, quantities depending only on the modulus of the wave-function, such as density correlations, are identical, whereas off-diagonal quantities, such as the momentum distribution, are affected by statistics. These various quantities – density and momentum distribution – are readily encoded in the single-particle density-matrix $g_1(x,x')=\langle\Psi^{\dagger}(x)\Psi(x')\rangle$, where $\Psi^{\dagger}$ and $\Psi$ are creation and annihilation operators.
II.1.1 When fermions and hard-core bosons look alike

In the continuum, if $\Psi_F(x_1,\dots,x_N)$ is a solution of the Schrödinger equation for a 1D gas of fermions, satisfying the constraint $\Psi_F=0$ whenever $x_i=x_j$ for all $i$ and $j$, then $\Psi_B=A\,\Psi_F$ is also a solution of the Schrödinger equation for a 1D gas of hard-core bosons, with $A(x_1,\dots,x_N)=\prod_{i<j}{\rm sgn}(x_i-x_j)$, an antisymmetric function which can only be equal to $\pm 1$. Girardeau60 It appears clearly that $|\Psi_B|=|\Psi_F|$, so that any quantity depending only on the modulus of the wave function is identical for fermions and hard-core bosons. As noticed by Girardeau, Girardeau60 ; Girardeau65 this one-to-one correspondence between fermionic and bosonic wave-functions is only possible in one dimension. The hard-core constraint divides the parameter space in disjoint regions, and $A$ changes sign discontinuously at the boundary of each region. In two and three dimensions, the parameter space remains connected and a function equivalent to $A$ cannot be defined. On a 1D lattice the correspondence is ensured through the Jordan-Wigner transformation Jordan1928 . Hard-core bosons on a lattice have mixed commutation relations: $[b_i,b_j^{\dagger}]=0$ for $i\neq j$, and $\{b_i,b_i^{\dagger}\}=1$. The Jordan-Wigner transformation maps this problem of hard-core bosons onto a problem of spinless fermions through $c_i=b_i\,e^{i\pi\sum_{j<i}n_j}$. The Jordan-Wigner string, although making the transformation non-local, ensures that $c_i$ and $c_i^{\dagger}$ satisfy anticommutation relations and are indeed fermionic operators. As in the continuum case, there is a one-to-one correspondence between the eigenstates of the Fermi Hamiltonian and the eigenstates of the Bose Hamiltonian.

II.1.2 When fermions and hard-core bosons look different

Although fermions and hard-core bosons look alike in real space, in terms of their local density, one expects their momentum distributions to be rather different from one another. Indeed the ground-state (GS) of a Fermi gas is the Fermi sea, in which all eigenstates – momentum states in a free gas – are filled up to the Fermi energy, whereas interacting bosons exhibit a peak in their momentum distribution around $k=0$.
Indeed for hard-core bosons at zero-temperature, the momentum distribution diverges as $n(k)\propto|k|^{-1/2}$ when $k\to 0$, Haldane81 as schematized in Fig. 3. This algebraic divergence is a result of the enhancement of quantum fluctuations in reduced dimensions, which forbid the existence of a true condensate.

Figure 3: (Color online) Schematic representation of the momentum distribution for 1D free-fermions (left) and hard-core bosons (right).

II.2 Free-fermions on a 2-leg ladder: exact solution

Figure 4: (Color online) Left: single particle dispersion bands $E_{\pm}(k)$ (black curves and red dashed) plotted (in units of $t$) for the two-leg ladder model Eq. (1), for two values of $t_{\perp}$ [(a) and (b)]. Right: Corresponding particle filling (per rung) plotted against the chemical potential $\mu$.

We now turn to the two-leg ladder model Eq. (1) in the case of fermionic particles. Single-particle eigenstates of this Hamiltonian defined on a ladder with $2L$ sites are plane waves built on the symmetric and antisymmetric combinations of the two legs, where the momentum $k$ is a multiple of $2\pi/L$. The associated eigenvalues read $E_{\pm}(k)=-2t\cos k\pm t_{\perp}$, which corresponds to the band dispersions depicted on Fig. 4. For fermions at zero-temperature, the average filling vs chemical potential is simply the sum of the fillings of each band. Two examples are given in Fig. 4. They correspond to two situations. First, if $t_{\perp}<2t$, the system has either four Fermi points (when both bands are partially filled) leading to a two-mode LL, or two Fermi points, leading to a single-mode LL equivalent to a chain. Cusps in the curve of Fig. 4 signal the change in the number of Fermi points. Secondly, when $t_{\perp}>2t$, the two bands are separated by a gap and, at half-filling $\rho=1/2$, the system is a band insulator. The corresponding GS has one particle on each rung in the symmetric state, with a total energy $-Lt_{\perp}$. Away from half-filling, the ladder is in a single-mode LL. The situation for hard-core bosons cannot be simply understood in terms of filling single-particle states.
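The free-fermion band picture above is easy to reproduce numerically. The following sketch (illustrative, not from the paper) evaluates the two bands on a momentum grid and checks that the band gap at half-filling, $2t_{\perp}-4t$, changes sign exactly at $t_{\perp}=2t$:

```python
import numpy as np

# Two-band dispersion of free fermions on the two-leg ladder:
# E_pm(k) = -2 t cos(k) -/+ t_perp for symmetric/antisymmetric rung states.
def bands(t, tp, nk=2001):
    k = np.linspace(-np.pi, np.pi, nk)
    return -2.0 * t * np.cos(k) - tp, -2.0 * t * np.cos(k) + tp

def band_gap(t, tp):
    lower, upper = bands(t, tp)
    # min of upper band minus max of lower band: (-2t + tp) - (2t - tp) = 2 tp - 4 t
    return upper.min() - lower.max()

t = 1.0
print(band_gap(t, 1.0))   # negative: overlapping bands, metallic at half-filling
print(band_gap(t, 3.0))   # positive: band insulator at half-filling
```

For $t_{\perp}<2t$ the bands overlap and the half-filled system is metallic; the insulator only appears beyond $t_{\perp}=2t$, in sharp contrast with the bosonic case discussed next.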
As for a single chain, a Jordan-Wigner transformation can be performed by choosing a particular path for the string along the lattice Azzouz1994 ; Dai1998 ; Hori2004 . However, the fermionic model thus obtained cannot be solved exactly, as it contains correlated-like hopping terms. Mean-field approximations yield reasonable but non-quantitative results, underlining the fact that the bosonic version of the model is non-trivial. In the following, we derive quantitatively exact results working in the bosonic language.

III Analytical results for hard-core bosons

III.1 Strong coupling limit

III.1.1 Gap at half-filling

In this section, we compute the gap at half-filling in the large-$t_{\perp}$ limit; the gap corresponds to the energy cost to add one extra hard-core boson on top of a half-filled ladder. Details of the calculation can be found in appendix A. Setting $t=0$, rungs decouple and four states are available on each of them: an empty state with energy $0$, two 1-particle states with energies $\mp t_{\perp}-\mu$, and a 2-particle state with energy $-2\mu$. These energies are plotted against the chemical potential in Fig. 5. The average filling of a rung is 0 as long as $\mu<-t_{\perp}$, 1 for $-t_{\perp}<\mu<t_{\perp}$, and 2 as soon as $\mu>t_{\perp}$. We start from a half-filled ladder of hard-core bosons – $L$ rungs and $N=L$ bosons – and treat the leg hopping as a perturbation of the rung Hamiltonian. The GS of the rung Hamiltonian is constructed by putting one particle on each rung in the symmetric state. The leg hopping creates particle-hole excitations on neighboring rungs and induces a second-order correction to the GS energy. The first excited states with $L+1$ particles are $L$ times degenerate and best written in the momentum representation. The leg hopping lifts the degeneracy: indeed, it moves the extra particle along the ladder with both nearest (first order perturbation) and next nearest (second order perturbation) hopping and creates particle-hole excitations on nearest-neighboring rungs. This yields the corrected energies of the $(L+1)$-particle states. The situation and calculation are similar when doping the half-filled ladder with holes.
The size of the plateau at half-filling then follows from these corrected energies. The gap is larger in the case of hard-core bosons than it is in the case of spinless fermions. To conclude this section, it is instructive to look at the perturbative approach in the fermionic case, for which we know that the gap is exactly $2t_{\perp}-4t$, without a second order correction. This is due to the fact that the half-filled GS and the $(L+1)$-particle states are exact eigenstates of the full Hamiltonian. The leg hopping does not create particle-hole excitations, since the corresponding matrix elements cancel for fermions. On the contrary, in the bosonic case, they do not. The diagonal terms vanish because of the hard-core constraint, while the off-diagonal terms are equal because of the commutation of the bosonic operators (they vanish for fermions!). This difference illustrates how the simplest quasi-1D system is indeed sensitive to quantum statistics.

Figure 5: (Color online) Energies of the four available states on a given rung, in the limit $t=0$.

III.1.2 Effective model for incommensurate fillings

We can analyze the strong rung coupling limit in the same spirit as for Heisenberg spin ladders in a magnetic field Mila98 ; Totsuka98 . According to Fig. 5, at a critical value of the chemical potential $\mu_c=t_{\perp}$, the 1-particle and 2-particle levels become degenerate, with an energy $-2t_{\perp}$. Doping the plateau with particles can be studied through degenerate perturbation theory on the Hamiltonian with respect to the GS Hamiltonian. Let us call $P$ the projector on the GS subspace, in which the symmetric 1-particle state and the 2-particle state are the only states allowed on a given rung. We call $Q=1-P$ the projector on the complementary subspace. The effective Hamiltonian to second order in perturbation theory is obtained by standard methods, Messiah with $E_0$ the eigenvalue of the degenerate subspace under consideration. Here, given the degenerate GS and the form of the perturbation, a virtual excited state is a state with an empty rung, which fixes the energy denominators in everything that follows. Again, a more detailed calculation is given in appendix A.
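The strong-coupling picture of the gap can be cross-checked by brute-force exact diagonalization of Eq. (1) (with $\mu=0$) on a small open ladder. This is only an illustrative sketch (the system size is tiny compared to the QMC/DMRG calculations of the paper, and open-boundary effects are not controlled), but it reproduces the decoupled-rung value $\Delta=2t_{\perp}$ at $t=0$ and its reduction by the leg hopping:

```python
import numpy as np
from itertools import combinations

# Hard-core bosons on an open 2-leg ladder with L rungs (2L sites, site = leg*L + rung).
# Charge gap Delta = E0(N+1) + E0(N-1) - 2 E0(N) at half-filling N = L.
def ground_energy(L, N, t, tp):
    """Lowest eigenvalue in the N-particle sector."""
    sites = 2 * L
    bonds = [(l * L + i, l * L + i + 1, t) for l in (0, 1) for i in range(L - 1)]
    bonds += [(i, L + i, tp) for i in range(L)]          # rung bonds
    states = list(combinations(range(sites), N))
    index = {s: a for a, s in enumerate(states)}
    H = np.zeros((len(states), len(states)))
    for a, occ in enumerate(states):
        occ_set = set(occ)
        for (i, j, amp) in bonds:
            if (i in occ_set) != (j in occ_set):          # one occupied, one empty
                new = tuple(sorted(occ_set ^ {i, j}))
                H[index[new], a] -= amp                   # bosonic hopping: no sign
    return np.linalg.eigvalsh(H)[0]

def gap(L, t, tp):
    N = L                                                 # half-filling
    return (ground_energy(L, N + 1, t, tp) + ground_energy(L, N - 1, t, tp)
            - 2 * ground_energy(L, N, t, tp))

print(gap(4, 0.0, 1.0))   # decoupled rungs: exactly 2 t_perp
print(gap(4, 0.2, 1.0))   # reduced once the leg hopping is switched on
```

Note that no fermionic sign appears in the hopping matrix elements; running the same loop with Jordan-Wigner signs would instead reproduce the free-fermion gap.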
We find the effective Hamiltonian Eq. (13), expressed in terms of spinless fermions. In this language, a spinless fermion stands for a doubly occupied rung, while an empty site indicates a rung with one particle in the symmetric state. Note that one recovers the expression of the gap to second order. One particle is added when the effective chemical potential equates the bottom of the energy band. On the other side of the plateau the exact same Hamiltonian is obtained, with spinless fermions now standing for empty rungs. As already noticed in the context of 2D supersolid phases of quantum antiferromagnets in a field Schmidt08 ; Picon08 ; Albuquerque10 or Heisenberg ladders in a field Bouillot11 , an important ingredient which emerges as a second order process in the above effective Hamiltonian Eq. (13) is the correlated (or assisted) hopping between second neighbors. For Bose-Hubbard chains in the limit of large but finite on-site repulsion, an effective model of spinless fermions similar to Eq. (13) was obtained Cazalilla03 . We discuss further the relationship with Bose-Hubbard chains and spin ladders in an external field below in sections V.3.1 and VI.3.

III.2 Weakly coupled chains: bosonization

In this section, we develop a low energy approach in order to analyze the behavior of two weakly coupled chains of interacting bosons at half filling. Our starting point is the Hamiltonian $H=H_0+H_{\perp}$ [Eq. (14)], where $H_0$ describes two uncoupled chains of bosons interacting via an onsite repulsion $U$, such that Eq. (1) is recovered for $U\to\infty$, and $H_{\perp}$ couples the two chains. The low-energy excitations of a single chain of interacting bosons are collective excitations corresponding to the sound modes in the quasi-condensate. A suitable approach to describe them is bosonization Giamarchi-book . Within this representation, we can write a bosonic creation operator ($\psi^{\dagger}(x)$ being the continuum limit version of the lattice boson creation operator $b^{\dagger}_i$) as
$$\psi^{\dagger}(x)=\left[\rho_0-\frac{1}{\pi}\nabla\phi(x)\right]^{1/2}\sum_{m}e^{2im\left[\pi\rho_0 x-\phi(x)\right]}e^{-i\theta(x)}, \quad (15)$$
where $m$ is an integer and $\rho_0$ is the boson density.
$\phi$ and $\theta$ are two dual bosonic fields obeying the commutation relation $[\phi(x),\nabla\theta(x')]=i\pi\delta(x-x')$. In this language, $\phi$ accounts for the long wavelength density fluctuations, while $\theta$ can be regarded as the superfluid phase. Using Eq. (15) and taking the continuum limit, we can rewrite $H_0$ in Eq. (14) in quadratic form, Giamarchi-book where we have introduced the pair of bosonic fields $\phi_{\ell}$ and $\theta_{\ell}$ living on chain $\ell$. The low-energy physics of each chain is characterized by the same two parameters, the sound velocity $u$ and the Luttinger parameter $K$ which encodes interactions in the system. For interacting bosons with on-site repulsion, $K\geq 1$. With nearest-neighbor interactions it is possible to reach $K<1$, thus making a connection with XXZ spin chains. Using Eq. (15) we can rewrite $H_{\perp}$ in the bosonized language. In order to write this equation, we restricted the summation in Eq. (15) to the values of $m$ which provide the leading terms in the perturbation. Half-filling corresponds to $\rho_0=1/(2a)$ with $a$ the lattice spacing. Introducing symmetric and antisymmetric fields defined as $\phi_{s,a}=(\phi_1\pm\phi_2)/\sqrt{2}$ and $\theta_{s,a}=(\theta_1\pm\theta_2)/\sqrt{2}$, and retaining only the non-oscillatory terms, we can rewrite $H_{\perp}$ as Eq. (19). We see that the coupling between the two chains is quite complex in the bosonized representation. To study this kind of Hamiltonian, a strategy consists in deriving RG equations for the various coupling constants. To do so, we first write the Euclidean action associated to Eq. (19) in a general form [Eq. (20)], where we have introduced four dimensionless coupling constants, the last of which is generated under the RG flow as we will see. These coupling constants have bare values which are directly extracted from the Hamiltonian in Eq. (19). We write RG equations for the six coupling constants using standard techniques Giamarchi-book . The running short distance cutoff is parametrized as $a(l)=ae^{l}$. Since the bare values of the Luttinger parameters are larger than or equal to one, the interchain tunneling coupling is strongly relevant compared to the other couplings and is therefore driven to strong coupling. This implies that the antisymmetric sector becomes gapped.
The superfluid phase fields of both chains lock together as soon as interchain tunneling is switched on. Next we introduce the scale $l^*$ at which this coupling reaches strong coupling, which equivalently defines an energy scale. For $l<l^*$, one can use the full set of RG equations written in Eq. (III.2). For $l>l^*$, one can simplify the action (20) by replacing the field $\theta_a$ by its average value (a similar procedure is described in Ref. Kim99, for coupled spin chains) and thus obtain an effective action valid at lower energy scales. The antisymmetric phase field being gapped, we focus on the symmetric sector, whose effective action now takes the simpler form of a sine-Gordon model. The RG equations associated to the Luttinger parameter $K_s$ and the cosine coupling $g_s$ are obtained by further integrating high-energy degrees of freedom beyond $l^*$, leading to the standard Kosterlitz-Thouless flow equations Giamarchi-book . The coupling $g_s$ flows to strong coupling when $K_s$ is smaller than one. Since $K_s$ always decreases during the first stage of the RG transformation, $g_s$ is driven to strong coupling and, ultimately, a gap opens up in the symmetric sector, at a very low energy scale. We stress that the gap opening is non trivial from the RG point of view: first, a gap opens in the antisymmetric sector (pinning of the field $\theta_a$), which eventually induces the opening of the gap in the symmetric sector (pinning of the field $\phi_s$). The scaling of the gap with $t_{\perp}$, obtained from the numerical solution of the RG flow, is shown in Fig. 6. It strongly depends on the initial value of $K$. For hard-core bosons – $K=1$ – the charge gap grows exponentially slowly with $t_{\perp}$ [Eq. (25)]. Indeed, during the first step of the flow, $g_s$ is irrelevant and renormalized downwards to a small value, while $K_s$ decreases to a value smaller but close to 1. At the start of the second step, $g_s$ has become a relevant perturbation but is still very close to marginality.
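The role of marginality can be illustrated by integrating the universal Kosterlitz-Thouless flow in reduced variables $x\propto 1-K_s$ and $y\propto g_s$ (a generic textbook form with simplified conventions; this is a sketch, not the paper's full six-coupling flow):

```python
import numpy as np

# Universal Kosterlitz-Thouless flow in reduced variables:
#   dx/dl = y^2,   dy/dl = x*y,
# with x = 2(1 - K) and y ~ g. The gap scale is set by the RG "time" l*
# at which y reaches strong coupling, gap ~ Lambda * exp(-l*).
def flow_scale(x0, y0, dl=1e-3, y_strong=1.0, l_max=1e4):
    x, y, l = x0, y0, 0.0
    while y < y_strong and l < l_max:
        x, y = x + dl * y * y, y + dl * x * y   # explicit Euler step
        l += dl
    return l

# Starting closer to marginality (x0 -> 0+), strong coupling is reached
# much later, i.e. the induced gap is exponentially smaller.
l_far = flow_scale(x0=0.5, y0=0.1)
l_near = flow_scale(x0=0.05, y0=0.1)
print(l_far, l_near)
```

This is why the hard-core boson case ($K$ close to 1, i.e. small $x_0$) opens its gap so slowly, while adding repulsion (larger $x_0$) leads to the much faster, power-law opening discussed next.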
On the contrary we could imagine a situation with $K$ well below 1 – for instance, adding nearest neighbor repulsive interactions in (14) – where $g_s$ would be much farther away from marginality at the start of the second step, therefore leading to a much faster – power-law – opening of the gap, Eq. (26) (see Fig. 6). Finally, this whole RG analysis shows that a gap in the symmetric sector opens up as soon as transverse hopping is switched on. This result is drastically different from the fermionic case, where a band gap appears only at the finite value $t_{\perp}=2t$.

Figure 6: (Color online) Charge gap obtained from the numerical integration of the two-step RG flow. The scaling of the gap depends strongly on the initial value of $K$. The exponent we find for the power-law, Eq. (26), is in good agreement with the DMRG result of equation (40).

III.3 Comparison with spin-1/2 ladders

A two-leg ladder of hard-core bosons is equivalent to a spin-1/2 two-leg ladder. Indeed, spins-1/2 satisfy mixed commutation relations when expressed in terms of the raising and lowering operators $S^{\pm}$. A hard-core boson on a given site corresponds to a magnetic moment with $S^z=+1/2$, while an empty site corresponds to the opposite with $S^z=-1/2$. Consider the model of hard-core bosons we have studied and add nearest-neighbour repulsion along and between the chains. It maps to a spin-1/2 ladder [Eq. (27)]. For vanishing repulsion it reduces to the model Eq. (1), with exchange couplings set by $t$ and $t_{\perp}$. The strong coupling analysis Barnes93 ; Mila98 ; Totsuka98 leads to a similar picture: singlets form on each rung and a finite spin gap (a zero-magnetization plateau) extends around zero magnetic field. A weak-coupling analysis leads to the same conclusion. However, the mechanism for the formation of the plateau is different than in our study of hard-core bosons because of the exchange coupling. We recall below the bosonized form of the Hamiltonian (27).
In a similar way to the physics described in section III.2, the dynamics decouple into two modes, symmetric and antisymmetric, such that one can write the Hamiltonian as a sum of the two sectors. Strong92 Here, $K$ is the original Luttinger parameter of the single spin chain and depends on the anisotropy parameter $\Delta$. For instance $K=1$ for $\Delta=0$ (hard-core bosons) and $K=1/2$ for $\Delta=1$ (Heisenberg chain). Note that the exchange along the chains should also give rise to an umklapp term in the symmetric sector. However, in this regime it is always less relevant than the other interaction terms and will not open any gap. When the transverse Ising exchange vanishes one should also include the term which we studied in the previous section. For a non-zero transverse Ising exchange, it is anyway less relevant and the gap in the symmetric sector opens directly because of the exchange along the rungs. Indeed, the corresponding operator is relevant for all the couplings considered here, therefore ordering the field $\phi_s$. The scaling of the gap is known from the sine-Gordon model and depends on the initial conditions for the coupling and for $K_s$. Away from marginality, the charge gap scales as a power-law. For instance, at the isotropic Heisenberg point, $K=1/2$ and the gap opens linearly with the interchain exchange. Since the antisymmetric sector is always gapped, we expect the familiar picture of "rung-singlet" known for Heisenberg ladders Dagotto96 to hold here for the generic U(1) symmetric case, for which the gapped state is called a Rung-Mott-Insulator (RMI). However, the correlation length of the symmetric mode is much larger than for Heisenberg ladders. Therefore, the usual short-range resonating singlets picture, very useful for spin ladders White94 ; Dagotto96 , has to be taken more carefully here, especially in the limit of small $t_{\perp}$.

IV Rung-Mott Insulator: Numerical results

Coming back to the original problem of hard-core bosons, we present in this section quantum Monte Carlo Sandvik91 ; Syljuaasen02 ; Evertz03 (QMC) and density-matrix renormalization group White1992 ; White1993 ; Schollwock2005 (DMRG) results at half-filling.
The Rung-Mott insulating state (RMI) will be studied in great detail using these two techniques.

IV.1 Density plateau at half-filling

IV.1.1 QMC results

We use the QMC stochastic series expansion (SSE) algorithm Sandvik91 , in its directed loop framework Sandvik99 ; Syljuaasen02 . This method has been used quite intensively for the last decade. For our purpose here, the implementation of the algorithm is rather straightforward since one can, for simplicity, work in the representation of the equivalent spin-1/2 XY model in a transverse field. Indeed, using the Matsubara-Matsuda mapping Matsubara56 , hard-core boson operators are replaced by spin-1/2 operators: $S^+_i=b^{\dagger}_i$, $S^-_i=b_i$ and $S^z_i=n_i-1/2$. This equivalence with spin-1/2 systems, already evoked in section III.3 to discuss the connection with XXZ spin ladders, will also be used later in section VI when discussing potential links of our results with experiments on magnetic field induced LL phases in spin ladders. Coming back to the bosonic language, we work in the grand-canonical ensemble where the total particle number (longitudinal magnetization) can fluctuate. We measure the following observables: (i) the density of particles $\rho$, (ii) the compressibility $\kappa$ [Eq. (34)] and (iii) the superfluid stiffness $\rho_{\rm SF}$. In these definitions, $\beta$ is the inverse temperature $1/T$, and $\varphi$ is a small twist angle enforced on all bonds in the direction Yang1961 ; Shastry90 along the legs. Technically, the superfluid stiffness Fisher73 is efficiently measured via the fluctuations of the winding number Pollock87 during the SSE simulation Sandvik97 .

Figure 7: (Color online) QMC results for the particle density (top), the SF density (middle), and the compressibility (bottom) obtained on ladders for two values of the interchain hopping (left and right panels), for two inverse temperatures $\beta$, as indicated on the plot. On the left panel, the compressibility data are obtained from either a direct measurement of $\kappa$ [full lines, Eq. (34)], or from a numerical derivative of the density [circles, Eq. (36)].
Note that for (left), the superfluid density data are also shown for and at . In Fig. 7, we show QMC results for the total particle density , the superfluid density (directly related to , as discussed below in Sec. IV.2), and the compressibility . Note that the latter can also be extracted from the numerical derivative . QMC data are shown for two representative values of the interchain hopping: and . As expected for large , a plateau in the density at half-filling is clearly visible for if , signaling the incompressible RMI state, where the superfluid density and the compressibility both vanish. For , the interpretation of finite-size numerical data is more difficult since the gap at half-filling turns out to be much smaller. Indeed, the density curve versus does not show any visible plateau (top left of Fig. 7). Nevertheless, the gap tends to show up in the compressibility, which displays a downward feature signaling the insulating state. The SF density also vanishes, as becomes visible when is increased from up to . Obviously, when gets smaller, it will be quite difficult to directly extract quantitative information from or , such as the value of the gap at half-filling. To do so, one would need to run QMC simulations in the GS, i.e. at temperatures well below the energy gap, and in the thermodynamic limit, i.e. for system lengths , where is the correlation length associated with the short-range order of the RMI. Of course this would be a difficult task, which we can fortunately circumvent. In a QMC framework, the most useful observable to characterize the insulating state, and to get a precise estimate of the correlation length , is the superfluid stiffness computed directly at half-filling, which is expected to vanish as . A direct estimate of the gap with SSE, while possible in principle, would require a much larger numerical effort.
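The numerical-derivative route to the compressibility mentioned above can be sketched in a few lines. The data below are synthetic (a smooth toy equation of state), not the paper's QMC output; only the general recipe κ = ∂n/∂μ by finite differences is taken from the text.

```python
import numpy as np

# Toy density curve n(mu); in the paper these points would come from
# grand-canonical SSE runs (density vs. chemical potential, as in Fig. 7).
mu = np.linspace(-1.0, 1.0, 201)
n = 0.5 + 0.3 * np.tanh(2.0 * mu)      # synthetic, smooth equation of state

# Compressibility kappa = dn/dmu via central finite differences
# (the "numerical derivative" route, cf. Eq. (36) quoted in the text).
kappa = np.gradient(n, mu)

# A plateau in n(mu) would show up as kappa -> 0; this toy curve has no
# plateau, so kappa instead peaks at mu = 0, where the slope is largest.
print(round(float(kappa.max()), 3))    # 0.6
```

On real plateau data one looks for the window of μ where κ vanishes within error bars, i.e. exactly the downward feature discussed above, rather than for a peak.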
Zero-temperature DMRG simulations are better suited to measuring tiny gaps, as we discuss now, before coming back to SSE estimates of the SF stiffness and the correlation length.

Figure 8: (Color online) Charge gap of the two-leg ladder model as a function of the interchain coupling obtained from DMRG calculations, compared with perturbation theory: Eq. (5) for free fermions and Eq. (9) for the strong rung-coupling expansion. The red curve is a fit to the expression Eq. (39) (see the text).

IV.1.2 Gap from DMRG

The gap is computed at zero temperature with DMRG using the definition , where is the GS energy with bosons, so that is directly the width of the plateau. In the DMRG calculation, we keep 1000 states per block and extrapolate the results over sizes ranging from 24 to 160 using the following ansatz for the finite-size scaling of the gap: . The result is reported in Fig. 8, in which the strong-coupling prediction of Sec. III.1.1 and the free-fermion prediction are also given for comparison. According to the bosonization prediction of Sec. III.2, we fit the opening for using the following law , with , a and as fitting parameters. The obtained values are respectively , and . The latter critical value is perfectly compatible with within our numerical precision. The opening of the gap is particularly slow (non-analytic in ), but it reaches a sizable magnitude for , thus providing a first clear quantitative difference with free fermions.

Figure 9: (Color online) Comparison of the opening of the gap in the XXZ ladder for three different situations. DMRG results (symbols) are compared to Eq. (39) for hard-core bosons, to Eq.
(40) for coupled XXZ chains, and to the linear behavior from Greven et al. Greven1996 in the SU(2) case.

Since it is well known that SU(2) spin-1/2 ladders have a gap at the isotropic point Barnes93 , and that this gap opens linearly Greven1996 with in the limit of weakly coupled chains, we investigate how interactions modify the opening of the gap in an intermediate situation. We introduce intra-chain nearest-neighbor density interactions, corresponding to in the XXZ language. From the Bethe-ansatz solution, the Luttinger parameter of the chains at zero interchain coupling is thus . We also recompute the SU(2) gap (with and , for which ) to check the numerical accuracy, and gather the results in Fig. 9. As expected from the RG calculation of Sec. III.2, the intermediate opening is best fitted by a power-law with an exponent of the order of three: , close to the exponent obtained from the RG calculation, and non-trivially related to the value of (see Fig. 6). The gap in the isotropic limit reproduces well the earlier findings of Ref. Greven1996 . The law governing the opening of the gap is thus very sensitive to interactions, with their expected tendency to boost the charge gap.

IV.2 Superfluid density

IV.2.1 QMC results

Figure 10: (Color online) SF density at half-filling (normalized by the decoupled-chains value ), plotted versus . QMC results obtained for ladders of size in the GS at inverse temperatures with .

As introduced by Fisher, Barber and Jasnow in Ref. Fisher73 , the superfluid stiffness (or helicity modulus) is defined by imposing twisted boundary conditions in one direction, such that the hard-core boson Hamiltonian Eq. (1) now reads . Such a twist angle mimics the effect of a phase gradient imposed on the bosonic wave function.
In a superfluid state, it would lead to a superflow with a gain in the kinetic energy density Fisher73 . Based on the winding-number fluctuations Pollock87 ; Sandvik97 , we compute, for various system sizes , the superfluid density at half-filling versus the interchain hopping . As already discussed, it is crucial to get zero-temperature estimates. This is ensured here by performing SSE simulations at for the values of considered in the following analysis. In Fig. 10, the SF density is plotted against for .

Figure 11: (Color online) (a) SF density plotted versus the linear size for various values of , as indicated below with different symbols. The linear-log plot clearly shows an exponential decay with . (b) Collapse of the above data obtained by rescaling the x-axis in order to get all points on a single curve. The full line is . The obtained correlation lengths are displayed in Fig. 12.

IV.2.2 Scaling analysis

We therefore use another strategy to extract the critical properties. As visible in Fig. 11(a) for various values of , the SF density decays exponentially at large , at least for not too small values of the coupling, , while it remains roughly constant for the sizes considered here when . We therefore identify two scaling regimes, depending on the scaling variable , where is the correlation length of the RMI state: . The data of Fig. 11(a) have been analyzed by rescaling the x-axis in order to collapse the data: using the best fit of the form , we have fixed for and adjusted all the other data to get the best collapse. Results are displayed in Fig. 11(b), where the two scaling regimes are clearly visible. With this technique, we have been able to extract the value of as a function of
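The exponential regime ρs(L) ∝ exp(−L/ξ) underlying the collapse can be fitted straightforwardly. This is a toy sketch on synthetic data (ξ = 12 put in by hand and then recovered), not the paper's QMC points:

```python
import numpy as np

# Synthetic stiffness data mimicking the large-L regime rho_s ~ A exp(-L/xi);
# xi_true = 12 is inserted by hand and recovered by the fit below.
xi_true, A = 12.0, 0.8
L = np.array([40.0, 60.0, 80.0, 100.0, 120.0, 160.0])
rho_s = A * np.exp(-L / xi_true)

# On a linear-log plot the decay is a straight line, so a linear fit of
# log(rho_s) versus L gives slope = -1/xi.
slope, intercept = np.polyfit(L, np.log(rho_s), 1)
xi = -1.0 / slope
print(round(xi, 6))   # 12.0
```

Collapsing curves for several couplings then amounts to rescaling L by the ξ fitted at each coupling, which is the procedure used for the data collapse described above.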
Aug 12, 2015

Today is the birthday (1887) of Erwin Rudolf Josef Alexander Schrödinger, a Nobel Prize-winning Austrian physicist who developed a number of fundamental ideas in the field of quantum theory, which formed the basis of wave mechanics. He formulated the basic wave equation (the stationary and time-dependent Schrödinger equation) and, more popularly, proposed an original interpretation of the physical meaning of the wave function, which led to his famous thought experiment "Schrödinger's Cat", which supposedly illustrates the absurdity of the Copenhagen interpretation of quantum mechanics. For those who know (and care) about the implications of this thought experiment I have to say that I've never seen the point of it. The Copenhagen interpretation states that the wave function of certain subatomic particles exists in two (or more) simultaneously contradictory states until they are observed, at which point the function "collapses" or resolves to one or the other. Erwin Schrödinger's thought experiment involved a closed box within which was a chamber containing a very small amount of radioactive material, a particle of which might or might not decay within a fixed span of time. The state of the particle would be measured by a Geiger counter which, if the particle had decayed, would trigger the release of cyanide gas. Also in the box was a cat. Schrödinger's point was that it was absurd to imagine that until the box was opened by an "observer" the decay state of the particle was unknown and therefore that the cat was simultaneously alive and dead. Here's the original:

—Erwin Schrödinger, Die gegenwärtige Situation in der Quantenmechanik (The present situation in quantum mechanics), Naturwissenschaften (translated by John D.
Trimmer in Proceedings of the American Philosophical Society)

Einstein had always been troubled by the idea that matter could simultaneously exist in two contradictory states and was so delighted by the thought experiment that he wrote:

You are the only contemporary physicist, besides Laue, who sees that one cannot get around the assumption of reality, if only one is honest. Most of them simply do not see what sort of risky game they are playing with reality—reality as something independent of what is experimentally established. Their interpretation is, however, refuted most elegantly by your system of radioactive atom + amplifier + charge of gunpowder + cat in a box, in which the psi-function of the system contains both the cat alive and blown to bits. Nobody really doubts that the presence or absence of the cat is something independent of the act of observation.

I don't know where he got the gunpowder from, but that's not the only mistake that he made. I've wondered for years why they thought that the wave function had to be observed by a human for it to collapse. Why isn't the cat an observer? Why not the Geiger counter? Anything in the macro world that interacts with the wave function is an observer. Oh dear, Erwin, you should have taken an anthropology class with me. I gather from recent reading that I am not the only person to have spotted the fallacy. Niels Bohr apparently made the same observation a long time ago. Oh well, it's not surprising; he's much smarter than I am. The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions (that is, matter in two states at the same time) of relatively large (by the standards of quantum physics) objects have been performed.
These experiments do not show that a cat-sized object can be superposed (both alive and dead), but the known upper limit on "cat states" has been pushed upwards by them. In many cases the state is short-lived, even when cooled to near absolute zero.

1. A "cat state" has been achieved with photons.
2. A beryllium ion has been trapped in a superposed state.
3. An experiment involving a superconducting quantum interference device ("SQUID") has been linked to the theme of the thought experiment: "The superposition state does not correspond to a billion electrons flowing one way and a billion others flowing the other way. Superconducting electrons move en masse. All the superconducting electrons in the SQUID flow both ways around the loop at once when they are in the Schrödinger's cat state."
4. A piezoelectric "tuning fork" has been constructed, which is both vibrating and still at the same time.

All this thinking makes me hungry but I don't think I can do much with cyanide and a dead cat. So I am left with Schrödinger's home of Vienna, which has already given me enough headaches. But . . . Vienna is well known for dishes made with a cheese called quark, which by silly coincidence is the name of an elementary sub-atomic particle. By an even sillier coincidence there are different types of quark particles which are referred to as "flavors." Quark the dairy product is made by warming soured milk until the desired degree of coagulation (denaturation, curdling) of milk proteins is met, and then straining it. It can be classified as fresh acid-set cheese, though in some countries it is traditionally considered a distinct fermented milk product. Traditional quark is made without rennet, but in some modern dairies rennet is added. It is soft, white and unaged, and usually has no salt added.
Last time I gave a Viennese recipe I included a link to this video on how to make apple strudel: Well, you can adapt it to make Viennese Topfenstrudel, using sweetened quark in place of apples. Problem solved.
The Dirac sea is a theoretical model of the void as an infinite sea of particles with negative energy. It was first postulated by the British physicist Paul Dirac in 1930[1] to explain the anomalous negative-energy quantum states predicted by the Dirac equation for relativistic electrons (electrons traveling near the speed of light).[2] The positron, the antimatter counterpart of the electron, was originally conceived of as a hole in the Dirac sea, before its experimental discovery in 1932.[nb 1]

Dirac sea for a massive particle: • particles, • antiparticles

In hole theory, the solutions with negative time evolution factors are reinterpreted as representing the positron, discovered by Carl Anderson. The interpretation of this result requires a Dirac sea, showing that the Dirac equation is not merely a combination of special relativity and quantum mechanics, but that it also implies that the number of particles cannot be conserved.[3] Dirac sea theory has been displaced by quantum field theory, though the two are mathematically compatible. Similar ideas on holes in crystals had been developed by the Soviet physicist Yakov Frenkel in 1926, but there is no indication that the concept was discussed with Dirac when the two met at a Soviet physics congress in the summer of 1928. The origins of the Dirac sea lie in the energy spectrum of the Dirac equation, an extension of the Schrödinger equation consistent with special relativity, which Dirac had formulated in 1928. Although this equation was extremely successful in describing electron dynamics, it possesses a rather peculiar feature: for each quantum state possessing a positive energy E, there is a corresponding state with energy −E. This is not a big difficulty when an isolated electron is considered, because its energy is conserved and negative-energy electrons may be left out.
However, difficulties arise when effects of the electromagnetic field are considered, because a positive-energy electron would be able to shed energy by continuously emitting photons, a process that could continue without limit as the electron descends into ever lower energy states. However, real electrons clearly do not behave in this way. Dirac's solution to this was to rely on the Pauli exclusion principle. Electrons are fermions, and obey the exclusion principle, which means that no two electrons can share a single quantum state. Dirac hypothesized that what we think of as the "vacuum" is actually the state in which all the negative-energy states are filled, and none of the positive-energy states. Therefore, if we want to introduce a single electron we would have to put it in a positive-energy state, as all the negative-energy states are occupied. Furthermore, even if the electron loses energy by emitting photons it would be forbidden from dropping below zero energy. Dirac further pointed out that a situation might exist in which all the negative-energy states are occupied except one. This "hole" in the sea of negative-energy electrons would respond to electric fields as though it were a positively charged particle. Initially, Dirac identified this hole as a proton. However, Robert Oppenheimer pointed out that an electron and its hole would be able to annihilate each other, releasing energy on the order of the electron's rest energy in the form of energetic photons; if holes were protons, stable atoms would not exist.[4] Hermann Weyl also noted that a hole should act as though it has the same mass as an electron, whereas the proton is about two thousand times heavier. The issue was finally resolved in 1932, when the positron was discovered by Carl Anderson, with all the physical properties predicted for the Dirac hole.

Inelegance of the Dirac sea

Despite its success, the idea of the Dirac sea tends not to strike people as very elegant.
The existence of the sea implies an infinite negative electric charge filling all of space. In order to make any sense out of this, one must assume that the "bare vacuum" must have an infinite positive charge density which is exactly cancelled by the Dirac sea. Since the absolute energy density is unobservable—the cosmological constant aside—the infinite energy density of the vacuum does not represent a problem. Only changes in the energy density are observable. Geoffrey Landis (author of "Ripples in the Dirac Sea", a hard science fiction short story) also notes that Pauli exclusion does not definitively mean that a filled Dirac sea cannot accept more electrons, since, as Hilbert elucidated, a sea of infinite extent can accept new particles even if it is filled. This happens when there is a chiral anomaly and a gauge instanton. The development of quantum field theory (QFT) in the 1930s made it possible to reformulate the Dirac equation in a way that treats the positron as a "real" particle rather than the absence of a particle, and makes the vacuum the state in which no particles exist instead of an infinite sea of particles. This picture is much more convincing, especially since it recaptures all the valid predictions of the Dirac sea, such as electron–positron annihilation. On the other hand, the field formulation does not eliminate all the difficulties raised by the Dirac sea; in particular, the problem of the vacuum possessing infinite energy.

Mathematical expression

Upon solving the free Dirac equation, one finds[5] for plane-wave solutions with 3-momentum p. This is a direct consequence of the relativistic energy–momentum relation upon which the Dirac equation is built. The quantity U is a constant 2 × 1 column vector and N is a normalization constant.
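The energy–momentum relation referred to here fixes the two branches of the spectrum explicitly. In standard notation it reads:

```latex
E^{2} = |\mathbf{p}|^{2} c^{2} + m^{2} c^{4}
\quad\Longrightarrow\quad
E_{\pm}(\mathbf{p}) = \pm\sqrt{|\mathbf{p}|^{2} c^{2} + m^{2} c^{4}},
\qquad |E_{\pm}(\mathbf{p})| \geq m c^{2},
```

so every positive-energy solution above +mc² is mirrored by one below −mc², with a forbidden gap of width 2mc² between the branches; the sign corresponds to the two possible values of the time evolution factor ε discussed next.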
The quantity ε is called the time evolution factor, and its interpretation in similar roles in, for example, the plane-wave solutions of the Schrödinger equation, is the energy of the wave (particle). This interpretation is not immediately available here, since it may acquire negative values. A similar situation prevails for the Klein–Gordon equation. In that case, the absolute value of ε can be interpreted as the energy of the wave, since in the canonical formalism, waves with negative ε actually have positive energy Ep.[6] But this is not the case with the Dirac equation. The energy in the canonical formalism associated with negative ε is −Ep.[7]

Modern interpretation

The Dirac sea interpretation and the modern QFT interpretation are related by what may be thought of as a very simple Bogoliubov transformation, an identification between the creation and annihilation operators of two different free field theories. In the modern interpretation, the field operator for a Dirac spinor is a sum of creation operators and annihilation operators, in a schematic notation: An operator with negative frequency lowers the energy of any state by an amount proportional to the frequency, while an operator with positive frequency raises it. In the modern interpretation, the positive-frequency operators add a positive-energy particle, adding to the energy, while the negative-frequency operators annihilate a positive-energy particle, lowering the energy. For a fermionic field, the creation operator gives zero when the state with momentum k is already filled, while the annihilation operator gives zero when the state with momentum k is empty. But then it is possible to reinterpret the annihilation operator as a creation operator for a negative-energy particle. It still lowers the energy of the vacuum, but in this point of view it does so by creating a negative-energy object. This reinterpretation only affects the philosophy.
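This reinterpretation can be made concrete for a single fermionic mode, where all the operators are 2 × 2 matrices. The sketch below uses standard second-quantization conventions (it is an illustration, not a formula taken from the article):

```python
import numpy as np

# One fermionic mode on the basis {|0>, |1>}: annihilation operator a.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])
adag = a.conj().T
I = np.eye(2)

# Canonical anticommutation relation {a, a^dag} = 1.
assert np.allclose(a @ adag + adag @ a, I)

# Reinterpret annihilation of a (negative-energy) particle as creation of a
# hole: b = a^dag. The hole number operator is then
#     b^dag b = a a^dag = 1 - a^dag a,
# i.e. hole number = 1 - particle number (the particle-hole relation).
b = adag
N_particle = adag @ a
N_hole = b.conj().T @ b
assert np.allclose(N_hole, I - N_particle)
print("N_hole = 1 - N_particle verified")
```

The constant 1 appearing in this swap is exactly what, summed over infinitely many modes, produces the infinite charge and energy offsets of the sea picture.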
To reproduce the rules for when annihilation in the vacuum gives zero, the notion of "empty" and "filled" must be reversed for the negative-energy states. Instead of being states with no antiparticle, these are states that are already filled with a negative-energy particle. The price is that there is a nonuniformity in certain expressions, because replacing annihilation with creation adds a constant to the negative-energy particle number. The number operator for a Fermi field[8] is: which means that if one replaces N by 1 − N for negative-energy states, there is a constant shift in quantities like the energy and the charge density, quantities that count the total number of particles. The infinite constant gives the Dirac sea an infinite energy and charge density. The vacuum charge density should be zero, since the vacuum is Lorentz invariant, but this is artificial to arrange in Dirac's picture. The way it is done is by passing to the modern interpretation. Dirac's idea is more directly applicable to solid-state physics, where the valence band in a solid can be regarded as a "sea" of electrons. Holes in this sea indeed occur, and are extremely important for understanding the effects of semiconductors, though they are never referred to as "positrons". Unlike in particle physics, there is an underlying positive charge—the charge of the ionic lattice—that cancels out the electric charge of the sea.

Revival in the theory of causal fermion systems

Dirac's original concept of a sea of particles was revived in the theory of causal fermion systems, a recent proposal for a unified physical theory.
In this approach, the problems of the infinite vacuum energy and infinite charge density of the Dirac sea disappear, because these divergences drop out of the physical equations formulated via the causal action principle.[9] These equations do not require a preexisting space-time, making it possible to realize the concept that space-time and all structures therein arise as a result of the collective interaction of the sea states with each other and with the additional particles and "holes" in the sea.

Notes

1. ^ This was not the original intent of Dirac though, as the title of his 1930 paper (A Theory of Electrons and Protons) indicates. But it soon afterwards became clear that the mass of holes must be that of the electron.

References

• Alvarez-Gaume, Luis; Vazquez-Mozo, Miguel A. (2005). "Introductory Lectures on Quantum Field Theory". CERN Yellow Report CERN. 1 (96): 2010–001. arXiv:hep-th/0510040.
• Dirac, P. A. M. (1930). "A Theory of Electrons and Protons". Proc. R. Soc. Lond. A. 126 (801): 360–365. Bibcode:1930RSPSA.126..360D. doi:10.1098/rspa.1930.0013. JSTOR 95359.
• Dirac, P. A. M. (1931). "Quantised Singularities in the Electromagnetic Field". Proc. Roy. Soc. A. 133 (821): 60–72. Bibcode:1931RSPSA.133...60D. doi:10.1098/rspa.1931.0130. JSTOR 95639.
• Finster, F. (2011). "A formulation of quantum field theory realizing a sea of interacting Dirac particles". Lett. Math. Phys. 97 (2): 165–183. arXiv:0911.2102. Bibcode:2011LMaPh..97..165F. doi:10.1007/s11005-011-0473-1. ISSN 0377-9017. S2CID 39764396.
• Greiner, W. (2000). Relativistic Quantum Mechanics. Wave Equations (3rd ed.). Springer Verlag. ISBN 978-3-5406-74573. (Chapter 12 is dedicated to hole theory.)
• Sattler, K. D. (2010). Handbook of Nanophysics: Principles and Methods. CRC Press. pp. 10–4. ISBN 978-1-4200-7540-3. Retrieved 2011-10-24.
Tuesday, February 12, 2019

Bohmian Rapsody

Visits to a Bohmian village

Over all of my physics life, I have been under the local influence of some Gaul villages that have ideas about physics that are not 100% aligned with the mainstream views: When I was a student in Hamburg, I was good friends with people working on algebraic quantum field theory. Of course there were opinions that they were the only people seriously working on QFT, as they were proving theorems while others dealt only with perturbative series that are known to diverge and are thus obviously worthless. Funnily enough, they were literally sitting above the HERA tunnel, where electron-proton collisions took place that were very well described by exactly those divergent series. Still, I learned a lot from these people and would say there are few who have thought more deeply about structural properties of quantum physics. These days, I use more and more of these things in my own teaching (in particular in our Mathematical Quantum Mechanics and Mathematical Statistical Physics classes, as well as when thinking about foundations, see below) and even some other physicists are starting to use their language. Later, as a PhD student at the Albert Einstein Institute in Potsdam, there was an accumulation point of people from the Loop Quantum Gravity community, with Thomas Thiemann and Renate Loll having long-term positions and many others frequently visiting. As you probably know, a bit later, I decided (together with Giuseppe Policastro) to look into this more deeply, resulting in a series of papers that were well received, at least amongst our peers, and about which I am still a bit proud. Now, I have been in Munich for over ten years. And here at the LMU math department there is a group calling themselves the Workgroup Mathematical Foundations of Physics. And let's be honest, I call them the Bohmians (and sometimes the Bohemians).
And once more, most people believe that the Bohmian interpretation of quantum mechanics is just a fringe approach that is not worth wasting any time on. You will have already guessed it: I did so nonetheless. So here is a condensed report of what I learned and what I think should be the official opinion on this approach. This is an informal write-up of a notes paper that I put on the arXiv today. Bohmians don't like about the usual (termed Copenhagen for lack of a better word) approach to quantum mechanics that you are not allowed to talk about so many things and that the observer plays such a prominent role by determining via a measurement what aspect is real and what is not. They think this is far too subjective. So rather, they want quantum mechanics to be about particles that are then allowed to follow trajectories. "But we know this is impossible!" I hear you cry. So, let's see how this works. The key observation is that the Schrödinger equation for a Hamilton operator of the form kinetic term (possibly with magnetic field) plus potential term has a conserved current $$j = \frac{1}{i}\left(\bar\psi\nabla\psi - (\nabla\bar\psi)\psi\right).$$ So as your probability density is $\rho=\bar\psi\psi$, you can think of that being made up of particles moving with a velocity field $$v = j/\rho = 2\Im(\nabla \psi/\psi).$$ What this buys you is that if you have a bunch of particles that is initially distributed like the probability density and follows the flow of the velocity field, it will also later be distributed like $|\psi |^2$. What is important is that they keep the Schrödinger equation intact. So everything that you can do with the original Schrödinger equation (i.e. everything) can be done in the Bohmian approach as well. If you set up your Hamiltonian to describe a double-slit experiment, the Bohmian particles will flow nicely to the screen and arrange themselves in interference fringes (as the probability density does).
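For a free Gaussian packet everything here is analytic, which allows a compact check of that equivariance property. The sketch below uses units ħ = m = 1 and the normalization v = ℑ(∂ₓψ/ψ); the overall prefactor of v depends on the convention chosen for j, so the factor differs from the one in the formula above:

```python
import numpy as np

# Minimal sketch (units hbar = m = 1): a Bohmian trajectory guided by a freely
# spreading Gaussian packet, for which psi is known in closed form.
# Guidance law used here: v(x, t) = Im( (dpsi/dx) / psi ).
sigma0 = 1.0

def velocity(x, t):
    # For psi ~ exp(-x^2 / (4 sigma0 s_t)) with s_t = sigma0 (1 + i tau),
    # tau = t / (2 sigma0^2), one finds
    # Im(psi'/psi) = x tau / (2 sigma0^2 (1 + tau^2)).
    tau = t / (2.0 * sigma0**2)
    return x * tau / (2.0 * sigma0**2 * (1.0 + tau**2))

# Integrate one trajectory with a plain Euler scheme.
x, t, dt = 1.0, 0.0, 1e-4
while t < 2.0:
    x += velocity(x, t) * dt
    t += dt

# Equivariance check: the exact Bohmian trajectory is
# x(t) = x(0) * sigma(t) / sigma0 with sigma(t) = sigma0 * sqrt(1 + tau^2).
tau = t / (2.0 * sigma0**2)
x_exact = 1.0 * np.sqrt(1.0 + tau**2)
print(abs(x - x_exact) < 1e-3)   # True
```

Each particle simply rides the spreading of the packet, so a sample of initial positions drawn from $|\psi_0|^2$ stays $|\psi_t|^2$-distributed for all times.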
So you will never come to a situation where any experimental outcome differs from what the Copenhagen prescription predicts. The price you have to pay, however, is that you end up with a very non-local theory: The velocity field lives in configuration space, so the velocity of every particle depends on the position of all other particles in the universe. I would say this is already a show stopper (given what we know about quantum field theory, whose raison d'être is locality), but let's ignore this aesthetic concern. What got me into this business was the attempt to understand how set-ups like Bell's inequality, GHZ and the like work out, which are supposed to show that quantum mechanics cannot be classical (technically, that the state space cannot be described as local probability densities). The problem with those is that they are often phrased in terms of spin degrees of freedom, which have Hamiltonians that are not directly of the form above. You can use a Stern-Gerlach-type apparatus to translate the spin degree of freedom to a positional one, but at the price of a Hamiltonian that is not explicitly known, let alone one for which you can analytically solve the Schrödinger equation. So you don't see much. But from Reinhard Werner and collaborators I learned how to set up qubit-like algebras from positional observables of free particles (at different times, so as to get something non-commuting, which you need to make use of entanglement as a specific quantum resource). So here is my favourite example: You start with two particles, each following a free time evolution but confined to an interval. You set those up in a particular entangled state (stationary, as it is an eigenstate of the Hamiltonian) built from the two lowest levels of the particle in the box. And then you observe for each particle if it is in the left or the right half of the interval.
From symmetry considerations (details in my paper) you can see that each particle is found on the left and on the right with equal probability. They are anti-correlated when measured at the same time, but when measured at different times, the correlation oscillates like the cosine of the time difference. From the Bohmian perspective, for the static initial state, the velocity field vanishes everywhere: nothing moves. But in order to capture the time-dependent correlations, as soon as one particle has been measured, the position of the second particle has to oscillate in the box (how the measurement works in detail is not specified in the Bohmian approach, since it involves other degrees of freedom and, remember, everything depends on everything; but somehow it has to work, since you want to reproduce the correlations that are predicted by the Copenhagen approach).

The trajectory of the second particle depending on its initial position

This is somehow the Bohmian version of the collapse of the wave function, but they would never phrase it that way. And here is where it becomes problematic: If you could see the Bohmian particle moving, you could decide whether the other particle has been measured (it would oscillate) or not (it would stand still), no matter where the other particle is located. With this observation you could build a telephone that transmits information instantaneously, something that should not exist. So you have to conclude that you must not be able to look at the second particle and see if it oscillates or not. Bohmians tell you you cannot, because all you are supposed to observe about the particles are their positions (and not their velocities). And if you try to measure the velocity by measuring the position at two instants in time, you can't, because the first observation disturbs the particle so much that it invalidates the original state.
As it turns out, you are not allowed to observe anything else about the particles than that they are distributed like $|\psi |^2$, because if you could, you could build a similar telephone (at least statistically), as I explain in the paper (this fact is known in the Bohmian literature, but I found it nowhere so clearly demonstrated as in this two-particle system). My conclusion is that the Bohmian approach adds something (the particle positions) to the wave function, but then in the end tells you that you are not allowed to observe this or have any knowledge of it beyond what is already encoded in the wave function. It's like making up an invisible friend.

PS: If you haven't seen "Bohemian Rhapsody" yet, you should, even if there are good reasons to criticise the dramatisation of real events.
Galilean transformations are said to have 10 degrees of freedom: four for translations in space and time, three for rotations, and three for the velocity of uniform motion (boosts). If I scale the space axis by $\alpha$ and do the same with the time axis, it seems that Newton's second law remains the same. So why don't we consider scaling (of time and space) another type of Galilean transformation?

• I agree that "scale" is a badly neglected degree of physical freedom. – Steve Jun 1 '19 at 23:50
• @Steve, it is not a degree of freedom in classical mechanics. – Akerai Jun 2 '19 at 0:18
• @Akerai, why exactly is that? Is it simply because no apparent practical means of changing the scale of physical things currently exists (and thus the theoretical possibility didn't need to be considered by the classical physicists)? – Steve Jun 2 '19 at 0:38
• @Steve It's because, as Akerai states in an answer here, the laws of classical mechanics are in fact generally not scale-invariant as OP claims. – Thatpotatoisaspy Jun 2 '19 at 3:10
• Interestingly enough, the Schrödinger equation is both invariant under Galilean transformations and dilations, even if it has a length scale (mass). It is even conformally invariant! cf. mathoverflow.net/a/270122/106114 – AccidentalFourierTransform Jun 2 '19 at 14:13

Imagine you take the transformation you mentioned above: $$x^i \rightarrow x'^i = \alpha x^i,\\ t \rightarrow t' = \alpha t,$$ where $\alpha \in \mathbb{R}$.
Then, assuming Newton's law holds in the new coordinates, it will be of the form $$F^i = m \frac{d^2x'^i}{dt' ^2} = m \frac{d^2 (\alpha x^i)}{ dt^2} \left(\frac{dt}{dt'} \right)^2.$$ As you can see, the derivative $dt/dt' = 1/ \alpha$, and therefore the equality becomes $$m \frac{d^2x'^i}{dt' ^2} = m \frac{d^2 (\alpha x^i)}{ dt^2}\frac{1}{\alpha^2} = \frac{m}{\alpha} \frac{d^2 x^i}{ dt^2}.$$ Therefore Newton's law is actually not invariant under this transformation as you claimed: the transformation maps an object of mass $m/\alpha$ to an object of mass $m$.

• Thanks, I made a mistake in my calculation. Thank you so much for clearing it up. – Shuheng Zheng Jun 2 '19 at 3:40
• This makes sense. If it were truly scale invariant, we probably wouldn't need units on accelerations, forces, etc. – Shuheng Zheng Jun 2 '19 at 3:41
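The non-invariance can also be checked numerically. The sketch below is my own illustration (not from the answer above; the cubic trajectory $x(t) = t^3$ is an arbitrary choice): it differentiates the same trajectory in the original and in the scaled coordinates and confirms that the acceleration picks up a factor $1/\alpha$, so $F = ma$ cannot hold in both frames with the same mass.

```python
import numpy as np

# Check that a = d^2x/dt^2 is NOT invariant under the simultaneous
# scaling x -> alpha*x, t -> alpha*t.  Trajectory: x(t) = t**3, a(t) = 6t.

alpha = 2.0
t = np.linspace(1.0, 2.0, 1001)
x = t**3

# acceleration in the original coordinates
a = np.gradient(np.gradient(x, t), t)

# scaled coordinates: x' = alpha*x sampled at t' = alpha*t
tp = alpha * t
xp = alpha * x
ap = np.gradient(np.gradient(xp, tp), tp)

# compare at an interior point (np.gradient is less accurate at the ends)
i = len(t) // 2                     # t[i] = 1.5, so a ≈ 9.0
print(a[i], ap[i])                  # ap = a / alpha, not a
assert np.isclose(ap[i], a[i] / alpha, rtol=1e-3)
```

In other words, the scaled trajectory describes a body with acceleration $a/\alpha$; to keep the force unchanged you would have to rescale the mass, which is exactly the answer's point.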
Eslam Khalaf (Harvard University)
The theory of Moiré materials, such as twisted bilayer graphene
Tuesday, 18 May 2021, 16:00 (online)

Erik van Loon and Tim Wehling (University of Bremen)
Random Phase Approximation for gapped systems: role of vertex corrections and applicability of the constrained random phase approximation
Friday, 16 April 2021, 16:30 (online)

Alexandre René (University of Ottawa)
Adapting deep learning to close the theory-experiment gap in neuroscience
Tuesday, 23 March 2021, 16:00 (online)

Michael Scherer (University of Cologne)
Flatten the band: Novel quantum materials with a twist
Wednesday, 17 February 2021, 15:30 (online)

Shibabrata Nandi (FZ Jülich)
Structure, magnetism and superconductivity in Fe-based superconductors
Wednesday, 27 January 2021, 17:00 (online)
In iron-based high-temperature superconductors, magnetic fluctuations and magneto-elastic effects are believed to be important for the superconducting electron pairing mechanism. To gain insight into the interplay between the different ordering phenomena and the underlying couplings, we studied the magnetic order and lattice distortion of AFe2As2 (A = Ca, Sr, Ba, Eu) single crystals by neutron and x-ray diffraction. High-resolution x-ray diffraction and neutron scattering measurements reveal an unusually strong response of the lattice and ordered magnetic moment to superconductivity in Co-doped BaFe2As2. We propose that the coupling between lattice and superconductivity is indirect and arises due to the magnetoelastic coupling, in the form of emergent nematic order, and the strong competition between magnetism and superconductivity. In contrast to the coexistence of superconductivity and antiferromagnetism in the Co-doped "122" sample, we show that superconductivity and ferromagnetism coexist in the P-doped Eu-based "122" Fe pnictides. Coexistence between these two antagonistic phenomena is puzzling but can be explained in terms of the formation of a spontaneous vortex state.

Idan Tamir (FU Berlin)
Low-dimensional superconductors, STM and beyond
Tuesday, 26 May 2020, 11:00 (MBP1 015)
In my talk I will discuss the sensitivity of the superconducting state in thin films and will present new spectroscopic results obtained from such films.

Peizhe Tang (MPI for the Structure and Dynamics of Matter, Hamburg)
Phase transitions and electronic tuning in magnetic topological materials
Tuesday, 18 February 2020, 11:00 (26C 401)
The interplay between magnetism and topology has brought rich physics to condensed matter physics in recent years. Many exotic phenomena have been observed in related systems, including the quantum anomalous Hall (QAH) effect, Weyl fermions, and antiferromagnetic (AFM) Dirac fermions. In this seminar, I will talk about three topics. The first is the magnetic phase transition driven by a topological phase transition in magnetically doped topological insulator thin films, such as Cr-doped Bi2(SexTe1-x)3 thin films [1]. In the second part, I will expand the notion of Dirac fermions to AFM systems, in which both time-reversal symmetry (T) and inversion symmetry (P) are broken but their combination PT is preserved [2]. The third is the QAH phase with large Chern number driven by an electric field in MnBi2Te4 thin films, whose 3D bulk state is reported to be an AFM TI and whose thick films are axion insulators [3]. Our results provide several possible platforms to study the interplay of topological physics and magnetism.
[1] Jingsong Zhang, Cuizu Chang, Peizhe Tang, et al., Science 339, 1582 (2013)
[2] Peizhe Tang, Quan Zhou, Gang Xu, Shou-Cheng Zhang, Nature Physics 12, 1100 (2016)
[3] Shiqiao Du, et al., arXiv:1909.01194 (2019)

Johannes Lischner (Imperial College, London)
Controlling the electronic structure of 2d materials by twisting and defects
Thursday, 13 February 2020, 15:00 (MBP1 026)

Nagamalleswara Rao Dasari (Universität Erlangen-Nürnberg)
Ultrafast dynamics of strongly correlated systems
Friday, 13 December 2019, 13:30 (26C 401)
Recent studies on correlated systems have taken a new direction with the availability of ultrashort laser pulses. Using these pulses, we can excite and probe the physical properties of quantum materials on their intrinsic time scales before the system returns to thermal equilibrium. Such experiments offer us an opportunity to explore hidden quantum states of matter and possible transient enhancement of collective orders in correlated systems. In the first part of my talk, I will discuss how to uncover the local interactions of Mott insulators, for example the Hubbard U and Hund's coupling J, using subcycle terahertz pulses. The second part of my talk will focus on controlling non-local fluctuations in low-dimensional systems using asymmetric light pulses.
[1] Nagamalleswararao Dasari, Jiajun Li, Philipp Werner and Martin Eckstein, "Revealing Hund's multiplets in Mott insulators under strong electric fields". arXiv:1907.00754
[2] Nagamalleswararao Dasari and Martin Eckstein, "Ultra-fast electric field controlled spin correlations in the Hubbard model". Phys. Rev. B 100, 121114(R) (2019)

Bassano Vacchini (University of Milan)
Master equations for the description of non-Markovian dynamics in open quantum system theory
Thursday, 14 November 2019, 10:00 - 12:00 (MBP2 116)
Open quantum system theory deals with the dynamics of non-isolated quantum systems. Their interaction with other quantum degrees of freedom, typically called the environment, is effectively taken into account, giving rise to effects not appearing in a unitary evolution. The dynamics of open quantum systems can in particular be non-Markovian, i.e. feature memory effects.
In recent years a large amount of research work has been devoted to defining and characterizing non-Markovian quantum dynamics. This tutorial lecture provides an introduction to this research field. The first part of the presentation will motivate and introduce the basic concepts in the description of Markovian open quantum system dynamics. We will introduce the notion of a completely positive quantum dynamical map for the evolution of a system affected by a quantum environment, and consider Lindblad master equations giving rise to quantum dynamical semigroups. An approach to the characterization of non-Markovian quantum dynamics based on the behavior of the trace distance as a quantifier of distinguishability between states will further be introduced. We will then consider the main projection operator techniques, which allow one to treat memory effects and lead to master equations in time-local or memory-kernel form. We will finally construct classes of memory kernels that can be linked to a collisional dynamics and provide well-defined, completely positive time evolutions.

Karsten Held (TU Vienna)
Spatio-temporal electronic correlations: From quantum criticality to pi-tons
Tuesday, 10 September 2019, 15:30 (MBP1 026)
Electronic correlations give rise to fascinating physical phenomena such as high-temperature superconductivity and (quantum) criticality, but their theoretical description remains a grand challenge. Dynamical mean field theory has been a big step forward: it accurately describes the local electronic correlations including their quantum, temporal dynamics. In recent years diagrammatic extensions of dynamical mean field theory, such as the dynamical vertex approximation, have been developed. These methods include not only the dynamics but also non-local correlations on all length scales [1].
After a brief introduction to these methods, I will present some recent highlights: the discovery of a new universality class of quantum critical exponents in the Hubbard model [2], the description of quantum criticality in the periodic Anderson model [3], and the discovery of new polaritons in strongly correlated electron systems, coined $\pi$-tons [4].
[1] G. Rohringer, H. Hafermann, A. Toschi, A. A. Katanin, A. E. Antipov, M. I. Katsnelson, A. I. Lichtenstein, A. N. Rubtsov, and K. Held, Rev. Mod. Phys. 90, 025003 (2018)
[2] T. Schäfer, A. A. Katanin, K. Held, and A. Toschi, Phys. Rev. Lett. 119, 046402 (2017)
[3] T. Schäfer, A. A. Katanin, M. Kitatani, A. Toschi, and K. Held, Phys. Rev. Lett. (2019), accepted [arXiv:1812.03821]
[4] A. Kauch, P. Pudleiner, K. Astleithner, T. Ribic, and K. Held [arXiv:1902.09342]

Takuya Okogawa (Technical University of Denmark)
Helical edge states coupled to localized spins
Wednesday, 11 September 2019, 10:00 (26C 401)
We study the electronic and transport properties of the helical edge state in a quantum spin Hall insulator coupled to an environment of localized spins: a spin bath. We calculate the density of states and the current of this system using the equilibrium and non-equilibrium Green's function formalism, respectively. Our result for the current correction agrees with the one derived from the Fermi golden rule. Furthermore, the calculation is also performed for the system with an additional external magnetic field.

Lara Ortmanns (TU Delft)
Magnons in Bilayers of van der Waals Materials
Thursday, 5 September 2019, from 13:00 (MBP1 026)
Van der Waals magnets are materials composed of 2D layers bound to each other through weak van der Waals interactions. We have calculated monolayer and bilayer dispersions for different types of exchange couplings and anisotropies.
We discuss energy gaps, cases of degeneracy, their origin and possible lifting, and explain how we derived analytic expressions for a bilayer with ferromagnetic intra- and antiferromagnetic interlayer coupling. We conclude with an outlook on possible further extensions of the project.

David Schlegel (University of Göttingen)
Time-periodic Structure in Open Quantum Systems
Motivated by the idea of quantum time crystals, in which systems exhibit time-translation symmetry breaking, we explore a novel approach to achieve time-periodic structure in open quantum systems instead of closed systems with many-body interactions. We employ the method of quantum trajectories to simulate the dynamics of the open quantum system as stochastic processes. Applying this method to a system of non-interacting fermions on a ring coupled to an environment modeling local measurements, we reveal interrupted time-periodic structure in individual quantum trajectories. In my talk about my master's project, I will outline the fundamental ideas behind quantum time crystals and the theory of open quantum systems with a focus on the quantum trajectory method, and show recent results for the considered open quantum system.

Orazio Scarlatella (Institut de Physique Théorique, CEA/Saclay, Paris)
Correlated driven-dissipative systems
Wednesday, 12 June 2019, 15:00 (MBP1 026)
Driven-dissipative systems represent natural platforms to study non-equilibrium phases. In the first part of the talk, I will present some physical results for which both non-equilibrium conditions and interactions are crucial. I will argue that a prototype model of correlated driven-dissipative lattice bosons, relevant for the upcoming generation of circuit QED array experiments, exhibits a phase transition where a finite-frequency mode becomes unstable, as an effect of quantum interactions and non-equilibrium conditions.
In the broken-symmetry phase the corresponding macroscopic order parameter becomes non-stationary and oscillates in time without damping, thus breaking continuous time-translational symmetry. To get some more insight into this transition, I studied the spectral properties of Markovian driven-dissipative quantum systems using a Lehmann representation. Focusing on the nonlinear quantum Van der Pol oscillator as a paradigmatic example, I showed that a sign constraint on spectral functions, which is mathematically exact for closed systems, gets relaxed for open systems; it is eventually replaced by an interplay between dissipation and interactions. In the last part of the talk, I will finally discuss a new method to solve quantum impurity models, small interacting quantum systems coupled to a non-Markovian environment, in the presence of additional Markovian dissipation. I will derive a Dyson equation for the time-evolution operator of the reduced density matrix and approximate its self-energy by resumming only non-crossing diagrams. I will test this approach on a simple problem of a fermionic impurity.

Ka Chun Chan (University of Freiburg)
Heat current and Seebeck effect through a single molecular junction
Tuesday, 29 January 2019, 16:00 (26C 401)

James Freericks (Georgetown University, Washington DC)
The Keldysh-ETH approach to quantum computing
Thursday, 29 November 2018, 11:00 (MBP2 116)
It is well known that thermal state preparation at low temperature is a challenge for current quantum computers. Yet, such an initial state is required for many different applications, including simulating Green's functions. Here, we propose an alternative that is based on the eigenstate thermalization hypothesis for equilibrium systems and on Keldysh's nonequilibrium formulation for driven dissipative systems. We show how each can be employed within more conventional algorithms to simulate strongly correlated condensed matter systems on quantum computers.
Luca Binci (University of Rome)
Ab-initio frequency-dependent Born effective charges
Wednesday, 24 October 2018, 10:30 (26C 401)
High-temperature superconductivity reached a new record with the discovery of H3S. In this material the highest critical temperature has been found, albeit at huge pressures. An interesting fact is that, in the normal phase, the ions of this system exhibit remarkably high effective charges. The effective charge is a well-known quantity in first-principles calculations. It describes the polarization induced by the collective displacements of nuclei belonging to a given sublattice. This quantity is well defined for insulating crystals; in metals, however, it needs a generalization to finite frequency since, for this kind of system, the static polarization is not defined. In this thesis work we plan to develop a method to calculate ab initio the Born effective charge tensor at finite frequencies. The final goal is to reproduce the infrared absorption spectrum of H3S.

Yasuhiro Takura (University of Tsukuba, Japan)
Excess entropy production in quantum systems
Tuesday, 18 September 2018, 16:00 (26C 401)

Ronald Starke (TU Bergakademie Freiberg)
Relativistic covariance of electrodynamics in media
Wednesday, 29 August 2018, 14:00 (26C 401)

Konstantin Nestmann (TU Dresden)
Time-convolutionless master equation: series expansions and convergence
Friday, 6 July 2018, 13:00 (MBP1 015)
The talk concerns the formally exact time-convolutionless master equation describing the dynamics of open quantum systems out of equilibrium. New series expansions for the master equation's generator are presented and compared to existing series expansions. One of the derived series is then used to describe the stationary states of a quantum dot model.

Konstantinos Ladovrechis (Institute of Theoretical Physics, IFW Dresden)
Anomalous Floquet topological crystalline insulators
Wednesday, 20 June 2018, 10:00 (MBP2 015)
Periodically driven systems can host so-called anomalous topological phases, in which protected boundary states coexist with topologically trivial Floquet bulk bands. An anomalous version of reflection-symmetry-protected topological crystalline insulators is introduced, obtained as a stack of weakly coupled two-dimensional layers. The system has tunable and robust surface Dirac cones even though the mirror Chern numbers of the Floquet bulk bands vanish. The protection of boundary modes is discussed by adapting the scattering theory of topological invariants to mirror-symmetry-protected topological phases.

Hernán Calvo (Instituto de Física Enrique Gaviola (CONICET) and FaMAF, Universidad Nacional de Córdoba, Argentina)
Quantum-dot based nanomotors with strong Coulomb interactions
Tuesday, 19 June 2018, 16:00 (26C 401)
In recent years there has been increasing excitement regarding nanoelectromechanical systems (NEMS) and particularly current-driven nanomotors [1]. Despite the broad variety of stimulating results found, the regime of strong Coulomb interactions has not been fully explored for this application. In this talk, we consider NEMS composed of a set of coupled quantum dots interacting with mechanical degrees of freedom taken in the adiabatic limit and weakly coupled to electronic reservoirs. A real-time diagrammatic approach [2] is used to derive general expressions for the current-induced forces, friction coefficients, and zero-frequency force noise in the Coulomb blockade regime of transport. We show that our expressions obey Onsager's reciprocity relations and the fluctuation-dissipation theorem for the energy dissipation of the mechanical modes [3]. The obtained results are illustrated with a nanomotor consisting of a double quantum dot capacitively coupled to rotating charges.
We analyze the dynamics and performance of the motor as a function of the applied voltage and loading force for trajectories encircling different triple points in the charge stability diagram.
[1] R. Bustos-Marún, G. Refael, and F. von Oppen, Phys. Rev. Lett. 111, 060802 (2013).
[3] H. L. Calvo, F. D. Ribetto, and R. A. Bustos-Marún, Phys. Rev. B 96, 165309 (2017).

Eugene Kogan (Department of Physics, Bar-Ilan University, Israel)
Spin-anisotropic magnetic impurity in a Fermi gas: Integration of poor man's scaling equations
Monday, 15 January 2018, 14:00 (26C 401)
We consider a single magnetic impurity described by the spin-anisotropic s-d(f) exchange (Kondo) model and formulate a scaling equation for the spin-anisotropic model when the density of states (DOS) of electrons is a power-law function of energy (measured relative to the Fermi energy). We solve this equation, containing terms up to second order in the coupling constants, in terms of elliptic functions. From the obtained solution we find the phases corresponding to the infinite isotropic antiferromagnetic Heisenberg exchange, to the impurity spin decoupled from the electron environment (only for the pseudogap DOS), and to the infinite Ising exchange (only for the diverging DOS). We analyze the critical surfaces corresponding to the finite isotropic antiferromagnetic Heisenberg exchange for the pseudogap DOS.

RKKY interaction in graphene
Tuesday, 16 January 2018, 16:00 (26C 401)
We consider the RKKY interaction between two magnetic impurities in graphene at finite temperature. The consideration is based on perturbation theory for the thermodynamic potential in the imaginary time representation. We analyze the symmetry of the RKKY interaction on the bipartite lattice at half filling. Our analytical calculation of the interaction is based on direct evaluation of the real-space spin susceptibility.

Thomas C. Lang (Universität Innsbruck)
Diagrams, world lines, auxiliary fields and pumpkin spice - a basic introduction into stochastic flavors for simulating quantum many body systems
Monday, 18 December 2017, 16:00 - 17:00 (26C 401); Wednesday, 20 December 2017, 16:00 - 17:00 (26C 401); Thursday, 21 December 2017, 15:00 - 16:00 (26C 401)

Karel Temmink (Institute for Theoretical Physics (ITFA) and Anton Pannekoek Institute for Astronomy (API), University of Amsterdam)
Tensor Network Methods for Open Quantum Systems
Wednesday, 20 December 2017, 10:00 (26C 401)
Presently, tensor network (TN) methods have firmly established themselves as reliable, efficient, and extremely powerful tools for quantum calculations. TNs have proven especially successful in regimes where the time evolution is unitary and/or the entropy obeys an area law, such as ground-state calculations in closed quantum systems. However, less has been accomplished for open quantum systems, where the time evolution generated by the Lindblad master equation is no longer unitary, and dissipation and Hamiltonian interactions compete. These systems, which are often relatively poorly understood analytically, are also notorious in computational physics, as they tend to cause all sorts of numerical issues, the most well-known being that simulated density operators often lose positivity and therefore cease to be physical. In my talk, I will introduce the general framework of TNs (matrix product states/operators) for ground-state calculations in closed quantum systems, show how they can be extended to non-equilibrium steady-state (NESS) calculations for open quantum systems, and end with an example calculation of the NESS of a dissipative XXX Heisenberg spin chain.

Ronald Starke (TU Freiberg)
Refractive index and dielectric tensor
Thursday, 5 October 2017, 10:00 (26C 401)
The standard ab initio calculation of the refractive index is based on its identification with the root of the scalar dielectric function, a treatment which cannot be generalized directly to the case of frequency- and wavevector-dependent dielectric tensors. We discuss this problem on a fundamental level starting from the microscopic electromagnetic wave equation in materials, which was recently developed within the Functional Approach to electrodynamics in media. In particular, we investigate under which conditions the standard treatment can be justified, and we then provide a more general method of calculating the frequency- and direction-dependent refractive indices by means of a (2 × 2) complex-valued "optical tensor". In principle, this method allows for the ab initio prediction of such diverse optical properties as birefringence and optical activity.

Ribhu Kaul (University of Kentucky)
A lecture on deconfined quantum criticality
Monday, 25 September 2017, 16:00 (26C 401)

Quantum phase transitions in two dimensional SU(N) and SO(N) magnets
Tuesday, 26 September 2017, 16:00 (26C 401)
I will discuss the phases and phase transitions in some simple SU(N) and SO(N) quantum spin models, studied both using ideas from quantum field theory and with large-scale numerical simulations. These models provide interesting examples where the emergence of gauge fields, both at critical points and in extended phases, can be studied in quantum spin systems.

Tommaso Roscilde (Laboratoire de Physique, Ecole Normale Supérieure de Lyon)
Quantum critical phenomena through the lens of quantum correlations
Friday, 7 July 2017, 11:00 (26C 402)
In quantum systems correlations can take forms which are impossible in classical mechanics.
The most famous, yet elusive, form of quantum correlation is represented by entanglement, a property well defined and investigated for pure states, and envisioned as a resource for nearly all technological tasks harnessing quantum many-body physics. In the real life of mixed states, on the other hand, incoherent fluctuations enter the game, making the distinction between quantum and classical correlations less sharp. Being able to discern the "quantumness" of correlations in mixed states, and to identify many-body regimes in which correlations have a pronounced quantum character, represents a formidable question of both fundamental and technological nature. In this seminar I will provide an overview of the theoretical importance of quantum correlations, starting from their very definition - to which we contributed recently with a statistical-physics approach allowing one to calculate them in generic systems, and potentially to measure them for a large class of quantum many-body systems relevant to experiments in AMO physics and solid-state physics. Furthermore I will discuss the centrality of quantum correlations in the phase diagram of quantum critical phenomena - using the transverse-field Ising model as a paradigmatic example, I will show that quantum correlations at finite temperature provide an unprecedented insight, of purely quantum nature, into the various phases and their mutual crossovers. In particular, the quantum critical enhancement of quantum correlations can be paired up with their metrological importance, opening the appealing perspective of "quantum critical metrology", which envisions a possible technological use of one of the pillars of modern quantum condensed matter.

Sudipto Singha Roy (Harish-Chandra Research Institute, Allahabad, India)
Doped resonating valence bond states: a quantum information study
Friday, 23 June 2017, 11:00 (26C 402)
Resonating valence bond (RVB) states have played a crucial role in the description of exotic phases in strongly correlated systems, especially in the realm of Mott insulators and the associated high-Tc superconducting phase transition. In particular, RVB states are considered to be an important system for studying the ground-state properties of the doped quantum spin-1/2 ladder. It is therefore interesting to understand how quantum correlations are distributed among the constituents of these composite systems. In this regard, we formulate an analytical recursive method to generate the wave function of doped short-range RVB states as a tool to efficiently estimate multisite entanglement as well as other physical quantities in doped quantum spin ladders. Importantly, our results show that within a specific doping concentration and model parameter regime, the doped RVB state essentially characterizes the trends of genuine multiparty entanglement in the exact ground states of a Hubbard model with large on-site interactions. Moreover, we consider an isotropic RVB network of spin-1/2 particles with a finite fraction of defects, where the corresponding wave function of the network is rotationally invariant under the action of local unitaries. By using quantum-information-theoretic concepts like strong subadditivity of the von Neumann entropy and approximate quantum telecloning, we prove analytically that in the presence of defects, caused by loss of a finite fraction of spins, the RVB network sustains genuine multisite entanglement, and at the same time may exhibit finite moderate-range bipartite entanglement, in contrast to the case with no defects.

Fabian Kugler (LMU München)
Multiloop functional renormalization group that sums up all parquet diagrams
Wednesday, 5 April 2017, 9:15 - 11:15 (26C 401)
We present a multiloop flow equation for the four-point vertex in the functional renormalization group (fRG) framework.
The multiloop flow consists of successive one-loop calculations and sums up all parquet diagrams to arbitrary order. This provides substantial improvement of fRG computations for the four-point vertex and, consequently, the self-energy. Using the X-ray-edge singularity as an example, we show that solving the multiloop fRG flow is equivalent to solving the (first-order) parquet equations and illustrate this with numerical results. Björn Sbierski (FU Berlin): Functional RG approach to spinless fermions in one dimension Dienstag, 21. Februar 2017, 16:00 - 17:00 Uhr (26C 401) Bruce Normand (Paul Scherrer Institut, Villingen, Schweiz): Gapless spin-liquid ground state in the S = 1/2 kagome antiferromagnet Freitag, 03. Februar 2017, 14:00 - 15:00 Uhr (26C 402) The defining problem in the field of frustrated quantum magnetism is the ground state of the nearest-neighbour S = 1/2 antiferromagnetic Heisenberg model on the kagome lattice. Despite the simplicity of the Hamiltonian, the solution has defied all theoretical and numerical methods employed to date. We apply the formalism of tensor-network states (TNS), specifically the method of projected entangled simplex states (PESS), whose combination of a correct accounting for multipartite entanglement and infinite system size provides qualitatively new insight. By studying the ground-state energy, the staggered magnetization we find at all finite tensor bond dimensions and the effects of a second-neighbour coupling, we demonstrate that the ground state is a gapless spin liquid. We discuss the comparison with other numerical studies and the physical interpretation of the gapless ground state. Hannes Pichler (ITAMP, Harvard University): The quantum stochastic Schrödinger equation with time delays: a MPS approach Montag, 19. Dezember 2016, 14:00 - 15:00 Uhr (26C 401) We study the dynamics of photonic quantum circuits consisting of nodes coupled by quantum channels. 
We are interested in the regime where the time delay in communication between the nodes is significant. This includes the problem of quantum feedback, where a quantum signal is fed back on a system with a time delay. We formulate the quantum stochastic Schrödinger equation for problems with time delays and develop a matrix product state approach to solve it, which accounts in an efficient way for the entanglement between the emitted photons in the waveguide, and thus the non-Markovian character of the dynamics. We illustrate this approach with two paradigmatic quantum optical examples: two coherently driven distant atoms coupled to a photonic waveguide with a time delay, and a driven atom coupled to its own output field with a time delay as an instance of a quantum feedback problem. Dante Kennes (Department of Physics, Columbia University): Entanglement scaling in many-body localized systems Freitag, 01. Juli 2016, 10:30 - 12:30 Uhr (MBP2 015) We study the properties of excited states in one-dimensional many-body localized (MBL) sys- tems using a matrix product state algorithm. First, the method is tested for a large disordered non-interacting system, where for comparison we compute a quasi-exact reference solution via a Monte Carlo sampling of the single-particle levels. Thereafter, we present extensive data obtained for large interacting systems of L ∼ 100 sites and large bond dimensions χ ∼ 1700, which allows us to quantitatively analyze the scaling behavior of the entanglement S in the system. The MBL phase is characterized by a logarithmic growth S(L) ∼ log(L) over a large scale separating the regimes where volume and area laws hold. We check the validity of the eigenstate thermalization hypothesis. Our results are consistent with the existence of a mobility edge. Leeor Kronik (Weizmann Institute of Science, Israel): Electronic structure from density functional theory: challenges and progress Donnerstag, 02. 
June 2016, 10:30 - 11:30 (MBP1 026)

Imke Schneider (Fachbereich Physik, TU Kaiserslautern): Spin-charge-separated quasi-particles in one-dimensional quantum fluids
Wednesday, 16 March 2016, 15:00 - 16:00 (MBP1 026)

One-dimensional quantum fluids are prominent examples of systems in which the Fermi-liquid paradigm of electron-like quasi-particles is known to break down. Instead, Luttinger liquid theory predicts a low-energy spectrum described by two decoupled free bosonic fields associated with collective spin and charge degrees of freedom, respectively. Here, we revisit the problem of dynamical response in these systems, arguing that, as a result of spectral nonlinearity, long-lived excitations are best understood in terms of generally strongly interacting fermionic holons and spinons. This has far-reaching ramifications for the construction of mobile impurity models used to determine threshold singularities in dynamical response functions. We formulate and solve the appropriate mobile impurity model describing the spinon threshold in the single-particle Green's function. Our formulation further raises the question whether it is possible to realize a model of noninteracting fermionic holons and spinons in microscopic lattice models of interacting spinful fermions. We investigate this issue in some detail by means of density matrix renormalization group (DMRG) computations.

Miguel Martín-Delgado (Theoretical Physics 1 Department, Universidad Complutense de Madrid): Modern Aspects of Quantum Physics and Topology
Thursday, 25 February 2016, 10:30 - 12:00 (26C 401)

In recent years, topological effects have found a variety of remarkable applications in quantum physics. A conceptual insight as to why topology plays a role in quantum physics is presented. This review includes basic explanations of how topology provides a solution for quantum information and computation.
New forms of quantum matter, such as topological insulators and superconductors, have appeared in condensed matter. They will be described in a broad context, with emphasis on the classification of topological orders as new forms of quantum entanglement, highlighting their similarities and differences. A glimpse into possible future developments will be given in the outlook.

Michael Thoss (Interdisziplinäres Zentrum für Molekulare Materialien (ICMM), Institut für Theoretische Physik, Friedrich-Alexander-Universität Erlangen-Nürnberg): Simulation of quantum dynamics and transport using multiconfiguration wave-function methods
Wednesday, 13 May 2015, 12:45 - 13:45 (MBP2 117)

The accurate theoretical treatment and simulation of quantum dynamical processes in many-body systems is a central goal in chemical and condensed matter physics. In this talk, the multilayer multiconfiguration time-dependent Hartree (ML-MCTDH) method [1] is discussed as an example of an approach that allows an accurate description of quantum dynamics and transport in systems with many degrees of freedom. The ML-MCTDH method is a variational basis-set approach, which uses a multiconfiguration expansion of the wave function employing a multilayer representation and time-dependent basis functions. It extends the original MCTDH method [2] to significantly larger and more complex systems. Employing the second-quantization representation of Fock space, the ML-MCTDH method can also be used to treat the dynamics of indistinguishable particles [3,4]. Illustrative applications of the methodology to models for charge transfer and transport are discussed, including electron transport in molecular junctions.

[1] H. Wang and M. Thoss, J. Chem. Phys. 119, 1289 (2003).
[2] H.-D. Meyer, U. Manthe, and L.S. Cederbaum, Chem. Phys. Lett. 165, 73 (1990); H.-D. Meyer, F. Gatti, and G.A. Worth (Eds.), Multidimensional Quantum Dynamics: MCTDH Theory and Applications, Wiley-VCH, Weinheim, 2009.
[3] H.
Wang and M. Thoss, J. Chem. Phys. 131, 024114 (2009).
[4] E. Wilner, H. Wang, G. Cohen, M. Thoss, E. Rabani, Phys. Rev. B 88, 045137 (2013); 89, 205129 (2014).

Audrey Cottet (Laboratoire Pierre Aigrain, Département de Physique de l'Ecole Normale Supérieure): Mesoscopic Quantum Electrodynamics with a single spin
Thursday, 7 May 2015, 13:00 (MBP1 026)

A new type of experiment combining microwave cavities and mesoscopic circuits gathering nanoconductors and fermionic reservoirs has recently appeared [1,2,3]. This mesoscopic Quantum Electrodynamics (QED) offers many new possibilities, for instance quantum computing schemes based on localized electronic spins, or a powerful photonic study of electronic transport. In the first part of this seminar, I will introduce a general theoretical framework to describe these experiments. This task faces two challenges. First, one has to quantize the electromagnetic field properly by taking into account electromagnetic boundary conditions which are naturally omitted in atomic cavity QED, due to the smallness of an atom. Second, in the nanocircuits, one has to take into account collective plasmonic modes, as well as electronic quasiparticle states which are absent from circuit QED performed with superconducting quantum bits. I will present a description of mesoscopic QED experiments which takes these specificities into account [4]. In the second part of this seminar, I will present experimental results demonstrating the coherent coupling of a single spin to photons stored in a microwave resonator. Using a circuit design based on a nanoscale spin valve [5], we coherently hybridize the individual spin and charge states of a double quantum dot while preserving spin coherence. This scheme allows us to increase the natural (magnetic) spin-photon coupling by five orders of magnitude, up to the MHz range at the single-spin level.
Our coupling strength yields a cooperativity which reaches 2.3, with a spin coherence time of about 60 ns [6]. We thereby demonstrate a mesoscopic device which could be used for non-destructive single-spin read-out and distant spin-spin coupling via virtual cavity photons.

[1] M. R. Delbecq, V. Schmitt, F. D. Parmentier, N. Roch, J. J. Viennot, G. Fève, B. Huard, C. Mora, A. Cottet, and T. Kontos, Phys. Rev. Lett. 107, 256804 (2011).
[2] T. Frey, P. J. Leek, M. Beck, A. Blais, T. Ihn, K. Ensslin, and A. Wallraff, Phys. Rev. Lett. 108, 046807 (2012).
[3] K. D. Petersson, L. W. McFaul, M. D. Schroer, M. Jung, J. M. Taylor, A. A. Houck, and J. R. Petta, Nature 490, 380 (2012).
[4] A. Cottet, T. Kontos, and B. Douçot, arXiv:1501.00803.
[5] A. Cottet and T. Kontos, Phys. Rev. Lett. 105, 160502 (2010).
[6] J. J. Viennot, M. C. Dartiailh, A. Cottet, and T. Kontos, submitted.

Takeo Kato (Institute of Solid State Physics, University of Tokyo): Kondo signature in heat transport via a local two-state system
Tuesday, 24 February 2015, 16:00 (Physikzentrum 26C, 401)

Heat and electric transport have several similarities as well as dissimilarities. Fourier's law in heat transport corresponds to Ohm's law in electric transport, and these laws are commonly categorized as diffusive transport. Ballistic transport leads to the quantization of conductance in electric as well as heat transport. The conductance quantum was measured in mesoscopic electric conduction in 1988 [1], and much later, its heat-transport analogue was also measured [2]. Recently, the concept of a thermal diode has also been discussed, and an experiment has been conducted to demonstrate it [3]. Recent progress in transport studies strongly indicates that a heat-transport analogue exists for many categories of electric transport. In this talk, we present a theoretical study of the Kondo effect in heat transport via a local two-state system [4].
This system is described by the spin-boson Hamiltonian with Ohmic dissipation, which can be mapped onto the Kondo model with anisotropic exchange coupling. We derive an exact formula for the thermal conductance and evaluate it by the Monte Carlo method. The thermal conductance has a scaling form indicating the universal behavior characteristic of the Kondo effect. Below the Kondo temperature, the conductance follows a universal temperature dependence proportional to T^3, showing nontrivial enhancement. This is a manifestation of strong correlation between system and reservoirs, which is analogous to the Kondo effect in electric transport. We also discuss the coupling dependence of the heat conductance.

[1] B. J. van Wees et al., Phys. Rev. Lett. 60, 848 (1988).
[2] K. Schwab et al., Nature (London) 404, 974 (2000); H.-Y. Chiu et al., Phys. Rev. Lett. 95, 226101 (2005).
[3] N. Li, J. Ren, L. Wang, G. Zhang, P. Hänggi and B. Li, Rev. Mod. Phys. 84, 1045 (2012); C. W. Chang, D. Okawa, A. Majumdar and A. Zettl, Science 314, 1121 (2006).
[4] K. Saito and T. Kato, Phys. Rev. Lett. 111, 214301 (2013).

Takafumi Suzuki (Institute of Solid State Physics, University of Tokyo): Photon-assisted current noises through a quantum dot system
Tuesday, 10 February 2015, 16:00 (Physikzentrum 26C, 401)

Photon-assisted transport through mesoscopic conductors has attracted much attention because the quantum nature of transport processes is significantly modified by time-dependent fields. In recent years, scattering theory has revealed that current noises provide information about the photon-assisted transport of noninteracting electrons. For example, Levitov and Lesovik showed that photon-assisted current noises can detect the phase of the transmission amplitudes induced by the external time-dependent field [1].
Studying the effect of the Coulomb interaction is an important next step to discuss interesting physics, such as the Coulomb blockade and the Kondo effect. In this talk, I will discuss photon-assisted transport in an interacting quantum dot system under a periodically oscillating gate voltage [2]. Photon-assisted current noises in the presence of the Coulomb interaction are calculated based on a gauge-invariant formulation of time-dependent transport. The behavior of the vertex corrections under the AC field will be discussed within the self-consistent Hartree-Fock approximation. The present result provides a useful viewpoint for understanding photon-assisted transport in interacting electron systems.

[1] G. B. Lesovik and L. S. Levitov, PRL 72, 538 (1994).
[2] T. J. Suzuki and T. Kato, arXiv:1411.3520.
Optics and Photonics Journal, ISSN 2160-8881, Scientific Research Publishing. Vol. 7, pp. 170-180 (2017). DOI: 10.4236/opj.2017.710017 (OPJ-79958)

Finite One-Dimensional Photonic Crystal with Gaussian Modulation: Transmission and Escape

María de la Luz Silba-Vélez (1)*, David-Armando Contreras-Solorio (1), Rolando Pérez-Álvarez (2), Carlos Iván Cabrera (1)
(1) Academic Unit of Physics, Autonomous University of Zacatecas, Zacatecas, Mexico
(2) Center of Science Research, Institute of Basic and Applied Sciences, Autonomous University of the State of Morelos, Cuernavaca, Mexico
* E-mail: madelaluzsilbavelez@outlook.com (MDLLS)

Received 15 September 2017; accepted 27 October 2017; published 30 October 2017.
Copyright by the authors and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/licenses/by/4.0/

This paper studies the transmission coefficient and escape frequencies in a system of planar dielectric layers where the refractive index changes from one layer to another through a Gaussian function. The wave equation at normal incidence is analyzed. For the calculations, the transfer matrix formalism is used. In a previous work, the transmission and escape problem for Gaussian electronic superlattices was investigated; here we study the electromagnetic modes of a system formed by layers whose refractive index is modulated by a Gaussian function. The system presents transparency bands of transmission and gaps without transmission. The escape frequencies are situated near these transparency bands but do not coincide with them. The escape frequencies are complex, $\omega = \omega_r - i\Gamma$, where $\omega_r$ is the frequency (mode) and $\Gamma$ describes the width of the states. For these systems, the escape states are very wide. A non-Gaussian system presents resonance peaks in the transmission, and its escape states are narrow.
The formation of transparency bands in the transmission for a Gaussian system is attributed to the widening of the escape states.

Keywords: Transmission Coefficient, Escape Frequencies, Photonic Crystal

1. Introduction

When we talk about photonic crystals, we refer to periodic structures used to manipulate light. In the 1980s, Yablonovitch and John proposed making artificial structures with dielectric materials for the study of the electromagnetic properties of photonic crystals. The first was interested in inhibiting spontaneous emission, while the second was interested in the study of light upon introducing a small localized disorder in the periodic system [1] [2]. Knowledge of the optical properties of materials is a factor that impacts technological progress, for example in the construction of devices or in telecommunications, because of the possibility of building such systems today [3] [4]. Nowadays we can build structures with dielectric materials such as waveguides, mirrors, microcavities or passband filters [5]. The study of energy, electronic, optical and acoustic filters is an interesting and active field. Previously, frequency bandpass filters were studied, in which light is fully transmitted when the frequency of the incident photons falls in the passband and total reflection occurs when the incident energy falls in the stopband; for example, periodic-profile systems formed by bilayers with refractive indices $n_1$ and $n_2$ [6] [7]. We focus on somewhat more interesting systems: we study the transmission and escape frequencies in a photonic crystal where the refractive index of the structure is modulated by a Gaussian function, as proposed in [8]. Such systems have been studied, and it has been observed that the transmission probability is almost equal to unity in the passband.
We also show results for regular systems, i.e., layers with only two values of refractive index, $n_1$ and $n_2$. Regular systems have transmission bands which present resonance peaks with high values of transmittance inside the bands, but no transparency bands. On the other hand, by escape frequencies we mean the resonance frequencies of an open system which is separated from the exterior by a partly transparent boundary surface. This structure loses energy to the exterior via radiation, and we do not consider radiation coming into the system from the exterior. In this case the frequencies obtained are complex eigenvalues. The formation of transparency bands in a Gaussian system is an outstanding fact. We think that these transparency bands are due to the smoothness of the Gaussian change of refractive index, which facilitates the transmission of the waves. On the other hand, this kind of variation of the refractive index, with a high value in the middle of the structure and a progressive reduction towards the extremes of the system, would make the waves escape more easily from the structure. This means that there would be a reduction of the states' lifetime and a widening in frequency. The transparency bands would then be the envelope of the wide escape states. Thus, our purpose in this work is to compare the transparency bands with the escape states. For electromagnetic waves, it was found in [9] that, in the case of a 1D Fabry-Perot structure, the transmission resonance frequencies and the real parts of the complex eigenfrequencies are identical, but if a multilayer system is considered, the escape frequencies differ from the resonance frequencies of the transmittance, although they may be very close [10].
In previous work on the electronic problem [11] we found that the escape energies differ from the resonance energies in the transmittance, and this is much more pronounced for Gaussian superlattices (where the barrier height is modulated by a Gaussian function) than for regular ones where all the barriers have the same height. The analogy between electrons in semiconductor materials and photons in photonic crystals allows us to use the same methodology to calculate the transmission coefficient and to study the escape problem through the transfer matrix formalism [12] [13] [14] [15]. In this case, we change the type of matrix due to some numerical limitations of the associated transfer matrix used in the previous work. We follow the same idea as for the Schrödinger equation [11], but now applied to the Maxwell equations. The paper is organized as follows. In Section 2, we present the theoretical model, using the transfer matrix formalism. In Section 3, we describe the structures we are interested in. In Section 4, we present our results. Finally, in Section 5, we formulate our conclusions.

2. Theoretical Model

We can follow the same idea as in [11], but now applied to the Maxwell equations. In fact, the treatment is analogous, because the differential operator of the Maxwell equation is similar to that of the Schrödinger equation, with different characteristic parameters. In the case of s polarization, where the electric field vector E is transverse to the plane of incidence, the field can be determined by applying continuity conditions at the interfaces. All electric field vectors are perpendicular to the plane of incidence, pointing out of the page, while the magnetic field vectors are taken so that the energy flow is positive in the direction of propagation.
We describe the propagation of light with a variable refractive index by the equation

$$\frac{d}{dx}\left[\frac{1}{\mu_0\mu(x)}\frac{dE_z(x)}{dx}\right] + \left[\omega^2\varepsilon_0\varepsilon(x) - \frac{\kappa^2}{\mu_0\mu(x)}\right]E_z(x) = 0, \quad (1)$$

where $E_z(x)$ and $\frac{1}{\mu_0\mu(x)}\frac{dE_z(x)}{dx}$ are continuous functions. The equation of motion is derived from the Maxwell equations in matter,

$$\nabla\cdot\mathbf{D}(\mathbf{r},t) = \rho(\mathbf{r},t), \quad (2)$$
$$\nabla\times\mathbf{E}(\mathbf{r},t) = -\frac{\partial\mathbf{B}(\mathbf{r},t)}{\partial t}, \quad (3)$$
$$\nabla\cdot\mathbf{B}(\mathbf{r},t) = 0, \quad (4)$$
$$\nabla\times\mathbf{H}(\mathbf{r},t) = \mathbf{j}(\mathbf{r},t) + \frac{\partial\mathbf{D}(\mathbf{r},t)}{\partial t}. \quad (5)$$

If there are no sources, the fields are of the form

$$\mathbf{E}_t(\boldsymbol{\rho},x,t) = \mathbf{E}_{0t}(x)\,e^{i(\boldsymbol{\kappa}\cdot\boldsymbol{\rho}-\omega t)}, \quad (6)$$
$$\mathbf{H}_t(\boldsymbol{\rho},x,t) = \mathbf{H}_{0t}(x)\,e^{i(\boldsymbol{\kappa}\cdot\boldsymbol{\rho}-\omega t)}, \quad (7)$$
$$E_z(\boldsymbol{\rho},x,t) = E_{0z}(x)\,e^{i(\boldsymbol{\kappa}\cdot\boldsymbol{\rho}-\omega t)}, \quad (8)$$
$$H_z(\boldsymbol{\rho},x,t) = H_{0z}(x)\,e^{i(\boldsymbol{\kappa}\cdot\boldsymbol{\rho}-\omega t)}, \quad (9)$$

with

$$\boldsymbol{\rho} = y\,\mathbf{e}_y + z\,\mathbf{e}_z, \quad (10)$$
$$\boldsymbol{\kappa} = \kappa_y\,\mathbf{e}_y + \kappa_z\,\mathbf{e}_z, \quad (11)$$
$$\mathbf{E}_{0t}(x) = E_{0y}(x)\,\mathbf{e}_y + E_{0z}(x)\,\mathbf{e}_z, \quad (12)$$
$$\mathbf{H}_{0t}(x) = H_{0y}(x)\,\mathbf{e}_y + H_{0z}(x)\,\mathbf{e}_z. \quad (13)$$

With a little algebraic manipulation we can obtain (1), considering that $D_x \equiv 0$. We study one-dimensional systems where the profile of the refractive index $n_{sys}(x)$ is described in Section 3. We concentrate on a structure of N plane dielectric layers perpendicular to the x axis. The width of each layer j is denoted by $d_j$ and its refractive index by $n_j$. We consider homogeneous, isotropic and lossless media. In other words, we consider a system where at its ends the refractive indices are constant ($n_L$ and $n_R$), while in the intermediate zone the refractive index $n(x)$ depends on the position (the growth direction of the heterostructure). We use the transfer matrix formalism, built from the dynamical matrix $D_j$ and the propagation matrix $P_j$ [16]. We know that the energy flux is given by the Poynting vector $\mathbf{S}$.
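The "little algebraic manipulation" leading to (1) can be made explicit. A condensed derivation, using the same symbols as Eqs. (1)-(13) and taking the in-plane wave vector along y for brevity, reads:

```latex
% Source-free, harmonic fields: eliminate H between the two curl equations
\nabla\times\mathbf{E} = i\omega\mu_0\mu(x)\,\mathbf{H}, \qquad
\nabla\times\mathbf{H} = -i\omega\varepsilon_0\varepsilon(x)\,\mathbf{E}
\;\Longrightarrow\;
\nabla\times\!\left[\frac{1}{\mu_0\mu(x)}\,\nabla\times\mathbf{E}\right]
  = \omega^2\varepsilon_0\varepsilon(x)\,\mathbf{E}.

% For s polarization, E = E_z(x) e^{i(\kappa y - \omega t)} e_z,
% the z component of the double curl gives
-\frac{d}{dx}\!\left[\frac{1}{\mu_0\mu(x)}\frac{dE_z}{dx}\right]
  + \frac{\kappa^2}{\mu_0\mu(x)}\,E_z
  = \omega^2\varepsilon_0\varepsilon(x)\,E_z ,

% which is Eq. (1). Continuity of E_z and of (1/\mu_0\mu) dE_z/dx follows
% from the tangential boundary conditions on E and H at each interface.
```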
Imposing the continuity conditions at the interface ($x = 0$) for $E_y$ and $H_z$, we have

$$E_1 + E'_1 = E_2 + E'_2, \quad (14)$$
$$\sqrt{\epsilon_1/\mu_1}\,(E_1 - E'_1)\cos\theta_1 = \sqrt{\epsilon_2/\mu_2}\,(E_2 - E'_2)\cos\theta_2, \quad (15)$$

where $\theta_1$ and $\theta_2$ are the angles of the wave vectors $\mathbf{k}_1$ and $\mathbf{k}_2$, respectively, with respect to the normal to the interface. These two equations can be rewritten in matrix form as

$$D_1\begin{pmatrix}E_1\\E'_1\end{pmatrix} = D_2\begin{pmatrix}E_2\\E'_2\end{pmatrix}. \quad (16)$$

We can then rewrite the continuity condition at every interface of the N layers through the matrix

$$D_j = \begin{pmatrix}1 & 1\\ \sqrt{\epsilon_j/\mu_j}\cos\theta_j & -\sqrt{\epsilon_j/\mu_j}\cos\theta_j\end{pmatrix}, \quad (17)$$

the so-called dynamic matrix, for $j = 1, 2, \cdots, N$. An analogous procedure applies to p polarization [8] [16]. In our case we focus only on s polarization, since the study is at normal incidence. The purpose of the transfer matrix is to relate the coefficients of the field solutions,

$$\begin{pmatrix}A_0\\B_0\end{pmatrix} = \begin{pmatrix}M_{11} & M_{12}\\ M_{21} & M_{22}\end{pmatrix}\begin{pmatrix}A_s\\B_s\end{pmatrix}, \quad (18)$$

where $A_0, B_0$ and $A_s, B_s$ are the coefficients in the incoming and outgoing media, respectively. We define the so-called propagation matrix $P_j$, which accounts for the phase change due to the thickness of each layer,

$$P_j = \begin{pmatrix}e^{-i\phi_j} & 0\\ 0 & e^{i\phi_j}\end{pmatrix}, \quad (19)$$

where $\phi_j = k_j d_j = (n_j \omega d_j / c)\cos\theta_j$, with $k_j = (n_j\omega/c)\cos\theta_j$. For a layered system such as that shown in Figure 1, the total matrix of the system is given by

$$D_0^{-1}\left[\prod_{j=1}^{N} D_j P_j D_j^{-1}\right] D_s = \begin{pmatrix}M_{11} & M_{12}\\ M_{21} & M_{22}\end{pmatrix}. \quad (20)$$

The system is composed of N layers, where $D_0$ and $D_s$ are the matrices of the incoming and outgoing media, respectively. Details can be found in [16] [17]. We are interested in the scattering problem of (18). In this problem we consider incoming, reflected and transmitted waves, so the coefficient $B_s$ is null.
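The matrix product of Eq. (20) can be sketched numerically. The following minimal Python sketch (not code from the paper; the layer indices and widths at the bottom are illustrative values) assembles the dynamic and propagation matrices for normal incidence on non-magnetic layers and reads the transmittance off $M_{11}$:

```python
import numpy as np

def dynamic_matrix(n):
    # Dynamic matrix D_j of Eq. (17) at normal incidence for non-magnetic
    # layers (mu = 1, cos(theta) = 1), so sqrt(eps/mu)*cos(theta) -> n.
    return np.array([[1.0, 1.0], [n, -n]], dtype=complex)

def propagation_matrix(n, d, omega, c=1.0):
    # Propagation matrix P_j of Eq. (19); phi_j = n_j * omega * d_j / c.
    phi = n * omega * d / c
    return np.diag([np.exp(-1j * phi), np.exp(1j * phi)])

def total_matrix(n0, layers, ns, omega):
    # Eq. (20): M = D_0^{-1} [prod_j D_j P_j D_j^{-1}] D_s
    M = np.linalg.inv(dynamic_matrix(n0))
    for n, d in layers:
        D = dynamic_matrix(n)
        M = M @ D @ propagation_matrix(n, d, omega) @ np.linalg.inv(D)
    return M @ dynamic_matrix(ns)

def transmittance(n0, layers, ns, omega):
    # Eq. (21): T = (n_s / n_0) |1 / M_11|^2
    M = total_matrix(n0, layers, ns, omega)
    return (ns / n0) * abs(1.0 / M[0, 0]) ** 2

# Illustrative stack: (n_j, d_j) pairs chosen arbitrarily for the sketch
layers = [(2.0, 0.25), (1.2, 0.25), (2.0, 0.25)]
print(transmittance(1.2, layers, 1.2, omega=2.0))
```

An empty stack with matched outer media gives T = 1, which is a quick sanity check on the matrix bookkeeping; for any lossless stack with real indices, T stays between 0 and 1.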
The transmittance T is given by the ratio of the Poynting power flow of the transmitted wave to that of the incident wave,

$$T = \frac{n_s}{n_0}\left|\frac{1}{M_{11}}\right|^2. \quad (21)$$

The reflection coefficient R is obtained in a similar way, or by taking into account that $R + T = 1$. The escape problem concerns photons confined in the crystal which leave the system. We consider only outgoing waves, so the coefficients $A_0 = 0$ and $B_s = 0$ outside the interval $(x_0, x_s)$. Then (18) gives the transcendental equation

$$M_{11} = 0. \quad (22)$$

The solutions of (22) have the form $\omega = \omega_r - i\Gamma$, where the real part $\omega_r$ represents the frequency, while the imaginary part $\Gamma$ describes the fact that the modes have a finite lifetime and decay. For simplicity we make a change of variable, explained in detail in Section 3.

3. The Refractive Index Profile

3.1. System Modulated by a Gaussian Function

We are interested in a photonic crystal where the refractive index of the structure is modulated by a Gaussian function (see Figure 1). Following the proposal in [8], for the odd layers

$$n(x) = (n_{max} - n_{min})\exp\!\left(-\left(\frac{x - x_0}{\sigma}\right)^2\right) + n_{min},$$

where $n_{max}$ is the refractive index of the central layer, $n_{min}$ is the minimum value of the initial and final layers, and $\sigma$ is the standard deviation. To guarantee that the initial and final layers take the value 1.2, we take

$$n_{min} = \frac{1.2 - n_{max}\exp\!\left(-(x/\sigma)^2\right)}{1 - \exp\!\left(-(x/\sigma)^2\right)}. \quad (23)$$

We do this so that the traveling wave traverses the structure as smoothly as possible. In all cases studied we take $n_{max} = 2$. The even layers are constant, and in our case we take them equal to the exterior of the structure. For comparison (Section 4), we also show a result for a system where the odd layers are modulated by a Gaussian function such that the maximum refractive index is at the ends and the minimum at the center of the structure. The calculations are performed at normal incidence with non-magnetic materials.
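The Gaussian profile of the odd layers can be sketched as follows. This small Python helper (not from the paper; the number of layers and the choice of σ are illustrative) evaluates n(x) at the odd-layer centers and applies Eq. (23) so that the outermost layers land on the target edge index:

```python
import numpy as np

def gaussian_profile(centers, n_max=2.0, n_edge=1.2, sigma=None):
    # centers: x positions of the odd-layer centers, symmetric about x0 = 0
    centers = np.asarray(centers, dtype=float)
    x_edge = np.max(np.abs(centers))
    if sigma is None:
        sigma = x_edge / 2.0   # illustrative choice; sigma is a free parameter
    g = np.exp(-(centers / sigma) ** 2)
    g_edge = np.exp(-(x_edge / sigma) ** 2)
    # Eq. (23): choose n_min so that the outermost odd layers take n_edge
    n_min = (n_edge - n_max * g_edge) / (1.0 - g_edge)
    return (n_max - n_min) * g + n_min

centers = np.linspace(-1.0, 1.0, 9)   # 9 odd layers across the structure
n = gaussian_profile(centers)
print(round(float(n[0]), 6), round(float(n[4]), 6))  # -> 1.2 2.0
```

By construction the edge layers evaluate to exactly 1.2 and the central layer to n_max = 2, matching the profile described above; the resulting index list can be fed layer by layer into a transfer matrix calculation.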
The ratio of the layer widths is $d_{even}/d_{odd} = 8$. The frequency range is normalized: instead of plotting as a function of ω, the results are presented as a function of $f\Delta/c$, where $\omega = 2\pi f$, f is the frequency, and the total width of the structure is taken as Δ = 1.

3.2. Regular System

By a regular system we mean structures formed by two refractive indices $n_1$ and $n_2$. The refractive indices are interleaved with an odd number of layers $N_c = 3, 5, 7, \cdots$. For example, for $N_c = 7$ we have the refractive indices $n_1, n_2, n_1, n_2, n_1, n_2, n_1$, with their corresponding widths $d_1$ and $d_2$.

4. Results and Discussion

In this section we present the transmission coefficients and escape frequencies for systems whose profiles are formed by layers modulated by a Gaussian function. We show them for 17, 25, 37 and 45 layers with Gaussian modulation (see Figure 2 and Figure 3), for the inverted case with 17 layers (see Figure 4), and for the regular case, also with 17 layers (see Figure 4). In the inverted system, the odd layers are modulated by a Gaussian function such that the maximum refractive index is at the ends and the minimum at the center of the structure. In the regular case, the refractive index of the odd layers is constant and equal. We use values attainable with porous silicon. We observe that the larger the number of layers, the better defined the passband. We performed the calculations using Mathematica 10.4. It is important to stress that it was difficult to solve the escape problem of Eq. (22) for more than 11 layers. This was overcome by separating the matrix element $M_{11} = 0$ into its real and imaginary parts and using interpolation with known numerical methods. From Figures 2-4 we see that the Gaussian systems present transparency bands with gaps of no transmission.
For these systems the escape states do not necessarily coincide with the resonances of the passbands, and the escape states are very wide. By contrast, the inverted Gaussian and the regular structures present passbands without transparency, only resonance peaks of transmission for the eigenstates of the system. At the same time, for these structures, the coincidence of escape states and transmission resonances is much better, and the escape states are narrow. This means that the escape-state lifetime for Gaussian systems is shorter than for inverted Gaussian and regular systems, which is what we expected, because it is more difficult for photons to escape from the latter systems. By contrast, the progressive reduction of the refractive index from the middle of the structure to the extremes facilitates the escape of photons in a Gaussian system. Simultaneously, from the transmission bands and the position of the wide escape states for Gaussian structures, we deduce that the transparency bands are formed by the envelope of the wide escape states, which was the expectation mentioned in the Introduction. Mathematically speaking, we are facing two different boundary problems for the same equation. The solutions need not be the same. In fact, the transmission problem has a continuous spectrum while the escape problem has a discrete spectrum with complex values. However, in certain situations, as in the case of the regular structure, the two problems to some extent appear to be closely related. In the case of the Gaussian structure the relationship between the two problems is less clear.

5. Conclusions

Using Maxwell's equations and the transfer matrix formalism, we obtained transmission coefficients and the solution of the escape problem in systems where the refractive index of the layers is modulated by a Gaussian function, as well as for inverted Gaussian and regular systems.
The variation of parameters such as the refractive index and the number of layers in Gaussian structures allows passbands with very good transmission, i.e., very good transparency. This type of system has potential applications as frequency filters for photons. The structure has broad intervals of frequency, or passbands, where there is almost total transmission, separated by stopbands where there is no propagation of photons. For Gaussian structures, the escape frequencies have very wide linewidths and are situated inside or near the passbands, but they do not necessarily coincide with the passbands. For regular and inverted Gaussian structures, the resonance and escape frequencies are very close. We associate the large width of the escape frequencies with the formation of transparency bands in the Gaussian structures.

MLSV and DACS thank the support of PRODEP SEP-SES. CIC wishes to thank the support of COZCYT and also of CONACYT (Grant Number 337137). RPA acknowledges hospitality at the Autonomous University of Zacatecas.

Cite this paper: de la Luz Silba-Vélez, M., Contreras-Solorio, D.-A., Pérez-Álvarez, R. and Cabrera, C.I. (2017) Finite One-Dimensional Photonic Crystal with Gaussian Modulation: Transmission and Escape. Optics and Photonics Journal, 7, 170-180. https://doi.org/10.4236/opj.2017.710017

References
[1] Yablonovitch, E. (1987) Inhibited Spontaneous Emission in Solid-State Physics and Electronics. Physical Review Letters, 58, 2059-2062. https://doi.org/10.1103/PhysRevLett.58.2059
[2] John, S. (1987) Strong Localization of Photons in Certain Disordered Dielectric Superlattices. Physical Review Letters, 58, 2486-2489. https://doi.org/10.1103/PhysRevLett.58.2486
[3] O'Brien, J.D., Lee, P., Cao, J.R., Kuang, W., Kim, C., Kim, W., Yang, T. and Choi, S. (2004) Photonic Crystal Lasers. ENN, 8, 617-628.
[4] Istrate, E.
and Sargent, E.H. (2006) Photonic Crystal Heterostructures and Interfaces. Reviews of Modern Physics, 78, 455-481. https://doi.org/10.1103/RevModPhys.78.455
[5] Joannopoulos, J.D., Johnson, S.G., Winn, J.N. and Meade, R.D. (2008) Photonic Crystals: Molding the Flow of Light. Princeton University Press, Princeton.
[6] Aly, A.H., Ismaeel, M. and Abdel-Rahman, E. (2012) Comparative Study of the One Dimensional Dielectric and Metallic Photonic Crystals. Optics and Photonics Journal, 2, 105-112. https://doi.org/10.4236/opj.2012.22014
[7] Segovia-Chaves, F. (2014) Energy Flux Reflected and Transmitted in One-Dimensional Photonic Crystal. Revista de la Facultad de Ciencias Básicas, 10, 158-167. https://doi.org/10.18359/rfcb.327
[8] Madrigal-Melchor, J., Enciso-Muñoz, A. and Contreras-Solorio, D.A. (2013) Optical Transmittance of a Multilayer Structure with Gaussian Modulation of the Refractive Index. IOP Conference Series: Materials Science and Engineering, 45, 012032. https://doi.org/10.1088/1757-899X/45/1/012032
[9] Maksimović, M. (2008) Optical Resonances in Multilayer Structures. PhD Thesis, University of Twente, The Netherlands.
[10] Settimi, A., Severini, S., Mattiucci, N., Sibilia, C., Centini, M., D'Aguanno, G., Bertolotti, M., Scalora, M., Bloemer, M. and Bowden, C.M. (2003) Quasinormal-Mode Description of Waves in One-Dimensional Photonic Crystals. Physical Review E, 68, 026614. https://doi.org/10.1103/PhysRevE.68.026614
[11] Silba-Vélez, M. de la Luz, Pérez-Álvarez, R. and Contreras-Solorio, D.A. (2015) Transmission and Escape in Finite Superlattices with Gaussian Modulation. Revista Mexicana de Física, 61, 132-136.
[12] Pérez-Álvarez, R. and Rodríguez-Coppola, H. (1988) Transfer Matrix in 1D Schrödinger Problems with Constant and Position-Dependent Mass. Physica Status Solidi B, 145, 493-500. https://doi.org/10.1002/pssb.2221450214
[13] Pérez-Álvarez, R., Trallero-Herrero, C. and García-Moliner, F.
(2001) Transfer Matrix in One Dimensional Problems. European Journal of Physics, 22, 275. https://doi.org/10.1088/0143-0807/22/4/302
[14] Pérez-Álvarez, R. and García-Moliner, F. (2004) Transfer Matrix, Green Function and Related Techniques: Tools for the Study of Multilayer Heterostructures. Universitat Jaume I.
[15] Silba-Vélez, M. de la Luz (2010) One-Dimensional Variable Mass Problems in Multilayer Systems. Thesis, Faculty of Sciences, Autonomous University of the State of Morelos, Cuernavaca, Mexico.
[16] Yeh, P. (2005) Optical Waves in Layered Media. 2nd Edition, Wiley-Interscience, Hoboken.
[17] Luna, H.R. (2007) Transmittance of a Layer with Negative Refractive Index. Masters Thesis, Academic Unit of Physics, Autonomous University of Zacatecas, Zacatecas.
41ff1870c527972d
Large-amplitude quasi-solitons in superfluid films
Susumu Kurihara
Journal of the Physical Society of Japan, Issue 10, pp. 3262-3267 (October 1981)

Abstract: Nonlinear time evolution of the condensate wave function in superfluid films is studied on the basis of a Schrödinger equation which incorporates the van der Waals potential due to the substrate in its fully nonlinear form, together with a surface-tension term. In the weak-nonlinearity limit the equation reduces to the ordinary (cubic) nonlinear Schrödinger equation, for which exact soliton solutions are known. Numerical analysis demonstrates that even under strong nonlinearity, where the equation differs markedly from the cubic Schrödinger equation, there exist quite stable composite "quasi-solitons". These quasi-solitons are bound states of localized excitations of the amplitude and phase of the condensate (superfluid thickness and superfluid velocity, in more physical terms). The present work thus shows the persistence of the solitonic behavior of superfluid films in the fully nonlinear situation.
A. Usman et al., J. Phys. Stu. 2(2), 44 (2008) — Journal of Physics Students. This article is released under the Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.

Physical Significances of Fifth-Order Nonlinearity for Pulse Dynamics in Monomode Optical Fibres

A. Usman1*, J. Osman2, D. R. Tilley2
1 Department of Physics, Federal University of Technology, PMB 2076, Yola, Adamawa State, Nigeria
2 School of Physics, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia
* Corresponding Author: [email protected], Tel: 234 8052066228

Received 10 January 2008; accepted 9 February 2008

Abstract - We discuss, with illustrations, some physical significances of the fifth-order nonlinear susceptibility for pulse dynamics in monomode optical fibres. The governing amplitude equation is the cubic-quintic nonlinear Schrödinger equation (CQNLSE), which has soliton properties similar to those of the cubic nonlinear Schrödinger equation (CNLSE), based on solutions obtained by a variational method. Some differences, concerning pulse durations in the range from 10 picoseconds down to a few femtoseconds, that make the CQNLSE experimentally more viable are explained.

PACS: 42.81.Dp; 42.65.Tg; 42.81.Wg; 42.81.-i
Keywords: Optical fibers, nonlinear Schrödinger equations, fifth-order susceptibility, variational method, saturation effects, two-state solution.

1. Introduction

The technology of optical telecommunications [1], and of many other modern optical devices [1,2] in which optical fibres are the media for pulse transmission from one point to another, is rapidly advancing.
For the physics of the propagating pulses in those optical devices, the governing amplitude propagation equation has usually been the cubic nonlinear Schrödinger equation (CNLSE), which implies an intensity-dependent nonlinear refractive index of the form

$$n(\omega, |E|^2) \simeq n_0 + n_2|E|^2,$$

where $n_0$ denotes the linear refractive index, $E$ is the electric field, and $n_2 \equiv 3\chi^{(3)}_{xxxx}/(8n_0)$ is the nonlinear refractive index corresponding to the third-order nonlinear susceptibility tensor $\chi^{(3)}_{xxxx}$. There are, however, a few main physical reasons justifying the use of the cubic-quintic nonlinear Schrödinger equation (CQNLSE), for which the refractive index takes the form

$$n(\omega, |E|^2) \simeq n_0 + n_2|E|^2 + n_4|E|^4,$$

where $n_4 \equiv 5\chi^{(5)}_{xxxxxx}/(32n_0)$ is the nonlinear refractive index corresponding to the leading fifth-order susceptibility tensor. One reason for using the latter expression, which has been well illustrated [3], has to do with very high input intensity. A further-reaching reason, however, is that at most of the input intensities at which the CNLSE is applied, the CQNLSE gives the correct dynamics and reveals physical significances that are useful for device-modeling applications [1]. In most of the existing discussions [2-6] one finds a tendency to prefer one of the two governing equations, with the trend in favour of the CNLSE over the CQNLSE [1,7]. In the few reported considerations of the effects of the fifth-order nonlinearity [8], inherent in the CQNLSE, two significant phenomena that were not recognized are the saturation effect and the two-state solution. The present work gives explicit descriptions, with illustrations, of these two phenomena in a manner not reported to date (see section 4).

2.
Dynamic Governing Equation

From first principles, i.e., from Maxwell's equations, the dynamic governing amplitude propagation equation can be obtained following the method of ref. [4], extended to include $n_4$, signifying the effects of $\chi^{(5)}_{xxxxxx}$. The equation reveals two perturbation terms, the third-order dispersion and self-steepening terms, which have distortion effects on the propagating pulses. If these are neglected, the dimensionless form of the CQNLSE is

$$i\,\frac{\partial U}{\partial \xi} - \frac{\delta_0}{2}\,\frac{\partial^2 U}{\partial T^2} - \alpha_0 U + \delta_1 |U|^2 U + \delta_2 \nu |U|^4 U = 0, \qquad (1)$$

where $i \equiv \sqrt{-1}$; $U(\xi,T) \equiv A(\xi,T)/A_0$ is the dimensionless complex amplitude, with $A(\xi,T)$ and $A_0$ respectively denoting the actual pulse amplitude and the initial (input) amplitude; $\xi \equiv z/L_D$ is the dimensionless propagation distance, with $z$ and $L_D$ respectively denoting the actual propagation distance and the dispersion length; and $T \equiv (t - z/v_g)/\tau_0$ denotes the shifted dimensionless time, where $t$ is the actual time, $v_g$ the group velocity, and $\tau_0$ the real pulse duration. The other parameters are defined as follows. For normal dispersion $\delta_0 = +1$, $\delta_1 = +1$ and $\delta_2 = \pm 1$; for the anomalous dispersion considered in this note, $\delta_0 = -1$, $\delta_1 = +1$ and $\delta_2 = \pm 1$. The most crucial parameter [1] is $\nu$, with an expression of the form

$$\nu = \frac{2\, n_4 |\beta_2| \lambda}{3\, n_2^2\, \pi\, \tau_0^2}, \qquad (2)$$

where $\lambda$ is the optical wavelength of the propagating pulse in the monomode optical fibre and $|\beta_2|$ is the magnitude of the second-order dispersion parameter, such that $\lambda > 1.3\,\mu$m for $\beta_3$ (i.e., the third-order dispersion term) to be insignificant numerically and physically. The parameter $\alpha_0 \equiv L_D \lambda \varpi^2/(4\pi)$, where $\varpi^2$ has the meaning of a separation constant; when $\alpha_0$ is determined from appropriate initial conditions for equation (1), $\varpi^2$ is simply evaluated. In addition to the previously reported variational model of equation (1) [1], another model is presented in this contribution.
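Equation (1) can also be explored numerically with a standard split-step Fourier scheme. The sketch below uses an illustrative grid, step size, and sech input pulse, none of which are taken from this paper (which proceeds variationally rather than numerically); it merely instantiates the anomalous-dispersion CQNLSE stated above.

```python
import numpy as np

# Split-step Fourier integration of the dimensionless CQNLSE (1):
#   i U_xi - (delta0/2) U_TT - alpha0 U + delta1 |U|^2 U + delta2 nu |U|^4 U = 0
# Anomalous dispersion: delta0 = -1, delta1 = +1. All numbers are illustrative.
delta0, delta1, delta2 = -1.0, 1.0, -1.0
alpha0, nu = 0.0, 0.1

NT, Tmax = 1024, 20.0
T = np.linspace(-Tmax, Tmax, NT, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(NT, d=T[1] - T[0])   # spectral frequencies

U = 1.0 / np.cosh(T)          # illustrative input pulse U(0, T)
dxi, nsteps = 1e-3, 2000

# Linear (dispersion) sub-step is exact in Fourier space; the nonlinear
# sub-step is a pointwise phase rotation from the alpha0, cubic, quintic terms.
lin = np.exp(1j * (delta0 / 2) * w**2 * dxi)
for _ in range(nsteps):
    U = np.fft.ifft(lin * np.fft.fft(U))
    phase = alpha0 - delta1 * np.abs(U)**2 - delta2 * nu * np.abs(U)**4
    U = U * np.exp(-1j * phase * dxi)

# Both sub-steps are pure phase rotations, so the pulse energy is conserved.
energy0 = np.sum(1.0 / np.cosh(T)**2)
print(np.allclose(np.sum(np.abs(U)**2), energy0))  # True
```

Energy conservation is a useful sanity check on the scheme, since each sub-step is unitary.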
Through soliton theory and saturation effects, differences between the CQNLSE and the CNLSE are illustrated and explained in the present model.

3. Description by the Variational Method

The CQNLSE (1) has a Lagrangian density of the form

$$L = \frac{i}{2}\left(U\,\frac{\partial U^*}{\partial \xi} - U^*\,\frac{\partial U}{\partial \xi}\right) - \frac{\delta_0}{2}\left|\frac{\partial U}{\partial T}\right|^2 + \alpha_0 |U|^2 - \frac{\delta_1}{2}|U|^4 - \frac{\delta_2 \nu}{3}|U|^6, \qquad (3)$$

where asterisks denote complex conjugates. In the Ritz variational procedure [5], Gaussian trial functions for both the initial and the subsequent profiles have been shown to be close approximations of the analytical profiles via the criterion of integral contents [1,5]. With the trial function defined for the subsequent pulses [1] and substituted into equation (3), a Lagrangian density $L_G$ with respect to the dimensionless time is obtained. According to the variational principle $\delta \int \langle L \rangle\, d\xi = 0$, the reduced Lagrangian is $\langle L \rangle = \int L_G\, dT$. The reduced Lagrangian is the dependent variable of the Euler-Lagrange equation

$$\frac{\partial}{\partial \xi}\left[\frac{\partial \langle L \rangle}{\partial\left(\partial (i)/\partial \xi\right)}\right] + \frac{\partial}{\partial T}\left[\frac{\partial \langle L \rangle}{\partial\left(\partial (i)/\partial T\right)}\right] - \frac{\partial \langle L \rangle}{\partial (i)} = 0, \qquad (4)$$

where $(i)$ denotes any one of the Gaussian parameters [1] of the propagating pulse: the complex amplitude, the pulsewidth, and the chirp, each a function of the propagation distance. Working equation (4) for these parameters yields the variational equations in differential form. The details, only briefly sketched here, are available in ref. [1]. Solutions of the variational equations contain all the results needed to describe the pulse dynamics completely. A harmonic-oscillator equation is a principal result; from it a potential function is obtained, typifying the pulse as a particle in a potential well [1].
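Assuming the Lagrangian density (3) reads $L = \frac{i}{2}(U U^*_\xi - U^* U_\xi) - \frac{\delta_0}{2}|U_T|^2 + \alpha_0|U|^2 - \frac{\delta_1}{2}|U|^4 - \frac{\delta_2\nu}{3}|U|^6$ (our reading of the extracted text), a quick symbolic check confirms that its Euler-Lagrange equations reproduce the CQNLSE (1). Here $U$ and $V$ stand for $U$ and $U^*$, treated as independent fields, a standard device:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

xi, T = sp.symbols('xi T')
d0, d1, d2, a0, nu = sp.symbols('delta0 delta1 delta2 alpha0 nu')
U = sp.Function('U')(xi, T)
V = sp.Function('V')(xi, T)   # plays the role of U*

# Lagrangian density (3), with V in place of U*:
L = (sp.I/2)*(U*sp.diff(V, xi) - V*sp.diff(U, xi)) \
    - (d0/2)*sp.diff(U, T)*sp.diff(V, T) \
    + a0*U*V - (d1/2)*(U*V)**2 - (d2*nu/3)*(U*V)**3

# CQNLSE (1), with |U|^2 written as U*V:
cqnlse = sp.I*sp.diff(U, xi) - (d0/2)*sp.diff(U, T, 2) - a0*U \
         + d1*(U*V)*U + d2*nu*(U*V)**2*U

# Varying V must give back equation (1), up to an overall sign:
eqs = euler_equations(L, [U, V], [xi, T])
print(any(sp.expand(eq.lhs + cqnlse) == 0 or
          sp.expand(eq.lhs - cqnlse) == 0 for eq in eqs))  # True
```

This is only a consistency check on the reconstructed equation (3); the variational calculation proper (Gaussian ansatz, reduced Lagrangian) is carried out in ref. [1].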
Since the details of the procedures are given elsewhere [1], the potential function is stated explicitly:

$$\Phi(y) = \frac{1}{y^2} + \frac{\xi_r}{y} - (1 + \xi_r), \qquad (5)$$

where $y(\xi) \equiv g(\xi)/g_0$ is the normalized pulsewidth, in which $g(\xi)$ is the dimensionless pulsewidth and $g_0$ the initial pulsewidth; $\xi_r$ is the crucial parameter defining the factor of pulse compression/decompression, with the expression

$$\xi_r = \frac{9\sqrt{2}\,\delta_1 E_0 g_0}{9\delta_0 + 4\sqrt{3}\,\delta_2 \nu E_0^2}, \qquad (6)$$

where $E_0 \equiv g_0 |G_0|^2$, with $|G_0|$ the input amplitude of the pulse. In the bright solitary-wave configuration $\xi_r = -2$, as deducible from the set of allowed intervals of values of the compression/decompression factor [1,5]. Equation (6) can then be shown to yield

$$|G_0|\, g_0 = \frac{3\sqrt{2}}{\left[\,9\sqrt{2} + 8\sqrt{3}\,\delta_2 \nu |G_0|^2\,\right]^{1/2}}\,; \qquad (7)$$

putting $\nu = 0$, corresponding to the CNLSE [5], one obtains

$$|G_0|\, g_0 = 2^{1/4} \simeq 1.19\,; \qquad (8)$$

that is, applying all of the detailed procedures described here to the CNLSE yields equation (8) for the bright soliton pulse.

4. Discussion of the Variational Model

In the anomalous-dispersion regime of pulse propagation, for which $\beta_2 < 0$, equation (2) may be used to observe experimentally the variation of $|\beta_2|$ with optical wavelength $\lambda$, assuming fixed magnitudes of the other parameters. One significant implication of (2) is that $\nu \propto 1/\tau_0^2$, in complete analogy to the coefficients of the perturbation terms that have been advanced as necessary for the CNLSE to be valid for pulse durations in femtoseconds [1,6]. Observe that this means that as $\tau_0$ decreases, both $\nu$ and the input (incident) power increase significantly, so that the CQNLSE (1) describes distortionless propagation [7]. As noted previously [1], a value of $\nu$ of order 0.044 can correspond to $\tau_0 = 10.0$ ps for given values of $|\beta_2|$ and $\lambda$. Thus, durations from 10.0 ps downward necessarily require the inclusion of $n_4$ if a numerical significance of order $10^{-3}$ is set for $\nu$. Fig.
1 simulates the variation of the input dimensionless pulsewidth with respect to the input dimensionless pulseheight, giving another variational model comparable to the previous one [1,7]. The essence of the present model is rooted in a clearer illustration of the difference between the CNLSE and the CQNLSE through the phenomena of saturation and the two-state solution, which are the main physical significances of $\chi^{(5)}$. Saturation implies that at certain values of the pulsewidth, the value of the pulseheight does not change significantly. Correspondingly, except at the minimum value of the pulsewidth, every other value of the pulsewidth corresponds to two values of the pulseheight, thus defining the two-state solution.

[Figure omitted: curves c1-c5 of $g_0$ versus $|G_0|$.] Fig. 1. Two-state solution and saturation effect of $\chi^{(5)}_{xxxxxx}$ through the nonlinearity coefficient $\nu$: curves c1, c2 and c3, obtained from equation (7) for $\delta_2 = -1$, correspond to $\nu = 0.1$, 0.125 and 0.15; curve c4, obtained from equation (8) for the CNLSE, corresponds to $\nu = 0$ in equation (7); and curve c5 depicts three superimposed curves for $\delta_2 = +1$ with $\nu = 0.1$, 0.125 and 0.15, respectively.

For the monomode optical fibre, $\delta_2 = -1$, curves c1, c2 and c3 of Fig. 1 respectively simulate two-state solutions of the CQNLSE for $\nu = 0.1$, 0.125 and 0.15. It can be seen that one value of $g_0$ gives two values of $|G_0|$. The three cases have respective minimum values $g_0(\mathrm{min}) \sim 0.8324$, 0.931 and 1.0194, corresponding to $|G_0|_{\mathrm{min}} \approx 2.475$, 2.213 and 2.021. These unique values may be considered one form of saturation effect. As the curves depict, at $|G_0| \approx 3.03$, 2.711 and 2.475 a further increase of $g_0$ does not produce a significant increase in $|G_0|$, i.e., the respective maximum amplitudes are saturated. Because most optical media have $n_4 < 0$, the CNLSE precludes two-state phenomena. Curves c4 and c5 of Fig. 1 simulate saturation effects of $\nu$ respectively for the CNLSE and for the CQNLSE (1) whenever $n_4 > 0$, for $\delta_2 = +1$. Curve c4 corresponds to $\nu = 0$ in equation (7).
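The particle-in-a-potential-well picture behind these curves can be sanity-checked numerically. Assuming the potential function (5) reads $\Phi(y) = 1/y^2 + \xi_r/y - (1+\xi_r)$ (our reading of the extracted text), the bright-solitary-wave value $\xi_r = -2$ should make the normalized width $y = 1$ a stationary point of the well, so the pulse propagates with unchanged width:

```python
# Potential function (5) as reconstructed; xi_r = -2 is the
# bright-solitary-wave value quoted in the text.
def phi(y, xi_r=-2.0):
    return 1.0 / y**2 + xi_r / y - (1.0 + xi_r)

h = 1e-6
slope_at_1 = (phi(1 + h) - phi(1 - h)) / (2 * h)  # central finite difference

print(abs(phi(1.0)) < 1e-12)   # True: Phi(1) = 0 for any xi_r
print(abs(slope_at_1) < 1e-6)  # True: y = 1 is an equilibrium when xi_r = -2
```

That $\Phi'(1) = -2 - \xi_r$ vanishes exactly at $\xi_r = -2$ is what singles out this value for a width-preserving bright pulse.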
Actually, curve c5 corresponds to three superimposed curves drawn from equation (7) for $\delta_2 = +1$ with $\nu = 0.1$, 0.125 and 0.15; that is, the curves nearly overlap. In nonlinear theory the differences must nevertheless be considered significant. From an input amplitude of about $|G_0| \sim 0.75$ onward, the differences in pulsewidth relative to the CNLSE grow larger, implying corresponding errors in the latter. Moreover, curve c4 reaches saturation at a higher amplitude than any of the curves c5. Fig. 2 depicts the same model in a generalized form of the relation between the pulsewidth parameter and the input intensity parameter for the CQNLSE.

[Figure omitted: curves (1) and (2) of $g_p$ versus $G_p$.] Fig. 2. Generalized relation obtained for the CQNLSE from equation (7), where curve (1) corresponds to $\delta_2 = -1$ and curve (2) to $\delta_2 = +1$. The pulsewidth parameter is $g_p \equiv g_0/\nu^{1/2}$ and the input intensity parameter is $G_p \equiv \nu |G_0|^2$. Note that the input amplitude parameter is $G_A \equiv \sqrt{G_p} = \nu^{1/2}|G_0|$.

5. Conclusion

From the results of the variational solution of the CQNLSE [1], physical significances of the fifth-order nonlinear susceptibility have been illustrated using another variational model that describes bright solitary waves. This model enables us to explain clearly the differences between the CNLSE and the CQNLSE. In its dimensionless form, as usually applicable in soliton theory, the latter sustains saturation of the input amplitude and the two-state solution. The two-state solution, though formerly demonstrated to be inherent in most optical media [1,7] owing to their negative-valued fifth-order nonlinear refractive index $n_4$, has now been shown to be absent for $n_4 > 0$. It can thus be seen from Fig. 1 that, for a given set of propagation-parameter values and pulse durations from 10.0 ps down to a few femtoseconds, the CNLSE is not adequate for the dynamics of bright solitary waves.
Acknowledgement

A. Usman acknowledges the support of the School of Physics, Universiti Sains Malaysia, in providing some of the materials used for this work.

References
[1] A. Usman, J. Osman, and D. R. Tilley, J. Nonl. Opt. Phys. Mater. 7, 461 (1998), and references therein.
[2] G. P. Agrawal, Fiber-Optic Communication Systems (John Wiley, New York, 1992), Chapt. 9.
[3] D. Pushkarov and S. Tanev, Optics Comm. 124, 353 (1996).
[4] A. Kumar, Phys. Repts. 187, 63 (1990).
[5] D. Anderson, Phys. Rev. A 27, 3135 (1983).
[6] M. J. Potasek, J. Appl. Phys. 85, 941 (1988).
[7] A. Usman, PhD Thesis, University of Science, Malaysia (USM) (2000).
[8] A. Kumar, S. N. Sarkar, and A. K. Ghatak, Opt. Lett. 11, 321 (1986).
Scalar Torsion – Unified Field Theory Key

Scalar Torsion is the New Symmetry of General Relativity

Under Cartan transformations, the new formalism of the General Theory of Relativity (GTR) leads to different pictures of the same gravitational phenomena. "We reformulate the general theory of relativity in the language of Riemann-Cartan geometry" (J. B. Fonseca-Neto, C. Romero, S. P. G. Martinez) [1]. They show that in an arbitrary Cartan gauge general relativity has the form of a scalar-tensor theory: "…we extend the concept of space-time symmetry to the more general case of Riemann-Cartan space-times endowed with scalar torsion."

GTR has the problem of treating the speed of light as a constant while using that same speed of light in its formulation. Einstein ignored everything he couldn't see, but if you want to derive particles you can't ignore it. The speed of light was held to be infinite from the ancient Greeks up until the time of Galileo Galilei. Quantum theory explains everything except general relativity and gravity. But if space-times are endowed with scalar torsion we are down to just gravity, with the new symmetry of general relativity. And if gravity, as explained by Bob McElrath, a former theory postdoc at UC Davis and now at CERN, emerges from neutrinos [2], then it too is explained by scalar torsion waves, since Prof. Konstantin Meyl PhD has presented the theory that neutrinos are scalar waves moving faster than the speed of light. [3]

Properties of Neutrinos

In 2002 the Nobel Prize was awarded to Raymond Davis Jr. and Masatoshi Koshiba for the detection of cosmic neutrinos. A neutrino oscillates between the properties of an electron and a positron; the average of the charge of the neutrino is zero, and so is the average of its mass, but the effective values are not zero. Just like AC current: the average of the voltage is zero but the effective value is not zero.
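The AC analogy can be made concrete: over one full period of a sine wave, the time average is zero while the RMS ("effective") value is not. A minimal numerical check, independent of any claims about neutrinos:

```python
import math

# Sample one full period of a sine wave and compare its time average
# with its RMS ("effective") value.
N = 100_000
samples = [math.sin(2 * math.pi * k / N) for k in range(N)]

mean = sum(samples) / N                           # ~0: the half-cycles cancel
rms = math.sqrt(sum(s * s for s in samples) / N)  # ~1/sqrt(2): energy does not cancel

print(abs(mean) < 1e-9)   # True: the average is (numerically) zero
print(round(rms, 3))      # 0.707
```

This is exactly why household AC power is quoted as an RMS voltage rather than an average.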
The result is a particle with no charge and no mass, but with energy and momentum, and this is the only way to explain the existence of the physical neutrino.

Neutrinos – Potential Vortices – Scalar Torsion Waves – Unified Field Theory

From Objectivity to a Unified Field Theory: potential vortices, newly discovered properties of the electric field, fundamentally change the picture of the physical world. Prof. Konstantin Meyl PhD, who lectures at the Technical University of Berlin, the University of Clausthal and the University of Applied Sciences Furtwangen, has written several books, some of which have been translated into English. He has designed a fully functional replica of Nikola Tesla's longitudinal electric wave (a potential vortex which propagates scalar-like through space), so that this phenomenon can now be studied and examined once again. He used James Clerk Maxwell's third equation, known as Faraday's Law, as a hypothetical factor and argues that the electric vortex is a part of it. Because his theory is based on an extension of Maxwell's theory, classical physical laws remain in force; his theory is a special-case scenario that does not affect them. This non-speculative theory enables new interpretations of several principles of electrical engineering and quantum physics. It leads to feasible interpretations of experimental observations which to this day have not been possible to explain via existing theories. For example, quantum particle characteristics can be calculated when the particle is interpreted as a vortex. Likewise, a number of neutrino experimental results can be explained when neutrinos are regarded as vortices. Dr. Meyl's theory describes how field vortices form scalar waves via his extended field theory.

Neutrino Power – Alternative Clean, Cheap Energy

Neutrino energy (scalar torsion) from a black hole: neutrinos blasting from the centre of a galaxy. Radio Galaxy Pictor A. Credits: X-ray: NASA/CXC/Univ. of Hertfordshire/M.
Hardcastle et al.; Radio: CSIRO/ATNF/ATCA If one goes into resonance with neutrino radiation then one can collect it, but the frequencies are higher than the present day semi-conductors. Nevertheless, neutrino power is available as an inexhaustible form of energy due to a remarkable overunity effect. Significant advances can result in terms of environmental sustainability and regarding today’s electromagnetic pollution, by means of this revised theory. However, the Khazarian Mafia or Hyksos blue bloods who are less than 1% of the population and who own the world and everything in it, cannot get what they want from clean, cheap energy; because what they want is full control over what happens on earth, using their occult knowledge and our taxes and interest on their black magic money scam. They are very proud of being able to trace their bloodline back to Cain, who was not Adam’s son and not fully human, hence their copper-based blue blood. But for regular red blooded people with a heart and the ability to empathize and create, all this means is that newly discovered properties of the electric field are fundamentally changing our view of the physical world. In the enhanced view of potential vortex, the physical comprehension becomes ever more objective, as the Meyl theory explains not only interactions but temperature, which to date is inexplicable via conventional theories. Electric Scalar Waves and Magnetic Scalar Waves Since Maxwell’s four equations of 1861 describing electromagnetic waves, we have understood and used them but not until Heinrich Hertz demonstrated their existence in 1887. These electromagnetic waves were predicted by Maxwell 26 years earlier, and were not demonstrated by Hertz until after Maxwell’s death in 1879, albeit using the truncated version called the Maxwell-Heaviside equation of 1874. Oliver Heaviside was an English self-taught electrical engineer, mathematician, and physicist. 
It was not known then, and is still not accepted by mainstream electromagnetic engineers, that electromagnetic waves comprise electric scalar waves and magnetic scalar waves within the recognized electromagnetic waves. This is partly because of the truncation of the equations, and because the third equation sets the magnetic monopole to zero. These days mobile phones are, by default, transmitting that part of Maxwell's third equation, Gauss's law for magnetism, which is set to 0 (and should not be). Once it is recognized, understood and used intentionally that the magnetic monopole is not zero, our technology will take another quantum leap. Meyl derives the extended Maxwell equations from his unified field theory, as contained in the soon-to-be-translated book "From Objectivity to the Theory of Everything", or Unified Field Theory, published in German. His intention is to write papers on it soon so that it can be peer-reviewed. His equation says there are two sides of the coin, the electric and the magnetic, and one changes into the other whenever there is movement, e.g. us around the Sun, the Sun around the centre of the galaxy, and so on. If there is an electric field it will influence the speed of light, and if there is movement we get a magnetic field from the electric field, and vice versa. We cannot measure the aether wind because we are moving with it. The aether is the field, an electric and magnetic field, and is the cause of the known value of the speed of light; but setting the third Maxwell structure-forming particle potential, the magnetic monopole, to zero was the error, which is why physicists could not explain the particles. Paul Dirac was the first to understand that the magnetic monopole has to be there to explain the observations, and the Helmholtz Society found it in 2009.
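For reference, the standard vector-calculus fact behind Gauss's law for magnetism is that any field written as a curl is automatically divergence-free, which is how the conventional Maxwell framework encodes the absence of magnetic monopoles. A short symbolic verification of that identity:

```python
import sympy as sp

# If B = curl(A), then div(B) = 0 identically (mixed partials cancel).
x, y, z = sp.symbols('x y z')
Ax, Ay, Az = [sp.Function(n)(x, y, z) for n in ('Ax', 'Ay', 'Az')]

# B = curl A, component by component
Bx = sp.diff(Az, y) - sp.diff(Ay, z)
By = sp.diff(Ax, z) - sp.diff(Az, x)
Bz = sp.diff(Ay, x) - sp.diff(Ax, y)

div_B = sp.diff(Bx, x) + sp.diff(By, y) + sp.diff(Bz, z)
print(sp.simplify(div_B) == 0)   # True
```

Any monopole proposal therefore has to abandon the vector-potential representation of B, not merely flip a term in the equations.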
Quantum physicists like Max Planck experienced these quantum effects, as he called them, and tried to explain all field effects by quantum effects; for gravitational effects, for instance, they created gravitons. In other words, they postulated what they wanted to explain. Now they are in the process of postulating, from non-derived postulations such as electrons, positrons and protons, further postulations such as quarks, muons and more.

"The Big Bang is a Big Bluff" – Konstantin Meyl

How Matter is Produced

If the Schrödinger equation is derived from Meyl's extended field theory, then all particles which the Schrödinger equation describes have to be vortex structures, according to his objectivity theory. The fields of the vortex balls run around one point, the centre of the ball, which is still, i.e. where the speed of light is zero; the field lines all go to the centre, which means the field is infinite there. Such balls become stable matter. Electromagnetic waves do not have this property, but all matter consists of vortex balls, according to Meyl's theory.

Scalar Torsion Energy Emanates from a Singularity at the Centers of Galaxies

Wandering black hole found by NASA's Chandra X-ray Observatory and ESA's XMM-Newton X-ray observatory. Image credit: Flickr/Creative Commons/NASA

The universe is a web of singularities ("black holes" and "white holes") from which faster-than-light scalar torsion energy flows. This energy flows throughout the universe, from the galactic scale to the subatomic, spiraling and branching (fractaling) as it goes, following toroidal (doughnut-shaped) flows. [5] Other names for scalar torsion fields are: chi, prana, tachyon energy, zero-point energy, life force, torsion fields/waves, longitudinal waves, neutrino power and more.

Dr. Masaru Emoto's book cover shows the hexagonal structure which torsion waves are said to create in frozen water crystals.
At Sound Energy Research, scientists created torsion-field imprints in distilled water using scalar torsion wave technologies. The result is structured water called scalar wave–structured water™. They provided samples to Dr. Masaru Emoto, who froze them and studied the crystals, which formed hexagonal structures like those created by human consciousness. The scalar torsion technology creates the same effects as the mental intent that has been captured and frozen by Dr. Emoto. The inference to be drawn is that scalar torsion waves, albeit bereft of any electromagnetic properties or mass, are "carrier waves" of consciousness via the scalar torsion field. [6]

[4] Watch Nassim Haramein – Crossing the Event Horizon DVD
TY - JOUR AB - We consider the low-density limit of a Fermi gas in the BCS approximation. We show that if the interaction potential allows for a two-particle bound state, the system at zero temperature is well approximated by the Gross-Pitaevskii functional, describing a Bose-Einstein condensate of fermion pairs. AU - Hainzl, Christian AU - Robert Seiringer ID - 2397 IS - 2 JF - Letters in Mathematical Physics TI - Low density limit of BCS theory and Bose-Einstein condensation of Fermion pairs VL - 100 ER - TY - GEN AB - We extend the mathematical theory of quantum hypothesis testing to the general W*-algebraic setting and explore its relation with recent developments in non-equilibrium quantum statistical mechanics. In particular, we relate the large deviation principle for the full counting statistics of entropy flow to quantum hypothesis testing of the arrow of time. AU - Jakšić, Vojkan AU - Ogata, Yoshiko AU - Pillet, Claude A AU - Robert Seiringer ID - 2398 IS - 6 T2 - Reviews in Mathematical Physics TI - Quantum hypothesis testing and non-equilibrium statistical mechanics VL - 24 ER - TY - CHAP AB - Bose–Einstein condensation (BEC) in cold atomic gases was first achieved experimentally in 1995 [1, 6]. After initial failed attempts with spin-polarized atomic hydrogen, the first successful demonstrations of this phenomenon used gases of rubidium and sodium atoms, respectively. Since then there has been a surge of activity in this field, with ingenious experiments putting forth more and more astonishing results about the behavior of matter at very cold temperatures. AU - Robert Seiringer ED - Rivasseau, Vincent ED - Robert Seiringer ED - Solovej, Jan P ED - Spencer, Thomas ID - 2399 T2 - Quantum Many Body Systems TI - Cold quantum gases and bose einstein condensation VL - 2051 ER - TY - JOUR AB - We investigate the frequency of positive squareful numbers x, y, z≤B for which x+y=z and present a conjecture concerning its asymptotic behavior. 
AU - Timothy Browning AU - Valckenborgh, K Van ID - 240 IS - 2 JF - Experimental Mathematics TI - Sums of three squareful numbers VL - 21 ER - TY - JOUR AB - If the polaron coupling constant α is large enough, bipolarons or multi-polarons will form. When passing through the critical αc from above, does the radius of the system simply get arbitrarily large or does it reach a maximum and then explode? We prove that it is always the latter. We also prove the analogous statement for the Pekar-Tomasevich (PT) approximation to the energy, in which case there is a solution to the PT equation at αc. Similarly, we show that the same phenomenon occurs for atoms, e.g., helium, at the critical value of the nuclear charge. Our proofs rely only on energy estimates, not on a detailed analysis of the Schrödinger equation, and are very general. They use the fact that the Coulomb repulsion decays like 1/r, while 'uncertainty principle' localization energies decay more rapidly, as 1/r². AU - Frank, Rupert L AU - Lieb, Elliott H AU - Robert Seiringer ID - 2400 IS - 2 JF - Communications in Mathematical Physics TI - Binding of polarons and atoms at threshold VL - 313 ER - TY - JOUR AB - We find further implications of the BMV conjecture, which states that for hermitian matrices B≥0 and A, the function λ ↦ Tr exp(A - λB) is the Laplace transform of a positive measure supported on [0,∞]. AU - Lieb, Elliott H AU - Robert Seiringer ID - 2401 IS - 1 JF - Journal of Statistical Physics TI - Further implications of the Bessis-Moussa-Villani conjecture VL - 149 ER - TY - JOUR AB - We consider a model of quantum-mechanical particles interacting via point interactions of infinite scattering length. In the case of fermions we prove a Lieb-Thirring inequality for the energy, i.e., we show that the energy is bounded from below by a constant times the integral of the particle density to the power.
AU - Frank, Rupert L AU - Robert Seiringer ID - 2402 IS - 9 JF - Journal of Mathematical Physics TI - Lieb-Thirring inequality for a model of particles with point interactions VL - 53 ER - TY - JOUR AB - We study the effects of random scatterers on the ground state of the one-dimensional Lieb-Liniger model of interacting bosons on the unit interval in the Gross-Pitaevskii regime. We prove that Bose-Einstein condensation survives even a strong random potential with a high density of scatterers. The character of the wavefunction of the condensate, however, depends in an essential way on the interplay between randomness and the strength of the two-body interaction. For low density of scatterers and strong interactions the wavefunction extends over the whole interval. A high density of scatterers and weak interactions, on the other hand, lead to localization of the wavefunction in a fragmented subset of the interval. AU - Robert Seiringer AU - Yngvason, Jakob AU - Zagrebnov, Valentin A ID - 2403 IS - 11 JF - Journal of Statistical Mechanics Theory and Experiment TI - Disordered Bose-Einstein condensates with interaction in one dimension VL - 2012 ER - TY - JOUR AB - The representation of integral binary forms as sums of two squares is discussed and applied to establish the Manin conjecture for certain Châtelet surfaces over ℚ. AU - de la Bretèche, Régis AU - Timothy Browning ID - 241 IS - 2 JF - Israel Journal of Mathematics TI - Binary forms as sums of two squares and Châtelet surfaces VL - 191 ER - TY - JOUR AB - The kingdom of fungi provides model organisms for biotechnology, cell biology, genetics, and life sciences in general. Only when their phylogenetic relationships are stably resolved, can individual results from fungal research be integrated into a holistic picture of biology. However, and despite recent progress, many deep relationships within the fungi remain unclear. 
Here, we present the first phylogenomic study of an entire eukaryotic kingdom that uses a consistency criterion to strengthen phylogenetic conclusions. We reason that branches (splits) recovered with independent data and different tree reconstruction methods are likely to reflect true evolutionary relationships. Two complementary phylogenomic data sets based on 99 fungal genomes and 109 fungal expressed sequence tag (EST) sets analyzed with four different tree reconstruction methods shed light from different angles on the fungal tree of life. Eleven additional data sets address specifically the phylogenetic position of Blastocladiomycota, Ustilaginomycotina, and Dothideomycetes, respectively. The combined evidence from the resulting trees supports the deep-level stability of the fungal groups toward a comprehensive natural system of the fungi. In addition, our analysis reveals methodologically interesting aspects. Enrichment for EST-encoded data, a common practice in phylogenomic analyses, introduces a strong bias toward slowly evolving and functionally correlated genes. Consequently, the generalization of phylogenomic data sets as collections of randomly selected genes cannot be taken for granted. A thorough characterization of the data to assess possible influences on the tree reconstruction should therefore become a standard in phylogenomic analyses.
AU - Ebersberger, Ingo
AU - De Matos Simoes, Ricardo
AU - Kupczok, Anne
AU - Gube, Matthias
AU - Kothe, Erika
AU - Voigt, Kerstin
AU - Von Haeseler, Arndt
ID - 2411
IS - 5
JF - Molecular Biology and Evolution
TI - A consistent phylogenetic backbone for the fungi
VL - 29
ER -

TY - JOUR
AB - We investigate the first and second moments of shifted convolutions of the generalized divisor function d_3(n).
AU - Baier, Stephan
AU - Timothy Browning
AU - Marasingha, Gihan
AU - Zhao, Liangyi
ID - 242
IS - 3
JF - Proceedings of the Edinburgh Mathematical Society
TI - Averages of shifted convolutions of d3(n)
VL - 55
ER -

TY - JOUR
AB - Let P(t) ∈ ℚ[t] be an irreducible quadratic polynomial and suppose that K is a quartic extension of ℚ containing the roots of P(t). Let N_{K/ℚ}(X) be a full norm form for the extension K/ℚ. We show that the variety P(t) = N_{K/ℚ}(X) ≠ 0 satisfies the Hasse principle and weak approximation. The proof uses analytic methods.
AU - Timothy Browning
AU - Heath-Brown, Roger
ID - 243
IS - 5
JF - Geometric and Functional Analysis
TI - Quadratic polynomials represented by norm forms
VL - 22
ER -

TY - JOUR
AB - The colored Tverberg theorem asserts that for every d and r there exists t = t(d, r) such that for every set C ⊂ ℝ^d of cardinality (d+1)t, partitioned into t-point subsets C_1, C_2, ..., C_{d+1} (which we think of as color classes; e.g., the points of C_1 are red, the points of C_2 blue, etc.), there exist r disjoint sets R_1, R_2, ..., R_r ⊆ C that are rainbow, meaning that |R_i ∩ C_j| ≤ 1 for every i, j, and whose convex hulls all have a common point. All known proofs of this theorem are topological. We present a geometric version of a recent beautiful proof by Blagojević, Matschke, and Ziegler, avoiding a direct use of topological methods. The purpose of this de-topologization is to make the proof more concrete and intuitive, and accessible to a wider audience.
AU - Matoušek, Jiří
AU - Martin Tancer
AU - Uli Wagner
ID - 2438
IS - 2
JF - Discrete & Computational Geometry
TI - A geometric proof of the colored Tverberg theorem
VL - 47
ER -

TY - JOUR
AB - A Monte Carlo approximation algorithm for the Tukey depth problem in high dimensions is introduced. The algorithm is a generalization of an algorithm presented by Rousseeuw and Struyf (1998). The performance of this algorithm is studied both analytically and experimentally.
AU - Chen, Dan
AU - Morin, Pat
AU - Uli Wagner
ID - 2439
IS - 5
JF - Computational Geometry: Theory and Applications
TI - Absolute approximation of Tukey depth: Theory and experiments
VL - 46
ER -

TY - JOUR
AB - We investigate the solubility of the congruence xy ≡ 1 (mod p), where p is a prime and x, y are restricted to lie in suitable short intervals. Our work relies on a mean value theorem for incomplete Kloosterman sums.
AU - Timothy Browning
AU - Haynes, Alan K
ID - 244
IS - 2
JF - International Journal of Number Theory
TI - Incomplete Kloosterman sums and multiplicative inverses in short intervals
VL - 9
ER -

TY - CONF
AB - We present an algorithm for computing [X, Y], i.e., all homotopy classes of continuous maps X → Y, where X, Y are topological spaces given as finite simplicial complexes, Y is (d−1)-connected for some d ≥ 2 (for example, Y can be the d-dimensional sphere S^d), and dim X ≤ 2d−2. These conditions on X, Y guarantee that [X, Y] has a natural structure of a finitely generated Abelian group, and the algorithm finds generators and relations for it. We combine several tools and ideas from homotopy theory (such as Postnikov systems, simplicial sets, and obstruction theory) with algorithmic tools from effective algebraic topology (objects with effective homology). We hope that a further extension of the methods developed here will yield an algorithm for computing, in some cases of interest, the ℤ_2-index, which is a quantity playing a prominent role in Borsuk-Ulam style applications of topology in combinatorics and geometry, e.g., in topological lower bounds for the chromatic number of a graph. In a certain range of dimensions, deciding the embeddability of a simplicial complex into ℝ^d also amounts to a ℤ_2-index computation. This is the main motivation of our work.
We believe that investigating the computational complexity of questions in homotopy theory and similar areas presents a fascinating research area, and we hope that our work may help bridge the cultural gap between algebraic topology and theoretical computer science.
AU - Čadek, Martin
AU - Marek Krcál
AU - Matoušek, Jiří
AU - Sergeraert, Francis
AU - Vokřínek, Lukáš
AU - Uli Wagner
ID - 2440
TI - Computing all maps into a sphere
ER -

TY - CONF
AB - Eigenvalues associated to graphs are a well-studied subject. In particular, the spectra of the adjacency matrix and of the Laplacian of random graphs G(n, p) are known quite precisely. We consider generalizations of these matrices to simplicial complexes of higher dimensions and study their eigenvalues for the Linial-Meshulam model X_k(n, p) of random k-dimensional simplicial complexes on n vertices. We show that for p = Ω(log n/n), the eigenvalues of both the higher-dimensional adjacency matrix and the Laplacian are a.a.s. sharply concentrated around two values. In the second part of the paper, we discuss a possible higher-dimensional analogue of the Discrete Cheeger Inequality. This fundamental inequality expresses a close relationship between the eigenvalues of a graph and its combinatorial expansion properties; in particular, spectral expansion (a large eigenvalue gap) implies edge expansion. Recently, a higher-dimensional analogue of edge expansion for simplicial complexes was introduced by Gromov, and independently by Linial, Meshulam and Wallach and by Newman and Rabinovich. It is natural to ask whether there is a higher-dimensional version of Cheeger's inequality. We show that the most straightforward version of a higher-dimensional Cheeger inequality fails: for every k > 1, there is an infinite family of k-dimensional complexes that are spectrally expanding (there is a large eigenvalue gap for the Laplacian) but not combinatorially expanding.
AU - Gundert, Anna
AU - Uli Wagner
ID - 2441
TI - On Laplacians of random complexes
ER -

TY - JOUR
AB - Constitutive endocytic recycling is a crucial mechanism allowing regulation of the activity of proteins at the plasma membrane and rapid changes in their localization, as demonstrated in plants for PIN-FORMED (PIN) proteins, the auxin transporters. To identify novel molecular components of endocytic recycling, mainly exocytosis, we designed a PIN1-green fluorescent protein fluorescence imaging-based forward genetic screen for Arabidopsis thaliana mutants that showed increased intracellular accumulation of cargos in response to the trafficking inhibitor brefeldin A (BFA). We identified bex5 (for BFA-visualized exocytic trafficking defective), a novel dominant mutant carrying a missense mutation that disrupts a conserved sequence motif of the small GTPase RAS GENES FROM RAT BRAIN A1b. bex5 displays defects such as enhanced protein accumulation in abnormal BFA compartments, aberrant endosomes, and defective exocytosis and transcytosis. BEX5/RabA1b localizes to the trans-Golgi network/early endosomes (TGN/EE) and acts on distinct trafficking processes like those regulated by GTP exchange factors on ADP-ribosylation factors GNOM-LIKE1 and HOPM INTERACTOR7/BFA-VISUALIZED ENDOCYTIC TRAFFICKING DEFECTIVE1, which regulate trafficking at the Golgi apparatus and TGN/EE, respectively. Altogether, this study identifies Arabidopsis BEX5/RabA1b as a novel regulator of protein trafficking from a TGN/EE compartment to the plasma membrane.
AU - Feraru, Elena
AU - Feraru, Mugurel Ioan
AU - Asaoka, Rin
AU - Paciorek, Tomasz
AU - De Rycke, Riet M
AU - Tanaka, Hirokazu
AU - Nakano, Akihiko
AU - Jirí Friml
ID - 2453
IS - 7
JF - Plant Cell
TI - BEX5/RabA1b regulates trans-Golgi network-to-plasma membrane protein trafficking in Arabidopsis
VL - 24
ER -

TY - JOUR
AB - The third EMBO Conference on Plant Molecular Biology, which focused on ‘Plant development and environmental interactions’, was held in May 2012 in Matera, Italy. Here, we review some of the topics and themes that emerged from the various contributions; namely, steering technologies, transcriptional networks and hormonal regulation, small RNAs, cell and tissue polarity, environmental control and natural variation. We intend to provide the reader who might have missed this remarkable event with a glimpse of the recent progress made in this blossoming research field.
AU - Beeckman, Tom
AU - Friml, Jirí
ID - 2456
IS - 20
JF - Development
TI - Plant developmental biologists meet on stairways in Matera
VL - 139
ER -

TY - JOUR
AB - Initiation and successive development of organs induce mechanical stresses at the cellular level. Using the tomato shoot apex, a new study now proposes that mechanical strain regulates the plasma membrane abundance of the PIN1 auxin transporter, thereby reinforcing a positive feedback loop between growth and auxin accumulation.
AU - Li, Hongjiang
AU - Friml, Jirí
AU - Grunewald, Wim
ID - 2458
IS - 16
JF - Current Biology
TI - Cell polarity: Stretching prevents developmental cramps
VL - 22
ER -

TY - JOUR
AB - Coordinated, subcellular trafficking of proteins is one of the fundamental properties of multicellular eukaryotic organisms. Trafficking involves a large diversity of compartments, pathways, cargo molecules, and vesicle-sorting events. It is also crucial in regulating the localization and, thus, the activity of various proteins, but the process is still poorly genetically defined in plants.
In the past, forward genetic screens have been used to determine the function of genes by searching for a specific morphological phenotype in an organism population in which mutations had been induced chemically or by irradiation. Unfortunately, these straightforward genetic screens turned out to be limited in identifying new regulators of intracellular protein transport, because mutations affecting essential trafficking pathways often lead to lethality. In addition, the use of these approaches has been restricted by functional redundancy among trafficking regulators. Screens for mutants that rely on the observation of changes in the cellular localization or dynamics of fluorescent subcellular markers make it possible, at least partially, to circumvent these issues. Hence, such image-based screens provide the possibility to identify either alleles with weak effects or components of the subcellular trafficking machinery that have no strong impact on plant growth.
AU - Zwiewka, Marta
AU - Friml, Jirí
ID - 2459
IS - May
JF - Frontiers in Plant Science
TI - Fluorescence imaging-based forward genetic screens to identify trafficking regulators in plants
VL - 3
ER -

TY - JOUR
AB - Interneurons are critical for neuronal circuit function, but how their dendritic morphologies and membrane properties influence information flow within neuronal circuits is largely unknown. We studied the spatiotemporal profile of synaptic integration and short-term plasticity in dendrites of mature cerebellar stellate cells by combining two-photon guided electrical stimulation, glutamate uncaging, electron microscopy, and modeling. Synaptic activation within thin (0.4 μm) dendrites produced somatic responses that became smaller and slower with increasing distance from the soma, sublinear subthreshold input-output relationships, and a somatodendritic gradient of short-term plasticity.
Unlike most studies showing that neurons employ active dendritic mechanisms, we found that passive cable properties of thin dendrites determine the sublinear integration and plasticity gradient, which both result from large dendritic depolarizations that reduce synaptic driving force. These integrative properties allow stellate cells to act as spatiotemporal filters of synaptic input patterns, thereby biasing their output in favor of sparse presynaptic activity. Stellate cells are critical sources of inhibition in the cerebellum, but how their dendrites integrate excitatory synaptic inputs is unknown. Abrahamsson et al. show that thin dendrites and passive membrane properties of stellate cells promote sublinear synaptic summation and distance-dependent short-term plasticity.
AU - Abrahamsson, Therese
AU - Cathala, Laurence
AU - Matsui, Ko
AU - Ryuichi Shigemoto
AU - DiGregorio, David A
ID - 2474
IS - 6
JF - Neuron
TI - Thin dendrites of cerebellar interneurons confer sublinear synaptic integration and a gradient of short-term plasticity
VL - 73
ER -

TY - JOUR
AB - Background: One of the best-characterized causative factors of Alzheimer's disease (AD) is the generation of amyloid-β peptide (Aβ). AD subjects are at high risk of epileptic seizures accompanied by aberrant neuronal excitability, which in itself enhances Aβ generation. However, the molecular linkage between epileptic seizures and Aβ generation in AD remains unclear. Results: X11 and X11-like (X11L) gene knockout mice suffered from epileptic seizures, along with a malfunction of hyperpolarization-activated cyclic nucleotide-gated (HCN) channels. Genetic ablation of HCN1 in mice and HCN1 channel blockade in cultured Neuro2a (N2a) cells enhanced Aβ generation. Interestingly, HCN1 levels dramatically decreased in the temporal lobe of cynomolgus monkeys (Macaca fascicularis) during aging and were significantly diminished in the temporal lobe of sporadic AD patients.
Conclusion: Because HCN1 associates with amyloid-β precursor protein (APP) and X11/X11L in the brain, genetic deficiency of X11/X11L may induce aberrant HCN1 distribution along with epilepsy. Moreover, the reduction in HCN1 levels in aged primates may contribute to augmented Aβ generation. Taken together, HCN1 is proposed to play an important role in the molecular linkage between epileptic seizures and Aβ generation, and in the aggravation of sporadic AD.
AU - Saito, Yuhki
AU - Inoue, Tsuyoshi
AU - Zhu, Gang
AU - Kimura, Naoki
AU - Okada, Motohiro
AU - Nishimura, Masaki
AU - Murayama, Shigeo
AU - Kaneko, Sunao
AU - Ryuichi Shigemoto
AU - Imoto, Keiji
AU - Suzuki, Toshiharu
ID - 2475
IS - 1
JF - Molecular Neurodegeneration
TI - Hyperpolarization-activated cyclic nucleotide gated channels: A potential molecular link between epileptic seizures and Aβ generation in Alzheimer's disease
VL - 7
ER -

TY - JOUR
AB - Recently developed pharmacogenetic and optogenetic approaches, with their own advantages and disadvantages, have become indispensable tools in modern neuroscience. Here, we employed a previously described knock-in mouse line (GABAAR γ2-77I-lox) in which the γ2 subunit of the GABA-A receptor (GABAAR) was mutated to become zolpidem-insensitive (γ2-77I) and used viral vectors to swap γ2-77I with wild-type, zolpidem-sensitive γ2 subunits (γ2-77F). The verification of unaltered density and subcellular distribution of the virally introduced γ2 subunits requires their selective labelling. For this we generated six N- and six C-terminal-tagged γ2 subunits, with which cortical cultures of GABAAR γ2-/- mice were transduced using lentiviruses. We found that the N-terminal AU1 tag resulted in excellent immunodetection and unimpaired synaptic localization. Unaltered kinetic properties of the AU1-tagged γ2 (AU1-γ2-77F) channels were demonstrated with whole-cell patch-clamp recordings of spontaneous IPSCs from cultured cells.
Next, we carried out stereotaxic injections of lenti- and adeno-associated viruses containing Cre-recombinase and the AU1-γ2-77F subunit (Cre-2A-AU1-γ2-77F) into the neocortex of GABAAR γ2-77I-lox mice. Light microscopic immunofluorescence and electron microscopic freeze-fracture replica immunogold labelling demonstrated the efficient immunodetection of the AU1 tag and the normal enrichment of the AU1-γ2-77F subunits in perisomatic GABAergic synapses. In line with this, miniature and action potential-evoked IPSCs whole-cell recorded from transduced cells had unaltered amplitudes, kinetics and restored zolpidem sensitivity. Our results, obtained with a wide range of structural and functional verification methods, reveal unaltered subcellular distributions and functional properties of γ2-77I and AU1-γ2-77F GABAARs in cortical pyramidal cells. This transgenic-viral pharmacogenetic approach has the advantage that it does not require any extrinsic protein that might endow some unforeseen alterations of the genetically modified cells. In addition, this virus-based approach opens up the possibility of modifying multiple cell types in distinct brain regions and performing alternative recombination-based intersectional genetic manipulations.
AU - Sümegi, Máté
AU - Fukazawa, Yugo
AU - Matsui, Ko
AU - Lörincz, Andrea
AU - Eyre, Mark D
AU - Nusser, Zoltán
AU - Ryuichi Shigemoto
ID - 2476
IS - 7
JF - Journal of Physiology
TI - Virus-mediated swapping of zolpidem-insensitive with zolpidem-sensitive GABA-A receptors in cortical pyramidal cells
VL - 590
ER -

TY - JOUR
AB - Dynamic activity of glia has repeatedly been demonstrated, but if such activity were independent from neuronal activity, glia would not have any role in the information processing in the brain or in the generation of animal behavior. Evidence for neurons communicating with glia is solid, but the signaling pathway leading back from glial to neuronal activity has often been difficult to study.
Here, we introduced a transgenic mouse line in which channelrhodopsin-2, a light-gated cation channel, was expressed in astrocytes. Selective photostimulation of these astrocytes in vivo triggered neuronal activation. Using slice preparations, we show that glial photostimulation leads to release of glutamate, which was sufficient to activate AMPA receptors on Purkinje cells and to induce long-term depression of parallel fiber-to-Purkinje cell synapses through activation of metabotropic glutamate receptors. In contrast to neuronal synaptic vesicular release, glial activation likely causes preferential activation of extrasynaptic receptors that appose the glial membrane. Finally, we show that neuronal activation by glial stimulation can lead to perturbation of cerebellar-modulated motor behavior. These findings demonstrate that glia can modulate the tone of neuronal activity and behavior. This animal model is expected to be a potentially powerful approach to study the role of glia in brain function.
AU - Sasaki, Takuya
AU - Beppu, Kaoru
AU - Tanaka, Kenji F
AU - Fukazawa, Yugo
AU - Ryuichi Shigemoto
AU - Matsui, Ko
ID - 2477
IS - 50
JF - PNAS
TI - Application of an optogenetic byway for perturbing neuronal activity via glial photostimulation
VL - 109
ER -

TY - JOUR
AB - Visual information must be relayed through the lateral geniculate nucleus before it reaches the visual cortex. However, not all spikes created in the retina lead to postsynaptic spikes, and properties of the retinogeniculate synapse contribute to this filtering. To understand the mechanisms underlying this filtering process, we conducted electrophysiology to assess the properties of signal transmission in the Long-Evans rat. We also performed SDS-digested freeze-fracture replica labeling to quantify the receptor and transporter distribution, as well as EM reconstruction to describe the 3D structure.
To analyze the impact of transmitter diffusion on the activity of the receptors, simulations were integrated. We identified that a large contributor to the filtering is the marked paired-pulse depression at this synapse, which is intensified by the morphological characteristics of the contacts. The broad presynaptic and postsynaptic contact area restricts transmitter diffusion two-dimensionally. Additionally, the presence of multiple closely arranged release sites invites intersynaptic spillover, which causes desensitization of AMPA receptors. The presence of AMPA receptors that recover slowly from desensitization, along with the high presynaptic release probability and multivesicular release at each synapse, also contributes to the depression. These features contrast with many other synapses where spatiotemporal spread of transmitter is limited by rapid transmitter clearance, allowing synapses to operate more independently. We propose that this micrometer-order structure can ultimately affect visual information processing.
AU - Budisantoso, Timotheus
AU - Matsui, Ko
AU - Kamasawa, Naomi
AU - Fukazawa, Yugo
AU - Ryuichi Shigemoto
ID - 2514
IS - 7
JF - Journal of Neuroscience
TI - Mechanisms underlying signal filtering at a multisynapse contact
VL - 32
ER -

TY - JOUR
AB - We investigated the temporal and spatial expression of SK2 in the developing mouse hippocampus using molecular and biochemical techniques, quantitative immunogold electron microscopy, and electrophysiology. The mRNA encoding SK2 was expressed in the developing and adult hippocampus. Western blotting and immunohistochemistry showed that SK2 protein increased with age. This was accompanied by a shift in subcellular localization. Early in development (P5), SK2 was predominantly localized to the endoplasmic reticulum in the pyramidal cell layer, but by P30 SK2 was almost exclusively expressed in the dendrites and spines.
The level of SK2 at the postsynaptic density (PSD) also increased during development. In the adult, SK2 expression on the spine plasma membrane showed a proximal-to-distal gradient. Consistent with this redistribution and gradient of SK2, the selective SK channel blocker apamin increased evoked excitatory postsynaptic potentials (EPSPs) only in CA1 pyramidal neurons from mice older than P15. However, the effect of apamin on EPSPs did not differ between synapses in proximal or distal stratum radiatum or stratum lacunosum-moleculare in the adult. These results show a developmental increase and gradient in SK2-containing channel surface expression that underlie their influence on neurotransmission, and that may contribute to increased memory acquisition during early development.
AU - Ballesteros-Merino, Carmen
AU - Lin, Michael
AU - Wu, Wendy W
AU - Ferrándiz-Huertas, Clotilde
AU - Cabañero, María José
AU - Watanabe, Masahiko
AU - Fukazawa, Yugo
AU - Ryuichi Shigemoto
AU - Maylie, James G
AU - Adelman, John P
AU - Luján, Rafael
ID - 2515
IS - 6
JF - Hippocampus
TI - Developmental profile of SK2 channel expression and function in CA1 neurons
VL - 22
ER -

TY - JOUR
AB - Left-right asymmetry of human brain function has been known for a century, although much of the molecular and cellular basis of brain laterality remains elusive. Recent studies suggest that hippocampal CA3-CA1 excitatory synapses are asymmetrically arranged; however, the functional implication of this asymmetrical circuitry has not been studied at the behavioral level. In order to address the left-right asymmetry of hippocampal function in behaving mice, we analyzed the performance of "split-brain" mice in the Barnes maze. The "split-brain" mice received ventral hippocampal commissure and corpus callosum transection in addition to deprivation of visual input from one eye. In such mice, the hippocampus on the side of visual deprivation receives sensory-driven input.
Better spatial task performance was achieved by mice forced to use the right hippocampus than by those forced to use the left hippocampus. In a two-choice spatial maze, forced usage of the left hippocampus resulted in a performance comparable to the right counterpart, suggesting that both hippocampal hemispheres are capable of conducting spatial learning. Therefore, the results obtained from the Barnes maze suggest that usage of the right hippocampus improves the accuracy of spatial memory. Performance of non-spatial yet hippocampus-dependent tasks (e.g. fear conditioning) was not influenced by the laterality of the hippocampus.
AU - Shinohara, Yoshiaki
AU - Hosoya, Aki
AU - Yamasaki, Nobuyuki
AU - Ahmed, Hassan
AU - Hattori, Satoko
AU - Eguchi, Megumi
AU - Yamaguchi, Shun
AU - Miyakawa, Tsuyoshi
AU - Hirase, Hajime
AU - Ryuichi Shigemoto
ID - 2687
IS - 2
JF - Hippocampus
TI - Right-hemispheric dominance of spatial memory in split-brain mice
VL - 22
ER -

TY - JOUR
AB - To gain insights into the structure-function relationship of excitatory synapses, we revisit our quantitative analysis of synaptic AMPAR by highly sensitive freeze-fracture replica labeling in eight different connections. All of these connections showed a linear correlation between synapse size and AMPAR number, indicating a common intra-synapse-type relationship in CNS synapses. On the contrary, the inter-synapse-type relationship is unexpected, indicating no correlation between averages of synapse size and AMPAR number. Interestingly, connections with large average synapse size and low AMPAR density showed high variability of AMPAR number and mosaic distribution within the postsynaptic membrane. We propose that these connections may quickly exhibit synaptic plasticity by modifying AMPAR density/number, whereas those with high AMPAR density change their efficacy by modifying synapse size.
AU - Fukazawa, Yugo
AU - Ryuichi Shigemoto
ID - 2688
IS - 3
JF - Current Opinion in Neurobiology
TI - Intra-synapse-type and inter-synapse-type relationships between synaptic size and AMPAR expression
VL - 22
ER -

TY - JOUR
AB - R-type calcium channels (RTCCs) are well known for their role in synaptic plasticity, but little is known about their subcellular distribution across various neuronal compartments. Using subtype-specific antibodies, we characterized the regional and subcellular localization of Cav2.3 in mice and rats at both light and electron microscopic levels. Cav2.3 immunogold particles were found to be predominantly presynaptic in the interpeduncular nucleus, but postsynaptic in other brain regions. Serial section analysis of electron microscopic images from the hippocampal CA1 revealed a higher density of immunogold particles in the dendritic shaft plasma membrane compared with the pyramidal cell somata. However, the labeling densities were not significantly different among the apical, oblique, or basal dendrites. Immunogold particles were also observed over the plasma membrane of dendritic spines, including both synaptic and extrasynaptic sites. Individual spine heads contained <20 immunogold particles, with an average density of ~260 immunoparticles per μm³ of spine head volume, in accordance with the density of RTCCs estimated using calcium imaging (Sabatini and Svoboda, 2000). The Cav2.3 density was variable among similar-sized spine heads and did not correlate with the density in the parent dendrite, implying that spines are individual calcium compartments operating autonomously from their parent dendrites.
AU - Parajuli, Laxmi K
AU - Nakajima, Chikako
AU - Kulik, Ákos
AU - Matsui, Ko
AU - Schneider, Toni
AU - Ryuichi Shigemoto
AU - Fukazawa, Yugo
ID - 2689
IS - 39
JF - Journal of Neuroscience
TI - Quantitative regional and ultrastructural localization of the Cav2.3 subunit of R-type calcium channel in mouse brain
VL - 32
ER -

TY - GEN
AU - László Erdös
ID - 2696
T2 - ArXiv
TI - Universality for random matrices and log-gases
ER -

TY - CONF
AU - László Erdös
ID - 2700
TI - Lecture notes on quantum Brownian motion
VL - 95
ER -

TY - CONF
AB - We consider Markov decision processes (MDPs) with specifications given as Büchi (liveness) objectives. We consider the problem of computing the set of almost-sure winning vertices from which the objective can be ensured with probability 1. We study for the first time the average-case complexity of the classical algorithm for computing the set of almost-sure winning vertices for MDPs with Büchi objectives. Our contributions are as follows: First, we show that for MDPs with constant out-degree the expected number of iterations is at most logarithmic and the average-case running time is linear (as compared to the worst-case linear number of iterations and quadratic time complexity). Second, for the average-case analysis over all MDPs we show that the expected number of iterations is constant and the average-case running time is linear (again as compared to the worst-case linear number of iterations and quadratic time complexity). Finally, we also show that, given that all MDPs are equally likely, the probability that the classical algorithm requires more than a constant number of iterations is exponentially small.
AU - Chatterjee, Krishnendu
AU - Joglekar, Manas
AU - Shah, Nisarg
ID - 2715
TI - Average case analysis of the classical algorithm for Markov decision processes with Büchi objectives
VL - 18
ER -

TY - JOUR
AB - Consider N × N Hermitian or symmetric random matrices H where the distribution of the (i, j) matrix element is given by a probability measure ν_ij with a subexponential decay. Let σ_ij² be the variance for the probability measure ν_ij, with the normalization property that Σ_i σ_ij² = 1 for all j. Under essentially the only condition that c ≤ Nσ_ij² ≤ c⁻¹ for some constant c > 0, we prove that, in the limit N → ∞, the eigenvalue spacing statistics of H in the bulk of the spectrum coincide with those of the Gaussian unitary or orthogonal ensemble (GUE or GOE). We also show that for band matrices with bandwidth M the local semicircle law holds to the energy scale M⁻¹.
AU - László Erdös
AU - Yau, Horng-Tzer
AU - Yin, Jun
ID - 2767
IS - 1-2
JF - Probability Theory and Related Fields
TI - Bulk universality for generalized Wigner matrices
VL - 154
ER -

TY - JOUR
AB - We consider a two-dimensional magnetic Schrödinger operator with a spatially stationary random magnetic field. We assume that the magnetic field has a positive lower bound and that it has Fourier modes on arbitrarily short scales. We prove the Wegner estimate at arbitrary energy, i.e. we show that the averaged density of states is finite throughout the whole spectrum. We also prove Anderson localization at the bottom of the spectrum.
AU - László Erdös
AU - Hasler, David G
ID - 2768
IS - 2
JF - Communications in Mathematical Physics
TI - Wegner estimate and Anderson localization for random magnetic fields
VL - 309
ER -

TY - JOUR
AB - We present a generalization of the method of the local relaxation flow to establish the universality of local spectral statistics of a broad class of large random matrices.
We show that the local distribution of the eigenvalues coincides with the local statistics of the corresponding Gaussian ensemble provided the distribution of the individual matrix element is smooth and the eigenvalues {x_j}_{j=1}^N are close to their classical locations {γ_j}_{j=1}^N determined by the limiting density of eigenvalues. Under the scaling where the typical distance between neighboring eigenvalues is of order 1/N, the necessary a priori estimate on the location of the eigenvalues requires only that E|x_j − γ_j|² ≤ N^(−1−ε) on average. This information can be obtained by well-established methods for various matrix ensembles. We demonstrate the method by proving local spectral universality for sample covariance matrices.
AU - László Erdös
AU - Schlein, Benjamin
AU - Yau, Horng-Tzer
AU - Yin, Jun
ID - 2769
IS - 1
JF - Annales de l'Institut Henri Poincaré (B) Probability and Statistics
TI - The local relaxation flow approach to universality of the local statistics for random matrices
VL - 48
ER -

TY - JOUR
AB - Consider N × N Hermitian or symmetric random matrices H with independent entries, where the distribution of the (i, j) matrix element is given by the probability measure ν_ij with zero expectation and with variance σ_ij². We assume that the variances satisfy the normalization condition Σ_i σ_ij² = 1 for all j and that there is a positive constant c such that c ≤ Nσ_ij² ≤ c⁻¹. We further assume that the probability distributions ν_ij have a uniform subexponential decay. We prove that the Stieltjes transform of the empirical eigenvalue distribution of H is given by the Wigner semicircle law uniformly up to the edges of the spectrum with an error of order (Nη)⁻¹, where η is the imaginary part of the spectral parameter in the Stieltjes transform.
There are three corollaries to this strong local semicircle law: (1) Rigidity of eigenvalues: If γj=γj,N denotes the classical location of the j-th eigenvalue under the semicircle law ordered in increasing order, then the j-th eigenvalue λj is close to γj in the sense that for some positive constants C, c P{double-struck}(∃j:|λ j-γ j|≥(logN) CloglogN[min(j,N-j+1)] -1/3N -2/3)≤ C exp[-(logN) cloglogN] for N large enough. (2) The proof of Dyson's conjecture (Dyson, 1962 [15]) which states that the time scale of the Dyson Brownian motion to reach local equilibrium is of order N -1 up to logarithmic corrections. (3) The edge universality holds in the sense that the probability distributions of the largest (and the smallest) eigenvalues of two generalized Wigner ensembles are the same in the large N limit provided that the second moments of the two ensembles are identical. AU - László Erdös AU - Yau, Horng-Tzer AU - Yin, Jun ID - 2770 IS - 3 JF - Advances in Mathematics TI - Rigidity of eigenvalues of generalized Wigner matrices VL - 229 ER - TY - JOUR AB - We consider a magnetic Schrödinger operator in two dimensions. The magnetic field is given as the sum of a large and constant magnetic field and a random magnetic field. Moreover, we allow for an additional deterministic potential as well as a magnetic field which are both periodic. We show that the spectrum of this operator is contained in broadened bands around the Landau levels and that the edges of these bands consist of pure point spectrum with exponentially decaying eigenfunctions. The proof is based on a recent Wegner estimate obtained in Erdos and Hasler (Commun. Math. Phys., preprint, arXiv:1012.5185) and a multiscale analysis. 
AU - László Erdös AU - Hasler, David G ID - 2771 IS - 5 JF - Journal of Statistical Physics TI - Anderson localization at band edges for random magnetic fields VL - 146 ER - TY - JOUR AB - We consider the semiclassical asymptotics of the sum of negative eigenvalues of the three-dimensional Pauli operator with an external potential and a self-generated magnetic field B. We also add the field energy β ∫ B 2 and we minimize over all magnetic fields. The parameter β effectively determines the strength of the field. We consider the weak field regime with βh 2 ≥ const > 0, where h is the semiclassical parameter. For smooth potentials we prove that the semiclassical asymptotics of the total energy is given by the non-magnetic Weyl term to leading order with an error bound that is smaller by a factor h 1+e{open}, i. e. the subleading term vanishes. However for potentials with a Coulomb singularity, the subleading term does not vanish due to the non-semiclassical effect of the singularity. Combined with a multiscale technique, this refined estimate is used in the companion paper (Erdo{double acute}s et al. in Scott correction for large molecules with a self-generated magnetic field, Preprint, 2011) to prove the second order Scott correction to the ground state energy of large atoms and molecules. AU - László Erdös AU - Fournais, Søren AU - Solovej, Jan P ID - 2772 IS - 4 JF - Annales Henri Poincare TI - Second order semiclassics with self generated magnetic fields VL - 13 ER - TY - JOUR AB - Recently we proved [3, 4, 6, 7, 9, 10, 11] that the eigenvalue correlation functions of a general class of random matrices converge, weakly with respect to the energy, to the corresponding ones of Gaussian matrices. Tao and Vu [15] gave a proof that for the special case of Hermitian Wigner matrices the convergence can be strengthened to vague convergence at any fixed energy in the bulk. In this article we show that this theorem is an immediate corollary of our earlier results. 
Indeed, a more general form of this theorem also follows directly from our work [2]. AU - László Erdös AU - Yau, Horng-Tzer ID - 2773 JF - Electronic Journal of Probability TI - A comment on the Wigner-Dyson-Mehta bulk universality conjecture for Wigner matrices VL - 17 ER - TY - JOUR AB - We consider a large neutral molecule with total nuclear charge Z in non-relativistic quantum mechanics with a self-generated classical electromagnetic field. To ensure stability, we assume that Zα 2 ≤ κ 0 for a sufficiently small κ 0, where α denotes the fine structure constant. We show that, in the simultaneous limit Z → ∞, α → 0 such that κ = Zα 2 is fixed, the ground state energy of the system is given by a two term expansion c 1Z 7/3 + c 2(κ) Z 2 + o(Z 2). The leading term is given by the non-magnetic Thomas-Fermi theory. Our result shows that the magnetic field affects only the second (so-called Scott) term in the expansion. AU - László Erdös AU - Fournais, Søren AU - Solovej, Jan P ID - 2774 IS - 3 JF - Communications in Mathematical Physics TI - Scott correction for large atoms and molecules in a self-generated magnetic field VL - 312 ER - TY - JOUR AB - The Wigner-Dyson-Gaudin-Mehta conjecture asserts that the local eigenvalue statistics of large random matrices exhibit universal behavior depending only on the symmetry class of the matrix ensemble. For invariant matrix models, the eigenvalue distributions are given by a log-gas with potential V and inverse temperature β = 1, 2, 4, corresponding to the orthogonal, unitary and symplectic ensembles. For β ∉ {1, 2, 4}, there is no natural random matrix ensemble behind this model, but the statistical physics interpretation of the log-gas is still valid for all β > 0. The universality conjecture for invariant ensembles asserts that the local eigenvalue statistics are independent of V. In this article, we review our recent solution to the universality conjecture for both invariant and non-invariant ensembles. 
We will also demonstrate that the local ergodicity of the Dyson Brownian motion is the intrinsic mechanism behind the universality. Furthermore, we review the solution of Dyson's conjecture on the local relaxation time of the Dyson Brownian motion. Related questions such as delocalization of eigenvectors and local version of Wigner's semicircle law will also be discussed. AU - László Erdös AU - Yau, Horng-Tzer ID - 2775 IS - 3 JF - Bulletin of the American Mathematical Society TI - Universality of local spectral statistics of random matrices VL - 49 ER - TY - JOUR AB - We consider the ensemble of adjacency matrices of Erdős-Rényi random graphs, i.e. graphs on N vertices where every edge is chosen independently and with probability p ≡ p(N). We rescale the matrix so that its bulk eigenvalues are of order one. Under the assumption pN≫N2/3 , we prove the universality of eigenvalue distributions both in the bulk and at the edge of the spectrum. More precisely, we prove (1) that the eigenvalue spacing of the Erdős-Rényi graph in the bulk of the spectrum has the same distribution as that of the Gaussian orthogonal ensemble; and (2) that the second largest eigenvalue of the Erdős-Rényi graph has the same distribution as the largest eigenvalue of the Gaussian orthogonal ensemble. As an application of our method, we prove the bulk universality of generalized Wigner matrices under the assumption that the matrix entries have at least 4 + ε moments. AU - László Erdös AU - Knowles, Antti AU - Yau, Horng-Tzer AU - Yin, Jun ID - 2776 IS - 3 JF - Communications in Mathematical Physics TI - Spectral statistics of Erdős-Rényi graphs II: Eigenvalue spacing and the extreme eigenvalues VL - 314 ER - TY - JOUR AB - We consider a large neutral molecule with total nuclear charge Z in a model with self-generated classical magnetic field and where the kinetic energy of the electrons is treated relativistically. 
To ensure stability, we assume that Zα < 2/π, where α denotes the fine structure constant. We are interested in the ground state energy in the simultaneous limit Z → ∞, α → 0 such that κ = Zα is fixed. The leading term in the energy asymptotics is independent of κ, it is given by the Thomas-Fermi energy of order Z7/3 and it is unchanged by including the self-generated magnetic field. We prove the first correction term to this energy, the so-called Scott correction of the form S(αZ)Z2. The current paper extends the result of Solovej et al. [Commun. Pure Appl. Math.LXIII, 39-118 (2010)] on the Scott correction for relativistic molecules to include a self-generated magnetic field. Furthermore, we show that the corresponding Scott correction function S, first identified by Solovej et al. [Commun. Pure Appl. Math.LXIII, 39-118 (2010)], is unchanged by including a magnetic field. We also prove new Lieb-Thirring inequalities for the relativistic kinetic energy with magnetic fields. AU - László Erdös AU - Fournais, Søren AU - Solovej, Jan P ID - 2777 IS - 9 JF - Journal of Mathematical Physics TI - Relativistic Scott correction in self-generated magnetic fields VL - 53 ER - TY - JOUR AB - We prove the bulk universality of the β-ensembles with non-convex regular analytic potentials for any β > 0. This removes the convexity assumption appeared in the earlier work [P. Bourgade, L. Erdös, and H.-T. Yau, Universality of general β-ensembles, preprint arXiv:0907.5605 (2011)]. The convexity condition enabled us to use the logarithmic Sobolev inequality to estimate events with small probability. The new idea is to introduce a "convexified measure" so that the local statistics are preserved under this convexification. 
AU - Bourgade, Paul AU - László Erdös AU - Yau, Horng-Tzer ID - 2778 IS - 9 JF - Journal of Mathematical Physics TI - Bulk universality of general β-ensembles with non-convex potential VL - 53 ER - TY - JOUR AB - We consider a two-dimensional magnetic Schrödinger operator on a square lattice with a spatially stationary random magnetic field. We prove Anderson localization near the spectral edges. We use a new approach to establish a Wegner estimate that does not rely on the monotonicity of the energy on the random parameters. AU - László Erdös AU - Hasler, David G ID - 2779 IS - 8 JF - Annales Henri Poincare TI - Wegner estimate for random magnetic Laplacian on ℤ 2 VL - 13 ER - TY - JOUR AB - When a binary fluid demixes under a slow temperature ramp, nucleation, coarsening and sedimentation of droplets lead to an oscillatory evolution of the phase-separating system. The advection of the sedimenting droplets is found to be chaotic. The flow is driven by density differences between two phases. Here, we show how image processing can be combined with particle tracking to resolve droplet size and velocity simultaneously. Droplets are used as tracer particles, and the sedimentation velocity is determined. Taking these effects into account, droplets with radii in the range of 4-40 μm are detected and tracked. Based on these data, we resolve the oscillations in the droplet size distribution that are coupled to the convective flow. AU - Lapp, Tobias AU - Rohloff, Martin AU - Vollmer, Jürgen T AU - Björn Hof ID - 2802 IS - 5 JF - Experiments in Fluids TI - Particle tracking for polydisperse sedimenting droplets in phase separation VL - 52 ER - TY - JOUR AB - Recent numerical studies suggest that in pipe and related shear flows, the region of phase space separating laminar from turbulent motion is organized by a chaotic attractor, called an edge state, which mediates the transition process. We here confirm the existence of the edge state in laboratory experiments. 
We observe that it governs the dynamics during the decay of turbulence underlining its potential relevance for turbulence control. In addition we unveil two unstable traveling wave solutions underlying the experimental flow fields. This observation corroborates earlier suggestions that unstable solutions organize turbulence and its stability border. AU - de Lózar, Alberto AU - Mellibovsky, Fernando AU - Avila, Marc AU - Björn Hof ID - 2803 IS - 21 JF - Physical Review Letters TI - Edge state in pipe flow experiments VL - 108 ER - TY - JOUR AB - The analysis of the size distribution of droplets condensing on a substrate (breath figures) is a test ground for scaling theories. Here, we show that a faithful description of these distributions must explicitly deal with the growth mechanisms of the droplets. This finding establishes a gateway connecting nucleation and growth of the smallest droplets on surfaces to gross features of the evolution of the droplet size distribution AU - Blaschke, Johannes AU - Lapp, Tobias AU - Björn Hof AU - Vollmer, Jürgen T ID - 2804 IS - 6 JF - Physical Review Letters TI - Breath figures: Nucleation, growth, coalescence, and the size distribution of droplets VL - 109 ER - TY - CONF AB - We study the problem of maximum marginal prediction (MMP) in probabilistic graphical models, a task that occurs, for example, as the Bayes optimal decision rule under a Hamming loss. MMP is typically performed as a two-stage procedure: one estimates each variable's marginal probability and then forms a prediction from the states of maximal probability. In this work we propose a simple yet effective technique for accelerating MMP when inference is sampling-based: instead of the above two-stage procedure we directly estimate the posterior probability of each decision variable. This allows us to identify the point of time when we are sufficiently certain about any individual decision. 
Whenever this is the case, we dynamically prune the variables we are confident about from the underlying factor graph. Consequently, at any time only samples of variables whose decision is still uncertain need to be created. Experiments in two prototypical scenarios, multi-label classification and image inpainting, show that adaptive sampling can drastically accelerate MMP without sacrificing prediction accuracy. AU - Lampert, Christoph ID - 2825 TI - Dynamic pruning of factor graphs for maximum marginal prediction VL - 1 ER -
Terms of Ontological Endearment 25 August 2011 Mosaic of Reality Material Witness In chapter twelve of his On Physics and Philosophy Bernard d'Espagnat tackles three kinds of materialism: dialectical materialism (briefly), “scientific” materialism, and what he calls “neomaterialism.” Ultimately… ultimate reality isn't the same as “empirical” or “epistemological” reality, something materialists just don't get. At least that's what he says, and I largely agree. Here's my summary of the chapter. Dialectical Materialism vs Bohr D'Espagnat says he's not going to do a detailed analysis of dialectical materialism. He says it's been sufficiently dismantled elsewhere. However, he warns against seeing too many parallels between Niels Bohr's approach and this form of materialism. Bohr's thought and dialectics may share some general features, but that's different from dialectical materialism. Bohr had a “human-centred” approach, which could be called materialism only if you radically changed the meaning of the word. Scientific Materialism vs Atomism D'Espagnat says “materialism” or “mechanism” doesn't automatically refer to atomism. Descartes didn't believe in atoms, and even in the 19th century ether and fields lay outside the realm of the atom. Macroman on the Street vs the Microworld The man on the street and even many scientists (particularly in the softer sciences such as biology) think of nature as composed of smaller and smaller grains or specks, eventually leading to atoms. This microworld has (roughly) the same nature as the macroscopic world we experience. The problem with that idea is that standard quantum theory and the experimental results used to test it show conclusively that atoms, particles, and the forces emanating from them just aren't like the world at large (as we experience it). This material reductionism doesn't work. 
Standard vs Non-standard Interpretations Penrose (calling himself a physicalist) adds gravitational effects to the Schrödinger equation. Sokal and Bricmont rely on de Broglie–Bohm. However, the first choice is more a research program than a fully fledged theory, and the second choice runs into some trouble with relativity. The Sokal and Bricmont approach combines corpuscles with nonlocal entities or forces that have the same strength whatever the distance. This isn't your grandmother's materialism. Empirical Reality vs Materialist Reality Standard quantum mechanics rejects both approaches. At best these materialist approaches describe some “empirical” or “epistemological” reality, a product of how our “mind structure” divides and categorizes reality. Positivism vs Materialism Some materialist apologists say quantum mechanics is a product of its times: the 1920s, when positivism (and its emphasis on observation rather than underlying reality) reigned. D'Espagnat rejects that objection. He says that whatever the origins of quantum theory, rival interpretations still need to be bolstered by evidence. Research vs Traditions of Research Michel Bitbol and Larry Laudan offer subtler challenges by examining the higher-level assumptions that scientists use. Laudan calls them “traditions of research,” which Bitbol calls “values.” They're what imparts meaning to a scientific quest. Observations vs “Ampliative” Arguments D'Espagnat acknowledges that when mainstream physicists reject de Broglie–Bohm because its concepts are unnecessarily complicated or because “action at a distance” messes with relativity they are using “ampliative” arguments. These are arguments that go beyond what the observations are telling us. After all, physicists could reject the relativity principle as long as they come up with some theory that uses other principles, but acts as if the relativity principle still works. Bohm vs Materialism However, even David Bohm rejected materialism. 
He first spoke of a wave function then later a quantum potential. Neither is localized, hardly what a conventional materialist would call real. Although Bohm found a way to explain physics without specifying consciousness, he also noted that quantum physics suggests a “mental pole” exists. Sophistication vs Atomic Materialism Adding sophistication to atomic materialism doesn’t rescue it. Rather, its “atomism” disappears and its materialism looks increasingly doubtful. Neomaterialism vs Matter A third approach to materialism comes from André Comte-Sponville. He acknowledges nonseparability, a concept that other materialists ignore. D’Espagnat calls this approach “neomaterialism.” Comte-Sponville gets himself into definitional circles trying to define “matter.” It’s supposed to be everything (but a vacuum), yet also produces the mind. However, if thoughts are real then they’d already be part of “matter.” Neutral vs Suggestive Terms D’Espagnat also criticizes Comte-Sponville for using “image-carrying words” such as “matter.” D’Espagnat notes that he himself doesn’t use “matter,” “God,” or “spirit.” Rather he tries to use neutral terms such as “mind-independent reality.” Nonseparability vs Neomaterialism Comte-Sponville says the primary question is whether matter is idealist or spiritualist on the one side, or of a physical nature similar to what we experience on the macroscopic level. He’s not an idealist or spiritualist, so he clearly believes in a physical reality. But as with scientific materialism the idea that reality bears any resemblance to our macroscopic experiences is blown out of the water by quantum physics. Nonseparability—which Comte-Sponville says is a “mystery”—is an issue whatever theory you choose. It ensures that “ultimate reality” is nothing like our everyday experiences. Utility vs Evidence Comte-Sponville eventually acknowledges that if matter includes thought then matter can’t be defined as everything except thought. 
However, he says that ultimately what the “natural sciences” say is less important than neomaterialism’s purpose: to explain mind from concepts other than mind, and to do all this to “defeat religion, superstition and illusion.” D’Espagnat says this argument about the usefulness of neomaterialism just ends up being a circular argument. Deeply held convictions are not themselves an argument. Empirical vs Ultimate Reality Ontologically interpretable theories are not consistent with experiment. D’Espagnat says particles and their attributes have a well-defined existence only in relation to knowledge, hence the mind. Our knowledge of particles and other micro-objects are just that: a kind of knowledge, hence pointing to elements of an empirical, not ultimate, reality. D’Espagnat says that he and Comte-Sponville both agree that “existence” comes before “knowledge.” But d’Espagnat says mind comes from an “independent reality” not “empirical reality.” This a materialism does not make. Convenient Ontologies vs Creeds Back to materialism in general, d’Espagnat agrees it’s a “tradition of research” as Laudan might put it. These traditions use values that neither explain nor predict. They are not testable. These research traditions may include contradictory theories under their umbrella. But some scientists attach a lot of meaning to this identity, and aren’t likely to give up on the term “materialism.” On a day-to-day basis physicists are using and abusing terms from classical physics such as “particles.” Since physicists would find it hard to move ahead just pondering observations and equations, these concepts are convenient components of a “fabricated ontology.” D’Espagnat warns these scientists that relying on this ontology to support their rationality may be useful from a practical point of view. 
Just don’t convert that choice into “an illegitimate doctrinal creed.” Knowledge of Good and Banal 10 August 2011 Knowledge of Good and Banal Philosopher’s Walk A little past the halfway point in Bernard d’Espagnat’s On Physics and Philosophy he switches from a look at the relevance of physics to philosophy to the relevance of philosophy to physics. If the first chapter of part two, chapter eleven, is any indication, the last half of the book should be a much easier read than the first, though perhaps less satisfying. It was a huge challenge to wade through d’Espagnat’s descriptions of quantum theory and interpretation, hence I felt the need to write (and post) lots of notes to help me out. At least I felt a sense of reward whenever I finally grasped something of the physics. But as far as I can tell I agree with d’Espgant’s philosophy anyway. Part two may make easier reading, but I already felt a lot of the modern philosophy, soft sciences, and cultural studies he critiques was just plain hokum. I don’t need more convincing. In any event, I will trudge on, and I expect I’ll be posting updates to my dualistic summary much more often now. Science vs Philosophy Descartes, Pascal, and Leibniz were brilliant scientists and philosophers, but by the eighteenth century a huge breach developed. Nature of Things vs Behaviour Fourier refused to speculate on the nature of heat. Instead his heat propagation equation quantitatively predicted heat’s behaviour. Intuitive vs Unintuitive Notions Specialization works when concepts such as (in Fourier’s time) “hotter” and “colder” seem obvious, so don’t need to be defined by a theory. But what’s a quantum field or space-time metrics? Then we do need to consider the nature of such concepts. Ontology vs Operationalism The physicist can give up his exclusive interest in behaviours, or can decide that “behaviour” is just a series of recorded observations. 
The first option sounds like philosophy, while the second gets close to operationalism. Physics-Aware Philosophy vs Philosophy-Aware Physics In the first part of his book d'Espagnat called on philosophers to pay attention to the physics. In this second part he calls on physicists to pay attention to the philosophy. Epistemology vs Scientific Knowledge D'Espagnat says epistemology, the philosophy of knowledge, is particularly important when considering scientific knowledge. Logical Positivists vs Modern Sceptics Epistemology forty years ago was dominated by logical positivists. Nowadays there's more diversity. Present-day epistemologists often combine extreme scepticism toward science with an “everything goes” attitude to knowledge in general. They also talk of “paradigms” in a way that suggests an underlying belief in objectivist realism. Stubborn Epistemologists vs Blasé Physicists Physics has moved far, far away from realist attitudes to experimental data. Epistemologists generally ignore two points physicists find obvious. The first is that equations, such as Maxwell's, show remarkable power and longevity, even if interpretations of those equations have changed. The second is that with the help of these equations physical science gets better and better at predicting phenomena. Paradigm Change vs Continuous Change D'Espagnat finds fault with much of contemporary epistemology, but he says Thomas Kuhn and others usefully pointed out that science doesn't always change slowly but surely. Kuhn sees a strong sociological basis in “paradigm changes”: it's easier to cast doubt on the present “received” theory than to prove its replacement. Therefore advocates of change must use more tools of persuasion than just the data. Experimental Choice vs Outcome D'Espagnat appreciates that funding and fashion might influence choice of experiment, but he strongly doubts that they affect the results of those experiments. 
Short-Term Chaos vs Long-Term Progress D'Espagnat says epistemologists probably act like historians, seeing short-term upheaval during science's most productive periods. But in the long term d'Espagnat strongly believes the “winner” theory will explain not just new facts, but the old ones the previous theory took care of. Huygens' Waves vs Newton's Corpuscles D'Espagnat acknowledges how Newton's corpuscular theory of light replaced Huygens' wave theory even though Huygens' theory explained double refraction a lot better. Does that mean explanatory power is sometimes lost as science “progresses”? D'Espagnat emphasizes that today's theory of light is quantum electrodynamics, which improves upon both Newton's and Huygens' theories. Universal Physics vs The Rest of Science D'Espagnat says epistemologists might dispute the universality of this argument. Instead of physics maybe their claims apply to some other sciences. He believes in the universality of science in principle, but he admits maybe sciences less dependent on technology may suffer a loss of craft as one theory replaces another. However, d'Espagnat still believes such a loss would be temporary. Epistemologists again confuse loss of predictive power and a (temporary) lack of interest in some field. Paradigms vs Reality Kuhn-like epistemologists are so fixed on objectivist or constructivist realism that they see a change in concepts as a radical change in physics' view of reality. As a result some epistemologists speak of the “noncumulative” nature of physics. Allegory vs Equations D'Espagnat points to the remarkable stability of equations despite changes in “wordings and outward interpretations.” A change in concepts doesn't destroy the old theory, it just generalizes it and provides a new allegorical picture. Kuhnian vs Other Viewpoints D'Espagnat notes that in the past fifty years other approaches to scientific knowledge have developed that don't rely on Kuhn. 
Professional Language vs Sloppy Thinking D’Espagnat says most professional languages help prevent misleading shifts in meaning, but philosophical language actually encourages it. If you apply critical thinking to philosophical texts you’ll often discover ambiguous meaning and mannered style replacing sound arguments. Context vs Scientific Purpose D’Espagnat says some epistemologists delve deeply into the psychology or sociology of scientific discovery, yet remain near silent about what science is really concerned with. Ideas vs Evidence Jean-Jacques Rousseau decided humans are good by nature, but forgot this was an idea of his not a piece of evidence. D’Espagnat says many philosophers of science act the same way, clinging to an idea that is ultimately just part of their dogma. Empiricists decided a priori that evidence comes from the senses, while positivists have their verification principle. Relativity vs Quantum Theory Epistemologists have started taking into account relativity theory but don’t realize how damaging quantum theory is to some of their views. Positivists vs Realists Some realist epistemologists speak of entities as having an unconditional individual existence, or naturally assume that particles travel on continuous trajectories. Realists point to positivism’s failings on philosophical grounds, but the physics points to the failings of realism. Science vs Cultural Fashion Some epistemologists think the positivism of the 1920s led to the “weak” objectivity of standard quantum theory, while today’s attitudes are friendlier towards realism. D’Espagnat calls this argument “valueless.” If it were just an issue of social psychology and today’s fashion then physicists should now have solved the quantum interpretation problem. However, as he’s already explained in detail, other quantum interpretations that make the right predictions cannot be interpreted ontologically, and vice versa. 
Language vs Thought Throughout the twentieth century many philosophers paid attention to language, thinking it had to mirror—even mould—the logic of thought. The problem is various languages have very different structures. Do we think if a group speaks a different language it thinks differently? D’Espagnat believes “language creates thought” is a Rousseau-like assumption. Aristotle came up with the concept of potentia not so he could think in a new way, but to accommodate new data. New language is convenient and helpful, but springs from a need to explain new evidence. Quantum vs Classic Logic Quantum theory muddies the distinction between concepts of objects and predicates. Some people have put forward a quantum logic to remedy that situation, but this new logic isn’t a necessary part of quantum theory. Metalogic vs Specific Rules D’Espagnat believes that the metalogic used to speak about logic is a universal logic, while specific thinking rules might apply to specific situations. He notes with approval Bohr’s “basic truth” that everyday language is the only clear means of communication that we have. Sociologism vs Science D’Espagnat condemns the idea that “anthropological situations” determine scientific results. He asks, for instance, if the Heisenberg uncertainty principle would have failed had German and Danish culture been different. He says this is sheer absurdity and calls the attitude “sociologism.” Sokal the Anti-Sociologist vs Sokal the Realist D’Espagnat applauds physicist Alan Sokal’s exposé of sociologists’ fuzzy thinking (by submitting an incoherent, jargon-filled paper to a humanities journal). However, d’Espagnat regrets how Sokal “drifted to the other extreme” by clinging to physical realism. Certainties vs The End of Certainties D’Espagnat disagrees with the phrase “the end of certainties,” which is often used to describe the loss of certain knowledge in modern times. 
He rejects this idea, whether it refers to challenges to determinism or physical realism. Predictive rules, whether of events or probabilities of events, do work. Once experimentally verified they keep on working. D’Espagnat thinks this is “certain” knowledge, though he agrees that “illusively simple” certainties may prove deceptive and short-lived.
21 The Schrödinger Equation in a Classical Context: A Seminar on Superconductivity (There was no summary for this lecture.) 21–1 Schrödinger's equation in a magnetic field This lecture is only for entertainment. I would like to give the lecture in a somewhat different style—just to see how it works out. It's not a part of the course—in the sense that it is not supposed to be a last minute effort to teach you something new. 
But, rather, I imagine that I’m giving a seminar or research report on the subject to a more advanced audience, to people who have already been educated in quantum mechanics. The main difference between a seminar and a regular lecture is that the seminar speaker does not carry out all the steps, or all the algebra. He says: “If you do such and such, this is what comes out,” instead of showing all of the details. So in this lecture I’ll describe the ideas all the way along but just give you the results of the computations. You should realize that you’re not supposed to understand everything immediately, but believe (more or less) that things would come out if you went through the steps. All that aside, this is a subject I want to talk about. It is recent and modern and would be a perfectly legitimate talk to give at a research seminar. My subject is the Schrödinger equation in a classical setting—the case of superconductivity. Ordinarily, the wave function which appears in the Schrödinger equation applies to only one or two particles. And the wave function itself is not something that has a classical meaning—unlike the electric field, or the vector potential, or things of that kind. The wave function for a single particle is a “field”—in the sense that it is a function of position—but it does not generally have a classical significance. Nevertheless, there are some situations in which a quantum mechanical wave function does have classical significance, and they are the ones I would like to take up. The peculiar quantum mechanical behavior of matter on a small scale doesn’t usually make itself felt on a large scale except in the standard way that it produces Newton’s laws—the laws of the so-called classical mechanics. But there are certain situations in which the peculiarities of quantum mechanics can come out in a special way on a large scale. 
At low temperatures, when the energy of a system has been reduced very, very low, instead of a large number of states being involved, only a very, very small number of states near the ground state are involved. Under those circumstances the quantum mechanical character of that ground state can appear on a macroscopic scale. It is the purpose of this lecture to show a connection between quantum mechanics and large-scale effects—not the usual discussion of the way that quantum mechanics reproduces Newtonian mechanics on the average, but a special situation in which quantum mechanics will produce its own characteristic effects on a large or “macroscopic” scale. I will begin by reminding you of some of the properties of the Schrödinger equation.1 I want to describe the behavior of a particle in a magnetic field using the Schrödinger equation, because the superconductive phenomena are involved with magnetic fields. An external magnetic field is described by a vector potential, and the problem is: what are the laws of quantum mechanics in a vector potential? The principle that describes the behavior of quantum mechanics in a vector potential is very simple. The amplitude that a particle goes from one place to another along a certain route when there’s a field present is the same as the amplitude that it would go along the same route when there’s no field, multiplied by the exponential of the line integral of the vector potential, times the electric charge divided by Planck’s constant2 (see Fig. 21–1): \begin{equation} \label{Eq:III:21:1} \braket{b}{a}_{\text{in $\FLPA$}}=\braket{b}{a}_{A=0}\cdot \exp\biggl[\frac{iq}{\hbar}\int_a^b\FLPA\cdot d\FLPs\biggr]. \end{equation} It is a basic statement of quantum mechanics.

Fig. 21–1. The amplitude to go from $a$ to $b$ along the path $\Gamma$ is proportional to $\exp\bigl[(iq/\hbar)\int_a^b\FigA\cdot d\Figs\bigr]$.
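As a numerical aside (not part of the lecture; the field value and loop radius below are arbitrary test inputs), the quantity in the exponent of Eq. (21.1) can be spot-checked for a closed path: for a uniform field $B$ along $z$, one valid vector potential is $A=(B/2)(-y,x,0)$, and the closed line integral of $A$ should equal the enclosed flux $B\pi R^2$.

```python
import numpy as np

# A numerical aside (field value and loop radius are arbitrary test inputs):
# for a uniform field B along z, one valid vector potential is
# A = (B/2)(-y, x, 0).  The closed line integral of A -- the quantity that
# multiplies iq/hbar in the phase factor of Eq. (21.1) for a closed path --
# equals the enclosed flux B * pi * R^2.
B = 0.3     # tesla (arbitrary)
R = 0.02    # loop radius, meters (arbitrary)

t = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = R * np.cos(t), R * np.sin(t)
Ax, Ay = -0.5 * B * y, 0.5 * B * x
dxdt, dydt = -R * np.sin(t), R * np.cos(t)     # tangent vector along the loop

# Trapezoidal rule for the circulation of A around the loop.
integrand = Ax * dxdt + Ay * dydt
circulation = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
flux = B * np.pi * R**2
print(circulation, flux)    # the two agree
```

This is the same statement used later for the solenoid: the line integral of $\FLPA$ around a loop is the flux of $\FLPB$ through the loop.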
Now without the vector potential the Schrödinger equation of a charged particle (nonrelativistic, no spin) is \begin{equation} \label{Eq:III:21:2} -\frac{\hbar}{i}\,\ddp{\psi}{t}=\Hcalop\psi= \frac{1}{2m}\biggl(\frac{\hbar}{i}\,\FLPnabla\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla\biggr)\psi+q\phi\psi, \end{equation} where $\phi$ is the electric potential so that $q\phi$ is the potential energy.3 Equation (21.1) is equivalent to the statement that in a magnetic field the gradients in the Hamiltonian are replaced in each case by the gradient minus $q\FLPA$, so that Eq. (21.2) becomes \begin{equation} \label{Eq:III:21:3} -\frac{\hbar}{i}\,\ddp{\psi}{t}=\Hcalop\psi= \frac{1}{2m}\biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+q\phi\psi. \end{equation} This is the Schrödinger equation for a particle with charge $q$ moving in an electromagnetic field $\FLPA,\phi$ (nonrelativistic, no spin). To show that this is true I’d like to illustrate by a simple example in which instead of having a continuous situation we have a line of atoms along the $x$-axis with the spacing $b$ and we have an amplitude $iK\!/\hbar$ per unit time for an electron to jump from one atom to another when there is no field.4 Now according to Eq.
(21.1) if there’s a vector potential in the $x$-direction $A_x(x,t)$, the amplitude to jump will be altered from what it was before by a factor $\exp[(iq/\hbar)\,A_xb]$, the exponent being $iq/\hbar$ times the vector potential integrated from one atom to the next. For simplicity we will write $(q/\hbar)A_x\equiv f(x)$, since $A_x$ will, in general, depend on $x$. If the amplitude to find the electron at the atom “$n$” located at $x$ is called $C(x)\equiv C_n$, then the rate of change of that amplitude is given by the following equation: \begin{align} -\frac{\hbar}{i}\ddp{}{t}C(x)=E_0C(x)&\!-\!Ke^{-ibf(x+b/2)}C(x\!+\!b)\notag\\ \label{Eq:III:21:4} &-\!Ke^{+ibf(x-b/2)}C(x\!-\!b). \end{align} There are three pieces. First, there’s some energy $E_0$ if the electron is located at $x$. As usual, that gives the term $E_0C(x)$. Next, there is the term $-KC(x+b)$, which is the amplitude for the electron to have jumped backwards one step from atom “$n+1$,” located at $x+b$. However, in doing so in a vector potential, the phase of the amplitude must be shifted according to the rule in Eq. (21.1). If $A_x$ is not changing appreciably in one atomic spacing, the integral can be written as just the value of $A_x$ at the midpoint, times the spacing $b$. So $(iq/\hbar)$ times the integral is just $ibf(x+b/2)$. Since the electron is jumping backwards, I showed this phase shift with a minus sign. That gives the second piece. In the same manner there’s a certain amplitude to have jumped from the other side, but this time we need the vector potential at a distance $(b/2)$ on the other side of $x$, times the distance $b$. That gives the third piece. The sum gives the equation for the amplitude to be at $x$ in a vector potential. Now we know that if the function $C(x)$ is smooth enough (we take the long wavelength limit), and if we let the atoms get closer together, Eq. (21.4) will approach the behavior of an electron in free space.
So the next step is to expand the right-hand side of (21.4) in powers of $b$, assuming $b$ is very small. For example, if $b$ is zero the right-hand side is just $(E_0-2K)C(x)$, so in the zeroth approximation the energy is $E_0-2K$. Next come the terms in $b$. But because the two exponentials have opposite signs, only even powers of $b$ remain. So if you make a Taylor expansion of $C(x)$, of $f(x)$, and of the exponentials, and then collect the terms in $b^2$, you get \begin{align} -\frac{\hbar}{i}\,\ddp{C(x)}{t}&=E_0C(x)-2KC(x)\notag\\ \label{Eq:III:21:5} &\quad-Kb^2\{C''(x)-2if(x)C'(x)-if'(x)C(x)-f^2(x)C(x)\}. \end{align} (The “primes” mean differentiation with respect to $x$.) Now this horrible combination of things looks quite complicated. But mathematically it’s exactly the same as \begin{equation} \label{Eq:III:21:6} -\frac{\hbar}{i}\,\ddp{C(x)}{t}=(E_0-2K)C(x)-Kb^2 \biggl[\ddp{}{x}-if(x)\biggr] \biggl[\ddp{}{x}-if(x)\biggr]C(x). \end{equation} The second bracket operating on $C(x)$ gives $C'(x)$ minus $if(x)C(x)$. The first bracket operating on these two terms gives the $C''$ term and terms in the first derivative of $f(x)$ and the first derivative of $C(x)$. Now remember that the solutions for zero magnetic field5 represent a particle with an effective mass $m_{\text{eff}}$ given by \begin{equation*} Kb^2=\frac{\hbar^2}{2m_{\text{eff}}}. \end{equation*} If you then set $E_0=2K$, and put back $f(x)=(q/\hbar)A_x$, you can easily check that Eq. (21.6) is the same as the first part of Eq. (21.3).
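The collection of terms in $b^2$ can be spot-checked symbolically. A sketch (not part of the lecture): since series expansion of unknown functions is awkward, concrete but arbitrary test functions are assumed, $f(x)=ax$ and $C(x)=e^{ikx}$.

```python
import sympy as sp

# A spot check (not part of the lecture) that the expansion of the hopping
# equation (21.4) agrees with the covariant form (21.6) through order b^2.
# Concrete, arbitrary test functions are assumed: f(x) = a*x, C(x) = exp(i*k*x).
x, b, a, k, K, E0 = sp.symbols('x b a k K E0', real=True)
f = lambda s: a * s
C = lambda s: sp.exp(sp.I * k * s)

# Right-hand side of Eq. (21.4):
rhs4 = (E0 * C(x)
        - K * sp.exp(-sp.I * b * f(x + b / 2)) * C(x + b)
        - K * sp.exp(+sp.I * b * f(x - b / 2)) * C(x - b))

# Right-hand side of Eq. (21.6): (E0 - 2K) C - K b^2 (d/dx - i f)^2 C
D = lambda g: sp.diff(g, x) - sp.I * f(x) * g
rhs6 = (E0 - 2 * K) * C(x) - K * b**2 * D(D(C(x)))

delta = sp.simplify(sp.series(rhs4 - rhs6, b, 0, 3).removeO())
print(delta)  # 0: the two forms agree through order b^2
```

The difference vanishes through order $b^2$, which is all the lecture's argument uses; the neglected pieces start at $b^3$.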
(The origin of the potential energy term is well known, so I haven’t bothered to include it in this discussion.) The proposition of Eq. (21.1) that the vector potential changes all the amplitudes by the exponential factor is the same as the rule that the momentum operator, $(\hbar/i)\FLPnabla$, gets replaced by \begin{equation*} \frac{\hbar}{i}\,\FLPnabla-q\FLPA, \end{equation*} as you see in the Schrödinger equation of (21.3).

21–2 The equation of continuity for probabilities

Now I turn to a second point. An important part of the Schrödinger equation for a single particle is the idea that the probability to find the particle at a position is given by the absolute square of the wave function. It is also characteristic of the quantum mechanics that probability is conserved in a local sense. When the probability of finding the electron somewhere decreases, while the probability of the electron being elsewhere increases (keeping the total probability unchanged), something must be going on in between. In other words, the electron has a continuity in the sense that if the probability decreases at one place and builds up at another place, there must be some kind of flow between. If you put a wall, for example, in the way, it will have an influence and the probabilities will not be the same. So the conservation of probability alone is not the complete statement of the conservation law, just as the conservation of energy alone is not as deep and important as the local conservation of energy.6 If energy is disappearing, there must be a flow of energy to correspond. In the same way, we would like to find a “current” of probability such that if there is any change in the probability density (the probability of being found in a unit volume), it can be considered as coming from an inflow or an outflow due to some current.
This current would be a vector which could be interpreted this way—the $x$-component would be the net probability per second and per unit area that a particle passes in the $x$-direction across a plane parallel to the $yz$-plane. Passage toward $+x$ is considered a positive flow, and passage in the opposite direction, a negative flow. Is there such a current? Well, you know that the probability density $P(\FLPr,t)$ is given in terms of the wave function by \begin{equation} \label{Eq:III:21:7} P(\FLPr,t)=\psi\cconj(\FLPr,t)\psi(\FLPr,t). \end{equation} I am asking: Is there a current $\FLPJ$ such that \begin{equation} \label{Eq:III:21:8} \ddp{P}{t}=-\FLPdiv{\FLPJ}? \end{equation} If I take the time derivative of Eq. (21.7), I get two terms: \begin{equation} \label{Eq:III:21:9} \ddp{P}{t}=\psi\cconj\,\ddp{\psi}{t}+\psi\,\ddp{\psi\cconj}{t}. \end{equation} Now use the Schrödinger equation—Eq. (21.3)—for $\ddpl{\psi}{t}$; and take the complex conjugate of it to get $\ddpl{\psi\cconj}{t}$—each $i$ gets its sign reversed. You get \begin{equation} \begin{aligned} \ddp{P}{t}&=-\frac{i}{\hbar}\biggl[\psi\cconj\,\frac{1}{2m} \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+ q\phi\psi\cconj\psi\\[.5ex] &\hphantom{{}={}}-\psi\,\frac{1}{2m} \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\cdot \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi\cconj -q\phi\psi\psi\cconj\biggr]. \end{aligned} \label{Eq:III:21:10} \end{equation} The potential terms and a lot of other stuff cancel out. And it turns out that what is left can indeed be written as a perfect divergence. The whole equation is equivalent to \begin{equation} \label{Eq:III:21:11} \ddp{P}{t}=-\FLPdiv{\biggl\{ \frac{1}{2m}\,\psi\cconj \biggl(\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi+ \frac{1}{2m}\,\psi \biggl(-\frac{\hbar}{i}\,\FLPnabla-q\FLPA\biggr)\psi\cconj \biggr\}}. \end{equation} It is really not as complicated as it seems. It is a symmetrical combination of $\psi\cconj$ times a certain operation on $\psi$, plus $\psi$ times the complex conjugate operation on $\psi\cconj$. It is some quantity plus its own complex conjugate, so the whole thing is real—as it ought to be. The operation can be remembered this way: it is just the momentum operator $\Pcalvecop$ minus $q\FLPA$. I could write the current in Eq. (21.8) as \begin{equation} \label{Eq:III:21:12} \FLPJ=\frac{1}{2}\biggl\{ \psi\cconj\biggl[\frac{\Pcalvecop-q\FLPA}{m}\biggr]\psi+ \psi\biggl[\frac{\Pcalvecop-q\FLPA}{m}\biggr]\cconj\psi\cconj \biggr\}. \end{equation} There is then a current $\FLPJ$ which completes Eq. (21.8). Equation (21.11) shows that the probability is conserved locally. If a particle disappears from one region it cannot appear in another without something going on in between. Imagine that the first region is surrounded by a closed surface far enough out that there is zero probability to find the electron at the surface. The total probability to find the electron somewhere inside the surface is the volume integral of $P$.
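As an aside, the cancellation claimed above ("the potential terms and a lot of other stuff cancel out") can be verified symbolically. A sketch (not from the lecture), in one dimension with $A=0$: writing $\psi=u+iv$ with $u$, $v$ real, the Schrödinger equation splits into two real equations, and the resulting $P$ and $J$ satisfy the continuity equation exactly.

```python
import sympy as sp

# A sketch (not from the lecture) verifying the cancellation symbolically in
# one dimension with A = 0.  Writing psi = u + i*v with u, v real, the
# Schrodinger equation (21.2) splits into
#   u_t = -(hbar/2m) v_xx + (q*phi/hbar) v,
#   v_t = +(hbar/2m) u_xx - (q*phi/hbar) u,
# and with P = u^2 + v^2 and J = (hbar/m)(u v_x - v u_x), the 1-D form of
# Eq. (21.12), one finds P_t + dJ/dx = 0, which is Eq. (21.8).
x, t, hbar, m, q = sp.symbols('x t hbar m q', positive=True)
u = sp.Function('u')(x, t)
v = sp.Function('v')(x, t)
phi = sp.Function('phi')(x)

u_t = -hbar / (2 * m) * sp.diff(v, x, 2) + q * phi / hbar * v
v_t = +hbar / (2 * m) * sp.diff(u, x, 2) - q * phi / hbar * u

P_t = 2 * u * u_t + 2 * v * v_t        # d(u^2 + v^2)/dt via the equations above
J = hbar / m * (u * sp.diff(v, x) - v * sp.diff(u, x))

residual = sp.simplify(P_t + sp.diff(J, x))
print(residual)  # 0: probability is conserved locally
```

The potential $\phi$ drops out of the residual identically, which is the one-dimensional version of the cancellation in Eq. (21.10).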
But according to Gauss’s theorem the volume integral of the divergence of $\FLPJ$ is equal to the surface integral of its normal component. If $\psi$ is zero at the surface, Eq. (21.12) says that $\FLPJ$ is zero, so the total probability to find the particle inside can’t change. Only if some of the probability approaches the boundary can some of it leak out. We can say that it only gets out by moving through the surface—and that is local conservation.

21–3 Two kinds of momentum

The equation for the current is rather interesting, and sometimes causes a certain amount of worry. You would think the current would be something like the density of particles times the velocity. The density should be something like $\psi\psi\cconj$, which is o.k. And each term in Eq. (21.12) looks like the typical form for the average-value of the operator \begin{equation} \label{Eq:III:21:13} \frac{\Pcalvecop-q\FLPA}{m}, \end{equation} so maybe we should think of it as the velocity of flow. It looks as though we have two suggestions for relations of velocity to momentum, because we would also think that momentum divided by mass, $\Pcalvecop/m$, should be a velocity. The two possibilities differ by the vector potential. It happens that these two possibilities were also discovered in classical physics, when it was found that momentum could be defined in two ways.7 One of them is called “kinematic momentum,” but for absolute clarity I will in this lecture call it the “$mv$-momentum.” This is the momentum obtained by multiplying mass by velocity. The other is a more mathematical, more abstract momentum, sometimes called the “dynamical momentum,” which I’ll call “$p$-momentum.” The two possibilities are \begin{equation} \label{Eq:III:21:14} \text{$mv$-momentum}=m\FLPv, \end{equation} \begin{equation} \label{Eq:III:21:15} \text{$p$-momentum}=m\FLPv + q\FLPA.
\end{equation} It turns out that in quantum mechanics with magnetic fields it is the $p$-momentum which is connected to the gradient operator $\Pcalvecop$, so it follows that (21.13) is the operator of a velocity. I’d like to make a brief digression to show you what this is all about—why there must be something like Eq. (21.15) in the quantum mechanics. The wave function changes with time according to the Schrödinger equation in Eq. (21.3). If I would suddenly change the vector potential, the wave function wouldn’t change at the first instant; only its rate of change changes. Now think of what would happen in the following circumstance. Suppose I have a long solenoid, in which I can produce a flux of magnetic field ($\FLPB$-field), as shown in Fig. 21–2. And there is a charged particle sitting nearby. Suppose this flux nearly instantaneously builds up from zero to something. I start with zero vector potential and then I turn on a vector potential. That means that I produce suddenly a circumferential vector potential $\FLPA$. You’ll remember that the line integral of $\FLPA$ around a loop is the same as the flux of $\FLPB$ through the loop.8 Now what happens if I suddenly turn on a vector potential? According to the quantum mechanical equation the sudden change of $\FLPA$ does not make a sudden change of $\psi$; the wave function is still the same. So the gradient is also unchanged.

Fig. 21–2. The electric field outside a solenoid with an increasing current.

But remember what happens electrically when I suddenly turn on a flux. During the short time that the flux is rising, there’s an electric field generated whose line integral is the rate of change of the flux with time: \begin{equation} \label{Eq:III:21:16} \FLPE=-\ddp{\FLPA}{t}. \end{equation} That electric field is enormous if the flux is changing rapidly, and it gives a force on the particle.
The force is the charge times the electric field, and so during the build up of the flux the particle obtains a total impulse (that is, a change in $m\FLPv$) equal to $-q\FLPA$. In other words, if you suddenly turn on a vector potential at a charge, this charge immediately picks up an $mv$-momentum equal to $-q\FLPA$. But there is something that isn’t changed immediately and that’s the difference between $m\FLPv$ and $-q\FLPA$. And so the sum $\FLPp=m\FLPv+q\FLPA$ is something which is not changed when you make a sudden change in the vector potential. This quantity $\FLPp$ is what we have called the $p$-momentum and is of importance in classical mechanics in the theory of dynamics, but it also has a direct significance in quantum mechanics. It depends on the character of the wave function, and it is the one to be identified with the operator \begin{equation*} \Pcalvecop=\frac{\hbar}{i}\,\FLPnabla. \end{equation*}

21–4 The meaning of the wave function

When Schrödinger first discovered his equation he discovered the conservation law of Eq. (21.8) as a consequence of his equation. But he imagined incorrectly that $P$ was the electric charge density of the electron and that $\FLPJ$ was the electric current density, so he thought that the electrons interacted with the electromagnetic field through these charges and currents. When he solved his equations for the hydrogen atom and calculated $\psi$, he wasn’t calculating the probability of anything—there were no amplitudes at that time—the interpretation was completely different. The atomic nucleus was stationary but there were currents moving around; the charges $P$ and currents $\FLPJ$ would generate electromagnetic fields and the thing would radiate light. He soon found on doing a number of problems that it didn’t work out quite right. It was at this point that Born made an essential contribution to our ideas regarding quantum mechanics.
It was Born who correctly (as far as we know) interpreted the $\psi$ of the Schrödinger equation in terms of a probability amplitude—that very difficult idea that the square of the amplitude is not the charge density but is only the probability per unit volume of finding an electron there, and that when you do find the electron some place the entire charge is there. That whole idea is due to Born. The wave function $\psi(\FLPr)$ for an electron in an atom does not, then, describe a smeared-out electron with a smooth charge density. The electron is either here, or there, or somewhere else, but wherever it is, it is a point charge. On the other hand, think of a situation in which there are an enormous number of particles in exactly the same state, a very large number of them with exactly the same wave function. Then what? One of them is here and one of them is there, and the probability of finding any one of them at a given place is proportional to $\psi\psi\cconj$. But since there are so many particles, if I look in any volume $dx\,dy\,dz$ I will generally find a number close to $\psi\psi\cconj\,dx\,dy\,dz$. So in a situation in which $\psi$ is the wave function for each of an enormous number of particles which are all in the same state, $\psi\psi\cconj$ can be interpreted as the density of particles. If, under these circumstances, each particle carries the same charge $q$, we can, in fact, go further and interpret $\psi\cconj\psi$ as the density of electricity. Normally, $\psi\psi\cconj$ is given the dimensions of a probability density, so to give $\psi\psi\cconj$ the dimensions of a charge density, $\psi$ should be multiplied by $\sqrt{q}$. For our present purposes we can put this constant factor into $\psi$, and take $\psi\psi\cconj$ itself as the electric charge density. With this understanding, $\FLPJ$ (the current of probability I have calculated) becomes directly the electric current density.
So in the situation in which we can have very many particles in exactly the same state, there is possible a new physical interpretation of the wave functions. The charge density and the electric current can be calculated directly from the wave functions and the wave functions take on a physical meaning which extends into classical, macroscopic situations. Something similar can happen with neutral particles. When we have the wave function of a single photon, it is the amplitude to find a photon somewhere. Although we haven’t ever written it down there is an equation for the photon wave function analogous to the Schrödinger equation for the electron. The photon equation is just the same as Maxwell’s equations for the electromagnetic field, and the wave function is the same as the vector potential $\FLPA$. The wave function turns out to be just the vector potential. The quantum physics is the same thing as the classical physics because photons are noninteracting Bose particles and many of them can be in the same state—as you know, they like to be in the same state. The moment that you have billions in the same state (that is, in the same electromagnetic wave), you can measure the wave function, which is the vector potential, directly. Of course, it worked historically the other way. The first observations were on situations with many photons in the same state, and so we were able to discover the correct equation for a single photon by observing directly with our hands on a macroscopic level the nature of the wave function. Now the trouble with the electron is that you cannot put more than one in the same state. Therefore, it was long believed that the wave function of the Schrödinger equation would never have a macroscopic representation analogous to the macroscopic representation of the amplitude for photons. On the other hand, it is now realized that the phenomenon of superconductivity presents us with just this situation. 
21–5 Superconductivity

As you know, very many metals become superconducting below a certain temperature9—the temperature is different for different metals. When you reduce the temperature sufficiently the metals conduct electricity without any resistance. This phenomenon has been observed for a very large number of metals but not for all, and the theory of this phenomenon has caused a great deal of difficulty. It took a very long time to understand what was going on inside of superconductors, and I will only describe enough of it for our present purposes. It turns out that due to the interactions of the electrons with the vibrations of the atoms in the lattice, there is a small net effective attraction between the electrons. The result is that the electrons form together, if I may speak very qualitatively and crudely, bound pairs. Now you know that a single electron is a Fermi particle. But a bound pair would act as a Bose particle, because if I exchange both electrons in a pair I change the sign of the wave function twice, and that means that I don’t change anything. A pair is a Bose particle. The energy of pairing—that is, the net attraction—is very, very weak. Only a tiny temperature is needed to throw the electrons apart by thermal agitation, and convert them back to “normal” electrons. But when you make the temperature sufficiently low that they have to do their very best to get into the absolutely lowest state, then they do collect in pairs. I don’t wish you to imagine that the pairs are really held together very closely like a point particle. As a matter of fact, one of the great difficulties of understanding this phenomenon originally was that that is not the way things are. The two electrons which form the pair are really spread over a considerable distance; and the mean distance between pairs is relatively smaller than the size of a single pair. Several pairs are occupying the same space at the same time.
Both the reason why electrons in a metal form pairs and an estimate of the energy given up in forming a pair have been a triumph of recent times. This fundamental point in the theory of superconductivity was first explained in the theory of Bardeen, Cooper, and Schrieffer,10 but that is not the subject of this seminar. We will accept, however, the idea that the electrons do, in some manner or other, work in pairs, that we can think of these pairs as behaving more or less like particles, and that we can therefore talk about the wave function for a “pair.” Now the Schrödinger equation for the pair will be more or less like Eq. (21.3). There will be one difference in that the charge $q$ will be twice the charge of an electron. Also, we don’t know the inertia—or effective mass—for the pair in the crystal lattice, so we don’t know what number to put in for $m$. Nor should we think that if we go to very high frequencies (or short wavelengths), this is exactly the right form, because the kinetic energy that corresponds to very rapidly varying wave functions may be so great as to break up the pairs. At finite temperatures there are always a few pairs which are broken up according to the usual Boltzmann theory. The probability that a pair is broken is proportional to $\exp(-E_{\text{pair}}/\kappa T)$. The electrons that are not bound in pairs are called “normal” electrons and will move around in the crystal in the ordinary way. I will, however, consider only the situation at essentially zero temperature—or, in any case, I will disregard the complications produced by those electrons which are not in pairs. Since electron pairs are bosons, when there are a lot of them in a given state there is an especially large amplitude for other pairs to go to the same state. So nearly all of the pairs will be locked down at the lowest energy in exactly the same state—it won’t be easy to get one of them into another state. 
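A rough numerical illustration of the Boltzmann factor mentioned above (the pairing energy below is an assumed placeholder on the order of a milli-electron-volt, not a value from the lecture): the fraction of broken pairs dies off extremely fast as the temperature drops.

```python
import math

# A rough illustration (the pairing energy below is an assumed placeholder,
# not a value from the lecture) of how fast the Boltzmann factor
# exp(-E_pair / kT) for broken pairs falls off as the temperature drops.
k_B = 1.380649e-23      # Boltzmann constant, J/K
E_pair = 1.6e-22        # assumed pairing energy ~ 1 meV, in joules

for T in (0.5, 1.0, 2.0, 5.0, 10.0):
    frac = math.exp(-E_pair / (k_B * T))
    print(f"T = {T:4.1f} K   broken-pair factor ~ {frac:.2e}")
```

This is why disregarding the unpaired "normal" electrons, as the text does, is a good approximation at essentially zero temperature.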
There’s more amplitude to go into the same state than into an unoccupied state by the famous factor $\sqrt{n}$, where $n-1$ is the occupancy of the lowest state. So we would expect all the pairs to be moving in the same state. What then will our theory look like? I’ll call $\psi$ the wave function of a pair in the lowest energy state. However, since $\psi\psi\cconj$ is going to be proportional to the charge density $\rho$, I can just as well write $\psi$ as the square root of the charge density times some phase factor: \begin{equation} \label{Eq:III:21:17} \psi(\FLPr)=\rho^{1/2}(\FLPr)e^{i\theta(\FLPr)}, \end{equation} where $\rho$ and $\theta$ are real functions of $\FLPr$. (Any complex function can, of course, be written this way.) It’s clear what we mean when we talk about the charge density, but what is the physical meaning of the phase $\theta$ of the wave function? Well, let’s see what happens if we substitute $\psi(\FLPr)$ into Eq. (21.12), and express the current density in terms of these new variables $\rho$ and $\theta$. It’s just a change of variables and I won’t go through all the algebra, but it comes out \begin{equation} \label{Eq:III:21:18} \FLPJ=\frac{\hbar}{m}\biggl( \FLPgrad{\theta}-\frac{q}{\hbar}\,\FLPA\biggr)\rho. \end{equation} Since both the current density and the charge density have a direct physical meaning for the superconducting electron gas, both $\rho$ and $\theta$ are real things. The phase is just as observable as $\rho$; it is a piece of the current density $\FLPJ$. The absolute phase is not observable, but if the gradient of the phase is known everywhere, the phase is known except for a constant. You can define the phase at one point, and then the phase everywhere is determined. Incidentally, the equation for the current can be analyzed a little nicer, when you think that the current density $\FLPJ$ is in fact the charge density times the velocity of motion of the fluid of electrons, or $\rho\FLPv$. 
Equation (21.18) is then equivalent to \begin{equation} \label{Eq:III:21:19} m\FLPv=\hbar\,\FLPgrad{\theta}-q\FLPA. \end{equation} Notice that there are two pieces in the $mv$-momentum; one is a contribution from the vector potential, and the other, a contribution from the behavior of the wave function. In other words, the quantity $\hbar\,\FLPgrad{\theta}$ is just what we have called the $p$-momentum.

21–6 The Meissner effect

Now we can describe some of the phenomena of superconductivity. First, there is no electrical resistance. There’s no resistance because all the electrons are collectively in the same state. In the ordinary flow of current you knock one electron or the other out of the regular flow, gradually deteriorating the general momentum. But here to get one electron away from what all the others are doing is very hard because of the tendency of all Bose particles to go in the same state. A current once started, just keeps on going forever. It’s also easy to understand that if you have a piece of metal in the superconducting state and turn on a magnetic field which isn’t too strong (we won’t go into the details of how strong), the magnetic field can’t penetrate the metal. If, as you build up the magnetic field, any of it were to build up inside the metal, there would be a rate of change of flux which would produce an electric field, and an electric field would immediately generate a current which, by Lenz’s law, would oppose the flux. Since all the electrons will move together, an infinitesimal electric field will generate enough current to oppose completely any applied magnetic field. So if you turn the field on after you’ve cooled a metal to the superconducting state, it will be excluded.
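Incidentally, the change of variables that was skipped between Eq. (21.12) and Eq. (21.18) can be checked symbolically. A one-dimensional sketch (not from the lecture), with $\rho$, $\theta$, and $A$ real functions of $x$:

```python
import sympy as sp

# Checking the algebra skipped in the text: substituting
# psi = sqrt(rho) * exp(i*theta) into the 1-D form of Eq. (21.12)
# reproduces Eq. (21.18), J = (hbar/m)(theta' - (q/hbar) A) rho.
x, hbar, m, q = sp.symbols('x hbar m q', positive=True)
rho = sp.Function('rho')(x)
theta = sp.Function('theta')(x)
A = sp.Function('A')(x)

psi = sp.sqrt(rho) * sp.exp(sp.I * theta)
psic = sp.sqrt(rho) * sp.exp(-sp.I * theta)   # conjugate (rho, theta real)

# J = (1/2m)[ psi* ((hbar/i) d/dx - qA) psi + psi ((-hbar/i) d/dx - qA) psi* ]
J = ((psic * ((hbar / sp.I) * sp.diff(psi, x) - q * A * psi)
      + psi * ((-hbar / sp.I) * sp.diff(psic, x) - q * A * psic)) / (2 * m))

target = hbar / m * (sp.diff(theta, x) - q * A / hbar) * rho
print(sp.simplify(sp.expand(J - target)))  # 0
```

The $\rho'$ pieces from differentiating $\sqrt{\rho}$ are imaginary and cancel between the two conjugate terms, leaving exactly the real current of Eq. (21.18).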
Even more interesting is a related phenomenon discovered experimentally by Meissner.11 If you have a piece of the metal at a high temperature (so that it is a normal conductor) and establish a magnetic field through it, and then you lower the temperature below the critical temperature (where the metal becomes a superconductor), the field is expelled. In other words, it starts up its own current—and in just the right amount to push the field out. We can see the reason for that in the equations, and I’d like to explain how. Suppose that we take a piece of superconducting material which is in one lump. Then in a steady situation of any kind the divergence of the current must be zero because there’s no place for it to go. It is convenient to choose to make the divergence of $\FLPA$ equal to zero. (I should explain why choosing this convention doesn’t mean any loss of generality, but I don’t want to take the time.) Taking the divergence of Eq. (21.18) then gives that the Laplacian of $\theta$ is equal to zero. One moment. What about the variation of $\rho$? I forgot to mention an important point. There is a background of positive charge in this metal due to the atomic ions of the lattice. If the charge density $\rho$ is uniform there is no net charge and no electric field. If there were any accumulation of electrons in one region, the charge wouldn’t be neutralized and there would be a terrific repulsion pushing the electrons apart.12 So in ordinary circumstances the charge density of the electrons in the superconductor is almost perfectly uniform—I can take $\rho$ as a constant. Now the only way that $\nabla^2\theta$ can be zero everywhere inside the lump of metal is for $\theta$ to be a constant. And that means that there is no contribution to $\FLPJ$ from $p$-momentum. Equation (21.18) then says that the current is proportional to $\rho$ times $\FLPA$.
So everywhere in a lump of superconducting material the current is necessarily proportional to the vector potential: \begin{equation} \label{Eq:III:21:20} \FLPJ=-\rho\,\frac{q}{m}\,\FLPA. \end{equation} Since $\rho$ and $q$ have the same (negative) sign, and since $\rho$ is a constant, I can set $-\rho q/m=-(\text{some positive constant})$; then \begin{equation} \label{Eq:III:21:21} \FLPJ=-(\text{some positive constant})\FLPA. \end{equation} This equation was originally proposed by London and London13 to explain the experimental observations of superconductivity—long before the quantum mechanical origin of the effect was understood. Now we can use Eq. (21.20) in the equations of electromagnetism to solve for the fields. The vector potential is related to the current density by \begin{equation} \label{Eq:III:21:22} \nabla^2\FLPA=-\frac{1}{\epsO c^2}\,\FLPJ. \end{equation} If I use Eq. (21.21) for $\FLPJ$, I have \begin{equation} \label{Eq:III:21:23} \nabla^2\FLPA=\lambda^2\FLPA, \end{equation} where $\lambda^2$ is just a new constant; \begin{equation} \label{Eq:III:21:24} \lambda^2=\rho\,\frac{q}{\epsO mc^2}. \end{equation} We can now try to solve this equation for $\FLPA$ and see what happens in detail. For example, in one dimension Eq. (21.23) has exponential solutions of the form $e^{-\lambda x}$ and $e^{+\lambda x}$. These solutions mean that the vector potential must decrease exponentially as you go from the surface into the material. (It can’t increase because there would be a blow up.) If the piece of metal is very large compared to $1/\lambda$, the field only penetrates to a thin layer at the surface—a layer about $1/\lambda$ in thickness. The entire remainder of the interior is free of field, as sketched in Fig. 21–3. This is the explanation of the Meissner effect. Fig. 21–3.(a) A superconducting cylinder in a magnetic field; (b) the magnetic field $B$ as a function of $r$. How big is the distance $1/\lambda$? 
Well, remember that $r_0$, the “electromagnetic radius” of the electron ($2.8\times10^{-13}$ cm), is given by \begin{equation*} mc^2=\frac{q_e^2}{4\pi\epsO r_0}. \end{equation*} Also, remember that $q$ in Eq. (21.24) is twice the charge of an electron, so \begin{equation*} \frac{q}{\epsO mc^2}=\frac{8\pi r_0}{q_e}. \end{equation*} Writing $\rho$ as $q_eN$, where $N$ is the number of electrons per cubic centimeter, we have \begin{equation} \label{Eq:III:21:25} \lambda^2=8\pi Nr_0. \end{equation} For a metal such as lead there are about $3\times10^{22}$ atoms per cm$^3$, so if each one contributed only one conduction electron, $1/\lambda$ would be about $2\times10^{-6}$ cm. That gives you the order of magnitude.

21–7 Flux quantization

Fig. 21–4.A ring in a magnetic field: (a) in the normal state; (b) in the superconducting state; (c) after the external field is removed.

The London equation (21.21) was proposed to account for the observed facts of superconductivity including the Meissner effect. In recent times, however, there have been some even more dramatic predictions. One prediction made by London was so peculiar that nobody paid much attention to it until recently. I will now discuss it. This time instead of taking a single lump, suppose we take a ring whose thickness is large compared to $1/\lambda$, and try to see what would happen if we started with a magnetic field through the ring, then cooled it to the superconducting state, and afterward removed the original source of $\FLPB$. The sequence of events is sketched in Fig. 21–4. In the normal state there will be a field in the body of the ring as sketched in part (a) of the figure. When the ring is made superconducting, the field is forced outside of the material (as we have just seen). There will then be some flux through the hole of the ring as sketched in part (b). If the external field is now removed, the lines of field going through the hole are “trapped” as shown in part (c).
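The order-of-magnitude estimate of $1/\lambda$ quoted above is a one-line check (a sketch in Python; numbers as in the text):

```python
import math

N = 3e22        # conduction electrons per cm^3 (one per lead atom)
r0 = 2.8e-13    # "electromagnetic radius" of the electron, cm
lam = math.sqrt(8*math.pi*N*r0)   # Eq. (21.25): lambda^2 = 8 pi N r0
print(1/lam)    # about 2.2e-6 cm, as quoted
```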
The flux $\Phi$ through the center can’t decrease because $\ddpl{\Phi}{t}$ must be equal to the line integral of $\FLPE$ around the ring, which is zero in a superconductor. As the external field is removed a super current starts flowing around the ring to keep the flux through the ring a constant. (It’s the old eddy-current idea, only with zero resistance.) These currents will, however, all flow near the surface (down to a depth $1/\lambda$), as can be shown by the same kind of analysis that I made for the solid block. These currents can keep the magnetic field out of the body of the ring, and produce the permanently trapped magnetic field as well. Now, however, there is an essential difference, and our equations predict a surprising effect. The argument I made above that $\theta$ must be a constant in a solid block does not apply for a ring, as you can see from the following arguments. Fig. 21–5.The curve $\Gamma$ inside a superconducting ring. Well inside the body of the ring the current density $\FLPJ$ is zero; so Eq. (21.18) gives \begin{equation} \label{Eq:III:21:26} \hbar\,\FLPgrad{\theta}=q\FLPA. \end{equation} Now consider what we get if we take the line integral of $\FLPA$ around a curve $\Gamma$, which goes around the ring near the center of its cross-section so that it never gets near the surface, as drawn in Fig. 21–5. From Eq. (21.26), \begin{equation} \label{Eq:III:21:27} \hbar\oint\FLPgrad{\theta}\cdot d\FLPs=q\oint\FLPA\cdot d\FLPs. \end{equation} Now you know that the line integral of $\FLPA$ around any loop is equal to the flux of $\FLPB$ through the loop \begin{equation*} \oint\FLPA\cdot d\FLPs=\Phi. \end{equation*} Equation (21.27) then becomes \begin{equation} \label{Eq:III:21:28} \oint\FLPgrad{\theta}\cdot d\FLPs=\frac{q}{\hbar}\,\Phi. \end{equation} The line integral of a gradient from one point to another (say from point $1$ to point $2$) is the difference of the values of the function at the two points. 
Namely, \begin{equation*} \int_1^2\FLPgrad{\theta}\cdot d\FLPs=\theta_2-\theta_1. \end{equation*} If we let the two end points $1$ and $2$ come together to make a closed loop you might at first think that $\theta_2$ would equal $\theta_1$, so that the integral in Eq. (21.28) would be zero. That would be true for a closed loop in a simply-connected piece of superconductor, but it is not necessarily true for a ring-shaped piece. The only physical requirement we can make is that there can be only one value of the wave function for each point. Whatever $\theta$ does as you go around the ring, when you get back to the starting point the $\theta$ you get must give the same value for the wave function \begin{equation*} \psi=\sqrt{\rho}e^{i\theta}. \end{equation*} This will happen if $\theta$ changes by $2\pi n$, where $n$ is any integer. So if we make one complete turn around the ring the left-hand side of Eq. (21.27) must be $\hbar\cdot2\pi n$. Using Eq. (21.28), I get that \begin{equation} \label{Eq:III:21:29} 2\pi n\hbar=q\Phi. \end{equation} The trapped flux must always be an integer times $2\pi\hbar/q$! If you would think of the ring as a classical object with an ideally perfect (that is, infinite) conductivity, you would think that whatever flux was initially found through it would just stay there—any amount of flux at all could be trapped. But the quantum-mechanical theory of superconductivity says that the flux can be zero, or $2\pi\hbar/q$, or $4\pi\hbar/q$, or $6\pi\hbar/q$, and so on, but no value in between. It must be a multiple of a basic quantum mechanical unit. London14 predicted that the flux trapped by a superconducting ring would be quantized and said that the possible values of the flux would be given by Eq. (21.29) with $q$ equal to the electronic charge. According to London the basic unit of flux should be $2\pi\hbar/q_e$, which is about $4\times10^{-7}$ $\text{gauss}\cdot\text{cm}^2$. 
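London’s number is easy to reproduce from the constants (a sketch in Python; $1$ weber $=10^8\text{ gauss}\cdot\text{cm}^2$):

```python
h = 6.626e-34           # Planck's constant, J*s
qe = 1.602e-19          # electron charge, C
wb_to_gauss_cm2 = 1e8   # 1 T*m^2 = 1e4 gauss x 1e4 cm^2

flux_unit = (h/qe)*wb_to_gauss_cm2   # 2*pi*hbar/q_e in gauss*cm^2
print(flux_unit)        # about 4.1e-7, London's predicted unit
```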
To visualize such a flux, think of a tiny cylinder a tenth of a millimeter in diameter; the magnetic field inside it when it contains this amount of flux is about one percent of the earth’s magnetic field. It should be possible to observe such a flux by a sensitive magnetic measurement. In 1961 such a quantized flux was looked for and found by Deaver and Fairbank15 at Stanford University and at about the same time by Doll and Näbauer16 in Germany. In the experiment of Deaver and Fairbank, a tiny cylinder of superconductor was made by electroplating a thin layer of tin on a one-centimeter length of No. 56 ($1.3\times10^{-3}$ cm diameter) copper wire. The tin becomes superconducting below $3.8^\circ$K while the copper remains a normal metal. The wire was put in a small controlled magnetic field, and the temperature reduced until the tin became superconducting. Then the external source of field was removed. You would expect this to generate a current by Lenz’s law so that the flux inside would not change. The little cylinder should now have magnetic moment proportional to the flux inside. The magnetic moment was measured by jiggling the wire up and down (like the needle on a sewing machine, but at the rate of $100$ cycles per second) inside a pair of little coils at the ends of the tin cylinder. The induced voltage in the coils was then a measure of the magnetic moment. When the experiment was done by Deaver and Fairbank, they found that the flux was quantized, but that the basic unit was only one-half as large as London had predicted. Doll and Näbauer got the same result. At first this was quite mysterious,17 but we now understand why it should be so. According to the Bardeen, Cooper, and Schrieffer theory of superconductivity, the $q$ which appears in Eq. (21.29) is the charge of a pair of electrons and so is equal to $2q_e$. 
The basic flux unit is \begin{equation} \label{Eq:III:21:30} \Phi_0=\frac{\pi\hbar}{q_e}\approx2\times10^{-7}\text{ gauss}\cdot\text{cm}^2 \end{equation} or one-half the amount predicted by London. Everything now fits together, and the measurements show the existence of the predicted purely quantum-mechanical effect on a large scale.

21–8 The dynamics of superconductivity

The Meissner effect and the flux quantization are two confirmations of our general ideas. Just for the sake of completeness I would like to show you what the complete equations of a superconducting fluid would be from this point of view—it is rather interesting. Up to this point I have only put the expression for $\psi$ into equations for charge density and current. If I put it into the complete Schrödinger equation I get equations for $\rho$ and $\theta$. It should be interesting to see what develops, because here we have a “fluid” of electron pairs with a charge density $\rho$ and a mysterious $\theta$—we can try to see what kind of equations we get for such a “fluid”! So we substitute the wave function of Eq. (21.17) into the Schrödinger equation (21.3) and remember that $\rho$ and $\theta$ are real functions of $x$, $y$, $z$, and $t$. If we separate real and imaginary parts we then obtain two equations. To write them in a shorter form I will—following Eq. (21.19)—write \begin{equation} \label{Eq:III:21:31} \frac{\hbar}{m}\,\FLPgrad{\theta}-\frac{q}{m}\,\FLPA=\FLPv. \end{equation} One of the equations I get is then \begin{equation} \label{Eq:III:21:32} \ddp{\rho}{t}=-\FLPdiv{\rho\FLPv}. \end{equation} Since $\rho\FLPv$ is just $\FLPJ$, this is the continuity equation once more. The other equation I obtain tells how $\theta$ varies; it is \begin{equation} \label{Eq:III:21:33} \hbar\,\ddp{\theta}{t}=-\frac{m}{2}\,v^2-q\phi+ \frac{\hbar^2}{2m}\biggl\{ \frac{1}{\sqrt{\rho}}\,\nabla^2(\sqrt{\rho})\biggr\}.
\end{equation} Those who are thoroughly familiar with hydrodynamics (of which I’m sure few of you are) will recognize this as the equation of motion for an electrically charged fluid if we identify $\hbar\theta$ as the “velocity potential”—except that the last term, which should be the energy of compression of the fluid, has a rather strange dependence on the density $\rho$. In any case, the equation says that the rate of change of the quantity $\hbar\theta$ is given by a kinetic energy term, $-\tfrac{1}{2}mv^2$, plus a potential energy term, $-q\phi$, with an additional term, containing the factor $\hbar^2$, which we could call a “quantum mechanical energy.” We have seen that inside a superconductor $\rho$ is kept very uniform by the electrostatic forces, so this term can almost certainly be neglected in every practical application provided we have only one superconducting region. If we have a boundary between two superconductors (or other circumstances in which the value of $\rho$ may change rapidly) this term can become important. For those who are not so familiar with the equations of hydrodynamics, I can rewrite Eq. (21.33) in a form that makes the physics more apparent by using Eq. (21.31) to express $\theta$ in terms of $\FLPv$. Taking the gradient of the whole of Eq. (21.33) and expressing $\FLPgrad{\theta}$ in terms of $\FLPA$ and $\FLPv$ by using (21.31), I get \begin{equation} \label{Eq:III:21:34} \ddp{\FLPv}{t}=\frac{q}{m}\biggl(-\FLPgrad{\phi}-\ddp{\FLPA}{t}\biggr)- \FLPv\times(\FLPcurl{\FLPv})-(\FLPv\cdot\FLPnabla)\FLPv+ \FLPgrad{\frac{\hbar^2}{2m^2} \biggl(\frac{1}{\sqrt{\rho}}\,\nabla^2\sqrt{\rho}\biggr)}. \end{equation} What does this equation mean?
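One piece of bookkeeping before answering: the step from Eq. (21.33) to Eq. (21.34) used the vector identity $\FLPgrad{\bigl(\tfrac{1}{2}v^2\bigr)}=(\FLPv\cdot\FLPnabla)\FLPv+\FLPv\times(\FLPcurl{\FLPv})$, which can be confirmed symbolically (a sketch in Python with sympy):

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, gradient

C = CoordSys3D('C')
vx, vy, vz = [sp.Function(n)(C.x, C.y, C.z) for n in ('vx', 'vy', 'vz')]
v = vx*C.i + vy*C.j + vz*C.k

# (v . grad)v, assembled component by component
adv = sum(((v.dot(gradient(comp)))*unit
           for comp, unit in zip((vx, vy, vz), (C.i, C.j, C.k))),
          Vector.zero)

# grad(v^2/2) - v x (curl v) - (v . grad)v should vanish identically
residual = gradient(v.dot(v)/2) - v.cross(curl(v)) - adv
for unit in (C.i, C.j, C.k):
    assert sp.simplify(residual.dot(unit)) == 0
```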
First, remember that \begin{equation} \label{Eq:III:21:35} -\FLPgrad{\phi}-\ddp{\FLPA}{t}=\FLPE. \end{equation} Next, notice that if I take the curl of Eq. (21.31), I get \begin{equation} \label{Eq:III:21:36} \FLPcurl{\FLPv}=-\frac{q}{m}\,\FLPcurl{\FLPA}, \end{equation} since the curl of a gradient is always zero. But $\FLPcurl{\FLPA}$ is the magnetic field $\FLPB$, so the first two terms can be written as \begin{equation*} \frac{q}{m}(\FLPE+\FLPv\times\FLPB). \end{equation*} Finally, you should understand that $\ddpl{\FLPv}{t}$ stands for the rate of change of the velocity of the fluid at a point. If you concentrate on a particular particle, its acceleration is the total derivative of $\FLPv$ (or, as it is sometimes called in fluid dynamics, the “comoving acceleration”), which is related to $\ddpl{\FLPv}{t}$ by18 \begin{equation} \label{Eq:III:21:37} \left.\ddt{\FLPv}{t}\right|_{\text{comoving}}\kern{-2ex}= \ddp{\FLPv}{t}+(\FLPv\cdot\FLPnabla)\FLPv. \end{equation} This extra term also appears as the third term on the right side of Eq. (21.34). Taking it to the left side, I can write Eq. (21.34) in the following way: \begin{equation} \label{Eq:III:21:38} \left.m\ddt{\FLPv}{t}\right|_{\text{comoving}}\kern{-3.5ex}= q(\FLPE\!+\!\FLPv\!\times\!\FLPB)\!+\!\FLPgrad{\frac{\hbar^2}{2m} \!\biggl(\!\frac{1}{\sqrt{\rho}}\nabla^2\!\!\sqrt{\rho}\!\biggr)}. \end{equation} We also have from Eq. (21.36) that \begin{equation} \label{Eq:III:21:39} \FLPcurl{\FLPv}=-\frac{q}{m}\,\FLPB. \end{equation} These two equations are the equations of motion of the superconducting electron fluid. The first equation is just Newton’s law for a charged fluid in an electromagnetic field. 
It says that the acceleration of each particle of the fluid whose charge is $q$ comes from the ordinary Lorentz force $q(\FLPE+\FLPv\times\FLPB)$ plus an additional force, which is the gradient of some mystical quantum mechanical potential—a force which is not very big except at the junction between two superconductors. The second equation says that the fluid is “ideal”—the curl of $\FLPv$ has zero divergence (the divergence of $\FLPB$ is always zero). That means that the velocity can be expressed in terms of velocity potential. Ordinarily one writes that $\FLPcurl{\FLPv}=\FLPzero$ for an ideal fluid, but for an ideal charged fluid in a magnetic field, this gets modified to Eq. (21.39). So, Schrödinger’s equation for the electron pairs in a superconductor gives us the equations of motion of an electrically charged ideal fluid. Superconductivity is the same as the problem of the hydrodynamics of a charged liquid. If you want to solve any problem about superconductors you take these equations for the fluid [or the equivalent pair, Eqs. (21.32) and (21.33)], and combine them with Maxwell’s equations to get the fields. (The charges and currents you use to get the fields must, of course, include the ones from the superconductor as well as from the external sources.) Incidentally, I believe that Eq. (21.38) is not quite correct, but ought to have an additional term involving the density. This new term does not depend on quantum mechanics, but comes from the ordinary energy associated with variations of density. Just as in an ordinary fluid there should be a potential energy density proportional to the square of the deviation of $\rho$ from $\rho_0$, the undisturbed density (which is, here, also equal to the charge density of the crystal lattice). Since there will be forces proportional to the gradient of this energy, there should be another term in Eq. (21.38) of the form: $(\text{const})\,\FLPgrad{(\rho-\rho_0)^2}$. 
This term did not appear from the analysis because it comes from the interactions between particles, which I neglected in using an independent-particle approximation. It is, however, just the force I referred to when I made the qualitative statement that electrostatic forces would tend to keep $\rho$ nearly constant inside a superconductor.

21–9 The Josephson junction

Fig. 21–6.Two superconductors separated by a thin insulator.

I would like to discuss next a very interesting situation that was noticed by Josephson19 while analyzing what might happen at a junction between two superconductors. Suppose we have two superconductors which are connected by a thin layer of insulating material as in Fig. 21–6. Such an arrangement is now called a “Josephson junction.” If the insulating layer is thick, the electrons can’t get through; but if the layer is thin enough, there can be an appreciable quantum mechanical amplitude for electrons to jump across. This is just another example of the quantum-mechanical penetration of a barrier. Josephson analyzed this situation and discovered that a number of strange phenomena should occur. In order to analyze such a junction I’ll call the amplitude to find an electron on one side, $\psi_1$, and the amplitude to find it on the other, $\psi_2$. In the superconducting state the wave function $\psi_1$ is the common wave function of all the electrons on one side, and $\psi_2$ is the corresponding function on the other side. I could do this problem for different kinds of superconductors, but let us take a very simple situation in which the material is the same on both sides so that the junction is symmetrical and simple. Also, for a moment let there be no magnetic field. Then the two amplitudes should be related in the following way: \begin{align*} i\hbar\,\ddp{\psi_1}{t}&=U_1\psi_1+K\psi_2,\\[1ex] i\hbar\,\ddp{\psi_2}{t}&=U_2\psi_2+K\psi_1. \end{align*} The constant $K$ is a characteristic of the junction.
If $K$ were zero, these two equations would just describe the lowest energy state—with energy $U$—of each superconductor. But there is coupling between the two sides by the amplitude $K$ that there may be leakage from one side to the other. (It is just the “flip-flop” amplitude of a two-state system.) If the two sides are identical, $U_1$ would equal $U_2$ and I could just subtract them off. But now suppose that we connect the two superconducting regions to the two terminals of a battery so that there is a potential difference $V$ across the junction. Then $U_1-U_2=qV$. I can, for convenience, define the zero of energy to be halfway between, then the two equations are \begin{equation} \begin{aligned} i\hbar\,\ddp{\psi_1}{t}&=+\frac{qV}{2}\,\psi_1+K\psi_2,\\[1ex] i\hbar\,\ddp{\psi_2}{t}&=-\frac{qV}{2}\,\psi_2+K\psi_1. \end{aligned} \label{Eq:III:21:40} \end{equation} These are the standard equations for two quantum mechanical states coupled together. This time, let’s analyze these equations in another way. Let’s make the substitutions \begin{equation} \begin{aligned} \psi_1&=\sqrt{\rho_1}e^{i\theta_1},\\[1ex] \psi_2&=\sqrt{\rho_2}e^{i\theta_2}, \end{aligned} \label{Eq:III:21:41} \end{equation} where $\theta_1$ and $\theta_2$ are the phases on the two sides of the junction and $\rho_1$ and $\rho_2$ are the density of electrons at those two points. Remember that in actual practice $\rho_1$ and $\rho_2$ are almost exactly the same and are equal to $\rho_0$, the normal density of electrons in the superconducting material. Now if you substitute these equations for $\psi_1$ and $\psi_2$ into (21.40), you get four equations by equating the real and imaginary parts in each case. 
Letting $(\theta_2-\theta_1)=\delta$, for short, the result is \begin{align} &\begin{aligned} \dot{\rho}_1&=+\frac{2}{\hbar}\,K\sqrt{\rho_2\rho_1}\sin\delta,\\[1.5ex] \dot{\rho}_2&=-\frac{2}{\hbar}\,K\sqrt{\rho_2\rho_1}\sin\delta, \end{aligned}\\[3ex] \label{Eq:III:21:42} &\begin{aligned} \dot{\theta}_1&=-\frac{K}{\hbar}\sqrt{\frac{\rho_2}{\rho_1}}\cos\delta- \frac{qV}{2\hbar},\\[1.5ex] \dot{\theta}_2&=-\frac{K}{\hbar}\sqrt{\frac{\rho_1}{\rho_2}}\cos\delta+ \frac{qV}{2\hbar}. \end{aligned} \label{Eq:III:21:43} \end{align} The first two equations say that $\dot{\rho}_1=-\dot{\rho}_2$. “But,” you say, “they must both be zero if $\rho_1$ and $\rho_2$ are both constant and equal to $\rho_0$.” Not quite. These equations are not the whole story. They say what $\dot{\rho}_1$ and $\dot{\rho}_2$ would be if there were no extra electric forces due to an unbalance between the electron fluid and the background of positive ions. They tell how the densities would start to change, and therefore describe the kind of current that would begin to flow. This current from side $1$ to side $2$ would be just $\dot{\rho}_1$ (or $-\dot{\rho}_2$), or \begin{equation} \label{Eq:III:21:44} J=\frac{2K}{\hbar}\sqrt{\rho_1\rho_2}\sin\delta. \end{equation} Such a current would soon charge up side $2$, except that we have forgotten that the two sides are connected by wires to the battery. The current that flows will not charge up region $2$ (or discharge region $1$) because currents will flow to keep the potential constant. These currents from the battery have not been included in our equations. When they are included, $\rho_1$ and $\rho_2$ do not in fact change, but the current across the junction is still given by Eq. (21.44). Since $\rho_1$ and $\rho_2$ do remain constant and equal to $\rho_0$, let’s set $2K\rho_0/\hbar=J_0$, and write \begin{equation} \label{Eq:III:21:45} J=J_0\sin\delta. \end{equation} $J_0$, like $K$, is then a number which is a characteristic of the particular junction. 
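The separation into real and imaginary parts can be handed to a machine: substituting the claimed derivatives (21.42)–(21.43) back into Eqs. (21.40) must leave no residual. A sketch in Python with sympy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
hbar, K = sp.symbols('hbar K', positive=True)
q, V = sp.symbols('q V', real=True)
r1, r2 = [sp.Function(n, positive=True)(t) for n in ('rho1', 'rho2')]
th1, th2 = [sp.Function(n, real=True)(t) for n in ('theta1', 'theta2')]
delta = th2 - th1

psi1 = sp.sqrt(r1)*sp.exp(sp.I*th1)   # Eq. (21.41)
psi2 = sp.sqrt(r2)*sp.exp(sp.I*th2)

# Eqs. (21.42)-(21.43), used as substitution rules for the time derivatives
rules = {
    r1.diff(t):   2*K/hbar*sp.sqrt(r1*r2)*sp.sin(delta),
    r2.diff(t):  -2*K/hbar*sp.sqrt(r1*r2)*sp.sin(delta),
    th1.diff(t): -K/hbar*sp.sqrt(r2/r1)*sp.cos(delta) - q*V/(2*hbar),
    th2.diff(t): -K/hbar*sp.sqrt(r1/r2)*sp.cos(delta) + q*V/(2*hbar),
}
res1 = sp.I*hbar*psi1.diff(t) - (q*V/2*psi1 + K*psi2)   # first of Eqs. (21.40)
res2 = sp.I*hbar*psi2.diff(t) - (-q*V/2*psi2 + K*psi1)  # second of Eqs. (21.40)
assert sp.simplify(res1.subs(rules).rewrite(sp.exp)) == 0
assert sp.simplify(res2.subs(rules).rewrite(sp.exp)) == 0
```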
The other pair of equations (21.43) tells us about $\theta_1$ and $\theta_2$. We are interested in the difference $\delta=\theta_2-\theta_1$ to use Eq. (21.45); what we get is \begin{equation} \label{Eq:III:21:46} \dot{\delta}=\dot{\theta}_2-\dot{\theta}_1=\frac{qV}{\hbar}. \end{equation} That means that we can write \begin{equation} \label{Eq:III:21:47} \delta(t)=\delta_0+\frac{q}{\hbar}\int V(t)\,dt, \end{equation} where $\delta_0$ is the value of $\delta$ at $t=0$. Remember also that $q$ is the charge of a pair, namely, $q=2q_e$. In Eqs. (21.45) and (21.47) we have an important result, the general theory of the Josephson junction. Now what are the consequences? First, put on a dc voltage. If you put on a dc voltage, $V_0$, the argument of the sine becomes $(\delta_0+(q/\hbar)V_0t)$. Since $\hbar$ is a small number (compared to ordinary voltages and times), the sine oscillates rather rapidly and the net current is nothing. (In practice, since the temperature is not zero, you would get a small current due to the conduction by “normal” electrons.) On the other hand if you have zero voltage across the junction, you can get a current! With no voltage the current can be any amount between $+J_0$ and $-J_0$ (depending on the value of $\delta_0$). But try to put a voltage across it and the current goes to zero. This strange behavior has recently been observed experimentally.20 There is another way of getting a current—by applying a voltage at a very high frequency in addition to a dc voltage. Let \begin{equation*} V=V_0+v\cos\omega t, \end{equation*} where $v\ll V_0$. Then $\delta(t)$ is \begin{equation*} \delta_0+\frac{q}{\hbar}\,V_0t+\frac{q}{\hbar}\,\frac{v}{\omega}\sin\omega t. \end{equation*} Now for $\Delta x$ small, \begin{equation*} \sin\,(x+\Delta x)\approx\sin x+\Delta x\cos x. \end{equation*} Using this approximation for $\sin\delta$, I get \begin{equation*} J=\!J_0\Bigl[\sin\Bigl(\!\delta_0\!+\!\frac{q}{\hbar}V_0t\!\Bigr)\!+\!
\frac{q}{\hbar}\frac{v}{\omega}\sin\omega t \cos\Bigl(\!\delta_0\!+\!\frac{q}{\hbar}V_0t\!\Bigr)\Bigr]. \end{equation*} The first term is zero on the average, but the second term is not if \begin{equation*} \omega=\frac{q}{\hbar}\,V_0. \end{equation*} There should be a current if the ac voltage has just this frequency. Shapiro21 claims to have observed such a resonance effect. If you look up papers on the subject you will find that they often write the formula for the current as \begin{equation} \label{Eq:III:21:48} J=J_0\sin\biggl(\delta_0+\frac{2q_e}{\hbar}\int\FLPA\cdot d\FLPs\biggr), \end{equation} where the integral is to be taken across the junction. The reason for this is that when there’s a vector potential across the junction the flip-flop amplitude is modified in phase in the way that we explained earlier. If you chase that extra phase through, it comes out as given above. Fig. 21–7.Two Josephson junctions in parallel. Finally, I would like to describe a very dramatic and interesting experiment which has recently been made on the interference of the currents from each of two junctions. In quantum mechanics we’re used to the interference between amplitudes from two different slits. Now we’re going to do the interference between two junctions caused by the difference in the phase of the arrival of the currents through two different paths. In Fig. 21–7, I show two different junctions, “a” and “b”, connected in parallel. The ends, $P$ and $Q$, are connected to our electrical instruments which measure any current flow. The external current, $J_{\text{total}}$, will be the sum of the currents through the two junctions. Let $J_{\text{a}}$ and $J_{\text{b}}$ be the currents through the two junctions, and let their phases be $\delta_{\text{a}}$ and $\delta_{\text{b}}$. Now the phase difference of the wave functions between $P$ and $Q$ must be the same whether you go on one route or the other. 
Along the route through junction “a”, the phase difference between $P$ and $Q$ is $\delta_{\text{a}}$ plus the line integral of the vector potential along the upper route: \begin{equation} \label{Eq:III:21:49} \Delta\text{Phase}_{P\to Q}=\delta_{\text{a}}+ \frac{2q_e}{\hbar}\int_{\text{upper}}\kern{-3ex}\FLPA\cdot d\FLPs. \end{equation} Why? Because the phase $\theta$ is related to $\FLPA$ by Eq. (21.26). If you integrate that equation along some path, the left-hand side gives the phase change, which is then just proportional to the line integral of $\FLPA$, as we have written here. The phase change along the lower route can be written similarly \begin{equation} \label{Eq:III:21:50} \Delta\text{Phase}_{P\to Q}=\delta_{\text{b}}+ \frac{2q_e}{\hbar}\int_{\text{lower}}\kern{-3ex}\FLPA\cdot d\FLPs. \end{equation} These two must be equal; and if I subtract them I get that the difference of the deltas must be the line integral of $\FLPA$ around the circuit: \begin{equation*} \delta_{\text{b}}-\delta_{\text{a}}= \frac{2q_e}{\hbar}\oint_\Gamma\FLPA\cdot d\FLPs. \end{equation*} Here the integral is around the closed loop $\Gamma$ of Fig. 21–7 which circles through both junctions. The integral over $\FLPA$ is the magnetic flux $\Phi$ through the loop. So the two $\delta$’s are going to differ by $2q_e/\hbar$ times the magnetic flux $\Phi$ which passes between the two branches of the circuit: \begin{equation} \label{Eq:III:21:51} \delta_{\text{b}}-\delta_{\text{a}}=\frac{2q_e}{\hbar}\,\Phi. \end{equation} I can control this phase difference by changing the magnetic field on the circuit, so I can adjust the differences in phases and see whether or not the total current that flows through the two junctions shows any interference of the two parts. The total current will be the sum of $J_{\text{a}}$ and $J_{\text{b}}$. For convenience, I will write \begin{equation*} \delta_{\text{a}}=\delta_0-\frac{q_e}{\hbar}\,\Phi,\quad \delta_{\text{b}}=\delta_0+\frac{q_e}{\hbar}\,\Phi. 
\end{equation*} Then, \begin{align} J_{\text{total}} &=J_0\biggl\{\!\sin\biggl(\! \delta_0\!-\!\frac{q_e}{\hbar}\Phi\!\biggr)\!+\sin\biggl(\! \delta_0\!+\!\frac{q_e}{\hbar}\,\Phi\!\biggr)\!\biggr\}\notag\\[1.5ex] \label{Eq:III:21:52} &=2J_0\sin\delta_0\cos\frac{q_e\Phi}{\hbar}. \end{align} Now we don’t know anything about $\delta_0$, and nature can adjust that anyway she wants depending on the circumstances. In particular, it will depend on the external voltage we apply to the junction. No matter what we do, however, $\sin\delta_0$ can never get bigger than $1$. So the maximum current for any given $\Phi$ is given by \begin{equation*} J_{\text{max}}=2J_0\left\lvert \cos\frac{q_e}{\hbar}\,\Phi\right\rvert. \end{equation*} This maximum current will vary with $\Phi$ and will itself have maxima whenever \begin{equation*} \Phi=n\,\frac{\pi\hbar}{q_e}, \end{equation*} with $n$ some integer. That is to say that the current takes on its maximum values where the flux linkage has just those quantized values we found in Eq. (21.30)! The Josephson current through a double junction was recently measured22 as a function of the magnetic field in the area between the junctions. The results are shown in Fig. 21–8. There is a general background of current from various effects we have neglected, but the rapid oscillations of the current with changes in the magnetic field are due to the interference term $\cos q_e\Phi/\hbar$ of Eq. (21.52). Fig. 21–8.A recording of the current through a pair of Josephson junctions as a function of the magnetic field in the region between the two junctions (see Fig. 21–7). [This recording was provided by R. C. Jaklevic, J. Lambe, A. H. Silver, and J. E. Mercereau of the Scientific Laboratory, Ford Motor Company.] 
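The envelope $J_{\text{max}}=2J_0\left\lvert\cos(q_e\Phi/\hbar)\right\rvert$ and the spacing of its maxima are easy to exercise numerically (a sketch in Python, with flux measured in units of $\pi\hbar/q_e$ so that the unit of Eq. (21.30) is $1$):

```python
import numpy as np

J0 = 1.0

def Jmax(phi):
    # phi in units of pi*hbar/q_e, so q_e*Phi/hbar = pi*phi
    return 2*J0*np.abs(np.cos(np.pi*phi))

for n in range(4):
    assert np.isclose(Jmax(n), 2*J0)                   # peaks at the quantized fluxes
    assert np.isclose(Jmax(n + 0.5), 0.0, atol=1e-12)  # zeros halfway between
```

Sweeping the magnetic field sweeps $\Phi$ linearly, which is what produces the rapid oscillations of Fig. 21–8.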
One of the intriguing questions about quantum mechanics is the question of whether the vector potential exists in a place where there’s no field.$^{23}$ This experiment I have just described has also been done with a tiny solenoid between the two junctions so that the only significant magnetic $\FLPB$ field is inside the solenoid and a negligible amount is on the superconducting wires themselves. Yet it is reported that the amount of current depends oscillatorily on the flux of magnetic field inside that solenoid even though that field never touches the wires—another demonstration of the “physical reality” of the vector potential.$^{24}$ I don’t know what will come next. But look what can be done. First, notice that the interference between two junctions can be used to make a sensitive magnetometer. If a pair of junctions is made with an enclosed area of, say, $1$ mm$^2$, the maxima in the curve of Fig. 21–8 would be separated by $2\times10^{-6}$ gauss. It is certainly possible to tell when you are $1/10$ of the way between two peaks; so it should be possible to use such a junction to measure magnetic fields as small as $2\times10^{-7}$ gauss—or to measure larger fields to such a precision. One should be able to go even further. Suppose for example we put a set of $10$ or $20$ junctions close together and equally spaced. Then we can have the interference between $10$ or $20$ slits and as we change the magnetic field we will get very sharp maxima and minima. Instead of a $2$-slit interference we can have a $20$- or perhaps even a $100$-slit interferometer for measuring the magnetic field. Perhaps we can predict that the measurement of magnetic fields will—by using the effects of quantum-mechanical interference—eventually become almost as precise as the measurement of wavelength of light. These then are some illustrations of things that are happening in modern times—the transistor, the laser, and now these junctions, whose ultimate practical applications are still not known.
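Feynman's point that many junctions act like a many-slit interferometer with much sharper maxima can be illustrated with a short phasor sum (a sketch added for illustration; the function name is ours, not from the text):

```python
import cmath
import math

def array_current_magnitude(phase, n):
    """Magnitude of the sum of n equal phasors with successive phase
    offset `phase` -- the same form as an n-slit interference pattern."""
    return abs(sum(cmath.exp(1j * k * phase) for k in range(n)))

# Principal maxima always reach n, whatever n is:
assert abs(array_current_magnitude(0.0, 2) - 2) < 1e-9
assert abs(array_current_magnitude(0.0, 20) - 20) < 1e-9

# The first zero beside the central peak sits at phase = 2*pi/n,
# so a 20-junction array has peaks 10 times narrower than a 2-junction one:
assert array_current_magnitude(2 * math.pi / 20, 20) < 1e-9
```

The narrowing of the peaks with the number of junctions is exactly what would make the multi-junction device a sharper magnetometer.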
The quantum mechanics which was discovered in 1926 has had nearly 40 years of development, and rather suddenly it has begun to be exploited in many practical and real ways. We are really getting control of nature on a very delicate and beautiful level. I am sorry to say, gentlemen, that to participate in this adventure it is absolutely imperative that you learn quantum mechanics as soon as possible. It was our hope that in this course we would find a way to make comprehensible to you at the earliest possible moment the mysteries of this part of physics.

1. I’m not really reminding you, because I haven’t shown you some of these equations before; but remember the spirit of this seminar.
2. Volume II, Section 15–5.
3. Not to be confused with our earlier use of $\phi$ for a state label!
4. $K$ is the same quantity that was called $A$ in the problem of a linear lattice with no magnetic field. See Chapter 13.
5. Section 13–3.
6. Volume II, Section 27–1.
7. See, for example, J. D. Jackson, Classical Electrodynamics, John Wiley and Sons, Inc., New York (1962), p. 408.
8. Volume II, Chapter 14, Section 14–1.
9. First discovered by Kamerlingh-Onnes in 1911; H. Kamerlingh-Onnes, Comm. Phys. Lab., Univ. Leyden, Nos. 119, 120, 122 (1911). You will find a nice up-to-date discussion of the subject in E. A. Lynton, Superconductivity, John Wiley and Sons, Inc., New York, 1962.
10. J. Bardeen, L. N. Cooper, and J. R. Schrieffer, Phys. Rev. 108, 1175 (1957).
11. W. Meissner and R. Ochsenfeld, Naturwiss. 21, 787 (1933).
12. Actually if the electric field were too strong, pairs would be broken up and the “normal” electrons created would move in to help neutralize any excess of positive charge. Still, it takes energy to make these normal electrons, so the main point is that a nearly uniform density $\rho$ is highly favored energetically.
13. F. London and H. London, Proc. Roy. Soc. (London) A149, 71 (1935); Physica 2, 341 (1935).
14. F. London, Superfluids, John Wiley and Sons, Inc., New York, 1950, Vol. I, p. 152.
15. B. S. Deaver, Jr., and W. M. Fairbank, Phys. Rev. Letters 7, 43 (1961).
16. R. Doll and M. Näbauer, Phys. Rev. Letters 7, 51 (1961).
17. It has once been suggested by Onsager that this might happen (see Deaver and Fairbank, Ref. 15), although no one else ever understood why.
18. See Volume II, Section 40–2.
19. B. D. Josephson, Physics Letters 1, 251 (1962).
20. P. W. Anderson and J. M. Rowell, Phys. Rev. Letters 10, 230 (1963).
21. S. Shapiro, Phys. Rev. Letters 11, 80 (1963).
22. Jaklevic, Lambe, Silver, and Mercereau, Phys. Rev. Letters 12, 159 (1964).
23. Jaklevic, Lambe, Silver, and Mercereau, Phys. Rev. Letters 12, 274 (1964).
24. See Volume II, Chapter 15, Section 15–5.
Department of Physics, The University of Tokyo

Oct. 4 Cord Mueller (Centre for Quantum Technologies, Singapore)
Oct. 11 Shunsuke Furukawa
Oct. 18 (no seminar)
Oct. 25 Zhifang Xu
Nov. 1 Naoyuki Sakumichi
Nov. 8 Shohei Watabe
Nov. 15 Shingo Kobayashi
Nov. 22 Shinpei Endo
Nov. 29 Nguyen Thanh Phuc
Dec. 6 Tatsuhiko Ikeda
Dec. 13 Yui Kuramochi
Dec. 20 Ken Funo
Jan. 10 (no seminar)
Jan. 17 Yusuke Horinouchi
Jan. 24 Yasuhiro Hatsugai (Univ. of Tsukuba)
Feb. 14 Sho Sugiura (Dept. of Basic Science, Univ. of Tokyo)

2012/10/4(Thu) @#933 13:00-
speaker Cord Mueller (Centre for Quantum Technologies, Singapore)
title Bogoliubov theory of disordered Bose-Einstein condensates
abstract When interacting bosons are condensed in optical potentials of various forms, it can be quite a challenge to tell the condensate from its excitations, both quantum and thermal. In this talk, I will describe a Bogoliubov theory of inhomogeneous condensates that is capable of describing the excitations of bosonic superfluids in arbitrary external potentials. Joint work with Christopher Gaul (Madrid). Recent reference: arXiv:1202.3489

2012/10/11(Thu) @#933 14:00-
speaker Shunsuke Furukawa
title Quantum Hall states in rapidly rotating two-component Bose gases
abstract Under rapid rotation, ultracold gases of bosonic atoms have been predicted to enter a highly correlated regime, which is analogous to quantum Hall systems. In this talk, I will present our recent study on the quantum Hall states in rapidly rotating two-component (or pseudo-spin-1/2) Bose gases. These systems offer an ideal setting in which to study the roles of "(pseudo-)spins" in quantum Hall physics.
Our main results include (a) numerical evidence for a non-Abelian spin-singlet state, whose quasiparticles feature non-Abelian statistics, and (b) a phase transition between different quantum Hall states that occurs as the ratio of the intercomponent to intracomponent interactions changes. Reference: S.F. and M. Ueda, Phys. Rev. A 86, 031604(R) (2012)

2012/10/25(Thu) @#933 13:00-
speaker Zhifang Xu
title Spinor Bose-Einstein condensates with spin-orbit couplings
abstract Recently, the groundbreaking experiments in Spielman's group on emulating spin-orbit couplings in pseudo spin-1/2 atomic Bose gases have stimulated tremendous efforts on spin-orbit coupled quantum gases. Based on mean-field approximations, plane-wave, stripe, triangular-lattice, and square-lattice phases have been found as ground states. In this talk, I will present our recent study on how to systematically classify different phases based on their symmetries. We can then not only understand the different lattice phases already found but also find two different kagome lattice phases and a nematic vortex lattice phase, both of which emerge spontaneously without lattice potentials. Reference: Z. F. Xu, Y. Kawaguchi, L. You, and M. Ueda, Phys. Rev. A 86, 033628 (2012).

2012/11/1(Thu) @#933 13:00-
speaker Naoyuki Sakumichi
title BEC-BCS crossover theory based on Lee-Yang cluster expansion
abstract We propose a new systematic approach to the BCS-BEC crossover at finite temperature based on the cluster (virial) expansion formulation of Lee and Yang [1], which enables us to systematically expand the thermodynamic function in terms of cluster functions. It is found that the proposed theory leads, in first-order approximation, to the thermodynamic function of the standard BCS-BEC crossover theory of Nozières and Schmitt-Rink (NSR) [2]. Our approach is fundamentally different from standard perturbative-expansion techniques in the interaction parameter, such as the theory of NSR.
Concretely, although the second virial coefficient (which is dominant in the high-temperature limit) of NSR is equivalent to that of an ideal Fermi gas, that of the proposed theory is exact for any value of the s-wave scattering length. [1] T. D. Lee and C. N. Yang, Phys. Rev. 113, 1165 (1958); 117, 22 (1960). [2] P. Nozières and S. Schmitt-Rink, J. Low Temp. Phys. 59, 195 (1985).

2012/11/8(Thu) @#933 13:00-
speaker Shohei Watabe
title recent development of Monte Carlo methods
abstract The Green's function formalism is one of the useful tools for studying many-body systems. It is a perturbation theory, and if one finds a minimal set of diagrams that describes the system qualitatively or quantitatively well, it helps one understand what mechanism might be at work. On the other hand, another interesting tool is known for studying condensed matter: the Monte Carlo method. This method gives unbiased results for some systems. In this seminar, I will review recent developments in Monte Carlo methods and talk about what I am going to do.

2012/11/15(Thu) @#933 13:00-
speaker Shingo Kobayashi
title Topological influence and back-action in multiple topological excitation systems
abstract Topological excitations exist in various subfields of physics, such as condensed matter physics, elementary particle physics, and cosmology. They have been observed experimentally in gaseous Bose-Einstein condensates. We can classify them using the homotopy group. However, there are cases in which the homotopy group is not consistent with the charge of a topological excitation when it coexists with a vortex, an effect called the topological influence. In these cases, physically consistent charges are given by the Abe homotopy group [1,2]. In this talk, I will discuss the relationship between the topological influence and total charge conservation.
To be consistent with charge conservation, I introduce a back-action on a vortex in terms of the topological influence on a topological excitation. [1] M. Abe, Jpn. J. Math. 16, 179 (1940). [2] S. Kobayashi, et al., Nucl. Phys. B 856, 577 (2012).

2012/11/22(Thu) @#933 13:00-
speaker Shimpei Endo
title Polarons in a mass-imbalanced two-component Fermi gas
abstract Recently, mass-imbalanced atomic mixtures such as K-Li, Yb-Li, and Cs-Li have been realized experimentally. In these systems, non-trivial three-body or four-body bound states can appear, and their interplay with many-body physics is of great interest. Furthermore, the heavy particle can be used as a probe in a Fermi environment. In this seminar, I discuss polaron physics in this mass-imbalanced Fermi gas. I discuss the dynamics of a single heavy particle immersed in a fermionic environment. I then talk about multiple heavy particles immersed in a fermionic environment, and their effective interactions.

2012/11/29(Thu) @#933 13:00-
speaker Nguyen Thanh Phuc
title Fluctuation-induced first-order quantum phase transition in spinor Bose-Einstein condensates
abstract Quantum fluctuations are ubiquitous in a wide range of physical phenomena. In ultracold atoms they have also been proven to play an important role in the breaking of symmetry. In spin-2 spinor Bose-Einstein condensates, the energy spectrum contradicts the order of the phase transition at the mean-field level. In this seminar, by taking quantum fluctuations into account, we show that the energy spectrum is reconciled with the first-order quantum phase transition. The energy gap of quasi-Nambu-Goldstone modes and Beliaev damping are also discussed.

2012/12/6(Thu) @#933 13:00-
speaker Tatsuhiko Ikeda
title Universal quantum correction to diagonal entropy after control
abstract The diagonal entropy has recently been proposed to describe the thermodynamic entropy in isolated quantum systems under control [1].
Unlike von Neumann's entropy, it varies every time we control the system. Although the second law of thermodynamics is expected to hold, it has been proven only under the condition that the density matrix before control is diagonal in the eigenenergy basis. To examine the second law in more general situations, without assuming the above condition, we have evaluated the diagonal entropy after control and found that it involves a universal quantum correction which is sub-extensive [2]. As a consequence, we conclude that the second law is retained in large systems but may break down in small systems. [1] A. Polkovnikov, Annals of Physics 326, 486 (2011). [2] T. N. Ikeda, A. Polkovnikov, and M. Ueda, in preparation.

2012/12/13(Thu) @#933 13:00-
speaker Yui Kuramochi
title Theory of simultaneous continuous measurement process of photon counting and homodyne detection
abstract Quantum continuous measurement is the quantum measurement in which weak quantum measurements are performed continuously in time. Among studies of quantum continuous measurement, there are two types of measurement process: jump-type and diffusive-type continuous measurements. In this seminar, we present a general theory which can include both jump-type and diffusive-type measurement outcomes. This theory is applied to a typical measurement process: the simultaneous measurement of photon counting and homodyne detection. The stochastic Schrödinger equation describing the measurement process is analytically solved. Using this solution, a probability density and a generating functional of the measurement outputs are obtained. These formulae are applied to typical initial conditions: coherent, number, thermal, and squeezed states. Finally, Monte Carlo simulations of the photon number expectation value for several paths are presented.
2012/12/20(Thu) @#933 13:00-
speaker Ken Funo
title Thermodynamic energy gain from entanglement
abstract When we consider an observer that can measure the microscopic degrees of freedom and feedback control the system, we can extract work from the system above the limit of the conventional second law of thermodynamics. A generalized second law for the control of thermal fluctuations has been derived, and information-to-free-energy conversion has been experimentally demonstrated. In this seminar, we discuss the effect of entanglement when considering feedback control. We show that work can be extracted from the entangled system beyond the classical correlation.

2013/1/17(Thu) @#933 13:00-
speaker Yusuke Horinouchi
title Introduction to the Functional Renormalization Group
abstract The Functional Renormalization Group (FRG) is a powerful methodology in field theory, which is applicable even in the strong-coupling regime. The basic idea of the FRG is to introduce an infrared cutoff in the free propagator of the theory and to study the flow of the effective action as this cutoff is changed. Because the resulting equation is exact, it has many advantages over other perturbative or non-perturbative approaches. It is a systematic non-perturbative approach which is independent of the details of the system. I will review the basic formulation of the FRG and demonstrate it on a harmonic oscillator perturbed by a quartic interaction.

2013/1/24(Thu) @#933 13:00-
speaker Yasuhiro Hatsugai (Institute of Physics, University of Tsukuba)
title Symmetry and order parameters for topological phases
abstract Beyond the great success of the Ginzburg-Landau theory associated with symmetry breaking (SB), condensed matter physicists have recently begun to focus on more than that. That is, phases without any fundamental SB but possessing characteristic features are of central interest: the quantum/spin liquids.
This class of matter includes a wide variety of systems such as quantum (spin) Hall states, gapped quantum spins, anisotropic superfluids/superconductors, and graphene. Some cold-atom and photonic systems belong to this class as well. Even though SB is absent, overly generic states are uninteresting; symmetry still plays an important role in constraining the physical states. Gauge symmetries, time reversal, and charge conjugation are important examples. When the quantum/spin liquid is stable against some perturbations due to geometrical constraints, one may consider the state topological. For topological phases with some symmetry protection, we can define a "topological order parameter" using topological objects such as gap nodes, quantized Berry phases, and Chern numbers. Some edge states, which are induced by geometric perturbations such as boundaries and impurities, are again topologically stable and can be used as topological order parameters. This is the bulk-edge correspondence. In the gapped cases, these topological order parameters are adiabatic invariants and are useful for identifying the quantum phase transition. We will describe the generic idea of topological phases and show the validity of our topological characterization for various quantum phases.

2013/2/14(Thu) @#201a 13:00-
speaker Sho Sugiura (Dept. of Basic Science, Univ. of Tokyo)
title Thermal Pure Quantum State Corresponds To Various Ensembles
abstract A thermal equilibrium state of a quantum many-body system is conventionally represented by the density operator of the statistical-mechanical ensembles. It can also be represented by a typical pure state, which we call a thermal pure quantum (TPQ) state. This state is not obtained by purification of a mixed state, hence any extra systems such as reservoirs are unnecessary.
A single realization of the TPQ state suffices for calculating all statistical-mechanical properties, including correlation functions and genuine thermodynamic variables, of a quantum system at finite temperature. In this talk, we first introduce the TPQ state corresponding to the microcanonical ensemble. Then, we extend it to the TPQ state corresponding to the canonical ensemble. The TPQ states corresponding to other ensembles can also be constructed in a similar manner. Next, we show that all these TPQ states are equivalent, i.e., they give identical thermodynamic results. We also show that they are transformed into each other by simple analytic transformations. This formulation is not only interesting as fundamental physics but also advantageous in practical applications, because one needs only to construct a single pure state by just multiplying a random vector by the Hamiltonian matrix.

Schedule of summer semester (starting at 13:00 @ #933)

2012/4/12(Thu) @933 13:30-
speaker Shohei Watabe (Univ. of Tokyo)
title Many-body effects in an interacting Bose gas
abstract In this seminar, I will talk about our recent attempts to fix problems in mean-field theories for bosons. A well-known mean-field theory for a Bose-Einstein condensate is the Shono-Popov approximation, which includes all possible first-order diagrams of a Bose-condensed system. However, this theory has problems: (1) the critical temperature is equal to that of an ideal Bose gas; (2) correlation functions have an infrared divergence; (3) it involves a first-order phase transition. These problems are shared with the Nozières-Schmitt-Rink theory (applicable to superfluid Fermi gases) in the BEC limit as well. I will discuss how to fix two of the three problems in Green's function language and also discuss the one problem that still remains. This work is in collaboration with Prof. Yoji Ohashi.

Apr. 19(Thu) Shingo Kobayashi (in Japanese)
    "Chern Numbers, Quaternions, and Berry's Phases in Fermi Systems"
    J. E. Avron, L. Sadun, J. Segert, and B. Simon, Commun. Math. Phys. 124, 595-627 (1989)

Apr. 26(Thu) Shinpei Endo (in English)
    "Resonantly paired fermionic superfluids"
    V. Gurarie and L. Radzihovsky, Ann. Phys. 322, 2-119 (2007)

May 24(Thu) Nguyen Thanh Phuc (in English)
    "Dynamics of Trapped Bose Gases at Finite Temperatures"
    Zaremba, Nikuni, and Griffin, Journal of Low Temperature Physics 116, 277 (1999)

May 31(Thu) Tatsuhiko Ikeda (in Japanese)
    "Rigorous results on valence-bond ground states in antiferromagnets"
    Affleck, Kennedy, Lieb, and Tasaki, Phys. Rev. Lett. 59, 799-802 (1987)
    "Valence bond ground states in isotropic quantum antiferromagnets"
    Affleck, Kennedy, Lieb, and Tasaki, Commun. Math. Phys. 115, 477-528 (1988)

Jun. 7(Thu) Yui Kuramochi (in Japanese)
    "An Operational Approach to Quantum Probability"
    E. B. Davies and J. T. Lewis, Commun. Math. Phys. 17, 239-260 (1970)

Jun. 14(Thu) Ken Funo (in English)
    "Information causality as a physical principle"
    M. Pawlowski, T. Paterek, D. Kaszlikowski, V. Scarani, A. Winter, and M. Zukowski, Nature 461, 1101 (2009)

Jun. 21(Thu) Tomohiro Shitara (in Japanese)
    N. Bohr, Phys. Rev. 48, 696-702 (1935)

2012/6/25(Mon) @233 10:00-
speaker Hitoshi Murayama (IPMU & Berkeley)
title Unified description of Nambu-Goldstone bosons without Lorentz invariance
abstract Using the effective Lagrangian approach, we clarify general issues about Nambu-Goldstone bosons without Lorentz invariance. We show how to count their number and study their dispersion relations. Their number is less than the number of broken generators when some of them form canonically conjugate pairs. The pairing occurs when the generators have a nonzero expectation value of their commutator. For non-semi-simple algebras, central extensions are possible. The underlying geometry of the coset space is in general partially symplectic.

Jun. 28(Thu) Hiroyuki Shimizu (in Japanese)
    "The topological theory of defects in ordered media"
    N. D. Mermin, Rev. Mod. Phys. 51, 591-648 (1979)

2012/7/3(Tue) @431 13:00-
speaker Motohiko Ezawa (University of Tokyo)
title From graphene to silicene: A topological insulator made of silicon
abstract Silicene is a monolayer of silicon atoms forming a two-dimensional honeycomb lattice, which shares almost every remarkable property with graphene. The low-energy dynamics is described by Dirac electrons, but they are massive due to relatively large spin-orbit interactions. I will explain the following properties of silicene: 1) The band structure is controllable by applying an electric field. 2) Silicene undergoes a phase transition from a topological insulator to a band insulator under an external electric field. 3) The topological phase transition can be detected experimentally by way of diamagnetism. 4) There are novel valley-spin selection rules revealed by way of photon absorption. 5) Silicene yields remarkably many phases when an exchange field is additionally introduced. 6) Silicon nanotubes can be used to convey spin currents under an electric field.

Jul. 5(Thu) Shun Tanaka (in Japanese)
    "On the Einstein Podolsky Rosen paradox"
    J. S. Bell, Physics 1 (3): 195-200 (1964)
    "The Problem of Hidden Variables in Quantum Mechanics"
    S. Kochen and E. Specker, Journal of Mathematics and Mechanics (1967)

2012/7/9(Mon) @201a 13:00-
speaker Takahiro Sagawa (Kyoto University)
title How to Reconcile Maxwell's Demon with the Second Law?
abstract As has been known since the nineteenth century, Maxwell's demon can adiabatically decrease the entropy of thermodynamic systems by feedback control. What reconciles the demon with the second law? In this talk, I will answer this question on the basis of my recent work with Prof. Ueda (arXiv:1206.2479): the positive entropy production during measurement compensates for the negative entropy production during feedback control. This talk is organized as follows.
First, I will introduce the basic concepts in information theory and nonequilibrium thermodynamics. Second, I will briefly review the history of Maxwell's demon, and clarify what was understood and what was misunderstood. Third, I will talk about our recent results on generalizations of the fluctuation theorem for information exchanges, which clarify the information-entropy balance in a broad class of information processing, including the conventional Maxwell's demon.

Jul. 12(Thu) Yusuke Horinouchi
    "Energetics of a strongly correlated Fermi gas"
    S. Tan, Ann. Phys. 323, 2952 (2008)
    "Large momentum part of a strongly correlated Fermi gas"
    S. Tan, Ann. Phys. 323, 2971 (2008)
    "Generalized virial theorem and pressure relation for a strongly correlated Fermi gas"
    S. Tan, Ann. Phys. 323, 2987 (2008)

2012/8/27(Mon) @933 13:00-
speaker Doerte Blume (Washington State Univ.)
title Two-Component Fermi Gases with Unequal Masses: Three-, Four- and Many-Body Physics
abstract Weakly-bound few-body systems have been studied extensively by the atomic, nuclear and condensed matter communities since the early days of quantum mechanics. This talk summarizes our recent theoretical studies of few-fermion systems consisting of three and four particles. In particular, we discuss the energetics and structural properties of extremely weakly-bound three- and four-fermion systems consisting of a majority of heavy fermions and a single light impurity. For positive interspecies s-wave scattering length and sufficiently large mass ratio, a weakly-bound universal four-body bound state is predicted to exist. We also discuss the behavior of two-component Fermi gases with infinitely large interspecies s-wave scattering length. Employing the virial equation of state, thermodynamic properties of unequal-mass Fermi gases at unitarity are discussed in the high-temperature limit.
2012/8/27(Mon) @933 15:00-
speaker Jose D'Incao (JILA, University of Colorado and NIST)
title Efimov physics for atoms and dipolar species
abstract Strides made by the field of theoretical atomic physics have resulted in a tremendous deepening of our understanding of ultracold gases in the quantum mechanical realm. Increasingly, these gains are being translated into prospects for controlling atomic behavior, whether for the development of the next generation of atomic clocks, for creating novel phases of atomic gases, or for the manipulation of chemical reaction dynamics. In this talk, I will show that a fundamental phenomenon, known today as Efimov physics, controls the interactions between a few atoms and molecules. Predicted about 40 years ago, the Efimov effect is one of the most counterintuitive quantum phenomena that manifest in a "simple" few-particle system. I will discuss our recent findings on Efimov physics in atomic systems as well as its extension to strongly dipolar systems. Even though a long-range anisotropic dipolar interaction has all the ingredients to "destroy" the Efimov effect, our work shows that not only does the effective attractive interaction that characterizes the Efimov effect persist, but also that the dipolar interaction is extremely beneficial for the study of the Efimov effect.
Aug 12 2015

Today is the birthday (1887) of Erwin Rudolf Josef Alexander Schrödinger, a Nobel Prize-winning Austrian physicist who developed a number of fundamental ideas in the field of quantum theory, which formed the basis of wave mechanics. He formulated the basic wave equation (the stationary and time-dependent Schrödinger equation) and, more popularly, proposed an original interpretation of the physical meaning of the wave function, which led to his famous thought experiment “Schrödinger’s Cat” which supposedly illustrates the absurdity of the Copenhagen interpretation of quantum mechanics. For those who know (and care) about the implications of this thought experiment I have to say that I’ve never seen the point of it. The Copenhagen interpretation states that the wave function of certain subatomic particles exists in two (or more) simultaneously contradictory states until they are observed, at which point the function “collapses” or resolves to one or the other. Erwin Schrödinger’s thought experiment involved a closed box within which was a chamber containing a very small amount of radioactive material, a particle of which, within a fixed span of time, might decay or not decay. The state of the particle would be measured by a Geiger counter and, if it had decayed, would trigger the release of cyanide gas. Also in the box was a cat. Schrödinger’s point was that it was absurd to imagine that until the box was opened by an “observer” the decay state of the particle was unknown and therefore that the cat was simultaneously alive and dead. Here’s the original: Einstein had always been troubled by the idea that matter could simultaneously exist in two contradictory states and was so delighted by the thought experiment that he wrote: I don’t know where he got the gunpowder from, but that’s not the only mistake that he made. I’ve wondered for years why they thought that the wave function had to be observed by a human for it to collapse.
Why isn’t the cat an observer? Why not the Geiger counter? Anything in the macro world that interacts with the wave function is an observer. Oh dear, Erwin, you should have taken an anthropology class with me.  I gather from recent reading that I am not the only person to have spotted the fallacy. Niels Bohr apparently made the same observation a long time ago. Oh well, it’s not surprising; he’s much smarter than I am. The experiment as described is a purely theoretical one, and the machine proposed is not known to have been constructed. However, successful experiments involving similar principles, e.g. superpositions (that is matter in 2 states at the same time) of relatively large (by the standards of quantum physics) objects have been performed. These experiments do not show that a cat-sized object can be superposed (both alive and dead), but the known upper limit on “cat states” has been pushed upwards by them. In many cases the state is short-lived, even when cooled to near absolute zero. 1. A “cat state” has been achieved with photons. 2  A beryllium ion has been trapped in a superposed state. 3. An experiment involving a superconducting quantum interference device (“SQUID”) has been linked to the theme of the thought experiment: “The superposition state does not correspond to a billion electrons flowing one way and a billion others flowing the other way. Superconducting electrons move en masse. All the superconducting electrons in the SQUID flow both ways around the loop at once when they are in the Schrödinger’s cat state.” 4. A piezoelectric “tuning fork” has been constructed, which is both vibrating and still at the same time. All this thinking makes me hungry but I don’t think I can do much with cyanide and a dead cat. So I am left with Schrödinger’s home of Vienna, which has already given me enough headaches. But . . . 
Vienna is well known for dishes made with a cheese called quark, which by silly coincidence is the name of an elementary sub-atomic particle. By an even sillier coincidence there are different types of quark particles which are referred to as “flavors.” Quark the dairy product is made by warming soured milk until the desired degree of coagulation (denaturation, curdling) of milk proteins is reached, and then straining it. It can be classified as fresh acid-set cheese, though in some countries it is traditionally considered a distinct fermented milk product. Traditional quark is made without rennet, but in some modern dairies rennet is added. It is soft, white and unaged, and usually has no salt added. Last time I gave a Viennese recipe I included a link to this video on how to make apple strudel: Well, you can adapt it to make Viennese Topfenstrudel, using sweetened quark in place of apples. Problem solved.
Quantum mechanics: the fundamental theory in physics describing the properties of nature on an atomic scale

Quantum mechanics explains how the universe works at a scale smaller than atoms. It is also called quantum physics or quantum theory. Mechanics is the part of physics that explains how things move, and quantum is the Latin word for 'how much'. A quantum of energy is the least amount possible (or the least extra amount), and quantum mechanics describes how that energy moves or interacts. Atoms used to be considered the smallest building blocks of matter, but modern science has shown that there are even smaller particles, like protons, neutrons and electrons. Quantum mechanics describes how the particles that make up atoms work. Quantum mechanics also tells us how electromagnetic waves (like light) work. Wave–particle duality means that particles behave like waves and waves behave like particles. (They are not two kinds of thing; they are something like both: this is their duality.) Much of modern physics and chemistry can be described and understood using the mathematical rules of quantum mechanics. The mathematics used to study subatomic particles and electromagnetic waves is very complex because they act in very strange ways.

Waves and photons

Photons are particles that are point-sized, tinier than atoms. Photons are like "packets" or packages of energy. Light sources such as candles or lasers produce light in bits called photons. The more photons a lamp produces, the brighter the light. Light is a form of energy that behaves like the waves in water or radio waves. The distance between the top of one wave and the top of the next wave is called a 'wavelength'. Each photon carries a certain amount, or 'quantum', of energy depending on its wavelength. A light's color depends on its wavelength.
The color violet (the bottom or innermost color of the rainbow) has a wavelength of about 400 nm ("nanometers"), which is 0.00004 centimeters or 0.000016 inches. Photons with wavelengths of 10–400 nm are called ultraviolet (or UV) light. Such light cannot be seen by the human eye. On the other end of the spectrum, red light is about 700 nm. Infrared light is about 700 nm to 300,000 nm. Human eyes are not sensitive to infrared light either. Wavelengths are not always so small. Radio waves have longer wavelengths. The wavelengths for an FM radio can be several meters in length (for example, stations transmitting on 99.5 FM are emitting radio energy with a wavelength of about 3 meters, which is about 10 feet). Each photon has a certain amount of energy related to its wavelength. The shorter the wavelength of a photon, the greater its energy. For example, an ultraviolet photon has more energy than an infrared photon. Wavelength and frequency (the number of times the wave crests per second) are inversely proportional, which means a longer wavelength will have a lower frequency, and vice versa. If the color of the light is infrared (lower in frequency than red light), each photon can heat up what it hits. So, if a strong infrared lamp (a heat lamp) is pointed at a person, that person will feel warm, or even hot, because of the energy stored in the many photons. The surface of the infrared lamp may even get hot enough to burn someone who may touch it. Humans cannot see infrared light, but we can feel the radiation in the form of heat. For example, a person walking by a brick building that has been heated by the sun will feel heat from the building without having to touch it. The mathematical equations of quantum mechanics are abstract, which means it is impossible to know the exact physical properties of a particle (like its position or momentum) for sure.
Instead, a mathematical function called the wavefunction provides information about the probability with which a particle has a given property. For example, the wavefunction can tell you what the probability is that a particle can be found in a certain location, but it can't tell you where it is for sure. Because of this uncertainty and other factors, you cannot use classical mechanics (the physics that describe how large objects move) to predict the motion of quantum particles. On the left, a plastic thermometer is under a bright heat lamp. This infrared radiation warms but does not damage the thermometer. On the right, another plastic thermometer gets hit by a low intensity ultraviolet light. This radiation damages but does not warm the thermometer. Ultraviolet light is higher in frequency than violet light, such that it is not even in the visible light range. Each photon in the ultraviolet range has a lot of energy, enough to hurt skin cells and cause a sunburn. In fact, most forms of sunburn are not caused by heat; they are caused by the high energy of the sun's UV rays damaging your skin cells. Even higher frequencies of light (or electromagnetic radiation) can penetrate deeper into the body and cause even more damage. X-rays have so much energy that they can go deep into the human body and kill cells. Humans cannot see or feel ultraviolet light or x-rays. They may only know they have been under such high frequency light when they get a radiation burn. Areas where it is important to kill germs often use ultraviolet lamps to destroy bacteria, fungi, etc. X-rays are sometimes used to kill cancer cells. Quantum mechanics started when it was discovered that if a particle has a certain frequency, it must also have a certain amount of energy. Energy is proportional to frequency (E ∝ f). The higher the frequency, the more energy a photon has, and the more damage it can do. Quantum mechanics later grew to explain the internal structure of atoms. 
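The rule that a photon's energy is proportional to its frequency (and so inversely proportional to its wavelength) can be sketched in a few lines of Python. The constants are the standard values; the function name is only for illustration:

```python
# E = h * f, and f = c / wavelength, so E = h * c / wavelength.
h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, meters per second

def photon_energy(wavelength_m):
    """Energy in joules of one photon with the given wavelength."""
    return h * c / wavelength_m

uv = photon_energy(400e-9)   # a 400 nm (violet/ultraviolet) photon
ir = photon_energy(700e-9)   # a 700 nm (red/infrared) photon
print(uv > ir)               # True: shorter wavelength means more energy
```

This is why an ultraviolet photon can damage a skin cell while an infrared photon can only warm it: each single UV photon simply carries more energy.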
Quantum mechanics also explains the way that a photon can interfere with itself, and many other things never imagined in classical physics. Max Planck discovered the relationship between frequency and energy. Nobody before had ever guessed that frequency is directly proportional to energy (this means that as one of them doubles, the other does, too). Under what are called natural units, the number representing the frequency of a photon would also represent its energy. The equation would then be:

E = f

meaning energy equals frequency. But the way physics grew, there was no natural connection between the units that were used to measure energy and the units commonly used to measure time (and therefore frequency). So the formula that Planck worked out to make the numbers all come out right was:

E = h × f

or, energy equals h times frequency. This h is a number called Planck's constant after its discoverer. Quantum mechanics is based on the knowledge that a photon of a certain frequency means a photon of a certain amount of energy. Besides that relationship, a specific kind of atom can only give off certain frequencies of radiation, so it can also only give off photons that have certain amounts of energy.

(Figure: the double-slit experiment — light goes from the light source at left to fringes at the right.)
(Figure: the photoelectric effect — photons hit metal and electrons are pushed away.)

Isaac Newton thought that light was made of very small things that we would now call particles (he referred to them as "corpuscles"). Christiaan Huygens thought that light was made of waves. Scientists thought that a thing cannot be a particle and a wave at the same time. Scientists did experiments to find out whether light was made of particles or waves. They found out that both ideas were right: light was somehow both waves and particles. The double-slit experiment performed by Thomas Young showed that light must act like a wave.
The photoelectric effect discovered by Albert Einstein proved that light had to act like particles that carried specific amounts of energy, and that the energies were linked to their frequencies. This experimental result is called the "wave–particle duality" in quantum mechanics. Later, physicists found out that everything behaves both like a wave and like a particle, not just light. However, this effect is much smaller in large objects. Here are some of the people who discovered the basic parts of quantum mechanics: Max Planck, Albert Einstein, Satyendra Nath Bose, Niels Bohr, Louis de Broglie, Max Born, Paul Dirac, Werner Heisenberg, Wolfgang Pauli, Erwin Schrödinger, John von Neumann, and Richard Feynman. They did their work in the first half of the 20th century.

Beyond Planck

(Figure: visible light given off by glowing hydrogen; wavelengths in nanometers.)

Quantum mechanics formulae and ideas were made to explain the light that comes from glowing hydrogen. The quantum theory of the atom also had to explain why the electron stays in its orbit, which other ideas were not able to explain. It followed from the older ideas that the electron would have to fall in to the center of the atom, because it starts out being kept in orbit by its own energy, but it would quickly lose its energy as it revolves in its orbit. (This is because electrons and other charged particles were known to emit light and lose energy when they changed speed or turned.) Hydrogen lamps work like neon lights, but neon lights have their own unique group of colors (and frequencies) of light. Scientists learned that they could identify all elements by the light colors they produce. They just could not figure out how the frequencies were determined. Then, a Swiss mathematician named Johann Balmer figured out an equation that told what λ (lambda, for wavelength) would be:

λ = B (n² / (n² − 4))

where B is a number that Balmer determined to be equal to 364.56 nm.
This equation only worked for the visible light from a hydrogen lamp. But later, the equation was made more general:

1/λ = R (1/m² − 1/n²)

where R is the Rydberg constant, equal to 0.0110 nm⁻¹, and n must be greater than m. Putting in different numbers for m and n, it is easy to predict frequencies for many types of light (ultraviolet, visible, and infrared). To see how this works, go to Hyperphysics and go down past the middle of the page. (Use H = 1 for hydrogen.) In 1908, Walter Ritz made the Ritz combination principle, which shows how certain gaps between frequencies keep repeating themselves. This turned out to be important to Werner Heisenberg several years later. In 1905, Albert Einstein used Planck's idea to show that a beam of light is made up of a stream of particles called photons. The energy of each photon depends on its frequency. Einstein's idea is the beginning of the idea in quantum mechanics that all subatomic particles like electrons, protons, neutrons, and others are both waves and particles at the same time. (See picture of atom with the electron as waves at atom.) This led to a theory about subatomic particles and electromagnetic waves called wave–particle duality, in which particles and waves were neither one nor the other, but had certain properties of both.

(Figure: an electron falls to a lower orbit and a photon is created.)

In 1913, Niels Bohr came up with the idea that electrons could only take up certain orbits around the nucleus of an atom. Under Bohr's theory, the numbers called m and n in the equation above could represent orbits. Bohr's theory said electrons could begin in some orbit m and end up in some orbit n, or vice versa: if a photon hits an electron, its energy will be absorbed, and the electron will move to a higher orbit because of that extra energy. Under Bohr's theory, if an electron falls from a higher orbit to a lower orbit, then it will have to give up energy in the form of a photon.
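The generalized Balmer–Rydberg rule, 1/λ = R(1/m² − 1/n²), can be evaluated directly. A small sketch, using the rounded value of R given in the text (so the results are approximate):

```python
# 1/lambda = R * (1/m^2 - 1/n^2), with n > m and R in nm^-1.
R = 0.0110  # nm^-1, rounded value from the text

def hydrogen_line_nm(m, n):
    """Approximate wavelength (nm) for an electron jump from orbit n to m."""
    return 1.0 / (R * (1.0 / m**2 - 1.0 / n**2))

# The visible (Balmer) series has m = 2:
for n in range(3, 7):
    print(n, round(hydrogen_line_nm(2, n)))
```

With this rounded R the loop gives roughly 655, 485, 433 and 409 nm, close to the observed hydrogen lines at about 656, 486, 434 and 410 nm.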
The energy of the photon will equal the energy difference between the two orbits, and the energy of a photon makes it have a certain frequency and color. Bohr's theory provided a good explanation of many aspects of subatomic phenomena, but failed to answer why each of the colors of light produced by glowing hydrogen (and by glowing neon or any other element) has a brightness of its own, and why the brightness differences are always the same for each element. By the time Niels Bohr came out with his theory, most things about the light produced by a hydrogen lamp were known, but scientists still could not explain the brightness of each of the lines produced by glowing hydrogen.

(Figure: spaced-out intensities in arbitrary units.)

Werner Heisenberg took on the job of explaining the brightness or "intensity" of each line. He could not use any simple rule like the one Balmer had come up with. He had to use the very difficult math of classical physics that figures everything out in terms of things like the mass (weight) of an electron, the charge (static electric strength) of an electron, and other tiny quantities. Classical physics already had answers for the brightness of the bands of color that a hydrogen lamp produces, but the classical theory said that there should be a continuous rainbow, and not four separate color bands. Heisenberg's explanation is: There is some law that says what frequencies of light glowing hydrogen will produce. It has to predict spaced-out frequencies when the electrons involved are moving between orbits close to the nucleus (center) of the atom, but it also has to predict that the frequencies will get closer and closer together as we look at what the electron does in moving between orbits farther and farther out. It will also predict that the intensity differences between frequencies get closer and closer together as we go out.
Where classical physics already gives the right answers by one set of equations, the new physics has to give the same answers, but by different equations. Classical physics uses the methods of the French mathematician Fourier to make a math picture of the physical world. It uses collections of smooth curves that go together to make one smooth curve that gives, in this case, intensities for light of all frequencies from some light. But that is not right, because the smooth curve only appears at higher frequencies. At lower frequencies, there are always isolated points, and nothing connects the dots. So, to make a map of the real world, Heisenberg had to make a big change. He had to do something to pick out only the numbers that would match what was seen in nature. Sometimes people say he "guessed" these equations, but he was not making blind guesses. He found what he needed. The numbers that he calculated would put dots on a graph, but there would be no line drawn between the dots. And making one "graph" just of dots for every set of calculations would have wasted lots of paper and not have gotten anything done. Heisenberg found a way to efficiently predict the intensities for different frequencies and to organize that information in a helpful way. Just using the empirical rule given above, the one that Balmer got started and Rydberg improved, we can see how to get one set of numbers that would help Heisenberg get the kind of picture that he wanted: The rule says that when the electron moves from one orbit to another, it either gains or loses energy, depending on whether it is getting farther from the center or nearer to it. So we can put these orbits or energy levels in as headings along the top and the side of a grid. For historical reasons the lowest orbit is called n, and the next orbit out is called n − a, then comes n − b, and so forth.
It is confusing that they used negative numbers when the electrons were actually gaining energy, but that is just the way it is. Since the Rydberg rule gives us frequencies, we can use that rule to put in numbers depending on where the electron goes. If the electron starts at n and ends up at n, then it has not really gone anywhere, so it did not gain energy and it did not lose energy. So the frequency is 0. If the electron starts at n − a and ends up at n, then it has fallen from a higher orbit to a lower orbit. If it does so, then it loses energy, and the energy it loses shows up as a photon. The photon has a certain amount of energy, e, and that is related to a certain frequency f by the equation e = hf. So we know that a certain change of orbit is going to produce a certain frequency of light, f. If the electron starts at n and ends up at n − a, that means it has gone from a lower orbit to a higher orbit. That only happens when a photon of a certain frequency and energy comes in from the outside, is absorbed by the electron and gives it its energy, and that is what makes the electron go out to a higher orbit. So, to keep everything making sense, we write that frequency as a negative number. There was a photon with a certain frequency and now it has been taken away. So we can make a grid like this, where f(a←b) means the frequency involved when an electron goes from energy state (orbit) b to energy state a (again, the sequences look backwards, but that is the way they were originally written):

Grid of f
State   n            n-a            n-b            n-c           ...
n       f(n←n)       f(n←n-a)       f(n←n-b)       f(n←n-c)      ...
n-a     f(n-a←n)     f(n-a←n-a)     f(n-a←n-b)     f(n-a←n-c)    ...
n-b     f(n-b←n)     f(n-b←n-a)     f(n-b←n-b)     f(n-b←n-c)    ...

Heisenberg did not make the grids like this. He just did the math that would let him get the intensities he was looking for. But to do that he had to multiply two amplitudes (how high a wave measures) to work out the intensity. (In classical physics, intensity equals amplitude squared.)
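The grid of transition frequencies can be generated from a list of energy levels: each entry is just the energy difference between two states. A minimal sketch in units where h = 1 (so energy and frequency are the same number); the level values here are made up purely for illustration:

```python
# Each grid entry f(a<-b) is the energy lost or gained jumping from b to a.
levels = {"n": 1.0, "n-a": 3.0, "n-b": 6.0}   # illustrative energies, h = 1

def f(to_state, from_state):
    """Frequency of the photon for a jump from from_state to to_state."""
    return levels[from_state] - levels[to_state]

print(f("n", "n"))        # 0.0 -- no jump, no photon
print(f("n", "n-a"))      # positive: the electron falls and emits a photon
print(f("n-a", "n"))      # negative: a photon is absorbed instead
# Ritz combination principle: frequencies add along a chain of jumps.
print(f("n", "n-b") == f("n", "n-a") + f("n-a", "n-b"))   # True
```

The last line is the Ritz combination principle falling out automatically: because every frequency is a difference of two level energies, frequencies along a chain of jumps always add up.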
He made an odd-looking equation to handle this problem, wrote out the rest of his paper, handed it to his boss, and went on vacation. Dr. Born looked at his funny equation, and it seemed a little crazy. He must have wondered, "Why did Heisenberg give me this strange thing? Why does he have to do it this way?" Then he realized that he was looking at a blueprint for something he already knew very well. He was used to calling a grid or table like the one we could write by doing all the math for frequencies a matrix. And Heisenberg's weird equation was a rule for multiplying two of them together. Max Born was a very, very good mathematician. He knew that since the two matrices (grids) being multiplied represented different things (like position (x,y,z) and momentum (mv), for instance), then when you multiply the first matrix by the second you get one answer, and when you multiply the second matrix by the first matrix you get another answer. Even though he did not know about matrix math, Heisenberg already saw this "different answers" problem, and it had bothered him. But Dr. Born was such a good mathematician that he saw that the difference between the first matrix multiplication and the second matrix multiplication was always going to involve Planck's constant, h, multiplied by the square root of negative one, i. So within a few days of Heisenberg's discovery, they already had the basic math for what Heisenberg liked to call the "indeterminacy principle." By "indeterminate" Heisenberg meant that something like an electron is just not pinned down until it gets pinned down. It is a little like a jellyfish that is always squishing around and cannot be "in one place" unless you kill it. Later, people got in the habit of calling it "Heisenberg's uncertainty principle," which made many people make the mistake of thinking that electrons and things like that are really "somewhere" but we are just uncertain about it in our own minds. That idea is wrong.
It is not what Heisenberg was talking about. Having trouble measuring something is a problem, but it is not the problem Heisenberg was talking about. Heisenberg's idea is very hard to grasp, but we can make it clearer with an example. First, we will start calling these grids "matrices," because we will soon need to talk about matrix multiplication. Suppose that we start with two kinds of measurements, position (q) and momentum (p). In 1925, Heisenberg wrote an equation like this one:

pq − qp = h / (2πi)

(the equation for the conjugate variables momentum and position). He did not know it, but this equation gives a blueprint for writing out two matrices (grids) and for multiplying them. The rules for multiplying one matrix by another are a little messy, but here are the two matrices according to the blueprint, and then their product:

Matrix of p
State   n-a            n-b            n-c           ...
n       p(n←n-a)       p(n←n-b)       p(n←n-c)      ...
n-a     p(n-a←n-a)     p(n-a←n-b)     p(n-a←n-c)    ...
n-b     p(n-b←n-a)     p(n-b←n-b)     p(n-b←n-c)    ...

Matrix of q
State   n-b            n-c            n-d           ...
n-a     q(n-a←n-b)     q(n-a←n-c)     q(n-a←n-d)    ...
n-b     q(n-b←n-b)     q(n-b←n-c)     q(n-b←n-d)    ...
n-c     q(n-c←n-b)     q(n-c←n-c)     q(n-c←n-d)    ...

The matrix for the product of the above two matrices, as specified by the relevant equation in Heisenberg's 1925 paper, is:

State   n-b     n-c     n-d     ...
n       A       ...     ...     ...
n-a     ...     B       ...     ...
n-b     ...     ...     C       ...

and so forth. If the matrices were multiplied in the reverse order, different values would result in each entry, and so forth. Note how changing the order of multiplication changes the numbers, step by step, that are actually multiplied.

Beyond Heisenberg

The work of Werner Heisenberg seemed to break a log jam. Very soon, many different other ways of explaining things came from people such as Louis de Broglie, Max Born, Paul Dirac, Wolfgang Pauli, and Erwin Schrödinger. The work of each of these physicists is its own story.
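The order-dependence of matrix multiplication that Heisenberg ran into is easy to demonstrate with two small matrices. The numbers below are arbitrary stand-ins, chosen only to show the order effect; a sketch using NumPy:

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # stand-in for the "momentum" grid
Q = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # stand-in for the "position" grid

PQ = P @ Q
QP = Q @ P
print(np.array_equal(PQ, QP))   # False: multiplying in the other order
print(PQ - QP)                  # gives a different answer; the difference
                                # (the "commutator") is not zero
```

For ordinary numbers the two products would always be equal; for matrices they generally are not, and it is exactly this nonzero difference that Born recognized as h multiplied by i in Heisenberg's scheme.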
The math used by Heisenberg and earlier people is not very hard to understand, but the equations quickly grew very complicated as physicists looked more deeply into the atomic world.

Further mysteries

In the early days of quantum mechanics, Albert Einstein suggested that if it were right, then quantum mechanics would mean that there would be "spooky action at a distance." It turned out that quantum mechanics was right, and that what Einstein had used as a reason to reject quantum mechanics actually happened. This kind of "spooky connection" between certain quantum events is now called "quantum entanglement." When an experiment brings two things (photons, electrons, etc.) together, they must then share a common description in quantum mechanics. When they are later separated, they keep the same quantum mechanical description or "state."

(Figure: two entangled particles are separated, one on Earth and one taken to some distant planet. Measuring one of them forces it to "decide" which role to take, and the other one must then take the other role whenever, after that, it is measured. One characteristic, e.g. "up" spin, is drawn in red, and its mate, e.g. "down" spin, is drawn in blue. The purple band means that when, e.g., two electrons are put together, the pair shares both characteristics.)

So both electrons could show either up spin or down spin. When they are later separated, one remaining on Earth and one going to some planet of the star Alpha Centauri, they still each have both spins. In other words, each one of them can "decide" to show itself as a spin-up electron or a spin-down electron. But if later on someone measures the other one, it must "decide" to show itself as having the opposite spin. Einstein argued that over such a great distance it was crazy to think that forcing one electron to show its spin would then somehow make the other electron show an opposite characteristic.
He said that the two electrons must have been spin-up or spin-down all along, but that quantum mechanics could not predict which characteristic each electron had. Being unable to predict, only being able to look at one of them with the right experiment, meant that quantum mechanics could not account for something important. Therefore, Einstein said, quantum mechanics had a big hole in it. Quantum mechanics was incomplete. Later, it turned out that experiments showed that it was Einstein who was wrong.[1]

Heisenberg uncertainty principle

In 1927, Werner Heisenberg described the uncertainty principle, which says that the more we know about where a particle is, the less we can know about how fast it is going and in which direction. In other words, the more we know about the speed and direction of something small, the less we can know about its position. Physicists usually talk about the momentum in such discussions instead of talking about speed. Momentum is just the speed of something in a certain direction times its mass. The reasoning behind Heisenberg's uncertainty principle is that we can never know both the location and the momentum of a particle. Because light is an abundant particle, it is used for measuring other particles. The only way to measure a particle is to bounce a light wave off of it and record the results. If a high-energy, or high-frequency, light beam is used, we can tell precisely where the particle is, but cannot tell how fast it was going. This is because the high-energy photon transfers energy to the particle and changes the particle's speed. If we use a low-energy photon, we can tell how fast the particle is going, but not where it is. This is because we are using light with a longer wavelength. The longer wavelength means the particle could be anywhere along the stretch of the wave. The principle also says that there are many pairs of measurements for which we cannot know both of them about any particle (a very small thing), no matter how hard we try.
The more we learn about one of such a pair, the less we can know about the other. Even Albert Einstein had trouble accepting such a bizarre concept, and in a well-known debate said, "God does not play dice." To this, Danish physicist Niels Bohr famously responded, "Einstein, don't tell God what to do."

Uses of quantum mechanics

Electrons surround every atom's nucleus. Chemical bonds link atoms to form molecules. A chemical bond links two atoms when electrons are shared between those atoms. Thus quantum mechanics is the physics of the chemical bond and of chemistry. Quantum mechanics helps us understand how molecules are made, and what their properties are.[2] Quantum mechanics can also help us understand big things, such as stars and even the whole universe. Quantum mechanics is a very important part of the theory of how the universe began, called the Big Bang. Everything made of matter is attracted to other matter because of a fundamental force called gravity. Einstein's theory that explains gravity is called the theory of general relativity. A problem in modern physics is that some conclusions of quantum mechanics do not seem to agree with the theory of general relativity. Quantum mechanics is the part of physics that can explain why all electronic technology works as it does. Thus quantum mechanics explains how computers work, because computers are electronic machines. But the designers of the early computer hardware of around 1950 or 1960 did not need to think about quantum mechanics. The designers of radios and televisions at that time did not think about quantum mechanics either. However, the design of the more powerful integrated circuits and computer memory technologies of recent years does require quantum mechanics.
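The trade-off in the uncertainty principle discussed above can be put in numbers. In its modern form it says Δx · Δp ≥ ħ/2. A minimal sketch, using the standard value of the reduced Planck constant (the function name is made up for illustration):

```python
hbar = 1.055e-34  # reduced Planck constant, joule-seconds

def min_momentum_spread(delta_x):
    """Smallest momentum uncertainty allowed once position is known to delta_x."""
    return hbar / (2.0 * delta_x)

# Confining an electron to about the size of an atom (0.1 nm)...
print(min_momentum_spread(1e-10))
# ...and confining it ten times more tightly: the spread grows tenfold.
print(min_momentum_spread(1e-11))
```

The numbers are tiny in everyday terms, which is why we never notice the effect for baseballs or planets, but for an electron inside an atom this momentum spread is enormous compared to its typical momentum.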
Quantum mechanics has also made possible technologies such as:

Why quantum mechanics is hard to learn

Quantum mechanics is a challenging subject for several reasons:
• Quantum mechanics explains things in very different ways from what we learn about the world when we are children.
• Understanding quantum mechanics requires more mathematics than algebra and simple calculus. It also requires matrix algebra, complex numbers, probability theory, and partial differential equations.
• Physicists are not sure what some of the equations of quantum mechanics tell us about the real world.
• Quantum mechanics suggests that atoms and subatomic particles behave in strange ways, completely unlike anything we see in our everyday lives.
• Quantum mechanics describes things that are extremely small, so we cannot see some of them without special equipment, and we cannot see many of them at all.

Quantum mechanics describes nature in a way that is different from how we usually think about science. It tells us how likely some things are to happen, rather than telling us that they certainly will happen. One example is Young's double-slit experiment. If we shoot single photons (single units of light) from a laser at a sheet of photographic film, we will see a single spot of light on the developed film. If we put a sheet of metal in between, and make two very narrow slits in the sheet, when we fire many photons at the metal sheet and they have to go through the slits, then we will see something remarkable. All the way across the sheet of developed film we will see a series of bright and dark bands. We can use mathematics to tell exactly where the bright bands will be and how bright the light was that made them; that is, we can tell ahead of time how many photons will fall on each band. But if we slow the process down and see where each photon lands on the screen, we can never tell ahead of time where the next one will show up.
We can know for sure that it is most likely that a photon will hit the center bright band, and that it gets less and less likely that a photon will show up at bands farther and farther from the center. So we know for sure that the bands will be brightest at the center and get dimmer and dimmer farther away. But we never know for sure which photon will go into which band. One of the strange conclusions of quantum mechanics theory is the "Schrödinger's cat" effect. Certain properties of a particle, such as its position, speed of motion, direction of motion, and "spin," cannot be talked about until something measures them (a photon bouncing off of an electron would count as a measurement of its position, for example). Before the measurement, the particle is in a "superposition of states," in which its properties have many values at the same time. Schrödinger said that quantum mechanics seemed to say that if something (such as the life or death of a cat) was determined by a quantum event, then its state would be determined by the state that resulted from the quantum event, but only at the time that somebody looked at the state of the quantum event. In the time before the state of the quantum event is looked at, perhaps "the living and dead cat (pardon the expression) [are] mixed or smeared out in equal parts."[3]

Reduced Planck's constant

People often use the symbol ħ, which is called "h-bar":

ħ = h / 2π

H-bar is a unit of angular momentum. When this new unit is used to describe the orbits of electrons in atoms, the angular momentum of any electron in orbit is always a whole number.[4]

The particle in a 1-dimensional well is the simplest example showing that the energy of a particle can only have specific values. The energy is said to be "quantized." The well has zero potential energy inside a range and has infinite potential energy everywhere outside that range.
For the 1-dimensional case in the x direction, the time-independent Schrödinger equation can be written as:[5]

−(ħ² / 2m) (d²ψ / dx²) = E ψ

Using differential equations, we can figure out that the solution ψ can be written as

ψ = C sin(kx) + D cos(kx)

or as a sum of complex exponentials (by Euler's formula), where k = √(2mE) / ħ. The walls of the box mean that the wavefunction must have a special form. The wavefunction of the particle must be zero anywhere the walls are infinitely tall. At each wall:

Consider x = 0:
• sin 0 = 0 and cos 0 = 1, so to satisfy ψ(0) = 0 the cos term has to be removed. Hence D = 0.

Now consider x = L, the other wall:
• At x = L, ψ(L) = C sin(kL) = 0.
• If C = 0, then ψ = 0 for all x. This solution is not useful.
• Therefore sin(kL) = 0 must be true, giving us kL = nπ.

We can see that n must be an integer. This means that the particle can only have special energy values and cannot have the energy values in between. This is an example of energy "quantization."

Related pages

• Feynman, Richard, 1985. The Strange Theory of Light and Matter. Princeton University Press.
• McEvoy, J.P. and Oscar Zarate, 1996. Introducing Quantum Theory. Icon Books.

1. For an overview of the whole issue of entanglement, see J.P. McEvoy and Oscar Zarate, Introducing Quantum Theory, pp. 168–170.
2. For a good foundation see The Nature of the Chemical Bond, by Linus Pauling.
3. Schrödinger: "The Present Situation in Quantum Mechanics," p. 8 of 22.
4. Derivation of particle in a box, chemistry.tidalswan.com — wait, see note 5.
5. Derivation of particle in a box, chemistry.tidalswan.com

More reading

• Cox, Brian; & Forshaw, Jeff (2011). The Quantum Universe: Everything That Can Happen Does Happen. Allen Lane. ISBN 978-1-84614-432-5
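The quantization derived in the particle-in-a-box section gives the closed form E_n = n²π²ħ²/(2mL²). A quick numerical check; the constants are standard values, and the 1 nm well width is an arbitrary choice for illustration:

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
m_e = 9.109e-31    # electron mass, kg

def box_energy(n, L, m=m_e):
    """Allowed energy E_n = n^2 * pi^2 * hbar^2 / (2 m L^2) of level n."""
    return (n ** 2) * (math.pi ** 2) * (hbar ** 2) / (2.0 * m * L ** 2)

L = 1e-9  # a 1 nanometer well
# Only these levels exist; because kL = n*pi, the energies scale as n squared:
print(box_energy(2, L) / box_energy(1, L))   # 4.0
print(box_energy(3, L) / box_energy(1, L))   # close to 9
```

Because the energies grow as n², the gaps between neighboring levels get wider and wider, with no allowed energies in between.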
The transactional interpretation of quantum mechanics (TIQM) takes the psi and psi* wave functions of the standard quantum formalism to be retarded (forward in time) and advanced (backward in time) waves that form a quantum interaction as a Wheeler–Feynman handshake or transaction. It was first proposed in 1986 by John G. Cramer, who argues that it helps in developing intuition for quantum processes. He also suggests that it avoids the philosophical problems with the Copenhagen interpretation and the role of the observer, and also resolves various quantum paradoxes.[1][2][3] TIQM formed a minor plot point in his science fiction novel Einstein's Bridge. More recently, he has also argued TIQM to be consistent with the Afshar experiment, while claiming that the Copenhagen interpretation and the many-worlds interpretation are not.[4] The existence of both advanced and retarded waves as admissible solutions to Maxwell's equations was explored in the Wheeler–Feynman absorber theory. Cramer revived their idea of two waves for his transactional interpretation of quantum theory. While the ordinary Schrödinger equation does not admit advanced solutions, its relativistic version does, and these advanced solutions are the ones used by TIQM. In TIQM, the source emits a usual (retarded) wave forward in time, but it also emits an advanced wave backward in time; furthermore, the receiver, who is later in time, also emits an advanced wave backward in time and a retarded wave forward in time. A quantum event occurs when a "handshake" exchange of advanced and retarded waves triggers the formation of a transaction in which energy, momentum, angular momentum, etc. are transferred. The quantum mechanism behind transaction formation has been demonstrated explicitly for the case of a photon transfer between atoms in Sect. 5.4 of Carver Mead's book Collective Electrodynamics.
In this interpretation, the collapse of the wavefunction does not happen at any specific point in time, but is "atemporal" and occurs along the whole transaction, and the emission/absorption process is time-symmetric. The waves are seen as physically real, rather than a mere mathematical device to record the observer's knowledge as in some other interpretations of quantum mechanics. Philosopher and writer Ruth Kastner argues that the waves exist as possibilities outside of physical spacetime and that therefore it is necessary to accept such possibilities as part of reality.[5] Cramer has used TIQM in teaching quantum mechanics at the University of Washington in Seattle.

Advances over previous interpretations

TIQM is explicitly non-local and, as a consequence, logically consistent with counterfactual definiteness (CFD), the minimum realist assumption.[2] As such it incorporates the non-locality demonstrated by the Bell test experiments and eliminates the observer-dependent reality that has been criticized as part of the Copenhagen interpretation. The key advance over Everett's Relative State Interpretation[6] is to regard the conjugate state vector of the Dirac formalism as ontologically real, incorporating a part of the formalism that, prior to TIQM, had been interpretationally neglected. Having interpreted the conjugate state vector as an advanced wave, it is shown that the origins of the Born rule follow naturally from the description of a transaction.[2] The transactional interpretation is superficially similar to the two-state vector formalism (TSVF)[7] which has its origin in work by Yakir Aharonov, Peter Bergmann and Joel Lebowitz of 1964.[8][9] However, it has important differences—the TSVF is lacking the confirmation and therefore cannot provide a physical referent for the Born Rule (as TI does). 
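The Born-rule claim above has a simple arithmetic core. The following toy sketch (my own illustration, not taken from Cramer's or Kastner's papers) shows that if the offer wave carries the amplitude psi and the confirmation carries its conjugate psi*, the weight of a completed handshake is automatically the Born probability |psi|²:

```python
# Toy illustration: the Born rule as the product of a retarded "offer"
# amplitude psi and an advanced "confirmation" amplitude psi* (conjugate).
import cmath

def transaction_weight(psi: complex) -> float:
    """Weight of an offer/confirmation handshake: psi * psi^* = |psi|^2."""
    offer = psi
    confirmation = psi.conjugate()   # the advanced wave carries psi*
    weight = offer * confirmation    # always real and non-negative
    return weight.real

# A normalized two-outcome situation: amplitudes for reaching one absorber
# versus another. The phases drop out; only the moduli matter.
a = cmath.exp(1j * 0.7) * 0.6
b = cmath.exp(-1j * 1.2) * 0.8
assert abs(transaction_weight(a) - 0.36) < 1e-12
assert abs(transaction_weight(b) - 0.64) < 1e-12
assert abs(transaction_weight(a) + transaction_weight(b) - 1.0) < 1e-12
```

Note the weights sum to one without any extra postulate; that is the sense in which the Born rule is claimed to be a result rather than an independent assumption.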
Kastner has criticized some other time-symmetric interpretations, including TSVF, as making ontologically inconsistent claims.[10] Kastner has developed a new Relativistic Transactional Interpretation (RTI), also called Possibilist Transactional Interpretation (PTI), in which space-time itself emerges by way of transactions. It has been argued that this relativistic transactional interpretation can provide the quantum dynamics for the causal sets program.[11] In 1996, Tim Maudlin proposed a thought experiment involving Wheeler's delayed choice experiment that is generally taken as a refutation of TIQM.[12] However, Kastner showed Maudlin's argument is not fatal for TIQM.[13][14] In his book, The Quantum Handshake, Cramer has added a hierarchy to the description of pseudo-time to deal with Maudlin's objection and has pointed out that some of Maudlin's arguments are based on the inappropriate application of Heisenberg's knowledge interpretation to the transactional description.[15] The Transactional Interpretation faces criticisms. The following is a partial list, with some replies:
1. "TI does not generate new predictions / is not testable / has not been tested." TI is an exact interpretation of QM and so its predictions must be the same as QM. Like the many-worlds interpretation (MWI), TI is a "pure" interpretation in that it does not add anything ad hoc but provides a physical referent for a part of the formalism that has lacked one (the advanced states implicitly appearing in the Born rule). Thus the demand often placed on TI for new predictions or testability is a mistaken one that misconstrues the project of interpretation as one of theory modification.[16]
2. "It is not made clear where in spacetime a transaction occurs." One clear account is given in Cramer (1986), which pictures a transaction as a four-vector standing wave whose endpoints are the emission and absorption events.[17]
3. "Maudlin (1996, 2002) has demonstrated that TI is inconsistent." 
Maudlin's probability criticism confused the transactional interpretation with Heisenberg's knowledge interpretation. However, he raised a valid point concerning causally connected possible outcomes, which led Cramer to add hierarchy to the pseudo-time description of transaction formation.[18][13][19][20][21] Kastner has extended TI to the relativistic domain, and in light of this expansion of the interpretation, it can be shown that the Maudlin Challenge cannot even be mounted, and is therefore nullified; there is no need for the 'hierarchy' proposal of Cramer.[22] Maudlin has also claimed that all the dynamics of TI is deterministic and therefore there can be no 'collapse.' But this appears to disregard the response of absorbers, which is the whole innovation of the model. Specifically, the linearity of the Schrödinger evolution is broken by the response of absorbers; this directly sets up the non-unitary measurement transition, without any need for ad hoc modifications to the theory. The non-unitarity is discussed, for example in Chapter 3 of Kastner's book The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility (CUP, 2012).[23] 4. "It is not clear how the transactional interpretation handles the quantum mechanics of more than one particle." This issue is addressed in Cramer's 1986 paper, in which he gives many examples of the application of TIQM to multi-particle quantum systems. 
However, if the question is about the existence of multi-particle wave functions in normal 3D space, Cramer's 2015 book goes into some detail in justifying multi-particle wave functions in 3D space.[24] A criticism of Cramer's 2015 account of dealing with multi-particle quantum systems is found in Kastner 2016, "An Overview of the Transactional Interpretation and its Evolution into the 21st Century," Philosophy Compass (2016).[25] It observes in particular that the account in Cramer 2015 is necessarily anti-realist about the multi-particle states: if they are only part of a 'map,' then they are not real, and in this form TI becomes an instrumentalist interpretation, contrary to its original spirit. Thus the so-called "retreat" to Hilbert space (criticized also below in the lengthy discussion of note [24]) can instead be seen as a needed expansion of the ontology, rather than a retreat to anti-realism/instrumentalism about the multi-particle states. The vague statement (under [24]) that "Offer waves are somewhat ephemeral three-dimensional space objects" indicates the lack of clear definition of the ontology when one attempts to keep everything in 3+1 spacetime.

See also
• Quantum entanglement
• Quantum nonlocality
• Wheeler–Feynman absorber theory

References
Cramer, John (July 2009). "Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics. 58 (3): 795–798. doi:10.1007/978-3-540-70626-7_223. ISBN 978-3-540-70622-9.
Cramer, John G. (July 1986). "The Transactional Interpretation of Quantum Mechanics". Reviews of Modern Physics. 58 (3): 647–688. Bibcode:1986RvMP...58..647C. doi:10.1103/RevModPhys.58.647.
Cramer, John G. (February 1988). "An Overview of the Transactional Interpretation" (PDF). International Journal of Theoretical Physics. 27 (2): 227–236. Bibcode:1988IJTP...27..227C. doi:10.1007/BF00670751.
Cramer, John G. (December 2005). "A Farewell to Copenhagen?". Analog. The Alternate View. Dell Magazines. 
George Musser and Ruth Kastner; "Can We Resolve Quantum Paradoxes by Stepping Out of Space and Time?", Scientific American blog, June 21, 2013.
Everett, Hugh (July 1957). "Relative State Formulation of Quantum Mechanics" (PDF). Reviews of Modern Physics. 29 (3): 454–462. Bibcode:1957RvMP...29..454E. doi:10.1103/RevModPhys.29.454.
Avshalom C. Elitzur, Eliahu Cohen: The Retrocausal Nature of Quantum Measurement Revealed by Partial and Weak Measurements, AIP Conf. Proc. 1408: Quantum Retrocausation: Theory and Experiment (13–14 June 2011, San Diego, California), pp. 120–131, doi:10.1063/1.3663720.
Aharonov, Yakir; Bergmann, Peter G.; Lebowitz, Joel L. (1964-06-22). "Time Symmetry in the Quantum Process of Measurement". Physical Review. American Physical Society (APS). 134 (6B): 1410–1416. doi:10.1103/physrev.134.b1410. ISSN 0031-899X.
Yakir Aharonov, Lev Vaidman: Protective measurements of two-state vectors, in: Robert Sonné Cohen, Michael Horne, John J. Stachel (eds.): Potentiality, Entanglement and Passion-At-A-Distance, Quantum Mechanical Studies for A. M. Shimony, Volume Two, 1997, ISBN 978-0792344537, pp. 1–8, p. 2.
Kastner, Ruth E. (2017). "Is there really 'retrocausation' in time-symmetric approaches to quantum mechanics?". AIP Conference Proceedings. 1841: 020002. arXiv:1607.04196. doi:10.1063/1.4982766.
Kastner, Ruth E. (August 2012). "The Possibilist Transactional Interpretation and Relativity". Foundations of Physics. 42 (8): 1094–1113. arXiv:1204.5227. Bibcode:2012FoPh...42.1094K. doi:10.1007/s10701-012-9658-4.
Maudlin, Tim (1996). Quantum Nonlocality and Relativity: Metaphysical Intimations of Modern Physics (1st ed.). Wiley-Blackwell. ISBN 978-1444331271.
Kastner, Ruth E. (May 2006). "Cramer's Transactional Interpretation and Causal Loop Problems". Synthese. 150 (1): 1–14. arXiv:quant-ph/0408109. doi:10.1007/s11229-004-6264-9.
Kastner, Ruth E. (2012). "On Delayed Choice and Contingent Absorber Experiments". ISRN Mathematical Physics. 2012 (1): 1–9. arXiv:1205.3258. Bibcode:2012arXiv1205.3258K. doi:10.5402/2012/617291.
Cramer, John G. (2016). The Quantum Handshake: Entanglement, Nonlocality and Transactions. Springer Science+Business Media. ISBN 978-3319246406.
The Quantum Handshake by John G. Cramer, p. 183: "No consistent interpretation of quantum mechanics can be tested experimentally, because each is an interpretation of the same quantum mechanical formalism, and the formalism makes the predictions. The Transactional Interpretation is an exact interpretation of the QM formalism. Like the Many-Worlds and the Copenhagen interpretations, the TI is a "pure" interpretation that does not add anything ad hoc, but does provide a physical referent for a part of the formalism that has lacked one (e.g. the advanced wave functions appearing in the Born probability rule and amplitude calculations). Thus the demand for new predictions or testability from an interpretation is based on a conceptual error by the questioner that misconstrues an interpretation as a modification of quantum theory. According to Occam's Razor, the hypothesis that introduces the fewest independent assumptions is to be preferred. The TI offers this advantage over its rivals, in that the Born probability rule is a result rather than an independent assumption."
The Quantum Handshake by John G. Cramer, p. 183: The TIQM "pictures a transaction as emerging from an offer-confirmation handshake as a four-vector standing wave in normal three-dimensional space with endpoints at the emission and absorption vertices. Kastner has predicted an alternative account of transaction formation in which the formation of a transaction is not a spatiotemporal process but one taking place on a level of possibility in a higher Hilbert space rather than in 3+1-dimensional spacetime."
Berkovitz, J. (2002). "On Causal Loops in the Quantum Realm," in T. Placek and J. Butterfield (Ed.), Proceedings of the NATO Advanced Research Workshop on Modality, Probability and Bell's Theorems, Kluwer, 233–255.
Marchildon, L. (2006). "Causal Loops and Collapse in the Transactional Interpretation of Quantum Mechanics". Physics Essays. 19 (3): 422–9. arXiv:quant-ph/0603018. Bibcode:2006PhyEs..19..422M. doi:10.4006/1.3025811.
The Quantum Handshake by John G. Cramer, p. 184: "Maudlin raised an interesting challenge for the Transactional Interpretation by pointing out a paradox that can be constructed when the non-detection of a slow particle moving in one direction modifies the detection configuration in another direction. This problem is dealt with by the TI ... by introducing a hierarchy in the order of the transactional formation ... Other solutions to the problem raised by Maudlin can be found in the references."
The Quantum Handshake by John G. Cramer, p. 184: Maudlin also made the claim, based on his assumption that the wave function is a representation of observer knowledge, that it must change when new information is made available. "That Heisenberg-inspired view is not a part of the Transactional Interpretation, and introducing it leads to a bogus probability argument. In the Transactional Interpretation, the offer wave does not magically change in mid-flight at the instant when new information becomes available, and its correct application leads to the correct calculation of probabilities that are consistent with observation."
Kastner, R. E. (2016). "The Relativistic Transactional Interpretation: Immune to the Maudlin Challenge". arXiv:1610.04609 [quant-ph].
Kastner, R. E. The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility (CUP, 2012).
The Quantum Handshake by John G. Cramer, p. 184. Cramer's earlier publications "provided many examples of the application of the TI to systems involving more than one particle. 
These include the Freedman-Clauser experiment, which describes a 2-photon transaction with three vertices, and the Hanbury-Brown-Twiss effect, which describes a 2-photon transaction with four vertices. [Other publications contain] many examples of more complicated multi-particle systems, including systems with both atoms and photons. But perhaps the question posed above is based on the belief that quantum mechanical wave functions for systems of more than one particle cannot exist in normal three-dimensional space and must be characterized instead as existing only in an abstract Hilbert space of many dimensions. Indeed, Kastner’s "Possibilist Transactional Interpretation" takes this point of view and describes transaction formation as ultimately appearing in 3D space but forming from the Hilbert-space wave functions. ... The "standard" Transactional Interpretation presented here, with its insights into the mechanism behind wave function collapse through transaction formation, provides a new view of the situation that makes the retreat to Hilbert space unnecessary. The offer wave for each particle can be considered as the wave function of a free (i.e., uncorrelated) particle and can be viewed as existing in normal three-dimensional space. The application of conservation laws and the influence of the variables of the other particles of the system on the particle of interest come not in the offer wave stage of the process but in the formation of the transactions. The transactions "knit together" the various otherwise independent particle wave functions that span a wide range of possible parameter values into a consistent ensemble, and only those wave function sub-components that are correlated to satisfy the conservation law boundary conditions at the transaction vertices are permitted to participate in this transaction formation. 
The "allowed zones" of Hilbert space arise from the action of transaction formation, not from constraints on the initial offer waves, i.e., particle wave functions. Thus, the assertion that the quantum wave functions of individual particles in a multi-particle quantum system cannot exist in ordinary three-dimensional space is a misinterpretation of the role of Hilbert space, the application of conservation laws, and the origins of entanglement. It confuses the "map" with the "territory". Offer waves are somewhat ephemeral three-dimensional space objects, but only those components of the offer wave that satisfy conservation laws and entanglement criteria are permitted to be projected into the final transaction, which also exists in three-dimensional space."
Kastner, R. E. (2016). "The Transactional Interpretation and its Evolution into the 21st Century: An Overview". arXiv:1608.00660 [quant-ph].

Further reading
John G. Cramer, The Quantum Handshake: Entanglement, Nonlocality and Transactions, Springer Verlag 2016, ISBN 978-3-319-24642-0.
Ruth E. Kastner, The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility, Cambridge University Press, 2012.
Ruth E. Kastner, Understanding Our Unseen Reality: Solving Quantum Riddles, Imperial College Press, 2015.
Tim Maudlin, Quantum Non-Locality and Relativity, Blackwell Publishers 2002, ISBN 0-631-23220-6 (discusses a gedanken experiment designed to refute the TIQM; this has been refuted in Kastner 2012, Chapter 5).
Carver A. Mead, Collective Electrodynamics: Quantum Foundations of Electromagnetism, 2000, ISBN 9780262133784.
John Gribbin, Schrödinger's Kittens and the Search for Reality: solving the quantum mysteries has an overview of Cramer's interpretation and says that "with any luck at all it will supersede the Copenhagen interpretation as the standard way of thinking about quantum physics for the next generation of scientists."

External links
Pavel V. Kurakin, George G. Malinetskii, How bees can possibly explain quantum paradoxes, Automates Intelligents (February 2, 2005). (This paper tells about a work attempting to develop TIQM further.)
Kastner has also applied TIQM to other quantum mechanical issues in [1] "The Transactional Interpretation, Counterfactuals, and Weak Values in Quantum Theory" and [2] "The Quantum Liar Experiment in the Transactional Interpretation".
A generally comprehensible introduction to the Transactional Interpretation can be found in "Quantum Mechanics - the dream stuff is made of" (September 2015).
Frank Wilczek [1.14.09]

The most exciting thing that can happen is when theoretical dreams that started as fantasies, as desires, become projects that people work hard to build. There is nothing like it; it is the ultimate tribute. At one moment you have just a glimmer of a thought and at another moment squiggles on paper. Then one day you walk into a laboratory and there are all these pipes, and liquid helium is flowing, and currents are coming in and out with complicated wiring, and somehow all this activity supposedly corresponds to those little thoughts that you had. When this happens, it's magic.

FRANK WILCZEK, a theoretical physicist at MIT and recipient of the Nobel Prize in Physics (2004), is known, among other things, for the discovery of asymptotic freedom, the development of quantum chromodynamics, the invention of axions, and the discovery and exploitation of new forms of quantum statistics (anyons). He is the author of The Lightness of Being: Mass, Ether, and the Unification of Forces. Frank Wilczek's Edge Bio Page

[FRANK WILCZEK:] In retrospect, I realize now that having the Nobel Prize hovering out there but never quite arriving was a heavy psychological weight; it bore me down. It was a tremendous relief to get it. What I didn't anticipate, fortunately, is that getting it is fantastic fun—the whole bit: there are marvelous ceremonies in Sweden, it's a grand party, and it continues, and is still continuing. I've been going to big events several times a month. The most profound aspect of it, though, is that I've really felt from my colleagues something I didn't anticipate: an outpouring of genuine affection. It's not too strong to call it love. Not for me personally—but because our field, theoretical fundamental physics, gets recognition and attention. People appreciate what's been accomplished, and it comes across as recognition for an entire community and an attitude towards life that produced success. So I've been in a happy mood. 
But that was a while ago, and the ceremonial business gets old after a while, and takes time. Such an abrupt change of life encourages thinking about the next stage. I was pleased when I developed a kind of three-point plan that gives me direction. Now I ask myself, when I'm doing something in my work: Is it relating to point one? Is it relating to point two? Is it relating to point three? If it's not relating to any of those, then I'm wasting my time. Point one is in a sense the most straightforward. An undignified way to put it would be to say it's defending turf, or pissing on trees, but I won't say that: I'll say it's following up ideas in physics that I've had in the past that are reaching fruition. There are several that I'm very excited about now. The great machine at CERN, the LHC, is going to start operating in about a year. Ideas—about unification and supersymmetry and producing Higgs particles—that I had a big hand in developing 20–30 years ago are finally going to be tested. Of course, if they're correct that'll be a major advance in our understanding of the world, and very gratifying to me personally. Then there's the area of exotic behavior of electrons at low temperature, so-called anyons, which is a little more technical. It was thought for a long time that all particles were either bosons or fermions. In the early 80s, I realized there were other possibilities, and it turns out that there are materials in which these other possibilities can be realized, where the electrons organize themselves into collective states that have different properties from individual electrons and actually do obey the peculiar new rules, and are anyons. This is leading to qualitatively new possibilities for electronics. I call it anyonics. Recently, advanced anyonics has been notionally bootstrapped into a strategy for building quantum computers that might even turn out to be successful. 
In any case, whether it's successful or not, the vision of anyonics—this new form of electronics—has inspired a lot of funding and experimentalists are getting into the game. Here, similarly, there are kinds of experiments that have been in my head for 20 years but are very difficult; people needed motivation and money to do them, and now they are going to be done. It's a lot of fun to be involved in something that might actually have practical consequences and might even change the world. This stuff also, in a way, brings me back to my childhood, because when I was growing up, my father was an electrical engineer who brought home circuit diagrams, and I really admired these things. Now I get to think about making fundamentally new kinds of circuits, and it's very cool. I really like the mixture of abstract and concrete. At a deeper level, what excites me about quantum computing and this whole subject of quantum information processing is that it touches such fundamental questions that potentially it could lead to qualitatively new kinds of intelligences. It's notorious that human beings have a hard time understanding quantum mechanics; it's hard for humans to relate to its basic notions of superpositions of states—that you can have Schrödinger's cat that's both dead and alive—that are not in our experience. But an intelligence based on quantum computers—mechanical quantum thinking—from the start would have that in its bones, so to speak, or in its circuits. That would be its primary way of thinking. It's quite challenging but fascinating to try to put yourself in the other guy's shoes, when that guy has a fundamentally different kind of mind, a quantum mind. It's almost an embarrassment of riches, but some of the ideas I had about axions turn out to go together very, very well with inflationary cosmology, and to give new pictures for what the dark matter might be. 
It ties into questions about anthropic reasoning, because with axions you get really different amounts of dark matter in different parts of the multiverse. The amounts of dark matter would be different elsewhere, and the only way to argue about how much dark matter there should be turns out to be anthropic: if you have too much dark matter, life as we know it couldn't arise. There's a lot of stuff in physics that I really feel I have to keep track of, and do justice to. That's point one. The second point is another way of having fun: looking for outlets, cultivating a public, not just thinking about science all the time. I'm in the midst of writing a mystery novel that combines physics with music, philosophy, sex, the rule that only three people at most can share a Nobel Prize—and murder (or was it suicide?). When a four-person MIT-Harvard collaboration makes a great discovery in physics (they figure out what the dark matter is) somebody's got to go. That project, and I hope other subsequent projects, will be outlets in reaching out to the public and bringing in all of life and just having fun. The third point is what I like to call the Odysseus project. I'm a great fan of Odysseus, the wanderer who had adventures and was very clever. I really want to do more great work—not following up what I did before, but doing essentially different things. I got into theoretical physics almost by accident; when I was an undergraduate, I had intended to study how minds work and neurobiology. But it became clear to me rather quickly, at Chicago, that that subject at that time wasn't ripe for the kind of mathematical analytical approach that I really like and get excited about, and am good at. I switched and majored in mathematics and eventually wound up in physics. But I've always maintained that interest, and in the meantime the tools available for addressing those questions have improved exponentially. 
Both in terms of studying the brain itself—imaging techniques and genetic techniques and a variety of others—but also the inspiring model of computation. The explosion of computational ability and understanding of computer science and networks is a rich source of metaphors and possible ways of thinking about the nature of intelligence and how the brain works. That's a direction I really want to explore more deeply. I've been reading a lot; I don't know exactly what I want to do, but I have been nosing out what's possible and what's available. I think it's a capital mistake, as Sherlock Holmes said, to start theorizing before you have the data. So I'm gathering the data.

Quantum Computers and Anyons

Quantum computing is an inspiring vision, but at present it's not clear what the technical means to carry it off are. There is a variety of proposals. It's not clear which is the best, or if any of them is practical. Let me backtrack a little bit, though, because even before you get to a full-scale quantum computer, there are information processing tasks for which quantum mechanics could be useful with much less than a full-scale quantum computer. A full-scale quantum computer is extremely demanding: you have to build various kinds of gates, you have to connect them in complicated ways, you have to do error correction—it's very complicated. That's sort of like envisioning a supersonic aircraft when you're at the stage of the Wright brothers. However, there are applications that I think are almost in hand. The most mature is for a kind of cryptography: you can exploit the fact that quantum mechanics has this phenomenon that's roughly called 'collapse of the wave function'—I don't like it—I don't think that's a really good way to talk about it—but for better or worse, that's the standard terminology. 
In this case it means that if you send a message that's essentially quantum mechanical—in terms of the direction of spins of photons, for instance—then you can send photons one by one with different spins and encode information that way. If someone eavesdrops on this, you can tell, because the act of observation necessarily disturbs the information you're sending. So that's very useful. If you want to transmit messages and make sure that they haven't been eavesdropped on, you can have that guaranteed by the laws of physics. If somebody eavesdrops, you'll be able to tell. You can't prevent it, necessarily, but you can tell. If you do things right, the probability of anyone being able to eavesdrop successfully can be made negligibly small. So that's a valuable application that's almost tangible. People are beginning to try to commercialize that kind of idea. I think in the long run the killer application of quantum computers will be doing quantum mechanics. Doing chemistry by numbers, designing molecules, designing materials by calculation. A capable quantum computer would let chemists and materials scientists work at another level, because instead of having to mix up the stuff and watch what happens, you can just compute. We know exactly what the equations are that govern the behavior of nuclei and electrons and the things that make up atoms and molecules. So in principle, it's a solved problem to figure out chemistry: just compute. We don't know all the laws of physics, but it's essentially certain that we know the adequate laws of physics with sufficient accuracy to design molecules and to predict their properties with confidence. But our practical ability to solve the equations is limited. The equations live in big multi-dimensional spaces, and they have a complicated structure and, to make a long story short, we can't solve any but very simple problems. With a quantum computer we'll be able to do much better. 
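The "eavesdropping disturbs the message" idea described above can be made concrete with a BB84-style toy simulation (my own sketch, not something Wilczek describes): bits are encoded in one of two bases, and an interceptor who guesses the basis wrong randomizes the bit, so comparing a sample of matched-basis bits reveals her.

```python
# Toy BB84-style sketch: eavesdropping necessarily introduces errors.
import random

def measure(bit, prep_basis, meas_basis, rng):
    # Same basis: outcome is deterministic. Wrong basis: 50/50 random.
    return bit if prep_basis == meas_basis else rng.randint(0, 1)

def run(n, eavesdrop, seed=0):
    """Return the observed error rate on matched-basis rounds."""
    rng = random.Random(seed)
    errors = matched = 0
    for _ in range(n):
        bit, basis = rng.randint(0, 1), rng.randint(0, 1)
        if eavesdrop:
            eve_basis = rng.randint(0, 1)
            bit_after = measure(bit, basis, eve_basis, rng)
            send_basis, send_bit = eve_basis, bit_after  # Eve re-sends
        else:
            send_basis, send_bit = basis, bit
        bob_basis = rng.randint(0, 1)
        bob_bit = measure(send_bit, send_basis, bob_basis, rng)
        if bob_basis == basis:          # keep only matched-basis rounds
            matched += 1
            errors += (bob_bit != bit)
    return errors / matched

assert run(20000, eavesdrop=False) == 0.0        # no Eve: no errors at all
assert 0.20 < run(20000, eavesdrop=True) < 0.30  # Eve: ~25% error rate
```

The 25% figure is the point: Eve guesses the wrong basis half the time, and each wrong guess randomizes the bit, giving a 50% error on those rounds. You can't stop her from measuring, but you can always tell she was there.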
As I sort of alluded to earlier, it's not decided yet what the best long-term strategy is for achieving powerful quantum computers. People are doing simulations and building little prototypes. There are different strategies being pursued based on nuclear spins, electron spins, trapped atoms, anyons. I am very fond of anyons because I worked at the beginning on the fundamental physics involved. It was thought, until the late 70s and early 80s, that all fundamental particles, or all quantum mechanical objects that you could regard as discrete entities, fell into two classes: so-called bosons, after the Indian physicist Bose, and fermions, after Enrico Fermi. Bosons are particles such that if you take one around another, the quantum mechanical wave function doesn't change. Fermions are particles such that if you take one around another the quantum mechanical wave function is multiplied by a minus sign. It was thought for a long time that those were the only consistent possibilities for behavior of quantum mechanical entities. In the late 70s and early 80s, we realized that in two plus one dimensions, not in our everyday three dimensional space (plus one dimension for time), but in planar systems, there are other possibilities. In such systems, if you take one particle around another, you might get not a factor of one or minus one, but multiplication by a complex number—there are more general possibilities. More recently, the idea that when you move one particle around another, it's possible not only that the wave function gets multiplied by a number, but that it actually gets distorted and moves around in a bigger space, has generated a lot of excitement. Then you have this fantastic mapping from motion in real space as you wind things around each other, to motion of the wave function in Hilbert space—in quantum mechanical space. 
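The boson/fermion/anyon distinction above is just a statement about phases, which a few lines make explicit (an illustrative sketch, not from the interview): exchanging two identical particles multiplies the wave function by e^{iθ}, with θ = 0 for bosons, θ = π for fermions, and anything in between for anyons in 2+1 dimensions.

```python
# Exchange statistics as a phase on the wavefunction.
# Bosons: theta = 0 (factor +1); fermions: theta = pi (factor -1);
# anyons in planar systems: any theta in between.
import cmath
import math

def exchange_factor(theta):
    """Phase picked up when two identical particles are exchanged once."""
    return cmath.exp(1j * theta)

def winding_factor(theta):
    """Taking one particle fully around another = two exchanges."""
    return exchange_factor(theta) ** 2

assert abs(exchange_factor(0) - 1) < 1e-12          # boson: +1
assert abs(exchange_factor(math.pi) + 1) < 1e-12    # fermion: -1
# For bosons AND fermions a full winding is invisible (+1)...
assert abs(winding_factor(math.pi) - 1) < 1e-12
# ...but an anyon with theta = pi/3 retains a nontrivial phase:
assert abs(winding_factor(math.pi / 3) - cmath.exp(2j * math.pi / 3)) < 1e-12
```

That last assertion is the planar-system novelty: the winding leaves a physical imprint, which is what topological quantum computing proposes to exploit.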
It's that ability to navigate your way through Hilbert space that connects to quantum computing and gives you access to a gigantic space with potentially huge bandwidth that you can play around with in highly parallel ways, if you're clever about the things you do in real space. But with anyons we're really at the primitive stage. There's very little doubt that the theory is correct, but the experiments are at a fairly primitive stage—they're just breaking now.

Quantum Logic and Quantum Minds

To do justice to the possible states, the possible conditions that just a few objects can be in, say, five spins, classically you would think you would have to say for each one whether it's up or down. At any one time they are in some particular configuration. In quantum mechanics, every single configuration—there are 32 of them, up or down for each spin—has some probability of existing. So, simultaneously, to do justice to the physical situation, instead of just saying that there is some configuration these objects are in, you have to specify roughly that there is a certain probability for each one, and those probabilities evolve. But that verbal description is too rough, because what's involved is not probabilities, it's something called amplitudes. The difference is profound. Whereas probabilities have a kind of independence, with amplitudes the different configurations can interact with one another. There are different states which are in the physical reality and they are interacting with each other. Classically they would be different things that couldn't happen simultaneously. In quantum theory they coexist and interact with one another. That also goes to this issue of logic that I mentioned before. One way of representing true or false that is famously used in computers is: you have true as one and false as zero—spin up is true, spin down is false. 
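The counting, and the contrast between probabilities and amplitudes, can be made concrete (my own sketch, not from the interview): five two-state spins need 2^5 = 32 complex amplitudes, and amplitudes, unlike probabilities, can cancel.

```python
# Five two-state spins: the quantum state is 2**5 = 32 complex amplitudes,
# one for every classical configuration.
import cmath
import math

n_spins = 5
n_configs = 2 ** n_spins
assert n_configs == 32

# Amplitudes can interfere. Two equal-weight routes to the same outcome
# with opposite phases cancel completely:
route_a = 1 / math.sqrt(2)
route_b = cmath.exp(1j * math.pi) / math.sqrt(2)   # opposite phase
amplitude = route_a + route_b
probability = abs(amplitude) ** 2
assert probability < 1e-12   # destructive interference: outcome vanishes

# Classical probabilities for the same two routes would simply add:
classical = 0.5 + 0.5
assert classical == 1.0
```

The same outcome is certain classically yet impossible quantum mechanically; that is the "profound difference" between probabilities and amplitudes.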
In quantum theory the true statement and the false statement can interact with each other and you can do useful computations by having simultaneous propositions that contradict each other, sort of interacting with each other, working in creative tension. I just love that idea. I love the idea of opposites coexisting and working with one another. Come to think of it, it's kind of sexy. More on Quantum Computers Realizing this vision will be a vast enterprise. It's hard to know how long it's going to take to get something useful, let alone something that is competitive with the kind of computing we already have developed, which is already very powerful and keeps improving, let alone create new minds that are different from and more powerful than the kind of minds we're familiar with. We'll need to progress on several fronts. You can set aside the question of engineering, if you like, and ask: Suppose I had a big quantum computer, what would I do with it, how would I program it, what kind of tasks could it accomplish? That is a mathematical investigation. You abstract the physical realization away. Then it becomes a question for mathematicians, and even philosophers have got involved in it. Then there is the other big question: how do I build it? How do I build it in practice? That's a question very much for physicists. In fact, there is no winning design yet. People have struggled to make even very small prototypes. My intuition, though, is that when there is a really good idea, progress could be very rapid. That is what I am hoping for and going after. I have glimmers of how it might be done, based on anyons. I've been thinking about this sort of thing on and off for a long time. I pioneered some of the physics, but other theorists including Alexei Kitaev and my former student Chetan Nayak have taken things to another level. There's now a whole field called "topological quantum computing" with its own literature, and conferences, and it's moving fast. 
What has changed is that now a lot of people, and in particular experimentalists, have taken it up. Methods and Styles in Physics The great art of theoretical physics is the revelation of surprising things about reality. Historically there have been many approaches to that art, which have succeeded in different ways. In the early days of physics, people like Galileo and Newton were very close to the data and stressed that they were trying to put observed behavior into mathematical terms. They developed some powerful abstract concepts, but by today's standards those concepts were down-to-earth; they were always in terms of things that you could touch and feel, or at least see through telescopes. That approach very much dominated physics, at least through the 19th century. Maxwell's great synthesis of electricity and magnetism and optics—leading to the understanding that light was a form of electricity and magnetism, and predicting new kinds of light that we call radio and microwaves and so forth—came from a very systematic review of all that was known about electricity and magnetism experimentally, trying to put it into equations, noticing an inconsistency, and fixing it up. That's the kind of classic approach. In the 20th century, some of the most successful enterprises have looked rather different. Without going into the details it's hard to do justice to all the subtleties, but it's clear that theories like special relativity—especially general relativity—were based on much larger conceptual leaps. In constructing special relativity, Einstein abstracted just two very broad regularities about the physical world: namely, that the laws of physics should look the same if you're moving at a constant velocity, and that the speed of light should be a universal constant. 
This wasn't based on a broad survey of a lot of detailed experimental facts and putting them together; it was selecting a few very key facts and exploiting them conceptually for all they're worth. General relativity even more so: it was trying to make the theory of gravity consistent with the insights of special relativity. This was a very theoretical enterprise, not driven by any specific experimental facts*, but led to a theory that changed our notions of space and time, did lead to experimental predictions, and to many surprises. (*Actually, there was a big "coincidence" that Newtonian gravity left unexplained, the equality of inertial and gravitational mass, which was an important guiding clue.) The Dirac equation is a more complicated case. Dirac was moved by broad theoretical imperatives; he wanted to make the existing equation for quantum mechanical behavior of electrons—that's the Schrödinger equation—consistent with special relativity. To do that, he invented a new equation—the Dirac equation—that seemed very strange and problematic, yet undeniably beautiful, when he first found it. That strange equation turned out to require vastly new interpretations of all the symbols in it, that weren't anticipated. It led to the prediction of antimatter and the beginnings of quantum field theory. This was another revolution that was, in a sense, conceptually driven. On the other hand, what gave Dirac and others confidence that his equation was on the right track, was that it predicted corrections to the behavior of electrons in hydrogen atoms that were very specific, and that agreed with precision measurements. This support forced them to stick with it, and find an interpretation to let it be true! So there was important empirical guidance, and encouragement, from the start. Our foundational work on QCD falls in the same pattern. We were led to specific equations by theoretical considerations, but the equations seemed problematic. 
They were full of particles that aren't observed (quarks and—especially—gluons), and didn't contain any of the particles that are observed! We persisted with them nevertheless, because they explained a few precision measurements, and that persistence eventually paid off. In general, as physics has matured in the 20th century, we've realized more and more the power of mathematical considerations of consistency and symmetry to dictate the form of physical laws. We can do a lot with less experimental input. (Nevertheless the ultimate standard must be getting experimental output: illuminating reality.) How far can esthetics take you? Should you let that be your main guide, or should you try to assemble and do justice to a lot of specific facts? Different people have different styles; some people try to use a lot of facts and extrapolate a little bit; other people try not to use any facts at all and construct a theory that's so beautiful that it has to be right and then fill in the facts later. I try to consider both possibilities, and see which one is fruitful. What's been fruitful for me is to take salient experimental facts that are somehow striking, or that seem anomalous—don't really fit into our understanding of physics—and try to improve the equations to include just those facts. My reading of history is that even the greatest advances in physics, when you pick them apart, were always based on a firm empirical foundation and straightening out some anomalies between the existing theoretical framework and some known facts about the world. Certainly QCD was that way: when we developed asymptotic freedom, the observed behavior of quarks—that they seem not to interact when they're close together—seemed inconsistent with quantum field theory, but we were able to push and find very specific quantum field theories in which that behavior was consistent, which essentially solved the problem of the strong interaction, and has had many fruitful consequences. 
Axions also—similar thing—a little anomaly— there's a quantity that happens to be very small in the world, but our theories don't explain why it's small; you can change the theories to make them a little more symmetrical—then we do get zero—but that has other consequences: the existence of these new particles rocks cosmology, and they might be the dark matter—I love that kind of thing. String theory is sort of the extreme of non-empirical physics. In fact, its historical origins were based on empirical observations, but wrong ones. String theory was originally based on trying to explain the nature of the strong interactions, the fact that hadrons come in big families, and the idea was that they could be modeled as different states of strings that are spinning around or vibrating in different ways. That idea was highly developed in the late 60s and early 70s, but we put it out of business with QCD, which is a very different theory that turns out to be the correct theory of the strong interaction. But the mathematics that was developed around that wrong idea, amazingly, turned out to contain, if you do things just right, and tune it up, to contain a description of general relativity and at the same time obeys quantum mechanics. This had been one of the great conceptual challenges of 20th century physics: to combine the two very different seeming kinds of theories—quantum mechanics, our crowning achievement in understanding the micro-world, and general relativity, which was abstracted from the behavior of space and time in the macro-world. Those theories are of a very different nature and, when you try to combine them, you find that it's very difficult to make an entirely consistent union of the two. But these evolved string theories seem to do that. 
The problems that arise in making a quantum theory of gravity, unfortunately for theoretical physicists who want to focus on them, really only arise in thought experiments of a very high order—thought experiments involving particles of enormous energies, or the deep interior of black holes, or perhaps the earliest moments of the Big Bang that we don't understand very well. All very remote from any practical, do-able experiments. It's very hard to check the fundamental hypotheses of this kind of idea. The initial hope, when the so-called first string revolution occurred in the mid-1980s, was that when you actually solved the equations of string theory, you'd find a more or less unique solution, or maybe a handful of solutions, and it would be clear that one of them described the real world. From these highly conceptual considerations of what it takes to make a theory of quantum gravity, you would be led "by the way" to things that we can access and experiment, and it would describe reality. But as time went on, people found more and more solutions with all kinds of different properties, and that hope—that indirectly by addressing conceptual questions you would be able to work your way down to description of concrete things about reality—has gotten more and more tenuous. That's where it stands today. My personal style in fundamental physics continues to be opportunistic: To look at the phenomena as they emerge and think about possibilities to beautify the equations that the equations themselves suggest. As I mentioned earlier, I certainly intend to push harder on ideas that I had a long time ago but that still seem promising and still haven't been exhausted in supersymmetry and axions and even in additional applications of QCD. I'm also always trying to think of new things. For example, I've been thinking about the new possibilities for phenomena that might be associated with this Higgs particle that probably will be discovered at the LHC. 
I realized something I'd been well aware of at some low level for a long time, but that, I now think, has profound implications: the Higgs particle uniquely opens a window into phenomena that no other particle within the standard model would be sensitive to. If you look at the mathematics of the standard model, you discover that there are possibilities for hidden sectors—things that would interact very weakly with the kind of particles we've had access to so far, but would interact powerfully with the Higgs particles. We'll be opening that window. Very recently I've been trying to see if we can get inflation out of the standard model, by having the Higgs particle interact in a slightly nonstandard way with gravity. That seems promising too. Most of my bright ideas will turn out to be wrong, but that's OK. I have fun, and my ego is secure. On National Greatness In 1993 the Congress of the United States canceled the SSC project, the Superconducting SuperCollider, that was under construction near Waxahachie, Texas. Many years of planning and many careers had been invested in that project, and $2 billion had already been put into the construction. All that came out of it was a tunnel from nowhere to nothing. Now it's 2009 and a roughly equivalent machine, the Large Hadron Collider (LHC), is coming into operation at CERN near Geneva. The United States has some part in that. It has invested half a billion dollars out of the $15 billion total. But it's a machine that is in Europe, really built by the Europeans; there's no doubt that they have contributed much more. Of course, the information that comes out will be shared by the entire scientific community. So the end result, in terms of tangible knowledge, is the same. We avoided spending the extra money. Was that a clever thing to do? I don't think so. Even in the narrowest economic perspective, I think it wasn't a clever thing to do. 
Most of the work that went into this $15 billion was local, subcontracted within Europe. It went directly into the economies involved, and furthermore into dynamic sectors of the economy: high-tech industries involved in superconducting magnets, fancy cryogenic engineering, civil engineering of great sophistication, and of course computer technology. All that know-how is going to pay off much more than the investment in the long run. But even if it weren't the case that purely economically it was a good thing to do, the United States missed an opportunity for national greatness. A hundred or two hundred years from now, people will largely have forgotten about the various spats we got into, the so-called national greatness of imposing our will on foreigners, and they will remember the glorious expansion of human knowledge that is going to happen at the LHC and the gigantic effort that went into getting it. As a nation we don't get many opportunities to show history our national greatness, and I think we really missed one there. Maybe we can recoup. The time is right for an assault on the process of aging. A lot of the basic biology is in place. We know what has to be done. The aging process itself is really the profound aspect of public health; eliminating major diseases, even big ones like cancer or heart disease, would only increase life expectancy by a few years. We really have to get to the root of the process. Another project on a grand scale would be to search systematically for life in the galaxy. We have tools in astronomy—and can design more—to find distant planets that might be earth-like, study their atmospheres, and see if there is evidence for life. It would be feasible, given a national investment of will and money, to survey the galaxy and see if there are additional earth-like planets that support life. We should think hard about doing things we will be proud to be remembered for, and think big.
Physical Chemistry 1
Code: 40821
ECTS: 6.0
Lecturers in charge: izv. prof. dr. sc. Josip Požar - Lectures
Lecturers: doc. dr. sc. Nikola Bregović - Seminar
Hours: Lectures 60, Seminar 30
Introductory overview of physical chemistry. Quantity calculus. Wave nature of particles. Uncertainty principle. Postulates of quantum mechanics. Harmonic oscillator. Particle in a box. Hydrogen atom. Atomic orbitals. Spin and many-electron atoms. Atomic spectra. Born-Oppenheimer approximation. Molecular orbitals. Diatomic molecules. Correlation diagram. Hybridization. Hückel molecular orbitals. Electronic structure of crystals. Ligand field theory. Quantum chemistry in schools. Molecular spectra. Absorption, emission and scattering. Molecular rotations. Molecular vibrations. IR spectra. Electronic spectra. Lasers. Photoelectron spectra. Magnetic resonance. NMR. Spectroscopy in schools. Properties of gases. Ideal gas and real gases. Kinetic theory of gases. Distribution of molecular velocities and speeds. Collisions. Statistical mechanics. Boltzmann's law. 
Learning outcomes:
- to explain the experimental facts that were not in accord with the laws of classical physics,
- to state the postulates of quantum mechanics, to set up the Schrödinger equation for simple systems (particle in a box, harmonic oscillator) and to describe the solutions obtained,
- to describe the procedure for solving the Schrödinger equation for the hydrogen atom and to explain the physical meaning of the solutions obtained,
- to apply the postulates of quantum mechanics to the description of molecules, to justify the Born-Oppenheimer approximation and to describe the structure of energy levels in diatomic and polyatomic molecules on a qualitative level,
- to distinguish between the absorption, emission and scattering of electromagnetic radiation and to state which information regarding the molecular structure can be deduced from rotation, vibration and electronic spectra of molecules,
- to calculate the molecular bond length from the corresponding rotation spectra of linear molecules,
- to estimate the dissociation energy of diatomic molecules from the corresponding vibration spectra (IR and Raman) and to determine the bond length in particular vibrational energy levels from vibration-rotation transitions,
- to apply the Franck-Condon principle in the description of electronic spectra of molecules, and to distinguish between progression and sequence lines in vibronic transitions,
- to classify the physical-chemical processes following the absorption of electromagnetic radiation and to distinguish between the mechanisms governing luminescence (phosphorescence and fluorescence),
- to describe the conditions leading to stimulated emission of radiation and to state the unique properties of laser radiation,
- to describe the principles governing magnetic resonance: NMR (chemical shifts and nuclear spin coupling) and EPR (nuclear and electron spin coupling),
- to distinguish between the properties of ideal and real gases and to explain how these differences affect the properties of gaseous substances,
- to define the distribution of molecular speeds and velocities in gases and to apply these distributions to the calculation of physical-chemical properties of ideal gases,
- to define the Boltzmann distribution law and to apply it for deducing the distribution of molecules among quantum states and the corresponding energy levels.
Prerequisites for enrollment:
Passed: General Chemistry
Passed: Mathematics 1
Passed: Physics 1
Attended: Mathematics 2
Attended: Physics 2
3rd semester
Mandatory course - Regular study - Biology and Chemistry Education
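One of the outcomes above, extracting a bond length from a rotational spectrum, amounts to a short rigid-rotor calculation. The sketch below is not part of the course materials; it assumes a rigid diatomic rotor and uses 12C16O with its literature rotational constant (B ≈ 1.9313 cm⁻¹, adjacent lines spaced by ~2B) as an example:

```python
import math

# Rigid-rotor analysis of a diatomic rotational spectrum (illustrative sketch).
# B (cm^-1) = h / (8 pi^2 c I), with moment of inertia I = mu * r^2.
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e10     # speed of light, cm/s
amu = 1.66053907e-27  # atomic mass unit, kg

def bond_length(B_cm, m1_amu, m2_amu):
    """Bond length (in meters) from the rotational constant B (cm^-1)."""
    mu = m1_amu * m2_amu / (m1_amu + m2_amu) * amu  # reduced mass, kg
    I = h / (8 * math.pi**2 * c * B_cm)             # moment of inertia, kg m^2
    return math.sqrt(I / mu)

# Example: 12C16O, B = 1.9313 cm^-1 (literature value, assumed here)
r = bond_length(1.9313, 12.000, 15.995)
print(f"r(CO) = {r * 1e10:.3f} Å")  # ~1.13 Å, the known CO bond length
```

The same function applies to any diatomic once its rotational constant has been read off from the line spacing of the pure rotational spectrum.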
Front. Phys., 23 April 2021
Direct Optimal Control Approach to Laser-Driven Quantum Particle Dynamics
• Institut für Physik, Universität Rostock, Rostock, Germany
Optimal control theory is usually formulated as an indirect method requiring the solution of a two-point boundary value problem. Practically, the solution is obtained by iterative forward and backward propagation of quantum wavepackets. Here, we propose direct optimal control as a robust and flexible alternative. It is based on a discretization of the dynamical equations resulting in a nonlinear optimization problem. The method is illustrated for the case of laser-driven wavepacket dynamics in a bistable potential. The wavepacket is parameterized in terms of a single Gaussian function and field optimization is performed for a wide range of particle masses and lengths of the control interval. Using the optimized field in a full quantum propagation still yields reasonable control yields for most of the considered cases. Analysis of the deviations leads to conditions which have to be fulfilled to make the semiclassical single Gaussian approximation meaningful for field optimization.
1 Introduction
"Teaching lasers to control molecules" has been a long-standing goal in molecular physics [1]. Among the various methods of the early days [1-5], optimal control theory (OCT) emerged as a versatile tool. Originally developed by Rabitz et al. [6, 7] and Kosloff et al. [8], numerous methodological extensions have been developed over the years (for reviews, see e.g., [9-12]). In terms of practical realizations of chemical reaction control, the feedback strategy [1, 13, 14] as well as straightforward resonant excitation schemes [15-17] have been most successful. 
In quantum optimal control theory the goal of optimizing the expectation value of a target operator, such as a projector onto a certain state, is formulated as a variational problem for a cost functional subject to certain constraints. The latter include, for instance, a penalty for high field intensities, or the requirement that the wavepacket fulfill the Schrödinger equation. This control problem is usually solved using an indirect approach, i.e., the cost functional is not minimized directly. Instead, the stationarity condition for the cost functional is converted to a two-point boundary value problem for two coupled Schrödinger equations. A numerical solution is obtained by iterative forward and backward propagation of the actual wavepacket and an auxiliary wavepacket, respectively (e.g., [18]). This procedure is sometimes referred to as the optimize-then-discretize paradigm [19]. Indirect methods for optimal control are in use in other areas of physics, e.g., stochastic control [20], but also in engineering and biology [21]. Direct optimal control, in contrast, follows the discretize-then-optimize paradigm, i.e. the cost functional is minimized directly using methods from nonlinear optimization. Although popular, for instance, in applied mathematics [22], engineering [23], and biology [21], there have been no applications to quantum molecular dynamics so far. The present paper is devoted to filling this gap. Indirect optimal control requires the iterative solution of two time-dependent Schrödinger equations, where the numerical effort scales exponentially with the number of degrees of freedom. To cope with this situation the Multi-Configurational Time-Dependent Hartree (MCTDH) approach is most suited [24, 25]. An OCT implementation has been reported in Ref. [26]; for an application see also Ref. [27]. The solution of the time-dependent Schrödinger equation requires a priori knowledge of the potential energy surface. 
But, when driving the wavepacket into a particular region of configuration space using laser control, a global potential might not be needed. Thus on-the-fly approaches, e.g., in the context of MCTDH [28, 29], could be of advantage. On the other hand, semiclassical approximations in terms of Gaussian wavepackets play a prominent role in molecular quantum dynamics [30], and indeed a semiclassical formulation of indirect OCT has been reported in Refs. [31, 32] (for related work using Wigner space sampling, see Ref. [33]). In this paper we explore direct OCT using a representation of the wavepacket dynamics in terms of a single Gaussian function. Although this choice has been made for numerical convenience, it also facilitates exploration of its limitations by comparison with solutions of the time-dependent Schrödinger equation. Specifically, for the considered problem of quantum particle motion in a bistable potential we are able to identify conditions which have to be fulfilled to make the single Gaussian approximation adequate.

2 Theoretical Methods

2.1 Equations of Motion

The equations for the time evolution of a quantum mechanical state can be obtained from the time-dependent variational principle, starting with the stationarity condition for the action S, i.e. [34]

$\delta S = \delta \int_{t_1}^{t_2} L(\Psi,\Psi^*)\,dt = 0\,,$  (1)

where the quantum Lagrangian is given by (note that atomic units are used throughout)

$L(\Psi,\Psi^*) = \langle \Psi | i\partial_t - H(t) | \Psi \rangle\,.$  (2)

In the following we will focus on one-dimensional systems (coordinate x and momentum p) coupled to a radiation field, E(t), in dipole approximation (dipole operator μ(x)). Thus the Hamiltonian operator in the coordinate representation is given by

$H(t) = -\frac{1}{2m}\frac{\partial^2}{\partial x^2} + V(x) - \mu(x) E(t)\,.$  (3)

Equation 1 yields the condition [34]

$\mathrm{Re}\big[\langle \delta\Psi | i\partial_t - H(t) | \Psi \rangle\big] = 0\,.$  (4)

Assuming that the time-dependence of the wavepacket is implicitly parameterized by the set of time-dependent real parameters $a(t)=\{a_1(t),\ldots,a_n(t)\}$, this yields

$\partial_t \Psi = \sum_{i=1}^{n} \frac{\partial \Psi}{\partial a_i}\,\dot a_i\,.$  (5)

Inserting Eq. 5 into Eq. 4 gives the equations of motion for the general set of parameters used to describe the wavepacket

$\dot a_i = \sum_{j=1}^{n} K_{ij}\,\mathrm{Re}\,\langle \partial\Psi/\partial a_j | H\Psi \rangle\,, \quad i=1,\ldots,n\,,$  (6)

with $K_{ij}$ being the elements of the inverse of the matrix formed by $\mathrm{Im}\,\langle \partial\Psi/\partial a_i | \partial\Psi/\partial a_j \rangle$. In order to connect to on-the-fly approaches and to reduce the number of differential equations of motion (and thus the computational cost) we assume that the wavepacket has the following Gaussian form [30] at all times

$\Psi(x,t) = \left(\frac{2\alpha(t)}{\pi}\right)^{1/4} \exp\Big[-\big(\alpha(t)+i\beta(t)\big)\big(x-x_0(t)\big)^2 + i p_0(t)\big(x-x_0(t)\big)\Big]\,,$  (7)

where α and β describe the width and tilt of the phase space Gaussian. Further, $x_0$ and $p_0$ are the average position and momentum, respectively. Hence, we identify $a(t)=\{\alpha(t),\beta(t),x_0(t),p_0(t)\}$ and using Eq. 6 gives the following set of coupled differential equations

$\dot\alpha = \frac{4\alpha\beta}{m}\,,$  (8)

$\dot\beta = -\frac{2(\alpha^2-\beta^2)}{m} - 4\alpha^2\,\partial_\alpha U(t)\,,$  (9)

$\dot x_0 = \frac{p_0}{m}\,,$  (10)

$\dot p_0 = -\partial_{x_0} U(t)\,,$  (11)

subject to some initial conditions at time $t_0$. Here, we defined the time-dependent expectation value of the potential

$U(t) = \langle \Psi(t) |\, V(x) - \mu(x)E(t)\, | \Psi(t) \rangle\,.$  (12)

In the next section we will focus on the control problem assuming that these equations of motion can be solved, which implies that the expectation value of the potential and its derivatives are available.

2.2 Statement of the Control Problem

Let us start with a brief summary of optimal control theory [9, 10, 35]. Given a functional of the form

$J[a,u,k] = T[a(t_f),k,t_f] + \int_{t_0}^{t_f} \mathcal{L}[a(t),u(t),k,t]\,dt\,,$  (13)

where $T$ and $\mathcal{L}$ are the terminal and running cost, respectively, the task is to find the state trajectory a(t), external control u(t) (where the time $t\in[t_0,t_f]$) and the set of static parameters k that minimize the functional J[a,u,k]. The minimization is performed subject to the following differential constraints

$\dot a(t) = f[a(t),u(t),k,t]\,, \quad t\in[t_0,t_f]\,.$  (14)

Further, there can be path constraints

$h_L \le h[a(t),u(t),k,t] \le h_U\,,$  (15)

and event constraints such as

$e_L \le e\big[F[a(t),u(t)],k,t_0,t_f\big] \le e_U\,.$  (16)

Here, the subscripts L and U denote the lower and upper boundary, respectively, defining the constraints. 
Notice that in contrast to path constraints, event constraints are not time-dependent, but could include a functional, F, of, e.g., the state trajectory or the external control (see below). Next, we specify this general control problem to the model introduced in Section 2.1. The state is characterized by the set $a(t)=\{\alpha(t),\beta(t),x_0(t),p_0(t)\}$ and the external control is given by the laser field $u(t)=E(t)$. Additional time-independent parameters, k, will not be used. The differential constraints (14) are given by Eqs. 8-11. The goal of the optimization can be stated as follows. Given some initial quantum state $|\Psi(t_0)\rangle$, parameterized by $a_i=\{\alpha_i,\beta_i,x_{0,i},p_{0,i}\}$, find a laser field E(t) such that the overlap is maximized between the time-evolved final state at $t=t_f$, $|\Psi(t_f)\rangle$, and some target state $|\Phi_t\rangle$. Thus, the terminal cost in Eq. 13 is given by (notice the minus sign because the terminal cost will be minimized and we want to maximize the overlap)

$T = -\big|\langle \Phi_t | \Psi(t_f) \rangle\big|^2\,.$  (17)

Here, for simplicity we will use the parametrization of Eq. 7 for the target state as well, labeling the target parameters as $a_t=\{\alpha_t,\beta_t,x_{0,t},p_{0,t}\}$. The running cost will be chosen as follows

$\mathcal{L}[E(t),t] = \kappa\,\frac{[E(t)]^2}{s(t)}\,, \qquad s(t) = \sin^2\!\left(\frac{\pi t}{t_f}\right) + \epsilon\,.$  (18)

Besides the field intensity we have included a factor κ scaling the penalty for high field strengths as well as a shape function s(t), which ensures that the field increases (decreases) slowly when turned on (off) [36]. Note that ϵ is a small parameter introduced to avoid division by zero and numerical problems at times t=0 and t=tf. Throughout the text we have used ϵ=0.005. For the application presented below we don't use any path constraints, but event constraints. Given the event

$e\big[F[E(t)],a(t_0)\big] = \big(\alpha(t_0),\ \beta(t_0),\ x_0(t_0),\ p_0(t_0),\ \textstyle\int_{t_0}^{t_f} E(t)\,dt\big)^{T}\,,$  (19)

upper and lower bounds will be chosen equal as follows

$e_L = e_U = \big(\alpha_i,\ \beta_i,\ x_{0,i},\ p_{0,i},\ 0\big)^{T}\,.$  (20)

Hence, the parameters of the initial state are fixed and not subject to optimization. Further, we enforce the zero-net-force condition by demanding that $F[E(t)] = \int_{t_0}^{t_f} E(t)\,dt = 0$ [37]. 
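Before any optimization, the differential constraints of Eq. 14, i.e. the Gaussian-parameter equations of motion (Eqs. 8-11), can be integrated directly for a harmonic well, where U(t) is known in closed form. The sketch below is not from the paper, and the sign conventions are our own reconstruction of Eqs. 8-11 (they reproduce the textbook coherent-state behavior: α = mω/2 stays constant while x₀ oscillates):

```python
import math

# Thawed-Gaussian equations of motion for V = 0.5*m*w^2*x^2, driven via
# mu(x) = q*x. For this potential U = 0.5*m*w^2*(x0^2 + 1/(4*alpha)) - q*x0*E,
# so dU/dalpha = -m*w^2/(8*alpha^2) and dU/dx0 = m*w^2*x0 - q*E.
# NOTE: the signs below are a reconstruction, not copied from the paper.
m, w, q = 1.0, 1.0, 1.0

def rhs(t, y, E=lambda t: 0.0):
    a, b, x0, p0 = y
    dUda = -m * w**2 / (8 * a**2)
    dUdx0 = m * w**2 * x0 - q * E(t)
    return [4 * a * b / m,                          # Eq. 8
            -2 * (a**2 - b**2) / m - 4 * a**2 * dUda,  # Eq. 9
            p0 / m,                                 # Eq. 10
            -dUdx0]                                 # Eq. 11

def rk4(y, t0, t1, n, E=lambda t: 0.0):
    """Classic fixed-step Runge-Kutta 4 integration of the parameter set."""
    dt, t = (t1 - t0) / n, t0
    for _ in range(n):
        k1 = rhs(t, y, E)
        k2 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)], E)
        k3 = rhs(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)], E)
        k4 = rhs(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)], E)
        y = [yi + dt / 6 * (k1i + 2 * k2i + 2 * k3i + k4i)
             for yi, k1i, k2i, k3i, k4i in zip(y, k1, k2, k3, k4)]
        t += dt
    return y

# Coherent state: start at alpha = m*w/2, x0 = 1; after half a period x0 ~ -1
y = rk4([m * w / 2, 0.0, 1.0, 0.0], 0.0, math.pi, 2000)
print(y)
```

A field E(t) can then be supplied as any callable; in the direct approach it would be one of the decision variables rather than a fixed function.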
The optimization problem will be solved using a direct method, i.e. by means of discretization of the differential equations. Details will be specified in the next section.

2.3 Model System and Computational Details

The direct optimal control approach will be applied to the problem of particle dynamics in a bistable potential. This could represent, for instance, proton or hydrogen atom transfer in a tautomerization reaction [38, 39]. The following potential will be used

$V(x) = V_B \left[\left(\frac{x}{x_B}\right)^2 - 1\right]^2\,.$  (21)

Here, $x_B$ is the distance between the minimum of the potential and the top of the barrier, and $V_B$ is the barrier height. The system-field interaction is treated in semiclassical approximation, taking the polarization of the field in the same direction as the dipole, and assuming a linear model for the latter (q is the charge)

$\mu(x) = q\,x\,.$  (22)

Specific parameters for the numerical simulations have been chosen to mimic typical situations in proton transfer reactions [38, 39], i.e. $x_B = 2\,a_0$ (1.06 Å), $V_B = 0.01\,E_h$ (6.3 kcal/mol), and $q = 1\,e$. The particle's mass, m, will be used to tune the 'quantumness' of the dynamics. As an example, we show potential and eigenstates for two choices of the mass in Figure 1. Comparing the two cases we note in particular that the number of eigenstates below the barrier is 8 and 16 for masses of 1 mH and 5 mH, respectively (where mH is the hydrogen mass).

FIGURE 1. Eigenstates for a particle of mass (A) 1 mH and (B) 5 mH in the potential given by Eq. 21 with $x_B = 2\,a_0$ and $V_B = 0.01\,E_h$. Solid and dashed lines correspond to even and odd eigenstates, respectively.

Using Eqs. 21, 22 together with Eq. 7 one can calculate the time-dependent expectation value of the potential, Eq. 12, and its derivatives with respect to α and $x_0$ required for the equations of motion (Eqs. 9, 11). Although in the present case the required expectation value could have been calculated analytically, we have used a more general prescription. 
To this end the potential is globally approximated by a sum of Gaussians of the form

$V(x) \approx \sum_{p=1}^{g} g_p\, e^{-b_p (x-x_p)^2}\,.$  (23)

We have used g=5, which gives $g_p=\{31.000,\,-1.529,\,-1.529,\,31.000,\,1.348\}$ (in units of $V_B$), $b_p=\{1.397,\,1.658,\,1.658,\,1.397,\,0\}$ (in units of $x_B^{-2}$), and $x_p=\{-2.981,\,-1.142,\,1.142,\,2.981,\,0\}$ (in units of $x_B$). Using Eq. 23 one obtains for Eq. 12

$U(t) = \sum_{p=1}^{5} g_p\, e^{-B_p} \left(\frac{2\alpha(t)}{2\alpha(t)+b_p}\right)^{1/2} - q\,x_0(t)\,E(t)\,,$  (24)

$\partial_\alpha U(t) = \sum_{p=1}^{5} D_p \left(\frac{1}{4\alpha(t)^2} - \frac{b_p}{\alpha(t)\,(2\alpha(t)+b_p)}\,\big(x_0(t)-x_p\big)^2\right)\,,$  (25)

$\partial_{x_0} U(t) = -2\sum_{p=1}^{5} D_p\,\big(x_0(t)-x_p\big) - q\,E(t)\,,$  (26)

with the abbreviations

$B_p = \frac{2\alpha(t)\,b_p}{2\alpha(t)+b_p}\,\big(x_0(t)-x_p\big)^2\,,$  (27)

$D_p = g_p\, e^{-B_p} \left(\frac{2\alpha(t)}{2\alpha(t)+b_p}\right)^{1/2} \frac{2\alpha(t)\,b_p}{2\alpha(t)+b_p}\,.$  (28)

For the solution of the control problem the software package PSOPT has been used [40]. This package employs an approximation for the state trajectory of the form

$a(t) \approx a^N(t) = \sum_{k=0}^{N} a(t_k)\,\ell_k(t)\,,$  (29)

where $t_k$ are the Gauss-Lobatto quadrature nodes ($a(t_k)=a^N(t_k)$) and $\ell_k$ are the Lagrange basis polynomials. This approximation allows one to transform the performance functional (Eq. 13) into the performance function

$G(y) = T[a(t_f),k,t_f] + \sum_{k=0}^{N} \mathcal{L}[a^N(t_k),u^N(t_k),k,t_k]\,w_k\,,$  (30)

and the differential constraints into a set of holonomic constraints for the decision vector $y=(u(t_0),\ldots,u(t_N),a(t_0),\ldots,a(t_N),k,t_0,t_f)$; $w_k$ are the Gauss-Lobatto weights. For more details see Ref. [40]. The performance function (30) is optimized using nonlinear programming (NLP) algorithms, such as the ones implemented in IPOPT [41]. PSOPT provides different discretization schemes. The global pseudospectral Legendre and Chebyshev discretizations yield very slow convergence for non-smooth functions [19], as is the case for the solutions found for α(t) and β(t) (see first and second rows, columns (B) and (D) of Figure 2 below). Increasing the number of nodes is not an option for these discretization schemes because of the non-sparsity of the Jacobian matrices, which cannot be handled properly by the implemented IPOPT NLP solver. This issue translates into a disproportionate increase of computational time. The local methods available are trapezoidal and Hermite-Simpson discretization. 
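The discretize-then-optimize idea behind this machinery can be demonstrated at a much smaller scale. The sketch below is not from the paper and replaces PSOPT/IPOPT by scipy's SLSQP; it applies trapezoidal collocation to a toy control problem, steering a double integrator from rest at q=0 to rest at q=1 in unit time while minimizing the integral of u², whose continuous optimum is known (u(t) = 6 - 12t, cost 12):

```python
import numpy as np
from scipy.optimize import minimize

N = 20            # number of trapezoidal segments
T = 1.0
dt = T / N

def unpack(y):
    """Decision vector y = (q nodes, v nodes, u nodes)."""
    n = N + 1
    return y[:n], y[n:2 * n], y[2 * n:]

def cost(y):
    _, _, u = unpack(y)
    # trapezoidal quadrature of the running cost u(t)^2 (cf. Eq. 30)
    return dt * np.sum((u[:-1]**2 + u[1:]**2) / 2)

def defects(y):
    # trapezoidal collocation of q' = v, v' = u, plus boundary conditions
    q, v, u = unpack(y)
    dq = q[1:] - q[:-1] - dt * (v[:-1] + v[1:]) / 2
    dv = v[1:] - v[:-1] - dt * (u[:-1] + u[1:]) / 2
    bc = np.array([q[0], v[0], q[-1] - 1.0, v[-1]])
    return np.concatenate([dq, dv, bc])

res = minimize(cost, np.zeros(3 * (N + 1)), method="SLSQP",
               constraints={"type": "eq", "fun": defects},
               options={"maxiter": 300})
q, v, u = unpack(res.x)
print(res.fun)  # close to the continuous optimum of 12
```

The dynamical equations appear only as algebraic "defect" constraints between neighboring nodes, which is exactly how Eqs. 8-11 enter the full problem; a broad, even random, initial guess is acceptable because the NLP solver handles feasibility and optimality together.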
In order to check their performance we simulated the case of a particle of mass 1 mH and a final time of tf = 20,000 au. In doing so, the number of time discretization nodes has been scanned from 200 to 6,000. To evaluate the discretization error we use the maximum relative local error, εdisc, defined in Ref. [40]. The results are shown in Figure 3. If the number of nodes is below 1,000, the trapezoidal method has a smaller error εdisc than Hermite-Simpson for the same number of nodes. Beyond 1,000 nodes, Hermite-Simpson outperforms the trapezoidal discretization. However, this comes at the expense of an increased computational time, as can be seen in the lower panel of Figure 3. For the simulations reported below we have used the Hermite-Simpson discretization with 2,000 nodes, which offers a good balance between accuracy and speed.

FIGURE 2. Initial guess ((A) and (C)) and optimal solution ((B) and (D)) for the state, a(t), and the control field for two different particle masses (1 mH: (A) and (B); 5 mH: (C) and (D)).

FIGURE 3. Maximum relative local error (upper panel) and timing (lower panel) for trapezoidal (blue) and Hermite-Simpson (orange) discretization as a function of the number of nodes.

In order to quantify the importance of quantum effects beyond the simple Gaussian ansatz for the wavepacket, Eq. 7, MCTDH simulations have been performed using the optimized field. For this purpose the Heidelberg MCTDH package has been used [42].

3 Results

3.1 Laser-Controlled Proton Transfer

In the following we present a proof-of-principle application of direct OCT using the example of proton transfer in a bistable potential. Specifically, the two cases (particle masses) given in Figure 1 will be considered. For the initial state we choose the parameters of a Gaussian in the left well, and as the target state we choose a symmetrically located Gaussian in the right-side well. The Gaussian parameters have been optimized to the ground state using a local harmonic approximation.
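The local harmonic approximation for the initial Gaussian parameters can be sketched as follows. Both the quartic potential form and the width convention |Ψ|² ∝ exp(−2α(x − x0)²) are assumptions here, since the Gaussian ansatz (Eq. 7) is defined in an earlier part of the paper:

```python
import numpy as np

# Atomic units; model parameters from the text, 1 mH approximated as 1836 m_e.
x_B, V_B, m = 2.0, 0.01, 1836.0

# Assumed quartic double well V(x) = V_B (x^2/x_B^2 - 1)^2:
# its curvature at the minimum x = -x_B is V'' = 8 V_B / x_B**2.
V_pp = 8.0 * V_B / x_B**2
omega = np.sqrt(V_pp / m)      # local harmonic frequency

# The harmonic ground state has <(x - x0)^2> = 1/(2 m omega) (hbar = 1).
# With |Psi|^2 ~ exp(-2 alpha (x - x0)^2), <(x - x0)^2> = 1/(4 alpha),
# so matching the two fixes the initial width parameter:
alpha0 = 0.5 * m * omega
x0 = -x_B                      # Gaussian centered in the left well
```

Under these assumptions the initial width parameter comes out as α0 ≈ 3.0 a0⁻² for m = 1 mH.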
Although direct control in principle allows one to vary the final time, in the present application the final time has been fixed to tf = 20,000 au. The penalty factor has been chosen as κ = 0.3 a0²/Eh (cf. Eq. 18). To solve the problem we also have to provide an initial guess for the states and the control, which is shown in Figures 2A,C. The rapid oscillations have been chosen randomly; there is no correlation between the different variables. The optimal solutions for the two particle masses are given in Figures 2B,D. Apparently, the optimal field is able to drive the center of the wavepacket across the barrier into the right minimum at t = tf. In this respect one should note that the optimal fields have a relatively simple shape and little resemblance to the initial guess. This is one of the major advantages of the direct approach to optimal control problems, i.e., the region of convergence with respect to the initial guess is very broad. The dynamics is rather similar in both cases: the trajectory passes the barrier coming from the turning point at the left-hand side. Just before and after the barrier the wavepacket becomes localized in coordinate space and delocalized in momentum space, whereas the position-momentum correlation (β) vanishes. The wavepacket passes the top of the barrier with large momentum.

The question now arises whether the optimal field found for a single Gaussian wavepacket is able to trigger the same particle dynamics in the full quantum case. To this end the optimal field is used within a quantum dynamics simulation. The results are compared in Figure 4 in terms of the coordinate and momentum expectation values and the respective standard deviations. Until after the barrier crossing, Gaussian and full quantum results are rather similar. Indeed, had the goal been to trigger the localization of the wavepacket somewhere in the region of the right well at a particular time, the optimal field would still perform this task in the quantum case as well.
Of course, the agreement between Gaussian and full quantum propagation is better in the case of the heavier mass, even though there is a considerably larger spread of the wavepacket in the quantum case after reflection at the right turning point. For the lighter mass the agreement after barrier crossing is less favorable due to the larger spread and the structured character of the quantum wavepacket, which cannot be captured by a single Gaussian.

FIGURE 4. Comparison of the coordinate (top row) and momentum (bottom row) expectation values and their respective standard deviations (shaded areas), using the Gaussian approximation (blue) and the full quantum propagation (orange). Both trajectories are propagated under the influence of the optimal control field as obtained for the Gaussian ((A) 1 mH, (B) 5 mH).

3.2 Region of Validity of the Gaussian Wavepacket Approximation

Single Gaussians cannot capture the dynamics of structured wavepackets. Nevertheless, the agreement between Gaussian and full quantum results is at least qualitative, even for the lighter particle. This provides the motivation for investigating the validity of the Gaussian approximation over a wider range of parameters. Again the optimal field is obtained following the procedure described in Section 3.1, but now for different final times (ranging from 5,000 au to 20,000 au in steps of 1,000 au) and masses (ranging from 1 to 10 mH in steps of 1 mH). To evaluate the performance of the optimal field in driving the wavepacket to the right well in the full quantum case we choose the following error measure

Err = |x0^t − ⟨Ψ̃(tf)|x|Ψ̃(tf)⟩|/xB , (31)

where x0^t is the position of the target state and Ψ̃(tf) is the exact quantum wavefunction at the final time. This error will be between 0 and 1 if the expectation value of the quantum wavepacket crossed the barrier, and greater than 1 if it did not. Results are shown in Figure 5.

FIGURE 5. Error according to Eq. 31 as a function of different final times and masses.
Green lines represent an odd number of half harmonic-oscillation periods for the corresponding mass, (2n+1)T/2 with n = 1, 2, 3, and red lines represent an integer number of periods, nT with n = 2, 3. In general, we can see from Figure 5 that the Gaussian optimal control fields are able to drive the particle reaction over a broad range of masses and final times. As expected, the performance deteriorates for the lighter masses. There are some features which deserve closer attention. For example, there are regions where the Gaussian wavepacket approach works exceptionally well (characterized by stripes of intense blue color). In these regions the final time matches an integer number of well oscillations plus the barrier-crossing time. Assuming that these oscillations are harmonic with period T, and taking the barrier-crossing time as half of the harmonic period, these final times can be estimated. The middle green line in Figure 5 corresponds to a final time of 5T/2; it nicely matches the dark blue region where the approach works well. Thus, in general one would expect regions at (2n+1)T/2 and nT where the approximation works well and not so well, respectively. This is roughly seen in Figure 5, although deviations from the harmonic approximation cause some quantitative disagreement. This analysis points to the importance of the final time tf for the effect of the quantumness of the dynamics on the overlap with the target. In passing we note that in principle direct optimal control offers the possibility to optimize the final time as well, e.g., to fulfill some constraints with respect to the spread of the wavepacket.

Another interesting feature apparent from Figure 5 is the isolated “islands” of poor performance, e.g. at tf = 14,000 au and m = 7 mH. To rationalize this behavior, Figure 6 shows various expectation values for tf = 14,000 au and m = 6 and 7 mH.
The first and second rows compare Gaussian and quantum results; we notice that the corresponding trajectories diverge considerably more for 7 mH (B) than for 6 mH (A), even though a naive consideration would suggest that the single Gaussian approximation performs better for the more massive particle. In general we observe that in the well-performing cases the wavepacket essentially stays localized, while the opposite is true for the poorly performing cases, which stands out as a likely reason for the discrepancy between Gaussian and quantum propagation in the latter case. This holds irrespective of the actual mass of the particle. From the second and fourth rows of Figure 6 we notice that the cases m = 6 and 7 mH differ in the momentum, and thus the kinetic energy, when crossing the barrier. While in the former case the momentum is maximal at the barrier top, in the latter the particle is slowed down when reaching the barrier. As a consequence it becomes rather delocalized in position space, and thus the single Gaussian approximation fails.

FIGURE 6. Expectation values of coordinate and momentum (shaded areas indicate the standard deviation), optimal field, as well as total (Etot), potential (V), and kinetic (K) energy of the moving wave packet (rows from top to bottom) for tf = 14,000 au and (A) 6 mH, (B) 7 mH. In the bottom row the expectation values are plotted at the respective positions of the Gaussian wavepacket.

In principle one could expect that decreasing the penalty factor κ would alleviate this problem, i.e. stronger fields would imply higher momentum. However, inspection of Figure 6 shows that, for a given final time, it is the initial direction of the momentum which decides whether the wavepacket will pass the barrier with high or low momentum. This supports the conclusion that not only the mass of the particle but also the specific optimal path is important for the validity of the single Gaussian approximation.
Controlling the initial direction in a way which works in a black-box fashion for all cases covered in Figure 5 has not been successful. However, in contrast to indirect control, where one would have to compute running-cost derivatives with respect to the state variables to obtain coupling terms between the forward and backward Schrödinger equations, including additional running costs is straightforward in direct control. To demonstrate this we have added a second term to the running costs of Eq. 18, which serves to maximize the kinetic energy, i.e.

ℒ[p0(t), t] = −η p0²(t)/(2m) . (32)

Here, η is a penalty scaling factor and the minus sign ensures that this term gets maximized. It is expected that this will lead to barrier crossing with high momentum and thus a reduced error, Eq. 31. The results shown in Figure 7 clearly support this hypothesis: adding the running cost of Eq. 32 eliminates the poorly performing islands. Hence, using the flexibility of the direct optimal control approach, the region of validity of the single Gaussian approximation could be extended.

FIGURE 7. Error according to Eq. 31 as a function of different final times and masses. The running cost according to Eq. 32 has been used together with Eq. 18. The penalty scaling factor was η = 0.003, except for a few cases where lower or higher values have been used, ranging from 0.001 to 0.015.

4 Conclusion

In this paper we have introduced a new tool for quantum optimal control. In contrast to indirect methods, which require the solution of a two-point boundary value problem, the present direct method builds on the first-discretize-then-optimize paradigm. Thus, by construction, there is no need for explicit propagation of a wavepacket. So far direct methods have found application mostly in engineering [23, 40]. The performance and capabilities of the direct method have been demonstrated for the case of one-dimensional particle transfer in a bistable potential.
For simplicity the wavepacket has been approximated by a single Gaussian function, but in principle other forms are possible, e.g., superpositions of Gaussians [28] or even expansions in terms of an eigenstate basis. Of course, Gaussians have the potential advantage of being suited for on-the-fly simulations, which brings OCT into the realm of the dynamics of complex molecular systems, at least in principle. At this point it will be required to explore the scaling of the numerical effort associated with the direct method more thoroughly. Here, we merely explored the dependence on the number of nodes, but the number of parameters will be another limiting factor. Preliminary calculations performed on regular hardware showed that about 50 parameters and 500 nodes are feasible.

For a simple test system the question has been addressed whether the quantumness of the dynamics influences the final control yield, given a field which has been optimized for the single Gaussian approximation. Interestingly, it turned out that nearly complete particle transfer can be achieved for a wide range of masses and final times. Here, the important point is whether the wavepacket crosses the barrier with high or low momentum, which for the given model is decided by the sign of the momentum during the initial dynamics. As a consequence, even an optimization based on a simple Gaussian wavepacket, possibly using on-the-fly dynamics, may provide reasonable control fields.

Data Availability Statement

Author Contributions

ARR has performed the work and analyzed the results. OK has designed the project and supervised the scientific work. All authors have discussed and interpreted the results and contributed to writing the article.

Funding

This work has been funded by the grant Ku952/10-1 from the Deutsche Forschungsgemeinschaft (DFG).

Conflict of Interest

References

1. Judson RS, Rabitz H. Teaching lasers to control molecules. Phys Rev Lett (1992) 68:1500.
doi:10.1103/physrevlett.68.1500

2. Paramonov GK, Savva VA. Resonance effects in molecule vibrational excitation by picosecond laser pulses. Phys Lett A (1983) 97:340–2. doi:10.1016/0375-9601(83)90658-8

3. Tannor DJ, Rice SA. Control of selectivity of chemical reaction via control of wave packet evolution. J Chem Phys (1985) 83:5013–8. doi:10.1063/1.449767

4. Tannor DJ, Kosloff R, Rice SA. Coherent pulse sequence induced control of selectivity of reactions: exact quantum mechanical calculations. J Chem Phys (1986) 85:5805. doi:10.1063/1.451542

5. Brumer P, Shapiro M. Control of unimolecular reactions using coherent light. Chem Phys Lett (1986) 126:541–6. doi:10.1016/s0009-2614(86)80171-3

6. Shi S, Woody A, Rabitz H. Optimal control of selective vibrational excitation in harmonic linear chain molecules. J Chem Phys (1988) 88:6870–83. doi:10.1063/1.454384

7. Shi S, Rabitz H. Selective excitation in harmonic molecular systems by optimally designed fields. Chem Phys (1989) 139:185–99. doi:10.1016/0301-0104(89)90011-6

8. Kosloff R, Rice SA, Gaspard P, Tersigni S, Tannor DJ. Wavepacket dancing: achieving chemical selectivity by shaping light pulses. Chem Phys (1989) 139:201–20. doi:10.1016/0301-0104(89)90012-8

9. Brif C, Chakrabarti R, Rabitz H. Control of quantum phenomena: past, present and future. New J Phys (2010) 12:075008. doi:10.1088/1367-2630/12/7/075008

10. Werschnik J, Gross EKU. Quantum optimal control theory. J Phys B: Mol Opt Phys (2007) 40:R175–211. doi:10.1088/0953-4075/40/18/r01

11. Worth GA, Richings GW. Optimal control by computer. Annu Rep Prog Chem Sect C: Phys Chem (2013) 109:113. doi:10.1039/c3pc90003g

12. Keefer D, de Vivie-Riedle R. Pathways to new applications for quantum control. Acc Chem Res (2018) 51:2279–86. doi:10.1021/acs.accounts.8b00244

13. Brixner T, Gerber G. Quantum control of gas-phase and liquid-phase femtochemistry. ChemPhysChem (2003) 4:418. doi:10.1002/cphc.200200581

14. Prokhorenko VI, Nagy AM, Waschuk SA, Brown LS, Birge RR, Miller RJD. Coherent control of retinal isomerization in bacteriorhodopsin. Science (2006) 313:1257–61. doi:10.1126/science.1130747

15. Stensitzki T, Yang Y, Kozich V, Ahmed AA, Kössl F, Kühn O, et al. Acceleration of a ground-state reaction by selective femtosecond-infrared-laser-pulse excitation. Nat Chem (2018) 10:126–31. doi:10.1038/nchem.2909

16. Nunes CM, Pereira NAM, Reva I, Amado PSM, Cristiano MLS, Fausto R. Bond-breaking/bond-forming reactions by vibrational excitation: infrared-induced bidirectional tautomerization of matrix-isolated thiotropolone. J Phys Chem Lett (2020) 11:8034–9. doi:10.1021/acs.jpclett.0c02272

17. Heyne K, Kühn O. Infrared laser excitation controlled reaction acceleration in the electronic ground state. J Am Chem Soc (2019) 141:11730–8. doi:10.1021/jacs.9b02600

18. Zhu W, Rabitz H. A rapid monotonically convergent iteration algorithm for quantum optimal control over the expectation value of a positive definite operator. J Chem Phys (1998) 109:385–91. doi:10.1063/1.476575

19. Kelly M. An introduction to trajectory optimization: how to do your own direct collocation. SIAM Rev (2017) 59:849–904. doi:10.1137/16m1062569

20. Kappen HJ. An introduction to stochastic control theory, path integrals and reinforcement learning. AIP Conf Proc (2007) 887:149–81. doi:10.1063/1.2709596

21. Chen-Charpentier BM, Jackson M. Direct and indirect optimal control applied to plant virus propagation with seasonality and delays. J Comput Appl Math (2020) 380:112983. doi:10.1016/

22. Betts JT. Practical methods for optimal control and estimation using nonlinear programming. Advances in Design and Control. Philadelphia, PA: Society for Industrial and Applied Mathematics (2010).

23. Pardo D, Moller L, Neunert M, Winkler AW, Buchli J. Evaluating direct transcription and nonlinear optimization methods for robot motion planning. IEEE Robot Autom Lett (2016) 1:946–53. doi:10.1109/lra.2016.2527062

24. Meyer H-D, Manthe U, Cederbaum LS. The multi-configurational time-dependent Hartree approach. Chem Phys Lett (1990) 165:73–8. doi:10.1016/0009-2614(90)87014-i

25. Beck M, Jäckle A, Worth GA, Meyer HD. The multiconfiguration time-dependent Hartree (MCTDH) method: a highly efficient algorithm for propagating wavepackets. Phys Rep (2000) 324:1–105. doi:10.1016/s0370-1573(99)00047-2

26. Schröder M, Carreón-Macedo J-L, Brown A. Implementation of an iterative algorithm for optimal control of molecular dynamics into MCTDH. Phys Chem Chem Phys (2008) 10:850. doi:10.1039/b714821f

27. Accardi A, Borowski A, Kühn O. Nonadiabatic quantum dynamics and laser control of Br2 in solid argon. J Phys Chem A (2009) 113:7491–8. doi:10.1021/jp900551n

28. Richings GW, Polyak I, Spinlove KE, Worth GA, Burghardt I, Lasorne B. Quantum dynamics simulations using Gaussian wavepackets: the vMCG method. Int Rev Phys Chem (2015) 34:269–308. doi:10.1080/0144235x.2015.1051354

29. Richings GW, Habershon S. MCTDH on-the-fly: efficient grid-based quantum dynamics without pre-computed potential energy surfaces. J Chem Phys (2018) 148:134116. doi:10.1063/1.5024869

30. Heller EJ. The semiclassical way to dynamics and spectroscopy. Princeton, NJ: Princeton University Press (2018).

31. Kondorskiy A, Nakamura H. Semiclassical formulation of optimal control theory. J Theor Comput Chem (2005) 04:75–87. doi:10.1142/s0219633605001416

32. Kondorskiy A, Mil’nikov G, Nakamura H. Semiclassical guided optimal control of molecular dynamics. Phys Rev A (2005) 72:041401. doi:10.1103/physreva.72.041401

33. Bonačić-Koutecký V, Mitrić R. Theoretical exploration of ultrafast dynamics in atomic clusters: analysis and control. Chem Rev (2005) 105:11–66. doi:10.1021/cr0206925

34. Broeckhove J, Lathouwers L, Kesteloot E, Van Leuven P. On the equivalence of time-dependent variational principles. Chem Phys Lett (1988) 149:547–50. doi:10.1016/0009-2614(88)80380-4

35. Worth GA, Sanz CS. Guiding the time-evolution of a molecule: optical control by computer. Phys Chem Chem Phys (2010) 12:15570. doi:10.1039/c0cp01740j

36. Sundermann K, de Vivie-Riedle R. Extensions to quantum optimal control algorithms and applications to special problems in state selective molecular dynamics. J Chem Phys (2000) 110:1896. doi:10.1063/1.477856

37. Došlić N. Generalization of the Rabi population inversion dynamics in the sub-one-cycle pulse limit. Phys Rev A (2006) 74:013402. doi:10.1103/PhysRevA.74.013402

38. Došlić N, Kühn O, Manz J. Infrared laser pulse controlled ultrafast H-atom switching in two-dimensional asymmetric double well potentials. Ber Bunsen Ges Phys Chem (1998) 102:292–7. doi:10.1002/bbpc.19981020303

39. Došlić N, Abdel-Latif MK, Kühn O. Laser control of single and double proton transfer reactions. Acta Chim Slov (2011) 58:411–24.

40. Becerra VM. Solving complex optimal control problems at no cost with PSOPT. In: IEEE International Symposium on Computer-Aided Control System Design; 2010 Sept 8–10; Yokohama, Japan. Piscataway, NJ: IEEE (2010). p. 1391–6.

41. Wächter A, Biegler LT. On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math Program (2006) 106:25–57. doi:10.1007/s10107-004-0559-y

42. Worth GA, Beck MH, Jäckle A, Meyer H-D, Vendrell O, et al. The MCTDH package, version 8.2 (2000); version 8.3 (2002); version 8.4 (2007); version 8.5 (2011); version 8.5.4 used here. Technical report.

Keywords: optimal control, quantum dynamics, semiclassical dynamics, Gaussian wavepackets, proton transfer

Citation: Ramos Ramos AR and Kühn O (2021) Direct Optimal Control Approach to Laser-Driven Quantum Particle Dynamics. Front. Phys. 9:615168. doi:10.3389/fphy.2021.615168

Received: 08 October 2020; Accepted: 09 February 2021; Published: 23 April 2021.

Edited by: Tamar Seideman, Northwestern University, United States

Reviewed by: Ilya Averbukh, Weizmann Institute of Science, Israel; Ilia Tutunnikov, Weizmann Institute of Science, Rehovot, Israel, in collaboration with reviewer IA; Regina de Vivie-Riedle, Ludwig Maximilian University of Munich, Germany

*Correspondence: O. Kühn
Notes on Quantum Mechanics

PDF version: Notes on Quantum Mechanics – By Logan Thrasher Collins

The Schrödinger equation and wave functions

Overview of the Schrödinger equation and wave functions

Quantum mechanical systems are described in terms of wave functions Ψ(x,y,z,t). Unlike classical functions of motion, wave functions determine the probability that a given particle may occur in some region. The way that this is achieved involves integration and will be discussed later in these notes. To find a wave function, one must solve the Schrödinger equation for the system in question. There are time-dependent and time-independent versions of the Schrödinger equation. In 1D they read, respectively,

iℏ ∂Ψ/∂t = −(ℏ²/2m) ∂²Ψ/∂x² + VΨ  and  −(ℏ²/2m) d²ψ/dx² + Vψ = Eψ,

and the 3D versions replace the second derivative in x by the Laplacian ∇². Here, ℏ is h/2π (and h is Planck’s constant), V is the particle’s potential energy, E is the particle’s total energy, Ψ is a time-dependent wave function, ψ is a time-independent wave function, and m is the mass of the particle. After this point, these notes will focus on 1D cases unless otherwise specified (it will often be relatively straightforward to extrapolate to the 3D case).

For a wave function to make physical sense, the integral of its squared modulus from –∞ to ∞ must equal 1. This reflects the probabilistic nature of quantum mechanics; the probability that a particle may be found anywhere in space must be 1. For this reason, one must usually find a (possibly complex) normalization constant A after finding the wave function solution to the Schrödinger equation. This is accomplished by solving the following integral for A

∫ |A|² Ψ*Ψ dx = 1 (integrated from –∞ to ∞).

Here, Ψ* is the complex conjugate of the wave function without the normalization constant and Ψ is the wave function without the normalization constant.
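As a concrete example of this procedure (my own illustration, not from the notes), normalizing the Gaussian ψ(x) = A e^(−x²/2) numerically recovers the known constant A = π^(−1/4) ≈ 0.751:

```python
import numpy as np

# Unnormalized trial wave function psi(x) = exp(-x**2 / 2) on a finite grid.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2.0)

# Require  integral |A psi|^2 dx = 1  =>  A = 1 / sqrt(integral |psi|^2 dx).
norm = np.sum(np.abs(psi)**2) * dx
A = 1.0 / np.sqrt(norm)

print(A)  # ~0.7511, i.e. pi**(-1/4)
```

The finite grid is safe here because the Gaussian decays far faster than the chosen cutoff at ±10.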
To obtain solutions to the time-dependent Schrödinger equation, one must first solve the time-independent Schrödinger equation to get the ψn(x). The general solution for the time-dependent Schrödinger equation is any linear combination of the products of the ψn(x) with their time-dependent exponential factors

Ψ(x,t) = Σn cn ψn(x) e^(−iEnt/ℏ).

The coefficients cn can be real or complex. Physically, |cn|² represents the probability that a measurement of the system’s energy would return the value En. As such, the sum of all the |cn|² values equals 1. In addition, note that each Ψn(x,t) = ψn(x) e^(−iEnt/ℏ) is known as a stationary state. The reason these solutions are called stationary states is that the expectation values of measurable quantities are independent of time when the system is in a stationary state (as a result of the time-dependent factor canceling out).

Using wave functions

Once a wave function is known, it can be used to learn about the given quantum mechanical system. Though wave functions specify the state of a quantum mechanical system, this state usually cannot undergo measurement without altering the system, so the wave function must be interpreted probabilistically. The way the probabilistic interpretation is achieved will be explained over the course of this section.

Before going further, it will be useful to understand some methods from probability. First, the expectation value is the average of all the possible outcomes of a measurement as weighted by their likelihood (it is not the most likely outcome, as the name might suggest). Next, the standard deviation σ describes the spread of a distribution about an average value. Note that the square of the standard deviation is called the variance. Equations for the expectation value and standard deviation are given as follows. The first equation computes the expectation value for a discrete variable j

⟨f(j)⟩ = Σj f(j) P(j).

Here, P(j) is the probability of measurement f(j) for a given j.
The second equation is a convenient way to compute the standard deviation σ associated with the expectation value for j

σ² = ⟨j²⟩ − ⟨j⟩².

The third equation computes the expectation value for a continuous function f(x)

⟨f(x)⟩ = ∫ f(x) ρ(x) dx.

Here, ρ(x) is the probability density of x. When ρ(x) is integrated over an interval a to b, it gives the probability that measurement x will be found over that interval. The fourth equation is the same as the second, but finds the standard deviation σ for the continuous variable x

σ² = ⟨x²⟩ − ⟨x⟩².

In quantum mechanics, operators are employed in place of measurable quantities such as position, momentum, and energy. These operators play a special role in the probabilistic interpretation of wave functions since they help one to compute an expectation value for the corresponding measurable quantity. To compute the expectation value for a measurable quantity Q in quantum mechanics, the following equation is used

⟨Q⟩ = ∫ Ψ* Q̂ Ψ dx.

Here, Ψ is the time-dependent wave function, Ψ* is the complex conjugate of the time-dependent wave function, and Q̂ is the operator corresponding to Q. Any quantum operator which corresponds to a classical dynamical variable can be expressed in terms of the momentum operator −iℏ(∂/∂x). By rewriting a given classical expression in terms of momentum p and then replacing every p within the expression by −iℏ(∂/∂x), the corresponding quantum operator is obtained. Common examples: position x̂ = x; momentum p̂ = −iℏ(∂/∂x) (in 3D, −iℏ∇); kinetic energy T̂ = −(ℏ²/2m)(∂²/∂x²) (in 3D, −(ℏ²/2m)∇²); and total energy, the Hamiltonian Ĥ = T̂ + V.

Heisenberg uncertainty principle

The Heisenberg uncertainty principle explains why quantum mechanics requires a probabilistic interpretation. According to the Heisenberg uncertainty principle, the more precisely the position of a particle is determined via some measurement, the less precisely its momentum can be known (and vice versa). The Heisenberg uncertainty principle is quantified by the following equation

σx σp ≥ ℏ/2.

The reason for the Heisenberg uncertainty principle comes from the wave nature of matter (and not from the observer effect).
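These pieces (the expectation-value recipe, the momentum operator, and the uncertainty bound) can be checked together numerically. The sketch below builds a Gaussian wavepacket, the minimum-uncertainty case that saturates σx σp = ℏ/2; the grid and width are arbitrary choices of mine:

```python
import numpy as np

hbar = 1.0
x = np.linspace(-15.0, 15.0, 4001)
dx = x[1] - x[0]
sigma = 1.3                                   # arbitrary Gaussian width
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-x**2 / (4*sigma**2))

def expect(op_psi):
    """<psi| op |psi> by grid integration (psi is real here)."""
    return np.sum(np.conj(psi) * op_psi).real * dx

# Position moments.
ex, ex2 = expect(x * psi), expect(x**2 * psi)
sigma_x = np.sqrt(ex2 - ex**2)

# Momentum operator p = -i hbar d/dx applied via central finite differences.
p_psi = -1j * hbar * np.gradient(psi, dx)
p2_psi = -hbar**2 * np.gradient(np.gradient(psi, dx), dx)
ep, ep2 = expect(p_psi), expect(p2_psi)
sigma_p = np.sqrt(ep2 - ep**2)

print(sigma_x * sigma_p / (hbar / 2))  # ~1 for a minimum-uncertainty Gaussian
```

Making the Gaussian narrower shrinks σx while σp grows in exact proportion, which is the position-momentum trade-off described above.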
For a sinusoidal wave, the wave itself is not really located at any particular site; it is instead spread out across the cycles of the sinusoid. For a pulse wave, the wave can be localized to the site of the pulse, but it does not really have a wavelength. There are also intermediate cases where the wavelength is somewhat poorly defined and the location is somewhat well-defined, or vice versa. Since the wavelength of a particle is related to the momentum by the de Broglie formula p = h/λ = 2πℏ/λ, this interplay between wavelength and position applies to momentum and position as well. The Heisenberg uncertainty principle quantifies this interplay.

Some simple quantum mechanical systems

Infinite square well

The infinite square well is a system for which a particle’s V(x) = 0 when 0 ≤ x ≤ a and V(x) = ∞ otherwise. Because the potential energy is infinite outside of the well, the probability of finding the particle there is zero. Inside the well, the time-independent Schrödinger equation is given as follows

d²ψ/dx² = −k²ψ,  where k = √(2mE)/ℏ.

This equation has the same form as the classical simple harmonic oscillator equation. For the infinite square well, certain boundary conditions apply. In order for the wave function to be continuous, it must equal zero once it reaches the walls, so ψ(0) = ψ(a) = 0. The general solution to the infinite square well differential equation is

ψ(x) = A sin(kx) + B cos(kx).

Applying the boundary condition ψ(0) = 0 gives B = 0, so there are only sine solutions to the equation. Furthermore, if ψ(a) = 0, then A sin(ka) = 0. This means that k = nπ/a (where n = 1, 2, 3…). This set of values for k leads to a set of possible discrete energy levels for the system

En = n²π²ℏ²/(2ma²).

To find the constant A, the wave function ψ = A sin(nπx/a) must undergo normalization.
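The quantization result just derived can be verified numerically: inside the well, ψn = A sin(nπx/a) must satisfy −(ℏ²/2m)ψ″ = Enψ with En = n²π²ℏ²/(2ma²). A sketch in arbitrary units (a = 2, m = ℏ = 1 are my choices):

```python
import numpy as np

hbar, m, a = 1.0, 1.0, 2.0
n = 3
x = np.linspace(0.0, a, 2001)[1:-1]        # interior points; psi vanishes at walls
dx = x[1] - x[0]
psi = np.sin(n * np.pi * x / a)

# Inside the well V = 0, so -(hbar^2 / 2m) psi'' should equal E_n psi.
psi_xx = np.gradient(np.gradient(psi, dx), dx)
# Take the median of the pointwise ratio to suppress noise near zeros of psi.
E_numeric = np.median(-(hbar**2 / (2 * m)) * psi_xx / psi)
E_exact = n**2 * np.pi**2 * hbar**2 / (2 * m * a**2)
print(E_numeric / E_exact)  # ~1
```

The median is used because the pointwise ratio is ill-conditioned near the nodes of ψ, where both ψ and ψ″ pass through zero.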
As mentioned earlier, normalization is achieved by setting the normalization integral equal to 1 and solving for the constant A; this gives A = √(2/a). Note that the time-independent wave function can be utilized in the normalization integral since the exponential factor of the time-dependent wave function would cancel anyway. Using this information, the wave functions for the infinite square well system are obtained. The time-independent and time-dependent wave functions are, respectively,

ψn(x) = √(2/a) sin(nπx/a)  and  Ψn(x,t) = √(2/a) sin(nπx/a) e^(−iEnt/ℏ).

This infinite set of wave functions has some important properties. They possess discrete energies that grow as n² (with n = 1 as the ground state). The wave functions are also orthonormal. This property is described by the following equation

∫ ψm* ψn dx = δmn.

Here, δmn is the Kronecker delta, defined by δmn = 1 if m = n and δmn = 0 if m ≠ n. Another important property of these wave functions is completeness. This means that any function can be expressed as a linear combination of the time-independent wave functions ψn. The reason for this remarkable property is that the general solution

Ψ(x,0) = Σn cn ψn(x)

is equivalent to a Fourier series. The first equation below can be employed to compute the nth coefficient cn

cn = ∫ ψn*(x) f(x) dx.

Here, f(x) = Ψ(x,0), which is an initial wave function. Note that the initial wave function can be any function Ψ(x,0), and the result will generate coefficients for that starting point. This first equation is derived using the orthonormality of the solution set. Note that the formula applies to most quantum mechanical systems, since the properties of orthonormality and completeness hold for most quantum mechanical systems (though there are some exceptions). The second equation computes the cn coefficients specifically for the infinite square well system

cn = √(2/a) ∫ sin(nπx/a) f(x) dx  (integrated from 0 to a).

Quantum harmonic oscillator

For the quantum harmonic oscillator, the potential energy in the Schrödinger equation is given by V(x) = ½kx² = ½mω²x².
This means that the following time-independent Schrödinger equation needs to be solved. There are two main methods for solving this differential equation: a ladder operator approach and a power series approach. Both of these methods are quite complicated and will not be covered here. The solutions for n = 0, 1, 2, 3, 4, 5 are given below. Here, Hn(y) is the nth Hermite polynomial. The first several Hermite polynomials and the corresponding energies for the system are given in the table. Note that the discrete energy levels for the quantum harmonic oscillator follow the form (n + 0.5)ћω. As with any quantum mechanical system, the quantum harmonic oscillator is further described by the general time-dependent solution. To identify the coefficients cn for this general solution, Fourier’s trick is employed (see previous section) where f(x) is once again any initial wave function Ψ(x,0). Quantum free particle Though the classical free particle is a simple problem, there are some nuances which arise in the case of the quantum mechanical free particle and greatly complicate the system. To start, the Schrödinger equation for the quantum free particle is given in the first equation below. Here, k = (2mE)0.5/ћ. Note that V(x) = 0 since there is no external potential acting on the particle. The second equation below is a general time-independent solution to the system in exponential form. The third equation below is the time-dependent solution to the system where the terms are multiplied by e–iEt/ћ. Realize that this general solution can be written as a single term by redefining k as ±(2mE)0.5/ћ. When k > 0, the solution is a wave propagating to the right. When k < 0, the solution is a wave propagating to the left. The speed of these propagating waves can be found by dividing the coefficient of t (which is ћk2/2m) by the coefficient of x (which is k). Since this is a speed, the direction of the wave does not matter, so one can take the absolute value of k.
By contrast, the speed of a classical particle is found by solving E = 0.5mv2, which gives a puzzling result that is twice as fast as the quantum particle. Another challenge associated with the quantum free particle is that its wave function is non-normalizable (as shown below). Because of this, one can conclude that free particles cannot exist in stationary states. Equivalently, free particles never exhibit definite energies. To resolve these issues with the quantum free particle, it has been found that the wave function of a quantum free particle actually carries a range of energies and speeds known as a wave packet. The solution for this wave packet involves the integral given by the first equation below and a function ϕ(k) given by the second equation below. This second equation allows one to determine ϕ(k) to fit a desired initial wave function Ψ(x,0). It was obtained using a mathematical tool called Plancherel’s theorem. The above solution to the quantum free particle is now normalizable. Furthermore, the issue with the speed of the quantum free particle having a value twice as large as the speed of the classical free particle is fixed by considering a phenomenon known as group velocity. The waveform of the particle is an oscillating sinusoid (see image). This waveform includes an envelope, which represents the overall shape of the oscillations rather than the individual ripples. The group velocity vg is the speed of this envelope while the phase velocity vp is the speed of the ripples. It can be shown using the definitions of phase velocity and group velocity (see below) that the group velocity is twice the phase velocity, resolving the problem with the particle speed. The group velocity of the envelope is thus what actually corresponds to the speed of the particle. 
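The velocity bookkeeping above can be verified in a few lines (Python assumed; ћ = m = 1 is an arbitrary unit choice, and the function names are mine): the dispersion relation ω = ћk2/2m gives a phase velocity ω/k = ћk/2m, while the group velocity dω/dk = ћk/m matches the classical speed obtained from E = 0.5mv2.

```python
HBAR = M = 1.0  # illustrative units

def omega(k, m=M, hbar=HBAR):
    """Free-particle dispersion relation: omega(k) = hbar k^2 / (2 m)."""
    return hbar * k**2 / (2 * m)

def phase_velocity(k, m=M, hbar=HBAR):
    """v_p = omega / k, the speed of the individual ripples."""
    return omega(k, m, hbar) / k

def group_velocity(k, m=M, hbar=HBAR, dk=1e-6):
    """v_g = d(omega)/dk via a central difference, the speed of the envelope."""
    return (omega(k + dk, m, hbar) - omega(k - dk, m, hbar)) / (2 * dk)

def classical_speed(k, m=M, hbar=HBAR):
    """Solve E = 0.5 m v^2 with E = hbar^2 k^2 / (2 m)."""
    E = hbar**2 * k**2 / (2 * m)
    return (2 * E / m) ** 0.5
```

Evaluating these at any k shows vg = 2vp and vg equal to the classical speed, which is exactly the resolution described above.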
Interlude on bound states and scattering states To review, the solutions to the Schrödinger equation for the infinite square well and quantum harmonic oscillator were normalizable and labeled by a discrete index n while the solution to the Schrödinger equation for the free particle was not normalizable and was labeled by a continuous variable k. The solutions which are normalizable and labeled by a discrete index are known as bound states. The solutions which are not normalizable and labeled by a continuous variable are known as scattering states. Bound states and scattering states are related to certain classical mechanical phenomena. Bound states correspond to a classical particle in a potential well where the energy is not large enough for the particle to escape the well. Scattering states correspond to a particle which might be influenced by a potential but has a large enough energy to pass through the potential without getting trapped. In quantum mechanics, bound states occur when E < V(∞) and E < V(–∞) since the phenomenon of quantum tunneling allows quantum particles to leak through any finite potential barrier. Scattering states occur when E > V(∞) or E > V(–∞). Since most potentials go to zero at infinity or negative infinity, this simplifies to bound states happening when E < 0 and scattering states happening when E > 0. The infinite square well and the quantum harmonic oscillator represent bound states since V(x) goes to ∞ when x → ±∞. By contrast, the quantum free particle represents a scattering state since V(x) = 0 everywhere. However, there are also potentials which can result in both bound and scattering states. These kinds of potentials will be explored in the following sections. Delta-function well Recall that the Dirac delta function δ(x) is an infinitely high and infinitely narrow spike at the origin with an area equal to 1 (the area is obtained by integrating). The spike appears at the point a along the x axis when δ(x – a) is used.
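The classification rule just stated can be written as a one-function sketch (Python assumed; the function name is mine):

```python
def classify_state(E, V_plus_inf=0.0, V_minus_inf=0.0):
    """Bound if E lies below the potential at BOTH infinities (any finite
    barrier can be tunneled through, so only the limits at +/- infinity
    matter); otherwise the state is a scattering state."""
    if E < V_plus_inf and E < V_minus_inf:
        return "bound"
    return "scattering"
```

For potentials that vanish at infinity this reduces to E < 0 for bound states and E > 0 for scattering states, while the infinite square well and harmonic oscillator (V → ∞ at ±∞) admit only bound states.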
One important property of the Dirac delta function is that f(x)δ(x – a) = f(a)δ(x – a). By integrating both sides of the equation of this property, one can obtain the following useful expression. Note that a ± ϵ is used as the bounds since any positive value ϵ will then allow the bounds to encompass the Dirac delta function spike. The delta-function well is a potential of the form –αδ(x) where α is a positive constant. As a result, the time-independent Schrödinger equation for the delta-function well system is given as follows. This equation has solutions that yield bound states when E < 0 and scattering states when E > 0. For the bound states where E < 0, the general solutions are given by the equations below. The substitution κ is defined by the first equation below, the second equation below is the general solution for x < 0, and the third equation below is the general solution for x > 0. (Since E is assumed to have a negative value, κ is real and positive). Note that V(x) = 0 for x < 0 and x > 0. In the solution for x < 0, the Ae–κx term explodes as x → –∞, so A must equal zero. In the solution for x > 0, the Geκx term explodes as x → ∞, so G must equal zero. To combine these equations, one must use appropriate boundary conditions at x = 0. For any quantum system, ψ is continuous and dψ/dx is continuous except at points where the potential is infinite. The requirement for ψ to exhibit continuity means that F = B at x = 0. As a result, the solution for the bound states can be concisely stated as follows. In addition, a plot of the delta-function well’s bound state time-independent wave function is given below. The presence of the delta function influences the energy E. To find the energy, one can integrate the time-independent Schrödinger equation for the delta-function well system. By making the bounds of integration ±ϵ and then taking the limit as ϵ approaches zero, the integral works only on the negative spike of the delta function at x = 0.
The result for the energy is at the end of the following set of equations. As seen above, the delta-function well only exhibits a single bound state energy E. By normalizing the wave function ψ(x) = Be–κ|x|, the constant B is found (as seen in the first equation below). The second equation below describes the single bound state wave function and reiterates the single bound state energy associated with this wave function. For the scattering states where E > 0, the general solutions are given by equations below. The substitution k is defined by the first equation below, the second equation below is the general solution for x < 0, and the third equation below is the general solution for x > 0. (Since E is assumed to have a positive value, k is real and positive). Note that V(x) = 0 for x < 0 and x > 0. None of the terms explode this time, so none of the terms can be ruled out as equal to zero. As a consequence of the requirement for ψ(x) to be continuous at x = 0, the following equation involving the constants A, B, F, and G must hold true. This is the first boundary condition. There is also a second boundary condition which involves dψ/dx. Recall the following step (see first equation below) from the process of integrating the Schrödinger equation. To implement this step, the derivatives of ψ(x) (see second equation below) are found and then the limits of these derivatives from the left and right directions are taken (see third equation below). Since ψ(0) = A + B as seen in the equation above, the second boundary condition can be given as the final equation below. By rearranging the final equation above and substituting in a parameter β = mα/ћ2k, the following expression is obtained. This expression is a compact way of writing the second boundary condition. These two boundary conditions provide two equations, but there are four unknowns in these equations (five unknowns if k is included). 
Despite this, the physical significance of the unknown constants can be helpful. When eikx is multiplied by the factor for time-dependence e–iEt/ћ, it gives rise to a wave propagating to the right. When e–ikx is multiplied by the factor for time-dependence e–iEt/ћ, it gives rise to a wave propagating to the left. As a result, the constants describe the amplitudes of various waves. A is the amplitude of a wave moving to the right on the x < 0 side of the delta-function potential, B is the amplitude of a wave moving to the left on the x < 0 side of the delta-function potential, F is the amplitude of a wave moving to the right on the x > 0 side of the delta-function potential, and G is the amplitude of a wave moving to the left on the x > 0 side of the delta-function potential. In a typical experiment on this type of system, particles are fired from one side of the delta-function potential, the left or the right. If the particles are coming from the left (moving to the right), the term with G will equal zero. If the particles are coming from the right (moving to the left), the term with A will equal zero. This can be understood intuitively by examining the figure above. As an example, for the case of particles fired from the left (moving to the right), A is the amplitude of the incident wave, B is the amplitude of the reflected wave, and F is the amplitude of the transmitted wave. The equations of the two boundary conditions are reiterated in the first line below. By solving these equations, the second line of expressions is found. Since the probability of finding a particle at a certain location is |Ψ|2, the relative probability R of an incident particle undergoing reflection and the relative probability T of an incident particle undergoing transmission are given by the third line of expressions below.  
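Numerically, the reflection and transmission probabilities just described can be sketched as follows (Python assumed, with ћ = m = α = 1 as arbitrary defaults and function names of my choosing; these implement the standard results R = β2/(1 + β2) and T = 1/(1 + β2)):

```python
import math

def beta(E, alpha=1.0, m=1.0, hbar=1.0):
    """beta = m alpha / (hbar^2 k) with k = sqrt(2 m E) / hbar."""
    k = math.sqrt(2.0 * m * E) / hbar
    return m * alpha / (hbar**2 * k)

def reflection(E, alpha=1.0, m=1.0, hbar=1.0):
    """R = beta^2 / (1 + beta^2), equivalently 1 / (1 + 2 hbar^2 E / (m alpha^2))."""
    b = beta(E, alpha, m, hbar)
    return b**2 / (1.0 + b**2)

def transmission(E, alpha=1.0, m=1.0, hbar=1.0):
    """T = 1 / (1 + beta^2), equivalently 1 / (1 + m alpha^2 / (2 hbar^2 E))."""
    b = beta(E, alpha, m, hbar)
    return 1.0 / (1.0 + b**2)
```

As expected, R + T = 1 for any energy, and the transmission probability approaches 1 as the energy grows.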
Also for the example case of particles fired from the left (moving to the right), by substituting back from β = mα/ћ2k and k = (2mE)0.5/ћ to get the expressions in terms of energy, the following equations are obtained for the reflection and transmission relative probabilities. By performing the same process, but with A = 0 instead of G = 0, corresponding equations can be found for the case of particles fired from the right (moving towards the left). It is important to note that, since these scattering wave functions are not normalizable, they do not actually represent possible particle states. To solve this problem, one must construct normalizable linear combinations of the stationary states in a manner similar to that performed with the quantum free particle system. In this way, wave packets will occur and the actual particles will be described by the range of energies of the wave packets. Because the actual normalizable system exhibits a range of energies, the probabilities R and T should be thought of as approximate measures of reflection and transmission for particles with energies in the vicinity of E. Finite square well The finite square well is a system for which a particle’s V(x) = –V0 when –a ≤ x ≤ a and its V(x) = 0 otherwise. For this system, the Schrödinger equation is given as follows for the conditions x < –a, –a ≤ x ≤ a, and x > a. Note that the equations for x < –a and x > a are the same since V(x) = 0 in both cases (but the boundary conditions will differ as will be explained soon). As with the Delta-function potential well, the finite square well has both bound states (with E < 0) and scattering states (with E > 0). First, the bound states with E < 0 will be considered. In this case, the Schrödinger equations for the finite square well are as follows. For the cases of x < –a and x > a where V(x) = 0, the general solutions to the Schrödinger equation are respectively Ae–κx + Beκx and Fe–κx + Geκx where A, B, F, and G are arbitrary constants. 
In the x < –a case, the Ae–κx term blows up as x → –∞, making this term physically invalid. As a result, the physically admissible solution is ψ(x) = Beκx. In the x > a case, the Geκx term blows up as x → ∞, making this term physically invalid. As a result, the physically admissible solution is ψ(x) = Fe–κx. For the case of –a ≤ x ≤ a, the general solution to the Schrödinger equation is ψ(x) = Csin(lx) + Dcos(lx). Note that, because E must be greater than the minimum potential energy Vmin = –V0, the value of l ends up real and positive (even though E is also negative). These solutions are summarized by the following equations. Since the potential V(x) = –V0 is an even function (symmetric about the y axis), one can choose to write the solutions to the wave function as either even or odd. This comes from some properties of the time-independent Schrödinger equation. Next, it is again important to constrain these solutions using the boundary conditions which require the continuity of ψ(x) and dψ/dx at ±a. For the even solutions, the constant C in ψ(x) = Csin(lx) + Dcos(lx) is zero. Because C = 0, the remaining equation is the even function ψ(x) = Dcos(lx) for –a ≤ x ≤ a. So, the continuity of ψ(x) and dψ/dx at +a necessitates the following two equations to hold true. The third equation comes from dividing the second equation by the first equation to solve for κ. For the odd solutions, the constant D in ψ(x) = Csin(lx) + Dcos(lx) is zero. Because D = 0, the remaining equation is the odd function ψ(x) = Csin(lx) for –a ≤ x ≤ a. So, the continuity of ψ(x) and dψ/dx at +a necessitates the following two equations to hold true. The third equation comes from dividing the second equation by the first equation to solve for κ. As κ and l are both functions of E, the κ = ltan(la) and κ = –lcot(la) equations can be solved for E. To do this, it is convenient to use the notation z = la and z0 = (a/ћ)(2mV0)0.5.
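In this notation, the even-state condition κ = ltan(la) becomes tan(z) = ((z0/z)2 – 1)0.5, which is transcendental but straightforward to solve numerically. Below is a bisection sketch (Python assumed; the function name is mine) for the smallest even root, which exists for every z0 > 0 and corresponds to the bound state that never disappears:

```python
import math

def first_even_z(z0, tol=1e-12):
    """Smallest root of tan(z) = sqrt((z0/z)^2 - 1) on (0, pi/2).
    g(z) = tan(z) - sqrt((z0/z)^2 - 1) increases monotonically from
    negative values near z = 0 to positive values near the right
    endpoint, so bisection is safe."""
    def g(z):
        return math.tan(z) - math.sqrt((z0 / z) ** 2 - 1.0)
    lo = 1e-9
    hi = min(math.pi / 2 - 1e-9, z0 * (1.0 - 1e-12))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

After obtaining z, the energy follows from l = z/a and E = ћ2l2/2m – V0.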
Simplifying the κ = ltan(la) and κ = –lcot(la) equations using this notation gives the following results. These equations can be solved numerically for z or graphically for z by looking for points of intersection (after obtaining z, E is easily computed). Let us consider the tan(z) equation. There are two limiting cases of interest: a well which is wide and deep and a well which is shallow and narrow. Though not included in these notes, similar calculations can be performed for the –cot(z) equation. For a wide and deep well, the value of z0 is large. Intersections between the curves of tan(zn) and ((z0/zn)2 – 1)0.5 occur just below zn = nπ/2 for odd n (the –cot(z) equation supplies intersections near even multiples of π/2). This leads to the following equations which describe values of En. From this outcome, it can be seen that infinite V0 results in the infinite square well case with an infinite number of bound states. However, for any finite square well, there are only a finite number of bound states. For a shallow and narrow well, the value of z0 is small. As the value of z0 decreases, fewer and fewer bound states exist. Once z0 is smaller than π/2, there is only one bound state (which is an even bound state). Interestingly, no matter how small the well, this one bound state always persists. The scattering states, which occur when E > 0, will now be considered. In this case, the Schrödinger equations for the finite square well are as follows. The general solutions to the Schrödinger equation for the finite square well’s scattering states are as follows. But recall that in a typical scattering experiment, particles are fired from one side of the potential, the left or the right. Here it will be assumed that the particles are fired from the left side of the well (moving towards the right). Note that similar calculations could be performed for the opposite case.
With this assumption, one can realize that the coefficient A represents the incident (from the left) wave’s amplitude, the coefficient B represents the reflected wave’s amplitude, and the coefficient F represents the transmitted (to the right) wave’s amplitude. Finally, the coefficient G = 0 since there is not an incident wave from the right moving towards the left. There are four boundary conditions: continuity of ψ(x) at ±a and continuity of dψ/dx at ±a. These boundary conditions yield the following equations. With the above equations, one can eliminate C and D and subsequently solve the system for B and F. This yields the equations below for B and F. As with the delta-function well, a transmission coefficient T = |F|2/|A|2 can be computed across the finite square well. Recall that T represents the probability of the particle undergoing transmission across the well (in this case when moving from the left side to the right side). The probability of the particle undergoing reflection is R = 1 – T. Since 1/T equals the equation below, whenever the sine squared term is zero, the probability of transmission T = 1. Recall that a sine (or sine squared) term is zero when the function inside of it equals nπ such that n is any integer. Remarkably, the above equation is the same as the one which describes the infinite square well’s energies. But realize that, for the finite square well, this only holds in the case of T = 1. Reference: Griffiths, D. J., & Schroeter, D. F. (2018). Introduction to Quantum Mechanics (3rd ed.). Cambridge University Press.
10.1017/9781316995433

Notes on x-ray physics: Thomson scattering and Compton scattering; Scattering from atoms; Refraction, reflection, and absorption; X-ray fluorescence and Auger emission

Global Highlights in Neuroengineering 2005-2018

PDF version: global highlights in neuroengineering 2005-2018 – logan thrasher collins

Optogenetic stimulation using ChR2 (Boyden, Zhang, Bamberg, Nagel, & Deisseroth, 2005) • Ed Boyden, Karl Deisseroth, and colleagues developed optogenetics, a revolutionary technique for stimulating neural activity. • Optogenetics involves engineering neurons to express light-gated ion channels. The first channel used for this purpose was ChR2 (a protein originally found in algae which responds to blue light). In this way, a neuron exposed to an appropriate wavelength of light will be stimulated. • Over time, optogenetics has gained a place as an essential experimental tool for neuroscientists across the world. It has been expanded upon and improved in numerous ways and has even allowed control of animal behavior via implanted fiber optics and other light sources. Optogenetics may eventually be used in the development of improved brain-computer interfaces. Blue Brain Project cortical column simulation (Markram, 2006) • In the early stages of the Blue Brain Project, neuronal cell types from the layers of the rat neocortex were reconstructed. Furthermore, their electrophysiology was experimentally characterized. • Next, a virtual neocortical column with about 10,000 multicompartmental Hodgkin-Huxley-type neurons and over ten million synapses was built. Its connectivity was defined according to the patterns of connectivity found in biological rats (though this involved the numbers of inputs and outputs quantified for given cell types rather than explicit wiring).
In addition, the spatial distributions of boutons forming synaptic terminals upon target cells reflected biological data. • The cortical column was emulated using the Blue Gene/L supercomputer and the dynamics of the emulation reflected its biological counterpart. cortical column Optogenetic silencing using halorhodopsin (Han & Boyden, 2007) • Ed Boyden continued developing optogenetic tools to manipulate neural activity. Along with Xue Han, he expressed a codon-optimized version of an archaeal halorhodopsin (along with the ChR2 protein) in neurons. • Upon exposure to yellow light, halorhodopsin pumps chloride ions into the cell, hyperpolarizing the membrane and inhibiting neural activity. • Using ChR2 and halorhodopsin, neurons could be easily activated and inhibited using blue and yellow light respectively. halorhodopsin and chr2 wavelengths Brainbow (Livet et al., 2007) • Lichtman and colleagues used Cre/Lox recombination tools to create genes which express a randomized set of three or more differently-colored fluorescent proteins (XFPs) in a given neuron, labeling the neuron with a unique combination of colors. About ninety distinct colors were emitted across a population of genetically modified neurons. • The detailed structures within neural tissue equipped with the Brainbow system can be imaged much more easily since neurons can be distinguished via color contrast. • As a proof-of-concept, hundreds of synaptic contacts and axonal processes were reconstructed in a selected volume of the cerebellum. Several other neural structures were also imaged using Brainbow. • The fluorescent proteins expressed by the Brainbow system are usable in vivo. High temporal precision optogenetics (Gunaydin et al., 2010) • Karl Deisseroth, Peter Hegemann, and colleagues used protein engineering to improve the temporal resolution of optogenetic stimulation. • Glutamic acid at position 123 in ChR2 was mutated to threonine, producing a new ion channel protein (dubbed ChETA).
• The ChETA protein allows for induction of spike trains with frequencies up to 200 Hz and greatly decreases the incidence of unintended spikes. Furthermore, ChETA eliminates plateau potentials (a phenomenon which interferes with precise control of neural activity). ultrafast optogenetics Hippocampal prosthesis in rats (Berger et al., 2012) • Theodore Berger and his team developed an artificial replacement for neurons which transmit information from the CA3 region to the CA1 region of the hippocampus. • This cognitive prosthesis employs recording and stimulation electrodes along with a multi-input multi-output (MIMO) model to encode the information in CA3 and transfer it to CA1. • The hippocampal prosthesis was shown to restore and enhance memory in rats as evaluated by behavioral testing and brain imaging. In vivo superresolution microscopy for neuroimaging (Berning, Willig, Steffens, Dibaj, & Hell, 2012) • Stefan Hell (2014 Nobel laureate in chemistry) developed stimulated emission depletion microscopy (STED), a type of superresolution fluorescence microscopy which allows imaging of synapses and dendritic spines. • STED microscopy uses a torus-shaped de-excitation laser that interferes with the excitation laser to deplete fluorescence except in a very small spot. In this way, the diffraction limit is surpassed since the resulting light illuminates extremely small regions of the sample. • Neurons in transgenic mice (equipped with glass-sealed holes in their skulls) were imaged using STED. Synapses and dendritic spines were observed up to fifteen nanometers below the surface of the brain tissue. superresolution microscopy in vivo In vivo three-photon microscopy (Horton et al., 2013) • Multi-photon excitation uses pulsed lasers to excite fluorophores with two or more photons of light with long wavelengths. During the excitation, the photons undergo a nonlinear recombination process, yielding a single emitted photon with a much shorter wavelength. 
Because the excitation photons possess long wavelengths, they can penetrate tissue much more deeply than traditional microscopy allows. • Horton and colleagues developed a three-photon excitation method to facilitate even deeper tissue penetration than the commonly used two-photon microscopic techniques. • Since three photons were involved per excitation event, even longer excitation wavelengths (about 1,700 nm) were usable, allowing the construction of a 3-dimensional image stack that reached a depth of up to 1.4 mm within the living mouse brain. • Blood vessels and RFP-labeled neurons were imaged using this approach. Furthermore, the depth was sufficient to enable imaging of neurons within the mouse hippocampus. 3-photon microscopy Whole-brain functional recording from larval zebrafish (Ahrens, Orger, Robson, Li, & Keller, 2013) • Laser-scanning light-sheet microscopy was used to volumetrically image the entire brains of larval zebrafish (an optically transparent organism). • The genetically encoded calcium sensor GCaMP5G facilitated functional recording at single-cell resolution from about 80% of the total neurons in the larval zebrafish brains. Computational methods were used to distinguish between individual neurons. • Populations of neurons that underwent correlated activity patterns were identified to show the technique’s utility for uncovering the dynamics of neural circuits. These populations included hindbrain neurons that were functionally linked to neural activity in the spinal cord and a population of neurons which showed coupled oscillations on the left and right halves. whole-brain recording from larval zebrafish Eyewire: crowdsourcing method for retina mapping (Marx, 2013) • The Eyewire project was created by Sebastian Seung’s research group. It is a crowdsourcing initiative for connectomic mapping within the retina towards uncovering neural circuits involved in visual processing.
• Laboratories first collect data via serial electron microscopy as well as functional data from two-photon microscopy. • In the Eyewire game, images of tissue slices are provided to players who then help reconstruct neural morphologies and circuits by “coloring in” the parts of the images which correspond to cells and stacking many images on top of each other to generate 3D maps. Artificial intelligence tools help provide initial “best guesses” and guide the players, but the people ultimately perform the task of reconstruction. • By November 2013, around 82,000 participants had played the game. Its popularity continues to grow. The BRAIN Initiative (“Fact Sheet: BRAIN Initiative,” 2013) • The BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies) provided neuroscientists with $110 million in governmental funding and $122 million in funding from private sources such as the Howard Hughes Medical Institute and the Allen Institute for Brain Science. • The BRAIN Initiative focused on funding research which develops and utilizes new technologies for functional connectomics. It helped to accelerate research on tools for decoding the mechanisms of neural circuits in order to understand and treat mental illness, neurodegenerative diseases, and traumatic brain injury. • The BRAIN Initiative emphasized collaboration between neuroscientists and physicists. It also pushed forward nanotechnology-based methods to image neural tissue, record from neurons, and otherwise collect neurobiological data. The CLARITY method for making brains translucent (Chung & Deisseroth, 2013) • Karl Deisseroth and colleagues developed a method called CLARITY to make samples of neural tissue optically translucent without damaging the fine cellular structures in the tissue. Using CLARITY, entire mouse brains have been turned transparent.
• Mouse brains were infused with hydrogel monomers (acrylamide and bisacrylamide) as well as formaldehyde and some other compounds for facilitating crosslinking. Next, the hydrogel monomers were crosslinked by incubating the brains at 37°C. Lipids in the hydrogel-stabilized mouse brains were extracted using hydrophobic organic solvents and electrophoresis. • CLARITY allows antibody labeling, fluorescence microscopy, and other optically-dependent techniques to be used for imaging entire brains. In addition, it renders the tissue permeable to macromolecules, which broadens the types of experimental techniques that these samples can undergo (i.e. macromolecule-based stains, etc.) clarity imaging technique X-ray microtomography used to reconstruct Drosophila brain hemisphere (Mizutani, Saiga, Takeuchi, Uesugi, & Suzuki, 2013) • Mizutani and colleagues stained Drosophila brains with silver nitrate and tetrachloroaurate (a gold-containing compound), facilitating 3-dimensional imaging using X-ray microtomography at a voxel size of 220 × 328 × 314 nm. • To generate the X-rays, a synchrotron source was used. It should be noted that synchrotron sources require large facilities to operate. • Neuronal tracing was performed manually on the 3-dimensional X-ray images of the fly brain, a process which took about 1,700 person-hours. Some neuronal processes were too dense to be resolved, so they were “fused” into unified structures. Furthermore, some neuronal traces were fragmented and most of the cell bodies were not considered. This decreased the number of traces to one third of the estimated number of actual processes in the hemisphere. • Mizutani’s investigation represents an early effort at large-scale connectomics that sets the stage for further initiatives as neuronal tracing, sample preparation, and X-ray microtomography technologies continue to improve. traced drosophila brain hemisphere Telepathic rats engineered using hippocampal prosthesis (S. 
Deadwyler et al., 2013) • Berger’s hippocampal prosthesis was implanted in pairs of rats. When “donor” rats were trained to perform a task, they developed neural representations (memories) which were recorded by their hippocampal prostheses. • The donor rat memories were run through the MIMO model and transmitted to the stimulation electrodes of the hippocampal prostheses implanted in untrained “recipient” rats. After receiving the memories, the recipient rats showed significant improvements on the task that they had not been trained to perform. rat telepathy Integrated Information Theory 3.0 (Oizumi, Albantakis, & Tononi, 2014) • Integrated information theory (IIT) was originally proposed by Giulio Tononi in 2004. IIT is a quantitative theory of consciousness which may help explain the hard problem of consciousness. • IIT begins by assuming the following phenomenological axioms: each experience is characterized by how it differs from other experiences, an experience cannot be reduced to independent parts, and the boundaries which distinguish individual experiences are describable as having defined “spatiotemporal grains.” • From these phenomenological axioms and the assumption of causality, IIT identifies maximally irreducible conceptual structures (MICS) associated with individual experiences. MICS represent particular patterns of qualia that form unified percepts. • IIT also outlines a mathematical measure of an experience’s quantity. This measure is called integrated information or ϕ. OpenWorm (Szigeti et al., 2014) • The anatomical C. elegans connectome was originally mapped in 1976 by Albertson and Thomson. More data has since been collected on neurotransmitters, electrophysiology, cell morphology, and other characteristics.
• Szigeti, Larson, and their colleagues created an online platform for crowdsourcing research on C. elegans computational neuroscience, with the goal of completing an entire “simulated worm.”
• The group also released software called Geppetto, a program that allows users to manipulate both multicompartmental Hodgkin-Huxley models and highly efficient soft-body physics simulations (for modeling the worm’s electrophysiology and anatomy).
c. elegans connectome

Expansion microscopy (F. Chen, Tillberg, & Boyden, 2015)
• The Boyden group developed expansion microscopy, a method which enlarges neural tissue samples (including entire brains) with minimal structural distortions and so facilitates superior optical visualization of the scaled-up neural microanatomy. Furthermore, expansion microscopy greatly increases the optical translucency of treated samples.
• Expansion microscopy operates by infusing a swellable polymer network into brain tissue samples along with several chemical treatments to facilitate polymerization and crosslinking, and then triggering expansion via dialysis in water. With 4.5-fold enlargement, expansion microscopy distorts the tissue by only about 1% (computed using a comparison between control superresolution microscopy of easily-resolvable cellular features and the expanded version).
• Before expansion, samples can express various fluorescent proteins to facilitate superresolution microscopy of the enlarged tissue once the process is complete. Furthermore, expanded tissue is highly amenable to fluorescent stains and antibody-based labels.
expansion microscopy

Japan’s Brain/MINDS project (Okano, Miyawaki, & Kasai, 2015)
• In 2014, the Brain/MINDS (Brain Mapping by Integrated Neurotechnologies for Disease Studies) project was initiated to further neuroscientific understanding of the brain. This project received nearly $30 million in funding for its first year alone.
• Brain/MINDS focuses on studying the brain of the common marmoset (a non-human primate abundant in Japan), developing new technologies for brain mapping, and understanding the human brain with the goal of finding new treatments for brain diseases.

The TrueNorth chip from DARPA and IBM (Akopyan et al., 2015)
• The TrueNorth neuromorphic computing chip was constructed and validated by DARPA and IBM. TrueNorth uses circuit modules which mimic neurons. Inputs to these fundamental circuit modules must overcome a threshold in order to trigger “firing.”
• The chip can emulate up to a million neurons with over 250 million synapses while requiring far less power than traditional computing devices.

Human Brain Project cortical mesocircuit reconstruction and simulation (Markram et al., 2015)
• The Human Brain Project reconstructed a 0.29 mm³ region of rat cortical tissue including about 31,000 neurons and 37 million synapses based on morphological data, statistical connectivity rules (rather than exact connectivity), and other datasets. The cortical mesocircuit was emulated using the Blue Gene/Q supercomputer.
• This emulation was sufficiently accurate to reproduce emergent neurological processes and yield insights on the mechanisms of their computations.
cortical mesocircuit

Recording from C. elegans neurons reveals motor operations (Kato et al., 2015)
• Live C. elegans worms were immobilized in microfluidic devices and the neurons in their head ganglia as well as some of their motor systems were imaged and recorded from using the calcium indicator GCaMP. As the C. elegans connectome is well-characterized, Kato and colleagues were able to determine the identities of most of the cells that underwent imaging (with the help of computational segmentation techniques).
• Principal component analysis was used to reduce the dimensionality of the neural activity datasets since over 100 neurons per worm were recorded from simultaneously.
• Next, phase space analysis was utilized to visualize the patterns formed by the recording data. Motor behaviors including dorsal turns, ventral turns, forward movements, and backward movements were found to correspond to specific sequences of neural events as uncovered by examining the patterns found in the phase plots. Further analyses revealed various insights about these brain dynamics and their relationship to motor actions.
c. elegans brain dynamics

Neural lace (Liu et al., 2015)
• Charles Lieber’s group developed a syringe-injectable electronic mesh made of submicrometer-thick wiring for neural interfacing.
• The meshes were constructed using novel soft electronics for biocompatibility. Upon injection, the neural lace expands to cover and record from centimeter-scale regions of tissue.
• Neural lace may allow “invasive” brain-computer interface hardware to be delivered without open surgical implantation. Lieber has continued to develop this technology towards clinical application.
neural lace

BigNeuron initiative towards standardized neuronal morphology acquisition (Peng et al., 2015)
• Because of the inconsistencies between neuronal reconstruction methods and the lack of standardization found in neuronal morphology databases, BigNeuron was established as a community effort to improve the situation.
• BigNeuron tests as many automated neuronal reconstruction algorithms as possible using large-scale microscopy datasets (from several types of light microscopy). It uses the Vaa3D neuronal reconstruction software as a central platform. Reconstruction algorithms are added to Vaa3D as plugins. These computational tests are performed on supercomputers.
• BigNeuron aims to create a superior community-oriented neuronal morphology database, a set of greatly improved tools for neuronal reconstruction, a standardized protocol for future neuronal reconstructions, and a library of morphological feature definitions to facilitate classification.
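The dimensionality-reduction step used in the Kato et al. (2015) study above can be sketched with synthetic data. This is a minimal illustration only: the neuron count, latent dimensionality, noise level, and SVD-based PCA are assumptions for the sketch, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for one recording session: 120 neurons x 1000 time points,
# driven by 3 shared latent signals plus noise (real data would be GCaMP traces).
latents = rng.standard_normal((3, 1000))
mixing = rng.standard_normal((120, 3))
recording = mixing @ latents + 0.1 * rng.standard_normal((120, 1000))

# PCA via SVD on the mean-centered data matrix.
centered = recording - recording.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Project onto the first 3 principal components: a low-dimensional population
# trajectory of the kind used for phase-space plots of brain dynamics.
trajectory = U[:, :3].T @ centered  # shape (3, 1000)

# Fraction of variance captured by each component.
explained = (S ** 2) / (S ** 2).sum()
print(trajectory.shape, explained[:3].sum())
```

Because the synthetic data are essentially rank-3, the first three components capture nearly all the variance, mimicking the situation where a few latent dimensions summarize a whole-brain recording.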
Human telepathy during a 20 questions game (Stocco et al., 2015)
• Using an interactive question-and-answer setup, Stocco and colleagues demonstrated real-time telepathic communication between pairs of individuals via EEG and transcranial magnetic stimulation. Five pairs of participants played games of 20 questions and attempted to identify unknown objects.
• EEG data were recorded from the respondent, computationally processed, and transmitted as transcranial magnetic stimulation signals into the mind (occipital lobe stimulation) of the inquirer. The respondent’s answers were translated into higher-intensity transcranial magnetic stimulation pulses corresponding to “yes” answers or lower-intensity transcranial magnetic stimulation pulses corresponding to “no” answers.
• When compared to control trials in which sham interfaces were used, the people using the brain-brain interfaces were significantly more successful at playing 20 questions games.

Expansion FISH (F. Chen et al., 2016)
• Boyden, Chen, Marblestone, Church, and colleagues combined fluorescent in situ hybridization (FISH) with expansion microscopy to image the spatial localization of RNA in neural tissue.
• The group developed a chemical linker to covalently attach intracellular RNA to the infused polymer network used in expansion microscopy. This allowed RNAs to maintain their relative spatial locations within each cell post-expansion.
• After the tissue was enlarged, FISH was used to fluorescently label targeted RNA molecules. In this way, RNA localization was more effectively resolved.
• As a proof-of-concept, expansion FISH was used to reveal the nanoscale distribution of long noncoding RNAs in nuclei as well as the locations of RNAs within dendritic spines.
expansion fish

Neural dust (Seo et al., 2016)
• Michel Maharbiz’s group invented implantable, ~1 mm biosensors for wireless neural recording and tested them in rats.
• This neural dust could be miniaturized to less than 0.5 mm or even to microscale dimensions using customized electronic components.
• Neural dust motes consist of two recording electrodes, a transistor, and a piezoelectric crystal.
• The neural dust received external power from ultrasound. Neural signals were recorded by measuring disruptions to the piezoelectric crystal’s reflection of the ultrasound waves. Signal-processing techniques allowed precise detection of neural activity.
neural dust

The China Brain Project (Poo et al., 2016)
• The China Brain Project was launched to help understand the neural mechanisms of cognition, develop brain research technology platforms, develop preventative and diagnostic interventions for brain disorders, and improve brain-inspired artificial intelligence technologies.
• This project will take place from 2016 until 2030 with the goal of completing mesoscopic brain circuit maps.
• China’s population of non-human primates and preexisting non-human primate research facilities give the China Brain Project an advantage. The project will focus on studying rhesus macaques.

Somatosensory cortex stimulation for spinal cord injuries (Flesher et al., 2016)
• Gaunt, Flesher, and colleagues found that microstimulation of the primary somatosensory cortex (S1) partially restored tactile sensations to a patient with a spinal cord injury.
• Electrode arrays were implanted into the S1 regions of a patient with a spinal cord injury. The array performed intracortical microstimulation over a period of six months.
• The patient reported locations and perceptual qualities of the sensations elicited by microstimulation. The patient did not experience pain or “pins and needles” from any of the stimulus trains. Overall, 93% of the stimulus trains were reported as “possibly natural.”
• Results from this study might be used to engineer upper-limb neuroprostheses which provide somatosensory feedback.
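The ultrasonic readout scheme in the neural dust entry above can be illustrated with a toy amplitude-modulation model: the mote's backscattered carrier amplitude varies with the electrode voltage, and envelope detection recovers the slow signal. All numbers and the modulation model here are illustrative assumptions, not the actual device parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000_000                      # 10 MHz sample rate (toy value)
t = np.arange(0, 0.001, 1 / fs)      # 1 ms window

carrier = np.sin(2 * np.pi * 1_000_000 * t)   # 1 MHz ultrasound carrier (toy value)

# Toy "neural" voltage: a 1 kHz oscillation standing in for extracellular activity.
neural = 0.5 * np.sin(2 * np.pi * 1000 * t)

# The mote's piezo is modeled as reflecting the carrier with an amplitude that
# depends on the electrode voltage: simple amplitude modulation plus noise.
backscatter = (1.0 + neural) * carrier + 0.05 * rng.standard_normal(t.size)

# Envelope detection at the external transceiver: rectify, then low-pass
# with a moving average spanning many carrier cycles.
rectified = np.abs(backscatter)
kernel = np.ones(200) / 200
envelope = np.convolve(rectified, kernel, mode="same")

# The recovered envelope should track the underlying "neural" signal.
corr = np.corrcoef(envelope - envelope.mean(), neural)[0, 1]
print(round(corr, 2))
```

The high correlation between the demodulated envelope and the injected signal is the sense in which "disruptions to the reflection" carry the recording.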
somatosensory stimulation

Simulation of rat CA1 region (Bezaire, Raikov, Burk, Vyas, & Soltesz, 2016)
• Detailed computational models of 338,740 neurons (including pyramidal cells and various types of interneurons) were equipped with connectivity patterns based on data from the biological CA1 region. External inputs were also estimated using biological data and incorporated into the simulation. It is important to note that these connectivity patterns described the typical convergence and divergence of neurites to and from particular cell types rather than explicitly representing the exact connections found in the biological rat.
• Each neuron was simulated using a multicompartmental Hodgkin-Huxley-type model with its morphological structure based on biological data from the given cell type. Furthermore, different cell types received different numbers of presynaptic terminals at specified distances from the soma. In total, over five billion synapses were present within the CA1 model.
• The simulation was implemented on several different supercomputers. Due to the model’s complexity, a four-second simulation took about four hours to complete.
• As with the biological CA1 region, the simulation gave rise to gamma oscillations and theta oscillations as well as other biologically consistent phenomena. In addition, parvalbumin-expressing interneurons and neurogliaform cells were identified as drivers of the theta oscillations, demonstrating the utility of detailed neuronal simulations for uncovering biological insights.
ca1 simulation

UltraTracer enhances existing neuronal tracing software (Peng et al., 2017)
• UltraTracer is an algorithm that can improve the efficiency of existing neuronal tracing software for handling large datasets while maintaining accuracy.
• Datasets with hundreds of billions of voxels were utilized to test UltraTracer. Ten existing tracing algorithms were augmented.
• For most of the existing algorithms, the performance improvements were around 3-6 times, though a few showed improvements of 10-30 times. Even when using computers with smaller memory, UltraTracer was consistently able to enhance conventional software.
• UltraTracer was made open-source and is available as a plugin for the Vaa3D tracing software suite.

Whole-brain electron microscopy in larval zebrafish (Hildebrand et al., 2017)
• Serial electron microscopy facilitated imaging of the entire brain of a larval zebrafish at 5.5 days post-fertilization.
• Neuronal tracing software (a modified version of the CATMAID software) was used to reconstruct all the myelinated axons found in the larval zebrafish brain.
• The reconstructed dataset included 2,589 myelinated axon segments along with some of the associated soma and dendrites. It should be noted that only 834 of the myelinated axons were successfully traced back to their cell bodies.
ssem of larval zebrafish brain

Hippocampal prosthesis in monkeys (S. A. Deadwyler et al., 2017)
• Theodore Berger continued developing his cognitive prosthesis and tested it in rhesus macaques.
• As with the rats, monkeys with the implant showed substantially improved performance on memory tasks.

The $100 billion Softbank Vision Fund (Lomas, 2017)
• Masayoshi Son, the CEO of Softbank (a Japanese telecommunications corporation), announced a plan to raise $100 billion in venture capital to invest in artificial intelligence. This plan involved partnering with multiple large companies in order to raise this enormous amount of capital.
• By the end of 2017, the Vision Fund successfully reached its $100 billion goal. Masayoshi Son has since announced further plans to continue raising money with a new goal of over $800 billion.
• Masayoshi Son’s reason for these massive investments is the Technological Singularity.
He agrees with Kurzweil that the Singularity will likely occur at around 2045 and he hopes to help bring the Singularity to fruition. Though Son is aware of the risks posed by artificial superintelligence, he feels that superintelligent AI’s potential to tackle some of humanity’s greatest challenges (such as climate change and the threat of nuclear war) outweighs those risks.

Bryan Johnson launches Kernel (Regalado, 2017)
• Entrepreneur Bryan Johnson invested $100 million to start Kernel, a neurotechnology company.
• Kernel plans to develop implants that allow for recording and stimulation of large numbers of neurons at once. The company’s initial goal is to develop treatments for mental illnesses and neurodegenerative diseases. Its long-term goal is to enhance human intelligence.
• Kernel originally partnered with Theodore Berger and intended to utilize his hippocampal prosthesis. Unfortunately, Berger and Kernel parted ways after about six months because Berger’s vision was reportedly too long-range to support a financially viable company (at least for now).
• Kernel was originally a company called Kendall Research Systems. This company was started by a former member of the Boyden lab. In total, four members of Kernel’s team are former Boyden lab members.

Elon Musk launches NeuraLink (Etherington, 2017)
• Elon Musk (CEO of Tesla, SpaceX, and a number of other successful companies) initiated a neuroengineering venture called NeuraLink.
• NeuraLink will begin by developing brain-computer interfaces (BCIs) for clinical applications, but the ultimate goal of the company is to enhance human cognitive abilities in order to keep up with artificial intelligence.
• Though many of the details around NeuraLink’s research are not yet open to the public, it has been rumored that injectable electronics similar to Lieber’s neural lace might be involved.
Facebook announces effort to build brain-computer interfaces (Constine, 2017)
• Facebook revealed research on constructing non-invasive brain-computer interfaces (BCIs) at a company-run conference in 2017. The initiative is run by Regina Dugan, the head of Facebook’s R&D division, Building 8.
• Facebook’s researchers are working on a non-invasive BCI which may eventually enable users to type one hundred words per minute with their thoughts alone. This effort builds on past investigations which have been used to help paralyzed patients.
• The Building 8 group is also developing a wearable device for “skin hearing.” Using just a series of vibrating actuators which mimic the cochlea, test subjects have so far been able to recognize up to nine words. Facebook intends to vastly expand this device’s capabilities.

DARPA funds research to develop improved brain-computer interfaces (Hatmaker, 2017)
• The U.S. government agency DARPA awarded $65 million in total funding to six research groups.
• The recipients of this grant included five academic laboratories (headed by Arto Nurmikko, Ken Shepard, Jose-Alain Sahel and Serge Picaud, Vincent Pieribone, and Ehud Isacoff) and one small company called Paradromics Inc.
• DARPA’s goal for this initiative is to develop a nickel-sized bidirectional brain-computer interface (BCI) which can record from and stimulate up to one million individual neurons at once.

Human Brain Project analyzes brain computations using algebraic topology (Reimann et al., 2017)
• Investigators at the Human Brain Project utilized algebraic topology to analyze the reconstructed ~31,000-neuron cortical microcircuit from their earlier work.
• The analysis involved representing the cortical network as a digraph, finding directed cliques (complete directed subgraphs belonging to a digraph), and determining the net directionality of information flow (by computing the sum of the squares of the differences between in-degree and out-degree for all the neurons in a clique). In algebraic topology, directed cliques of n neurons are called directed simplices of dimension n-1.
• Vast numbers of high-dimensional directed cliques were found in the cortical microcircuit (as compared to null models and other controls). Spike correlations between pairs of neurons within a clique were found to increase with the clique’s dimension and with the proximity of the neurons to the clique’s sink. Furthermore, topological metrics allowed insights into the flow of neural information among multiple cliques.
• Experimental patch-clamp data supported the significance of the findings. In addition, similar patterns were found within the C. elegans connectome, suggesting that the results may generalize to nervous systems across species.
hbp algebraic topology

Early testing of hippocampal prosthesis algorithm in humans (Song, She, Hampson, Deadwyler, & Berger, 2017)
• Dong Song (who was working alongside Berger) tested the MIMO algorithm on human epilepsy patients using implanted recording and stimulation electrodes. The full hippocampal prosthesis was not implanted, but the electrodes acted similarly, though in a temporary capacity. Although only two patients were tested in this study, many trials were performed to compensate for the small sample size.
• Hippocampal spike trains from individual cells in CA1 and CA3 were recorded from the patients during a delayed match-to-sample task. The patients were shown various images while neural activity data were recorded by the electrodes and processed by the MIMO model.
The patients were then asked to recall which image they had been shown previously by picking it from a group of “distractor” images. Memories encoded by the MIMO model were used to stimulate hippocampal cells during the recall phase.
• In comparison to controls in which the same two epilepsy patients were not assisted by the algorithm and stimulation, the experimental trials demonstrated a significant increase in successful pattern matching.

Brain imaging factory in China (Cyranoski, 2017)
• Qingming Luo started the HUST-Suzhou Institute for Brainsmatics, a brain imaging “factory.” Each of the numerous machines in Luo’s facility performs automated processing and imaging of tissue samples. The devices make ultrathin slices of brain tissue using diamond blades, treat the samples with fluorescent stains or other contrast-enhancing chemicals, and image them using fluorescence microscopy.
• The institute has already demonstrated its potential by mapping the morphology of a previously unknown neuron which “wraps around” the entire mouse brain.
china brain mapping image

Automated patch-clamp robot for in vivo neural recording (Suk et al., 2017)
• Ed Boyden and colleagues developed a robotic system to automate patch-clamp recordings from individual neurons. The robot was tested in vivo using mice and achieved a data collection yield similar to that of skilled human experimenters.
• By continuously imaging neural tissue using two-photon microscopy, the robot can adapt to a target cell’s movement and shift the pipette to compensate. This adaptation is facilitated by a novel algorithm called “imagepatching.” As the pipette approaches its target, the algorithm adjusts the pipette’s trajectory based on the real-time two-photon microscopy.
• The robot can be used in vivo so long as the target cells express a fluorescent marker or otherwise fluoresce corresponding to their size and position.
automated patch clamp system

Genome editing in the mammalian brain (Nishiyama, Mikuni, & Yasuda, 2017)
• Precise genome editing in the brain has historically been challenging because most neurons are postmitotic (non-dividing) and the postmitotic state prevents homology-directed repair (HDR) from occurring. HDR is a mechanism of DNA repair which allows for targeted insertions of DNA fragments with overhangs homologous to the region of interest (by contrast, non-homologous end-joining is highly unpredictable).
• Nishiyama, Mikuni, and Yasuda developed a technique which allows genome editing in postmitotic mammalian neurons using adeno-associated viruses (AAVs) and CRISPR-Cas9.
• The AAVs delivered ssDNA sequences encoding a single guide RNA (sgRNA) and an insert. Inserts encoding a hemagglutinin tag (HA) and inserts encoding EGFP were both tested. Cas9 was expressed endogenously by transgenic host cells and transgenic host animals.
• The technique achieved precise genome editing in vitro and in vivo with a low rate of off-target effects. Inserts did not cause deletion of nearby endogenous sequences for 98.1% of infected neurons.
genome editing neurons

Neuropixels probe (Jun et al., 2017)
• Jun and colleagues created the Neuropixels probe to facilitate simultaneous recording from hundreds of individual neurons with high spatiotemporal resolution. Previous extracellular probes were only able to record from a few dozen individual neurons.
• The Neuropixels recording shank is one centimeter long and includes 384 recording channels. Due to the small size of the accompanying apparatus (a 6×9 mm base and a data transmission cable), it enables high-throughput recording in freely moving animals. Because the shank is quite long, Neuropixels can record from multiple brain regions at once.
• Voltage signals are processed directly on the base of the Neuropixels apparatus, allowing for noise-free data transmission along the cable for further analysis.
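The directed-clique search and directionality measure described in the Reimann et al. (2017) entry above can be sketched by brute force on a toy digraph. The graph, the helper names, and the exhaustive enumeration here are illustrative; real microcircuits require far more efficient topological algorithms.

```python
from itertools import combinations, permutations

# Toy directed graph: an edge (u, v) means u -> v.
edges = {(0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3), (3, 4)}
nodes = {n for e in edges for n in e}

def is_directed_clique(subset):
    """A directed clique (directed simplex) admits an ordering of its
    vertices with an edge from every earlier vertex to every later one."""
    return any(
        all((order[i], order[j]) in edges
            for i in range(len(order)) for j in range(i + 1, len(order)))
        for order in permutations(subset)
    )

def directionality(subset):
    """Sum over the clique of squared (in-degree minus out-degree),
    counting degrees within the clique only."""
    total = 0
    for n in subset:
        indeg = sum((m, n) in edges for m in subset)
        outdeg = sum((n, m) in edges for m in subset)
        total += (indeg - outdeg) ** 2
    return total

# Enumerate all directed cliques of size 2 and up.
cliques = [c for k in range(2, len(nodes) + 1)
           for c in combinations(sorted(nodes), k)
           if is_directed_clique(c)]

largest = max(cliques, key=len)
print(largest, directionality(largest))  # (0, 1, 2, 3) 20
```

Here the four-neuron clique 0→1→2→3 (with all "shortcut" edges present) is a directed simplex of dimension 3; node 0 is its source and node 3 its sink, and the directionality sum is dominated by those two extremes.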
EEG-based facial image reconstruction (Nemrodov, Niemeier, Patel, & Nestor, 2018)
• EEG data associated with viewing images of faces were collected and used to determine the neural correlates of facial processing. In this way, the images were computationally reconstructed in a fashion resembling “mind reading.”
• It should be noted that the images reconstructed using data taken from multiple people were more accurate than the images reconstructed using single individuals. Nonetheless, the single-individual data still yielded statistically significant accuracy.
• In addition to reconstructing the images themselves, the process gave insights on the cognitive steps involved in perceiving faces.
eeg reconstructions of faces

Near-infrared light and upconversion nanoparticles for optogenetic stimulation (S. Chen et al., 2018)
• Upconversion nanoparticles absorb two or more low-energy photons and emit a higher-energy photon. For instance, multiple near-infrared photons can be converted into a single visible-spectrum photon.
• Shuo Chen and colleagues injected upconversion nanoparticles into the brains of mice and used them to convert externally applied near-infrared (NIR) light into visible light within the brain tissue. In this way, optogenetic stimulation was performed without the need for surgical implantation of fiber optics or similarly invasive procedures.
• The authors demonstrated stimulation via upconversion of NIR to blue light (to activate ChR2) and inhibition via upconversion of NIR to green light (to activate a rhodopsin called Arch).
• As a proof-of-concept, this technology was used to alter the behavior of the mice by activating hippocampally-encoded fear memories.
upconversion nanoparticles and nir

Map of all neuronal cell bodies within mouse brain (Murakami et al., 2018)
• Ueda, Murakami, and colleagues combined methods from expansion microscopy and CLARITY to develop a protocol called CUBIC-X which both expands and clears entire brains.
Light-sheet fluorescence microscopy was used to image the treated brains and a novel algorithm was developed to detect individual nuclei.
• Although expansion microscopy causes some increased tissue transparency on its own, CUBIC-X greatly improved this property in the enlarged tissues, facilitating more detailed whole-brain imaging.
• Using CUBIC-X, the spatial locations of all the cell bodies (but not dendrites, axons, or synapses) within the mouse brain were mapped. This process was performed on several adult mouse brains as well as several developing mouse brains to allow for comparative analysis.
• The authors made the spatial atlas publicly available in order to facilitate global cooperation towards annotating connectivity among the neural cell bodies within the atlas.

Clinical testing of hippocampal prosthesis algorithm in humans (Hampson et al., 2018)
• Further clinical tests of Berger’s hippocampal prosthesis were performed. Twenty-one patients took part in the experiments. Seventeen patients underwent CA3 recording to facilitate training and optimization of the MIMO model. Eight patients received CA1 stimulation to improve their memories.
• Electrodes with the ability to record from single neurons (10-24 single-neuron recording sites) and via EEG (4-6 EEG recording sites) were implanted such that recording and stimulation could occur at CA3 and CA1 respectively.
• Patients performed behavioral memory tasks. Both short-term and long-term memory showed an average improvement of 35% across the patients who underwent stimulation.

Precise optogenetic manipulation of fifty neurons (Mardinly et al., 2018)
• Mardinly and colleagues engineered a novel excitatory optogenetic ion channel called ST-ChroME and a novel inhibitory optogenetic ion channel called IRES-ST-eGtACR1.
The channels were localized to the somas of host neurons and generated stronger photocurrents over shorter timescales than previously existing opsins, allowing for powerful and precise optogenetic stimulation and inhibition.
• 3D-SHOT is an optical technique in which light is tuned by a device called a spatial light modulator along with several other optical components. Using 3D-SHOT, light was precisely projected upon targeted neurons within a volume of 550×550×100 μm³.
• By combining the novel optogenetic ion channels and the 3D-SHOT technique, complex patterns of neural activity were created in vivo with high spatial and temporal precision.
• Simultaneously, calcium imaging allowed measurement of the induced neural activity. More custom optoelectronic components helped avoid optical crosstalk of the fluorescent calcium markers with the photostimulating laser.
optogenetic control of fifty neurons

Whole-brain Drosophila connectome data acquired via serial electron microscopy (Zheng et al., 2018)
• Zheng, Bock, and colleagues collected serial electron microscopy data on the entire adult Drosophila connectome, providing the data necessary to reconstruct a complete structural map of the fly’s brain at the resolution of individual synapses, dendritic spines, and axonal processes.
• The data are in the form of 7050 transmission electron microscopy images (187,500 × 87,500 pixels, about 16 GB per image), each representing a 40 nm-thick slice of the fly’s brain. In total the dataset requires 106 TB of storage.
• Although much of the data still must be processed to reconstruct a 3-dimensional map of the Drosophila brain, the authors did create 3-dimensional reconstructions of selected areas in the olfactory pathway of the fly. In doing so, they discovered a new cell type and gained several other previously unrealized insights about the organization of Drosophila’s olfactory biology.
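As a rough consistency check on the dataset figures in the Zheng et al. (2018) entry above, the per-image and total storage follow from the stated image dimensions. This back-of-envelope sketch assumes uncompressed 8-bit (1 byte) pixels, which is an assumption not stated in the entry.

```python
# Storage bookkeeping for the quoted Drosophila EM dataset figures.
# Assumption: 8-bit grayscale, i.e. 1 byte per pixel.
images = 7050
width, height = 187_500, 87_500

bytes_per_image = width * height              # 1 byte per pixel
gb_per_image = bytes_per_image / 1e9          # decimal gigabytes
total_tib = images * bytes_per_image / 2**40  # binary terabytes (TiB)

print(round(gb_per_image, 1), round(total_tib))  # ~16.4 GB/image, ~105 TiB total
```

Under this assumption the numbers land close to the quoted ~16 GB per image and ~106 TB total, so the figures are mutually consistent.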
drosophila connectome with sem

Human telepathy using BrainNet (Jiang et al., 2018)
• EEG recordings were taken from two individuals (termed senders) while they played a Tetris-like game. Next, the recordings were converted into transcranial magnetic stimulation signals that acted to provide a third individual (called a receiver) with the necessary information to make decisions in the game without seeing the screen. The occipital cortex was stimulated. Fifteen people (five groups of three) took part in the study.
• To convey their information, the senders were told to focus upon either a higher- or a lower-intensity light corresponding to commands within the game (the two lights were placed on different sides of the computer screen). In the receiver’s mind, this translated to perceiving a flash of light. The receiver was able to distinguish the intensities and implement the correct command within the game.
• Using only the telepathically provided stimulation, the receiver made the correct game-playing decisions 81% of the time.

Transcriptomic cell type classification across mouse neocortex (Tasic et al., 2018)
• Single-cell RNA sequencing was used to characterize gene expression across 23,822 cells from the primary visual cortex and the anterior lateral motor cortex of mice.
• Using dimensionality reduction and clustering methods, the resulting data were used to classify the neurons into 133 transcriptomic cell types.
• Injections of adeno-associated viruses (engineered to express fluorescent markers) facilitated retrograde tracing of neuronal projections within a subset of the sequenced cells. In this way, correspondences between projection patterns and transcriptomic identities were established.

References

Ahrens, M. B., Orger, M. B., Robson, D. N., Li, J. M., & Keller, P. J. (2013). Whole-brain functional imaging at cellular resolution using light-sheet microscopy. Nature Methods, 10, 413.
Akopyan, F., Sawada, J., Cassidy, A., Alvarez-Icaza, R., Arthur, J., Merolla, P., … Modha, D. S. (2015). TrueNorth: Design and Tool Flow of a 65 mW 1 Million Neuron Programmable Neurosynaptic Chip. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 34(10), 1537–1557.
Berger, T. W., Song, D., Chan, R. H. M., Marmarelis, V. Z., LaCoss, J., Wills, J., … Granacki, J. J. (2012). A Hippocampal Cognitive Prosthesis: Multi-Input, Multi-Output Nonlinear Modeling and VLSI Implementation. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 20(2), 198–211.
Berning, S., Willig, K. I., Steffens, H., Dibaj, P., & Hell, S. W. (2012). Nanoscopy in a Living Mouse Brain. Science, 335(6068), 551.
Bezaire, M. J., Raikov, I., Burk, K., Vyas, D., & Soltesz, I. (2016). Interneuronal mechanisms of hippocampal theta oscillations in a full-scale model of the rodent CA1 circuit. eLife, 5, e18566.
Chen, F., Tillberg, P. W., & Boyden, E. S. (2015). Expansion microscopy. Science, 347(6221), 543–548.
Chen, F., Wassie, A. T., Cote, A. J., Sinha, A., Alon, S., Asano, S., … Boyden, E. S. (2016). Nanoscale imaging of RNA with expansion microscopy. Nature Methods, 13, 679.
Chen, S., Weitemier, A. Z., Zeng, X., He, L., Wang, X., Tao, Y., … McHugh, T. J. (2018). Near-infrared deep brain stimulation via upconversion nanoparticle–mediated optogenetics. Science, 359(6376), 679–684.
Chung, K., & Deisseroth, K. (2013). CLARITY for mapping the nervous system. Nature Methods, 10, 508.
Constine, J. (2017). Facebook is building brain-computer interfaces for typing and skin-hearing. TechCrunch.
Cyranoski, D. (2017). China launches brain-imaging factory. Nature, 548(7667), 268–269.
Deadwyler, S. A., Hampson, R. E., Song, D., Opris, I., Gerhardt, G. A., Marmarelis, V. Z., & Berger, T. W. (2017).
A cognitive prosthesis for memory facilitation by closed-loop functional ensemble stimulation of hippocampal neurons in primate brain. Experimental Neurology, 287, 452–460.
Deadwyler, S., Hampson, R., Sweat, A., Song, D., Chan, R., Opris, I., … Berger, T. (2013). Donor/recipient enhancement of memory in rat hippocampus. Frontiers in Systems Neuroscience.
Etherington, D. (2017). Elon Musk's Neuralink wants to boost the brain to keep up with AI. TechCrunch.
Fact Sheet: BRAIN Initiative. (2013).
Flesher, S. N., Collinger, J. L., Foldes, S. T., Weiss, J. M., Downey, J. E., Tyler-Kabara, E. C., … Gaunt, R. A. (2016). Intracortical microstimulation of human somatosensory cortex. Science Translational Medicine.
Gunaydin, L. A., Yizhar, O., Berndt, A., Sohal, V. S., Deisseroth, K., & Hegemann, P. (2010). Ultrafast optogenetic control. Nature Neuroscience, 13, 387.
Hampson, R. E., Song, D., Robinson, B. S., Fetterhoff, D., Dakos, A. S., Roeder, B. M., … Deadwyler, S. A. (2018). Developing a hippocampal neural prosthetic to facilitate human memory encoding and recall. Journal of Neural Engineering, 15(3), 36014.
Han, X., & Boyden, E. S. (2007). Multiple-Color Optical Activation, Silencing, and Desynchronization of Neural Activity, with Single-Spike Temporal Resolution. PLOS ONE, 2(3), e299.
Hatmaker, T. (2017). DARPA awards $65 million to develop the perfect, tiny two-way brain-computer interface. TechCrunch.
Hildebrand, D. G. C., Cicconet, M., Torres, R. M., Choi, W., Quan, T. M., Moon, J., … Engert, F. (2017). Whole-brain serial-section electron microscopy in larval zebrafish. Nature, 545, 345.
Horton, N. G., Wang, K., Kobat, D., Clark, C. G., Wise, F. W., Schaffer, C. B., & Xu, C. (2013). In vivo three-photon microscopy of subcortical structures within an intact mouse brain. Nature Photonics, 7, 205.
Jiang, L., Stocco, A., Losey, D.
M., Abernethy, J. A., Prat, C. S., & Rao, R. P. N. (2018). BrainNet: a multi-person brain-to-brain interface for direct collaboration between brains. arXiv preprint arXiv:1809.08632.
Jun, J. J., Steinmetz, N. A., Siegle, J. H., Denman, D. J., Bauza, M., Barbarits, B., … Harris, T. D. (2017). Fully integrated silicon probes for high-density recording of neural activity. Nature, 551, 232.
Kato, S., Kaplan, H. S., Schrödel, T., Skora, S., Lindsay, T. H., Yemini, E., … Zimmer, M. (2015). Global Brain Dynamics Embed the Motor Command Sequence of Caenorhabditis elegans. Cell, 163(3), 656–669.
Liu, J., Fu, T.-M., Cheng, Z., Hong, G., Zhou, T., Jin, L., … Lieber, C. M. (2015). Syringe-injectable electronics. Nature Nanotechnology, 10, 629.
Livet, J., Weissman, T. A., Kang, H., Draft, R. W., Lu, J., Bennis, R. A., … Lichtman, J. W. (2007). Transgenic strategies for combinatorial expression of fluorescent proteins in the nervous system. Nature, 450, 56.
Lomas, N. (2017). Superintelligent AI explains Softbank's push to raise a $100BN Vision Fund. TechCrunch.
Mardinly, A. R., Oldenburg, I. A., Pégard, N. C., Sridharan, S., Lyall, E. H., Chesnov, K., … Adesnik, H. (2018). Precise multimodal optical control of neural ensemble activity. Nature Neuroscience, 21(6), 881–893.
Markram, H. (2006). The Blue Brain Project. Nature Reviews Neuroscience, 7, 153.
Markram, H., Muller, E., Ramaswamy, S., Reimann, M. W., Abdellah, M., Sanchez, C. A., … Schürmann, F. (2015). Reconstruction and Simulation of Neocortical Microcircuitry. Cell, 163(2), 456–492.
Marx, V. (2013). Neuroscience waves to the crowd. Nature Methods, 10, 1069.
Mizutani, R., Saiga, R., Takeuchi, A., Uesugi, K., & Suzuki, Y. (2013). Three-dimensional network of Drosophila brain hemisphere. Journal of Structural Biology, 184(2), 271–279.
Murakami, T. C., Mano, T., Saikawa, S., Horiguchi, S. A., Shigeta, D., Baba, K., … Ueda, H. R.
(2018). A three-dimensional single-cell-resolution whole-brain atlas using CUBIC-X expansion microscopy and tissue clearing. Nature Neuroscience, 21(4), 625–637.
Nemrodov, D., Niemeier, M., Patel, A., & Nestor, A. (2018). The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction. eNeuro, 5(1), ENEURO.0358-17.2018.
Nishiyama, J., Mikuni, T., & Yasuda, R. (2017). Virus-Mediated Genome Editing via Homology-Directed Repair in Mitotic and Postmitotic Cells in Mammalian Brain. Neuron, 96(4), 755–768.e5.
Oizumi, M., Albantakis, L., & Tononi, G. (2014). From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0. PLOS Computational Biology, 10(5), e1003588.
Okano, H., Miyawaki, A., & Kasai, K. (2015). Brain/MINDS: brain-mapping project in Japan. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 370(1668).
Peng, H., Hawrylycz, M., Roskams, J., Hill, S., Spruston, N., Meijering, E., & Ascoli, G. A. (2015). BigNeuron: Large-Scale 3D Neuron Reconstruction from Optical Microscopy Images. Neuron, 87(2), 252–256.
Peng, H., Zhou, Z., Meijering, E., Zhao, T., Ascoli, G. A., & Hawrylycz, M. (2017). Automatic tracing of ultra-volumes of neuronal images. Nature Methods, 14, 332.
Poo, M., Du, J., Ip, N. Y., Xiong, Z.-Q., Xu, B., & Tan, T. (2016). China Brain Project: Basic Neuroscience, Brain Diseases, and Brain-Inspired Computing. Neuron, 92(3), 591–596.
Regalado, A. (2017). The Entrepreneur with the $100 Million Plan to Link Brains to Computers. MIT Technology Review.
Reimann, M. W., Nolte, M., Scolamiero, M., Turner, K., Perin, R., Chindemi, G., … Markram, H. (2017). Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function. Frontiers in Computational Neuroscience.
Seo, D., Neely, R. M., Shen, K., Singhal, U., Alon, E., Rabaey, J. M., … Maharbiz, M. M. (2016).
Wireless Recording in the Peripheral Nervous System with Ultrasonic Neural Dust. Neuron, 91(3), 529–539.
Song, D., She, X., Hampson, R. E., Deadwyler, S. A., & Berger, T. W. (2017). Multi-resolution multi-trial sparse classification model for decoding visual memories from hippocampal spikes in human. In 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 1046–1049).
Stocco, A., Prat, C. S., Losey, D. M., Cronin, J. A., Wu, J., Abernethy, J. A., & Rao, R. P. N. (2015). Playing 20 Questions with the Mind: Collaborative Problem Solving by Humans Using a Brain-to-Brain Interface. PLOS ONE, 10(9), e0137303.
Suk, H.-J., van Welie, I., Kodandaramaiah, S. B., Allen, B., Forest, C. R., & Boyden, E. S. (2017). Closed-Loop Real-Time Imaging Enables Fully Automated Cell-Targeted Patch-Clamp Neural Recording In Vivo. Neuron, 95(5), 1037–1047.e11.
Szigeti, B., Gleeson, P., Vella, M., Khayrulin, S., Palyanov, A., Hokanson, J., … Larson, S. (2014). OpenWorm: an open-science approach to modeling Caenorhabditis elegans. Frontiers in Computational Neuroscience.
Tasic, B., Yao, Z., Graybuck, L. T., Smith, K. A., Nguyen, T. N., Bertagnolli, D., … Zeng, H. (2018). Shared and distinct transcriptomic cell types across neocortical areas. Nature, 563(7729), 72–78.
Zheng, Z., Lauritzen, J. S., Perlman, E., Robinson, C. G., Nichols, M., Milkie, D., … Bock, D. D. (2018). A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. Cell, 174(3), 730–743.e22.

Notes on wave optics – Logan Thrasher Collins
Contents:
• The wave equation
• Solutions to the wave equation
• Intensity and energy of electromagnetic waves
• Superposition of waves
• Polarization of light
On an irregular basis, various Special Seminars take place at the MPQ. The seminars are organized by scientists of our divisions, administration, or staff representatives. The location will be announced with the event.

Room: Herbert Walther Lecture Hall

Quantum light-matter interfaces that reversibly map photonic quantum states onto atomic states are essential components in the quantum engineering toolbox, with applications in quantum communication, computing, and quantum-enabled sensing.

Doped resonating valence bond states: a quantum information study (Dr. S. Singha)
Resonating valence bond states have played a crucial role in the description of exotic phases in strongly correlated systems, especially in the realm of Mott insulators and the associated high-Tc superconducting phase transition.

Detection of Zak phases and topological invariants in a chiral quantum walk of twisted photons (Dr. A. Dauphin)
Initially discovered in condensed matter, topological phases have so far been simulated in a variety of synthetic systems (ultracold atoms in optical lattices, photonic bandgap materials, mechanical systems, ...).

Representations in deep learning and quantum many-body physics (Dr. P. Wittek)
Representation is of central importance in both quantum many-body physics and machine learning.

Non-linear response in extended systems: a real-time approach (Dr. C. Attaccalite)
I will present a new formalism to study linear and non-linear response in extended systems. Our approach is based on real-time solution of an effective Schrödinger equation.
Inverse Schrödinger scattering on the line with partial knowledge of the potential. (English) Zbl 0844.34016
The one-dimensional Schrödinger equation \(\psi ''(k,x) + k^2 \psi (k,x) = Q(x) \psi (k,x)\), \(x \in \mathbb{R}\), is considered. It is proved that a potential \(Q(x)\) in \(L^1_1 (\mathbb{R})\) is uniquely determined by the scattering data consisting of its reflection coefficient from the right (left) and the knowledge of the potential on the right (left) half line. Neither the bound state energies nor the bound state norming constants are needed to determine \(Q(x)\). As an example, \(Q(x)\) is constructed from the scattering data consisting of the bound state energies, the knowledge of the potential on a set of nonzero measure, and either of the reflection coefficients. Two inverse scattering problems for a generalized Schrödinger equation \(\psi ''(k,x) + k^2 H(x)^2 \psi (k,x) = Q(x) \psi (k,x)\), in which the potential to be recovered is partially known, are also studied.
Reviewer: V. Burjan (Praha)
34A55 Inverse problems involving ordinary differential equations
81U40 Inverse scattering problems in quantum theory
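For orientation, the reflection coefficients referred to in the review are defined through the standard scattering solutions of the full-line problem (these are the textbook definitions, not reproduced from the review itself):

```latex
% Scattering solutions of \psi'' + k^2\psi = Q(x)\psi with Q \in L^1_1(\mathbb{R}),
% for incidence from the left and from the right, respectively:
\psi_\ell(k,x) \sim
\begin{cases}
e^{ikx} + L(k)\, e^{-ikx}, & x \to -\infty,\\
T(k)\, e^{ikx},            & x \to +\infty,
\end{cases}
\qquad
\psi_r(k,x) \sim
\begin{cases}
T(k)\, e^{-ikx},           & x \to -\infty,\\
e^{-ikx} + R(k)\, e^{ikx}, & x \to +\infty,
\end{cases}
```

with \(L(k)\) and \(R(k)\) the left and right reflection coefficients and \(T(k)\) the common transmission coefficient. In this notation, the uniqueness result says that \(R(k)\) together with \(Q\) restricted to the right half line (or \(L(k)\) with \(Q\) on the left half line) determines \(Q\) everywhere, with no bound-state data required.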
WIAS Preprint No. 1441, (2009) Fast numerical methods for waves in periodic media • Ehrhardt, Matthias • Zheng, Chunxiong 2010 Mathematics Subject Classification • 65M99 35B27 35Q60 35J05 81-08 • artificial boundary conditions, periodic potential, Schrödinger equation, Helmholtz equation, hyperbolic equation, unbounded domain, Dirichlet-to-Neumann maps, Robin-to-Robin maps, band structure, Floquet-Bloch theory, high-order finite elements Periodic media problems widely exist in many modern application areas like semiconductor nanostructures (e.g. quantum dots and nanocrystals), semiconductor superlattices, photonic crystal (PC) structures, metamaterials or Bragg gratings of surface plasmon polariton (SPP) waveguides, etc. Often these application problems are modeled by partial differential equations with periodic coefficients and/or periodic geometries. In order to numerically solve these periodic structure problems efficiently, one usually confines the spatial domain to a bounded computational domain (i.e. in a neighborhood of the region of physical interest). Hereby, the usual strategy is to introduce so-called artificial boundaries and impose suitable boundary conditions. For wave-like equations, the ideal boundary conditions should not only lead to well-posed problems, but also mimic the perfect absorption of waves traveling out of the computational domain through the artificial boundaries. In the first part of this chapter we present a novel analytical impedance expression for general second order ODE problems with periodic coefficients. This new expression for the kernel of the Dirichlet-to-Neumann mapping of the artificial boundary conditions is then used for computing the bound states of the Schrödinger operator with periodic potentials at infinity. Other potential applications are associated with the exact artificial boundary conditions for some time-dependent problems with periodic structures. 
As an example, a two-dimensional hyperbolic equation modeling the TM polarization of the electromagnetic field with a periodic dielectric permittivity is considered. In the second part of this chapter we present a new numerical technique for solving periodic structure problems. This novel approach possesses several advantages. First, it allows for a fast evaluation of the Sommerfeld-to-Sommerfeld operator for periodic array problems. Secondly, this computational method can also be used for bi-periodic structure problems with local defects. In the sequel we consider several problems, such as the exterior elliptic problems with strong coercivity, the time-dependent Schrödinger equation and the Helmholtz equation with damping. Finally, in the third part we consider periodic arrays that are structures consisting of geometrically identical subdomains, usually called periodic cells. We use the Helmholtz equation as a model equation and consider the definition and evaluation of the exact boundary mappings for general semi-infinite arrays that are periodic in one direction for any real wavenumber. The well-posedness of the Helmholtz equation is established via the limiting absorption principle (LABP). An algorithm based on the doubling procedure of the second part of this chapter and an extrapolation method is proposed to construct the exact Sommerfeld-to-Sommerfeld boundary mapping. This new algorithm benefits from its robustness and the simplicity of implementation. But it also suffers from the high computational cost and the resonance wave numbers. To overcome these shortcomings, we propose another algorithm based on a conjecture about the asymptotic behaviour of limiting absorption principle solutions. The price we have to pay is the resolution of some generalized eigenvalue problem, but still the overall computational cost is significantly reduced. Numerical evidences show that this algorithm presents theoretically the same results as the first algorithm. 
Moreover, some quantitative comparisons between these two algorithms are given.
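As a toy illustration of the Floquet-Bloch machinery invoked in the abstract (a minimal example of my own, not the chapter's algorithm): the band structure of a one-dimensional Schrödinger operator with a periodic potential, here -u'' + 2v cos(x) u = E u, can be computed by truncating the plane-wave expansion of the Bloch modes.

```python
import numpy as np

def bloch_bands(k, v=0.3, G_max=12):
    """Eigenvalues E_n(k) of -u'' + 2 v cos(x) u = E u with the Bloch ansatz
    u(x) = exp(i k x) * sum_G c_G exp(i G x), G = -G_max..G_max (integers).
    The cosine potential couples coefficients c_G and c_{G±1} with strength v."""
    G = np.arange(-G_max, G_max + 1)
    H = np.diag((k + G) ** 2).astype(float)          # kinetic term (k+G)^2
    H += v * (np.eye(len(G), k=1) + np.eye(len(G), k=-1))  # cos(x) coupling
    return np.linalg.eigvalsh(H)                     # ascending eigenvalues

# At the Brillouin-zone edge k = 1/2 the free-particle levels (k+G)^2 are
# degenerate in pairs; the periodic potential opens a gap of width ~ 2|v|.
E = bloch_bands(0.5)
first_gap = E[1] - E[0]
```

The same truncation idea (expand in the periodic direction, diagonalize a finite matrix) underlies practical band-structure computations; production codes differ mainly in basis choice and in how the unbounded direction is handled.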
Spin filtering by a periodic nanospintronic device

Amnon Aharony (Department of Physics and the Ilse Katz Center for Meso- and Nano-Scale Science and Technology, Ben Gurion University, Beer Sheva 84105, Israel; also at Tel Aviv University)
Ora Entin-Wohlman (Department of Physics and the Ilse Katz Center for Meso- and Nano-Scale Science and Technology, Ben Gurion University, Beer Sheva 84105, Israel; also at Tel Aviv University)
Yasuhiro Tokura (NTT Basic Research Laboratories, NTT Corporation, Atsugi-shi, Kanagawa 243-0198, Japan)
Shingo Katsumoto (Institute of Solid State Physics, University of Tokyo, Kashiwa, Chiba 277-8581, Japan)

June 3, 2020

Abstract: For a linear chain of diamond-like elements, we show that the Rashba spin-orbit interaction (which can be tuned by a perpendicular gate voltage) and the Aharonov-Bohm flux (due to a perpendicular magnetic field) can combine to select only one propagating ballistic mode, for which the electronic spins are fully polarized along a direction that can be controlled by the electric and magnetic fields and by the electron energy. All the other modes are evanescent. For a wide range of parameters, this chain can serve as a spin filter.

PACS: 71.70.Ej, 72.25.-b, 73.23.Ad

I. Introduction

In addition to their charge, electrons also carry a spin, which is the quantum relativistic source for the electron's intrinsic magnetic moment. Future device technology and quantum information processing may be based on spintronics, 1 where one manipulates the electron's spin (and not only its charge). One major aim of spintronics is to build mesoscopic spin valves (or spin filters), which generate a tunable spin-polarized current out of unpolarized sources. 
Much recent research aims to achieve this goal by using narrow-gap semiconductor heterostructures, where the spins are subject to the Rashba 3 spin-orbit interaction (SOI): in a two-dimensional electron gas confined by an asymmetric potential well, the strength of this SOI can be varied by an electric field perpendicular to the plane in which the electrons move. koga An early proposal of a spin field-effect transistor 2 used the Rashba SOI to control the spin precession of electrons moving in quasi-one-dimensional wires. Placed between two ferromagnets, the transport of polarized electrons through such a semiconductor could be regulated by the electric field. However, such devices are difficult to make, due to the metal-semiconductor conductivity mismatch. Some of the most striking quantum effects arise due to interference, which is best demonstrated in quantum networks containing loops. Indeed, interference due to the Rashba SOI has been measured on a nanolithographically-defined square loop array. koga06 Here we discuss the possibility of constructing a spin filter from such loops. Recently, several groups proposed spin filters based on a single loop, subject to both electric and magnetic [Aharonov-Bohm (AB) AB ] perpendicular fields. citro ; hatano ; oreg However, such devices produce a full polarization of the outgoing electrons only for special values of the two fields. In the present paper we consider a chain of such loops, as shown in Fig. 1. The effects of the Rashba SOI on the spectrum of the diamond chain of Fig. 1 were studied by Bercioux et al. berc1 They found a strong variation of the averaged (over energies) conductance with the strength of the SOI, which they associated with localization of the electron due to interference between different paths in each diamond. Later, this group berc2 found similar effects due to both the SOI and an AB flux. However, the possibility to use such networks to achieve spin filtering has not been considered. 
As we show below, the polarization of the outgoing electrons depends on the energy. Therefore, averaging over energies mixes different polarization directions and eliminates the possibility of obtaining full polarization.

Figure 1: Chain of diamonds.

We find that both the ballistic conductance and the spin polarization of the electrons going through the device can be sharply varied by an electric field (determining the SOI koga ), a magnetic field (determining the AB phases of the orbital electronic wave functions) and the electrons' energy (set by the chemical potential in the source). Varying these three parameters, we find large parameter ranges where all the energy eigenstates of the device except one become evanescent and decay exponentially, forming the localized states discussed in Refs. berc1, and berc2, . However, the electrons in the remaining single mode propagate with fully polarized spins. Thus, electrons which enter with arbitrary spins exit fully polarized. Since this polarization can be tuned by the parameters, our system is an ideal spin filter. Section II outlines the tight binding model which we use for solving the Schrödinger equation on the periodic chain of diamonds. Section III presents results for the ballistic conductance and for the polarization of the electrons in the regions where they are fully polarized. Finally, Sec. IV contains a discussion of our results, including a comparison with the case of a single diamond and a discussion of the application of our results to a finite chain.

II. Tight binding model

With SOI, we need to solve for the two-component spinor at each point on the network. Bercioux et al. berc1 ; berc2 treated each bond of the network as a continuous one-dimensional (1D) wire. Having expressed the solutions along each bond in terms of the spinors of the nodes at its two ends, they used the Neumann boundary conditions at the nodes to derive discrete equations for the spinors at these nodes. 
As we discuss elsewhere, deG these boundary conditions are sufficient but not necessary for current conservation at the nodes. A more systematic way to treat such networks replaces each continuous bond by a discrete sequence of sites, and then studies the tight binding model for the wave functions on these sites (and on the original nodes). As the number of these intermediate sites increases, one has more sites per unit cell, and therefore one ends up with more energy bands for the solutions which contain waves moving along the main axis of the network [i.e. along the (1,1,0) direction in Fig. 1]. Qualitatively, we find that all these bands are similar to each other, and also similar to those found for the continuous network used in Refs. berc1, and berc2, . Therefore, we choose to report here only on the simplest case, with no intermediate sites within the bonds. Thus, we treat a simple tight-binding model, with sites only on the corners of the diamonds. The latter model could also describe a network of quantum dots or anti-dots, located at these nodes. kats The stationary spinors , with energy , obey the Schrödinger equations, where the sum is over the nearest-neighbor nodes , is the (real) hopping matrix element (in the absence of fields) and is a unitary matrix, representing the phase factors due to the AB flux and to the SOI, and respectively. For our structure, all bonds are in the plane, and both the uniform magnetic field and the potential asymmetry which creates the SOI are along the -axis. As can be seen from Fig. 1, the ’th unit cell contains three sites, and Eq. (1) reduces to equations for the related spinors, and . Choosing the edges of the diamonds along the and axes (see Fig. 
1), so that site is located at ( is the length of each edge), the unitary hopping matrices within the ’th diamond are given by TBSOI where is the vector of Pauli matrices, ( measures the strength of the ‘microscopic’ SOI, ) and represents the AB phase associated with a single square diamond (here, is the flux unit; is Planck’s constant, is the speed of light and is the electron charge). Note that the dependence of and of on results from our choice of gauge for the vector potential. The net flux through each diamond is equal to , independent of . For one encounters dispersionless modes, for which . Since these solutions have zero velocity, and therefore carry no current, we ignore them in the following discussion. We next eliminate the spinors and from the equations, and end up with effective one-dimensional equations, with and Unlike the individual ’s, the ‘renormalized’ hopping matrix is not unitary. This lack of unitarity reflects interference between the two paths in a diamond, which may decrease the current along the chain. In the following we concentrate on propagating waves, where is the lattice constant of the diamond system along its axis , the (real) wave-vector is in the range and is a normalized spinor (which depends on ). For such solutions, Eq. (4) implies that must obey the eigenvalue equation , with the hermitian matrix We next write where . It follows that the spinor must be an eigenvector of the spin component along : . Thus, . Given , this equation can be written as a quadratic equation in . Denoting the solutions by , we end up with four solutions . These solutions are propagating (evanescent) if is real (complex). For each one then has , so that is invariant under flipping the sign of . Since is an eigenvector of , each solution with a given is associated with a full polarization along the direction , As usual for Rashba SOI, is always perpendicular to the direction of motion along the axis of the diamond chain, . 
In the absence of an AB flux (i.e. ) remains in the direction . However, the orbital AB flux causes a rotation of the polarization axis towards the direction. Below we present results for and for . Since is odd (even) in , flipping the sign of flips the sign of but not that of . The probability current from site to site is The current from site to site on the diamond chain, equal to the sum of the currents from to and to , is thus found to be For a single propagating solution of the form (6), Eqs. (5) and (10) yield It is easy to see that flips sign with . When we have only a pair of propagating modes, we thus concentrate on the one with .

III. Results

Figure 2 shows the spectrum of the propagating solutions (real ’s), for several values of and . The left column shows results for , similar to Ref. berc1, : increasing splits the energy band vertically, and changes its width. Thus, the SOI can turn propagating waves into evanescent ones, with complex (our figures show only the solutions with real ). However, whenever the energy allows for real values of , there exist four such values, forming pairs which move in opposite directions and have opposite spins along . The situation becomes more interesting when we have both the SOI and the AB flux. Adding only the latter (upper plot on the right hand side of Fig. 2) creates a gap (i.e. evanescent states) around . The degeneracy of the propagating solutions is not lifted, since the two spin directions have exactly the same energies. As seen in the right column in Fig. 2, increasing at fixed causes the splitting of each sub-band horizontally. We next discuss the ballistic conductance of our device, . For an ideal conductor, this conductance is given by , where is the number of right-moving (or left-moving) propagating modes at a given energy. land ; Imry ; MV This formula clearly applies for the infinite periodic chain of diamonds discussed here. 
Below we argue that the filtering effect which we find also survives for a finite chain, under certain conditions. As Fig. 2 shows, at a given energy one can encounter zero, two or four propagating solutions. The number can be read directly from Fig. 2: on the left hand side of this figure, the number of real ’s (both left-moving and right-moving) is always zero or four, and thus or . In contrast, the right hand side of Fig. 2 shows 0, 2 or 4 real ’s, i.e. or , depending on the parameters and .

Figure 2: (Color online) The spectrum ( versus ) of the propagating solutions. Here, and the wave vector is in units of . Left: . Right: . Top to bottom: . The vertical lines indicate boundaries at which the number of propagating solutions changes.

We next consider electrons coming with arbitrary spin directions from a reservoir at , with energy equal to their chemical potential in that reservoir. For each electron, its spinor will become a combination of the eigenmodes of the problem inside the system. In fact, the same will happen to electrons which enter into a finite but long chain from the left hand side: their spinor within the chain will become a similar combination of the four eigensolutions there, multiplied by some transmission coefficients. When all four ’s have non-zero imaginary parts, all of these modes are evanescent, and the wave function will decay to zero, resulting with zero current. In that case there are no propagating modes, and . When all four ’s are real, i.e. , the incoming wave function is a combination of two right-moving modes, and it has no definite spin. However, for the wave function of the right moving electron is a linear combination of one propagating and one evanescent modes. The latter will decay, and the spinor will converge to that of the single propagating solution, which has a uniquely polarized spin, see Eq. (10). Without the AB flux, we always had or . For , we find regions of energy where . 
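Band structures of the kind shown in Fig. 2 can be reproduced qualitatively from the three-site Bloch Hamiltonian of the diamond chain. The sketch below is illustrative only: the gauge, the sign conventions, and the bond carrying the AB phase are my own choices, not necessarily those of the paper.

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H_bloch(k, alpha=0.0, phi=0.0, t=1.0):
    """6x6 Bloch Hamiltonian of a diamond chain (sites a, b, c; 2 spins each).
    Rashba hops: exp(i*alpha*sigma_y) along x, exp(-i*alpha*sigma_x) along y;
    the AB phase phi is attached to the b -> a(next cell) bond (a gauge choice)."""
    Ux = np.cos(alpha) * s0 + 1j * np.sin(alpha) * sy   # hop along +x
    Uy = np.cos(alpha) * s0 - 1j * np.sin(alpha) * sx   # hop along +y
    T = np.zeros((6, 6), dtype=complex)
    A, B, C = slice(0, 2), slice(2, 4), slice(4, 6)
    T[B, A] = -t * Uy                                   # a_n -> b_n
    T[C, A] = -t * Ux                                   # a_n -> c_n
    T[A, B] = -t * np.exp(1j * (k + phi)) * Ux          # b_n -> a_{n+1}
    T[A, C] = -t * np.exp(1j * k) * Uy                  # c_n -> a_{n+1}
    return T + T.conj().T                               # Hermitian by construction

bands = np.linalg.eigvalsh(H_bloch(k=0.0))
```

With alpha = phi = 0 this reproduces the familiar spin-doubled diamond-chain spectrum (a flat band at zero energy plus dispersive bands), and with phi = π the two paths through each diamond interfere destructively and, in this convention, all bands become flat, the tight-binding analogue of the dispersionless modes mentioned in the text.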
Figure 3 shows contour plots of in the plane, for several values of . As one can see, for energies and there are large regions where . In these regions, the electron will have a well defined polarization, which depends only on and . We next consider specific cuts through these contour plots. Figure 4 shows results as a function of for fixed energy and AB phase . The plots show only the ( or ) right-moving modes (). The other propagating modes have opposite signs for , and . The dashed curves represent the second mode, which arises only when . For our purposes, we concentrate on the regions where , where one has only the dotted lines. The top plots show the solutions for and the corresponding currents . The bottom plots show the spin components and . The variation of with is striking: the spins of the propagating electrons switch the sign of their in-plane component with a small change of near . Note also the flipping of as crosses . This flipping persists as increases, and the range with near these points narrows. Figure 5 shows results as a function of , for the same energy, but at fixed SO strength . Clearly, even a relatively small AB flux already yields a single right-moving propagating mode () and therefore fully polarized spins. At small , the polarization starts close to the direction, but it then rotates towards the direction as increases towards , and flips sign after crossing these points.

Figure 3: (Supplied separately) Contour plots of the ballistic conductance (in units of ) in the plane (the AB phase and the SO strength are in units of ). The values are represented by dark, medium and bright areas. The number above each plot is the energy (in units of ).

Figure 4: (Color online) Wave vectors (in units of ), currents , and spin components and , for right-moving modes, as functions of the SO strength (in units of ), for and . For values of at which , the figures show only one mode (dotted line). 
When , the figures show two modes (dotted and dashed lines).

Figure 5: (Color online) Same as Fig. 4, for and for fixed SO strength , as functions of (in units of ).

IV. Discussion

Given the above analysis, we may compare our system with that of the single diamond, Ref. hatano, . As we report elsewhere, future the single diamond generates fully polarized electrons, along a controllable direction, whenever and for any . Although this condition is less restrictive than that given in Ref. hatano, , it is still much more restrictive than the conditions we found above. The literature contains many other proposals for spin filters, also based on the Rashba SOI. Usually, these give only a partial polarization. Some of these devices also require a large Zeeman field. In contrast, our filter can work at a relatively low (and fixed) magnetic field (as apparently desired technologically), so that the Zeeman energy is negligible. Note also that both and depend on the diamond size , and therefore one can choose a geometry which corresponds to the available ranges of the magnetic field and the microscopic Rashba parameters. In real experiments it is not realistic to use an infinite chain of diamonds. We now argue that under appropriate conditions it is sufficient to use a finite chain, as long as it is longer than the decay lengths of the evanescent modes. For the electrons coming in from the left we don’t need to worry about the details of the connection between the incoming lead and the chain: even if some of the electrons are reflected back into that lead, those which are transmitted into the chain will split into a sum of the four modes there, and when we still remain with fully polarized electrons (although their overall amplitude may involve a transmission factor with magnitude smaller than 1). The situation on the right hand end of the chain is more delicate. 
Here we should avoid reflections, since they may modify the outgoing spinors and change their polarization. A standard way to avoid reflections is to use adiabatic contacts. This is usually done for retaining the ballistic conductance of mesoscopic devices. Imry One way to avoid reflections is to have a large leakage to the ground near the exit channel, so that only a small fraction of electrons enter into the exit lead. For our filter to be useful, one also needs to measure the outgoing spins, or to relate the outgoing spin polarization to some measurement of a voltage or a current. This issue is common to many proposed filters, and it requires separate research. For the present purposes, we mention just a few possibilities. First, one can follow the original proposal of Datta and Das, 2 and connect the right hand end of the device adiabatically to a ferromagnetic lead, whose magnetization can be tuned. The outgoing current will decrease with the angle between the electron polarization and this magnetization. Second, to avoid connections to ferromagnets, one can also connect our filter adiabatically to another such filter, with different parameters which may block the polarized electrons coming from the first filter. Another way to test the spin polarization, is to couple one of the -nodes (Fig. 1) to a side quantum dot, that is in a Pauli spin blockade region. ono After a while, the side dot will capture one of the polarized electrons, and this will block the current (which contains electrons with the same polarization). Changing the parameters will then change the spin direction of the propagating electrons, and allow some current until the next blocking occurs. In conclusion, we propose a simple spin filter, which yields a full polarization over a broad range of parameters. For given energy and magnetic flux (which need not be very large), the polarization of the outgoing electrons can be tuned by varying the electric field which determines the SOI strength . 
We acknowledge discussions with Joe Imry. AA and OEW acknowledge the hospitality of NTT and of the ISSP, where this project started, and support from the ISF and from the DIP.
Davydov solitons in protein α-helices

Proteins sustain life through catalysis of biochemical processes in living organisms. Protein function involves physical work and as such can be performed only at the expense of free energy released by biochemical reactions. Protein dynamics is subject to quantum physical laws because proteins are nanosystems. The quantum transport of energy in proteins, however, is unclear due to the challenges of solving the many-body Schrödinger equation. In our recent article published in Physica A: Statistical Mechanics and its Applications, we study the transport of energy inside protein α-helices by deriving a system of quantum equations of motion from the Davydov Hamiltonian with the use of the Schrödinger equation and the generalized Ehrenfest theorem. Numerically solving the system of quantum equations of motion for different initial distributions of the amide I energy over the peptide groups confirmed the generation of both moving and stationary Davydov solitons in the absence of thermal agitation. In our simulations, the soliton generation, propagation, and stability were found to depend on the symmetry of the exciton-phonon interaction Hamiltonian and on the initial site of application of the exciton energy.
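The article's own equations are not reproduced here. As a rough illustration only, the sketch below integrates one commonly used discrete form of the Davydov equations (exciton amplitudes a_n coupled to lattice displacements b_n) with a fourth-order Runge-Kutta step. The parameter values, the open-chain boundary conditions, and the ħ = 1 units are assumptions for illustration, not values from the article.

```python
# A rough sketch (not the article's code) of one common discrete form of the
# Davydov equations, in units with hbar = 1 and illustrative parameters:
#   i da_n/dt = chi*(b_{n+1} - b_{n-1})*a_n - J*(a_{n+1} + a_{n-1})
#   M d2b_n/dt2 = w*(b_{n+1} - 2*b_n + b_{n-1}) + chi*(|a_{n+1}|^2 - |a_{n-1}|^2)
N = 21                               # peptide groups in the chain
J, CHI, W, M = 1.0, 0.5, 1.0, 1.0    # hypothetical coupling constants

def deriv(a, b, p):
    """Right-hand sides for amplitudes a (complex), displacements b, momenta p."""
    ga = lambda n: a[n] if 0 <= n < N else 0j   # open chain: zero outside
    gb = lambda n: b[n] if 0 <= n < N else 0.0
    da = [-1j * (CHI * (gb(n + 1) - gb(n - 1)) * a[n] - J * (ga(n + 1) + ga(n - 1)))
          for n in range(N)]
    db = [p[n] / M for n in range(N)]
    dp = [W * (gb(n + 1) - 2.0 * b[n] + gb(n - 1))
          + CHI * (abs(ga(n + 1)) ** 2 - abs(ga(n - 1)) ** 2) for n in range(N)]
    return da, db, dp

def rk4_step(a, b, p, dt):
    """One classical fourth-order Runge-Kutta step for the coupled system."""
    shift = lambda s, ds, f: [x + f * dt * d for x, d in zip(s, ds)]
    k1 = deriv(a, b, p)
    k2 = deriv(shift(a, k1[0], 0.5), shift(b, k1[1], 0.5), shift(p, k1[2], 0.5))
    k3 = deriv(shift(a, k2[0], 0.5), shift(b, k2[1], 0.5), shift(p, k2[2], 0.5))
    k4 = deriv(shift(a, k3[0], 1.0), shift(b, k3[1], 1.0), shift(p, k3[2], 1.0))
    mix = lambda s, i: [x + dt / 6.0 * (k1[i][n] + 2 * k2[i][n] + 2 * k3[i][n] + k4[i][n])
                        for n, x in enumerate(s)]
    return mix(a, 0), mix(b, 1), mix(p, 2)

# amide I excitation applied at the central site, lattice initially at rest
a = [1.0 + 0j if n == N // 2 else 0j for n in range(N)]
b, p = [0.0] * N, [0.0] * N
for _ in range(200):
    a, b, p = rk4_step(a, b, p, dt=0.01)

norm = sum(abs(x) ** 2 for x in a)   # exciton norm; conserved by the dynamics
```

The conserved exciton norm Σ|a_n|² provides a quick sanity check on the integration; soliton formation itself depends on the couplings and the initial excitation site, as discussed above.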
Lightning is the electric breakdown of air by strong electric fields, producing a plasma, which causes an energy transfer from the electric field to heat, mechanical energy (the random motion of air molecules caused by the heat), and light.

In physics and other sciences, energy (from the Greek ενεργός, energos, "active, working")[1] is a scalar physical quantity that is a property of objects and systems which is conserved by nature. Energy is often defined as the ability to do work. Several different forms of energy, such as kinetic, potential, thermal, chemical, nuclear, and mass have been defined to explain all known natural phenomena. Energy is converted from one form to another, but it is never created or destroyed. This principle, the conservation of energy, was first postulated in the early 19th century, and applies to any isolated system. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[2] Although the total energy of a system does not change with time, its value may depend on the frame of reference. For example, a seated passenger in a moving airplane has zero kinetic energy relative to the airplane, but nonzero kinetic energy relative to the earth.

Thomas Young - the first to use the term "energy" in the modern sense.

The concept of energy emerged out of the idea of vis viva, which Leibniz defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz claimed that heat consisted of the random motion of the constituent parts of matter, a view shared by Isaac Newton, although it would be more than a century until this was generally accepted.
In 1807, Thomas Young was the first to use the term "energy", instead of vis viva, in its modern sense.[3] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy." It was argued for some years whether energy was a substance (the caloric) or merely a physical quantity, such as momentum. These results were eventually amalgamated into the laws of thermodynamics, which aided the rapid development of explanations of chemical processes using the concept of energy by Rudolf Clausius, Josiah Willard Gibbs and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius, and to the introduction of laws of radiant energy by Jožef Stefan.

During a 1961 lecture[4] for undergraduate students at the California Institute of Technology, Richard Feynman, a celebrated physics teacher and Nobel Laureate, said this about the concept of energy:

The Feynman Lectures on Physics[4]

Since 1918 it has been known that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. That is, energy is conserved because the laws of physics do not distinguish between different moments of time (see Noether's theorem).

Energy in various contexts since the beginning of the universe

The concept of energy and its transformations is extremely useful in explaining and predicting most natural phenomena. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often described by entropy (equal energy spread among all available degrees of freedom) considerations, since in practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces.
The concept of energy is used often in all fields of science. In chemistry, the energy differences between substances determine whether, and to what extent, one substance can be converted into another or react with other substances. In biology, chemical bonds are broken and made during metabolic processes, and the associated changes in available energy are studied in the subfield of bioenergetics. Energy is often stored by cells in the form of substances such as carbohydrate molecules (including sugars) and lipids, which release energy when reacted with oxygen. In geology and meteorology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[5] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all results of energy transformations brought about by solar energy on the planet Earth.

Energy transformations in the universe over time are characterized by various kinds of potential energy, which has been available since the Big Bang, later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nuclear decay, in which energy is released which was originally "stored" in heavy isotopes (such as uranium and thorium); that storage used the gravitational potential energy released by the gravitational collapse of supernovae, which created these elements before they were incorporated into the solar system and the Earth. Such nuclear decay in the core of the Earth releases heat, which in turn may lift mountains via orogenesis. This lifting represents a kind of gravitational potential energy storage, which may be released as active kinetic energy in landslides, after a triggering event.
Earthquakes also release stored elastic potential energy in rocks, a store which has been produced ultimately from the same heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy stored since the collapse of long-destroyed stars. In another similar chain of transformations from the dawn of the universe, nuclear fusion of hydrogen in the Sun releases potential energy stored at the time of the Big Bang, when, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. Energy from this fusion process is triggered by heat and pressure generated from the gravitational collapse of hydrogen clouds when they produce stars, and some of the energy is transformed to sunlight. Such sunlight from our Sun may again be stored as gravitational potential energy after it strikes the Earth, when (for example) water evaporates from the oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbine/generators to produce electricity). Sunlight also drives all weather phenomena, including violent events triggered when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of hurricanes. Sunlight is also captured by plants as chemical potential energy, when carbon dioxide and water are converted into carbohydrates, lipids, and oxygen. Its release may be triggered suddenly by a spark in a forest fire, or made available more slowly for animal or human metabolism, when these molecules are ingested and catabolism is triggered by enzyme action. Through all of these transformation chains, potential energy stored at the time of the Big Bang is later released by intermediate events, sometimes being stored in a number of ways over time between releases, as more active energy.
In all these events, one kind of energy is converted to other types of energy, including heat.

Regarding applications of the concept of energy

• The word "energy" is also used outside of physics in many ways, which can lead to ambiguity and inconsistency. The vernacular terminology is not consistent with technical terminology. For example, the important public-service announcement, "Please conserve energy" uses vernacular notions of "conservation" and "energy" which make sense in their own context but are utterly incompatible with the technical notions of "conservation" and "energy" (such as are used in the law of conservation of energy).[7]

In classical physics energy is considered a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but the time component of the energy-momentum 4-vector).[8] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts).

Energy transfer

Because energy is strictly conserved and is also locally conserved (wherever it can be defined), it is important to remember that, by the definition of energy, the transfer of energy between the "system" and adjacent regions is work. A familiar example is mechanical work. In simple cases this is written as

ΔE = W

if there are no other energy-transfer processes involved. Here ΔE is the amount of energy transferred, and W represents the work done on the system. More generally, the energy transfer can be split into two categories:

ΔE = W + Q

where Q represents the heat flow into the system.

There are other ways in which an open system can gain or lose energy. If mass is counted as energy (as in many relativistic problems) then ΔE must contain a term for mass lost or gained.
In chemical systems, energy can be added to a system by means of adding substances with different chemical potentials, which potentials are then extracted (both of these processes are illustrated by fueling an auto, a system which gains in energy thereby, without the addition of either work or heat). These terms may be added to the above equation, or they can generally be subsumed into a quantity called the "energy addition term", which refers to any type of energy carried over the surface of a control volume or system volume. Examples may be seen above, and many others can be imagined (for example, the kinetic energy of a stream of particles entering a system, or energy from a laser beam adds to system energy, without being either work done or heat added, in the classic senses).

Energy is also transferred from potential energy (Ep) to kinetic energy (Ek) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed, so the initial energy and the final energy will be equal to each other. This can be demonstrated by the following:

Ep,i + Ek,i = Ep,f + Ek,f

The equation can then be simplified further, since Ep = mgh (mass times the acceleration due to gravity times the height) and Ek = ½mv² (half mass times velocity squared). Then the total amount of energy can be found by adding Ep + Ek = Etotal.
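This bookkeeping can be checked numerically. The sketch below (illustrative values, not from the text) follows an object in free fall and verifies that Ep + Ek stays equal to the initial mgh:

```python
# Check conservation of mechanical energy for an object in free fall:
# Ep = m*g*h and Ek = 0.5*m*v**2 should sum to a constant.
m, g, h0 = 2.0, 9.81, 100.0   # kg, m/s^2, m (illustrative values)

def energies(t):
    """Potential and kinetic energy t seconds after release from rest."""
    v = g * t                  # speed gained under constant acceleration
    h = h0 - 0.5 * g * t * t   # height remaining
    return m * g * h, 0.5 * m * v * v

total0 = sum(energies(0.0))    # all potential at the moment of release
totals = [sum(energies(t)) for t in (0.5, 1.0, 2.0, 4.0)]
# every total equals m*g*h0 ≈ 1962 J, up to floating-point rounding
```

The potential term loses exactly what the kinetic term gains at every instant, which is the closed-system statement above.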
Energy and thermodynamics

Internal energy

Internal energy (U) is composed of:
- Sensible energy: the portion of the internal energy of a system associated with kinetic energies (molecular translation, rotation, and vibration; electron translation and spin; and nuclear spin) of the molecules.
- Latent energy: the internal energy associated with the phase of a system.
- Chemical energy: the internal energy associated with the different kinds of aggregation of atoms in matter.
- Nuclear energy: the tremendous amount of energy associated with the strong bonds within the nucleus of the atom itself.
- Energy interactions: those types of energies not stored in the system (e.g. heat transfer, mass transfer, and work), but which are recognized at the system boundary as they cross it, and which represent gains or losses by a system during a process.
- Thermal energy: the sum of sensible and latent forms of internal energy.

The laws of thermodynamics

According to the second law of thermodynamics, work can be totally converted into heat, but not vice versa. This is a mathematical consequence of statistical mechanics. The first law of thermodynamics simply asserts that energy is conserved,[11] and that heat is included as a form of energy transfer. A commonly used corollary of the first law is that for a "system" subject only to pressure forces and heat transfer (e.g. a cylinder full of gas), the differential change in energy of the system (with a gain in energy signified by a positive quantity) is given by

dE = T dS - P dV

where the first term on the right is the heat transfer into the system, defined in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated); and the last term on the right-hand side is identified as "work" done on the system, where pressure is P and volume V (the negative sign results since compressing the system is needed to do work on it, so that the volume change dV is negative when work is done on the system).
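The differential form dE = T dS - P dV can be checked numerically for a simple case. The sketch below uses one mole of a monatomic ideal gas (an illustrative choice, with E = (3/2)RT, P = RT/V, and the entropy written up to an additive constant) and verifies the identity to first order in small changes of state:

```python
import math

# Numerically check dE = T*dS - P*dV for one mole of a monatomic ideal gas,
# using E = (3/2)*R*T, P = R*T/V and S = Cv*ln(T) + R*ln(V) + const.
R = 8.314                 # J/(mol K), gas constant
CV = 1.5 * R              # molar heat capacity at constant volume

def state(T, V):
    E = CV * T
    S = CV * math.log(T) + R * math.log(V)   # entropy up to a constant
    P = R * T / V
    return E, S, P

T, V = 300.0, 0.025       # K, m^3 (illustrative state)
dT, dV = 1e-4, 1e-8       # small changes of state
E1, S1, P1 = state(T, V)
E2, S2, _ = state(T + dT, V + dV)

dE = E2 - E1
first_law = T * (S2 - S1) - P1 * dV   # T dS - P dV
# dE and first_law agree to first order in the small changes
```

For the ideal gas the volume terms cancel exactly (T dS contributes +RT dV/V while -P dV removes it), leaving dE = Cv dT, which is why the two sides agree.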
Although this equation is the standard text-book example of energy conservation in classical thermodynamics, it is highly specific, ignoring all chemical, electric, nuclear, and gravitational forces, and effects such as advection of any form of energy other than heat, and because it contains a term that depends on temperature. The most general statement of the first law (i.e. conservation of energy) is valid even in situations in which temperature is undefinable. Energy is sometimes expressed as

E = W + Q

which is unsatisfactory[7] because there cannot exist any thermodynamic state functions W or Q that are meaningful on the right-hand side of this equation, except perhaps in trivial cases.

Equipartition of energy

The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and at two other points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle: the total energy of a system with many degrees of freedom is equally split among all these degrees of freedom.

This principle is vitally important to understanding the behavior of a quantity closely related to energy, called entropy. Entropy is a measure of the evenness of the distribution of energy between the parts of a system. This concept is also related to the second law of thermodynamics, which basically states that when an isolated system is given more degrees of freedom (i.e. new available energy states which are the same as existing states), then energy spreads over all available degrees equally, without distinction between "new" and "old" degrees.

Oscillators, phonons, and photons

In an ensemble (connected collection) of unsynchronized oscillators, the average energy is spread equally between kinetic and potential types.
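The equal time-average split for a single harmonic oscillator can be verified directly. The sketch below (illustrative mass, spring constant, and amplitude) averages the two energy forms over one full cycle:

```python
import math

# Time-average the kinetic and potential energy of a harmonic oscillator
# x(t) = A*cos(w*t): over a full cycle each form holds half the total energy.
m, k, A = 1.0, 4.0, 0.5          # mass, spring constant, amplitude (illustrative)
w = math.sqrt(k / m)             # angular frequency
T = 2 * math.pi / w              # period of one oscillation

n = 100000
ek = ep = 0.0
for i in range(n):
    t = T * i / n
    x = A * math.cos(w * t)      # displacement
    v = -A * w * math.sin(w * t) # velocity
    ek += 0.5 * m * v * v
    ep += 0.5 * k * x * x
ek, ep = ek / n, ep / n
# ek and ep each come out to k*A**2/4 = 0.25: equipartition between the two forms
```

The averages of sin² and cos² over a full period are both ½, which is the mechanical content of the equipartition statement above.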
In a solid, thermal energy (often referred to loosely as heat content) can be accurately described by an ensemble of thermal phonons that act as mechanical oscillators. In this model, thermal energy is equally kinetic and potential. In an ideal gas, the interaction potential between particles is essentially the delta function, which stores no energy: thus, all of the thermal energy is kinetic.

Because an electric oscillator (LC circuit) is analogous to a mechanical oscillator, its energy must be, on average, equally kinetic and potential. It is entirely arbitrary whether the magnetic energy is considered kinetic and the electric energy considered potential, or vice versa. That is, either the inductor is analogous to the mass while the capacitor is analogous to the spring, or vice versa.

1. By extension of the previous line of thought, in free space the electromagnetic field can be considered an ensemble of oscillators, meaning that radiation energy can be considered equally potential and kinetic. This model is useful, for example, when the electromagnetic Lagrangian is of primary interest and is interpreted in terms of potential and kinetic energy.

2. On the other hand, in the key equation E² = (mc²)² + (pc)², the contribution mc² is called the rest energy, and all other contributions to the energy are called kinetic energy. For a particle that has mass, this implies that the kinetic energy is p²/2m at speeds much smaller than c, as can be proved by writing E = mc² √(1 + p²/(m²c²)) and expanding the square root to lowest order. By this line of reasoning, the energy of a photon is entirely kinetic, because the photon is massless and has no rest energy. This expression is useful, for example, when the energy-versus-momentum relationship is of primary interest.

The two analyses are entirely consistent. The electric and magnetic degrees of freedom in item 1 are transverse to the direction of motion, while the speed in item 2 is along the direction of motion.
For non-relativistic particles these two notions of potential versus kinetic energy are numerically equal, so the ambiguity is harmless, but not so for relativistic particles.

Work and virtual work

Work is roughly force times distance. But more precisely, it is

W = ∫C F · ds

This says that the work (W) is equal to the integral (along a certain path C) of the force; for details see the mechanical work article.

Quantum mechanics

In quantum mechanics energy is defined in terms of the energy operator, as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. It can thus be considered as a definition of the measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of the wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for an electromagnetic wave in vacuum, the resulting energy states are related to the frequency by the Planck equation E = hν (where h is Planck's constant and ν the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons.

When calculating kinetic energy (the work to accelerate a mass from zero speed to some finite speed) relativistically, using Lorentz transformations instead of Newtonian mechanics, Einstein discovered an unexpected by-product of these calculations: an energy term which does not vanish at zero speed. He called it rest mass energy, energy which every mass must possess even when at rest. The amount of energy is directly proportional to the mass of the body:

E = mc²

where m is the mass, c is the speed of light in vacuum, and E is the rest mass energy.
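As a numerical aside (the electron values are illustrative), subtracting the rest energy from the relativistic relation E = √((mc²)² + (pc)²) recovers the Newtonian kinetic energy p²/(2m) at speeds far below c:

```python
import math

# Compare the exact relativistic kinetic energy E - m*c**2, with
# E = sqrt((m*c**2)**2 + (p*c)**2), against the Newtonian p**2/(2*m).
c = 299792458.0          # m/s, speed of light
m = 9.109e-31            # kg (electron mass, illustrative)
p = m * 1e5              # momentum at a speed of ~1e5 m/s << c

E = math.sqrt((m * c**2) ** 2 + (p * c) ** 2)
kinetic_exact = E - m * c**2
kinetic_newton = p**2 / (2 * m)
rel_err = abs(kinetic_exact - kinetic_newton) / kinetic_newton
# the fractional difference is of order (v/c)**2, here roughly 1e-7
```

The leftover fractional difference is the next term of the square-root expansion, which is why it scales as (v/c)².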
For example, consider electron-positron annihilation, in which the rest mass of individual particles is destroyed, but the inertia equivalent of the system of the two particles (its invariant mass) remains (since all energy is associated with mass), and this inertia and invariant mass is carried off by photons, which individually are massless but as a system retain their mass. This is a reversible process: the inverse process is called pair creation, in which the rest mass of particles is created from the energy of two (or more) annihilating photons.

In general relativity, the stress-energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[8]

It is not uncommon to hear that energy is "equivalent" to mass. It would be more accurate to state that every energy has an inertia and gravity equivalent, and because mass is a form of energy, mass too has inertia and gravity associated with it.

There is no absolute measure of energy, because energy is defined as the work that one system does (or can do) on another. Thus, only the energy of the transition of a system from one state into another can be defined, and thus measured.

A calorimeter - an instrument used by physicists to measure energy.

Forms of energy

Heat, a form of energy, is partly potential energy and partly kinetic energy.

Classical mechanics distinguishes between potential energy, which is a function of the position of an object, and kinetic energy, which is a function of its movement. Both position and movement are relative to a frame of reference, which must be specified: this is often (and originally) an arbitrary fixed point on the surface of the Earth, the terrestrial frame of reference.
Some introductory authors[citation needed] attempt to separate all forms of energy into either kinetic or potential: this is not incorrect, but neither is it clear that it is a real simplification, as Feynman points out:

Examples of the interconversion of energy. Mechanical energy is converted into:
- mechanical energy, by a lever
- thermal energy, by brakes
- electric energy, by a dynamo
- electromagnetic radiation, by a synchrotron
- chemical energy, by matches
- nuclear energy, by a particle accelerator

Potential energy

Potential energy, symbols Ep, V or Φ, is defined as the work done against a given force (= work of the given force with a minus sign) in changing the position of an object with respect to a reference position (often taken to be infinite separation). If F is the force and s is the displacement,

Ep = -∫ F · ds

with the dot representing the scalar product of the two vectors.

The name "potential" energy originally signified the idea that the energy could readily be transferred as work, at least in an idealized system (reversible process, see below). This is not completely true for any real system, but it is often a reasonable first approximation in classical mechanics. The general equation above can be simplified in a number of common cases, notably when dealing with gravity or with elastic forces.

Gravitational potential energy

The gravitational force near the Earth's surface varies very little with the height, h, and is equal to the mass, m, multiplied by the gravitational acceleration, g = 9.81 m/s². In these cases, the gravitational potential energy is given by

Ep,g = mgh

A more general expression for the potential energy due to Newtonian gravitation between two bodies of masses m1 and m2, useful in astronomy, is

Ep,g = -G m1 m2 / r

where r is the separation between the two bodies and G is the gravitational constant, 6.6742(10)×10⁻¹¹ m³kg⁻¹s⁻².[12] In this case, the reference point is the infinite separation of the two bodies.
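The two gravitational expressions can be compared numerically. The sketch below (illustrative mass and height) checks that the general Newtonian formula reproduces mgh near the Earth's surface:

```python
# Check that the general Newtonian potential reproduces Ep = m*g*h near
# the Earth's surface: U(R+h) - U(R) ≈ m*g*h with g = G*M/R**2.
G = 6.6742e-11        # m^3 kg^-1 s^-2, gravitational constant
M = 5.972e24          # kg, mass of the Earth
R = 6.371e6           # m, mean radius of the Earth
m, h = 1.0, 100.0     # kg, m (illustrative values)

U = lambda r: -G * M * m / r     # general two-body potential energy
exact = U(R + h) - U(R)          # work against gravity over the climb
g = G * M / R**2                 # surface gravitational acceleration, ~9.8 m/s^2
approx = m * g * h
rel_err = abs(exact - approx) / approx
# the two agree to a fractional accuracy of about h/R ~ 1.6e-5
```

The discrepancy grows in proportion to h/R, which is why mgh is only a near-surface approximation.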
Elastic potential energy

Elastic potential energy is defined as the work needed to compress (or expand) a spring. The force, F, in a spring or any other system which obeys Hooke's law is proportional to the extension or compression, x:

F = -kx

where k is the force constant of the particular spring (or system). In this case, the calculated work becomes

Ep,e = ½kx²

Hooke's law is a good approximation for the behaviour of chemical bonds under normal conditions, i.e. when they are not being broken or formed.

Kinetic energy

Kinetic energy, symbols Ek, T or K, is the work required to accelerate an object to a given speed. Indeed, calculating this work one easily obtains the following:

Ek = ½mv²

At speeds approaching the speed of light, c, this work must be calculated using Lorentz transformations, which results in the following:

Ek = mc² (1/√(1 - v²/c²) - 1)

This equation reduces to the one above it at small (compared to c) speeds. A mathematical by-product of this work (which is immediately seen in the last equation) is that even at rest a mass has the amount of energy equal to

E = mc²

This energy is thus called rest mass energy.

Thermal energy

Examples of the interconversion of energy. Thermal energy is converted into:
- mechanical energy, by a steam turbine
- thermal energy, by a heat exchanger
- electric energy, by a thermocouple
- electromagnetic radiation, by hot objects
- chemical energy, by a blast furnace
- nuclear energy, by a supernova

The general definition of thermal energy, symbols q or Q, is also problematic. A practical definition for small transfers of heat is

dQ = Cv dT

where Cv is the heat capacity of the system. This definition will fail if the system undergoes a phase transition, e.g. if ice is melting to water, as in these cases the system can absorb heat without increasing its temperature. In more complex systems, it is preferable to use the concept of internal energy rather than that of thermal energy (see Chemical energy below).
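The practical definition dQ = Cv dT supports quick estimates. The sketch below (illustrative quantities, using the specific heat of water of about 4.1855 J per gram per °C) computes the heat needed to warm a cup of water:

```python
# Estimate the heat needed to warm a cup of water using dQ = Cv*dT with a
# temperature-independent heat capacity (water, far from a phase transition).
C_WATER = 4.1855             # J/(g °C), specific heat of water
mass_g = 250.0               # g of water in the cup (illustrative)
dT = 80.0                    # °C, warming from 20 °C to the boiling point

Q = C_WATER * mass_g * dT    # total heat in joules
# Q ≈ 8.4e4 J, i.e. about 84 kJ
```

The same linear formula is what fails at a phase transition: melting ice absorbs heat at constant temperature, so dT = 0 while dQ is not.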
Despite the theoretical problems, the above definition is useful in the experimental measurement of energy changes. In a wide variety of situations, it is possible to use the energy released by a system to raise the temperature of another object, e.g. a bath of water. It is also possible to measure the amount of electric energy required to raise the temperature of the object by the same amount. The calorie was originally defined as the amount of energy required to raise the temperature of one gram of water by 1 °C (approximately 4.1855 J, although the definition later changed), and the British thermal unit was defined as the energy required to heat one pound of water by 1 °F (later fixed as 1055.06 J).

Electric energy

Examples of the interconversion of energy. Electric energy is converted into:
- mechanical energy, by an electric motor
- thermal energy, by a resistor
- electric energy, by a transformer
- electromagnetic radiation, by a light-emitting diode
- chemical energy, by electrolysis
- nuclear energy, by a synchrotron

The electric potential energy of a given configuration of charges is defined as the work which must be done against the Coulomb force to rearrange charges from infinite separation to this configuration (or the work done by the Coulomb force separating the charges from this configuration to infinity). For two point-like charges Q1 and Q2 at a distance r this work, and hence the electric potential energy, is equal to:

Ep,e = Q1 Q2 / (4πε0 r)

where ε0 is the electric constant of a vacuum, 10⁷/4πc0² or 8.854188…×10⁻¹² F/m.[12] If the charge is accumulated in a capacitor (of capacitance C), the reference configuration is usually selected not to be infinite separation of charges, but vice versa: charges at an extremely close proximity to each other (so that there is zero net charge on each plate of a capacitor).
In this case the work, and thus the electric potential energy, becomes

Ep,e = Q²/(2C)

If an electric current passes through a resistor, electric energy is converted to heat; if the current passes through an electric appliance, some of the electric energy will be converted into other forms of energy (although some will always be lost as heat). The amount of electric energy due to an electric current can be expressed in a number of different ways:

E = UQ = UIt = Pt = U²t/R = I²Rt

where U is the electric potential difference (in volts), Q is the charge (in coulombs), I is the current (in amperes), t is the time for which the current flows (in seconds), P is the power (in watts) and R is the electric resistance (in ohms). The last of these expressions is important in the practical measurement of energy, as potential difference, resistance and time can all be measured with considerable accuracy.

Magnetic energy

There is no fundamental difference between magnetic energy and electric energy: the two phenomena are related by Maxwell's equations. The potential energy of a magnet of magnetic moment m in a magnetic field B is defined as the work of the magnetic force (actually of the magnetic torque) on the re-alignment of the vector of the magnetic dipole moment, and is equal to:

Ep,m = -m · B

while the energy stored in an inductor (of inductance L) when a current I passes through it is

E = ½LI²

This second expression forms the basis for superconducting magnetic energy storage.

Electromagnetic fields

Examples of the interconversion of energy. Electromagnetic radiation is converted into:
- mechanical energy, by a solar sail
- thermal energy, by a solar collector
- electric energy, by a solar cell
- electromagnetic radiation, by non-linear optics
- chemical energy, by photosynthesis
- nuclear energy, by Mössbauer spectroscopy

Calculating the work needed to create an electric or magnetic field in unit volume (say, in a capacitor or an inductor) results in the electric and magnetic field energy densities

uE = ε0E²/2 and uB = B²/(2μ0)

in SI units.
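The chain of equalities for the energy carried by a current, given above, can be cross-checked with illustrative circuit values; every form reduces to the same number once Ohm's law U = IR is applied:

```python
# Energy dissipated in a resistor, computed several equivalent ways:
# E = U*I*t = P*t = U**2*t/R = I**2*R*t, with U = I*R (Ohm's law).
R = 100.0            # ohms (illustrative)
I = 0.5              # amperes
t = 60.0             # seconds

U = I * R            # 50 V across the resistor
P = U * I            # 25 W dissipated
ways = [U * I * t, P * t, U**2 * t / R, I**2 * R * t]
# all four expressions give the same 1500 J
```

In practice the I²Rt form is the convenient one, since current, resistance and time are each easy to measure accurately, as noted above.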
Electromagnetic radiation, such as microwaves, visible light or gamma rays, represents a flow of electromagnetic energy. Applying the above expressions to the magnetic and electric components of the electromagnetic field, both the volumetric density and the flow of energy in an e/m field can be calculated. The resulting Poynting vector, which is expressed as

S = (1/μ0) E × B

in SI units, gives the density of the flow of energy and its direction.

The energy of electromagnetic radiation is quantized (has discrete energy levels). The spacing between these levels is equal to

E = hν

where h is the Planck constant, 6.6260693(11)×10⁻³⁴ J·s,[12] and ν is the frequency of the radiation. This quantity of electromagnetic energy is usually called a photon. The photons which make up visible light have energies of 270-520 yJ, equivalent to 160-310 kJ/mol, the strength of weaker chemical bonds.

Chemical energy

Examples of the interconversion of energy. Chemical energy is converted into:
- mechanical energy, by muscle
- thermal energy, by fire
- electric energy, by a fuel cell
- electromagnetic radiation, by glowworms
- chemical energy, by chemical reaction

Chemical energy is the energy due to associations of atoms in molecules and various other kinds of aggregates of matter. It may be defined as the work done by electric forces during the re-arrangement of electric charges, electrons and protons, in the process of aggregation. If the chemical energy of a system decreases during a chemical reaction, it is transferred to the surroundings in some form of energy (often heat); on the other hand, if the chemical energy of a system increases as a result of a chemical reaction, it is by converting another form of energy from the surroundings.
For example, when two hydrogen atoms react to form a dihydrogen molecule, the chemical energy decreases by 724 zJ (the bond energy of the H–H bond); when the electron is completely removed from a hydrogen atom, forming a hydrogen ion (in the gas phase), the chemical energy increases by 2.18 aJ (the ionization energy of hydrogen). It is common to quote the changes in chemical energy for one mole of the substance in question: typical values for the change in molar chemical energy during a chemical reaction range from tens to hundreds of kJ/mol.

The chemical energy as defined above is also referred to by chemists as the internal energy, U: technically, this is measured by keeping the volume of the system constant. However, most practical chemistry is performed at constant pressure and, if the volume changes during the reaction (e.g. a gas is given off), a correction must be applied to take account of the work done by or on the atmosphere to obtain the enthalpy, H:

ΔH = ΔU + pΔV

A second correction, for the change in entropy, S, must also be performed to determine whether a chemical reaction will take place or not, giving the Gibbs free energy, G:

ΔG = ΔH - TΔS

These corrections are sometimes negligible, but often not (especially in reactions involving gases).

Since the industrial revolution, the burning of coal, oil, natural gas or products derived from them has been a socially significant transformation of chemical energy into other forms of energy. The energy "consumption" (one should really speak of "energy transformation") of a society or country is often quoted in reference to the average energy released by the combustion of these fossil fuels:

- tonne of coal equivalent (TCE) = 29 GJ
- tonne of oil equivalent (TOE) = 41.87 GJ

On the same basis, a tank-full of gasoline (45 litres, 12 gallons) is equivalent to about 1.6 GJ of chemical energy. Another chemically-based unit of measurement for energy is the "tonne of TNT", taken as 4.184 GJ.
Hence, burning a tonne of oil releases about ten times as much energy as the explosion of one tonne of TNT: fortunately, the energy is usually released in a slower, more controlled manner. Simple examples of chemical energy are batteries and food. When you eat, the food is digested and turned into chemical energy, which can then be transformed into kinetic energy.

Nuclear energy

Examples of the interconversion of energy: nuclear binding energy is converted into
• mechanical energy by alpha radiation
• thermal energy by the Sun
• electric energy by beta radiation
• electromagnetic radiation by gamma radiation
• chemical energy by radioactive decay
• nuclear energy by nuclear isomerism

Nuclear potential energy, along with electric potential energy, provides the energy released from nuclear fission and nuclear fusion processes. The result of both these processes is nuclei in which strong nuclear forces bind nuclear particles more strongly and closely. Weak nuclear forces (different from strong forces) provide the potential energy for certain kinds of radioactive decay, such as beta decay. The energy released in nuclear processes is so large that the relativistic change in mass (after the energy has been removed) can be as much as several parts per thousand.

Nuclear particles (nucleons) like protons and neutrons are not destroyed (law of conservation of baryon number) in fission and fusion processes. A few lighter particles may be created or destroyed (for example, in beta minus and beta plus decay, or electron capture decay), but these minor processes are not important to the immediate energy release in fission and fusion. Rather, fission and fusion release energy when collections of baryons become more tightly bound, and it is the energy associated with a fraction of the mass of the nucleons (but not the whole particles) which appears as the heat and electromagnetic radiation generated by nuclear reactions.
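The fuel-unit figures above are self-consistent and easy to verify:

```python
# Energy content of fuel-based units, in joules (values from the text).
TCE = 29e9       # tonne of coal equivalent
TOE = 41.87e9    # tonne of oil equivalent
TNT = 4.184e9    # tonne of TNT

# A tonne of oil releases about ten times the energy of a tonne of TNT:
print(f"TOE / tonne of TNT = {TOE / TNT:.1f}")

# A 45-litre tank of gasoline at ~1.6 GJ implies an energy density of:
tank_J, tank_L = 1.6e9, 45
print(f"gasoline: {tank_J / tank_L / 1e6:.1f} MJ/L")
```

The oil/TNT ratio comes out at almost exactly 10, as the text states, and the implied gasoline energy density (about 35.6 MJ/L) is close to the commonly tabulated value.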
This heat and radiation retains the "missing" mass, but the mass is missing only because it escapes in the form of heat and light, which retain the mass and conduct it out of the system where it is not measured. The energy from the Sun, also called solar energy, is an example of this form of energy conversion. In the Sun, the process of hydrogen fusion converts about 4 million metric tons of solar matter per second into light, which is radiated into space, but during this process the number of total protons and neutrons in the Sun does not change. In this system, the light itself retains the inertial equivalent of this mass, and indeed the mass itself (as a system), which represents 4 million tons per second of electromagnetic radiation moving into space. Each of the helium nuclei formed in the process is less massive than the four protons from which it was formed, but (to a good approximation) no particles or atoms are destroyed in the process of turning the Sun's nuclear potential energy into light.

Surface energy

If there is any kind of tension in a surface, such as a stretched sheet of rubber or a material interface, it is possible to define surface energy. In particular, any meeting of dissimilar materials that don't mix will result in some kind of surface tension. If the surfaces are free to move then, as seen in capillary surfaces for example, the minimum energy will as usual be sought. A minimal surface, for example, represents the smallest possible energy that a surface can have if its energy is proportional to the area of the surface. For this reason, (open) soap films of small size are minimal surfaces (small size reduces gravity effects, and openness prevents pressure from building up; note that a bubble is a minimum energy surface but not a minimal surface by definition).

Transformations of energy

Energy can be converted into matter and vice versa.
The mass-energy equivalence formula E = mc², derived independently by Albert Einstein and Henri Poincaré,[citation needed] quantifies the relationship between mass and rest energy. Since c² is extremely large relative to ordinary human scales, the conversion of mass to other forms of energy can liberate tremendous amounts of energy, as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of a unit of energy is minuscule, which is why a loss of energy from most systems is difficult to measure by weight, unless the energy loss is very large. Examples of energy transformation into matter (particles) are found in high energy nuclear physics.

As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work, or be transformed to other usable forms of energy, grows less and less.

Law of conservation of energy

Energy is subject to the law of conservation of energy. According to this law, energy can neither be created (produced) nor destroyed; it can only be transformed. Most kinds of energy (with gravitational energy being a notable exception)[1] are also subject to strict local conservation laws. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[4][7] Conservation of energy is the mathematical consequence of translational symmetry of time (that is, the indistinguishability of time intervals taken at different times)[13] - see Noether's theorem.
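The solar figure from the previous section can be checked against E = mc²: converting 4 million metric tons of matter per second into radiation should give a power close to the measured solar luminosity (about 3.8×10^26 W, a standard value supplied here for comparison):

```python
# E = m c^2: power radiated if the Sun converts 4 million tonnes/s to light.
c = 2.99792458e8     # speed of light, m/s
mass_rate = 4.0e9    # kg/s (4 million metric tons per second, from the text)

power = mass_rate * c**2   # watts
print(f"P = {power:.2e} W")
```

The result, about 3.6×10^26 W, is indeed within a few percent of the accepted solar luminosity, which is a good consistency check on the "4 million tons per second" figure.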
Because energy is the quantity canonically conjugate to time, it is impossible to define the exact amount of energy during any very short time interval, making it impossible to apply the law of conservation of energy over such intervals. This must not be considered a "violation" of the law: we know the law still holds, because a succession of short time periods does not accumulate any violation of conservation of energy. The uncertainty is expressed by ΔE·Δt ≥ ħ/2, which is similar in form to the uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics).

In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum; their exchange with real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals bond forces and some other observable phenomena.

Energy and life

Any living organism relies on an external source of energy—radiation from the Sun in the case of green plants; chemical energy in some form in the case of animals—to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples.
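A rough numerical sense of the energy-time relation ΔE·Δt ≥ ħ/2: a state whose energy is uncertain by 1 eV can only be defined over times of order 10^-16 s. The 1 eV width is an illustrative choice, not a value from the text:

```python
# Energy-time relation: Delta_t ~ hbar / (2 * Delta_E).
hbar = 1.0545718e-34   # reduced Planck constant, J s
eV = 1.602176634e-19   # joules per electronvolt

delta_E = 1.0 * eV                 # illustrative energy width
delta_t = hbar / (2 * delta_E)     # shortest time over which this width is defined
print(f"Delta_t ~ {delta_t:.1e} s")
```

This is the sense in which virtual particles can "borrow" energy: the larger the borrowed energy ΔE, the shorter the interval Δt over which conservation cannot be checked.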
The food molecules are oxidised to carbon dioxide and water in the mitochondria:

C6H12O6 + 6O2 → 6CO2 + 6H2O
C57H110O6 + 81.5O2 → 57CO2 + 55H2O

and some of the energy is used to convert ADP into ATP:

ADP + HPO42− → ATP + H2O

The rest of the chemical energy in the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains, when it is split and reacted with water, is used for other metabolism (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[14]

• gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
• gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
• daily food intake of a normal adult: 6–8 MJ

It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical energy or radiation), and it is true that most real machines manage higher efficiencies. However, in growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[15] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren.
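The two work figures quoted above follow from elementary mechanics. The sprinter's mass and top speed (here taken as roughly 80 kg at 10 m/s) are assumptions for illustration; the weightlifting figure uses only numbers from the text:

```python
# Checking the two work examples from the text.
g = 9.81   # gravitational acceleration, m/s^2

# 150 kg lifted through 2 m: potential energy gain m*g*h.
lift = 150 * g * 2.0
print(f"weightlifter: {lift/1000:.1f} kJ")   # close to the quoted 3 kJ

# Sprinter's kinetic energy (1/2) m v^2, assuming ~80 kg at ~10 m/s.
sprint = 0.5 * 80 * 10.0**2
print(f"sprinter: {sprint/1000:.1f} kJ")     # close to the quoted 4 kJ
```

Both come out at a few kJ, a tiny fraction of the 6–8 MJ daily intake, which is the point of the comparison.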
The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[16] i.e. reconverted into carbon dioxide and heat.

Notes and references

1. Harper, Douglas. "Energy". Online Etymology Dictionary.
2. Lofts, G; et al. (2004). "11 — Mechanical Interactions". Jacaranda Physics 1 (2nd ed.). Milton, Queensland, Australia: John Wiley & Sons Australia. p. 286. ISBN 0-7016-3777-3.
3. Smith, Crosbie (1998). The Science of Energy - a Cultural History of Energy Physics in Victorian Britain. The University of Chicago Press. ISBN 0-226-76420-6.
4. Feynman, Richard (1964). The Feynman Lectures on Physics, Volume 1. USA: Addison Wesley. ISBN 0-201-02115-3.
5. Earth's Energy Budget.
6. Kittel, Charles; Knight, Walter D.; Ruderman, Malvin A. Berkeley Physics Course, Volume 1.
7. The Laws of Thermodynamics, including careful definitions of energy, free energy, et cetera.
8. Misner, Thorne, Wheeler (1973). Gravitation. San Francisco: W. H. Freeman. ISBN 0716703440.
9. The Hamiltonian, MIT OpenCourseWare 18.013A, Chapter 16.3. Accessed February 2007.
10. Cengel, Yunus A.; et al. (2002). Thermodynamics - An Engineering Approach (4th ed.). McGraw-Hill. pp. 17–18. ISBN 0-07-238332-1.
11. Kittel and Kroemer (1980). Thermal Physics. New York: W. H. Freeman. ISBN 0-7167-1088-9.
12. CODATA recommended values of the fundamental physical constants.
13. Time Invariance.
14.
These examples are solely for illustration, as it is not the energy available for work which limits the performance of the athlete but the power output of the sprinter and the force of the weightlifter. A worker stacking shelves in a supermarket does more work (in the physical sense) than either of the athletes, but does it more slowly.
15. Crystals are another example of highly ordered systems that exist in nature: in this case too, the order is associated with the transfer of a large amount of heat (known as the lattice energy) to the surroundings.
16. Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model", in Shiyomi, M. et al. (Eds.), Global Environmental Change in the Ocean and on Land. pp. 343–58.

Further reading

• Alekseev, G. N. (1986). Energy and Entropy. Moscow: Mir Publishers.
• Walding, Richard; Rapkins, Greg; Rossiter, Glenn (1999). New Century Senior Physics. Melbourne, Australia: Oxford University Press. ISBN 0-19-551084-4.
The Measurement Problem

The "Problem of Measurement" in quantum mechanics has been defined in various ways, originally by scientists, and more recently by philosophers of science who question the "foundations of quantum mechanics."
Measurements are described with diverse concepts in quantum physics, such as:

• wave functions (probability amplitudes) evolving unitarily and deterministically (preserving information) according to the linear Schrödinger equation (von Neumann's Process 2),
• superposition of states, i.e., linear combinations of wave functions with complex coefficients that carry phase information and produce interference effects (Dirac's principle of superposition),
• quantum jumps between states accompanied by the "collapse" of the wave function that can destroy or create information (Dirac's projection postulate, von Neumann's Process 1),
• probabilities of collapses and jumps given by the square of the absolute value of the wave function for a given state,
• values for possible measurements given by the eigenvalues associated with the eigenstates of the combined measuring apparatus and measured system (Dirac's axiom of measurement),
• the Heisenberg indeterminacy principle.

The original problem, said to be a consequence of Niels Bohr's "Copenhagen interpretation" of quantum mechanics, was to explain how our measuring instruments, which are usually macroscopic objects and treatable with classical physics, can give us information about the microscopic world of atoms and subatomic particles like electrons and photons. Bohr's idea of "complementarity" insisted that a specific experiment could reveal only partial information - for example, a particle's position. "Exhaustive" information requires complementary experiments, for example to also determine the particle's momentum (within the limits of Werner Heisenberg's indeterminacy principle).

Some define the problem of measurement simply as the logical contradiction between two laws describing the motion of quantum systems: the unitary, continuous, and deterministic time evolution of the Schrödinger equation versus the non-unitary, discontinuous, and indeterministic collapse of the wave function.
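The fourth item in the list above, the Born rule, is simple to state computationally: outcome probabilities are the squared magnitudes of the complex expansion coefficients, and the phases (which drive interference) drop out of the probabilities. A minimal sketch:

```python
import numpy as np

# Born rule: for a normalized state psi = sum_n c_n |n>, the probability
# of outcome n is |c_n|^2.  Phases affect interference, not these values.
c = np.array([1/np.sqrt(2), 1j/np.sqrt(2)])   # equal-weight complex amplitudes

probs = np.abs(c)**2
print(probs)          # both outcomes have probability 1/2
print(probs.sum())    # normalization: probabilities sum to 1
```

Note that replacing the second amplitude 1j/√2 by any other phase e^{iθ}/√2 leaves these probabilities unchanged; the phase only matters when amplitudes are added before squaring.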
John von Neumann saw a problem with two distinct (indeed, opposing) processes. The mathematical formalism of quantum mechanics provides no way to predict when the wave function stops evolving in a unitary fashion and collapses. Experimentally and practically, however, we can say that this occurs when the microscopic system interacts with a measuring apparatus.

Others, notably the decoherence theorists (e.g., H. Dieter Zeh and Wojciech Zurek, who use various non-standard interpretations of quantum mechanics that deny the projection postulate - quantum jumps - and even the existence of particles), define the measurement problem as the failure to observe macroscopic superpositions such as Schrödinger's Cat. Unitary time evolution of the wave function according to the Schrödinger wave equation should produce such macroscopic superpositions, they claim.

Information physics treats a measuring apparatus quantum mechanically by describing parts of it as in a metastable state, like the excited states of an atom, the critically poised electrical potential energy in the discharge tube of a Geiger counter, or the supersaturated water and alcohol molecules of a Wilson cloud chamber. (The pi-bond orbital rotation from cis- to trans- in the light-sensitive retinal molecule is an example of a critically poised apparatus.) Excited (metastable) states are poised to collapse when an electron (or photon) collides with the sensitive detector elements in the apparatus. This collapse is macroscopic and irreversible, generally a cascade of quantum events that release large amounts of energy, increasing the (Boltzmann) entropy. But in a "measurement" there is also a local decrease in the entropy (negative entropy or information).
The global entropy increase is normally orders of magnitude more than the small local decrease in entropy (an increase in stable information or Shannon entropy) that constitutes the "measured" experimental data available to human observers. The creation of new information in a measurement thus follows the same two core processes of all information creation - quantum cooperative phenomena and thermodynamics. These two are involved in the formation of microscopic objects like atoms and molecules, as well as macroscopic objects like galaxies, stars, and planets.

According to the correspondence principle, all the laws of quantum physics asymptotically approach the laws of classical physics in the limit of large quantum numbers and large numbers of particles. Quantum mechanics can be used to describe large macroscopic systems. Does this mean that the positions and momenta of macroscopic objects are uncertain? Yes, it does: although the uncertainty becomes vanishingly small for large objects, it is not zero. Niels Bohr used the uncertainty of macroscopic objects to defeat Albert Einstein's several objections to quantum mechanics at the 1927 Solvay conference.

But Bohr and Heisenberg also insisted that a measuring apparatus must be regarded as a purely classical system. They can't have it both ways. Can the macroscopic apparatus also be treated by quantum physics or not? Can it be described by the Schrödinger equation? Can it be regarded as in a superposition of states?

The most famous examples of macroscopic superposition are perhaps Schrödinger's Cat, which is claimed to be in a superposition of live and dead cats, and the Einstein-Podolsky-Rosen experiment, in which entangled electrons or photons are in a superposition of two-particle states that collapse over macroscopic distances to exhibit properties "nonlocally" at speeds faster than the speed of light.
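How "vanishingly small" the quantum uncertainty of a macroscopic object is can be made concrete with the position-momentum relation Δx·Δp ≥ ħ/2. The 1 kg mass and micron-scale position spread below are illustrative choices, not values from the text:

```python
# Minimum velocity uncertainty of a macroscopic object from
# Delta_x * (m * Delta_v) >= hbar / 2.
hbar = 1.0545718e-34    # reduced Planck constant, J s
m = 1.0                 # kg, an everyday object
delta_x = 1e-6          # m, position known to within a micron

delta_v = hbar / (2 * m * delta_x)
print(f"Delta_v >= {delta_v:.1e} m/s")
```

The bound, around 5×10^-29 m/s, is utterly negligible on human scales yet strictly nonzero, which is exactly the point made above: macroscopic objects are only "adequately determined", not perfectly classical.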
These treatments of macroscopic systems with quantum mechanics were intended to expose inconsistencies and incompleteness in quantum theory. The critics hoped to restore determinism and "local reality" to physics. They resulted in some strange and extremely popular "mysteries" about "quantum reality," such as the "many-worlds" interpretation, "hidden variables," and signaling faster than the speed of light.

We develop a quantum-mechanical treatment of macroscopic systems, especially a measuring apparatus, to show how it can create new information. If the apparatus were describable only by classical deterministic laws, no new information could come into existence. The apparatus need only be adequately determined, that is to say, "classical" to a sufficient degree of accuracy.

How Classical Is A Macroscopic Measuring Apparatus?

As Landau and Lifshitz described it in their 1958 textbook Quantum Mechanics:

The possibility of a quantitative description of the motion of an electron requires the presence also of physical objects which obey classical mechanics to a sufficient degree of accuracy. If an electron interacts with such a "classical object", the state of the latter is, generally speaking, altered. The nature and magnitude of this change depend on the state of the electron, and therefore may serve to characterise it quantitatively... We have defined "apparatus" as a physical object which is governed, with sufficient accuracy, by classical mechanics. Such, for instance, is a body of large enough mass. However, it must not be supposed that apparatus is necessarily macroscopic. Under certain conditions, the part of apparatus may also be taken by an object which is microscopic, since the idea of "with sufficient accuracy" depends on the actual problem proposed.
Thus quantum mechanics occupies a very unusual place among physical theories: it contains classical mechanics as a limiting case [the correspondence principle], yet at the same time it requires this limiting case for its own formulation.

The measurement problem was analyzed mathematically in 1932 by John von Neumann. Following the work of Niels Bohr and Werner Heisenberg, von Neumann divided the world into a microscopic (atomic-level) quantum system and a macroscopic (classical) measuring apparatus, and distinguished two processes:

1. A non-causal process 1, in which the measured electron winds up randomly in one of the possible physical states (eigenstates) of the measuring apparatus plus electron. This process came to be called the collapse of the wave function or the reduction of the wave packet. The probability of finding the electron in a specific eigenstate is given by the square of the coefficients cn of the expansion of the original system state (wave function ψ) in an infinite set of wave functions φ that represent the eigenfunctions of the measuring apparatus plus electron.

Information physics says that the particle "shows up" only when a new stable information structure is created, information that subsequently can be observed.

Process 1b. The information created in von Neumann's Process 1 will only be stable if an amount of positive entropy greater than the negative entropy in the new information structure is transported away, in order to satisfy the second law of thermodynamics.

2. A causal process 2, in which the electron wave function ψ evolves deterministically according to Schrödinger's equation of motion for the wavelike aspect:

(ih/2π) ∂ψ/∂t = Hψ

This evolution describes the motion of the probability amplitude wave ψ between measurements. The wave function exhibits interference effects. But interference is destroyed if the particle has a definite position or momentum. The particle path cannot be observed.
Von Neumann claimed there is another major difference between these two processes. Process 1 is thermodynamically irreversible. Process 2 is reversible. This confirms the fundamental connection between quantum mechanics and thermodynamics that information physics finds at the heart of all information creation. Information physics can show quantum mechanically how process 1 creates information. Indeed, something like process 1 is always involved when any information is created, whether or not the new information is ever "observed" by a human being. Process 2 is deterministic and information preserving. Just as the new information recorded in the measurement apparatus cannot subsist unless a compensating amount of entropy is transferred away from the new information, something similar to Process 1b must happen in the mind of an observer if the new information is to constitute an "observation." It is only in cases where information persists long enough for a human being to observe it that we can properly describe the observation as a "measurement" and the human being as an "observer." So, following von Neumann's "process" terminology, we can complete his theory of the measuring process by adding an anthropomorphic Process 3 - a conscious observer recording new information in a mind. This is only possible if there are two local reductions in the entropy (the first in the measurement apparatus, the second in the mind), both balanced by even greater increases in positive entropy that must be transported away from the apparatus and the mind, so the overall increase in entropy can satisfy the second law of thermodynamics. For some physicists, it is the wave-function collapse that gives rise to the problem of measurement because its randomness prevents us from including it in the mathematical formalism of the deterministic Schrödinger equation in process 2. The randomness that is irreducibly involved in all information creation lies at the heart of human freedom. 
It is the "free" in "free will." The "will" part is as adequately and statistically determined as any macroscopic object.

Designing a Quantum Measurement Apparatus

The first step is to build an apparatus that allows different components of the wave function to evolve along distinguishable paths into different regions of space, where the different regions correspond to (are correlated with) the physical properties we want to measure. We then can locate a detector in these different regions of space to catch particles travelling a particular path. We do not say that the system is on a particular path in this first step. That would cause the probability amplitude wave function to collapse. This first step is reversible, at least in principle. It is deterministic and an example of von Neumann process 2.

Let's consider the separation of a beam of photons into horizontally and vertically polarized photons by a birefringent crystal. We need a beam of photons (and the ability to reduce the intensity to a single photon at a time). Vertically polarized photons pass straight through the crystal. They are called the ordinary ray, shown in red. Horizontally polarized photons, however, are deflected at an angle up through the crystal, then exit the crystal back at the original angle. They are called the extraordinary ray, shown in blue. Note that this first part of our apparatus accomplishes the separation of our two states into distinct physical regions. We have not actually measured yet, so a single photon passing through our measurement apparatus is described as in a linear combination (a superposition) of horizontal and vertical polarization states,

| ψ > = (1/√2) | h > + (1/√2) | v >          (1)

See the Dirac Three Polarizers experiment for more details on polarized photons.
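State (1) can be written as a two-component vector in the {|h>, |v>} basis; the detection probability in either ray is then |<h|ψ>|² = |<v|ψ>|² = 1/2. A sketch:

```python
import numpy as np

# |psi> = (1/sqrt(2))|h> + (1/sqrt(2))|v> in the {|h>, |v>} basis.
h = np.array([1.0, 0.0])   # horizontal polarization (extraordinary ray)
v = np.array([0.0, 1.0])   # vertical polarization (ordinary ray)
psi = (h + v) / np.sqrt(2)

p_h = abs(np.vdot(h, psi))**2   # probability of a horizontal detection
p_v = abs(np.vdot(v, psi))**2
print(p_h, p_v)                 # both equal to 1/2
```

No detection has happened yet at this stage: the vector psi still carries both components, and in particular their relative phase, which the next section shows can be recombined.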
An Information-Preserving, Reversible Example of Process 2

To show that process 2 is reversible, we can add a second birefringent crystal, upside down from the first but in line with the superposition of physically separated states. Since we have not made a measurement and do not know the path of the photon, the phase information in the (generally complex) coefficients of equation (1) has been preserved, so when the two rays combine in the second crystal, they emerge in a state identical to that before entering the first crystal (black arrow). Note that the two crystals can be treated classically, according to standard optics.

An Information-Creating, Irreversible Example of Process 1

But now suppose we insert something between the two crystals that is capable of a measurement to produce observable information. We need detectors, for example two charge-coupled devices (CCDs), that locate the photon in one of the two rays. We can write a quantum description of the CCDs, one measuring horizontal photons, | Ah > (shown as the blue spot), and the other measuring vertical photons, | Av > (shown as the red spot). We treat the detection systems quantum mechanically, and say that each detector has two eigenstates, e.g., | Ah0 >, corresponding to its initial state and correlated with no photons, and the final state | Ah1 >, in which it has detected a horizontal photon. When we actually detect the photon, say in a horizontal polarization state with statistical probability 1/2, two "collapses" or "jumps" occur. The first is the jump of the probability amplitude wave function | ψ > of the photon in equation (1) into the horizontally polarized state | h >.
The second is the quantum jump of the horizontal detector from | Ah0 > to | Ah1 >. These two happen together, as the quantum states have become correlated with the states of the sensitive detectors in the classical apparatus. One can say that the photon has become entangled with the sensitive horizontal detector area, so that the wave function describing their interaction is a superposition of photon and apparatus states that cannot be observed independently.

| ψ > | Ah0 >      =>      | ψ, Ah0 >      =>      | h, Ah1 >

These jumps destroy (unobservable) phase information, raise the (Boltzmann) entropy of the apparatus, and increase visible information (Shannon entropy) in the form of the visible spot. The entropy increase takes the form of a large chemical energy release when the photographic spot is developed (or a cascade of electrons in a CCD). Note that the birefringent crystal and the parts of the macroscopic apparatus other than the sensitive detectors are treated classically. We see that our example agrees with von Neumann. A measurement which finds the photon in a specific state n is thermodynamically irreversible, whereas the deterministic evolution described by Schrödinger's equation is reversible. We thus establish a clear connection between a measurement, which increases the information by some number of bits (Shannon entropy) with a necessary compensating increase in the (Boltzmann) entropy of the macroscopic apparatus, and the cosmic creation process, where new particles form, reducing the entropy locally, and the energy of formation is radiated or conducted away as Boltzmann entropy. Note that the Boltzmann entropy can only be radiated away (ultimately into the night sky to the cosmic microwave background) because the expansion of the universe provides a sink for the entropy, as pointed out by David Layzer.
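The two behaviors just described, reversible recombination (process 2) and the entangling pre-measurement that destroys phase information (process 1), can be illustrated with a toy two-state model. Everything here is an illustrative assumption: the detector is reduced to a single two-state pointer, and a CNOT-style unitary stands in for the photon-detector coupling:

```python
import numpy as np

sq2 = 1 / np.sqrt(2)
psi = np.array([sq2, sq2])                # (|h> + |v>)/sqrt(2), as in equation (1)

# Process 2 is unitary: the second crystal undoes the first, U-dagger U |psi> = |psi>.
U = np.array([[sq2, sq2], [sq2, -sq2]])   # a unitary that splits h/v components
restored = U.conj().T @ (U @ psi)
assert np.allclose(restored, psi)          # phase information preserved

# Pre-measurement: entangle the photon with a detector |A0> via a CNOT-like
# unitary:  |h>|A0> -> |h>|A0>,  |v>|A0> -> |v>|A1>.
A0 = np.array([1.0, 0.0])
state = np.kron(psi, A0)                   # photon index first, detector second
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
entangled = CNOT @ state

# Reduced density matrix of the photon: trace out the detector.
rho = np.outer(entangled, entangled.conj())
rho_photon = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_photon)   # diagonal 1/2, 1/2 -- the off-diagonal (phase) terms are gone
assert np.allclose(rho_photon, np.diag([0.5, 0.5]))
```

After the entangling interaction, no measurement on the photon alone can recover the phase relation between | h > and | v >, which is the sense in which the jumps "destroy (unobservable) phase information."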
Note also that this cosmic information-creating process requires no conscious observer. The universe is its own observer.

The Boundary between the Classical and Quantum Worlds

Some scientists (John von Neumann and Eugene Wigner, for example) have argued that in the absence of a conscious observer, or some "cut" between the microscopic and macroscopic world, the evolution of the quantum system and the macroscopic measuring apparatus would be described deterministically by Schrödinger's equation of motion for the combined wave function | ψ, A >, with the Hamiltonian energy operator H,

(ih/2π) ∂ | ψ, A > /∂t = H | ψ, A >

The Role of a Conscious Observer

In 1941, Carl von Weizsäcker described the measurement problem as an interaction between a Subject and an Object, a view shared by the philosopher of science Ernst Cassirer. Fritz London and Edmond Bauer made the strongest case for the critical role of a conscious observer in 1939:

So far we have only coupled one apparatus with one object. But a coupling, even with a measuring device, is not yet a measurement. A measurement is achieved only when the position of the pointer has been observed. It is precisely this increase of knowledge, acquired by observation, that gives the observer the right to choose among the different components of the mixture predicted by theory, to reject those which are not observed, and to attribute thenceforth to the object a new wave function, that of the pure case which he has found. We note the essential role played by the consciousness of the observer in this transition from the mixture to the pure case. Without his effective intervention, one would never obtain a new function.

In 1961, Eugene Wigner made quantum physics even more subjective, claiming that a quantum measurement requires a conscious observer, without which nothing ever happens in the universe. Other physicists were more circumspect.
Niels Bohr contrasted Paul Dirac's view with that of Heisenberg. Landau and Lifshitz said clearly that quantum physics was independent of any observer:

In this connection the "classical object" is usually called apparatus, and its interaction with the electron is spoken of as measurement. However, it must be most decidedly emphasised that we are here not discussing a process of measurement in which the physicist-observer takes part. By measurement, in quantum mechanics, we understand any process of interaction between classical and quantum objects, occurring apart from and independently of any observer.

David Bohm agreed that what is observed is distinct from the observer:

If it were necessary to give all parts of the world a completely quantum-mechanical description, a person trying to apply quantum theory to the process of observation would be faced with an insoluble paradox. This would be so because he would then have to regard himself as something connected inseparably with the rest of the world. On the other hand, the very idea of making an observation implies that what is observed is totally distinct from the person observing it.

And John Bell said:

It would seem that the [quantum] theory is exclusively concerned about 'results of measurement', and has nothing to say about anything else. What exactly qualifies some physical systems to play the role of 'measurer'? Was the wavefunction of the world waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer, for some better qualified system...with a Ph.D.? If the theory is to apply to anything but highly idealised laboratory operations, are we not obliged to admit that more or less 'measurement-like' processes are going on more or less all the time, more or less everywhere? Do we not have jumping then all the time?
Three Essential Steps in a "Measurement" and "Observation"

We can distinguish three required elements in a measurement that can clarify the ongoing debate about the role of a conscious observer.

1. In standard quantum theory, the first required element is the collapse of the wave function. This is the Dirac projection postulate and von Neumann Process 1. However, the collapse might not leave a determinate record. If nothing in the environment is macroscopically affected so as to leave an indelible record of the collapse, we can say that no information about the collapse is created. The overwhelming fraction of collapses are of this kind. Moreover, information might actually be destroyed. For example, collisions between atoms or molecules in a gas erase past information about their paths.

2. If the collapse occurs when the quantum system is entangled with a macroscopic measurement apparatus, a well-designed apparatus will also "collapse" into a correlated "pointer" state. As we showed above for photons, the detector in the upper half of a Stern-Gerlach apparatus will fire, indicating detection of an electron with spin up. As with photons, if the probability amplitude | ↑ > in the upper half does not collapse as the electron is detected, it can still be recombined with the probability amplitude | ↓ > in the lower half to reconstruct the unseparated beam. When the apparatus detects a particle, the second required element is that it produce a determinate record of the event. But this is impossible without an irreversible thermodynamic process that involves: a) the creation of at least one bit of new information (negative entropy) and b) the transfer away from the measuring apparatus of an amount of positive entropy generally much, much greater than the information created. Notice that no conscious observer need be involved.
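The entropy bookkeeping in element 2 can be made quantitative. Recording one bit requires, by Landauer's bound, a minimum entropy transfer of kB ln 2 away from the apparatus; real detectors dissipate enormously more than this minimum. A sketch of the numbers (SI values; the room-temperature figure is an illustrative assumption):

```python
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                   # assumed room temperature, K

# Shannon information of the 50/50 measurement outcome in the photon example:
p = [0.5, 0.5]
bits = -sum(q * math.log2(q) for q in p)
print(bits)                 # 1.0 bit

# Landauer bound: minimum entropy that must be carried away per bit recorded,
# and the corresponding minimum heat at temperature T.
min_entropy = k_B * math.log(2)      # about 9.57e-24 J/K
min_heat = T * min_entropy           # about 2.87e-21 J at 300 K
print(min_entropy, min_heat)

# A developed photographic grain or a CCD electron cascade releases
# vastly more energy than this minimum, as the text notes.
```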
We can generalize this second step to an event in the physical world that was not designed as a measurement apparatus by a physical scientist, but nevertheless leaves an indelible record of the collapse of a quantum state. This might be a highly specific single event, or the macroscopic consequence of billions of atomic- and molecular-level events.

3. Finally, the third required element is an indelible determinate record that can be looked at by an observer (presumably conscious, although the consciousness itself has nothing to do with the measurement). When we have all three of these essential elements, we have what we normally mean by a measurement and an observation, both involving a human being. When we have only the first two, we can say metaphorically that the "universe is measuring itself," creating an information record of quantum collapse events. For example, every hydrogen atom formed in the early recombination era is a record of the time period when macroscopic bodies could begin to form. A certain pattern of photons records the explosion of a supernova billions of light years away. When detected by the CCD in a telescope, it becomes a potential observation. Craters on the back side of the moon recorded collisions with solar system debris that could become observations only when the first NASA mission circled the moon.

John Bell on Measurement

Does [the 'collapse of the wavefunction'] happen sometimes outside laboratories? Or only in some authorized 'measuring apparatus'? And whereabouts in that apparatus? In the Einstein-Podolsky-Rosen-Bohm experiment, does 'measurement' occur already in the polarizers, or only in the counters? Or does it occur still later, in the computer collecting the data, or only in the eye, or even perhaps only in the brain, or at the brain-mind interface of the experimenter?

David Bohm on Measurement

In his 1951 textbook Quantum Theory, Bohm discusses measurement in chapter 22, section 12.

12.
Irreversibility of Process of Measurement and Its Fundamental Role in Quantum Theory. From the previous work it follows that a measurement process is irreversible in the sense that, after it has occurred, re-establishment of definite phase relations between the eigenfunctions of the measured variable is overwhelmingly unlikely. This irreversibility greatly resembles that which appears in thermodynamic processes, where a decrease of entropy is also an overwhelmingly unlikely possibility.*

* There is, in fact, a close connection between entropy and the process of measurement. See L. Szilard, Zeitschrift für Physik, 53, 840, 1929. The necessity for such a connection can be seen by considering a box divided by a partition into two equal parts, containing an equal number of gas molecules in each part. Suppose that in this box is placed a device that can provide a rough measurement of the position of each atom as it approaches the partition. This device is coupled automatically to a gate in the partition in such a way that the gate will be opened if a molecule approaches the gate from the right, but closed if it approaches from the left. Thus, in time, all the molecules can be made to accumulate on the left-hand side. In this way, the entropy of the gas decreases. If there were no compensating increase of entropy of the mechanism, then the second law of thermodynamics would be violated. We have seen, however, that in practice, every process which can provide a definite measurement disclosing in which side of the box the molecule actually is, must also be attended by irreversible changes in the measuring apparatus. In fact, it can be shown that these changes must be at least large enough to compensate for the decrease in entropy of the gas. Thus, the second law of thermodynamics cannot actually be violated in this way. This means, of course, that Maxwell's famous "sorting demon" cannot operate, if he is made of matter obeying all of the laws of physics. (See L.
Brillouin, American Scientist, 38, 594, 1950.) Because the irreversible behavior of the measuring apparatus is essential for the destruction of definite phase relations and because, in turn, the destruction of definite phase relations is essential for the consistency of the quantum theory as a whole, it follows that thermodynamic irreversibility enters into the quantum theory in an integral way. This is in remarkable contrast to classical theory, where the concept of thermodynamic irreversibility plays no fundamental role in the basic sciences of mechanics and electrodynamics. Thus, whereas in classical theory fundamental variables (such as position or momentum of an elementary particle) are regarded as having definite values independently of whether the measuring apparatus is reversible or not, in quantum theory we find that such a quantity can take on a well defined value only when the system is coupled indivisibly to a classically describable system undergoing irreversible processes. The very definition of the state of any one system at the microscopic level therefore requires that matter in the large shall undergo irreversible processes. There is a strong analogy here to the behavior of biological systems, where, likewise, the very existence of the fundamental elements (for example, the cells) depends on the maintenance of irreversible processes involving the oxidation of food throughout an organism as a whole. (A stoppage of these processes would result in the dissolution of the cell.)

John von Neumann on Measurement

In his 1932 Mathematical Foundations of Quantum Mechanics (in German, English edition 1955) John von Neumann explained that two fundamentally different processes are going on in quantum mechanics.

Process 1 is the discontinuous, indeterministic collapse of the wave function | ψ > into one of the eigenstates | φn > of the measured observable, with probability |cn|², where

cn = < φn | ψ >

Process 2 is the continuous, deterministic evolution of the wave function according to the Schrödinger equation,

(ih/2π) ∂ψ/∂t = Hψ

Information physics establishes that process 1 may create information. It is always involved when information is created. Process 2 is deterministic and information preserving.
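The two processes can be contrasted numerically. In this illustrative sketch (the three-state amplitudes and the Hamiltonian are arbitrary choices, not taken from the text), process 1 is simulated by random sampling with the Born probabilities |cn|², while process 2 merely rotates the phases of the cn and preserves the norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# c_n = <phi_n|psi> for three eigenstates phi_n; outcome probabilities |c_n|^2.
c = np.array([0.5, 0.5j, np.sqrt(0.5)])
probs = np.abs(c) ** 2
assert np.isclose(probs.sum(), 1.0)        # |psi> is normalized

# Process 1: each trial collapses |psi> into one |phi_n>, chosen by chance.
samples = rng.choice(len(c), size=100_000, p=probs)
freqs = np.bincount(samples, minlength=len(c)) / len(samples)
print(freqs)        # approaches [0.25, 0.25, 0.5] as the number of trials grows

# Process 2: deterministic evolution only rotates the phases of the c_n
# (here for a diagonal Hamiltonian with hbar = 1), preserving the norm.
energies = np.array([1.0, 2.0, 3.0])
c_t = np.exp(-1j * energies * 0.7) * c     # c_n(t) = exp(-i E_n t) c_n
assert np.isclose(np.sum(np.abs(c_t) ** 2), 1.0)
```

The sampled frequencies converge to |cn|², while process 2 never changes any |cn|² at all; only process 1 produces a definite, recordable outcome.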
The first of these processes has come to be called the collapse of the wave function. It gave rise to the so-called problem of measurement because its randomness prevents it from being a part of the deterministic mathematics of process 2. Information physics has solved the problem of measurement by identifying the moment and place of the collapse of the wave function with the creation of an observable information structure. The presence of a conscious observer is not necessary. It is enough that the new information created is observable, should a human observer try to look at it in the future. Information physics is thus subtly involved in the question of what humans can know (epistemology).

The Schnitt

von Neumann described the collapse of the wave function as requiring a "cut" (Schnitt in German) between the microscopic quantum system and the observer. He said it did not matter where this cut was placed, because the mathematics would produce the same experimental results. There has been a lot of controversy and confusion about this cut. Some have placed it outside a room which includes the measuring apparatus and an observer A, and just before observer B makes a measurement of the physical state of the room, which is imagined to evolve deterministically according to process 2 and the Schrödinger equation. von Neumann contributed a lot to this confusion in his discussion of subjective perceptions and "psycho-physical parallelism," which was encouraged by Niels Bohr. Bohr interpreted his "complementarity principle" as explaining the difference between subjectivity and objectivity (as well as several other dualisms).

Quantum Mechanics, by Albert Messiah, on Measurement

Messiah says a detailed study of the mechanism of measurement will not be made in his book, but he does say this. The dynamical state of such a system is represented at a given instant of time by its wave function at that instant.
The causal relationship between the wave function ψ(t0) at an initial time t0, and the wave function ψ(t) at any later time, is expressed through the Schrödinger equation. However, as soon as it is subjected to observation, the system experiences some reaction from the observing instrument. Moreover, the above reaction is to some extent unpredictable and uncontrollable since there is no sharp separation between the observed system and the observing instrument. They must be treated as an indivisible quantum system whose wave function Ψ(t) depends upon the coordinates of the measuring device as well as upon those of the observed system. During the process of observation, the measured system can no longer be considered separately and the very notion of a dynamical state defined by the simpler wave function ψ(t) loses its meaning. Thus the intervention of the observing instrument destroys all causal connection between the state of the system before and after the measurement; this explains why one cannot in general predict with certainty in what state the system will be found after the measurement; one can only make predictions of a statistical nature.1

1) The statistical predictions concerning the results of measurement are derived very naturally from the study of the mechanism of the measuring operation itself, a study in which the measuring instrument is treated as a quantized object and the complex (system + measuring instrument) evolves in causal fashion in accordance with the Schrödinger equation. A very concise and simple presentation of the measuring process in Quantum Mechanics is given in F. London and E. Bauer, La Théorie de l'Observation en Mécanique Quantique (Paris, Hermann, 1939). More detailed discussions of this problem may be found in J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton, Princeton University Press, 1955), and in D. Bohm, Quantum Theory (New York, Prentice-Hall, 1951).

Quantum Mechanics, vol. I, p. 157

Decoherence Theorists on Measurement

In general, decoherence theorists see the problem of measurement as the question of why we do not see macroscopic superpositions of states. Why do measurements always show a system and its measuring apparatus to be in a particular state - a "pointer state," and not in a superposition? Our answer is that we never see microscopic systems in a superposition of states either. Dirac's principle of superposition says only that the probability amplitudes for finding a system in different states have non-zero values. Measurements always reveal a system to be in one state. Which state is found is a matter of chance. [Decoherence theorists do not like this indeterminism.] The statistics from large numbers of measurements of identically prepared systems verify the predicted probabilities for the different states. The accuracy of these quantum mechanical predictions (1 part in 10^15) shows quantum mechanics to be the most accurate theory ever known. Guido Bacciagaluppi summarized the view of decoherence theorists in an article for the Stanford Encyclopedia of Philosophy. He defines the measurement problem as the lack of macroscopic superpositions:

The measurement problem, in a nutshell, runs as follows. Quantum mechanical systems are described by wave-like mathematical objects (vectors) of which sums (superpositions) can be formed (see the entry on quantum mechanics). Time evolution (the Schrödinger equation) preserves such sums.
Thus, if a quantum mechanical system (say, an electron) is described by a superposition of two given states, say, spin in x-direction equal +1/2 and spin in x-direction equal -1/2, and we let it interact with a measuring apparatus that couples to these states, the final quantum state of the composite will be a sum of two components [that is to say, a macroscopic superposition, which is of course never seen!], one in which the apparatus has coupled to (has registered) x-spin = +1/2, and one in which the apparatus has coupled to (has registered) x-spin = -1/2... [D]ecoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory (whether this be one that attempts to solve the measurement problem, such as Bohm, Everett or GRW; or one that attempts to dissolve it, such as various versions of the Copenhagen interpretation). Some of the main workers in the field such as Zeh (2000) and (perhaps) Zurek (1998) suggest that decoherence is most naturally understood in terms of Everett-like interpretations. Maximilian Schlosshauer situates the problem of measurement in the context of the so-called "quantum-to-classical transition," namely the question of exactly how deterministic classical behavior emerges from the indeterministic microscopic quantum world. In this section, we shall describe the (in)famous measurement problem of quantum mechanics that we have already referred to in several places in the text. The choice of the term "measurement problem" has purely historical reasons: Certain foundational issues associated with the measurement problem were first illustrated in the context of a quantum-mechanical description of a measuring apparatus interacting with a system. However, one may regard the term "measurement problem" as implying too narrow a scope, chiefly for the following two reasons. 
First, as we shall see below, the measurement problem is composed of three distinct issues, so it would make sense to rather speak of measurement problems. Second, quantum measurement and the arising foundational problems are but a special case of the more general problem of the quantum-to-classical transition, i.e., the question of how effectively classical systems and properties around us emerge from the underlying quantum domain. On the one hand, then, the problem of the quantum-to-classical transition has a much broader scope than the issue of quantum measurement in the literal sense. On the other hand, however, many interactions between physical systems can be viewed as measurement-like interactions. For example, light scattering off an object carries away information about the position of the object, and it is in this sense that we thus may view these incident photons as a "measuring device." Such ubiquitous measurement-like interactions lie at the heart of the explanation of the quantum-to-classical transition by means of decoherence. Measurement, in the more general sense, thus retains its paramount importance also in the broader context of the quantum-to-classical transition, which in turn motivates us not to abandon the term "measurement problem" altogether in favor of the more general "problem of the quantum-to-classical transition." As indicated above, the measurement problem (and the problem of the quantum-to-classical transition) is composed of three parts, all of which we shall describe in more detail in the following: 1. The problem of the preferred basis (Sect. 2.5.2). What singles out the preferred physical quantities in nature—e.g., why are physical systems usually observed to be in definite positions rather than in superpositions of positions? 2. The problem of the nonobservability of interference (Sect. 2.5.3). Why is it so difficult to observe quantum interference effects, especially on macroscopic scales? 3. The problem of outcomes (Sect. 
2.5.4). Why do measurements have outcomes at all, and what selects a particular outcome among the different possibilities described by the quantum probability distribution? [The answer (since Einstein, 1916) is chance.] Familiarity with these problems will turn out to be important for a proper understanding of the scope, achievements, and implications of decoherence. To anticipate, it is fair to conclude that decoherence has essentially resolved the first two problems. Since these problems and their resolution can be formulated in purely operational terms within the standard formalism of quantum mechanics, the role played by decoherence in addressing these two issues is rather undisputed. By contrast, the success of decoherence in tackling the third issue — the problem of outcomes — remains a matter of debate, in particular, because this issue is almost inextricably linked to the choice of a specific interpretation of quantum mechanics (which mostly boils down to a matter of personal preference). In fact, most of the overly optimistic or pessimistic statements about the ability of decoherence to solve "the" measurement problem can be traced back to a misunderstanding of the scope that a standard quantum effect such as decoherence may have in resolving the more interpretive problem of outcomes. The main concern of the decoherence theorists then is to recover a deterministic picture of quantum mechanics that would allow them to predict the outcome of a particular experiment. They have what William James called an "antipathy to chance." Max Tegmark and John Wheeler made this clear in a 2001 article in Scientific American: The discovery of decoherence, combined with the ever more elaborate experimental demonstrations of quantum weirdness, has caused a noticeable shift in the views of physicists. 
The main motivation for introducing the notion of wave-function collapse had been to explain why experiments produced specific outcomes and not strange superpositions of outcomes. Now much of that motivation is gone. Moreover, it is embarrassing that nobody has provided a testable deterministic equation specifying precisely when the mysterious collapse is supposed to occur.

H. Dieter Zeh, the founder of the "decoherence program," defines the measurement problem as a macroscopic entangled superposition of all possible measurement outcomes:

Because of the dynamical superposition principle, an initial superposition Σ cn | n > does not lead to definite pointer positions (with their empirically observed frequencies). If decoherence is neglected, one obtains their entangled superposition Σ cn | n > | Ψn >, that is, a state that is different from all potential measurement outcomes | n > | Ψn >. This dilemma represents the "quantum measurement problem" to be discussed in Sect. 2.3. Von Neumann's interaction is nonetheless regarded as the first step of a measurement (a "pre-measurement"). Yet, a collapse seems still to be required - now in the measurement device rather than in the microscopic system. Because of the entanglement between system and apparatus, it would then affect the total system.

Zeh continues:

2.3 The Measurement Problem

The superposition of different measurement outcomes, resulting according to a Schrödinger equation when applied to the total system (as discussed above), demonstrates that a "naive ensemble interpretation" of quantum mechanics in terms of incomplete knowledge is ruled out.
It's not clear why the standard ensemble interpretation is "ruled out," but Zeh offers a solution, which is to deny the projection postulate of standard quantum mechanics and use an unconventional interpretation that makes wave-function collapses only "apparent":

A way out of this dilemma within quantum mechanical concepts requires one of two possibilities: a modification of the Schrödinger equation that explicitly describes a collapse (also called "spontaneous localization" - see Chap. 8), or an Everett type interpretation, in which all measurement outcomes are assumed to exist in one formal superposition, but to be perceived separately as a consequence of their dynamical autonomy resulting from decoherence.

It was John Bell who called Everett's Many-Worlds Interpretation "extravagant." Everett rejected the intuitively simple collapse of multiple possibilities to one actual situation. Instead he proposed the instantaneous creation of multiple universes, each with all the matter and energy of the observable universe. Surely his Many Worlds was the most absurd anti-Occam proposal ever made!

Jeffrey Bub worked with David Bohm to develop Bohm's theory of "hidden variables." They hoped their theory might provide a deterministic basis for quantum theory and support Albert Einstein's view of a physical world independent of observations of the world. The standard theory of quantum mechanics is irreducibly statistical and indeterministic, a consequence of the collapse of the wave function when many possibilities for physical outcomes of an experiment reduce to a single actual outcome.

This is a book about the interpretation of quantum mechanics, and about the measurement problem. The conceptual entanglements of the measurement problem have their source in the orthodox interpretation of 'entangled' states that arise in quantum mechanical measurement processes...
All standard treatments of quantum mechanics take an observable as having a determinate value if the quantum state is an eigenstate of that observable. If the state is not an eigenstate of the observable, no determinate value is attributed to the observable. This principle - sometimes called the 'eigenvalue-eigenstate link' - is explicitly endorsed by Dirac (1958, pp. 46-7) and von Neumann (1955, p. 253), and clearly identified as the 'usual' view by Einstein, Podolsky, and Rosen (1935) in their classic argument for the incompleteness of quantum mechanics (see chapter 2). Since the dynamics of quantum mechanics described by Schrödinger's time-dependent equation of motion is linear, it follows immediately from this orthodox interpretation principle that, after an interaction between two quantum mechanical systems that can be interpreted as a measurement by one system on the other, the state of the composite system is not an eigenstate of the observable measured in the interaction, and not an eigenstate of the indicator observable functioning as a 'pointer.' So, on the orthodox interpretation, neither the measured observable nor the pointer reading have determinate values, after a suitable interaction that correlates pointer readings with values of the measured observable. This is the measurement problem of quantum mechanics.

Bacciagaluppi, Guido, The Role of Decoherence in Quantum Mechanics, Stanford Encyclopedia of Philosophy, first published Mon Nov 3, 2003; substantive revision Mon Apr 16, 2012
Jeffrey Bub, Interpreting the Quantum World. Cambridge University Press, 1997, p. 2.
Adriana Daneri, A. Loinger, and G. M. Prosperi, Nuclear Physics, 33 (1962) pp. 297-319. (W&Z, p. 657)
Erich Joos, H. Dieter Zeh, et al., Decoherence and the Appearance of a Classical World in Quantum Theory. Springer, 2010.
Günther Ludwig, Zeitschrift für Physik, 135 (1953) p. 483
Maximilian Schlosshauer, Decoherence and the Quantum-to-Classical Transition. Springer, 2007, pp. 49-50
Leo Szilard, Behavioral Science, 9 (1964) pp. 301-10.
(W&Z, p. 539)
Max Tegmark and John Wheeler, Scientific American, February (2001) pp. 68-75.
John von Neumann, The Mathematical Foundations of Quantum Mechanics (Princeton, NJ: Princeton U. Press, 1955), pp. 347-445. (W&Z, p. 549)
John Wheeler and Wojciech Zurek, Quantum Theory and Measurement (Princeton, NJ: Princeton U. Press, 1983) (= W&Z)
Eugene Wigner, "The Problem of Measurement," Symmetries and Reflections (Bloomington, IN: Indiana U. Press, 1967) pp. 153-70. (W&Z, p. 324)
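Bub's linearity argument can be made concrete in a few lines. Below is a minimal sketch in which the measurement interaction is modeled as a CNOT-like unitary on two qubits; the model and all names are illustrative assumptions, not taken from Bub or Zeh. It merely shows that a purely linear dynamics leaves the composite in an entangled state that is not an eigenstate of the pointer observable:

```python
import numpy as np

# Toy von Neumann measurement: a system qubit measured by an apparatus qubit.
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System starts in a superposition; apparatus starts in its 'ready' state.
system = (ket0 + ket1) / np.sqrt(2)
initial = np.kron(system, ket0)

# The measurement interaction is linear: a CNOT-like unitary that copies
# the system's basis state into the pointer.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
final = CNOT @ initial        # (|00> + |11>)/sqrt(2): an entangled state

# Pointer observable: sigma_z on the apparatus qubit.
Z = np.diag([1.0, -1.0])
pointer = np.kron(np.eye(2), Z)

# Eigenstate test: P|psi> must be parallel to |psi>, i.e. the
# Cauchy-Schwarz bound |<psi|P|psi>| = ||P psi|| ||psi|| must be saturated.
Pf = pointer @ final
is_eigenstate = np.allclose(abs(final @ Pf),
                            np.linalg.norm(Pf) * np.linalg.norm(final))
print(is_eigenstate)  # False: the pointer reading has no determinate value
```

On the eigenvalue-eigenstate link quoted above, this is exactly the measurement problem: the unitary never selects one pointer reading.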
Research Journal of Optics and Photonics. Research Article, Res J Opt Photonics Vol: 2 Issue: 2
Single Electron Localization and Tunneling in Weakly Coupled Quantum Dot Array
Filikhin I1, Karoui A1*, Mitic V2, Maswadeh W1 and Vlahovic B1
1 Centers For Research Excellence in Science and Technology, North Carolina Central University, 1801 Fayetteville St., Durham, NC 27707, USA
2 University of Nis, Faculty of Electronic Engineering, Medvedeva 14, Nis, Serbia
*Corresponding Author: Karoui A, Tel: (919) 530 6006, [email protected]
Received: January 03, 2018; Accepted: January 25, 2018; Published: March 25, 2018
Citation: Filikhin I, Karoui A, Mitic V, Maswadeh W, Vlahovic B (2018) Single Electron Localization and Tunneling in Weakly Coupled Quantum Dot Array. J Opt Photonics 2:2.
Electron localization and tunneling in quantum dot (QD) arrays are discussed in this paper. Various arrays of InAs/GaAs QDs are modelled as laterally distributed dots, using a single sub-band effective mass approach with an effective potential simulating the strain effect. Triple QDs (TQDs) configured in a triangle and in a linear chain are considered. Electron localization in double quantum dots (DQDs) and in TQDs is studied over the entire electron energy spectrum by varying the geometry parameters of these QD arrays. The spectral distribution of localized and delocalized states appears very sensitive to violations of the mirror symmetry of the system. The effect of adding a third dot to a DQD is also investigated. We show that the presence of a third dot increases the tunneling in the initial DQD.
Keywords: Triple quantum dots; Quantum wells; Single electron states; Localization; Tunneling
Due to their unique properties, nanosized semiconductor heterostructures, such as quantum wells (QWs), quantum dots (QDs) and quantum rings, are of great interest for the development of highly efficient nano-devices.
Also, the ability to grow dense and uniform QD assemblies offers new ways of making a new generation of quantum devices. However, there are fundamental issues associated with the practical use of QD assemblies. In practice, imperfections in real-world QD assemblies impede making efficient QD-based devices, and QD assemblies still perform poorly. For instance, QD-based third-generation solar cells [1-3] have efficiencies much lower than theoretical predictions [4-6], and even worse than QD-less photovoltaic devices. Also, QD-based optical and quantum computing devices are still impractical [7-11], despite the variety of theoretical studies and device designs. Nonetheless, the technological implementation of the well-studied InAs/GaAs QDs is under way in various fields. More interestingly, Ge, SiGe, and Si QD arrays have been introduced in large-scale optoelectronic integration [12], as well as in third-generation solar cells. Also, II-VI QDs have been utilized in ultra-bright, pure-color display screens [13]. The primary steps for studying large QD assemblies consist in a detailed investigation of double quantum dots (DQDs), triple quantum dots (TQDs), QD rings, and QD chains. DQDs and TQDs are important at the fundamental level, as well as for future technologies. In particular, the TQD system has received special attention in connection with electron confinement [14,15] and charge transport [16,17], for it is considered the building block of two-dimensional quantum arrays and an essential element for quantum computing. The special emphasis on tunneling in such systems stems from its importance for device performance [1]. Hence, we studied electron energy spectra and electron tunneling in InAs/GaAs DQD and TQD quantum systems. We compared the tunneling in QD systems with chaotic and regular geometries, taking into account recently published results, for instance [18], as evidence of tunneling rate regularization in chaotic systems.
Also, we confirmed a strong influence of the system boundaries on the tunneling rate. Known fabrication technologies always yield, at best, slightly asymmetric quantum dots and arrays. Dissimilar QDs in the form of truncated disks with atomically flat top surfaces have been reported in [19]. Such systems exhibit several convoluted quantum behaviors, for example the drastic change in charge carrier tunneling due to small violations of symmetry, asymmetry-induced fine-structure splitting [20], and the strong influence of the chaotic behavior of such systems on charge transport [21]. To be noted, chaos and tunneling in meso- and nano-scale material features are inherently related [22]. These phenomena are highly important for modern and future technologies, as mirrored by the relatively long scientific debate [23-28]; for a recent review see [29] and references therein. With regard to charge dynamics in QD arrays, it has recently been demonstrated [30] that the spectral distribution of electron localized/delocalized states and the tunneling in DQDs are highly sensitive to violations of the geometrical symmetry of the QD array and its constituents; see also [31]. Tunneling is a fundamental mechanism of charge transfer in electron-confining materials; it is best described by the classical model of wave transmission across the potential barrier of a double well [32,33]. One of the main features of this so-called dynamical tunneling [34] is, for example, the splitting of the energy of the degenerate pairs of levels induced by the coupling between nano-objects. In such quantum systems, the wave functions are linear combinations of the electron wave functions bound to the isolated nano-objects [35,36]. In the present work, we study single electron localization and tunneling in a triple quantum well (TQW). We consider this system as a double QW weakly coupled to a third one, and we study how the tunneling in the DQW is affected by the third QW.
We investigate the effect of coupling on induced states as well as the sensitivity of the tunneling to small symmetry violations.
Theoretical Model
In this work, we consider InAs quantum dots formed within GaAs layers. We focus on observed phenomena in InAs/GaAs QDs that can be approximated as two-dimensional quantum wells (QWs). The extension to three-dimensional models requires larger computing resources for numerical modelling; however, it does not add fundamental insights. A variety of QDs is modelled [37] based on the kp-perturbation single sub-band effective mass approximation. In these cases, the problem is mathematically formulated by the Schrödinger equation:

$\hat{H}\,\Phi(\mathbf{r}) = E\,\Phi(\mathbf{r})$,   (1)

where $\hat{H} = -\frac{\hbar^2}{2}\nabla\frac{1}{m^*(\mathbf{r})}\nabla + V_c(\mathbf{r}) + V_s(\mathbf{r})$ is the single-band kp-Hamiltonian operator, $m^*(\mathbf{r})$ is the electron effective mass, which depends on the position of the electron, and $V_c(\mathbf{r})$ is the band gap potential. The Ben-Daniel-Duke boundary conditions [38] are used at the interface of the QW material and the substrate. We use the confinement model proposed in [38] for the conduction band. Both potentials $V_c(\mathbf{r})$ and $V_s(\mathbf{r})$ act inside the QWs. While the potential $V_c$ is attractive, the potential $V_s$ is repulsive; the latter is added to simulate the strain effect in the InAs/GaAs heterostructure. We do not correct the electron mass, for instance by taking into account the non-parabolic approximation, because of its small effect on this quantum system [29]. Inside the QD the bulk conduction band offset is null, i.e., $V_c(\mathbf{r}) = 0$, while it is equal to $V_c$ outside the QD. The band gap potential for the conduction band is chosen to be $V_c = 0.594$ eV. The bulk effective masses of InAs and GaAs are $m_1^* = 0.024\,m_0$ and $m_2^* = 0.067\,m_0$, respectively, where $m_0$ is the free electron mass. The magnitude of the effective potential $V_s(\mathbf{r})$ that simulates the strain effect is adjusted so as to reproduce experimental data for the InAs/GaAs quantum dots.
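Equation (1) can be prototyped with a simple finite-difference discretization. The sketch below is a one-dimensional cut (the paper's model is 2D); the grid, box size, and discretization are illustrative assumptions. Only $V_c = 0.594$ eV, the GaAs mass $0.067\,m_0$, and the 13 nm dot scale come from the text; the InAs mass $0.024\,m_0$ is a standard literature value, since the text's value is garbled:

```python
import numpy as np

HB2 = 0.0381  # hbar^2 / (2 m0) in eV*nm^2

L, N = 60.0, 600                           # box size (nm), grid points
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

inside = np.abs(x) < 6.5                   # a 13 nm wide InAs strip
V = np.where(inside, 0.0, 0.594)           # conduction band offset (eV)
m = np.where(inside, 0.024, 0.067)         # effective mass (units of m0)

# BenDaniel-Duke ordering -(hbar^2/2) d/dx (1/m*) d/dx, discretized with
# midpoint masses so that the resulting matrix is symmetric (Hermitian).
m_half = 0.5 * (m[:-1] + m[1:])
off = -HB2 / (m_half * dx**2)              # nearest-neighbor couplings
diag = V.copy()
diag[:-1] -= off
diag[1:] -= off
H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)                  # single-electron spectrum (eV)
confined = E[E < 0.594]                    # levels below the barrier
print(f"ground state: {E[0]:.3f} eV, confined levels: {confined.size}")
```

The same pattern extends to a 2D mesh (and to pairs of wells) at the cost of a larger sparse matrix.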
The adjustment depends mainly on the materials composing the heterojunction, and on the QD topology to a lesser degree. For example, the magnitude of $V_s$ for the conduction band chosen in [39] is 0.21 eV. This value was obtained in [40] to reproduce the results of eighth-band k.p calculations for InAs/GaAs QDs. The value of 0.31 eV was obtained from experimental data reported by Lorke et al. [41] Numerical solution of Equation (1) gives the wave function and energy of a single electron in an isolated QW, in a pair of QWs (DQW), or in an array of QWs. Upon an appropriate choice of QW sizes, the system demonstrates an atom-like electron energy spectrum that encompasses hundreds of confined electron levels. To describe the tunneling of a single electron in a DQW, we define (for each energy level) a localization parameter σ as follows:

$\sigma_k = N_{k,1} - N_{k,2}$,

which varies within the range [-1, 1]. $N_{k,\gamma}$ is the probability of electron localization in the $\Omega_\gamma$ region, hence it can be written as $N_{k,\gamma} = \int_{\Omega_\gamma} |\Phi_k(x,y)|^2\,dx\,dy$, where $\gamma = 1,2$ and $\Phi_k(x,y)$ is the electron wave function for $k = 1,2,\dots$, the energy quantum numbers that constitute the electron spectrum in a DQW. For the ideal case of QW1 and QW2 having the same shape and size, the electron presence in either QW1 ($\Omega_1$) or QW2 ($\Omega_2$) has equal probability. For such a state, the electron is delocalized and σ = 0. The cases of σ equal or very close to 1 or -1 correspond to electron localization in QW1 or QW2, respectively.
Results and Discussion
Atom-like system
In a double quantum system, the single electron spectrum is composed of a set of symmetric and anti-symmetric state pairs (quasi-doublets). As an example, two wave functions of the quasi-doublets are presented in Figure 1 for strongly coupled QWs. In that case, the parameter σ is equal to 0 for either of the two quasi-doublet members; these electron states are delocalized. The energy splitting between members of the quasi-doublet is used as the tunneling rate.
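On a discrete grid, the integrals $N_{k,\gamma}$ become masked sums and σ is a difference of region probabilities. A minimal sketch (the toy grid, Gaussian test functions, and half-plane region masks are illustrative, not the paper's actual wave functions):

```python
import numpy as np

def localization_sigma(phi, region1, region2):
    """sigma = N1 - N2: difference of the probabilities of finding the
    electron in region Omega_1 (QW1) and Omega_2 (QW2)."""
    prob = np.abs(phi)**2
    prob = prob / prob.sum()   # normalize |Phi_k|^2 (cell area cancels)
    return prob[region1].sum() - prob[region2].sum()

# Toy grid with two Gaussian "orbitals" standing in for QW-bound states.
x, y = np.meshgrid(np.linspace(-30, 30, 120), np.linspace(-15, 15, 61))
omega1 = x < 0                                           # left half-plane
omega2 = x > 0                                           # right half-plane
phi_loc = np.exp(-((x + 15)**2 + y**2) / 50)             # bound to QW1
phi_delo = phi_loc + np.exp(-((x - 15)**2 + y**2) / 50)  # symmetric doublet

print(localization_sigma(phi_loc, omega1, omega2))   # ~ +1 (localized)
print(localization_sigma(phi_delo, omega1, omega2))  # ~ 0 (delocalized)
```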
Akin to diatomic molecules, the DQD appears to have two states, each characterized by an energy level similar to the bonding and anti-bonding energies of the molecule. In molecular physics, chemical reactions lead to transitions from one quantum state to another, which result in dissociation or association of the constituent atoms. The case of weakly coupled DQWs can be obtained from an initial state of separated QWs by reduction of the inter-dot distance. The localization parameter σ being within the interval -1 < σ < 1 allows the probabilities of electron localization on the left and on the right QW to vary and to eventually differ. Figure 1: Bonding and anti-bonding like states in an artificial molecule made of two strongly coupled InAs/GaAs QWs. The ground state quasi-doublet has energies of a) 0.2432 eV and b) 0.2442 eV. The QWs are considered identical. Double quantum wells A pair of adjacent discs acting as QDs is treated as a two-dimensional double quantum well (DQW). Ideally symmetric or nearly symmetric quantum wells are important to the current study. An example of the experimental possibility of highly symmetric, elongation-free QDs was reported in [42]. The dynamics of localized and delocalized states along the electron spectrum in a DQW, as a function of inter-dot distance, were studied in [30]. It was shown that tunneling between QWs occurs from high energy levels down to the ground state as the inter-dot distance consistently decreases. The electron spectrum appears to have three components resulting from: localized states, delocalized states, and states with different probabilities for localization in the left and right sides of the DQW. Noteworthy is the extreme sensitivity of the spectral distribution of the third component to small variations of QW shape, which violate the left-right symmetry of the DQW.
This fact can be explained by the dependence of the total wave function of a "two level" quantum system [32] on the energy difference $\Delta E_{1,2}$ between the left and right subsystems, considered isolated. One can write the electron wave functions of a quasi-doublet (for a one-dimensional system) as a combination of the wave functions of the isolated QWs (i.e., the basis set):

$\Psi_1 = \cos(\theta/2)\,\phi_L + \sin(\theta/2)\,\phi_R$, $\quad \Psi_2 = -\sin(\theta/2)\,\phi_L + \cos(\theta/2)\,\phi_R$,   (2)

where $\tan(\theta) = 2W/\Delta E_{1,2}$ with $0 \le \theta \le \pi$, and W is the matrix element of the confinement potential of the left (respectively, right) QW with the wave function $\phi_L$ (respectively, $\phi_R$) of the isolated QWs. Competition between the $\Delta E_{1,2}$ and W effects defines the type of localization in the system. For the ideal case of identical QWs, that is when $\Delta E_{1,2} = 0$, the wave functions reduce to $\Psi_{1,2} = (\phi_L \pm \phi_R)/\sqrt{2}$. Such a situation is seen in Figure 1 for the delocalized states of a single electron. The localized states appear for isolated QWs, when $\Delta E_{1,2} \neq 0$ and W = 0. The states with different probabilities for localization in the left and right sides of the DQW appear for values of $\Delta E_{1,2}$ and W which provide non-trivial coefficients of the form given in Equation (2). The inverse dependence of the coefficients on $\Delta E_{1,2}$ leads to high sensitivity of the wave function to variations of the geometry of the left and right QWs. Due to numerical errors related to the discretization of Equation (1) on a finite coordinate mesh, the ideal situation of identical QWs cannot be realized in the presented numerical modeling. We estimate that the results presented in this paper have a numerical error for $\Delta E_{1,2}$ of about $10^{-7}$ eV, considering "identical" QWs. A typical spectral distribution of localized/delocalized states in a DQW is shown in Figure 2 through the variation of the σ parameter as a function of inter-dot distance. One can see that the σ parameter, as well as the quasi-quadruplets and quasi-doublets that readily appear in the DQD spectrum, are symmetric about the σ = 0 axis. Calculations showed that if the inter-dot distance is less than 10 nm, all electron states are delocalized.
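The two-level mixing can be checked numerically. A sketch of the quasi-doublet coefficients and the resulting localization parameter, using the mixing angle $\tan\theta = 2W/\Delta E_{1,2}$ (the W and $\Delta E$ values below are illustrative, not fitted to the paper):

```python
import numpy as np

def doublet_localization(dE, W):
    """Localization of the lower quasi-doublet state in the two-level
    model: tan(theta) = 2W/dE, 0 <= theta <= pi (Eq. (2))."""
    theta = np.arctan2(2 * W, dE)
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    # |Psi_1> = c|phi_L> + s|phi_R>  ->  sigma = c^2 - s^2
    # = cos(theta) = dE / sqrt(dE^2 + 4 W^2)
    return c**2 - s**2

W = 1e-5  # eV, weak inter-dot coupling
for dE in [0.0, 1e-5, 1e-4, 1e-3]:
    print(f"dE = {dE:.0e} eV -> sigma = {doublet_localization(dE, W):+.3f}")
```

The crossover happens on the scale $\Delta E \sim W$: for weak coupling, a sub-micro-eV detuning already pushes σ from 0 toward 1, which is the 1/ΔE sensitivity invoked in the text.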
However, when the distance is larger than 36 nm, no delocalization occurs and the QWs can be considered isolated. When the inter-dot distance is between 10 nm and 36 nm, there are localized, delocalized, and intermediate states in the spectrum. The dynamics of the spectral distribution of localized/delocalized states can be described as follows: delocalized states consecutively appear at the upper levels of the spectrum as the DQW inter-dot distance is decreased from the initial value of 36 nm. Figure 2: σ parameter versus the electron spectrum in an InAs/GaAs DQW. The QWs are "identical" and have radius R=13 nm. For inter-dot distance a) a=14 nm, all states are delocalized; for b) a=10 nm, c) a=28 nm, and d) a=36 nm, there are localized, delocalized, and intermediate states. Sensitivity to symmetry breaking The sensitivity of the tunneling in DQWs to symmetry violation is illustrated in Figure 3, where the σ12 parameter (which specifically describes the tunneling between QW1 and QW2 and characterizes the localization) is shown as a function of the energy of the electron confined states for different values of η (the asymmetry parameter η = (RL-RR)/RL, where RL and RR are the radii of the left and right QWs). The spectral distribution for an ideally symmetric DQW (η = 0) encompasses totally delocalized states. A small symmetry violation (e.g., η = 0.8%) changes the electron localization distribution to completely localized. It can be shown that the sensitivity of the localization parameter varies as 1/ΔE², where ΔE is the energy difference of the same level when considered in the isolated left and right QWs. According to this relation, small symmetry violations, when ΔE → 0, result in strong variations of electron localization. Figure 3: Localization parameter σ12 for a DQW along the electron spectrum for different asymmetry values (parameter η). Violation of left-right symmetry results from variation of the right QW radius (RL=13 nm).
The inter-dot distance a is fixed to 10 nm. We further demonstrate in Figure 4 (see also [30]) that the spectral density D(σ), which readily shows the distribution of localized/delocalized states of the DQW, is largely affected by the slightest asymmetry of the DQW configuration. In Figure 4 (a), a small asymmetry indeed appears in the delocalized state distributions for the QW radii ratio ζ = 0.9975, the case of a slightly asymmetric DQW. Electrons are delocalized for most energy levels when the inter-dot distance is zero (QWs in close contact). This situation quickly changes when the distance increases to 3 nm, where most electron levels become localized. For a stronger asymmetry of the DQW, for instance ζ = 0.875 (shown in Figure 4 (b)), the electron probability is higher in the vicinity of the larger QD (on the right side), independent of the inter-dot distance. In contrast to the previous case, all electron states are mostly localized. For zero distance (when the QWs are in close contact), the distribution turns out to be chaotic, without any discernible peak. Figure 4: Spectral density function of σ, for different inter-dot distances a, in the case of asymmetric DQWs (described by ζ=R2/R1) with a) R1=40 nm and R2=39.9 nm (ζ=0.9975) and b) R1=40 nm and R2=35.0 nm (ζ=0.875). Generally, violation of the DQW geometrical symmetry changes the inter-dot distance threshold beyond which tunneling between the dots becomes possible. For the identical QWs considered in the previous section, a distance of less than 36 nm is required for electron tunneling. This distance is significantly smaller for asymmetric DQWs. One can clearly see in Figure 4 (a) that the distance is about 3 nm for a DQW with an asymmetry of ζ = 0.9975. Triple Quantum Wells Effect of media: The modification of the tunneling in a DQD by a third QD is understood as the effect of the medium on the electrons.
A triple quantum well (TQW) array is considered with various configurations, in particular triangular and linear QW arrays. We focus on two effects: the first is related to adding a third quantum well (QW3) to the system of two quantum wells (QW1 and QW2), and the second is related to small violations of symmetry through changes of QW positions in the TQW. We assume a weak coupling of the QWs within the TQW. The changes in the electron localization dynamics over the whole spectrum are studied for a pair of QWs by varying the inter-dot distances within the TQW. Figure 5 shows the TQW geometry and its defining parameters. The QWs are assumed identical. The dot radii were chosen to be R1=R2=R3=13 nm. The height b and the distance a between QW1 and QW2 have been varied. The distance a13 between QW1 and QW3 (which is equal to that between QW2 and QW3) is determined by a and b. Figure 5: Triangular array (TQW) of three InAs/GaAs QWs. The radius of the dots is R=13 nm. The inter-dot distance between QW1 and QW2 is a; b is the distance between the edge of QW3 and the QW1-QW2 axis. The spectral distribution of delocalized states in the DQW and the effect of adding a third QD to form a TQW were modeled. In the TQW, the QWs were arranged in an isosceles triangular configuration (see Figure 5). To analyze the spectral distribution of localized and delocalized states within the TQW, we selected QW1 and QW2 to form a DQW subsystem, and studied the modification of the tunneling and the electron states by QW3. As seen above, the tunneling in an isolated DQW goes consecutively from high energy levels to the ground state when the inter-dot distance is decreased. The behavior of tunneling in the DQW, within the TQW, is shown in Figure 6; it appears similar to that in an isolated DQW. The spectral distribution of delocalized states in the DQW demonstrates that the tunneling increases as the third QW gets closer to the QD pair.
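The exact expression for a13 is not recoverable from the extracted text, but the triangle geometry fixes it. A hypothetical reconstruction follows; the center placements, and the assumption that both a and b are edge-to-edge gaps, are my own reading of Figure 5, not formulas from the paper:

```python
import numpy as np

def tqw_gap(a, b, R):
    """Edge-to-edge gap a13 between QW1 and QW3 (= a23 by mirror symmetry),
    for disks of radius R: QW1/QW2 centered on the x axis with edge gap a,
    QW3 at the apex with edge height b above that axis."""
    c1 = np.array([-(a / 2 + R), 0.0])   # center of QW1
    c3 = np.array([0.0, b + R])          # center of QW3 (apex)
    return np.linalg.norm(c3 - c1) - 2 * R

R = 13.0  # nm, dot radius used throughout the paper
print(f"a13 = {tqw_gap(36.0, 23.0, R):.1f} nm")
```

Under these assumptions the gap grows monotonically with both a and b, which is the knob varied in Figures 6-9.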
The tunneling parameter σ is presented in Figure 6, for three TQD configurations, where the height of the isosceles triangle is decreased from 63 to 43 to 33 nm. As the third quantum dot gets closer to the others, significant tunneling occurs for energy levels below 0.55, 0.45, and 0.37 eV, respectively, as shown in Figure 6 (a-c). Consequently, the part of the electron spectrum with delocalized states decreases. Figure 6: σ parameter and spectrum of the DQW subsystem of the TQW, in the case of the symmetric triangular configuration (i.e., the horizontal shift of the third QW relative to the symmetry axis is null, d=0). The inter-dot distance is a=36 nm. Further effects of the third QW interacting with the DQW are shown in Figure 7, where the spectral distribution of delocalized states in the DQW is compared to that of the same DQW associated with the third QW (i.e., forming a TQW). For the latter, one can see that the number of delocalized states in the spectrum is larger than in the DQW. In addition, the energy for the localization-delocalization transition is lower for the TQW, i.e., 0.368 eV versus 0.452 eV. Thus, the addition of a QW to the DQW drastically changes the tunneling properties of the DQW, and the coupling within the DQW is enhanced. Figure 7: Influence of the location of the third QW on the tunneling between QD1 and QD2, shown through the σ energy spectra of the DQW (a) and the TQW (b). The inter-dot distance between the QWs of the DQW is a=34 nm. The distance between QW3 and the midpoint between wells 1 and 2 is 30 nm. Asymmetry and tunneling in TQW Figure 8 shows the effect of symmetry breaking on the σ parameter describing the tunneling in the TQD, where the location of the third QW is modified relative to the mid-point by d, as indicated in Figure 10 (b). Figure 8: σ parameter and spectrum of the TQW for different symmetries defined by the position d of QW3 relative to the QW1-QW2 mid-point: a) d=0, b) d=1 nm. The inter-dot distance between QW1 and QW2 is a=36 nm.
As can be seen in Figure 8, the spectral distribution of the localized/delocalized states appears "chaotic". This indicates that the symmetry breaking weakens the coupling between QW pairs and decreases the number of delocalized states. To further demonstrate the effects of symmetry violation, we show in Figure 9 the spectral distribution of the σ parameter for two such cases. The first is the spectrum of totally delocalized states of a DQW; for each level of the spectrum the electron is delocalized. The second corresponds to the situation where the TQW symmetry is violated; this was achieved by shifting the third QW by d=1 nm. One can see that the electron is localized for most spectral levels. The effect is strong, even though the shift of the third QW is only 9% of the inter-dot distance a13. Figure 9: σ12 parameter along the electron energy spectrum for a triangular TQW. The inter-dot distances are a=12 nm and b=23 nm. The data are obtained for two values of TQW asymmetry (ζ=d/a13) defined by d, the shift of the QW3 position relative to the triangle's upper vertex. Open squares and solid circles are for d=0 nm (ζ=0) and d=1 nm (ζ=0.092), respectively. To elucidate the effect of topology combined with symmetry violation by QW position within the TQW array, a linear configuration of identical QWs is utilized. The σ12 parameter was calculated for the ground state of an electron in the TQW for different values of the asymmetry. The position of the third QW relative to the center of the a12 distance is shown in the inset for d=+/-1 nm. The shift d changes the quantum states from delocalized to localized, where the electron is bound to either QW1 or QW2. Results for the calculated σ12 are shown in Figure 10 (a), for different values of the asymmetry parameter ζ=d/a13, where d is the shift of QW3 (relative to the QW1-QW2 midpoint), and a13 is the QW1-QW3 separation distance; see the inset of Figure 10 (b).
The electron initial state is delocalized and σ12 = 0 for all spectral levels. The tunneling to the delocalized state is suppressed when ζ is larger than 0.1, as shown in Figure 10 (b). The threshold for delocalization suppression differs for different parts of the spectrum. For low-lying levels, the localized state is reached at smaller values of the asymmetry ζ. Comparing the effects of symmetry breaking (Figures 9 and 10), one can conclude that for the TQW these effects are not as large as for the DQW. A similar result was obtained above for the TQW with the triangular configuration, shown in Figure 9. The transition from delocalized to localized states occurs when the asymmetry is larger than 9%. Figure 10: a) The localization parameter σ12 defined for the QW1 and QW2 pair in the TQW along the electron spectrum. The chain configuration of the TQW is considered. The inter-dot distance is a=37 nm. Results are shown for different asymmetries (ζ=d/a13) defined by d, the shift of QW3 relative to the QW1-QW2 midpoint. The solid circles correspond to d=0. b) The parameter |σ12| for the electron ground state as a function of ζ, the QW3 position asymmetry within the TQW. In the inset, the chain configuration of the TQW is shown (a12=48 nm, a13=11 nm, and R=13 nm). We studied the spectral distributions of localized/delocalized states in DQWs and TQWs. The single electron spectra in DQWs have shown three parts: localized levels (akin to those of an isolated QW) when the electron is in one of the QWs, levels with different probabilities for the electron to be in the left or right QW (weakly coupled wells), and delocalized levels when the coupling is strong and the probabilities are equal. We showed that the tunneling in a DQW is extremely sensitive to small asymmetric variations of the QW shapes. For TQWs, we correlated the tunneling between a pair of QWs with the position of the third QW (QW3). The tunneling in the DQW is enhanced when the third QW is added to the system.
The influence of QW3 on the tunneling is interpreted as an "effect of the medium". We found that the tunneling is very sensitive to the position of the third QW relative to the pair. In particular, variations of the TQW geometry that violate the symmetry of the system lead to a decrease in the number of delocalized states in the spectrum. As a result, the tunneling between the QWs significantly decreases. Such sensitivity is technologically important for future quantum devices as well as next generation photovoltaic cells. This work is supported by award DHS-16-ST-062-001 from the Dept. of Homeland Security, award HRD-0833184 from NSF, and award D01_W911SR-14-2-0001-0002 from MSRDC/ECBC-DOD.
We gratefully acknowledge support from the Simons Foundation and member institutions. Authors and titles for Nov 2014, skipping first 150 [151]  arXiv:1411.0461 [pdf, ps, other] Title: Global existence and well-posedness of 2D viscous shallow water system in Sobolev spaces with low regularity Comments: arXiv admin note: substantial text overlap with arXiv:1402.4923 Subjects: Analysis of PDEs (math.AP) [152]  arXiv:1411.0462 [pdf, ps, other] Title: Norm of the Whittaker vector of the deformed Virasoro algebra Subjects: Quantum Algebra (math.QA); Mathematical Physics (math-ph); Representation Theory (math.RT) [153]  arXiv:1411.0463 [pdf, ps, other] Title: Difference equation for the Heckman-Opdam hypergeometric function and its confluent Whittaker limit Comments: 15 pages, minor corrections and references added Journal-ref: Advances in Mathematics 285 (2015), 1225--1240 [154]  arXiv:1411.0465 [pdf, ps, other] Title: Overcoming order reduction in diffusion-reaction splitting. Part 1: Dirichlet boundary conditions Journal-ref: SIAM Journal on Scientific Computing 37(3), 2015, A1577-A1592 [155]  arXiv:1411.0467 [pdf, ps, other] Title: Moduli spaces of 6 and 7-dimensional complete intersections Authors: Jianbo Wang Comments: 6 pages [156]  arXiv:1411.0469 [pdf, ps, other] Title: Nielsen equivalence in Gupta-Sidki groups Subjects: Group Theory (math.GR) [157]  arXiv:1411.0470 [pdf, ps, other] Title: Non-Equilibrium Conformal Field Theories with Impurities Comments: 12 pages + references, 1 figure. Published version Journal-ref: J.Phys.
A: 48 (2015) 05FT01 [158]  arXiv:1411.0471 [pdf, ps, other] Title: Global approximation of convex functions by differentiable convex functions on Banach spaces Comments: 8 pages Subjects: Functional Analysis (math.FA) [159]  arXiv:1411.0472 [pdf, ps, other] Title: The $H^{\infty}$-Functional Calculus and Square Function Estimates Subjects: Functional Analysis (math.FA) [160]  arXiv:1411.0474 [pdf, ps, other] Title: Nordhaus-Gaddum-type problems for lines in hypergraphs Comments: 14 pages Subjects: Combinatorics (math.CO) [161]  arXiv:1411.0476 [pdf, ps, other] Title: Integrable Discretization of Soliton Equations via Bilinear Method and Bäcklund Transformation [162]  arXiv:1411.0478 [pdf, ps, other] Title: Trigonometric weight functions as K-theoretic stable envelope maps for the cotangent bundle of a flag variety Comments: Latex, 56 pages, in the new Appendix 5 the Bethe algebra of the XXZ model is described by generators and relations [163]  arXiv:1411.0479 [pdf, other] Title: Fixed-Point Constrained Model Predictive Control of Spacecraft Attitude Subjects: Optimization and Control (math.OC) [164]  arXiv:1411.0482 [pdf] Title: Reduction of CRB in Arbitrary Pre-designed Arrays Using Alter an Element Position Comments: 14 pages, Submitted to IEEE Journal on Selected Areas in Communications Subjects: Information Theory (cs.IT) [165]  arXiv:1411.0483 [pdf, ps, other] Title: The exponential law for spaces of test functions and diffeomorphism groups Comments: 42 pages, mistake corrected, results slightly extended; small changes before publication added. 
in Indagationes Mathematicae, 2015 Journal-ref: Indagationes Mathematicae 27 (2016) 225-265 Subjects: Functional Analysis (math.FA); Classical Analysis and ODEs (math.CA); Differential Geometry (math.DG) [166]  arXiv:1411.0487 [pdf, ps, other] Title: Real analytic complete non-compact surfaces in Euclidean space with finite total curvature arising as solutions to ODEs Subjects: Differential Geometry (math.DG) [167]  arXiv:1411.0491 [pdf, ps, other] Title: Calabi-Yau Monopoles for the Stenzel Metric Authors: Goncalo Oliveira [168]  arXiv:1411.0492 [pdf, ps, other] Title: Aeppli-Bott-Chern cohomology and Deligne cohomology from a viewpoint of Harvey-Lawson's spark complex Authors: Jyh-Haur Teh Comments: 20 pages Subjects: Differential Geometry (math.DG); Mathematical Physics (math-ph); Algebraic Geometry (math.AG) [169]  arXiv:1411.0497 [pdf, ps, other] Title: Resonance and marginal instability of switching systems Subjects: Dynamical Systems (math.DS) [170]  arXiv:1411.0499 [pdf, ps, other] Title: Splicing for motivic zeta functions Authors: Thomas Cauwbergs Comments: 18 pages Subjects: Algebraic Geometry (math.AG) [171]  arXiv:1411.0500 [pdf, ps, other] Title: Bousfield localisations along Quillen bifunctors Comments: 21 pages. The paper "Bousfield localisations along Quillen bifunctors and applications" (arXiv:1411.0500v1) has been divided into two parts: "Bousfield localisations along Quillen bifunctors", which is this arXiv submission, and "Towers and fibered products of model structures" (arXiv:1602.06808). v3: Some minor changes and corrections. 
Final version [172]  arXiv:1411.0501 [pdf, ps, other] Title: Strong approximation of Black--Scholes theory based on simple random walks Comments: 27 pages Journal-ref: Studia Scientiarum Mathematicarum Hungarica 53 (1), 93--129 (2016) Subjects: Probability (math.PR) [173]  arXiv:1411.0503 [pdf, ps, other] Title: On the 1D Cubic Nonlinear Schrodinger Equation in an Almost Critical Space Authors: Shaoming Guo Comments: 32 pages; to appear in the J. Fourier Anal. Appl Subjects: Analysis of PDEs (math.AP) [174]  arXiv:1411.0504 [pdf, ps, other] Title: Decomposition of bilinear forms as sum of bounded forms Authors: Mohamed ElMursi Subjects: Functional Analysis (math.FA) [175]  arXiv:1411.0505 [pdf, ps, other] Title: Hausdorff dimension of the arithmetic sum of self-similar sets Authors: Kan Jiang Comments: Revise the paper according to the reports. To appear in Indagationes Mathematicae Subjects: Dynamical Systems (math.DS) [176]  arXiv:1411.0508 [pdf, ps, other] Title: Convergence of ergodic averages for many group rotations Journal-ref: Ergod. Th. Dynam. Sys. 36 (2016) 2107-2120 Subjects: Dynamical Systems (math.DS); Classical Analysis and ODEs (math.CA); Group Theory (math.GR) [177]  arXiv:1411.0510 [pdf, ps, other] Title: A model theoretic study of right-angled buildings Comments: A number of small typos found by typesetter corrected Journal-ref: J. Eur. Math. Soc. 19, 2017, 3091-3141 Subjects: Logic (math.LO) [178]  arXiv:1411.0512 [pdf, other] Title: The classification problem for finitely generated operator systems and spaces Comments: v2: 32 pages. Minor corrections in Section 4 Subjects: Operator Algebras (math.OA); Logic (math.LO) [179]  arXiv:1411.0515 [pdf, ps, other] Title: Efficient pointwise estimation based on discrete data in ergodic nonparametric diffusions Comments: Published at this http URL in the Bernoulli (this http URL) by the International Statistical Institute/Bernoulli Society (this http URL) Journal-ref: Bernoulli 2015, Vol. 21, No. 
4, 2569-2594 Subjects: Statistics Theory (math.ST) [180]  arXiv:1411.0516 [pdf, ps, other] Title: A non-iterative method for the electrical impedance tomography based on joint sparse recovery Comments: 22 pages, 8 figures Subjects: Analysis of PDEs (math.AP) [181]  arXiv:1411.0518 [pdf, ps, other] Title: Long-time behavior and weak-strong uniqueness for incompressible viscoelastic flows Authors: Xianpeng Hu, Hao Wu Comments: 28 pages, finished in 2012, accepted by DCDS-A in 2014 Subjects: Analysis of PDEs (math.AP) [182]  arXiv:1411.0519 [pdf, ps, other] Title: Analysis of a New Space-Time Parallel Multigrid Algorithm for Parabolic Problems Subjects: Numerical Analysis (math.NA) [183]  arXiv:1411.0522 [pdf, ps, other] Title: A structure theorem for strong immersions Comments: 9 pages, 0 figures. arXiv admin note: text overlap with arXiv:1304.0728 Subjects: Combinatorics (math.CO) [184]  arXiv:1411.0524 [pdf, ps, other] Title: A formula for the trace of symmetric powers of matrices Subjects: Differential Geometry (math.DG) [185]  arXiv:1411.0526 [pdf, ps, other] Title: Finiteness properties of congruence classes of infinite matrices Authors: Rob Eggermont Journal-ref: Linear Algebra Appl. Vol 484 (2015), pages 290-303 [186]  arXiv:1411.0533 [pdf, ps, other] Title: Long time behaviour of 1/2 Hölder diffusion population processes Authors: Bastien Marmet Subjects: Probability (math.PR) [187]  arXiv:1411.0537 [pdf, other] Title: Toric graph associahedra and compactifications of $M_{0,n}$ Comments: 11 pages, 4 figures. Final Version: Minor clarifications. To appear in Journal of Algebraic Combinatorics Journal-ref: Journal of Algebraic Combinatorics (2016) Vol. 
43 Issue 1 pp 139-151 Subjects: Algebraic Geometry (math.AG); Combinatorics (math.CO) [188]  arXiv:1411.0545 [pdf, ps, other] Title: Hyperkahler implosion and Nahm's equations Comments: Version to appear in Communications in Mathematical Physics [189]  arXiv:1411.0550 [pdf, ps, other] Title: Characterization of the Slant Helix as Successor Curve of the General Helix Authors: Toni Menninger Comments: Condensed version of earlier manuscript arXiv:1302.3175 ; 7 pages Journal-ref: Int. Electron. J. Geom. 7(2) : 84-91, 2014 Subjects: Differential Geometry (math.DG) [190]  arXiv:1411.0552 [pdf, ps, other] Title: Stability results of some abstract evolution equations Authors: N. S. Hoang Subjects: Dynamical Systems (math.DS) [191]  arXiv:1411.0555 [pdf, ps, other] Title: Bergman interpolation on finite Riemann surfaces. Part I: Asymptotically Flat Case Authors: Dror Varolin Comments: The main result has been corrected: Sequences of density &lt;1 are still interpolating, but the density of an interpolation sequence is only shown to be at most 1. The corrected result is sharp, by work of Borichev-Lyubarskii. Also added a motivating section on Shapiro-Shields interpolation. Otherwise typos and minor errors corrected. To appear in Journal d'Analyse Subjects: Complex Variables (math.CV) [192]  arXiv:1411.0561 [pdf, other] Title: Ends of unimodular random manifolds Comments: Short note, two figures. 
Major rewrite from v1: extended the scope and added remarks on foliations [193]  arXiv:1411.0562 [pdf, ps, other] Title: Representations of quantum affine algebras of type $B_N$ Comments: 27 pages Subjects: Quantum Algebra (math.QA) [194]  arXiv:1411.0576 [pdf, ps, other] Title: Ground states and concentration phenomena for the fractional Schrödinger equation Comments: 22 Pages, Accepted for Publications in "Nonlinearity" Subjects: Analysis of PDEs (math.AP) [195]  arXiv:1411.0577 [pdf, ps, other] Title: The algebraic structure of quantum partial isometries Authors: Teodor Banica Comments: 34 pages Journal-ref: Infin. Dimens. Anal. Quantum Probab. Relat. Top. 19 (2016), 1-36 Subjects: Operator Algebras (math.OA); Quantum Algebra (math.QA) [196]  arXiv:1411.0578 [pdf, other] Title: Gaps problems and frequencies of patches in cut and project sets Comments: 27 pages, 4 figures Journal-ref: Math. Proc. Camb. Phil. Soc. 161 (2016) 65-85 Subjects: Dynamical Systems (math.DS); Mathematical Physics (math-ph); Metric Geometry (math.MG); Number Theory (math.NT) [197]  arXiv:1411.0583 [pdf, other] Title: A Hitchhiker's Guide to Automatic Differentiation Comments: 39 pages, 10 figures, Numerical Algorithms (2015) Journal-ref: Numerical Algorithms, Vol. 72 No. 3 (2016), 775-811 Subjects: Numerical Analysis (math.NA) [198]  arXiv:1411.0584 [pdf, ps, other] Title: On the toplogical computation of K4 of the Gaussian and Eisenstein integers Comments: addresses referee's comments Subjects: K-Theory and Homology (math.KT); Number Theory (math.NT) [199]  arXiv:1411.0586 [pdf, ps, other] Title: Random walks in a one-dimensional Lévy random environment Comments: Final version to be published in J. Stat. Phys. 23 pages. (Changes from v1: Theorem 2.4 and Corollary 2.6 have been removed.) Journal-ref: J. Stat. Phys. 163 (2016), no. 1, 22-40 [200]  arXiv:1411.0590 [pdf, other] Title: Sparse matrices describing iterations of integer-valued functions Authors: Bernd C. 
Kellner Comments: 19 pages Subjects: Combinatorics (math.CO)
How to Understand Quantum Mechanics John P Ralston How to Understand Quantum Mechanics presents an accessible introduction to understanding quantum mechanics in a natural and intuitive way, which was advocated by Erwin Schrödinger and Albert Einstein. A theoretical physicist reveals dozens of easy tricks that avoid long calculations, make complicated things simple, and bypass the worthless anguish of famous scientists who died in angst. The author's approach is light-hearted, and the book is written to be read without equations; however, all relevant equations still appear, with explanations as to what they mean. The book entertainingly rejects quantum disinformation, the MKS unit system (obsolete), pompous non-explanations, pompous people, the hoax of the "uncertainty principle" (it's just a math relation), and the accumulated junk-DNA that got into the quantum operating system by misreporting it. The order of presentation is new and also unique in warning about traps to be avoided, while separating topics such as quantum probability to let the Schrödinger equation be appreciated in the simplest way on its own terms. This is also the first book on quantum theory that is not based on arbitrary and confusing axioms or foundation principles. The author is so unprincipled he shows where obsolete principles duplicated basic math facts, became redundant, and sometimes were just pawns in academic turf wars. The book has many original topics not found elsewhere, and completely researched references to original historical sources and anecdotes concerning the unrecognized scientists who actually did discover things, did not all get Nobel prizes, and yet had interesting productive lives. About the Author John P Ralston, PhD, is a Professor of Physics and Astronomy at The University of Kansas. He received his PhD in high-energy theory physics from the University of Oregon.
His research interests include high-energy theory, single-molecule methods, and pharmaceutical data analysis. Paperback ISBN: 9780750329118 Ebook ISBN: 9781681740348 DOI: 10.1088/978-1-6817-4226-7 Publisher: Morgan & Claypool Publishers
Kolmogorov-Arnold-Moser Theory Luigi Chierchia and John N. Mather (2010), Scholarpedia, 5(9):2123. doi:10.4249/scholarpedia.2123 Kolmogorov-Arnold-Moser (KAM) theory deals with the persistence, under perturbation, of quasi-periodic motions in Hamiltonian dynamical systems. An important example is given by the dynamics of nearly-integrable Hamiltonian systems. In general, the phase space of a completely integrable Hamiltonian system of \(n\) degrees of freedom is foliated by invariant \(n\)-dimensional tori (possibly of different topology). KAM theory shows that, under suitable regularity and non-degeneracy assumptions, most (in the measure-theoretic sense) of such tori persist (slightly deformed) under small Hamiltonian perturbations. The union of the persistent \(n\)-dimensional tori (the Kolmogorov set) tends to fill the whole phase space as the strength of the perturbation is decreased. The major technical problem arising in this context is due to the appearance of resonances and of small divisors in the associated formal perturbation series.
Classical KAM theory The main objects studied in KAM theory are \(d\)-dimensional embedded tori \(\mathcal{T}^d\) invariant for a Hamiltonian flow \(\phi^t_H: \mathcal{M}^{2n}\to\mathcal{M}^{2n}\ ,\) where \(t\in\mathbb{R}\) denotes the time variable and \(H=H(p,q)\) is a (smooth enough or analytic) Hamiltonian function depending on \(2n\) symplectic (or canonical) variables \(p=(p_1,...,p_n)\) and \(q=(q_1,...,q_n)\) defined on the phase space \(\mathcal{M}^{2n}\ .\) This means that if \((p_0,q_0)\in\mathcal{T}^d\ ,\) then \(\phi^t_H(p_0,q_0)\in\mathcal{T}^d\) for any \(t\in\mathbb{R}\ ,\) \(\phi^t_H(p_0,q_0)=(p(t),q(t))\) denoting the solution of the (standard) Hamilton equations \[\tag{1} \left\{\begin{array}{l}\dot p = - \partial_q H(p,q)\\ \dot q = \partial_p H(p,q)\end{array}\right.\quad{\rm with\ initial\ data }\quad \left\{\begin{array}{l} p(0)=p_0\\ q(0)=q_0\end{array}\right. . \] Here, the dot represents time derivative, while \(\partial_z\) denotes the gradient with respect to the \(z\) variables. A \(d\)-dimensional (embedded and smooth or analytic) invariant torus for \(\phi_H^t\ ,\) with \(2\le d\le n\ ,\) is called a KAM torus if: • the flow \(\phi^t_H\) on \(\mathcal{T}^d\) is conjugated to a linear translation \(\theta \to \theta + \omega t\ ,\) where \(\theta=(\theta_1,...,\theta_d)\) belongs to the standard \(d\)-dimensional torus \(\mathbb{T}^d=\mathbb{R}^d/(2\pi \mathbb{Z})^d\ ;\) the vector \(\omega=(\omega_1,...,\omega_d)\in\mathbb{R}^d\) is called the frequency vector; • the frequency vector \(\omega\) is rationally independent and "badly" approximated by rationals, typically in a Diophantine sense: \[\tag{2} \exists\ \gamma, \tau>0\ {\rm such\ that} \quad |\omega\cdot k|:= |\sum_{j=1}^d \omega_j k_j|\ge \frac{\gamma}{\|k\|^\tau}\ ,\ \forall\ k\in\mathbb{Z}^d\backslash\{0\}\ . \] From measure theory, it follows that the set of Diophantine vectors in \(\mathbb{R}^d\) is of full Lebesgue measure. 
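The Diophantine bound (2) is easy to probe numerically. A minimal sketch (the frequency vector \(\omega=(1,(1+\sqrt 5)/2)\), built from the golden mean, is an illustrative choice; it satisfies (2) with \(\tau=1\)):

```python
import math

# Probe the Diophantine condition (2) for omega = (1, golden mean),
# which is Diophantine with exponent tau = 1: |omega . k| >= gamma / ||k||.
phi = (1 + math.sqrt(5)) / 2
omega = (1.0, phi)

# Smallest value of |omega . k| * ||k||_inf over a box of integer vectors:
gamma_lower = min(
    abs(omega[0] * k1 + omega[1] * k2) * max(abs(k1), abs(k2))
    for k1 in range(-50, 51)
    for k2 in range(-50, 51)
    if (k1, k2) != (0, 0)
)
# The product stays bounded away from zero (the minimum here is phi - 1,
# attained at k = (1, -1)), so any gamma below it works in (2) on this box.
print(round(gamma_lower, 3))
```

For a genuinely Diophantine vector this truncated minimum stabilizes as the box grows; for a resonant vector it would collapse to zero.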
Note that the case \(d=1\) corresponds to periodic trajectories of period \(2\pi/\omega\) (this case is normally excluded in classical KAM theory since it does not involve small divisors). On the other hand, the case \(d=n\), corresponding to maximal KAM tori, is particularly relevant. Figure 1: Linear translation on a 2-torus (animation by Corrado Falcolini) Figure 2: A periodic case (animation by Corrado Falcolini) Figure 3: An orbit on a 2-dimensional KAM torus in a 3-dimensional energy level (animation by Corrado Falcolini) Kolmogorov normal forms and Kolmogorov's Theorem Let \(H\) be a real-analytic Hamiltonian on \(\mathcal{M}^{2n}=U\times \mathbb{T}^n\) (with \(U\) an open region in \(\mathbb{R}^n\)) and assume that \(\mathcal{T}^n\) is a maximal KAM torus for \(H\) and that it is a (Lagrangian) graph over the angle variables. Then there exists a symplectic transformation \(\phi: (y,x)\to(p,q)\) (i.e., a diffeomorphism preserving the canonical 2-form \(\displaystyle \sum_{i=1}^n dp_i\wedge dq_i\)) transforming \(H\) into Kolmogorov normal form: \[\tag{3} H\circ\phi(y,x)=K(y,x):=E+\omega\cdot y + Q(y,x) \] for some number \(E\) (the energy level of the KAM torus), some Diophantine frequency vector \(\omega\) and \(Q\) a function vanishing together with its first \(y\)-derivatives at \(y=0\ .\) In the "new" variables \((y,x)\ ,\) the \(n\)-torus \(\{0\}\times\mathbb{T}^n\) is obviously a KAM torus for the transformed Hamiltonian \(H\circ\phi\ .\) One says that the Kolmogorov normal form \(K\) in (3) is non-degenerate if the Hessian matrix (with respect to \(y\)) of the average of \(Q\) over \(\mathbb{T}^n\) is regular.
Kolmogorov's Theorem (Kolmogorov, 1954) Let \(K\) be a real-analytic non-degenerate Kolmogorov normal form and let \(P\) be a real-analytic function in a neighborhood of \(\{y=0\}\times\mathbb{T}^n\ .\) Then, there exists \(\epsilon_0>0\) and for any \(|\epsilon|<\epsilon_0\) a real-analytic symplectic transformation \(\phi_\epsilon\ ,\) close to the identity, such that, if \(H_\epsilon\) denotes the perturbed Hamiltonian \(K+\epsilon P\ ,\) then \(H_\epsilon\circ\phi_\epsilon\) is a non-degenerate Kolmogorov normal form with the same frequency vector as \(K\ .\) Thus, in particular, \(\mathcal{T}_\epsilon^n:=\phi_\epsilon(\{0\}\times \mathbb{T}^n)\) is a (real-analytic, non-degenerate) KAM torus for \(H_\epsilon\ ,\) and such a torus is \(\epsilon\)-close to \(\{0\}\times\mathbb{T}^n\ .\) Nearly-integrable Hamiltonian systems A nearly-integrable Hamiltonian system is one governed by a Hamiltonian function of the form \(H_\epsilon(y,x)=K(y)+\epsilon P(y,x)\) with \(y=(y_1,...,y_n)\) (action variables) varying in a domain \(B\subset \mathbb{R}^n\) and \(x=(x_1,...,x_n)\) (angle variables) varying in the standard \(n\)-dimensional torus \(\mathbb{T}^n\ .\) For \(\epsilon=0\ ,\) equations (1) give \(\dot y=0\) and \(\dot x=\partial_y K(y)\ ,\) hence \(y=y_0=\) constant and \(x = x_0 + \omega_0\, t\) (mod \(2\pi\)), with \(\omega_0:= \partial_y K\big(y_0\big)\ .\) Thus the torus \(\{y_0\}\times \mathbb{T}^n\) is invariant for the flow \(\phi^t_{K}\ ,\) and if \(\omega_0\) is Diophantine and \(\partial_y^2 K(y_0)\) is invertible, then such a torus is a non-degenerate KAM torus for \(H_0=K\ .\) Since \(K(y)\) can be expanded by Taylor's formula as \[ K=K(y_0)+\omega_0\cdot (y-y_0)+ \frac{1}{2} \partial_y^2 K(y_0) (y-y_0) \cdot(y-y_0)+ O(|y-y_0|^3), \] it follows from Kolmogorov's Theorem that for \(\epsilon\) small enough such tori persist, giving rise to non-degenerate KAM tori for \(H_\epsilon\). Moser's differentiable version J.K.
Moser (followed by H. Rüssmann, J. Pöschel and others) showed that the real-analyticity assumption is not necessary. Indeed, Kolmogorov's Theorem holds under the milder assumption that \(H\) is a \(C^\ell\) differentiable function with \(\ell>2n\) (meaning that \(H\) is of class \(C^{2n}\) and that the derivatives of order \(2n\) are Hölder continuous). Originally (Moser, 1962), Moser's work focused on \(C^{333}\) (exact symplectic) perturbations of integrable twist mappings of the annulus (the most famous example being the so-called standard map). In this case, maximal KAM tori correspond to homotopically non-trivial curves intersecting each radius in only one point. The number of derivatives was reduced to 5 by H. Rüssmann (Rüssmann, 1970), and M. Herman (Herman, 1983) showed that the theorem is valid for \(C^k\) perturbations with \(k\ge 3\ ,\) but false for \(k<3\ .\) Small divisors and classical KAM techniques KAM techniques (i.e., the analytical tools used to prove statements in KAM theory) constitute the hard core of KAM theory and play a major role in applications, extensions and, in general, in the full comprehension of the results. The main technical problem is related to the appearance of small divisors in the Fourier series of perturbative expansions (averaging methods, series expansions of quasi-periodic motions, etc.). Small divisors are expressions of the form \(\omega \cdot k=\sum_{i=1}^d \omega_i k_i\) with \(k\in\mathbb{Z}^d\backslash\{0\}\) an integer vector, which are usually related to Fourier modes associated to the perturbing function and where the frequency vector \(\omega\) often depends upon the slow (action) variables. Such expressions appear in the denominator of (formal) Fourier expansions of the object one aims to construct (e.g., a generating function or the formal expansion of a quasi-periodic solution).
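The twist-map setting mentioned above can be explored directly on a computer. A minimal sketch (assuming Chirikov's standard map, \(p'=p+K\sin\theta\), \(\theta'=\theta+p'\), as the concrete twist mapping; the initial data and the value of \(K\) are illustrative):

```python
import math

# Iterate the standard map p' = p + K sin(theta), theta' = theta + p' (mod 2*pi).
def standard_map_orbit(theta0, p0, K, n):
    theta, p = theta0, p0
    ps = []
    for _ in range(n):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
        ps.append(p)
    return ps

# Well below the breakdown threshold (numerically K ~ 0.9716 for the last
# golden-mean curve), orbits lie on, or are trapped between, invariant KAM
# curves, so the action p stays confined near its initial value.
K = 0.1
p0 = 2.0 * math.pi * (math.sqrt(5.0) - 1.0) / 2.0  # near golden rotation number
ps = standard_map_orbit(0.1, p0, K, 5000)
spread = max(ps) - min(ps)
print(spread < 0.5)  # the action barely moves
```

Raising K well above 1 destroys the confining curves and lets the action wander, which is the numerical signature of the breakdown of KAM tori.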
Since \(\omega\cdot k\) may become arbitrarily small for any vector \(\omega\in\mathbb{R}^d\) as \(k\) varies, the convergence of the perturbative series is in doubt. Kolmogorov's scheme Two main ideas are needed to overcome the convergence problems related to the appearance of small divisors: (i) keep the frequency of the motion fixed; (ii) use a Newton quadratic method (the name comes from the elementary tangent Newton method for finding roots of real functions) to control the growth of the remainder terms. More specifically, Kolmogorov (Kolmogorov, 1954) constructed a (real-analytic), near-identity, symplectic transformation \(\phi_1\) transforming a Hamiltonian of the form \(H=K+\epsilon P\ ,\) with \(K\) a non-degenerate Kolmogorov normal form as in (3), into a new Hamiltonian of the form \[\tag{4} H_1:=H\circ\phi_1=K_1+\epsilon^2 P_1, \] with \(K_1\) again in non-degenerate Kolmogorov normal form with the same frequency vector \(\omega\) as \(K\ .\) Once this is achieved, one can iterate the construction to obtain a sequence of symplectic transformations \(\phi_j\) so that \[ H_j:=H\circ\phi_1\circ\cdots\circ\phi_j=K_j+\epsilon^{2^j} P_j \] with \(K_j\) a non-degenerate Kolmogorov normal form with fixed frequency vector, and \(P_j\) a real-analytic perturbation. Indeed, the equations leading to the determination of the symplectic transformation \(\phi:(y',x')\to(y,x)\) may be (essentially uniquely) solved and admit as generating function a (real-analytic) function of the form \[ g(y',x)=y'\cdot x+\epsilon \Big(b\cdot x+s(x)+ y'\cdot a(x)\Big) \] where \(b\) is a constant vector, while \(s\) and \(a\) are, respectively, a scalar and a vector-valued multi-periodic function with vanishing average over \(\mathbb{T}^n\ .\) In the denominators of the Fourier expansion of \(s\) and \(a\) (and in the determination of the constant \(b\)) there appear the small divisors \(\omega\cdot k\ ,\) which are controlled through the Diophantine inequality (2).
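The mechanism behind the quadratic scheme can be seen in a toy error iteration (a sketch, not part of the article; the constants \(C_j\) mimic the small-divisor losses per step):

```python
# Toy model of the Newton scheme: each step squares the error but also
# multiplies it by a growing constant C_j (mimicking small-divisor losses).
eps = 1e-3
error = eps
for j in range(1, 8):
    C_j = 10.0 ** j          # assumed (pessimistic) growth of the constants
    error = C_j * error * error
# Squaring wins: the error is super-exponentially small despite the C_j.
print(error < eps ** 10)
```

A linear scheme (error multiplied by C_j at each step, no squaring) would diverge under the same constants; this is why the quadratic method, and not simple iteration, defeats the small divisors.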
The super-exponential decrease of \(\epsilon^{2^j}\ ,\) for small \(\epsilon\ ,\) allows one to beat the growth of the norm (due to the small divisors) of the new perturbing functions \(P_j\ :\) in the limit as \(j\to\infty\ ,\) \(\phi_1\circ\cdots\circ\phi_j\) converges to a real-analytic symplectic transformation \(\phi_\epsilon\ ,\) \(H_j\to K_\epsilon\ ,\) with \(K_\epsilon=\lim_{j\to\infty} K_j=H\circ\phi_\epsilon\) a real-analytic non-degenerate Kolmogorov normal form with frequency \(\omega\ .\) Arnold's scheme Arnold (who was the first to provide a detailed proof of Kolmogorov's Theorem) followed a different approach (Arnold, 1963a), which, however, shared with Kolmogorov's scheme the two main ingredients. Arnold considered a nearly-integrable Hamiltonian system of the form \(H:=K(y) + \epsilon P(y,x)\) real analytic in a complex neighborhood \(D_0\) of \(\{y_0\}\times\mathbb{T}^n\ ,\) where \(y_0\) is such that \(\partial_y K(y_0)=\omega\) is Diophantine and \(\partial^2_y K\) is invertible on \(D_0\ .\) One then constructs a near-identity symplectic transformation \(\phi_1: D_1\to D_0\) transforming \(H\) as in (4) with \(K_1=K_1(y')\) (i.e., integrable). The new domain \(D_1\) is a complex neighborhood of \(\{y_1\}\times\mathbb{T}^n\) contained in \(D_0\ ,\) and with the property that \(\partial_{y'} K_1(y_1)=\omega\) (same frequency) and \(\partial^2_{y'} K_1\) is invertible on \(D_1\ .\) This is not difficult to achieve, by classical averaging theory, through a symplectic transformation associated to a near-identity generating function \(g(y',x)=y'\cdot x + \epsilon \tilde g(y',x)\ ,\) with \(\tilde g\) a trigonometric polynomial in \(x\) having degree \(\delta\) depending on \(\epsilon\) (\(\delta\) can be chosen as \((\log \epsilon^{-1})^p\) and is related to a cut-off of the high Fourier modes of the perturbation).
The iteration leads to a sequence of Hamiltonians \(H_j=K_j+\epsilon^{2^j} P_j\) closer and closer to integrable but in shrinking domains \(D_j\ .\) In the limit, the projection of \(D_j\) onto the action variables is a single point \(y_*\ .\) Nevertheless, one can show that, pulling back the dynamics to \(\{y_*\}\times \mathbb{T}^n\ ,\) there corresponds a Diophantine KAM torus for the original Hamiltonian \(H\ .\) Moser's differentiable case In dealing with a finitely differentiable perturbation \(P\) there appears an extra technical problem. Namely, due to the presence of the small divisors, during the iteration scheme one loses derivatives at each step. Moser (inspired by the famous work by J. Nash on the \(C^\infty\) embedding of Riemannian manifolds) introduced a smoothing technique (via convolutions), which restores, at each step of the Newton iteration, a certain number of derivatives. The super-exponential convergence of the Newton scheme is fast enough to compensate for the smoothing as well, leading to a convergent algorithm. Later, Moser developed different and sharper methods, using, e.g., a characterization of differentiable functions through approximations by real-analytic ones in smaller and smaller complex neighborhoods of real domains. Thus, by a quantitative approximation of differentiable functions by means of real-analytic functions, one can construct for the analytic approximations real-analytic, invariant tori; such approximate solutions are analytic in shrinking domains and in the limit converge to differentiable solutions of the original problem.
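A basic linear step underlying all of these schemes is the small-divisor equation \(\sum_j\omega_j\partial_{x_j}u=f\) on the torus, solved mode-by-mode in Fourier space via \(u_k=f_k/(i\,\omega\cdot k)\). A minimal sketch (the frequency vector and the test mode are illustrative choices):

```python
# Solve omega . grad u = f on T^2 for a truncated Fourier series:
# each coefficient is u_k = f_k / (i * omega . k), a "small divisor".
omega = (1.0, 1.618033988749895)  # Diophantine (golden-mean) frequencies

def solve_homological(f_hat):
    """f_hat: {(k1, k2): complex}, zero-average data, so (0, 0) is absent."""
    u_hat = {}
    for (k1, k2), fk in f_hat.items():
        divisor = 1j * (omega[0] * k1 + omega[1] * k2)  # never exactly zero
        u_hat[(k1, k2)] = fk / divisor
    return u_hat

# Check on a single mode f = e^{i k.x}: the PDE holds coefficient-wise.
f_hat = {(3, -2): 1.0 + 0.0j}
u_hat = solve_homological(f_hat)
k1, k2 = 3, -2
residual = abs(1j * (omega[0] * k1 + omega[1] * k2) * u_hat[(k1, k2)] - f_hat[(k1, k2)])
print(residual < 1e-12)
```

The Diophantine bound (2) is exactly what keeps the divisors above \(\gamma/\|k\|^\tau\), so that the exponential decay of analytic Fourier coefficients dominates and the series for u converges.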
The analytical tools needed in KAM proofs are classical and involve, in particular:
- exponential decay of Fourier coefficients of analytic functions
- quantitative versions of the classical implicit function theorem in real-analytic settings
- Cauchy estimates, which allow one to bound the sup-norm of derivatives of analytic functions in smaller domains in terms of the sup-norm of the function divided by the loss of the extension of the domain
- quantitative analysis of the PDE \(\sum_{j=1}^d \omega_j \partial_{x_j} u= f\) where \(f\) is a real-analytic function on \(\mathbb{T}^d\) with vanishing average and \((\omega_1,...,\omega_d)\) a Diophantine vector.
• In a nearly-integrable analytic Hamiltonian system with \(n\) degrees of freedom, the Kolmogorov set, i.e., the union of the persistent KAM tori, locally fills a region of phase space of density \(1-O(\sqrt{\epsilon})\ ,\) as \(\epsilon\) goes to zero. While the dynamics on the Kolmogorov set trivializes (being conjugated to a linear quasi-periodic translation on \(\mathbb{T}^n\) with a Diophantine frequency vector), in its complement (which asymptotically represents a small region of measure \(O(\sqrt{\epsilon})\)) the dynamics can be very complicated, exhibiting, in many cases, "random motions" or "Arnold diffusion". • In nearly-integrable Hamiltonian systems, Kolmogorov's non-degeneracy condition is equivalent to requiring that \(\det \partial_y^2 K(y_0)\neq 0\ ,\) which, in turn, means that the frequency map \(y\to \omega(y):=\partial_y K(y)\) is a local diffeomorphism in a neighborhood of \(y_0\ .\) • The global geometry of the Kolmogorov set is simple: the fibers of the set (i.e., the individual KAM tori) are level sets of a global \(C^\infty\) symplectic map \(\phi_*(\eta,x)\) as the \(n\)-vector \(\eta\) varies in a Cantor-like \(n\)-disk of almost full density.
This phenomenon may be interpreted by saying that nearly-integrable Hamiltonian systems are integrable over Cantor sets (Pöschel, 1982; Chierchia and Gallavotti, 1982). • The Kolmogorov symplectic map \(\phi_\epsilon\) and the Kolmogorov normal form \(K_\epsilon\) (see above) depend analytically upon the perturbative parameter \(\epsilon\ .\) Therefore quasi-periodic trajectories taking place on KAM tori admit convergent series expansions in \(\epsilon\). This fact, which was first observed by Moser (1967), solves a long-standing problem about the convergence of Lindstedt series (i.e., \(\epsilon\)-power series expansions of formal quasi-periodic solutions with Diophantine frequencies). Direct proofs, based upon delicate and lengthy combinatorial arguments, of the convergence of Lindstedt series (i.e., proofs avoiding KAM fast iteration methods) were found in the late 1980's (H. Eliasson) and early 1990's (G. Gallavotti, L. Chierchia and C. Falcolini). Applications and extensions Iso-energetic tori and perpetual stability Figure 4: Motion trapped by two KAM tori (from A. Celletti and L. Chierchia, "KAM tori for N-body problems (a brief history)", Celestial Mechanics & Dynamical Astronomy 95 (2006), 117-139) The tori found through Kolmogorov's (or Arnold's) scheme have, as \(\epsilon\) varies, the same frequencies, but different energies. Arnold noticed that, instead, one could keep fixed the ratios of the frequencies and the energy so as to analytically continue KAM tori on a fixed energy surface. The analytical non-degeneracy condition to achieve this (in the nearly-integrable setting) is \[ \det \left(\begin{matrix}\partial^2_y K & \partial_y K \\ \partial_y K & 0\end{matrix}\right) \neq 0 \] (this is an \((n+1)\times(n+1)\) matrix having as last column and as last row the gradient of \(K\) and a 0).
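For a concrete check of this condition, take \(K(y)=(y_1^2+y_2^2)/2\) (an illustrative choice, not from the article): the bordered matrix at \(y\) is \([[1,0,y_1],[0,1,y_2],[y_1,y_2,0]]\), whose determinant is \(-(y_1^2+y_2^2)\), non-zero away from \(y=0\). A minimal sketch:

```python
# Iso-energetic non-degeneracy for K(y) = (y1^2 + y2^2)/2: the bordered
# matrix [[1, 0, y1], [0, 1, y2], [y1, y2, 0]] has det = -(y1^2 + y2^2).
def bordered_det(y1, y2):
    # 3x3 determinant expanded along the last row:
    #   +y1 * det[[0, y1], [1, y2]] - y2 * det[[1, y1], [0, y2]] + 0
    return y1 * (0.0 * y2 - y1) - y2 * (y2 - 0.0 * y1)

print(bordered_det(1.0, 2.0))  # -5.0: non-degenerate away from y = 0
```

Note that for this K the plain Kolmogorov condition det ∂²K = 1 holds everywhere, while the iso-energetic condition fails exactly at y = 0, where the frequency vanishes.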
Iso-energetic non-degeneracy leads, in low-dimensional nearly-integrable systems, to perpetual stability: an energy level for a system with two degrees of freedom is a 3-dimensional surface and, for small perturbations, an iso-energetically non-degenerate, nearly-integrable system admits a positive measure set of invariant two-dimensional tori (which are graphs over the angle variables); thus such tori separate the energy level, and a generic trajectory either lies on an invariant torus or is trapped between two of them. In both cases no escape is possible, and the action variables stay forever close to their initial values ("perpetual stability"). Properly degenerate KAM theory Figure 5: Outer Solar System as (1+4)-body model (animation by Corrado Falcolini) One of the original motivations for KAM theory was to find bounded motions in the planetary many-body problem (i.e., a mechanical system formed by \(1+N\) point-masses, one of which is much larger than the others, interacting only through gravity). It is a classical fact that such a system may be seen as a perturbation of \(N\) decoupled two-body systems (star-planet). However, the limiting unperturbed Hamiltonian is highly degenerate, since it does not depend on the full set of action variables (proper degeneracy). In general, perturbations of properly degenerate Hamiltonian systems may admit no KAM tori.
However, under suitable assumptions on the (average over the fast angles of the) perturbation, KAM tori do exist: Theorem (Arnold, 1963b) Let \((y,x)\in \mathbb{R}^n\times\mathbb{T}^n\) and \((p,q)\in\mathbb{R}^{2m}\) be couples of conjugate symplectic variables and let the Hamiltonian \(H=K(y)+\epsilon P(y,x,p,q)\) be real-analytic in a neighborhood of \(\{y_0\}\times\mathbb{T}^n\times\{0,0\}\ .\) Denote by \(\bar P\) the secular perturbation (i.e., the average over the fast angles \(x\) of \(P\)) \(\bar P(y,p,q)=\int_{\mathbb{T}^n} P(y,x,p,q) dx/(2\pi)^n\) and by \(r=(r_1,...,r_m)\) the vector with components \(r_i=(p_i^2+q_i^2)/2\) (for \(i=1,...,m\)). Assume that \(\det \partial_y^2 K(y_0)\neq 0\ .\) Assume also that the secular perturbation has an elliptic equilibrium: \[ \bar P= \bar P_0(y)+ \Omega(y)\cdot r + \frac12 A(y) r\cdot r + O(|r|^3)\ ,\] with \(\det A(y_0)\neq 0\ .\) Then, if \(\epsilon\) is small enough, in a neighborhood of \(\{y_0\}\times\mathbb{T}^n\times\{0,0\}\) there exists a positive measure set of initial data whose evolution lies on \((n+m)\)-dimensional tori close to \(\{y_0\}\times\mathbb{T}^n\times\{r_k=\epsilon^a, \ \forall\ k\}\) for a suitable \(a>0\ .\) Figure 6: The dynamics governed by \(K+\epsilon \bar P\) (animation by Corrado Falcolini) This theorem, or refinements of it, is at the basis of the application of KAM theory to the planetary many-body problem; a complete proof of this result, however, was published only in 2004 and is due to M. Herman and J. Féjoz. Weaker non-degeneracies To extend the validity of KAM theory it is important to weaken the non-degeneracy conditions. As mentioned above, Kolmogorov's non-degeneracy condition for nearly-integrable systems with Hamiltonian \(H_\epsilon=K(y)+\epsilon P(y,x)\) means that the frequency map \(\omega=\partial_y K\) is a local diffeomorphism.
Rüssmann pointed out (Rüssmann, 1989) that it is sufficient (and in a suitable sense also necessary) to assume that the image of the frequency map \(y\to \omega(y)\) does not lie in any hyperplane (more precisely, for a ball \(B\ ,\) \(\omega(B)\) does not lie in any hyperplane passing through the origin). A similar condition (better suited to differentiable settings), due to Arnold and Pyartli, is to require that the frequency map \(\omega\) is skew at some point \(y_0\ .\) This means that there exists a smooth curve \(t\in(-1,1)\to u(t)\in\mathbb{R}^n\) passing through \(y_0=u(0)\) such that, if \(\alpha(t)\) denotes the lifted curve \(\omega\circ u(t)\ ,\) then the matrix \([\alpha(0),\alpha'(0),...,\alpha^{(n-1)}(0)]\) is invertible. Under these types of non-degeneracy conditions one can guarantee that, under small enough perturbations, there exists a positive measure set of initial data evolving on maximal KAM tori for \(H_\epsilon\ .\) Lower dimensional tori Also of great interest for KAM theory are quasi-periodic trajectories spanning lower dimensional tori, i.e., orbits \(z(t)=\phi_H^t(z_0)\) such that the closure of the set \(\{z(t): t\in\mathbb{R}\}\) is diffeomorphic to \(\mathbb{T}^d\) with \(1< d<n\ ,\) \(n\) being the number of degrees of freedom (i.e., half of the dimension of the phase space). At variance with maximal KAM tori, the union of lower dimensional tori forms a set of Lebesgue measure zero in phase space; nevertheless they are important in order to understand the dynamics and for extensions of KAM theory to PDEs.
To fix ideas, consider the normal form of a lower dimensional elliptic torus \[\tag{5} K(y,x,p,q;\xi):=E(\xi) + \omega(\xi)\cdot y + \frac12 \sum_{j=1}^m \Omega_j(\xi)(p_j^2+q_j^2)\] where \((y,x)\in\mathbb{R}^d\times \mathbb{T}^d\) are (partial) action-angle variables; \((p,q)\in\mathbb{R}^{2m}\) are conjugate variables; \(\Omega_j(\xi)>0\) and \(\xi\) is a real \(d\)-dimensional parameter (for example, \(\xi\) might be a fixed action \(y_0\) around which one is making a Taylor expansion). The set \(\mathcal{T}^d_0:=\{y=0\}\times\mathbb{T}^d\times\{p=0=q\}\) is an invariant \(d\)-dimensional torus for \(\phi_K^t\ :\) \(\phi_K^t(0,x_0,0,0)=(0,x_0+\omega(\xi)t, 0,0)\ .\) Such a torus is linearly stable (elliptic), and the dynamics close to it, in the \((p,q)\)-variables, is just given by harmonic oscillations with frequencies \(\Omega_j(\xi)\) (normal frequencies). Under suitable regularity and non-degeneracy assumptions (on the inner and normal frequencies) such tori are persistent. For example, let \(\xi\) vary in a closed set \(\Pi\) of positive \(d\)-dimensional Lebesgue measure; let \(\xi\to\omega(\xi)\) be a Lipschitz homeomorphism and let \(K\) and \(P(y,x,p,q;\xi)\) be real-analytic in the symplectic variables \((y,x,p,q)\) and Lipschitz continuous in \(\xi\ .\) Assume that \(\Omega_j(\xi)\neq\Omega_i(\xi)>0\) for all \(i\neq j\) and \(\xi\in \Pi\ .\) Assume also the following (Melnikov-Pöschel) condition \[{\rm meas}\,\Big(\{\xi\in\Pi: \omega(\xi)\cdot k + \Omega(\xi)\cdot \ell=0\}\Big)=0 \ ,\quad \forall\ k\in\mathbb{Z}^d\backslash\{0\}\ ,\forall\ \ell\in\mathbb{Z}^m\ {\rm with}\ \sum_{j=1}^m|\ell_j|\le 2\ .
\] Then, there exists \(\epsilon_*>0\) and a Cantor set \(\Pi_*\subset\Pi\) of positive measure such that to each \(\xi\in\Pi_*\) there corresponds, for any \(0<\epsilon<\epsilon_*\ ,\) a torus \(\mathcal{T}_\epsilon^d(\xi)\) invariant for \(\phi_{K+\epsilon P}^t\ .\) We remark that this kind of result admits many generalizations, which are particularly important for infinite-dimensional extensions. The partially hyperbolic case, whose normal form is given by (5) with \((p_j^2+q_j^2)\) replaced by \((p_j^2-q_j^2)\ ,\) is much simpler (as in this case the normal frequencies do not resonate with the inner ones); see (Graff, 1974). Hamiltonian PDEs KAM theory can be partially extended to infinite dimensions, i.e., to partial differential equations (PDEs) carrying a Hamiltonian structure. Examples of such equations are the wave equation, the (stationary) Schrödinger equation, KdV, etc. Under suitable hypotheses, nonlinear perturbations of these equations may be reduced to infinitely many coupled dynamical (ordinary differential) equations (e.g., for the wave equation one obtains infinitely many coupled harmonic oscillators). It is then possible to find quasi-periodic solutions corresponding to the embedding of a linear quasi-periodic flow on a finite-dimensional torus into the infinite-dimensional phase space associated to the equation. Also almost-periodic motions have been considered (i.e., trajectories with infinitely many independent frequencies). Several results in these directions have been obtained starting from the 1990's; see (Kuksin, 2004). • Arnold, V I (1963a). Proof of a Theorem by A. N. Kolmogorov on the invariance of quasi-periodic motions under small perturbations of the Hamiltonian. Russian Math. Survey 18: 13-40. • Arnold, V I (1963b). Small divisor problems in classical and Celestial Mechanics. Russian Math. Survey 18: 85-191. • Arnold, V I (1964). Instability of dynamical systems with many degrees of freedom. Dokl. Akad. Nauk SSSR 156: 9-12.
• Chierchia, L and Gallavotti, G (1982). Smooth prime integrals for quasi-integrable Hamiltonian systems. Il Nuovo Cimento B. Serie 11 67: 277-295. • Féjoz, J (2004). Démonstration du `théorème d'Arnol'd' sur la stabilité du système planétaire (d'après Herman). Ergodic Theory Dynam. Systems 5: 1521-1582. • Graff, S (1974). On the continuation of stable invariant tori for Hamiltonian systems. J. Differential Equations 15: 1-69. • Herman, M-R (1983). Sur les courbes invariantes par les difféomorphismes de l'anneau. Vol. 1. Astérisque 103: i+221. • Kolmogorov, A N (1954). On the conservation of conditionally periodic motions under small perturbation of the Hamiltonian. Dokl. Akad. Nauk SSSR 98: 527-530. • Kuksin, S B (2004). Fifteen years of KAM for PDE. Geometry, topology, and mathematical physics, Amer. Math. Soc. Transl. Ser. 2 212: 237-258. • Melnikov, V K (1965). On certain cases of conservation of almost periodic motions with a small change of the Hamiltonian function. Dokl. Akad. Nauk SSSR 165: 1245-1248. • Moser, J K (1962). On invariant curves of area-preserving mappings of an annulus. Nach. Akad. Wiss. Göttingen, Math. Phys. Kl. II 1: 1-20. • Moser, J K (1967). Convergent series expansions for quasi-periodic motions. Math. Ann. 169: 136-176. • Pöschel, J (1982). Integrability of Hamiltonian systems on Cantor sets. Comm. Pure Appl. Math. 35: 653-695. • Rüssmann, H (1970). Kleine Nenner. I. Über invariante Kurven differenzierbarer Abbildungen eines Kreisringes. Nach. Akad. Wiss. Göttingen, Math. Phys. Kl. II 1970: 67-105. • Rüssmann, H (1989). Nondegeneracy in the perturbation theory of integrable dynamical systems. Number theory and dynamical systems (York, 1987), London Math. Soc. Lecture Note Ser. 134: 5-18. Recommended reading • Arnol'd, V I; Kozlov, V V and Neishtadt, A I (2006). Mathematical Aspects of Classical and Celestial Mechanics, Dynamical Systems III. Series: Encyclopaedia of Mathematical Sciences.
Springer-Verlag 3rd ed. Vol. 3: xiv+518. • Moser, J K (1966). A rapidly convergent iteration method and non-linear partial differential equations. Ann. Scuola Norm. Sup. Pisa 20: 499-535. • Moser, J K (1973). Stable and random motions in dynamical systems. With special emphasis on celestial mechanics. Hermann Weyl Lectures, the Institute for Advanced Study, Princeton, N. J. Annals of Mathematics Studies 77: viii+198. See also Averaging, Aubry-Mather theory, Chaos, Computational celestial mechanics, Dynamical Systems, Hamiltonian Dynamics, Hamiltonian Normal Forms, N-Body Simulations, Normal Forms, Standard map, Symplectic maps, Three body problem
Nanoscale Research Letters, 13:103 Interband Photoconductivity of Metamorphic InAs/InGaAs Quantum Dots in the 1.3–1.55-μm Window • Sergii Golovynskyi • Oleksandr I. Datsenko • Luca Seravalli • Giovanna Trevisi • Paola Frigeri • Ivan S. Babichuk • Iuliia Golovynska • Junle Qu Open Access Nano Express Photoelectric properties of the metamorphic InAs/In x Ga1 − xAs quantum dot (QD) nanostructures were studied at room temperature, employing photoconductivity (PC) and photoluminescence spectroscopies, electrical measurements, and theoretical modeling. Four samples with different stoichiometry of the In x Ga1 − xAs cladding layer were grown: the indium content x was 0.15, 0.24, 0.28, and 0.31. The InAs/In0.15Ga0.85As QD structure was found to be photosensitive in the telecom range at 1.3 μm. As x increases, a redshift was observed for all the samples; the structure with x = 0.31 was found to be sensitive near 1.55 μm, i.e., in the third telecommunication window. Simultaneously, only a slight decrease in the QD PC was recorded for increasing x, confirming a good photoresponse comparable with that of In0.15Ga0.85As structures and of GaAs-based QD nanostructures. Also, the PC reduction correlates with a similar reduction of the photoluminescence intensity. By theoretically simulating the quantum energy system and carrier localization in QDs, we gained insight into the PC mechanism and suggest reasons for the photocurrent reduction, associating them with the peculiar behavior of defects in this type of structure. All this implies that metamorphic QDs with a high x are valid structures for optoelectronic infrared light-sensitive devices.
Keywords: Nanostructure; Quantum dot; Metamorphic InAs/InGaAs; Photoconductivity; Photoluminescence; Photocurrent Abbreviations: ε g — bandgap of the InGaAs confining layer; Ec and Ev — energies of the conduction and valence bands; MB — metamorphic buffer; QD — quantum dot; R L — load resistance; WL — wetting layer Metamorphic InAs/In x Ga1 − xAs QD nanostructures have attracted much interest in the last decade owing to many benefits [1, 2, 3, 4, 5, 6, 7]. Their most attractive feature is that, by growing the QDs on an InGaAs metamorphic buffer (MB), one can achieve a significant reduction of the transition energy between the QD levels [8] with respect to conventional In(Ga)As/GaAs QD structures. This occurs due to the decrease of the InAs QD bandgap as a result of the reduced lattice mismatch between the InAs QDs and the InGaAs buffer and, hence, of the strain in the QDs [9, 10, 11]. So, the application of a MB as the confining material allows one to shift the emission wavelength deeper into the infrared (IR) range, in particular into the telecommunication windows at 1.3 and 1.55 μm, while maintaining a high efficiency [4, 12, 13]. Furthermore, metamorphic QDs have shown interesting properties such as (i) a high QD density [14], (ii) the possibility to widely tune QD and wetting layer (WL) levels [10, 15], and (iii) good performance as active elements in light-emitting devices [16]. However, recent investigations of deep levels in metamorphic QDs showed that, although InAs/In0.15Ga0.85As QD structures have a total defect density close to the QD layer comparable to that of InGaAs/GaAs pseudomorphic QDs, metamorphic structures with higher x demonstrated higher defect densities [17, 18]. Metamorphic InAs QD structures have found successful applications in the design and fabrication of IR photonic and light-sensitive devices, such as lasers [19, 20], single-photon sources [3, 7, 21, 22], and solar cells [23, 24, 25].
In(Ga)As QD photodetectors based on interband and intersubband transitions are currently actively investigated for enhanced detection from the near-IR to the longwave-IR range owing to their response to irradiation at normal incidence [26, 27, 28, 29, 30]. For instance, the intersubband transitions of electrons between quantum-confined levels and continuum states can be engineered by embedding InAs QDs in InGaAs layers [29, 30, 31, 32], as this design allows one to tune the detection peak wavelength, to control the response by an externally applied bias, and to reduce the dark current [33, 34]. To date, there are no papers on the implementation of metamorphic QD structures in photodetectors. Key to the development of this area is the preservation of a high emission efficiency and photosensitivity in metamorphic QD structures, which need to be at least comparable with those of conventional InAs/GaAs QD structures [1, 5, 35]. Many studies have been carried out in the fundamental and application fields to develop the structure design [6, 14, 21], to improve photoelectric properties [5, 13], and to control/reduce strain-related defects in the heterostructures [4, 36, 37]. Hence, InAs/In x Ga1 − xAs metamorphic QD nanostructures are interesting nanostructures that can provide emission or photoresponse in the 1.3- and 1.55-μm IR ranges [1, 2, 3, 4, 5, 6, 7]. Furthermore, we reported earlier that vertical InAs/In0.15Ga0.85As QD structures can maintain a photosensitivity comparable to that of GaAs-based ones [5]. However, such metamorphic structures are seldom studied in photoelectric measurements with a lateral geometry, where the photocurrent proceeds through in-plane transport of carriers across channels between two top contacts. Commonly, the QD layers along with the associated WL form these conductivity channels in lateral-geometry GaAs-based structures [38].
Owing to this peculiar type of conductivity, QD photodetectors with lateral transport are believed to have potential for a high photoresponsivity [39, 40]. An in-depth study of metamorphic InAs/InGaAs QD nanostructures in the lateral configuration can provide fundamental knowledge about the photoconductivity (PC) mechanism and the efficiency of the in-plane carrier transport. In our recent paper devoted to the defects in metamorphic QD structures [17], we reported lateral PC measurements at low temperatures, considering only the IR spectral edges originating from defects. However, we believe that a proper characterization and fundamental investigation of the structures at room temperature can give valuable insights for further improvements of novel light-sensitive devices such as near-IR photodetectors, linear arrays, and camera matrices implementing metamorphic QDs. In the present work, we studied the in-plane photoelectric properties of metamorphic InAs/In x Ga1 − xAs QD nanostructures grown by molecular beam epitaxy with different In compositions x, employing PC and photoluminescence (PL) spectroscopies, lateral electrical measurements, and modeling calculations. In particular, we focused on the observation of a possible redshift of the QD layer photoresponse beyond 1.3 μm while preserving a photosensitivity similar to that of In0.15Ga0.85As- and GaAs-based QD light-sensitive structures. A high photosensitivity in the near-IR wavelength range at room temperature is an indication that these nanostructures can be useful not only for devices based on interband transitions but also for intersubband photodetectors working beyond 10 μm. Sample Preparation and Description The studied structures, schematically shown in Fig. 1, were grown by molecular beam epitaxy. First, a semi-insulating (100) GaAs substrate was covered by a 100-nm-thick GaAs buffer at 600 °C, followed by the deposition of an undoped InGaAs MB 500 nm in thickness at 490 °C.
Then, after a growth interruption of 210 s to cool down the substrate, 3.0 MLs (monolayers) of InAs were grown at 460 °C. Finally, these self-assembled QDs were covered by 20 nm of undoped In x Ga1 − xAs with the same stoichiometry as the MB. Four samples with different stoichiometry of the In x Ga1 − xAs cladding layer have been fabricated: the In content x was 0.15, 0.24, 0.28, and 0.31. Fig. 1 Color online. Scheme of the metamorphic InAs/In x Ga1 − xAs QD structures and their connection for the photoelectric measurements Theoretical Modeling For metamorphic structure design, as well as for understanding the energy profile, calculations of the quantum energy system composed of the In(Ga)As QDs, undoped MB, and cap layer were carried out using the Tibercad software [41], which we have demonstrated to be adequate for simulating the optical properties of semiconductor low-dimensional nanostructures [2, 15, 42]. We consider an InAs QD with a truncated conical shape and sizes taken from experimental atomic force microscopy data [14]; we include the presence of the InAs WL, whose parameters depend on the In x Ga1 − xAs metamorphic layer properties [15]. First, strain calculations for the structure are made, by calculating the strain tensor components of the QD induced by the mismatch fQD between the QD and the MB, defined as $$ {f}_{\mathrm{QD}}=\left[{a}_{\mathrm{InAs}}-{a}_{\mathrm{MB}}(x)\right]/{a}_{\mathrm{MB}}(x) $$ where aMB(x) is the lattice parameter of the In x Ga1 − xAs MB and aInAs is the lattice parameter of InAs. Then, the band profiles for the QDs and embedding layers depend on the deformation potentials of the relevant materials (InAs for QDs and WLs and relaxed InGaAs for the MB).
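As a quick numerical check of the mismatch definition above (an illustrative sketch of ours, not part of the paper's TiberCAD workflow: the Vegard's-law interpolation for aMB(x) and the lattice constants are our own assumptions based on standard literature values):

```python
# Illustrative sketch (not from the paper): evaluate f_QD with a_MB(x)
# interpolated linearly between GaAs and InAs (Vegard's law).
# Lattice constants are standard room-temperature literature values (angstrom).
A_GAAS = 5.6533
A_INAS = 6.0583

def a_mb(x):
    """Vegard's-law lattice parameter of the In_xGa_{1-x}As metamorphic buffer."""
    return (1.0 - x) * A_GAAS + x * A_INAS

def f_qd(x):
    """Mismatch f_QD = (a_InAs - a_MB(x)) / a_MB(x)."""
    return (A_INAS - a_mb(x)) / a_mb(x)

for x in (0.15, 0.24, 0.28, 0.31):
    print(f"x = {x:.2f}: f_QD = {100 * f_qd(x):.2f}%")
```

The mismatch falls from about 6.0% at x = 0.15 to about 4.8% at x = 0.31, in line with the strain (and hence QD bandgap) reduction invoked in the text to explain the redshift.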
Finally, the Schrödinger equation $$ \widehat{H}\psi = E\psi $$ is solved in the envelope function approximation by a single-band, effective-mass approach for electrons and a six-band k·p approach for holes, where the 3D Hamiltonian is $$ \widehat{H}=-\frac{\hbar^2}{2}{\nabla}_{\mathbf{r}}\left(\frac{1}{m\left(E,\mathbf{r}\right)}\right){\nabla}_{\mathbf{r}}+V\left(\mathbf{r}\right), $$ with V(r) being the 3D potential. Such an approximation is considered satisfactory for QD ground state calculations [2]. Ground levels for electrons and heavy holes are thus obtained, alongside their probability densities. Photoluminescence emission energies were derived by taking the energy difference between the confined levels for electrons and heavy holes, reduced by 20 meV to account for excitonic effects. A more detailed description of the model calculations can be found in Ref. [2]. Photoelectric Characterization For the lateral photoelectric measurements, two InGa eutectic surface contacts were deposited on 5 × 2 mm pieces of the structures. The measured linear IV characteristics shown in Fig. 2 confirmed that the contacts were ohmic. The current flowing through the samples was measured by a Siglent SDM3055 multimeter, using a standard dc technique [43, 44], as a voltage drop across a series load resistance R L of 1 MΩ, which was much less than the sample resistance. The photocurrent was excited by the light of a 250-W halogen lamp dispersed with a prism monochromator, and PC spectra were recorded in the range from 0.6 to 1.6 eV [44, 45, 46]. The spectra were normalized to the excitation quanta number of the light source. PL spectra were obtained using a 532-nm laser as an excitation source with a power density of 5 W/cm2. All the measurements were carried out at room temperature (300 K). Fig. 2 Color online.
IV characteristics of the InAs/In x Ga1 − xAs structures with x = 0.15 (a), 0.24 (b), 0.28 (c), and 0.31 (d) in the dark (black) and under an illumination of 350 μW/cm2 (color) at the energies of the PL spectrum peak (QD excitation) and 1.3 eV (effective absorption in InGaAs). Insets: photocurrent dependences on bias voltage Results and Discussion PC spectra of the studied metamorphic InAs/In x Ga1 − xAs QD structures at room temperature are given in Fig. 3 together with the PL bands, which show the optical transitions between the QD ground states. The relative intensities and positions of the PL bands are also shown in Fig. 4b. Features due to the QDs, InGaAs confining layers, and GaAs bottom layers are observed in the PC curves. The photocurrent signal at energies below the PL band onsets can be related to the structure defects detected earlier [17]. Fig. 3 Color online. PC spectra of the metamorphic InAs/In x Ga1 − xAs structures at room temperature and a bias of 11 V for x = 0.15 (a), 0.24 (b), 0.28 (c), and 0.31 (d). The excitation intensities for the black, red, and blue curves at 1.3 eV correspond to 88, 350, and 1400 μW/cm2, respectively. PL spectra in arbitrary units are given for the energy positioning of the QD ground state transitions. The vertical arrows mark the InGaAs bandgaps (ε g ) calculated following Paul et al. [48] and the spectral positions where the PC dependences on excitation intensity were measured (given in Fig. 5) Fig. 4 Color online. Modeling calculations for the metamorphic InAs/In x Ga1 − xAs QD structures: a band profiles in the structures with different x along the growth axis; b the real QD PL bands and their calculated peak positions (dashed verticals); and c probability densities of the confined electrons and holes for the InAs/In0.15Ga0.85As QD.
All the calculations of the modeled structures were carried out for 300 K The investigated metamorphic InAs/In0.15Ga0.85As QD structure was found to be photosensitive in the telecom range at 0.95 eV (1.3 μm) (Fig. 3a). As x increased, a redshift was observed for all the samples: the structure with x = 0.31 was found to be sensitive near 0.8 eV (1.55 μm) (Fig. 3d), i.e., in the third telecom window [47]. The shift is related to the reduction of the lattice mismatch between the materials of the InAs QD and the In x Ga1 − xAs buffer with an increase in x and, hence, a decrease in the strain in the QDs. This leads to a narrowing of the InAs QD bandgap and, in turn, to the redshift of the PL band as well as of the photoresponse onset toward the IR [1, 2, 3, 4, 5, 6, 19, 35]. Simultaneously, only a slight decrease in the QD photocurrent signal was recorded, confirming the preservation of a good photoresponsivity, comparable with that of the In0.15Ga0.85As sample. As we discussed recently [5], metamorphic QD structures with x = 0.15 show a photoresponse very similar to that of pseudomorphic InAs/GaAs QD nanostructures. Also, the PC reduction correlates with the PL one, as can be seen in Fig. 3. This effect is most notable in Fig. 2, which shows the IV dependences in the dark and under illumination at different characteristic spectral points, together with the photocurrent dependences on bias voltage in the insets. As in Fig. 3, the photocurrent value denotes only the photoinduced part of the current, obtained by subtracting the dark current from the total current under illumination. These spectral points are the PL band maxima and 1.3 eV, where an effective band-to-band absorption in the InGaAs MB occurs. As for the dark IV characteristics, these dependences are linear within the experimental error. The best photoresponse was measured in the structure with the minimal In content in the confining layers.
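The measurement arithmetic used here (current inferred from the voltage drop across the 1-MΩ series load quoted in the experimental section; photocurrent as the light-on current minus the dark current) reduces to the following minimal sketch, with purely illustrative voltage values of our own:

```python
# Minimal sketch of the dc photocurrent arithmetic described in the text.
# R_L is the series load resistance quoted in the experimental section;
# the example voltage drops below are illustrative, not measured data.
R_L = 1.0e6  # ohm

def current_from_drop(v_drop):
    """Sample current (A) inferred from the voltage drop across R_L."""
    return v_drop / R_L

def photocurrent(v_light, v_dark):
    """Photoinduced current: total under illumination minus dark current."""
    return current_from_drop(v_light) - current_from_drop(v_dark)

# e.g. 0.30 V across R_L under illumination vs 0.05 V in the dark
print(f"photocurrent = {1e9 * photocurrent(0.30, 0.05):.0f} nA")
```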
It also had the lowest dark current. The photocurrent at the applied excitation level (350 μW/cm2) in the InAs/In0.15Ga0.85As structure was two to three times above the dark current when the MB was pumped. The photoresponse at QD excitation was comparable to the dark current; however, it should be considered that our structures had only one QD layer. Fabrication of multilayered QD structures would surely lead to a significant increase in the IR photoresponse. The other structures with higher x revealed lower photocurrent signals; the detected magnitudes at both spectral points were approximately an order of magnitude lower than the dark current values over a wide range of applied voltage. The lowest photoresponse was found for the InAs/In0.31Ga0.69As structure with the maximal MB In content. Most probably, this photoresponsivity decrease is related to an increase in the MB defect density with x, which was determined earlier for these structures by deep level thermally stimulated current spectroscopy [17] and correlates well with structural analysis of such nanostructures [1]. We have reported that the InAs/In0.15Ga0.85As QD structure had a total defect density close to the QD layer comparable to that of InGaAs/GaAs ones, whereas the other structures with higher In contents demonstrated higher densities of defects, such as the known GaAs-related point defect complexes EL2, EL6, EL7, EL9, and EL10 near the QD layer and three levels attributed to extended defects propagating through the buffer. Regarding the spectrum shape (Fig. 3), above the QD excitation, light absorption and, hence, carrier generation occur mainly in the MB at energies above the InGaAs confining layer bandgap ε g , whose values for different x were estimated by an empirical formula [48]. However, it is noteworthy that an increase in photon energy above ε g leads to a slight decrease of the photoresponse.
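The ε g values in question come from the empirical formula of Paul et al. [48]; a commonly quoted room-temperature form of that expression is sketched below (the polynomial coefficients are our assumption and should be checked against the original reference):

```python
# Hedged sketch: a commonly quoted room-temperature empirical bandgap of
# relaxed In_xGa_{1-x}As. The polynomial coefficients are an assumption to be
# verified against the Paul et al. reference cited in the text.
def eg_ingaas(x):
    """Approximate bandgap (eV) of relaxed In_xGa_{1-x}As at 300 K."""
    return 1.425 - 1.501 * x + 0.436 * x * x

for x in (0.15, 0.24, 0.28, 0.31):
    eg = eg_ingaas(x)
    print(f"x = {x:.2f}: eg ~ {eg:.2f} eV ({1239.84 / eg:.0f} nm)")
```

With these coefficients, ε g decreases from roughly 1.21 eV at x = 0.15 to roughly 1.00 eV at x = 0.31, consistent with the redshifting arrows in Fig. 3.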
Naturally, this confirms that metamorphic QDs, despite being effective recombination centers [1, 2, 12, 22], are more efficient contributors to the photocurrent than the MB [5, 6, 23]. To understand the mechanism behind this peculiarity, one should look at Fig. 4a, where we show the calculated QD band profiles along the growth direction for our samples. The calculations are validated by the computed quantum energy levels for electrons and holes: the expected PL emission energies are in agreement with the PL QD ground state transitions measured experimentally (Fig. 4b). In Fig. 4c, we show the simulated probability densities for confined electrons and holes, obtained from the carrier wavefunctions calculated with the Tibercad modeling, which indicate a higher degree of localization for heavy holes in comparison with electrons. In order to contribute to the photocurrent signal, electron-hole pairs generated by QD interband absorption have to escape from the QDs by thermal emission. In a previous study [49], it was established that in metamorphic QDs electrons and heavy holes escape simultaneously from the QDs as correlated pairs. Moreover, it was also demonstrated that the activation energy for such a process corresponds to the sum of the activation energies for the two particles [50]. While studying the thermal quenching of PL emission from metamorphic QDs [10, 51], we showed that such activation energies are equal to the sum of the energy distances between the WL levels and the QD states and range from 250 meV for x = 0.15 down to 150 meV for x = 0.31. As widely discussed in Ref. [51], these values cause a strong quenching of the PL emission at room temperature via the thermal escape of confined carriers. On this basis, we can infer that carriers excited in QDs can thermally escape to the WL and MB: there, electrons and heavy holes are separated by the band bending in the QD vicinity (Fig.
4a), which promotes the trapping of holes back into the QDs and, being a barrier for electrons, thereby effectively suppresses their radiative recombination. As a consequence, heavy holes are concentrated at the QD periphery (Fig. 4c), whereas electrons are free to move along the potential well of the WL and MB, contributing to the conductivity. It is worth noting that, as discussed in Ref. [49], although correlated during the escape process, the carriers cannot be considered as excitons at room temperature; hence, they can be easily separated by the band bending in the vicinity of the QDs. In contrast, when the MB is excited, non-equilibrium holes are generated in the confining layers and do recombine with electrons. It should be mentioned here that the WL is known to be a conductivity channel in GaAs-based nanostructures [52] and that, in our lateral structures designed with surface contacts, there is no heterojunction, so carriers are efficiently collected near the surface plane. In Fig. 3, the fall of the PC signal just above ε g turns into a rise at higher energies, e.g., above 1.3 or 1.1 eV for the samples with x of 0.15 or 0.31, respectively. This is conceivably caused by optical absorption closer to the surface and the QD layer, thus involving shallower traps. As established for these structures by thermally stimulated current spectroscopy and deep level transient spectroscopy [17, 18], the deeper electron traps are located mainly in the InGaAs MB layer, whereas the shallower ones are concentrated near the surface (for these samples, near the QD layer). The electrons trapped in the shallower traps can more easily escape back to the conduction band at room temperature. Thus, free electrons near the QD layer are more mobile than those excited deeper in the MB and, hence, give a higher contribution to the charge transfer. Furthermore, the electrons generated near the surface can freely transfer to the WL conductivity channel.
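The quoted activation energies (250 meV for x = 0.15 down to 150 meV for x = 0.31) make thermal escape dramatically easier at high x. Assuming a simple Boltzmann factor with equal attempt frequencies (a back-of-envelope model of ours, not the paper's analysis), the escape rates compare as:

```python
# Back-of-envelope sketch (our assumption, not the paper's model): relative
# QD-to-WL thermal escape rates at 300 K for the two activation energies
# quoted in the text, taking rate ~ exp(-Ea / kT) with equal prefactors.
import math

K_B = 8.617333e-5  # Boltzmann constant, eV/K
T = 300.0          # K

def boltzmann_factor(ea_eV):
    return math.exp(-ea_eV / (K_B * T))

ratio = boltzmann_factor(0.150) / boltzmann_factor(0.250)  # x = 0.31 vs 0.15
print(f"escape-rate ratio (150 meV vs 250 meV at 300 K): ~{ratio:.0f}x")
```

Under this crude assumption the x = 0.31 structure loses confined carriers roughly fifty times faster, consistent with the strong room-temperature PL quenching discussed above.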
A similar drop of the photocurrent was observed as the photon energy increased above the GaAs bandgap (near 1.4 eV). This effect might be due to carrier generation close to the InGaAs/GaAs interface, which is known to have a higher density of defect states acting as traps and recombination centers. The relative contribution of different optical transitions to the structure photoresponse varied with pumping intensity. This is best seen in Fig. 5, which shows photocurrent values as a function of the excitation intensity at different characteristic spectral points: the onset of the PL band (resonant excitation of the QD ensemble) and efficient band-to-band absorption in InGaAs (1.3 eV) and GaAs (1.5 eV). Fig. 5 Color online. Photocurrent vs excitation intensity for the InAs/In x Ga1 − xAs structures with a x = 0.15 and b 0.31. The lines are fits by functions f(x) ~ x^α The structures with different In contents in the confining layers demonstrated similar dependences in equivalent spectral ranges. Thus, the band-to-band excitation in GaAs (1.5 eV) shows a quadratic dependence over most of the intensity range. This is typical for the band-to-band recombination of non-equilibrium charge carriers, for instance when they greatly outnumber the equilibrium carriers [53]: this is expected in our undoped structures. The dependences in the case of excitation in the QDs and InGaAs confining layers are very similar to each other but different from those for GaAs. They are linear at low excitation intensities and become sublinear at higher intensities. This behavior points to carrier recombination involving Shockley-Read centers: the linear dependence becomes sublinear as some of the centers are saturated at higher carrier generation rates [54].
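The exponent α of fits like f(x) ~ x^α can be extracted by a straight-line fit in log-log coordinates. The sketch below demonstrates the procedure on synthetic data with α = 0.5 (i.e., a sublinear dependence), not on the measured curves of Fig. 5:

```python
# Sketch of the power-law fitting used for Fig. 5: the exponent alpha is the
# slope of log(PC) vs log(I). The data below are synthetic (exactly
# PC ~ I**0.5), not measurements from the paper.
import numpy as np

def power_law_exponent(intensity, photocurrent):
    """Least-squares exponent alpha of PC(I) ~ I**alpha."""
    alpha, _ = np.polyfit(np.log(intensity), np.log(photocurrent), 1)
    return alpha

I = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])  # uW/cm^2, synthetic grid
pc = 2.0 * np.sqrt(I)                             # synthetic sublinear response
print(f"fitted alpha = {power_law_exponent(I, pc):.2f}")
```

On real data a fitted α near 1 indicates the linear (low-intensity) regime, α < 1 the sublinear Shockley-Read-limited regime, and α near 2 the quadratic band-to-band regime discussed in the text.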
These results of the intensity-dependent measurements distinctly indicate an efficient generation of the main charge carriers at a relatively low recombination rate in the QD embedding layers and a much higher density of recombination centers in the GaAs layers. For example, under QD excitation in similar characterizations, InGaAs/GaAs QD photosensitive structures showed a dependence on intensity of PC(I) ~ I^0.25, which occurred due to a high rate of non-radiative recombination through defect levels along with QD radiative recombination [40, 55]. However, it is worth noting that the InGaAs/GaAs structure was multilayered, having seven QD layers. From these measurements and their interpretation, some indications for the use of metamorphic QDs for IR detection can be highlighted: (i) when using x > 0.15, advanced designs allowing to control strain-related defects should be used, similar to what was done for the development of metamorphic QD lasers [19, 20, 37]; (ii) multilayer stacks of QDs (with a minimum of 10 layers) are needed to obtain a QD PC above the dark current [27, 56]; and (iii) as a higher confinement of heavy holes is beneficial for the photocurrent obtained when exciting QDs, advanced designs with higher-gap barriers for heavy holes could be considered [51, 57]. Hence, these findings can be very useful for the design of metamorphic QDs aiming at IR detection and for the development of metamorphic QD photodetectors. Photoelectric properties of the metamorphic InAs/In x Ga1 − xAs QD nanostructures were studied at room temperature, employing PC and PL spectroscopies, electrical measurements, and theoretical model simulations. The studied metamorphic InAs/In x Ga1 − xAs QD nanostructures were found to be photosensitive in the telecommunication windows at 1.3 (x = 0.15) and 1.55 μm (x = 0.31).
However, the QD PC and PL efficiencies of the structures with higher In contents in the MB were found to be lower, yet still comparable to those of the InAs/In0.15Ga0.85As structure, which has a sensitivity similar to that of InGaAs/GaAs QD structures. This photoresponsivity reduction is related to an increase in the MB defect density with x. Also, thanks to modeling calculations, we provided insights into the PC mechanism in the investigated type of QD structures. All this implies that metamorphic QDs with a high x are valid structures for optoelectronic IR light-sensitive devices, provided that some points of concern are addressed by optimizing the nanostructure design. This work was supported in part by the COST Action "Nanoscale Quantum Optics" of the European Union; the National Natural Science Foundation of China (61525503, 61620106016, 81727804); the National Basic Research Program of China (2015CB352005); the Guangdong Natural Science Foundation Innovation Team (2014A030312008); Shenzhen Basic Research Projects (JCYJ20150930104948169, JCYJ20160328144746940, GJHZ20160226202139185, JCYJ20170412105003520); and the Ministry of Education and Science of Ukraine. Availability of Data and Materials All data are fully available without restriction. Authors’ Contributions SG, OD, and LS proposed and guided the overall project. LS, GT, and PF designed and grew the samples, measured the luminescence spectra, and carried out the calculations. SG and OD made the electrical contacts and the technical part. SG, OD, ISB, and IG performed the photoelectrical measurements. SG, OD, and LS wrote the manuscript, with contributions from all authors. JQ participated in the discussions and edited the manuscript. LS and JQ provided managerial support, supervising the research. All authors reviewed and approved the final manuscript.
Authors’ Information SG (Ph.D.), ISB (Ph.D.), IG (Ph.D.), and JQ (professor) are from the Key Laboratory of Optoelectronic Devices and Systems of the Ministry of Education and Guangdong Province, College of Optoelectronic Engineering, Shenzhen University, Shenzhen 518060, People’s Republic of China. SG and ISB are visiting researchers at the Institute of Semiconductor Physics, National Academy of Sciences, Kyiv 03028, Ukraine. OD (Ph.D.) is from the Department of Physics, Taras Shevchenko National University of Kyiv, Kyiv 01601, Ukraine. LS (Ph.D.), GT (Ph.D.), and PF (Ph.D.) are from the Institute of Materials for Electronics and Magnetism, CNR-IMEM, 43100 Parma, Italy. Competing Interests The authors declare that they have no competing interests. 1. Seravalli L, Frigeri P, Nasi L, Trevisi G, Bocchi C (2010) Metamorphic quantum dots: quite different nanostructures. J Appl Phys 108:064324 2. Gioannini M, Cedola AP, Di Santo N, Bertazzi F, Cappelluti F (2013) Simulation of quantum dot solar cells including carrier intersubband dynamics and transport. IEEE J Photovoltaics 3:1271 3. Munoz-Matutano G, Barrera D, Fernandez-Pousa CR, Chulia-Jordan R, Seravalli L, Trevisi G et al (2016) All-optical fiber Hanbury Brown & Twiss interferometer to study 1300 nm single photon emission of a metamorphic InAs quantum dot. Sci Rep 6:27214 4. Semenova ES, Zhukov AE, Mikhrin SS, Egorov AY, Odnoblyudov VA, Vasil’ev AP et al (2004) Metamorphic growth for application in long-wavelength (1.3-1.55 μm) lasers and MODFET-type structures on GaAs substrates. Nanotechnology 15:S283–S287 5. Golovynskyi S, Seravalli L, Datsenko O, Trevisi G, Frigeri P, Gombia E et al (2017) Comparative study of photoelectric properties of metamorphic InAs/InGaAs and InAs/GaAs quantum dot structures. Nanoscale Res Lett 12:335 6.
Golovynskyi S, Seravalli L, Datsenko O, Kozak O, Kondratenko SV, Trevisi G et al (2017) Bipolar effects in photovoltage of metamorphic InAs/InGaAs/GaAs quantum dot heterostructures: characterization and design solutions for light-sensitive devices. Nanoscale Res Lett 12:559 7. Paul M, Olbrich F, Höschele J, Schreier S, Kettler J, Portalupi SL et al (2017) Single-photon emission at 1.55 μm from MOVPE-grown InAs quantum dots on InGaAs/GaAs metamorphic buffers. Appl Phys Lett 111:033102 8. Trevisi G, Seravalli L, Frigeri P, Prezioso M, Rimada JC, Gombia E et al (2009) The effects of quantum dot coverage in InAs/(In)GaAs nanostructures for long wavelength emission. Microelectron J 40:465–468 9. Seravalli L, Minelli M, Frigeri P, Allegri P, Avanzini V, Franchi S (2003) The effect of strain on tuning of light emission energy of InAs/InGaAs quantum dot nanostructures. Appl Phys Lett 82:2341–2343 10. Seravalli L, Minelli M, Frigeri P, Franchi S, Guizzetti G, Patrini M et al (2007) Quantum dot strain engineering of InAs/InGaAs nanostructures. J Appl Phys 101:024313 11. Wang P, Chen QM, Wu XY, Cao CF, Wang SM, Gong Q (2016) Detailed study of the influence of InGaAs matrix on the strain reduction in the InAs dot-in-well structure. Nanoscale Res Lett 11:119 12. Seravalli L, Frigeri P, Trevisi G, Franchi S (2008) 1.59 μm room temperature emission from metamorphic InAs/InGaAs quantum dots grown on GaAs substrates. Appl Phys Lett 92:213104 13. Golovynskyi SL, Seravalli L, Trevisi G, Frigeri P, Gombia E, Dacenko OI et al (2015) Photoelectric properties of the metamorphic InAs/InGaAs quantum dot structure at room temperature. J Appl Phys 117:214312 14. Seravalli L, Trevisi G, Frigeri P (2012) 2D-3D growth transition in metamorphic InAs/InGaAs quantum dots.
CrystEngComm 14:1155–1160CrossRefGoogle Scholar 15. 15. Seravalli L, Trevisi G, Frigeri P (2013) Calculation of metamorphic two-dimensional quantum energy system: application to wetting layer states in InAs/InGaAs metamorphic quantum dot nanostructures. J Appl Phys 114:184309CrossRefGoogle Scholar 16. 16. Mi Z, Wu C, Yang J, Bhattacharya P (2008) Molecular beam epitaxial growth and characteristics of 1.52 μm metamorphic InAs quantum dot lasers on GaAs. J Vac Sci Technol 26:1153CrossRefGoogle Scholar 17. 17. Golovynskyi S, Datsenko O, Seravalli L, Kozak O, Trevisi G, Frigeri P et al (2017) Deep levels in metamorphic InAs/InGaAs quantum dot structures with different composition of the embedding layers. Semicond Sci Technol 32:125001CrossRefGoogle Scholar 18. 18. Rimada JC, Prezioso M, Nasi L, Gombia E, Mosca R, Trevisi G et al (2009) Electrical and structural characterization of InAs/InGaAs quantum dot structures on GaAs. Mater Sci Eng B-Adv 165:111–114CrossRefGoogle Scholar 19. 19. Mi Z, Bhattacharya P (2008) Pseudomorphic and metamorphic quantum dot heterostructures for long-wavelength lasers on GaAs and Si (invited paper). IEEE J Sel Top Quant 14:1171–1179CrossRefGoogle Scholar 20. 20. Karachinsky LY, Kettler T, Novikov II, Shernyakov YM, Gordeev NY, Maximov MV et al (2006) Metamorphic 1.5 μm-range quantum dot lasers on a GaAs substrate. Semicond Sci Technol 21:691–696CrossRefGoogle Scholar 21. 21. Seravalli L, Trevisi G, Frigeri P (2012) Design and growth of metamorphic InAs/InGaAs quantum dots for single photon emission in the telecom window. CrystEngComm 14:6833–6838CrossRefGoogle Scholar 22. 22. Seravalli L, Trevisi G, Frigeri P, Rivas D, Munoz-Matutano G, Suarez I et al (2011) Single quantum dot emission at telecom wavelengths from metamorphic InAs/InGaAs nanostructures grown on GaAs substrates. Appl Phys Lett 98:173112CrossRefGoogle Scholar 23. 23. 
Azeza B, Alouane MHH, Ilahi B, Patriarche G, Sfaxi L, Fouzri A et al (2015) Towards InAs/InGaAs/GaAs quantum dot solar cells directly grown on Si substrate. Materials 8:4544–4552CrossRefGoogle Scholar 24. 24. Rouis W, Haggui M, Rekaya S, Sfaxi L, M’ghaieth R, Maaref H et al (2016) Local photocurrent mapping of InAs/InGaAs/GaPts intermediate-band solar cells using scanning near-field optical microscopy. Sol Energ Mat Sol C 144:324–330CrossRefGoogle Scholar 25. 25. Han IS, Kim JS, Kim JO, Noh SK, Lee SJ (2016) Fabrication and characterization of InAs/InGaAs sub-monolayer quantum dot solar cell with dot-in-a-well structure. Curr Appl Phys 16:587–592CrossRefGoogle Scholar 26. 26. Wu J, Chen SM, Seeds A, Liu HY (2015) Quantum dot optoelectronic devices: lasers, photodetectors and solar cells. J Phys D Appl Phys 48:363001CrossRefGoogle Scholar 27. 27. Passmore BS, Jiang W, Manasreh MO, Kunets VP, Lytvyn PM, Salamo GJ (2008) Room temperature near-infrared photoresponse based on interband transitions in In0.35Ga0.65As multiple quantum dot photodetector. IEEE Electron Device Lett 29:224–227CrossRefGoogle Scholar 28. 28. Kondratenko SV, Iliash SA, Vakulenko OV, Mazur YI, Benamara M, Marega E et al (2017) Photoconductivity relaxation mechanisms of InGaAs/GaAs quantum dot chain structures. Nanoscale Res Lett 12:183CrossRefGoogle Scholar 29. 29. Shao J, Vandervelde TE, Barve A, Stintz A, Krishna S (2012) Increased normal incidence photocurrent in quantum dot infrared photodetectors. Appl Phys Lett 101:241114CrossRefGoogle Scholar 30. 30. Vaillancourt J, Stintz A, Meisner MJ, Lu XJ (2009) Low-bias, high-temperature operation of an InAs-InGaAs quantum-dot infrared photodetector with peak-detection wavelength of 11.7 μm. Infrared Phys Technol 52:22–24CrossRefGoogle Scholar 31. 31. Lu X, Meisner MJ, Vaillancourt J, Li J, Liu W, Qian X, Goodhue WD (2007) Modulation-doped InAs-InGaAs quantum dot longwave infrared photodetector with high quantum efficiency. 
Electron Lett 43:589–590CrossRefGoogle Scholar 32. 32. Lu XJ, Vaillancourt J, Meisner MJ, Stintz A (2007) Long wave infrared InAs-InGaAs quantum-dot infrared photodetector with high operating temperature over 170K. J Phys D Appl Phys 40:5878–5882CrossRefGoogle Scholar 33. 33. Lin W-H, Chao K-P, Tseng C-C, Mai S-C, Lin S-Y, Wu M-C (2009) The influence of In composition on InGaAs-capped InAs/GaAs quantum-dot infrared photodetectors. J Appl Phys 106:054512CrossRefGoogle Scholar 34. 34. Nedzinskas R, Čechavičius B, Rimkus A, Pozingytė E, Kavaliauskas J, Valušis G et al (2015) Temperature-dependent modulated reflectance of InAs/InGaAs/GaAs quantum dots-in-a-well infrared photodetectors. J Appl Phys 117:144304CrossRefGoogle Scholar 35. 35. Seravalli L, Frigeri P, Minelli M, Franchi S, Allegri P, Avanzini V (2006) Metamorphic self-assembled quantum dot nanostructures. Mat Sci Eng C-Bio S 26:731–734CrossRefGoogle Scholar 36. 36. Mi Z, Bhattacharya P, Yang J (2006) Growth and characteristics of ultralow threshold 1.45 μm metamorphic InAs tunnel injection quantum dot lasers on GaAs. Appl Phys Lett 89:153109CrossRefGoogle Scholar 37. 37. Mazzucato S, Nardin D, Capizzi M, Polimeni A, Frova A, Seravalli L et al (2005) Defect passivation in strain engineered InAs/(InGa)As quantum dots. Mat Sci Eng C-Bio S 25:830–834CrossRefGoogle Scholar 38. 38. Kunets Vas P, Rebello Sousa Dias M, Rembert T, Ware ME, Mazur Yu I, Lopez-Richard V et al (2013) Electron transport in quantum dot chains: dimensionality effects and hopping conductance. J Appl Phys 113:183709CrossRefGoogle Scholar 39. 39. Towe E, Pan D (2000) Semiconductor quantum-dot nanostructures: their application in a new class of infrared photodetectors. J Sel Top Quantum Electron 6:408CrossRefGoogle Scholar 40. 40. Golovynskyi SL, Dacenko OI, Kondratenko SV, Lavoryk SR, Mazur YI, Wang ZM et al (2016) Intensity-dependent nonlinearity of the lateral photoconductivity in InGaAs/GaAs dot-chain structures. 
J Appl Phys 119:184303CrossRefGoogle Scholar 41. 41. Auf der Maur M, Penazzi G, Romano G, Sacconi F, Pecchia A, Di Carlo A (2011) The multiscale paradigm in electronic device simulation. IEEE Trans Electron Devices 58:1425–1432CrossRefGoogle Scholar 42. 42. Trevisi G, Seravalli L, Frigeri P (2016) Photoluminescence monitoring of oxide formation and surface state passivation on InAs quantum dots exposed to water vapor. Nano Res 9:3018–3026CrossRefGoogle Scholar 43. 43. Kondratenko SV, Vakulenko OV, Mazur YI, Dorogan VG, Marega E, Benamara M et al (2014) Deep level centers and their role in photoconductivity transients of InGaAs/GaAs quantum dot chains. J Appl Phys 116:193707CrossRefGoogle Scholar 44. 44. Vakulenko OV, Golovynskyi SL, Kondratenko SV (2011) Effect of carrier capture by deep levels on lateral photoconductivity of InGaAs/GaAs quantum dot structures. J Appl Phys 110:043717CrossRefGoogle Scholar 45. 45. Kondratenko SV, Golovinskiy SL, Vakulenko OV, Kozyrev YN, Rubezhanska MY, Vodyanitsky AI (2007) Photocurrent spectroscopy of indirect transitions in Ge/Si multilayer quantum dots at room temperature. Surf Sci 601:L45–LL8CrossRefGoogle Scholar 46. 46. Valakh MY, Dzhagan VM, Yukhymchuk VO, Vakulenko OV, Kondratenko SV, Nikolenko AS (2007) Optical and photoelectrical properties of GeSi nanoislands. Semicond Sci Technol 22:326–329CrossRefGoogle Scholar 47. 47. Song H-Z, Hadi M, Zheng Y, Shen B, Zhang L, Ren Z et al (2017) InGaAsP/InP nanocavity for single-photon source at 1.55-μm telecommunication band. Nanoscale Res Lett 12:128CrossRefGoogle Scholar 48. 48. Paul S, Roy JB, Basu PK (1991) Empirical expressions for the alloy composition and temperature-dependence of the bandbap and intrinsic carrier density in GaxIn1-xAs. J Appl Phys 69:827–829CrossRefGoogle Scholar 49. 49. Sanguinetti S, Colombo D, Guzzi M, Grilli E, Gurioli M, Seravalli L et al (2006) Carrier thermodynamics in InAs/InxGa1−xAs quantum dots. Phys Rev B 74:205302CrossRefGoogle Scholar 50. 50. 
Le Ru EC, Fack J, Murray R (2003) Temperature and excitation density dependence of the photoluminescence from annealed InAs/GaAs quantum dots. Phys Rev B 67:1–12Google Scholar 51. 51. Seravalli L, Trevisi G, Frigeri P, Franchi S, Geddo M, Guizzetti G (2009) The role of wetting layer states on the emission efficiency of InAs/InGaAs metamorphic quantum dot nanostructures. Nanotechnology 20:275703CrossRefGoogle Scholar 52. 52. Danil’tsev VM, Drozdov MN, Moldavskaya LD, Shashkin VI, Germanenko AV, Min’kov GM et al (2004) Electron transport effects in the IR photoconductivity of InGaAs/GaAs structures with quantum dots. Tech Phys Lett 30:795–798CrossRefGoogle Scholar 53. 53. Sze SM, Ng KK (2006) Physics of semiconductor devices. Wiley-Interscience, New JerseyCrossRefGoogle Scholar 54. 54. Duboc CA (1955) Nonlinearity in photoconducting phosphors. Br J Appl Phys 6:107–111CrossRefGoogle Scholar 55. 55. Golovynskyi SL, Mazur YI, Wang ZM, Ware ME, Vakulenko OV, Tarasov GG et al (2014) Excitation intensity dependence of lateral photocurrent in InGaAs/GaAs dot-chain structures. Phys Lett A 378:2622–2626CrossRefGoogle Scholar 56. 56. Ezzedini M, Hidouri T, Alouane MHH, Sayari A, Shalaan E, Chauvin N et al (2017) Detecting spatially localized exciton in self-organized InAs/InGaAs quantum dot superlattices: a way to improve the photovoltaic efficiency. Nanoscale Res Lett 12:450CrossRefGoogle Scholar 57. 57. Seravalli L, Frigeri P, Allegri P, Avanzini V, Franchi S (2007) Metamorphic quantum dot nanostructures for long wavelength operation with enhanced emission efficiency. Mater Sci Eng C 27:1046–1051CrossRefGoogle Scholar Copyright information © The Author(s). 2018 Authors and Affiliations 1. 1.Key Laboratory of Optoelectronic Devices and Systems of Ministry of Education and Guangdong Province, College of Optoelectronic EngineeringShenzhen UniversityShenzhenPeople’s Republic of China 2. 2.Institute of Semiconductor PhysicsNAS of UkraineKyivUkraine 3. 
3.Department of PhysicsTaras Shevchenko National University of KyivKyivUkraine 4. 4.Institute of Materials for Electronics and Magnetism, CNR-IMEMParmaItaly Personalised recommendations
Physicists Confirm That Time Moves Forward Even in The Quantum World

4 DEC 2015

For the first time, an experiment has confirmed that the laws of thermodynamics hold true even at the quantum level – which means that even in the quantum world, you can’t unspill that glass of milk.

The reason time runs the way it does in our everyday lives is because of the second law of thermodynamics, which states that over time all systems become more disordered, or increase in entropy. And that process is irreversible, which is why time only moves forward.

But theoretical physicists had predicted that on the quantum level, the process might go both ways. That’s because when you start dealing with really, really small particles, the laws of physics – such as the Schrödinger equation – are 'time-symmetric' or reversible. "In theory, forward and backward microscopic processes are indistinguishable," writes Lisa Zyga.

Now physicists led by the Federal University of ABC in Brazil have performed an experiment that confirms that those theories don’t match up with reality, with thermodynamic processes remaining irreversible even in quantum systems. But they still don’t understand why that’s the case.

"Our experiment shows the irreversible nature of quantum dynamics, but does not pinpoint, experimentally, what causes it at the microscopic level, what determines the onset of the arrow of time," said one of the researchers, Mauro Paternostro from Queen's University in Ireland. "Addressing it would clarify the ultimate reason for its emergence."

So how do you go about testing the laws of thermodynamics in a quantum system? Basically, scientists need to be able to set up an isolated quantum system and observe the reversal of a natural process – which is trickier than it sounds.

For this experiment, the researchers used a bunch of carbon-13 atoms in liquid chloroform, and flipped their nuclear spins using an oscillating magnetic field.
They then used another magnetic pulse to reverse the spins again. "If the procedure were reversible, the spins would have returned to their starting points – but they didn’t," writes Zyga.

Instead, what they saw was that the alternating magnetic pulses were applied so quickly that sometimes the atoms’ spin couldn’t keep up, which led to the isolated system getting out of equilibrium.

The physicists confirmed that after the experiment the entropy was indeed increasing, which shows that the thermodynamic process was irreversible, regardless of how small the particles involved were.

All of that basically means that the one-way arrow of time exists even for the tiniest particles in the Universe, defying the microscopic laws of physics. And it suggests that something else is getting involved to stop quantum systems from being reversible. The physicists are now interested in figuring out what that is, and they believe the new insight into quantum systems could help advance the march towards quantum computers and other quantum devices.

"Any progress towards the management of finite-time thermodynamic processes at the quantum level is a step forward towards the realisation of a fully fledged thermo-machine that can exploit the laws of quantum mechanics to overcome the performance limitations of classical devices," said Paternostro.

For now though, we can take away from this research the knowledge that we can’t move backwards in time, as much as we might want to. The past really has passed… even on the atomic scale.

The research has been published in Physical Review Letters.
When solving Schrödinger's equation for a 3D quantum well with infinite barriers, my reference states that: $$\psi(x,y,z) = \psi(x)\psi(y)\psi(z) \quad\text{when}\quad V(x,y,z) = V(x) + V(y) + V(z) = V(z).$$ However, I cannot find any rationale for this statement. It may be obvious, but I would appreciate any elucidation.

It's because when $$V(x,y,z) = V_x(x) + V_y(y) + V_z(z),$$ (I guess that your extra identity $V(x,y,z)=V(z)$ is a mistake), we also have $$ H = H_x + H_y + H_z$$ because $H = (\vec p)^2 / 2m + V(x,y,z) $ and $(\vec p)^2 = p_x^2+p_y^2+p_z^2$ decomposes to three pieces as well. One may also see that the terms such as $H_x\equiv p_x^2/2m+V_x(x)$ commute with each other, $$ [H_x,H_y]=0 $$ and similarly for the $xz$ and $yz$ pairs. That's because the commutators are only nonzero if we consider positions and momenta in the same direction ($x$, $y$, or $z$). At the end, we want to look for the eigenstates of the Hamiltonian $$ H|\psi\rangle = E |\psi \rangle$$ and because we have $H = H_x+H_y+H_z$, a Hamiltonian composed of three commuting pieces, we may simultaneously diagonalize them i.e. look for the common eigenstates of $H_x,H_y,H_z$, and therefore also $H$. So given the separation condition for the potential, we may also assume $$ H_x |\psi\rangle = E_x |\psi\rangle $$ and similarly for the $y,z$ components. However, the equation above is just a 1-dimensional problem that implies that $|\psi\rangle$ must depend on $x$ as a one-dimensional quantum mechanical energy eigenstate wave function, $$ \psi(x) = C\cdot \psi_n(x) $$ which is an eigenstate of $H_x$. This has to hold but the normalization factor is undetermined. We usually say that it's a constant but this statement only means that it is independent of $x$. In reality, it may depend on all observables that are not $x$ such as $y,z$.
So a more accurate implication of the $H_x$ eigenstate equation is $$ \psi(x,y,z) = C_x(y,z)\cdot \psi_{n_x}(x) .$$ In a similar way, we may show that $$ \psi(x,y,z) = C_y(x,z)\cdot \psi_{n_y}(y) $$ and $$ \psi(x,y,z) = C_z(x,y)\cdot \psi_{n_z}(z) $$ and by combining these three formulae, we see that the whole function must factorize to a product of functions of $x$ and $y$ and $z$ separately. If you need a rigorous proof of the last simple step, take e.g. the complex logarithms of the three forms for $\psi$ above and compare e.g. the first pair: $$\ln\psi = \ln C_x(y,z) +\ln\psi_{n_x}(x) = \ln C_y(x,z)+\ln \psi_{n_y}(y) $$ Take e.g. the partial derivative of the last equation with respect to $y$: $$ \frac{\partial \ln C_x(y,z)}{\partial y} = \frac{\partial \ln\psi_{n_y}(y) }{ \partial y }$$ The other two (1+1) terms are zero because they didn't depend on $y$. The right hand side above only depends on $y$, so the same must be true for the left hand side. I am going to make a simple conclusion but to make it really transparent, let's differentiate the latter equation over $z$, too. The $\psi_{n_y}$ term disappears as well so we have $$\frac{\partial^2 \ln C_x(y,z)}{\partial y\,\partial z} = 0$$ It means that $\ln C_x(y,z)$ must have the form $K_x(y)+L_x(z)$, and $e^{K_x(y)}e^{L_x(z)}$ must be the remaining factors in the wave function. We say that the wave function in the product form is a "tensor product" of the three independent one-dimensional wave functions and more "operationally", as another user mentioned, the method described above is the method of "separation of variables". • 1 Great answer, thanks! (The V(x,y,z) = V(z) wasn't a mistake, but the special case for 1D QW.) – Halftrack Oct 23 '12 at 12:24

This method is called "separation of variables", and it is one of several strategies for finding solutions to multi-dimensional field problems in physics.
Its big advantage is that solving N 1-dimensional differential equations is generally easier than solving one N-dimensional problem (N>1), but it is contingent on there being a "uniqueness theorem" for the category of problems that you are looking at. Happily many common field problems in physics have such a theorem. To show that the condition on the potential is required for the Schrödinger equation, simply make the separation and expand. If the potential has the above form, you can write the LHS of the equation as three terms: $$ \left[\left( \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right)\psi(x)\right]\psi(y)\psi(z) + \psi(x)\left[\left( \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial y^2} + V(y) \right)\psi(y)\right]\psi(z) + \psi(x)\psi(y)\left[\left( \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial z^2} + V(z) \right)\psi(z)\right] $$ and you can clearly rewrite the whole into three separate conditions like $$ \left( \frac{-\hbar^2}{2m} \frac{\partial^2}{\partial x^2} + V(x) \right)\psi(x) = E_x\,\psi(x) . $$ On the other hand, if the potential cannot be written in this way, you cannot get the LHS into the form you need and you cannot proceed along these lines. Let me just emphasize that you can find a lot of solutions of the equation that do not satisfy the statement in the question (and cannot be presented as such products), but you can use this statement to find a basis of the set of solutions of the equation. So any solution can be presented as a linear combination of functions satisfying the statement. • How do you reconcile this with the answer of @lubos-motl? Can you give any examples? – Halftrack Oct 23 '12 at 12:28 • Dear akhmeteli and @Halftrack, akhmeteli must mean lots of solutions to the time-dependent Schrodinger equation: solutions may be obtained as linear superpositions of the separated solutions with different energies $E$.
For the time-independent "eigenvalue" equation with a single well-defined energy eigenvalue, the separation of variables lists all the solutions (unless there is degeneracy in the spectrum). – Luboš Motl Oct 23 '12 at 16:11 • @Halftrack and Luboš Motl: Judging by Luboš Motl's comment, he agrees that the statement is not always satisfied even for the time-independent Schrödinger equation if there is degeneracy. Let me also add that it is not quite obvious from your (Halftrack's) question that it was the time-independent Schrödinger equation that you had in mind, at least I did miss that. – akhmeteli Oct 24 '12 at 0:46 • I was only looking to solve the time independent Schrödinger equation, but you are absolutely correct: my question does not state so. If there is degeneracy, how would that break the derivation by @lubos-motl? – Halftrack Oct 24 '12 at 1:03 • @Halftrack: I did not study the derivation in detail, but it seems OK. It is important though to understand what he derived. As far as I can judge, he proved that there is a set of eigenstates of the Hamiltonian in the form of your statement (products of three functions), and any eigenstate is a linear combination of eigenstates from that set. However, when the spectrum is degenerate, there are at least two different eigenstates in that set having the same eigenvalue, and linear combinations of such eigenstates are also eigenstates, but they do not have to have the form of your statement. – akhmeteli Oct 24 '12 at 1:49
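The separation argument above is also easy to check numerically. The sketch below uses a 2D rather than 3D case to keep the matrices small; the grid size and the two harmonic potentials are arbitrary choices for illustration, not taken from the discussion. It builds two 1D finite-difference Hamiltonians and verifies that the low-lying eigenvalues of $H = H_x + H_y$ are exactly the pairwise sums of the 1D eigenvalues (units with $\hbar = m = 1$):

```python
import numpy as np

n, L = 40, 6.0                      # grid points and box half-width (arbitrary)
x = np.linspace(-L, L, n)
h = x[1] - x[0]

def h1d(v):
    """1D Hamiltonian -1/2 d^2/dx^2 + V on the grid (Dirichlet boundaries)."""
    lap = (np.diag(np.full(n - 1, 1.0), -1)
           - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    return -0.5 * lap + np.diag(v)

hx = h1d(0.5 * x**2)                # harmonic potential in x
hy = h1d(x**2)                      # a different (stiffer) potential in y

# H = Hx ⊗ 1 + 1 ⊗ Hy: the two commuting pieces acting on the product space
H = np.kron(hx, np.eye(n)) + np.kron(np.eye(n), hy)

ex = np.linalg.eigvalsh(hx)[:4]
ey = np.linalg.eigvalsh(hy)[:4]
e2d = np.linalg.eigvalsh(H)[:6]
sums = np.sort((ex[:, None] + ey[None, :]).ravel())[:6]

# Essentially zero: the 2D spectrum is exactly the sums of 1D eigenvalues
print(np.max(np.abs(e2d - sums)))
```

The agreement is to machine precision because it is an exact algebraic property of the Kronecker sum, i.e. of any Hamiltonian that splits into commuting one-dimensional pieces.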
Are there any truly dumb questions in physics?

I trade thoughts and correspond occasionally with other interested amateurs and some highly qualified professionals on an internet forum on theoretical physics. While most of the topics raised there are serious, intelligent and thoughtful questions involving complex interpretations of concepts, math, and experiments, sometimes there are queries posed to which your initial reaction is "not this again!" or, "didn't we cover that subject the last time?" But if you take a few minutes to think about it, you realize that what might seem like a dumb question often turns out to be a smart question, because it leads you to think about an old issue in a new way, or to re-think old assumptions that you may have left unchallenged for too long. A couple of recent ones on our forum are like these.

My first response to one of these questions, "Does a photon have mass?" was dismay, and I posted a (too) quick response. In many ways this question is a nonsensical one. A photon has energy, else how could it displace another "particle" from a plate of another material per the photoelectric effect? And if you accept E=mc², then energy equals mass in that conceptual universe of thought, and of course a photon has mass. But if the question is about the classical notion of "rest mass", then we're in another quandary, because a photon is never at rest.

So, one thinks that this mass/energy duality might be akin to that other mysterious duality of modern physics, the "wave/particle duality", except that in the case of mass and energy, at least both qualities can be measured. In this thinker's conceptual model, the answer to this question lies outside the quantum physics model of the universe. If what we designate as a photon carries energy but no "rest mass" then we must abandon the notion of it as a "particle" as it is considered in the QT universe.
Instead we should see it as something more like a coherent wrinkle or distortion in the background fabric of the cosmos of which many of us are convinced that our universe is a part. If one sees the so-called "empty" cosmos as made up of an extremely high frequency electromagnetic field, our "photon" can be seen as simply a small but significant distortion of that field. Its apparent "velocity", c, then, is not seen as the passage of a particle "through" a medium but as a wave-like distortion of the medium itself, and its apparent velocity is a constant, constrained by the fine grain, the ultra high frequency of the medium, just as the apparent velocity of the passage of an ocean wave does not represent any forward movement of the medium itself, only the passage of a distortion. Think of cracking a whip.

A second "dumb" question is a little more complex. It goes like this: "Please explain to me the quantum theory called 'superposition.'" Well, the answer is also complex. Superposition is often assumed to mean the presence of two entities such as electrons occupying the same space at the same time. This is a false assumption, and we can all agree that two substantive things, two particles, say, cannot occupy the same space at the same time. (Note here for further reference, however, that two (or more) wave conglomerations can, in fact, occupy the same space at the same time.)

The accepted definition of superposition is more of a mathematical construct, and according to Wikipedia, it goes like this: Quantum superposition is a fundamental principle of quantum mechanics that holds that a physical system—such as an electron—exists partly in all its particular, theoretically possible states (or, configuration of its properties) simultaneously; but, when measured or observed, it gives a result corresponding to only one of the possible configurations (as described in interpretation of quantum mechanics).
Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions to a particular equation will also be a solution of it.

In simpler terms, what this means is this: you cannot know or predict, at any time, what state any given electron might be in. It's an easy out for a theory that purports to explain how everything works in nature, but is strangely unsatisfying as an explanation of some observed behavior. Physicists seem to accept it, though, however much it sounds like religious dogma about the eternal mysteries.

A third question has come up more recently, about yet another mystery, something called "quantum entanglement." Quantum dogma has it, and this has supposedly actually been observed, that two quantum entities can become "entangled," so that if they are then separated by any distance, even as much as at both sides of the universe, if the state of one of them changes, say from a left spin to a right spin, or from a positive to a negative value, the other entangled entity is automatically, instantaneously, changed as well.

Now, Einstein challenged this idea, both as presuming instantaneous action at a distance, without any known force being involved, as well as violating the principle that no action in the universe can exceed the speed of light.

Here is a way out of having to deal with both of these quantum conundrums, these paradoxes and contradictions that are somehow easily supported in the language of mathematics but not in the domain of observable reality. It requires only a simple conceptual adjustment: that we accept the notion that what physicists since perhaps the time of Democritus have assumed to have the nature of a "particle" is, in fact, simply a very small, coherent, organized, higher concentration of energy in the field of the cosmos.
It is a distortion of the field, and because of its concentration of energy, it generates a companion field in its local region. Back to entanglement, for instance: if we postulate a tightly bound energy field as the cosmos, one might infer behavior something like what happens when you pull at a corner of a bed sheet to eliminate a wrinkle, only to have the same wrinkle miraculously appear in the opposite corner.

If taken seriously, it can be seen that this model enormously simplifies our conceptual vision, from the microscopic world of physics out to the macro-macro model of the cosmologist. There is a place here to explain mystical phenomena from the "double slit experiment" out to the mysterious substance called "dark matter," which can then be seen as large, broad-scale distortions in the cosmic field surrounding truly high energy concentrations such as stars, galaxies, and clusters. And "dark energy," that other mysterious unseen substance, can be seen as simply the substance of the field itself.

This doesn't throw out all of the work of the last century. Much of the math will still apply, as long as the mathematicians are willing to give up their claim that "the math is the reality." What the rest of us have to give up is something very tiny, and which no one has ever seen anyway: that hypothetical little billiard ball that has hypnotized scientists and philosophers for a couple of thousand years.

The collection of assertions that make up what is collectively known as quantum theory has, for almost a hundred years, been considered the principal body of knowledge that underlies modern physics. Unfortunately, it remains after all that time a body full of contradictions, paradoxes, and uncertainties. It has frustrated all attempts at reconciliation with the dogma at the other end of its chain, the theory of general relativity, itself contradicted by the insubstantiality, the actual reality, of its two principal elements, space and time.
QT must be seen, not as a body of knowledge, but as a body of supposition, that would have been abandoned early had it not been called on to fill an intellectual vacuum, and had it not had such a corps of vociferous supporters speaking a language most could not understand, that of the highest and most impenetrable mathematics.

The challenge we are trying to meet here is to replace those troubled, damaged, incomplete, and ontologically challenged models with one that is more complete, more consistent, that explains conceptually more observable phenomena. It is difficult, we know, to mentally conceive a cosmos that is a 3-dimensional field of vibrating energy, with bundles of vibrations, perhaps the things we call photons, electrons, and the like, that are simply coherent distortions of the field itself, not foreign bodies moving through it; or the concept that everything that exists in the universe is made of those distortions, from the tiniest entities out to and including the stars—but it should not be more difficult than swallowing the thorny paradoxes and contradictions of quantum theory.

So, I'll end with one more dumb question: "Is the Sun around which we annually circumnavigate a real tangible object, or is it perhaps just a truly bright spot out there in the sky?" As you might guess from my thoughts above, I'm leaning strongly toward the second notion.

About Charles Scurlock

This entry was posted in 6 General.

One Response to Are there any truly dumb questions in physics?

1. pallsopp42 says: Happy Christmas Eve, Chuck. Nice posting!! You're right. Those "dumb questions" aren't dumb at all. I agree that thinking about them over time can refine ideas and create new insights into what answers might be. I agree with you about the sun being a bright spot – a big one for sure!! All the very best
Xiao Deng

Degree Name: Master of Applied Science (MASc)
Department: Electrical and Computer Engineering
Advisor: Shiva Kumar

In the past half century, great improvements have been achieved that make fiber-optic communication systems outweigh other traditional transmission systems, such as coaxial systems, in many applications. However, physical features including optical fiber losses, group velocity dispersion (GVD) and nonlinear effects lead to significant system impairments in fiber-optic communications. The nonlinear Schrödinger equation (NLSE) governs pulse propagation in nonlinear dispersive media such as an optical fiber. A large number of analytical and numerical techniques can be used to solve this nonlinear partial differential equation (PDE). One of these techniques that has been extensively used is the split-step Fourier scheme (SSFS), which employs the fast Fourier transform (FFT) algorithm to increase the computational speed. In this thesis, we propose a novel lossless SSF scheme in which the fast decay of the optical field due to fiber losses is separated out using a suitable transformation, and the resulting lossless NLSE is solved using the symmetric SSF scheme with some approximations. The various symmetric SSF schemes are compared in terms of accuracy for a given computational cost. Our results show that the proposed scheme could lead to one or two orders of magnitude reduction in error as compared to the conventional symmetric SSFS when the computational cost is fixed. The proposed scheme can also be used as an effective algorithm for digital backward propagation (BP). Our numerical simulation of a quadrature amplitude modulation-16 (QAM-16) coherent fiber-optic transmission system with digital BP has shown that the bit error rate (BER) obtained using the proposed scheme is much lower than that obtained using the conventional SSF schemes.

McMaster University Library
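The symmetric split-step idea the abstract builds on can be sketched in a few lines. The following is a minimal illustration for the lossless NLSE in soliton units, i u_z + (1/2) u_tt + |u|² u = 0, not the thesis' transformed scheme; the grid, step size, and test pulse are arbitrary choices. Each step applies half a linear (dispersion) step in the Fourier domain, a full nonlinear step in the time domain, then another half linear step:

```python
import numpy as np

n, T = 256, 40.0                               # grid points, time window (arbitrary)
t = (np.arange(n) - n // 2) * (T / n)
w = 2.0 * np.pi * np.fft.fftfreq(n, d=T / n)   # angular frequencies

dz, steps = 0.01, 100                          # propagate to z = 1
half_lin = np.exp(-0.5j * w**2 * (dz / 2))     # D/2: half dispersion step

u0 = 1.0 / np.cosh(t)                          # fundamental soliton, N = 1
u = u0.astype(complex)
for _ in range(steps):
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # D/2
    u = u * np.exp(1j * np.abs(u)**2 * dz)     # N: full nonlinear step
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # D/2

# The fundamental soliton should keep its shape (second-order accurate in dz),
# and both substeps are unitary, so the pulse energy is conserved.
err = np.max(np.abs(np.abs(u) - 1.0 / np.cosh(t)))
print(err)
```

With the fundamental soliton as input, |u(z, t)| stays a sech profile up to a phase, which makes it a convenient self-test for any split-step implementation; consecutive half linear steps can also be merged in production code to save FFTs.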
Consider a free particle with a Gaussian wavefunction, $$\psi(x)~=~\left(\frac{a}{\pi}\right)^{1/4}e^{-\frac12a x^2},$$ find $\psi(x,t)$. The wavefunction is already normalized, so the next thing to find is the coefficient expansion function ($\theta(k)$), where: $$\theta(k)=\int_{-\infty}^{\infty} \psi(x)e^{-ikx} \,dx.$$ But this equation seems to be impossible to solve without the error function (as Maple 16 tells me). Is there any trick to solve this? I am a bit confused, why are you trying to find $\psi (k)$? Or as you write it, $\theta (k)$? –  DJBunk Sep 20 '12 at 23:06 Can you put what you ran through Maple 16? –  Magpie Apr 8 at 1:38 1 Answer Your question seems rather confused: • First you ask for the time evolution of the wavefunction. For this you will need to use the Schrödinger equation $i \partial \psi/\partial t= \hat H \psi $ and thus will need to know the Hamiltonian ($\hat H$). • Second you seem to want to work out the Fourier transform of the wavefunction. This will not give you the wavefunction as a function of time but will give you the wavefunction in momentum space. The integral you want to calculate is the Fourier transform of a Gaussian, which is itself a Gaussian: $$\int_{-\infty}^{\infty} e^{-ax^2/2}e^{-i k x} \, dx \\ = \int_{-\infty}^{\infty} e^{-ax^2/2}\left(\cos{kx} - i \sin{kx} \right) \, dx .$$ The second term in the above integral is odd so will give zero. The first term is a known integral and gives $$=\sqrt{\frac{2\pi}{a}} e^{-k^2/2 a} , $$ a Gaussian as promised, with width inversely proportional to that of the original. I am pretty certain Maple should also be able to calculate the integral for you as it is written in my first line (Mathematica can), so I imagine you are just not entering it correctly. Edit: Apologies for the first comment above.
I had not seen that you had written this was for a free particle, so indeed you know the Hamiltonian: the potential is $V(x,t)=0$, and so from Schrödinger's equation we know the time evolution of the energy eigenstates is $\psi(x,t)=e^{-i \omega t}\psi(x)$. For the free particle we have $\omega=k^2/2m$ and so you know the time evolution of the Fourier transform. So taking the Fourier transform given above, applying the time evolution, and transforming back to position space we have $$\psi(x,t)=\int_{-\infty}^{\infty} e^{-k^2/2 a}e^{-i\omega t}e^{ikx} \, dk \\ =\int_{-\infty}^{\infty} e^{-\frac{k^2}{2 a}(1+iat/m)}e^{ikx}\, dk \\ \sim e^{-\frac{x^2}{2(1/a+it/m)}}$$ as Ron pointed out in his comment. This shows how the wavepacket spreads out with time. The Fourier transform evolves by simple phases, and a reverse Fourier transform gives the time evolution, which is a spreading Gaussian, so that the $a$ gets replaced everywhere by ${1\over {(1/a)+it}}$ –  Ron Maimon Sep 21 '12 at 6:48 Oh yeah, hadn't seen the part saying this was for a free particle (doh!). Have added an edit to the answer to complete it. Thanks for pointing that out. –  Mistake Ink Sep 21 '12 at 13:27
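This spreading is easy to check numerically (taking $\hbar=m=1$): Fourier transform the initial Gaussian, multiply each mode by the free-particle phase $e^{-ik^2 t/2}$, transform back, and compare the measured variance with the analytic $\sigma^2(t)=(1+(at)^2)/2a$ that follows from the result above. A minimal sketch; the grid sizes are arbitrary choices.

```python
import numpy as np

a, t = 1.0, 2.0                       # initial width parameter and evolution time
n, L = 4096, 80.0                     # grid points and box length
x = (np.arange(n) - n // 2) * (L / n)
psi0 = (a / np.pi) ** 0.25 * np.exp(-a * x**2 / 2)

# Free evolution: each plane-wave component picks up exp(-i k^2 t / 2)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
psi_t = np.fft.ifft(np.exp(-1j * k**2 * t / 2) * np.fft.fft(psi0))

# Measured width of |psi(x,t)|^2 vs analytic sigma^2(t) = (1 + (a t)^2) / (2 a)
prob = np.abs(psi_t) ** 2
prob /= prob.sum() * (L / n)
var = np.sum(x**2 * prob) * (L / n)
print(var, (1 + (a * t) ** 2) / (2 * a))  # the two numbers should agree
```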
I have the following equations describing the electron field in a (classical) electromagnetic field: $$\left[c\,\boldsymbol{\alpha}\cdot\bigl(\mathbf{P} - q(\mathbf{A} + \mathbf{A}_b)\bigr) + \beta mc^2\right]\psi = E\psi$$ where $A_b$ is the background field and $A$ is the one generated by the local Dirac field. I presume that the equation for the electromagnetic field $A$ generated by the electron would be: $$\nabla_{\mu}\nabla^{\mu} A^{\nu} = \frac{\bar\psi \gamma^{\nu} \psi}{\epsilon_0} $$ Question: Is there a way to numerically solve this system of equations to find eigenstates of the system? Side question: Are these eigenstates physically meaningful? Do I still need to apply the second quantization procedure in order to know which eigenstates are physically meaningful (i.e., stable) and which are not? 2 Answers The main problem in your proposed equation is that the electromagnetic equation with the d'Alembertian of the vector potential is not in Hamiltonian form; this means that the separation of solutions into Sturm–Liouville eigenstates of the energy operator is not manifest in the equation. Without that, you cannot find eigenstates of the coupled system. You might find this dissertation interesting: On the canonical formulation of electrodynamics and wave dynamics. In it, the author analyzes a Hamiltonian formulation for the electrodynamic field that is amenable to numerical solution coupled with the Schrödinger equations. Depending on what you actually want to find, this should suffice (or not). Regarding your other question, I'm not confident giving an authoritative answer to that, so I'll let others jump on it. I guess these are tough questions, as the system is nonlinear. I can only give some references. In some of them, some numerical solutions of this system were found: Phys. Rev.
A 60, 4291–4300 (1999) (also arXiv:physics/0001038 ), http://maths-old.anu.edu.au/research.publications/proceedings/039/CMAproc39-booth.pdf and references there. For what it's worth, in my work arXiv:1111.4630 it is shown how to eliminate the spinor field from the system (but complex electromagnetic potentials are introduced, which produce the same electromagnetic field, so their imaginary part is defined by one common function, and you have just 5 unknown real functions).
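On the numerical side, the usual strategy for coupled matter-field eigenproblems of this kind is a self-consistent (fixed-point) loop: diagonalize the matter Hamiltonian in a frozen field, recompute the field from the resulting source, and iterate. Below is a hedged sketch of that loop for a much simpler stand-in, a 1D Schrödinger–Poisson system rather than the full Dirac–Maxwell problem; the coupling `g`, the harmonic trap, and the grid are all illustrative assumptions of mine.

```python
import numpy as np

# Self-consistent eigenstate of a toy 1D Schrodinger-Poisson system:
# the field phi sourced by |psi|^2 feeds back into the Hamiltonian.
# Units hbar = m = 1; g, the grid, and the trap are illustration choices.
n, box, g = 400, 10.0, 0.2
dx = box / n
x = (np.arange(n) - n // 2) * dx

# Dirichlet second-difference Laplacian
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / dx**2

v_ext = 0.5 * x**2          # external confining potential
phi = np.zeros(n)           # field generated by the particle itself

for _ in range(60):         # damped fixed-point iteration
    h = -0.5 * lap + np.diag(v_ext + g * phi)
    energies, vecs = np.linalg.eigh(h)
    psi = vecs[:, 0] / np.sqrt(dx)              # normalized ground state
    phi_new = np.linalg.solve(lap, -psi**2)     # phi'' = -|psi|^2, phi = 0 at walls
    phi = 0.5 * phi + 0.5 * phi_new             # damping for stability
print(energies[0])  # self-consistent ground-state energy (above the bare 0.5)
```

The same diagonalize/re-source/damp loop carries over in spirit to the Dirac case, though there one must also confront the issues the answers above raise (no Hamiltonian form for the wave equation as written, and the second-quantization question).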
Earth Science Scientific Cruise Meets Perfect Storm, Inspires Extreme Wave Research 107 Posted by Unknown Lamer from the creative-punishment-for-copyright-infringers-discovered dept. An anonymous reader writes "The oceanographers aboard RRS Discovery were expecting the winter weather on their North Atlantic research cruise to be bad, but they didn't expect to have to negotiate the highest waves ever recorded in the open ocean. Wave heights were measured by the vessel's Shipborne Wave Recorder, which allowed scientists from the National Oceanography Centre to produce a paper titled 'Were extreme waves in the Rockall Trough the largest ever recorded?' It's that paper, in combination with the first confirmed measurement of a rogue wave (at the Draupner platform in the North Sea), that led to 'a surge of interest in extreme and rogue waves, and a renewed emphasis on protecting ships and offshore structures from their destructive power.'" Comments Filter: • by Anonymous Coward on Tuesday April 17, 2012 @12:06AM (#39707379) This scientific cruise also proved that the only kind of cruise where nobody gets laid is a "scientific cruise" • by cplusplus (782679) on Tuesday April 17, 2012 @12:15AM (#39707423) Journal I only RTFAs to find out how high the waves were - it turns out they were up to 29.1 meters (95.5 feet). • Rogue waves (Score:3, Funny) by gstrickler (920733) on Tuesday April 17, 2012 @12:23AM (#39707453) Outlaw them and put out a bounty (or a Bounty?) • 2006 (Score:5, Informative) by Anonymous Coward on Tuesday April 17, 2012 @12:32AM (#39707491) The article was published in 2006. How is this 'new?' • The article was published in 2006. How is this 'new?' I guess it's some sort of tie in with the 100th anniversary of the Titanic making it almost all the way across the Atlantic.
• The wave was so high that the ship did a loopty-loop, causing a rift in time where they just ended up here. The same phenomenon can be seen if you can swing high enough on a swingset to go around once • by jlehtira (655619) Well, I agree with your point. But six years is a good time to let scientific papers simmer. Less than that is not enough time for other scientists to evaluate the correctness and value of some paper. • by Anonymous Coward Many researchers were lost during the peer-review of this paper. • by dreemernj (859414) 2006? Wasn't that around the time a rogue wave was recorded on The Deadliest Catch? • Data collected in 2000. Paper published in 2006. Reported in /. in 2012. The pace of good science is slow and deliberate. • by Anonymous Coward on Tuesday April 17, 2012 @12:43AM (#39707553) look up Schrödinger wave equations and apply them to ocean waves. You will get 30+ meter tall waves with a trough next to the "wall" of water (the wave is tall and narrow - like a wall). This trough adds to the great difficulty in surviving one of these waves. Ships that are designed to withstand forces of 10 tons/m2 have to contend with 10 times that force. I believe there was a study in which someone (don't remember her name :( ) mapped the entire earth over a two week period and found something on the order of 20 of these waves. Fascinating stuff. • by phantomfive (622387) on Tuesday April 17, 2012 @03:21AM (#39708089) Journal Oh yeah, just found it [bbc.co.uk]. They found about 10 giant waves. • by Anonymous Coward FYI the Schrödinger wave equation does not describe ocean waves. Water waves are described by the Navier-Stokes (N-S) equations. Turbulence models fall out of N-S, however only electrons sometimes fall out from Schrödinger :) • There is a nonlinear version of the Schrödinger equation.
Some theories attempt to explain rogue waves in the open sea using these non-linear equations as a model, because the distribution of wave heights that would result from the linear model substantially underpredicts the occurrence and size of rogue waves. • by Anonymous Coward The nonlinear Schrödinger equation is one of the many various equations that can be used to describe the behaviour of water waves in various regimes, with a tiny bit about it on Wikipedia here [wikipedia.org]. The NLS is mostly used for the behaviour of the envelope of deep water waves, which means you can show soliton-based rogue-wave-like behaviour, but not say much about trough-to-peak steepening as in the grandparent post. The set of equations and theories used to model nonlinear water waves is quite diverse, wit • by WaffleMonster (969671) on Tuesday April 17, 2012 @12:52AM (#39707605) For those looking for more details about this voyage http://eprints.soton.ac.uk/294/ [soton.ac.uk] • Specifically in 1998, a 120ft wave off the east coast of Tasmania http://www.swellnet.com.au/news/124-a-short-history-of-tasman-lows [swellnet.com.au] • Since extreme waves were not the subject of their expedition, they had not read all the prior literature. • by TapeCutter (624760) on Tuesday April 17, 2012 @02:48AM (#39708017) Journal The Tasman Sea is notorious for rogue waves. Many moons ago I worked a fishing trawler in Bass Strait. I never saw anything like 120ft, but the regular waves were tall enough that the radar was blocked by the peaks when the boat was in a trough; I'm guessing the radar mast was about 30ft above the water line. A lot like riding in a giant roller coaster carriage really: slowly climb up one wave, crest, then race down the other side and watch the bow dig under the next one, throw the water over the wheel house as the bow pops up to the surface, and start the next climb.
From what I've heard, the problem with rogue waves is not so much their height but the fact that they are too steep to climb. • Wow, that is incredibly exciting. • I detect a hint of sarcasm, but to be honest it was downright fucking scary the first trip; after a few trips it became as exciting to me as an old fashioned roller coaster is to the guy who stands up on it all day operating the brake. Although a stingray the size of a family dinner table flapping about on an 8x12 deck was never boring. • No sarcasm at all. If the human lifespan weren't so short I would definitely consider going down and trying it out for a few years. I don't know about that stingray thing, though. I know people who go ocean kayaking but that's nothing in comparison. • by tlhIngan (30335) Waves are never boring, especially big ones. The key is to cut through them - if you let them hit the side, you risk capsizing. The only way to do this is engine power (run • by serbanp (139486) Does this mean that the "The Perfect Storm" depiction of how the Andrea Gail sank was technically inaccurate? In that film, the ship went with its bow straight into the freak wave but could not reach the top and fell over. • Yep, it's a lot like a plane: if the engine is fucked, gravity takes over and you basically fall off the wave.. • That article claims 42.5m is 120 feet - it's actually 140 feet. The wave was probably recorded as 120 feet and someone mangled the conversion rather than the other way round.
Not saying never explore space, just saying maybe we should focus on what we have. • by Anonymous Coward How can we understand this planet when we have nothing to compare it to? Rhetorical questions only cater to people's emotional response; they don't make much of an argument. • by Sarten-X (1102295) Reminds me of the TV show seaQuest... for almost a whole season, they had interesting episodes based around real weirdness in the oceans. What fascinates me even more is the emergent behavior observable in simple systems, such as growing crystals, diffusing liquids, convection currents... all of those delightfully complex results from simple principles. There's beauty in the result, and simplicity in the process. • by Anonymous Coward Although the paper might have spurred interest in rogue waves, the wave in the paper linked in the summary wouldn't really be considered a rogue wave. Usually a cut-off is arbitrarily picked at 2 times the significant wave height (the average of the highest third of waves). In this case, the wave was about 1.5 times the significant wave height. Statistically speaking, you would expect about 1 in 100 waves to be 1.5 times the significant wave height, just from the mixing and constructive interference of waves, whil • Big waves (Score:4, Interesting) by MarkRose (820682) on Tuesday April 17, 2012 @01:06AM (#39707665) Homepage Waves over 20 m (60 ft) tall are actually pretty common in some places. My dad is senior keeper at Triple Island Lightstation [fogwhistle.ca], located just off the BC coast. In severe winter storms, the waves will often crest over the square part of the building, which is about 20 m above sea level. This January, one such wave blew in a storm window on the top floor -- several tons of water will sometimes do that. The building stays up because it's constructed with 2 ft thick rebar concrete walls.
• Re:Big waves (Score:5, Informative) by tirerim (1108567) on Tuesday April 17, 2012 @01:39AM (#39707811) TFA is talking about waves in the open ocean, though. Waves get higher when they reach shallower water, so the 20 m waves you're talking about would have been significantly smaller in the open ocean -- which makes 29 m open ocean waves that much more impressive. • Nice traditional exterior, but sad to see the drop ceiling [lighthousememories.ca] on the interior. At least the wood floor is original. • Interesting link but some of the text is reminiscent of Julian and Sandy (http://en.wikipedia.org/wiki/Julian_and_Sandy) from "Round the Horne", I mean, "The Triple Island light was built to guide mariners through the rocky waters of Brown Passage, on their way to the port of Prince Rupert.", I ask ya! • It's interesting how often myth and legend end up being scientific fact. There has been talk since sailors took to the sea of rogue waves that reached 100' or more. Science has been confirming these myths in recent years. Most myths have an element of truth in them. On the practical side it's a serious concern, since surviving a 100' rogue wave is not something all seaworthy ships can do, yet they can face them without warning. I read years ago the theoretical limit was twice what has been recorded so the • The paper is from 2006, and describes a wave observed in 2000. Satellite-based radar altimeters produce a lot of data about wave height worldwide, but they don't, apparently, have quite enough resolution yet to see this kind of thing. A view of such waves from above, over a few minutes, would tell us a lot. Is it an intersection of two or more waves? How far does it travel? How long does it persist? The U.S. Navy has put considerable effort into answering questions like that.
• bad statistics (Score:4, Interesting) by Tom (822) on Tuesday April 17, 2012 @04:24AM (#39708283) Homepage Journal What has fascinated me about freak/rogue waves is that sailors have known about them for decades if not centuries, but scientists were telling them it can't be. And the reason is badly understood statistics. I've recently read Black Swan, and that gave me a few new concepts to work with, but the basic idea is exactly that: We don't really have a good understanding of statistics and probabilities, especially about extremely low probabilities in big numbers. Or, as Tim Minchin put it: One-in-a-million things happen all the time. And it's not just in the oceans. The entire financial crisis was caused by the people in charge taking huge (but low probability) risks, ignoring that once enough people have taken enough of those "low probability" risk, they become very likely to actually happen. Freak waves are cool because they are in the gray area between the normal distribution and the really freaky - thus they happen often enough that they are rare, but not bigfoot-rare. We can actually study them. • Re: (Score:3, Interesting) by edxwelch (600979) There's an interesting article about that, here: http://www.bbc.co.uk/science/horizon/2002/freakwave.shtml [bbc.co.uk] Apparently, there are two scientific models, linear, which says freak waves are impossible and Quantum physics which says they are possible. • by Tom (822) The problem is that a gaussian approach to the numbers assumes that random fluctuations will even out. But the equations used in quantum physics allow for waves to combine, and that's what is happening - interference, just not between 2 waves as in the double-slit experiment, but between dozens or maybe hundreds of waves. 
This article here: http://dev.physicslab.org/Document.aspx?doctype=3&filename=PhysicalOptics_InterferenceDiffraction.xml [physicslab.org] shows towards the bottom how massive peaks you can get with mult • by Anonymous Coward Linear wave theory allows for interference and combining of waves (that is kind of actually one of the major properties of linear theories in a lot of situations). The statistics on linear theory waves (which ends up being a Rayleigh distribution, not a Gaussian) is what says that waves much larger than those around it are very unlikely. What nonlinear theories add is not just overlapping like interference, but soliton like solutions, where a single wave or small wave train much larger than neighboring wa • by Tom (822) Thanks, AC. In 12+ years of /. this was one of the most informative AC comments I've come across. • We have bigger waves in Texas! • I've never understood that particular idiocy. Texans know they don't live in the biggest US state, right? Texas is less than half the size of Alaska. • by dtmos (447842) * on Tuesday April 17, 2012 @07:09AM (#39708623) My uncle retired as a US Navy Captain. For many years he had two photographs displayed in his house, which he ascribed to Admiral "Bull" Halsey's "second" typhoon [navy.mil], in June 1945. At that time my uncle was an ensign, assigned to a destroyer, and on his first sea voyage. The two photographs were of a sister destroyer. In the first photograph, all one sees is a giant wave, with the bow of the destroyer sticking out of one side, and the stern sticking out of the other. The middle of the ship, including the masts and superstructure, is submerged and not visible. In the second photo, taken a few seconds later, the middle of the ship is now visible, but both the bow and stern are now submerged in the wave train. And as a kid, the part that fascinated me the most: You could see an air gap below the middle of the ship, between the ship's keel and the wave trough below. 
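The linear-theory numbers quoted earlier in the thread (roughly 1 in 100 waves exceeding 1.5 times the significant wave height, and far fewer at 2 times) follow from the Rayleigh distribution of wave heights, where narrow-band theory gives $P(H > x H_s) = e^{-2x^2}$. A quick Monte Carlo sketch checking this; the sample size and seed are arbitrary choices:

```python
import numpy as np

# H ~ Rayleigh; Hs = mean of the highest third ("significant wave height").
# Narrow-band linear theory predicts P(H > x * Hs) = exp(-2 x^2).
rng = np.random.default_rng(0)
h = rng.rayleigh(scale=1.0, size=1_000_000)
hs = np.mean(np.sort(h)[-h.size // 3:])      # empirical significant wave height

for x in (1.5, 2.0):
    print(x, np.mean(h > x * hs), np.exp(-2 * x**2))
# exp(-4.5) is about 1/90 at 1.5x; exp(-8) is about 1/3000 at 2x
```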
• I'm surprised I can't get for my boat (or raft) a platform with accelerometers that operates a hydraulic piston to compensate for wave action. It might need some lateral actuator too, as wave motion is circular. But it might not, if the light floats slide along the surface as the piston pushes down on them keeping the heavy inertial payload in place. Just accelerometers, hydraulic pistons, and DSP. Big bonus points for a device that harvests that energy moving through the site to power the hydraulics.
Real-time Earth and Moon phase Sunday, March 28, 2010 Same old 'SS Edmund Fitzgerald' tragedy. Nothing new! This news from the Chicago Sun-Times this morning, entitled "New wave hits 'Edmund Fitzgerald'", naturally caught my attention: The legend lives on from the Chippewa on down, but singer Gordon Lightfoot says he plans to change the lyrics to his song, the "Wreck of the Edmund Fitzgerald," after researchers concluded that a gigantic, 50-foot rogue wave -- not human error -- was responsible for sinking the ship. The Edmund Fitzgerald left Superior, Wis., on the evening of Nov. 9, 1975, bound for Zug Island, near Detroit. The next day, it encountered a fierce storm and sank. Twenty-nine lives were lost -- the greatest disaster in the history of the Great Lakes. The U.S. Coast Guard concluded that the boat sank because the crew left the cargo hatches open, allowing the holds to fill with water. In the show "Dive Detectives," a new series for History Television, a diving team deployed wave-generating technology to simulate the conditions faced by the Edmund Fitzgerald. The tests demonstrate how the force of the freak wave, crashing down on the midsection of the boat -- already low in the water because of its heavy cargo -- might have caused it to split in two. Lightfoot said the conclusion is "definitive." Instead of singing "at 7 p.m. a main hatchway caved in," he'll sing "at 7 p.m. it grew dark, it was then, He said, 'Fellas, it's been good to know ya.' " Scripps Howard News Service I am a little underwhelmed by this news story. I have never thought the tragedy of the Fitzgerald in 1975 was due to human error, since I have always regarded the official report, whatever was in it, as just perfunctory, duty-bound speculation not worthy of much attention. Now this "new" wave-generating technology simulation is simply another speculation. We don't know what happened, and no one can recreate the conditions the Fitz encountered.
Remember that there were other ships that went through the same conditions that night safely. So we did not know what happened to the SS Edmund Fitzgerald that night; now, over 35 years later, we still don't know. Mr. Lightfoot is certainly entitled to change his lyrics as he wishes. The fact is that there is nothing new in this new simulation/speculation effort, and there is just nothing we can do about it either! By the way, there was a fabulous hindcast study by NOAA Weather Service scientists not long ago, showing that the time and location where the Fitz was lost was the worst spot at the worst time, as far as the storm waves can be hindcast for that night. That should be sufficient evidence that the Fitz's tragedy was caused by waves more than anything else. Gordon Lightfoot's new lyric "at 7pm it grew dark" is really an understatement. I really could not see that this new effort as reported, while commendable, added anything new to the knowledge base! Reading II on Palm Sunday of the Lord's Passion Christ Jesus, though he was in the form of God, did not regard equality with God something to be grasped. Rather, he emptied himself, taking the form of a slave, coming in human likeness; and found human in appearance, he humbled himself, becoming obedient to the point of death, even death on a cross. Because of this, God greatly exalted him and bestowed on him the name which is above every name, that at the name of Jesus every knee should bend, of those in heaven and on earth and under the earth, and every tongue confess that Jesus Christ is Lord, to the glory of God the Father. (Is. 50:4-7) Wednesday, March 24, 2010 The Louis Majesty case When freaque waves are encountered, we don't usually know what is happening. News media reporters mostly put together comments and eyewitness accounts, but clearly no one is able to sort out the important key facts.
The freaque waves world has been quiet for some time, but in early March an encounter by a cruise ship in the Mediterranean made worldwide news. According to Google, at least 1,300 news items had been written and published all around the globe. An example of a typical account can be represented by this AP report on March 3: ATHENS, Greece (AP) - Greek and Cypriot officials say 26-foot waves have crashed into a cruise ship with nearly 2,000 people on board off France, smashing glass windshields and killing two passengers. Another six people suffered light injuries, a Greek coast guard statement says. It says the accident occurred near the French Mediterranean port of Marseilles on Wednesday as the Cypriot-owned Louis Majesty was sailing from Barcelona to Genoa in Italy with 1,350 passengers and 580 crew. The victims were only identified as a German and an Italian man. Louis Cruise Lines spokesman Michael Maratheftis said the ship was hit by three "abnormally high" waves up to 26 feet (8 meters) high that broke glass windshields in the forward section. It is heading back to Barcelona. That's still about all we know, even after all these days since the happening. Note that two days after the encounter, on March 5, this Wired Science article attempted to put more analysis and science into it but did not really succeed! They merely put in plenty of jargon and extra known facts, and even talked to an oceanographer, which is commendable, but did not really clear things up. Mainly the article tried to imply that the wave encountered by the cruise ship, Louis Majesty, was not very large and did not fit the "official definition" of a freaque wave. Here's my comment for the "experts": When a wave causes damage and casualties, it is a freaque wave, no matter what the "official definition" is. (Freaque waves may have some general indications or guidelines, but no universally accepted "official" definitions yet!
And armchair analysis can cause more confusion!) The article from World News Australia, also on March 5, attempted the same kind of analysis and reporting as the Wired report, and has this statement: "Experts say the waves are almost always generated by storm-related winds . . . " Now it is true that ocean waves are always generated by winds. But if they are trying to imply that freaque waves are also wind generated – they are wrong! Freaque waves can happen during storms, hurricanes, or typhoons, or when there is no wind! That's why it's freaque! Time magazine has an article asking the question "How do 'rogue waves' work?" that actually made an accurate statement that not everyone would be willing to accept but no one is able to dispute: "Scientists still don't know exactly how rogue waves occur, nor do they know how to predict them." Among the thousands of articles already written and published on this now worldwide well-known tragic case, it is encouraging to see Prof. Paul Taylor of Oxford contribute an article on CNN England entitled "Giant waves: Tall tales or alarming facts?" on March 5, complete with the famous 19th century wave painting by Hokusai. I guess, as a member of the general public, I particularly appreciate the interpretation of the theoretical phenomenon Prof. Taylor gave in terms that everyone should readily understand even without knowing what the nonlinear Schrödinger equation is: The simplest model to reproduce the basic properties of the simulations is the nonlinear Schrödinger equation -- an equation belonging to an area of applied mathematics investigated extensively over the last 40 years. The basic process is related to the local concentration of energy that occurs when large waves form. Large waves move faster than small ones, causing a group of large waves to contract along the direction of propagation.
Like squeezing a tube of toothpaste, the energy is forced out sideways -- extending the length of the wave crests, and appearing to an observer as "a wall of water." The Louis Majesty case is, unfortunately, a tragedy in which two lives were lost. But hopefully this well-known case can help remind everyone that we don't really know what is going on out there when freaque waves hit. More research and more measurements are urgently needed!
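Prof. Taylor's remark that large waves move faster than small ones is just the deep-water dispersion relation: phase speed $c=\sqrt{g\lambda/2\pi}$, so longer (larger) waves outrun shorter ones, while a group of waves travels at half the phase speed. A few representative numbers, with the wavelengths chosen arbitrarily for illustration:

```python
import math

# Deep-water dispersion: phase speed c = sqrt(g * wavelength / (2 pi)),
# group speed = c / 2.  Longer waves travel faster.
g = 9.81  # m/s^2
for wavelength in (50.0, 200.0, 450.0):            # metres
    c = math.sqrt(g * wavelength / (2 * math.pi))  # phase speed, m/s
    print(f"{wavelength:5.0f} m  phase {c:5.1f} m/s  group {c / 2:5.1f} m/s")
```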
Why is the wave function complex? I've collected some layman explanations but they are incomplete and unsatisfactory. However, in the book by Merzbacher, in the initial few pages he provides an explanation that I need some help with: that the de Broglie wavelength and the wavelength of an elastic wave do not show similar properties under a Galilean transformation. He basically says that both are equivalent under a gauge transform and also, separately, by Lorentz transforms. This, accompanied with the observation that $\psi$ is not observable, so there is no "reason for it being real". Can someone give me an intuitive prelude to what a gauge transform is and why it gives the same result as a Lorentz transformation in a non-relativistic setting? And eventually how in this "grand scheme" the complex nature of the wave function becomes evident.. in a way that a dummy like me can understand. A wavefunction can be thought of as a scalar field (has a scalar value in every point ($r,t$) given by $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{C}$ and also as a ray in Hilbert space (a vector). How are these two perspectives the same (this is possibly something elementary that I am missing, or getting confused by definitions and terminology; if that is the case I am desperate for help ;) One way I have thought about the above question is that the wave function can be equivalently written as $\psi:\mathbb{R^3}\times \mathbb{R}\rightarrow \mathbb{R}^2 $, i.e., since a wave function is complex, the Schrödinger equation could in principle be written equivalently as coupled differential equations in two real functions which satisfy the Cauchy-Riemann conditions. I.e., if $$\psi(x,t) = u(x,t) + i v(x,t)$$ and $u_x=v_t$ ; $u_t = -v_x$ and we get $$\hbar \partial_t u = -\frac{\hbar^2}{2m} \partial_x^2v + V v$$ $$\hbar \partial_t v = \frac{\hbar^2}{2m} \partial_x^2u - V u$$ (..in 1-D) If this is correct what are the interpretations of the $u,v$..
and why isn't it useful. (I am assuming that physical problems always have an analytic $\psi(r,t)$). removed the images. –  yayu Apr 5 '11 at 6:45 Hi Yayu. I've always found interesting a paper by Leon Cohen, "Rules of Probability in Quantum Mechanics", Foundations of Physics 18, 983 (1988), which approaches this question somewhat sideways, through characteristic functions. Cohen comes from a signal processing background, where Fourier transforms are very often a natural thing to do. Fourier transforms and complex numbers are of course pretty much joined at the hip. –  Peter Morgan Apr 5 '11 at 18:08 Here are a few straightforward observations that might be helpful. (1) You can describe standing waves with real-valued wavefunctions, e.g., one can almost always get away with this in low-energy nuclear structure physics. (2) The w.f. of a photon is simply the electric and magnetic fields. These are observable and real-valued. (3) If the electron w.f. were real and observable, the wavelength would have to be invariant under a Galilean boost, which would violate the de Broglie relation. (4) Even for real-valued waves, operators are complex, e.g., momentum in the classically forbidden region. –  Ben Crowell May 30 '13 at 23:08 10 Answers More physically than a lot of the other answers here (a lot of which amount to "the formalism of quantum mechanics has complex numbers, so quantum mechanics should have complex numbers"), you can account for the complex nature of the wave function by writing it as $\Psi (x) = |\Psi (x)|e^{i \phi (x)}$, where $\phi(x)$ is a real phase. It turns out that this phase factor is not directly measurable, but has many measurable consequences, such as the double slit experiment and the Aharonov-Bohm effect. Why are complex numbers essential for explaining these things?
Because you need a representation that both doesn't induce time and space dependencies in the magnitude $|\Psi (x)|^{2}$ (like multiplying by real phases would), AND that DOES allow for interference effects like those cited above. The most natural way of doing this is to multiply the wave amplitude by a complex phase. Is there any wave or vibration which cannot/has-to-be described with complex number formalism? –  Georg Oct 13 '11 at 10:03 Alternative discussion by Scott Aaronson: http://www.scottaaronson.com/democritus/lec9.html 1. From the probability interpretation postulate, we conclude that the time evolution operator $\hat{U}(t)$ must be unitary in order to keep the total probability equal to 1 at all times. Note that the wavefunction is not necessarily complex yet. 2. From the website: "Why did God go with the complex numbers and not the real numbers? Answer: Well, if you want every unitary operation to have a square root, then you have to go to the complex numbers... " $\hat{U}(t)$ must be complex if we still want a continuous transformation. This implies a complex wavefunction. Hence the operator should be $\hat{U}(t) = e^{i\hat{K}t}$ for Hermitian $\hat{K}$, in order to preserve the norm of the wavefunction. Personally I prefer Jerry Schirmer's answer because it requires fewer postulates and instead uses experimental fact directly. =) –  pcr Apr 8 '11 at 4:56 I very much like your answer, as much as Jerry's. But I would add two things: firstly, the square root thing is a bit obtuse: I would put it as follows for those like me who are a bit slow on the uptake: ....(ctd)... –  WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:44 "All eigenvalues of unitary operators have unit magnitude. So the only nontrivial unitary operator with all real eigenvalues is one with a mixture of +1s and -1s as eigenvalues, say $M$; otherwise it is the identity operator $I$.
Since $U(t)$ and its eigenvalues vary continuously, $U(t)$ cannot reach $M$ from its beginning value $U(0)=I$ unless at least one eigenvalue goes through all values on the unit semicircle to reach the value -1". ...(ctd)... –  WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:44 Secondly, the argument won't quite fly as is: there are nontrivial, real-matrix-valued unitary groups $\mathbf{SO}(N)$ (whose members have complex eigenvalues but nonetheless are real matrices) that will realise the $U(t)=\exp(i\,K\,t)$ in your argument, so quantum states can still be all real wavefunctions if they are real at $t=0$. I don't quite have a fix for this; maybe you could appeal to an experiment. It is a pretty argument, though, so I'll keep thinking. –  WetSavannaAnimal aka Rod Vance Aug 5 '13 at 4:46 Among other things, the OP reprinted a page of a textbook, asking what "it is all about". I think it is impossible to answer this kind of question because what the OP's problem is all about is totally undetermined, and the people who offer their answers could be writing their own textbooks, with no results. The wave function in quantum mechanics has to be complex because the operators satisfy things like $$ [x,p] = xp-px = i\hbar.$$ It's the commutator defining the uncertainty principle. Because the left-hand side is anti-Hermitian, $$ (xp-px)^\dagger = p^\dagger x^\dagger - x^\dagger p^\dagger = (px-xp) = -(xp-px),$$ it follows that if it is a $c$-number, its eigenvalues have to be pure imaginary. It follows that either $x$ or $p$ or both have to have some non-real matrix elements. Also, Schrödinger's equation $$i\hbar\,\, {\rm d/d}t |\psi\rangle = H |\psi\rangle$$ has a factor of $i$ in it. The equivalent $i$ appears in Heisenberg's equations for the operators and in the $\exp(iS/\hbar)$ integrand of Feynman's path integral. So the amplitudes inevitably have to come out as complex numbers. That's also related to the fact that eigenstates of energy and momenta etc.
have the dependence on space or time etc. $$\exp(Et/i\hbar)$$ which is complex. A cosine wouldn't be enough because a cosine is an even function (and the sine is an odd function), so it couldn't distinguish the sign of the energy. Of course, the appearance of $i$ in the phase is related to the commutator at the beginning of this answer. See also Why complex numbers are fundamental in physics. Concerning the second question, in physics jargon we choose to emphasize that a wave function is not a scalar field. A wave function is not an observable at all, while a field is. Classically, the fields evolve deterministically and can be measured by one measurement - but the wave function cannot be measured. Quantum fields are operators - but the wave function is not. Moreover, the mathematical similarity of a wave function to a scalar field in 3+1 dimensions only holds for the description of one spinless particle, not for more complicated systems. Concerning the last question, it is not useful to decompose complex numbers into real and imaginary parts exactly because "a complex number" is one number and not two numbers. In particular, if we multiply a wave function by a complex phase $\exp(i\phi)$, which is only possible if we allow the wave functions to be complex and we use the multiplication of complex numbers, physics doesn't change at all. It's the whole point of complex numbers that we deal with them as a single entity. thanks for answering. I have one question: not knowing about Feynman path integrals yet, I take it that what you are saying is the same thing as: if we make the transformation $\psi(r,t) = e^{i\frac{S(r,t)}{\hbar}}$ then the Schrödinger equation reduces to the classical Hamilton-Jacobi equation (if terms containing $i$ and $\hbar$ were negligible)? –  yayu Apr 5 '11 at 5:35 Dear yayu, thanks for your question.
First, the appearance of $\exp(iS/\hbar)$ in Feynman's approach is not a transformation of variables: the exponential is an integrand that appears in an integral used to calculate any transition amplitude. Second, $\psi$ is complex and $S$ is real, so $\psi=\exp(iS/\hbar)$ cannot be a "change of variables". You may write $\psi=\sqrt{\rho}\exp(i S/\hbar)$, in which case Schrödinger's equation may be (unnaturally) rewritten as two real equations, a continuity equation for $\rho$ and the Hamilton-Jacobi equation for $S$ with some extra quantum corrections. –  Luboš Motl Apr 5 '11 at 5:39 I edited my question removing the reprints and trying to state my problem without them.. it will take some time to think about some points you made in the answer already, though. –  yayu Apr 5 '11 at 6:00 This year-old question popped up unexpectedly when I signed in, and it's an interesting one. So I guess it's OK just to add an intuition-level "addendum answer" to the excellent and far more complete responses provided long ago. Your kernel question seems to be this: "Why is the wave function complex?" My intentionally informal answer is this: Because by experimental observation, the quantum behavior of a particle far more closely resembles that of a rotating rope (e.g. a skip rope) than it does a rope that only moves up and down. If each point in a rope marks out a circle as it moves, then a very natural and economical way to represent each point along the length of the rope is as a complex magnitude. You certainly don't have to do it that way, of course. In fact, using polar coordinates would probably be a bit more straightforward. However, the nifty thing about complex numbers is that they provide a simple and computationally efficient way to represent just such a polar coordinate system.
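(A tiny numerical aside, my own sketch rather than part of the original answer, with the radius and angles picked arbitrarily for illustration: a rotating "rope point" is one complex number, and rotating it is one multiplication, with magnitudes multiplying and phases adding.)

```python
import cmath
import math

# One point of the rotating "rope", stored as a single complex number:
# radius 2, phase 30 degrees (values assumed purely for illustration).
z = cmath.rect(2.0, math.pi / 6)

# Rotating the point is just multiplication by a unit-magnitude complex number.
rotation = cmath.exp(1j * math.pi / 3)   # rotate by a further 60 degrees
z2 = z * rotation

# Magnitudes multiply (|rotation| = 1, so the radius is unchanged)
# and phases add (30 + 60 = 90 degrees), with no explicit trigonometry.
print(abs(z2))           # ~2.0
print(cmath.phase(z2))   # ~pi/2, i.e. 90 degrees
```

That economy is exactly the "computationally efficient polar coordinates" point: the rotation never touches the magnitude, and the phase bookkeeping is automatic.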
You can get into the gory mathematical details of why, but suffice it to say that when early physicists started using complex numbers for just that purpose, their benefits continued even as the problems became far more complex. In quantum mechanics, their benefits became so overwhelming that complex numbers started being accepted pretty much as the "reality" of how to represent such mathematics. That conceptual merging of complex quantities with actual physics can throw off your intuitions a bit. For example, if you look at a moving skip rope there is no distinction between the "real" and "imaginary" axes in the actual rotations of each point in the rope. The same is true for quantum representations: It's the phase and amplitude that count, with other distinctions between the axes of the phase plane being a result of how you use those phases within more complicated mathematical constructions. So, if quantum wave functions behaved only like ropes moving up and down along a single axis, we'd use real functions to represent them. But they don't. Since they instead are more like those skip ropes, it's a lot easier to represent each point along the rope with two values, one "real" and one "imaginary" (and neither in real XYZ space) for its value. Finally, why do I claim that a single quantum particle has a wave function that resembles that of a skip rope in motion? The classic example is the particle-in-a-box problem, where a single particle bounces back-and-forth between the two X-axis ends of the box. Such a particle forms one, two, three, or more regions (or anti-nodes) in which the particle is more likely to be found. If you borrow Y and Z (perpendicular to the length of the box) to represent the real and imaginary amplitudes of the particle wave function at each point along X, it's interesting to see what you get.
It looks exactly like a skip-rope in action, one in which the regions where the electron is most likely to be found correspond one-for-one to the one, two, three, or more loops of the moving skip rope. (Fancy skip-ropers know all about higher numbers of loops.) The analogy doesn't stop there. The volume enclosed by all the loops, normalized to 1, tells you exactly what the odds are on finding the electron along any one section of the box along the X axis. Tunneling is represented by the electron appearing on both sides of the unmoving nodes of the rope, those nodes being regions where there is no chance of finding the electron. The continuity of the rope from point to point captures a rough approximation of the differential equations that assign high energy costs to sharp bends in the rope. The absolute rotation speed of the rope represents the total mass-energy of the electron, or at least can be used that way. Finally, and a bit more complicated, you can break those simple loops down into other wave components by using the Fourier transform. Any simple loop can also be viewed as two helical waves (like whipping a hose around to free it) going in opposite directions. These two components represent the idea that a single-loop wave function actually includes helical representations of the same electron going in opposite directions, at the same time. "At the same time" is highly characteristic of quantum functions in general, since such functions always contain multiple "versions" of the location and motions of the single particle that they represent. That is really what a wave function is, in fact: A summation of the simple waves that represent every likely location and momentum situation that the particle could be in. Full quantum mechanics is far more complex than that, of course. You must work in three spatial dimensions, for one thing, and you have to deal with composite probabilities of many particles interacting.
That drives you into the use of more abstract concepts such as Hilbert spaces. But with regards to the question of "why complex instead of real?", the simple example of the similarity of quantum functions to rotating ropes still holds: All of these more complicated cases are complex because, at their heart, every point within them behaves as though it is rotating in an abstract space, in a way that keeps it synchronized with immediately neighboring points in space. I'm not sure whether the OP is aware of this, but it emphasises your comment "it doesn't have to be this way". Real matrices of the form $\left(\begin{array}{cc}a&-b\\b&a\end{array}\right) = I a + i b$ where now $I$ is the $2\times2$ identity and $i= \left(\begin{array}{cc}0&-1\\1&0\end{array}\right)$ form a field wholly isomorphic to $\mathbb{C}$. In particular, a phase delay corresponds to multiplication by the rotation matrix $\exp\left(-i\,\omega\,t\right)=\left(\begin{array}{cc}\cos\omega t&-\sin \omega t\\ \sin\omega t&\cos\omega t\end{array}\right) = I \cos\omega t + i\sin\omega t$. –  WetSavannaAnimal aka Rod Vance Jul 29 '13 at 0:47 Rod, yes. A similar trick can be done for quaternions. I'm actually a quaternion bigot: I like to think of many of the complex numbers used in physics as really being overly generalized quaternions, ones in which our built-in 3D bias keeps us from noticing that the imaginary axis of a complex number is actually just a quaternion unit pointer in XYZ space. You lose a lot of representation richness by doing that, since for example you inadvertently abandon the intriguing option of treating changes in the quaternion-view i orientation as a local symmetry of XYZ space. –  Terry Bollinger Jul 31 '13 at 22:36 Although I guess from the OPs point of view, it would be wrong to call it a trick - there are many ways to encode the kinds of properties complex numbers do and this one IS complex numbers (an isomorphic field).
As for quaternions, yes, it's a shame that Hamilton, Clifford and Maxwell never held sway over Heaviside. –  WetSavannaAnimal aka Rod Vance Aug 2 '13 at 0:05 If the wave function were real, performing a Fourier transform in time would lead to pairs of positive-negative energy eigenstates. Negative energies with no lower bound are incompatible with stability. So, complex wave functions are needed for stability. No, the wave function is not a field. It only looks like it for a single particle, but for N particles, it is a function in 3N+1-dimensional configuration space. This question has been asked since Dirac. In fact, Dirac's answer is available for $100 from JSTOR, in a paper by Dirac from, I think, 1935. A recent answer from James Wheeler is that the zero-signature Killing metric of a new, real-valued, 8-dimensional gauging of the conformal group accounts for the complex character of quantum mechanics. The reference is Why Quantum Mechanics is Complex, James T. Wheeler, arXiv:hep-th/9708088. EDIT add: My Answer is GA-centric, and after the comments I felt the need to say some words about the beauty of Geometric Algebra: On the 2nd page of the Oersted Medal Lecture (link below): (3) GA reduces “grad, div, curl and all that” to a single vector derivative that, among other things, combines the standard set of four Maxwell equations into a single equation and provides new methods to solve it. Synthetic Geometry, Coordinate Geometry, Complex Variables, Quaternions, Vector Analysis, Matrix Algebra, Spinors, Tensors, Differential forms. It is one language for all physics. To the Question: WHY is the wave function complex? This Answer is not helpful: because the wave function is complex (or has an i in it). We have to try something different, not written in your book. In the abstracts I bolded the evidence that the papers are about the WHYs. If someone begs a fish I'll try to give a fishing rod.
I'm an old IT analyst who would be unemployed if I had not evolved. Physics is evolving too. end EDIT Recently I've found the Geometric Algebra, Grassmann, Clifford, and David Hestenes. I will not detail here the subject of the OP because each one of us needs to follow paths, find new ideas and take time to read. I will only provide some paths with part of the abstracts: Overview of Geometric Algebra in Physics Oersted Medal Lecture 2002: Reforming the Mathematical Language of Physics (a good start) In this lecture Hestenes is arguing for a reform of the way in which mathematics is taught to physicists. He asserts that using Geometric Algebra will make it easier to understand the fundamentals of physics, because the mathematical language will be clearer and more uniform. Hunting for Snarks in Quantum Mechanics Abstract. A long-standing debate over the interpretation of quantum mechanics has centered on the meaning of Schroedinger’s wave function ψ for an electron. Broadly speaking, there are two major opposing schools. On the one side, the Copenhagen school (led by Bohr, Heisenberg and Pauli) holds that ψ provides a complete description of a single electron state; hence the probability interpretation of ψψ* expresses an irreducible uncertainty in electron behavior that is intrinsic in nature. On the other side, the realist school (led by Einstein, de Broglie, Bohm and Jaynes) holds that ψ represents a statistical ensemble of possible electron states; hence it is an incomplete description of a single electron state. I contend that the debaters have overlooked crucial facts about the electron revealed by Dirac theory. In particular, analysis of electron zitterbewegung (first noticed by Schroedinger) opens a window to particle substructure in quantum mechanics that explains the physical significance of the complex phase factor in ψ. This led to a testable model for particle substructure with surprising support by recent experimental evidence.
If the explanation is upheld by further research, it will resolve the debate in favor of the realist school. I give details. The perils of research on the foundations of quantum mechanics have been foreseen by Lewis Carroll in The Hunting of the Snark! Abstract. A reformulation of the Dirac theory reveals that iħ has a geometric meaning relating it to electron spin. This provides the basis for a coherent physical interpretation of the Dirac and Schrödinger theories wherein the complex phase factor exp(−iϕ/ħ) in the wave function describes electron zitterbewegung, a localized, circular motion generating the electron spin and magnetic moment. Zitterbewegung interactions also generate resonances which may explain quantization, diffraction, and the Pauli principle. Universal Geometric Calculus (a course), and follow: III. Implications for Quantum Mechanics The Kinematic Origin of Complex Wave Functions Clifford Algebra and the Interpretation of Quantum Mechanics The Zitterbewegung Interpretation of Quantum Mechanics Quantum Mechanics from Self-Interaction Zitterbewegung in Radiative Processes On Decoupling Probability from Kinematics in Quantum Mechanics Zitterbewegung Modeling Space-Time Structure of Weak and Electromagnetic Interactions To keep more references together: Geometric Algebra and its Application to Mathematical Physics (Chris Thesis) (what led me to this amazing path was a paper by Joy Christian, 'Disproof of Bell Theorem') 'Bon voyage', 'good journey', 'boa viagem' Why the Down votes? –  Helder Velez Apr 5 '11 at 15:03 @Helder The downvotes are not from me, but I think your Answer doesn't much address the Question, so I think they are justifiable just on that count. More significantly, citing Hestenes is problematic unless you are very specific about what you are taking from him, in which case you could as easily cite someone else who does not make such inflated claims.
Too many of Hestenes' claims are not justifiable enough, and all of them have to be read critically to find what is interesting, which is time-consuming. Keep your wits about you as you follow the Hestenes path. –  Peter Morgan Apr 5 '11 at 18:26 @Helder; I have a great deal of respect for Dr. Hestenes' work, send me an email if you want to talk about it. His work directly reads on the complex nature of QM. I'll +1 your answer when I get my votes back (I always use them up). –  Carl Brannen Apr 6 '11 at 1:57 @Helder Velez I am one of your downvoters as I saw it as a very broad answer with lots of references and abstracts reproduced which have little to do with the specific context in which I tried to frame my question. Also, I am not interested in the interpretational aspect of Quantum Mechanics at all, at my stage. –  yayu Apr 6 '11 at 4:52 @Carl Brannen Do you upvote an answer just because it cites the work of someone you respect, despite the fact that it might be of little relevance to the question? –  yayu Apr 6 '11 at 4:57 From the Heisenberg Uncertainty Principle, if we know a great deal about the momentum of a particle we can know very little about its position. This suggests that our mathematics should have a quantum state that corresponds to a plane wave $\psi(x)$ with a precisely known momentum but entirely unknown position. A natural definition for the probability of finding the particle at the position $x$ is $|\psi(x)|^2$. This definition makes sense for both a real wave function and an imaginary wave function. For a plane wave to have no position information is to imply that $|\psi(x)|$ does not depend on position and so is constant. Therefore we must have $\psi$ complex; otherwise there would be no way to store the information "what is the momentum of the particle".
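A quick numerical check of the plane-wave argument above (my own sketch, not part of the answer, with an arbitrary wavenumber $k=3$ assumed for illustration): a complex plane wave $e^{ikx}$ keeps $|\psi|$ exactly constant while still storing the momentum in its phase gradient, whereas a real wave of the same wavelength cannot keep its modulus constant.

```python
import numpy as np

k = 3.0                               # assumed wavenumber, for illustration only
x = np.linspace(0.0, 2.0 * np.pi, 1000)

psi_complex = np.exp(1j * k * x)      # complex plane wave
psi_real = np.cos(k * x)              # real wave of the same wavelength

# The complex wave carries no position information: |psi| is constant...
print(np.ptp(np.abs(psi_complex)))    # ~0 (max - min of the modulus)

# ...yet the momentum is still stored in the phase: d(phase)/dx = k.
phase = np.unwrap(np.angle(psi_complex))
print(np.gradient(phase, x).mean())   # ~3.0

# The real wave cannot do both: its modulus varies with x.
print(np.ptp(np.abs(psi_real)))       # ~1 (modulus swings between 0 and 1)
```

So only the complex wave satisfies both requirements at once: flat $|\psi(x)|$ and a recorded momentum.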
So in my view, the complex nature of wave functions arises from the interaction between the necessity for (1) a probability interpretation, (2) the Heisenberg uncertainty principle, and (3) plane waves. Please clear some doubts for me. 1. The probability interpretation: I think it followed since the wavefunction was complex and physical meaning could only be attributed to a real value. If we make a construction $\psi^*\psi$ then we arrive at the continuity equation from the Schrödinger equation and the interpretation can now be made that the quantity $\rho=\psi^*\psi$ is the probability density. Starting from an interpretation like $\rho=\psi^*\psi$, I do not see any way to work backwards and convincingly argue that the amplitude $\psi$ must be complex. –  yayu Apr 6 '11 at 18:09 the uncertainty relations follow from the identification of the free particle as a plane wave. I am guessing your answer points in the right direction; I am working on (2) as suggested in Lubos' answer as well and trying to get why $\psi$ is complex valued as a consequence, however I fail to see how anything except (2) is relevant for showing it conclusively. –  yayu Apr 6 '11 at 18:16 @yayu: see my post--there are two essential experimental facts: 1) phase is not directly measurable; 2) interference effects happen in a broad range of quantum materials. It's hard to reconcile these things without using complex numbers. –  Jerry Schirmer Apr 7 '11 at 3:39 From the physical point of view, the wave function needs to be complex in order to explain the double-slit experiment, as is well explained in The Feynman Lectures on Physics, Vol. III. I suggest that you review chapters 1 & 3, where it is explained how $\psi$ has to be considered of a probabilistic nature, according to the pattern of interference, because "something" has to behave like a wave at the time of crossing through "each one" of the slits.
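The interference point can be made concrete with a toy two-slit calculation (a sketch of my own, assuming equal amplitudes $1/\sqrt{2}$ from each slit and a relative phase $\delta$ between the two paths; none of these values come from the answer): adding the complex amplitudes and then squaring produces fringes, while simply adding the probabilities $|\psi_1|^2 + |\psi_2|^2$ does not.

```python
import numpy as np

# Toy two-slit model: equal-amplitude contributions with a relative
# phase delta between the paths (all values assumed for illustration).
delta = np.linspace(0.0, 4.0 * np.pi, 9)

psi1 = np.full_like(delta, 1.0 / np.sqrt(2), dtype=complex)  # amplitude via slit 1
psi2 = psi1 * np.exp(1j * delta)                             # slit 2, phase-shifted

# Quantum rule: add the complex amplitudes, then square the modulus.
p_interference = np.abs(psi1 + psi2) ** 2    # = 1 + cos(delta): fringes from 0 to 2

# Classical-particle rule: add the probabilities.
p_classical = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # = 1 everywhere: no fringes

print(p_classical)
print(p_interference)
```

The relative phase is what carries the fringe pattern, which is why one wants a representation in which phases are first-class citizens.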
Furthermore, Bohm proclaims that the path of the particle (electron, photon, etc.) can be considered classical, so as a consequence you may watch this one, as it follows the rules already known at the macro... in that sense, you can see the next reference or this one to consider the covariance of the laws of mechanics. The wave function is formulated as a complex quantity to emphasize that one cannot measure the amplitude and the phase of the wave function simultaneously. If one were to formulate the Schrödinger equation as a system of coupled differential equations as you do in point 3, this feature of the wave function would not be manifest (see also my answer here http://physics.stackexchange.com/a/83219/1648). Dear asmaier, it is usually frowned upon to directly copy-paste identical answers. (The problem is if everybody starts to copy-paste identical answers en masse.) In general in such situations, please consider one of the following options: (i) Delete three of your answers. (ii) Flag for duplicate posts and delete three of your answers. (iii) If you think the four posts are not duplicates, then personalize each answer to address the four different specific questions. –  Qmechanic Nov 2 '13 at 23:13 Dear Qmechanic, isn't it also frowned upon to copy-paste identical comments? ;-) However I admit that my answers were too similar. So I tried to follow your suggestion (iii) and personalized my answers to address the specific question in a better way. However I still believe the quote from Dirac is very relevant and important, so I will refer to it in every answer. –  asmaier Nov 3 '13 at 14:50
Articles - 2011 | volume 7 | issue 2

Causal efficacy and the normative notion of sustainability science

Lin-Shu Wang, Department of Mechanical Engineering, Stony Brook University, 100 Nicolls Road, Stony Brook, NY 11794 USA (email:

Abstract: Sustainability science requires both a descriptive understanding and a normative approach. Modern science, however, began as purely descriptive knowledge, the core of which is that matter is dynamically inert and without purpose. The British philosopher David Hume concluded that the only type of causation in the material world is “efficient causation,” which supported this purposeless view of a deterministic world “governed” by the causal laws of dynamics. But Hume did not argue against the existence of efficacious causation, only the error of humans projecting the mind’s efficacy to objects. Though dynamically inert, a material object away from equilibrium can be thermodynamically reactive, suggesting the possibility of the object being efficaciously managed for a purpose. Furthermore, quantum physics has replaced classical physics as the fundamental theory of the material world. Its basic equation, the Schrödinger wave-equation, is deterministic but causally inert—it cannot govern, leaving the determinism door unlocked. This causal gap, according to the von Neumann-Stapp quantum measurement/activation theory, necessitates the pragmatic existence in an irreversible universe of the causal efficacy of mental effort and information management. The resulting “bigger” empirical science has room for “descriptive determinism” and “normative action,” both of which are utterly essential in formulating sustainability science as an integral discipline.

Keywords: theories, cause-effect relationships, interdisciplinary research

Citation: Wang L. 2011. Causal efficacy and the normative notion of sustainability science. Sustainability: Science, Practice, & Policy 7(2):30-40. Published online Oct 26, 2011.
http:///archives/vol7iss2/ Sustainability science requires both a descriptive knowledge and a normative approach. The knowledge of descriptive necessity presupposes the efficient causation of an orderly nature. The course of normative actions presumes, in addition, the efficacious causal decision that is based on sound descriptive knowledge. This article considers the existence and the nature of efficacious causation by first investigating the British philosopher David Hume’s philosophical position on efficacy and then placing the concept of efficacy in today’s frameworks of irreversible thermodynamics and Copenhagen-von Neumann quantum physics. The objective is the establishment of efficacious causation as a scientific concept. Hume’s empiricism was rediscovered when the climate of philosophy and science in the twentieth century turned toward naturalism, materialism, and reductionism. Materialist naturalism, which admitted only efficient causation, denying the ontological existence of efficacy, came to dominate and define the past century of science and technology as the guiding vision of an extraordinarily productive modern era of pragmatic quantum physics and molecular biology. At the dawn of the twenty-first century, however, “having benefited from the great gains in fundamental science that reductionism made possible, we again turn our attention to fundamental integration [the Humboldtian unity of nature]—whether called consilience…or sustainability science. Sustainability science returns to ask the question about the unity of nature” (Kates, 2000). Sustainability is a normative concept regarding not merely what is, but also what ought to be the human use of the Earth (Kates, 2001). The purposeful and normative nature of sustainability is meaningful only if human action is efficacious (Daly, 2008), a key concept in Hume’s argument against anthropomorphism. 
The first part of this article provides a critical philosophical analysis of Hume’s Treatise (Hume, 1985) to argue that the twentieth-century materialist naturalism inspired by Hume turns out to be the opposite of Hume’s real conclusion. Hume’s naturalism is not materialistic: he is not a reductionist. He denies the efficacy of physical causes, not the existence of efficacious anthropogenic power. Hume did not elucidate the nature of efficacy. Indeed no one could find efficacy within the framework of the classical physics of the pre-Carnot and pre-quantum-mechanics period.1 The second part of this article suggests a scientific explication of efficacy based on the second law of thermodynamics and the pivotal role of information (and information processing in the context of the quantum-neurophysical model of mind-body interaction).

The Sciences of Motion and Matter and the Philosophy of Causation

“Every science begins as philosophy and ends as art,” as Durant (1926) observed astutely. It may not be that every science began as philosophy, but certainly the best sciences began as philosophical problems. When a philosophical problem was solved successfully, it disappeared as a philosophical issue and became a scientific discipline. Such transformations started first with Galileo and Newton, who took the philosophical question of who is the unmoved mover(s) and transformed it into a scientific investigation of motion. The ancient philosophical controversy, between the atomist school of Leucippus and Democritus and the school of Socrates, Plato, and Aristotle, on the nature of matter remained unresolved after the Copernican-Newtonian revolution. However, since the beginning of the twentieth century, matter has been incontrovertibly shown to be made of molecules and atoms.
The science of matter that had begun as philosophical atomism ended as the pragmatic quantum theory of matter—and led to quantum electronic/nucleus gadgets such as transistors and magnetic resonance imaging (MRI) scanners. Another crucial question in science is the nature of causation. The philosophical debate of causation’s true meaning extends over millennia. In the Western philosophical tradition, the discussion stretches back to at least Aristotle. Over the centuries, many philosophers followed Aristotle and developed the view of cause and effect as a logical connection of some sort. This view equating causality with a logically necessary connection was overthrown by Hume, whose analysis concluded that the validity of science was based on empirical grounds, not logical necessity. And he could find only efficient causation in scientific laws, not power or efficacy. That is, Hume found no efficacious connection between event A and event B as described by scientific laws to justify claiming “A causes B.” As Russell (1945) noted, The strongest argument on Hume’s side is to be derived from the character of causal laws in physics. It appears that simple rules of the form “A causes B” are never to be admitted in science, except as crude suggestions in early stages. The causal laws by which such simple rules are replaced in well-developed sciences are so complex that no one can suppose them given in perception;…So far as the physical sciences are concerned, Hume is wholly in the right: such propositions as “A causes B” are never to be accepted, and our inclination to accept them is to be explained by the laws of habit and association. Hume’s great discovery is an example of the scientific capture of a broad philosophical concept of causation and the reduction of it into the narrower, but more precisely defined, scientific concept of efficient causation. 
Efficient causation may be scientifically defined as event-connection characterized by constant conjunction and invariable succession in conformity with [note: not "as governed by"] laws of nature. However, this process of capture did not transform causation into a purely scientific problem. The German philosopher Immanuel Kant's powerful response to Hume's skepticism was a philosophical one, and the philosophical issue of causation did not disappear. The topic remains a staple in contemporary philosophy. A true transformation awaits a successful scientific response.

Causality for Sustainability Science

Sustainability is a normative concept. Clark et al. (2005) have argued that the fundamental difference between the descriptive objective science practiced since Copernicus and Galileo and the new sustainability science calls for a second Copernican Revolution.2 The new science emerging from this required revolution, while it is deeply rooted in the "exact and objective" tradition,3 must transcend it in three crucial ways:

1. It introduces a systems approach beyond the limits of an object approach to comprehend the existence of the Earth system at "far from thermodynamic equilibrium."

2. It addresses the complexity and contingency of such a system as a result of its being brought about by efficacious causation, instead of being limited to "clockwork" regularity as described by laws of nature (efficient causality).

3. It accepts the epistemological insight derived from Copenhagen quantum physics that abandons the sharp "borderlines between observing subjects and scrutinized objects."

Others have made similar points by calling for the explicit need for a new kind of "knowledge to inform policy and management decisions" (Lubchenco, 1998).
In her 1997 Presidential Address to the American Association for the Advancement of Science, Jane Lubchenco (1998) stated that, in recognizing the urgent need for knowledge to understand and manage the biosphere,

I propose that the scientific community formulate a new Social Contract for science… [The Contract] should express a commitment to harness the full power of the scientific enterprise in discovering new knowledge, in communicating existing and new understanding to the public and to policy-makers, and in helping society move toward a more sustainable biosphere.

Gallopin et al. (2001) argued for a "science for the twenty-first century" that involves "both natural and social scientists in the investigation of the necessary steps to develop a sustainability science." Kay et al. (1999) referred to such science as post-normal science (a concept developed by Silvio Funtowicz and Jerome Ravetz4) and wrote that only post-normal science is able to provide "a coherent conceptual basis, in the workings of both natural systems and decision systems." Common to all three proposals is that sustainability science requires both a descriptive understanding of the biosphere and the normative management of a human-dominated Earth system (Vitousek et al. 1997). Here, management, or efficacious causation, is a necessary concept (Vitousek et al. 1997). Yet, materialist naturalists consider efficacy and purpose to be an illusion. Acknowledging other broad issues in these proposals, this article focuses on the crucial issue of causation and argues for replacing materialist naturalism with a new "ecological naturalism" for a "bigger" empirical science.

Causation and Hume's Naturalism

Because David Hume is the most influential philosopher ever to write on causation, it is necessary to begin with a critical discussion of his thought.
It is standard to interpret his great discovery as advancing two views: (1) the regularist view on laws of nature, that scientific propositions are mere descriptions of the uniformities or regularities in the world, since induction cannot establish the necessity of propositions;5 (2) the reductionist view (see Footnote 5), that causality is exhausted by (equal to) constant conjunction and invariable succession (i.e., as Curd & Cover (1998) put it, "causal connections are a species of lawlike connections"), which do not imply causal efficacy. It is reassuring for the author, as an engineer, to learn that "New Hume" scholarship (Wright, 1983; Kemp Smith, 2005) offers a revised interpretation of the philosopher's position on the first of the two views:

What, historically, until late in the Twentieth Century, was called the "Humean" account of Laws of Nature was a misnomer. Hume himself was no "Humean" as regards laws of nature. Hume, it turns out, was a Necessitarian—i.e., believed that laws of nature are in some sense "necessary" (although of course not logically necessary). His legendary skepticism was epistemological. He was concerned, indeed even baffled, how our knowledge of physical necessity could arise. What, in experience, accounted for the origin of the idea? What, in experience, provided evidence of the existence of the property? He could find nothing that played such a role. Yet, in spite of his epistemological skepticism, he persisted in his belief that laws of nature are (physical) necessities (Swartz, 2009).
Hume did answer his perplexity in the following way: "What principally gives authority to this system is, beside the undoubted arguments, upon which each part is founded, the agreement of these parts, and the necessity of one to explain another" (Hume, 1985). This method of achieving the physical necessity (Hume, 1985) or "truth" of scientific propositions is expressed today as consilience, the unity of knowledge linking together propositions/laws from different branches/disciplines, especially when forming a comprehensive theory. We achieve a degree of "certainty" through "consiliencing/concurring" the totality of our established propositions/laws from different disciplines by canonical principles—in addition to the application of induction to each single proposition alone. The principle of energy conservation is an outstanding example of a canonical principle that applies to all scientific disciplines, linking together various other propositions in individual disciplines. As Quine (1951) puts it, "The totality of our so-called knowledge or beliefs, from the most casual matters of geography and history to the profoundest laws of atomic physics or even of pure mathematics and logic, is a man-made fabric which impinges on experience only along the edges." A principal source of old misunderstandings is that Hume used necessity and power interchangeably in many cases, so that when he denied necessity he was actually denying power in physical causality, not physical necessity (or the doctrine of necessity).6 We now turn to our main concern with respect to Hume's position on causality (the second of the two views). In the Treatise, he made his most sustained and systematic analysis of the nature of physical causality. A summary of the analysis is succinctly captured by Fiske (1874): Physics knows nothing of causation except that it is the invariable and unconditional sequence of one event upon another.
July does not cause August, though it invariably precedes it. The invariable Necessitarian necessity does not come with the efficacy or power of a real causation; physical causality is only efficient causality, not efficacious causality. This subtle difference has wide-ranging implications. Hume's position on this matter, and whether his naturalism was reductionistic, is considered below. First, Hume only denied the power or efficacy of physical causality, not the idea of power:

And this we may observe to be the source of all the relations of interest and duty, by which men influence each other in society, and are placed in the ties of government and subordination. A master is such-a-one as by his situation, arising either from force or agreement, has a power of directing in certain particulars the actions of another, whom we call servant. A judge is one, who in all disputed cases can fix by his opinion the possession or property of anything betwixt any members of the society. When a person is possessed of any power, there is no more required to convert it into action, but the exertion of the will (Treatise 59-60).

It is this power or efficacy that he tried to find in the physical world of matter. The heart of the Treatise relevant to this search can be found in Book I: Part III: Section XIV: Of the Idea of Necessary Connexion. Here, Hume limits himself to considering the nature of causality in the physical world of pure matter. His conclusion is clear that there is no power or efficacy in the connection of material events: "matter is in itself entirely unactive, and depriv'd of any power" (Treatise 209). A large part of the misunderstanding of Hume derived from his use of the term necessity or necessary connexion: "I begin with observing that the terms of efficacy, agency, power, force, energy, necessity, connexion, and productive quality, are all nearly synonimous" (Treatise 206). This is unfortunate, because he did not always use them as synonymous terms.
A close reading reveals that he used "necessity" as a broader term than "power," with at least two meanings: as physical necessity in his doctrine of necessity and as power or efficacy.7 What he repudiated in this section is the power and efficacy, not the necessity, of physical causality. "Upon the whole, necessity is something, that exists in the mind, not in the objects" (Treatise 216) should be read as "power is something, which exists in the mind, not in the objects." He could not find in the natural occurrences of the material world the power and efficacy that he referred to in human affairs, and, as one of the greatest champions against anthropomorphism, warned against the projection of mind to objects. In this warning, Hume denied physical objects the anthropogenic power to govern:

I am, indeed ready to allow, that there may be several qualities both in material and immaterial objects, with which we are utterly unacquainted; and if we please to call these power or efficacy, 'twill be of little consequence to the world. But when, instead of meaning these unknown qualities, we make the terms of power and efficacy signify something, of which we have a clear idea, and which is incompatible with those objects, to which we apply it, obscurity and error begin to take place, and we are led astray by a false philosophy. This is the case, when we transfer the determination of the thought to external objects, and suppose any real intelligible connexion betwixt them; that being a quality, which can only belong to the mind that considers them (Treatise 218-219).

The error is the false philosophy of materialistic-reductionistic naturalism of causal closure (see below). Hume's more important conclusion is not that physical causality is exhausted by (equal to) constant conjunction and invariable succession—i.e., that there is no efficacy in physical causality—but that man should not project the efficacy in conducting human affairs onto the natural events of physical objects.
Hume did not reject efficacy: he was not a reductionist, and the twentieth-century reductionistic interpretation is the opposite of his real conclusion, as the next section further examines.

Causal Closure vs. Hume's Compatibilism

"Matter is in itself entirely unactive, and depriv'd of any power" (Treatise 209): matter is dynamically inert; it cannot strive or act. While God is omniscient and omnipotent, matter is not supposed to have any kind of power—definitely not the power to constrain. Hume was perfectly clear on this point (see below). A strange thing, then, happened to science: in the twentieth century, scientists and philosophers—in a move that showed the vestige of God's omnipotence—gave governing power to the laws of nature. That is, force-driven interactions exclusively produce physical effects. Nothing else can cause physical effects according to twentieth-century materialist naturalism. There is an interesting history to science's view about the kinds of things that can "cause" physical effects. Early Newtonian physics did not impose exacting restrictions on possible causes of physical effects; indeed, Newton himself was not a Newtonian. The philosophes of the eighteenth-century Enlightenment formulated the worldview now known as Newtonianism; a few Newtonians in the eighteenth century were converted into strict materialists. However, for the majority of scientists and philosophers, the first step of the conversion (hardening) of Newtonianism into naturalism took place, according to Papineau (2009), only in the nineteenth century with the discovery of the conservation of energy (and of evolutionary theory).
Even so: The nineteenth-century discovery of the conservation of energy continued to allow that sui generis non-physical forces can interact with the physical world, but…any such mental forces would need to be law-governed and thus amenable to scientific investigation along with more familiar physical forces…If mental or vital forces arose spontaneously, then there would be nothing to ensure that they never led to energy increases (Papineau, 2009; emphasis added). When, during the course of the twentieth century, “detailed physiological research gave no indication of any physical effects that cannot be explained in terms of basic physical forces,” Papineau (2009) continued, “belief in sui generis mental or vital forces had become a minority view. This led to [the second step in] the widespread acceptance of the doctrine now known as the ‘causal closure’ of the physical realm” (emphasis added). This step completed the conversion to materialist naturalism. In a causally closed physical universe, a human lived in “a world that is deaf to his music, just as indifferent to his hopes as it is to his suffering or his crimes” (Monod, 1971). The idea of a causally closed world also coincided with the transformation of the triumph of atomism into radical atomism—the philosophy according to which the absolute and timeless properties of elementary particles capture eternal reality (Smolin, 1997). The presuppositions of causal closure and radical atomism are, of course, metaphysical, not propositions based on scientific evidence. 
Smolin (1997) remarks, "I want to suggest that perhaps the answer is that the belief in radical atomism—in the existence of a final and absolute theory that governs the behavior of the elementary particles [thus, life and man]—is as much a religious as it is scientific aspiration." In a causally closed world without efficacy, matter is the sole determinant of physical effects.8 But, even so, "determination" is not governing, nor is "necessity" constraint. It is logically fallacious to repudiate anthropomorphism by stripping away from material objects their anthropogenic power, as Hume argued, and then to turn 180 degrees and impart to material objects that very anthropogenic power so that matter can govern over a causally closed world. Collingwood (1940) argued, "The so-called 'materialism' which was the favorite metaphysical doctrine of these anti-metaphysicians was in consequence only in name a repudiation of anthropomorphism; really it was anthropomorphic at the core." Hume's doctrine of necessity is not the necessity of hard determinism, but of "soft determinism" ("descriptive determinism") or compatibilism: "By liberty, then, we can only mean a power of acting or not acting, according to the determinations of the will… this hypothetical liberty is universally allowed to belong to everyone who is not a prisoner and in chains" (Hume, 2000) and "Liberty, when opposed to necessity, not to constraint, is the same thing with chance [see comment below with regard to the lack of understanding of chance and statistical physics in the eighteenth century]; which is universally allowed to have no existence" (Hume, 2000). Liberty is compatible with necessity, and so is action; they are opposed only to constraint, not necessity. Here, Hume explicitly holds that physical necessity does not equal constraint, or restraint, or governing, squarely contradicting hard determinism.
I thus define efficacious causation as event-connection in conformity with laws of nature resulting from action that utilizes matter's internal thermodynamic "force." This does not mean that matter has any kind of "free will." However, it does mean that matter can be managed by the action of living things requiring no "heavy lifting" or mental force (as explained in the section below). What Hume's philosophy does say is that it finds no efficacy in the material world alone; therefore, efficacy requires a nonmaterial origination. The following sections take a critical look at the two principal elements of the "man-made fabric" of causal closure: the absence of mental force in accordance with the energy-conservation law and the materialism of classical physics. The first part is an original proposal/result of my research and the second part is a proposal by Stapp (2001) based on orthodox Copenhagen-von Neumann quantum physics (see also Schwartz et al. 2005).

Comprehending Efficacious Causation in Terms of Force-Driven and Spontaneity-Driven Interactions

The first law of energy conservation is completely consistent with the dynamical laws of physical forces. Dynamical forces, however, are not the only cause for physical effects. Matter is dynamically inert. However, matter away from equilibrium can be thermodynamically reactive or subject to spontaneous changes (Prigogine & Stengers, 1984). These changes result from another kind of "forces"—entropic "forces." The first law is also consistent with this "force" in the form of the principle of the increase of entropy, or the second law, which accounts for spontaneous natural processes (Wang, 2007; 2011). Their consistency is assured, in the first place, by the fact that the two laws were simultaneously formulated by Rudolf Clausius between the years 1850 and 1865 with the assistance of Lord Kelvin.
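The consistency of the two laws, as Clausius formulated them, can be stated compactly. The following is a textbook sketch in standard thermodynamic notation, not the author's own formalism:

```latex
% First law: energy is conserved; heat \delta Q and work \delta W change
% the internal energy U but never create or destroy energy.
\mathrm{d}U \;=\; \delta Q + \delta W
% Second law (Clausius inequality): the entropy S of an isolated system
% never decreases -- this inequality, not the conservation law,
% is what singles out which processes occur spontaneously.
\mathrm{d}S \;\ge\; \frac{\delta Q}{T}, \qquad \mathrm{d}S_{\mathrm{isolated}} \;\ge\; 0
```

The first relation is an equality (a conservation law); the second is an inequality, which is why it can account for the directedness of spontaneous natural processes.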
One aspect of the new "forces" remains completely in accord with the existing materialist naturalism: the mechanistic laws of dynamical forces, the great law of conservation of energy-mass, and the entropy principle remain a consistent set of core ideas for a worldview of a causally closed world, as seen in Figure 1. In this world, physical effects are caused by physical or efficient causality alone.

Figure 1 Materialist naturalism: a causally closed world of efficient causation.

However, the full implication of the second law points to a fundamental departure from twentieth-century naturalism by creating room for managed application of the entropic "forces," i.e., heat and spontaneity (Wang, 2011). As shown in Figure 2, once we admit the possibility of managed events taking advantage of the irreversibility of nature, the implication of the mechanistic laws of nature becomes much greater. As Poincaré (1946) pointed out, "In the deterministic hypothesis there is [in every event] only a single possibility [of invariable succession]." In view of the second law, however, the full implication of the mechanistic laws of nature is to provide a possibility space vastly bigger than that of the causally closed world (see Figure 2). Compton (1967) wrote,

A set of known physical conditions is not adequate to specify precisely what a forthcoming event will be. These conditions, insofar as they can be known, define instead a range of possible events from among which some particular event will occur. When one exercises freedom, by his act of choice he is himself adding a factor not supplied by the physical conditions and is thus himself determining what will occur…Thus the way is cleared for our great task. We are free to shape our destiny. Science opens vast new opportunities.

Figure 2 Ecological naturalism: a world of efficient and efficacious causation.

That is, hidden in the quantum mechanical laws lay unimaginably rich possible options.
But those laws provide only the possibility space for what can happen, not the construction of the reality that actually does happen. Ellis (2008) has made the same argument in recent times. Likewise, the conservation law of energy-mass is merely able to provide its version of possibility space. To make possibility into actuality requires both the entropic “forces” of nature and their management through information. Only spontaneity (the entropic “force”) can determine whether, in accordance with mechanistic and conservation laws, a given possibility is entropically possible in the macroscopic reality space, for example, whether a given chemical reaction or a given fission reaction is entropically possible. The physical possibilities (though not their efficacious construction) of macroscopic events are jointly “determined” by (1) the mechanistic laws, (2) the mass-energy conservation law, and (3) the entropic “forces.” Moreover, what makes a spontaneity-driven or entropic forces-driven causation efficacious is (4) the information management of the entropic “forces”—the last of the four great ideas necessary for comprehending the real world. Hume wrote that power is something which exists in the mind. Actually, power is associated with information management with or without the mind; the possibility of information management exists for any living organism, including ones without minds. The ability to manage information is one of the characteristics of life. Let us consider for now (and in the next section) the example of efficacious causation in terms of the mind. The brain, the physical “mind,” cannot bend a spoon because the spoon lacks its own spontaneity engine. 
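The standard thermodynamic criterion for whether a given reaction is "entropically possible" can be sketched as follows (textbook notation, offered here only as an illustration of the claim above):

```latex
% Spontaneity is decided by total entropy production; at constant
% temperature T and pressure p this is equivalent to a decrease in
% the Gibbs free energy G of the system:
\Delta S_{\mathrm{total}} \;=\; \Delta S_{\mathrm{sys}} + \Delta S_{\mathrm{surr}} \;>\; 0
\quad\Longleftrightarrow\quad
\Delta G \;=\; \Delta H - T\,\Delta S_{\mathrm{sys}} \;<\; 0
```

The mechanistic and conservation laws alone cannot fix the sign of \(\Delta G\); that is the precise sense in which only spontaneity (the entropic "force") determines whether a chemical or fission reaction allowed by those laws can actually proceed.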
But, the brain can set in motion by a remote-control signal an internal combustion engine's ignition, or command a dog to run to you, because both the engine and the dog have their own "spontaneity reservoirs" or "entropic 'forces' reservoirs."9 Entropic "forces" exist to be managed easily; this is not the case with dynamical forces. The brain generates no forces for physical effects; it is spontaneity that physically produces physical effects or physical events—the brain and information-signal merely set events in motion. This is what Hume meant: "When a person is possessed of any power, there is no more required to convert it into action, but the exertion of the will" (Treatise). Efficacious causation is the exertion of the will without having to do any physical heavy lifting. Scientists in the eighteenth century did not have an understanding of the second law of thermodynamics or of the nature of chance or statistical physics. It is not surprising that Hume was wrong about chance and had no conception of entropic "force." The identification of the possibility of efficacious causation with the second law here is consistent with the view of Ladyman & Ross (2007):

Because all the special sciences take measurements at scales where real patterns conform to the Second Law of Thermodynamics, all special sciences traffic in locally dynamic real patterns. Thus it is useful for them to keep epistemic books by constructing their data into things, local forces, and cohesion relations. As we will discuss in the next chapter, causation is yet another notional-world conceptual instrument ubiquitous in special science. The great mistake of much traditional philosophy has been to try to read the metaphysical structure of the world directly from the categories of this notional-world book-keeping tool…In this context, reductionistic scientific realism has the odd consequence of denying their existence.
Taking directed, irreversible transmission of influence, plus conservation principles, for granted is rational when you know you're taking all your measurements from regions governed by the Second Law. This practice conforms so closely to what is enjoined by a folk metaphysical endorsement of causation that no serious risk of misunderstanding arises if the scientist helps herself to the culturally inherited folk idea of causation. But she need not thereby endorse folk metaphysics."

Efficacious causation is when life and humans make use of matter, the spontaneity of the irreversible world, and information to prescribe and construct the reality space, the creative living world, within the possibility spaces provided by quantum mechanistic science and the mass-energy conservation principle. Efficacious causation is how every inventor, every engineer-designer, and every Nobel scientist unlocks the secrets hidden in vast possibility spaces. Efficacious causation is what creates stable and progressively more complex systems at far from thermodynamic equilibrium in the reality space. One thermodynamic insight of efficacious causation is the concept of free heat in addition to the existing concept of waste heat (Wang, 2011). Instead of the inevitability of matter degenerating into waste heat, it is possible for life to harness free heat. Passive cooling and heating of buildings, maintained at far from equilibrium, are concrete examples of harnessing free heat (Meierhans, 1993; Wang, 2011). As I have written elsewhere, "The objective of thermodynamic management is not only the reduction/recovery of waste heat, but also the generation of free heat. Otherwise, no matter how successful man's attempt in reducing/recovering waste heat is, it merely delays the eventual unsustainable doom" (Wang, 2011).
The Causal Gap for Information Management: The von Neumann-Stapp Quantum Theory

What makes a spontaneity-driven or entropic forces-driven causation efficacious is the information management of the entropic "forces." But, "who" is the nonmaterial manager? The French philosopher René Descartes formulated the dualism of the mind and the body, with the mind as the manager. The important thing then was to inquire how the mind became aware (i.e., conscious and self-conscious) and how it succeeded in acting upon the body. A satisfactory answer cannot be found in classical physics, which is not surprising because classical physics is completely materialistic. Just as the doctrine of causal closure and radical atomism reached its apotheosis, classical physics was shown to be empirically incorrect in the first quarter of the twentieth century and was replaced by quantum physics, a seminal discovery that turned the whole concept of what science is inside out. The core idea of classical physics was to describe the "world out there" with no reference to "our thought in here." In quantum mechanics, "our thought in here" or "acts of knowing" become a fundamental reference point of the science:

Von Neumann capitalized upon the key Copenhagen move of bringing human choices into the theory of physical reality. But, whereas the Copenhagen approach excluded the bodies and brains of the human observers from the physical world that they sought to describe, von Neumann demanded logical cohesion and mathematical precision, and was willing to follow where this rational approach led. Being a mathematician, fortified by the rigor and precision of his thought, he seemed less intimidated than his physicist brethren by the sharp contrast between the nature of the world called for by the new mathematics and the nature of the world that the genius of Isaac Newton had concocted (Stapp, 2007).
Von Neumann carried out a detailed and mathematically rigorous analysis of the process of measurement to remove the ambiguity in the positioning of the "Heisenberg cut" by shifting parts involved in measurement into the physically described realm. Step by step, all parts of the universe that are conceived to be composed of atomic particles and the physical fields associated with them are positioned below the cut, and left above the cut is a residual experiential reality that von Neumann called the "abstract ego." The physical processes below the cut are called "Process 2," which evolve deterministically according to the Schrödinger equation between probing actions. The probing action by the abstract ego is called "Process 1," which picks out from a potential continuum of overlapping possibilities of Process 2 some particular discrete possibilities. The third process, "Process 3," is "choice on the part of nature" between "yes" and "no"—the statistically specified (based on quantum rules) outcome triggered by the probing action. Process 2, "governed" by the Schrödinger equation, is deterministic, but causally inert because of Processes 1 and 3. Stapp (2001; 2007) and his neuropsychologist collaborators (Schwartz et al. 2005) have extended von Neumann quantum mechanics into a new formalism, which contains a radical conceptual move insofar as quantum measurement is understood to involve efficacious conscious human choices. In particular, he postulates the mind's effect on the activities of the brain as the connection between effort, attention, and so-called quantum Zeno effects. Stapp calls this "active Process 1" (Schwartz et al. 2005) or "Process zero" (Stapp, 2007). In Stapp's words, "I…will use here the more apt name 'process zero,' because this process must precede von Neumann's process 1. It is the absence from orthodox quantum theory of any description on the workings of process zero that constitutes the causal gap (emphasis added) in contemporary orthodox physical theory."
It is this “latitude” offered by the quantum formalism in connection with the “freedom of experimentation” (Bohr, 1958) that blocks the causal closure of the physical, and thereby releases human actions from the immediate bondage of the physically described aspects of reality (Stapp, 2007). Quantum measurement problems become quantum measurement/activation problems. It suffices to indicate the direction of this research program by reciting the abstract and the conclusion of Schwartz et al. (2005): Contemporary physical theory brings directly and irreducibly into the overall causal structure certain psychologically described choices made by human agents about how they will act. This key development in basic physical theory is applicable to neuroscience, and it provides neuroscientists and psychologists with an alternative conceptual framework for describing neural processes …[The new framework] is able to represent more adequately than classic concepts the neuroplastic mechanisms relevant to the growing number of empirical studies of the capacity of directed attention and mental effort to systematically alter brain function. Materialist ontology draws no support from contemporary physics and is in fact contradicted by it…These orthodox quantum equations, applied to human brains in the way suggested by John von Neumann, provide for a causal account of recent neuropsychological data. In this account brain behaviour that appears to be caused by mental effort is actually caused by mental effort: the causal efficacy of mental effort is no illusion (Schwartz et al. 2005). The twentieth-century philosopher Anscombe (1971) made the insightful remark, “The laws’ being deterministic does not tell us whether ‘determinism’ is true.” The Schrödinger wave equation is deterministic but causally inert—it cannot strive or govern, leaving the determinism door unlocked. 
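The three von Neumann processes discussed above have a standard mathematical form, sketched here in density-matrix notation as an illustration (Stapp's papers use essentially this formulation, though details of notation vary):

```latex
% Process 2: deterministic unitary evolution of the state \rho
% under the Hamiltonian H (the von Neumann equation).
i\hbar\,\frac{\partial\rho}{\partial t} \;=\; [H,\rho]
% Process 1: the probing action associated with a projection operator P,
% which partitions the possibilities into "yes" (P) and "no" (1-P).
\rho \;\longrightarrow\; P\rho P \;+\; (1-P)\,\rho\,(1-P)
% Process 3: nature's statistically specified "yes"/"no" answer, with
% the probability of "yes" given by the quantum (Born) rule.
p(\text{yes}) \;=\; \mathrm{Tr}(P\rho P)
```

Nothing in these equations specifies which projection \(P\) is posed, or when; Process 2 evolves deterministically between probing actions but is silent on the probing itself. That silence is the "causal gap" the text describes.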
The von Neumann-Stapp causal gap necessitates the pragmatic existence, in an irreversible universe, of the causal efficacy of mental effort, which can be and will be subject to evidence-based scientific testing.

Looking Ahead and Up

The causal gap in the mathematics of quantum theory has the potential to overthrow the doctrine of causal closure of the material realm, which is fundamentally quantum mechanical. Rival theories to the von Neumann-Stapp quantum theory exist. One may look forward to a day when causation will disappear as a philosophical dispute and become a scientific subject. The concept of causation, which has room for "normative action" and "descriptive determinism," and the second law will link together sustainability science and normal science (Kuhn, 1962). The resulting ecological naturalism (Figure 2) accords equal objectivity to irreversible entropic "forces" and reversible dynamical forces, and to the world of information and mind as well as the world of matter and brain. "How can we know if any worldview is true? The ultimate test is the effect of the worldview in question upon the survival of its holders," notes Robert Zubrin (2007). Ecological naturalism will help us to formulate what ought to be the human use of the Earth. Murray (2003) commented on the materialist naturalism worldview this way, It may well be that the period from the Enlightenment through the twentieth century will eventually be seen as a kind of adolescence of the species…In the manner of adolescents, humans reacted injudiciously, thinking that they possessed wisdom that invalidated all that had gone before. But adolescents, eventually, will grow up and become explorers. In his sublime essay, Zubrin (2007) writes, Humans are the descendants of explorers. Four hundred million years ago, our distant ancestors forsook the aquatic environment in which they had evolved to explore and colonize the alien world above the shoreline.
It is remarkable when you think about it—sacrificing the security of the waters for the hazards of the land…On land it is possible to build fires. On land it is possible to see the stars. Out of the security of a causally closed cave humans can learn to see the stars and hear the music.

1 In the period of pre-Carnot physics, a material object is dynamically inert, without the understanding that it can be thermodynamically reactive when it is away from equilibrium.
2 The term "second Copernican Revolution" was first used by Schellnhuber (1999).
3 The exact and objective tradition is the Galilean and Newtonian tradition of descriptive science. In the twentieth century this tradition was already critiqued by Michael Polanyi (1958), who argued that even in the exact sciences, "knowing" is an art, of which the skill of the knower is a logically necessary part (a point not unlike one made by Kant, who argued that "knowing" originates from presupposition or a priori categories of the mind). The tendency to make knowledge impersonal in our culture has split fact from value, science from humanity. Polanyi made the case for substituting for the impersonal ideal of objective science an alternative ideal, the post-critical philosophical ideal.
4 See Funtowicz & Ravetz (1993).
5 See the glossary of Curd & Cover (1998) for the definition of "regularist." Both "regularity theory of laws" and "reductionism" are defined there. Two additional points are that: (1) philosophical terms in the article follow usage in this glossary to the extent that is possible; (2) Curd & Cover also point out that the regularity theory of causation is "a conjunction" of both the regularist view/claim on laws and the reductionist view/claim on causation.
6 See Hume (2000) Chapter 8, Of Liberty and Necessity.
7 Millican (2008) writes, "This suggestion can be backed up with an analysis of Hume's usage of the various terms concerned, which reveals an interesting and significant pattern in both main discussions of the idea in question. In Treatise I iii 14, he refers to the idea of 'power' or 'efficacy' roughly three times more often than he does to the idea of 'necessity' or 'necessary connexion', and the only parts of that long discussion where he prefers the latter terms are in the section's title, the very first paragraph (as quoted in §1 above), and in a short passage of less than 250 words between the end of paragraph 20 and paragraph 22 (T 165-6). Shortly before this passage he introduces talk of 'power or connexion' (T 163), without any clear implication of strict necessity. In Enquiry VII, Hume refers numerous times to the idea of 'power or necessary connexion', though mainly in parts of his discussion where he is introducing (E 63, E 64) or reviewing (E 73, 78) the main stages of the argument, and in the section's original title. Within the body of the argument itself, he almost always prefers either 'power' alone or various combinations of 'power', 'force' and 'energy', never referring to the idea of necessity or necessary connexion except in one short passage, the first half of a single paragraph (E 75) in which he refers initially to 'this idea of a necessary connexion among events' and later to 'the idea of power and necessary connexion'."
8 This is the definition of "determinism" or "hard determinism," a term invented by William James.
9 An internal combustion engine is of course not alive as a dog is. What is common between the two is that both have their own spontaneity reservoirs. The agent is the only one that must be alive and thus can set both events in motion. The dog can just as well be a mechanical dog with its own battery, and it can be "commanded" by a remote control.

Anscombe, G. 1971. Causality and Determination: An Inaugural Lecture.
New York: Cambridge University Press.
Bohr, N. 1958. Atomic Physics and Human Knowledge. New York: Wiley.
Clark, W., Crutzen, P., & Schellnhuber, H. 2005. Science for global sustainability: towards a new paradigm. In H. Schellnhuber, P. Crutzen, W. Clark, M. Claussen, & H. Held (Eds.), Earth System Analysis for Sustainability. pp. 1–28. Cambridge, MA: MIT Press.
Collingwood, R. 1940. An Essay on Metaphysics. New York: Oxford University Press.
Compton, A. 1967. The Cosmos of Arthur Holly Compton. New York: Knopf.
Curd, M. & Cover, J. 1998. Philosophy of Science: The Central Issues. New York: W. W. Norton.
Daly, H. 2008. Ecological Economics and Sustainable Development. Northampton, MA: Edward Elgar.
Durant, W. 1926. The Story of Philosophy. Chicago: Garden City Publishing.
Ellis, G. 2008. On the nature of causation in complex systems. Transactions of the Royal Society of South Africa 63(1):69–84.
Fiske, J. 1874. Outlines of Cosmic Philosophy, Volume 1. Boston: James R. Osgood & Company.
Gallopin, G., Funtowicz, S., O'Connor, M., & Ravetz, J. 2001. Science for the 21st century: from social contract to the scientific core. International Journal of Social Science 53(168):219–229.
Hume, D. 1985 [1739;1740]. A Treatise of Human Nature. New York: Penguin.
Hume, D. 2000 [1748]. An Enquiry Concerning Human Understanding. New York: Oxford University Press.
Kates, R. 2000. Sustainability Science. World Academies Conference on Transition to Sustainability in the 21st Century. May 18, Tokyo, Japan.
Kates, R. 2001. Queries on the human use of the Earth. Annual Review of Energy and the Environment 26:1–26.
Kay, J., Regier, H., Boyle, M., & Francis, G. 1999. An ecosystem approach for sustainability: addressing the challenge of complexity. Futures 31(7):721–742.
Kemp Smith, N. 2005 [1941]. The Philosophy of David Hume. New York: Palgrave MacMillan.
Ladyman, J. & Ross, D. 2007. Every Thing Must Go: Metaphysics Naturalized. New York: Oxford University Press.
Meierhans, R. 1993.
Slab cooling and earth coupling. ASHRAE Transactions 99(2):511–518.
Millican, P. 2008. Hume's Idea of Necessary Connexion: Of What Is It the Idea? 35th Annual Hume Society Conference. August 6–10, University of Akureyri, Iceland.
Monod, J. 1971. Chance and Necessity. New York: Penguin.
Murray, C. 2003. Human Accomplishment: The Pursuit of Excellence in the Arts and Sciences, 800 BC to 1950. New York: HarperCollins.
Papineau, D. 2009. Naturalism. December 15, 2010.
Poincaré, H. 1946 [1913]. The Foundations of Science: Science and Hypothesis, the Value of Science and Method. Garrison, NY: The Science Press.
Polanyi, M. 1958. Personal Knowledge: Towards a Post-critical Philosophy. Chicago: University of Chicago Press.
Prigogine, I. & Stengers, I. 1984. Order Out of Chaos. New York: Bantam.
Quine, W. 1951. Two dogmas of empiricism. Philosophical Review 60(1):20–43.
Russell, B. 1945. A History of Western Philosophy. New York: Simon & Schuster.
Schellnhuber, H. 1999. "Earth system" analysis and the second Copernican revolution. Nature 402(2):C19–C23.
Schwartz, J., Stapp, H., & Beauregard, M. 2005. Quantum physics in neuroscience and psychology: a neurophysical model of mind-body interaction. Philosophical Transactions of the Royal Society B 360(1468):1309–1327.
Stapp, H. 2001. Quantum theory and the role of mind in nature. Foundations of Physics 31(10):1465–1499.
Stapp, H. 2007. Mindful Universe: Quantum Mechanics and the Participating Observer. New York: Springer.
Swartz, N. 2009. Laws of Nature. January 10, 2010.
Vitousek, P., Mooney, H., Lubchenco, J., & Melillo, J. 1997. Human domination of Earth's ecosystems. Science 277(5325):494–499.
Wang, L.-S. 2007. The nature of spontaneity-driven processes. International Journal of Ecodynamics 2(4):231–244.
Wang, L.-S. 2011. Waste heat or free heat. International Journal of Thermodynamics (submitted June 10, 2011).
Wright, J. 1983. The Sceptical Realism of David Hume. Minneapolis: University of Minnesota Press.
Zubrin, R. 2007.
Science, religion, and human purpose: a cosmic perspective. Skeptic 13(1):27–31. ©2011 Wang. CC-BY Attribution 4.0 License.
I remember from introductory quantum mechanics that the hydrogen atom is one of those systems that we can solve without too many (embarrassing) approximations. After a number of postulates, QM succeeds at giving the right numbers for the energy levels, which is very good news. We got rid of the orbit that the electron was supposed to follow in a classical way (Rutherford-Bohr), and we got orbitals, which are the probability distributions for finding the electron in space. So this tiny charged particle doesn't emit radiation, notwithstanding its "accelerated motion" (Larmor), which is precisely what happens in the real world. I know that certain "classic questions" are pointless in the realm of QM, but giving no answers makes people ask the same questions over and over.

• If the electron doesn't follow a classical orbit, what kind of alternative "motion" can we imagine?
• Is it logical that while the electron is around the nucleus it has to move in some way or another?
• Is it correct to describe the electron's motion as being in different places around the nucleus at different instants, in a random way?

Related: physics.stackexchange.com/q/2860/2451 and links therein. –  Qmechanic Apr 9 '12 at 16:08
Related: Planetary model of atom still valid? –  voix Apr 9 '12 at 16:35
I'm more curious how the electron moves without producing EM radiation. But someone will tell me that it doesn't have a lower ground state to decay to. I know... but it's still a moving charge. I think a more satisfactory model would be that coherent states aren't "moving" in the classical sense, because the concept of moving is a limit-case approximation of QM to begin with. –  Alan Rominger Apr 10 '12 at 13:15

4 Answers

The problem is that you're thinking of the electron as a particle. Questions like "what orbit does it follow" only make sense if the electron is a particle that we can follow. But the electron isn't a particle, and it isn't a wave either.
Our current best description is that it's an excitation in a quantum field (philosophers may argue about what this really means - the rest of us have to get on with life). An electron can interact with its environment in ways that make it look like a particle, e.g. a spot on a photographic plate, or in ways that make it look like a wave, e.g. the double-slit experiment, but it's the interaction that is particle-like or wave-like, not the electron. If we stick to the Schrödinger equation, which gives a good description of the hydrogen atom, then this gives us a wavefunction that describes the electron. The ground state has momentum zero, so the electron doesn't move at all in any classical sense. Excited states have a non-zero angular momentum, but you shouldn't think of this as a point-like object spinning around the atom. The angular momentum is a property of the wavefunction as a whole and isn't concentrated at any particular spot.

Why do you say "momentum" zero? At n=1, l=0, m=0 it is the angular momentum that is zero. There is still energy in the orbital, which I think can be interpreted as momentum in some manner. –  anna v Apr 9 '12 at 16:10
I'm aware that some classical "prejudice" has to be dropped; but given the excited states, even if we don't have a trajectory over time for the electron, can we conjecture a kind of non-accelerated, non-classical (weird) "motion"? Or is the wave-particle duality unbalanced towards waves? –  Marco De Lellis Apr 9 '12 at 20:33

You might be helped by reading carefully the Wikipedia article on the hydrogen atom, particularly the figures. The electron described in the orbital has not only a specific energy but also momentum and angular momentum, though it is only the operators of energy, angular momentum, and spin that give the eigenvalues for n, l, and m. So what is random is not the electron per se but the probability of finding it when you try to measure it in some way.
It is moving with 1/137 of the velocity of light, according to the linked article, as suggested by the pictures of the orbitals. Such a fast-moving particle would look like a cloud anyway, even if this were possible classically. Yes, we just cannot pin it down; think of the uncertainty principle, organized by a solution of Schrödinger's equation. No, not random: it is organized by the probabilities of the orbital it happens to be in.

I have read the linked article, and thanks for your answer. However, something sounds uncomfortable. The electron is moving at 1/137 c, so it shows a classical property, speed. If we consider the wave part of the wave-particle duality, we can imagine this wave travelling at that speed in space around the nucleus, drawing a weird pattern (in places where the orbital is non-zero). However, no traces of this moving wave are found in the Schrödinger solutions (the wave functions!) for the hydrogen atom, why? –  Marco De Lellis Apr 9 '12 at 21:19
I believe that it is the solutions of the Schrödinger equation that predict those patterns. Why do you say no traces? The probability functions are highly oscillatory in theta and phi, except the l=0, m=0 ones. These are probability functions. One can only see waves by their interference, after all. Or think of them as "standing waves". –  anna v Apr 10 '12 at 4:08
"Standing waves": this is really interesting. Could the electron be described as a kind of stationary wave? And a wave of what? Does this wave describe only the probability to find it, or something deeper, like a wave of its properties like mass, charge, ...? Thanks for your patience with me. –  Marco De Lellis Apr 10 '12 at 5:45
Just the probability of finding it. Once in a potential well, the electron itself is in a virtual state.
Virtual means that it is not possible to measure mass or charge except collectively with the atom, as required by energy and charge conservation. One does not have a moment-by-moment snapshot of the electron, or the nucleus at that. Only of the probability of what you will find if you take a snapshot. –  anna v Apr 10 '12 at 6:08
This is a difficult concept to swallow when one's intuition comes from classical physics, which is our everyday experience, but it is true because it has been experimentally checked in very many cases, not just the hydrogen atom. The uncertainty principle and the probabilistic nature of nature are the cornerstone of modern physics. Not random, there are envelopes to the uncertainty, but probabilistic. –  anna v Apr 10 '12 at 6:10

That probably depends on what exactly you call motion, but I would highly recommend an excellent book, And Yet It Moves by Mark P. Silverman, and chapter 3 in particular. If you replace the electron (which is a stable particle, that is, a particle without age and individual history) in a simple atom with a negative muon (which decays quickly, its lifetime being some 2 microseconds in its rest frame), you would expect that the measured lifetime (in the atom or lab rest frame) will be longer if the muon moves at relativistic velocities, due to time dilatation, exactly as experiments confirm.

Well, this is quite interesting. I knew of "older" muons from cosmic rays, but if I understand correctly, they made a "setup" with a muon moving at relativistic speed near some nucleus. To experience a longer life, it has to move in some semi-classical way, is that correct? –  Marco De Lellis Apr 10 '12 at 16:47
@Marco Yes, you are right. Atoms where a muon replaces the electron are prepared and the muon lifetime measured. Its length corresponds to the expected semi-classical velocity of the muon in such an exotic atom and special-relativistic time dilatation. –  Leos Ondra Apr 10 '12 at 20:25

Think of the electron as a non-point particle.
In a hydrogen atom it is "smeared" around the proton. Its TOTAL momentum is zero; it is neither moving (as a whole) nor accelerating, hence in the classical limit it does not emit radiation. If an electron in an atom is a "cloud" rather than a point, it IS in different points at the same time. That means that there is a non-zero distribution of "electron density" smeared around the proton. The electron is not "moving" as a whole, but we can say that "parts of the cloud" are moving, since they carry non-zero momentum resulting in total angular momentum. This is a consequence of the fact that the integral of the electron's momentum density over a limited volume in space is non-zero.
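The "cloud" picture can be made quantitative. Below is a small numerical sketch (mine, not from the answers above; atomic units, so the Bohr radius is a0 = 1) of the hydrogen 1s radial probability density P(r) = 4r²e^(−2r/a0)/a0³: the most probable radius comes out at the Bohr radius and the mean radius at 1.5 a0 — a definite probability distribution with no classical trajectory behind it.

```python
import numpy as np

a0 = 1.0  # Bohr radius (atomic units)

def integrate(f, x):
    # composite trapezoidal rule
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Radial probability density of the hydrogen 1s state:
# P(r) dr is the probability of finding the electron between r and r + dr
r = np.linspace(0.0, 40.0 * a0, 200001)
P = 4.0 * r**2 * np.exp(-2.0 * r / a0) / a0**3

norm = integrate(P, r)        # total probability, should be 1
r_mean = integrate(r * P, r)  # mean radius <r> = 1.5 a0
r_peak = r[np.argmax(P)]      # most probable radius = a0

print(norm, r_mean, r_peak)
```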
I've already posted this question on Physics.SE, but I thought it could be useful to also ask here. No problem if the moderators ask me to delete this thread... But, please, have mercy! :-D Let $\Omega \subseteq \mathbb{R}^N$ be a domain and let $V,m:\Omega \to \mathbb{R}$ be two measurable and sufficiently summable functions. When one considers the eigenvalue problem for the operator $\mathcal{L}:=-\Delta +V$ w.r.t. the weight $m$, i.e.: $$\tag{P} \begin{cases} -\Delta u(x) + V(x)\ u(x) = \lambda\ m(x)\ u(x) &\text{, in } \Omega\\ u(x)=0 &\text{, on } \partial \Omega , \end{cases}$$ the function $V$ is usually called potential and the function $m$ is called weight. Then, a weighted eigenvalue of $\mathcal{L}$ w.r.t. $m$ is any number $\lambda \in \mathbb{R}$ s.t. (P) has at least one nontrivial weak solution $u\in H_0^1(\Omega)$, i.e.: $$\forall \phi \in C_c^\infty(\Omega),\quad \int_\Omega \nabla u\cdot \nabla \phi\ \text{d} x + \int_\Omega V\ u\ \phi\ \text{d} x = \lambda\ \int_\Omega m\ u\ \phi\ \text{d} x\; .$$ My questions are: 1. Is there any reasonable physical interpretation of those eigenvalues? And what is it? 2. Why do the functions $V$ and $m$ have those names? Moreover, I heard that the $p$-Laplacian (i.e., $\Delta_p u := \operatorname{div} (|\nabla u|^{p-2}\ \nabla u)$, which reduces to the usual Laplacian when $p=2$) can be used to model nonlinear elasticity or something like that; therefore I have also the following question: What about any possible physical meaning of the nonlinear weighted eigenvalues coming from the problem: $$\tag{Q} \begin{cases} -\Delta_p u(x) + V(x)\ |u(x)|^{p-2}\ u(x) = \lambda\ m(x)\ |u(x)|^{p-2}\ u(x) &\text{, in } \Omega\\ u(x)=0 &\text{, on } \partial \Omega , \end{cases}$$ where $1 < p < \infty$? Many thanks in advance, guys!

In the linear case, the spectrum of $-\Delta + V(\cdot )$ is strictly connected to the study of standing waves for Schrödinger equations.
There are hundreds of papers about this. For the $p$-Laplace operator, the problem is nonlinear and mostly open. As far as I know, it is chiefly a mathematical problem. –  Siminore Jul 3 '12 at 11:00
Related: mathoverflow.net/questions/66418/… –  Willie Wong Jul 3 '12 at 11:22
@Siminore : Do you know if there is any paper about sufficient conditions for the first eigenvalue of $-\Delta +V$ to be $> 0$? –  Pacciu Jul 4 '12 at 14:08
You could start from these lecture notes. Some conditions are hidden there :-) math.nsysu.edu.tw/~amen/posters/pankov.pdf –  Siminore Jul 4 '12 at 14:28
I'm absolutely going to take a look at those notes! Thank you. –  Pacciu Jul 4 '12 at 14:51

1 Answer

The left-hand side $-F=-\Delta u + Vu$ models force in a material where points try to pull their neighbors towards their local value in a spring-like manner, but also get pulled down by an external force that increases linearly with displacement (for example, other springs or long-range gravity). Now suppose $m$ is understood as a mass (density), and consider Newton's law $F=ma=mu_{tt}$. We see that solving $-\Delta u + Vu=\lambda m u$ is finding modes such that $$-u_{tt}=\lambda u.$$ In other words, modes that will stay the same shape, but simply grow (complex-)exponentially in time. Here is a 1-dimensional diagram: (diagram omitted)

Edit: To clarify, the extension to the $p$-Laplacian, $\nabla \cdot |\nabla u|^{p-2} \nabla u=\nabla \cdot k(u,x) \nabla u$, models a material where the force of molecules pulling on their neighbors is $p$-nonlinear in the displacement gradient. In other words, the "springs" in the above diagram are not ideal.

Great! Thank you Nick. –  Pacciu Jul 4 '12 at 11:54
Switched up an important minus sign. Acceleration should be in the opposite direction to the position, not the same. Everything else is still good though. –  Nick Alger Jul 4 '12 at 21:50
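For readers who want to experiment with problem (P) numerically, here is a minimal one-dimensional sketch (my own, not from the thread): discretize $-u'' + V(x)u = \lambda\, m(x)\, u$ on $(0,1)$ with central differences, and reduce the generalized problem $Au = \lambda Bu$ to a standard symmetric one. With $V \equiv 0$ and $m \equiv 1$ the eigenvalues should approach $(k\pi)^2$, which gives an easy correctness check.

```python
import numpy as np

# Interior grid for (0, 1) with Dirichlet conditions u(0) = u(1) = 0
n = 400
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

V = np.zeros(n)  # potential V(x); zero here so the answer is known
m = np.ones(n)   # weight m(x); any positive weight works the same way

# A u = lam * B u, with A = (-Laplacian + diag(V)) and B = diag(m)
A = (np.diag(2.0 / h**2 + V)
     + np.diag(-np.ones(n - 1) / h**2, 1)
     + np.diag(-np.ones(n - 1) / h**2, -1))

# Symmetrize: the eigenvalues of B^(-1/2) A B^(-1/2) solve the weighted problem
Bs = np.diag(1.0 / np.sqrt(m))
lams = np.linalg.eigvalsh(Bs @ A @ Bs)

print(lams[:3])  # close to (k*pi)^2 = 9.87, 39.5, 88.8
```

For a genuinely positive, non-constant weight $m$ the same reduction applies unchanged, which is the point of writing $B^{-1/2} A B^{-1/2}$ rather than special-casing $m \equiv 1$.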
Magnetic potential
From Wikipedia, the free encyclopedia

The term magnetic potential can be used for either of two quantities in classical electromagnetism: the magnetic vector potential, A (often simply called the vector potential), and the magnetic scalar potential, ψ. Both quantities can be used in certain circumstances to calculate the magnetic field.

The more frequently used magnetic vector potential, A, is defined such that the curl of A is the magnetic field B. Together with the electric potential, the magnetic vector potential can be used to specify the electric field E as well. Therefore, many equations of electromagnetism can be written either in terms of E and B, or in terms of the magnetic vector potential and the electric potential. In more advanced theories such as quantum mechanics, most equations use the potentials and not the E and B fields.

The magnetic scalar potential ψ is sometimes used to specify the magnetic H-field in cases when there are no free currents, in a manner analogous to using the electric potential to determine the electric field in electrostatics. One important use of ψ is to determine the magnetic field due to permanent magnets when their magnetization is known. With some care the scalar potential can be extended to include free currents as well.

Magnetic vector potential

The magnetic vector potential A is a vector field defined along with the electric potential ϕ (a scalar field) by the equations:[1]

$$\mathbf{B} = \nabla \times \mathbf{A}\,,\quad \mathbf{E} = - \nabla \phi - \frac { \partial \mathbf{A} } { \partial t }\,,$$

where B is the magnetic field and E is the electric field. In magnetostatics, where there is no time-varying charge distribution, only the first equation is needed. (In the context of electrodynamics, the terms "vector potential" and "scalar potential" are used for "magnetic vector potential" and "electric potential", respectively.
In mathematics, vector potential and scalar potential have more general meanings.)

Defining the electric and magnetic fields from potentials automatically satisfies two of Maxwell's equations: Gauss's law for magnetism and Faraday's law. For example, if A is continuous and well-defined everywhere, then it is guaranteed not to result in magnetic monopoles. (In the mathematical theory of magnetic monopoles, A is allowed to be either undefined or multiple-valued in some places; see magnetic monopole for details.) Starting with the above definitions:

$$\nabla \cdot \mathbf{B} = \nabla \cdot (\nabla \times \mathbf{A}) = 0$$

$$\nabla \times \mathbf{E} = \nabla \times \left( - \nabla \phi - \frac { \partial \mathbf{A} } { \partial t } \right) = - \frac { \partial } { \partial t } (\nabla \times \mathbf{A}) = - \frac { \partial \mathbf{B} } { \partial t }.$$

Alternatively, the existence of A and ϕ is guaranteed from these two laws using Helmholtz's theorem. For example, since the magnetic field is divergence-free (Gauss's law for magnetism), i.e. ∇ • B = 0, an A always exists that satisfies the above definition.

The vector potential A is used when studying the Lagrangian in classical mechanics and in quantum mechanics (see Schrödinger equation for charged particles, Dirac equation, Aharonov–Bohm effect). In the SI system, the units of A are V·s·m⁻¹ and are the same as those of momentum per unit charge. Although the magnetic field B is a pseudovector (also called an axial vector), the vector potential A is a polar vector.[2] This means that if the right-hand rule for cross products were replaced with a left-hand rule, but without changing any other equations or definitions, then B would switch signs, but A would not change.
This is an example of a general theorem: the curl of a polar vector is a pseudovector, and vice versa.[2]

Gauge choices

The above definition does not define the magnetic vector potential uniquely because, by definition, we can arbitrarily add curl-free components to the magnetic potential without changing the observed magnetic field. Thus, there is a degree of freedom available when choosing A. This freedom is known as gauge invariance.

Maxwell's equations in terms of vector potential

Using the above definition of the potentials and applying it to the other two Maxwell's equations (the ones that are not automatically satisfied) results in a complicated differential equation that can be simplified using the Lorenz gauge, where A is chosen to satisfy:[1]

$$\nabla\cdot\mathbf{A} + \frac{1}{c^2} \frac{\partial \phi}{\partial t} = 0\,.$$

Using the Lorenz gauge, Maxwell's equations can be written compactly in terms of the magnetic vector potential A and the electric scalar potential ϕ:[1]

$$\nabla^2\mathbf{A} - \frac{1}{c^2}\frac{\partial^2 \mathbf{A}}{\partial t^2} = - \mu_0 \mathbf{J}\,,\qquad \nabla^2\phi - \frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} = - \frac{\rho}{\epsilon_0}\,.$$

In other gauges, the equations are different. A different notation to write these same equations (using four-vectors) is shown below.

Calculation of potentials from source distributions

The solutions of Maxwell's equations in the Lorenz gauge (see Feynman[1] and Jackson[3]) with the boundary condition that both potentials go to zero sufficiently fast as they approach infinity are called the retarded potentials, which are the magnetic vector potential A(r, t) and the electric scalar potential ϕ(r, t) due to a current distribution of current density j(r, t_r), charge density ρ(r, t_r), and volume Ω (within which ρ and j are non-zero at least at some times and some places):

$$\mathbf A (\mathbf r , t) = \frac{\mu_0}{4\pi}\int_\Omega \frac{\mathbf J (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'\,.$$
$$\phi (\mathbf r , t) = \frac{1}{4\pi\epsilon_0}\int_\Omega \frac{\rho (\mathbf r' , t_r)}{|\mathbf r - \mathbf r'|}\, \mathrm{d}^3\mathbf r'$$

where the fields are calculated at time t and position vector r, r' is a point in the charge or current distribution (also the integration variable, within volume Ω), and t_r is the retarded time

$$t_r = t-\frac{|\mathbf r - \mathbf r'|}{c}\,.$$

There are a few notable things about A and ϕ calculated in this way:

• (The Lorenz gauge condition): $\nabla\cdot\mathbf{A} + \frac{1}{c^2}\frac{\partial\phi}{\partial t} = 0$ is satisfied.
• The position of the source point r' only enters the equation as a scalar distance from r' to r. The direction from r' to r does not enter into the equation. The only thing that matters about a source point is how far away it is.
• The integrand uses retarded time. This simply reflects the fact that changes in the sources propagate at the speed of light.
• The equation for A is a vector equation. In Cartesian coordinates, the equation separates into three scalar equations:[4]

$$A_x(\mathbf{r},t) = \frac{\mu_0}{4\pi} \int_\Omega\frac{j_x(\mathbf{r}',t_r)}{|\mathbf r - \mathbf r'|}\,{\rm d}^3\mathbf{r}'$$

$$A_y(\mathbf{r},t) = \frac{\mu_0}{4\pi} \int_\Omega\frac{j_y(\mathbf{r}',t_r)}{|\mathbf r - \mathbf r'|}\,{\rm d}^3\mathbf{r}'$$

$$A_z(\mathbf{r},t) = \frac{\mu_0}{4\pi} \int_\Omega\frac{j_z(\mathbf{r}',t_r)}{|\mathbf r - \mathbf r'|}\,{\rm d}^3\mathbf{r}'$$

In this form it is easy to see that the component of A in a given direction depends only on the components of j that are in the same direction. If the current is carried in a long straight wire, A points in the same direction as the wire. In other gauges the formula for A and ϕ is different — for example, see Coulomb gauge for another possibility.

Depiction of the A field

Representing the Coulomb gauge magnetic vector potential A, magnetic flux density B, and current density j fields around a toroidal inductor of circular cross section.
Thicker lines indicate field lines of higher average intensity. Circles in the cross section of the core represent the B-field coming out of the picture; plus signs represent the B-field going into the picture. ∇ • A = 0 has been assumed. See Feynman[5] for the depiction of the A field around a long thin solenoid.

Assuming quasi-static conditions, i.e.

$$\frac{\partial E}{\partial t}\rightarrow 0\,,\quad \nabla \times \mathbf{A} = \mathbf{B} \,,$$

the lines and contours of A relate to B like the lines and contours of B relate to j. Thus, a depiction of the A field around a loop of B flux (as would be produced in a toroidal inductor) is qualitatively the same as the B field around a loop of current.

The figure to the right is an artist's depiction of the A field. The thicker lines indicate paths of higher average intensity (shorter paths have higher intensity so that the path integral is the same). The lines are drawn to (aesthetically) impart the general look of the A-field. The drawing tacitly assumes ∇ • A = 0, true under any one of the following assumptions:

• the Coulomb gauge is assumed
• the Lorenz gauge is assumed and there is no distribution of charge, ρ = 0
• the Lorenz gauge is assumed and zero frequency is assumed
• the Lorenz gauge is assumed and a non-zero frequency that is low enough to neglect $\frac{1}{c^2}\frac{\partial\phi}{\partial t}$ is assumed

Electromagnetic four-potential

In the context of special relativity, it is natural to join the magnetic vector potential together with the (scalar) electric potential into the electromagnetic potential, also called the "four-potential". One motivation for doing so is that the four-potential is a mathematical four-vector. Thus, using standard four-vector transformation rules, if the electric and magnetic potentials are known in one inertial reference frame, they can be simply calculated in any other inertial reference frame.
Another, related motivation is that the content of classical electromagnetism can be written in a concise and convenient form using the electromagnetic four-potential, especially when the Lorenz gauge is used. In particular, in abstract index notation, the set of Maxwell's equations (in the Lorenz gauge) may be written (in Gaussian units) as follows:

$$\partial^\mu A_\mu = 0 \,,\qquad \Box A_\mu = \frac{4 \pi}{c} J_\mu$$

where □ is the d'Alembertian and J is the four-current. The first equation is the Lorenz gauge condition while the second contains Maxwell's equations. The four-potential also plays a very important role in quantum electrodynamics.

Magnetic scalar potential

The scalar potential is another useful quantity in describing the magnetic field, especially for permanent magnets. In a simply connected domain where there is no free current,

$$\nabla\times\mathbf{H} = 0\,,$$

hence we can define a magnetic scalar potential ψ as[6]

$$\mathbf{H} = -\nabla\psi\,.$$

Using the definition of H,

$$\mathbf{B} = \mu_0(\mathbf{H} + \mathbf{M})\,,$$

together with ∇ • B = 0, it follows that

$$\nabla^2\psi = -\nabla\cdot\mathbf{H} = \nabla\cdot\mathbf{M}\,.$$

Here ∇ • M acts as the source for the magnetic field, much like ∇ • P acts as the source for the electric field. So, analogously to bound electric charge, the quantity

$$\rho_m = -\nabla\cdot\mathbf{M}$$

is called the bound magnetic charge. If there is free current, one may subtract the contribution of free current per the Biot–Savart law from the total magnetic field and solve the remainder with the scalar potential method.

References

• Duffin, W.J. (1990). Electricity and Magnetism, Fourth Edition. McGraw-Hill.
• Feynman, Richard P.; Leighton, Robert B.; Sands, Matthew (1964). The Feynman Lectures on Physics, Volume 2. Addison-Wesley. ISBN 0-201-02117-X.
• Jackson, John David (1998). Classical Electrodynamics, Third Edition. John Wiley & Sons.
• Jackson, John David (1999). Classical Electrodynamics (3rd ed.). John Wiley & Sons. ISBN 0-471-30932-X.
• Kraus, John D. (1984). Electromagnetics (3rd ed.). McGraw-Hill. ISBN 0-07-035423-5.
• Ulaby, Fawwaz (2007). Fundamentals of Applied Electromagnetics, Fifth Edition. Pearson Prentice Hall. pp. 226–228. ISBN 0-13-241326-4.
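As a numerical check on the vector-potential integral quoted earlier, taken in its static limit, the following sketch (my own, with illustrative values for the current and wire length) evaluates $A_z(\rho)$ for a straight line current on the z-axis and then recovers the familiar azimuthal field $B_\phi = \mu_0 I / (2\pi\rho)$ from $\mathbf B = \nabla\times\mathbf A$, which for $\mathbf A = A_z(\rho)\,\hat{\mathbf z}$ reduces to $B_\phi = -\partial A_z/\partial\rho$.

```python
import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability (SI)
I = 2.0             # steady current in amperes (illustrative value)
L = 100.0           # wire half-length in metres, long compared to rho

def A_z(rho):
    # Static limit of the retarded-potential integral for a line current:
    # A_z(rho) = (mu0 I / 4 pi) * integral of dz' / sqrt(rho^2 + z'^2)
    zp = np.linspace(-L, L, 400001)
    f = 1.0 / np.sqrt(rho**2 + zp**2)
    dz = zp[1] - zp[0]
    return mu0 * I / (4.0 * np.pi) * float(np.sum(f[1:] + f[:-1]) * dz / 2.0)

# B = curl A has only an azimuthal component here: B_phi = -dA_z/drho
rho, d = 0.5, 1e-3
B_phi = -(A_z(rho + d) - A_z(rho - d)) / (2.0 * d)

print(B_phi, mu0 * I / (2.0 * np.pi * rho))  # the two agree closely
```

Note that A_z itself depends on the (arbitrary) wire length L through a logarithm, but its radial derivative, the physical field B, does not — a small illustration of gauge-type freedom in the potential.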
In quantum physics, the Schrödinger technique, which involves wave mechanics, uses wave functions, mostly in the position basis, to reduce questions in quantum physics to a differential equation. Werner Heisenberg developed the matrix-oriented view of quantum physics, sometimes called matrix mechanics. The matrix representation is fine for many problems, but sometimes you have to go past it, as you're about to see.

One of the central problems of quantum mechanics is to calculate the energy levels of a system. The energy operator is called the Hamiltonian, H, and finding the energy levels of a system breaks down to finding the eigenvalues of the problem:

$$H\,|\psi\rangle = E\,|\psi\rangle$$

Here, E is an eigenvalue of the H operator. Here's the same equation in matrix terms:

$$\begin{pmatrix} H_{11} & H_{12} & \cdots \\ H_{21} & H_{22} & \cdots \\ \vdots & \vdots & \ddots \end{pmatrix} \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \end{pmatrix} = E \begin{pmatrix} \psi_1 \\ \psi_2 \\ \vdots \end{pmatrix}$$

The allowable energy levels of the physical system are the eigenvalues E, which satisfy this equation. These can be found by solving the characteristic polynomial, which derives from setting the determinant of the above matrix to zero, like so:

$$\det(H - E\,I) = 0$$

That's fine if you have a discrete basis of eigenvectors — if the number of energy states is finite. But what if the number of energy states is infinite? In that case, you can no longer use a discrete basis for your operators and bras and kets — you use a continuous basis.

Representing quantum mechanics in a continuous basis is an invention of the physicist Erwin Schrödinger. In the continuous basis, summations become integrals. For example, take the following relation, where I is the identity matrix:

$$I = \sum_n |\,n\,\rangle \langle\, n\,|$$

It becomes the following:

$$I = \int \mathrm{d}^3 r\; |\mathbf{r}\rangle \langle\mathbf{r}|$$

And every ket can be expanded in a basis of other kets, like this:

$$|\psi\rangle = \int \mathrm{d}^3 r\; |\mathbf{r}\rangle \langle\mathbf{r}|\psi\rangle$$

Take a look at the position operator, R, in a continuous basis. Applying this operator gives you r, the position vector:

$$\mathbf{R}\,|\mathbf{r}\rangle = \mathbf{r}\,|\mathbf{r}\rangle$$

In this equation, applying the position operator to a state vector returns the locations, r, that a particle may be found at.
You can expand any ket in the position basis like this:

|ψ⟩ = ∫ d³r |r⟩⟨r|ψ⟩

And this becomes:

|ψ⟩ = ∫ d³r ψ(r) |r⟩

Here's a very important thing to understand: ψ(r) = ⟨r|ψ⟩ is the wave function for the state vector |ψ⟩; it's the ket's representation in the position basis. Or in common terms, it's just a function where the quantity |ψ(r)|² d³r represents the probability that the particle will be found in the region d³r centered at r.

The wave function is the foundation of what's called wave mechanics, as opposed to matrix mechanics. What's important to realize is that when you talk about representing physical systems in wave mechanics, you don't use the basis-less bras and kets of matrix mechanics; rather, you usually use the wave function, that is, bras and kets in the position basis. Therefore, you go from talking about |ψ⟩ to talking about ψ(r) = ⟨r|ψ⟩. This wave function is just a ket in the position basis.

So in wave mechanics, H|ψ⟩ = E|ψ⟩ becomes the following:

⟨r|H|ψ⟩ = E⟨r|ψ⟩

You can write this as the following:

⟨r|H|ψ⟩ = Eψ(r)

But what is ⟨r|H|ψ⟩? It's equal to Hψ(r). The Hamiltonian operator, H, is the total energy of the system, kinetic (p²/2m) plus potential (V(r)), so you get the following equation:

(p²/2m + V(r)) ψ(r) = Eψ(r)

But the momentum operator is

p = −iħ∇

Therefore, substituting the momentum operator for p gives you this:

(1/2m)(−iħ∇)² ψ(r) + V(r)ψ(r) = Eψ(r)

Using the Laplacian operator, you get this equation:

−(ħ²/2m)∇²ψ(r) + V(r)ψ(r) = Eψ(r)

You can rewrite this equation as the following (called the Schrödinger equation):

[−(ħ²/2m)∇² + V(r)] ψ(r) = Eψ(r)

So in the wave mechanics view of quantum physics, you're now working with a differential equation instead of multiple matrices of elements. This all came from working in the position basis. When you solve the Schrödinger equation for ψ(r), you can find the allowed energy states for a physical system, as well as the probability that the system will be in a certain position state. Note that, besides wave functions in the position basis, you can also give a wave function in the momentum basis, or in any number of other bases.
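Once you are in the position basis, the Hamiltonian is a differential operator, and you can approximate it on a grid and recover a finite matrix eigenvalue problem. A minimal sketch (my own, assuming units ħ = m = 1 and an infinite square well of width L = 1, for which the exact levels are E_n = n²π²/2):

```python
import numpy as np

N = 500                       # number of interior grid points
L = 1.0                       # well width (assumed)
dx = L / (N + 1)

# Kinetic operator -(1/2) d^2/dx^2 via central differences; the wave
# function is forced to vanish at the walls (Dirichlet boundary conditions).
main = np.full(N, 1.0 / dx**2)
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)     # approximate energy levels, ascending
# E[0] should be close to pi^2/2, E[1] to 4*pi^2/2, and so on
```

Refining the grid (larger N) drives the numerical eigenvalues toward the exact n²π²/2 spectrum.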
An example to illustrate how indistinguishable particles can behave as if they are distinguishable

Imagine two electrons bound inside two hydrogen atoms that are far apart. The Pauli exclusion principle says that the two electrons cannot be in the same quantum state because electrons are indistinguishable particles. But the exclusion principle doesn't seem at all relevant when we discuss the electron in a hydrogen atom, i.e. we don't usually worry about any other electrons in the Universe: it is as if the electrons are distinguishable. Our intuition says they behave as if they are distinguishable if they are bound in different atoms, but as we shall see this is a slippery road to follow. The complete system of two protons and two electrons is made up of indistinguishable particles, so it isn't really clear what it means to talk about two different atoms. For example, imagine bringing the atoms closer together: at some point there aren't two atoms anymore. You might say that if the atoms are far apart, the two electrons are obviously in very different quantum states. But this is not as obvious as it looks. Imagine putting electron number 1 in atom number 1 and electron number 2 in atom number 2. After waiting a while, it no longer makes sense to say that "electron number 1 is still in atom number 1". It might be in atom number 2 now, because the only way to truly confine particles is to make sure their wavefunction is always zero outside the region you want to confine them in, and this is never attainable. We therefore really should treat our two electrons as being indistinguishable from each other, i.e. we are to think of two electrons in the potential of two protons. Let us simplify the situation a little bit by neglecting the interaction between the two electrons; this won't be a bad approximation if the protons are far apart and the electrons are below the ionization energy of 13.6 eV, and in any case it doesn't really affect our argument.
The allowed energies of one of the electrons must therefore be approximately equal to the energy levels of an electron in the potential of a single proton (provided the energy is less than the ionization energy). Now the problem is clear: how can both electrons be in (e.g.) the ground state at the same time? Crucially, we are not allowed to appeal to the fact that the electrons are localized on one proton or the other to get round this problem. In the language of quantum mechanics, the energy eigenstates for each electron are not localized on one proton or the other. The initial wavefunction for one electron might be peaked in the region of one proton, but after waiting for long enough the wavefunction will evolve to a wavefunction which is not localized at all. In short, the quantum state is completely specified by giving just the electron energies, and then it is a puzzle why two electrons can have the same energy (we're also ignoring things like electron spin here, but again that is a detail which doesn't affect the main line of the argument). A little thought and you may be able to convince yourself that the only way out of the problem is for there to be two energy levels whose energy difference is too small for us to have ever measured in an experiment. The example presented below is designed to illustrate that this is indeed what is going on.

We'll consider a simplified model. Our system will be two non-interacting particles moving in one dimension. The particles move in the potential illustrated in the figure below: it is an infinite square well with a finite potential barrier in the middle. One can think of putting one particle in the left-hand region and the other in the right-hand region, with energies below the barrier height, V. The particles are, for a time, confined; however, there is always a non-zero tunnelling probability, and we cannot therefore say with certainty that the particles remain on one side or the other of the potential barrier.
Our goal is to determine the energy eigenvalues and eigenfunctions for the single-particle states. If the potential barrier is sufficiently large, then the energy eigenstates will be approximately equal to those of a single particle in an infinite potential well. However, as argued above, we'll encounter the interesting result that there are in fact two slightly non-degenerate energy levels for each energy level of the infinite potential well.

Let V be the height of the potential barrier and 2δ its width. We'll work in units where the Schrödinger equation for the energy eigenstates reads

−ψ''(x) + V(x)ψ(x) = Eψ(x)

We'll also choose our units so that L = 2 (i.e. −1 < x < 1). Solving the Schrödinger equation in each of the three regions (I, II and III) gives:

ψ_I(x) = sin(α(E_i) x) + A(E_i) cos(α(E_i) x)

ψ_II(x) = B_i(E_i) (e^{β(E_i) x} + n_i e^{−β(E_i) x})

ψ_III(x) = n_i ψ_I(−x)

where α = √E and β = √(V − E). The index i labels whether we are considering those eigenstates which are even or odd about the centre of the potential (which I choose to be at x = 0). (You should be able to convince yourself that the eigenstates must be either even or odd functions as a result of symmetry.) Consequently, n_even = 1 and n_odd = −1. Note that I have made life a little bit easier by not working with normalized wavefunctions; to get the normalization correct one just has to re-scale by an overall factor, but we won't need to bother doing that here because we'll not be computing any probabilities. The boundary conditions are that the wavefunction must vanish for x < −1 and x > 1, and that the wavefunction and its derivative must be continuous at x = ±δ.
Implementing these conditions allows us to fix A and B_i:

A(E) = tan α(E)

B_i(E) = [A(E) cos(α(E)δ) − sin(α(E)δ)] / [e^{−β(E)δ} + n_i e^{β(E)δ}]

and the boundary conditions also lead to a transcendental equation in the energy E, obtained by matching the logarithmic derivative ψ'/ψ at x = −δ:

α (cos(αδ) + A sin(αδ)) / (A cos(αδ) − sin(αδ)) = β (e^{−βδ} − n_i e^{βδ}) / (e^{−βδ} + n_i e^{βδ})

In order to do some numerics, let us make a choice for the parameters defining the potential. Before solving, we'll first consider the energy eigenvalues corresponding to an infinite square well of width 2, which is the result we expect in the case that V tends to infinity. The mth energy eigenvalue of the infinite square well is

E_m = (mπ/2)²

There are two corresponding energy eigenstates in our case, one even and one odd. Note that the two energy eigenvalues are (a) not very far from the infinite square well result and (b) slightly different from each other. The corresponding energy eigenfunctions are plotted below.

The important point is that there are two almost degenerate energy eigenstates for every one energy eigenstate in the corresponding infinite square well. If we had made the potential barrier higher, then the ground state energy would approach that of the infinite square well and the splitting between the energy levels would be even smaller. For a higher barrier, I had to go to 50 significant digits to detect the splitting between the two energies (the difference is real, not my numerical error!). You might like to think what determines the size of the tiny splitting between the energy levels. (Hint: look at the eigenstates in the vicinity of x = 0.)
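The near-degeneracy can also be seen directly by diagonalizing a discretized Hamiltonian, without solving the transcendental equation. Below is a sketch in the same units (−ψ'' + Vψ = Eψ, well −1 < x < 1); the barrier height and half-width are my own choices, since the worksheet's parameter values did not survive extraction.

```python
import numpy as np

N = 800
x = np.linspace(-1.0, 1.0, N + 2)[1:-1]   # interior grid; psi(+-1) = 0
dx = x[1] - x[0]

V0, delta = 200.0, 0.2                    # assumed barrier height and half-width
V = np.where(np.abs(x) < delta, V0, 0.0)  # central barrier of width 2*delta

# -d^2/dx^2 + V(x) via central differences (tridiagonal matrix)
H = (np.diag(2.0 / dx**2 + V)
     + np.diag(np.full(N - 1, -1.0 / dx**2), 1)
     + np.diag(np.full(N - 1, -1.0 / dx**2), -1))
E = np.linalg.eigvalsh(H)

splitting = E[1] - E[0]   # tiny even/odd splitting of the lowest pair
gap = E[2] - E[1]         # much larger gap up to the next pair
```

With these parameters the lowest two levels sit well below the barrier and differ by far less than the distance to the next pair, which is exactly the "two almost degenerate levels per infinite-well level" structure described above; raising V0 shrinks the splitting further.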
After working through this exercise you might like to think how things work for an infinite well divided into 3 regions using two finite potential barriers. You might be tempted to think that there are now 4 energy levels for each energy level of the infinite square well, corresponding to energy eigenstates that are variously odd/even about the centres of the two potential barriers. However, this must be wrong. If it were correct then one could put 4 identical fermions all into the ground state, and the Pauli principle would then be violated. You should convince yourself that there are in fact only 3 linearly independent eigenfunctions and hence only 3 energy levels for each energy level of the infinite square well. Now the Pauli principle can still hold: 3 fermions can go into each of the 3 "ground state" levels but the 4th fermion must go into a higher level.

Time evolution

Let's now consider what happens subsequently if we start with the particle on one side of the potential. For the initial wavefunction let us take

Φ(x, 0) = ψ_even(x) + ψ_odd(x)

which corresponds to a particle located on the left-hand side. This is not an exact energy eigenstate; it is a superposition of two nearly degenerate states, and as such it will evolve slowly with time. The animation below shows the subsequent time evolution of the probability density |Φ|², which evolves according to

Φ(x, t) = ψ_even(x) e^{−i E_even t} + ψ_odd(x) e^{−i E_odd t}
Notes to Many-Worlds Interpretation of Quantum Mechanics 1. The mathematical part of the MWI, (i), yields less than mathematical parts of some other theories such as, e.g., Bohmian mechanics. Indeed, our experience is consistent with the MWI, but it does not follow from its mathematical part. The Schrödinger equation itself does not explain why we experience definite results in quantum measurements. In contrast, in the Bohmian mechanics the mathematical part yields almost everything, and the analog of (ii) is very simple: it is the postulate according to which only the "Bohmian positions" (and not the quantum wave) correspond to our experience. The Bohmian positions of all particles yield the familiar picture of the (single) world we are aware of. Thus, philosophically, a theory like the Bohmian mechanics achieves more than the MWI, but at the price of a significant impairment of the physical aspects of the theory, e.g., addition of the non-local dynamics of Bohmian particle positions. However, Wallace (2001a) argues that stripping the experiential content from empty waves in the Bohmian approach has significant philosophical difficulties too. 2. Wallace 2001a points out that the term “superposition of a cat” is a misnomer. I use it as a shortcut for “a superposition of states of elementary particles corresponding to different (classical) states of the cat”. 3. It corresponds to the fact that we are aware of objects like cats, tables, etc. that are well localized and are in a definite state. The position need not and must not be exact: its uncertainty should be small only relative to the precision with which we can measure it and the uncertainty must remain such for a period of time. Therefore, due to the uncertainty principle, it cannot be too small. 4. The quantum state of the world is the normalized projection of the quantum state of the Universe onto the space corresponding to the classical description of the world. 
It is a product state only for variables which are relevant for the macroscopic description of the objects. There might be some entanglement between weakly coupled variables like nuclear spins belonging to different objects. In order to keep the form of the quantum state of the world (1), the quantum state of such variables should belong to |Φ>. 5. Since there is a strong philosophical denial of a possibility to have a nondichotomic degree of existence, the name is clearly problematic, however, it seems that no other word fits better. 6. An even more severe difficulty of this kind appears in the consistent-histories approach considered by Gell-Mann and Hartle as an advanced MWI. Its basic concept, the probability of a history, seems to be meaningless since all histories exist. However, Saunders finds this approach useful for the analysis of probability. 7. This postulate is a counterpart of the collapse postulate of standard quantum mechanics according to which, after measurement, the quantum state collapses to a particular branch with probability proportional to its squared amplitude. (See the entry on quantum mechanics.) However, it differs in two aspects. First, it is the parallel of only the second part of the collapse postulate, the Born Rule, and second, it is related only to part (ii) of the MWI, the connection to our experience, and not to the mathematical part of the theory (i). 8. Proponents of the MWI might argue that, in fact, the burden of an experimental proof lies on the opponents of the MWI, because it is they who claim that there is new physics beyond the well tested Schrödinger equation. 9. Steane challenges the claim that a quantum computer performs parallel computations, but this is certainly the most natural interpretation of the operation of the first quantum algorithm which works faster than any classical one, see Experiment 2 in Deutsch (1986). Copyright © 2014 by Lev Vaidman
§31.17 Physical Applications

§31.17(i) Addition of Three Quantum Spins

The problem of adding three quantum spins s, t, and u can be solved by the method of separation of variables, and the solution is given in terms of a product of two Heun functions. We use vector notation [s, t, u] (respective scalar (s, t, u)) for any one of the three spin operators (respective spin values). Consider the following spectral problem on the sphere S²: x² = x_s² + x_t² + x_u² = R².

31.17.1
J²Ψ(x) ≡ (s + t + u)²Ψ(x) = j(j+1)Ψ(x),
H_sΨ(x) ≡ (−2 s·t − (2/a) s·u)Ψ(x) = h_sΨ(x),

for the common eigenfunction Ψ(x) = Ψ(x_s, x_t, x_u), where a is the coupling parameter of interacting spins. Introduce elliptic coordinates z₁ and z₂ on S². Then

31.17.2
x_s²/z_k + x_t²/(z_k − 1) + x_u²/(z_k − a) = 0,  k = 1, 2,

31.17.3
x_s² = R² z₁z₂/a,
x_t² = R² (z₁ − 1)(z₂ − 1)/(1 − a),
x_u² = R² (z₁ − a)(z₂ − a)/(a(a − 1)).

The operators J² and H_s admit separation of variables in z₁, z₂, leading to the following factorization of the eigenfunction Ψ(x):

31.17.4
Ψ(x) = (z₁z₂)^{−s−1/4} ((z₁ − 1)(z₂ − 1))^{−t−1/4} ((z₁ − a)(z₂ − a))^{−u−1/4} w(z₁) w(z₂),

where w(z) satisfies Heun's equation (31.2.1) with a as in (31.17.1) and the other parameters given by

31.17.5
α = −s − t − u − j − 1, β = j − s − t − u, γ = −2s, δ = −2t, ϵ = −2u; q = ah_s + 2s(at + u).

For more details about the method of separation of variables and relation to special functions see Olevskiĭ (1950), Kalnins et al. (1976), Miller (1977), and Kalnins (1986).

§31.17(ii) Other Applications

Heun functions appear in the theory of black holes (Kerr (1963), Teukolsky (1972), Chandrasekhar (1984), Suzuki et al. (1998), Kalnins et al. (2000)), lattice systems in statistical mechanics (Joyce (1973, 1994)), dislocation theory (Lay and Slavyanov (1999)), and solution of the Schrödinger equation of quantum mechanics (Bay et al. (1997), Tolstikhin and Matsuzawa (2001), and Hall et al. (2010)). For applications of Heun's equation and functions in astrophysics see Debosscher (1998) where different spectral problems for Heun's equation are also considered.
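Heun's equation requires its exponent parameters to satisfy the relation γ + δ + ϵ = α + β + 1. A quick numerical check (my own, not part of the DLMF text) confirms that the parameters in (31.17.5) obey this identically for any spins s, t, u and total spin j:

```python
def heun_params(s, t, u, j, a, hs):
    """Heun parameters for three-spin addition, following (31.17.5)."""
    alpha = -s - t - u - j - 1
    beta = j - s - t - u
    gamma, delta, eps = -2 * s, -2 * t, -2 * u
    q = a * hs + 2 * s * (a * t + u)
    return alpha, beta, gamma, delta, eps, q

# For any input, gamma + delta + eps = -2(s + t + u) = alpha + beta + 1,
# i.e. the Fuchsian consistency condition of Heun's equation holds.
```

Algebraically, α + β + 1 = (−s − t − u − j − 1) + (j − s − t − u) + 1 = −2(s + t + u), which is exactly γ + δ + ϵ.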
More applications—including those of generalized spheroidal wave functions and confluent Heun functions in mathematical physics, astrophysics, and the two-center problem in molecular quantum mechanics—can be found in Leaver (1986) and Slavyanov and Lay (2000, Chapter 4). For application of biconfluent Heun functions in a model of an equatorially trapped Rossby wave in a shear flow in the ocean or atmosphere see Boyd and Natarov (1998).
Atomic orbital

From Wikipedia, the free encyclopedia

The shapes of the first five atomic orbitals: 1s, 2s, 2px, 2py, and 2pz. The colors show the wave function phase. These are graphs of ψ(x, y, z) functions which depend on the coordinates of one electron. To see the elongated shape of ψ(x, y, z)² functions that show probability density more directly, see the graphs of d-orbitals below.

An atomic orbital is a mathematical function that describes the wave-like behavior of either one electron or a pair of electrons in an atom.[1] This function can be used to calculate the probability of finding any electron of an atom in any specific region around the atom's nucleus. The term may also refer to the physical region where the electron can be calculated to be, as defined by the particular mathematical form of the orbital.[2]

Each orbital in an atom is characterized by a unique set of values of the three quantum numbers n, ℓ, and m, which correspond to the electron's energy, angular momentum, and an angular momentum vector component, respectively. Any orbital can be occupied by a maximum of two electrons, each with its own spin quantum number. The simple names s orbital, p orbital, d orbital and f orbital refer to orbitals with angular momentum quantum number ℓ = 0, 1, 2 and 3 respectively. These names, together with the value of n, are used to describe the electron configurations. They are derived from the description by early spectroscopists of certain series of alkali metal spectroscopic lines as sharp, principal, diffuse, and fundamental. Orbitals for ℓ > 3 are named in alphabetical order (omitting j).[3][4][5]

Atomic orbitals are the basic building blocks of the atomic orbital model (alternatively known as the electron cloud or wave mechanics model), a modern framework for visualizing the submicroscopic behavior of electrons in matter.
In this model the electron cloud of a multi-electron atom may be seen as being built up (in approximation) in an electron configuration that is a product of simpler hydrogen-like atomic orbitals. The repeating periodicity of the blocks of 2, 6, 10, and 14 elements within sections of the periodic table arises naturally from the total number of electrons which occupy a complete set of s, p, d and f atomic orbitals, respectively.

Electron properties

With the development of quantum mechanics, it was found that the orbiting electrons around a nucleus could not be fully described as particles, but needed to be explained by the wave-particle duality. In this sense, the electrons have the following properties:

Wave-like properties:
1. The electrons do not orbit the nucleus in the sense of a planet orbiting the sun, but instead exist as standing waves. The lowest possible energy an electron can take is therefore analogous to the fundamental frequency of a wave on a string. Higher energy states are then similar to harmonics of the fundamental frequency.

Particle-like properties:
1. There is always an integer number of electrons orbiting the nucleus.
2. The electrons retain particle-like properties such as: each wave state has the same electrical charge as the electron particle. Each wave state has a single discrete spin (spin up or spin down).

Thus, despite the obvious analogy to planets revolving around the Sun, electrons cannot be described simply as solid particles. In addition, atomic orbitals do not closely resemble a planet's elliptical path in ordinary atoms. A more accurate analogy might be that of a large and often oddly shaped "atmosphere" (the electron), distributed around a relatively tiny planet (the atomic nucleus). Atomic orbitals exactly describe the shape of this "atmosphere" only when a single electron is present in an atom.
When more electrons are added to a single atom, the additional electrons tend to more evenly fill in a volume of space around the nucleus so that the resulting collection (sometimes termed the atom's "electron cloud"[6]) tends toward a generally spherical zone of probability describing where the atom's electrons will be found.

Formal quantum mechanical definition

Atomic orbitals may be defined more precisely in formal quantum mechanical language. Specifically, in quantum mechanics, the state of an atom, i.e. an eigenstate of the atomic Hamiltonian, is approximated by an expansion (see configuration interaction expansion and basis set) into linear combinations of anti-symmetrized products (Slater determinants) of one-electron functions. The spatial components of these one-electron functions are called atomic orbitals. (When one considers also their spin component, one speaks of atomic spin orbitals.) A state is actually a function of the coordinates of all the electrons, so that their motion is correlated, but this is often approximated by this independent-particle model of products of single electron wave functions.[7] (The London dispersion force, for example, depends on the correlations of the motion of the electrons.)

In atomic physics, the atomic spectral lines correspond to transitions (quantum leaps) between quantum states of an atom. These states are labeled by a set of quantum numbers summarized in the term symbol and usually associated with particular electron configurations, i.e., by occupation schemes of atomic orbitals (for example, 1s2 2s2 2p6 for the ground state of neon; term symbol: 1S0). This notation means that the corresponding Slater determinants have a clear higher weight in the configuration interaction expansion. The atomic orbital concept is therefore a key concept for visualizing the excitation process associated with a given transition.
For example, one can say for a given transition that it corresponds to the excitation of an electron from an occupied orbital to a given unoccupied orbital. Nevertheless, one has to keep in mind that electrons are fermions ruled by the Pauli exclusion principle and cannot be distinguished from the other electrons in the atom. Moreover, it sometimes happens that the configuration interaction expansion converges very slowly and that one cannot speak about a simple one-determinant wave function at all. This is the case when electron correlation is large.

Fundamentally, an atomic orbital is a one-electron wave function, even though most electrons do not exist in one-electron atoms, and so the one-electron view is an approximation. When thinking about orbitals, we are often given an orbital vision which (even if it is not spelled out) is heavily influenced by this Hartree–Fock approximation, which is one way to reduce the complexities of molecular orbital theory.

Types of orbitals

Atomic orbitals can be the hydrogen-like "orbitals" which are exact solutions to the Schrödinger equation for a hydrogen-like "atom" (i.e., an atom with one electron). Alternatively, atomic orbitals refer to functions that depend on the coordinates of one electron (i.e. orbitals) but are used as starting points for approximating wave functions that depend on the simultaneous coordinates of all the electrons in an atom or molecule. The coordinate systems chosen for atomic orbitals are usually spherical coordinates (r, θ, φ) in atoms and Cartesian coordinates (x, y, z) in polyatomic molecules. The advantage of spherical coordinates (for atoms) is that an orbital wave function is a product of three factors each dependent on a single coordinate: ψ(r, θ, φ) = R(r) Θ(θ) Φ(φ). The angular factors of atomic orbitals Θ(θ) Φ(φ) generate s, p, d, etc. functions as real combinations of spherical harmonics Yℓm(θ, φ) (where ℓ and m are quantum numbers).
There are typically three mathematical forms for the radial functions R(r) which can be chosen as a starting point for the calculation of the properties of atoms and molecules with many electrons.

1. The hydrogen-like atomic orbitals are derived from the exact solution of the Schrödinger equation for one electron and a nucleus, for a hydrogen-like atom. The part of the function that depends on the distance from the nucleus has nodes (radial nodes) and decays as e^(−constant × distance).
2. The Slater-type orbital (STO) is a form without radial nodes but decays from the nucleus as does the hydrogen-like orbital.
3. The form of the Gaussian-type orbital (Gaussians) has no radial nodes and decays as e^(−distance²).

Although hydrogen-like orbitals are still used as pedagogical tools, the advent of computers has made STOs preferable for atoms and diatomic molecules since combinations of STOs can replace the nodes in hydrogen-like atomic orbitals. Gaussians are typically used in molecules with three or more atoms. Although not as accurate by themselves as STOs, combinations of many Gaussians can attain the accuracy of hydrogen-like orbitals.

The term "orbital" was coined by Robert Mulliken in 1932.[8] However, the idea that electrons might revolve around a compact nucleus with definite angular momentum was convincingly argued at least 19 years earlier by Niels Bohr,[9] and the Japanese physicist Hantaro Nagaoka published an orbit-based hypothesis for electronic behavior as early as 1904.[10] Explaining the behavior of these electron "orbits" was one of the driving forces behind the development of quantum mechanics.[11]

Early models

With J.J. Thomson's discovery of the electron in 1897,[12] it became clear that atoms were not the smallest building blocks of nature, but were rather composite particles. The newly discovered structure within atoms tempted many to imagine how the atom's constituent parts might interact with each other.
Thomson theorized that multiple electrons revolved in orbit-like rings within a positively charged jelly-like substance,[13] and between the electron's discovery and 1909, this "plum pudding model" was the most widely accepted explanation of atomic structure. Shortly after Thomson's discovery, Hantaro Nagaoka, a Japanese physicist, predicted a different model for electronic structure.[10] Unlike the plum pudding model, the positive charge in Nagaoka's "Saturnian Model" was concentrated into a central core, pulling the electrons into circular orbits reminiscent of Saturn's rings. Few people took notice of Nagaoka's work at the time,[14] and Nagaoka himself recognized a fundamental defect in the theory even at its conception, namely that a classical charged object cannot sustain orbital motion because it is accelerating and therefore loses energy due to electromagnetic radiation.[15] Nevertheless, the Saturnian model turned out to have more in common with modern theory than any of its contemporaries.

Bohr atom

In 1909, Ernest Rutherford discovered that the positive charge of atoms was tightly condensed into a nucleus,[16] and it became clear from his analysis in 1911 that the plum pudding model could not explain atomic structure. Shortly after, in 1913, Rutherford's postdoctoral student Niels Bohr proposed a new model of the atom, wherein electrons orbited the nucleus with classical periods, but were only permitted to have discrete values of angular momentum, quantized in units h/2π.[9] This constraint automatically permitted only certain values of electron energies. The Bohr model of the atom fixed the problem of energy loss from radiation from a ground state (by declaring that there was no state below this), and more importantly explained the origin of spectral lines.

The Rutherford–Bohr model of the hydrogen atom.
After Bohr's use of Einstein's explanation of the photoelectric effect to relate energy levels in atoms with the wavelength of emitted light, the connection between the structure of electrons in atoms and the emission and absorption spectra of atoms became an increasingly useful tool in the understanding of electrons in atoms. The most prominent feature of emission and absorption spectra (known experimentally since the middle of the 19th century) was that these atomic spectra contained discrete lines. The significance of the Bohr model was that it related the lines in emission and absorption spectra to the energy differences between the orbits that electrons could take around an atom. This was, however, not achieved by Bohr through giving the electrons some kind of wave-like properties, since the idea that electrons could behave as matter waves was not suggested until twelve years later. Still, the Bohr model's use of quantized angular momenta and therefore quantized energy levels was a significant step towards the understanding of electrons in atoms, and also a significant step towards the development of quantum mechanics in suggesting that quantized restraints must account for all discontinuous energy levels and spectra in atoms. With de Broglie's suggestion of the existence of electron matter waves in 1924, and for a short time before the full 1926 Schrödinger equation treatment of the hydrogen-like atom, a Bohr electron "wavelength" could be seen to be a function of its momentum, and thus a Bohr orbiting electron was seen to orbit in a circle at a multiple of its half-wavelength (this historically incorrect Bohr model is still occasionally taught to students). The Bohr model for a short time could be seen as a classical model with an additional constraint provided by the 'wavelength' argument. However, this period was immediately superseded by the full three-dimensional wave mechanics of 1926.
In our current understanding of physics, the Bohr model is called a semi-classical model because of its quantization of angular momentum, not primarily because of its relationship with electron wavelength, which appeared in hindsight a dozen years after the Bohr model was proposed. The Bohr model was able to explain the emission and absorption spectra of hydrogen. The energies of electrons in the n = 1, 2, 3, etc. states in the Bohr model match those of current physics. However, this did not explain similarities between different atoms, as expressed by the periodic table, such as the fact that helium (two electrons), neon (10 electrons), and argon (18 electrons) exhibit similar chemical behavior. Modern physics explains this by noting that the n = 1 state holds two electrons and the n = 2 state holds eight electrons, while in argon the n = 3 state also holds eight electrons (the full capacity of the n = 3 shell is eighteen). In the end, this was solved by the discovery of modern quantum mechanics and the Pauli Exclusion Principle.

Modern conceptions and connections to the Heisenberg Uncertainty Principle

Immediately after Heisenberg discovered his uncertainty relation,[17] it was noted by Bohr that the existence of any sort of wave packet implies uncertainty in the wave frequency and wavelength, since a spread of frequencies is needed to create the packet itself.[18] In quantum mechanics, where all particle momenta are associated with waves, it is the formation of such a wave packet which localizes the wave, and thus the particle, in space. In states where a quantum mechanical particle is bound, it must be localized as a wave packet, and the existence of the packet and its minimum size implies a spread and minimal value in particle wavelength, and thus also momentum and energy. In quantum mechanics, as a particle is localized to a smaller region in space, the associated compressed wave packet requires a larger and larger range of momenta, and thus larger kinetic energy.
Thus, the binding energy required to contain or trap a particle in a smaller region of space increases without bound as the region of space grows smaller. Particles cannot be restricted to a geometric point in space, since this would require an infinite particle momentum. In chemistry, Schrödinger, Pauling, Mulliken and others noted that the consequence of Heisenberg's relation was that the electron, as a wave packet, could not be considered to have an exact location in its orbital. Max Born suggested that the electron's position needed to be described by a probability distribution which was connected with finding the electron at some point in the wave function which described its associated wave packet. The new quantum mechanics did not give exact results, but only the probabilities for the occurrence of a variety of possible such results. Heisenberg held that the path of a moving particle has no meaning if we cannot observe it, as we cannot with electrons in an atom. In the quantum picture of Heisenberg, Schrödinger and others, the Bohr atom number n for each orbital became known as an n-sphere[citation needed] in a three dimensional atom and was pictured as the mean energy of the probability cloud of the electron's wave packet which surrounded the atom.
Orbital names[edit]
Orbitals are given names in the form X type^y, where X is the energy level corresponding to the principal quantum number n, type is a lower-case letter denoting the shape or subshell of the orbital, corresponding to the angular quantum number ℓ, and y is the number of electrons in that orbital. For example, the orbital 1s2 (pronounced "one ess two") has two electrons, is in the lowest energy level (n = 1), and has an angular quantum number of ℓ = 0. In X-ray notation, the principal quantum number is given a letter associated with it. For n = 1, 2, 3, 4, 5, …, the letters associated with those numbers are K, L, M, N, O, ... respectively.
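The naming scheme just described is regular enough to check mechanically. Below is a minimal sketch in Python (the function name, regular expression, and validation are illustrative assumptions, not part of any standard library):

```python
import re

# Letters for the azimuthal quantum number l, in order: l = 0 -> 's', 1 -> 'p', ...
L_LETTERS = "spdfg"

def parse_orbital(label):
    """Parse a label like '1s2' into (n, l, electron count).

    The trailing electron count is optional: '3d' -> (3, 2, None).
    """
    m = re.fullmatch(r"(\d+)([spdfg])(\d*)", label)
    if m is None:
        raise ValueError(f"not an orbital label: {label!r}")
    n = int(m.group(1))
    l = L_LETTERS.index(m.group(2))
    if l > n - 1:  # the subshell must satisfy l <= n - 1
        raise ValueError(f"l = {l} is not allowed for n = {n}")
    count = int(m.group(3)) if m.group(3) else None
    return n, l, count

print(parse_orbital("1s2"))  # (1, 0, 2)
print(parse_orbital("3d"))   # (3, 2, None)
```

Note that the validity check (l ≤ n − 1) anticipates the quantum-number rules given in the next section; a label like "2d" is rejected because a d subshell requires n ≥ 3.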
Hydrogen-like orbitals[edit]
The simplest atomic orbitals are those that are calculated for systems with a single electron, such as the hydrogen atom. An atom of any other element ionized down to a single electron is very similar to hydrogen, and the orbitals take the same form. In the Schrödinger equation for this system of one negative and one positive particle, the atomic orbitals are the eigenstates of the Hamiltonian operator for the energy. They can be obtained analytically, meaning that the resulting orbitals are products of a polynomial series, and exponential and trigonometric functions (see hydrogen atom). For atoms with two or more electrons, the governing equations can only be solved with the use of methods of iterative approximation. Orbitals of multi-electron atoms are qualitatively similar to those of hydrogen, and in the simplest models, they are taken to have the same form. For more rigorous and precise analysis, numerical approximations must be used. A given (hydrogen-like) atomic orbital is identified by unique values of three quantum numbers: n, ℓ, and m. The rules restricting the values of the quantum numbers, and their energies (see below), explain the electron configuration of the atoms and the periodic table. The stationary states (quantum states) of the hydrogen-like atoms are its atomic orbitals.[clarification needed] However, in general, an electron's behavior is not fully described by a single orbital. Electron states are best represented by time-dependent "mixtures" (linear combinations) of multiple orbitals. See Linear combination of atomic orbitals molecular orbital method. The quantum number n first appeared in the Bohr model, where it determines the radius of each circular electron orbit. In modern quantum mechanics, however, n determines the mean distance of the electron from the nucleus; all electrons with the same value of n lie at the same average distance.
For this reason, orbitals with the same value of n are said to comprise a "shell". Orbitals with the same value of n and also the same value of ℓ are even more closely related, and are said to comprise a "subshell".
Quantum numbers[edit]
Because of the quantum mechanical nature of the electrons around a nucleus, atomic orbitals can be uniquely defined by a set of integers known as quantum numbers. These quantum numbers only occur in certain combinations of values, and their physical interpretation changes depending on whether real or complex versions of the atomic orbitals are employed.
Complex orbitals[edit]
In physics, the most common orbital descriptions are based on the solutions to the hydrogen atom, where orbitals are given by the product between a radial function and a pure spherical harmonic. The quantum numbers, together with the rules governing their possible values, are as follows: The principal quantum number n describes the energy of the electron and is always a positive integer. In fact, it can be any positive integer, but for reasons discussed below, large numbers are seldom encountered. Each atom has, in general, many orbitals associated with each value of n; these orbitals together are sometimes called electron shells. The azimuthal quantum number ℓ describes the orbital angular momentum of each electron and is a non-negative integer. Within a shell where n is some integer n0, ℓ ranges across all (integer) values satisfying the relation 0 ≤ ℓ ≤ n0 − 1. For instance, the n = 1 shell has only orbitals with ℓ = 0, and the n = 2 shell has only orbitals with ℓ = 0 and ℓ = 1. The set of orbitals associated with a particular value of ℓ are sometimes collectively called a subshell. The magnetic quantum number, mℓ, describes the magnetic moment of an electron in an arbitrary direction, and is also always an integer. Within a subshell where ℓ is some integer ℓ0, mℓ ranges thus: −ℓ0 ≤ mℓ ≤ ℓ0.
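The counting implied by these rules can be verified with a short script (illustrative code, not from the article): each shell n contains n² orbitals and, once the two spin states discussed below are included, holds at most 2n² electrons.

```python
def orbital_quantum_numbers(n_max):
    """Enumerate every allowed (n, l, m_l) combination up to n_max."""
    for n in range(1, n_max + 1):
        for l in range(0, n):             # 0 <= l <= n - 1
            for m_l in range(-l, l + 1):  # -l <= m_l <= +l
                yield n, l, m_l

# Orbitals per shell: n^2; electrons per shell once spin is included: 2 n^2.
for n in range(1, 5):
    per_shell = sum(1 for (nn, l, m_l) in orbital_quantum_numbers(n) if nn == n)
    print(n, per_shell, 2 * per_shell)  # 1 1 2 / 2 4 8 / 3 9 18 / 4 16 32
```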
The above results may be summarized in the following table. Each cell represents a subshell, and lists the values of mℓ available in that subshell. Empty cells represent subshells that do not exist.
n = 1: ℓ = 0: 0
n = 2: ℓ = 0: 0; ℓ = 1: −1, 0, 1
n = 3: ℓ = 0: 0; ℓ = 1: −1, 0, 1; ℓ = 2: −2, −1, 0, 1, 2
n = 4: ℓ = 0: 0; ℓ = 1: −1, 0, 1; ℓ = 2: −2, −1, 0, 1, 2; ℓ = 3: −3, −2, −1, 0, 1, 2, 3
n = 5: ℓ = 0: 0; ℓ = 1: −1, 0, 1; ℓ = 2: −2, −1, 0, 1, 2; ℓ = 3: −3, −2, −1, 0, 1, 2, 3; ℓ = 4: −4, −3, −2, −1, 0, 1, 2, 3, 4
Subshells are usually identified by their n and ℓ values. n is represented by its numerical value, but ℓ is represented by a letter as follows: 0 is represented by 's', 1 by 'p', 2 by 'd', 3 by 'f', and 4 by 'g'. For instance, one may speak of the subshell with n = 2 and ℓ = 0 as a '2s subshell'. Each electron also has a spin quantum number, s, which describes the spin of each electron (spin up or spin down). The number s can be +1/2 or −1/2. The Pauli exclusion principle states that no two electrons can occupy the same quantum state: every electron in an atom must have a unique combination of quantum numbers. The above conventions imply a preferred axis (for example, the z direction in Cartesian coordinates), and they also imply a preferred direction along this preferred axis. Otherwise there would be no sense in distinguishing m = +1 from m = −1. As such, the model is most useful when applied to physical systems that share these symmetries. The Stern–Gerlach experiment — where an atom is exposed to a magnetic field — provides one such example.[19]
Real orbitals[edit]
An atom that is embedded in a crystalline solid feels multiple preferred axes, but no preferred direction. Instead of building atomic orbitals out of the product of radial functions and a single spherical harmonic, linear combinations of spherical harmonics are typically used, designed so that the imaginary parts of the spherical harmonics cancel out. These real orbitals are the building blocks most commonly shown in orbital visualizations.
In the real hydrogen-like orbitals, for example, n and ℓ have the same interpretation and significance as their complex counterparts, but m is no longer a good quantum number (though its absolute value is). The orbitals are given new names based on their shape with respect to a standardized Cartesian basis. The real hydrogen-like p orbitals are given by the following:
p_z = p_0
p_x = (1/√2)(p_1 + p_−1)
p_y = (1/(i√2))(p_1 − p_−1)
where p_0 = R_n1 Y_10, p_1 = R_n1 Y_11, and p_−1 = R_n1 Y_1−1 are the complex orbitals corresponding to ℓ = 1.[20]
Shapes of orbitals[edit]
Cross-section of computed hydrogen atom orbital (|ψ(r, θ, φ)|²) for the 6s (n = 6, ℓ = 0, m = 0) orbital. Note that s orbitals, though spherically symmetrical, have radially placed wave-nodes for n > 1. However, only s orbitals invariably have a center anti-node; the other types never do. Simple pictures showing orbital shapes are intended to describe the angular forms of regions in space where the electrons occupying the orbital are likely to be found. The diagrams cannot, however, show the entire region where an electron can be found, since according to quantum mechanics there is a non-zero probability of finding the electron anywhere in space. Instead the diagrams are approximate representations of boundary or contour surfaces where the probability density |ψ(r, θ, φ)|² has a constant value, chosen so that there is a certain probability (for example 90%) of finding the electron within the contour. Although |ψ|², as the square of an absolute value, is everywhere non-negative, the sign of the wave function ψ(r, θ, φ) is often indicated in each subregion of the orbital picture. Sometimes the ψ function is graphed to show its phases, rather than |ψ(r, θ, φ)|², which shows probability density but has no phases (these are lost in the process of taking the absolute value, since ψ(r, θ, φ) is a complex number).
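The combinations defining px and py can be checked numerically. The sketch below hard-codes the ℓ = 1 spherical harmonics in the convention without the Condon–Shortley phase (an assumption: with that phase included the combinations acquire extra signs), and confirms that the resulting real orbitals are indeed real-valued and point along x and y:

```python
import numpy as np

def Y1(m, theta, phi):
    """l = 1 spherical harmonics, written without the Condon-Shortley phase."""
    if m == 0:
        return np.sqrt(3 / (4 * np.pi)) * np.cos(theta) + 0j
    return np.sqrt(3 / (8 * np.pi)) * np.sin(theta) * np.exp(1j * m * phi)

theta, phi = 0.7, 1.9  # an arbitrary direction

p_x = (Y1(+1, theta, phi) + Y1(-1, theta, phi)) / np.sqrt(2)
p_y = (Y1(+1, theta, phi) - Y1(-1, theta, phi)) / (1j * np.sqrt(2))

# Both combinations are real, and proportional to x/r = sin(theta)cos(phi)
# and y/r = sin(theta)sin(phi), respectively.
print(abs(p_x.imag) < 1e-12, abs(p_y.imag) < 1e-12)  # True True
print(np.isclose(p_x.real, np.sqrt(3 / (4 * np.pi)) * np.sin(theta) * np.cos(phi)))  # True
```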
|ψ(r, θ, φ)|² orbital graphs tend to have less spherical, thinner lobes than ψ(r, θ, φ) graphs, but have the same number of lobes in the same places, and otherwise are recognizable. This article, in order to show wave function phases, shows mostly ψ(r, θ, φ) graphs. The lobes can be viewed as interference patterns between the two counter-rotating modes |m⟩ and |−m⟩, with the projection of the orbital onto the xy plane having m resonant wavelengths around the circumference. For each m there are two of these combinations, |m⟩ + |−m⟩ and |m⟩ − |−m⟩. For the case where m = 0 the orbital is vertical, counter-rotating information is unknown, and the orbital is z-axis symmetric. For the case where ℓ = 0 there are no counter-rotating modes. There are only radial modes and the shape is spherically symmetric. For any given n, the smaller ℓ is, the more radial nodes there are. Loosely speaking, n is energy, ℓ is analogous to eccentricity, and m is orientation. Generally speaking, the number n determines the size and energy of the orbital for a given nucleus: as n increases, the size of the orbital increases. However, in comparing different elements, the higher nuclear charge Z of heavier elements causes their orbitals to contract by comparison to lighter ones, so that the overall size of the whole atom remains very roughly constant, even as the number of electrons in heavier elements (higher Z) increases. Also in general terms, ℓ determines an orbital's shape, and m its orientation. However, since some orbitals are described by equations in complex numbers, the shape sometimes depends on m also. The single s-orbitals (ℓ = 0) are shaped like spheres. For n = 1 the orbital is roughly a solid ball (it is most dense at the center and fades exponentially outwardly), but for n = 2 or more, each single s-orbital is composed of spherically symmetric surfaces which are nested shells (i.e., the "wave-structure" is radial, following a sinusoidal radial component as well).
See illustration of a cross-section of these nested shells, at right. The s-orbitals for all n numbers are the only orbitals with an anti-node (a region of high wave function density) at the center of the nucleus. All other orbitals (p, d, f, etc.) have angular momentum, and thus avoid the nucleus (having a wave node at the nucleus). The three p-orbitals for n = 2 have the form of two ellipsoids with a point of tangency at the nucleus (the two-lobed shape is sometimes referred to as a "dumbbell"). The three p-orbitals in each shell are oriented at right angles to each other, as determined by their respective linear combination of values of m. The five d orbitals in |ψ(x, y, z)|² form, with a combination diagram showing how they fit together to fill space around an atomic nucleus. Four of the five d-orbitals for n = 3 look similar, each with four pear-shaped lobes, each lobe tangent to two others, and the centers of all four lying in one plane, between a pair of axes. Three of these planes are the xy-, xz-, and yz-planes, and the fourth has its centers on the x and y axes. The fifth and final d-orbital consists of three regions of high probability density: a torus with two pear-shaped regions placed symmetrically on its z axis. There are seven f-orbitals, each with shapes more complex than those of the d-orbitals. For each s, p, d, f and g set of orbitals, the set of orbitals which composes it forms a spherically symmetrical set of shapes. For non-s orbitals, which have lobes, the lobes point in directions so as to fill space as symmetrically as possible for the number of lobes that exist for a set of orientations. For example, the three p orbitals have six lobes which are oriented to each of the six primary directions of 3-D space; for the 5d orbitals, there are a total of 18 lobes, in which again six point in primary directions, and the 12 additional lobes fill the 12 gaps which exist between each pair of these 6 primary axes.
Additionally, as is the case with the s orbitals, individual p, d, f and g orbitals with n values higher than the lowest possible value exhibit an additional radial node structure which is reminiscent of harmonic waves of the same type, as compared with the lowest (or fundamental) mode of the wave. As with s orbitals, this phenomenon gives p, d, f, and g orbitals at the next higher possible value of n (for example, 3p orbitals vs. the fundamental 2p) an additional node in each lobe. Still higher values of n further increase the number of radial nodes, for each type of orbital. The shapes of atomic orbitals in one-electron atoms are related to 3-dimensional spherical harmonics. These shapes are not unique, and any linear combination is valid, such as a transformation to the cubic harmonics; in fact, it is possible to generate sets where all the d's are the same shape, just like the px, py, and pz are the same shape.[21][22]
Orbitals table[edit]
This table shows all orbital configurations for the real hydrogen-like wave functions up to 7s, and therefore covers the simple electronic configuration for all elements in the periodic table up to radium. "ψ" graphs are shown with − and + wave function phases shown in two different colors (arbitrarily red and blue). The pz orbital is the same as the p0 orbital, but the px and py are formed by taking linear combinations of the p+1 and p−1 orbitals (which is why they are listed under the m = ±1 label). Also, the p+1 and p−1 are not the same shape as the p0, since they are pure spherical harmonics.
[Table of orbital images omitted. Columns: s (ℓ = 0): s; p (ℓ = 1): pz, px, py; d (ℓ = 2): dz², dxz, dyz, dxy, dx²−y²; f (ℓ = 3): fz³, fxz², fyz², fxyz, fz(x²−y²), fx(x²−3y²), fy(3x²−y²). Rows: successive values of n.]
Qualitative understanding of shapes[edit]
The shapes of atomic orbitals can be understood qualitatively by considering the analogous case of standing waves on a circular drum.[23] To see the analogy, the mean vibrational displacement of each bit of drum membrane from the equilibrium point over many cycles (a measure of average drum membrane velocity and momentum at that point) must be considered relative to that point's distance from the center of the drum head. If this displacement is taken as being analogous to the probability of finding an electron at a given distance from the nucleus, then it will be seen that the many modes of the vibrating disk form patterns that trace the various shapes of atomic orbitals. The basic reason for this correspondence lies in the fact that the distribution of kinetic energy and momentum in a matter-wave is predictive of where the particle associated with the wave will be. That is, the probability of finding an electron at a given place is also a function of the electron's average momentum at that point, since high electron momentum at a given position tends to "localize" the electron in that position, via the properties of electron wave-packets (see the Heisenberg uncertainty principle for details of the mechanism).
This relationship means that certain key features can be observed in both drum membrane modes and atomic orbitals. For example, in all of the modes analogous to s orbitals (the top row in the animated illustration below), it can be seen that the very center of the drum membrane vibrates most strongly, corresponding to the antinode in all s orbitals in an atom. This antinode means the electron is most likely to be at the physical position of the nucleus (which it passes straight through without scattering or striking it), since it is moving (on average) most rapidly at that point, giving it maximal momentum. A mental "planetary orbit" picture closest to the behavior of electrons in s orbitals, all of which have no angular momentum, might perhaps be that of a Keplerian orbit with an orbital eccentricity of 1 but a finite major axis. This is not physically possible (because the particles would collide), but it can be imagined as a limit of orbits with equal major axes but increasing eccentricity. Below, a number of drum membrane vibration modes are shown. The analogous wave functions of the hydrogen atom are indicated. A correspondence can be considered where the wave functions of a vibrating drum head are for a two-coordinate system ψ(r, θ) and the wave functions for a vibrating sphere are three-coordinate ψ(r, θ, φ).
s-type modes
None of the other sets of modes in a drum membrane have a central antinode, and in all of them the center of the drum does not move. These correspond to a node at the nucleus for all non-s orbitals in an atom. These orbitals all have some angular momentum, and in the planetary model, they correspond to particles in orbit with eccentricity less than 1.0, so that they do not pass straight through the center of the primary body, but keep somewhat away from it.
In addition, the drum modes analogous to p and d modes in an atom show spatial irregularity along the different radial directions from the center of the drum, whereas all of the modes analogous to s modes are perfectly symmetrical in the radial direction. The non-radial-symmetry properties of non-s orbitals are necessary to localize a particle with angular momentum and a wave nature in an orbital where it must tend to stay away from the central attraction force, since any particle localized at the point of central attraction could have no angular momentum. For these modes, waves in the drum head tend to avoid the central point. Such features again emphasize that the shapes of atomic orbitals are a direct consequence of the wave nature of electrons.
p-type modes
d-type modes
Orbital energy[edit]
In atoms with a single electron (hydrogen-like atoms), the energy of an orbital (and, consequently, of any electrons in the orbital) is determined exclusively by n. The n = 1 orbital has the lowest possible energy in the atom. Each successively higher value of n has a higher level of energy, but the difference decreases as n increases. For high n, the level of energy becomes so high that the electron can easily escape from the atom. In single electron atoms, all levels with different ℓ within a given n are (to a good approximation) degenerate, and have the same energy. This approximation is broken to a slight extent by the effect of the magnetic field of the nucleus, and by quantum electrodynamics effects. The latter induce tiny binding energy differences, especially for s electrons that go nearer the nucleus, since these feel a very slightly different nuclear charge, even in one-electron atoms; see Lamb shift. In atoms with multiple electrons, the energy of an electron depends not only on the intrinsic properties of its orbital, but also on its interactions with the other electrons.
These interactions depend on the detail of its spatial probability distribution, and so the energy levels of orbitals depend not only on n but also on ℓ. Higher values of ℓ are associated with higher values of energy; for instance, the 2p state is higher than the 2s state. When ℓ = 2, the increase in energy of the orbital becomes so large as to push the energy of the orbital above the energy of the s-orbital in the next higher shell; when ℓ = 3 the energy is pushed into the shell two steps higher. The filling of the 3d orbitals does not occur until the 4s orbitals have been filled. The increase in energy for subshells of increasing angular momentum in larger atoms is due to electron–electron interaction effects, and it is specifically related to the ability of low angular momentum electrons to penetrate more effectively toward the nucleus, where they are subject to less screening from the charge of intervening electrons. Thus, in atoms of higher atomic number, the ℓ of electrons becomes more and more of a determining factor in their energy, and the principal quantum number n of electrons becomes less and less important in their energy placement. The energy sequence of the first 24 subshells (e.g., 1s, 2p, 3d, etc.) is given in the following table. Each cell represents a subshell with n and ℓ given by its row and column indices, respectively. The number in the cell is the subshell's position in the sequence. For a linear listing of the subshells in terms of increasing energies in multielectron atoms, see the section below.
        s    p    d    f    g
n = 1   1
n = 2   2    3
n = 3   4    5    7
n = 4   6    8    10   13
n = 5   9    11   14   17   21
n = 6   12   15   18   22
n = 7   16   19   23
n = 8   20   24
Note: empty cells indicate non-existent sublevels, while numbers in italics indicate sublevels that could exist, but which do not hold electrons in any element currently known.
Electron placement and the periodic table[edit]
Electron atomic and molecular orbitals. The chart of orbitals (left) is arranged by increasing energy (see Madelung rule).
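The sequence in the table above follows the Madelung rule: subshells fill in order of increasing n + ℓ, with ties broken in favor of smaller n. A minimal sketch (illustrative code; the naive filling at the end ignores known exceptions such as chromium and copper, whose real ground states deviate from this order):

```python
LETTERS = "spdfg"

# All subshells up to n = 8, sorted by the Madelung (n + l) rule.
subshell_order = sorted(
    ((n, l) for n in range(1, 9) for l in range(0, n) if l < len(LETTERS)),
    key=lambda nl: (nl[0] + nl[1], nl[0]),
)
labels = [f"{n}{LETTERS[l]}" for n, l in subshell_order]
print(labels[:8])  # ['1s', '2s', '2p', '3s', '3p', '4s', '3d', '4p']

def configuration(electrons):
    """Fill subshells in Madelung order; each holds 2(2l + 1) electrons."""
    parts = []
    for n, l in subshell_order:
        if electrons <= 0:
            break
        k = min(electrons, 2 * (2 * l + 1))
        parts.append(f"{n}{LETTERS[l]}{k}")
        electrons -= k
    return " ".join(parts)

print(configuration(18))  # argon: 1s2 2s2 2p6 3s2 3p6
```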
Note that atomic orbitals are functions of three variables (two angles, and the distance r from the nucleus). These images are faithful to the angular component of the orbital, but not entirely representative of the orbital as a whole. Several rules govern the placement of electrons in orbitals (electron configuration). The first dictates that no two electrons in an atom may have the same set of values of quantum numbers (this is the Pauli exclusion principle). These quantum numbers include the three that define orbitals, as well as s, the spin quantum number. Thus, two electrons may occupy a single orbital, so long as they have different values of s. However, only two electrons, because of their spin, can be associated with each orbital. Additionally, an electron always tends to fall to the lowest possible energy state. It is possible for it to occupy any orbital so long as it does not violate the Pauli exclusion principle, but if lower-energy orbitals are available, this condition is unstable. The electron will eventually lose energy (by releasing a photon) and drop into the lower orbital. Thus, electrons fill orbitals in the order specified by the energy sequence given above. This behavior is responsible for the structure of the periodic table. The table may be divided into several rows (called 'periods'), numbered starting with 1 at the top. The presently known elements occupy seven periods. If a certain period has number i, it consists of elements whose outermost electrons fall in the ith shell. Niels Bohr was the first to propose (1923) that the periodicity in the properties of the elements might be explained by the periodic filling of the electron energy levels, resulting in the electronic structure of the atom.[24] The periodic table may also be divided into several numbered rectangular 'blocks'.
The elements belonging to a given block have this common feature: their highest-energy electrons all belong to the same ℓ-state (but the n associated with that ℓ-state depends upon the period). For instance, the leftmost two columns constitute the 's-block'. The outermost electrons of Li and Be respectively belong to the 2s subshell, and those of Na and Mg to the 3s subshell. The following is the order for filling the "subshell" orbitals, which also gives the order of the "blocks" in the periodic table: 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p, 5s, 4d, 5p, 6s, 4f, 5d, 6p, 7s, 5f, 6d, 7p. The "periodic" nature of the filling of orbitals, as well as emergence of the s, p, d and f "blocks", is more obvious if this order of filling is given in matrix form, with increasing principal quantum numbers starting the new rows ("periods") in the matrix. Then, each subshell (composed of the first two quantum numbers) is repeated as many times as required for each pair of electrons it may contain. The result is a compressed periodic table, with each entry representing two successive elements:
1s
2s 2p 2p 2p
3s 3p 3p 3p
4s 3d 3d 3d 3d 3d 4p 4p 4p
5s 4d 4d 4d 4d 4d 5p 5p 5p
6s (4f) 5d 5d 5d 5d 5d 6p 6p 6p
7s (5f) 6d 6d 6d 6d 6d 7p 7p 7p
The number of electrons in an electrically neutral atom increases with the atomic number. The electrons in the outermost shell, or valence electrons, tend to be responsible for an element's chemical behavior. Elements that contain the same number of valence electrons can be grouped together and display similar chemical properties.
Relativistic effects[edit]
For elements with high atomic number Z, the effects of relativity become more pronounced, and especially so for s electrons, which move at relativistic velocities as they penetrate the screening electrons near the core of high-Z atoms.
This relativistic increase in momentum for high speed electrons causes a corresponding decrease in wavelength and contraction of 6s orbitals relative to 5d orbitals (by comparison to corresponding s and d electrons in lighter elements in the same column of the periodic table); this results in 6s valence electrons becoming lowered in energy. Examples of significant physical outcomes of this effect include the lowered melting temperature of mercury (which results from 6s electrons not being available for metal bonding) and the golden color of gold and caesium (which results from narrowing of the 6s–5d transition energy to the point that visible light begins to be absorbed).[25] In the Bohr model, an n = 1 electron has a velocity given by v = Zαc, where Z is the atomic number, α is the fine-structure constant, and c is the speed of light. In non-relativistic quantum mechanics, therefore, any atom with an atomic number greater than 137 would require its 1s electrons to be traveling faster than the speed of light. Even in the Dirac equation, which accounts for relativistic effects, the wavefunction of the electron for atoms with Z > 137 is oscillatory and unbounded. The significance of element 137, also known as untriseptium, was first pointed out by the physicist Richard Feynman. Element 137 is sometimes informally called feynmanium (symbol Fy). However, Feynman's approximation fails to predict the exact critical value of Z due to the non-point-charge nature of the nucleus and the very small orbital radius of inner electrons, resulting in a potential seen by inner electrons which is effectively less than Z. The critical Z value which makes the atom unstable with regard to high-field breakdown of the vacuum and production of electron-positron pairs does not occur until Z is about 173.
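The Bohr-model estimate v = Zαc quoted above is easy to tabulate (a rough sketch only; real relativistic orbitals have a velocity distribution rather than a single speed):

```python
ALPHA = 1 / 137.035999  # fine-structure constant (dimensionless)

# Bohr-model speed of an n = 1 electron as a fraction of c: v/c = Z * alpha.
for Z, name in [(1, "H"), (79, "Au"), (80, "Hg"), (137, "Z = 137")]:
    print(f"{name:>7}: v/c = {Z * ALPHA:.3f}")
# H ~ 0.007, Au ~ 0.576, Hg ~ 0.584; Z = 137 reaches v/c ~ 1.000,
# which is why 137 marks the breakdown of the non-relativistic picture.
```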
These conditions are not seen except transiently in collisions of very heavy nuclei such as lead or uranium in accelerators, where such electron-positron production from these effects has been claimed to be observed. See Extension of the periodic table beyond the seventh period. There are no nodes in relativistic orbital densities, although individual components of the wavefunction will have nodes.[26]
Transitions between orbitals[edit]
Under quantum mechanics, each quantum state has a well-defined energy. When applied to atomic orbitals, this means that each state has a specific energy, and that if an electron is to move between states, the energy difference between them is likewise fixed. Consider two states of the hydrogen atom:
State 1) n = 1, ℓ = 0, m = 0 and s = +1/2
State 2) n = 2, ℓ = 0, m = 0 and s = +1/2
By quantum theory, state 1 has a fixed energy of E1, and state 2 has a fixed energy of E2. Now, what would happen if an electron in state 1 were to move to state 2? For this to happen, the electron would need to gain an energy of exactly E2 − E1. If the electron receives energy that is less than or greater than this value, it cannot jump from state 1 to state 2. Now, suppose we irradiate the atom with a broad spectrum of light. Photons that reach the atom that have an energy of exactly E2 − E1 will be absorbed by the electron in state 1, and that electron will jump to state 2. However, photons that are higher or lower in energy cannot be absorbed by the electron, because the electron can only jump to one of the orbitals; it cannot jump to a state between orbitals. The result is that only photons of a specific frequency will be absorbed by the atom. This creates a line in the spectrum, known as an absorption line, which corresponds to the energy difference between states 1 and 2. The atomic orbital model thus predicts line spectra, which are observed experimentally. This is one of the main validations of the atomic orbital model.
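The absorption-line argument can be made concrete for hydrogen, where the orbital model gives En = −13.6 eV/n². A short sketch (constants rounded to a few significant figures):

```python
RYDBERG_EV = 13.6057   # hydrogen ground-state binding energy in eV
HC_EV_NM = 1239.84     # h*c in eV*nm, for converting photon energy to wavelength

def energy(n):
    """Energy of the hydrogen state with principal quantum number n, in eV."""
    return -RYDBERG_EV / n**2

# Photon energy needed for the n = 1 -> n = 2 jump, and its wavelength.
dE = energy(2) - energy(1)
wavelength_nm = HC_EV_NM / dE
print(round(dE, 2), round(wavelength_nm, 1))  # 10.2 121.5  (the Lyman-alpha line)
```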
The atomic orbital model is nevertheless an approximation to the full quantum theory, which only recognizes many-electron states. The predictions of line spectra are qualitatively useful but are not quantitatively accurate for atoms and ions other than those containing only one electron.
See also[edit]
1. ^ Orchin, Milton; Macomber, Roger S.; Pinhas, Allan; Wilson, R. Marshall (2005). Atomic Orbital Theory.
3. ^ Griffiths, David (1995). Introduction to Quantum Mechanics. Prentice Hall. pp. 190–191. ISBN 0-13-124405-1.
4. ^ Levine, Ira (2000). Quantum Chemistry (5 ed.). Prentice Hall. pp. 144–145. ISBN 0-13-685512-1.
5. ^ Laidler, Keith J.; Meiser, John H. (1982). Physical Chemistry. Benjamin/Cummings. p. 488. ISBN 0-8053-5682-7.
6. ^ Feynman, Richard; Leighton, Robert B.; Sands, Matthew (2006). The Feynman Lectures on Physics – The Definitive Edition, Vol. 1, lect. 6. Pearson PLC, Addison Wesley. p. 11. ISBN 0-8053-9046-4.
7. ^ Roger Penrose, The Road to Reality
8. ^ Mulliken, Robert S. (July 1932). "Electronic Structures of Polyatomic Molecules and Valence. II. General Considerations". Physical Review 41 (1): 49–71. Bibcode:1932PhRv...41...49M. doi:10.1103/PhysRev.41.49.
9. ^ a b Bohr, Niels (1913). "On the Constitution of Atoms and Molecules". Philosophical Magazine 26 (1): 476. Bibcode:1914Natur..93..268N. doi:10.1038/093268a0.
10. ^ a b Nagaoka, Hantaro (May 1904). "Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7 (41): 445–455. doi:10.1080/14786440409463141.
11. ^ Bryson, Bill (2003). A Short History of Nearly Everything. Broadway Books. pp. 141–143. ISBN 0-7679-0818-X.
12. ^ Thomson, J. J. (1897). "Cathode rays". Philosophical Magazine 44 (269): 293. doi:10.1080/14786449708621070.
14. ^ Rhodes, Richard (1995). The Making of the Atomic Bomb. Simon & Schuster. pp. 50–51. ISBN 978-0-684-81378-3.
15. ^ Nagaoka, Hantaro (May 1904).
"Kinetics of a System of Particles illustrating the Line and the Band Spectrum and the Phenomena of Radioactivity". Philosophical Magazine 7: 446.  16. ^ Geiger, H.; Marsden, E. (1909). "On a Diffuse Reflection of the α-Particles". Proceedings of the Royal Society, Series A 82 (557): 495–500. Bibcode:1909RSPSA..82..495G. doi:10.1098/rspa.1909.0054.  17. ^ Heisenberg, W. (March 1927). "Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik". Zeitschrift für Physik A 43 (3–4): 172–198. Bibcode:1927ZPhy...43..172H. doi:10.1007/BF01397280.  18. ^ Bohr, Niels (April 1928). "The Quantum Postulate and the Recent Development of Atomic Theory". Nature 121 (3050): 580–590. Bibcode:1928Natur.121..580B. doi:10.1038/121580a0.  19. ^ Gerlach, W.; Stern, O. (1922). "Das magnetische Moment des Silberatoms". Zeitschrift für Physik 9: 353–355. Bibcode:1922ZPhy....9..353G. doi:10.1007/BF01326984.  20. ^ Levine, Ira (2000). Quantum Chemistry. Upper Saddle River, NJ: Prentice-Hall. p. 148. ISBN 0-13-685512-1.  21. ^ Powell, Richard E. (1968). "The five equivalent d orbitals". Journal of Chemical Education 45 (1): 45. Bibcode:1968JChEd..45...45P. doi:10.1021/ed045p45.  22. ^ Kimball, George E. (1940). "Directed Valence". The Journal of Chemical Physics 8 (2): 188. Bibcode:1940JChPh...8..188K. doi:10.1063/1.1750628.  23. ^ Cazenave, T.; Lions, P. L. (1982). "Orbital stability of standing waves for some nonlinear Schrödinger equations". Communications in Mathematical Physics 85 (4): 549–561. Bibcode:1982CMaPh..85..549C. doi:10.1007/BF01403504.  25. ^ Lower, Stephen. "Primer on Quantum Theory of the Atom".  26. ^ Szabo, Attila (1969). "Contour diagrams for relativistic orbitals". Journal of Chemical Education 46 (10): 678. Bibcode:1969JChEd..46..678S. doi:10.1021/ed046p678. 
Many-worlds interpretation From Wikipedia, the free encyclopedia The quantum-mechanical "Schrödinger's cat" paradox according to the many-worlds interpretation. In this interpretation, every event is a branch point; the cat is both alive and dead, even before the box is opened, but the "alive" and "dead" cats are in different branches of the universe, both of which are equally real, but which do not interact with each other.[1] The many-worlds interpretation is an interpretation of quantum mechanics that asserts the objective reality of the universal wavefunction and denies the actuality of wavefunction collapse. The original relative state formulation is due to Hugh Everett in 1957.[3][4] Later, this formulation was popularized and renamed many-worlds by Bryce Seligman DeWitt in the 1960s and 1970s.[1][5][6][7] The decoherence approaches to interpreting quantum theory have been further explored and developed,[8][9][10] becoming quite popular. MWI is one of many multiverse hypotheses in physics and philosophy. It is currently considered a mainstream interpretation along with the other decoherence interpretations, collapse theories (including the historical Copenhagen interpretation),[11] and hidden variable theories such as Bohmian mechanics. Before many-worlds, reality had always been viewed as a single unfolding history. Many-worlds, however, views reality as a many-branched tree, wherein every possible quantum outcome is realised.[12] Many-worlds reconciles the observation of non-deterministic events, such as random radioactive decay, with the fully deterministic equations of quantum physics. In many-worlds, the subjective appearance of wavefunction collapse is explained by the mechanism of quantum decoherence, and this is supposed to resolve all of the correlation paradoxes of quantum theory, such as the EPR paradox[13][14] and Schrödinger's cat,[1] since every possible outcome of every event defines or exists in its own "history" or "world". 
In Dublin in 1952 Erwin Schrödinger gave a lecture in which at one point he jocularly warned his audience that what he was about to say might "seem lunatic". It was that, when the equations that had won him the Nobel prize seem to be describing several different histories, they are "not alternatives but all really happen simultaneously". This is the earliest known reference to many-worlds.[15][16] Hugh Everett III (1930–1982) was the first physicist who proposed the many-worlds interpretation (MWI) of quantum physics, which he termed his "relative state" formulation. Although several versions of many-worlds have been proposed since Hugh Everett's original work,[4] they all contain one key idea: the equations of physics that model the time evolution of systems without embedded observers are sufficient for modelling systems which do contain observers; in particular there is no observation-triggered wave function collapse of the kind the Copenhagen interpretation proposes. Provided the theory is linear with respect to the wavefunction, the exact form of the quantum dynamics modelled, be it the non-relativistic Schrödinger equation, relativistic quantum field theory or some form of quantum gravity or string theory, does not alter the validity of MWI, since MWI is a metatheory applicable to all linear quantum theories, and there is no experimental evidence for any non-linearity of the wavefunction in physics.[17][18] MWI's main conclusion is that the universe (or multiverse in this context) is composed of a quantum superposition of very many, possibly even non-denumerably infinitely[2] many, increasingly divergent, non-communicating parallel universes or quantum worlds.[7] The idea of MWI originated in Everett's Princeton Ph.D. 
thesis "The Theory of the Universal Wavefunction",[7] developed under his thesis advisor John Archibald Wheeler, a shorter summary of which was published in 1957 entitled "Relative State Formulation of Quantum Mechanics" (Wheeler contributed the title "relative state";[19] Everett originally called his approach the "Correlation Interpretation", where "correlation" refers to quantum entanglement). The phrase "many-worlds" is due to Bryce DeWitt,[7] who was responsible for the wider popularisation of Everett's theory, which had been largely ignored for the first decade after publication. DeWitt's phrase "many-worlds" has become so much more popular than Everett's "Universal Wavefunction" or Everett–Wheeler's "Relative State Formulation" that many forget that this is only a difference of terminology; the content of both of Everett's papers and DeWitt's popular article is the same. The many-worlds interpretation shares many similarities with later, other "post-Everett" interpretations of quantum mechanics which also use decoherence to explain the process of measurement or wavefunction collapse. MWI treats the other histories or worlds as real since it regards the universal wavefunction as the "basic physical entity"[20] or "the fundamental entity, obeying at all times a deterministic wave equation".[21] The other decoherent interpretations, such as consistent histories, the Existential Interpretation etc., either regard the extra quantum worlds as metaphorical in some sense, or are agnostic about their reality; it is sometimes hard to distinguish between the different varieties. MWI is distinguished by two qualities: it assumes realism,[20][21] which it assigns to the wavefunction, and it has the minimal formal structure possible, rejecting any hidden variables, quantum potential, any form of a collapse postulate (i.e., Copenhagenism) or mental postulates (such as the many-minds interpretation makes). 
Decoherent interpretations of many-worlds using einselection to explain how a small number of classical pointer states can emerge from the enormous Hilbert space of superpositions have been proposed by Wojciech H. Zurek. "Under scrutiny of the environment, only pointer states remain unchanged. Other states decohere into mixtures of stable pointer states that can persist, and, in this sense, exist: They are einselected."[22] These ideas complement MWI and bring the interpretation in line with our perception of reality. Many-worlds is often referred to as a theory, rather than just an interpretation, by those who propose that many-worlds can make testable predictions (such as David Deutsch) or is falsifiable (such as Everett) or by those who propose that all the other, non-MW interpretations, are inconsistent, illogical or unscientific in their handling of measurements; Hugh Everett argued that his formulation was a metatheory, since it made statements about other interpretations of quantum theory; that it was the "only completely coherent approach to explaining both the contents of quantum mechanics and the appearance of the world."[23] Deutsch is dismissive that many-worlds is an "interpretation", saying that calling it an interpretation "is like talking about dinosaurs as an 'interpretation' of fossil records."[24] Interpreting wavefunction collapse[edit] As with the other interpretations of quantum mechanics, the many-worlds interpretation is motivated by behavior that can be illustrated by the double-slit experiment. When particles of light (or anything else) are passed through the double slit, a calculation assuming wave-like behavior of light can be used to identify where the particles are likely to be observed. Yet when the particles are observed in this experiment, they appear as particles (i.e., at definite places) and not as non-localized waves. 
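The wave calculation mentioned above can be sketched in a few lines (a minimal illustration; the slit separation, wavelength and screen distance are my own assumed numbers): each slit contributes a complex amplitude, and the detection probability at a screen position is the squared magnitude of their sum.

```python
# Minimal double-slit sketch (assumed, illustrative parameters): the
# probability of detecting a particle at a screen position is the squared
# magnitude of the summed complex amplitudes from the two slits.
import cmath
import math

WAVELENGTH = 500e-9   # 500 nm light (assumed)
K = 2 * math.pi / WAVELENGTH
SLIT_SEP = 50e-6      # slit separation d (assumed)
SCREEN_DIST = 1.0     # slit-to-screen distance L (assumed)

def amplitude(slit_y: float, screen_y: float) -> complex:
    """Spherical-wave amplitude exp(i*k*r)/r from one slit to the screen."""
    r = math.hypot(screen_y - slit_y, SCREEN_DIST)
    return cmath.exp(1j * K * r) / r

def intensity(screen_y: float) -> float:
    """Interference: the two amplitudes add *before* squaring."""
    psi = amplitude(+SLIT_SEP / 2, screen_y) + amplitude(-SLIT_SEP / 2, screen_y)
    return abs(psi) ** 2

# Fringe spacing is wavelength * L / d = 10 mm here, so y = 5 mm sits on a
# dark fringe: the central bright-fringe intensity dwarfs it.
print(intensity(0.0) / intensity(0.005))
```

Squaring each slit's amplitude separately and then adding would instead give a smooth, fringe-free distribution, which is exactly the wave-versus-particle contrast the paragraph describes.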
Some versions of the Copenhagen interpretation of quantum mechanics proposed a process of "collapse" in which an indeterminate quantum system would probabilistically collapse down onto, or select, just one determinate outcome to "explain" this phenomenon of observation. Wavefunction collapse was widely regarded as artificial and ad hoc[citation needed], so an alternative interpretation in which the behavior of measurement could be understood from more fundamental physical principles was considered desirable. Everett's Ph.D. work provided such an alternative interpretation. Everett stated that for a composite system – for example a subject (the "observer" or measuring apparatus) observing an object (the "observed" system, such as a particle) – the statement that either the observer or the observed has a well-defined state is meaningless; in modern parlance, the observer and the observed have become entangled; we can only specify the state of one relative to the other, i.e., the state of the observer and the observed are correlated after the observation is made. This led Everett to derive from the unitary, deterministic dynamics alone (i.e., without assuming wavefunction collapse) the notion of a relativity of states. Everett noticed that the unitary, deterministic dynamics alone decreed that after an observation is made each element of the quantum superposition of the combined subject–object wavefunction contains two "relative states": a "collapsed" object state and an associated observer who has observed the same collapsed outcome; what the observer sees and the state of the object have become correlated by the act of measurement or observation. The subsequent evolution of each pair of relative subject–object states proceeds with complete indifference as to the presence or absence of the other elements, as if wavefunction collapse has occurred, which has the consequence that later observations are always consistent with the earlier observations. 
Thus the appearance of the object's wavefunction's collapse has emerged from the unitary, deterministic theory itself. (This answered Einstein's early criticism of quantum theory, that the theory should define what is observed, not for the observables to define the theory).[25] Since the wavefunction merely appears to have collapsed then, Everett reasoned, there was no need to actually assume that it had collapsed. And so, invoking Occam's razor, he removed the postulate of wavefunction collapse from the theory. Attempts have been made, by many-world advocates and others, over the years to derive the Born rule, rather than just conventionally assume it, so as to reproduce all the required statistical behaviour associated with quantum mechanics. There is no consensus on whether this has been successful.[26][27][28] Frequency-based approaches[edit] Everett (1957) briefly derived the Born rule by showing that the Born rule was the only possible rule, and that its derivation was as justified as the procedure for defining probability in classical mechanics. Everett stopped doing research in theoretical physics shortly after obtaining his Ph.D., but his work on probability has been extended by a number of people. Andrew Gleason (1957) and James Hartle (1965) independently reproduced Everett's work[29] which was later extended.[30][31] These results are closely related to Gleason's theorem, a mathematical result according to which the Born probability measure is the only one on Hilbert space that can be constructed purely from the quantum state vector.[32] Bryce DeWitt and his doctoral student R. Neill Graham later provided alternative (and longer) derivations to Everett's derivation of the Born rule.[7] They demonstrated that the norm of the worlds where the usual statistical rules of quantum theory broke down vanished, in the limit where the number of measurements went to infinity. 
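DeWitt and Graham's limiting argument can be illustrated numerically (my own sketch with an assumed example weight, not their actual derivation): the combined squared norm of the "maverick" branches, whose observed outcome frequencies deviate from the Born weight, shrinks toward zero as the number of measurements grows.

```python
# Sketch (my own illustration, not the DeWitt-Graham derivation): the total
# squared norm of branches whose observed frequencies deviate from the Born
# weights vanishes as the number of repeated measurements grows.
from math import comb

def maverick_norm(p_up: float, n: int, tol: float = 0.1) -> float:
    """Total squared norm of n-measurement branches whose fraction of
    'up' outcomes differs from the Born weight p_up by more than tol."""
    total = 0.0
    for k in range(n + 1):  # k = number of 'up' outcomes in the history
        if abs(k / n - p_up) > tol:
            # Each such branch carries squared norm p_up^k * (1-p_up)^(n-k);
            # comb(n, k) counts the distinct measurement histories with k ups.
            total += comb(n, k) * p_up**k * (1 - p_up)**(n - k)
    return total

p = 0.64  # |a|^2 for an assumed example amplitude a = 0.8
for n in (10, 100, 1000):
    print(n, maverick_norm(p, n))  # shrinks toward zero as n grows
```

The branches never disappear; only their collective norm becomes negligible, which is the sense in which "the usual statistical rules broke down" on a vanishing set of worlds.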
Decision theory[edit] A decision-theoretic derivation of the Born rule from Everettarian assumptions was produced by David Deutsch (1999)[33] and refined by Wallace (2002–2009)[34][35][36][37] and Saunders (2004).[38][39] Deutsch's derivation is a two-stage proof: first he shows that the number of orthonormal Everett-worlds after a branching is proportional to the conventional probability density. Then he uses decision theory to show that these are all equally likely to be observed. The last step in particular has been criticised for circularity.[40][41] Some other reviews have been positive, although the status of these arguments remains highly controversial; some theoretical physicists have taken them as supporting the case for parallel universes.[42][43] In the New Scientist article, reviewing their presentation at a September 2007 conference,[44][45] Andy Albrecht, a physicist at the University of California at Davis, is quoted as saying "This work will go down as one of the most important developments in the history of science."[42] The Born rule and the collapse of the wave function have been obtained in the framework of the relative-state formulation of quantum mechanics by Armando V.D.B. Assis. He has proved that the Born rule and the collapse of the wave function follow from a game-theoretical strategy, namely the Nash equilibrium within a von Neumann zero-sum game between nature and observer.[46] Symmetries and invariance[edit] Wojciech H. Zurek (2005)[47] has produced a derivation of the Born rule, where decoherence has replaced Deutsch's informatic assumptions.[48] Lutz Polley (2000) has produced Born rule derivations where the informatic assumptions are replaced by symmetry arguments.[49][50] Charles Sebens and Sean M. 
Carroll, building on work by Lev Vaidman,[51] proposed a similar approach based on self-locating uncertainty.[52] In this approach, decoherence creates multiple identical copies of observers, who can assign credences to being on different branches using the Born rule. Brief overview[edit] In Everett's formulation, a measuring apparatus M and an object system S form a composite system, each of which prior to measurement exists in well-defined (but time-dependent) states. Measurement is regarded as causing M and S to interact. After S interacts with M, it is no longer possible to describe either system by an independent state. According to Everett, the only meaningful descriptions of each system are relative states: for example the relative state of S given the state of M or the relative state of M given the state of S. In DeWitt's formulation, the state of S after a sequence of measurements is given by a quantum superposition of states, each one corresponding to an alternative measurement history of S. [Figure: Schematic illustration of splitting as a result of a repeated measurement.] For example, consider the smallest possible truly quantum system S, as shown in the illustration. This describes, for instance, the spin-state of an electron. Considering a specific axis (say the z-axis), the north pole represents spin "up" and the south pole, spin "down". The superposition states of the system are described by (the surface of) a sphere called the Bloch sphere. To perform a measurement on S, it is made to interact with another similar system M. After the interaction, the combined system is described by a state that ranges over a six-dimensional space (the reason for the number six is explained in the article on the Bloch sphere). This six-dimensional object can also be regarded as a quantum superposition of two "alternative histories" of the original system S, one in which "up" was observed and the other in which "down" was observed. 
Each subsequent binary measurement (that is, interaction with a system M) causes a similar split in the history tree. Thus after three measurements, the system can be regarded as a quantum superposition of 8 = 2 × 2 × 2 copies of the original system S. The accepted terminology is somewhat misleading, because it is incorrect to regard the universe as splitting at certain times; at any given instant there is one state in one universe. Relative state[edit] In his 1957 doctoral dissertation, Everett proposed that rather than modeling an isolated quantum system subject to external observation, one could mathematically model an object as well as its observers as purely physical systems within the mathematical framework developed by Paul Dirac, von Neumann and others, discarding altogether the ad hoc mechanism of wave function collapse. Since Everett's original work, there have appeared a number of similar formalisms in the literature. One such idea is discussed in the next section. The relative state formulation makes two assumptions. The first is that the wavefunction is not simply a description of the object's state, but that it actually is entirely equivalent to the object, a claim it has in common with some other interpretations. The second is that observation or measurement has no special laws or mechanics, unlike in the Copenhagen interpretation, which considers the wavefunction collapse a special kind of event which occurs as a result of observation. Instead, measurement in the relative state formulation is the consequence of a configuration change in the memory of an observer described by the same basic wave physics as the object being modeled. The many-worlds interpretation is DeWitt's popularisation of Everett's work; Everett had referred to the combined observer–object system as being split by an observation, each split corresponding to the different or multiple possible outcomes of an observation. 
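The branch counting described above (8 = 2 × 2 × 2 copies after three binary measurements) can be sketched as a toy computation (illustrative only; the equal "up"/"down" amplitudes are my own simplifying assumption):

```python
# Toy sketch (illustrative, not from the article): after n binary
# measurements the combined wavefunction is a superposition of 2**n
# relative-state branches, each labelled by one measurement history.
# Equal amplitudes at every split are an assumed simplification.
from itertools import product

def branches(n: int) -> dict:
    """All 2**n measurement histories with their (real) amplitudes."""
    amp = (1 / 2**0.5) ** n  # equal-weight "up"/"down" split at every step
    return {"".join(h): amp for h in product("UD", repeat=n)}

tree = branches(3)
assert len(tree) == 8  # 8 = 2 * 2 * 2 copies, as in the text
# Squared norms of all branches sum to 1: no branch is ever discarded.
assert abs(sum(a * a for a in tree.values()) - 1.0) < 1e-12
print(sorted(tree))    # ['DDD', 'DDU', ..., 'UUU']
```

Each key is one complete measurement history, i.e. one of DeWitt's "worlds"; unequal amplitudes would simply give the branches unequal squared norms while still summing to 1.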
These splits generate a branching tree of possible outcomes, as shown in the graphic below. Subsequently, DeWitt introduced the term "world" to describe a complete measurement history of an observer, which corresponds roughly to a single branch of that tree. Note that "splitting" in this sense is hardly new or even quantum mechanical. The idea of a space of complete alternative histories had already been used in the theory of probability since the mid-1930s, for instance to model Brownian motion. [Figure: Partial trace as relative state. Light blue rectangle on upper left denotes system in pure state. Trellis shaded rectangle in upper right denotes a (possibly) mixed state. Mixed state from observation is partial trace of a linear superposition of states as shown in lower right-hand corner.] [Figure: Successive measurements with successive splittings.] Under the many-worlds interpretation, the Schrödinger equation, or relativistic analog, holds all the time everywhere. An observation or measurement of an object by an observer is modeled by applying the wave equation to the entire system comprising the observer and the object. One consequence is that every observation can be thought of as causing the combined observer–object's wavefunction to change into a quantum superposition of two or more non-interacting branches, or split into many "worlds". Since many observation-like events have happened, and are constantly happening, there are an enormous and growing number of simultaneously existing states. If a system is composed of two or more subsystems, the system's state will be a superposition of products of the subsystems' states. Once the subsystems interact, their states are no longer independent. Each product of subsystem states in the overall superposition evolves over time independently of other products. The subsystems' states have become correlated or entangled and it is no longer possible to consider them independent of one another. 
In Everett's terminology each subsystem state was now correlated with its relative state, since each subsystem must now be considered relative to the other subsystems with which it has interacted. Properties of the theory[edit] Comparative properties and possible experimental tests[edit] One of the salient properties of the many-worlds interpretation is that it does not require an exceptional method of wave function collapse to explain it. "It seems that there is no experiment distinguishing the MWI from other no-collapse theories such as Bohmian mechanics or other variants of MWI... In most no-collapse interpretations, the evolution of the quantum state of the Universe is the same. Still, one might imagine that there is an experiment distinguishing the MWI from another no-collapse interpretation based on the difference in the correspondence between the formalism and the experience (the results of experiments)."[57] However, in 1985, David Deutsch published three related thought experiments which could test the theory vs the Copenhagen interpretation.[58] The experiments require macroscopic quantum state preparation and quantum erasure by a hypothetical quantum computer which is currently outside experimental possibility. Since then Lockwood (1989), Vaidman and others have made similar proposals.[57] These proposals also require an advanced technology which is able to place a macroscopic object in a coherent superposition, another task which it is uncertain will ever be possible to perform. Many other controversial ideas have been put forward though, such as a recent claim that cosmological observations could test the theory,[59] and another claim by Rainer Plaga (1997), published in Foundations of Physics, that communication might be possible between worlds.[60] Copenhagen interpretation[edit] In the Copenhagen interpretation, the mathematics of quantum mechanics allows one to predict probabilities for the occurrence of various events. 
When an event occurs, it becomes part of the definite reality, and alternative possibilities do not. There is no necessity to say anything definite about what is not observed. The universe decaying to a new vacuum state[edit] Any event that changes the number of observers in the universe may have experimental consequences.[61] Quantum tunnelling to a new vacuum state would reduce the number of observers to zero (i.e., kill all life).[citation needed] Some cosmologists[citation needed] argue that the universe is in a false vacuum state and that consequently the universe should have already experienced quantum tunnelling to a true vacuum state. This has not happened, and this is cited as evidence in favor of many-worlds: in some worlds quantum tunnelling to a true vacuum state has happened, but most other worlds escape this tunnelling and remain viable. This can be thought of as a variation on quantum suicide. The many-minds interpretation is a multi-world interpretation that defines the splitting of reality on the level of the observers' minds. In this, it differs from Everett's many-worlds interpretation, in which there is no special role for the observer's mind.[60] Common objections[edit] • The many-worlds interpretation is very vague about the ways to determine when splitting happens, and nowadays usually the criterion is that the two branches have decohered. However, present-day understanding of decoherence does not allow a completely precise, self-contained way to say when the two branches have decohered/"do not interact", and hence the many-worlds interpretation remains arbitrary. This objection says that it is not clear what precisely is meant by branching, and points to the lack of self-contained criteria specifying branching. MWI response: the decoherence or "splitting" or "branching" is complete when the measurement is complete. In Dirac notation, a measurement is complete when ⟨Oi|Oj⟩ = δij, where |Oi⟩ represents the observer having detected the object system in the ith state. 
Before the measurement has started the observer states are identical; after the measurement is complete the observer states are orthonormal.[4][7] Thus a measurement defines the branching process: the branching is as well- or ill-defined as the measurement is; the branching is as complete as the measurement is complete – which is to say that the delta function above represents an idealised measurement. Although true "for all practical purposes", in reality the measurement, and hence the branching, is never fully complete, since delta functions are unphysical.[63] Since the observer and measurement per se play no special role in MWI (measurements are handled as all other interactions are), there is no need for a precise definition of what an observer or a measurement is — just as in Newtonian physics no precise definition of either an observer or a measurement was required or expected. In all circumstances the universal wavefunction is still available to give a complete description of reality. Also, it is a common misconception to think that branches are completely separate. In Everett's formulation, they may in principle quantum interfere (i.e., "merge" instead of "splitting") with each other in the future,[64] although this requires all "memory" of the earlier branching event to be lost, so no observer ever sees two branches of reality.[65][66] • MWI states that there is no special role, or need for precise definition, of measurement in MWI, yet Everett uses the word "measurement" repeatedly throughout his exposition. MWI response: "measurements" are treated as a subclass of interactions, which induce subject–object correlations in the combined wavefunction. 
There is nothing special about measurements (such as the ability to trigger a wave function collapse) that cannot be dealt with by the usual unitary time development process.[3] This is why there is no precise definition of measurement in Everett's formulation, although some other formulations emphasize that measurements must be effectively irreversible or create classical information. • The splitting of worlds forward in time, but not backwards in time (i.e., merging worlds), is time asymmetric and incompatible with the time-symmetric nature of Schrödinger's equation, or CPT invariance in general.[67] MWI response: The splitting is time asymmetric; this observed temporal asymmetry is due to the boundary conditions imposed by the Big Bang.[68] • There is circularity in Everett's measurement theory. Under the assumptions made by Everett, there are no 'good observations' as defined by him, and since his analysis of the observational process depends on the latter, it is void of any meaning. The concept of a 'good observation' is the projection postulate in disguise and Everett's analysis simply derives this postulate by having assumed it, without any discussion.[69][unreliable source?] MWI response: Everett's treatment of observations / measurements covers both idealised good measurements and the more general bad or approximate cases.[70] Thus it is legitimate to analyse probability in terms of measurement; no circularity is present. • Talk of probability in Everett presumes the existence of a preferred basis to identify measurement outcomes for the probabilities to range over. But the existence of a preferred basis can only be established by the process of decoherence, which is itself probabilistic[40] or arbitrary.[71] MWI response: Everett analysed branching using what we now call the "measurement basis". It is a fundamental theorem of quantum theory that nothing measurable or empirical is changed by adopting a different basis. 
Everett was therefore free to choose whatever basis he liked. The measurement basis was simply the simplest basis in which to analyse the measurement process.[72][73] • MWI's validity rests on the linearity of quantum mechanics with respect to the wavefunction; a future non-linear theory would invalidate it. MWI response: All accepted quantum theories of fundamental physics are linear with respect to the wavefunction. While quantum gravity or string theory may be non-linear in this respect, there is no evidence to indicate this at the moment.[17][18] • Conservation of energy is grossly violated if at every instant near-infinite amounts of new matter are generated to create the new universes. MWI response: There are two responses to this objection. First, the law of conservation of energy says that energy is conserved within each universe. Hence, even if "new matter" were being generated to create new universes, this would not violate conservation of energy. Second, conservation of energy is not violated since the energy of each branch has to be weighted by its probability, according to the standard formula for the conservation of energy in quantum theory. This results in the total energy of the multiverse being conserved.[75][unreliable source?] • Occam's razor rules against a plethora of unobservable universes – Occam would prefer just one universe; i.e., any non-MWI. MWI response: Occam's razor actually is a constraint on the complexity of physical theory, not on the number of universes. MWI is a simpler theory since it has fewer postulates.[56][unreliable source?] Occam's razor is often cited by MWI adherents as an advantage of MWI. • Unphysical universes: If a state is a superposition of two states ψA and ψB, i.e., ψ = aψA + bψB, weighted by coefficients a and b, then if b ≪ a, what principle allows a universe with vanishingly small probability b to be instantiated on an equal footing with the much more probable one with probability a? This seems to throw away the information in the probability amplitudes. 
MWI response: The magnitude of the coefficients provides the weighting that makes the branches or universes "unequal", as Everett and others have shown, leading to the emergence of the conventional probabilistic rules.[1][4][5][6][7][76][unreliable source?] • Violation of the principle of locality, which contradicts special relativity: MWI splitting is instant and total: this may conflict with relativity, since an alien in the Andromeda galaxy can't know I collapse an electron over here before she collapses hers there: the relativity of simultaneity says we can't say which electron collapsed first – so which one splits off another universe first? This leads to a hopeless muddle with everyone splitting differently. Note: EPR is not a get-out here, as the alien's and my electrons need never have been part of the same quantum system, i.e., entangled. MWI response: the splitting can be regarded as causal, local and relativistic, spreading at, or below, the speed of light (e.g., we are not split by Schrödinger's cat until we look in the box).[77][unreliable source?] For spacelike separated splitting you can't say which occurred first — but this is true of all spacelike separated events; simultaneity is not defined for them. Splitting is no exception; many-worlds is a local theory.[53] • Quantum tunnelling and radioactive decay are inadequately explained by the MWI. A tunnelling particle would have more energy than what is actually measured in experiments.[78] This discrepancy suggests that the MWI is not an accurate description of reality; further, it suggests that the wave-function is physically real, and not merely a statistical description of many worlds.[79] There is a wide range of claims that are considered "many-worlds" interpretations. 
It was often claimed by those who do not believe in MWI[80] that Everett himself was not entirely clear[81] as to what he believed; however, MWI adherents (such as DeWitt, Tegmark, Deutsch and others) believe they fully understand Everett's meaning as implying the literal existence of the other worlds. Additionally, recent biographical sources make it clear that Everett believed in the literal reality of the other quantum worlds.[24] Everett's son reported that Hugh Everett "never wavered in his belief over his many-worlds theory".[82] Also Everett was reported to believe "his many-worlds theory guaranteed him immortality".[83] One of MWI's strongest advocates is David Deutsch.[84] According to Deutsch, the single photon interference pattern observed in the double slit experiment can be explained by interference of photons in multiple universes. Viewed in this way, the single photon interference experiment is indistinguishable from the multiple photon interference experiment. In a more practical vein, in one of the earliest papers on quantum computing,[85] he suggested that the parallelism that results from the validity of MWI could lead to "a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it". Deutsch has also proposed that when reversible computers become conscious, MWI will be testable (at least against "naive" Copenhagenism) via the reversible observation of spin.[65] Asher Peres was an outspoken critic of MWI. For example, a section in his 1993 textbook had the title Everett's interpretation and other bizarre theories. Peres not only questioned whether MWI is really an "interpretation", but asked whether any interpretation of quantum mechanics is needed at all. An interpretation can be regarded as a purely formal transformation, which adds nothing to the rules of quantum mechanics.[citation needed] Peres seems to suggest[according to whom?] 
that positing the existence of an infinite number of non-communicating parallel universes is highly suspect per those[who?] who interpret it as a violation of Occam's razor, i.e., that it does not minimize the number of hypothesized entities. However, it is understood[by whom?] that the large number of elementary particles is not a gross violation of Occam's Razor: one counts the types, not the tokens. Max Tegmark remarks[where?] that the alternative to many-worlds is "many words", an allusion to the complexity of von Neumann's collapse postulate. On the other hand, the same derogatory qualification "many words" is often applied to MWI by its critics[who?] who see it as a word game which obfuscates rather than clarifies by confounding the von Neumann branching of possible worlds with the Schrödinger parallelism of many worlds in superposition.[citation needed] MWI is considered by some[who?] to be unfalsifiable and hence unscientific because the multiple parallel universes are non-communicating, in the sense that no information can be passed between them. Others[65] claim MWI is directly testable. Everett regarded MWI as falsifiable since any test that falsifies conventional quantum theory would also falsify MWI.[23] According to Martin Gardner, the "other" worlds of MWI have two different interpretations: real or unreal; he claims that Stephen Hawking and Steve Weinberg both favour the unreal interpretation.[86] Gardner also claims that the nonreal interpretation is favoured by the majority of physicists, whereas the "realist" view is only supported by MWI experts such as Deutsch and Bryce DeWitt. 
Hawking has said that "according to Feynman's idea", all the other histories are as "equally real" as our own,[87] and Martin Gardner reports Hawking saying that MWI is "trivially true".[88] In a 1983 interview, Hawking also said he regarded the MWI as "self-evidently correct" but was dismissive towards questions about the interpretation of quantum mechanics, saying, "When I hear of Schrödinger's cat, I reach for my gun." In the same interview, he also said, "But, look: All that one does, really, is to calculate conditional probabilities—in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities."[89] Elsewhere Hawking contrasted his attitude towards the "reality" of physical theories with that of his colleague Roger Penrose, saying, "He's a Platonist and I'm a positivist. He's worried that Schrödinger's cat is in a quantum state, where it is half alive and half dead. He feels that can't correspond to reality. But that doesn't bother me. I don't demand that a theory correspond to reality because I don't know what it is. Reality is not a quality you can test with litmus paper. All I'm concerned with is that the theory should predict the results of measurements. Quantum theory does this very successfully."[90] For his own part, Penrose agrees with Hawking that QM applied to the universe implies many worlds, although he considers that the current lack of a successful theory of quantum gravity negates the claimed universality of conventional QM.[74] Advocates of MWI often cite a poll of 72 "leading cosmologists and other quantum field theorists"[91] conducted by the American political scientist David Raub in 1995 showing 58% agreement with "Yes, I think MWI is true".[92] The poll is controversial: for example, Victor J. 
Stenger remarks that Murray Gell-Mann's published work explicitly rejects the existence of simultaneous parallel universes. Collaborating with James Hartle, Gell-Mann is working toward the development of a more "palatable" post-Everett quantum mechanics. Stenger thinks it's fair to say that most physicists dismiss the many-worlds interpretation as too extreme, while noting it "has merit in finding a place for the observer inside the system being analyzed and doing away with the troublesome notion of wave function collapse".[93] Max Tegmark also reports the result of a "highly unscientific" poll taken at a 1997 quantum mechanics workshop.[94] According to Tegmark, "The many worlds interpretation (MWI) scored second, comfortably ahead of the consistent histories and Bohm interpretations." Such polls have been taken at other conferences, for example, in response to Sean Carroll's observation, "As crazy as it sounds, most working physicists buy into the many-worlds theory".[95] Michael Nielsen counters: "at a quantum computing conference at Cambridge in 1998, a many-worlder surveyed the audience of approximately 200 people... Many-worlds did just fine, garnering support on a level comparable to, but somewhat below, Copenhagen and decoherence." However, Nielsen notes that it seemed most attendees found it to be a waste of time: Asher Peres "got a huge and sustained round of applause… when he got up at the end of the polling and asked 'And who here believes the laws of physics are decided by a democratic vote?'"[96] A 2005 poll of fewer than 40 students and researchers taken after a course on the Interpretation of Quantum Mechanics at the Institute for Quantum Computing, University of Waterloo found "Many Worlds (and decoherence)" to be the least favored.[97] A 2011 poll of 33 participants at an Austrian conference found 6 endorsed MWI, 8 "Information-based/information-theoretical", and 14 Copenhagen;[98] the authors remark that the results are similar to Tegmark's 1998 poll. 
Speculative implications[edit] Speculative physics deals with questions which are also discussed in science fiction. Quantum suicide thought experiment[edit] Quantum suicide, as a thought experiment, was published independently by Hans Moravec in 1987[99][100] and Bruno Marchal in 1988[101][102] and was independently developed further by Max Tegmark in 1998.[103] It attempts to distinguish between the Copenhagen interpretation of quantum mechanics and the Everett many-worlds interpretation by means of a variation of the Schrödinger's cat thought experiment, from the cat's point of view. Quantum immortality refers to the subjective experience of surviving quantum suicide regardless of the odds.[104] Weak coupling[edit] Another speculation is that the separate worlds remain weakly coupled (e.g., by gravity) permitting "communication between parallel universes". A possible test of this using quantum-optical equipment is described in a 1997 Foundations of Physics article by Rainer Plaga.[60] It involves an isolated ion in an ion trap, a quantum measurement that would yield two parallel worlds (their difference just being in the detection of a single photon), and the excitation of the ion from only one of these worlds. If the excited ion can be detected from the other parallel universe, then this would constitute direct evidence in support of the many-worlds interpretation and would automatically exclude the orthodox, "logical", and "many-histories" interpretations. 
The reason the ion is isolated is to keep it from participating immediately in the decoherence which insulates the parallel world branches, therefore allowing it to act as a gateway between the two worlds. If the measurement apparatus could perform the measurements quickly enough, before the gateway ion is decoupled, the test would succeed (with electronic computers the necessary time window between the two worlds would be on a time scale of milliseconds or nanoseconds; if the measurements are taken by humans, a few seconds would still be enough). R. Plaga shows that macroscopic decoherence timescales are a possibility. The proposed test is based on technical equipment described in a 1993 Physical Review article by Itano et al.,[105] and R. Plaga says that this level of technology is enough to realize the proposed inter-world communication experiment. The necessary technology for precision measurements of single ions has existed since the 1970s, and the ion recommended for excitation is 199Hg+. The excitation methodology is described by Itano et al., and the time needed for it is given by the Rabi flopping formula.[106] Such a test as described by R. Plaga would mean that energy transfer is possible between parallel worlds. This does not violate the fundamental principles of physics, because these require energy conservation only for the whole universe and not for the single parallel branches.[60] Nor does the excitation of the single ion (which is a degree of freedom of the proposed system) lead to decoherence, something which is proven by Welcher-Weg ("which-way") detectors, which can excite atoms without the momentum transfer that causes loss of coherence.[107] The proposed test would allow for low-bandwidth inter-world communication, the limiting factors of bandwidth and time being dependent on the technology of the equipment. 
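The Rabi flopping formula mentioned above can be illustrated with a short sketch. For a resonantly driven two-level ion the excitation probability is P(t) = sin²(Ωt/2); the Rabi frequency used here is an assumed illustrative value, not a number from Itano et al.:

```python
# Sketch of on-resonance Rabi flopping for a driven two-level ion:
#   P_excited(t) = sin^2(Omega * t / 2)
# The Rabi frequency below is an illustrative assumption.
import math

def excitation_probability(omega_rabi: float, t: float) -> float:
    """Probability of finding the ion in the excited state after driving for time t."""
    return math.sin(omega_rabi * t / 2) ** 2

omega = 2 * math.pi * 50e3   # assumed Rabi frequency: 50 kHz
t_pi = math.pi / omega       # a "pi pulse": drive time that fully excites the ion

print(excitation_probability(omega, t_pi))      # ≈ 1.0 (fully excited)
print(excitation_probability(omega, t_pi / 2))  # ≈ 0.5 (equal superposition)
```

The π-pulse time t_π = π/Ω is the excitation time the proposal's timing budget would have to accommodate: a larger Rabi frequency shortens the pulse, which matters given the millisecond-to-nanosecond window quoted above.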
Because of the time needed to determine the state of the partially decohered isolated excited ion based on Itano et al.'s methodology, the ion would decohere by the time its state is determined during the experiment, so Plaga's proposal would pass just enough information between the two worlds to confirm their parallel existence and nothing more. The author contemplates that with increased bandwidth, one could even transfer television imagery across the parallel worlds.[60] For example, Itano et al.'s methodology could be improved (by lowering the time needed for state determination of the excited ion) if a more efficient process were found for the detection of fluorescence radiation using 194 nm photons.[60] A 1991 article by J. Polchinski also supports the view that inter-world communication is a theoretical possibility.[108] Other authors in a 1994 preprint article also contemplated similar ideas.[109] The reason inter-world communication seems like a possibility is that the decoherence which separates the parallel worlds is never fully complete,[110][111] so weak influences from one parallel world to another can still pass between them,[110][112] and these should be measurable with advanced technology. Deutsch proposed such an experiment in a 1985 International Journal of Theoretical Physics article,[113] but the technology it requires involves human-level artificial intelligence.[60] Similarity to modal realism[edit] The many-worlds interpretation has some similarity to modal realism in philosophy, which is the view that the possible worlds used to interpret modal claims exist and are of a kind with the actual world. Unlike the possible worlds of philosophy, however, in quantum mechanics counterfactual alternatives can influence the results of experiments, as in the Elitzur–Vaidman bomb-testing problem or the Quantum Zeno effect. 
Also, while the worlds of the many-worlds interpretation all share the same physical laws, modal realism postulates a world for every way things could conceivably have been. Time travel[edit] The many-worlds interpretation could be one possible way to resolve the paradoxes[84] that one would expect to arise if time travel turns out to be permitted by physics (permitting closed timelike curves and thus violating causality). Entering the past would itself be a quantum event causing branching, and therefore the timeline accessed by the time traveller would simply be another timeline of many. In that sense, it would make the Novikov self-consistency principle unnecessary. Many-worlds in literature and science fiction[edit] A map from Robert Sobel's novel For Want of a Nail, an artistic illustration of how small events – in this example the branching or point of divergence from our timeline's history is in October 1777 – can profoundly alter the course of history. According to the many-worlds interpretation every event, even microscopic, is a branch point; all possible alternative histories actually exist.[1] The many-worlds interpretation (and the somewhat related concept of possible worlds) has been associated with numerous themes in literature, art and science fiction. Some of these stories or films violate fundamental principles of causality and relativity, since the information-theoretic structure of the path space of multiple universes (that is, information flow between different paths) is very likely complex. Another kind of popular illustration of many-worlds splittings, which does not involve information flow between paths or information flow backwards in time, considers alternate outcomes of historical events. 
According to the many-worlds interpretation, all of the historical speculations entertained within the alternate history genre are realized in parallel universes.[1] The many-worlds interpretation of reality was anticipated with remarkable fidelity in Olaf Stapledon's 1937 science fiction novel Star Maker, in a paragraph describing one of the many universes created by the Star Maker god of the title. "In one inconceivably complex cosmos, whenever a creature was faced with several possible courses of action, it took them all, thereby creating many distinct temporal dimensions and distinct histories of the cosmos. Since in every evolutionary sequence of the cosmos there were very many creatures, and each was constantly faced with many possible courses, and the combinations of all their courses were innumerable, an infinity of distinct universes exfoliated from every moment of every temporal sequence in this cosmos." See also[edit] 1. ^ a b c d e f g Bryce Seligman DeWitt, Quantum Mechanics and Reality: Could the solution to the dilemma of indeterminism be a universe in which all possible outcomes of an experiment actually occur?, Physics Today, 23(9) pp 30–40 (September 1970) "every quantum transition taking place on every star, in every galaxy, in every remote corner of the universe is splitting our local world on earth into myriads of copies of itself." See also Physics Today, letters followup, 24(4), (April 1971), pp 38–44 2. ^ a b Osnaghi, Stefano; Freitas, Fabio; Olival Freire, Jr (2009). "The Origin of the Everettian Heresy" (PDF). Studies in History and Philosophy of Modern Physics. 40: 97–123. doi:10.1016/j.shpsb.2008.10.002.  3. ^ a b Hugh Everett Theory of the Universal Wavefunction, Thesis, Princeton University, (1956, 1973), pp 1–140 4. ^ a b c d e Everett, Hugh (1957). "Relative State Formulation of Quantum Mechanics". Reviews of Modern Physics. 29: 454–462. Bibcode:1957RvMP...29..454E. doi:10.1103/RevModPhys.29.454.  5. ^ a b c Cecile M. DeWitt, John A. 
Wheeler eds, The Everett–Wheeler Interpretation of Quantum Mechanics, Battelle Rencontres: 1967 Lectures in Mathematics and Physics (1968) 6. ^ a b c Bryce Seligman DeWitt, The Many-Universes Interpretation of Quantum Mechanics, Proceedings of the International School of Physics "Enrico Fermi" Course IL: Foundations of Quantum Mechanics, Academic Press (1972) 7. ^ a b c d e f g h Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X Contains Everett's thesis: The Theory of the Universal Wavefunction, pp 3–140. 8. ^ H. Dieter Zeh, On the Interpretation of Measurement in Quantum Theory, Foundation of Physics, vol. 1, pp. 69–76, (1970). 9. ^ Wojciech Hubert Zurek, Decoherence and the transition from quantum to classical, Physics Today, vol. 44, issue 10, pp. 36–44, (1991). 10. ^ Wojciech Hubert Zurek, Decoherence, einselection, and the quantum origins of the classical, Reviews of Modern Physics, 75, pp 715–775, (2003) 11. ^ The Many Worlds Interpretation of Quantum Mechanics 12. ^ David Deutsch argues that a great deal of fiction is close to a fact somewhere in the so called multiverse, Beginning of Infinity, p. 294 13. ^ Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X Contains Everett's thesis: The Theory of the Universal Wavefunction, where the claim to resolves all paradoxes is made on pg 118, 149. 14. ^ Hugh Everett, Relative State Formulation of Quantum Mechanics, Reviews of Modern Physics vol 29, (July 1957) pp 454–462. The claim to resolve EPR is made on page 462 15. ^ David Deutsch. The Beginning of infinity. Page 310. 16. ^ 17. ^ a b Steven Weinberg, Dreams of a Final Theory: The Search for the Fundamental Laws of Nature (1993), ISBN 0-09-922391-0, pg 68–69 18. 
^ a b Steven Weinberg Testing Quantum Mechanics, Annals of Physics Vol 194 #2 (1989), pg 336–386 19. ^ John Archibald Wheeler, Geons, Black Holes & Quantum Foam, ISBN 0-393-31991-1. pp 268–270 20. ^ a b Everett 1957, section 3, 2nd paragraph, 1st sentence 21. ^ a b Everett [1956]1973, "Theory of the Universal Wavefunction", chapter 6 (e) 22. ^ Zurek, Wojciech (March 2009). "Quantum Darwinism". Nature Physics. 5 (3): 181–188. arXiv:0903.5082Freely accessible. Bibcode:2009NatPh...5..181Z. doi:10.1038/nphys1202.  23. ^ a b Everett 24. ^ a b Peter Byrne, The Many Worlds of Hugh Everett III: Multiple Universes, Mutual Assured Destruction, and the Meltdown of a Nuclear Family, ISBN 978-0-19-955227-6 25. ^ "Whether you can observe a thing or not depends on the theory which you use. It is the theory which decides what can be observed." Albert Einstein to Werner Heisenberg, objecting to placing observables at the heart of the new quantum mechanics, during Heisenberg's 1926 lecture at Berlin; related by Heisenberg in 1968, quoted by Abdus Salam, Unification of Fundamental Forces, Cambridge University Press (1990) ISBN 0-521-37140-6, pp 98–101 26. ^ N.P. Landsman, "The conclusion seems to be that no generally accepted derivation of the Born rule has been given to date, but this does not imply that such a derivation is impossible in principle.", in Compendium of Quantum Physics (eds.) F.Weinert, K. Hentschel, D.Greenberger and B. Falkenburg (Springer, 2008), ISBN 3-540-70622-4 27. ^ Adrian Kent (May 5, 2009), One world versus many: the inadequacy of Everettian accounts of evolution, probability, and scientific confirmation 28. ^ Kent, Adrian (1990). "Against Many-Worlds Interpretations". Int. J. Mod. Phys A. 5: 1745–1762. arXiv:gr-qc/9703089Freely accessible. Bibcode:1990IJMPA...5.1745K. doi:10.1142/S0217751X90000805.  29. ^ James Hartle, Quantum Mechanics of Individual Systems, American Journal of Physics, 1968, vol 36 (#8), pp. 704–712 30. ^ E. Farhi, J. Goldstone & S. 
Gutmann. How probability arises in quantum mechanics., Ann. Phys. (N.Y.) 192, 368–382 (1989). 31. ^ Pitowsky, I. (2005). "Quantum mechanics as a theory of probability". arXiv:quant-ph/0510095Freely accessible.  32. ^ Gleason, A. M. (1957). "Measures on the closed subspaces of a Hilbert space". Journal of Mathematics and Mechanics. 6: 885–893. doi:10.1512/iumj.1957.6.56050. MR 0096113.  33. ^ Deutsch, D. (1999). Quantum Theory of Probability and Decisions. Proceedings of the Royal Society of London A455, 3129–3137. [1]. 34. ^ David Wallace: Quantum Probability and Decision Theory, Revisited 35. ^ David Wallace. Everettian Rationality: defending Deutsch's approach to probability in the Everett interpretation. Stud. Hist. Phil. Mod. Phys. 34 (2003), 415–438. 36. ^ David Wallace (2003), Quantum Probability from Subjective Likelihood: improving on Deutsch's proof of the probability rule 37. ^ David Wallace, 2009,A formal proof of the Born rule from decision-theoretic assumptions 38. ^ Simon Saunders: Derivation of the Born rule from operational assumptions. Proc. Roy. Soc. Lond. A460, 1771–1788 (2004). 39. ^ Simon Saunders, 2004: What is Probability? 40. ^ a b David J Baker, Measurement Outcomes and Probability in Everettian Quantum Mechanics, Studies In History and Philosophy of Science Part B: Studies In History and Philosophy of Modern Physics, Volume 38, Issue 1, March 2007, Pages 153–169 41. ^ H. Barnum, C. M. Caves, J. Finkelstein, C. A. Fuchs, R. Schack: Quantum Probability from Decision Theory? Proc. Roy. Soc. Lond. A456, 1175–1182 (2000). 42. ^ a b Merali, Zeeya (2007-09-21). "Parallel universes make quantum sense". New Scientist (2622). Retrieved 2013-11-22.  (Summary only). 43. ^, Parallel universes exist – study, Sept 23 2007 44. ^ Perimeter Institute, Seminar overview, Probability in the Everett interpretation: state of play, David Wallace – Oxford University, 21 Sept 2007 45. ^ Perimeter Institute, Many worlds at 50 conference, September 21–24, 2007 46. 
^ Armando V.D.B. Assis (2011). "Assis, Armando V.D.B. On the nature of and the emergence of the Born rule. Annalen der Physik, 2011.". Annalen der Physik (Berlin). 523: 883–897. arXiv:1009.1532Freely accessible. Bibcode:2011AnP...523..883A. doi:10.1002/andp.201100062.  47. ^ Wojciech H. Zurek: Probabilities from entanglement, Born's rule from envariance, Phys. Rev. A71, 052105 (2005). 48. ^ Schlosshauer, M.; Fine, A. (2005). "On Zurek's derivation of the Born rule". Found. Phys. 35: 197–213. arXiv:quant-ph/0312058Freely accessible. Bibcode:2005FoPh...35..197S. doi:10.1007/s10701-004-1941-6.  49. ^ Lutz Polley, Position eigenstates and the statistical axiom of quantum mechanics, contribution to conference Foundations of Probability and Physics, Vaxjo, Nov 27 – Dec 1, 2000 50. ^ Lutz Polley, Quantum-mechanical probability from the symmetries of two-state systems 51. ^ Vaidman, L. "Probability in the Many-Worlds Interpretation of Quantum Mechanics." In: Ben-Menahem, Y., & Hemmo, M. (eds), The Probable and the Improbable: Understanding Probability in Physics, Essays in Memory of Itamar Pitowsky. Springer. 52. ^ Sebens, C.T. and Carroll, S.M., Self-Locating Uncertainty and the Origin of Probability in Everettian Quantum Mechanics. 53. ^ a b Mark A. Rubin, Locality in the Everett Interpretation of Heisenberg-Picture Quantum Mechanics, Foundations of Physics Letters, 14, (2001) , pp. 301–322, arXiv:quant-ph/0103079 54. ^ Paul C.W. Davies, Other Worlds, chapters 8 & 9 The Anthropic Principle & Is the Universe an accident?, (1980) ISBN 0-460-04400-1 55. ^ Paul C.W. Davies, The Accidental Universe, (1982) ISBN 0-521-28692-1 56. ^ a b Everett FAQ "Does many-worlds violate Ockham's Razor?" 57. ^ a b Vaidman, Lev. "Many-Worlds Interpretation of Quantum Mechanics". The Stanford Encyclopedia of Philosophy.  58. ^ Deutsch, D., (1986) 'Three experimental implications of the Everett interpretation', in R. Penrose and C.J. 
Isham (eds.), Quantum Concepts of Space and Time, Oxford: The Clarendon Press, pp. 204–214. 59. ^ Page, D., (2000) 'Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?' 60. ^ a b c d e f g Plaga, R. (1997). "On a possibility to find experimental evidence for the many-worlds interpretation of quantum mechanics". Foundations of Physics. 27: 559–577. arXiv:quant-ph/9510007Freely accessible. Bibcode:1997FoPh...27..559P. doi:10.1007/BF02550677.  61. ^ Page, Don N. (2000). "Can Quantum Cosmology Give Observational Consequences of Many-Worlds Quantum Theory?". arXiv:gr-qc/0001001Freely accessible. doi:10.1063/1.1301589.  63. ^ Penrose, R. The Road to Reality, §21.11 64. ^ Tegmark, Max The Interpretation of Quantum Mechanics: Many Worlds or Many Words?, 1998. To quote: "What Everett does NOT postulate: "At certain magic instances, the world undergoes some sort of metaphysical 'split' into two branches that subsequently never interact." This is not only a misrepresentation of the MWI, but also inconsistent with the Everett postulate, since the subsequent time evolution could in principle make the two terms...interfere. According to the MWI, there is, was and always will be only one wavefunction, and only decoherence calculations, not postulates, can tell us when it is a good approximation to treat two terms as non-interacting." 65. ^ a b c Paul C.W. Davies, J.R. Brown, The Ghost in the Atom (1986) ISBN 0-521-31316-3, pp. 34–38: "The Many-Universes Interpretation", pp 83–105 for David Deutsch's test of MWI and reversible quantum memories 66. ^ Christoph Simon, 2009, Conscious observers clarify many worlds 67. ^ Joseph Gerver, The past as backward movies of the future, Physics Today, letters followup, 24(4), (April 1971), pp 46–7 68. ^ Bryce Seligman DeWitt, Physics Today,letters followup, 24(4), (April 1971), pp 43 69. ^ Arnold Neumaier's comments on the Everett FAQ, 1999 & 2003 70. 
^ Everett [1956] 1973, "Theory of the Universal Wavefunction", chapter V, section 4 "Approximate Measurements", pp. 100–103 (e) 71. ^ Stapp, Henry (2002). "The basis problem in many-world theories" (PDF). Canadian Journal of Physics. 80: 1043–1052. arXiv:quant-ph/0110148Freely accessible. Bibcode:2002CaJPh..80.1043S. doi:10.1139/p02-068.  72. ^ Brown, Harvey R; Wallace, David (2005). "Solving the measurement problem: de Broglie–Bohm loses out to Everett" (PDF). Foundations of Physics. 35: 517–540. arXiv:quant-ph/0403094Freely accessible. Bibcode:2005FoPh...35..517B. doi:10.1007/s10701-004-2009-3.  73. ^ Mark A Rubin (2005), There Is No Basis Ambiguity in Everett Quantum Mechanics, Foundations of Physics Letters, Volume 17, Number 4 / August, 2004, pp 323–341 74. ^ a b Penrose, Roger (August 1991). "Roger Penrose Looks Beyond the Classic-Quantum Dichotomy". Sciencewatch. Retrieved 2007-10-21.  75. ^ Everett FAQ "Does many-worlds violate conservation of energy?" 76. ^ Everett FAQ "How do probabilities emerge within many-worlds?" 77. ^ Everett FAQ "When does Schrodinger's cat split?" 78. ^ Tom Radcliffe (June 7, 2015). "Many Interacting Worlds". Retrieved May 24, 2016.  79. ^ James Schombert (2015), Cosmology lecture 8, retrieved May 24, 2016  80. ^ Jeffrey A. Barrett, The Quantum Mechanics of Minds and Worlds, Oxford University Press, 1999. According to Barrett (loc. cit. Chapter 6) "There are many many-worlds interpretations." 81. ^ Barrett, Jeffrey A. (2010). Zalta, Edward N., ed. "Everett's Relative-State Formulation of Quantum Mechanics" (Fall 2010 ed.). The Stanford Encyclopedia of Philosophy.  Again, according to Barrett "It is... unclear precisely how this was supposed to work." 82. ^ Aldhous, Peter (2007-11-24). "Parallel lives can never touch". New Scientist (2631). Retrieved 2007-11-21.  83. ^ Eugene Shikhovtsev's Biography of Everett, in particular see "Keith Lynch remembers 1979–1980" 84. 
^ a b David Deutsch, The Fabric of Reality: The Science of Parallel Universes And Its Implications, Penguin Books (1998), ISBN 0-14-027541-X 85. ^ Deutsch, David (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer". Proceedings of the Royal Society of London A. 400: 97–117. Bibcode:1985RSPSA.400...97D. doi:10.1098/rspa.1985.0070.  86. ^ A response to Bryce DeWitt[dead link], Martin Gardner, May 2002 87. ^ Award winning 1995 Channel 4 documentary "Reality on the rocks: Beyond our Ken" [2] where, in response to Ken Campbell's question "all these trillions of Universes of the Multiverse, are they as real as this one seems to be to me?" Hawking states, "Yes.... According to Feynman's idea, every possible history (of Ken) is equally real." 88. ^ Gardner, Martin (2003). Are universes thicker than blackberries?. W.W. Norton. p. 10. ISBN 978-0-393-05742-3.  89. ^ Ferris, Timothy (1997). The Whole Shebang. Simon & Schuster. pp. 345. ISBN 978-0-684-81020-1.  90. ^ Hawking, Stephen; Roger Penrose (1996). The Nature of Space and Time. Princeton University Press. pp. 121. ISBN 978-0-691-03791-2.  91. ^ Elvridge., Jim (2008-01-02). The Universe – Solved!. pp. 35–36. ISBN 978-1-4243-3626-5. OCLC 247614399. 58% believed that the Many Worlds Interpretation (MWI) was true, including Stephen Hawking and Nobel Laureates Murray Gell-Mann and Richard Feynman  92. ^ Bruce., Alexandra. "How does reality work?". Beyond the bleep : the definitive unauthorized guide to What the bleep do we know!?. p. 33. ISBN 978-1-932857-22-1. [the poll was] published in the French periodical Sciences et Avenir in January 1998  93. ^ Stenger, V.J. (1995). The Unconscious Quantum: Metaphysics in Modern Physics and Cosmology. Prometheus Books. p. 176. ISBN 978-1-57392-022-3. LCCN lc95032599. 
Gell-Mann and collaborator James Hartle, along with a score of others, have been working to develop a more palatable interpretation of quantum mechanics that is free of the problems that plague all the interpretations we have considered so far. This new interpretation is called, in its various incarnations, post-Everett quantum mechanics, alternate histories, consistent histories, or decoherent histories. I will not be overly concerned with the detailed differences between these characterizations and will use the terms more or less interchangeably.
How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival [Excerpt]

This book excerpt traces the history of quantum information theory and the colorful and famous physicists who tried to figure out "spooky action at a distance."

Editor's Note: Reprinted from How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival by David Kaiser. Copyright (c) 2011 by David Kaiser. Used with permission of the publisher, W.W. Norton & Company, Inc.

[from Chapter 2, pp. 25-38:]

The iconoclastic Irish physicist John S. Bell had long nursed a private disquietude with quantum mechanics. His physics teachers—first at Queen's University in his native Belfast during the late 1940s, and later at Birmingham University, where he pursued doctoral work in the mid-1950s—had shunned matters of interpretation. The "ask no questions" attitude frustrated Bell, who remained unconvinced that Niels Bohr had really vanquished the last of Einstein's critiques long ago and that there was nothing left to worry about. At one point in his undergraduate studies, his red shock of hair blazing, he even engaged in a shouting match with a beleaguered professor, calling him "dishonest" for trying to paper over genuine mysteries in the foundations, such as how to interpret the uncertainty principle. Certainly, Bell would grant, quantum mechanics worked impeccably "for all practical purposes," a phrase he found himself using so often that he coined the acronym, "FAPP." But wasn't there more to physics than FAPP? At the end of the day, after all the wavefunctions had been calculated and probabilities plotted, shouldn't quantum mechanics have something coherent to say about nature? In the years following his impetuous shouting matches, Bell tried to keep these doubts to himself.
At the tender age of twenty-one he realized that if he continued to indulge these philosophical speculations, they might well scuttle his physics career before it could even begin. He dove into mainstream topics, working on nuclear and particle physics at Harwell, Britain's civilian atomic energy research center. Still, his mind continued to wander. He wondered whether there were some way to push beyond the probabilities offered by quantum theory, to account for motion in the atomic realm more like the way Newton's physics treated the motion of everyday objects. In Newton's physics, the behavior of an apple or a planet was completely determined by its initial state—variables like position (where it was) and momentum (where it was going)—and the forces acting upon it; no probabilities in sight. Bell wondered whether there might exist some set of variables that could be added to the quantum-mechanical description to make it more like Newton's system, even if some of those new variables remained hidden from view in any given experiment. Bell avidly read a popular account of quantum theory by one of its chief architects, Max Born's Natural Philosophy of Cause and Chance (1949), in which he learned that some of Born's contemporaries had likewise tried to invent such "hidden variables" schemes back in the late 1920s. But Bell also read in Born's book that another great of the interwar generation, the Hungarian mathematician and physicist John von Neumann, had published a proof as early as 1932 demonstrating that hidden variables could not be made compatible with quantum mechanics. Bell, who could not read German, did not dig up von Neumann's recondite proof. The say-so of a leader (and soon-to-be Nobel laureate) like Born seemed like reason enough to drop the idea. Imagine Bell's surprise, therefore, when a year or two later he read a pair of articles in the Physical Review by the American physicist David Bohm. 
Bohm had submitted the papers from his teaching post at Princeton University in July 1951; by the time they appeared in print six months later, he had landed in São Paulo, Brazil, following his hounding by the House Un-American Activities Committee. Bohm had been a graduate student under J. Robert Oppenheimer at Berkeley in the late 1930s and early 1940s. Along with several like-minded friends, he had participated in free-wheeling discussion groups about politics, worldly affairs, and local issues like whether workers at the university's laboratory should be unionized. He even joined the local branch of the Communist Party out of curiosity, but he found the discussions so boring and ineffectual that he quit a short time later. Such discussions might have seemed innocuous during ordinary times, but investigators from the Military Intelligence Division thought otherwise once the United States entered World War II, and Bohm and his discussion buddies started working on the earliest phases of the Manhattan Project to build an atomic bomb. Military intelligence officers kept the discussion groups under top-secret surveillance, and in the investigators' eyes the line between curious discussion group and Communist cell tended to blur. When later called to testify before HUAC, Bohm pleaded the Fifth Amendment rather than name names. Over the physics department's objections, Princeton's administration let his tenure-track contract lapse rather than reappoint him. At the center of a whirling media spectacle, Bohm found all other domestic options closed off. Reluctantly, he decamped for Brazil. In the midst of the Sturm und Drang, Bohm crafted his own hidden variables interpretation of quantum mechanics. As Bell later reminisced, he had "seen the impossible done" in these papers by Bohm. Starting from the usual Schrödinger equation, but rewriting it in a novel way, Bohm demonstrated that the formalism need not be interpreted only in terms of probabilities.
An electron, for example, might behave much like a bullet or billiard ball, following a path through space and time with well-defined values of position and momentum every step of the way. Given the electron's initial position and momentum and the forces acting on it, its future behavior would be fully determined, just like the case of the trusty billiard ball—although Bohm did have to introduce a new "quantum potential" or force field that had no analogue in classical physics. In Bohm's model, the quantum weirdness that had so captivated Bohr, Heisenberg, and the rest—and that had so upset young Bell, when parroted by his teachers—arose because certain variables, such as the electron's initial position, could never be specified precisely: efforts to measure the initial position would inevitably disturb the system. Thus physicists could not glean sufficient knowledge of all the relevant variables required to calculate a quantum object's path. The troubling probabilities of quantum mechanics, Bohm posited, sprang from averaging over the real-but-hidden variables. Where Bohr and his acolytes had claimed that electrons simply did not possess complete sets of definite properties, Bohm argued that they did—but, as a practical matter, some remained hidden from view. Bohm's papers fired Bell's imagination. Soon after discovering them, Bell gave a talk on Bohm's papers to the Theory Division at Harwell. Most of his listeners sat in stunned (or perhaps just bored) silence: why was this young physicist wasting their time on such philosophical drivel? Didn't he have any real work to do? One member of the audience, however, grew animated: Austrian émigré Franz Mandl. Mandl, who knew both German and von Neumann's classic study, interrupted several times; the two continued their intense arguments well after the seminar had ended. Together they began to reexamine von Neumann's no-hidden-variables proof, on and off when time allowed, until they each went their separate ways. 
Mandl left Harwell in 1958; Bell, dissatisfied with the direction in which the laboratory seemed to be heading, left two years later. Bell and his wife Mary, also a physicist, moved to CERN, Europe's multinational high-energy physics laboratory that had recently been established in Geneva. Once again he pursued cutting-edge research in particle physics. And once again, despite his best efforts, he found himself pulled to his hobby: thinking hard about the foundations of quantum mechanics. Once settled in Geneva, he acquired a new sparring partner in Josef Jauch. Like Mandl, Jauch had grown up in the Continental tradition and was well versed in the finer points of Einstein's, Bohr's, and von Neumann's work. In fact, when Bell arrived in town Jauch was busy trying to strengthen von Neumann's proof that hidden-variables theories were irreconcilable with the successful predictions of quantum mechanics. To Bell, Jauch's intervention was like waving a red flag in front of a bull: it only intensified his resolve to demonstrate that hidden variables had not yet been ruled out. Spurred by these discussions, Bell wrote a review article on the topic of hidden variables, in which he isolated a logical flaw in von Neumann's famous proof. At the close of the paper, he noted that "the first ideas of this paper were conceived in 1952"—fourteen years before the paper was published—and thanked Mandl and Jauch for all of the "intensive discussion" they had shared over that long period. Still Bell kept pushing, wondering whether a certain type of hidden variables theory, distinct from Bohm's version, might be compatible with ordinary quantum mechanics. His thoughts returned to the famous thought experiment introduced by Einstein and his junior colleagues Boris Podolsky and Nathan Rosen in 1935, known from the start by the authors' initials, "EPR." 
Einstein and company had argued that quantum mechanics must be incomplete: at least in some situations, definite values for pairs of variables could be determined at the same time, even though quantum mechanics had no way to account for or represent such values. The EPR authors described a source, such as a radioactive nucleus, that shot out pairs of particles with the same speed but in opposite directions. Call the left-moving particle, "A," and the right-moving particle, "B." A physicist could measure A's position at a given moment, and thereby deduce the value of B's position. Meanwhile, the physicist could measure B's momentum at that same moment, thus capturing knowledge of B's momentum and simultaneous position to any desired accuracy. Yet Heisenberg's uncertainty principle dictated that precise values for certain pairs of variables, such as position and momentum, could never be known simultaneously. Fundamental to Einstein and company's reasoning was that quantum objects carried with them—on their backs, as it were—complete sets of definite properties at all times. Think again of that trusty billiard ball: it has a definite value of position and a definite value of momentum at any given moment, even if we choose to measure only one of those properties at a time. Einstein assumed the same must be true of electrons, photons, and the rest of the furniture of the microworld. Bohr, in a hurried response to the EPR paper, argued that it was wrong to assume that particle B had a real value for position all along, prior to any effort to measure it. Quantum objects, in his view, simply did not possess sharp values for all properties at all times. Such values emerged during the act of measurement, and even Einstein had agreed that no device could directly measure a particle's position and momentum at the same time. Most physicists seemed content with Bohr's riposte—or, more likely, they were simply relieved that someone else had responded to Einstein's deep challenge. 
Bohr's response never satisfied Einstein, however; nor did it satisfy John Bell. Bell realized that the intuition behind Einstein's famous thought experiment—the reason Einstein considered it so damning for quantum mechanics—concerned "locality." To Einstein, it was axiomatic that something that happens in one region of space and time should not be able to affect something happening in a distant region—more distant, say, than light could have traveled in the intervening time. As the EPR authors put it, "since at the time of measurement the two systems [particles A and B] no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system." Yet Bohr's response suggested something else entirely: the decision to conduct a measurement on particle A (either position or momentum) would instantaneously change the properties ascribed to the far-away particle B. Measure particle A's position, for example, and—bam!—particle B would be in a state of well-defined position. Or measure particle A's momentum, and—zap!—particle B would be in a state of well-defined momentum. Late in life, Bohr's line still rankled Einstein. "My instinct for physics bristles at this," Einstein wrote to a friend in March 1948. "Spooky actions at a distance," he huffed. Fresh from his wrangles with Jauch, Bell returned to EPR's thought experiment. He wondered whether such "spooky actions at a distance" were endemic to quantum mechanics, or just one possible interpretation among many. Might some kind of hidden variable approach reproduce all the quantitative predictions of quantum theory, while still satisfying Einstein's (and Bell's) intuition about locality? He focused on a variation of EPR's set-up, introduced by David Bohm in his 1951 textbook on quantum mechanics. Bohm had suggested swapping the values of the particles' spins along the x- and y-axes for position and momentum. 
"Spin" is a curious property that many quantum particles possess; its discovery in the mid-1920s added a cornerstone to the emerging edifice of quantum mechanics. Quantum spin is a discrete amount of angular momentum—that is, the tendency to rotate around a given direction in space. Of course many large-scale objects possess angular momentum, too: think of the planet Earth spinning around its axis to change night into day. Spin in the microworld, however, has a few quirks. For one thing, whereas large objects like the Earth can spin, in principle, at any rate whatsoever, quantum particles possess fixed amounts of it: either no spin at all, or one-half unit, or one whole unit, or three-halves units, and so on. The units are determined by a universal constant of nature known as Planck's constant, ubiquitous throughout the quantum realm. The particles that make up ordinary matter, such as electrons, protons, and neutrons, each possess one-half unit of spin; photons, or quanta of light, possess one whole unit of spin. In a further break from ordinary angular momentum, quantum spin can only be oriented in certain ways. A spin one-half particle, for example, can exist in only one of two states: either spin "up" or spin "down" with respect to a given direction in space. The two states become manifest when a stream of particles passes through a magnetic field: spin-up particles will be deflected upward, away from their previous direction of flight, while spin-down particles will be deflected downward. Choose some direction along which to align the magnets—say, the z-axis—and the spin of any electron will only ever be found to be up or down; no electron will ever be measured as three-quarters "up" along that direction. Now rotate the magnets, so that the magnetic field is pointing along some different direction. Send a new batch of electrons through; once again you will only find spin up or spin down along that new direction.
For spin one-half particles like electrons, the spin along a given direction is always either +1 (up) or -1 (down), nothing in between. (Fig. 2.1.) No matter which way the magnets are aligned, moreover, one-half of the incoming electrons will be deflected upward and one-half downward. In fact, you could replace the collecting screen (such as a photographic plate) downstream of the magnets with two Geiger counters, positioned where the spin-up and spin-down particles get deflected. Then tune down the intensity of the source so that only one particle gets shot out at a time. For any given run, only one Geiger counter will click: either the upper one (indicating passage of a spin-up particle) or the lower one (indicating spin-down). Each particle has a 50-50 chance of being measured as spin-up or spin-down; the sequence of clicks would be a random series of +1's (upper counter) and -1's (lower counter), averaging out over many runs to an equal number of clicks from each detector. Neither quantum theory nor any other scheme has yet produced a successful means of predicting in advance whether a given particle will be measured as spin-up or spin-down; only the probabilities for a large number of runs can be computed. Bell realized that Bohm's variation of the EPR thought experiment, involving particles' spins, offered two main advantages over EPR's original version. First, the measurements always boiled down to either a +1 or a -1; no fuzzy continuum of values to worry about, as there would be when measuring position or momentum. Second, physicists had accumulated decades of experience building real machines that could manipulate and measure particles' spin; as far as thought experiments went, this one could be grounded on some well-earned confidence. And so Bell began to analyze the spin-based EPR arrangement. 
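The single-detector statistics described above — each run yielding a +1 or a -1 click at random, with the two Geiger counters firing equally often in the long run — can be mimicked in a few lines of Python. This is an illustrative sketch, not anything from the excerpt; the sample size and seed are arbitrary choices:

```python
import random

random.seed(0)  # fixed seed so the illustrative run is reproducible

# Each electron sent through the magnets triggers exactly one Geiger
# counter: +1 for the upper (spin up), -1 for the lower (spin down).
# No theory predicts an individual click; only the 50-50 odds.
clicks = [random.choice([+1, -1]) for _ in range(100_000)]

average = sum(clicks) / len(clicks)
fraction_up = clicks.count(+1) / len(clicks)

# Over many runs the counters fire equally often and the sequence of
# +1's and -1's averages out to (nearly) zero.
assert abs(average) < 0.02
assert abs(fraction_up - 0.5) < 0.01
```

The point of the sketch is only what the excerpt states: individual outcomes are unpredictable, while the aggregate statistics are sharply determined.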
Because the particles emerged in a special way—spat out from a source that had zero spin before and after they were disgorged—the total spin of the two particles together likewise had to be zero. When measured along the same direction, therefore, their spins should always show perfect correlation: if A's spin were up then B's must be down, and vice versa. Back in the early days of quantum mechanics, Erwin Schrödinger had termed such perfect correlations "entanglement." Bell demonstrated that a hidden-variables model that satisfied locality—in which the properties of A remained unaffected by what measurements were conducted on B—could easily reproduce the perfect correlation when A's and B's spins were measured along the same direction. At root, this meant imagining that each particle carried with it a definite value of spin along any given direction, even if most of those values remained hidden from view. The spin values were considered to be properties of the particles themselves; they existed independent of and prior to any effort to measure them, just as Einstein would have wished. Next Bell considered other possible arrangements. One could choose to measure a particle's spin along any direction: the z-axis, the y-axis, or any angle in between. All one had to do was rotate the magnets between which the particle passed.  What if one measured A's spin along the z-axis and B's spin along some other direction? (Fig. 2.2.) Bell homed in on the expected correlations of spin measurements when shooting pairs of particles through the device, while the detectors on either side were oriented at various angles. He considered detectors that had two settings, or directions along which spin could be measured. Using only a few lines of algebra, Bell proved that no local hidden variables theory could ever reproduce the same degree of correlations as one varied the angles between detectors. The result has come to be known as "Bell's theorem." 
Simply assuming that each particle carried a full set of definite values on its own, prior to measurement—even if most of those values remained hidden from view—necessarily clashed with quantum theory. Nonlocality was indeed endemic to quantum mechanics, Bell had shown: somehow, the outcome of the measurement on particle B depended on the measured outcome on particle A, even if the two particles were separated by huge distances at the time those measurements were made. Any effort to treat the particles (or measurements made upon them) as independent, subject only to local influences, necessarily led to different predictions than those of quantum mechanics. Here was what Bell had been groping for, on and off since his student days: some quantitative means of distinguishing Bohr's interpretation of quantum mechanics from other coherent, self-consistent possibilities. The problem—entanglement versus locality—was amenable to experimental test. In his bones he hoped locality would win. In the years since Bell formulated his theorem, many physicists (Bell included) have tried to articulate what the violation of his inequality would mean, at a deep level, about the structure of the microworld. Most prosaically, entanglement suggests that on the smallest scales of matter, the whole is more than the sum of its parts. Put another way: one could know everything there is to know about a quantum system (particles A + B), and yet know nothing definite about either piece separately. As one expert in the field has written, entangled quantum systems are not even "divisible by thought": our natural inclination to analyze systems into subsystems, and to build up knowledge of the whole from careful study of its parts, grinds to a halt in the quantum domain. Physicists have gone to heroic lengths to translate quantum nonlocality into everyday terms. 
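The clash Bell uncovered is quantitative, and it takes only a few lines to exhibit. In the sketch below (an illustration, not taken from the excerpt), the quantum prediction for singlet-state spin correlations — minus the cosine of the angle between the two detector settings — is combined, at the standard choice of angles, into the quantity S that Clauser, Horne, Shimony, and Holt later made the workhorse of experiments; any local hidden-variables model obeys |S| ≤ 2, while quantum mechanics reaches 2√2:

```python
import math

def correlation(angle_a, angle_b):
    # Quantum prediction for the singlet state: spin measurements along
    # directions a and b correlate as minus the cosine of the angle
    # between the detector settings.
    return -math.cos(angle_a - angle_b)

# Two settings per side (in radians); these angles maximize the CHSH
# combination S for the singlet state.
a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(correlation(a, b) - correlation(a, b_prime)
        + correlation(a_prime, b) + correlation(a_prime, b_prime))

# Local hidden variables: |S| <= 2.  Quantum mechanics: 2*sqrt(2).
print(f"S = {S:.3f}, local bound = 2")  # prints: S = 2.828, local bound = 2
```

No assignment of pre-existing ±1 values to each particle can reproduce S > 2, which is exactly the point of Bell's few lines of algebra.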
The literature is now full of stories about boxes that flash with red and green lights; disheveled physicists who stroll down the street with mismatched socks; clever Sherlock Holmes-inspired scenarios involving quantum robbers; even an elaborate tale of a baker, two long conveyor belts, and pairs of soufflés that may or may not rise. My favorite comes from a "quantum-mechanical engineer" at MIT, Seth Lloyd. Imagine twins, Lloyd instructs us, separated a great distance apart. One steps into a bar in Cambridge, Massachusetts just as her brother steps into a bar in Cambridge, England. Imagine further (and this may be the most difficult part) that neither twin has a cell phone or any other device with which to communicate back and forth. No matter what each bartender asks them, they will give opposite answers. "Beer or whiskey?"  The Massachusetts twin might respond either way, with equal likelihood; but no matter which choice she makes, her twin brother an ocean away will respond with the opposite choice. (It's not that either twin has a decided preference; after many trips to their respective bars, they each wind up ordering beer and whiskey equally often.) The bartenders could equally well have asked, "Bottled beer or draft?" or "Red wine or white?" Ask any question—even a question that no one had decided to ask until long after the twins had traveled far, far away from each other—and you will always receive polar opposite responses. Somehow one twin always "knows" how to answer, even though no information could have traveled between them, in just such a way as to ensure the long-distance correlation. [from Chapter 3, pp. 43-48:] John Clauser sat through his courses on quantum mechanics as a graduate student at Columbia University in the mid-1960s, wondering when they would tackle the big questions. Like John Bell, Clauser quickly learned to keep his mouth shut and pursue his interests on the side. 
He buried himself in the library, poring over the EPR paper and Bohm's articles on hidden variables. Then in 1967 he stumbled upon Bell's paper in Physics Physique Fizika. The journal's strange title had caught his eye, and while lazily leafing through the first bound volume he happened to notice Bell's article. Clauser, a budding experimentalist, realized that Bell's theorem could be amenable to real-world tests in a laboratory. Excited, he told his thesis advisor about his find, only to be rebuffed for wasting their time on such philosophical questions. Soon Clauser would be kicked out of some of the finest offices in physics, from Robert Serber's at Columbia to Richard Feynman's at Caltech. Bowing to these pressures, Clauser pursued a dissertation on a more acceptable topic—radio astronomy and astrophysics—but in the back of his mind he continued to puzzle through how Bell's inequality might be put to the test. Before launching into an experiment himself, Clauser wrote to John Bell and David Bohm to double-check that he had not overlooked any prior experiments on Bell's theorem and quantum nonlocality. Both respondents wrote back immediately, thrilled at the notion that an honest-to-goodness experimentalist harbored any interest in the topic at all. As Bell later recalled, Clauser's letter from February 1969 was the first direct response Bell had received from any physicist regarding Bell's theorem—more than four years after Bell's article had been published. Bell encouraged the young experimenter: if by chance Clauser did manage to measure a deviation from the predictions of quantum theory, that would "shake the world!" Encouraged by Bell's and Bohm's responses, Clauser realized that the first step would be to translate Bell's pristine algebra into expressions that might make contact with a real experiment. Bell had assumed for simplicity that detectors would have infinitesimally narrow windows or apertures through which particles could pass. 
But as Clauser knew well from his radio-astronomy work, apertures in the real world are always wider than a mathematical pinprick. Particles from a range of directions would be able to enter the detectors at either of their settings, a or a'. Same for detector efficiencies. Bell had assumed that the spins of every pair of particles would be measured, every time a new pair was shot out from the source. But no laboratory detectors were ever 100% efficient; sometimes one or both particles of a pair would simply escape detection altogether. All these complications and more had to be tackled on paper, long before one bothered building a machine to test Bell's work. Clauser dug in and submitted a brief abstract on this work to the Bulletin of the American Physical Society, in anticipation of the Society's upcoming conference. The abstract appeared in print right before the spring 1969 meeting. And then his telephone rang. Two hundred miles away, Abner Shimony had been chasing down the same series of thoughts. Shimony's unusual training—he held Ph.D.s in both philosophy and in physics, and taught in both departments at Boston University—primed him for a subject like Bell's theorem in a way that almost none of his American physics colleagues shared. He had already published several articles on other philosophical aspects of quantum theory, beginning in the early 1960s. Shimony had been tipped off about Bell's theorem back in 1964, when a colleague at nearby Brandeis University, where Bell had written up his paper, sent Shimony a preprint of Bell's work. Shimony was hardly won over right away. His first reaction: "Here's another kooky paper that's come out of the blue," as he put it recently. "I'd never heard of Bell. And it was badly typed, and it was on the old multigraph paper, with the blue ink that smeared. There were some arithmetical errors. I said, ‘What's going on here?'" Alternately bemused, puzzled, and intrigued, he read it over again and again.
"The more I read it, the more brilliant it seemed. And I realized, ‘This is no kooky paper. This is something very great.'" He began scouring the literature to see if some previous experiments, conducted for different purposes, might already have inadvertently put Bell's theorem to the test. After intensive digging—he came to call this work "quantum archaeology"—he realized that, despite a few near misses, no existing data would do the trick. No experimentalist himself, he "put the whole thing on ice" until he could find a suitable partner. A few years went by before a graduate student came knocking on Shimony's door. The student had just completed his qualifying exams and was scouting for a dissertation topic. Together they decided to mount a brand-new experiment to test Bell's theorem. Several months into their preparations, still far from a working experiment, Shimony spied Clauser's abstract in the Bulletin, and reached for the phone. They decided to meet at the upcoming American Physical Society meeting in Washington, D.C., where Clauser was scheduled to talk about his proposed experiment. There they hashed out a plan to join forces. A joint paper, Shimony felt, would no doubt be stronger than either of their separate efforts alone would be—the whole would be greater than the sum of its parts—and, on top of that, "it was the civilized way to handle the priority question." And so began a fruitful collaboration and a set of enduring friendships. Clauser completed his dissertation not long after their meeting. He had some down time between handing in his thesis and the formal thesis defense, so he went up to Boston to work with Shimony and the (now two) graduate students whom Shimony had corralled onto the project. Together they derived a variation on Bell's theme: a new expression, more amenable to direct comparisons with laboratory data than Bell's had been. 
(Their equations concerned S, the particular combination of spin measurements examined in the previous chapter.) Even as his research began to hum, Clauser's employment prospects grew dim. He graduated just as the chasm between demand and supply for American physicists opened wide. He further hindered his chances by giving a few job talks on the subject of Bell's theorem. Clauser would later write with great passion that in those years, physicists who showed any interest in the foundations of quantum mechanics labored under a "stigma," as powerful and keenly felt as any wars of religion or McCarthy-like political purges. Finally Berkeley's Charles Townes offered Clauser a postdoctoral position in astrophysics at the Lawrence Berkeley Laboratory, on the strength of Clauser's dissertation on radio astronomy. Clauser, an avid sailor, planned to sail his boat from New York around the tip of Florida and into Galveston, Texas; then he would load the boat onto a truck and drive it to Los Angeles, before setting sail up the California coast to the San Francisco Bay Area. (A hurricane scuttled his plans; he and his boat got held up in Florida, and he wound up having to drive it clear across the country instead.) All the while, Clauser and Shimony hammered out their first joint article on Bell's theorem: each time Clauser sailed into a port along the East Coast, he would find a telephone and check in with Shimony, who had been working on a draft of their paper. Then Shimony would mail copies of the edited draft to every marina in the next city on Clauser's itinerary, "some of which I picked up," Clauser explained recently, "and some of which are probably still waiting there for all I know." Back and forth their edits flew, and by the time Clauser arrived in Berkeley in early August 1969, they had a draft ready to submit to the journal. Things were slow at the Lawrence Berkeley Laboratory compared to the boom years, and budgets had already begun to shrink.
Clauser managed to convince his faculty sponsor, Townes, that Bell's theorem might merit serious experimental study. Perhaps Townes, an inventor of the laser, was more receptive to Clauser's pitch than the others because Townes, too, had been told by the heavyweights of his era that his own novel idea flew in the face of quantum mechanics. Townes allowed Clauser to devote half his time to his pet project, not least because, as Clauser made clear, the experiments he envisioned would cost next to nothing. With the green light from Townes, Clauser began to scavenge spare parts from storage closets around the Berkeley lab—"I've gotten pretty good at dumpster diving," as he put it recently—and soon he had duct-taped together a contraption capable of measuring the correlated polarizations of pairs of photons. (Photons, like electrons, can exist in only one of two states; polarization, in this case, functions just like spin as far as Bell-type correlations are concerned.) In 1972, with the help of a graduate student loaned to him at Townes's urging, Clauser published the first experimental results on Bell's theorem. (Fig. 3.1.) Despite Clauser's private hope that quantum mechanics would be toppled, he and his student found the quantum-mechanical predictions to be spot on. In the laboratory, much as on theorists' scratch pads, the microworld really did seem to be an entangled nest of nonlocality. He and his student had managed to conduct the world's first experimental test of Bell's theorem—today such a mainstay of frontier physics—and they demonstrated, with cold, hard data, that measurements of particle A really were more strongly correlated with measurements of particle B than any local mechanisms could accommodate. They had produced exactly the "spooky action at a distance" that Einstein had found so upsetting. Still, Clauser could find few physicists who seemed to care. 
He and his student published their results in the prestigious Physical Review Letters, and yet the year following their paper, global citations to Bell's theorem—still just a trickle—dropped by more than half. The world-class work did little to improve Clauser's job prospects, either. One department chair to whom Clauser had applied for a job doubted that Clauser's work on Bell's theorem counted as "real physics."
Envelope (waves) In physics and engineering, the envelope of an oscillating signal is a smooth curve outlining its extremes.[1] The envelope thus generalizes the concept of a constant amplitude into an instantaneous amplitude. The figure illustrates a modulated sine wave varying between an upper envelope and a lower envelope. The envelope function may be a function of time, space, angle, or indeed of any variable. Envelope for a modulated sine wave. Example: beating waves A modulated wave resulting from adding two sine waves of identical amplitude and nearly identical wavelength and frequency. A common situation resulting in an envelope function in both space x and time t is the superposition of two waves of almost the same wavelength and frequency:[2]

F(x, t) = sin[2π(x/(λ − Δλ) − (f + Δf) t)] + sin[2π(x/(λ + Δλ) − (f − Δf) t)] ≈ 2 cos[2π(x/λmod − Δf t)] sin[2π(x/λ − f t)],

which uses the trigonometric formula for the addition of two sine waves, and the approximation Δλ ≪ λ. Here the modulation wavelength λmod is given by:[2][3]

λmod = λ²/Δλ.

The modulation wavelength is double that of the envelope itself because each half-wavelength of the modulating cosine wave governs both positive and negative values of the modulated sine wave. Likewise the beat frequency is that of the envelope, twice that of the modulating wave, or 2Δf.[4] If this wave is a sound wave, the ear hears the frequency associated with f and the amplitude of this sound varies with the beat frequency.[4] Phase and group velocity The arguments of the sinusoids above, apart from a factor 2π, are:

ξC = x/λ − f t,  ξE = x/λmod − Δf t,

with subscripts C and E referring to the carrier and the envelope. The same amplitude F of the wave results from the same values of ξC and ξE, each of which may itself return to the same value over different but properly related choices of x and t. 
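The beating-wave superposition can be checked numerically. The sketch below uses my own variable names (lam, dlam, f, df for λ, Δλ, f, Δf) and the exact sum-to-product identity sin A + sin B = 2 sin((A+B)/2) cos((A−B)/2), so the carrier-times-envelope form matches the direct sum to machine precision, without the Δλ ≪ λ approximation:

```python
import numpy as np

# Two sine waves of equal amplitude with nearly equal wavelength and frequency,
# observed at a fixed point x as time varies.
lam, dlam = 1.0, 0.05          # wavelength and small offset
f, df = 10.0, 0.5              # frequency and small offset
x = 0.25
t = np.linspace(0.0, 4.0, 4001)

A = 2 * np.pi * (x / (lam - dlam) - (f + df) * t)
B = 2 * np.pi * (x / (lam + dlam) - (f - df) * t)

F = np.sin(A) + np.sin(B)                    # direct superposition
product = 2 * np.sin((A + B) / 2) * np.cos((A - B) / 2)   # carrier * modulator
envelope = 2 * np.abs(np.cos((A - B) / 2))   # smooth outline of the extremes
```

The modulated signal never exceeds its envelope, and the two forms agree exactly.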
This invariance means that one can trace these waveforms in space to find the speed of a position of fixed amplitude as it propagates in time; for the argument of the carrier wave to stay the same, the condition is:

Δx/λ − f Δt = 0,

which shows that to keep a constant amplitude the distance Δx is related to the time interval Δt by the so-called phase velocity vp:

vp = Δx/Δt = λ f.

On the other hand, the same considerations show the envelope propagates at the so-called group velocity vg:[5]

vg = Δx/Δt = λmod Δf = (λ²/Δλ) Δf.

A more common expression for the group velocity is obtained by introducing the wavevector k:

k = 2π/λ.

We notice that for small changes Δλ, the magnitude of the corresponding small change in wavevector, say Δk, is:

Δk = (2π/λ²) Δλ,

so the group velocity can be rewritten as:

vg = Δω/Δk,

where ω is the frequency in radians/s: ω = 2πf. In all media, frequency and wavevector are related by a dispersion relation, ω = ω(k), and the group velocity can be written:

vg = dω(k)/dk.

Dispersion relation ω = ω(k) for some waves corresponding to lattice vibrations in GaAs.[6] In a medium such as classical vacuum the dispersion relation for electromagnetic waves is:

ω = c0 k,

where c0 is the speed of light in classical vacuum. For this case, the phase and group velocities both are c0. In so-called dispersive media the dispersion relation can be a complicated function of wavevector, and the phase and group velocities are not the same. For example, for several types of waves exhibited by atomic vibrations (phonons) in GaAs, the dispersion relations are shown in the figure for various directions of wavevector k. 
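The distinction between phase and group velocity can be illustrated with a small sketch. The dispersion relations and numerical constants below are my own illustrative choices (a nondispersive ω = c0·k, and deep-water gravity waves ω = √(g·k), for which vg = vp/2), not values from the text:

```python
import numpy as np

def phase_velocity(omega, k):
    # v_p = omega / k
    return omega(k) / k

def group_velocity(omega, k, dk=1e-6):
    # v_g = d(omega)/dk, via a central finite difference
    return (omega(k + dk) - omega(k - dk)) / (2.0 * dk)

c0 = 3.0                              # stand-in constant (units arbitrary)
vacuum = lambda k: c0 * k             # nondispersive: v_p = v_g = c0
deep_water = lambda k: np.sqrt(9.8 * k)  # dispersive: v_g = v_p / 2

k = 2.0
vp_vac, vg_vac = phase_velocity(vacuum, k), group_velocity(vacuum, k)
vp_dw, vg_dw = phase_velocity(deep_water, k), group_velocity(deep_water, k)
```

For the linear relation the two velocities coincide; for the dispersive one the envelope travels at half the speed of the crests.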
In the general case, the phase and group velocities may have different directions.[7] Example: envelope function approximation Electron probabilities in the lowest two quantum states of a 160 Å GaAs quantum well in a GaAs-GaAlAs heterostructure, as calculated from envelope functions.[8] In condensed matter physics an energy eigenfunction for a mobile charge carrier in a crystal can be expressed as a Bloch wave:

ψ_{n,k}(r) = e^{ik·r} u_{n,k}(r),

where n is the index for the band (for example, conduction or valence band), r is a spatial location, and k is a wavevector. The exponential is a sinusoidally varying function corresponding to a slowly varying envelope modulating the rapidly varying part of the wavefunction u_{n,k} describing the behavior of the wavefunction close to the cores of the atoms of the lattice. The envelope is restricted to k-values within a range limited by the Brillouin zone of the crystal, and that limits how rapidly it can vary with location r. In determining the behavior of the carriers using quantum mechanics, the envelope approximation usually is used, in which the Schrödinger equation is simplified to refer only to the behavior of the envelope, and boundary conditions are applied to the envelope function directly, rather than to the complete wavefunction.[9] For example, the wavefunction of a carrier trapped near an impurity is governed by an envelope function F that governs a superposition of Bloch functions:

ψ(r) = Σ_k F(k) e^{ik·r} u_k(r),

where the Fourier components of the envelope F(k) are found from the approximate Schrödinger equation.[10] In some applications, the periodic part u_k is replaced by its value near the band edge, say k = k0, and then:[9]

ψ(r) ≈ u_{k0}(r) Σ_k F(k) e^{ik·r} = u_{k0}(r) F(r).

Example: diffraction patterns The diffraction pattern of a double slit has a single-slit envelope. Diffraction patterns from multiple slits have envelopes determined by the single-slit diffraction pattern. For a single slit the pattern is given by:[11]

I1(α) = I0 [sin(π d sin α / λ) / (π d sin α / λ)]²,

where α is the diffraction angle, d is the slit width, and λ is the wavelength. 
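The role of the single-slit pattern as an envelope of the multiple-slit pattern can be checked numerically. This is a sketch using the single-slit Fraunhofer intensity together with the q-slit grating factor as commonly written (the function names and the wavelength, slit-width, and spacing values are my own illustrative choices):

```python
import numpy as np

def single_slit(alpha, d, lam):
    # I1/I0 = [sin(pi d sin(alpha)/lam) / (pi d sin(alpha)/lam)]^2
    u = d * np.sin(alpha) / lam
    return np.sinc(u) ** 2            # np.sinc(x) = sin(pi x)/(pi x)

def multi_slit(alpha, d, g, lam, q):
    # single-slit factor times the q-slit grating factor
    gamma = np.pi * g * np.sin(alpha) / lam
    with np.errstate(divide="ignore", invalid="ignore"):
        grating = np.where(np.isclose(np.sin(gamma), 0.0),
                           float(q) ** 2,   # limiting value at principal maxima
                           (np.sin(q * gamma) / np.sin(gamma)) ** 2)
    return single_slit(alpha, d, lam) * grating

lam, d, g, q = 500e-9, 2e-6, 10e-6, 5   # example parameters (mine)
alpha = np.linspace(-0.3, 0.3, 4001)
I = multi_slit(alpha, d, g, lam, q)
env = q ** 2 * single_slit(alpha, d, lam)   # single-slit envelope
```

The q-slit intensity never rises above q² times the single-slit pattern, and touches it at the principal maxima.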
For multiple slits, the pattern is:[11]

I(α) = I1(α) [sin(q π g sin α / λ) / sin(π g sin α / λ)]²,

where q is the number of slits, and g is the grating constant. The first factor, the single-slit result I1, modulates the more rapidly varying second factor that depends upon the number of slits and their spacing. References 1. ^ C. Richard Johnson, Jr; William A. Sethares; Andrew G. Klein (2011). "Figure C.1: The envelope of a function outlines its extremes in a smooth manner". Software Receiver Design: Build Your Own Digital Communication System in Five Easy Steps. Cambridge University Press. p. 417. ISBN 978-0521189446. 2. ^ a b Blair Kinsman (2002). Wind Waves: Their Generation and Propagation on the Ocean Surface (Reprint of Prentice-Hall 1965 ed.). Courier Dover Publications. p. 186. ISBN 0486495116. 3. ^ Mark W. Denny (1993). Air and Water: The Biology and Physics of Life's Media. Princeton University Press. p. 289. ISBN 0691025185. 4. ^ a b Paul Allen Tipler; Gene Mosca (2008). Physics for Scientists and Engineers, Volume 1 (6th ed.). Macmillan. p. 538. ISBN 978-1429201247. 5. ^ Peter W. Milonni; Joseph H. Eberly (2010). "§8.3 Group velocity". Laser Physics (2nd ed.). John Wiley & Sons. p. 336. ISBN 978-0470387719. 6. ^ Peter Y. Yu; Manuel Cardona (2010). "Fig. 3.2: Phonon dispersion curves in GaAs along high-symmetry axes". Fundamentals of Semiconductors: Physics and Materials Properties (4th ed.). Springer. p. 111. ISBN 978-3642007095. 7. ^ V. Cerveny; Vlastislav Červený (2005). "§2.2.9 Relation between the phase and group velocity vectors". Seismic Ray Theory. Cambridge University Press. p. 35. ISBN 0521018226. 8. ^ G Bastard; JA Brum; R Ferreira (1991). "Figure 10 in Electronic States in Semiconductor Heterostructures". In Henry Ehrenreich; David Turnbull (eds.). Solid state physics: Semiconductor Heterostructures and Nanostructures. p. 259. ISBN 0126077444. 9. ^ a b Christian Schüller (2006). "§2.4.1 Envelope function approximation (EFA)". 
Inelastic Light Scattering of Semiconductor Nanostructures: Fundamentals And Recent Advances. Springer. p. 22. ISBN 3540365257. 10. ^ For example, see Marco Fanciulli (2009). "§1.1 Envelope function approximation". Electron Spin Resonance and Related Phenomena in Low-Dimensional Structures. Springer. pp. 224 ff. ISBN 978-3540793649. 11. ^ a b Kordt Griepenkerl (2002). "Intensity distribution for diffraction by a slit and Intensity pattern for diffraction by a grating". In John W Harris; Walter Benenson; Horst Stöcker; Holger Lutz (eds.). Handbook of physics. Springer. pp. 306 ff. ISBN 0387952691. This article incorporates material from the Citizendium article "Envelope function", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
Quantum superposition Quantum superposition is a fundamental principle of quantum mechanics. It states that, much like waves in classical physics, any two (or more) quantum states can be added together ("superposed") and the result will be another valid quantum state; and conversely, that every quantum state can be represented as a sum of two or more other distinct states. Mathematically, it refers to a property of solutions to the Schrödinger equation; since the Schrödinger equation is linear, any linear combination of solutions will also be a solution. An example of a physically observable manifestation of the wave nature of quantum systems is the interference peaks from an electron beam in a double-slit experiment. The pattern is very similar to the one obtained by diffraction of classical waves. Another example is a quantum logical qubit state, as used in quantum information processing, which is a quantum superposition of the "basis states" |0⟩ and |1⟩. Here |0⟩ is the Dirac notation for the quantum state that will always give the result 0 when converted to classical logic by a measurement. Likewise |1⟩ is the state that will always convert to 1. Contrary to a classical bit that can only be in the state corresponding to 0 or the state corresponding to 1, a qubit may be in a superposition of both states. This means that the probabilities of measuring 0 or 1 for a qubit are in general neither 0.0 nor 1.0, and multiple measurements made on qubits in identical states will not always give the same result. The principle of quantum superposition states that if a physical system may be in one of many configurations—arrangements of particles or fields—then the most general state is a combination of all of these possibilities, where the amount in each configuration is specified by a complex number. For example, if there are two configurations labelled by 0 and 1, the most general state would be

c0 |0⟩ + c1 |1⟩,

where the coefficients c0 and c1 are complex numbers describing how much goes into each configuration. 
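The qubit statements above can be illustrated with a short sketch. The amplitudes are example values of mine (chosen so that |a|² = |b|² = 1/2), assuming the standard column-vector representation of |0⟩ and |1⟩:

```python
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)   # |0>
ket1 = np.array([0.0, 1.0], dtype=complex)   # |1>
a, b = 1 / np.sqrt(2), 1j / np.sqrt(2)       # example complex amplitudes
psi = a * ket0 + b * ket1                    # a superposition of both states

p0 = abs(psi[0]) ** 2                        # probability of measuring 0
p1 = abs(psi[1]) ** 2                        # probability of measuring 1

# repeated measurements on identically prepared qubits do not all agree
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=100_000, p=[p0, p1])
```

Both outcomes occur, each roughly half the time, matching the claim that measurements on identically prepared superpositions are not all the same.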
The principle was described by Paul Dirac as follows: The general principle of superposition of quantum mechanics applies to the states [that are theoretically possible without mutual interference or contradiction] ... of any one dynamical system. It requires us to assume that between these states there exist peculiar relationships such that whenever the system is definitely in one state we can consider it as being partly in each of two or more other states. The original state must be regarded as the result of a kind of superposition of the two or more new states, in a way that cannot be conceived on classical ideas. Any state may be considered as the result of a superposition of two or more other states, and indeed in an infinite number of ways. Conversely, any two or more states may be superposed to give a new state... Anton Zeilinger, referring to the prototypical example of the double-slit experiment, has elaborated regarding the creation and destruction of quantum superposition: "[T]he superposition of amplitudes ... is only valid if there is no way to know, even in principle, which path the particle took. It is important to realize that this does not imply that an observer actually takes note of what happens. It is sufficient to destroy the interference pattern, if the path information is accessible in principle from the experiment or even if it is dispersed in the environment and beyond any technical possibility to be recovered, but in principle still 'out there.' The absence of any such information is the essential criterion for quantum interference to appear."[2] For an equation describing a physical phenomenon, the superposition principle states that a combination of solutions to a linear equation is also a solution of it. When this is true the equation is said to obey the superposition principle. 
Thus, if state vectors f1, f2 and f3 each solve the linear equation on ψ, then ψ = c1f1 + c2f2 + c3f3 would also be a solution, in which each c is a coefficient. The Schrödinger equation is linear, so quantum mechanics follows this. For example, consider an electron with two possible configurations, up and down. This describes the physical system of a qubit. The most general state is

c_up |↑⟩ + c_down |↓⟩.

But these coefficients dictate probabilities for the system to be in either configuration. The probability for a specified configuration is given by the square of the absolute value of the coefficient. So the probabilities should add up to 1:

|c_up|² + |c_down|² = 1.

The electron is in one of those two states for sure. Continuing with this example: if a particle can be in the states up and down, it can also be in a state where it is an amount 3i/5 in up and an amount 4/5 in down. In this, the probability for up is |3i/5|² = 9/25. The probability for down is |4/5|² = 16/25. Note that 9/25 + 16/25 = 1. In the description, only the relative size of the different components matters, and their angle to each other on the complex plane. This is usually stated by declaring that two states which are a multiple of one another are the same as far as the description of the situation is concerned: the states |ψ⟩ and α|ψ⟩ describe the same physical situation for any nonzero complex number α. The fundamental law of quantum mechanics is that the evolution is linear, meaning that if state A turns into A′ and B turns into B′ after 10 seconds, then after 10 seconds the superposition turns into a superposition of A′ and B′ with the same coefficients as A and B. For example, if we have the state

α|A⟩ + β|B⟩,

then after those 10 seconds our state will change to

α|A′⟩ + β|B′⟩.

So far there have just been 2 configurations, but there can be infinitely many. In illustration, a particle can have any position, so that there are different configurations which have any value of the position x. 
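The worked example with amplitudes 3i/5 and 4/5 can be verified directly, along with the statement that an overall nonzero factor leaves the physical state unchanged (the factor α below is an arbitrary choice of mine):

```python
import numpy as np

# Amplitudes from the text: 3i/5 for "up", 4/5 for "down".
psi = np.array([3j / 5, 4 / 5])
probs = np.abs(psi) ** 2                 # squared magnitudes: 9/25 and 16/25

# Multiplying by any nonzero complex number describes the same state:
# the relative probabilities are unchanged.
alpha = 2.0 * np.exp(1j * 0.7)           # arbitrary nonzero multiple (mine)
psi_rescaled = alpha * psi
probs_rescaled = np.abs(psi_rescaled) ** 2 / np.sum(np.abs(psi_rescaled) ** 2)
```

The probabilities come out to 9/25 and 16/25, sum to one, and survive the rescaling.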
These are written: |x⟩. The principle of superposition guarantees that there are states which are arbitrary superpositions of all the positions with complex coefficients:

Σ_x ψ(x) |x⟩.

This sum is defined only if the index x is discrete. If the index runs over a continuum, then the sum is replaced by an integral. The quantity ψ(x) is called the wavefunction of the particle. If we consider a qubit with both position and spin, the state is a superposition of all possibilities for both:

Σ_x (ψ₊(x) |x, ↑⟩ + ψ₋(x) |x, ↓⟩).

The configuration space of a quantum mechanical system cannot be worked out without some physical knowledge. The input is usually the allowed different classical configurations, but without the duplication of including both position and momentum. A pair of particles can be in any combination of pairs of positions. A state where one particle is at position x and the other is at position y is written |x, y⟩. The most general state is a superposition of the possibilities:

Σ_{x,y} ψ(x, y) |x, y⟩.

The description of the two particles is much larger than the description of one particle—it is a function in twice the number of dimensions. This is also true in probability, when the statistics of two random variables are correlated. If two particles are uncorrelated, the probability distribution for their joint position P(x, y) is a product of the probability of finding one at one position and the other at the other position:

P(x, y) = P_A(x) P_B(y).

In quantum mechanics, two particles can be in special states where the amplitudes of their position are uncorrelated. For quantum amplitudes, the word entanglement replaces the word correlation, but the analogy is exact. A disentangled wave function has the form:

ψ(x, y) = ψ_A(x) ψ_B(y),

while an entangled wavefunction does not have this form. Analogy with probability In probability theory there is a similar principle. 
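Whether a two-particle amplitude factors into the disentangled product form can be tested numerically: writing ψ(x, y) as a matrix, the product form is exactly a rank-1 matrix, so counting its nonzero singular values (the Schmidt rank) detects entanglement. This is a sketch with example states of my own; the rank-1 criterion, not the specific numbers, is the point:

```python
import numpy as np

def schmidt_rank(psi, tol=1e-12):
    # number of singular values above tolerance; 1 means disentangled
    return int(np.sum(np.linalg.svd(psi, compute_uv=False) > tol))

# Disentangled by construction: psi(x, y) = psiA(x) * psiB(y).
psiA = np.array([1.0, -2.0, 0.5])
psiB = np.array([0.3, 1.0])
product_state = np.outer(psiA, psiB)

# An entangled amplitude (a Bell-like state): cannot be factored.
entangled = np.zeros((2, 2))
entangled[0, 0] = entangled[1, 1] = 1 / np.sqrt(2)
```

The product state has Schmidt rank 1; the Bell-like state has rank 2, so no choice of ψ_A and ψ_B reproduces it.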
If a system has a probabilistic description, this description gives the probability of any configuration, and given any two different configurations, there is a state which is partly this and partly that, with positive real number coefficients, the probabilities, which say how much of each there is. For example, if we have a probability distribution for where a particle is, it is described by the "state"

Σ_x ρ(x) |x⟩,

where ρ(x) is the probability density function, a positive number that measures the probability that the particle will be found at a certain location. The evolution equation is also linear in probability, for fundamental reasons. If the particle has some probability for going from position x to y, and from z to y, the probability of going to y starting from a state which is half-x and half-z is a half-and-half mixture of the probability of going to y from each of the options. This is the principle of linear superposition in probability. Quantum mechanics is different, because the numbers can be positive or negative. While the complex nature of the numbers is just a doubling, if you consider the real and imaginary parts separately, the sign of the coefficients is important. In probability, two different possible outcomes always add together, so that if there are more options to get to a point z, the probability always goes up. In quantum mechanics, different possibilities can cancel. In probability theory with a finite number of states, the probabilities can always be multiplied by a positive number to make their sum equal to one. For example, if there is a three state probability system:

x |1⟩ + y |2⟩ + z |3⟩,

where the probabilities x, y and z are positive numbers, rescaling x, y, z so that

x + y + z = 1,

the geometry of the state space is revealed to be a triangle. In general it is a simplex. There are special points in a triangle or simplex corresponding to the corners, and these points are those where one of the probabilities is equal to 1 and the others are zero. 
These are the unique locations where the position is known with certainty. In a quantum mechanical system with three states, the quantum mechanical wavefunction is a superposition of states again, but this time twice as many quantities (the real and imaginary parts of each amplitude) with no restriction on the sign:

ψ₁ |1⟩ + ψ₂ |2⟩ + ψ₃ |3⟩.

Rescaling the variables so that the sum of the squares is 1,

|ψ₁|² + |ψ₂|² + |ψ₃|² = 1,

the geometry of the space is revealed to be a high-dimensional sphere. A sphere has a large amount of symmetry; it can be viewed in different coordinate systems or bases. So unlike a probability theory, a quantum theory has a large number of different bases in which it can be equally well described. The geometry of the phase space can be viewed as a hint that the quantity in quantum mechanics which corresponds to the probability is the absolute square of the coefficient of the superposition. Hamiltonian evolution The numbers that describe the amplitudes for different possibilities define the kinematics, the space of different states. The dynamics describes how these numbers change with time. For a particle that can be in any one of infinitely many discrete positions, a particle on a lattice, the superposition principle tells you how to make a state:

Σ_n ψ_n |n⟩,

so that the infinite list of amplitudes (…, ψ₋₁, ψ₀, ψ₁, …) completely describes the quantum state of the particle. This list is called the state vector, and formally it is an element of a Hilbert space, an infinite-dimensional complex vector space. It is usual to represent the state so that the sum of the absolute squares of the amplitudes is one:

Σ_n |ψ_n|² = 1.

For a particle described by probability theory random walking on a line, the analogous thing is the list of probabilities (…, P₋₁, P₀, P₁, …), which give the probability of any position. The quantities that describe how they change in time are the transition probabilities K_{x→y}(t), which give the probability that, starting at x, the particle ends up at y time t later. 
The total probability of ending up at y is given by the sum over all the possibilities:

P_y(t) = Σ_x P_x(0) K_{x→y}(t).

The condition of conservation of probability states that starting at any x, the total probability to end up somewhere must add up to 1:

Σ_y K_{x→y}(t) = 1,

so that the total probability will be preserved; K is what is called a stochastic matrix. When no time passes, nothing changes: for 0 elapsed time, K_{x→y}(0) = δ_{xy}; the K matrix is zero except from a state to itself. So in the case that the time is short, it is better to talk about the rate of change of the probability instead of the absolute change in the probability:

K_{x→y}(dt) = δ_{xy} + R_{x→y} dt,

where R is the time derivative of the K matrix:

R_{x→y} = dK_{x→y}/dt at t = 0.

The equation for the probabilities is a differential equation that is sometimes called the master equation:

dP_y/dt = Σ_x P_x R_{x→y}.

The R matrix is the probability per unit time for the particle to make a transition from x to y. The condition that the K matrix elements add up to one becomes the condition that the R matrix elements add up to zero:

Σ_y R_{x→y} = 0.

One simple case to study is when the R matrix has an equal probability to go one unit to the left or to the right, describing a particle that has a constant rate of random walking. In this case R_{x→y} is zero unless y is either x + 1, x, or x − 1; when y is x + 1 or x − 1, the R matrix has value c, and in order for the sum of the R matrix coefficients to equal zero, the value of R_{x→x} must be −2c. So the probabilities obey the discretized diffusion equation:

dP_x/dt = c (P_{x+1} − 2P_x + P_{x−1}),

which, when c is scaled appropriately and the P distribution is smooth enough to think of the system in a continuum limit, becomes the diffusion equation:

∂P/∂t = c ∂²P/∂x².

Quantum amplitudes obey equations of exactly the same mathematical form, except that the quantities involved are complex numbers. 
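The random-walk R matrix just described (rate c to each neighbour, −2c on the diagonal) can be simulated directly. In this sketch the lattice size, time step, and periodic boundary condition are my own choices, made so that every row of R sums exactly to zero:

```python
import numpy as np

n, c, dt, steps = 101, 1.0, 0.01, 2000
R = np.zeros((n, n))
for x in range(n):
    R[x, x] = -2.0 * c          # diagonal: -2c, so the row sums to zero
    R[x, (x + 1) % n] = c       # hop right (periodic wrap, my choice)
    R[x, (x - 1) % n] = c       # hop left

P = np.zeros(n)
P[n // 2] = 1.0                 # start fully localized in the middle
for _ in range(steps):
    P = P + dt * (P @ R)        # forward-Euler step of the master equation
```

Because the rows of R sum to zero, the total probability is conserved at every step, and the initially localized distribution spreads out diffusively.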
The analog of the finite time K matrix is called the U matrix:

ψ_n(t) = Σ_m U_{nm}(t) ψ_m(0).

Since the sum of the absolute squares of the amplitudes must be constant, U must be unitary:

Σ_n U*_{nm} U_{np} = δ_{mp},

or, in matrix notation,

U†U = I.

The rate of change of U is called the Hamiltonian H, up to a traditional factor of i:

H = i dU/dt at t = 0,

so that for a short time U(dt) = I − iH dt. The Hamiltonian gives the rate at which the particle has an amplitude to go from m to n. The reason it is multiplied by i is that the condition that U is unitary translates to the condition:

(I + iH† dt)(I − iH dt) = I,

which says that H is Hermitian: H† = H. The eigenvalues of the Hermitian matrix H are real quantities, which have a physical interpretation as energy levels. If the factor i were absent, the H matrix would be antihermitian and would have purely imaginary eigenvalues, which is not the traditional way quantum mechanics represents observable quantities like the energy. For a particle that has equal amplitude to move left and right, the Hermitian matrix H is zero except for nearest neighbors, where it has the value c. If the coefficient is everywhere constant, the condition that H is Hermitian demands that the amplitude to move to the left is the complex conjugate of the amplitude to move to the right. The equation of motion for ψ is the time differential equation:

i dψ_n/dt = c* ψ_{n+1} + c ψ_{n−1}.

In the case in which left and right are symmetric, c is real. By redefining the phase of the wavefunction in time, ψ → ψ e^{2ict}, the amplitudes for being at different locations are only rescaled, so that the physical situation is unchanged. But this phase rotation introduces a linear term:

i dψ_n/dt = c* ψ_{n+1} − 2c ψ_n + c ψ_{n−1},

which is the right choice of phase to take the continuum limit. When c is very large and ψ is slowly varying so that the lattice can be thought of as a line, this becomes the free Schrödinger equation:

i ∂ψ/∂t = −(1/2m) ∂²ψ/∂x².

If there is an additional term in the H matrix that is an extra phase rotation that varies from point to point, the continuum limit is the Schrödinger equation with a potential energy:

i ∂ψ/∂t = −(1/2m) ∂²ψ/∂x² + V(x) ψ.

These equations describe the motion of a single particle in non-relativistic quantum mechanics. 
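The nearest-neighbour Hamiltonian just described can be built and exponentiated numerically. In this sketch (sizes and the hopping amplitude are my own example values) the leftward amplitude is the complex conjugate of the rightward one, which makes H Hermitian; U(t) = exp(−iHt), constructed by eigendecomposition, then comes out unitary and conserves total probability:

```python
import numpy as np

n, t = 8, 0.37
c = 0.5 + 0.2j                     # example hopping amplitude (mine)
H = np.zeros((n, n), dtype=complex)
for m in range(n - 1):
    H[m, m + 1] = c                # amplitude to move one way...
    H[m + 1, m] = np.conj(c)       # ...conjugate the other way: Hermiticity

w, V = np.linalg.eigh(H)           # w is real because H is Hermitian
U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T   # U(t) = exp(-i H t)

psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0                      # start localized at site 0
psi_t = U @ psi0                   # evolved state; norm is preserved
```

The assertions below check exactly the three claims in the text: H is Hermitian, U is unitary, and the sum of absolute squares of the amplitudes is constant.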
Quantum mechanics in imaginary time The analogy between quantum mechanics and probability is very strong, so that there are many mathematical links between them. In a statistical system in discrete time, t = 1, 2, 3, described by a transition matrix for one time step K_{m→n}, the probability to go between two points after a finite number of time steps can be represented as a sum over all paths of the probability of taking each path:

K_{x→y}(T) = Σ_{paths} Π_t K_{x(t)→x(t+1)},

where the sum extends over all paths x(t) with the property that x(0) = x and x(T) = y. The analogous expression in quantum mechanics is the path integral. A generic transition matrix in probability has a stationary distribution, which is the eventual probability to be found at any point no matter what the starting point. If there is a nonzero probability for any two paths to reach the same point at the same time, this stationary distribution does not depend on the initial conditions. In probability theory, the stochastic matrix obeys detailed balance when the stationary distribution ρ_n has the property:

ρ_m K_{m→n} = ρ_n K_{n→m}.

Detailed balance says that the total probability of going from m to n in the stationary distribution, which is the probability of starting at m times the probability of hopping from m to n, is equal to the probability of going from n to m, so that the total back-and-forth flow of probability in equilibrium is zero along any hop. The condition is automatically satisfied when n = m, so it has the same form when written as a condition for the transition-probability R matrix:

ρ_m R_{m→n} = ρ_n R_{n→m}.

When the R matrix obeys detailed balance, the scale of the probabilities can be redefined using the stationary distribution so that they no longer sum to 1:

p′_n = p_n / √ρ_n.

In the new coordinates, the R matrix is rescaled as follows:

H_{nm} = √ρ_n R_{n→m} / √ρ_m,

and H is symmetric:

H_{nm} = H_{mn}.

This matrix H defines a quantum mechanical system:

i dψ_n/dt = Σ_m H_{nm} ψ_m,

whose Hamiltonian has the same eigenvalues as those of the R matrix of the statistical system. The eigenvectors are the same too, except expressed in the rescaled basis. 
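The rescaling can be checked on a small example. The stationary distribution ρ and the symmetric "conductances" C below are my own example numbers; R is constructed so that it satisfies detailed balance with rows summing to zero, and the conjugation by √ρ then yields a symmetric matrix with the same eigenvalues as R:

```python
import numpy as np

rho = np.array([0.5, 0.3, 0.2])            # stationary distribution (example)
C = np.array([[0.0, 0.6, 0.2],
              [0.6, 0.0, 0.3],
              [0.2, 0.3, 0.0]])            # symmetric => detailed balance

R = C / rho[:, None]                       # off-diagonal rates R[m,n] = C[m,n]/rho_m
np.fill_diagonal(R, 0.0)
np.fill_diagonal(R, -R.sum(axis=1))        # rows sum to zero

D = np.diag(np.sqrt(rho))
H = D @ R @ np.linalg.inv(D)               # H[m,n] = C[m,n]/sqrt(rho_m rho_n)

eig_R = np.sort(np.linalg.eigvals(R).real) # R is similar to symmetric H,
eig_H = np.sort(np.linalg.eigvalsh(H))     # so its eigenvalues are real too
```

The checks confirm detailed balance (ρ_m R[m,n] = ρ_n R[n,m]), that ρ is indeed stationary, that H is symmetric, and that the two spectra coincide.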
The stationary distribution of the statistical system is the ground state of the Hamiltonian and it has energy exactly zero, while all the other energies are positive. If H is exponentiated to find the U matrix:

U(t) = e^{−iHt},

and t is allowed to take on complex values, the K′ matrix is found by taking time imaginary:

K′(t) = e^{−Ht}.

Experiments and applications Successful experiments involving superpositions of relatively large (by the standards of quantum physics) objects have been performed.[3] • A "cat state" has been achieved with photons.[4] • A beryllium ion has been trapped in a superposed state.[5] • A double slit experiment has been performed with molecules as large as buckyballs.[6][7] • A 2013 experiment superposed molecules containing 15,000 each of protons, neutrons and electrons. The molecules were of compounds selected for their good thermal stability, and were evaporated into a beam at a temperature of 600 K. The beam was prepared from highly purified chemical substances, but still contained a mixture of different molecular species. Each species of molecule interfered only with itself, as verified by mass spectrometry.[8] • An experiment involving a superconducting quantum interference device ("SQUID") has been linked to the theme of the "cat state" thought experiment.[9] By use of very low temperatures, very fine experimental arrangements were made to protect in near isolation and preserve the coherence of intermediate states, for a duration of time, between preparation and detection, of SQUID currents. Such a SQUID current is a coherent physical assembly of perhaps billions of electrons. Because of its coherence, such an assembly may be regarded as exhibiting "collective states" of a macroscopic quantal entity. For the principle of superposition, after it is prepared but before it is detected, it may be regarded as exhibiting an intermediate state. 
It is not a single-particle state such as is often considered in discussions of interference, for example by Dirac in his famous dictum stated above.[10] Moreover, though the 'intermediate' state may be loosely regarded as such, it has not been produced as an output of a secondary quantum analyser that was fed a pure state from a primary analyser, and so this is not an example of superposition as strictly and narrowly defined. Nevertheless, after preparation, but before measurement, such a SQUID state may be regarded in a manner of speaking as a "pure" state that is a superposition of a clockwise and an anti-clockwise current state. In a SQUID, collective electron states can be physically prepared in near isolation, at very low temperatures, so as to result in protected coherent intermediate states. What is remarkable here is that there are two well-separated self-coherent collective states that exhibit such metastability. The crowd of electrons tunnels back and forth between the clockwise and the anti-clockwise states, as opposed to forming a single intermediate state in which there is no definite collective sense of current flow.[11][12] • An experiment involving a flu virus has been proposed.[13] • A piezoelectric "tuning fork" has been constructed, which can be placed into a superposition of vibrating and non-vibrating states. The resonator comprises about 10 trillion atoms.[14] • Recent research indicates that chlorophyll within plants appears to exploit the feature of quantum superposition to achieve greater efficiency in transporting energy, allowing pigment proteins to be spaced further apart than would otherwise be possible.[15][16] • An experiment has been proposed, with a bacterial cell cooled to 10 mK, using an electromechanical oscillator.[17] At that temperature, all metabolism would be stopped, and the cell might behave virtually as a definite chemical species. 
For detection of interference, it would be necessary that the cells be supplied in large numbers as pure samples of identical and detectably recognizable virtual chemical species. It is not known whether this requirement can be met by bacterial cells. They would be in a state of suspended animation during the experiment. In quantum computing the phrase "cat state" often refers to the GHZ state, the special entanglement of qubits wherein the qubits are in an equal superposition of all being 0 and all being 1; i.e.,

(|00…0⟩ + |11…1⟩)/√2.

Formal interpretation Applying the superposition principle to a quantum mechanical particle, the configurations of the particle are all positions, so the superpositions make a complex wave in space. The coefficients of the linear superposition are a wave which describes the particle as best as is possible, and whose amplitude interferes according to the Huygens principle. For any physical property in quantum mechanics, there is a list of all the states where that property has some value. These states are necessarily perpendicular to each other using the Euclidean notion of perpendicularity which comes from sums-of-squares length, except that they also must not be i multiples of each other. This list of perpendicular states has an associated value which is the value of the physical property. The superposition principle guarantees that any state can be written as a combination of states of this form with complex coefficients. Write each state with the value q of the physical quantity as a vector in some basis |n⟩, a list of numbers ψ_n^(q) at each value of n for the vector which has value q for the physical quantity. Now form the outer product of the vectors by multiplying all the vector components and add them with coefficients q to make the matrix

A_{nm} = Σ_q q ψ_n^(q) ψ_m^(q)*,

where the sum extends over all possible values of q. This matrix is necessarily Hermitian because it is formed from the orthogonal states, and has eigenvalues q. 
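The outer-product construction of an observable can be demonstrated numerically. In this sketch the values q and the orthonormal basis (obtained from a QR decomposition of a random complex matrix) are my own example inputs; the point is that the resulting matrix is Hermitian and its eigenvalues are exactly the chosen values:

```python
import numpy as np

rng = np.random.default_rng(1)
values = np.array([-1.0, 0.5, 2.0])        # example values q of the quantity

# An arbitrary orthonormal set of states |q>: columns of a unitary matrix.
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Q, _ = np.linalg.qr(M)

# A = sum_q q |q><q|, built from outer products of the basis vectors.
A = sum(q * np.outer(Q[:, i], Q[:, i].conj())
        for i, q in enumerate(values))

eigvals = np.sort(np.linalg.eigvalsh(A))   # should recover the q's
```

Diagonalizing A recovers the values q as eigenvalues, with the chosen states as the eigenvectors of definite value.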
The matrix A is called the observable associated to the physical quantity. It has the property that the eigenvalues and eigenvectors determine the physical quantity and the states which have definite values for this quantity. Every physical quantity has a Hermitian linear operator associated to it, and the states where the value of this physical quantity is definite are the eigenstates of this linear operator. The linear combination of two or more eigenstates results in quantum superposition of two or more values of the quantity. If the quantity is measured, the value of the physical quantity will be random, with a probability equal to the square of the absolute value of the coefficient of the superposition in the linear combination. Immediately after the measurement, the state will be given by the eigenvector corresponding to the measured eigenvalue. Physical interpretation It is natural to ask why ordinary everyday objects and events do not seem to display quantum mechanical features such as superposition. Indeed, this is sometimes regarded as "mysterious", for instance by Richard Feynman.[18] In 1935, Erwin Schrödinger devised a well-known thought experiment, now known as Schrödinger's cat, which highlighted this dissonance between quantum mechanics and classical physics. One modern view is that this mystery is explained by quantum decoherence. A macroscopic system (such as a cat) may evolve over time into a superposition of classically distinct quantum states (such as "alive" and "dead"). 
The mechanism that achieves this is a subject of significant research. One mechanism suggests that the state of the cat is entangled with the state of its environment (for instance, the molecules in the atmosphere surrounding it). When averaged over the possible quantum states of the environment (a physically reasonable procedure unless the quantum state of the environment can be controlled or measured precisely), the resulting mixed quantum state for the cat is very close to a classical probabilistic state where the cat has some definite probability to be dead or alive, just as a classical observer would expect in this situation.

Another proposed class of theories holds that the fundamental time-evolution equation is incomplete and requires the addition of some type of fundamental Lindbladian; the reason for this addition and the form of the additional term vary from theory to theory. A popular theory is continuous spontaneous localization, where the Lindblad term is proportional to the spatial separation of the states; this too results in a quasi-classical probabilistic state.

The Heisenberg uncertainty principle declares that for any given instant of time, the position and momentum of an electron or another subatomic particle cannot both be exactly determined, and that a state where one of them has a definite value corresponds to a superposition of many states for the other.

References

1. P.A.M. Dirac (1947). The Principles of Quantum Mechanics (2nd edition). Clarendon Press. p. 12.
2. Zeilinger A (1999). "Experiment and the foundations of quantum physics". Rev. Mod. Phys. 71 (2): S288–S297. Bibcode:1999RvMPS..71..288Z. doi:10.1103/revmodphys.71.s288.
3. "What is the world's biggest Schrodinger cat?".
4. "Schrödinger's Cat Now Made Of Light". 27 August 2014.
5. C. Monroe, et al. A "Schrodinger Cat" Superposition State of an Atom.
6. "Wave-particle duality of C60". 31 March 2012.
Archived from the original on 31 March 2012.
7. Nairz, Olaf. "standinglightwave".
8. Eibenberger, S., Gerlich, S., Arndt, M., Mayor, M., Tüxen, J. (2013). "Matter-wave interference with particles selected from a molecular library with masses exceeding 10 000 amu". Physical Chemistry Chemical Physics, 15: 14696–14700.
9. Leggett, A. J. (1986). "The superposition principle in macroscopic systems", pp. 28–40 in Quantum Concepts of Space and Time, edited by R. Penrose and C.J. Isham, ISBN 0-19-851972-9.
10. Dirac, P. A. M. (1930/1958), p. 9.
11. Physics World: Schrodinger's cat comes into view.
12. Friedman, J. R., Patel, V., Chen, W., Tolpygo, S. K., Lukens, J. E. (2000). "Quantum superposition of distinct macroscopic states". Nature 406: 43–46.
13. "How to Create Quantum Superpositions of Living Things".
15. Scholes, Gregory; Elisabetta Collini; Cathy Y. Wong; Krystyna E. Wilk; Paul M. G. Curmi; Paul Brumer; Gregory D. Scholes (4 February 2010). "Coherently wired light-harvesting in photosynthetic marine algae at ambient temperature". Nature. 463 (7281): 644–647. Bibcode:2010Natur.463..644C. doi:10.1038/nature08811. PMID 20130647.
16. Moyer, Michael (September 2009). "Quantum Entanglement, Photosynthesis and Better Solar Cells". Scientific American. Retrieved 12 May 2010.
17. "Could 'Schrödinger's bacterium' be placed in a quantum superposition?".
18. Feynman, R. P., Leighton, R. B., Sands, M. (1965), § 1-1.

Bibliography of cited references

• Bohr, N. (1927/1928). The quantum postulate and the recent development of atomic theory, Nature Supplement 14 April 1928, 121: 580–590.
• Cohen-Tannoudji, C., Diu, B., Laloë, F. (1973/1977). Quantum Mechanics, translated from the French by S. R. Hemley, N. Ostrowsky, D. Ostrowsky, second edition, volume 1, Wiley, New York, ISBN 0471164321.
• Dirac, P. A. M. (1930/1958). The Principles of Quantum Mechanics, 4th edition, Oxford University Press.
• Einstein, A. (1949).
Remarks concerning the essays brought together in this co-operative volume, translated from the original German by the editor, pp. 665–688 in Schilpp, P. A., editor (1949), Albert Einstein: Philosopher-Scientist, volume II, Open Court, La Salle IL.
• Feynman, R. P., Leighton, R. B., Sands, M. (1965). The Feynman Lectures on Physics, volume 3, Addison-Wesley, Reading, MA.
• Merzbacher, E. (1961/1970). Quantum Mechanics, second edition, Wiley, New York.
• Wheeler, J. A.; Zurek, W. H. (1983). Quantum Theory and Measurement. Princeton NJ: Princeton University Press.
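The decoherence mechanism described under "Physical interpretation" above, where entanglement with an environment turns a cat superposition into a near-classical mixture, can be sketched numerically. This is a toy model of my own (a single "cat" qubit coupled to a two-level environment), not taken from any of the cited works:

```python
import numpy as np

alive, dead = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def reduced_cat_state(overlap):
    # Environment states with <E_alive|E_dead> = overlap.
    e_alive = np.array([1.0, 0.0])
    e_dead = np.array([overlap, np.sqrt(1 - overlap ** 2)])
    # Entangled cat-environment state: (|alive>|E_a> + |dead>|E_d>) / sqrt(2)
    psi = (np.kron(alive, e_alive) + np.kron(dead, e_dead)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
    # Partial trace over the environment (axes 1 and 3).
    return np.trace(rho, axis1=1, axis2=3)

rho_coherent = reduced_cat_state(1.0)   # identical env states: pure superposition
rho_decohered = reduced_cat_state(0.0)  # orthogonal env states: classical mixture
```

As the environment states become distinguishable (overlap going to 0), the off-diagonal coherence of the cat's reduced state vanishes, leaving exactly the classical 50/50 mixture a classical observer would expect.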
Momentum Transfer to a Free Floating Double Slit: Realization of a Thought Experiment from the Einstein-Bohr Debates

PRL 111, 103201 (2013)

L. Ph. H.
Schmidt,1,* J. Lower,1 T. Jahnke,1 S. Schößler,1 M. S. Schöffler,1 A. Menssen,1 C. Lévêque,2 N. Sisourat,2 R. Taïeb,2 H. Schmidt-Böcking,1 and R. Dörner1

1 Institut für Kernphysik, Goethe-Universität, Max-von-Laue-Straße 1, 60438 Frankfurt am Main, Germany
2 Laboratoire de Chimie Physique-Matière et Rayonnement, Université Pierre et Marie Curie, 11 Rue Pierre et Marie Curie, 75231 Paris 05, France

(Received 20 March 2013; revised manuscript received 4 June 2013; published 5 September 2013)

We simultaneously measured the momentum transferred to a free-floating molecular double slit and the momentum change of the atom scattering from it. Our experimental results are compared to quantum mechanical and semiclassical models. The results reveal that a classical description of the slits, which was used by Einstein in his debate with Bohr, provides a surprisingly good description of the experimental results, even for a microscopic system, if momentum transfer is not ascribed to a specific pathway but shared coherently and simultaneously between both.

DOI: 10.1103/PhysRevLett.111.103201  PACS numbers: 03.65.Ta, 34.50.-s, 34.70.+e, 42.50.Xa

Quantum mechanics poses a major challenge to our intuition, which is trained in the macrocosm to the laws of classical physics. Among all the quantum phenomena, the double-slit interference is "a phenomenon which is impossible (...) to explain in any classical way, and which has in it the heart of quantum mechanics" (Feynman). Consequently, from its early days on, the double slit has been used as an example to discuss the wave concept of quantum mechanics. Most famously, Einstein challenged quantum mechanics by a thought experiment he proposed to Bohr [1,2]. He argued that it should be possible to determine the pathway of each individual particle passing through a double slit by observing the recoil momentum it imparts onto a first slit used to diffract the particle wave, ensuring it coherently illuminates the double-slit assembly.
Einstein construed this to express his "deep concern over the extent to which causal account in space and time was abandoned in quantum mechanics" (quoted from [1]). Bohr countered by asserting that the slits, in addition to the scattered particle, obey the laws of quantum mechanics. A slightly modified version of Einstein's thought experiment, which is better matched to Bohr's arguments, is illustrated in Fig. 1(a). Here one expects a momentum transfer pA or pB to the double slit in the case of pathway A or B. However, the uncertainty principle either prohibits the determination of the pathway or implies an uncertainty of the length of the two pathways so that the interference structure disappears, depending on whether the slit momentum or position is fixed initially [3]. Since we report an experiment which shows high interference contrast, we have realized the case where (almost) no which-way information can be obtained.

In modern terms, Bohr argued that the quantum-classical border cannot be drawn between the interfering particle (quantum) and the slit arrangement (classical) [4], as implicitly done by Einstein. Instead, the double slit is part of the quantum mechanical system and has to be treated quantum mechanically.

In the spirit of the original thought experiment we measure both slit and projectile momenta. The observed projectile interference pattern is compared to semiclassical and quantum mechanical calculations. These show that even for microscopic "slits" a classical modeling of the slit dynamics can be appropriate if the momentum transfer from the scattered particle to the slits is treated in a way not consistent with classical mechanics. To relate our experiment to the original Einstein-Bohr considerations, we need to consider the two-dimensional analog of the original thought experiment, as shown in Fig. 1(b).
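Bohr's washout argument can be checked with order-of-magnitude numbers. The values below (500 nm light, 10 μm slit separation, 1 m screen distance) and the rough Δx·Δp ~ h form of the uncertainty relation are illustrative assumptions, not taken from the paper:

```python
h = 6.626e-34        # Planck constant, J s
wavelength = 500e-9  # assumed wavelength, m
d = 10e-6            # assumed slit separation, m
L = 1.0              # assumed slit-to-screen distance, m

fringe_spacing = wavelength * L / d
# The momentum kicks the slit receives from paths A and B differ by about
delta_p = (h / wavelength) * (d / L)
# Resolving the path needs the slit momentum known better than delta_p,
# which (order of magnitude) forces a slit position uncertainty
delta_x = h / delta_p
# delta_x works out to wavelength * L / d, i.e. one full fringe:
# a slit that blurry smears the interference pattern away.
```

The constants cancel exactly in this estimate, which is the point of Bohr's reply: the better the slit momentum pins down the path, the more its position uncertainty erases the fringes.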
In contrast to the one-dimensional case, recoil from the double pinhole may induce rotation (clockwise or anticlockwise) in the x-y plane in addition to translation. However, for a macroscopic pinhole arrangement the interference pattern still consists of vertical stripes. In the present experiment we replace the macroscopic double pinhole by an ensemble of molecular "microslits" (free-floating HD molecules) and observe the diffraction of helium atoms by them [5]. Because the mass of the microslits is now comparable to that of the individual projectile atoms and the mass difference between the H and D nuclei is considerable, a significant distortion in the interference fringe pattern by rotational excitation of the slit is expected. Isotope labeling of the slits enables the entanglement of the slit-projectile system to be explored.

One might anticipate that the molecules comprising our molecular slits need to be aligned in space to observe interference phenomena. This is not the case. It is sufficient to choose only those scattering events in which the molecules rapidly dissociate and to postselect molecular orientations by measuring the emission angles of their atomic fragments. By the commonly used axial-recoil approximation [6,7] the internuclear vector at the time of collision is inferred from the measured directions of the fragments. Furthermore, given the correlation between kinetic energy release (KER) of the molecular breakup and the internuclear distance at the instant of its inception [8,9], interference for well-defined slit separations may be observed.

Figure 1(c) shows a schematic representation of our experiment. Helium atoms are used as the projectiles and the double pinhole is replaced by the two nuclei of the diatomic molecular ion HD⁺. Scattering with high momentum transfer is localized close to the nuclei. Only those scattering events corresponding to the dissociative electron transfer He(1s²) + HD⁺(1sσg) →
He⁺(1s) + HD(b³Σu⁺) → He⁺(1s) + H(1s) + D(1s) are analyzed, as these lead to an accurate determination of slit orientation. One electron is transferred from the He to the empty 2pσu orbital of the HD⁺, forming electronically excited neutral HD on the repulsive b³Σu⁺ potential energy curve [10]. The transition of the electron from an even to an odd state effects a phase shift of π between the two scattering pathways, which inverts the interference maxima to minima and vice versa [5,6,11].

FIG. 1 (color). Thought experiment for a kicked double slit. (a) A coherent particle wave travels through the double slit to the screen, where its probability distribution shows interference if no which-way information can be gained. Along path A (B) the particle is deflected downward (upward); therefore measurement of the double-slit momentum for each particle should allow determination of their paths. Equivalently, the momentum transfer to the first slit could be measured, as was originally proposed by Einstein [1]. (b) Two-dimensional version of the arrangement in (a). Momentum transfer from the projectile can cause a clockwise or anticlockwise rotation of the pinholes. (c) Experimental implementation of (b). Atoms collide with a HD molecule. Rutherford scattering at one of the molecular nuclei establishes the two interfering pathways. The scattered atom is observed in coincidence with the molecular fragments. In contrast to the macroscopic slit arrangement in (b) with the rotation axis located exactly in between the apertures, the unequal nuclear masses of our molecular microslits lead to a curving of the interference fringes.

The experiment is performed in inverse kinematics; i.e., a fast beam of molecules, which constitute the "slits", collides with a helium gas target.
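The effect of the π phase shift can be sketched with schematic two-centre amplitudes (the values of k and d below are assumed, in atomic units, not fitted to the experiment):

```python
import numpy as np

k = 4.0   # assumed momentum transfer
d = 2.0   # assumed internuclear separation
theta = np.linspace(-np.pi / 2, np.pi / 2, 1001)
phase = k * d * np.sin(theta)   # path-difference phase between the two nuclei

intensity_optical = np.abs(1 + np.exp(1j * phase)) ** 2   # ordinary double slit
intensity_inverted = np.abs(1 - np.exp(1j * phase)) ** 2  # with the extra pi shift

# The pi shift swaps maxima and minima: the two patterns are complementary,
# summing to 4 (the incoherent value of 2 amplitudes squared, doubled) at
# every angle; the central optical maximum becomes a minimum.
```

This is why the molecular pattern is "inverted compared to the optical case" as stated below: the g → u transition turns constructive interference destructive and vice versa.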
HD⁺ molecules, produced in a Penning source in a mixture of several vibrational and rotational excited states [10], are accelerated to 30 keV before colliding with helium atoms prepared within a supersonic jet. H and D fragments, formed through the process of charge transfer, are measured on position- and time-sensitive delay-line detectors [12] located behind the reaction region, enabling their momenta to be reconstructed and the slit geometry (HD internuclear distance and molecular orientation) at the time of collision to be inferred. The momenta of the He⁺ ions, which show interference stripes in the plane perpendicular to the direction of impact, were measured by cold target recoil ion momentum spectroscopy (COLTRIMS) [13,14].

The interference pattern of the scattered helium is shown in Fig. 2. This representation in momentum space is equivalent to a spatial interference pattern because the helium is propagated to macroscopic distances during the process of measurement. The horizontal axis of the coordinate frame is defined by the fragmentation direction of the molecule with the H directed to the right.

FIG. 2 (color). Two-dimensional distribution of the momentum transfer to He scattered at HD⁺ and leading to the dissociative electron transfer He + HD⁺(1sσg) → He⁺ + HD(1sσg 2pσu) → He⁺ + H(1s) + D(1s). The experimental distributions shown in (a) and (b) consist of events where the break-up direction of the molecule is measured between 85° and 95° with respect to the direction of impact. The internuclear vector at the classical dissociation limit, pointing from the H to the D, defines the x axis. (c) and (d) Predictions from a quantum mechanical treatment of the slit dynamics. Two different regions of KER have been selected: (a) and (c): 1 eV < KER < 2 eV; (b) and (d): 4 eV < KER < 5 eV. Momenta are given in atomic units (mₑ = e = ℏ = 4πε₀ = 1).
We select two different regions of KER, which correspond to distinct internuclear distances (slit separations). As stated above, the interference pattern is inverted compared to the optical case. More importantly, the vertical interference stripes are bent, which was not observed for H2 molecules [5], where the additional final-state symmetry arising for identical molecular fragments enforces the reflection symmetry of the diffraction pattern about the vertical axis and therefore prohibits the pattern bending to a specific side. As we will show, this bending is a result of the coherent rotation of the molecular axis initiated by the momentum kick [15].

A macroscopic double pinhole, as shown in Fig. 1(b), has a well-defined orientation and slit separation. If we consider the dynamics of our microscopic molecular double slit classically and neglect the momentum transfer by the diffracted He, the excited HD would dissociate along the bond axis and the fragment direction would coincide with the slit axis. In this case one would expect the maxima and minima in Figs. 2(a) and 2(b) to lie along vertical lines, in clear contradiction to the observed curved pattern, which shows that the momentum transfer is relevant in our case. The observed final H or D direction does not coincide with the internuclear axis at the time when the scattering occurred.

From the quantum mechanical perspective, our molecular-slit ensemble of HD⁺ molecules comprises an incoherent superposition of vibrational and rotational excited states. We restrict our modeling to molecules initially in the rotational ground state of HD⁺, which describes a coherent superposition of all possible orientations in three dimensions; no classical analog exists for such a state. Nevertheless, prominent interference effects can still be observed if specific momenta (i.e., break-up directions and KER) of the dissociation fragments are selected.
This is because the collision process causes rotational excitation, which is essential to generate orientational effects. To interpret our experimental results we compare them with calculations derived from semiclassical and quantum-mechanical theoretical approaches. In all approaches the diffracted wave is described by the superposition of contributions from the respective nuclei of the molecular double slit. The key difference is the way in which momentum is transferred from the projectile to the molecular slit.

Our first approach employs a classical treatment of fragment trajectories. Momentum transfer is assumed to occur at either the H or the D atom. Such a localization of momentum transfer to a particular point in space was envisaged in the thought experiment inspired by the Bohr-Einstein debate. Thus, while the wave scatters simultaneously from both atoms (delocalization of the projectile wave function), the momentum transfer occurs at only one, causing the intermolecular axis to rotate in a clockwise or in an anticlockwise direction. The degree and sign of rotation, which depends on the magnitude and direction of the relative momentum kick [16] and on whether the momentum transfer occurs at the H or D atom, is calculated within classical mechanics by numerically solving Newton's equation for the relative motion of the molecular fragments. The orientation of the molecular slit at the time of scattering can then be inferred and interference patterns computed.

Two-dimensional distributions for the momentum transferred between the helium and the H or D atom are shown in Fig. 3, panel (a) or (b), respectively. As scattering from both is expected to occur with equal probability, it is appropriate to compare the average of distributions (a) and (b) [presented in Fig. 3(c)] with the results of measurement [Fig. 2(a)].
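The semiclassical procedure can be caricatured in a few lines: impart the kick at one nucleus, integrate Newton's equation for the relative coordinate, and read off the rotation of the dissociation axis. The repulsive V(r) = 1/r potential, the masses, and the kick size below are toy assumptions, not the paper's actual HD potential:

```python
import math

m_h, m_d = 1.0, 2.0                    # toy masses (H : D = 1 : 2)
mu = m_h * m_d / (m_h + m_d)           # reduced mass
k = 1.5                                # assumed kick magnitude

def final_angle(kick_at_h):
    # A kick localized at one nucleus changes the relative momentum by a
    # mass-weighted share of k, applied perpendicular to the initial axis.
    share = m_d / (m_h + m_d) if kick_at_h else -m_h / (m_h + m_d)
    rx, ry = 1.0, 0.0                  # initial internuclear vector
    vx, vy = 0.0, k * share / mu       # relative velocity from the kick
    dt = 1e-3
    for _ in range(200_000):           # Euler-Cromer on repulsive V(r) = 1/r
        r3 = (rx * rx + ry * ry) ** 1.5
        vx += rx / (mu * r3) * dt
        vy += ry / (mu * r3) * dt
        rx += vx * dt
        ry += vy * dt
    return math.atan2(ry, rx)          # final axis direction vs. initial axis

angle_h = final_angle(True)            # kick at H rotates the axis one way
angle_d = final_angle(False)           # kick at D rotates it the other way
```

The two localized-kick cases rotate the dissociation axis in opposite senses, which is exactly why averaging them (Fig. 3(c)) blurs the pattern.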
As is clearly seen, the model achieves unsatisfactory agreement with the experimental results, exhibiting a more complex structure of lesser contrast than observed in measurement.

Our second approach pursues the strategy suggested in Bohr's reply to Einstein, namely, that the slits themselves be treated quantum mechanically. This results in a system of three correlated particles, H, D, and the scattered helium, which can only be solved by applying quantum mechanics. Since the total momentum of the closed system comprising these three particles is conserved, the motion of the center of mass of the double slit is fixed by the momentum change of the scattered particle. Therefore, its quantum mechanical description will not provide new insights. In contrast, additional information is gained from the quantum mechanical description of the relative motion of the two scattering centers, which involves a coherent superposition of double-slit orientations and slit distances and the effect of rotational and vibrational excitation during the collision process. It can be assumed that close scattering at one of the molecular centers involves a localized momentum transfer at this nuclear site.

FIG. 3 (color). Two-dimensional distribution of the momentum transfer to He scattered at HD⁺ and leading to dissociative electron transfer. Semiclassical calculations for HD internuclear distances resulting in 1 eV < KER < 2 eV. The x axis is defined by the internuclear vector at the classical dissociation limit pointing from the H to the D. The dissociation is started with a relative motion of the fragments resulting from momentum transfer at either (a) the H or (b) the D atom from the scattered helium. These two cases are classical analogs to the two translation factors used to modify the initial-state wave function of the quantum mechanical model. (c) Average of (a) and (b). (d) The transferred momentum is shared equally between H and D atoms.
This implies a definite correlation between the momentum transfer and the internuclear wave function describing the molecule. The outgoing wave of the scattered particle is a coherent superposition of contributions from the two scattering centers. Correspondingly, the internuclear wave function after scattering describes the molecule getting stretched or compressed and getting clockwise or anticlockwise rotationally excited, depending on the pathway of the scattered particle.

Our quantum mechanical approach is based on the assumption that the description of the internal degrees of freedom of the molecule is sufficient to characterize the complete system. To further reduce the dimensionality of the problem, we treat only the plane perpendicular to the beam axis. The collision process leads to an internuclear wave function ψ′ which is derived from the initial-state wave function ψ by multiplying it with a coherent superposition of two translation factors, namely,

ψ′(R) = ψ(R) [ exp(i k·R m_D/(m_D + m_H)) + exp(−i k·R m_H/(m_D + m_H)) ].

The evolution of this kicked HD nuclear wave function [15] is calculated on the same repulsive excited HD potential used in the semiclassical modeling. A multiconfigurational time-dependent Hartree approach (MCTDH) [17] is used to propagate the wave function until the dissociation limit is reached. The asymptotic wave function is then analyzed to extract the distribution of the KER and the angle φ between the final internuclear vector R and the momentum-transfer vector k.

Quantum mechanical and experimental results in terms of KER and φ are presented in Fig. 4. We perform these calculations for a large number of absolute values of k. Because the momentum transfer to the scattered particle is a measured value, we can add up these independent KER-φ distributions to finally compose the diffraction pattern of the scattered particle.

FIG. 4 (color). Final state of dissociated HD molecules for different magnitudes of momentum transfer p = ℏk to the molecule.
The final state is parameterized by the KER and the angle φ between the vector k = p_He⁺/ℏ and the internuclear vector R at the dissociation limit, or the measured break-up direction. (a) and (c) show time-dependent quantum mechanical calculations for |k| = 4 a.u. and 7 a.u. in the transversal plane. The results of calculations for different vibrational states have been added incoherently. The right panels show experimental results for (b) 3.5 a.u. < |k| < 4.5 a.u. and (d) 6.5 a.u. < |k| < 7.5 a.u. for an angle between the direction of impact and dissociation between 80° and 100°.

The composed pattern is shown in Figs. 2(c) and 2(d). In contrast to the semiclassical predictions, excellent agreement is now achieved with the experimental results in Figs. 2(a) and 2(b).

Finally, it is interesting to reflect on whether it is possible to incorporate the concept of a coherent momentum transfer into classical dynamics. To investigate this hypothesis we modify our semiclassical model by assuming that, in each collision, the momentum transfer is divided equally between both nuclei. The result presented in Fig. 3(d) shows excellent agreement with both the experimental results [Fig. 2(a)] and those from the fully quantum mechanical treatment of the slit dynamics [Fig. 2(c)]. This shows that the process of coherent momentum transfer indeed possesses a physical analog in the classical world: the momentum transfer is shared by the scattering centers even though the forces along the classical trajectories involve only a single scattering center.

In conclusion, we have observed Young-type interferences behind a free-floating isotope-labeled molecular double pinhole and measured the momentum transfer. Consistent with Bohr's arguments, a quantum mechanical description of the molecular slit dynamics is appropriate to describe the observed interference phenomena.
Moreover, it is sufficient to completely define the system dynamics; no additional treatment of the scattered projectile is necessary to describe the interference phenomena. Momentum transfer from the projectile to the slit is shown to modify the interference features in full agreement with predictions from quantum modeling of the kicked-molecule slit dynamics. As an alternative to a quantum mechanical description of the slits, our results show that a classical description of the slits according to Einstein's original viewpoint of the thought experiment is still possible. In that case one has, however, to assume a delocalized nonclassical interaction. Interestingly, for the specific pathway-symmetric thought experiment of Fig. 1(a) this net interaction would not lead to a recoiling of the slits.

The experimental work was supported by the Deutsche

*[email protected]

[1] N. Bohr, in Albert Einstein: Philosopher Scientist (Cambridge University Press, Cambridge, England, 1949), p. 201.
[2] Y. Shi, Ann. Phys. (N.Y.) 9, 637 (2000).
[3] W. K. Wootters and W. H. Zurek, Phys. Rev. D 19, 473
[4] P. Bertet, S. Osnaghi, A. Rauschenbeutel, G. Nogues, A. Auffeves, M. Brune, J. M. Raimond, and S. Haroche, Nature (London) 411, 166 (2001).
[5] L. Ph. H. Schmidt, S. Schössler, F. Afaneh, M. Schöffler, K. Stiebing, H. Schmidt-Böcking, and R. Dörner, Phys. Rev. Lett. 101, 173202 (2008).
[6] R. N. Zare, Mol. Photochem. 4, 1 (1972).
[7] R. N. Zare, J. Chem. Phys. 47, 204 (1967).
[8] L. Ph. H. Schmidt, T. Jahnke, A. Czasch, M. Schöffler, H. Schmidt-Böcking, and R. Dörner, Phys. Rev. Lett. 108, 073202 (2012).
[9] S. Chelkowski, P. B. Corkum, and A. D. Bandrauk, Phys. Rev. Lett. 82, 3416 (1999).
[10] Z. Amitay et al., Phys. Rev. A 60, 3769 (1999).
[11] T. F. Tuan and E. Gerjuoy, Phys. Rev. 117, 756 (1960).
[12] O. Jagutzki, V. Mergel, K. Ullmann-Pfleger, L. Spielberger, U. Spillmann, R. Dörner, and H. Schmidt-Böcking, Nucl. Instrum. Methods Phys. Res., Sect.
A 477, 244 (2002).
[13] R. Dörner, V. Mergel, O. Jagutzki, L. Spielberger, J. Ullrich, R. Moshammer, and H. Schmidt-Böcking, Phys. Rep. 330, 95 (2000).
[14] J. Ullrich, R. Moshammer, A. Dorn, R. Dörner, L. Ph. H. Schmidt, and H. Schmidt-Böcking, Rep. Prog. Phys. 66, 1463 (2003).
[15] W. Domcke and L. S. Cederbaum, J. Electron Spectrosc. Relat. Phenom. 13, 161 (1978).
[16] E. Kukk et al., Phys. Rev. Lett. 95, 133001 (2005).
[17] M. H. Beck, A. Jäckle, G. A. Worth, and H.-D. Meyer, Phys. Rep. 324, 1 (2000).
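The translation factors and the equal-sharing model [Fig. 3(d)] discussed in the paper rest on elementary two-body momentum bookkeeping, which can be verified directly (the masses and kick magnitude below are assumed for illustration; this is not code from the paper):

```python
m_h, m_d = 1.0, 2.0   # toy masses with m_D = 2 m_H
M = m_h + m_d
k = 4.0               # assumed kick magnitude

# A kick k localized at one nucleus changes the *relative* momentum by a
# mass-weighted share of k -- the same factors that multiply k.R in the
# two translation-factor phases:
dp_rel_at_h = k * m_d / M      # kick at H
dp_rel_at_d = -k * m_h / M     # kick at D (opposite sense)

# Sharing the kick equally between the nuclei gives the coherent-transfer
# analog: dp_rel = k (m_D - m_H) / (2 M).  It is nonzero for HD, but it
# vanishes for equal masses, consistent with the unbent fringes for H2 [5].
dp_rel_shared = 0.5 * dp_rel_at_h + 0.5 * dp_rel_at_d
```

The center-of-mass momentum change is k in all three cases; only the relative-motion share differs, which is what bends (or does not bend) the fringes.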
The Physicist's New Book of Life

Jeremy England says religious ideas can inform our scientific quest for the origin of life.

By Michael Brooks

Who is Jeremy England? There are many answers to that question. He is a biochemistry graduate who became an MIT assistant professor in physics when he was 29 years old. He is an ordained rabbi. He is the grandson of Holocaust survivors. He is a descendant of the first life-form on Earth. He can also be described as an assemblage of atoms that exhibits complex, life-like behavior. England might describe himself as one of the many dissipators of energy in the universe—this, he says, seems to be a useful way to answer the question that humans have asked for so many millennia: What is life, and how did it arise?

This question—and England's answer—form the basis of his new book Every Life Is On Fire: How Thermodynamics Explains the Origin of Living Things, which explores the idea that burning up energy is the base activity of life. But England has no simple, neat tale to tell: This is a complex, multilayered subject, and must be treated as more than a scientific issue, he says. That's why Every Life Is On Fire daringly brings ideas from the Hebrew Scriptures and uses them to unpack the science. Cultural and religious traditions have long been exploring this territory, he says, and can complement scientific angles on the question of where we ultimately came from. If we really want to understand ourselves, he suggests, we'll need more than science.

BOTH SIDES NOW: "I always wanted to do physics because I liked the predictive power of simple principles," Jeremy England says. "At the same time, I was fascinated by the form-function relationships of biology.
So I was always trying to do both." Photo by Katherine Taylor.

How did you get into combining physics and biology to come up with ideas about life's origins?

I always wanted to do physics because I liked the predictive power of simple principles. At the same time, I was fascinated by the relationships between form and function in biology—especially when you see that it's still there when you get down to the molecular level. I started out working in a structural biology and a cell biology wet lab, and was very bad at that! By the time I was finishing undergraduate school, I was working in a theoretical lab looking at protein folding. So it feels natural for me to be drawn to this set of questions.

Does it matter that no one has come up with a watertight definition of life?

Not really. We get the notion of what life is. You can do plenty of great science while saying, "Let me accept that there is a category of things where fish belongs to it, and trees belong to it, but rocks don't belong to it, and ice doesn't belong to it." We can continue to use the word while admitting that we don't really have a scientific pedigree for where the development of the word came from. And yes, there will be some difficult cases such as viruses. But we can accept the category as given, and study, to the best of our ability, the properties of the things in that category that are of interest to us.

That I can talk about a person as a collection of atoms should not supplant the fact that I can also talk about that person as a moral being.

Part of the quest involves seeking out life-like behavior in inanimate things too, doesn't it?

In a way, yes. We had a paper in Physical Review Letters a few years ago about a simulation of a bunch of balls and springs, just jiggling, and the springs were hooking and unhooking from the balls. Then you wiggled one of the springs at a certain frequency and it all jumbles and hooks together in a different way.
Now you have a resonator that’s better at absorbing energy from the frequency that you’re wiggling the ball at. Learning to harvest energy better from its surroundings is a feedback process that sounds lifelike. On the other hand, if you held it up to someone and said, “Look at this jiggling mass of balls and springs, it’s alive,” they would just laugh at you, and rightly so. So it’s clear that this is a territory where there’s going to be a lot of different arguments. Some might say, “Well, the fundamental thing about life is that it does X.” And someone else might say, “Well, the fundamental thing about life is that it does Y.” What I find is that when I focus on any one of those properties, you can always find examples and counter-examples. If you were loose enough in your understanding of what it means to copy yourself, for instance, then a spreading fire is a self-copying phenomenon. But to call a fire alive is a really contentious extension of the domain of that word. What we term life is this multifarious bundle of all these different things together: You’re good at self-replication, energy harvesting, and so on. When you study each one of those things on its own, it’s a physical phenomenon that has more primitive examples. But those examples are where we have a chance of understanding the fundamental principle better.

A lot of physicists’ efforts to understand life seem to invoke ideas from thermodynamics, such as entropy. To me, that feels uncomfortable, because thermodynamics was developed for another purpose entirely. Are you wedded to a thermodynamics approach?

It’s necessary to talk about entropy for historical reasons, and if we are conservative enough about how we use the term, it can still be useful as a shorthand. What I advocate for now is that we try to make theories that talk about the probability of things happening.
And yes, it’s true that entropy, which is counting the number of ways something could happen, is part of what weighs on the scale in determining probabilities—but it is not the only thing that impacts probability. So trying to talk about whether entropy should increase becomes very distracting. I think a better way to talk about it is more to consider what is the likely outcome, given the starting point, given the way the system is being driven, and given the sources of fluctuation in the system. Entropy will be one of the things that matters there, but not the only one.

You say that life seems to demand an explanation. Do you think that you’ve found one?

The focus of my line of research is more about whether we can develop the capability to bring about the different aspects of what I would call “lifelikeness” in experimental settings, with control and with theoretical principles that can be clearly articulated. We may not know exactly how our particular example of life got put together, but we start to see how one puts a bunch of things together in general. The starting point for that is to break things apart into these different phenomena like energy-harvesting, self-replication, et cetera. With each of those, we’ve made some progress. There’s more to do, but we can start to see how a story might come together.

Energy harvesting is central to your ideas. You suggest a key aspect of life’s emergence is down to structures that adapt to their environment by dissipating energy. Can you elaborate on that?

Imagine I have a collection of matter under the influence of an environment. The environment is essentially sources of energy that are kicking the matter and knocking into it and allowing it to change shape. I’m interested in which configurations of that matter will be likely to exist at some point in the future. That likelihood depends, in part, on how much extra energy was absorbed and dissipated on the way.
Over the course of the whole history of the system, highly dissipative histories are going to lead to highly likely outcomes.

What’s an example of a likely outcome?

An example might be a self-copying bacterium that eats some sugar. It uses the sugar to build another copy of itself. Now I have two of them and they eat the sugar even faster, and then they make four of them, and then they eat the sugar even faster. So the chemical dissipation is accelerating toward a likely outcome, which is that I have more bacteria in my future than I had in my past. The balls and springs work that way as well. It’s a positive feedback process where you’re exploring a space with combinations of matter. There’s an energy source. And the flow of energy through the system is leading to a positive feedback relationship where you find a better energy absorber and it helps you absorb even more energy, then you find another even better energy absorbing state.

What’s the general idea of dissipative adaptation?

There’s a feedback process that’s positive: I end up in a particular place because I was in a state in my past that was good at absorbing energy and it carried me irreversibly in a certain direction that I can’t go back from. It left its mark. So the general idea with dissipative adaptation is that the current state of the system holds the signature of how I had to be in some special state in my past to absorb a lot of energy. That helped me change my shape in consequential ways. Sometimes that leads to growing energy absorption over time, and sometimes it leads to extinction of energy absorption over time. And both of those things can leave very noticeable fingerprints that are different aspects of lifelike behavior.

And can we see this in biological experiments?
I haven’t been able to apply these ideas rigorously to anything like living cells yet—certainly not in experiments, but also not even with theoretical models. It’s much messier and more complicated to try to get things done in the biological context. But I don’t think that doing that kind of experiment is a long way away. Usually if I show a biologist a living cell and I say, “Look, it’s behaving in this way where it’s being very smart in how it’s reacting to something its environment is doing,” the default assumption is, “Well, there’s something you don’t understand yet about the biology, and that’s the explanation to what you’re seeing.” The design of experiments will have to be done very carefully.

If you can’t yet use biology, what can you use to explore these ideas?

There are membrane-less droplets, for instance, that self-organize into cells under different conditions. They seem to have very plastic and flexible properties to help the cell respond to different functional needs. A biologist might say, “Oh, well, it has all of these evolved abilities that come from eons of natural selection, making it better and better at what it does.” But it’s starting to be hard to imagine that every kind of response like this has its own separate program, as though it’s all been learned from the past. There’s a growing list of experimental biologists who are interested in these kinds of emergent adaptive behaviors in biological systems.

What kind of experiments are you doing along these lines?

We’ve been working with primitive abiotic examples. The place we’re looking is called “active matter.” It can involve proteins chewing through chemical fuels and binding and unbinding from each other. But you can also do it with larger objects. I have a collaboration at Georgia Tech where we do this with robot swarms. There are also examples of “colloidal particles” that have special coatings on their surface—they’re like little chemical jet packs.
And they already exhibit really interesting collective behaviors. Active matter is a nice experimental base camp. You don’t have to try to make sense of the living cell, where in addition to everything else you have all of the impacts of natural selection at the level of the organism. We can just study the collective behavior of things that are like soups of interacting proteins that are more primitive.

Every Life Is On Fire is still not the long-sought origin of life story, though?

It’s true: There is a lot more to fill in. I’m sure there are people who will read this book and say, “Well, you’ve talked about different kinds of lifelikeness and how they might emerge, but that’s not the same thing as a full story from start to finish of how life as we know it gets put together.” Maybe we can understand how self-replicators might start to emerge, how predictive mechanisms that respond to the patterns in their environment by accurately predicting their surroundings or their behavior might emerge. And maybe energy harvesting is something whose emergence we can understand. So that certainly recalibrates our sense of how to imagine a prebiotic situation and think about what’s difficult or easy to accomplish with what would be lying around. But, no, it is not the same thing as telling a blow-by-blow story. I’m sure anyone who’s looking for that level of detail in a story that is convincing and testable will have to wait a while. Doing forensics at that kind of historical distance is pretty difficult.

What’s the next stage in trying to get a handle on the origin of life?

For the short term, it’s going to be about how far we can push this idea that, subjected to the right kinds of patterns, naive matter can exhibit computing and learning behaviors.
I’m trying to do that right now with some of my collaborators—Dan Goldman at Georgia Tech and others who are part of this effort to control robot swarms. We want to push that envelope and show a smoking gun for that kind of an effect, creating something that can be tested and proven empirically in the laboratory. That will put the physical principles on a very firm footing. The more we can achieve impressive results in that way, the more we are going to be able to redouble our efforts to understand wider implications of the theory and tie it back into other things. To be honest, the broader question of how we start to talk about how life comes together is something I find more difficult to predict: I don’t claim that I can see which way that goes yet.

One thing that makes your book particularly interesting is that it is not entirely focused on science, but weaves religious narratives—in particular, the story of Moses from Hebrew scripture—into the scientific narrative. What made you want to do that?

Talking about the origin of life, or the boundary between what’s alive and what isn’t, involves broader questions that aren’t in the narrow domain of what you can understand scientifically. I didn’t want to stick my head in the sand about that. I want to understand how things work if I reason about them scientifically. But I am also a human being with other interests. I’m a practicing religious Jew—I’m an ordained Orthodox rabbi—and I care very deeply about these things. So I would feel foolish putting the scientific ideas out there but not making my own comment about a larger conversation that includes more perspectives on what some of this could mean. When I decided to write this book, I quickly realized I wanted to go and look in the Torah and see if I can find a commentary that responds to what I’m already thinking about with the science.
I certainly think that it’s possible to contemplate the boundary between life and not-life from that perspective, and the text, I would argue, clearly contains such a contemplation.

Do you think that including all these different perspectives is important in our quest to make sense of ourselves?

It’s clear that the question of what happened in the past is not a low-stakes question. You see that in how people argue about history and in the very emotional disputes people end up having about the prehistoric, or about cosmology. Ultimately, and this is something I learned from the Torah, how you describe the past is not ideologically neutral. The way you talk about who we are, and where we come from, matters to people—partly because it makes some people powerful, and enables them to convince others to do certain things. So I certainly don’t want to eliminate any frameworks of meaning that we need for talking about the past—we can’t just have frameworks that involve concepts like fundamental fields or prebiotic chemical reactions. I sometimes think fundamental physicists gin up the notion that when we’re done, we’ll just talk about strings and that will be everything. But we already know you can’t describe the interesting phenomena of the world if you just start with Coulomb’s law and the Schrödinger equation. It doesn’t work. You need different languages. We certainly shouldn’t be trying to have fewer of them. Physics and biology are different languages for talking about the same world. It’s a mistake to be trying to look for one language that will replace or subsume all others. The fact that I can talk about a person and see them as a collection of atoms should not supplant the fact that I can also talk about that person as a participant in an economy, or a moral being, or a participant in a relationship. These other frameworks of meaning are important, and we should grab them and hold on to them and insist on them.
Is it more important to wrestle with issues around our origins than solve them?

This is not a conversation that we should be hoping to exhaust. People who think that we’re done sorting it out are misguided in one way or another. People need to keep talking respectfully, with intellectual honesty, and in different languages, sharing those languages with each other. That’s how we’ll progress in our understanding.

Michael Brooks holds a Ph.D. in physics and is the author of The Quantum Astrologer’s Handbook. Read “Why Physics Can’t Tell Us What Life Is” by Jeremy England, also in this issue.

Lead image: ping198 / Shutterstock
Novel Charmonium and Bottomonium Spectroscopies due to Deeply Bound Hadronic Molecules from Single Pion Exchange

Frank Close, Rudolf Peierls Centre for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford, OX1 3NP, UK
Clark Downum, Clarendon Laboratory, University of Oxford, Parks Road, Oxford, OX1 3PU, UK
Christopher E. Thomas, Jefferson Laboratory, 12000 Jefferson Avenue, Suite #1, Newport News, VA 23606, USA

Pion exchange in S-wave between hadrons that are themselves in a relative S-wave is shown to shift energies by hundreds of MeV, leading to deeply bound quasi-molecular states. In the case of charmed mesons a spectroscopy arises consistent with enigmatic charmonium states observed above 4 GeV in e+e- annihilation. A possible explanation of two of these states is found. We give results for all isospin and charge-conjugation combinations, and comment on flavor-exotic doubly charmed states and bottomonium analogs. A search is recommended to test this hypothesis. An exotic state is predicted to occur in the vicinity of the Y(4260).

preprint: OUTP-10-01P
preprint: JLAB-THY-10-1119

I. Introduction

If two hadrons are linked by π emission, then pairs of such hadrons necessarily have the potential to feel a force from π exchange. This force will be attractive in at least some channels. Long ago [torn, ericson] the idea of π exchange between flavored mesons, in particular charm, was suggested as a source of potential “deusons” [torn]. Using the deuteron binding as normalization, the attractive force between a charmed meson and its counterpart was calculated for the S-wave combination with the appropriate total quantum numbers, and the results compared with an enigmatic charmonium state [pdg08, torn, classics, fcthomas]. In all these examples, as in the more traditional case of the nucleon, where the π coupling is the source of an attractive force that helps to form the deuteron, the exchanged π was emitted and absorbed in a relative P-wave with respect to the hadrons.
In such cases, the binding energies that result are of order a few MeV; this in particular has encouraged interest in the state which is within errors degenerate with the relevant two-meson threshold. It has recently been pointed out [cdprl] that the exchange of a π in S-wave, between pairs of hadrons that are themselves in relative S-wave, leads to deeply bound states between those hadrons. Instead of binding of a few MeV, as in the cases considered historically, there is now the potential for binding on the scale of hundreds of MeV, leading to a rich spectroscopy of states that are far from the di-hadron channels that create them. We shall argue that examples of such a spectroscopy appear to be manifested among charmonium-like mesons. We organize this paper as follows. First we summarize the general arguments for expecting large binding energies due to S-wave π exchange. We shall consider a chiral Lagrangian model to illustrate and quantify the phenomenon of energy shifts of hundreds of MeV, focusing specifically on the charmonium-like isoscalar channel. In Section III we investigate the connection between the chiral potential and the decay width of the relevant charmed mesons, first in the heavy quark limit with point particles and subsequently in the non-heavy quark limit and with form factors. Then, we solve the Schrödinger equation and discuss the uncertainties within the model. Detailed results for the charmonium-like system are presented in Section IV along with results for other spin, isospin, and flavor channels. We discuss the limitations of our approximation to the strong interaction in Section V, give phenomenological implications and suggest experimental searches in Section VI, and finish with conclusions in Section VII.

II. Molecules and S-wave π Exchange

Several groups have studied meson pairs, looking for bound states in various total quantum-number channels due to pion exchange (from here on, a given channel will be taken to include its charge-conjugate channel). These combinations were discussed in [torn].
In all of these examples parity conservation requires that the π is emitted in a P-wave; the hadrons involved at the emission vertices have their constituents in a relative internal s-wave (we use S, P to denote the angular momentum between hadrons, and s, p to denote the internal angular momentum of the constituents within a hadron). In such cases, the emission being in P-wave causes a penalty for small momentum transfer, which is manifested in the interaction [ericsonwise], where the masses are those of the mesons in the emission process. (For a discussion of this interaction, and its sign, see Eq. (20) in Ref. [fcthomas].) The resulting potential is suppressed at low momentum transfer and has been found to give bindings on the scale of a few MeV, which is in part a reflection of the P-wave penalty. There is no such penalty when π emission is in S-wave, which is allowed when the hadrons have opposite parities; examples arise among the lightest charmed mesons and their opposite-parity partners. P-wave π exchange carries a penalty at each vertex. One might naively anticipate that the transition from a meson with constituents in internal s-wave to one with constituents in internal p-wave would restore a penalty, leading to small binding effects as in the cases previously considered. However, as we now argue, this need not be the case, and energy shifts of hundreds of MeV can arise. There is a long history of data on transitions between hadrons of opposite parity which indicates that the S-wave coupling remains significant even at small momentum transfer. In the charm sector of interest here, the large widths [pdg08] of the low-lying opposite-parity charmed mesons suggest that, even after phase space is taken into account, there is a significant transition amplitude. This non-suppression was specifically commented upon in the classic quark model paper of Ref. [fkr]. It arises from a derivative operator acting on the internal hadron wave function, which enables an internal transition to occur without suppression even when the emitted momentum vanishes. This can be seen when the emission operator is expanded to include the internal quark momentum [divgi].
Feynman et al. [fkr] argued for this form on general grounds of Galilean invariance. The presence of the internal quark momentum gives the required derivative operator, and hence the unsuppressed transition. An unsuppressed transition, when applied to π exchange in this system (e.g. [liu]), causes the chiral model analogue of Eq. (1) to become a potential in which the coupling constant (up to a phase), the pion decay constant, the exchanged three-momentum, and the usual contraction of Pauli matrices from the exchange of an isovector by two isospin-half particles all appear, multiplied by a model-dependent form factor which regulates the potential and would be unity in the chiral model. In the derivation of the potential a static approximation has been made to the pion propagator: approximating the energy transfer by the mass gap between the mesons, one recovers the form of the propagator presented in the potential, Eq. (2). The potential is similar to one presented in Table 1 of Ref. [liu], who were investigating a π exchange model of the Z(4430). They considered only the isovector channel and omitted the isospin factor (which is unity for I = 1). We have made this factor explicit as it will become crucial when we study the I = 0 sector later. The absence of a penalty factor is immediately apparent. The scale is now being set by the mass gap squared, which is equal to the timelike component of the momentum transfer vector squared. This potential and any bound states have immediate implications for a rather rich set of physics. The potential also applies for the bottomonium and strangeonium analogs, by exchanging the masses for their appropriate counterparts. Note that the potential in Eq. (2) has no spin dependence, and therefore any results apply equally to all couplings of the meson spins. For example, if an isoscalar bound state is found, then we also expect degenerate states with the other spin couplings.
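To make the unusual shape of this interaction concrete, here is a small numerical sketch of the radial dependence just described. Because the mass gap Δ between the opposite-parity mesons exceeds m_π, the familiar Yukawa exp(−mr)/r turns into an oscillatory cos(μr)/r with μ = √(Δ² − m_π²). The overall strength and the value Δ = 0.41 GeV are illustrative assumptions, not the paper's fitted numbers.

```python
import math

# Illustrative inputs (assumptions, not the paper's fitted values):
m_pi = 0.138          # GeV, pion mass
delta = 0.41          # GeV, assumed mass gap between the opposite-parity mesons
mu = math.sqrt(delta**2 - m_pi**2)   # real because delta > m_pi

HBARC = 0.1973        # GeV*fm, converts fm to natural units

def V(r_fm, strength=1.0):
    """Radial shape of the S-wave one-pion-exchange potential.

    `strength` absorbs the coupling squared and all prefactors, which
    are model dependent; only the cos(mu*r)/r shape matters here.
    Returns the potential in GeV for a separation r_fm in fm.
    """
    r = r_fm / HBARC                      # separation in GeV^-1
    return -strength * math.cos(mu * r) / r

# First node: attraction turns to repulsion where mu*r = pi/2.
r_node_fm = (math.pi / 2) / mu * HBARC    # just below 1 fm for these inputs
```

With these inputs the potential is attractive out to roughly 0.8 fm and then alternates sign with distance, which is the oscillatory structure discussed for the point-like potential later in the paper.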
Thus on rather general grounds we may anticipate significant energy shifts, of order hundreds of MeV, due to π exchange at least in some channels between such hadrons in a relative S-wave. Signals may be anticipated below or near threshold in the following channels: in the charmonium analogues, involving charm and anti-charm mesons with isospin 0 or 1, or in exotic states with manifest charm involving two charm mesons. We also find that it is possible that further combinations could bind, which would lead to more configurations. Pion exchange depends on the presence of light flavors; therefore there will be no such effects in analogues that lack them. Further, the potential depends only on the quantum numbers of the light quarks. Therefore, there will be effects in the strange and bottom analogues, which can add to the test of such dynamics at different kinematics. The coupling parameter in Eq. (2) is closely connected to the width of the S-wave decay. Data exist on this decay which constrain the value of the coupling and hence the spectrum of the model. We discuss the extraction of the coupling from the decay width in the next section.

III. The Coupling Constant

Being simply related to the coupling constant, the parameter also appears in the chiral formula for the decay width. Eq. (137) of Ref. [casalbuoni97] gives a formula which is valid in the heavy quark limit.

Table 1: Data on low-lying mesons of different flavor sectors with opposite parity which exhibit a large width. Values taken from the Particle Data Group [pdg08]. No width data are available for the bottom sector and no branching fractions are given for the charmed sector.

mass / MeV | partner mass / MeV | Γ / MeV | BF | q / MeV
2427 ± 40 | 2010.27 ± 0.17 | 384 | N/A | 359
2352 ± 50 | 1896.62 ± 0.20 | 261 ± 50 | N/A | 414
2403 ± 40 | 1864.84 ± 0.17 | 283 ± 40 | N/A | 461
5723.4 ± 2.0 | 5325.1 ± 0.5 | N/A | dominant | 360
1403 ± 7 | 891.66 ± 0.26 | 174 ± 13 | 94 ± 6% | 402
1425 ± 50 | 493.677 ± 0.016 | 270 ± 80 | 93 ± 10% | 619

In order to extract the coupling using Eq. (3), we use the data from the PDG listed in Table 1. In the absence of a branching fraction we assume that the total width is saturated by the π channels.
We are using chiral formulae for the charged-pion width, which may be related to the total decay width [falkluke]. Therefore, we use the corresponding masses throughout. For this system we obtain the heavy-quark value of the coupling listed in Table 2. There are theoretical and empirical reasons to suspect that Eq. (3) may be a poor estimate of the coupling given the width. Firstly, in the heavy quark limit assumed for Eq. (3), the members of each doublet are degenerate, and Eq. (3) applies equally well to the partner decay. However, these relations do not hold experimentally. Finite mass effects (including mass differences) have been used to derive a correction to Eq. (3) [casalbuoni97]; using this expression we obtain the value listed in the NHQ column of Table 2. We have mentioned that our analysis applies equally well to the partner system. Indeed, Eq. (3) applies to both systems with a trivial substitution of the appropriate mass. However, when finite mass effects are included, the chiral model gives a different formula for the decay widths of the two mesons; the formula analogous to Eq. (4) is Eq. (5) [casalbuoni97]. Due to the larger mass gap, Eqs. (4) and (5) imply a definite ordering of the two widths. Empirically [pdg08], although the uncertainties are large, the ordering is the opposite: the state with the larger phase space has the smaller width, in contrast with the expectations of Eqs. (4) and (5). In general, such processes involve form factors that summarize the penalty for selecting the exclusive process of single π emission, which is increasingly improbable at large momenta relative to multi-pion, inclusive channels. Thus, the assumption that the coupling is constant in the chiral model does not take account of the full dynamics at the vertex. The data suggest that we must include the effects of exclusive form factors. The effects of form factors may be modelled by replacing the constant coupling everywhere by a coupling multiplied by a smooth, decreasing function of the momentum that is unity at zero momentum. The exact form of this function is model dependent; however, the introduction of a form factor will in general lead to an increased estimate of the coupling and so, naively, to an increased binding energy.
As an explicit example, consider the form factor resulting from a dynamical model of π emission [cs], with a scale parameter of order a GeV [cs]. At small momenta the form factor is close to unity, while at the larger momentum of the partner decay it is significantly suppressed. This plays a significant role in the relative widths, driving the ratio of widths in the direction seen in the data. In turn the form factor also shows that the coupling extracted earlier from the chiral model is an underestimate. In such a model the more general Eq. (4) modifies the heavy quark value of the coupling, and the effect of form factors increases it further. Therefore, the inclusion of finite mass corrections and the effects of exclusive form factors has a significant impact on the value of the coupling extracted from experimental decay widths. We emphasise that although the form factor itself is model dependent, the suppression at larger momenta is expected in general.

Table 2: Values of the coupling extracted for various systems which may experience S-wave π exchange, with columns System | HQ | NHQ | NHQFF. The adaptation of the equations for the charm system to their appropriate form for flavor-analog systems by making obvious mass substitutions is assumed.

In summary, from these different determinations we find a range of values: the value of the coupling is highly model and data dependent. We collect these results and present other results for analogous systems in Table 2. The HQ column presents the values extracted in the heavy quark limit using Eq. (3). The NHQ column is similar but extracts the values in the non-heavy quark limit using Eqs. (4) and (5). The NHQFF column presents extracted values which would be required to overcome the form-factor suppression, Eq. (6), and to reproduce the correct width in the non-heavy quark limit. In the following section we will present results for a range of couplings and show that the spectrum is highly sensitive to the value used.

IV. Molecular Spectroscopy

Previously [cdprl] we performed a variational calculation with the potential in Eq. (2) using trial wave functions. With this technique we agreed with Ref. [liu] that there was no reason to expect an isovector bound state. Additionally, we found deep binding in the isoscalar system. The presence of deep binding in the isoscalar channel motivated the present study, where we solve the Schrödinger equation and analyze the spectroscopy emerging from S-wave π exchange binding of this and analogous systems. We solve the Schrödinger equation and quantify the bound states from S-wave exchange using a range of couplings to set the scale. The resulting spectrum contains several potential bound states. The isoscalar channel includes 1S and 2S states which are consistent with structures claimed in e+e- annihilation. Results for the charmonium-like exotic and isovector channels are also presented. We find the binding energies are highly sensitive both to the value used for the coupling in all channels, and to attempts to model finite-size effects in some channels. We first consider the potential from the chiral model involving a point-like interaction, and then discuss modification of the potential due to finite-size effects.

IV.1 Point-like pion exchange

The Fourier transform of Eq. (2) gives the potential with S-wave π exchange in coordinate space; its real part is oscillatory in r, in agreement with Ref. [liu]. We numerically solve the Schrödinger equation using this position-space potential as described in Ref. [fcthomas]. Clearly the results for larger values of the coupling, which yield significant binding, will have important finite-size corrections. Therefore the point-particle results can only be considered to give a cursory quantitative examination of the implications of our general argument. However, given the unusual nature of the oscillatory potential, it is beneficial to study the unregulated potential in order to contextualize the effects of a form factor, which are explored in the next subsection.
In Table 3 we show the binding energies of some of the low-lying isoscalar states in relative S-wave (the parity obviously depends on the relative orbital angular momentum of the system; since the potential is independent of spin, the binding energies are degenerate across all possible total-spin combinations). Binding energies are given for a few values of the coupling, and are seen to be very sensitive to that value.

Table 3: Binding energies (in MeV) for various isoscalar S-wave states; the binding energies are given for a few values of the coupling in the range identified above.

For example, for a sufficiently large coupling we find that two or even three S-states may arise, with successively smaller 1S, 2S and 3S binding energies. The rms radii for these states show that the ground state is typically hadronic in size, the 2S consistent with a canonical molecule, and the 3S dubious. Using Fig. 1, we can interpret the rms radii obtained for the S-wave states with the point-particle potential: the 1S radius clearly indicates that the state is bound in the first attractive well; the 2S radius indicates that the particles are bound in the second attractive well of the potential; and the 3S radius suggests that it is bound by the third attractive well. If we took the potential, Eq. (8), and applied it to higher partial waves unchanged apart from the centrifugal potential, we would find multiple bound states in the P- and D-waves, including some with substantial binding energies. Firm conclusions regarding the possibility of such states would require a more extensive analysis of the origin of Eq. (2) than presented here. The potential energy scales as the square of the coupling, but the binding energies are much more sensitive still to its value.
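The kind of calculation described here can be sketched in a few lines: discretise the S-wave radial Schrödinger equation for the oscillatory potential and diagonalise the resulting tridiagonal Hamiltonian. The reduced mass, coupling strengths, and grid below are illustrative assumptions, not the paper's inputs; the sketch only demonstrates the qualitative point that binding deepens rapidly with the coupling.

```python
import numpy as np

HBARC = 0.1973              # GeV*fm
MU_RED = 1.0                # GeV, illustrative reduced mass of the meson pair
M_PI, DELTA = 0.138, 0.41   # GeV; DELTA is an assumed mass gap
K = np.sqrt(DELTA**2 - M_PI**2)   # oscillation wavenumber, GeV

def bound_states(strength, rmax_fm=10.0, n=1500):
    """Negative eigenvalues (GeV) of -u''/(2*MU_RED) + V(r)*u = E*u with
    u(0) = u(rmax) = 0, from the finite-difference tridiagonal Hamiltonian."""
    r = np.linspace(1e-3, rmax_fm, n) / HBARC    # radial grid in GeV^-1
    h = r[1] - r[0]
    V = -strength * np.cos(K * r) / r            # oscillatory OPE shape
    diag = 1.0 / (MU_RED * h**2) + V
    off = -0.5 / (MU_RED * h**2) * np.ones(n - 1)
    H = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    energies = np.linalg.eigvalsh(H)
    return energies[energies < 0]                # bound states only
```

Increasing `strength` deepens the ground state much faster than linearly, mirroring the sensitivity to the coupling reported in the text.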
This sensitivity may not be unexpected, as the oscillatory nature of the potential in position space makes the potential turn over to repulsion at separations of order a fm, and gives considerable sensitivity to these oscillations even for the short-range 1S level, and critically so for the 2S. In a Coulomb potential the binding energies would scale as the square of the potential strength; this further explains the sensitivity noted above in the numerical calculation.

IV.2 Form Factor

As noted previously, the form factor has a significant impact on the calculation of the coupling from the decay width. In the previous section we used this “form-factor-renormalised” value of the coupling, but otherwise continued to treat the potential as if the hadrons were point-like. Therefore, it is prudent to investigate what effect form factors may have on the analysis of the molecular spectroscopy. To examine this question we attach dipole form factors to the potential, Eq. (2). Such ideas have been discussed in Refs. [torn, newheavymesons, liulambda]. Following those ideas, we parametrise the form factor as a dipole, by which the potential is multiplied. We have made the same static approximation as in Eq. (2). In nuclear physics dipole form factors involve no mass-gap shift in the denominator, because the exchange is between the nearly degenerate nucleons. In position space, the form factor changes the potential from that in Eq. (8) to the regulated form, Eq. (10). We plot this potential for the isoscalar channel for various cutoffs in Fig. 1, and for a fixed cutoff and the various channels in Fig. 5.

Figure 1: The potential, Eq. (10), plotted against r in the isoscalar channel for a variety of cutoffs. The solid line is the point-particle result; the dashed, dash-dot, dash-dot-dot and dotted lines correspond to successively different values of the cutoff (in GeV).

Here the cutoff is a purely phenomenological constant.
Although its value should be related to the convolution of the spatial wave functions of the hadrons, in practice it is fairly arbitrary. In the data-rich nucleon-nucleon sector, dipole form factors have been used in the Bonn nucleon-nucleon potentials. In CD-Bonn one finds values of [cdbonn]. However, there is no reason to believe that the value used in nuclear forces should be related to the value most appropriate for use in exchange between charmed mesons. In the literature, other practitioners using dipole form factors in meson exchange molecular models employ values of: 1.2 GeV [torn], – GeV [newheavymesons], and – GeV [liulambda]. The qualitative effects of this form factor are made apparent in Fig. 1. Regulating the potential introduces a soft repulsive core instead of a singular attraction at the origin. As decreases, the first attractive well in the potential is entirely overwritten as a repulsive core. We present the results for the binding energy as a function of and for the 1S and 2S isoscalar states in Figs. 2 and 3. The horizontal axes are so that the origin corresponds to the point-like case. As one can see, the ground state binding energy falls off rapidly with decreasing . Eventually the ground state binding energy stabilises and remains at approximately that energy for the remaining values of considered. This behavior is sharply contrasted by the binding energy of the 2S state. The 2S binding energy is initially insensitive to a decrease in before falling steeply and finally becoming insensitive again. This behavior can be understood from the behavior of the potential in Fig. 1. As is decreased from , the potential is increasingly regulated. This manifests itself as an overwriting of the initial attraction, which is eventually replaced by an entirely repulsive core interaction for MeV. Thus we would expect a steep fall-off in ground state binding energy as is decreased. This expectation is borne out in Fig. 2.
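To see the generic effect of such a regulator, consider a plain Yukawa tail dressed with a dipole form factor at each vertex. This is the standard textbook result, not the paper's Eq. (10) — there is no tensor or oscillatory structure here, and the coupling and regulator values are hypothetical — but it shows the key feature: the short-distance singularity is removed while the long-range pion tail is untouched.

```python
import numpy as np

HBARC = 197.327            # MeV*fm
MPI   = 138.0 / HBARC      # pion mass in fm^-1
ALPHA = 0.5                # coupling strength (hypothetical)

def v_point(r):
    """Unregulated Yukawa tail, in MeV; r in fm."""
    return -ALPHA * HBARC * np.exp(-MPI * r) / r

def v_dipole(r, lam_mev):
    """Yukawa dressed with a dipole form factor (L^2 - m^2)/(L^2 + q^2)
    at each vertex; standard position-space result, in MeV."""
    lam = lam_mev / HBARC   # regulator scale in fm^-1
    return -ALPHA * HBARC * (np.exp(-MPI * r) / r
                             - np.exp(-lam * r) / r
                             - 0.5 * (lam**2 - MPI**2) / lam * np.exp(-lam * r))

r = np.array([0.05, 0.5, 3.0])
for lam_mev in (800.0, 1000.0, 1500.0):
    print(lam_mev, v_dipole(r, lam_mev))
# At short range the 1/r pieces cancel, leaving a finite potential;
# at r ~ 3 fm the regulated and point-particle potentials coincide.
```

Lowering the regulator scale suppresses more of the short-distance attraction, which is the mechanism behind the overwriting of the first attractive well described above.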
In contrast, the 2S state is bound primarily by the second attractive well, which is unaffected by decreasing as long as MeV. Thus, we would expect the 2S binding energy to be relatively stable against decreasing , as Fig. 3 confirms. At some point, which is dependent, there will no longer be enough attraction in the first attractive well to bind the system, and so the ground state will begin to require presence in the second attractive well in order to bind, displacing the 2S state and decreasing its binding energy. When the first attractive well is completely overwritten, both the 1S and 2S states should have relatively stable binding energies, as the attractive wells (second and third) which bind them are stable against decreased . Indeed the binding energies of the 1S state decrease slightly as MeV, corresponding to the alteration of the second attractive well in Fig. 1. Figure 2: Plot of the 1S isoscalar binding energy for multiple values of as the form factor parameter is varied. Figure 3: Plot of the 2S isoscalar binding energy for multiple values of as the form factor parameter is varied. This analysis shows that the molecular spectroscopy is very sensitive to the parameters. While a simple abstraction of parameters from existing data supports the idea that a spectroscopy of molecules could arise, one cannot predict this with certainty. However, the result of strong binding appears relatively robust. Indeed our results show that the existence of robust bound states (assuming is sufficiently large) does not depend on deep attraction at the origin, and that, even in the presence of a strong repulsive core interaction, a bound state should exist with a binding energy largely determined by long-range (fm) virtual pion effects. If the and are confirmed as genuine signals, then within this simple modelling, their energies are qualitatively consistent with those expected for S-wave binding.
Indeed, the differing sensitivities of the 1S and 2S states to would allow one to tune the model to reproduce the binding energy of the , MeV, and the , MeV, assuming the mass of the was exactly 2427 MeV. Since the mass of the affects both binding energies in a systematic way, we cannot simply add its error in quadrature for both to obtain our binding energies with their error. Instead, we study the system for MeV; MeV; MeV, requiring binding energies of: MeV, MeV; MeV, MeV; MeV, MeV. We present the “tune-ability” of the model in Fig. 4. Figure 4: Contour plot of the values of and . The interior of the boxes corresponds to values which reproduce the binding energies of the and to within errors. The center box is for the experimental mass, while the box on the left is for the mass minus its error and the box on the right is for the mass plus its error. The dotted line corresponds to the value of from the experimental width. The mass of the affects the potential in two straightforward ways. First, it factors into the calculation of from the decay widths. Secondly, it helps determine the mass gap which, along with , controls the strength of the potential. Although the value of and the mass gap depend on the value of the mass, is unchanged as varies over its error. This allows us to plot the different mass cases on a single axis. (The mass of the has an insignificant error.) Fig. 4 was produced by parameterizing the binding energies. We assumed that the 2S binding energy was approximately independent of and so could be used to determine . This assumption has been explicitly verified for the values of , considered and is found to be a good approximation. Then the 1S binding energies were parameterized as a quadratic function of whose coefficients were quadratic functions of . This parameterization reproduced the 1S binding energies over the relevant range of these parameters.
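The two-step fit just described can be sketched as follows. Since the actual computed binding energies are not reproduced here, a made-up smooth function stands in for them; everything below (function names, coefficients, parameter ranges) is hypothetical and serves only to illustrate the fit-and-invert procedure.

```python
import numpy as np

# Hypothetical stand-in for the computed 1S binding energies B(g, Lam);
# in the paper these come from solving the Schroedinger equation.
def b1s(g, lam):
    return -(5.0 + 30.0 * g**2) + 8.0 * g * lam - 1.5 * lam**2

gs   = np.linspace(1.0, 1.6, 7)
lams = np.linspace(0.8, 1.4, 7)

# step 1: fit B as a quadratic in Lam at each g, then fit each of the
# three coefficients as a quadratic in g
coeffs = np.array([np.polyfit(lams, [b1s(g, l) for l in lams], 2) for g in gs])
coeff_fits = [np.polyfit(gs, coeffs[:, k], 2) for k in range(3)]

# step 2: invert with the quadratic formula -- which Lam reproduces a
# target binding energy at a given g?
def lam_for_binding(g, target):
    a, b, c = (np.polyval(f, g) for f in coeff_fits)
    roots = np.roots([a, b, c - target])
    return [r.real for r in roots if abs(r.imag) < 1e-9]

g0 = 1.3
target = b1s(g0, 1.1)   # pretend this is a measured binding energy
print(sorted(lam_for_binding(g0, target)))   # one root recovers Lam ~ 1.1
```

Repeating the inversion over the allowed band of target binding energies traces out a compatibility region in the (g, Lam) plane, which is the kind of construction behind Fig. 4.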
The quadratic formula could then be used to extract the range of from the binding energy at each . The region of compatibility extends from just below the error bounds for to slightly above them. values are undetermined by experiment; however, the compatible values lie around 1 GeV, which is near the values used by other practitioners. Thus, a consistent, physically reasonable parameterization of and is possible which permits the identification of the and as the 1S and 2S bound states of the system respectively. In general within the chiral model, is a function of the experimental . If experiment were to show that the width were different from the current values that we have used, the consequent alteration of the molecular binding energies could be considerable. It is here that some major limitations in the robustness of the model lie. IV.3 Other bound states and flavour exotics The potentials in the and isovector/isoscalar channels are related by a simple constant. The potentials of isovector and isoscalar channels are related by a factor , while the potentials in channels with opposite charge conjugation are related by a relative phase. Therefore, we can use Fig. 1 to interpret the binding in all these channels against finite size effects and a possible repulsive core. The robustness of our results in the isoscalar channel was discussed previously. Figure 5: The potential, Eq. (10), plotted against in the various channels for and GeV. The dotted lines are the isovector potentials while the solid lines are the isoscalar potentials. The left panel shows the potentials and the right panel gives the potentials. In Table 4 we show the binding energies of some of the low-lying isoscalar and isovector states in relative S-wave with . Binding energies are given for a few values of and GeV. Interestingly, we find potentially robust binding in all isospin and charge-conjugation states.
Table 4: Binding energies for states with various isospins and charge conjugations; the binding energies are given for a few values of in the range identified above and GeV. (The table lists 1S states with isospin 0 and 1 for each charge conjugation.) We note that the pattern of binding described here is valid for GeV and the pattern will be altered as changes. In particular, the pattern will change if the finite size effects wipe away less of the deep attractive core which binds the isoscalar and isovector channels. In general a higher value of will lead to (significantly) more deeply bound isoscalar and isovector bound states, and slightly less bound isovector and isoscalar states. The pattern of relative binding energies between the channels may be understood from Fig. 5. The most deeply bound states occur in the isoscalar channel, where the potential is repulsive near the origin but has a deep attractive well (due again to the isospin factor of 3) near 1 fm. The second most deeply bound states occur in the isoscalar channel, where the term contributes a factor of –3 and there is a deep attraction near the origin. We can see in Fig. 5 that the form factor has reduced the magnitude of the first dip in the oscillating potential (around 0.3 fm), making it smaller than the first bump (around 1.2 fm). This is why the isoscalar channel has deeper binding than the channel. The isovector channels lose the isospin factor of 3, leading to significantly reduced binding in these channels. However, their relative binding is the same: the potential retains the deeper attraction near 1 fm, whereas the isovector channel loses the attraction around the origin due to form factor effects. Hence the isovector channel is the least deeply bound when GeV. Consequently the prediction of bound states in the isovector is the least robust. The situation is very different for the isovector and isoscalar channels.
In both of these channels the point particle potential is repulsive at the origin and they must bind in the first attractive well, which is 1 fm away from the origin. Therefore we expect these numerical results to be robust against finite size effects and a repulsive core. However, in the presence of intense regulation of the potential, the deep attraction being overwritten to strong repulsion with decreasing , shown in Fig. 1, becomes in this channel a strong repulsion being overwritten to deep attraction. Therefore, we conclude that the existence of deep binding in these channels is a very robust result which should be insensitive to strong, short-range dynamics and totally independent of finite size effects of the potential, though both may contribute to deeper binding. The ranges of and which reproduce the binding energies of the and (see Fig. 4) are of particular interest. The case h=1.3 (Table 4) illustrates how it is possible to identify the 1S and 2S 1 respectively as the Y(4260) and Y(4360). In such a scenario it is possible that a third 1 state could occur around 4400 MeV. But of most interest is the prediction of a robust isoscalar exotic 1 bound state in the vicinity of, or even below, the Y(4260). If this exotic state is below the Y(4260), then it may possibly be observed through . Table 4 shows binding in both isovector channels. We therefore must reverse our previous concurrence [cdprl] with the conclusions of Ref. [liu]: when subjected to a more complete analysis, we find that a bound state may exist due to one pion exchange between in the isovector channel. We find it interesting to note that the has a mass of MeV [z4430]. Therefore, if it were a molecule, it would have a binding energy of MeV. This binding energy is compatible with a charged partner of the isovector result for the range of and which reproduces the and . A more complete analysis than that provided here is necessary to make a definitive identification.
However, we find encouraging the possibility that one pion exchange might provide a consistent description of the , , and the with physically reasonable parameters. In addition, we predict doubly charmed ( as opposed to ) isoscalar and isovector states degenerate with respectively the isoscalar and isovector states in . We refer to Ref. [fcthomas] for a discussion of the signs involved. IV.4 Bottom analogues In Table 5 we present the binding energies of some of the low-lying isoscalar states in relative S-wave with , along with the analogous states for comparison. Binding energies are given for a few values of and GeV. Table 5: Binding energies for various isoscalar and states in with ; the binding energies are given for a few values of in the range identified above. The binding energies are generally deeper than in the charmed analogues. This is easily understood: the higher mass of the mesons results in a lower kinetic energy. In general we predict analogous effects in the analogues of the charmed system, subject to differences in the width, which is experimentally undetermined for the . Similar effects may exist in the system. However, the phenomenology of the is more complex and the heavy quark approximation is certainly inadequate. Together with the constraint on implied by the width, this prevents us from making quantitative conclusions; we note only the qualitative possibility that S-wave pion exchange may produce binding in the system. V Discussion The results for binding energies, and even whether states bind at all, are sensitive to parameters, and also to more complicated (possibly more realistic) modelling of the strong interactions. We have focussed solely on the -channel force from virtual pion exchange, specifically, the four fermion intermediate states in the Fock state.
Therefore, we have taken only the real part of the potential in solving the Schrödinger equation, ignoring the imaginary part arising from the exchange of a real, on-shell pion. There are also -channel forces arising from intermediate excited states. More immediately, in our molecular approach there are intermediate states with a real pion, of the form . The ability of a virtual exchanged particle to be on-shell introduces an imaginary component to the matrix element and, hence, to the potential. The effect is to make the energy complex: the real part is taken as the binding energy while the imaginary part is interpreted as the width of the state. That the on-shell intermediate state should manifest itself as a width seems natural, as it represents a direct connection between the bound state and a possible decay channel. The picture is then that the decays into a and the “would be” quasi-molecular bound state disintegrates, or even fails to form. Thus, we expect that the on-shell pion contribution will endow any state produced by this mechanism with a width, or that it will produce a non-resonant background which may obscure the signal. Within our approximations we find deeply bound meta-stable states. The generates widths and background. Whether these states remain visible is then dependent upon the relative importance of neglected forces, such as mixing with or . In general it is difficult to calculate the impact of neglected effects, not least because strong interactions are complicated and we are approximating one particular force as dominant. If the is an example of our states, then its visibility shows that Nature is kind, at least in the channel. It has given a width of MeV and a visibility above background. It could be that this fortune is because a component drives the production, and the rearrangement then drives the signal.
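The identification of the imaginary part of the energy with a width is the usual resonance bookkeeping. Writing the complex eigenvalue as

$$E \;=\; E_B \;-\; \frac{i\,\Gamma}{2},$$

the time evolution $e^{-iEt/\hbar}$ gives a survival probability $\left|e^{-iEt/\hbar}\right|^2 = e^{-\Gamma t/\hbar}$, so $\Gamma = -2\,\mathrm{Im}\,E$ is the decay width and $\tau = \hbar/\Gamma$ the lifetime of the quasi-bound state.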
Thus the conclusion of this analysis is that while it is possible that a deeply bound molecular spectroscopy with signals visible above background can arise, it is not mandatory. However, as we have already noted, the appearance of and are consistent with being the first two states observed in such a spectroscopy. The immediate test of this is to seek evidence for these states in . Unless there is some dynamical suppression, such channels must show strength if a bound system is present. If this first test is passed, then a search for other transitions and for evidence of analogous states in would be warranted. In this latter case we note the apparent presence of an anomalous state [georgehou]. This and other phenomenological implications are the theme of the next section. VI Phenomenology We have studied the molecules and found deeply bound states with , which are degenerate for the channels. However, the number of potentially deeply bound states is very sensitive to parameters. Typically we anticipate the binding energies of the states to have orders of magnitude as follows: 1S MeV; 2S MeV, with an exotic between the 1S and 2S levels. Further reasons to anticipate a rich spectroscopy are that this S-wave exchange can also occur for and the off-diagonal . The strengths for each of these in the heavy quark limit are identical. In practice there will be model dependent perturbations due to mass shifts and mixings; these are beyond the present paper and only merit study if the general features of our model show up in the data. In general: if S-wave pion exchange forms deeply bound charmed molecules comprised of (and manifestly charm analogues), there will be a rich spectroscopy of states in the GeV mass range. These can include states that are superficially charmonium, such as , and , as well as exotic and . In addition there are also states with charmonium character but .
States such as , and may contain in their Fock state and hence be produced at measurable rates; the other states have no such aid, but may be produced in radiative or strong transitions from higher lying molecules. Manifestly charm (, etc.) states are also expected and are degenerate with the charmonium-like states. The pattern and observability of these will depend on the detailed pattern of the spectroscopy. The states most amenable to experimental study are the , . These occur in , and can also arise from the off-diagonal S-wave potential for . Hence there can be a rather rich spectroscopy in the sector. As there are apparently several states of varying statistical significance emerging in the data, we shall primarily focus on this channel here. The best established enigmatic structure in the sector is the , which is seen in . Its typical hadronic width of MeV implies either that is not the dominant decay channel or that 40 years of experience with the OZI rule and strong interactions is wrong. Given the nearness of the thresholds, which can be accessed in S-wave, rearrangement into at low momentum seems reasonable, and has been invoked as a qualitative explanation of these phenomena [closepage]. As and , whereas , then if the dynamics are associated with the nearby and thresholds, such as being a molecule, or a hybrid that is dynamically attracted towards that threshold, then strength should be seen in the channels [closepage, closetalk]. However, if the is a bound state, then the favored strong decay will be , in contrast to the aforementioned or . A preliminary report from Belle [belle2pi] sees no evidence for in the region. This disfavors and potentially also the channel. Thus by default, the possibility that the strength is driven by becomes tantalizing. Thus an immediate consequence of this interpretation is that if the is a molecule, there must be significant coupling to that could exceed that to .
More generally, an unavoidable conclusion of this dynamics is that in the sector the channel has significant strength in the region of any molecular states. Hence we urge measurement of the relative importance of the channels and of (when, in the latter, has been removed). The depth of binding of the ground state with trial wave functions already suggested [cdprl] the tantalizing possibility that a radially excited state could also be bound. Numerical solutions of the Schrödinger equation confirmed that this is likely to be the case in the range of models discussed here. The excitation energy for radial excitation of a compact QCD state is MeV; it takes less energy, MeV, to excite the extended molecular system, which has no linearly rising potential. The spatial extent of the molecular S system is significantly greater than that of hadrons. The rearrangement of constituents leading to final states of the form + light mesons then rather naturally suggests that the lower (radial) states convert to ( ) respectively. In this context it is intriguing that there are states observed with energies and final states that appear to be consistent with this: the 4260 and the possible higher state [babar1, belle1] are respectively 170 MeV and 70 MeV below the combined masses of 4430 MeV. Here again, for a molecular state, we would expect significant coupling to . If these states were to be established as members of molecular systems, one could tune the model accordingly. Further, this could be an interesting signal for a quasi-molecular spectroscopy with transitions among states that could be revealed in, for example, . Indeed, if we identify S and S, then we expect the exotic to occur in the vicinity of the . Given that lattice QCD finds activity for a hybrid cc* signal in this channel in this region, one should now actively search for evidence.
A clear signature is that the hybrid will couple to in either the or combinations; looking for the presence of strength in which does not include should thus be a primary endeavor. The absence of such a channel could have far-reaching implications for theory. While our discussion has centered on charmonium, the remarks hold more generally. Since the attraction of the potential depends only on the quantum numbers of the light , it follows immediately that the flavor of the heavy quarks is irrelevant, at least qualitatively. Hence we expect similar effects to occur in the and sectors. It has been noted that the GeV appears to have an anomalous affinity for [upsilon]. This state is MeV below threshold. In the channel there is an enhancement at 2175 MeV [phi]. This is approximately 125 MeV below the threshold. This is consistent with the spectroscopy; however, as commented earlier, the analysis here is less reliable, as the heavy quark approximation fails and the phenomenology of the pair is more complicated [barnes, lipkin]. The primary test of this picture is that if the states in the to 4.5 GeV region are a deeply bound spectroscopy, then their decays into charm pairs must show strength in the channels. The energy dependence of this channel and that of (with no ) can reveal the mixings between and molecular systems. The presence of exotic is also expected. VII Conclusions In general we find that deeply bound molecules in the system should occur as a result of exchange in S-wave, leading to a potentially rich spectroscopy. Whether such states are narrow enough to show up above background is a question that experiment may resolve. We note however that the emerging data on the states known as and are consistent with being examples of these molecular states. The immediate test is to verify whether the prominent channels with manifest charm in this mass region are .
If this is confirmed, then more detailed studies will be merited, in particular searches for an exotic in the vicinity of 4.2 GeV. This state could be produced via , and/or be revealed in 1. Table 4 with h=1.3 shows a possible spectroscopy consistent with the Y(4260) and Y(4360) as the 1S and 2S states. In this case, the exotic states expected are I=0 also at 4260 and 4360 (in both charmonium-like and manifestly charm channels); the isoscalar at 4250 and 4395; and also I=1 “charmonium” states, including at 4390. As long as one picks and chooses which datum one will fit, it is possible to fit it in a molecular model. A reason is that binding energies are very sensitive to parameters that are not well determined elsewhere. Thus a model designed to fit a single state has limited appeal. The more relevant test is whether a group of states share a common heritage, and whether their production or decay properties reveal the underlying molecular structure. In the particular case here, one can fit the masses and decay widths in tetraquark, hybrid and molecular models. As such, the existence of these states does not discriminate among them. However, the patterns of and the decay channels differ. Thus the sharpest tests of their dynamical structure appear to be in the decay branching ratios. Hence, for example, the as a tetraquark would couple to ; a hybrid or molecule associated with the threshold would be expected to appear in ; molecules associated with the threshold would by contrast have significant strength in the channels. Thus the decay branching ratios of states seem likely to be sharper indicators of their dynamical nature than simply their masses. If our hypothesis is correct, we expect significant strength in the channels in the 4 to 5 GeV region. Such evidence may already exist among the data sets for annihilation involving ISR at BaBar and Belle. One of us (FEC) thanks Jo Dudek for a question at a Jefferson Lab seminar which stimulated some of this work and T.
Burns for discussion. This work is supported by grants from the Science & Technology Facilities Council (UK), in part by the EU Contract No. MRTN-CT-2006-035482, “FLAVIAnet”, and in part authored by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177. The U.S. Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce this manuscript for U.S. Government purposes.
Video encyclopedia Flashback calendar

Dow Jones index hits 25,000
Throughout the course of the rest of 2017 and January 2018, the Dow skyrocketed past several thousand-point milestones, including the symbolic 25,000 in 2018. However, one month later, the Dow suffered its biggest loss since Brexit in 2016.

Hennenman–Kroonstad train crashes
A passenger train operated by Shosholoza Meyl collided with a truck on a level crossing at Geneva Station between Hennenman and Kroonstad, Free State, South Africa. The train was derailed and seven of the twelve carriages caught fire. Twenty-one people were killed and 254 people were injured.

Subway train derails in Brooklyn
A subway train overran the bumper block, injuring 103 persons. Two carriages of the six-carriage electric multiple unit involved were severely damaged when it collided with the buffer stop at a speed of 10 to 15 miles per hour.

'Galavant' first airs on ABC
Galavant is an American musical fantasy-comedy television series, created and written by Dan Fogelman, with music and lyrics by Alan Menken and Glenn Slater. Fogelman, Menken, and Slater also serve as executive producers alongside Chris Koch, Kat Likkel and John Hoberg for ABC Studios.

Lance Armstrong doping case
Because of doping, American former professional road racing cyclist Lance Armstrong was stripped of his seven Tour de France titles. For much of his career, Lance Armstrong faced persistent allegations of doping, but until 2006 no official investigation was undertaken.

The tallest building in the world opens
The Burj Khalifa is a skyscraper in Dubai. The building was opened in 2010 as part of a new development called Downtown Dubai. The total height of 829.8 m (2,722 ft) including the antenna makes it currently the tallest structure in the world.

Spirit rover lands on Mars
Spirit rover landed successfully on Mars at 04:35 Ground UTC. Spirit is a robotic rover on Mars operated by NASA between the years 2004 and 2010.
It operated under the Mars Exploration Rover mission, whose goal was to search for evidence of water activity on Mars.

Bill Belichick quit as 'HC of the NYJ' after one day
Belichick was introduced as head coach of the New York Jets. The day after his hiring was publicized, he turned it into a surprise resignation announcement. Soon after this bizarre turn, he was introduced as the Patriots' 12th full-time head coach.

First mass-produced electric vehicle
One of the most important moments in the history of General Motors was the creation of the General Motors EV1. It was the first mass-produced electric vehicle from a major automaker. The customer response was very positive, but production was ended by General Motors due to lack of profitability.

Gulf of Sidra incident between America and Libya
The aircraft carrier USS John F. Kennedy was sailing toward the eastern Mediterranean Sea. Two United States Navy F-14 Tomcats shot down two Libyan MiG-23 Floggers which the Americans believed were attempting to engage them. Libya accused the U.S. of attacking two unarmed reconnaissance planes which were on a routine mission over international waters.

David Robinson blocks a record 14 shots
David Robinson once blocked 14 shots in a game while playing for Navy. He is the only player with 13 or more blocks in a game to be inducted into the Naismith Memorial Basketball Hall of Fame. Based on his prior service as an officer in the U.S. Navy, Robinson earned the nickname "The Admiral".

Elton John starts a two-week run at #1 on the US singles chart
"Lucy in the Sky with Diamonds" was originally written by John Lennon and Paul McCartney. A cover version of the single by Elton John was recorded at the Caribou Ranch and featured backing vocals and guitar by John Lennon under the pseudonym Dr. Winston O'Boogie.

Nixon says no!
The Watergate scandal was a major political scandal that occurred in the United States. "One year of Watergate is enough," President Nixon declared in his State of the Union address.
The embattled president could not put the issue behind him, however, and after his role in the conspiracy was revealed, Nixon resigned.

Last time the Beatles record as a group
Although there were two further recording sessions for the Let It Be album involving just one member of the Beatles, this was the Beatles' last time recording as a group, although without John Lennon, who was in Denmark with Yoko Ono at the time.

Donald Campbell is killed in water speed record attempt
Known as the most prolific water speed record breaker of all time, Donald Campbell broke eight world speed records on water and on land in the 1950s and 1960s. He died during a speed record attempt; the crash occurred at more than 300 mph, within 200 yards of the end of the measured kilometre.

Erwin Schrödinger dies
Erwin R. J. A. Schrödinger is best known for his work in quantum theory. He was awarded the Nobel Prize in Physics in 1933 for his Schrödinger equation, a wave equation that serves as a mathematical model of how the state of a quantum system evolves. He died of tuberculosis at the age of 73 in Vienna.

Albert Camus dies in an automobile accident
Camus died at the age of 46, in a car accident near Sens, in Le Grand Fossard in the small town of Villeblevin. In his coat pocket was an unused train ticket. He had planned to travel by train with his wife and children, but at the last minute accepted his publisher's proposal to travel with him.

Elvis Presley records his first demo
Before becoming the King of Rock and Roll, 19-year-old Elvis Presley visited Sun Studio in Memphis. The recording session launched Presley's music career. The first four demos, recorded at Presley's expense, were produced by Sam Phillips and featured Scotty Moore on guitar and Bill Black on bass.

Burma gains its independence from the United Kingdom
The nation became an independent republic, named the Union of Burma. The Shan leader Sao Shwe Thaik became the new country's first President and U Nu its first Prime Minister.
Notably, it declined to join the British Commonwealth, unlike most other former British colonies.

Billboard magazine publishes the first-ever pop music chart
Best Sellers in Stores was the first Billboard chart, and big band violinist Joe Venuti was the first No. 1 with "Stop! Look! Listen!". This chart ranked the biggest-selling singles in retail stores, as reported by merchants surveyed throughout the country.

Utah becomes the 45th state admitted to the U.S.
Utah is a state in the western United States. It became the 45th state admitted to the U.S. in 1896. Utah is the 13th-largest by area, 31st-most-populous, and 10th-least-densely populated of the 50 United States. Utah has a population of more than 3 million.

Samuel Colt sells his first revolver to the U.S. Government
The Texas Rangers ordered 1,000 of Samuel Colt's revolvers during the American war with Mexico. Colt's use of interchangeable parts helped him become one of the first to use the assembly line efficiently. His innovative use of art, celebrity endorsements, and corporate gifts made him a pioneer in the field of advertising.

Anniversaries of the (in)famous
1890 – Victor Lustig
1986 – Charlyne Yi
1990 – Toni Kroos
1940 – Helmut Jahn
How Pragmatism Reconciles Quantum Mechanics With Relativity etc

Interview by Richard Marshall.

Richard Healey is the pragmatist philosopher of physics who thinks there's a need to interpret quantum mechanics, that none of the standard interpretations are good enough, that the idea of a nonseparable world helps, and that a pragmatist approach is the way to go. He discusses how to dispel the Feynman mystery, the paradox of Wigner's friend, and how to reconcile quantum mechanics with relativity; whether quantum mechanics is a realist or instrumentalist position; whether quantum mechanics makes ontological claims; time; quantum nonlocality and Dr Bertlmann's socks; and getting free of the prejudices we call common sense. Go figure...

3:AM: Why did you become a philosopher?

Richard Healey: I became a philosopher when I came to realize that this was the best way to make a contribution while pursuing my passion. As a young teenager I became fascinated by the modern physics I read about in semi-popular books and newspaper reports. Trying to understand the world, I was naive enough to think that knowledge of what it was like at a fundamental level was the key to understanding everything else about it! I was also eager to know how people could possibly have found out the things I read about. I endured high-school physics as a necessary evil if I wanted to achieve the deeper understanding I looked forward to at university. Then I heard about a new joint honours degree in Physics and Philosophy at Oxford whose description seemed tailor-made for my interests. I knew about philosophical thinking from my elder brother, who had studied Greats at Oxford (Latin and Greek languages, ancient history and philosophy—modern as well as ancient). So that's where I went for my B.A. I wasn't yet committed to becoming a philosopher, and realized I still didn't know enough fundamental physics. So I went to Sussex University to take a Master's degree in theoretical high energy physics.
I realized during my year at Sussex that creative work in theoretical physics requires unusual talent and an ability to immerse oneself in a very narrow subject with no guarantee of success. I didn't want to pursue that path. I was seeking understanding on a broader range of topics, pursuing what Wilfrid Sellars took as the aim of philosophy—to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term. More specifically, I wanted to understand quantum theory: to struggle with its conceptual problems and to explore its broader implications. So both talents and temperament led me into philosophy, and to Hilary Putnam and W.V.O. Quine at Harvard, where I took my Ph.D.

3:AM: You're an expert in the philosophy of quantum mechanics, amongst other things. Some physicists have said that there's no need to interpret the theory - it is what it is and we should just use it and develop it. After all, disputes about how to use it tend to be short-lived and consensus reached. So why do you think there is a need to interpret the theory, and why is it so hard?

RH: Quantum theory comes in many forms, including the non-relativistic and relativistic quantum mechanics of particles as well as Lagrangian and algebraic quantum field theory: but it is common to lump these all together and call them quantum mechanics. There is a general consensus that any fundamental theory will be some form of quantum mechanics. One expects a fundamental theory to be capable of precise formulation and to say what the world is like at the deepest level. But quantum mechanics confounds these expectations. In the words of the physicist John Stewart Bell, “The problem is this: quantum mechanics is fundamentally about “observations”. It necessarily divides the world into two parts, a part which is observed and a part which does the observing.
The results depend in detail on just how this division is made, but no definite prescription for it is given. All that we have is a recipe which, because of practical human limitations, is sufficiently unambiguous for practical purposes.” He contrasted quantum mechanics unfavorably with classical mechanics in this respect. “In classical mechanics we have a model of a theory which is not intrinsically inexact, for it neither needs nor is embarrassed by an observer.” We could solve Bell’s problem if we could replace Bell’s ambiguous recipe with a precise formulation of quantum mechanics, and show exactly how this should be applied. Bell’s model for a solution would also be a theory that tells us what the world is like: in his terms, it would be a theory of beables, not observables. What an interpretation of quantum mechanics must do is to go beyond the recipe that is good for all practical purposes to achieve a precise formulation of the theory without using vague terms like ‘observer’ and ‘measurement’, and to show how this formulation may be successfully applied. It is not obvious how we could do this except by reformulating quantum mechanics as a theory of beables we could use to describe or represent the world at a fundamental level. Quantum mechanics talks of observables and quantum states (wave-functions), but severe technical and conceptual difficulties arise if one attempts to certify either of these as beables. Attempts to view the quantum state as a beable lead to the notorious measurement problem, while a variety of “no-go” theorems block the attempt to view observables as beables. That is why interpretation has proved so difficult.

3:AM: There are many interpretations, aren't there - the orthodox Copenhagen and its rivals: the Everettian interpretations, a naive realist interpretation, a quantum logical interpretation of the theory, and so on. Are none of these interpretations good enough for you?

RH: No.
My early research convinced me that naive realism—essentially the attempt to portray observables as beables—would not work, and I think there is now general agreement on its failure. Quantum logic might be considered a last-gasp attempt to revive naive realism, but despite its interest for the philosophy of logic it never seemed promising as a way to understand quantum mechanics. There are as many versions of “the” orthodox Copenhagen interpretation as there are proponents: Bohr’s version is in many ways the most interesting, but it is quite different from the version due to Dirac and von Neumann that is often taught to students as orthodoxy. I think Bell put his finger on the main problem with “the” Copenhagen interpretation: it presupposes a notion of measurement without the resources clearly to specify what constitutes a measurement. Everettian interpretations were always popular among cosmologists and are currently enjoying a resurgence among my philosophical colleagues, but I remain skeptical. Despite their elegant solution to the measurement problem and the issue of non-locality, Everettians have yet to convince me that they can make sense of a notion of probability applicable to a deterministically branching multiverse. By challenging views of probability, self-location and materiality, the current decision-theoretic approach due to Deutsch and Wallace does raise fascinating philosophical questions. But I am yet to be convinced by their answers to these questions, and (in my view) it is mistaken to view the universal wave-function as a beable.

3:AM: What do you think a successful interpretation of the theory has to achieve?

RH: I touched on this question already. A successful interpretation must explain how quantum mechanics may be formulated as a precise physical theory and unambiguously applied to real-life physical situations.
My present view is that this can be done without recasting it as a theory of beables, in which case quantum mechanics will not itself describe or represent the physical situations to which it is applied. But by applying quantum mechanics we become better able to describe and represent those situations in non-quantum terms. I say ‘non-quantum’ rather than ‘classical’ to acknowledge that the progress of science naturally introduces novel language to describe or represent the world (Bose-Einstein condensate, Mott insulator, quark-gluon plasma). My point is that characteristically quantum terms like ‘quantum state’, ‘observable’, ‘Born probability’ do not represent beables. So I no longer agree with those philosophers who believe that a successful interpretation of quantum mechanics has to say how the world could possibly be the way quantum mechanics says it is. Any interpretation has to address a number of long-standing conceptual puzzles, including the measurement problem (including Schrödinger’s cat), the problem of non-locality and the problem of Wigner’s friend. I say address rather than solve, because my present view is that these are problems to be dissolved by showing they never arise in the first place if one adopts the right view of quantum mechanics. They are symptoms of a mistaken understanding of the theory.

3:AM: Your views about how to interpret the theory have evolved since your 2009 book. Can you sketch what your initial interpretation of a nonseparable world looked like?

RH: My interest in gauge theories leading up to my book Gauging What’s Real emerged from the attempt to extend to quantum field theory an interpretation of non-relativistic quantum mechanics developed in my first book, The Philosophy of Quantum Mechanics: An Interactive Interpretation.
A key idea of that earlier book was that a compound system like a pair of hydrogen atoms formed by dissociation of a hydrogen molecule could have holistic properties over and above those it inherited from properties of its components. An example would be a property whose best expression in English is having oppositely directed spins—a property of the pair even when neither atom actually has a determinate spin! To describe the history of such a pair one would have to ascribe properties to a region of space(-time) that were not determined by properties of its constituent points. I called that non-separability, and saw it as important to reconciling the “quantum non-locality” involved in violations of Bell inequalities with relativity. This same non-separability would also occur even for a single particle like an electron if (as I thought) its position were not restricted to a point of space at each moment. Another puzzling phenomenon (the Aharonov-Bohm effect) seems to manifest a quite different sort of non-locality: the interference pattern formed when electrons pass by a long, thin solenoid (a current-carrying wire tightly coiled around a long, thin cylinder) depends on the magnetic flux through the cylinder even though the electrons never experience any (electro-)magnetic field in the region through which they pass. I came to think of the AB effect and violations of Bell inequalities as different manifestations of the same phenomenon—non-separability. The idea was that in neither case was there any action at a distance: there didn’t need to be, because what was acted upon (the particle pair, or the electron) was itself not spatially localized. Moreover, in the AB effect what acted was itself not spatially localized. Let me explain. Even in the absence of electric and magnetic fields in the region outside a very long, thin solenoid, classical electromagnetic theory posits spatially non-localized structures called holonomies. 
The holonomy of each closed curve encircling the solenoid once is proportional to the magnetic flux inside the solenoid, and the location on a detection screen of the fringes manifested by interference of electrons passing by the solenoid is a simple function of these holonomies. But a holonomy is a property of a closed curve that is not determined by any properties of the points that make it up. If the passage of an electron by the solenoid were a non-separable process, then it might interact locally with the non-separable holonomies. What does this have to do with quantum field theory? It is customary to introduce the AB effect the way I did, as a phenomenon within the scope of classical electromagnetic theory and non-relativistic quantum mechanics. But while neither theory can be considered fundamental, each is naturally viewed as an ancestor of the quantum field theories of the Standard Model of high energy physics—currently our most successful fundamental theories. Classical EM was the first gauge theory, and non-relativistic quantum mechanics was the first quantum theory. The thought that prompted the investigation leading to my book Gauging What’s Real was that one might come to understand the ontologies of quantum gauge field theories as non-separable. I was encouraged in this thought when I found that some physicists advocated so-called loop representations of these theories. These looked like promising candidates for a formal implementation of a holonomy interpretation. Philosophers continue to puzzle over the ontology of quantum field theories: are they about particles or fields or something else entirely? I had banged my head against that problem for several years in the 1990s: but now I hoped to solve it through an ontology of holonomy properties.
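The dependence of the fringe positions on the enclosed flux can be made quantitative. The following sketch is my own illustration (the function name and the use of SI constants are assumptions, not anything from the interview): the standard Aharonov-Bohm result is that the phase difference between the two electron paths is q·Φ/ħ, a function only of the flux Φ enclosed by the closed curve formed from the two paths, exactly the holonomy property described above.

```python
import math

# Hypothetical illustration: the AB phase shift between two electron paths
# enclosing magnetic flux Phi is Delta_phi = e * Phi / hbar. It depends only
# on the holonomy of the closed curve, not on any field at points the
# electron visits.

E_CHARGE = 1.602176634e-19   # electron charge magnitude, C
HBAR = 1.054571817e-34       # reduced Planck constant, J s

def ab_phase(flux_weber: float) -> float:
    """Phase difference between the two paths for enclosed flux Phi (Wb)."""
    return E_CHARGE * flux_weber / HBAR

# One flux quantum h/e shifts the interference pattern by exactly one fringe.
flux_quantum = 2 * math.pi * HBAR / E_CHARGE
print(ab_phase(flux_quantum) / (2 * math.pi))  # -> 1.0 (one full fringe)
```

The point the code makes concrete: changing the flux inside the solenoid shifts the pattern even though the electron paths lie entirely in a field-free region.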
The hope was that we could come to see the world as non-separable at a fundamental level—that it ultimately consisted of space-time regions bearing non-separable holonomy properties of various kinds interacting locally with one another in a way that could be represented by quantum field theories. But what actually emerged in the book was much less. I still think a holonomy interpretation of classical gauge theories including electromagnetism is viable, and to be preferred in the context of non-relativistic quantum mechanics. But I came to realize that there is no obvious way to extend this to the quantum gauge field theories of the Standard Model. Moreover, the main barrier to its extension was quantum theory! By this time a number of problems had surfaced for my old interactive interpretation even of non-relativistic quantum mechanics. Even though these did not strike me as fatal, responding to them threatened to become a project of adding epicycles that made the view more and more baroque and less and less likely to be extendable to quantum field theory. In Gauging What’s Real I attempted to remain neutral on how to interpret quantum mechanics. Afterwards I began seriously rethinking my views, stimulated by extended visits to Anton Zeilinger’s experimental quantum optics and information institute in Vienna and to the Perimeter Institute for Theoretical Physics in Waterloo, Ontario.

3:AM: You've since become interested in a pragmatist approach, haven't you? But given that from that position meaning comes from use, and everyone agrees about how to use the theory, don't you face a huge problem right from the get-go with this approach?

RH: Good question! Part of my answer was foreshadowed by my answer to question 2. As Bell made clear, it is only at a superficial level that everyone agrees how to use the theory.
If one probes deeper one realizes that there are actually different ways to apply quantum mechanics to a situation and the results depend in detail on how one chooses to apply it. Bell’s recipe—“treat as much quantum mechanically as you need to, so that treating more quantum mechanically wouldn’t make a significant difference”—is (as he stresses) vague and depends on a value judgment by the user of the theory. Experimentalists have no difficulty in making that judgment on a daily basis. But a theoretician (or philosopher) with a conscience must acknowledge that a theory that cannot be stated without a prior judgment on what matters in practice falls short of a precisely formulated scientific theory. Moreover, such a theory can never deliver a single, consistent story of what the world is like at a fundamental level. This would not be so bad if there were in principle a right way to apply the theory, though we could never do this in practice because of the intractable complexity of theory and world. But Bell’s point is that the structure of quantum mechanics itself implies there is no right way to apply it—it is intrinsically inexact. Now let me back up a bit, since as a pragmatist I don’t entirely agree with Bell here. He thinks classical mechanics did not face this problem since “at least one can envisage an accurate description of the world in terms of classical mechanics”. I don’t think one can. Classical mechanics makes available a vast collection of mathematical models of increasing complexity (containing more and more particles spread throughout the universe and interacting in a welter of ways). One applies classical mechanics by choosing a model and taking it to represent a physical situation. Our world is so huge and complex that any model capable of accurately representing it would be so far beyond human cognitive resources that we could not use it. But only in use does a mathematical model represent anything. 
So we cannot envisage an accurate description of the world in terms of classical mechanics. All we can do is develop better and better inaccurate models to serve particular descriptive, predictive and explanatory purposes. The same thing is true in quantum mechanics. By treating more and more quantum mechanically we can get better and better predictions, but also better and better descriptions and explanations. We use quantum models when representing physical situations even though no quantum model is itself used to represent a physical situation. We use them to make descriptive non-quantum claims that figure in predictions and explanations. By treating more and more quantum mechanically we can make those descriptive claims better and better. We can think of this as an improvement in accuracy, but not if we think of increased accuracy as improved approximation to the one true description of the world. This is where the pragmatism about meaning comes in. The content of a non-quantum claim accrues to it through its inferential links to other claims, and ultimately to perception and action. By treating more and more quantum mechanically we become able to describe the world in non-quantum terms by claims that have a fuller and richer content, as manifested by the increased number of reliable inferences they license. We can represent the content of each of these claims truth-conditionally, but only trivially, since we have no independent descriptive language to fall back on. Increased accuracy cannot be understood as better and better approximation to any truth that we can express—in either quantum or non-quantum terms. Since claims in classical mechanics get their content through their inferential links also, descriptive claims based on quantum mechanics are no less precise in their content than those based on classical mechanics.

3:AM: So how does the pragmatist approach dispel the Feynman mystery which lies at the theory's heart?
RH: Feynman located the mystery already in a two-hole interference experiment with many individual particles—he chose electrons. Focusing attention on the proposition (A) that each electron either goes through hole 1 or it goes through hole 2 [and not both], he rehearsed a familiar argument with the (false) conclusion that no interference fringes will appear in the statistical pattern of localized “hits” registered by the electrons on a detection screen placed behind the holes. Two other patterns with only one of the holes open may be recorded, each in a separate experiment: neither experiment produces a pattern with interference fringes. Assume (A). If an electron goes through hole 1 then it will behave the same way whether hole 2 is open or not, and an electron going through hole 2 will behave the same whether or not hole 1 is open. So the pattern on the screen in the original experiment with both holes open must be formed by combining the results of the two other experiments: the pattern with hole 2 closed and the pattern with hole 1 closed. The pattern formed by combining these two patterns also displays no interference fringes. But the actual pattern formed by the electrons with both holes open does display interference fringes. So (A) must be false. Now any apparatus capable of detecting through which of the two open holes an electron has just passed in the experiment never detects anything but an entire electron just behind one hole or the other. So (A) is true of all observed electrons. But as the sensitivity of such an apparatus is increased the interference fringes disappear. (A) cannot be checked experimentally without destroying the interference pattern! Here is what Feynman concluded from his analysis of the two-hole interference experiment: “...if one has a piece of apparatus which is capable of determining whether the electrons go through hole 1 or hole 2, then one can say it goes through either hole 1 or hole 2.
[otherwise] one may not say that an electron goes through either hole 1 or hole 2. If one does say that, and starts to make any deductions from the statement, he will make errors in the analysis. This is the logical tightrope on which we must walk if we wish to describe nature successfully.” But how could the absence of a piece of apparatus revoke one’s right to free speech? Presumably although one can assert (A) in any circumstances, Feynman’s advice was that one should do so only when the apparatus is present, because only then is (A) meaningful and conducive to correct inferences. But the mystery remains: How can the presence of a piece of apparatus render (A) both meaningful and correct, and what exactly is meant by the presence of such a piece of apparatus? The key to answering both questions is quantum decoherence. A quantum system like an electron (or an apparatus) is very sensitive to environmental interactions. (That is why it is so hard to build a quantum computer.) The effect on a system of its environment may itself be modeled quantum mechanically, though usually only in an idealized way because of the diversity and complexity of actual environments. At least in simple idealized models, the entangled quantum state of system+environment extremely rapidly approaches, and then remains in, a special form. For many environmental interactions this form privileges the system’s position, in the following sense: by ascribing a definite (though perhaps unknown) position to the system one can make many reliable inferences about its behavior. On a pragmatist inferentialist view of content, this helps to endow a claim about the system’s position with a high degree of content, while claims about other properties (for example its energy) lack such well-defined content. A claim like (A) is both meaningful and correct in the presence of environmental interactions modeled as privileging the electron’s position in this way. 
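The role of environmental decoherence can be made concrete in a toy calculation. This sketch is my own illustration (the geometry and numerical values are assumptions, not from the interview): two path amplitudes reach each point on the screen; coherent addition keeps the oscillating cross term and produces fringes, while a which-path record in the environment destroys the cross term, leaving the flat sum of the two single-hole patterns.

```python
import numpy as np

# Toy two-hole model: amplitudes psi1, psi2 reaching screen position x.
# Coherent: intensity |psi1 + psi2|^2 (fringes from the cross term).
# Decohered (environment records the hole): |psi1|^2 + |psi2|^2 (no fringes).

x = np.linspace(-1.0, 1.0, 5)    # screen positions (arbitrary units, assumed)
k, d = 30.0, 0.5                 # wavenumber and half-separation (assumed)
psi1 = np.exp(1j * k * np.sqrt(1 + (x - d) ** 2)) / np.sqrt(2)
psi2 = np.exp(1j * k * np.sqrt(1 + (x + d) ** 2)) / np.sqrt(2)

coherent = np.abs(psi1 + psi2) ** 2                 # oscillates across x
decohered = np.abs(psi1) ** 2 + np.abs(psi2) ** 2   # flat: cross term gone

print(coherent.round(3))
print(decohered.round(3))  # constant 1.0 everywhere: interference destroyed
```

This is the quantitative face of Feynman's "logical tightrope": the same two amplitudes give different statistics depending on whether the environment carries which-hole information.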
No “observer” need have set up any apparatus to exploit the electron’s interaction with the environment to detect its position by noting its effect on this environment (e.g. by the ambient light scattered from the electron). In the presence of such an environment quantum mechanics assigns a definite probability to an electron’s passing through hole 1 rather than hole 2, but no definite probability to other properties this environmental interaction does not privilege. So the pragmatist approach dispels Feynman’s mystery not by describing an electron’s journey through the holes to the screen, but by showing how quantum mechanics itself can help us to see how much we can significantly say about the electron in different environmental circumstances, and how we should apportion our degrees of belief in contrary significant claims about it. With no position-decohering interactions before the screen, the advice is not to say anything about an electron’s route through the experiment, while interaction with the screen licenses use of quantum mechanics in estimating where it is likely to be detected there.

3:AM: You also argue that Wigner's paradox is best approached as a pragmatist. Can you sketch out the puzzle and say why the pragmatist approach is superior to other attempts to solve it?

RH: This is actually the paradox of Wigner’s friend (in a paper I called the friend John, after Eugene Wigner’s fellow high-school student in Budapest, John von Neumann). It goes like this. Imagine Eugene’s friend John conducting a quantum measurement on something (the spin of a silver atom, say) inside a laboratory that is completely isolated from the rest of the world—by hypothesis there are no mechanical, electromagnetic or any other physical interactions between the lab and its external environment (a condition that would be completely unrealizable in practice).
John observes the atom as spinning up along his chosen axis—it is detected in the upper half of his detection screen—and writes the result in his notebook. Meanwhile, Eugene remains outside the laboratory where he is physically unable to observe what is going on inside. According to Wigner (and many others, including Dirac and von Neumann), John observes a determinate result of his measurement only insofar as the quantum state (“wave function”) of the atom (+detection screen+ notebook entry+...) ceases to be an entangled superposition, but physically collapses onto one of that superposition’s components, corresponding to spin up (rather than down). But according to Eugene, who has not (yet) made any observation, the quantum state of the entire lab (including John’s notebook and John’s body and brain as well as the silver atom and detection screen) remains an entangled superposition. So Eugene and John assign different quantum states to the lab and its contents—one representing a determinate result of John’s experiment, the other representing no definite result. If Eugene now enters the lab (inevitably interacting physically with it) and observes its contents, it is his (Eugene’s) observation that then collapses the lab’s state to produce a result of John’s measurement. When he asks John what result he obtained, John will say “spin up”. Eugene will not take this as a true report of what happened before he entered the lab, but a physical response brought about only by his observation on entering the lab (even though Eugene’s further examination of the lab’s contents will reveal multiple “records” apparently confirming the truth of John’s report). Wigner himself (at one time) proposed to resolve this paradox by supposing that it is consciousness (and only consciousness) that collapses the quantum state. 
On this supposition, a collapse occurred as soon as John became aware of the result of his experiment, and Eugene simply found this out when he entered the lab—Eugene’s subsequent observation did not need to induce any further collapse. A pragmatist dissolves the paradox by rejecting Wigner’s view that the quantum state represents the physical condition of a system to which it is assigned. Instead, relative to a specified physical situation, a quantum state provides an objective guide for any agent who might be in that situation—a guide to the significance of claims about a system, and to what credence to attach to each significant claim. So quantum state “collapse” is not a physical process, but an objective constraint on updating beliefs in the light of a change in physical (and so epistemic) situation. And differently situated agents (like John and Eugene) should consistently assign different quantum states to the same system—neither of which serves to represent its physical condition. John’s measurement yields a determinate result as soon as the silver atom interacts with the detection screen, whether or not John or anyone else becomes conscious of this result. John and Eugene use their respective quantum state assignments to adjust their degrees of belief about what this result is, each in the light of all the information physically available to him at the time.

3:AM: Does pragmatism help resolve the issue of reconciling quantum mechanics with relativity?

RH: Yes, in three ways. First, by adopting the pragmatist view of the non-representational function of the quantum state briefly sketched in my answer to question 8. Second, by understanding probability in terms of its role as providing an objective guide to credence (degree of belief) for a physically situated (and so epistemically limited) agent.
Third, by understanding causation in terms of its role as providing an objective guide to an agent’s assessment of the chances of various possible consequences of his actions. You can see how all three ways work together in a classic example that exhibits the apparent conflict between quantum mechanics and relativity—Bohm’s version of the Einstein-Podolsky-Rosen thought-experiment. If one adopts these three pragmatist views it becomes clear why there is no conflict. In their thought-experiment, EPR applied quantum mechanics to a pair of systems in an entangled state. Bohm considered a pair of systems whose spin states are entangled. This version is more easily realized in real experiments. Quantum mechanics (correctly) predicts that in Bohm’s entangled state, a measurement of any particular spin-component on one system is certain (probability 1) to yield the opposite result to a measurement of the same spin-component on the other system. It also (correctly) predicts that a measurement of spin-component with respect to any axis on either particle alone has probability ½ of a spin up outcome and probability ½ of a spin down outcome. Consider a case in which a spin-component with respect to some axis is measured on each particle in a Bohm pair when the particles are far apart. Suppose the decision as to which spin-component is to be measured on each particle is made by adjusting which axis each apparatus is set to independently, randomly, and immediately before the measurements are carried out. If the settings and measurements occur far enough apart in the two wings of the experiment then not even light could travel between either the setting or the measurement event in one wing and either setting or measurement event in the other wing: (in terms of relativistic space-time structure, these events at one wing are space-like separated from the corresponding events at the other wing). 
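The quantum predictions quoted above follow from a one-line formula. This is the standard textbook calculation for Bohm's spin-singlet state (the sketch and function name are mine, not from the interview): for spin measurements along axes separated by angle theta, the probability of opposite outcomes is cos²(θ/2), while each wing's marginal probability of spin up is ½ for every setting.

```python
import numpy as np

# Singlet-state prediction for an EPR-Bohm pair: probability that the two
# wings register *opposite* results when their axes differ by angle theta.
def p_opposite(theta: float) -> float:
    return float(np.cos(theta / 2.0) ** 2)

print(p_opposite(0.0))        # same axis: perfect anti-correlation -> 1.0
print(p_opposite(np.pi / 3))  # axes 60 degrees apart -> 0.75
# Each wing alone: spin up with probability 1/2, whatever the setting,
# so neither outcome by itself carries any signal from the other wing.
```

Violations of Bell inequalities arise from the angle dependence of exactly this correlation function, measured at space-like separation as described above.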
Experiments like this have been done, and their results bear out the quantum predictions, not only for the perfect (anti-)correlations when the apparatuses in both wings are set to measure spin-component with respect to the same axis, but also for the magnitude of the less-than-perfect correlations when these axes differ by specific angles. Focus on the perfect anti-correlations at the same settings. Although the outcome at each wing appears to be an individually random (probability ½) event, whether a particular outcome occurs in one wing is completely determined by the outcome in the distant wing (it has probability 1 or 0, depending on that outcome). If the outcome at one wing causally determines the outcome at the other, this is difficult to reconcile with relativity. Nothing in relativity breaks the symmetry between such a pair of space-like separated events to mark one as a cause of the other: in particular, neither event occurs invariantly earlier. If there is a physically asymmetric causal dependence, this conflicts with fundamental relativistic (Lorentz) invariance. Fortunately, there are reasons to deny that the counterfactual dependence between the distant outcomes is causal. To see how these arise it is helpful first to consider the notion of chance in relativity. General probabilities like those provided by quantum mechanics are useful to a physically situated agent as sources of authoritative advice about what to believe and what to do. Such advice pertains to particular, individual events the agent is not in a position to be certain about. Application of a general probability statement yields the chance of such an event, and it is this chance that is authoritative over the agent’s beliefs. David Lewis captured the constitutive connection between chance and credence in his Principal Principle, which, he said, tells us everything we know about chance.
Its basic idea was to take the chance of an event as making redundant any other accessible information when rationally setting one’s degree of belief in its occurrence. Lewis took all information about the past to be (in principle) accessible. It followed that the chance of an event typically changes as more and more historical information becomes accessible, until the chance becomes either 1 or 0 at the time the event does or does not occur. In the absence of an absolute time in relativity, the analog of the past is the space-time region encompassed by the backward light-cone of a space-time point, and the analog of its future is the space-time region encompassed by its forward light-cone. Assuming nothing travels faster than light, the information accessible at a point is confined to what happens in its backward light-cone: what happens in space-like separated regions outside its light-cone is just as inaccessible as what happens in its future light-cone. The natural adaptation of Lewis’s Principle to relativity makes the chance of an event relative not to time, but to a space-time point. This has the important consequence that two agents moving in the same way but in different places should sometimes assign different chances to the same event at the same time (relative to their state of motion). Suppose Alice is in one wing of an EPR-Bohm experiment while Bob is in the other. Suppose also that Bob’s outcome occurs at time tb momentarily earlier than Alice’s at ta with respect to their common state of motion, even though their outcomes are space-like separated. At any time t between tb and ta the chance of Alice’s outcome being spin-up is ½ where Alice is, but either 0 or 1 where Bob is. So the question as to whether Alice’s outcome was predetermined is not well defined. The general probabilities supplied by quantum mechanics yield both Alice’s chance and Bob’s chance at t, and this is the advice each should then take when setting credences at t. 
Alice and Bob are offered different advice, but in each case the advice is appropriate to one so physically (and therefore epistemically) situated. To extract this advice from quantum mechanics, Bob can consult the quantum state he should assign to Alice’s particle just after tb. This state takes account of his outcome: it is updated just the way it would be if it had physically collapsed, though there was no physical collapse and nothing changed in Alice’s wing at tb. Since Alice is then not in a position to know about Bob’s outcome she cannot (and should not) assign this quantum state to her particle. Causation is linked to chance by the principle that e causally depends on f if and only if some hypothetical intervention only on f would alter the chance of e. Such an intervention need not be within the power of any actual agent: it need not even be physically possible. But to evaluate the claim that e causally depends on f one has to adopt the perspective of a hypothetical agent able to intervene and so alter f. This follows from the constitutive role of causation as a guide to action. At t, Alice’s chance of each possible outcome of her measurement is ½ irrespective of Bob’s outcome. So no hypothetical intervention only on Bob’s outcome would change this chance. At t, Bob’s chance of Alice’s outcome is either 0 or 1, depending on Bob’s outcome at tb. No hypothetical intervention on Bob’s outcome is possible at t, since by then Bob’s outcome has already occurred. Would a hypothetical intervention only on Bob’s outcome prior to tb alter Bob’s chance at t of Alice’s outcome? This question presupposes that it makes sense to speak of a hypothetical intervention only on Bob’s outcome. But for anyone who accepts quantum mechanics this makes no sense! Bob’s outcome is the result of a random process whose possible outcomes each have fixed probability ½: some interventions might alter this process, but not just by altering its outcome. 
Since no possible intervention only on the outcome in one wing would alter any chance of an outcome in the other wing, the dependence between these outcomes expressed by their perfect (anti-)correlations is not causal. Moreover, the chances, probabilities and quantum state assignments underlying this analysis may all be understood in a way that is manifestly consistent with fundamental relativistic (Lorentz) invariance. 3:AM:So does your pragmatism at work in these two cases mean that we should think of quantum mechanics as a realist or an instrumentalist theory or is it a middle way? RH:Too often contemporary philosophers apply the terms ‘realism’and ‘instrumentalism’ loosely in evaluating a position, as in the presumptive insult “Oh, that’s just instrumentalism!” Each term may be understood in many ways, and applied to many different kinds of things (theories, entities, structures, interpretations, languages, ....). I once characterized my pragmatist view of quantum mechanics as presenting a middle way between realism and instrumentalism. But by adopting one rather than another use of the terms ‘realism’ and ‘instrumentalism’ one can pigeon hole my view under either label. In this pragmatist view, quantum probabilities do not apply only to results of measurements. This distinguishes the view from any Copenhagen-style instrumentalism according to which the Born rule assigns probabilities only to possible outcomes of measurements, and so has nothing to say about unmeasured systems. An agent may use quantum mechanics to adjust her credences concerning what happened to the nucleus of an atom long ago on an uninhabited planet orbiting a star in a galaxy far away, provided only that she takes this to have happened in circumstances when that nucleus’s quantum state suffered suitable environmental decoherence. 
According to one standard usage, instrumentalism in the philosophy of science is the view that a theory is merely a tool for systematizing and predicting our observations. For the instrumentalist, nothing a theory supposedly says about unobservable structures lying behind but responsible for our observations should be considered significant. Moreover, instrumentalists characteristically explain this alleged lack of significance in semantic or epistemic terms: claims about unobservables are meaningless, reducible to statements about observables, eliminable from a theory without loss of content, false, or (at best) epistemically optional even for one who accepts the theory. My pragmatist view makes no use of any distinction between observable and unobservable structures, so to call it instrumentalist conflicts with this standard usage. In this view, quantum mechanics does not posit novel, unobservable structures corresponding to quantum states, observables, and quantum probabilities; these are not physical structures at all. Nevertheless, claims about them in quantum mechanics are often perfectly significant, and many are true. This pragmatist view does not seek to undercut the semantic or epistemic status of such claims, but to enrich our understanding of their non-representational function within the theory and to show how they acquire the content they have. There is a widespread view that the role of the wave-function (or more general mathematical object) is to represent a novel physical structure—the quantum state—whose existence is evidenced by the theory’s success. In this view, a wave-function represents a physical structure that either exists independently of the more familiar physical systems to which claims about positions, spin etc. pertain or else grounds their existence and properties. From this realist perspective, it may seem natural to label as instrumentalist any approach opposed to that account of the quantum state. 
But a pragmatist should concede the reality of the quantum state; its existence follows trivially from the truth of quantum claims ascribing quantum states to systems. What he should deny is that quantum state ascriptions are true independently of or prior to the true magnitude claims that (in his view) back them. A more radical pragmatist would reject the representationalist presupposition of this realist/instrumentalist dilemma: the assumption that mere representation is both a (key) function of a novel element of theoretical structure and figures centrally in an account of its content. The truth of a quantum state ascription trivially implies that a wave-function represents something, much as the truth of ‘1+1=2’ implies that ‘1’ represents the number one. By eschewing a ‘thicker’ notion of representation, this more radical pragmatist could seek to undermine the view that representation of a tolerably insubstantial sort could either be a non-perspectival function of an element of theoretical structure or usefully appealed to in an account of its content. I’m not presently convinced you have to be so radical to understand the significance of the quantum revolution! 3:AM:Does this pragmatist approach change what you used to think about gauging what's real? RH:It doesn’t change much if anything about what I said in the book about how to understand classical gauge theories. But it does help me to see why that way of thinking didn’t provide a good guide to understanding their quantum counterparts. In particular, non-separability, though it exists, is not nearly as important as I used to think in understanding a quantum theory. And the thought that quantum gauge field theories posit a non-separable world now strikes me as mistaken. 
Since I now think of quantum theories of all kinds as “ontologically light” I have come to a novel resolution of the vexed question of what quantum field theories are about: like all quantum theories, they introduce no novel ontology, but advise their users on the significance and credibility of claims (now including ontological claims) about other things. So a quantum field theory may be used to make claims about particles in one context, and about classical fields in another context. And quantum field theories themselves offer advice on the contexts that make each type of ontological claim appropriate—advice made explicit through the application of quantum field-theoretic models of decoherence. One interesting thing I haven’t thought about much is what contexts (if any) would make appropriate claims about non-separable holonomy properties on the basis of a quantum gauge field theory. 3:AM:A key question for us all is how macroscopic systems can be explained by the microscopic. Are we wrong to think of this in terms of trying to reduce the macro to the micro? Ladyman and Rosshave argued that there are different levels but not a fundamental one. What do you think? RH:This question was posed without mention of quantum mechanics, even though this is (one of) our most fundamental theory/(ies). Quantum mechanics is often described as a theory of how the world behaves at the microscopic level. But it’s both more and less than that. Quantum mechanics was first applied at the atomic scale. Since then it has been successfully applied over an enormous range of length, time and energy scales, from applications of quantum chromodynamics to calculate the proton-neutron mass difference through the explanation of massive superconducting magnets used in CAT scans and the Large Hadron Collider, up to applications to quantum cosmology including the emergence of large scale structure through quantum fluctuations in the very early universe. 
So it’s not just a theory of the microworld. On the other hand, it’s not clear that quantum mechanics is used to describe the world in any of these applications, despite physicists’ tendency to call any successful application of a theory a description! Indeed, in my pragmatist view the function of the distinctively quantum elements of the theory’s models is not to describe the world but to advise us on how better to describe it in other terms. One respect in which my view of quantum mechanics is not instrumentalist is that I take quantum theory to represent an enormous advance in our ability to understand and explain natural phenomena, notably including many macroscopic phenomena like superfluidity and Bose condensation as well as more familiar things like colors and chemical properties of elements and compounds, lasers, atomic clocks in the GPS system, different types of magnetism, semiconductors, nuclear fission and fusion. But there are several reasons why the explanation is not best described as taking the form of a reduction of (a theory describing) the phenomenon to quantum mechanics. Reduction is often thought to take the form of a derivation of the laws of the reduced theory from those of the reducing theory. But in my view quantum mechanics has no laws! In particular, the Schrödinger equation is not a fundamental dynamical law representing the evolution of a physical magnitude (the quantum state), and the Born rule is not a fundamental stochastic law. This follows from the fact that neither quantum states nor quantum probabilities are physical magnitudes. Other philosophers (van Fraassen, Giere) also downplay the significance of laws in understanding the structure of a scientific theory. But there is a much more widespread acceptance of the importance of models in this context. The predominant view is that the primary function of a theory’s models is to represent physical systems. 
So one could think of reduction as corresponding to the embedding of the reduced theory’s models into those of the reducing theory, thereby connecting their representational structures. While I think this is a pretty good start in understanding many reductions in classical physics (like that of light to electromagnetic radiation, or the gas laws to kinetic theory), it won’t help us to understand how quantum mechanics can help explain macroscopic phenomena. This is because (in my pragmatist view) models of quantum theory do not function representationally. So while I think quantum theory helps us to understand all kinds of otherwise puzzling phenomena, it does not do this by saying what’s going on at a deeper level: ontologically speaking, there is no quantum level. Quantum theory is fundamental to contemporary physics, and is likely to remain so for the foreseeable future. But it does not contain fundamental laws, and does not contribute its own fundamental ontology. Since quantum mechanics is in these ways parasitic on other descriptive or representational frameworks it cannot be expected to provide a basis for the reduction of the macroscopic to the microscopic. Nor, therefore, can anything else within the horizon of contemporary physics. Some philosophers (Jonathan Schaffer, for example) have seriously considered the possibility that there is no fundamental level because, ontologically speaking, “it’s turtles all the way down”. My view is very different. The “levels” metaphor is of limited value. Here’s a different metaphor. Theories in physics form a team, and quantum mechanics is a vital player—without quantum mechanics there are many, many things we couldn’t understand about our world. But quantum mechanics can’t play every position at once, and no player is indispensable—physics can do a lot without quantum mechanics. 3:AM:You once asked whether we could coherently deny the reality of time. Now that you're a pragmatist, how do you answer the question? 
Has anything changed? RH:I argued that it is not coherent to deny the reality of time, but that we may some day come to realize that time is not fundamental, and that temporality emerges (conceptually, not successively!) from some more fundamental physical structure(s). My idea was that we may come to think of time as real in the way that color is real—a non-fundamental feature of the physical world of particular interest to folks like us because of our physical constitution. Looking back on it, that was already a pragmatist idea though I didn’t think of it that way then. I can now add another reason for not denying the reality of time (pace Carlo Rovelli, with whom I agree on many things!) In my pragmatist view, any form of quantum theory is tailored for the use of physically situated agents like us. My answer to question 9 made it apparent how important to our physical situation is our location in time (indeed, in space-time). This is a reason for skepticism about the possibility that space-time might emerge from an application of a quantum theory to something like a spin-foam, as in loop quantum gravity. The worry is whether it could make sense to talk of applying a quantum theory in a pre-spatiotemporal world. 3:AM:How spooky is quantum nonlocality? (Well, you did ask!) Is reality genuinely spooky from the quantum theory perspective. What do you find weird (if anything) and should common sense guide theorists - or is it actually a hindrance? RH:This is really two questions. I’ll start with quantum nonlocality. As I said in answer to question 9, two things are not spooky about quantum nonlocality: There is no instantaneous action at a distance, and quantum mechanics meshes beautifully with relativity theory. 
But quantum entanglement and the theory’s successful explanation of violations of Bell inequalities (as manifested in its correct predictions for the correlations I described in that answer) have brought us face to face with a surprising and perhaps disquieting feature of the world. I can do no better than quote John Bell: Of course, Bell showed it is not just the same—“the reasonable thing just doesn’t work.” The full patterns of correlation correctly predicted by quantum mechanics for all the correlations described in my answer to question 9 cannot be explained as resulting from a common cause that separately and independently pre-determines the response of Alice’s and Bob’s detectors to each particle in a pair no matter what that detector happens to be set to. This is so even though that seemed to be the only possible explanation of the perfect (anti-)correlations when they ended up with the same settings, barring direct causal connections between space-like separated events at the two wings. It is as if Bertlmann’s second sock somehow always assumes a different color even though neither sock had a color before it was examined. In one paper, Bell stated an intuitive principle of local causality, which he later attempted to make more precise to prove his result in greater generality: In my pragmatist view, quantum non-locality does not show that space-like separated events are causally connected in a way that would conflict with this principle. But there is still a serious tension with the first part of Bell’s condition. We can locate a common cause of correlated space-like separated outcomes in their common past (the overlap of their backward light cones). But even when this cause has been fully specified, the unconditional probability of one outcome still differs from its probability conditional on the other outcome. 
And we cannot use quantum theory to explain why outcomes of experiments on systems assigned entangled quantum states are correlated as they are in the usual way—by describing a continuous causal process connecting an invariantly earlier common cause to these outcomes. In daily life an earlier common cause of a regular correlation between distant events is always connected to each such event by a continuous causal process: and after this cause has been fully specified, the probability of an event here is independent of the outcome there. I admit I still think this is weird, and I crave some deeper explanation of the correlations. But the only suggestions I’ve seen of where to look for one strike me as just as weird as the phenomena themselves (retro-causation, locality in some higher dimension, branching worlds, ...) Now for the second part of your question. Common sense is one guide for a theorist, but it should always be used with caution. A wise theorist should bear in mind the view (attributed to Einstein in 1948) that common sense is actually nothing more than a deposit of prejudices laid down in the mind prior to the age of eighteen. Many ideas in physics have proved to be important even though they conflict with common sense. For years I have tried without success to convince my highly intelligent brother that no inconsistency arises between the invariance of the speed of light and results of thought-experiments designed to measure this speed. I have come to suspect that philosophers and even first rate physicists are sometimes misled by their common sense intuitions about causation, probability, and even time into trying to locate these as fundamental elements of physical reality rather than thinking of their objectivity as arising from the essential role they play in the lives of agents such as ourselves. We should all strive to free our imaginations from the prejudices of our eighteen-year-old selves. 
3:AM:And for the readers here at 3:AM, are there five books you could recommend to take us into your philosophical world?

RH:
Richard Healey, The Quantum Revolution in Philosophy, Oxford University Press (when it comes out—maybe in 2016?). Until then:
Simon Friederich, MacMillan
John S. Bell, Speakable and Unspeakable in Quantum Mechanics, 2nd Edition, Cambridge University Press
Huw Price, Naturalism Without Mirrors, Oxford University Press
Tim Maudlin, Quantum Non-Locality and Relativity, 3rd Edition, Wiley-Blackwell

(The only one I can fully endorse is the first book, but it hasn’t been published yet!)

Richard Marshall is still biding his time. Buy his book here to keep him biding!
An accurate and realistic description of materials of scientific or technological interest requires ab-initio methods that are able to handle a variety of phenomena such as non-collinear magnetism, spin-orbit coupling effects, (external) electric fields, correlation effects, low dimensions, etc. Within the vector spin-density formulation of density functional theory we developed a program, FLEUR, which allows us to investigate materials properties on a quantum mechanical level. This massively parallelized program is based on the full-potential linearized augmented planewave (FLAPW) method for bulk, film and wire geometry. With this method, it is possible to accurately describe a wide variety of systems with open structures and low symmetry. Force calculations enable us to simultaneously determine the magnetic and structural ground state. (S. Blügel)

JuNoLo: The Jülich Nonlocal Code

JuNoLo is a massively parallel code that implements vdW-DF theory [M. Dion et al., Phys. Rev. Lett. (2004) & K. Lee et al., Phys. Rev. B (2010)]. The code works as a postprocessing tool using the charge density obtained from some density functional theory code: P. Lazić, N. Atodiresei, M. Alaei, V. Caciuc, S. Blügel and R. Brako, Computer Physics Communications 181, 371 (2010). Given the charge density n(r), we only have to calculate the nonlocal correlation energy

E_c^nl = (1/2) ∫∫ n(r) φ(r, r') n(r') dr dr'

(N. Atodiresei)

Orbital-Dependent Density Functionals

For many years, local and semilocal density functionals, such as the local-density approximation and the generalized gradient approximation, have been the standard in electronic structure calculations based on density functional theory. With the advent of increasingly powerful computers, more sophisticated nonlocal orbital-dependent functionals are becoming more and more popular. 
Their simplest variants are the hybrid functionals, which contain a certain fraction of exact exchange admixed with local or semilocal functionals. The self-interaction error is thus partially canceled, which improves the description of strongly correlated materials and oxides. We have implemented two of the most popular hybrid functionals (PBE0 and HSE) into the FLEUR code. A treatment of orbital-dependent functionals (e.g., the exact exchange functional) within the Kohn-Sham formalism, which requires the effective potential to be purely local, is enabled by the optimized-effective-potential (OEP) method. A novel incomplete-basis-set correction has made the calculations particularly efficient and stable. The next logical step would be to add an orbital-dependent correlation functional, whose nonlocality and frequency dependence will make it possible to account for the van-der-Waals interaction, including the dispersive force created by fluctuating dipoles. With the adiabatic-connection fluctuation-dissipation theorem we can make a connection to many-body perturbation theory, as it allows one to construct density functionals in a systematic manner from the frequency-dependent density-density correlation function, which can be expanded in terms of Feynman diagrams. (M. Betzinger, M. Schlipf, C. Friedrich)

Exact Diagonalization

A straightforward approach to solving the many-body problem is to simply diagonalize the Hamiltonian. Of course this can only be done for finite systems, as the Hamiltonian is then a matrix of finite dimension. Though finite, this dimension grows with increasing system size to astronomical proportions. Already for fairly small clusters, tens of gigabytes of memory are needed for storing even a single many-body wave-function. Thus, while providing an exact solution to the many-body problem, it is very difficult to eradicate finite-size effects without using extremely large computers. 
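As an illustrative sketch of this approach (a generic spin-chain example chosen for brevity, not the group's actual production code), one can build the sparse Hamiltonian of a small Heisenberg chain and obtain its lowest eigenvalue with a Lanczos-type eigensolver:

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

# Pauli matrices; i*sigma_y is used so that everything stays real
sx = csr_matrix(np.array([[0., 1.], [1., 0.]]))
isy = csr_matrix(np.array([[0., 1.], [-1., 0.]]))
sz = csr_matrix(np.array([[1., 0.], [0., -1.]]))

def site_op(op, site, n):
    """Embed a single-site operator at position `site` of an n-site chain."""
    return kron(kron(identity(2**site), op),
                identity(2**(n - site - 1)), format='csr')

def heisenberg_chain(n):
    """Sparse H = sum_i sigma_i . sigma_{i+1} (open chain, Pauli units).
    The 2**n Hilbert-space dimension shows the exponential growth of the
    many-body problem."""
    H = csr_matrix((2**n, 2**n))
    for i in range(n - 1):
        H = H + site_op(sx, i, n) @ site_op(sx, i + 1, n)
        H = H - site_op(isy, i, n) @ site_op(isy, i + 1, n)  # sy.sy = -(i sy)(i sy)
        H = H + site_op(sz, i, n) @ site_op(sz, i + 1, n)
    return H

H = heisenberg_chain(10)   # a 1024 x 1024 sparse matrix -- still tiny
# eigsh computes just the lowest eigenvalue, without full diagonalization
e0 = eigsh(H, k=1, which='SA', return_eigenvectors=False)[0]
```

SciPy's `eigsh` wraps ARPACK's implicitly restarted Lanczos iteration, which is why only a handful of extreme eigenpairs are needed in memory at any time; this is exactly what makes the method viable when the full matrix could never be stored densely.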
In our calculations we use the Lanczos method to calculate the ground state, density matrix, spectral function, and dynamical responses. (E. Koch)

Quantum Monte Carlo

For large systems, the Hilbert space becomes prohibitively large; it is then no longer possible, for example, to calculate the exact product of the Hamiltonian matrix with a state vector. The basic idea of the quantum Monte Carlo approach is to evaluate such matrix-vector products in a stochastic way. If all matrix elements of the Hamiltonian are positive, the ground state of very large systems can be determined exactly, within controllable statistical errors. For electrons, however, there are also always negative matrix elements in the Hamiltonian. In the quantum Monte Carlo approach, these give rise to the infamous sign problem, which, if untreated, makes calculations for fermions impossible. To avoid the sign problem, we use the fixed-node approximation. We have applied quantum Monte Carlo for calculating the ground state, static response functions, and quasiparticle energies. (E. Koch)

The KKR method

The KKR method of band structure calculations was originally introduced in 1947 by Korringa and in 1954 by Kohn and Rostoker. A characteristic feature of this method is the use of multiple scattering theory for solving the Schrödinger equation. In this way, the problem is split into two parts. First, one solves the scattering problem of a single potential in free space. Second, one solves the multiple scattering problem by demanding that the incident wave at each scattering centre be the sum of the outgoing waves from all other scattering centres. The scheme has met with great success as a Green function method within density-functional theory. Its applications range from the full-potential ab-initio treatment of bulk, surfaces, interfaces and layered systems with O(N) scaling to the embedding of impurities and clusters in bulk and on surfaces. 
The method has been used with considerable success in the study of non-collinear magnetic structures, lattice relaxations, relativistic effects, and transport properties of solids. (P. Mavropoulos)
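The stochastic matrix-vector evaluation at the heart of the quantum Monte Carlo section above can be illustrated with a deliberately small sketch; the importance-sampling estimator below is a textbook illustration of the idea, not the group's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_matvec(H, x, n_samples=20000):
    """Monte Carlo estimate of H @ x.

    Columns j are sampled with probability p_j proportional to |x_j|; each
    sample contributes the column H[:, j] weighted by x_j / p_j, so the
    average is an unbiased estimator of sum_j H[:, j] * x_j = H @ x.
    """
    p = np.abs(x) / np.abs(x).sum()
    cols = rng.choice(len(x), size=n_samples, p=p)
    est = np.zeros(H.shape[0])
    for j in cols:
        est += H[:, j] * (x[j] / p[j])
    return est / n_samples

# Tiny all-positive test case: the stochastic product matches the exact one
# within statistical errors.
H = np.array([[2., 1., 0.5, 0.],
              [1., 3., 1., 0.5],
              [0.5, 1., 2., 1.],
              [0., 0.5, 1., 4.]])
x = np.array([1., 2., 0.5, 1.])
approx = stochastic_matvec(H, x)
exact = H @ x
```

With all matrix elements positive the sampled terms never cancel and the statistical error shrinks as 1/sqrt(n_samples); with matrix elements of mixed sign the summands alternate, and the resulting loss of precision is the sign problem mentioned above.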
Saturday, January 03, 2009

AWT and human scale

Friday, January 02, 2009

Motivations of Aether Wave Theory

AWT isn't based on any mysticism at all - on the contrary. AWT is based on the Boltzmann gas model - a basic system for the definition of thermodynamic energy. Furthermore, this model isn't ad hoc at all. It's based on the understanding that, from a sufficiently distant perspective, every object appears like a point-like particle, and all complex interactions in such a system can be modeled by a system of colliding particles. For example, people are complex objects, but if we observed them from a sufficient altitude, they would appear and behave like a chaotic 2D gas composed of colliding particles. This is a natural reduction of virtually every physical system. Despite its conceptual simplicity, this system becomes irreducibly complex with increasing particle density, because it forms fractally nested density fluctuations composed of density fluctuations. Such behavior can be simulated by computers as well as modeled by dense gas condensation (supercritical fluid in the picture at right), and the resulting complexity is limited only by computational power. This means the AWT principle enables us to model systems of arbitrary complexity just by recursive application of a trivial mechanism. If nothing else, we should consider this model because of its simplicity and the fact that nobody has yet proposed it for modeling observable reality.

The main reason for reintroducing Aether theory into mainstream physics is a better, more consistent and universal understanding of the fundamental connections of reality. Most of these motivations were never presented by mainstream physics, and they form the theorems, i.e. testable predictions, of AWT at the same moment, because they can be derived from ab-initio simulation of nested density fluctuations of a Boltzmann particle gas. The list below will be extended with new ideas occasionally.

1. 
Explanation of energy spreading by light. The spreading of inertial energy requires an inertial environment. We cannot use the energy concept for the spreading of light waves while ignoring the mass concept, the mass-energy equivalence in particular.

2. Explanation of the wave character of light. Only a system of mutually colliding particles can spread energy in waves; vacuum shouldn't be any exception.

3. Explanation of the finite frequency of light. Only a system of nonzero mass density can spread waves of finite frequency, as follows from the wave equation.

4. Explanation of the high light energy density/frequency achievable. Classical models of luminiferous Aether were based on a sparse gas model of Aether, which cannot spread waves of an energy density corresponding to gamma or cosmic radiation frequencies.

5. Explanation of light speed invariance. Light speed invariance is a consequence of the Aether concept and the fact that light is the fastest observable spreading of energy (if we neglect gravity waves, which are too faint to be observable), so we can only use light for observation of reality, of light speed/spreading in particular.

6. Explanation of the absence of a reference frame for light spreading in vacuum. If we use light for observation of light spreading in luminiferous Aether, its motion/reference frame can never be observed locally just by using light waves, because no object can serve as both the subject and the means of observation at the same moment.

7. Explanation/prediction of the transversal character of light waves. In a particle environment, only transversal waves can remain independent of the environment's reference frame, in the same way as the motion of capillary waves at the water surface.

8. Explanation/prediction of the foamy structure of vacuum. 
Only a foam structure composed of "strings" and "(mem)branes" can spread energy in transversal waves through a bulk particle environment (string and brane theories) and/or provide the properties of an elastic fluid composed of "spin loop" vortices (LQG theory).

9. Explanation/prediction of the two-vector character of transversal light waves. Only a nested foam structure can promote light spreading with two mutually perpendicular vectors of electric and magnetic intensity (1, 2). The formation of nested density fluctuations can be observed experimentally during condensation of a supercritical fluid (1).

10. Explanation/prediction of the uncertainty principle. The transversal character of surface waves is always violated in favor of underwater waves. Inside an inhomogeneous particle system, energy always spreads in both transversal and longitudinal waves, thus violating the predictability/determinism of energy spreading and introducing indeterminism into the phenomena mediated/observed by using it.

11. Explanation/prediction of particle/wave duality. Every isolated energy wave (a soliton) temporarily increases the Aether foam density, in the same way that soap foam gets denser during shaking due to spontaneous symmetry breaking. As a result, every soliton spreads like a more or less pronounced gradient/blob of Aether density and bounces from the internal walls of the surface gradient of such a blob like a standing wave packet, i.e. a particle (1).

12. Explanation/prediction of virtual particles. The concept of virtual particles, which appear and disappear temporarily in vacuum, is typical behavior of density fluctuations inside every gas or fluid, and physics knows no other way in which such behavior could be realized.

13. Explanation/definition of the time dimension and the space-time concept.

"..People have often tried to figure out ways of getting these new concepts. Some people work on the idea of the axiomatic formulation of the present quantum mechanics. 
I don't think that will help at all. If you imagine people having worked on the axiomatic formulation of the Bohr orbit theory, they would never have been led to Heisenberg's quantum mechanics. They would never have thought of non-commutative multiplication as one of their axioms which could be challenged. In the same way, any future development must involve changing something which people have never challenged up to the present, and which will not be shown up by an axiomatic formulation..." Paul A. M. Dirac, "Development of the Physicist's Conception of Nature", in The Physicist's Conception of Nature, ed. Jagdish Mehra, D. Reidel, 1973, pp. 1-14.

Wednesday, December 31, 2008

Monday, December 29, 2008

Aether and Boltzmann brain concept

The idea of the Boltzmann Brain (BB) originated with an unnamed assistant of L. Boltzmann, who conjectured that the whole Universe is a sort of especially huge and complex, but still quite randomly formed, fluctuation inside a chaotic particle environment. Such a Universe could enable the spontaneous formation (evolution) of a hypothesized self-aware entity which arises due to random fluctuations out of a state of chaos. It's apparent that this conjecture follows the dense Aether concept of AWT so closely that the BB appears to be an integral part of it. No wonder the BB concept has recently been gaining the interest of mainstream physics, which is gradually exhausting all the possibilities for understanding the Universe in the classical positivist way based on formal sequential logic.

Ludwig Boltzmann

The purpose of this post is to explain the conceptual mistake behind the seeming BB paradox, which argues that if we are just a low-entropy fluctuation in a high-entropy world, then virtually everything in the Universe should appear in a disordered state without a consistent history. Such an argument is the consequence of a deep misunderstanding of physical reality, because highly ordered states cannot interact with lowly ordered states in the same (i.e.
causal) way as they interact with each other mutually. After all, my struggles in explaining the AWT concept to the rest of society illustrate this problem clearly. Similia similibus affecter: only similar things can influence each other similarly, so we cannot exchange information with less or more conscious creatures easily, due to the dispersion and total reflection phenomena of causal energy spreading. From the AWT perspective, people are driven by symmetric (flat) bra(i)nes comprising a large, but convoluted, time dimension. Just because the human brain is a product of long-term evolution, it cannot interact well with atemporal density fluctuations. It's completely a symmetry problem, following from the laws of inertial energy spreading inside a large particle system. The flat density fluctuations of Aether (i.e. gradients or "branes") can interact with symmetrical ones only temporarily, in longitudinal waves. Such interactions are only weak and/or short-distance ones. Long-distance atemporal interactions would require transversal waves instead, which can be produced/exchanged only by other flat branes; end of story. Briefly speaking, the fact that we aren't seeing the whole Universe as chaotic consists just in the fact that such chaos interacts with us in quite a weak way, through gravity waves or short-distance interactions. The whole vacuum is full of such disordered states, which we cannot see, just because we are a special flat brane (a density gradient of a fluctuation) involving a large amount of compacted time dimensions. So we perceive these highly entropic states as an empty space-time, i.e. as a vacuum only, thus ignoring that whole part of reality completely. While for primitive organisms the Universe appears small, for well-developed creatures, which are the product of the long-term evolution of many mutations, such a Universe appears large.
So we can imagine the causal portion of such a chaotic system as a nested foam, fractal tree, or sponge, the complexity of which increases with scale, together with its ability to interact with the indeterministic portion of the Universe. By AWT, the Universe can be modeled by a dense system of many nested fluctuations, and then the scope in which every fluctuation can interact with and observe its neighborhood simply depends on the space-time scale (the number of states/mutations involved). It means the density fluctuations of every sufficiently large volume of such a particle system become sufficiently complex to interact with the rest and to observe it as a Universe, in the same way that tiny density fluctuations can interact with their neighborhood inside a dense gas. The BB concept makes the occurrence of intelligent life rather undeniable, but such civilizations should be dispersed regularly through space-time, i.e. along flat spatial branes (the surfaces of medium-sized planets inside medium-sized galaxies, enabling sufficiently long-term evolution). The flatness of space artifacts increases the probability of long-term evolution for space-time symmetry reasons. With respect to the AWT definition of time, we can see again that the Aether concept simplifies the understanding of the BB concept a lot.

Sunday, December 28, 2008

Duality of relativity and quantum mechanics

The dual (invariant to the R → 1/R transform) character of general relativity (GR) and quantum mechanics (QM) was expressed in 1997 in the form of the so-called Maldacena duality, based on the AdS/CFT correspondence (1). This duality is based on the fact that every relativistic phenomenon can be perceived as a quantum mechanical phenomenon when observed from an extrinsic perspective instead of an intrinsic one.
As a classical example can serve the gravitational lens phenomenon, which appears as relativistic aberration from the perspective of an internal observer, while it manifests itself as a quantum uncertainty phenomenon violating the Lorentz symmetry postulate from an outside perspective (1, 2). It can be demonstrated easily that QM is dual to relativity via the duality of gravity to omnidirectional universe expansion. It's not so well known (mainstream science glosses over it) that quantum mechanics suffers from a serious experimental problem in general, because it predicts that every free particle should expand into an infinite volume, by the solution of the time-dependent Schrödinger equation. This discrepancy can be explained by the potential of the gravity field, which keeps the particle "in place". The problem is, gravity itself cannot be derived from QM in any way. An omnidirectional collapse of space-time would have the very same effect, though. GR suffers from the exactly dual problem, as J. A. Wheeler demonstrated with the geon concept (1) of geometrodynamics. A geon is a hypothetical closed artifact, formed just by gravity waves spreading through the graviton field. Every particle or black hole can be considered a geon from a certain perspective. Although it follows from GR that such a geon should collapse into a singularity under its own gravity, this can be prevented by omnidirectional space-time expansion. In this way, the validity of GR can be saved by the concept of space-time expansion, in the same way that the validity of QM depends on space-time collapse. This apparent paradox can be reconciled by the concept of a black hole/gravastar (i.e. a graviton or "dark energy" star) forming our Universe generation. The gravitational collapse of such an object is followed by a gradual increase of its internal density, which manifests itself as an omnidirectional space-time expansion from the internal observer's perspective, i.e. the perspective of an observer who is formed by standing waves of such an environment.
A single concept can therefore explain the conceptual problems of both GR and QM at the same moment. In this way, we can understand the gravity action as an acceleration force in terms of omnidirectional expansion inside an Aether density gradient. Such a gradient makes a gradient of expansion speed, i.e. the gravity force. Such an approach even has its own testable predictions, for example the slowing of the speed of light, or the gradual expansion and dissolving of the kilogram/meter prototypes from a long-term perspective (1, 2, 3), and it is closely related to the dark matter and dark energy phenomena. Now we can understand as well why the numeric prediction of the cosmological constant by GR differs by some 120 orders of magnitude from the one predicted by QM: QM follows the model of a collapsing universe, while GR considers the concept of an expanding universe on the background.
Category Archives: Academia

A somewhat coherent post on a robust idea

The word "coherence" has different meanings for different people. Most people may think of the notion of being logical and consistent, be it in speaking or in acting. Indeed, we all hope to deal with people, especially politicians(!), who exhibit coherence between what they say and what they do. And we all hope that the next major blockbuster movie is coherent, with no major plot holes that make you grind your teeth in your seat, unable to fully enjoy your popcorn.

Nonetheless, to a physicist, coherence is also a notion associated with wave behaviour. More precisely, it is associated with the possibility of seeing the effects of superposition, which is the coherent(!) combination of different physical possibilities. For example, the superposition of sound waves is what allows people to listen to music in the background while pleasantly chatting. Among the effects of superposition most affected by coherence (or the lack thereof) are phenomena of interference, be it constructive or destructive, like the ones that you can experience with noise-cancelling headphones, for sound, or by looking at the colours of a soap bubble, for light. The recent detection of gravitational waves was possible precisely by using the fact that light is a wave, and as such it can be used to detect tiny variations in length within an "interferometer". Without coherence, neither constructive nor destructive interference would be possible, because both kinds of interference would be "washed out" and nonexistent in practice.

The importance of coherence becomes enormous, both conceptually and practically, when we realize that in quantum mechanics everything is also a wave, including what would normally (or, rather, "classically") be considered "particles", like electrons and atoms.
Mathematically speaking, what we do is to associate a wave (the wave-function) to any physical system or compound of physical systems, more precisely to the state of the system. The evolution in time of the state of the object is given by the evolution of such a wave, described by the famous Schrödinger equation. Then, predictions of what one can observe, and with what probability, can be computed from the knowledge of the wave at a given time.

In the case of information, this wave-like property of objects leads to the consideration of the quantum bit, or qubit, where one can have the superposition of the standard values assumed by a bit, 0 and 1. While in the classical realm the latter would be considered alternative and mutually exclusive options, they can coexist (in the sense of superposition) in the quantum case. This is at the basis of the computational power of future quantum computers. A coherent superposition is like a controlled combination of ingredients.

In a more realistic setting, and taking into account issues like ignorance(!), the (unwanted) interactions with an environment, and all kinds of "noise", the state of an object is associated not with a wave, but rather with a so-called density matrix. The latter can be thought of as the incoherent combination of several waves, leading to the decrease and potential disappearance of interference. One could compare coherent and incoherent mixing respectively to, on one hand, expert cooking, where many flavours combine nicely, either reinforcing or contrasting each other, and, on the other hand, blending everything in a mixer, often making a tasteless combination out of even the most delicious ingredients.

An incoherent mixture of foods may lead to a tasteless result; so can the incoherent mixture of waves, or of quantum states.
(Photo: Tim Patterson (CC BY-SA 2.0))

In the density matrix formalism, (the surviving) coherence is often equated with the presence of off-diagonal elements in the matrix representation of a quantum state. Such off-diagonal elements are the "fingerprint" of the quantum superposition of the (classically) mutually exclusive properties associated with the basis in which the matrix is written; the latter, although in principle arbitrary, is typically singled out by the physics, for example by the consideration of the various possible energy states of the system. Most importantly, interesting effects (like oscillations) can occur when, and only when, there are off-diagonal elements in the energy representation of a quantum state.

Somewhat surprisingly, the purposeful and focused study of coherence in the matrix formalism was initiated only recently, leading to an explosion of interest and of works on the topic. Researchers are trying to develop a fully consistent theory of coherence, which can be considered a resource to be characterized, quantified, and manipulated. In [C. Napoli et al., Phys. Rev. Lett. 116, 150502 (2016)], together with collaborators from the University of Nottingham in the UK and Mount Allison University in New Brunswick, Canada, I put forward a quantifier of coherence, the robustness of coherence, that has many appealing properties, including the possibility of efficiently calculating it when the density matrix is known, of directly measuring it in the lab, and of associating it with practical tasks. Indeed, we find that the robustness of coherence of a quantum state sets an ultimate limit on the usefulness of the involved physical system for metrological tasks. We prove that the robustness of coherence and the robustness of asymmetry quantify the usefulness of the corresponding quantum state for the sake of metrological tasks, like establishing which particular rotation among a set of possible rotations was actually applied.
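To make the "off-diagonal fingerprint" concrete, here is a minimal sketch computing the l1-norm of coherence, a standard quantifier from this literature (closely related to, but in general distinct from, the robustness of coherence, which requires an optimization; this snippet is my own illustration, not code from the paper):

```python
import numpy as np

def l1_coherence(rho: np.ndarray) -> float:
    """Sum of the absolute values of the off-diagonal elements of rho."""
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

# Pure superposition |+> = (|0> + |1>)/sqrt(2): maximal coherence for a qubit.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho_plus = np.outer(plus, plus.conj())

# Fully dephased (incoherent) mixture with the same populations:
# the off-diagonals are gone, and so is the coherence.
rho_mixed = np.diag(np.diag(rho_plus))

print(l1_coherence(rho_plus))   # 1.0
print(l1_coherence(rho_mixed))  # 0.0
```

The two states have identical diagonals (populations); only the off-diagonal elements, and hence the ability to show interference, distinguish them.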
In the companion paper [M. Piani et al., Phys. Rev. A 93, 042107] we expand on these ideas, using the fact that coherence, despite being such a fundamental concept, can also be seen as "just" a special case of "asymmetry", a word that may also mean different things to different people. Nonetheless, in this case it is easy to grasp that the asymmetry of an object is associated with how different it looks when, let us say, we rotate it or flip it. It should be clear that a sphere is a very symmetric object; for example, it looks the same from whatever direction we look at it, e.g., even if we look at it while standing on our hands rather than on our feet. On the other hand, say, a face, albeit typically symmetric with respect to a left-right flip, is not symmetric with respect to an upside-down flip. This means that we can realize that we are standing on our hands by noticing that the faces of the bystanders around us are upside-down themselves, even disregarding the puzzlement or amusement that could transpire from the same faces.

In [M. Piani et al., Phys. Rev. A 93, 042107] we introduce the robustness of asymmetry as a quantifier of the asymmetry of a quantum state with respect to a set of transformations that form a group; that means, in particular, a set of transformations such that, if you combine two transformations, one followed by the other, you obtain again a transformation that is part of the group, and such that any transformation can be undone by another transformation in the group. Again, think of rotations of an object, and of how they can be combined and undone. We prove that the robustness of asymmetry of a quantum state can also be easily calculated, that it can be measured directly experimentally, and that it sets an ultimate limit on the usefulness of the system prepared in said state for the sake of telling apart the transformations of the group: another metrological task.
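The group aspect can be illustrated with a tiny two-element group of transformations on a qubit: averaging ("twirling") a state over the group keeps only its symmetric part, so an asymmetric state is visibly changed while a symmetric one is left untouched. A minimal sketch (the group and state here are my own illustrative choices, not those of the paper, and this is not the robustness quantifier itself, which requires an optimization):

```python
import numpy as np

# A two-element group of phase flips: {I, Z}.
I = np.eye(2)
Z = np.diag([1.0, -1.0])
group = [I, Z]

def twirl(rho):
    """Average rho over the group: rho -> (1/|G|) sum_g g rho g^dagger."""
    return sum(g @ rho @ g.conj().T for g in group) / len(group)

plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus)   # asymmetric with respect to {I, Z}
rho_sym = twirl(rho)         # the off-diagonals average away

print(np.round(rho_sym, 3))  # diag(0.5, 0.5): only the symmetric part survives
```

Applying `twirl` a second time changes nothing: the twirled state is, by construction, symmetric under the group, so its "asymmetry" (in the sense above) is zero.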
You might still wonder where the name "robustness" comes from. Well, it comes from the fact that the property of interest (coherence, or asymmetry) is quantified by the amount of noise that it takes to destroy it; that is, literally, by how robust it is. What our works point out is that this already operational interpretation of the quantifier is precisely associated with how useful the coherence or asymmetry present in the quantum system is. That is, independently of whether you have a positive attitude ("what is the best use I can make of the resource?") or you'd rather prepare for the worst ("how much noise can our system tolerate?"), robustness is your answer.

[This post is cross-posted on Quanta Rei]

On an alternative system to evaluate scientific contributions

The ideas below are not necessarily original (for example, I have been inspired by posts and related discussions such as this one and its follow-up), and I have never taken any real action to see whether they could be tweaked and somehow implemented. But I am also sure that ideas that are not shared have no hope of changing things. And it is better to have at least some little hope 🙂 So, here we go.

A scientist could be associated with two numbers, similar to Google's PageRank:
– an AuthorRank
– a ReviewerRank.

These two numbers would reflect the reputation (value?) of the researcher in the two major activities/roles of a scientist: that of producing new and interesting results, and that of judging/checking/validating the results of others. These numbers would also be calculated by adopting an algorithm similar to PageRank (see below). Each scientist should have an account with two corresponding modes: Author and Reviewer. The first would be associated with the real name of the scientist, while the second would allow the scientist to act anonymously. Anyone could open an account, but the Reviewer mode would be activated only upon referral from an official institution (university?)
or after having built enough AuthorRank. This would reduce the risk of people polluting the system with bad behavior in Reviewer mode, and of accounts opened just to rig the system. Each "published" ("arXived"?) paper should be open for discussion (commenting, suggestions, etc.) and for voting. Voting would be done by scientists in their Reviewer (anonymous) mode, with only the ReviewerRank displayed and having an effect (although the Author mode would have an effect indirectly; see later). The vote cast by a Reviewer with a higher ReviewerRank would count more than the vote cast by a Reviewer with a low ReviewerRank (in this sense the system is PageRank-inspired). In principle one could even keep track separately (besides the total count) of the votes coming from people with high ReviewerRank (much as on Rotten Tomatoes one can check the ratings of the "top critics"). The AuthorRank would (should?) influence the ReviewerRank by adding to it. The rationale is that if one is a good author, he/she is probably able to judge properly the works of others, even if he/she does not dedicate much time to reviewing and to building the ReviewerRank with an intense reviewing activity. The researcher would take part in the discussion of his/her article in his/her Author mode. His/her AuthorRank would increase thanks to the votes given to the article and potentially to the votes given to the activity of the author in the discussion on the author's paper (e.g., replying effectively to the comments/questions of the Reviewers). The AuthorRank would also increase with citations of his/her paper by other papers. As in the calculation of PageRank, this increase would depend on the AuthorRank of the authors of the citing paper. The point is to make the quality of the citations at least as important as their number.
The ReviewerRank of a Reviewer would increase thanks to the votes of both the Authors and the other Reviewers for constructive feedback, good comments, and helpful suggestions. There could be tags associated with papers to indicate the fields and subfields of research: one could then even end up with Author and Reviewer ranks in each subfield, depending on the votes associated with both the uploads (papers published) and the discussions in a particular field. This would make it more objective to say "this person is a leader in this field but also an expert in this other field". As a result of this system, a researcher would be associated with his/her Author and Reviewer ranks, possibly (sub)split by field/subfield. Also, each paper in the list of papers would have an associated score. Committees evaluating a candidate for a job would then be able to get a good sense of the ability of the person in a given field/subfield, as well as of his/her contribution to the community through his/her refereeing activity.
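To give a feel for the PageRank-style propagation of reputation through endorsements, here is a toy power-iteration sketch (the endorsement graph and the damping value are illustrative assumptions, not a worked-out proposal):

```python
import numpy as np

def rank(link_matrix, damping=0.85, tol=1e-10):
    """PageRank-style score by power iteration.

    link_matrix[i][j] is the weight of the endorsement (vote/citation)
    from node j to node i. Every node is assumed to endorse someone,
    so no column is all zeros (dangling nodes are not handled here).
    """
    M = np.asarray(link_matrix, dtype=float)
    M = M / M.sum(axis=0)              # column-normalize: each voter's weight sums to 1
    n = M.shape[0]
    r = np.full(n, 1.0 / n)            # start from a uniform score
    while True:
        r_next = damping * (M @ r) + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Three authors: 0 and 1 endorse 2, and 2 endorses 0.
votes = [[0, 0, 1],
         [0, 0, 0],
         [1, 1, 0]]
ranks = rank(votes)
print(ranks)  # author 2 ranks highest; author 1 (no endorsements) lowest
```

The key property, as in the proposal above, is that an endorsement from a highly ranked node is worth more than one from a lowly ranked node, because scores flow along the endorsement edges rather than being a flat vote count.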
Saturday, July 26, 2014

About the origin of Born rule

Lubos has been aggressive again. This time Sean Carroll became the victim of Lubos's verbal attacks. The reason why Lubos got angry was the articles by Carroll and his student trying to derive the Born rule from something deeper: this "deeper" was proposed to be the many-worlds fairy tale, as Lubos expresses it. I agree with Lubos about the impossibility of deriving the Born rule in the context of wave mechanics; here the emphasis is on "wave mechanics". I also share his view about the many-worlds interpretation: at least I have not been able to make any sense of it mathematically.

Lubos does not miss the opportunity to personally insult people who tell about their scientific work on blogs. Lubos does not realize that this is really the only communication channel for many scientists. For the outlaws of the academic world, blogs, home pages, some archives, and some journals (of course not read by the "real" researchers enjoying a monthly salary) provide the only manner of communicating their work. The superstring hegemony did a good job of eliminating people who did not play the only game in town: I too had the opportunity to learn this. Ironically, Lubos is an outlaw as well, probably due to his overly aggressive blog behavior in the past. Perhaps Lubos does not see this as a personal problem since, according to his own words, he has decided not to publish anything without financial compensation, because doing so would make him a communist.

Concerning the Born rule, I dare to have a different opinion than Lubos. I need not be afraid of Lubos's insults, since Lubos as a brahmin of science refuses to comment on anything written by inferior human beings like me and even refuses to mention their names: maybe Lubos is afraid that doing so might somehow infect him with the thoughts of the casteless.
Without going into the details of quantum measurement theory, one can say that Born's rule is a bilinear expression in the initial and final states of the quantum mechanical transition amplitude. Bilinearity is certainly something deep, and I will get to that below. Certainly Born's rule gives the most natural expression for the transition amplitude: demonstrating this is of course not a derivation of it.

1. One could invent for the transition amplitude formal expressions non-linear in the normalized initial and final states. One can however argue that the acceptable expressions must be symmetric in the initial and final states.

2. The condition that the transition amplitude conserves the quantum numbers associated with symmetries suggests strongly that the transition amplitude is a function of the bilinear transition amplitude between the initial and final states and of the norms of the initial and final states. The standard form for non-normalized states, the inner product divided by the product of the square roots of the norms, is indeed of this form. For instance, one could add exponentials of the norms of the initial and final states.

3. Projective invariance of the transition amplitude, i.e. the independence of the transition probabilities from normalization, implies that the standard transition amplitude multiplied by a function (say an exponential) of the modulus squared of the standard amplitude (the transition probability in the standard approach) remains to be considered.

4. One could however still consider the possibility that the probabilities given by the Born rule are replaced by a function of themselves: pij → f(pij)pij. Unitarity poses strong constraints on f, and my guess is that f = 1 is the only possibility.

Sidestep: To make this more concrete, the proponents of so-called weak measurement theory propose a modification of the formula for the matrix element of an operator A to ⟨i|A|f⟩/⟨i|f⟩. The usual expression contains the product of the square roots of the norms instead of ⟨i|f⟩.
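Numerically, the difference between the two expressions is easy to see: as the two states approach orthogonality, the denominator ⟨i|f⟩ goes to zero and the weak-value-style expression diverges, while the properly normalized amplitude stays bounded. A small NumPy sketch (the states and operators below are arbitrary illustrations of my own):

```python
import numpy as np

def standard_amplitude(i, f):
    """Standard normalized transition amplitude: <i|f> / (||i|| ||f||)."""
    return np.vdot(i, f) / (np.linalg.norm(i) * np.linalg.norm(f))

def weak_value(i, f, A):
    """Weak-value-style expression <i|A|f> / <i|f> from the sidestep above."""
    return np.vdot(i, A @ f) / np.vdot(i, f)

A = np.eye(2)                              # identity operator
Ap = np.array([[0.0, 1.0], [1.0, 0.0]])    # a generic non-trivial operator
i_state = np.array([1.0, 0.0])

for eps in (0.1, 0.01, 0.001):
    f_state = np.array([eps, 1.0])         # nearly orthogonal to i_state
    # The normalized amplitude stays bounded by 1 in modulus...
    print(eps, abs(standard_amplitude(i_state, f_state)))
    # ...while the weak-value expression for Ap grows like 1/eps:
    print(eps, abs(weak_value(i_state, f_state, Ap)))
```

Note also that for A = I the weak-value expression equals 1 for any pair of non-orthogonal states, regardless of how different they are.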
This is complete nonsense, since for orthogonal states the expression can give infinity, and for A = I, the unit matrix, it gives the same matrix element between any two states. For some mysterious reason the notion of weak measurement (to be sharply distinguished from interaction-free measurement) has ended up in Wikipedia, and popular journals comment on it enthusiastically as a new revolution in quantum theory.

Consider now the situation in the TGD framework.

1. In the TGD framework the configuration space, the "World of Classical Worlds" (WCW) consisting of pairs of 3-surfaces at the opposite boundaries of causal diamonds (CDs), is infinite-dimensional, and this sharply distinguishes TGD-based quantum theory from wave mechanics. More technically, hyperfinite factors of type II (and possibly also III) replace factors of type I in the mathematical formulation of the theory. Finite measurement resolution is unavoidable and is represented elegantly in terms of inclusions of hyperfinite factors. This means that a single ray of the state space is replaced with an infinite-dimensional subspace whose states cannot be distinguished from each other in a given measurement resolution. The infinite-dimensional character of the WCW makes the definition of the inner product for WCW spinor fields extremely delicate. Note that WCW spinor fields are formally classical at the WCW level, and state function reduction remains the only genuinely quantal aspect of TGD. At the space-time level one must perform a second quantization of the induced spinor fields to build the WCW gamma matrices in terms of fermionic oscillator operators.

2. WCW spinors are fermionic Fock states associated with a given 3-surface. There are good reasons to believe that the usual bilinear inner product, defined by integration of the spinor bilinear over space (with Euclidian signature), generalizes, but only under extremely restrictive conditions. The spinor bilinear is replaced with the fermionic Fock space inner product, and this bilinear is integrated over the WCW.
The integration over the WCW makes sense only if the WCW allows a metric which is invariant under a maximal group of isometries; this fixes the WCW, and the physics, highly uniquely. To avoid divergences one must also assume that the Ricci scalar vanishes and the empty space Einstein equations hold true. The metric determinant is ill-defined and must be cancelled by the Gaussian determinant coming from the exponent of the vacuum functional, which is the exponent of the Kähler action if the WCW metric is Kähler, as required by the geometrization of hermitian conjugation, which is a basic operation in quantum theory. One could however still consider the possibility that the probabilities given by the Born rule are replaced by functions of themselves, pij → f(pij)pij, but unitarity excludes this. Infinite-dimensionality alone is thus not quite enough: something more is needed, unless one assumes unitarity.

3. Zero Energy Ontology brings in the needed further input. In ZEO the transition amplitudes correspond to the time-like entanglement coefficients of the positive and negative energy parts of zero energy states located at the opposite light-like boundaries of the causal diamond. The deep principle is that zero energy states code for the laws of physics as expressed by the S-matrix and its generalizations in ZEO. This implies that the transition amplitude is automatically bilinear with respect to the positive and negative energy parts of the zero energy state, which correspond to the initial and final states in positive energy ontology. The question of why just the Born rule holds thus disappears in ZEO.

That ZEO gives a justification also for the Born rule is nice, since it has produced solutions also to many other fundamental problems of quantum theory. Consider only the basic problem of quantum measurement theory, due to the determinism of the Schrödinger equation versus the non-determinism of state function reduction: Bohr's solution was to give up ontology entirely and take QM as a mere toolbox of calculational rules.
There is also the problem of the relationship between geometric time and experienced time, which ZEO allows one to solve, leading to a much more detailed view of what happens in state function reduction. The most profound consequences are at the level of consciousness theory, which is essentially a generalization of ordinary quantum measurement theory intended to make the observer part of the system by introducing the notion of self. ZEO also makes physical theories testable: any quantum state can be reached from vacuum in ZEO, whereas in the standard positive energy ontology conservation laws make this impossible, so that at the level of principle the testing of the theory becomes impossible without additional assumptions.

At 4:53 PM, Anonymous Anonymous said... I just saw 'A Thin Sheet of Reality' discussion ( My TGD shirt would say: scalable hbar. Question in terms of the finite amount of 2D-surface Planck area bits (too simplified in TGD terms I suppose, but let's play): Can "dark energy" in this context be defined as the push-effect of thin sheets of reality with a lower hbar value than the one we can measure, and "dark" matter the gravity pull-effect of higher hbar values?

At 4:39 AM, Anonymous Anonymous said... Reality is the thin sheet between heart and mind. When our heart and mind are one, there is nothing we cannot achieve.

At 7:27 AM, Anonymous Anonymous said... Buckminster Fuller said something very deep about gravity, and in his writings insisted on using the correct language: in gravity there is no "up" or "down", just "inwards" and "outwards". I'm still having problems comprehending the notion "causal diamond" with its ups and downs. If TGD finite measurement resolution is consistent with a finite amount of Planck surface bits of a state of the classical world, and we accept that no information can be lost and that the most economical 2D surface is a sphere, then can the increase of the information content of a given 2D-surface only happen as exponential 4D quantum jump evolution?
Is this even on the right track so far? So how and why does the causal diamond enter the picture?

At 11:21 AM, Blogger Stephen said...

At 11:23 AM, Blogger Stephen said... Anonymous, you are just babbling; are you a bot just regurgitating comments? There is no such thing as an "exponential 4D quantum jump evolution". You might be in danger of descending into word salad....

At 2:16 PM, Anonymous Anonymous said... 2 is the square root of 4, and 4 is the square of 2, is this correct? I'm not a physicist or a mathematician and producing precise English jargon is difficult for me, sorry if this creates unnecessary confusion. However, if I'm not mistaken, the square root / square exponent relation manifesting also in Newton's inverse-square law is important also in TGD and in the values of hbar? Also, correct me if I'm wrong, a 4D quantum jump that rewrites both history and future is a basic feature of TGD cosmology and evolution? Is this just babble, and/or do you feel the need to try to insult and hurt others more? Are you hurting yourself for some reason, is everything not OK in your life? Or just the weather?

At 1:23 AM, Anonymous Matti Pitkanen said... To the first Anonymous: My view is that dark energy corresponds at the "microscopic level" (the many-sheeted space-time of TGD) to the (Kähler) magnetic energy of flux tubes carrying monopole flux. Dark matter corresponds to ordinary particles at these flux tubes, which have a large Planck constant, much larger than the standard Planck constant. The cosmological constant emerges in the GRT limit of TGD. This means that the sheets of the many-sheeted space-time are replaced with a single piece of Minkowski space endowed with an effective metric which is the sum of the (empty space) Minkowski metric and the deviations of the induced metrics of the sheets from the Minkowski metric. A similar description applies to various gauge potentials. Poincare invariance suggests that Einstein-Yang-Mills equations with a cosmological constant are satisfied for the resulting effective space-time.
What TGD can thus give is a "microscopic" explanation for dark energy and dark matter.

At 1:51 AM, Anonymous Matti Pitkanen said...
This comment has been removed by a blog administrator.

To Anonymous about the causal diamond (CD): I understand the causal diamond and the closely related zero energy ontology in terms of what they make possible. CD makes it possible to formulate zero energy ontology (ZEO). ZEO is necessary to formulate TGD elegantly. ZEO makes it possible to formulate
* quantum theory in the TGD framework, where the strict determinism of the basic action principle fails. Examples: in ZEO the brain is not a 3-D but a 4-D object inside CD. Self-organisation is not for 3-D but for 4-D patterns: the entire temporal evolution within CD is replaced in each quantum jump with a new temporal evolution. This allows a totally new view of, say, morphogenesis.
* number theoretical universality, requiring p-adic variants of TGD.
* conservation laws in a length scale dependent manner. Conservation laws hold true inside a given CD characterised by its scale.
* a genuine measure of information (negentropic entanglement). Standard physics allows only the notion of entropy, and one can talk only about the reduction of information in some space-time region and say that information is gained when entropy is reduced. This is not enough in consciousness theory: information is more than reduction of dis-information.

At 1:53 AM, Anonymous Matti Pitkanen said...
Continuation to Anonymous: In TGD I cannot assign bits of information to a blackhole horizon. It is possible to assign entropy to it. I could assign to a black hole negentropic entanglement with some other object, or to a particle at the surface of the black hole negentropic entanglement with the black hole. That is information. Negentropy Maximization Principle (NMP) is not present in standard physics: it says that negentropic entanglement - potential conscious information - is not conserved but increases. This would explain evolution.
NMP implies the second law for ordinary (entropic) entanglement. The second law applies to the average member of an ensemble, NMP to the entanglement between the systems of the ensemble. CD enters the picture in ZEO because the quantum jumps giving rise to an increase of information/negentropic entanglement happen for the zero energy states associated with it as repeated state function reductions at either boundary of CD. CD is also the imbedding space correlate for self.

At 2:03 AM, Anonymous Matti Pitkanen said...
To Anonymous's last questions about Newton's laws etc. TGD predicts the same things as Newton's theory in the non-relativistic limit and at long length scales. The stuff related to the hierarchy of Planck constants is genuinely new and speculative for colleagues. In the TGD Universe history is rewritten again and again, much like in the real world;-).

At 2:06 AM, Anonymous Matti Pitkanen said...
To Stephen: Interesting claim. I do not know enough details to comment on the feasibility of the claim. In any case, I am not enthusiastic about Zero Point Energy (which might be involved as a theoretical background) because it is theoretically ad hoc and mathematically ill-defined.

At 2:38 AM, Anonymous Anonymous said...
Thank you. This state of TGD psychohistory evolution is not yet fully negentropically rewritten by what is emanating from the center of TGD quantum jumps, as all linguistic messages sent are not being fully received (negentangled) because I'm dumb. So allow me another stupid question: is the image of TGD that I can in my current state negentropically entangle with and be informed by, the Russian doll of scalable values of hbar, 1) just wrong, 2) old history getting replaced, 3) still somehow relevant with CD bipolarism? And: does ZEO CD together with strong holography imply that there is no hope for all the drama of "bipolar disorder", and it just can't be getting better?

At 2:46 AM, Anonymous Anonymous said...
Another question: does negentropic information mean that TGD information spreads, if it spreads, rather at the heart level than at the mind level of entropic 2D world maps?

At 4:47 AM, Anonymous A shaman said...
I am a shaman, so it is easiest for me to map with "ylinen", "keskinen" and "alinen" (the Finnish upper, middle and lower worlds), and I have benefited from conversations with science and math so that I have evolved to think and speak in 3D "outer", "middle", "inner". In my experience Inner, the road to the center of gravity, is where we meet our fears and find our strength. In this experience Outer is the space of creative imagination and soothing vibrations. Now I also see, thanks to your response, what was forgotten: that Middle, the claimed "thin" 2D layer, is not a closed border (as in between 0 and 1 in the context of whole numbers) but an open border on the level of rational numbers and their real and p-adic extensions. Thank you again. :) As a shaman, it is also my job to listen to the heart's desires of my peers, all sentient beings I communicate with, and let the 4D quantum jumps happen as they happen in the emotional space time, informed and negentropically entangled by all heartfelt desires. What is your desire? That your scientific "peers" accept your 2D sheet-side of words and numbers as 1D either-or truth, traveling down in information content from 3D sheet to 2D map and 1D truth value? Or could the psychological and emotional spacetime of TGD be for some other purpose in your life? It is your life and it is not my responsibility to offer you advice, but if this question helps you, and all of us with you, to evolve, I will be even happier. You have all the answers you need in TGD and your brave heart. It is up to you what you do with all that information.

At 4:59 AM, Anonymous Matti Pitkanen said...
To Anonymous:
1) Seems that I have had some success in my communication efforts;-)
2) Old history replaced: this is one of the key ideas and especially interesting when one thinks of the miracles of biological evolution at various levels. If something goes wrong at the beginning, it is not a catastrophe anymore. The whole evolution hitherto can be replaced with one in which nothing went wrong in the beginning. Hitler can be born again;-).
3) My original view about CD bipolarism was too simplistic: just a single state function reduction to a given boundary of CD and then to the other boundary. The arrow of time would have changed all the time. On elementary particle scales the number of quantum jumps on a single boundary might be small, and this would explain why the arrow of time seems to be absent on microscopic scales. The realistic view is that a sequence of state function reductions occurs at the same boundary of CD: meaning that the part of the zero energy state assigned with, say, the "lower"/past boundary does not change - just as the state does not change in repeated quantum measurement - whereas the part associated with the "upper"/future boundary changes. The arrow of time emerges, as does self as a sequence of the quantum jumps at the same boundary. Something very obvious which took a very long time to realise.
I am not sure in what sense you are using "bipolar disorder". As an analogy or something concrete? In any case the sequence of reductions on a given boundary would be a self, and the lifetime of this self would correspond to the growth of the distance to the "upper"/future boundary of CD. In bipolar disorder the dominance period of a particular brain hemisphere would be the life period of a self at some level of the hierarchy. In bipolar disorder these competing selves would kill each other alternately;-) instead of entangling into a bigger self. Information is generated as negentropic entanglement. Negentropic entanglement can be transferred/spread.
It can be stolen, etc… I have considered the possibility that nutrients carry negentropic entanglement with an entity which could be called Mother Gaia. Question about negentropic information spreading: I would like to speak of information creation and transfer. In living matter one could imagine a flux tube with one end attached to a nutrient and the other end to some bigger and wiser space-time sheet, say that representing MG. The end of the tube attached to the nutrient is transferred to an ATP molecule and from ATP to a receiver molecule in the ADP-ATP cycle. The transfer of metabolic energy would thus be a transfer of negentropic entanglement with MG. Metabolism would be basically a transfer of the potential for moments of understanding.

At 9:01 AM, Anonymous A shaman said...
Most concretely we mean by 'bipolar disorder' the mood swings between happy enthusiasm and depressed inactivity, and in some cases shaman and poet and drunk forming the "Swing of Gods" experiences that Eino Leino spoke about. A very common shaman's way to spread and disrupt bipolar experiences and mechanisms is various healing rituals and their more civilized forms as drama and cathartic tragedy. Fear and love and new balance are the most basic elements. Another very concrete and easy to comprehend form of bipolar order is the parabolic curve and the law of diminishing returns. How to know that you have reached the ideal balance of ideal growth at the top of the Gaussian curve of diminishing returns, and not take a further step into the negative side, to continue to deterministically fall back in towards the center of gravity, in the Gaussian manner of the pendulum swings of Galileo and Newton? I trust you now see why CD bipolarism is approached very carefully and with healthy suspicion, out of a still-active fear of falling back into the trap of 1D ZEO bipolar codependence of happiness and misery?
Future as "up" and past as "down" is not easy for me to accept as the most coherent language I'm ready to receive, as in natural languages we speak of and see them ahead and back, either way (e.g. Aymara sees the past ahead and the future as unseen behind the back), and the metaphor and actuality of gravity suggest that out and in would be more precise language, allowing also a multidimensional view of the state instead of a 1D up/down relation. Which has for me also very negative social connotations of upper class/lower class, "God Above", etc., which are not helpful for building a global peer-to-peer society. It is my heart's desire that we humans finally start to evolve towards a new psychological and emotional level and learn to leave the past of shamanistic tribal ways behind. Food from the God inside, from the Heart of the Mother, from the Fire of the Source in both its feminine and masculine aspects, is of course well known to shamans. DMT-plant molecules combined with the ayahuasca vine for digestion is probably the most potent such food known to us, and has a distinct feminine aspect, which is why we call her Mother Ayahuasca. Can your formulas tell us something about why that particular brew opens such a strong nutritional flux tube between the MG space-time sheet and a monkey brain and mind? But let's remember, what was said about the law of diminishing returns may quite likely apply also to the Food from the Mother inside. She can point the way and offer much healing of traumas, but to stay in balance and evolve in good balance towards better and better without bipolar mood swings, we can't depend on a single medicinal plant brew, or any other single growth factor molecule. In our selves we are all our relations.

At 9:09 AM, Anonymous A shaman said...
PS: as a token of gratitude I would like to offer this song for enjoyment. What the singer does at the middle turning point of the song is amazing and beautiful.

At 12:36 PM, Anonymous Anonymous said...

At 2:38 PM, Blogger Stephen said...
Anon, no hate, only love, I was just too, too hungry before typing. Anyway, what you say is not too far off... with great power comes great responsibility, but yes, you are realizing, I think, that it is a communications problem in many ways. Peace, Crow

At 10:48 PM, Anonymous Matti Pitkanen said...
To Anonymous: Concerning bipolar mood disorder and two other analogous disorders. I tend to identify the optimum situation as criticality. Dancing on the rope. In this state you are a maximally sensitive perceiver and can rapidly change views about the world. In the totally stable state one has a completely fixed world view and you become a very serious scientist believing in textbook truths and regarding those who do not as inferior organic stuff or rather waste;-). Both depression and mania represent these stable states. In this view shamans hang around the critical point and can easily move between different states of consciousness. In a stable state you would be a hawk or a dove, using political terminology. In terms of Thom's catastrophe theory the shaman or creative person would be someone who has developed a third option besides the two bipolar options and thus does not fall for long times into depression or mania, or does not become a hawk or a dove. Concerning Ayahuasca, I of course do not have any formulas. Last autumn I wrote an article and also a blog text inspired by a book about shamanism, interpreted as activating entanglement connections to distant parts of the Universe and making it possible to "meet" representatives of civilisations of distant galaxies. The propagation of signals in both time directions allowed by zero energy ontology allows one to overcome the barrier posed by light velocity, and communication becomes possible. DMT and some psychoactive drugs would activate the communication line.
DMT molecules could have flux tube connections to very big magnetic bodies of cosmic size, to which also representatives of distant civilizations could generate connections (ordinary nutrients could have connections to Mother Gaia). As a DMT-like molecule attaches to a receptor in the neuronal membrane, a Josephson junction generating Josephson radiation, serving as the signal sent to the distant receiver, is activated. The signal to the ET would be a neuronal signal. The same applies to the received signals. Very simple and economic.

At 12:50 AM, Anonymous Matti Pitkanen said...
To Anonymous about the impossible engine. I have considered models for so called free energy phenomena. Energy conservation suggests that they are transient phenomena, so that perpetual mobiles are impossible. It might however be possible to extract energy from the environment. A related mechanism is the transfer of energy between a system and its magnetic body: something analogous to the proposals involving electromagnetic or even gravitational fields. This would make it possible for the system to gain momentum and its magnetic body, or some larger magnetic body, to get the compensating momentum in the opposite direction. A more natural mechanism is however the transfer of angular momentum between the magnetic body and the system. The claimed spontaneous rotation of some magnetic systems could be due to the spontaneous magnetisation of the magnetic body of the system giving it a large spin (the spin unit is h_eff, and quite large now!). The compensating angular momentum is obtained by the system and it starts to rotate. The needed energy would come from the fact that spontaneous magnetisation gives a large negative energy (proportional to h_eff) for the dark matter at the magnetic body. The compensating energy goes to the rotational energy of the system. The transformation of rotational motion into translational motion is a standard operation.
A question left is how to "force" the transfer of energy and angular momentum between the magnetic body and the system. Here the fact that TGD predicts flux tubes carrying monopole magnetic flux, and thus fields without currents creating them, is one promising possibility. How to realize this I leave as a home exercise;-). Pollack's experiments demonstrate that external radiation generates what he calls exclusion zones in water: the EZ is negatively charged, there is an electric potential difference between the exterior and the EZ, and the stoichiometry is H1.5O instead of H2O. This suggests that a fraction of the protons goes to dark protons at the flux tubes outside the EZ. If the flux tubes spontaneously magnetize, one can imagine that rotational motion of, say, electrons is generated inside the EZ. For instance, supercurrents could be generated, and the Cooper pairs would have a large binding energy if they have large h_eff.

At 3:19 AM, Anonymous A shaman said...
Hunger is good, friend, it feeds our thinking and learning. Time is funny even in its one-dimensional aspect. Usually the real purpose of an action or event becomes revealed only afterwards. Maybe CD has something to do with that. Can feel a bit weird at first, but then you get used to it and start to accept. Maybe the moment/quantum jump of purpose revealing itself has something to do with the fully saturated "top" point of the curve of diminishing returns also. And as for communications problems, are there any other kinds of problems? :)

At 1:34 PM, Blogger Ulla said...
Look at this about the weak measurements. I think this approach would fit the damn Schrödinger cat well, maybe even life itself?

At 11:45 AM, Blogger Stephen said...
Ulla, it looks like they finally implemented the quantum chaology stuff of Berry et al. That is pretty cool stuff but not sure how it really applies to life..... periodic orbits, eigenfunction "scarring" etc.
Even the vacuum propulsion thing tested by NASA the other day apparently relies on the Casimir effect, which is one of the places in physics where the Riemann zeta function appears. Shaman, I'm not here to discuss my personal life.. maybe in a more private setting

At 2:12 PM, Anonymous Anonymous said...
This looks very interesting, a novel mathematical derivation of SR with sound instead of light and an inertial observer instead of an inertial frame. Can this simplify TGD?

At 8:59 PM, Anonymous Matti Pitkanen said...
I did not have time to listen to the whole talk. It does not seem simple to me;-). But what seems to go wrong is the following, on the basis of general group theoretic considerations. Special relativity is an assumption about the symmetries of space-time. The Poincare group replaces the Galilean group. This is a concise statement. If this model allowed one to deduce special relativity, it should allow the Lorentz group to act as symmetries of the wave equations defining sound propagation. Sound waves obey, in the rest system of matter, formally the same wave equation as light but with sound velocity. This does not however mean that the equations are invariant under special relativity with the maximal signal velocity given by the sound velocity. When one goes to a coordinate system moving with velocity v with respect to the original one, the form of the equations is *not* preserved. Thus one cannot assume the basic transformation formulas of special relativity. The Newtonian velocity addition formula is v = v_1 + v_2. The special relativistic formula is non-linear and guarantees that v is not above light-velocity. This fellow might of course have something different in mind.

At 8:19 AM, Blogger Stephen said...
Three-dimensional Mid-air Acoustic Manipulation by Ultrasonic Phased Arrays
Yoichi Ochiai, Takayuki Hoshi, Jun Rekimoto
(Submitted on 14 Dec 2013)
The essence of levitation technology is the countervailing of gravity. It is known that an ultrasound standing wave is capable of suspending small particles at its sound pressure nodes.
The acoustic axis of the ultrasound beam in conventional studies was parallel to the gravitational force, and the levitated objects were manipulated along the fixed axis (i.e. one-dimensionally) by controlling the phases or frequencies of bolted Langevin-type transducers. In the present study, we considered extended acoustic manipulation whereby millimetre-sized particles were levitated and moved three-dimensionally by localised ultrasonic standing waves, which were generated by ultrasonic phased arrays. Our manipulation system has two original features. One is the direction of the ultrasound beam, which is arbitrary because the force acting toward its centre is also utilised. The other is the manipulation principle by which a localised standing wave is generated at an arbitrary position and moved three-dimensionally by opposed and ultrasonic phased arrays. We experimentally confirmed that expanded-polystyrene particles of 0.6 mm and 2 mm in diameter could be manipulated by our proposed method.

At 9:52 AM, Blogger Ulla said...
time has an order parameter due to transfer of energy... interesting. Does this also make time 3D?

At 4:29 PM, Blogger Stephen said...
In what way would "time" become 3D?

At 2:15 AM, Anonymous Anonymous said...
You can easily create a single-observer clock with just a ruler (affine geometry of parallel lines). If there is more than one observer, and two or more observers want to synchronize their clocks, also a compass is needed (projective geometry). Not to mention the good will to do so (negentropy? ;)). The question is not about the quantity of dimensions, just about the qualitative level of geometry (ruler -> ruler and compass).

At 9:52 AM, Blogger Stephen said...
Anonymous, why the hell would anyone want to synchronize a clock? That's just some bullshit anachronistic thing some white asshole made up to get people to do "work"

At 10:13 AM, Blogger Stephen said...
you can find some stuff related to Berry's quantum chaos stuff...
I don't think a hermitian operator lies behind the Riemann zeros... it's probably not orthogonal in general, and Matti's paper about a possible strategy is pretty interesting, regarding the eigenvalues of an annihilation operator; maybe it has something to do with the splitting of cosmic rays through the atmosphere

At 10:47 AM, Anonymous Anonymous said...
Stephen, that's a deep philosophical question relating to Hume's guillotine, "no should from is". If someone likes to think about a general measurement theory and concludes that also a compass is needed to do so, that's an "is", and there is no obligation for other observers to participate and observe similarly, or obligation not to.

At 10:56 AM, Blogger Ulla said...
sterile neutrinos

At 4:26 AM, Anonymous Anonymous said...
"Entia non sunt multiplicanda praeter necessitatem." ("Entities must not be multiplied beyond necessity.") AKA Occam's razor. Maybe it's still worth remembering?

At 5:18 AM, Anonymous Matti Pitkanen said...
I agree, if "entia" correspond to basic principles. The outcome from these principles should be of infinite complexity. Otherwise it would not be interesting.

At 8:56 PM, Anonymous Anonymous said...
Speculative houses of cards with feet of clay, and with interest on top, tend to grow until they fall down, and both processes can be very interesting. And then, with the help of the razor, the spaces collapsed and vacuums left can be again filled with more rational structures built on solid foundations. And upon the more solid foundation to build new speculative houses of cards, to start to see what wonders the next biggest new prime and new beginning will show and allow to share and communicate. It never ends and it's all good, the dance of creation and creative destruction. This is how we are and interest each other in our infinitely unique complexity, all ready and all ways enlightened sources of light, no matter how much energy we put in thinking that we are not. :)

At 9:59 PM, Blogger Stephen said...

At 2:48 AM, Blogger Ulla said...
Cyclotron frequencies are basic to TGD.

At 9:33 AM, Anonymous Anonymous said...
Re "Cyclotron frequencies", Quadratic Reciprocity (QR) ( and the Legendre symbol are a very deep link with "space-time sheet" quadrances, Weil cohomology etc. This is a very nice beginner-level introduction to Gauss' "Golden Rule": Here's a very good presentation of the Legendre symbol: It's relatively easy to see that the magic formula 3n+1 is also behind QR, and how that extends to n-dimensional linear algebra, "reversimal" and/or cyclotomic remainders of n/p modulo p, etc.
Applied Math Seminar: High fidelity representation of Born-Oppenheimer potential energy surfaces by machine learning

Event Type:
Hua Guo, Department of Chemistry and Chemical Biology

Event Date:
Monday, May 1, 2017 - 3:30pm
SMLC 356

Event Description:
Within the Born-Oppenheimer approximation, which separates the electronic motion from the nuclear motion, the spectroscopy and reaction dynamics of molecular systems can be characterized by nuclear movement on potential energy surfaces. These multidimensional surfaces are continuous and single valued, but may have quite complex topological features, such as wells, saddles, and dissociation asymptotes. While the values of the potential energy at given nuclear configurations can now be obtained with high accuracy by solving the electronic Schrödinger equation, high fidelity representation of the global potential energy surface from these discrete data points remains a challenge. In this talk, we will discuss the latest developments in representing the potential energy surface using machine learning methods such as neural networks and Gaussian processes. In particular, how the permutation symmetry, which is a necessary property of the potential energy surface, is imposed in these implementations will be emphasized.

Coffee and cookies will be served in the lounge at 15:00.

Event Contact
Contact Name: Deborah Sulsky
Contact Email:
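The permutation symmetry requirement mentioned in the abstract can be illustrated with a toy sketch (my own, not from the talk; production PES fits use smoother constructions such as permutationally invariant polynomials or symmetry-adapted neural networks): if the input fed to the regression model is the sorted list of interatomic distances, any relabeling of identical atoms maps to the same input, so the fitted surface is automatically permutation symmetric.

```python
# Toy permutation-invariant descriptor: the sorted list of pairwise
# distances. Relabeling identical atoms permutes the pair list but not
# its sorted version, so a model fitted on it is permutation symmetric.
import itertools
import math

def sorted_distance_descriptor(coords):
    """Return the sorted tuple of all interatomic distances."""
    dists = []
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(coords, 2):
        dists.append(math.sqrt((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2))
    return tuple(sorted(dists))

# The same triatomic geometry under two labelings (atoms 0 and 2 swapped):
geom_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.5, 0.0)]
geom_b = [geom_a[2], geom_a[1], geom_a[0]]
assert sorted_distance_descriptor(geom_a) == sorted_distance_descriptor(geom_b)
```

Sorting is non-smooth where distances cross, which is one reason real implementations prefer symmetrized polynomials; but the invariance constraint itself is the same.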
Monday, February 28, 2005

Dark Matter and Living Matter

Dark matter and living matter represent two deep mysteries of the recent world view. There however exists an amazing possibility that there might be a close connection between these mysteries.

Do Bohr rules apply to astrophysical systems?

D. Da Rocha and Laurent Nottale have proposed that the Schrödinger equation, with the Planck constant hbar replaced with what might be called the gravitational Planck constant hbar_gr = GmM/v_0 (hbar=c=1), could apply to astrophysical systems. v_0 is a velocity parameter having the value v_0 =~ 145 km/s, giving v_0/c = 4.6×10^(-4). This is rather near to the peak orbital velocity of stars in galactic halos. Also subharmonics and harmonics of v_0 seem to appear. The support for the hypothesis coming from empirical data is impressive. It is surprising that findings of this caliber have not received any attention in popular journals while the latest revolutions in M-theory gain all possible publicity: I heard of the article by accident from Victor Christianto, to whom I am deeply grateful.

Is dark matter in astroscopic quantum state?

Nottale and Da Rocha believe that their Schrödinger equation results from a fractal hydrodynamics. Many-sheeted space-time however suggests that astrophysical systems are not only quantum systems at larger space-time sheets but correspond to a gigantic value of the gravitational Planck constant. The gravitational (ordinary) Schrödinger equation would provide a solution of the black hole collapse (IR catastrophe) problem encountered at the classical level. The basic objection is that astrophysical systems are extremely classical whereas TGD predicts macrotemporal quantum coherence in the scale of the life time of gravitational bound states.
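The Da Rocha-Nottale rule above is easy to sanity-check numerically (a back-of-envelope sketch of my own; the constants and the quantum number assignments n = 3..6 for Mercury through Mars are taken from Nottale's published fits, not from this post). Bohr quantization m v r = n hbar_gr with hbar_gr = GmM/v_0 and circular orbits v^2 = GM/r gives radii r_n = n^2 GM/v_0^2, independent of the planet's mass m:

```python
# Sanity check of the Bohr-orbit radii implied by hbar_gr = G*m*M/v_0.
# Constants and the n-assignments for Mercury..Mars follow Nottale's
# published fits; they are assumptions of this sketch.

G_M_SUN = 1.327e20   # G*M_sun in m^3/s^2
V0 = 1.45e5          # velocity parameter, ~145 km/s
AU = 1.496e11        # astronomical unit in m

def bohr_radius_au(n):
    """r_n = n^2 * G*M / v_0^2, from m*v*r = n*hbar_gr and v^2 = G*M/r.
    The planet mass m cancels because hbar_gr is itself proportional to m."""
    return n ** 2 * G_M_SUN / V0 ** 2 / AU

observed_au = {3: 0.387, 4: 0.723, 5: 1.000, 6: 1.524}  # Mercury..Mars
for n, r_obs in observed_au.items():
    print(f"n={n}: predicted {bohr_radius_au(n):.3f} AU, observed {r_obs} AU")
```

With these inputs Mercury and Mars come out within a couple of percent and Venus and Earth within about 5-7%, consistent with the rough quality of fit reported for this model.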
The resolution of the problem, inspired by the TGD based theory of living matter, is that it is the dark matter at the larger space-time sheets which is quantum coherent in the required time scale. I have proposed already earlier the possibility that the Planck constant is quantized and the spectrum is given in terms of logarithms of Beraha numbers B_n = 4cos^2(pi/n): the lowest Beraha number B_3 = 1 is completely exceptional in that it predicts an infinite value of the Planck constant. The inverse of the gravitational Planck constant could correspond to a gravitational perturbation of this: 1/hbar_gr = v_0/GMm. The general philosophy would be that when the quantum system becomes non-perturbative, a phase transition increasing the value of hbar occurs to preserve the perturbative character, and at the transition n=4 --> 3 only the small perturbative correction to 1/hbar(3) = 0 remains. This would apply to QCD and to atoms with Z > 137 as well. TGD predicts correctly the value of the parameter v_0 assuming that cosmic strings and their decay remnants are responsible for the dark matter. The harmonics of v_0 can be understood as corresponding to perturbations replacing cosmic strings with their n-branched coverings so that the tension becomes n^2-fold: much like the replacement of a closed orbit with an orbit closing only after n turns. A 1/n-sub-harmonic would result when a magnetic flux tube splits into n disjoint magnetic flux tubes.

Planetary system as a testing ground

The study of inclinations (tilt angles with respect to the Earth's orbital plane) leads to a concrete model for the quantum evolution of the planetary system. Only a stepwise breaking of the rotational symmetry and angular momentum Bohr rules plus Newton's equation (or geodesic equation) are needed, and the gravitational Schrödinger equation holds true only inside the flux quanta for the dark matter.
• During the pre-planetary period dark matter formed a quantum coherent state on the (Z^0) magnetic flux quanta (spherical shells or flux tubes). This made the flux quantum effectively a single rigid body with rotational degrees of freedom corresponding to a sphere or circle (full SO(3) or SO(2) symmetry).

• In the case of the spherical shells associated with the inner planets, the SO(3) --> SO(2) symmetry breaking led to the generation of a flux tube with the inclination determined by m and j, and a further symmetry breaking, kind of an astral traffic jam inside the flux tube, generated a planet moving inside the flux tube. The semiclassical interpretation of the angular momentum algebra predicts the inclinations of the inner planets. The predicted (real) inclinations are 6 (7) resp. 2.6 (3.4) degrees for Mercury resp. Venus. The predicted (real) inclination of the Earth's spin axis is 24 (23.5) degrees.

• The v_0 --> v_0/5 transition necessary to understand the radii of the outer planets can be understood as resulting from the splitting of the (Z^0) magnetic flux tube into five flux tubes representing Earth and the outer planets except Pluto, whose orbital parameters indeed differ dramatically from those of the other planets. The flux tube has the shape of a disk with a hole glued to the Earth's spherical flux shell.

• A remnant of the dark matter is still in a macroscopic quantum state at the flux quanta. It couples to photons as a quantum coherent state, but the coupling is extremely small due to the gigantic value of hbar_gr scaling alpha by hbar/hbar_gr: hence the darkness. Note however that it is the entire condensate that couples to electromagnetism with this coupling; individual charged particles couple normally.

Living matter and dark matter

The most interesting predictions from the point of view of living matter are the following.

• The dark matter is still there and forms quantum coherent structures of astrophysical size.
In particular, the (Z^0) magnetic flux tubes associated with the planetary orbits define such structures. The enormous value of hbar_gr makes the characteristic time scales of these quantum coherent states extremely long and implies macro-temporal quantum coherence on human and even longer time scales.

• The rather amazing coincidences between basic bio-rhythms and the periods associated with the states of orbits in the solar system suggest that the frequencies defined by the energy levels of the gravitational Schrödinger equation might entrain with various biological frequencies such as the cyclotron frequencies associated with the magnetic flux tubes. For instance, the period associated with the n=1 orbit in the case of the Sun is 24 hours within the experimental accuracy for v_0: the duration of a day on Earth and, in a good approximation, also on Mars! A second example is the mysterious 5 second time scale associated with the Comorosan effect.

Indeed, the basic assumption of TGD inspired quantum biology is that the "electromagnetic bodies" associated with living systems are the intentional agents. This would conform with the idea that it is dark matter that makes ordinary matter living by acting as a quantum controlling agent. Already now there exists a rather detailed theory about how these electromagnetic (or more generally, field) bodies use the biological body as a motor instrument and sensory receptor. For instance, the basic mechanisms of metabolism would involve a flow of matter between space-time sheets liberating energy quanta defining universal metabolic energy currencies, the same everywhere in the Universe and having nothing to do with the details of living systems. The strange time delays of consciousness observed first by Libet suggest that the size of the field body is at least of the order of Earth size, as also the frequency scale of EEG suggests (EEG would be involved with communications between the magnetic body and the biological body).
For more details see the chapter "TGD and Astrophysics". For the notion of electromagnetic body see the relevant chapters of the book Genes, Memes, Qualia, and Semitrance.

How to Put an End to the Suffering Caused by Path Integrals

Path integrals have caused a lot of suffering amongst theoretical physicists. Lubos Motl gives a nice summary of Wick rotation, used quite generally as a trick to give some meaning to these poorly defined objects which have caused so much frustration. The idea of Wick rotation is to define the path integrals of quantum field theory in Minkowskian space M^4 by replacing M^4 temporarily with Euclidian space E^4, calculating the integral there as a Euclidian functional integral having more meaning, and returning back to M^4 by analytic continuation in various parameters such as the momenta of particles. The trick has also been applied in the hope of making sense of the path integral of General Relativity, as well as in string models. I have never liked the trick, not because it is a trick, but for the very fact that this trick is needed at all. Something must be fatally wrong at the fundamental level. To see what this something might be, one must recall what Feynman did long ago.

How does one end up with the path integral?

The path integral approach was abstracted by Feynman from the unitary time evolution operator by decomposing the time evolution into a product of an infinite number of infinitesimally short time evolutions. After this, "obvious" generalizations of the formalism lacking a real mathematical justification were made. Despite all the work done it can be safely stated that the notion of path integral does not mathematically exist. The tricky definition of the functional integral through Wick rotation transforming it to a functional (I will drop the attribute Euclidian in the sequel) integral is certainly not enough for a mathematician.
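The decomposition Feynman started from, the unitary evolution written as a product of many short evolutions, is the Lie-Trotter product formula, which is easy to check numerically for finite matrices. The 2x2 matrices below are arbitrary non-commuting examples of my own choosing, not anything from the text:

```python
# Lie-Trotter sketch: exp(A+B) = lim_N (exp(A/N) exp(B/N))^N.
# A and B are arbitrary non-commuting 2x2 matrices (illustrative choice).

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mat_scale(X, c):
    return [[c * X[i][j] for j in range(2)] for i in range(2)]

def mat_exp(X, terms=30):
    # Plain Taylor series; adequate for the small matrices used here.
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_scale(mat_mul(term, X), 1.0 / k)
        result = mat_add(result, term)
    return result

A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0, 0.0], [1.0, 0.0]]   # A and B do not commute

exact = mat_exp(mat_add(A, B))
N = 1000
step = mat_mul(mat_exp(mat_scale(A, 1.0 / N)), mat_exp(mat_scale(B, 1.0 / N)))
trotter = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(N):
    trotter = mat_mul(trotter, step)

err = max(abs(exact[i][j] - trotter[i][j]) for i in range(2) for j in range(2))
print(f"max deviation for N={N}: {err:.2e}")
```

The deviation shrinks roughly as 1/N, which is exactly the sense in which the product of short evolutions recovers the full evolution; the path integral is the formal infinite-dimensional limit of this construction.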
I hasten to add that even the functional integrals are deeply problematic, since the introduction of local interactions automatically induces infinities, and only in the case of so called renormalizable theories does a prescription exist for getting rid of these infinities.

What are the implicit philosophical ideas behind the path integral formulation?

When the best brains have been unable to give a real meaning to the notion of path integral despite about six decades of work, it is time to ask what might be behind these difficulties and whether it could relate to some cherished philosophical assumptions.

a) Feynman's approach starts from Hamiltonian quantization and the notion of time is that of Newtonian mechanics. The representability of the unitary time evolution operator as a sum over paths is natural in this context. No absolute time exists in the world of Special Relativity, so there are reasons to get worried: the representation might not be necessary nor even sensible in the Minkowskian context.

b) The sexy idea about the sum over all histories with classical physics identified as the history corresponding to the stationary phase might be simply wrong. Even Feynman could sometimes be wrong, believe it or not! One can quite well consider some other, more sensible, approach to define S-matrix elements.

c) Infinities are the basic problem of modern physics and are present for both path and functional integrals. Local divergences are practically always present as one tries to make a free theory interacting by introducing local interactions consistent with classical field theory. The basic assumption behind locality is that fundamental particles are pointlike. In string models this assumption is given up, and there are indeed reasons to believe that superstrings of various kinds allow a perturbation theory free of infinities. The unfortunate fact is that this perturbation series very probably does not converge to anything well-defined and is only an asymptotic series.
The now-disappearing hope was that M-theory could resolve this problem by providing a non-perturbative approach to strings.

d) In the perturbative approach the functional integrals give rise to Gaussian determinants, which are typically formally infinite. They can be eliminated but are aesthetically very awkward. A TOE should be maximally aesthetic!

These observations do not lead us very far but give some hints about what might go wrong. Perhaps the entire idea about a sum over all possible paths with classical physics resulting via the stationary phase approximation is utterly wrong. Perhaps the idea about space-time-local interactions is wrong, and perhaps higher-dimensional fundamental objects might allow one to get over the problems.

Neither Hamiltonian formalism nor path integral works in TGD

When I started to develop the mathematical theory around the basic idea that space-times can be regarded as 4-dimensional surfaces in H = M^4 x CP_2, I soon learned that the perturbative approach fails completely. Indeed, it would be natural to construct a perturbation theory around canonically imbedded M^4, but for the only reasonable candidate for the action, Kähler action, the functional power series vanishes up to third order at M^4, so that the kinetic terms defining propagators vanish identically. For the same reason also the Hamiltonian formalism fails completely. This is the case much more generally, and the enormous vacuum degeneracy (any 4-surface whose CP_2 projection belongs to an at most 2-D Lagrange manifold is a non-deterministic vacuum extremal) kills all hopes of conventional quantization.

Geometrization of quantum physics as a solution to the problems

This puzzling state of affairs led to the idea that if quantization is not possible, one should not quantize! This idea grew gradually into the vision that quantum states correspond to the modes of completely classical spinor fields of an infinite-dimensional configuration space CH of 3-surfaces, the world of classical worlds.
This allows also the geometrization of fermionic statistics and super-conformal symmetries in terms of gamma matrices associated with the Kähler metric. The breakthrough came from the realization that general coordinate invariance in the 4-dimensional sense is the key requirement. The obvious problem is that you have only 3-dimensional basic objects but you want 4-dimensional Diff invariance. Obviously the very definition of the configuration space geometry should assign to a given 3-surface X^3 a unique four-surface X^4(X^3) for 4-D general coordinate transformations to act on.

What would be the physical interpretation of this? X^4(X^3) defines the classical physics associated with X^3. Actually something more: X^4(X^3) is an analog of a Bohr orbit since it is unique, so that one can expect a quantization of various classical observables. Classical physics in the sense of Bohr orbitology would become part of quantum physics and of configuration space geometry. This is certainly something totally new and would mean a partial return from the days of Feynman to the good old days of Bohr, when everything was still understandable and concrete.

There are also other implications. Oscillator operators are the essence of quantum theory and can be geometrized only if the configuration space has a Kähler metric defined by a so called Kähler function. Since classical physics should be coded by this Kähler function, it should be defined by a preferred extremal X^4(X^3) of some physically meaningful action principle. The so called Kähler action, which is formally Maxwell action for the CP_2 Kähler form induced to the space-time surface, is the unique candidate. The first guess is that X^4(X^3) could be identified as an absolute minimum of Kähler action. This is however a somewhat questionable option, since there is no lower bound for the value of Kähler action, and if it gets negative and infinite, the vacuum functional defined as the exponent of the Kähler function vanishes.
Indeed, it took 15 years to learn that this need not be quite the correct definition. A candidate for a more realistic identification came from a proposal for a general solution of the field equations in terms of so called Kähler calibration. The magnitude of Kähler action would be minimized separately in regions where the Lagrangian density L_K has a definite sign. This means that X^4(X^3) is as near as possible to a vacuum extremal. The Universe is a maximally lazy energy saver! By minimizing the energy of the solution it might be possible to fix the time derivatives of the imbedding space coordinates at X^3 in order to find X^4(X^3) by solving the partial differential equations as an initial value problem at X^3. A considerable reduction of computational labor. This is of extreme importance, and even more so because Kähler action does not define a fully deterministic variational principle. There are indeed hopes of understanding the theory even numerically!

Generalized Feynman diagrams as computations/analytic continuations

A generalization of the notion of Feynman diagram emerging in the TGD framework replaces the sum over classical paths with what might be regarded as a computation or analytic continuation. The first observation is that the path integral over all space-time surfaces with a fixed collection of 3-surfaces as boundary does not make sense in the TGD framework. The sum reduces to a single surface X^4(X^3), since classical physics in the sense of Bohr's orbitology is a quintessential part of configuration space geometry and quantum theory. The classical world is no longer identified as a path with a stationary phase. This suggests a completely different approach to the notion of Feynman diagram. It however took quite a long time before I realized how to formulate this approach more precisely. The idea came when I constructed a TGD inspired model for topological quantum computation.
In topological quantum computation braids are the basic structures, and the quantum computation is coded into the knotting and linking of the threads of the braid. This leads to a view in which generalized Feynman diagrams do not represent a sum over all classical paths but represent something analogous to computations, with vertices representing some fundamental algebraic operations. A given computation can be carried out in very many equivalent manners, and there always exists a minimal computation. In the language of generalized Feynman diagrams this would mean that diagrams with loops are always equivalent to tree diagrams. The summation over loops would obviously be multiple counting in this framework. This would be nothing but a far-reaching generalization of the duality symmetry which originally led to string models. I have formulated this generalization in terms of Hopf (ribbon-) algebras here and in a different manner here.

That there are several equivalent diagrams would conform with the non-determinism of Kähler action implying several equivalent space-time surfaces having the given 3-surfaces as boundaries. This of course correlates directly with the fact that the functional integral and canonical quantization fail completely. The generalized Feynman diagrams could also be interpreted as space-time counterparts for different analytic continuations of configuration space spinor fields (classical spinor fields in the world of classical worlds) from a sector of configuration space with a given 3-topology to another sector with a different topology (initial and final states of a particle reaction in the language of an elementary particle physicist). This continuation can be performed in very many manners, but the final result is always the same, just as in the case of equivalent computations.

Getting rid of standard divergences

It is possible to get rid of path integrals in the TGD framework, but not of the functional integral over the infinite-dimensional world of classical worlds.
This integration means performing an average over these well-defined generalized Feynman diagrams, one might say over predictions of finite quantum field theories. The functional integral in question could bring back the basic difficulties, but it does not.

a) The vacuum functional over quantum fluctuating degrees of freedom defining the functional integral is completely analogous to a thermal partition function defined as an exponent of the Hamiltonian in thermodynamics at a critical temperature. Kähler coupling strength is analogous to a critical temperature, which means that the values of the only free parameter of the theory are predicted, as they should be in any respectable TOE. The good news is that the Kähler function is a non-local functional of the 3-surface X^3. Hence the local divergences unavoidable in any local QFT are absent. If one tried to integrate over all X^4, one would have Kähler action and locality, and all the problems of the standard approach would be magnified since the action is extremely non-linear.

b) The vacuum functional is the exponent of the Kähler function, and in perturbation theory the configuration space contravariant metric becomes the propagator. The Gaussian determinant is the inverse of the metric determinant, and these two ill-defined determinants neatly cancel each other, so that also the aesthetics is perfect! Note that the coefficient of the exponent of the Kähler function is also fixed. Further good news is that there are hopes that the functional integral might be carried out exactly by performing perturbation theory around the maxima of the Kähler function. These hopes are stimulated by the fact that the world of classical worlds is a union of symmetric spaces, and for a symmetric space all points are metrically equivalent. In the finite-dimensional case there are a lot of examples of the occurrence of this phenomenon.
The conclusion is that the standard divergences are not present and that this result is basically due to a new philosophy rather than some delicate cancellation mechanism.

What about zero modes?

Is there something that could still go wrong? Yes. The existence of the configuration space metric requires that it is a union over infinite-dimensional symmetric spaces labelled by zero modes, whose contribution to the CH line element vanishes. An infinite union is indeed in question: if CH reduced to a single symmetric space, a 3-surface with the size of a galaxy would be equivalent to a 3-surface associated with an electron. The zero modes characterize classical degrees of freedom: shape, size, and the induced Kähler form defining a classical Maxwell field on X^4(X^3).

In zero modes there is no proper definition of the functional integral. Here, however, quantum measurement theory comes to the rescue. Zero modes are non-quantum-fluctuating degrees of freedom and thus behave like genuine classical macroscopic degrees of freedom. Therefore a localization in these degrees of freedom is expected to occur in each quantum jump as a counterpart of quantum measurement. These degrees of freedom should also be correlated in a one-one manner with quantum fluctuating degrees of freedom, like the pointer of a measurement apparatus with the direction of electron spin. A kind of duality between quantum fluctuating degrees of freedom and zero modes is required. We would experience the macroworld as completely classical because each moment of consciousness, identifiable as a quantum jump, makes it classical. It is made non-classical again during the unitary U process stage of the next quantum jump. Dispersion in zero modes, localization in zero modes, dispersion in zero modes, .... Like a Djinn getting out of the bottle and representing a very long list of classical wishes of which just one is realized.
With this complete localization, or localization to a discrete union of points in zero mode degrees of freedom, S-matrix elements become well defined. Note however that the most general option would be a localization into finite-dimensional symplectic subspaces of zero modes in each quantum jump. The reason is that zero modes allow a symplectic structure, and thus all possible finite-dimensional integrals are well defined using the exterior powers of the symplectic form as the integration measure.

What about renormalization?

The elimination of infinities relates closely to the renormalization of coupling constants, and one could argue that the proposed beautiful scenario is in conflict with basic experimental facts. This is not the case if Kähler coupling strength has an entire spectrum of values labelled by primes or a subset of primes labelling p-adic length scales. In this picture p-adic primes label the p-adic effective topologies of non-deterministic extremals of Kähler action. p-Adic field equations possess an inherent non-determinism, and the hypothesis is that this non-determinism gives rise, in an appropriate length scale, to fractality characterized by an effective p-adic topology such that the prime p is fixed by the constraint that the non-determinism of Kähler action corresponds to the inherent p-adic non-determinism in this length scale range. The highly non-trivial prediction is that quantum non-determinism is not just randomness, since p-adic non-determinism involves long range correlations due to the fact that in the p-adic topology the evolution is continuous. The proposal is that the long range correlations of locally random looking intentional/purposeful behavior could correspond to p-adic non-determinism with p characterizing the "intelligence quotient" of the system. This is a testable prediction. The coupling constant evolution is replaced by the p-adic length scale dependence of Kähler coupling strength and of the other coupling constants determined by it.
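The statement that p-adic continuity brings long-range correlations can be made concrete with the p-adic norm itself: two integers whose difference is divisible by a high power of p are p-adically close even when they are far apart on the real line. A minimal sketch of this standard piece of mathematics (not specific to TGD):

```python
def p_adic_norm(x, p):
    """|x|_p = p^(-v), where p^v is the largest power of p dividing x; |0|_p = 0."""
    if x == 0:
        return 0.0
    v = 0
    x = abs(x)
    while x % p == 0:
        x //= p
        v += 1
    return p ** (-v)

p = 2
a, b = 1, 1 + 2**20           # real distance over a million
c, d = 1, 2                   # real distance one
print(p_adic_norm(a - b, p))  # 2^-20: p-adically very close
print(p_adic_norm(c - d, p))  # 1: p-adically far apart
```

This inversion of "near" and "far" is what allows p-adically continuous dynamics to correlate points that look widely separated in the real topology.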
The emergence of the analogs of loop corrections might be understood if there is a natural but non-orthogonal state basis labelled by quantum numbers which allow a natural grading. The orthogonalized state basis would be obtained by a Gram-Schmidt orthonormalization procedure respecting the grading. Orthonormalization introduces a cloud of virtual particles, and the dependence of this Gram-Schmidt cloud on the prime p would induce the TGD counterpart of renormalization group evolution.

It is clear that the classical non-determinism of Kähler action is the red thread of the proposed vision. By quantum classical correspondence the space-time surface is not only a representation for a quantum state but even for the final states of a quantum jump sequence. Classical non-determinism would thus correspond to the space-time correlate of quantum non-determinism.

Matti Pitkänen

Sunday, February 27, 2005

Melancholic Moods and Road to Reality

These periods of stagnation are difficult to tolerate. You have been experiencing a flow of mathematical ideas continuing for more than a year with only brief periods of recovery from physical exhaustion. Then you lose the contact. No ideas pop up. You have been asked to write articles but you cannot do it: a passive organizing of existing material into articles simply cannot motivate you, and you cannot develop enough self-discipline to do this by brute force. Chess problems are your latest invention in the war against depression: your brain gets depressed unless it receives its daily portion of problems. You also write these melancholic musings to your blog page to stay sane. You begin to feel more and more strongly your role in the society: an unemployed academic village fool, trying to survive with the minimum possible unemployment money. Without any future and thus without any hope.
The academic world of Finland will never forgive you for having carried out without their permission a lifework summarized in these four 1000-page books which have become your identity. The punishment is the cruellest possible: they have stolen your future. You feel that you have completely lost your faith in humanity, but at the same time know that you should be able to cheat yourself into believing in ultimate justice. Fear fills you: how could you recover some of this naive and healthy belief that all power holders of science are not gangsters.

In this mood you open Monday's email box and ... experience something completely unexpected. An email tells that Sir Roger Penrose mentions in his newest book "Road to Reality" your work relating to p-adic physics. Although you have talked a lot about name magic, you cannot hide your happiness. You find it really consoling that there are also honest and intelligent people in this cruel, and you notice it now, beautiful world. You know that this kind of gesture has a miraculous effect: some colleague might lower himself to touch your work. Even better, this book will inspire many young people who still have an open mind. While looking out you see that the Sun is almost shining. You hope to have some day the opportunity to read Road to Reality yourself, and remember that there was a discussion in Not-Even-Wrong about Road to Reality.

Matti Pitkanen

Saturday, February 26, 2005

In today's Not-Even-Wrong I learned that M-theory God Ed Witten has followed his brother Matt Witten's example and is now a TV writer(;-)! The TV series Numb3rs is a very ambitious project. The basic goal is to fight against the anti-intellectualism which has got wings during Bush's era, and to attack the stereotype of the mathematician as a kind of super book-keeper and super-calculator with zero real-life intelligence.
A TV series in which mathematicians solve crimes is an ingenious choice, since detectives must be real-life-intelligent even if they are mathematicians. The team works with real mathematicians at Caltech (as you see they look very hippie-like), since the goal is to be as authentic as possible. Even the formulas on the blackboard must be sensible, so that even a mathematician can enjoy the series without fear of sudden strong visceral reactions. Ed Witten got a manuscript to read and proposed an episode in which a rogue mathematician proves the Riemann Hypothesis to destroy internet security.

My interest is keen since I have proposed a proof, or more cautiously A Strategy for a Proof of Riemann Hypothesis, which has been published in Acta Math. Univ. Comeniae, vol. 72. I have also proposed a TGD inspired conjecture about the zeros of Zeta. The postulate is that real number based physics of matter and the various p-adic physics (one for each prime p) describing correlates of cognition are obtained by algebraic continuation from rational number based physics. This translates into mathematics the idea that cognitive representations are mimicries of reality, and that cognitive representation and reality meet each other in a finite number of rational points. This is just what happens in the numerical modelling of the real world, since we can represent only rationals using even the best computers. This vision leads to concrete conjectures about the number theoretical anatomy of the zeros of Riemann Zeta, which appear in a fundamental role in quantum TGD. The conformal weights of the so called super-canonical algebra creating physical states are suitable combinations of zeros of Zeta. The conjecture is the following: for any zero z = 1/2 + iy of Zeta on the critical line the numbers p^(iy) are algebraic numbers for every prime p. Therefore any number q^(iy) is an algebraic number for any rational number q.
This assumption guarantees that the expansion of Zeta makes sense also in various p-adic senses for z = n + 1/2 + iy. A related conjecture is that ratios of logarithms of rationals are rational: this hypothesis could in principle be tested numerically by looking at whether ratios of this kind have periodic expansions in powers of any chosen integer n > 1. I would be happy if I had even a slight gut feeling about how the "Strategy for a Proof of Riemann Hypothesis" might relate to Internet safety. Here I meet the boundaries of my narrow mathematical education. So, at this moment it seems that I will not be a notable risk to Internet safety. A word of warning is however in order: TGD will certainly become a safety risk for M-theory: sooner or later;-)!

Matti Pitkanen

Parody as a means to see what went wrong

The world of science is a strange world. You readily see that its inhabitants are strangely twisted creatures having somehow surreal shapes. You also recognize that all of them have, in some difficult-to-define sense, lost their way. You gradually begin to realize what the problem might be. It seems that these creatures have lost themselves in a forest. This forest is not an ordinary forest. No. The trees of this forest are concepts and expressions of language. It is obvious that these meanderers in the forest have forgotten where they came from and what they were. They do not have the familiar five senses anymore; they have replaced reality with symbols. These creatures without senses look depressed and unhappy. What is left of real life are occasional bursts of aggression. Clearly, they can still hate and be bitter, and do so intensively, but what has happened to love and trust and joy? Something has gone badly wrong, and you would be happy if you could help them somehow, but you do not know what to do. Certainly it seems that it is not a good idea to go and tap on a shoulder and suggest a beer or two and a friendly chat.
You ask how it is possible that a happy gifted child became an angry and bitter academic identifying himself with his curriculum vitae. You gradually learn that this tragedy somehow relates to meanings. Clearly, these people have started to assign meanings to symbols of reality instead of reality. Life goes by as they have their endless bloody debates about priorities. You find it really heart-breaking to see how these tiny beings fight passionately about something which has absolutely no meaning for you or any healthy human being who has learned what are those few really significant things in life. How could even the best Zen guru teach these people the joy of carrying water and chopping firewood.

The superstring community certainly represents an extreme example of this anomalous association of meanings. It is its own micro-culture talking a strange stringy language expressing strange stringy concepts. This closed society of stringists is like a religious sect with its own difficult-to-understand rules and habits. In order to understand what I mean, try to imagine a community which has replaced spoken language with written scientific jargon, with every sentence ornamented with a minimum of one reference or footnote, and discussing via published articles rather than face-to-face.

W. Siegel, a string theorist himself but in a strangely ambivalent manner also an outsider, has produced hilarious parodies of typical superstring articles. Here is one example. Do not miss the other ones. The following piece of text made me laugh at science and myself, and I realized how insane this world (I do not pretend to exclude myself) is. My hope is that this text fragment might give even a reader without any background in science a healthy burst of laughter.

"The moonlight danced on her face as it reflected off the warm August waves. He watched her graceful body gradually emerge as she stepped closer to the shore.
As she lay down in the sand beside him he leaned close to whisper in her ear: "String theory is presently the most successful model to describe quantum gravity [1]. Unfortunately, like QCD before it, the proof of its relevance to the real world was shortly followed by the realization that it was impossible to calculate anything with it..."

Matti Pitkanen

Mersenne Primes and Censorship

Lubos Motl commented at his blog site on the largest known Mersenne prime, which is 2^24,036,583 - 1. This inspired me to write a comment copied below (I have added a couple of links and some detail).

....Not only Mersennes ....

Mersenne primes are in the Topological Geometrodynamics framework the most interesting primes, since they correspond to the most important p-adic length scales. Only Mersennes up to M_127 = 2^127 - 1 are physically interesting, since the next Mersenne corresponds to a completely super-astrophysical length scale. M_127 corresponds to the electron, whereas M_107 corresponds to the hadronic length scale (QCD length scale). M_89 corresponds to the intermediate boson length scale. There is an interesting number theoretic conjecture, due to Hilbert, that the iterated Mersennes M_{n+1} = M_{M_n} form an infinite sequence of primes: 2, 3, 7, 127, M_127, ... etc. Quantum computers would be needed to kill the conjecture. Physically the higher levels of this hierarchy could also be very interesting.

...but also Gaussian Mersennes are important in TGD Universe

Also Gaussian primes associated with complex integers are important in the TGD framework. Gaussian Mersennes defined by the formula (1±i)^n - 1 exist also and correspond to primes p of order 2^k, k prime. k=113 corresponds to the p-adic length scale of the muon and the atomic nucleus in the TGD framework. Neutrinos could correspond to several Gaussian Mersennes populating the biologically important length scales in the range 10 nanometers to 5 micrometers. k=151, k=157, k=163, and k=167 all correspond to Gaussian Mersennes.
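The claim that k = 113, 151, 157, 163, 167 give Gaussian Mersennes can be checked directly: compute the norm of (1+i)^k - 1 with exact integer arithmetic and test it for primality. The Miller-Rabin test below with fixed bases is probabilistic for numbers of this size, so treat the output as a strong check rather than a proof:

```python
def gaussian_mersenne_norm(k):
    """Norm (over the Gaussian integers) of (1+i)^k - 1, computed exactly."""
    re, im = 1, 0
    for _ in range(k):
        re, im = re - im, re + im   # multiply (re + i*im) by (1 + i)
    re -= 1
    return re * re + im * im

def is_probable_prime(n):
    # Miller-Rabin with fixed small-prime bases; deterministic only for
    # n below ~3.3e24, probabilistic (but extremely reliable) beyond that.
    if n < 2:
        return False
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

gm_exponents = [k for k in range(2, 171)
                if is_probable_prime(k) and is_probable_prime(gaussian_mersenne_norm(k))]
print(gm_exponents)
```

Running this, the exponents named in the text, 113 and the cluster 151, 157, 163, 167, all appear in the output, alongside the smaller Gaussian Mersenne exponents.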
There is evidence that neutrinos can appear with masses corresponding to several mass scales. These mass scales do not however correspond to the length scales just mentioned but to the scales k = 13^2 = 169, about 5 micrometers, and k = 173. The interpretation is that condensed matter neutrinos are confined by the long range classical Z^0 force predicted by TGD inside condensed matter structures at space-time sheets k = 151, ..., 167, and those coming from, say, the Sun are at larger space-time sheets such as k = 169 and k = 173. p-Adic mass calculations are briefly explained here and here, where also links to the relevant chapters of p-Adic numbers and TGD can be found.

That Gaussian Mersennes populate the biologically most interesting length scale range is probably not an accident. The hierarchical multiple coiling structure of DNA could directly correspond to these Gaussian Mersennes. The ideas about the role of Gaussian Mersennes in biology are discussed briefly here. For more details see the chapter Biological realization of self hierarchy of "TGD Inspired Theory of Consciousness...".

...and how Lubos Motl responded...

Here is Lubos Motl's response, which reflects his characteristic deep respect for intellectual integrity: "Matti - well, let me admit that I don't believe any of the things about "TGD" you wrote." The response was not surprising in view of what I have learned about Lubos Motl's typical behavioral patterns. I wonder whether the open censoring of ideas and opinions which do not conform with those of the hegemony relates to a change that has occurred during the regime of Bush. For instance, in a recent poll it was found that about one half of students agreed when asked whether the full freedom of speech guaranteed in the constitution should be restricted. Food for a worried thought! My response, which I unfortunately did not store on my computer, was related to the basic trauma suffered by theoretical physicists at the end of the era of hard-boiled reductionism.
It could be called Planck length syndrome, and it manifests itself as desperate attempts to explain all elementary particle masses in terms of the Planck mass scale, although a first glance at the elementary particle mass spectrum demonstrates that there are obviously several mass scales involved: electron, mu, and tau or quarks simply cannot belong to the same multiplet of a symmetry group in any conceivable scenario based on a unifying gauge group. The p-adic length scale hypothesis resolves the paradox and allows one to understand, among other things, the basic mystery number 10^38 (about 2^127 - 1, the largest Mersenne prime of physical significance in human scales) of physics. Also a far-reaching generalization of the scope of what physics can describe emerges: p-adic physics is the physics of cognition, which must also be a part of a theory of everything. The response of Lubos is a schoolbook example of how difficult it is to communicate a new, radical, and working idea. Another typical and equally brilliant response would have been "You cannot be right because Witten would have discovered it before you!".

Matti Pitkanen

Friday, February 25, 2005

What UFOs could teach about fundamental physics?

Here is my comment about UFOs to the Not-Even-Wrong discussion group, where it was mentioned that Michio Kaku, string theorist and author of "Hyper-Space", has expressed publicly as his opinion that UFOs might be real. Leaving aside ontological considerations and the question of what one should think about people taking UFOs seriously, one could take UFOs as an inspiration for a thought experiment. Suppose for a moment that UFOs represent a real technology. According to the reports, UFOs seem to have a very small inertial mass (butterfly-like motions involving sudden accelerations and changes of direction of motion without producing any shock waves). A technology able to reduce dramatically the inertial mass of a material object would thus exist. What could this tell about fundamental physics?
A possible answer would be a modification of the Equivalence Principle: gravitational mass would be the absolute value of inertial mass, which can have both signs. One of the most obvious implications is an explanation for why gravitational energy is definitely not conserved in cosmological scales whereas there is no evidence for the non-conservation of inertial energy. The simplest cosmology would be one created from the inertial vacuum by energetic vacuum polarizations creating regions of positive and negative density of inertial mass. The 4-D universe replacing itself by a new one quantum jump by quantum jump would become possible, and the difficult philosophical problems formulated as questions like "What was the initial state of the Universe, and what were the initial values/densities of conserved quantities at the moment of the big bang?" would disappear. The observations motivating the anthropic principle would find a natural explanation: the universe has gradually quantum engineered itself so that the values of these constants are what they are.

The technological implications would also be interesting. By forming a tightly bound state of systems with positive and negative inertial mass, a large feather-light system could be created. Could UFOs utilize this kind of technology?

Accepting negative energies, one cannot avoid the questions of whether negative energy signals propagate backwards in (geometric) time and whether the phase conjugate light discovered in the seventies could be identified as signals of this kind. A positive answer would have quite interesting technological implications. Negative energy signals time-reflected as positive energy signals from time mirrors (lasers with population reversal, for instance) would allow communication with the geometric past. Our memory might be based on this mechanism: to recall memories would be to scan the brain of the geometric past using signals reflected in the time direction (rather than in a spatial direction, as in seeing in the ordinary sense).
Communications with the civilizations of the geometric future and past might become possible by a similar mechanism.

Matti Pitkanen

Saturday, February 19, 2005

Color confinement and conformal field theory

The discovery of number theoretical compactification has meant a dramatic increase in the understanding of quantum TGD. There are two ways to understand the theory.

• Number theoretic view: Space-time surfaces can be regarded as hyper-quaternionic 4-surfaces in the 8-dimensional hyper-octonionic space HO.

• Physics view: Space-time surfaces can be seen as 4-surfaces in the 8-D space M^4xCP_2 minimizing the so-called Kähler action, which essentially means that they minimize their non-commutativity as measured by the Lagrangian density of the Kähler action.

These views seem to be complementary, and at this moment the very existence of this duality (a conjecture, of course) is what has the strongest implications. A lot remains to be done in order to see whether the conjecture is indeed correct and what it really implies. At this moment I am trying to find out whether this duality, very reminiscent of various M-theory dualities, is internally consistent. One of the possible implications is the possibility to interpret TGD also as a kind of string theory, not in the usual sense of the word, but as a generalization of the so-called topological quantum field theories, where the notion of braids is central. Whether this duality is completely general or holds true only for selected space-time surfaces, such as space-time sheets corresponding to maxima of the Kähler function (most probable space-times) or space-time sheets representing asymptotic behavior, is an open question.

I have explained the duality in earlier posts and do not go into the details here. Suffice it to say that the so-called Wess-Zumino-Witten action for the group G_2, which as a Lie group is completely exceptional and acts as the automorphism group of octonions, suggests itself as a characterizer of the dynamics of these strings.
G_2 has the group SU(3) as a maximal subgroup, and SU(3) can be said to leave these strings invariant. The interpretation is as the color group, and the G_2/SU(3) coset theory is the natural guess for the dynamics; SU(3) indeed takes the role of the color gauge group. The so-called primary fields of the theory correspond to two color singlets, a triplet, and an antitriplet, and the natural guess is that they relate to leptons and quarks. Indeed, in the H picture the basic fields are lepton and quark fields, and all other particles are constructed from leptonic and quark like excitations. The beauty of this approach is that QCD might be replaced with an exactly solvable conformal field theory, which would also allow one to deduce how correlation functions change in hyper-octonion analytic transformations affecting the space-time surface.

There are however also objections against this picture.

a) The basic objection is that the G_2 Kac-Moody algebra contains triplet and anti-triplet generators, and the triplet generators commute to the anti-triplet. It is hard to imagine any sensible physical interpretation for these lepto-quark generators, whose commutation relations break the conservation of lepton and quark number. The point is, however, that the triplet generators affect e_1, and thus the S^6 coordinates, so that also the SU(3) subgroup acting as the isotropy group changes. Thus correlation functions involving these currents are not physically meaningful. Indeed, in a G/H coset theory only the H Kac-Moody currents appear naturally in correlation functions, since the construction involves a functional integral only over H connections.

b) If the 14-dimensional adjoint representation of G_2 appeared as a primary field, also 3 and \overline{3} lepto-quark like states, for which baryon and lepton number are not conserved, would appear in the spectrum. This is in conflict with the H picture. The choice k=1 for the Kac-Moody central charge provides however a unique manner to circumvent this difficulty.
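The way k=1 does the job can be stated as a one-line computation; this is my paraphrase of the standard WZW integrability argument, in the normalization where the highest root \theta has length squared 2:

```latex
% Level-k integrability bound for a highest weight \lambda
% (normalization: the highest root \theta satisfies (\theta,\theta)=2):
\[
  (\lambda,\theta) \;\le\; k .
\]
% Adjoint of G_2 (the 14): \lambda = \theta gives (\lambda,\theta) = 2 > 1,
% so the adjoint is excluded at k = 1.
% Fundamental 7 of G_2: (\lambda_7,\theta) = 1 \le 1, so the singlet
% and the 7 are the only surviving primaries.
```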
The integrability condition for the highest weight representations allows, for a given value of k, only the highest weights \lambda_R satisfying Tr(\phi \lambda_R)\leq k, where \phi is the highest root of the Lie algebra. Since the highest root has length squared 2, the adjoint representation is not possible as a highest weight representation of the k=1 WZW model, and the primary fields of the G_2 model are the singlet and the 7-plet, corresponding to the hyper-octonionic spinor field and defining in an obvious manner the primary fields 1+3+\overline{3} of the G_2/SU(3) coset model. The fusion rules for 1\oplus 7 correspond to octonionic multiplication. The absence of G_2 gluons saves us from lepto-quark like bosons, and the absence of SU(3) gluons can be interpreted as the HO counterpart of the fact that all particles, in particular gluons, can be regarded as bound states of fermions and anti-fermions in the TGD Universe. This picture also conforms with the claims that the 3+\overline{3} part of the G_2 algebra does not allow a vertex operator construction whereas SU(3) allows the construction in terms of two free bosonic fields. These fields would naturally correspond to the two X^4 directions transversal to the string orbit defined by 1 and e_1. One could say that strings in X^4 are able to represent the color Kac-Moody algebra and that SU(3) is inherent to 4-dimensional space-time.

c) The third objection is that conformal field theory correlation functions obeying simple scaling laws are not consistent with the exponentially decreasing correlation functions suggested by color confinement. A resolution of the paradox could be based on the role of classical gravitation. At light-like causal determinants the time-like component g_{tt} of the induced metric vanishes, meaning that the classical gravitational field is very strong. Hence also the normal component g_{nn} of the induced metric is expected to become very large, so that a hadron would look like the interior of a black hole.
A finite X^4 proper time for reaching the outer boundary of the hadronic surface can correspond to a very long M^4 time, and a finite M^4 distance from the boundary can mean a very long distance along the hadronic space-time surface. Hence quarks and gluons can behave as almost free particles when viewed from the hadronic space-time sheet but look confined when seen from the imbedding space. If the hyper-quaternionic coordinates appearing in the correlation functions correspond to internal coordinates of the space-time surface, the correlation functions, when expressed in terms of M^4 coordinates, can look confining.

For more details see the chapter TGD as a Generalized Number Theory: Quaternions, Octonions, and their Hyper Counterparts.

Matti Pitkanen

Friday, February 18, 2005

Approaching the end of an epoch

The following quotes from the page "Suppression, Censorship and Dogmatism in Science" serve as a good introduction to the recent sad situation in science. Do not miss the chronological ordering.

"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense." Buddha (563BC-483BC).

"There must be no barriers to freedom of inquiry. There is no place for dogma in science. The scientist is free, and must be free to ask any question, to doubt any assertion, to seek for any evidence, to correct any errors." J. Robert Oppenheimer, quoted in Life, October 10, 1949.

"The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote.... Our future discoveries must be looked for in the sixth place of decimals." Albert Abraham Michelson, speaking at the University of Chicago, 1894.

"The great era of scientific discovery is over....
Further research may yield no more great revelations or revolutions, but only incremental, diminishing returns." Science journalist John Horgan, in The End of Science (1997).

Suppression in science

Anyone who has devoted his life to some new idea has experienced the arrogance and cruelty of those who are in power. This applies also to me. I have devoted 26 years of my life to a revolutionary idea, and I have summarized the resulting world view in four books totalling about 5000 pages full of original ideas developed in a highly detailed manner. Both in quality and quantity this output is exponentially higher than that of an average professor. One might think that with this background it would not be difficult to find funding for my research, which is still continuing. Reality, however, loves paradoxes. There is absolutely no hope of getting any support in my home country for my work, and I do not believe that the situation would be better elsewhere. Even worse: the average theoretical physicist colleague refuses to read anything that I have written, and there is absolutely no way to communicate these ideas to a mainstream career builder living in a typical academic environment. I am doomed to be a crackpot. Although the American Mathematical Society lists Topological Geometrodynamics in its Mathematical Subject Classification Tables, something which can be regarded as a rare honor for a physicist, I am a crackpot. It is easier to change water into wine than to change the conviction of an average research person receiving a monthly salary in a university about my crackpot-ness.

I am not of course alone. Suppression in science has become the rule rather than a rare exception, and even Nobelists like Brian Josephson are punished with censorship for the courage of talking aloud about phenomena not understood within the confines of existing dogmas. The victims of this suppression cannot publish anything, and even e-print archives are closed to those who think in a new manner.
These people have started to organize. For instance, there is a web page where scientific dissidents tell their personal horror stories.

End of an epoch as an era of moral degeneration

One might try to interpret this sad state of theoretical physics (and of science in general, of course), which deeply affects also experimental physics, in particular particle physics, using notions like the sociology of science, talking about the fierce competition for positions and research money which implies that people with keen brains but without sharp elbows are doomed to be destroyed. I think however that there is something deeper involved. My conclusion from what has happened during these decades, and from personal work on a wide range of topics ranging from particle physics to cosmology to biology to consciousness, is that we are living at the end of an epoch, not only in science but also in the evolution of western civilization. Epochs are always characterized by some sacred philosophical ideas, and when they lose their explanatory and guiding power, the civilization breaks down and experiences a phase transition to a "New Age". The irony is that the end of an epoch is always seen as the final victory of the old dogmas (see the last quotations in the beginning of these musings!). In physics, only this, and a comparison with what happened a century ago, helps one understand how M-theory, regarded now by most professionals (privately, of course!) as a pathetic failure, is still successfully sold to the media as the final word about physics. Of course, things have begun to change rapidly. For instance, some time ago the physics department of Boston University made the decision that string models cannot be regarded as physics, so that string theorists must find their positions in the mathematics department. Do not take me wrongly: many of the mathematical ideas of string models are fantastic and will be pieces of future theories.
The problem is that an utterly wrong philosophy makes it impossible to build a physical theory based on these ideas.

In physics several philosophical dogmas have lost their power during the last century. A century ago, before the advent of quantum theory, the idea of a deterministic clockwork universe was the final truth. The attribute "Newtonian" is often attached to this clockwork. This does injustice to Newton, who was five centuries ahead of his time and regarded even planetary systems as living systems. This was described in some popular science document as "the dark side of Newton"! Although quantum physics in microscopic length scales turned out to be in conflict with the clockwork idea, and although the evidence for macroscopic quantum effects is increasing, the average physicist, at a temporal distance of a century from Bohr, still stubbornly refuses to consider the possibility that quantum effects might be important in our everyday life, and that we might not be deterministic machines. This collective silliness has meant that after the days of von Neumann nothing interesting has been said about quantum measurement theory, so that its paradoxes are still there, virtually untouched. What a gem for a brilliant theoretician these paradoxes could be, but how few are those who are ready to spend their life in science as crackpots! The sad consequence is that theoretical physicists have practically nothing to add to what they could have said about biology and consciousness before Bohr.

A second sacred and very dead idea is reductionism. String theories and M-theory amplify this idea to sheer madness. The dogma is that the whole of physics reduces to the Planck scale: a very short distance of about 10^(-35) meters. After a huge amount of theoretical activity during the last 20 years we have a theory which cannot predict anything even in principle, and the proponents of this theory are now seriously suggesting that we must accept this state of affairs since M-theory is the final theory!
I know...! I know that I have said this many times before, but this is so scandalous that it must be said aloud again and again. Of course, one need not go to the Planck length to see the failure of reductionism: again, the phenomenon of life tells us that reductionism is wrong. In fact, all transitions in the sequence quark physics --> hadron physics --> nuclear physics --> atomic physics --> molecular physics --> ... are ill-defined, and the belief that the lower levels predict the higher ones is just a belief without any justification. Despite this, most colleagues enjoying a monthly salary still repeat the liturgy which I heard for the first time in my student days: life is nothing but the Schrödinger equation applied to a very complex system. It is understandable that a person at the age of 20 years takes this platitude seriously, but it is unforgivable that professors of physics still repeat this kind of trash. The seeds of the new age are already here: fractality is now a relatively well established notion even in physics, and it means an entire hierarchy of scales, giving good hopes of expressing quantitatively how reductionism fails. A lot of mathematics, new to physics but already existing, is however needed. Unfortunately, and totally unexpectedly, for consistency of interpretation one cannot avoid talking about physical correlates of cognition once this mathematics is introduced. The transition to the New Age will thus be very painful, since cognition and consciousness do not exist for a physicist.

The next big idea: the average physicist familiar with special relativity identifies without hesitation the time of conscious experience with the fourth space-time coordinate. This despite the fact that a brief analysis shows that these notions are quite different.
The sole justification for their identification is the materialistic dogma: in the materialistic world order consciousness is a mere epiphenomenon and has no active role; everything about the contents of consciousness is known once the state of the clockwork is known. Despite all the paradoxes that this view implies, it still dominates even neuroscience, as I painfully learned during the decade that I devoted to the TGD inspired theory of consciousness, learning the basics of neuroscience in discussions and by reading.

Energy is dual to time, and the subjective time = geometric time problematics repeats itself as the poorly understood relationship between inertial energy and gravitational energy. Einstein's proposal was that they are one and the same thing, but even an academic person could privately wonder why it is possible that inertial energy is conserved exactly whereas gravitational energy is not. Of course, friendly Einstein casts so long a shadow that the academic person does not allow this kind of thought to pop up into his conscious mind.

The sociological side

There are also social factors involved, and these factors are amplified during a period of degeneration like the one we are living through. To make this more concrete, I will tell about my experiences in certain discussion groups related to string models and M-theory during the last half year. My motivation for spending (or rather wasting, as it turned out) my time in these groups was, besides satisfying my basic social needs, to learn what the general situation is, and perhaps even to communicate something: it is extremely frustrating to see that physics is in a state of deep stagnation and, this I can say without any hubris, to realize that one reason for this is simply that I have not used all my energy to communicate the bottleneck ideas which are the only way out of the dead end. To be specific, I have read the comments at the Not-Even-Wrong blog page administered by Peter Woit.
Half a year ago I was enthusiastic: this would be a page where the problems of M-theory and string models could be discussed openly, without repeating the usual liturgy starting with "M-theory is the only known quantum theory of gravitation...". It soon became clear, however, that the discussion group is not intended to serve as a forum for new ideas. The dream of returning to the good old days of quantum field theories seems to be the basic motivation for its existence, and any new idea is therefore regarded as an enemy. A month or so ago an open censorship was established, meaning that everything which is not about the topic of discussion is censored away. To stay on the "topic of discussion" is to repeat the already-too-familiar mantra-like arguments against M-theory. M-theory is dead, but it does not help to say this again and again: something more constructive is needed.

This discussion group has taught me a lot about how the difference between western and eastern communication cultures manifests itself. Typically the discussions are battles: people express their verbal skills by insulting each other personally in all imaginable manners. Open verbal sadism is completely acceptable and often regarded as a measure of intelligence. Paradoxically, I find that I myself take seriously Lubos Motl, a young, extremely aggressive M-theory fanatic who has been continually ranting and raving at Not-Even-Wrong. It is easy to disagree in a well-argued manner with most of what he says, and most of his arguments are cheap rhetoric, but still! Why do I pay attention to this empty verbalism? Sad to confess, I am part of this culture which confuses aggression with intellect. In Eastern cultures, to which we tend to assign labels like "New Age", both physical and intellectual integrity are basic values, and my dream is that the New Age would see these people continually insulting each other simply as what they are: uncivilized barbarians.
NAMEs are important ghostly participants in any academic discussion group. Again and again I find people asking whether it was this or that which Feynman/Gross/Witten... said, and what the URL is where it can be found. Somehow it seems completely impossible to be taken seriously unless one continually drops names and citations. With all respect to these brilliant NAMEs, one must admit that their often casual comments are just casual comments, and that a 65-year-old NAME probably does not have very interesting things to say about forefront science.

A further amusing feature of the sociology of academic discussion groups is the attitude towards "crackpots". "Crackpots" are by definition those who have something original to say and strong motivations to say it. The strategy for dealing with crackpots is the following. In the first stage the "serious researchers" do not notice the presence of the "crackpot" in any manner; he feels that he is just air. If this does not help, ironic and sadistic comments communicating the conviction that the "crackpot" is a complete idiot begin to appear. If even this does not help, the "crackpot" is told to leave the discussion group, since people want to have "serious discussions". If even this does not beat the visitor, his messages are censored away.

The irony is that also in this group the "crackpots" have been the most interesting participants. They have something interesting to say, they have a passion for understanding, they develop real arguments instead of cascades of insults and citations from authorities, and they have a strong urge to express clearly what they want to say. This cannot be said about many "respected" contributions, which often contain about ten typos and grammatical errors per line, and from which it is often completely impossible to figure out whether the attempt is to say something or just to teach by example that the writer is an analphabet.
One gloomy conclusion from these experiences about the sociology of science might be that organized good is impossible. Perhaps this is true to some degree. I can however imagine the feelings of those enlightened people who discovered the idea of gathering together intelligent people devoting their lives to science. Certainly they dreamed of spirited debates, endless discussions, openness of minds, evolution of ideas. It is not the fault of these visionaries that in a typical university people sitting in the same room for years know nothing about each other's work and could not be less interested, that hatred and envy are the dominant emotions of an average career builder, and that science has been transformed from a passion into a fierce competition, into a war in which the best colleague is a dead colleague. Perhaps the New Age, inspired by new Great Ideas and Ideals, will create a new kind of university in which people could live as ordinary simple people like to live: listening to, helping, and loving each other.

Matti Pitkanen

Infinite primes and physics as a generalized number theory

A layman might think that something which is infinite is also something utterly non-physical. The notion of infinity is however much more delicate, and it depends on topology whether things look infinite or not. Indeed, infinite primes have, besides p-adicization and the representation of space-time surfaces as hyper-quaternionic sub-manifolds of hyper-octonionic space, become a basic pillar of the vision of TGD as a generalized number theory.

1. Two views about the role of infinite primes and physics in the TGD Universe

Two different views about how infinite primes, integers, and rationals might be relevant in the TGD Universe have emerged.

a) The first view is based on the idea that infinite primes characterize quantum states of the entire Universe. 8-D hyper-octonions make this correspondence very concrete, since 8-D hyper-octonions have an interpretation as 8-momenta.
By quantum-classical correspondence, also the decomposition of space-time surfaces into p-adic space-time sheets should be coded by infinite hyper-octonionic primes. Infinite primes could even have a representation as hyper-quaternionic 4-surfaces of the 8-D hyper-octonionic imbedding space.

b) The second view is based on the idea that infinitely structured space-time points define space-time correlates of mathematical cognition. The mathematical analog of the Brahman=Atman identity would however suggest that both views deserve to be taken seriously.

2. Infinite primes and an infinite hierarchy of second quantizations

The discovery of infinite primes strongly suggested the possibility of reducing physics to number theory. The construction of infinite primes can be regarded as a repeated second quantization of a super-symmetric arithmetic quantum field theory. Later it became clear that the process generalizes so that it applies to quaternionic and octonionic primes and their hyper counterparts. This hierarchy of second quantizations means an enormous generalization of physics to what might be regarded as a physical counterpart of a hierarchy of abstractions about abstractions about.... The ordinary second quantized quantum physics corresponds only to the lowest-level infinite primes. This hierarchy can be identified with the corresponding hierarchy of space-time sheets of the many-sheeted space-time.

One can even try to understand the quantum numbers of physical particles in terms of infinite primes. In particular, the hyper-quaternionic primes correspond to four-momenta, and mass squared is prime-valued for them. The properties of 8-D hyper-octonionic primes motivate the attempt to identify the quantum numbers associated with CP_2 degrees of freedom in terms of these primes. The representations of the color group SU(3) are indeed labelled by two integers, and the states inside a given representation by color hyper-charge and iso-spin.

3.
Infinite primes as a bridge between quantum and classical

An important stimulus came from an observation inspired by algebraic number theory. Infinite primes can be mapped to polynomial primes, and this observation allows one to identify completely generally the spectrum of infinite primes, whereas hitherto it was possible to construct explicitly only what might be called generating infinite primes. This in turn led to the idea that it might be possible to represent infinite primes (integers) geometrically as surfaces defined by the polynomials associated with infinite primes (integers). Obviously, infinite primes would serve as a bridge between Fock-space descriptions and geometric descriptions of physics: quantum and classical. Geometric objects could be seen as concrete representations of infinite numbers, providing an amplification of infinitesimals to macroscopic deformations of the space-time surface. We see the infinitesimals as concrete geometric shapes!

4. Various equivalent characterizations of space-times as surfaces

One can imagine several number-theoretic characterizations of the space-time surface.

• The approach based on octonions and quaternions suggests that space-time surfaces might correspond to associative, or hyper-quaternionic, surfaces of the hyper-octonionic imbedding space.

• Space-time surfaces could be seen as absolute minima of the Kähler action.

The great challenge is to rigorously prove that these characterizations are equivalent.

5. The representation of infinite primes as 4-surfaces

The difficulties caused by the Euclidian metric signature of the number theoretical norm forced me to give up the idea that space-time surfaces could be regarded as quaternionic sub-manifolds of octonionic space, and to introduce complexified octonions and quaternions, obtained by extending the quaternionic and octonionic algebras by adding imaginary units multiplied by \sqrt{-1}. This spoils the number field property, but the notion of prime is not lost.
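The simplest instance of this construction, the hyper-complex numbers a + Eb with E = \sqrt{-1}·i so that E^2 = +1, already shows why the number theoretical norm becomes Minkowskian and why primality survives: the norm a^2 - b^2 is still multiplicative. A minimal sketch (my own illustration, not from the text):

```python
# Hyper-complex numbers: pairs (a, b) standing for a + E*b with E^2 = +1
# (E = sqrt(-1)*i in the complexified algebra). The norm a^2 - b^2 is
# Minkowskian rather than Euclidean, but remains multiplicative, which is
# what keeps the notion of prime alive.

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c)   # (a+Eb)(c+Ed), using E^2 = +1

def norm(x):
    a, b = x
    return a * a - b * b                    # Minkowskian norm

x, y = (3, 2), (5, 1)
assert norm(mul(x, y)) == norm(x) * norm(y)   # multiplicativity
print(norm(x), norm(y), norm(mul(x, y)))      # 5 24 120
```

Since the norm is multiplicative, a hyper-complex integer whose norm is an ordinary prime cannot factor non-trivially into integers of smaller norm, which is the sense in which "the notion of prime is not lost".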
The sub-space of hyper-quaternions, resp. hyper-octonions, is obtained from the algebra of ordinary quaternions, resp. octonions, by multiplying the imaginary part by \sqrt{-1}. The transition is the number theoretical counterpart of the transition from Riemannian to pseudo-Riemannian geometry performed already in Special Relativity. The notions of hyper-quaternionic and hyper-octonionic manifolds make sense, but it is implausible that H=M^4xCP_2 could be endowed with a hyper-octonionic manifold structure. Indeed, space-time surfaces are assumed to be hyper-quaternionic or co-hyper-quaternionic 4-surfaces of the 8-dimensional Minkowski space M^8 identifiable as the hyper-octonionic space HO. Since the hyper-quaternionic sub-spaces of HO with a fixed complex structure are labelled by CP_2, each (co-)hyper-quaternionic four-surface of HO defines a 4-surface of M^4xCP_2. One can say that a number-theoretic analog of spontaneous compactification occurs.

Any hyper-octonion analytic function HO --> HO defines a function g: HO --> SU(3) acting as the group of octonion automorphisms leaving a selected imaginary unit invariant, and g in turn defines a foliation of HO and H=M^4xCP_2 by space-time surfaces. The selection can be local, which means that G_2 appears as a local gauge group.

Since the notion of prime makes sense for the complexified octonions, it makes sense also for the hyper-octonions. It is possible to assign to an infinite prime of this kind a hyper-octonion analytic polynomial P: HO --> HO and hence also a foliation of HO and H=M^4xCP_2 by 4-surfaces. Therefore a space-time surface could be seen as a geometric counterpart of a Fock state. The assignment is not unique but determined only up to an element of the local octonionic automorphism group G_2 acting in HO and fixing the local choices of the preferred imaginary unit of the hyper-octonionic tangent plane. In fact, a map HO --> S^6 characterizes the choice, since SO(6) acts effectively as a local gauge group.
The construction generalizes to all levels of the hierarchy of infinite primes and produces representations also for the integers and rationals associated with hyper-octonionic numbers as space-time surfaces. A close relationship with algebraic geometry results, and the polynomials define a natural hierarchical structure in the space of 3-surfaces. By the effective 2-dimensionality naturally associated with infinite primes represented by real polynomials, 4-surfaces are determined by data given at partonic 2-surfaces defined by the intersections of 3-D and 7-D light-like causal determinants. In particular, the notions of genus and degree serve as classifiers of the algebraic geometry of the 4-surfaces. The great dream is of course to prove that this construction yields the solutions to the absolute minimization of the Kähler action.

6. Generalization of ordinary number fields: infinite primes and cognition

The introduction of infinite primes, integers, and rationals leads also to a generalization of the real numbers, since an infinite algebra of real units, defined by finite ratios of infinite rationals multiplied by the ordinary rationals which are their inverses, becomes possible. These units are not units in the p-adic sense: they have a finite p-adic norm which can differ from one. This construction generalizes also to the case of hyper-quaternions and -octonions, although non-commutativity, and in the case of octonions also non-associativity, pose technical problems. Obviously this approach differs from the standard introduction of infinitesimals in the sense that sum is replaced by multiplication, meaning that the set of real units becomes infinitely degenerate.

Infinite primes form an infinite hierarchy, so that the points of space-time and of the imbedding space can be seen as infinitely structured and able to represent all imaginable algebraic structures.
Certainly counter-intuitively, a single space-time point is even capable of representing the quantum state of the entire physical Universe in its structure. For instance, in the real sense surfaces in the space of units correspond to the same real number 1, and a single point, which is structure-less in the real sense, could represent arbitrarily high-dimensional spaces as unions of real units. For real physics this structure is completely invisible and is relevant only for the physics of cognition. One can say that the Universe is an algebraic hologram, and there is an obvious connection both with the Brahman=Atman identity of Eastern philosophies and Leibniz's notion of the monad. For more details see the chapter TGD as a Generalized Number Theory III: Infinite Primes.

Matti Pitkanen

Tuesday, February 15, 2005

Comment to Not-Even-Wrong

The discovery that strings in a fixed flat background could describe gravitation without any need to make the background dynamical was really momentous. The discovery should have raised an obvious question: How to generalize the theory to the physical 4-dimensional case by replacing string orbits with 4-surfaces? Instead, the extremely silly idea of making also the imbedding space dynamical emerged and brought back and magnified all the problems of general relativity, which one had hoped to get rid of. I have tried for more than two decades to communicate simple core ideas about an alternative approach but have found that theoretical physicists are too arrogant to listen to those without name or position. a) The fusion of special relativity with general relativity is achieved by assuming that space-times are 4-surfaces in M^4xCP_2. The known quantum numbers pop out elegantly from this framework. The topological complexity of space-time surfaces allows one to circumvent the objection that the induced metrics are too restricted. 
Light-like 3-D causal determinants allow a generalization of super-conformal invariance by their metric 2-dimensionality, and dimension 4 for space-time is the only possibility. b) The maximal symmetries of H=M^4xCP_2 have an excellent justification when quantum theory is geometrized by identifying physical states of the Universe as classical configuration space spinor fields, the configuration space being defined as the space of 3-surfaces in H. The only hope of geometrizing this infinite-dimensional space is as a union of infinite-dimensional symmetric spaces labelled by zero modes having an interpretation as non-quantum-fluctuating classical degrees of freedom. An infinite-dimensional variant of Cartan's problem of classifying symmetric spaces emerges as the challenge of finding the TOE. Mathematical existence fixes physical existence. Just as in the case of loop space, and with even better reasons, one expects that there are very few choices of H allowing an internally consistent Kähler geometry. Fermion numbers and super-conformal symmetries find an elegant geometrization and generalization in terms of complexified gamma matrices representing super-symmetry generators. c) M^4xCP_2 follows also from purely number theoretical considerations, as has now become clear. The theory can be formulated in two equivalent manners. *4-surfaces can be regarded as hyper-quaternionic 4-surfaces in M^8 possessing what I call hyper-octonionic tangent space structure (octonionic imaginary units are multiplied by commutative sqrt(-1) to make the number theoretical norm Minkowskian). *Space-times can be regarded also as 4-surfaces in M^4xCP_2 identified as extrema of so-called Kähler action in M^4xCP_2. Spontaneous compactification thus has a purely number theoretical analog but has nothing to do with dynamics. 
The surprise was that under some additional conditions (essentially hyper-octonion real-analyticity for the dynamical variables in the M^8 picture) the theory can be coded by a WZW action for two-dimensional string-like 2-surfaces in M^8. These strings are not super-strings but generalizations of braid/ribbon diagrams allowing n-vertices in which string orbits are glued together at their ends like pages of a book. Vertices can be formulated in terms of octonionic multiplication. Both classical and quantum dynamics reduce to number theory, and the dimensions of the classical division algebras reflect the dimensions of string, string orbit, space-time surface, and imbedding space. The conclusion is that the particle data table, the vision about physics as free, classical dynamics of spinor fields in the infinite-dimensional configuration space of 3-surfaces, and physics as a generalized number theory all lead to the same identification: space-time can be regarded as a 4-surface in M^4xCP_2. In the case that someone is more interested in learning about real progress than in wasting time on heated arguments at the ruins of M-theory, he/she can read the chapter summarizing part of the number theoretical vision, and also visit my blog, where I have summarized the most recent progress and great ideas of TGD.

With Best Regards, Matti Pitkanen

Monday, February 14, 2005

Kähler calibrations, number theoretical compactification, and general solution to the absolute minimization of Kähler action

The title probably does not say much to anyone who is not a theoretical physicist working with theories of everything. I thought it however appropriate to glue this piece of text from my homepage here in the hope that the information about these beautiful discoveries might find some readers. So what follows is rather heavy mathematical jargon. Calibrations represent a good example of those merciful "accidents", which happen just at the right time. 
Just for curiosity I decided to look up what the word means, and it soon became clear that the notion of calibration allows me to formulate my proposal for how to construct the general solution of the field equations defined by Kähler action in terms of a number theoretic spontaneous compactification in a rigorous and, perhaps I dare say it aloud, even convincing manner. For an excellent popular representation about calibrations, spinors and super-symmetries see the homepage of Jose Figueroa-O'Farrill.

1. The notion of calibration

Calibrations allow a very elegant and powerful formulation of the minimal surface property and have been applied also in brane-worldish considerations. A calibration is a closed p-form whose value for a given p-plane is not larger than its volume in the induced metric. What is important is that if it is maximal for the tangent planes of a p-sub-manifold, a minimal surface with the smallest volume in its homology equivalence class results. Could the absolute minima of Kähler action be found using a Kähler calibration?! For instance, all surfaces X^2xY^2 subset M^4xCP_2, X^2 and Y^2 minimal surfaces, are solutions of the field equations. Calibration theory allows one to conclude that Y^2 can be any complex sub-manifold of CP_2! A very general solution of TGD in the stringy sector results, and there exists a deformation theory of calibrations producing moduli spaces for the perturbations of these solutions! In fact, all known solutions of the field equations are either minimal surfaces or have a vanishing Kähler action density. This probably tells more about my simple mind-set than about reality, and there are excellent reasons to believe that, since the Lorentz-Kähler force vanishes, the known solutions are space-time correlates for asymptotic self-organization patterns. The question is how to find more general solutions. Or how to generalize the notion of calibration for minimal surfaces to what might be called a Kähler calibration? 
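For reference, the definition used here can be stated precisely (this is the standard Harvey-Lawson notion, not anything specific to TGD). A calibration on a Riemannian manifold is a closed p-form phi such that

  phi(xi) <= vol(xi) for every oriented tangent p-plane xi.

If X is a p-surface whose tangent planes saturate the inequality, then for any X' in the same homology class Stokes' theorem gives

  vol(X) = Integral_X phi = Integral_X' phi <= vol(X'),

so X has the smallest volume in its homology class, as stated above.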
It is here where the handsome and young idea of number theoretical spontaneous compactification enters the stage, and the outcome is a happy marriage of two ideas.

2. The notion of Kähler calibration

It is intuitively clear that the closed calibration form omega which is saturated for minimal surfaces must be replaced by the Kähler calibration 4-form omega_K = L_K omega. L_K is the Kähler action density (Maxwell action for the induced CP_2 Kähler form). Important point: it is omega_K that is closed, not omega as in the case of minimal surfaces. L_K acts as an integrating factor. This difference is absolutely essential. When L_K is constant you should get minimal surfaces. L_K is indeed constant for the known minimal surface solutions. The basic objection against this conjecture is the following: L_K is a four-dimensional action density. How is it possible to assign it to a 4-form in 8-dimensional space-time? Here number theoretical spontaneous compactification shows its power.

3. Number theoretic compactification allows one to define Kähler calibration

The calibrations are closely related to spinors, and the number theoretic compactification based on 2-component octonionic spinors satisfying the Weyl condition, and therefore equivalent with octonions themselves, tells how to construct omega.
• The hyper-octonion real-analytic maps of HO=M^8 to itself define octonionic 2-spinors satisfying the Weyl condition. The octonionic massless Dirac equation reduces to the d'Alembert equation in M^8 by the generalization of the Cauchy-Riemann conditions.
• Octonions and thus also the spinors have a 1+1+3+3bar decomposition with respect to the (color) SU(3) subgroup of the octonion automorphism group G_2. SU(3) leaves a preferred hyper-octonionic imaginary unit invariant. The unit can be chosen in a local manner and the choices are parameterized by a local S^6.
• The 3x3bar tensor product defines a color octet identifiable as an SU(3) Lie algebra generator, and its exponentiation gives an SU(3) group element. 
• The canonical bundle projection SU(3)-->CP_2 assigns a CP_2 point to each point of M^8, when a preferred octonion unit is fixed at each point of M^8.
• The canonical projection M^8-->M^4 assigns an M^4 point to each point of M^8. Conclusion: M^8 is mapped to M^4xCP_2, and the metric of H and the CP_2 Kähler form can be induced. M^8, having originally only the number theoretical norm as metric, inherits the metric of H.
• Here comes the key point of the construction. CP_2 parameterizes the hyper-quaternionic planes of hyper-octonions, and therefore it is possible to assign to a given point of M^8 a unique hyper-quaternionic 4-plane. Thus also the projection J of the Kähler form to this plane and also the dual *J of this projection. Therefore also L_K = J\wedge*J as the value of the Kähler action density!
• The Kähler calibration omega_K = L_K*omega is defined in an obvious manner. As found, L_K is associated with the local hyper-quaternionic plane assigned to each point of M^8. The form omega is obtained from the wedge product of unit tangent vectors for the hyper-quaternionic plane at a given point by lowering the indices using the induced metric in M^8. omega is not a closed form in general. For a given 4-plane it is essentially the cosine of the angle between the plane and the hyper-quaternionic plane, and it is saturated for the hyper-quaternionic plane, so that a calibration results.
• The Kähler calibration is the only calibration that one can seriously imagine. Furthermore, the spinorial expression for omega is well defined only if the form omega saturates for hyper-quaternionic planes or their duals. The reason is that non-associativity makes the spinorial expression involving an octonionic product of four tangent vectors for the calibration ill defined for non-associative 4-planes. Hence number theory allows only hyper-quaternionic saturation. Note that also co-hyper-quaternionicity is allowed, and it is required by the known extremals of Kähler action. 
A 4-parameter foliation of M^8, and perhaps even that of M^4xCP_2 (a discrete set of intersections probably occurs), by 4-surfaces results, and the parameters at a given point of X^4 define the dual space-time surface.
• A surprise, which does not flatter the theoretician's vanity, emerges. Closed-ness of omega_K implies that if the absolute value of the Kähler action density replaces the Kähler action density, minimization indeed occurs for hyper-quaternionic surfaces in a given homology class, assuming that the hyper-quaternionic plane at a given point minimizes L_K (is this equivalent to the closed-ness of omega_K?). Thus L_K should be replaced with |L_K| so that vacuum extremals become absolute minima, and the universe would do its best to save energy by staying as near as possible to vacuum. The 3-surfaces for which the CP_2 projection is at least 2-dimensional and not a Lagrange manifold would correspond to non-vacua, since conservation laws do not leave any other option. The attractiveness of this option from the point of view of the calculability of TGD would be that the initial values for the time derivatives of the imbedding space coordinates at X^3 at the light-like 7-D causal determinant could be computed by requiring that the energy of the solution is minimized. This could mean a computerizable solution to the absolute minimization.
• There is a very beautiful connection with super-symmetries allowing one to express the absolute minimum property as a condition involving only the hyper-octonionic spinor field defining the Kähler calibration (discovered for calibrations by Strominger and Becker).

4. Could TGD reduce to a string model like theory in the HO picture?

Conservation laws suggest that in the case of non-vacuum extremals the dynamics of the local automorphism associated with the hyper-octonionic spinor field is dictated by field equations of some kind. 
The experience with the WZW model suggests that in the case of non-vacuum extremals the G_2 element could be written as a product g=g_L(h)g^{-1}_R(h*) of hyper-octonion analytic and anti-analytic complexified G_2 elements. g would be determined by the data at a hyper-complex 2-surface for which the tangent space at a given point is spanned by the real unit and a preferred hyper-octonionic unit. Also the Dirac action would be naturally restricted to this surface. The amazing possibility is that TGD could reduce in the HO picture to an 8-D WZW string model both classically and quantally, since vertices would reduce to integrals over 1-D curves. The interpretation of generalized Feynman diagrams in terms of generalized braid/ribbon diagrams and the unique properties of G_2 provide further support for this picture. In particular, G_2 is the lowest-dimensional Lie group allowing one to realize full-powered topological quantum computation based on generalized braid diagrams and using the lowest level k=1 Kac-Moody representation. Even if this reduction would occur only in special cases, such as asymptotic solutions for which the Lorentz-Kähler force vanishes or maxima of the Kähler function, it would mean an enormous simplification of the theory.

5. Why would extremals of Kähler action correspond to hyper-quaternionic 4-surfaces?

The resulting overall picture leads also to a considerable understanding of the basic questions of why (co)-hyper-quaternionic 4-surfaces define extrema of Kähler action and why WZW strings would provide a dual for the description using Kähler action. The answer boils down to the realization that the extrema of Kähler action minimize complexity, also algebraic complexity, in particular non-commutativity. A measure for non-commutativity with a fixed preferred hyper-octonionic imaginary unit is provided by the commutator of the 3 and 3bar parts of the hyper-octonion spinor field, defining an antisymmetric tensor in the color octet representation: very much like a color gauge field. 
Color action is a natural measure for the non-commutativity, minimized when the tangent space algebra closes to the complexified quaternionic, instead of the complexified octonionic, algebra. On the other hand, Kähler action is nothing but color action for the classical color gauge field defined by projections of color Killing vector fields. Here it is! That the WZW + Dirac action for hyper-octonionic strings would correspond to Kähler action would in turn be the TGD counterpart for the proposed string-YM dualities.

6. Summary

To sum up, the following conjectures are direct generalizations of those for minimal surfaces.
• L_K acts as an integrating factor and omega_K = L_K*omega is a closed form.
• Generalizing from the case of minimal surfaces, closed-ness guarantees that hyper-quaternionic 4-surfaces saturating this form are absolute minima of Kähler action.
• The hyper-octonion analytic solutions of the hyper-octonionic Dirac equation define those maps M^8-->M^4xCP_2 for which L_K acts as an integrating factor. Classical TGD reduces to a free Dirac equation for hyper-octonionic spinors!
For more details see the chapter TGD as a Generalized Number Theory II: Quaternions, Octonions, and their Hyper Counterparts.

Friday, February 11, 2005

Great Ideas

It occurred to me that I could try to summarize the great ideas behind TGD. There are many of them and I cannot summarize them on a single page. My dream is to communicate some holistic vision about what I have become conscious of, and therefore I start just by listing the great ideas that come to my mind just now and continue later by giving details. These ideas are also summarized in the chapter Overview about the Evolution of Quantum TGD of TGD.
• Classical physics as the geometry of space-times regarded as 4-surfaces in a certain 8-dimensional space-time. This generalizes and modifies Einstein's vision. Note that in the quantum context the plural "space-times" indeed makes sense. 
• Quantum physics as the infinite-dimensional geometry of the world of classical worlds = space-time surfaces. This vision generalizes further the vision of Einstein. One of the paradoxes of the sociology of post-modern physics is that M-theorists refuse to realize the enormous unifying power of infinite-dimensional geometry.
• The physics as number theory vision involves several ideas bigger than life.
• p-Adic physics as the physics of cognition and intentionality, and the fusion of real physics and the physics of cognition into a single super physics by fusing real numbers and p-adic number fields into a larger structure. p-Adic numbers can be applied also to real physics, and this leads to a plethora of quantitative predictions. It became clear already a decade ago that p-adic numbers provide a royal road to the understanding of elementary particle masses. The idea is somewhat tricky: just the requirement that a physical system allows p-adic cognitive representations poses unexpectedly strong constraints on the properties of the system, in this case on the values of the elementary particle masses. Knowing this I become very sad when I see colleagues continue the fruitless and more and more bizarre looking M-theory Odyssey. They are now quite seriously suggesting that we must accept that physical theories will never be able to predict anything concrete. They simply cannot imagine the possibility that M-theory might be wrong or, even worse, not-even-wrong! For critical discussions about the state of M-theory see the blog Not Even Wrong of Peter Woit.
• Space-time surfaces as hyper-quaternionic surfaces of 8-dimensional octonionic space-time is an idea which has started to flourish during the last months: for details see TGD as a Generalized Number Theory II: Quaternions, Octonions, and their Hyper Counterparts. TGD can be formulated in two dual manners. 
The first corresponds to the M^4xCP_2 picture and the second, the number theoretical formulation, can be seen as a string model in the 8-dimensional Minkowski space M^8 of hyper-octonions. I refer to the possibility of formulating the theory either in M^8 or M^4xCP_2 as "number theoretical compactification". Of course, no (spontaneous) compactification occurs. If I were forced to introduce this monstrously ugly super-stringy notion into TGD, I would be ready to hang myself. What is so beautiful is that the classical notion of a complex analytic function generalizes to the level of hyper-octonions, and physics at this primordial number-theoretical level looks ridiculously simple: a WZW action for the automorphism group G_2 of octonions and a Dirac action for octonionic 2-spinors satisfying Weyl conditions, with solutions given by hyper-octonion analytic functions. In the quantization the real Laurent coefficients of the spinor field and the G_2-valued field are replaced by mutually commuting Hermitian operators representing observables coding for quantum states, so that also quantum measurement theory and quantum-classical correspondence pop up in the number theoretical framework. Here I cannot resist revealing one more fascinating fact: G_2 is a unique exception among simple Lie groups in that the ratio of long roots to short ones is sqrt(3): do not hesitate to tell this also to your friends;-)! The number three pops up again and again in TGD; for instance, the existence of so-called trialities corresponds to the existence of the classical number fields. I have told at my homepage about my great experiences during which I had the vision about the number three as the fundamental number of mathematics and physics and also a precognition about the "theory of infinite magnitudes", that is the theory of infinite primes, about which I am going to say something after two paragraphs. 
Perhaps the most beautiful aspect is that m+n-particle vertices correspond to a local multiplication of hyper-octonionic spinors coding for m incoming resp. n outgoing states and to the inner product of these products, in accordance with the view that generalized Feynman diagrams represent computation-like processes, developed in Equivalence of Loop Diagrams with Tree Diagrams and Cancellation of Infinities in Quantum TGD. I have not taken strings seriously, but I must admit the amazing possibility that all information about both classical physics (space-time dynamics) and quantum physics as predicted by TGD might be coded by these number theoretical strings, which are however quite different from the strings of super-string models. The corresponding string diagrams generalize so-called braid diagrams to Feynman diagrams a la TGD and also provide a physical realization of topological quantum computation at the level of fundamental physics.
• The notion of infinite primes was inspired by the TGD inspired theory of consciousness and is now an organic part of quantum TGD.
• Infinite primes are in one-one correspondence with the states of a repeatedly second quantized arithmetic quantum field theory and can be identified as representations of physical states. Infinite primes can in turn be represented as 4-dimensional space-time surfaces. The conjecture that these surfaces solve the field equations of TGD is equivalent with the conjecture that hyper-octonionic spinor fields code for space-time surfaces as hyper-quaternionic surfaces by defining what I call the Kähler calibration.
• Infinite primes force one also to generalize the notion of ordinary number: each real number corresponds to an infinite number of different variants, which are equivalent as real numbers but p-adically/cognitively non-equivalent, so that each space-time point becomes an infinitely structured and complex algebraic hologram able to represent in its structure even the entire physical universe! 
This is Brahman=Atman mathematically. Leibniz realized this when he talked about monads but was forgotten for three centuries.
• A further bundle of ideas relates to the TGD inspired theory of consciousness. There are two books about this: TGD Inspired Theory of Consciousness and Genes, Memes, Qualia and Semitrance.
• The starting point is a real problem, as always. Now it is the paradox created by the non-determinism of quantum jump contra the determinism of the Schrödinger equation. TGD leads to a solution of the paradox, and at the same time a notion of free will and volition consistent with physics emerges. There is no attempt to explain free will and consciousness as illusions (whatever that might mean!). They are still stubbornly trying to do this, these stubborn neuroscientists!
• A new view about the relationship between experienced, subjective time and the geometric time of the physicist emerges. The outcome is a revolutionary modification of the notions of both time and energy.
• The new view about time gives the conceptual tools needed to make scientific statements about what might happen to consciousness after biological death, and TGD inspired consciousness theory has grown into two books covering basic phenomena of consciousness from brain functions to the paranormal.
• The possibility of negative energies and signals propagating backwards in geometric time and reflected back by time reflection has several implications. Communications with the geometric past (long term memories!) and in principle even with civilizations of the geometric future become possible. Intentions might be realized by control signals sent backwards in time. Instantaneous remote sensing by negative energy signals becomes possible, etc...
• p-Adic mathematics leads to an identification of genuine information measures based on Shannon entropy, with the ordinary real norm replaced by the p-adic norm, applicable as measures of conscious information. 
The idea about rationals and algebraic numbers as islands of order in the sea of chaos represented by generic real numbers finds a direct counterpart at the level of physical correlates of conscious experience. Well, I think it is better to stop here and continue later. Matti Pitkanen
Research Papers

Virtually all my research, which is still continuing actively, has been based on the conviction that time, motion and size are all relative. I call this the relational approach. My main collaborators for the earlier work were Bruno Bertotti, Niall Ó Murchadha, Edward Anderson, Brendan Foster, and Bryan Kelleher. I am currently working closely with Henrique Gomes, Sean Gryb, Tim Koslowski and Flavio Mercati. I list below the more significant papers that have resulted from this research, including papers that consider the implications of the relativity of time and motion for the quantum theory of the universe. It is worth emphasizing that Einstein set out to create a theory in which time and motion are relative in a brilliant but indirect way and that this has led to considerable confusion. In contrast, the approach of the papers below is direct. I believe their main value to be, first, the demonstration that the direct route to relativity of time and motion leads to the same theory as Einstein’s indirect route; second, that the inclusion of relativity of size leads to Shape Dynamics (Ideas); and third, that this work identifies the dynamical features of gravity likely to be important in the creation of quantum gravity. For this, the last four papers (before the two on maximal variety) are the most relevant and relate to Shape Dynamics. Finally, there are two papers on maximal variety, Leibniz’s idea that the universe in which we live is more varied than any other possible universe. This speculative idea was developed by Lee Smolin and myself. It requires much development, but it is a good illustration of key features of Leibniz’s thinking that could be relevant to quantum gravity.

Papers on the Relativity of Time and Motion

Relative-distance Machian theories. Nature 249, 328 (1974). (PDF) Gravity and inertia in a Machian framework. (With Bruno Bertotti.) Nuovo Cimento 38B, 1 (1977). 
The basic idea behind these two papers is that only relative separations occur in the action principle of the universe. It later transpired that the same idea had been developed by several authors, most notably by Schrödinger in 1925. The main value of such theories is that they show how Mach’s Principle can be implemented, but they lead to anisotropic inertial masses, which are ruled out experimentally. For a discussion of work along these lines, which was initiated by Mach himself, see the conference proceedings Mach’s Principle (books). Relational concepts of space and time. British Journal for the Philosophy of Science 33, 251 (1982). (Journal) Explains precisely what one should expect of a Machian dynamics. It develops an idea due to Poincaré. Mach’s principle and the structure of dynamical theories. (With Bruno Bertotti.) Proceedings of the Royal Society (London) A 382, 295 (1982). (PDF) In this paper, Bertotti and I succeeded in finding a way to implement Mach’s Principle that does not lead to anisotropic masses. I now call this method best matching. It is universal, closely related to the gauge principle of modern high-energy physics, and leads to a direct dynamical and relational derivation of general relativity. All the subsequent papers on the relativity of time and motion listed below are based on this principle. Leibnizian time, Machian dynamics, and quantum gravity. In: Quantum Concepts in Space and Time, eds. R. Penrose and C. J. Isham, Oxford University Press, Oxford (1986). (PDF) Explores the implications for quantum gravity of the Machian structure of general relativity found in the previous paper. I argue that the quantum mechanics of the universe will be very different from the existing form of quantum mechanics, which is valid in the framework provided by the universe. The part played by Mach’s Principle in the genesis of relativistic cosmology. In: Modern Cosmology in Retrospect, eds. 
B Bertotti et al, Cambridge University Press, Cambridge (1990). (PDF) The development of Machian themes in the twentieth century. In: The Arguments of Time, ed. J Butterfield, Oxford University Press, Oxford (1999). The two above papers give an account of Einstein’s attempts to implement Mach’s idea about the origin of inertia. My main conclusion is that Einstein created much confusion by never clearly identifying precisely how he should implement Mach’s proposal but that despite this he succeeded to a large degree. The Machian motivation behind the creation of general relativity was very great. Time and complex numbers in canonical quantum gravity. Physical Review D 47, 12 (1993). Journal (PDF, 7.5MB) Complex numbers play a very significant role in quantum mechanics because the fundamental time-dependent Schrödinger equation is complex. However, the putative equation of quantum gravity is real, and it is not clear how complex numbers should enter the theory. The above paper focuses on this issue, which has still not attracted the attention that it warrants. I gave a seminar on this at the Perimeter Institute. The emergence of time and its arrow from timelessness. In: Physical Origins of Time Asymmetry, eds. J. Halliwell et al, Cambridge University Press, Cambridge (1994). (PDF) The timelessness of quantum gravity I, II. Classical and Quantum Gravity 11, 2853, 2875 (1994). (Journal, Journal) These papers show that the notion of an independently existing time is redundant if one is considering the dynamics of the universe. Change does not occur in time. Rather, dynamics relates all changes in the universe to each other. I believe that the implications of this fact for the quantum theory of the universe (quantum gravity) are profound. The quantum universe is likely to be static. Ironically, I believe that in this timeless scenario it will be easier to explain the arrow of time, which I trace to the marked asymmetry of the configuration space of the universe. 
I conjecture that this concentrates the wave function of the universe on configurations that contain what we interpret as mutually consistent records of a past. My The End of Time (books) gives a popular account of these ideas. Dynamics of pure shape, relativity and the problem of time. In: Decoherence and Entropy in Complex Systems (Proceedings of the Conference DICE, Piombino 2002, ed. H.-T Elze), Springer Lecture Notes in Physics 2003. (arXiv:gr-qc/0309089) In 1999 I began a very fruitful collaboration with Niall Ó Murchadha that greatly extended my earlier work with Bruno Bertotti on time and Mach's principle. This paper gives a useful introductory overview of the results that we obtained in collaboration with Brendan Foster, Edward Anderson, and Bryan Kelleher. Details can be found in the five following papers. Relativity without relativity. (With Brendan Z Foster and Niall Ó Murchadha.) Classical and Quantum Gravity 19, 3217 (2002). (arXiv:gr-qc/0012089) I think this may prove to be my most important research paper; it greatly extends my work with Bertotti and demonstrates the power of the notion of best matching. The key insight is due to Niall Ó Murchadha, who realized that it is very difficult to create a consistent theory that meets the relational requirements; consistency becomes a powerful tool for finding theories. The paper considers the construction of a relational theory that describes the dynamical evolution of three-dimensional Riemannian geometry and shows that the simplest nontrivial consistent theory of this kind is general relativity. If in addition one attempts to allow other fields to interact with the dynamical Riemannian geometry, then the simplest realization of such interaction enforces the emergence of a universal light cone, gauge theory, and the equivalence principle. This result needs to be put into its historical perspective. 
Newton introduced absolute space and time in order to formulate dynamics, but Leibniz and Mach argued that position and time are relative, so that dynamics must be relational. This paper completely vindicates the relational standpoint and shows that its simplest implementation leads directly and inexorably to all the fundamental principles of modern physics except for quantum theory. Interacting vector fields in relativity without relativity. (With Edward Anderson.) Classical and Quantum Gravity 19 3249 (2002). (arXiv:gr-qc/0201092) This paper, largely the work of Edward Anderson, gives the details of the emergence of gauge theory in the framework of the previous paper “Relativity without relativity”. Scale-invariant gravity: particle dynamics. Classical and Quantum Gravity 20 1543 (2003). (arXiv:gr-qc/0211021) In a fully relational theory, not only time and position should be relative but also size – if all distances in the universe were suddenly doubled, nothing observable would change. This paper extends the principle of relational best matching to the relativity of size for the case of particle dynamics. It presents a dynamics of pure shape and demonstrates that one can exactly recover Newtonian inertia, gravity, and electrostatics for subsystems of the universe together with one additional force that acts significantly only over scales of the order of the complete universe and is closely analogous to Einstein’s cosmological constant. This paper, together with the two following papers, highlights a most strange feature of general relativity and the Big Bang cosmology: in these theories, overall size is absolute, in contrast to everything else. This is the feature of general relativity that allows the ‘expansion of the universe’.  In the standard model, the universe is doing two things simultaneously: it is expanding and changing its shape (it is becoming more inhomogeneous). If the universe were perfectly relational, it could only change its shape. 
Attractive as this idea is, it appears to be in strong conflict with the evidence from cosmology. For me, this is a great mystery and a stimulus to further research. Scale-invariant gravity: geometrodynamics. (With Edward Anderson, Brendan Z Foster, and Niall Ó Murchadha.) Classical and Quantum Gravity 20 1571 (2003). (arXiv:gr-qc/0211022) In this paper the principle of best matching is extended to create a theory of relational dynamical Riemannian three-geometry in which position, time, and size are completely relative. The resulting theory has many attractive features and is essentially identical to general relativity as regards processes that take place below intergalactic scales. It therefore passes all the same stringent observational tests as general relativity except those on cosmological scales, for which it fails badly. It cannot be a realistic theory of the universe. For me this is the mystery noted in the discussion of the previous paper: why does the universe fail to be perfectly relational? The significance of this question is emphasized by the following paper. The physical gravitational degrees of freedom. (With Edward Anderson, Brendan Z Foster, Bryan Kelleher, and Niall Ó Murchadha.) Classical and Quantum Gravity 22 1795 (2005). (arXiv:gr-qc/0407104) This paper shows the precise sense in which general relativity, treated as a dynamical theory, just fails to be fully scale invariant. A Riemannian three-geometry is characterized by three degrees of freedom at each space point P. Two of them determine angles at P (the conformal part of the geometry) while the third determines the scale at P. This is analogous to the way in which two angles determine the shape of a triangle, while scale determines its size. It is intuitively clear that shape is more fundamental than size.
The above paper shows that general relativity can be represented as a theory in which the two local degrees of freedom at each space point interact with each other and with one single extra global degree of freedom, essentially the rate of change of the volume of the universe. It is very odd that the local scales play no role but the rate of change of the global scale (the volume of the universe) does. This is what allows the universe to expand. Constraints and gauge transformations: Dirac's theorem is not always valid (with Brendan Foster) (arXiv:0808.1223) This paper is related to a widely held belief that there is no genuine dynamical evolution in classical general relativity. The belief relies on a famous theorem proved by Dirac in his Lectures on Quantum Mechanics (1964), which is used to interpret the so-called Hamiltonian constraint in canonical quantum gravity. Our paper shows that in fact Dirac's theorem is not universally valid and thus calls into question the orthodox belief. The definition of Mach's Principle Found. Phys. 40 1263-1284 (2010) (arXiv:1007.3368) There is much confusion surrounding the definition of Mach's principle. In this paper, which includes historical and conceptual material, I provide what I believe is the only sensible definition. Conformal superspace: the configuration space of general relativity (with Niall O'Murchadha) (arXiv:1009.3559) I regard this as one of my most important papers. It shows that general relativity can be characterized as a theory of the dynamics of local shapes of space. This perfectly matches the definition of Mach's principle as given in the above paper. Einstein gravity as a 3D conformally invariant theory (by Henrique Gomes, Sean Gryb, and Tim Koslowski) (arXiv:1010.2481) This paper and the following, both by my collaborators, complement the one above very usefully, making effective use of Dirac's theory of constrained dynamical systems. 
The link between general relativity and shape dynamics (by Henrique Gomes and Tim Koslowski) (arXiv:1101.5974) Papers on Maximal Variety Extremal variety as the foundation of a cosmological quantum theory. (With Lee Smolin.) Unpublished. (arXiv: hep-th/9203041; no figures!) The deep and suggestive principles of Leibnizian philosophy. The Harvard Review of Philosophy 11, Spring (2003). (PDF) Leibniz was mocked for claiming that we live in the best of all possible worlds. (Voltaire’s Candide asks: “If this is the best, what are the others like?”) In fact, Leibnizian philosophy is really concerned with variety, and in his Monadology Leibniz postulated that the universe is created “with as much variety as possible, but with the greatest order possible”. Strangely, no one seems to have attempted to express this idea in concrete mathematical form before Smolin and I found a realization in the form of various models for which an intrinsic variety can be defined and maximized. Unlike Shannon information, the information content of such models is intrinsic and can be 'read off' directly from them. We initially hoped these models would cast light on the mysteries of quantum mechanics, but, despite several intriguing properties of these models, any direct link to quantum mechanics is clearly still a long way off.
Spline Potential Eigenfunctions Model Documents Main Document Spline Potential Eigenfunctions Model  written by Wolfgang Christian The Spline Potential Eigenfunctions Model computes the Schrödinger equation energy eigenvalues and eigenfunctions for a particle confined to a potential well with hard walls at -a/2 and a/2 and a smooth potential energy function between these walls.  The potential energy function is a third-order piecewise continuous polynomial (cubic spline) that connects N draggable control points.  Cubic-spline coefficients are chosen such that the resulting potential energy function and its first derivative are smooth throughout the interior and the function has zero curvature at the endpoints.  Users can vary the number of control points and can drag the control points to study level splitting in multi-well systems.  Additional windows show a table of energy eigenvalues and their corresponding energy eigenfunctions. The Spline Potential Eigenfunctions Model was created using the Easy Java Simulations (EJS) modeling tool.  It is distributed as a ready-to-run (compiled) Java archive.  Double clicking the ejs_qm_SplinePotentialEigenfunctions.jar file will run the program if Java is installed. Last Modified June 12, 2014 This file has previous versions. Source Code Documents Spline Potential Eigenfunctions Source Code  The source code zip archive contains an EJS-XML representation of the Spline Potential Eigenfunctions Model.   Unzip this archive in your EJS workspace to compile and run this model using EJS. Last Modified June 12, 2014 This file has previous versions.
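The numerical method behind such a model can be sketched in a few lines of Python (a minimal illustration of the standard approach, assumed rather than taken from the EJS source): interpolate the control points with a natural cubic spline, whose boundary conditions give exactly the zero end-point curvature described above, then diagonalise the finite-difference Hamiltonian with hard-wall boundary conditions, in units with ħ = m = 1:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.linalg import eigh_tridiagonal

def spline_well_eigen(control_x, control_v, a=1.0, n=2000, k=5):
    """Lowest k eigenvalues/eigenfunctions of -(1/2) psi'' + V psi = E psi
    with hard walls at -a/2 and a/2 and V a natural cubic spline
    (zero curvature at the endpoints) through the control points."""
    x = np.linspace(-a / 2, a / 2, n + 2)[1:-1]   # interior grid points
    h = x[1] - x[0]
    V = CubicSpline(control_x, control_v, bc_type='natural')(x)
    diag = 1.0 / h**2 + V                          # finite-difference kinetic + potential
    off = -0.5 / h**2 * np.ones(n - 1)
    E, psi = eigh_tridiagonal(diag, off, select='i', select_range=(0, k - 1))
    return E, psi, x
```

With a flat spline this reproduces the infinite square well spectrum E_n = n²π²/(2a²); dragging the control points into a double well makes the eigenvalues pair up into near-degenerate doublets, the level splitting the model is designed to display.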
Pogue, Hydrogen - Stories of Suppression The development of new energy technologies is an arduous affair. Not because we're not smart enough, but because there are powerful interests that make buckets of money from OIL and its derivatives. The Iraq war, initially called "Operation Iraqi Liberation" - OIL - wasn't about weapons of mass destruction or about any involvement of Iraq with terrorists or the events that brought down three steel-core buildings in an unprecedented collapse. It was about oil, says Greg Palast in his most recent article. There are powerful interests that profit handsomely - ExxonMobil posted a record near $10 billion profit during one quarter in 2005 alone - from there being no alternatives to oil and other fossil fuels, and they do not want us to have non-polluting, fuelless energy. It seems that suppressing a whole new field of development in energy technology would be too difficult, even for a very powerful industry. Perhaps not - at least there are numerous known instances of suppression, many of which have been documented on this page of PES Wiki. You can add to the collection if you know of others - the page is open for public editing. A recent article in a local Maryland paper mentions the Pogue carburetor from the 1930's and also a much more recent example. For the sake of bringing home the kind of pressure that is actually being brought to bear on inventors, as unlikely as it may seem to some, here is a copy of that article by Larry Jarboe in St. Mary's Today - 18 March 2007: - - - Common Sense in Government - Commentary on the News by Larry Jarboe Larry Jarboe, a Republican, has been elected to three terms as St. Mary's Commissioner representing the Chaptico, Bushwood and Mechanicsville areas of St. Mary's County.
Jarboe lives in Golden Beach and has a background as a citizen activist and environmentalist. He operates a saw mill in Charlotte Hall and has been recognized for his efforts to spur electric cars and fuel saving devices. Agents of Suppression There are many urban myths regarding high mileage carburetors and unique energy devices. One of the most common stories is the two hundred miles per gallon carburetor patented and demonstrated in the mid-1930's by Charles Pogue. My own years of study uncovered that this vaporizing carb did actually work so well that Standard Oil Company purchased the rights to the carburetor after reformulating gasoline with additives. This new fuel recipe corrupted the thermo-catalytic reaction that created such unusual efficiency and pollution reduction in the Pogue invention. Mr. Pogue's carburetor was actually a molecular disruptor that broke the fuel molecules into the fundamental clean burning carbon and hydrogen gases prior to combustion in the engine. The Pogue carburetor later showed up on American tanks and Australian Bren gun carriers in North Africa during WW II to assist in the defeat of Gen. Rommel's diesel powered Panzer tanks. There was a metal shroud over the unit that stated: "Property of Standard Oil Co., No User Serviceable Parts, Do Not Remove This Cover". General Rommel's memoirs are reputed to attribute his defeat in North Africa to a secret Allied Super Carburetor. Unfortunately, there are few people left living who can corroborate this story. A modern example of very real suppression of clean energy technology that can be easily verified happened over the past month. Paul Zigouras was selling an electric control unit (ECU) and electrolyser on e-Bay to make hydrogen fuel from water. He was selling these units to help subsidize his research on how to crack water more efficiently into its component hydrogen and oxygen gases.
He had taken public information from expired patents and schematics from the Internet to build the units that he was selling. Paul had developed an improved circuit that exceeded Faraday output by a large measure using a resonant electrical pulse that literally kept itself tuned to the frequency best suited for maximum output. His unit blasted two gallons of water a minute into hydrogen/oxygen gas, which was enough to run an auto or boat engine. He openly communicated with alternative energy Yahoo forum members on the workings of his assembly, which he maintained was not practical for a car because of the large water consumption. However, he had a marine repair company, Zigouras Racing, and he wanted to develop marine engines that would literally run on water. Since the exhaust was also water, he had developed a fuel- and pollution-free marine power unit, every mariner's dream. Soon after, citing legal concerns, Paul eased out of the forum discussions in which he had formerly been so forthcoming. At the same time, a friend of mine and his wife flew from Lexington, Kentucky to visit Paul Zigouras in Brockton, Massachusetts. Their story from Paul is a classic case of suppression of new energy technology: two men in black from the Justice Department had visited Paul, threatening that he would be subject to high fines and fifty years in prison if he sold another unit, due to national security concerns. Not long after, another group, whose identity Paul did not disclose, offered him and his partners six million dollars to drop the project. Since our Federal Government was not going to let him produce the units, Paul and his business partners took the money. The use of threats and intimidation followed by a purchase of rights to the technology is why many emerging competitive energy concepts and products have been sitting on a black shelf. The participation of the Justice Department in assisting the energy monopoly should outrage every patriotic American citizen!
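For quantitative context (my own back-of-the-envelope check, not a figure from the article), Faraday's laws of electrolysis tie gas output directly to electric charge: splitting each H2O molecule takes two electrons. The current needed to electrolyse two gallons of water per minute at the Faraday limit is easy to estimate:

```python
F = 96485.3        # Faraday constant, C/mol
M_WATER = 18.015   # molar mass of water, g/mol
GALLON_L = 3.785   # US gallon in litres

def faraday_current(gallons_per_min):
    """Current (A) that Faraday's law requires to electrolyse the given
    flow of liquid water, at 2 electrons per H2O molecule."""
    grams = gallons_per_min * GALLON_L * 1000.0   # water density ~1 g/mL
    mols = grams / M_WATER
    coulombs = mols * 2.0 * F
    return coulombs / 60.0

print(faraday_current(2.0))   # on the order of 1.35 million amperes
```

That scale of current is the benchmark implied by Faraday's law, and it is what makes "exceeded Faraday output by a large measure" such an extraordinary claim.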
In Japan, the most prolific inventor of our time, Dr. Yoshiro Nakamatsu, who invented the floppy disk, has recorded over three thousand patents. One of those inventions is a working scooter that cracks hydrogen on demand from tap water to power a small fuel cell that feeds an electric motor. His motor vehicle can silently zip through traffic with only water vapor for exhaust. Instead of being suppressed by G-men, the Japanese people hold Dr. Nakamatsu in great esteem. Mazda and Lexus are working to adopt this amazing technology in automobiles. President George W. Bush promised to support a hydrogen economy. Last week, Congressman Steny Hoyer pledged his commitment to new alternative energy technologies in an interview in ST. MARY'S TODAY. However, when an enterprising American citizen does create the very means to escape consumption of fossil fuel and fulfill these political promises, agents of our government show up to suppress the technology. Unlike the stories surrounding the high mileage carburetor that Mr. Pogue engineered, the Zigouras information is presently archived on the Internet and available to all who seek it. Mr. Zigouras is still alive, though it may take a subpoena to get him to talk due to any disclosure agreement within his multi-million dollar contract not to develop his products. Unlike past stories, the smoking gun of energy suppression is available here and can be documented. Do not expect a Congressional investigation or full disclosure in the main stream media. The fact is: they don't want you to know! - - - May 2008 - An interesting comment on turning water into a gaseous fuel was recently made by Tom Bearden in an email exchange: Presently we have a very viable alternative to carbon-based fuels etc. that is beginning to rapidly emerge. That is "watergas", which has a history going back to the 1920s.
Several legitimate inventors right now have viable watergas systems and processes, where the H-O-H molecule can be tricked to just "fall apart" because the O-H bond is "unhappened" by use of negative energy in the local vacuum and the accompanying negative probabilities. In the 1930s, some of our leading physicists and mathematical scientists so hated negative energy (from the Schrödinger equation and from Dirac's relativistic extension of it, and also in Dirac's original electron theory) and its associated negative probabilities, that they arbitrarily tossed it out of physics - out of the Dirac relativistic extension to the Schrödinger equation, and out of Dirac's electron theory. The problem is given in this quote from Ian D. Lawrie. A Unified Grand Tour of Theoretical Physics, CRC Press, 1990, p. 130 (speaking of the Schrödinger equation and derivation of the Klein-Gordon equation from it with two problems - negative energy states and negative probability density): "The negative energy solutions are an embarrassment, because they imply the existence of single-particle states with energy less than that of the vacuum. Intuitively, this is nonsensical. In fact, there is no lower limit to the energy spectrum. This means that the vacuum is unstable, since an infinite amount of energy could be released from it by the spontaneous creation of particles in negative energy states. ... it is the negative energy states which give rise to a negative probability density." Dirac himself at first adhered to negative energy and negative probabilities. Quoting: "Negative energies and probabilities should not be considered as nonsense. They are well-defined concepts mathematically, like a negative of money." [P. A. M. Dirac, "The physical interpretation of quantum mechanics." Proc. Roy. Soc. Lond. A, Vol. 180, 1942, pp. 1-40.] However, later Dirac caved in to the fierce peer pressure of his adamant colleagues, and then personally participated in eliminating the negative energy. 
Quoting Dirac later: "I remember once when I was in Copenhagen, that Bohr asked me what I was working on and I told him I was trying to get a satisfactory relativistic theory of the electron, and Bohr said 'But Klein and Gordon have already done that!' That answer first rather disturbed me. Bohr seemed quite satisfied by Klein's solution, but I was not because of the negative probabilities that it led to. I just kept on with it, worrying about getting a theory which would have only positive probabilities." [Conversation between Dirac and J. Mehra, Mar. 28, 1969, quoted by Mehra in Aspects of Quantum Theory, ed. A. Salam and E. P. Wigner, Cambridge University Press, Cambridge, 1973.] You see, the "problem" reduces to this: In modern physics a thing that has "occurred" or "happened" (and is thus sustained and observable), is based on subsidiary statistical operations ongoing between the active vacuum and all the charges. All observable forces are generated in interacting matter by the exchange of virtual particles between the local vacuum and the material particles. So underneath that "observable or happened entity" in physics there is a sustaining and producing set of more subtle statistical processes - calculated (usually) with a positive energy vacuum and thus with positive probabilities. So when the positive probabilities in those underlying processes - of that observable or happened entity - reach a total of 100%, that is "certainty" and so there is the resulting physical (observable) entity present and sustained - so long as the local vacuum is not altered to add negative energy and negative probabilities to those underlying processes. The observable entity/state has "happened" and it "stays happened", normally - in a positive energy vacuum. 
But if one's theoretical model allows negative energy of the vacuum and thus negative probabilities in underlying and ongoing primary statistical processes and interactions with the vacuum, then by conditioning the local vacuum with negative energy (very easily done, by the way, as clearly shown for more than 20 years by Bedini), one also creates those negative probabilities in those underlying statistical processes. And that is a very profound change to present science and scientific method. That means that the probability of something that has "happened" and is observably sustained can be lowered from 100% to 70% or even to zero percent. This in turn means that something that has physically "happened" and is thus being sustained observably can be "unhappened" deliberately by simply conditioning the local vacuum to have negative energy. Indeed, it can be "unhappened completely" so that it disappears and is not there at all, regardless of how many instruments one employs to look. In the watergas technique, e.g., an inventor uses one or another of the methods of conditioning the local vacuum with negative energy. Specifically this strongly affects the O-H bonds, so that to us (observably) they seem to just "fall apart". They actually fall apart because of the changes resulting in their underlying sustaining processes in interaction with the active vacuum. Technically this means that the previous 100% probabilities of those established O-H bonds are lowered, or even totally vanish ("unhappen"). The O-O bonds and H-H bonds, on the other hand, are more firmly increased, so that in the affected water there appear bubbles of H2 and O2, as the H-O-H molecules fall apart because of the vanishing and "unhappening" of their O-H bonds.
Done correctly, this then becomes a pretty safe thing, because in that changing water (in its negative energy vacuum) the freed O2 and H2 will not explode as in a normal vacuum, because of the difficulty in forming O-H bonds in the presence of a negative energy vacuum and negative O-H probabilities. This means that one can then direct the stabilized bubbles of H2 and O2 to a short distance (even a few inches) away from that conditioned negative energy vacuum, to a "more normal" vacuum - and then the O2 and H2 will again burn very nicely (as in the chambers of a piston engine in a car). In this way, one can power an automobile from watergas alone, or augment the use of normal gasoline with simultaneous combined use of watergas, etc. The same process, applied to cancers in the living body, can "unhappen" the cancer (which is being maintained by those same statistical underlying processes that formed it in the first place). And the cancer will then "heal up" or, in other words, "unhappen" gradually and disappear because of the addition of negative probabilities. Engineering negative energy of the vacuum and thus negative probabilities is indeed a vast leap forward in science and physics - because the physicists just arbitrarily discarded it back there decades ago. Tesla originally discovered negative energy, before the term was even available, and he called it "radiant energy" to differentiate its phenomenology from that of normal positive EM energy. Bedini uses negative energy in his epochal battery chargers, so the "happened" sulfation of a battery can be "unhappened" and eliminated. The lifetime of the battery can thus be extended dramatically, and this is very important, e.g., in large expensive batteries (as in large battery-powered materials handling equipment in warehouses, in which the Bedini process and system have been very successfully tested). 
Kanzius, e.g., achieved that negative energy local vacuum and thus negative probabilities process (though he himself doesn't appear to know the exact nature of his process) for his epochal cancer treatment process. That process has now been through animal trials, and in the animals it cured 100% of their cancers. An independent and well-recognized cancer research institute has studied it, and pronounced the Kanzius cancer treatment the greatest advance in cancer therapy in a century. Next must come human trials, then seeking out FDA approval for use in humans. In other words, by the same "precursor" engineering of the local vacuum with negative energy, it is possible to produce curative processes for any and all our present human diseases, without the use of harmful drugs and all their side effects. As you can see, some very powerful people and organizations flatly do not wish that to be developed. Kanzius also noted that the same process affects salt water. So he developed a very good adaptation for use on the water and making watergas, with the characteristics we previously mentioned. He then took his watergas process and system to a world-recognized authority on water chemistry, who subjected the process to some 50 rigorous tests. When finally finished, the expert publicly proclaimed this was the "greatest advance in water chemistry in the last 100 years". Late last year, Kanzius stated that his watergas process had now achieved overunity (a coefficient of performance greater than one, meaning that burning the resulting fuel yielded more usable energy in the powered system than the operator had to input to the watergas process), and so he would not be saying anything else about it for awhile. In short, now it was time for patenting and protecting intellectual property rights.
In short, this (use of the negative energy asymmetric vacuum) is one process by which asymmetrical EM processes can be engendered in water, in living bodies, and in other physical material systems. The impact on science and engineering is likely to be profound - it is a great leap forward of at least 200 years. Boyce also has a very viable watergas process, and it is my understanding that strong work is underway to be able to power automobiles and demonstrate it widely and publicly. He uses the Aharonov-Bohm effect of a toroidal coil and RF pulsing to condition the local vacuum's uncurled A-potential with negative EM energy. A sharp little RF gradient (each little pulse) pops some electrons out of local Dirac sea holes, leaving the empty holes behind - which are negative mass energy electrons, NOT positrons. By using the AB effect to smoothly condition the local volume of vacuum in which the water resides with negative energy (a negative energy "froth" of emptied Dirac sea holes), Boyce is able to very smoothly "unhappen" the H-O bonds, strengthen the H-H and O-O bonds, and make a very viable and very useful watergas process. With the escalating world fuel crisis and the resulting world energy crisis, it appears that the watergas process and "engineering the local vacuum" to accomplish precursor engineering of the underlying precursor statistical processes creating and sustaining a given object or process is something whose "time has come". It can be rigorously tested by our academic community, and some rigorous testing has already been done with extraordinarily positive results. The potential for powering our automobiles, trains, ships, etc. with watergas is tremendously important. One inputs water only, and the engine outputs water only. So one takes some water from the environment, uses the vacuum to engineer it, then uses the watergas to power our loads, and exhausts only WATER back to the same environment.
Thus it is an environment-enhancing process par excellence, and it could greatly clean up our present pollution of our precious biosphere. We point out that the Fogal semiconductor has also demonstrated the ability to directly engineer its surrounding local spacetime for nearly 20 years now, and Fogal has continued to be rigorously suppressed, even though several important and competent independent laboratories have tested his chip and verified its unique functioning - such as instant communication to any distance without travel through the "intervening" ordinary space between the two widely separated points. He actually uses a multiply-connected spacetime for that communication, so any number of widely separated points can have "instant communication" between them with no time delay at all. Again, this has been independently tested and verified. For example, Dan Solomon (Dean of the College of Physical and Mathematical Sciences, North Carolina State University) has also rigorously and theoretically shown that throwing out negative energy from physics (from the relativistic extension of the Schrödinger equation, from Dirac's theory, and from quantum field theory) was and is a serious mistake. One may Google quite a few important papers by Solomon, many published in high quality scientific journals. E.g., see Dan Solomon, "Some new results concerning the vacuum in Dirac's hole theory," Physica Scripta, Vol. 74, 2006, p. 117-122. Quoting: "In Dirac's hole theory (HT), the vacuum state is generally believed to be the state of minimum energy. It will be shown that this is not, in fact, the case and that there must exist states in HT with less energy than the vacuum state. It will be shown that energy can be extracted from the HT vacuum state through application of an electric field." See also (1) Dan Solomon, "Some differences between Dirac's hole theory and quantum field theory." Can. J. Phys., Vol. 83, 2005, pp.
257-271; (2) "Mathematical Inconsistencies in Dirac Field Theory," 1999. Available at quant-ph/9904106. Particularly see Dan Solomon, "Negative energy density for a Dirac-Maxwell field," 1999. Available at gr-qc/9907060. See http://eprintweb.org/S/authors/All/so/Solomon. Abstract: It is well known that there can be negative energy densities in quantum field theory. Most of the work done in this area has involved free non-interacting systems. In this paper we show how a quantum state with negative energy density can be formulated for a Dirac field interacting with an Electromagnetic field. It will be shown that, for this case, there exist quantum states whose average energy density over an arbitrary volume is a negative number with an arbitrarily large magnitude. We posted a write-up on the watergas process on our website, the little article "MEG Aharonov-Bohm Effect, Watergas, Negative Energy, Negative Probabilities, Precursor Engineering, Extending the Scientific Method, and EM Limitations," 7 April 2008. Available at this link. The late Eugene Mallove published two very important articles by D. L. Hotson: "Dirac's Equation and the Sea of Negative Energy, Part I," New Energy, Issue 43, 2002, pp. 1-20 (available at http://openseti.org/Docs/HotsonPart1.pdf) and "Dirac's Equation and the Sea of Negative Energy, Part II," New Energy, Issue 44, 2002, pp. 1-24 (available at http://www.openseti.org/Docs/HotsonPart2.pdf). Quoting Hotson: "I think if one had to point to a single place where science went profoundly and permanently off the track, it would be 1934 and the emasculation of Dirac's equation." [D. L. Hotson, "Dirac's Equation and the Sea of Negative Energy, Part I," New Energy, Issue 43, 2002, pp. 1-20. Quote is from p. 1.]
So watergas and the use of precursor engineering (conditioning the local vacuum/spacetime first, and then allowing that conditioned vacuum/spacetime to directly alter a situation, an object, or a state) are two things whose "time has come". And it couldn't come at a better time than now, with the energy crisis and a great economic debacle descending directly upon the U.S. and Western Europe. - - - End of message by Tom Bearden - - - See also: Are new energy technologies being actively suppressed? (PDF - download, 137 pages) It would appear so, reading Gary Vesperman's compilation of cases and stories of inventors who have been threatened to within an inch of their lives - apparently with the aim of making their inventions disappear from public view. While not all of Gary's cases seem to be well documented or even to be genuine cases of suppression, reading the stories without any preconceived ideas, you might well come to the conclusion that something strange is going on in the new energy world... Pogue Carburetor, 'Gasoline Vapor Maker' Increase Mileage Water for Fuel Blog Yoshiro Nakamatsu's ENEREX Water Fueled Car - Sri Lanka "The specialty of my invention is its ability to produce this energy from water with a minimal electric current of barely 0.5 amperes, which was not possible earlier." -- Thushara Priyamal Edirisinghe Michael comments (by email): Dear Sepp, Thanks for the info. Re: Greg Palast in the article - his book "Armed Madhouse" is well worth the effort to read. It's very funny and loaded with inside info about the US "corporate takeover" of Iraq, filled with tales of the circus of insider money-grubbing corporate and political elites and their ploys. It also hits the "peak oil" myth right in the bull's eye. Nice con to get everyone to pay much more for fuel - that's about it, and interesting details about how much oil there really is in Iraq and the deals that were set up to hide it going back many decades.
The book is worth every penny of its price. I bought a set of plans for the 200 MPG carburetor from an Allan Wallace back in the early 80s that had plans for Pogue design carburetors and the theory behind them. The mathematics of the fuel/air ratios were all correctly calculated, but the calculations did not take into consideration that a gas engine operates at a vacuum, not at atmospheric pressure. This threw all their calculations and their claims out the window. While a vaporizing carburetor can get very good mileage, in the 30-50 range, it is also a fuel-vapor/air bomb waiting for a backfire to set it off. It is my personal belief that the only way to get past big OIL is to share the information with the world. Create a freedom-of-information site dedicated to the information and blueprints of all these inventions for the world to see. Many talented mechanics, both professional and backyard, would be able to build and test the product. Think of the ideas for improvements that would be generated! A universal carburetor or engine that does not run on fossil fuels. And the knowledge free for the asking! Now there's an invention! A government by the people for the people? Ha! It's time we the people quit telling the government what we want and showed them. Seems to me that they just don't get it. Impeach Bush & Cheney, bring our boys home, and take care of the USA first!! A friend in the UK just sent these two links about energy inventions lost in the meanders of time... Top 10 green cars that have been lost to time (1924-1973) I want to let you know: yes, 100 MPG is possible, getting 5 times the regular mileage, but few inventors actually understood why it worked. It was not just vaporizing the gasoline but cracking it into lower-boiling hydrocarbons like natural gas and propane; please see my site www.himacresearch.com. I have made it my mission to get the truth out. Also see www.fuelvapors.com and www.byronwine.com. It is time to end suppression before it ends us all.
Bruce
ONE hundred years ago this month, a light-bulb lit up over a physicist's head—and he wondered what made it yellow. For, while the yellow colour of a bulb suggests that it is giving off most of its light at that easily visible frequency, the physics he had been taught predicted that a heated object should emit mostly shorter-wavelength radiation, which is invisible. Max Planck presented his explanation for this troubling observation, known as the black-body radiation problem, in a lecture to the German Physical Society in Berlin on December 14th 1900. And that was when the light-bulbs really started going off over the heads of physicists across the world. For to solve his problem, Planck had had to invent the notion of the quantum. At the time, the idea that light travels in distinct packets of energy—the quanta after which the theory is named—seemed preposterous. Newton had thought it might, but the discovery of interference patterns at the point where beams of light interact clearly demonstrated that the stuff moved as a continuous wave of energy. Once people started looking for evidence of quanta, however, they found that too. The “duality” of light, as this phenomenon came to be known, reflects the odd reality that, depending on the sort of measuring device used, light can function both as a wave and as a particle. At the time, Planck saw his quanta as a mere mathematical trick, of the sort beloved by physicists needing to untangle knotty equations. Not even he believed that the idea corresponded to any physical reality. But, although he was the first to be confounded by quantum mechanics, he would not be the last. Even Albert Einstein, one of the finest physicists who has ever lived, could not bring himself to believe many of the theory's implications. And earlier this year, when a group of physicists gathered to choose the ten most important mysteries left in their discipline, all but two of the problems they selected directly involved quantum theory. 
Nor, despite its esoteric nature, is quantum theory irrelevant to everyday life. Leon Lederman, a particle physicist whose credentials include the discovery of the muon neutrino and the bottom quark (two of the fundamental particles in the universe), reckons that quantum phenomena already feature in technology that accounts for a quarter of America's GNP. And there is plenty more to come, as physics enters a second century of quantum investigation.

Waves of change

The first generation of quantum technology came from the slow realisation, unfolding in the decades following Planck's lecture, that particles can behave like waves, just as Planck's energy waves behaved like tiny particles. This means that an electron, say, exists not as a point mass but rather as a "smear" of probability surrounding a point. By solving the wave equation developed by Erwin Schrödinger in 1926, a physicist can calculate the chance of finding the electron at any given point in space. Those quantum technologies developed to date, such as the transistor, the laser and the light-emitting diode, exploit the wave-like nature of the electron. Transistors, for example, are made up of bands of electron-rich and electron-poor areas sitting next to each other. Solutions to the Schrödinger wave equation dictate that an electron, no matter how smeared-out, can reside only within such a band. That means an engineer can control the flow of electric current through a transistor by manipulating the quantum transitions between bands. The Schrödinger equation does not merely apply to electrons. It actually describes the wave nature of all matter. But because of the effects of mass on an object's wavelength, matter tends to behave in a perceptibly wave-like way only at the sub-atomic scale. On the human scale, it generally acts as if it were made up of discrete, run-of-the-mill particles.
Traditionally, the line dividing the quantum world of waves from the real world of particles has coincided with the boundary dividing the study of physics from the study of chemistry. For, although it is perfectly possible to model atoms and molecules using the Schrödinger equation, undertaking the tricky and complicated business of quantum calculation simply did not appear worth the effort to chemists until recently. That is now changing. Depending on how you look at it, either chemistry is getting involved with smaller objects, or quantum physics is getting involved with bigger ones. This exchange means that some old problems, which quantum physics considers solved, are now presenting new challenges for quantum chemistry. For example, Jan Hendrik Schön, a physicist working at Bell Laboratories, the research arm of Lucent Technologies, reported earlier this year that his group had created the first electrically powered “organic” laser. Such lasers, which use molecules of a cheap organic compound called tetracene to generate their light, could replace conventional lasers, which are made of expensive gallium arsenide, in ordinary electronic devices. A more esoteric example of an application on the border between chemistry and physics is quantum cryptography. This is a coding system that could exploit the quantum features of individual photons, as the quanta of light are known, to ensure the perfect secrecy of an electronic transmission. The feasibility of this hinges on being able to emit and detect single photons in a consistent manner. So far, physicists have been trying to do this using very faint laser beams. But last month, after working out its quantum energy states, W.E. Moerner, a chemist at Stanford University, reported that a particular molecule could be coaxed into releasing a single photon at a time, in a much more reliable way than a weak laser. Yet another borderland between physics and chemistry is found in the realm of carbon nanotubes. 
These are strong and elastic cylinders of carbon atoms, beloved of those who believe in nanotechnology—the idea that machines the size of molecules can one day be harnessed to the service of man. But carbon nanotubes are also highly conductive. When electrons are fed into one end of a nanotube, they are faithfully spat out at the other end. Moreover, as Bruce Alphenaar of Cambridge University recently demonstrated, the magnetic “spin” of the electrons passing down such a tube remains constant—something that does not usually happen when an electron is conducted. Because they can transmit spin in this fashion, carbon nanotubes may form the backbone of future quantum-computing devices. Such devices could increase computing power dramatically, because the zeroes and ones of traditional computing would be replaced by an array of “in-between” spin values. Such gizmos may sound outlandish, and there are plenty of sceptics who scoff at them. But in quantum theory, pure speculation is precisely the point. If a quantum physicist today could predict the future impact of his work, he would stand in violation of the long and honourable tradition of his discipline. Planck, indeed, never came to terms with the ideas whose birth he presided over. The progression of quantum theory beyond physics, into chemistry, and possibly thence into biology, will probably astonish the people who are investigating it now, and who may have thought they knew what they were up to all along.
Volume 4 Supplement 1
7th German Conference on Chemoinformatics: 25 CIC-Workshop
Open Access

Modeling of molecular atomization energies using machine learning

Matthias Rupp (1, corresponding author), Alexandre Tkatchenko (2), Klaus-Robert Müller (1) and O. Anatole von Lilienfeld (3)

Journal of Cheminformatics 2012, 4(Suppl 1):P33
Published: 1 May 2012

Atomization energies are an important measure of chemical stability. Machine learning is used to model atomization energies of a diverse set of organic molecules, based on nuclear charges and atomic positions only [1]. Our scheme maps the problem of solving the molecular time-independent Schrödinger equation onto a non-linear statistical regression problem. Kernel ridge regression [2] models are trained on and compared to reference atomization energies computed using density functional theory (PBE0 [3] approximation to Kohn-Sham level of theory [4, 5]). We use a diagonalized matrix representation of molecules based on the inter-nuclear Coulomb repulsion operator in conjunction with a Gaussian kernel. Validation on a set of over 7000 small organic molecules from the GDB database [6] yields a mean absolute error of ~10 kcal/mol, while reducing computational effort by several orders of magnitude. Applicability is demonstrated for prediction of binding energy curves using augmentation samples based on physical limits.

Authors' Affiliations
(1) Machine Learning Group, Technical University
(2) Fritz-Haber-Institute, Max-Planck Society
(3) Argonne National Laboratory

References
1. Rupp M, Tkatchenko A, Müller KR, von Lilienfeld OA: Fast and accurate modeling of molecular atomization energies with machine learning. Phys Rev Lett. 2012, 108 (5): 058301.
2. Hastie T, Tibshirani R, Friedman J: The Elements of Statistical Learning: Data Mining, Inference, and Prediction. 2009, New York: Springer.
3. Perdew JP, Ernzerhof M, Burke K: Rationale for mixing exact exchange with density functional approximations. J Chem Phys. 1996, 105 (22): 9982-9985. 10.1063/1.472933.
4. Hohenberg P, Kohn W: Inhomogeneous electron gas. Phys Rev. 1964, 136 (3B): B864-B871. 10.1103/PhysRev.136.B864.
5. Kohn W, Sham LJ: Self-consistent equations including exchange and correlation effects. Phys Rev. 1965, 140 (4A): A1133-A1138. 10.1103/PhysRev.140.A1133.
6. Blum LC, Reymond JL: 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J Am Chem Soc. 2009, 131 (25): 8732-8733. 10.1021/ja902302h.

© Rupp et al; licensee BioMed Central Ltd. 2012
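As a rough illustration of the scheme in the abstract, here is a minimal sketch of its two ingredients: the sorted eigenvalue spectrum of the Coulomb matrix as a molecular descriptor (the 0.5 Z^2.4 diagonal term follows reference [1]) and kernel ridge regression with a Gaussian kernel. Function names and the tiny training example are mine, not from the paper.

```python
import numpy as np

def coulomb_eigenvalues(Z, R, size):
    """Sorted eigenvalue spectrum of the Coulomb matrix of one molecule.

    Z: nuclear charges (n,); R: Cartesian positions (n, 3).
    The matrix is zero-padded to `size` x `size` so that molecules of
    different sizes share one descriptor length.
    """
    n = len(Z)
    M = np.zeros((size, size))
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i, i] = 0.5 * Z[i] ** 2.4          # diagonal: atomic self-term (ref. [1])
            else:
                M[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return np.sort(np.linalg.eigvalsh(M))[::-1]      # descending eigenvalues

def krr_fit(X, y, sigma, lam):
    """Kernel ridge regression with a Gaussian kernel; returns a predictor."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    def predict(Xq):
        sq_q = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_q / (2.0 * sigma ** 2)) @ alpha
    return predict
```

With a small regularizer the model nearly interpolates its training energies; in the paper the kernel width and regularizer are instead tuned by cross-validation.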
Statistical thermodynamics Thermodynamics is the study of the various properties of macroscopic systems that are in equilibrium and, particularly, the relations between these various properties. Having been developed in the 1800s before the atomic theory of matter was generally accepted, classical thermodynamics is not based on any atomic or molecular theory, and its results are independent of any atomic or molecular models. This character of classical thermodynamics is both a strength and a weakness: classical thermodynamic results will never need to be modified as scientific knowledge of atomic and molecular structure improves or changes, but classical thermodynamics gives no insight into the physical properties or behaviour of physical systems at the molecular level. With the development of atomic and molecular theories in the late 1800s and early 1900s, thermodynamics was given a molecular interpretation. This field is called statistical thermodynamics, because it relates average values of molecular properties to macroscopic thermodynamic properties such as temperature and pressure. The goal of statistical thermodynamics is to understand and to interpret the measurable macroscopic properties of materials in terms of the properties of their constituent particles and the interactions between them. Statistical thermodynamics can thus be thought of as a bridge between the macroscopic and the microscopic properties of systems. It provides a molecular interpretation of thermodynamic quantities such as work, heat, and entropy. Research in statistical thermodynamics varies from mathematically sophisticated discussions of general theories to semiempirical calculations involving simple, but nevertheless useful, molecular models. An example of the first type of research is the investigation of the question of whether statistical thermodynamics, as it is formulated today, is capable of predicting the existence of a first-order phase transition. 
General questions like this are by their nature mathematically involved and require rigorous methods. For many scientists, however, statistical thermodynamics merely serves as a tool with which to calculate the properties of physical systems of interest.

The Boltzmann factor and the partition function

Two central quantities in statistical thermodynamics are the Boltzmann factor and the partition function. To understand what these quantities are, consider some macroscopic system such as a litre of gas, a litre of some solution, or a kilogram of some solid. From a mechanical point of view, such a system can be described by specifying the number N of constituent particles, the volume V of the system, and the forces between the particles. Even though the system contains on the order of Avogadro's number of particles, one can still consider the Schrödinger equation for this N-body system,

ĤNψj = Ejψj,

where ĤN is the Hamiltonian operator; ψj are its associated wave functions, which depend on the coordinates of all the particles; and Ej are the allowed energies of the system. The energies depend on both N and V and may therefore be written Ej(N,V). For the special case of an ideal gas, the total energy Ej(N,V) will simply be a sum of the individual molecular energies,

Ej(N,V) = ε1 + ε2 + · · · + εN,   (75)

because the molecules of an ideal gas are independent of one another. For example, for a monatomic ideal gas, if one ignores the electronic states and focuses only on the translational states, then the εi are just the energies of a particle in a three-dimensional box:

ε = (h²/8ma²)(nx² + ny² + nz²),   nx, ny, nz = 1, 2, 3, . . . ,   (76)

where h is Planck's constant, m is the mass of the particle, and a is the length of the box. It should be noted that Ej(N,V) depends on N through the number of terms in equation (75) and on V through the fact that a = V^(1/3) in equation (76).
For a more general system in which the particles interact with each other, the Ej(N,V) cannot be written as a sum of individual particle energies, but the allowed macroscopic energies Ej(N,V) can still be considered, at least conceptually. Now consider a system with N constituent particles in a volume V and at a temperature T. Thus, from a thermodynamic point of view, the system is specified by N, V, and T. What is the probability that the (macroscopic) system is in the jth quantum state with an energy Ej(N,V)? To answer this question, it is necessary to construct a mental collection of identical systems, essentially infinite in number, each with N, V, and T fixed. Generally, a mental collection of identical systems is called an ensemble, and a canonical ensemble in particular if N, V, and T are fixed for each system. The probability pj(N,V,T) that a system is in the quantum state j with energy Ej(N,V) is related to the energy by

pj(N,V,T) ∝ exp(−Ej(N,V)/kT),   (77)

where the quantity k is a fundamental constant called the Boltzmann constant, whose numerical value is 1.3807 × 10^(−23) joule per kelvin. The Boltzmann constant is the molar gas constant R (in the equation PV = nRT) divided by Avogadro's number. The factor exp(−Ej/kT), which occurs throughout the equations of chemistry and physics, is called the Boltzmann factor. Proportionality (77) can be converted to an equation by virtue of the fact that the sum of pj(N,V,T) over all values of j must equal unity (because the system must be in some state). The resulting equation is

pj(N,V,T) = exp(−Ej(N,V)/kT)/Q(N,V,T),   (78)

where

Q(N,V,T) = Σj exp(−Ej(N,V)/kT),   (79)

and the summation is carried over all values of j, or over all possible quantum states. The quantity Q(N,V,T) is called a (canonical) partition function and is a central quantity of statistical thermodynamics.
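The Boltzmann probabilities pj = exp(−Ej/kT)/Q can be computed directly once the energy levels are known. A minimal sketch in Python (the function name is mine; energies are in joules):

```python
import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probabilities(E, T):
    """State probabilities p_j = exp(-E_j/kT) / Q for energy levels E at temperature T.

    Shifting all energies by E.min() avoids underflow; the shift cancels
    in the ratio exp(-E_j/kT) / Q, so the probabilities are unchanged.
    """
    w = np.exp(-(E - E.min()) / (k_B * T))  # Boltzmann factors, up to a common constant
    return w / w.sum()                      # dividing by the (shifted) partition function
```

By construction the probabilities sum to one, and the population ratio of two states is the Boltzmann factor of their energy gap.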
The partition function Q(N,V,T) is related to the Helmholtz energy A by the equation

A = −kT ln Q(N,V,T).   (80)

This equation is remarkable in that the right-hand side depends on molecular properties through the quantum mechanical energies Ej(N,V), whereas the left-hand side is a macroscopic, classical thermodynamic quantity. Thus, equation (80) serves as a bridge between classical thermodynamics and statistical thermodynamics. It allows thermodynamic properties to be interpreted and calculated in terms of molecular properties. As a concrete, simple example, consider the partition function of a monatomic ideal gas, such as argon, given by

Q(N,V,T) = (2πmkT/h²)^(3N/2) V^N/N!,   (81)

where m is the mass of the atom. Substituting equation (81) for Q(N,V,T) in equation (80) and then using the thermodynamic formula

P = −(∂A/∂V)N,T   (82)

gives

PV = NkT,   (83)

which is the ideal gas equation of state. Furthermore, the thermodynamic energy can be calculated by means of the equation

U = kT²(∂ ln Q/∂T)N,V   (84)

to obtain the well-known result from the kinetic theory of gases, U = (3/2)NkT = (3/2)nRT. The molar heat capacity CV = (∂U/∂T)N,V is then (3/2)R. The entropy S can be expressed in terms of Q(N,V,T) by using the fact that A = U − TS, where A is obtained from equation (80) and U from equation (84):

S = U/T + k ln Q(N,V,T).   (85)

Using equation (81) for Q gives

S = Nk ln[(2πmkT/h²)^(3/2)(V/N)e^(5/2)],   (86)

which is called the Sackur-Tetrode equation. The calculated value for the standard molar entropy of argon at 298.2 K is 154.7 joules per kelvin per mole (J/K·mol), compared with the experimental (calorimetric) value of 154.8 J/K·mol. In general, the statistical thermodynamic entropies are in excellent agreement with experimental (calorimetric) values. The summation in equation (79) is carried over all possible quantum states of the N-body system. If Ω(Ej) is the degeneracy, or the number of quantum states with energy Ej, then the value exp(−Ej/kT) occurs Ω(Ej) times in the summation. Rather than listing exp(−Ej/kT) a total of Ω(Ej) separate times, one can simply write Ω(Ej)exp(−Ej/kT) and then sum over different values of E.
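The Sackur-Tetrode value quoted for argon can be reproduced numerically. The sketch below assumes a standard pressure of 1 atm and uses the equivalent form S = R[ln(V/NΛ³) + 5/2], where Λ = h/(2πmkT)^(1/2) is the thermal de Broglie wavelength; the function name is mine.

```python
import numpy as np

# Physical constants (SI)
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
N_A = 6.02214076e23   # Avogadro's number
R = k * N_A           # gas constant, J/(K mol)

def sackur_tetrode_molar_entropy(m, T, P):
    """Molar entropy of a monatomic ideal gas, S = R [ln(V/(N Lambda^3)) + 5/2]."""
    Lam = h / np.sqrt(2.0 * np.pi * m * k * T)  # thermal de Broglie wavelength, m
    v = k * T / P                               # volume per particle V/N from PV = NkT
    return R * (np.log(v / Lam**3) + 2.5)

m_Ar = 39.948 * 1.66053907e-27                  # mass of one argon atom, kg
S = sackur_tetrode_molar_entropy(m_Ar, 298.2, 101325.0)  # ~154.7 J/(K mol)
```

The result agrees with the calorimetric value to about 0.1 J/K·mol, which is the point the article is making.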
Equation (79) can then be written in the form

Q(N,V,T) = ΣE Ω(E)exp(−E/kT).   (87)

In equation (79) the summation is over the quantum mechanical states of the system; in equation (87) it is over levels.

The second and third laws of thermodynamics

Consider equation (87) for an isolated system. This can be done conceptually by choosing only those members of the canonical ensemble that have exactly the energy E and then isolating them. In such a case, there will be only one term in equation (87), so that Q = Ω(N,V,E)exp(−E/kT). Substituting this result into equation (80) and using the fact that A = U − TS, where the thermodynamic energy U is identified with E, then gives the central statistical thermodynamic equation for an isolated system:

S = k ln Ω(N,V,E).   (88)

Equation (88) provides the connection between entropy and disorder. The more states there are available to the system, the larger the value of Ω(N,V,E), the more disordered the system, and consequently the greater the entropy. Equation (88) can be used to discuss the second law of thermodynamics, which says that the entropy of an isolated system always increases as a result of a spontaneous process. Consider a typical spontaneous process in an isolated system, such as the expansion of a gas into a vacuum, as illustrated in Figure 16. It can be shown that Ω(N,V,E) for an ideal gas is proportional to V^N. For the process illustrated in Figure 16, the gas initially has an energy E, N number of particles, and volume V/2; in its final state it has the same energy E (the system is isolated) and the same N number of particles, but its volume is now V. Thus, the number of quantum states available or accessible to the system increases by a factor of V^N/(V/2)^N = 2^N in this spontaneous process. Consider another example of a spontaneous process. An isolated system initially contains a mixture of hydrogen and oxygen gases.
The hydrogen and oxygen react to form water, but without a catalyst the reaction is so slow that it can be disregarded, and the mixture of hydrogen and oxygen can be thought of as a mixture of two gases in equilibrium. When a small amount of catalyst is added to the system, however, the hydrogen and oxygen readily form water, so that the system consists of hydrogen, oxygen, and water. The addition of the catalyst allows all the energy states associated with water molecules to be available or accessible to the system, and the system proceeds spontaneously to populate these states. Since the originally accessible states are also available (the system still contains some hydrogen and oxygen), the elimination of a constraint—the high activation energy barrier removed by the addition of the catalyst—leads to a spontaneous process associated with an increase in the number of states accessible to the system. Both of the spontaneous processes discussed above occur because a restraint, or barrier, is removed, making additional quantum states accessible to the system. As a rule, any spontaneous process in an isolated system can be thought of in this way. The removal of a constraint increases the number of quantum states accessible to the system, and the "flow" of the system into these states is observed as a spontaneous process. Thus, for any spontaneous process in an isolated system, Ω2 must be greater than Ω1, and so, using equation (88),

ΔS = S2 − S1 = k ln(Ω2/Ω1).   (89)

The value of ΔS is greater than zero, in accord with the second law of thermodynamics, because ln x > 0 if x > 1. Equation (88) can be attributed to the 19th-century Austrian physicist Ludwig Boltzmann and is possibly the best-known equation in statistical thermodynamics, at least for historical reasons. Of course, Boltzmann did not express his famous equation in terms of quantum states but rather in a classical mechanical framework.
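The entropy change for the free expansion in Figure 16 follows directly from Ω ∝ V^N: doubling the volume at fixed E and N multiplies the number of accessible states by 2^N, so ΔS = k ln(2^N) = Nk ln 2. A quick numerical check (the function name is mine):

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
N_A = 6.02214076e23   # Avogadro's number

def expansion_entropy(N):
    """Delta S = k ln(Omega2/Omega1) = N k ln 2 for doubling the volume
    of an ideal gas at fixed energy and particle number."""
    return N * k * math.log(2.0)

dS = expansion_entropy(N_A)   # per mole: R ln 2, about 5.76 J/K
```

The molar result R ln 2 ≈ 5.76 J/K is the familiar entropy of mixing/expansion value.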
Boltzmann, in fact, was a great contributor to both equilibrium and nonequilibrium statistical mechanics. He was one of the first to see clearly how probability ideas could be combined with mechanics. Equation (88) is carved on his tombstone in Vienna. It is interesting to note that Boltzmann, who contributed so much to the understanding of macroscopic phenomena in terms of molecular mechanics, lived at a time when the atomic theory was not so generally accepted as it is today, and his work was severely criticized by some of the leading physicists of the day. He committed suicide in 1906 (for reasons not entirely clear) and never lived to see the full acceptance of his work in statistical mechanics. Equation (88) can also be used to discuss the third law of thermodynamics, which states that the entropy of a so-called perfect crystal is zero at zero kelvin. At zero kelvin, the system will be in its ground state, and Ω will be the degeneracy of the ground state, denoted by Ω0. Therefore

S(T → 0) = k ln Ω0.   (90)

Thus, as T → 0, S is proportional to the logarithm of the degeneracy of the lowest level. Unless Ω0 is very large, equation (90) says that S is practically zero. For example, if the system were a gas of N particles and the degeneracy of the lowest level were on the order of N, then k ln N would be practically zero (7.6 × 10^(−22) J/K·mol, when N = Avogadro's number) compared with a typical higher-temperature entropy on the order of Nk (8.314 J/K·mol). Thus, equation (90) is a statement of the third law of thermodynamics: the entropy of a perfect crystal is zero at the absolute zero of temperature.

Averages and fluctuations

Earlier the Gibbs-Helmholtz equation (equation [84]) was used to determine the thermodynamic energy of a monatomic ideal gas. This procedure is now considered more closely.
If equation (80) is substituted into the first part of equation (84), then we obtain

U = kT²(∂ ln Q/∂T)N,V = (kT²/Q)(∂Q/∂T)N,V.   (91)

Substituting equation (79) into equation (91) then gives

U = Σj Ej(N,V) exp(−Ej/kT)/Q(N,V,T).   (92)

But, according to equation (78), the ratio exp(−Ej/kT)/Q is pj(N,V,T), and so equation (92) can be written as

U = Σj Ej(N,V) pj(N,V,T).   (93)

The summation in this equation is, by definition, the average value of E, denoted ⟨E⟩. Equation (93) leads to one of the fundamental postulates of statistical thermodynamics—namely, that the average energy of a system is equal to the thermodynamic energy, or, more generally, that the average of any mechanical quantity is equal to its corresponding thermodynamic quantity. The other fundamental postulate of statistical thermodynamics is called the principle of equal a priori probabilities. This principle says that each and every one of the Ω(E) quantum states of an isolated system is equally likely. If only E, V, and N of an isolated system are known, then there is no reason to favour any particular one of the Ω(E) quantum states of the system over any of the others. All Ω(E) quantum states are consistent with the given values of E, V, and N, the only information known about the system. This postulate is used in the derivation of equation (77). Statistical thermodynamics allows fluctuations about average values to be investigated. For example, it is not difficult to show that the standard deviation σE of the energy of a system in a canonical ensemble is given by

σE = (kT²CV)^(1/2),   (94)

where CV is the constant-volume heat capacity. The relative fluctuation, the ratio of σE to ⟨E⟩, which is a unitless quantity, is the best measure of the extent of fluctuations. Given the fact that the orders of magnitude of ⟨E⟩ and CV for an ideal gas are NkT and Nk, respectively, it is clear that σE/⟨E⟩ is on the order of N^(−1/2), which is about 10^(−10) percent for a system containing Avogadro's number of particles. For small systems, however, the relative fluctuations may become quite significant.
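The N^(−1/2) scaling of the relative fluctuation is easy to verify for a monatomic ideal gas: with σE = (kT²CV)^(1/2), CV = (3/2)Nk and ⟨E⟩ = (3/2)NkT, the temperature and the Boltzmann constant cancel and the ratio reduces to (2/3N)^(1/2). A sketch (function name mine):

```python
import math

def relative_energy_fluctuation(N):
    """sigma_E / <E> for a monatomic ideal gas in the canonical ensemble.

    sigma_E = sqrt(k T^2 C_V) with C_V = (3/2) N k and <E> = (3/2) N k T,
    so the ratio reduces to sqrt(2 / (3 N)); T and k drop out entirely.
    """
    return math.sqrt(2.0 / (3.0 * N))

ratio = relative_energy_fluctuation(6.022e23)   # ~1e-12, i.e. ~1e-10 percent
```

For Avogadro's number of particles the ratio is about 10^(−12), matching the "10^(−10) percent" figure in the text; for N = 100 it is already about 8 percent.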
Consequently, classical thermodynamics, which deals with only average quantities, is not applicable to systems containing only a few molecules, but statistical thermodynamics, which recognizes and accounts for fluctuations, is applicable. An interesting, practical consequence of fluctuations concerns the scattering of sunlight by the atmosphere. It can be shown that light scattered by density fluctuations in the atmosphere varies as λ^(−4), where λ is the wavelength of the light. Thus, light at the blue end of the spectrum, which has shorter wavelengths, is scattered more intensely than light in the red region. During the day, more blue light reaches the Earth, and the sky appears blue. During sunrise and sunset, however, when sunlight travels a greater distance before it reaches the Earth, most of the blue light is scattered, and more red light reaches the Earth, and so sunsets and sunrises appear red.

Independent, distinguishable particles

In order to evaluate the partition function Q(N,V,T), it is necessary to have the eigenvalues, Ej(N,V), of the N-body Schrödinger equation. In general, this is an impossible task. There are many important systems, however, in which the total energy of the system can be written as a sum of individual energies, as for an ideal gas. This leads to a great simplification of the partition function and allows the results to be applied with relative ease. Consider a system of N independent, distinguishable particles. The energies of the single-particle quantum states are denoted by εj^a, where the superscript denotes the particle (they are distinguishable) and the subscript denotes the quantum state. Because the particles are independent, the energy of the system is given by

E = εi^a + εj^b + · · · + εk^N.   (95)

In this case, Q(N,V,T) becomes

Q(N,V,T) = q^a(V,T) q^b(V,T) · · · q^N(V,T), where q^a(V,T) = Σj exp(−εj^a/kT).   (96)

Equation (96) is an important result. It shows that, if the particles are independent and distinguishable, then the determination of Q(N,V,T) reduces to a determination of q(V,T), the partition function of an individual particle.
Because q(V,T) requires a knowledge of the energies of only an individual particle, its evaluation is quite feasible. Furthermore, the probability πj that a particle is in its jth quantum state is given by

πj = exp(−εj/kT)/q(V,T).   (98)

Equation (98) is the independent-particle analog of equation (78). For an example of the applicability of equation (96), consider the case of a single molecule. To a good approximation, the energy of a molecule can be written as

ε = εtrans + εrot + εvib + εelec,   (99)

where εtrans, εrot, εvib, and εelec denote translational, rotational, vibrational, and electronic energy, respectively. Thus, equation (96) gives

q(V,T) = qtrans qrot qvib qelec.   (100)

If the allowed energies of all N particles are the same in equation (96), then

Q(N,V,T) = [q(V,T)]^N.   (101)

Thus, the original N-body problem—the determination of Q(N,V,T)—is reduced to a one-body problem—the determination of q(V,T). The question arises: How can the particles be considered distinguishable? Certainly, atoms and molecules are indistinguishable from each other; it is not generally possible to distinguish one atom or one molecule from another. There are cases, however, where they can be treated as distinguishable. An excellent example of this is the case of a perfect crystal. In a perfect crystal, each atom is confined to one and only one lattice site, which can, in principle, be identified by a set of three coordinates. Because each particle is confined to a lattice site and the lattice sites are distinguishable, the particles themselves are distinguishable. Furthermore, even though the constituent particles of a solid interact strongly with each other, it is possible to decompose mathematically the motions of all the particles into a set of independent motions (normal coordinates). Thus, equation (101) can be used to describe a crystal.

Independent, indistinguishable particles

Consider a system of N independent, indistinguishable particles. The results for indistinguishable particles are quite different from those of distinguishable particles.
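The factorization Q = q^N for independent, distinguishable particles can be checked by brute force on a small system: enumerate every assignment of N distinguishable particles to a handful of single-particle levels and compare the summed Boltzmann factors with q^N. The level values below are arbitrary illustrations.

```python
import itertools
import math

def system_partition_function(levels, N, beta):
    """Brute-force Q: sum exp(-beta * E_total) over all N-particle states,
    treating the particles as distinguishable (every assignment counted)."""
    Q = 0.0
    for state in itertools.product(levels, repeat=N):
        Q += math.exp(-beta * sum(state))
    return Q

levels = [0.0, 1.0, 2.5]   # single-particle energies, arbitrary units with beta = 1/kT
beta, N = 0.7, 4
q = sum(math.exp(-beta * e) for e in levels)   # single-particle partition function
Q = system_partition_function(levels, N, beta)
# Q equals q**N because the Boltzmann factor of a sum of energies factorizes
```

The agreement is exact (up to rounding) because exp(−β(ε1 + · · · + εN)) factors into a product of single-particle Boltzmann factors, and the sums over each particle's states are independent.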
The final results depend on a fundamental property of the constituent particles of the system, which is due to their inherent indistinguishability and manifests itself in the nature of the wave function that describes the system. Consider a system of N identical, indistinguishable particles, described by a wave function ψ(1, 2, . . . , N), where 1 denotes the coordinates of particle 1, and so on. If the positions of any two of the (indistinguishable) particles—say, particles 1 and 2—are interchanged, then by the laws of quantum mechanics the wave function must remain unchanged or change in sign. In terms of an equation, if P12 denotes the operation of interchanging particles 1 and 2, then

P12 ψ(1, 2, . . . , N) = ±ψ(1, 2, . . . , N).   (102)

For particles with integral spin, such as helium-4 (He-4) or photons, the + sign in equation (102) applies. In this case, the wave function is said to be symmetric under the interchange of particles, and the particles themselves are called bosons. For particles with half-integral spin, such as electrons and protons, the − sign in equation (102) applies. In this case, the wave function is said to be antisymmetric, and the particles are called fermions. The antisymmetric nature of fermion wave functions implies that no two fermions can occupy the same quantum state. This result is familiar to many as the Pauli exclusion principle, which says that no two electrons (fermions) in an atom can have the same set of four quantum numbers (occupy the same quantum state). There is no such restriction on bosons; any number of bosons can occupy the same quantum state. The profound effect of this symmetry property of wave functions on the macroscopic properties of systems will be discussed below. Now consider a macroscopic system consisting of N independent, indistinguishable particles, so that the total energy E of the system is equal to the sum of the energies of single quantum states, such as atomic or molecular states: E = ε1 + ε2 + · · · + εN.
For the case of fermions, the average number of particles in the jth single quantum state is ⟨nj⟩ = λe^(−εj/kT)/(1 + λe^(−εj/kT)) (equation [103]), where λ = e^(μ/kT) and μ is the chemical potential of the system (equivalent to the partial molal Gibbs function). Note that equation (103) requires that ⟨nj⟩ be between 0 and 1, consistent with the fact that each individual state can be occupied by either 0 or 1 fermion. For the case of bosons, ⟨nj⟩ = λe^(−εj/kT)/(1 − λe^(−εj/kT)) (equation [104]), where λ = e^(μ/kT), as in equation (103). Two important applications of equation (104) are to liquid He-4 and blackbody radiation (an ideal gas of photons). The distribution of particles over the single quantum states with energies εj given by equation (103) for the case of fermions is called Fermi-Dirac statistics, and that for bosons given by equation (104) is called Bose-Einstein statistics. Because all known particles are either bosons or fermions, these two statistics are the only exact statistics. However, for small values of λ or large negative values of μ/kT, both Fermi-Dirac and Bose-Einstein statistics converge to the same result, namely ⟨nj⟩ = λe^(−εj/kT) (equation [105]). Summing both sides of equation (105) over j and then eliminating λ gives ⟨nj⟩/N = e^(−εj(V)/kT)/q(V,T) (equation [106]), where q(V,T) = Σj e^(−εj(V)/kT) (equation [107]) and εj(V) has been written to emphasize that single-particle energies depend only on V. Equations (105), (106), and (107) result either when λ is small or, equivalently, when μ/kT is large and negative. Recalling the thermodynamic properties of the chemical potential, it can be seen that μ/kT is large and negative either in the limit of low density for fixed temperature or in the limit of high temperature for fixed density. From a molecular point of view, equations (106) and (107) result when the number of available single quantum states is much greater than the number of particles in the system. This condition implies that the average number of molecules in any particular state is extremely small, because most states will be unoccupied and those few states that are occupied will most likely be occupied by only one molecule.
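A short numerical check (Python; the values of εj, μ, and kT are arbitrary illustrative choices) shows the Fermi-Dirac and Bose-Einstein occupations of equations (103) and (104) collapsing onto the Boltzmann limit of equation (105) when μ/kT is large and negative:

```python
import math

def n_fd(eps, mu, kT):
    # Fermi-Dirac occupation, equation (103): lam e^(-eps/kT) / (1 + lam e^(-eps/kT))
    x = math.exp(mu / kT) * math.exp(-eps / kT)
    return x / (1 + x)

def n_be(eps, mu, kT):
    # Bose-Einstein occupation, equation (104); requires lam e^(-eps/kT) < 1
    x = math.exp(mu / kT) * math.exp(-eps / kT)
    return x / (1 - x)

def n_boltz(eps, mu, kT):
    # common small-lambda limit, equation (105)
    return math.exp(mu / kT) * math.exp(-eps / kT)

# large negative mu/kT: both quantum statistics converge to Boltzmann statistics
kT, eps, mu = 1.0, 0.5, -10.0
fd, be, mb = n_fd(eps, mu, kT), n_be(eps, mu, kT), n_boltz(eps, mu, kT)
```

With μ/kT = −10 the three occupation numbers agree to better than one part in a thousand, while the Fermi-Dirac value remains bounded between 0 and 1 as required.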
Consequently, ⟨nj⟩ will be very small, which is the situation when λ is very small (see equation [105]). A theoretical analysis of the conditions for which the number of molecular quantum states is much greater than the number of particles in the system yields a criterion, inequality (108), that must be satisfied for equations (105)–(107) to be valid. Notice that this criterion is favoured by large molecular mass, high temperature, and low number density. Numerically, inequality (108) is satisfied for all but the lightest molecules at very low temperatures. When inequality (108) holds, so that equations (105)–(107) are valid, the particles are said to obey Boltzmann statistics. Boltzmann statistics is valid at high temperatures, where high-energy states are appreciably populated. Because high energies are associated with large quantum numbers, and because systems with large quantum numbers approach the classical limit (the correspondence principle), the limiting case of Boltzmann statistics is also called classical statistics. Systems that require equation (103) or (104) for their description are said to obey quantum statistics. When inequality (108) is satisfied, there is a simple relationship between Q(N,V,T) and q(V,T) for a system of N independent, indistinguishable particles, namely Q(N,V,T) = [q(V,T)]^N/N! (equation [109]). Equation (107) will be used below when the properties of ideal gases are discussed. The similarity of equations (79) and (107) is striking. In equation (79), Q(N,V,T) is a system partition function; it is a summation over the energy states of the macroscopic N-body system. In equation (107), q(V,T) is an individual-particle partition function; it is a summation over the energy states of the individual constituent particles of the system. Although equations (79) and (107) are superficially similar, they are totally different quantities.
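As a rough numerical illustration of the criterion (Python; the precise numerical factor in inequality [108] varies between texts, so the form used here is an assumed variant based on the thermal de Broglie wavelength):

```python
import math

h = 6.626e-34    # Planck constant, J s
k = 1.381e-23    # Boltzmann constant, J/K
amu = 1.661e-27  # atomic mass unit, kg

def degeneracy_parameter(mass_kg, T, number_density):
    # One common form of the Boltzmann-statistics criterion:
    # (h^2 / (2 pi m k T))^(3/2) * (N/V) << 1.  The numerical factor in
    # inequality (108) differs between texts; this is an illustrative choice.
    return (h**2 / (2 * math.pi * mass_kg * k * T)) ** 1.5 * number_density

# argon gas at 300 K and 1 atm: deeply classical
n_argon = 101325 / (k * 300)  # ideal-gas number density, m^-3
argon = degeneracy_parameter(39.95 * amu, 300, n_argon)

# liquid helium-4 at 2 K (density 0.145 g/mL, a value quoted later in the
# text): the criterion fails badly, so quantum statistics is required
n_helium = 145.0 / (4.00 * amu)  # number density, m^-3
helium = degeneracy_parameter(4.00 * amu, 2, n_helium)
```

For argon at room conditions the parameter is of order 10^−7, so Boltzmann statistics applies; for cold, dense helium it exceeds unity, in line with the statement that only the lightest molecules at very low temperatures violate the criterion.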
Equation (79) is a rigorous, central equation of statistical thermodynamics, whereas q(V,T) occurs only for systems of independent particles and then only under conditions of high temperature or low density.

Monatomic ideal gas

A monatomic gas has translational and electronic degrees of freedom, which are independent to an excellent approximation. Therefore, according to equation (96), q = qtransqelec. The translational partition function is qtrans = Σi e^(−εitrans(V)/kT), where the εitrans(V) are the allowed energies of a particle in a box. Under the conditions of high temperature or low number density, where Boltzmann statistics applies, it is easy to show that qtrans(V,T) = (2πmkT/h²)^(3/2)V, where V = a³. Substituting this result into equation (109) gives equation (81), which was used earlier to show that PV = nRT and CV = (3/2)R are direct results of the statistical thermodynamics of a monatomic ideal gas. The electronic partition function of an atom can be written in the form qelec = Σj ωeje^(−εjelec/kT), where εjelec is the energy and ωej is the degeneracy of the jth electronic state. It is convenient and admissible to fix the arbitrary zero of energy such that ε1elec = 0; that is, the energies of all the electronic states are referred to the ground state. Given this convention, qelec = ωe1 + ωe2e^(−Δε12elec/kT) + · · ·, where Δε1jelec is the energy of the jth electronic state relative to the ground electronic state. The values of Δε1jelec are typically on the order of electron volts, and so Δε1jelec/kT is quite large and exp(−Δε1jelec/kT) is quite small. At ordinary temperatures, only the first term, the degeneracy of the ground electronic state, is significantly different from zero. There are a few cases, however—such as the halogen atoms—in which the first excited state lies only a fraction of an electron volt above the ground state. In such cases, additional terms must be included in equation (114), although the summation converges very rapidly.
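The size of qtrans can be appreciated numerically (Python sketch; argon at room conditions is an assumed illustrative example):

```python
import math

h = 6.626e-34    # Planck constant, J s
k = 1.381e-23    # Boltzmann constant, J/K
amu = 1.661e-27  # atomic mass unit, kg

def q_trans(mass_kg, T, V):
    # closed form valid under Boltzmann (high-temperature / low-density)
    # conditions: qtrans = (2 pi m k T / h^2)^(3/2) * V
    return (2 * math.pi * mass_kg * k * T / h**2) ** 1.5 * V

# one argon atom in a molar volume at 300 K and 1 atm (about 0.0246 m^3)
q = q_trans(39.95 * amu, 300.0, 0.0246)
```

The result is of order 10^30, vastly exceeding the roughly 6 × 10^23 atoms in a mole; the number of accessible translational states dwarfs the number of particles, which is exactly the condition under which Boltzmann statistics applies.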
According to equation (78), the fraction of atoms in the first excited electronic state is given by f2 = ωe2e^(−Δε12elec/kT)/qelec. The value of f2 is negligible for most atoms, but there are cases, such as for fluorine, where two (or more) terms are needed to evaluate qelec.

Diatomic ideal gas

Diatomic molecules have rotational and vibrational as well as translational and electronic degrees of freedom. If the rotational motion approximates that of a rigid rotator and the vibrational motion that of a harmonic oscillator (the rigid rotator–harmonic oscillator approximation), then q = qtransqrotqvibqelec. The quantities qtrans and qelec have been discussed in the section Monatomic ideal gas, and so the only new quantities here are qrot and qvib. The quantum mechanical energies of a rigid rotator are εJ = J(J + 1)h²/8π²I, J = 0, 1, 2, . . . (equation [117]), with an associated degeneracy ωJ = (2J + 1). In equation (117), I is the moment of inertia of the molecule. For most diatomic molecules at room temperature, it so happens that T >> h²/8π²Ik, and the rotational partition function is then qrot = 8π²IkT/σh² (equation [118]), where σ is a quantity called the symmetry number of the molecule, which is equal to 1 for a heteronuclear diatomic molecule and 2 for a homonuclear diatomic molecule. The quantum mechanical energies of a harmonic oscillator are εn = (n + 1/2)hν, n = 0, 1, 2, . . . (equation [119]), with a degeneracy ωn = 1 for all values of n. In equation (119), ν is the natural vibrational frequency of the molecule. The vibrational partition function is qvib = Σn e^(−(n + 1/2)hν/kT) = e^(−hν/2kT)/(1 − e^(−hν/kT)) (equation [120]). The fact that the series in equation (120) is a geometric series allows one to go from the first form to the second. Substituting equations (112), (114), (118), and (120) into equation (100) gives the partition function of a diatomic molecule, in which D0, the dissociation energy of the molecule, appears because the zero of energy is taken to be that of the separated ground-state atoms. The entropy of a diatomic gas is calculated by substituting equation (121) into equation (109) and then substituting that result into equation (85).
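The two new factors can be evaluated for a representative molecule (Python; the carbon monoxide moment of inertia and vibrational frequency used here are assumed literature-typical values, not taken from the text):

```python
import math

h = 6.626e-34  # Planck constant, J s
k = 1.381e-23  # Boltzmann constant, J/K

def q_rot(I, T, sigma):
    # high-temperature limit, equation (118): 8 pi^2 I k T / (sigma h^2)
    return 8 * math.pi**2 * I * k * T / (sigma * h**2)

def q_vib(nu, T):
    # harmonic oscillator, equation (120): e^(-h nu/2kT) / (1 - e^(-h nu/kT))
    x = h * nu / (k * T)
    return math.exp(-x / 2) / (1 - math.exp(-x))

# CO (heteronuclear, sigma = 1): I ~ 1.45e-46 kg m^2, nu ~ 6.5e13 Hz (assumed)
qr = q_rot(1.45e-46, 300.0, 1)
qv = q_vib(6.5e13, 300.0)
```

At room temperature qrot is of order 100 while qvib is far below 1: many rotational states are thermally accessible, but the vibration is essentially frozen in its ground state, a contrast that reappears in the population figures below.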
The agreement between these calculated values of the entropy and experimental (calorimetric) values is excellent. Q(N,V,T) can also be used to calculate molar heat capacities. The resultant expression (neglecting any electronic contribution) is CV = (5/2)R + R(hν/kT)²e^(hν/kT)/(e^(hν/kT) − 1)² (equation [122]). Equation (122) is plotted against temperature in Figure 17 for oxygen (O2), for which ν = 4.70 × 10^13 hertz (Hz). It can be seen that the agreement between equation (122) and the experimental (calorimetric) values is excellent. Although the figure does not show it, CV is (5/2)R (20.8 J·K⁻¹·mol⁻¹) for T less than 300 K or so and then increases with temperature beyond that. Physically, this is due to the excited vibrational states becoming increasingly populated with increasing temperature, once the thermal energy kT becomes significant compared with hν, the spacing of the vibrational states.

Population of rotational and vibrational levels

The above results can be used to calculate the populations of vibrational states in a diatomic gas. The fraction of molecules in the nth vibrational state is given by (from equation [98]) fn = e^(−nhν/kT)(1 − e^(−hν/kT)). The fraction of molecules in all the excited vibrational states then has the particularly simple form Σ(n ≥ 1) fn = e^(−hν/kT). Most diatomic molecules are in the ground vibrational state at 300 K. The population of the rotational levels in a diatomic gas can also be calculated. The fraction of molecules in the Jth rotational level is fJ = (2J + 1)e^(−J(J + 1)h²/8π²IkT)/qrot (equation [125]). The quantity fJ is plotted against J for carbon monoxide gas (CO) at 300 K in Figure 18. Note that the most populated rotational level in this case is the 7th or 8th. Unlike vibrational states, excited rotational states are well populated at room temperature.

Polyatomic molecules

The partition function of a polyatomic molecule is also given by equation (100), but the forms of qrot and qvib are slightly more complicated than those for diatomic molecules.
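The diatomic heat-capacity and population formulas above are easy to evaluate (Python; the O2 frequency is the value quoted above, while the CO rotational temperature h²/8π²Ik ≈ 2.77 K is an assumed literature value):

```python
import math

h = 6.626e-34  # Planck constant, J s
k = 1.381e-23  # Boltzmann constant, J/K
R = 8.314      # gas constant, J/(K mol)

def cv_diatomic(nu, T):
    # equation (122), no electronic contribution:
    # CV/R = 5/2 + (h nu/kT)^2 e^(h nu/kT) / (e^(h nu/kT) - 1)^2
    x = h * nu / (k * T)
    return R * (2.5 + x**2 * math.exp(x) / math.expm1(x)**2)

nu_O2 = 4.70e13  # Hz, the value quoted in the text for O2
low, high = cv_diatomic(nu_O2, 300.0), cv_diatomic(nu_O2, 5000.0)

def f_J(J, T, theta_rot):
    # relative population of rotational level J, equation (125), unnormalized:
    # (2J + 1) exp(-J(J+1) theta_rot / T)
    return (2 * J + 1) * math.exp(-J * (J + 1) * theta_rot / T)

# CO: theta_rot = h^2/(8 pi^2 I k), approximately 2.77 K (assumed value)
J_max = max(range(40), key=lambda J: f_J(J, 300.0, 2.77))
```

The sketch reproduces both observations in the text: CV sits near (5/2)R at 300 K and climbs toward (7/2)R at high temperature as the vibration becomes excited, and the most populated rotational level of CO at 300 K comes out as the 7th or 8th.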
The rotational partition function of a linear polyatomic molecule is the same as that of a diatomic molecule, but qrot for a nonlinear polyatomic molecule is qrot = (π^(1/2)/σ)(8π²IAkT/h²)^(1/2)(8π²IBkT/h²)^(1/2)(8π²ICkT/h²)^(1/2) (equation [126]), where σ is the symmetry number (the number of ways that the molecule can be rotated into itself) and IA, IB, and IC are the three principal moments of inertia of the molecule. Under the harmonic oscillator approximation, the vibrational motion of a polyatomic molecule consists of 3n − 5 (linear molecule) or 3n − 6 (nonlinear molecule) normal coordinates, so that the vibrational partition function takes the form qvib = Π(j = 1 to α) e^(−hνj/2kT)/(1 − e^(−hνj/kT)) (equation [127]), where α is either 3n − 5 or 3n − 6, depending on whether the molecule is linear or nonlinear. If equations (126) and (127) are substituted into equation (100) and then equation (100) into (109), the energy and hence the molar heat capacity of a polyatomic ideal gas can be calculated. The partition function of a nonlinear polyatomic molecule in the rigid rotator–harmonic oscillator approximation is the product of qtrans, qelec, and these two factors. The molar heat capacity of a nonlinear molecule is then CV = 3R + R Σ(j = 1 to α) (hνj/kT)²e^(hνj/kT)/(e^(hνj/kT) − 1)². The first term in this expression is the contribution to CV from the translational and rotational modes, and the second term is the contribution from the vibrational modes. As in the case of diatomic molecules, the vibrational modes do not contribute significantly to CV until the temperature is high enough to excite them, which occurs when the temperature is such that hνj is comparable to kT. At ordinary temperatures the vibrational contributions are usually far from their fully excited values, which are 8.314 J·K⁻¹·mol⁻¹ times the degeneracy of each mode. The agreement between statistical thermodynamics and calorimetric heat capacities of polyatomic molecules is generally excellent.

Ortho-para hydrogen

The rotational partition function of a diatomic molecule given by equation (118) is valid only for temperatures such that T >> h²/8π²kI.
The rotational partition function of a homonuclear diatomic molecule (and of a symmetric linear polyatomic molecule such as acetylene) at relatively low temperatures, where the condition T >> h²/8π²kI is not satisfied, depends on the nature of the nuclei. This dependence arises because the total wave function of the molecule must be either symmetric or antisymmetric under the interchange of the two identical nuclei: symmetric if the nuclei have integral spins (bosons) or antisymmetric if they have half-integral spins (fermions). This symmetry requirement has profound consequences for the thermodynamic properties of homonuclear diatomic molecules at low temperatures, particularly for small molecules such as hydrogen (H2). For homonuclear diatomic molecules whose nuclei have integral spin (such as 16O2), the rotational partition function is given by equation (130), where I is the spin of the nuclei. Likewise, for molecules whose nuclei have half-integral spins (such as H2), equation (131) applies. If h²/8π²IkT is small, as it is for most molecules at ordinary temperatures, then equations (130) and (131) reduce to equation (118) with σ = 2. There are some important cases, however, where h²/8π²IkT is not small, low-temperature hydrogen being one of the most important. The two nuclei in H2 have nuclear spin 1/2, and so substituting I = 1/2 into equation (131) gives qrot,nucl = Σ(J even) (2J + 1)e^(−J(J + 1)h²/8π²IkT) + 3Σ(J odd) (2J + 1)e^(−J(J + 1)h²/8π²IkT) (equation [132]). The terms involving a summation over even values of J represent hydrogen molecules whose nuclear spins are opposed, and the terms involving odd values of J represent hydrogen molecules whose nuclear spins are parallel. Hydrogen with only even rotational levels allowed is called para-hydrogen, and that with only odd rotational levels allowed is called ortho-hydrogen. The ratio of the number of ortho-H2 molecules to the number of para-H2 molecules that follows from equation (132) is the ratio of the odd-J sum (with its factor of 3) to the even-J sum. Figure 19 shows the percentage of para-H2 versus temperature in an equilibrium mixture of ortho- and para-hydrogen.
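The equilibrium ortho-para composition follows directly from the even- and odd-J sums of equation (132) (Python sketch; the H2 rotational temperature h²/8π²Ik ≈ 85.3 K is an assumed literature value, not from the text):

```python
import math

THETA_ROT_H2 = 85.3  # K, assumed rotational temperature of H2

def q_para(T, jmax=60):
    # even-J terms of equation (132), nuclear-spin weight 1
    return sum((2 * J + 1) * math.exp(-J * (J + 1) * THETA_ROT_H2 / T)
               for J in range(0, jmax, 2))

def q_ortho(T, jmax=60):
    # odd-J terms of equation (132), nuclear-spin weight 3
    return 3 * sum((2 * J + 1) * math.exp(-J * (J + 1) * THETA_ROT_H2 / T)
                   for J in range(1, jmax, 2))

def frac_para(T):
    # equilibrium fraction of para-H2 at temperature T
    return q_para(T) / (q_para(T) + q_ortho(T))

cold, hot = frac_para(20.0), frac_para(1000.0)
```

At 20 K the mixture is almost entirely para-H2, while at high temperature the fraction settles at 25 percent, matching the limits of the equilibrium curve described for Figure 19.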
Note that the system is all para-H2 at 0 K and 25 percent para-H2 at high temperatures. The heat capacity of H2 can be calculated using equation (132). Figure 20 shows the calculated, as well as the experimental, values at low temperatures. The two curves are in complete disagreement. This posed a problem for the proponents of quantum mechanics, which was still being developed and had not yet been generally accepted at the time these values were first calculated. It was finally realized that, unless a catalyst is present, the conversion between ortho- and para-hydrogen is extremely slow. As a result, hydrogen prepared in the laboratory at room temperature and then cooled down for the low-temperature heat capacity measurements retains the room-temperature composition (25 percent para-H2; see Figure 19) rather than reaching the equilibrium composition. Thus, the experimental data illustrated in Figure 20 are not for an equilibrium system of ortho- and para-hydrogen but for a metastable system whose ortho-para composition is that of equilibrium room-temperature hydrogen, 25 percent para-H2 and 75 percent ortho-H2. The heat capacity of such a metastable mixture can be calculated using the formula CV = (1/4)CV(para-H2) + (3/4)CV(ortho-H2), where CV(para-H2) is obtained from just the first (even-J) term and CV(ortho-H2) from just the second (odd-J) term of equation (132). These values are in excellent agreement with the experimental curve. The explanation of the heat capacity of H2 was one of the great achievements of quantum statistics. Even though only diatomic molecules have been considered here, the results of this section apply also to linear polyatomic molecules such as carbon dioxide and acetylene.

Chemical equilibria

An important application of statistical thermodynamics to chemistry involves the calculation of equilibrium constants for gas-phase reactions in terms of molecular quantities. Consider the general (ideal) gas-phase reaction νAA(g) + νBB(g) ⇔ νXX(g) + νYY(g) (equation [134]), where the ν's are stoichiometric coefficients.
The thermodynamic condition for equilibrium is that νXμX + νYμY = νAμA + νBμB (equation [135]), where the μ's are the chemical potentials of the reactants A(g) and B(g) and the products X(g) and Y(g). The relation between the chemical potential of a substance and its partition function is μ = −kT ln[q(V,T)/N] (equation [136]), where equation (111) has been used for Q(N,V,T) and Stirling's approximation (lnN! = N lnN − N for large N) for lnN!. If equation (136) is substituted into equation (135), then an expression for the equilibrium constant of the reaction given by equation (134) is obtained: K(T) = (qX/V)^νX(qY/V)^νY/(qA/V)^νA(qB/V)^νB (equation [137]). Equation (137) allows (ideal) gas-phase equilibrium constants to be calculated in terms of the properties of the individual molecules involved in the reaction. For example, one may consider the simple reaction 2Na(g) ⇔ Na2(g). Equations (110), (112), and (114) can be used for qNa(V,T) and equation (121) for qNa2(V,T) to obtain the values of the reaction's equilibrium constant at various temperatures. These values are in good agreement with the experimental values. A more complicated example is the reaction H2(g) + I2(g) ⇔ 2HI(g). Figure 21 shows the calculated values of lnK(T) plotted against 1/T. Once again, the agreement between the statistical thermodynamic values and the experimental values is quite good. The enthalpy of reaction ΔHrxn can be obtained from the slope of the line in Figure 21, giving ΔHrxn = −13 kJ, compared with the experimental value of −12.5 kJ.

Heat capacities of crystals

The heat capacity of a typical monatomic solid such as silver is shown in Figure 22. The interpretation of the data in Figure 22 was one of the great triumphs of the early quantum theory. According to classical (prequantum) physics, the molar heat capacity of a monatomic crystal should be 3R = 25 J·K⁻¹·mol⁻¹, which is known as the law of Dulong and Petit. Indeed, the data in Figure 22 approach this value at high temperatures, but the data drop off to lower values at low temperatures.
This low-temperature behaviour was a great challenge to theoretical physics at the turn of the 20th century. Einstein was the first to present a theoretical explanation of the low-temperature heat capacities of crystals by applying the ideas of the new quantum theory, as proposed by Max Planck. Einstein modeled a monatomic crystal as a collection of N atoms, each vibrating independently about its equilibrium lattice site. He further assumed that each atom vibrated with the same frequency and that these vibrations could be treated as three-dimensional quantum mechanical harmonic oscillators. Thus, he modeled the entire crystal as a set of 3N independent quantum mechanical harmonic oscillators, each vibrating with a frequency νE. The partition function of each oscillator is given by equation (120). As discussed earlier, the atoms may be considered distinguishable because they are located at lattice sites. Thus, according to equation (101), the partition function of the crystal is Q = [qvib(T)]^(3N) (equation [138]). Substituting equation (138) into equation (84) and then using the fact that CV = (∂U/∂T)V gives CV = 3R(hνE/kT)²e^(hνE/kT)/(e^(hνE/kT) − 1)² (equation [139]) for the molar heat capacity of a monatomic crystal. It is easy to show from equation (139) that CV approaches its Dulong and Petit value of 3R as T becomes large. Equation (139) contains one adjustable parameter, νE, with which to fit the entire heat capacity curve shown in Figure 22. Although Figure 22 appears to show that the Einstein model of a crystal gives good, general agreement with experiment, it is not in accord with very-low-temperature data. Equation (139) predicts that the low-temperature heat capacity goes as 3R(hνE/kT)²e^(−hνE/kT), whereas the experimental results go to zero as T³; the heat capacity predicted by Einstein goes to zero more rapidly than T³ does. A few years after Einstein's prediction, the Dutch-American physical chemist Peter Debye proposed a model treating a crystal as an elastic solid. The Debye theory correctly predicts the low-temperature T³ behaviour.
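Equation (139) can be evaluated directly (Python; the Einstein temperature hνE/k = 160 K used here is a hypothetical value of the order appropriate to silver):

```python
import math

R = 8.314  # gas constant, J/(K mol)

def cv_einstein(T, theta_E):
    # equation (139): CV = 3R (theta_E/T)^2 e^(theta_E/T) / (e^(theta_E/T) - 1)^2,
    # with theta_E = h nu_E / k the Einstein temperature
    x = theta_E / T
    return 3 * R * x**2 * math.exp(x) / math.expm1(x)**2

theta = 160.0  # K, hypothetical Einstein temperature of the order found for silver
dulong_petit = cv_einstein(1600.0, theta)  # high T: approaches 3R
frozen = cv_einstein(16.0, theta)          # low T: falls toward zero
```

At ten times the Einstein temperature the heat capacity is within a fraction of a percent of the Dulong and Petit value 3R, while at a tenth of it the oscillators are frozen out and CV is nearly zero, illustrating both limits discussed above.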
Electrons in metals

Many of the electronic properties of metals can be understood in terms of a free-electron model, in which the valence electrons are treated as an ideal Fermi-Dirac gas. Although the valence electrons in a metal interact with each other and with the atomic cores through an electric potential, this r⁻¹ potential is so long-range that the resultant potential that any one electron experiences as it moves through the crystal is almost constant. Furthermore, many of the physical properties of metals are due more to quantum-statistical effects than to the details of the electron-electron and electron-core interactions. Equation (103) for Fermi-Dirac statistics can be rewritten in the form f(ε) = 1/(e^((ε − μ)/kT) + 1), where f(ε) is the probability that a given electron state with energy ε is occupied. It is instructive to plot f(ε) against ε/μ for fixed values of μ/kT, as shown in Figure 23. At 0 K (where μ = μ0), all the states with ε < μ0 are occupied and those with ε > μ0 are unoccupied. In other words, the electrons occupy the states of lowest energy, much like the electrons in an atom in its ground electronic state. Thus, μ0 has the property of being a cutoff energy at 0 K. It turns out that μ0, which is called the Fermi energy, is typically on the order of electron volts. This means that at room temperature, where μ0/kT is on the order of 100 or so, f(ε) is essentially a step function like that shown at 0 K in Figure 23. Another characteristic temperature, called the Fermi temperature, is defined as μ0/k and is denoted by TF. Typically, TF for a metal is on the order of several thousand degrees, and so room temperature may be considered to be essentially zero degrees.
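The step-like character of f(ε) at metallic temperatures can be verified numerically (Python; energies are expressed in units of μ0, an illustrative choice):

```python
import math

def fermi(eps, mu, kT):
    # equation (103) rewritten: f(eps) = 1 / (e^((eps - mu)/kT) + 1)
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

# room temperature relative to a metallic Fermi energy: mu0/kT ~ 100,
# so f(eps) is very nearly the 0 K step function
mu0 = 1.0
below = fermi(0.9 * mu0, mu0, mu0 / 100)  # eps < mu0: essentially occupied
at_mu = fermi(mu0, mu0, mu0 / 100)        # eps = mu0: exactly 1/2
above = fermi(1.1 * mu0, mu0, mu0 / 100)  # eps > mu0: essentially empty
```

States even 10 percent below the Fermi energy are occupied to better than 99.99 percent, and states 10 percent above it are empty to the same accuracy, which is why room temperature may be treated as essentially 0 K for the electron gas.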
If the 0 K limit of f(ε) is used to calculate the molar electronic heat capacity of a metal, the resulting expression predicts that the molar electronic heat capacity will be on the order of 10⁻³ J·K⁻¹·mol⁻¹, which is observed for many metals to which the free-electron model might be expected to apply. Fermi-Dirac statistics have also been applied to the theory of white dwarfs and nuclear gases.

Bose-Einstein condensation

An ideal gas of bosons (an ideal Bose-Einstein gas) has an interesting low-temperature behaviour. The fraction of particles in the ground state (see Figure 24) is given by N0/N = 1 − (T/T0)^(3/2) for T < T0, where the temperature T0 is defined by N/V = 2.612(2πmkT0/h²)^(3/2) (equation [143]). Thus, when T > T0, the fraction of particles in the ground state is essentially zero. In this case, the particles are distributed smoothly over the many quantum states available to each one, so that the average number of particles in any one state is essentially zero. However, as the temperature is lowered below T0, suddenly the ground state (which is simply one of a great many states available) begins to be populated appreciably. Its population increases as the temperature is lowered, until all the particles are in the ground state at T = 0 K. The fact that one state out of the many available to each particle abruptly becomes greatly preferred at T = T0 is analogous to an ordinary phase transition. This "condensation" of the particles into their ground state is called Bose-Einstein condensation. This Bose-Einstein phase transition perhaps can be seen more readily by looking at the pressure as a function of density at a fixed temperature. The resultant equation of state is plotted in Figure 25, where pressure-volume isotherms are shown. Note that these isotherms are not too dissimilar from the isotherms of real gases. The horizontal lines represent regions in which the system is a mixture of two phases, a condensed phase (A) and a dilute phase (B).
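The ground-state fraction of the ideal Bose-Einstein gas is a one-line formula (Python sketch; T0 = 3.14 K is the helium estimate quoted later in the text, used here purely for illustration):

```python
def ground_state_fraction(T, T0):
    # N0/N = 1 - (T/T0)^(3/2) for T < T0, and essentially zero above T0
    if T >= T0:
        return 0.0
    return 1.0 - (T / T0) ** 1.5

T0 = 3.14  # K, the order-of-magnitude estimate quoted for liquid He-4
f_zero = ground_state_fraction(0.0, T0)
f_mid = ground_state_fraction(T0 / 2, T0)
f_above = ground_state_fraction(2 * T0, T0)
```

The fraction rises abruptly from zero at T0 to unity at 0 K; at half the condensation temperature roughly 65 percent of the particles already occupy the single ground state.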
At each temperature, the dilute phase has a specific volume v0 given by (h²/2πmkT)^(3/2)/v0 = 2.612, and the condensed phase has a specific volume of zero. Bose-Einstein condensation is a first-order phase transition. It is a rather unusual first-order phase transition, however, because the condensed phase has no volume, and so the system has a uniform density, unlike the two regions of different densities usually associated with first-order phase transitions. Because the particles in the condensed phase have zero momentum, Bose-Einstein condensation is considered to be a condensation in momentum space rather than in coordinate space. Bose-Einstein condensation takes place even though the particles do not interact with one another. An effective interaction does occur, however, through the symmetry requirement on the N-body wave function of the system—and this interaction leads to the condensation. Although the results described here are valid only for an ideal gas of bosons, there is a real system to which they are approximately applicable. Helium exists in the form of two isotopes: He-3 and He-4. He-4 has a spin of zero and therefore obeys Bose-Einstein statistics. Among its many remarkable properties is the heat-capacity curve shown in Figure 26B. Because this experimental curve resembles the Greek letter λ, the transition is known as a "lambda transition." The experimental heat capacity appears to diverge logarithmically at T = 2.18 K; nevertheless, it agrees remarkably well with the heat-capacity curve of an ideal Bose-Einstein gas (shown in Figure 26A). The agreement is not complete, since liquid He-4 combines quantum statistics and intermolecular interactions, but it seems that the experimental heat capacity can be attributed in part to the quantum statistics of the Bose-Einstein He-4 system. Similarly, liquid He-3 obeys Fermi-Dirac statistics, and its heat-capacity curve, like that of an ideal Fermi-Dirac gas, does not exhibit any unusual behaviour.
Furthermore, if the value T0 is calculated from equation (143) (using the value 0.145 gram per millilitre for the density of liquid helium), then T0 = 3.14 K, which is the right order of magnitude. Although the λ-transition in He-4 differs significantly from Bose-Einstein condensation in that it is not a first-order transition, it seems clear that Bose-Einstein statistics plays an important role in the λ-transition. Bose-Einstein statistics has also been used to derive the thermodynamic properties of blackbody radiation. For example, Planck's famous blackbody distribution can be derived by treating the radiation as an ideal quantum gas of photons.

Nonideal gases

For all the systems discussed up to now, the forces acting between the constituent particles could be neglected. Most systems of interest are such that it is not possible to ignore these interactions, which, in fact, play a dominant role in determining macroscopic properties. Only a few examples will be discussed here. One of the most important systems in which intermolecular interactions play a dominant role is a nonideal gas at temperatures where classical statistical thermodynamics is applicable. In the limit of low densities, all gases behave ideally and obey the equation of state P = ρRT, where ρ = n/V is the molar density. The densities of a gas that behaves ideally are such that the molecules are so far apart on average that their interactions can be neglected. As the density of a gas is increased, the molecules are closer together on average, their interactions no longer can be neglected, and the gas shows deviations from ideal behaviour. These deviations can be expressed generally by writing P/RT = ρ + B2(T)ρ² + B3(T)ρ³ + · · · (equation [144]). This equation of state is called the virial equation of state, and the coefficient Bj(T), which is a function of temperature only, is called the jth virial coefficient. Note that equation (144) reduces to the ideal gas equation at low densities.
The second virial coefficient represents the first deviation from ideal behaviour as the density is increased, and it has been accurately determined for a great many gases. The second and third virial coefficients give most of the correction to P/RT up to pressures of about 100 atmospheres. Equation (144) can be derived by statistical thermodynamics. The derivation consists of a systematic expansion of P/RT in terms of two interacting molecules, three interacting molecules, and so on; the jth virial coefficient is expressed as an integral over the positions of exactly j interacting molecules. The most dominant and important virial coefficient, B2(T), for the simple case of a monatomic gas is given by B2(T) = −2π∫0∞ [e^(−u(r)/kT) − 1]r²dr (equation [145]), where u(r) is the interatomic potential and r is the separation of the atoms. Therefore, if the interatomic potential u(r) is known, the pressure of a nonideal gas can be calculated. In principle, u(r) can be determined from quantum mechanics, but this is an exceedingly difficult numerical problem and has been done only for the simplest atoms. Instead, simple analytical expressions for u(r) are generally used, with adjustable parameters that can be fit to experimental data. The most commonly used form for u(r) is the so-called Lennard-Jones 6-12 potential u(r) = 4ε[(σ/r)¹² − (σ/r)⁶] (equation [146]). This expression for u(r) is shown in Figure 27, where it can be seen that σ is the distance at which u(r) = 0 and ε is the depth of the well. When equation (146) is used to calculate B2(T) from equation (145), the result can be written as B2(T)/b0 = B2*(T*) (equation [147]), where T* = kT/ε and b0 = 2πσ³/3. Note that the right-hand side of this equation is a function of T* only. Thus, if the actual temperature is divided by ε/k and the observed second virial coefficient by 2πσ³/3, then the data for all gases should fall on one curve, as shown in Figure 28. The solid curve in Figure 28 is obtained from equation (147).
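Equations (145) and (146) combine into a one-dimensional integral that is easy to evaluate numerically in reduced units (Python sketch using a simple midpoint rule):

```python
import math

def u_star(r):
    # reduced Lennard-Jones 6-12 potential, equation (146): u/eps = 4[(1/r)^12 - (1/r)^6]
    return 4.0 * (r**-12 - r**-6)

def b2_reduced(T_star, rmax=10.0, n=20000):
    # reduced second virial coefficient B2/b0 from equation (145):
    # B2* = -3 * integral_0^inf (e^(-u*/T*) - 1) r*^2 dr*
    # evaluated by a midpoint rule (the tail beyond rmax is negligible here)
    dr = rmax / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += (math.exp(-u_star(r) / T_star) - 1.0) * r * r * dr
    return -3.0 * total

low = b2_reduced(1.0)    # low T*: the attractive well dominates, B2 < 0
high = b2_reduced(10.0)  # high T*: the repulsive core dominates, B2 > 0
```

The sign change of B2 with temperature is the numerical counterpart of the corresponding-states curve of Figure 28: attractions lower the pressure at low reduced temperature, while the hard core raises it at high reduced temperature.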
The behaviour shown in Figure 28, where the data for various gases fall on one curve, is an example of the law of corresponding states. Even though ε and σ are adjusted to fit the data, the agreement between theory and experiment in Figure 28 is excellent. The theory of nonideal gases is well established and is one of the more important applications of classical statistical thermodynamics.

The Ising model

The Ising model was first introduced in 1925 as a model for ferromagnetism. It consists of a regular array of lattice sites in one, two, or three dimensions, in which each site can be occupied in one of two ways. It is assumed that ferromagnetism is caused by interactions between the spins of certain electrons in the atoms occupying the lattice sites and that each atom can exist in one of only two spin states: +, in the direction of a magnetic field B0, or −, against the field. The potential energy of a single dipole or spin is −mB0 if it is oriented with the field (+) and +mB0 if it is oriented against the field (−), where m is the magnetic moment of an individual atom. Let the state of the jth lattice site be denoted by σj, which equals +1 for a + state and −1 for a − state. In terms of these σj, then, the potential energy of the jth spin due to the external field is −mB0σj. It is also assumed that there is an interaction εij between nearest-neighbour atoms, which are in states i and j, where i and j can be + or −. In terms of the σj, εij can be written as −Jσiσj, where J is the interaction between the spins. Note that, if the spins are parallel, σiσj is positive, and, if they are antiparallel, σiσj is negative, so that the nature of the interaction (be it attractive [−] or repulsive [+]) is determined by the sign of J. If J > 0, parallel alignments are the more stable and the model will describe ferromagnetism. If J < 0, an opposed alignment is the more stable and this will lead to antiferromagnetism.
These interaction energies are due to quantum mechanical exchange forces similar to those in chemical bond theory. The total energy of a given configuration (a given set of σj) is then E(σ1, σ2, . . . , σN) = −JΣσiσj − mB0Σjσj (equation [148]), where the first summation is over all nearest-neighbour pairs. This is the energy expression that characterizes the Ising model of a magnetic system. The canonical partition function is the summation over all configurations, weighted by exp(−E/kT), or Q(N,T,B0) = Σ(σ1 = ±1) Σ(σ2 = ±1) · · · Σ(σN = ±1) exp[−E(σ1, σ2, . . . , σN)/kT] (equation [149]). The total number of terms in the summation of equation (149) is 2^N, because each of the N σ's can take on two values. The magnetic properties of the system follow from Q(N,T,B0), equation (80), and the basic thermodynamic equation for magnetic systems, dA = −SdT − PdV − MdB0, where M is the magnetization of the system. In a certain sense, an Ising model is a simpler system than a nonideal gas or a liquid, because the interacting particles are allowed to sit only at discrete lattice sites. On the other hand, the model is difficult enough to have escaped being solved exactly in three dimensions, although the two-dimensional problem has been solved exactly (at least partially). In most cases, it is necessary to use approximate methods to evaluate Q(N,T,B0) from equation (149). The simplest approximation that retains the correct qualitative features is the Bragg-Williams, or mean-field, approximation. In a mean-field theory, the neighbouring spins of a lattice site are assumed to take on average values. Thus, each spin is treated exactly when it is the centre of interest and in an average manner when it is the neighbour of some other spin. The system is then treated self-consistently, producing what is called a mean-field approximation.
Before any results from the mean-field theory are discussed, it should be pointed out that experimentally there is a certain critical temperature, TC, called the Curie temperature, below which there is a residual magnetization if a ferromagnetic sample is placed in a magnetic field (B0 > 0) and then the field is removed (B0 → 0+). This residual magnetization is called spontaneous magnetization, or MS. In this state, even though B0 = 0, the majority of the spins are in the direction of the previously applied field, resulting in a ferromagnet. Above the Curie temperature, the spontaneous magnetization MS equals zero. The mean-field theory of ferromagnetism is in accord with these observations. An important result of the mean-field approximation is the equation MS/Nm = tanh(cJMS/(NmkT)), where c is the number of nearest neighbours to a lattice site (c = 4 for a square lattice). This equation gives the values of the spontaneous magnetization MS versus T. The left-hand side of this equation is always less than unity, and a nonzero solution exists only if cJ/kT is greater than one, so the equation predicts that T must be less than cJ/k in order for spontaneous magnetization to occur. This defines a temperature TC = cJ/k, the Curie temperature, below which the dipoles tend to align even when the external field is turned off. The system cannot exist in a ferromagnetic state above the Curie temperature. The Ising model has been applied to a great variety of physical systems, such as adsorption of gases onto solid surfaces, order-disorder transitions in alloys, concentrated solutions of liquids, the helix-coil transition of polypeptides, and the absorption of oxygen by hemoglobin.

The Onsager solution

In one of the most important and famous papers in statistical thermodynamics, the Norwegian-American chemist Lars Onsager presented an exact solution to the two-dimensional Ising model of ferromagnetism by evaluating equation (133) exactly for a square lattice in the absence of an external magnetic field.
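The mean-field self-consistency condition can be solved numerically by fixed-point iteration. The sketch below assumes the form s = tanh(cJs/kT) for the magnetization per spin s = MS/Nm, with J = k = 1 as illustrative units; the small positive seed mimics a field that has just been switched off:

```python
from math import tanh

def spontaneous_magnetization(T, c=4, J=1.0, k=1.0, iters=10000):
    """Solve s = tanh(c*J*s/(k*T)) by fixed-point iteration (B0 -> 0+).
    s = MS/(N*m) is the spontaneous magnetization per spin."""
    s = 0.5  # seed on the + side, mimicking a previously applied field
    for _ in range(iters):
        s = tanh(c * J * s / (k * T))
    return s

# Curie temperature TC = c*J/k = 4 for these parameters:
below = spontaneous_magnetization(T=2.0)   # converges to a nonzero value
above = spontaneous_magnetization(T=6.0)   # decays to zero
```

Below TC the iteration settles on a nonzero root; above TC the only solution is s = 0, in agreement with the text.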
This work constitutes one of the few exact evaluations of a partition function for systems in which interactions cannot be neglected. Just a few years before Onsager's complete solution, it was shown that the critical temperature of this system is given by e^−2J/kTC = √2 − 1, or that TC = 2.269J/k. Onsager found that the partition function of the system is given by (1/N)ln Q = ln[2cosh(2J/kT)] + (1/2π)∫0π ln{(1/2)[1 + (1 − k1²sin²φ)1/2]}dφ, (151) where k1 = 2sinh(2J/kT)/cosh²(2J/kT). Differentiation of equation (151) with respect to temperature gives the energy U. One finds after some cancellation that U = −NJcoth(2J/kT)[1 + (2/π)(2tanh²(2J/kT) − 1)K1(k1)], (153) where K1(k1) is the complete elliptic integral of the first kind, or K1(k1) = ∫0π/2(1 − k1²sin²φ)−1/2dφ. As T → TC, k1 → 1 − (4J²/k²)(1/T − 1/TC)² + · · · . Using this result and the fact that K1(k1) → ln[4/(1 − k1²)1/2] as k1 → 1, it can be seen that K1(k1) goes as ln|T − TC| near TC. The coefficient of K1(k1) in equation (153) is linear in (T − TC) near the critical point, and so U is continuous and equal to −NJcoth(2J/kTC) = −√2NJ at the point T = TC. On further differentiation of equation (153) to obtain the heat capacity C, it can be determined from the term (T − TC) ln|T − TC| that C is proportional to ln|T − TC| near T = TC. Thus, the heat capacity has a logarithmic singularity at T = TC, which is shown in Figure 30. Several years later Onsager calculated the spontaneous magnetization MS(T) for a square lattice and found that MS(T) = Nm[1 − (sinh(2J/kT))−4]1/8 for T < TC. (154) Figure 31 shows the spontaneous magnetization plotted versus T/TC. Near the critical point, as T → TC, equation (154) becomes MS(T) ∝ (TC − T)1/8.

Critical phenomena

Statistical thermodynamics has played an important role in the understanding of critical phenomena. Recall that pressure-volume isotherms of real gases become flatter as T approaches Tc from above, and finally in the critical region the isothermal compressibility κT, which is essentially the reciprocal of the slope of the pressure-volume isotherms, diverges to infinity. The van der Waals equation is one of the simplest equations of state to display this critical behaviour.
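The quoted critical temperature follows directly from the relation e^−2J/kTC = √2 − 1; a quick numerical check (with J = k = 1) also confirms the value of coth(2J/kTC) that fixes U at the critical point:

```python
from math import cosh, exp, log, sinh, sqrt

# From exp(-2J/kTC) = sqrt(2) - 1, i.e. TC = 2/ln(sqrt(2) + 1):
TC = 2.0 / log(sqrt(2) + 1.0)          # = 2.269... in units of J/k

# Consistency checks used in the text: sinh(2J/kTC) = 1 and
# coth(2J/kTC) = cosh/sinh = sqrt(2), so U(TC) = -sqrt(2)*N*J.
coth_at_TC = cosh(2.0 / TC) / sinh(2.0 / TC)
```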
Two predictions that are due to the van der Waals equation are as follows: (1) The difference between the densities of coexisting liquid and gaseous phases (ρl and ρg, respectively) vanishes as ρl − ρg = A(Tc − T)1/2, (156) where A is a constant. (2) The compressibility along the critical isochore (ρ = ρc) diverges as κT = B(T − Tc)−1, (157) where B is a constant. These predictions are, in fact, not peculiar to the van der Waals equation but are a direct consequence of the assumption that the free energy and pressure can be expanded in a Taylor series in the volume and the temperature around the critical point. These two results are part of the classical, or van der Waals's, theory of the critical point. These predictions have been tested experimentally, and, although not in complete accord with experiment, they do give a good starting point for a more detailed study of the critical region. One of the most thorough tests of the classical theory of the critical region is given by the data on the coexistence curves of simple gases. The British thermodynamicist Edward Armand Guggenheim showed that the gases neon, argon, krypton, xenon, nitrogen, and oxygen closely obey a law of corresponding states of the form ρl − ρg ∝ (Tc − T)β with β = 1/3. More recent data seem to indicate that β = 0.325. These results disagree with the classical prediction of β = 1/2 (equation [156]), although the form of the prediction does appear to be correct. The classical prediction that κT should diverge as (T − Tc)−1 along the critical isochore (equation [157]) does not seem to be found experimentally. In practice, plots of 1/κT versus temperature are not linear but are distinctly concave upward in the critical region, which suggests that the compressibility might diverge more strongly than a simple pole. Thus, κT ∝ (T − Tc)−γ, where γ is greater than one. A great deal of knowledge about critical exponents has been obtained from Ising models.
These models offer the advantage that they can be solved exactly in two dimensions and, even though they cannot be solved exactly in three dimensions, it is possible to derive series expansions containing enough terms (about 20) to provide a great deal of information about the functions themselves. It is found from series expansions that β is 0.312 ± 0.005 for three-dimensional lattices, compared with the experimental value of 0.325 and the classical value of 1/2. Similarly, γ = 1.250 ± 0.003 for a three-dimensional lattice, compared with the experimental value of 1.24 and the classical value of 1. The closeness of the lattice statistics values to the experimental values indicates that the dimensionality and the statistics are the main factors determining critical behaviour and that the details of the molecular interactions are of secondary importance. One reason that a lattice model can represent a real system like a gas at the critical point is that the correlations are of such a long range that the underlying lattice structure probably becomes unimportant. The critical exponents β and γ are like universal constants, which is why lattice models have been useful in studying the critical region and have provided so much information and insight.

Nonequilibrium thermodynamics

Not long after Clausius and Kelvin formulated the principle now known as the second law of thermodynamics, scientists began to search for mechanical explanations of entropy. This search was beset with difficulties, because the mechanical equations predict reversible motions that can run both forward and backward in time, while the second law of thermodynamics is irreversible. Indeed, in the words of Clausius, the second law was simply that “heat cannot, of itself, move from a cold to a hot body”; i.e., it moves irreversibly from hot to cold. While this statement of the second law seemed simple enough to understand, the entropy function that is derived from it was not.
Convincing investigations of the mechanical theory of heat were initiated by the Scottish physicist James Clerk Maxwell in the 1850s. Maxwell argued that the velocities of point particles in a gas were distributed over a range of possibilities that increased with temperature, which led him to predict, and then verify experimentally, that the viscosity of a gas is independent of its pressure. In the following decade Boltzmann began his investigation of the dynamical theory of gases, which ultimately placed the entire theory on firm mathematical ground. Both men had become convinced that the entropy reflected molecular randomness. Maxwell expressed it this way: “The second law has the same degree of truth as the statement that, if you throw a tumblerful of water into the sea, you cannot get the same tumblerful of water out again.” These were the seminal notions that in the 20th century led to the field of nonequilibrium, or irreversible, thermodynamics.

Phenomenological theory: systems close to equilibrium

Despite its molecular origin, the first consistent formulation of nonequilibrium thermodynamics was phenomenological—i.e., independent of the existence of molecules. In this formulation the definition of the entropy function is extended to apply to systems that are close to, but not actually at, equilibrium. Called the local equilibrium assumption, it means that the time derivative of the entropy S can be written dS/dt = (1/T)dE/dt + (P/T)dV/dt − Σi(μi/T)dNi/dt, where the variables that are proportional to the size of the system (the extensive variables) are E, the internal energy; V, the volume; and Ni, the number of molecules of kind i. The temperature T and the other intensive variables (the pressure P and the chemical potentials μi) retain their usual meanings. This is the first phenomenological assumption.
The second assumption is based on the second law—namely, that the time rate of change of the entropy consists of two terms: one due to changes that are reversible and another due to irreversible processes, called the entropy production or dissipation function (Φ), which can never be negative. In the phenomenological theory, the dissipation function can be written as a sum over the thermodynamic fluxes J and their conjugate forces X, or Φ = ΣiJiXi ≥ 0. Examples of fluxes include the heat or mass flowing through unit area in unit time in a fluid, an electric current, and the time rate of change of the number of molecules or atoms in a chemical reaction. These various fluxes all can be thought of as the time rate of change of an extensive variable and can be written Ji = dni/dt, with ni representing one of the extensive variables. The conjugate forces are identified from phenomenological equations, such as Fourier’s law of heat transport—where the thermodynamic force is the temperature gradient—or Ohm’s law, in which the current in a resistor is proportional to the voltage across it. Close enough to equilibrium, these—and all the other familiar kinetic and transport laws—can be expressed in the linear form Ji = ΣkLikXk, (161) where the constant coefficients Lik are called phenomenological coupling coefficients. Written in this fashion, the thermodynamic forces are differences between the instantaneous and equilibrium value of an intensive variable (or their gradients). Thus, for example, 1/T − 1/Te is the thermodynamic force conjugate to the internal energy E, which leads to Newton’s law of cooling. Equations such as (161) are referred to as linear laws. In addition to the kinetic and transport equations that were established experimentally in the 19th century, the linear laws suggested new types of transport phenomena.
Indeed, in the hands of the chemist Lars Onsager and later the German physicist Josef Meixner, this rewriting led to a deep connection between the phenomenological coupling coefficients and thermodynamics. Because the dissipation function is positive, the coupling coefficients are not arbitrary. Consider, for example, the case of energy and charge transport in a thermocouple. This device, illustrated in Figure 32, involves two lengths of metal wire of different composition connected in a loop with the metal junctions immersed in thermal reservoirs at different temperatures (say, ice and boiling water). According to the linear laws, the flux of energy Je and the flux of electrons Jq in the wire can be written as Je = LeeΔ(1/T) + LeqΔ(−v/T) (162) and Jq = LqeΔ(1/T) + LqqΔ(−v/T), (163) where v is the (electrochemical) potential difference and Δ(−v/T) is the thermodynamic force for the electron flux. The first term in equation (162) is a form of Newton’s law of cooling, and the second term in equation (163) is a form of Ohm’s law, so Lee and Lqq are related to the heat and electrical conductivity, respectively. Restrictions on the coefficients follow from the fact that the entropy production cannot be negative. This means that both of the direct coupling coefficients, Lee and Lqq, must be positive. Furthermore, the cross coupling coefficients Leq and Lqe must satisfy the inequality (Leq + Lqe)² ≤ 4LqqLee. Thus, the second law constrains the size of the cross coupling coefficients. The thermocouple illustrates another aspect of the cross coupling of fluxes and forces: namely, that a temperature difference might give rise not only to a heat flux but also to an electric current, while a potential difference can give rise to an electric current and a heat flux. Experimental measurements demonstrate not only that these phenomena, known as the Seebeck and Peltier effects, exist but that the two cross coupling coefficients, Lqe and Leq, are equal within experimental error.
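The constraint that the entropy production can never be negative can be sketched as a check on the coupling coefficients; the numerical coefficients below are invented for illustration, not measured values:

```python
def dissipation(L, X):
    """Phi = sum_i J_i*X_i with the linear laws J_i = sum_k L[i][k]*X[k]."""
    J = [sum(Lik * Xk for Lik, Xk in zip(row, X)) for row in L]
    return sum(Ji * Xi for Ji, Xi in zip(J, X))

def second_law_ok(Lee, Leq, Lqe, Lqq):
    """Conditions under which Phi >= 0 for every choice of the two forces:
    positive direct coefficients plus (Leq + Lqe)**2 <= 4*Lqq*Lee."""
    return Lee >= 0 and Lqq >= 0 and (Leq + Lqe) ** 2 <= 4 * Lee * Lqq

# A coefficient matrix that satisfies the constraint, and one that does not:
ok = second_law_ok(2.0, 1.0, 1.0, 3.0)        # True
too_strong = second_law_ok(1.0, 3.0, 3.0, 1.0)  # False: cross coupling too large
```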
By introducing statistical ideas into nonequilibrium thermodynamics, Onsager was able to prove for irreversible processes that Lik = Lki, a fact now referred to as the Onsager reciprocal relations. Another restriction on the cross coupling coefficients, called the Curie principle, dictates, for example, that the thermodynamic force for a chemical reaction in a fluid, which has no directional character, does not contribute to the heat flux, which is a vector. Experiments to test the validity of the reciprocal relations are not always easy. Some of the most accurate deal with mass diffusion at uniform temperature. For example, in a solution of table salt (NaCl) in water, there is only a single thermodynamic force for diffusion (the gradient of the chemical potential of NaCl), but in an aqueous solution of NaCl and potassium chloride (KCl), gradients in the chemical potentials of both salts act as thermodynamic forces for diffusion. Within the limits of experimental uncertainty, the cross coupling coefficients for diffusion in this and other three-component solutions have been shown to be equal. The magnitude of the cross coupling coefficients is frequently comparable to the direct coupling coefficients, which has made cross coupling a significant phenomenon in electrochemistry, geophysics, and membrane biology. During World War II, thermal diffusion, a cross coupling in which a temperature gradient causes a diffusion flux, was used to separate fissionable isotopes of uranium. In 1931 Onsager enunciated another principle of nonequilibrium thermodynamics that applies to certain systems at steady state. A steady state is a condition in which none of the extensive variables change in time but in which there is a net flux of some quantity. This contrasts with thermal equilibrium, in which all the fluxes vanish. 
For example, if no current is drawn from the thermocouple in Figure 32, a steady state is attained with a heat flux Je, given by equation (162), and a flux of electrons Jq = 0. According to equation (163) this implies that the difference in the temperature of the two reservoirs ΔT maintains a steady potential difference determined by the condition LqeΔ(1/T) + LqqΔ(−v/T) = 0. (164) Onsager discovered that this type of steady state condition was implied by a property of the dissipation function Φ. The steady state in equation (164) turns out to be the state of least dissipation when the temperatures of the two reservoirs are held fixed. The proof of this requires that the linear laws and the reciprocity relations are valid; it is referred to as the Rayleigh-Onsager principle of least dissipation or the principle of minimum entropy production.

Fluctuations and dissipation

By introducing statistical ideas into nonequilibrium thermodynamics, Onsager extended applications of the theory to a new type of phenomenon: molecular fluctuations. The existence of molecular fluctuations was first made manifest in Einstein’s theory of Brownian motion. Brownian motion, visible only under a microscope, is the incessant, random movement of micrometre-sized particles immersed in a liquid. In 1905 Einstein correctly identified Brownian motion as due to imbalances in the forces on a particle resulting from molecular impacts from the liquid. Shortly thereafter, the French physicist Paul Langevin formulated a theory in which the minute fluctuations in the position of the particle were due explicitly to a random force. Langevin’s approach proved to have great utility in describing molecular fluctuations in other systems, including nonequilibrium thermodynamics. A striking example of molecular fluctuations and their analysis was reported by two American physicists in 1928. Using sensitive electrical devices, J.B.
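The least-dissipation property can be illustrated numerically: holding the thermal force Xe fixed and minimizing Φ over the electrical force Xq reproduces the zero-current steady state. The coefficients below are invented for the sketch, with Lqe = Leq as the reciprocal relations require:

```python
# Thermocouple sketch with assumed coefficients (Lqe = Leq):
Lee, Leq, Lqq = 2.0, 0.5, 1.0
Xe = 1.0  # fixed thermal force

def phi(Xq):
    """Dissipation Phi = Lee*Xe**2 + 2*Leq*Xe*Xq + Lqq*Xq**2."""
    return Lee * Xe ** 2 + 2 * Leq * Xe * Xq + Lqq * Xq ** 2

# Search a fine grid of trial forces for the minimum of Phi:
grid = [i / 10000.0 for i in range(-20000, 20001)]
Xq_star = min(grid, key=phi)

# At the minimum the electron flux vanishes, Jq = Leq*Xe + Lqq*Xq = 0,
# which is the steady-state condition of equation (164):
Jq_star = Leq * Xe + Lqq * Xq_star
```

Differentiating Φ with respect to Xq gives 2(LeqXe + LqqXq) = 2Jq, so the minimum of the dissipation and the vanishing of the electron flux are one and the same condition.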
Johnson measured fluctuations in the voltage across various types of resistors, which were kept at equilibrium with no current flowing. Now called Johnson noise, these fluctuations are due to the same type of molecular processes as those associated with Brownian motion. Using a formula derived by Harry Nyquist, Johnson was able to determine Boltzmann’s constant with considerable accuracy. Nyquist’s idea was that, once a fluctuation in the voltage was excited by molecular processes, it would decrease in time, just as a charge imposed by a battery would decrease, until it was randomly reexcited again. A quantitative test of this idea can be made using random voltage records, like the one shown schematically in Figure 33. From this record, one forms the time-correlation function by taking the instantaneous deviation from the average voltage across the resistor (zero at equilibrium) and multiplying it by the deviation a time τ later. Repeating this for all pairs of points that are separated in time by the same interval τ and then averaging the results determines the correlation function at time τ. Using the Langevin-Nyquist idea, the time-correlation function in Johnson’s experiment was predicted accurately. Three years later, Onsager suggested a broad generalization of the Langevin-Nyquist idea, now called the regression hypothesis. Onsager postulated that a spontaneous fluctuation in an extensive variable would relax toward equilibrium by the same linear laws as a large deviation until it was abruptly changed by a random molecular process. The value of this assumption is that it can be used to calculate the time-correlation function for any process and to express the result in terms of the coupling coefficients Lik. 
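The correlation-function recipe described here (subtract the mean, multiply deviations separated by a lag τ, and average over all such pairs) is straightforward to implement; the strictly alternating test record is invented for illustration:

```python
def time_correlation(v, tau):
    """Average of dv(t)*dv(t+tau), where dv is the deviation of the
    record v from its mean -- the estimator described in the text."""
    mean = sum(v) / len(v)
    dv = [x - mean for x in v]
    pairs = [dv[t] * dv[t + tau] for t in range(len(v) - tau)]
    return sum(pairs) / len(pairs)

# A strictly alternating record is perfectly anticorrelated at lag 1:
record = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
c0 = time_correlation(record, 0)   # 1.0
c1 = time_correlation(record, 1)   # -1.0
```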
Taking into account the underlying reversibility of the molecular equations of motion, Onsager was able to derive the reciprocity theorem Lik = Lki for purely irreversible processes, along with an appropriate generalization in the presence of magnetic and rotational fields. Some years later the Dutch scientist Hendrik Casimir derived an additional relationship valid for reversible and irreversible processes. The mathematical expression of the regression hypothesis is dni/dt = ΣkLikXk + f̃i. (165) This expression states that the extensive variables change in time according to the linear laws, except for the changes that occur because of the random flux f̃i. The random flux, like the random force in Brownian motion, is composed of random impulses that change the value of the extensive variable ni in an unpredictable fashion. Like the random signal in Figure 33, the random flux can be defined by its time-correlation function. This correlation function must be consistent with the distribution of values that the extensive variables take on at equilibrium. The correct distribution has a bell-shaped (i.e., Gaussian) form. It was obtained first by Einstein using statistical mechanics and generalizes Maxwell’s formula for the temperature dependence of the distribution of velocities in a gas. Using Einstein’s formula, it can be shown that the time-correlation function of the random fluxes must be proportional to the coupling coefficients, i.e., ⟨f̃i(t)f̃k(0)⟩ = 2kBLikδ(t), (166) where kB is Boltzmann’s constant (1.38 × 10−16 erg per kelvin) and δ(t) is the Dirac delta function. This remarkable formula shows that the time dependence of the random flux is determined by the coupling coefficients in the linear laws. Changes in the random fluxes f̃i, however, occur on a much more rapid time scale than that given by the linear laws, so that δ(t) vanishes except for a very short time interval around t = 0.
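The regression hypothesis can be sketched with a one-variable Langevin equation in the spirit of equation (165): a deviation relaxes by the linear law dn/dt = −λn between random impulses. In the sketch below λ, the step size, and the noise amplitude are illustrative choices; the random flux is switched off at the end so that the pure linear-law relaxation can be verified against exp(−λt):

```python
import random
from math import exp, sqrt

def langevin_step(n, lam=1.0, dt=0.001, noise=0.0, rng=random):
    """One Euler step of dn/dt = -lam*n + random flux (one-variable
    sketch of the regression hypothesis)."""
    return n + (-lam * n) * dt + noise * sqrt(dt) * rng.gauss(0.0, 1.0)

# With the random flux switched off, an initial deviation relaxes by
# the same linear law as a large deviation: n(t) = n(0)*exp(-lam*t).
n = 1.0
for _ in range(1000):
    n = langevin_step(n, noise=0.0)
# After t = 1.0 (1000 steps of dt = 0.001), n is close to exp(-1.0).
```

Running the same loop with a nonzero `noise` argument produces the picture in the text: slow linear-law decay interrupted by abrupt random reexcitations.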
The great difference between the time scale for relaxation of a flux created by a thermodynamic force and that for a random flux is an important feature of the thermodynamic theory of fluctuations. The formula in equation (166) is one form of the fluctuation dissipation theorem. The most significant verifications of nonequilibrium thermodynamic fluctuation theory have come from light scattering experiments on liquids. It has been known since the 19th century that when visible light traverses a fluid it will scatter owing to small fluctuations in the density. Indeed, the preferential scattering of light at the blue end of the spectrum is responsible for the blue colour of the sky and the red colour of sunsets. The Russian physicist Lev Davidovich Landau recognized that molecular fluctuations associated with the conduction of heat and the viscosity of a liquid would also change the frequency of the scattered light by small amounts, and he successfully applied Onsager’s fluctuation theory to calculate the so-called spectrum of scattered light from a quiescent fluid at equilibrium. Figure 34 shows the experimental spectrum of light scattered from liquid argon at T = 84.97 K. According to the fluctuation theory, the width of the central, or Rayleigh, peak is determined by the thermal conductivity, while the location of the two symmetric side, or Brillouin, peaks is determined by the speed of sound and their width by the viscosity of the fluid. The values of these coefficients, taken from the experiment in Figure 34 and from similar experiments on water and other liquids, are in excellent agreement with values measured by imposing thermodynamic forces on the system. Other confirmations of thermodynamic fluctuation theory have come from careful measurements of density fluctuations caused either by diffusion or by chemical reactions. 
Statistical nonequilibrium thermodynamics: Systems far from equilibrium

As successful as the phenomenological theory of nonequilibrium thermodynamics is, it applies only to systems at or close to equilibrium. Thus it does not explain the rich behaviour seen in chemical, physical, and biological systems that are far from equilibrium. Resistors carrying large electric currents can exhibit negative differential resistances—i.e., currents that decrease with increasing voltage—and may support oscillating, rather than steady, currents. Comparable behaviour is observed in certain chemical reactions—e.g., the so-called Belousov-Zhabotinsky reaction, in which both oscillations and waves of concentrations of cerium ions occur. Similar waves of calcium ion concentrations have been observed in living cells. All these phenomena involve nonlinear transport processes; and, because the phenomenological theory is linear, it cannot be used to understand them. The nonlinear generalization of the Onsager theory has its roots in the work of Boltzmann. To explain the mechanical origin of irreversibility, Boltzmann considered what happens to the distribution of velocities of gas molecules when collisions occur. This led him to formulate a kinetic equation, subsequently called the Boltzmann equation, in which two-body collisions (like those between two billiard balls) play a leading role. Certain collisions (called direct collisions) cause decreases in the number of molecules with a certain velocity, while other collisions (called restoring collisions) increase that number. The occurrence of both direct and restoring collisions corresponds to the inherent reversibility of molecular events. Despite this fact, the Boltzmann equation is not reversible, and it implies that a certain entropy-like quantity, called the H-function, never increases in time.
Moreover, it is possible to derive the laws of fluid flow, including the linear phenomenological equations, from the Boltzmann equation and to obtain explicit expressions for the heat conductivity and the viscosity of gases that agree with experimental measurements. Despite these successes, the Boltzmann equation involves conceptual difficulties. Because it is irreversible, it violates the recurrence theorem of mechanics. This theorem says that the molecules composing a system of finite energy and size will return at some future time to very nearly their initial condition. While it can be argued that a return to a nonequilibrium state is prohibited by the second law of thermodynamics, even Maxwell knew that such an unusual happening was not strictly ruled out. Boltzmann proposed the correct way out of this dilemma: his equation should be interpreted as describing what happens most, not all, of the time. To illustrate this, the Dutch physicists Paul and Tatyana Ehrenfest introduced a picturesque model, colloquially referred to as the dog-flea model. Think of two dogs lying next to one another, with a total of N fleas shared between them. If the fleas jump only from one dog to the next, then after a time the number on each dog will have changed while the total number of fleas will be the same. If the dogs are identical, after a long period of time each will have on the average N/2 fleas. This will be true even if all the fleas originally resided on only one dog. However, if one waits long enough, there is a finite, but very small, probability that all the fleas will be back on the original dog. The Ehrenfests argued that something similar must hold for the Boltzmann equation. The modern theory of nonequilibrium thermodynamics brings together the molecular, collisional ideas of Boltzmann with the statistical ideas of the Ehrenfests to give a nonlinear, statistical theory. 
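The dog-flea model is easy to simulate directly; the time-averaged occupation does indeed approach N/2 even when all the fleas start on one dog (the parameter values below are arbitrary):

```python
import random

def dog_flea(N=100, jumps=100000, seed=1):
    """Ehrenfest dog-flea model: at each step one of the N fleas,
    chosen at random, jumps to the other dog.  Returns the
    time-averaged number of fleas on dog A."""
    rng = random.Random(seed)
    on_A = N          # every flea starts on dog A
    total = 0
    for _ in range(jumps):
        if rng.randrange(N) < on_A:   # the chosen flea is on dog A
            on_A -= 1
        else:
            on_A += 1
        total += on_A
    return total / jumps

average = dog_flea()   # close to N/2 = 50
```

A full return to the initial all-on-one-dog state is not forbidden, just astronomically improbable for large N, which is the Ehrenfests' resolution of the recurrence puzzle.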
The theory is couched at a level intermediate between the phenomenological level and the mechanical level. Sometimes referred to as the mesoscopic level, it involves a conception of molecular events, termed elementary processes, that are similar to those in the Boltzmann equation. An elementary process represents a class of molecular events that cause a well-defined change in an extensive variable. For example, an elementary chemical reaction such as H + Cl ⇌ HCl changes the number of hydrogen (H) atoms by ωH = −1, the number of hydrogen chloride (HCl) molecules by ωHCl = +1, and so on, and involves n+H = 1 and n−H = 0 hydrogen atoms in the forward (+) and reverse (−) steps, and so forth. The familiar double arrow emphasizes the reversibility of the conversion of reactants to products. This elementary chemical reaction does not represent a single molecular event, but rather a collection of molecular events that all produce the indicated changes. This elementary reaction, like other elementary processes, is characterized by the molecular-size amounts, n±, of the extensive variables involved in the forward (+) and reverse (−) processes and their net changes, ω = n− − n+. Associated with each elementary process is a rate for the forward and reverse direction. Generalizing the laws of mass action, the forward and reverse rates of the chemical reaction above can be written r± = Ω exp(Σkn±kμk/kT), where the exponential terms exp(·) represent the probability of the reactants (+) or products (−) being poised for reaction and the factor Ω is the intrinsic reaction rate when poised. This exponential dependence of the rates of elementary processes on the chemical potentials is an example of the canonical form for rates of elementary processes.
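The canonical form reduces to ordinary mass action once an ideal-mixture chemical potential μk = kT ln ck is inserted, since exp(Σk n+k ln ck) = Πk ck^n+k. A sketch of that reduction follows; the species names, the absorption of standard-state terms into Ω, and all numerical values are assumptions of the illustration:

```python
from math import exp, log

def canonical_rate(Omega, stoich, conc, kT=1.0):
    """rate = Omega * exp(sum_k n_k*mu_k/kT) with the ideal-mixture
    choice mu_k = kT*ln(c_k); standard-state terms are assumed to be
    absorbed into Omega for this illustration."""
    s = sum(n * kT * log(conc[species]) for species, n in stoich.items())
    return Omega * exp(s / kT)

def mass_action_rate(Omega, stoich, conc):
    """Ordinary mass-action rate: Omega * product_k c_k**n_k."""
    rate = Omega
    for species, n in stoich.items():
        rate *= conc[species] ** n
    return rate

# Hypothetical forward step consuming one H and one Cl (numbers invented):
stoich = {"H": 1, "Cl": 1}
conc = {"H": 0.3, "Cl": 2.0}
r_canonical = canonical_rate(1.5, stoich, conc)
r_mass_action = mass_action_rate(1.5, stoich, conc)   # the two coincide
```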
The American chemist Joel Keizer noted that the Boltzmann-Planck expression for the entropy in terms of the number of molecular states of a system could be used to show that for many other types of elementary processes the rates take the same canonical form, r± = Ω± exp(−Σkn±kFk/kB), where Fk = ∂S/∂nk is the intensive variable associated with the extensive variable nk. In the nonlinear theory, microscopic reversibility is expressed as the equality Ω+ = Ω−, a property that holds for all processes except those involving the exchange of heat with a thermal reservoir. This single exception, in fact, can be used to give a proof of the second law of thermodynamics. When applied to binary collisions in the gas phase using Boltzmann’s H-function to define the entropy, the rates in the Boltzmann equation are found to have the canonical form. The canonical form applies to a great variety of physical and chemical processes, including diffusion, heat transport, electrochemical processes, shear and bulk viscosity, and cross coupling phenomena. In fact, if the canonical form of the rates is interpreted as describing kinetic laws, then close to equilibrium it reduces to the linear laws of the phenomenological theory, including the reciprocal relations. Enlarging on the ideas of the Ehrenfests, a deeper interpretation of the canonical form that circumvents problems with the recurrence theorem was proposed by Keizer. In it the rates represent transition probabilities for changes caused by elementary processes. Instead of dogs and fleas, consider the reaction cis ⇌ trans, in which the two chemical isomers of 1,2-dibromoethene differ only in the spatial arrangement of the atoms. The two isomers are analogous to the two dogs and the number of each isomer to the number of fleas. It is clear that the probability of one isomer changing into the other in a short interval of time must be proportional to the number of cis or trans isomers, just as the probability of a flea jumping from one dog to the other is proportional to the number of fleas.
In the nonlinear theory, one assumes that the probability per unit time for an extensive variable ni to make the change ni → ni + ωi is VΩ+ and that for the variable to make the change ni → ni − ωi is VΩ−. This fundamental assumption implies that, as long as fluctuations are small, the nonlinear transport equations are true on the average. In addition, for a system made up of many molecules, fluctuations satisfy a linear version of the usual transport equations, with a random flux that generalizes the fluctuation dissipation theorem in the linear theory. Because the nonlinear theory starts from an overtly statistical conception of molecular events, it differs in significant ways from the linear, phenomenological theory. As in the Boltzmann approach, the existence of the entropy and the positivity of the dissipation function are consequences of the canonical form. Furthermore, the Einstein formula for the bell-shaped distribution of fluctuations no longer must be assumed but is a consequence of the statistical interpretation of the canonical form. Unlike equilibrium states, steady states far from equilibrium can be unstable; i.e., a small perturbation may lead precipitously to a new state, as, for example, in the case of a pencil balanced on its point. This is true of the Gunn instability that occurs in thin wafers of the semiconductor gallium arsenide. If the electrical potential across the semiconductor exceeds a critical value, the steady current that is stable at lower potentials abruptly gives way to periodic changes in the current (Gunn oscillations). Statistical nonequilibrium thermodynamics shows that the stability of a nonequilibrium steady state is reflected in the behaviour of the molecular fluctuations. In fact, the width of the bell-shaped distribution of fluctuations at a steady state provides an indicator of stability; it becomes larger (and finally infinite) as the steady state becomes unstable.
While the nonlinear statistical theory describes a vast array of thermodynamic behaviours close to or far from equilibrium, its most compelling verifications have been for steady-state fluctuations far from equilibrium. Here, again, light scattering experiments and electrical measurements have provided the most accurate tests. The Dutch-American physicist Jan Sengers and his colleagues have measured carefully the spectrum of light scattered from various liquids with temperature gradients on the order of 50 to 100 kelvins per centimetre. In excellent agreement with calculations, they find an asymmetry in the Brillouin peaks and a change in the magnitude of the density fluctuations as compared to the same liquids at equilibrium. Measurement of voltage fluctuations in gallium arsenide wafers has documented an increase of six orders of magnitude in their size as the threshold of the Gunn oscillations is approached. The results of measurements and calculation of this increase using statistical nonequilibrium thermodynamics are shown in Figure 35. The distribution of voltage fluctuations is, to within experimental error, the bell-shaped Gaussian distribution predicted by the theory.
Atomic Vortex Theory - Kelvin

Kelvin in 1902 may not have had the computers to work on chaos theory, but in 1992 the Santa Fe Institute did have supercomputing and complexity modelling, though it seemed to toe the prescribed line in not articulating the newly discovered chaos law of emergence and its contradiction of the 2nd law of thermodynamics, predicting rewarming after the Big Bang rather than heat death. At this time Nikola Tesla was attempting to print his Theory of Environmental Energy, which indicated that the aether would outpour free energy if disturbed by rotating magnets or a rotating magnetic field; empirical proof of that these days comes from the spinning NASA satellites that gain more energy than they appear to be entitled to by the known (or allowed) physics when they engage a gravitational slingshot around a planet and its magnetosphere. ‘Real Scientists’ today are attempting to sell us the Higgs Boson or ‘God Particle’ as the final, ultimate, smallest building block: homogeneous, identical in every detail, reproducible in every detail and as standard as a billiard ball. Real Chaos Theory, though, would suggest that every item in the Universe is as unique as a fingerprint, with no two identical items, all varying to some degree, and that the aether and its array of particles is infinitely divisible, with no upper or lower limit on scale or function in any given context. Here are the wiki notes on Vortex Dynamics, which give an indication of the reasonable steps in natural modelling that chaos and fluid dynamics were producing before Science with a big ‘S’ decided, in 1938 in Copenhagen, that a physics paradigm with an inexplicable paradox at its heart was better than anything that natural events could teach us. Vortex dynamics is a vibrant subfield of fluid dynamics, commanding attention at major scientific conferences and precipitating workshops and symposia that focus fully on the subject.
A curious diversion in the history of vortex dynamics was the vortex atom theory of William Thomson, later Lord Kelvin. His basic idea was that atoms were to be represented as vortex motions in the ether. This theory predated the quantum theory by several decades and because of the scientific standing of its originator received considerable attention. Many profound insights into vortex dynamics were generated during the pursuit of this theory. Other interesting corollaries were the first counting of simple knots by P. G. Tait, today considered a pioneering effort in graph theory, topology and knot theory. Ultimately, Kelvin's vortex atom was seen to be wrong-headed but the many results in vortex dynamics that it precipitated have stood the test of time. Kelvin himself originated the notion of circulation and proved that in an inviscid fluid circulation around a material contour would be conserved. This result — singled out by Einstein as one of the most significant results of Kelvin's work[citation needed] — provided an early link between fluid dynamics and topology. The history of vortex dynamics seems particularly rich in discoveries and re-discoveries of important results, because results obtained were entirely forgotten after their discovery and then were re-discovered decades later. Thus, the integrability of the problem of three point vortices on the plane was solved in the 1877 thesis of a young Swiss applied mathematician named Walter Gröbli. In spite of having been written in Göttingen in the general circle of scientists surrounding Helmholtz and Kirchhoff, and in spite of having been mentioned in Kirchhoff's well known lectures on theoretical physics and in other major texts such as Lamb's Hydrodynamics, this solution was largely forgotten. A 1949 paper by the noted applied mathematician J. L. Synge created a brief revival, but Synge's paper was in turn forgotten. A quarter century later a 1975 paper by E. A. Novikov and a 1979 paper by H. 
Aref on chaotic advection finally brought this important earlier work to light. The subsequent elucidation of chaos in the four-vortex problem, and in the advection of a passive particle by three vortices, made Gröbli's work part of "modern science". Another example of this kind is the so-called "localized induction approximation" (LIA) for three-dimensional vortex filament motion, which gained favor in the mid-1960s through the work of Arms, Hama, Betchov and others, but turns out to date from the early years of the 20th century in the work of Da Rios, a gifted student of the noted Italian mathematician T. Levi-Civita. Da Rios published his results in several forms but they were never assimilated into the fluid mechanics literature of his time. In 1972 H. Hasimoto used Da Rios' "intrinsic equations" (later re-discovered independently by R. Betchov) to show how the motion of a vortex filament under LIA could be related to the non-linear Schrödinger equation. This immediately made the problem part of "modern science" since it was then realized that vortex filaments can support solitary twist waves of large amplitude. For thousands of years, knots have been used for basic purposes such as recording information, fastening and tying objects together. Over time people realized that different knots were better at different tasks, such as climbing or sailing. Knots were also regarded as having spiritual and religious symbolism in addition to their aesthetic qualities. The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often symbolizing unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork. Knots were studied from a mathematical viewpoint by Carl Friedrich Gauss, who in 1833 developed the Gauss linking integral for computing the linking number of two knots. 
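Gauss's linking integral mentioned above lends itself to a direct numerical check. The sketch below discretizes the two circles of a Hopf link and evaluates the double sum approximating Lk = (1/4π) ∮∮ (r1 − r2) · (dr1 × dr2) / |r1 − r2|³; the geometry and discretization are an illustrative construction, not taken from the text.

```python
import math

def circle(center, normal_axis, radius, n):
    """Sample n points of a circle; normal_axis 'z' gives the xy-plane,
    anything else gives a circle in the xz-plane (normal along y)."""
    pts = []
    for k in range(n):
        a = 2 * math.pi * k / n
        if normal_axis == 'z':
            p = (center[0] + radius * math.cos(a),
                 center[1] + radius * math.sin(a),
                 center[2])
        else:
            p = (center[0] + radius * math.cos(a),
                 center[1],
                 center[2] + radius * math.sin(a))
        pts.append(p)
    return pts

def linking_number(c1, c2):
    """Discrete Gauss linking integral over two closed polygonal curves,
    using segment midpoints and segment vectors."""
    total = 0.0
    n1, n2 = len(c1), len(c2)
    for i in range(n1):
        p1, q1 = c1[i], c1[(i + 1) % n1]
        m1 = [(p1[k] + q1[k]) / 2 for k in range(3)]   # midpoint of segment
        d1 = [q1[k] - p1[k] for k in range(3)]          # dr1
        for j in range(n2):
            p2, q2 = c2[j], c2[(j + 1) % n2]
            m2 = [(p2[k] + q2[k]) / 2 for k in range(3)]
            d2 = [q2[k] - p2[k] for k in range(3)]      # dr2
            r = [m1[k] - m2[k] for k in range(3)]       # r1 - r2
            cross = (d1[1] * d2[2] - d1[2] * d2[1],
                     d1[2] * d2[0] - d1[0] * d2[2],
                     d1[0] * d2[1] - d1[1] * d2[0])     # dr1 x dr2
            dist = math.sqrt(r[0] ** 2 + r[1] ** 2 + r[2] ** 2)
            total += sum(r[k] * cross[k] for k in range(3)) / dist ** 3
    return total / (4 * math.pi)

# Hopf link: two unit circles in perpendicular planes, one threading the other.
c1 = circle((0, 0, 0), 'z', 1.0, 100)
c2 = circle((1, 0, 0), 'y', 1.0, 100)
lk = linking_number(c1, c2)  # magnitude close to 1; sign depends on orientation
```

For two unlinked circles the same sum comes out near zero, which is the topological invariance Gauss's integral encodes.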
His student Johann Benedict Listing, after whom Listing's knot is named, furthered their study. The early, significant stimulus in knot theory would arrive later with Sir William Thomson (Lord Kelvin) and his theory of vortex atoms. (Sossinsky 2002, p. 1–3) In 1867 after observing Scottish physicist Peter Tait's experiments involving smoke rings, Thomson came to the idea that atoms were knots of swirling vortices in the æther. Chemical elements would thus correspond to knots and links. Tait's experiments were inspired by a paper of Helmholtz's on vortex-rings in incompressible fluids. Thomson and Tait believed that an understanding and classification of all possible knots would explain why atoms absorb and emit light at only the discrete wavelengths that they do. For example, Thomson thought that sodium could be the Hopf link due to its two lines of spectra. (Sossinsky 2002, p. 3–10) Tait subsequently began listing unique knots in the belief that he was creating a table of elements. He formulated what are now known as the Tait conjectures on alternating knots. (The conjectures were proved in the 1990s.) Tait's knot tables were subsequently improved upon by C. N. Little and Thomas Kirkman. (Sossinsky 2002, p. 6) James Clerk Maxwell, a colleague and friend of Thomson's and Tait's, also developed a strong interest in knots. Maxwell studied Listing's work on knots. He re-interpreted Gauss' linking integral in terms of electromagnetic theory. In his formulation, the integral represented the work done by a charged particle moving along one component of the link under the influence of the magnetic field generated by an electric current along the other component. Maxwell also continued the study of smoke rings by considering three interacting rings.
480-4400/02 – Quantum Physics I (KFI)

Guarantor department: Department of Physics. Credits: 4. Subject guarantor: Doc. Dr. RNDr. Petr Alexa. Subject version guarantor: Doc. Dr. RNDr. Petr Alexa. Study level: undergraduate or graduate. Requirement: Choice-compulsory. Study language: English. Year of introduction: 2018/2019. Intended for the faculties: USP, FEI. Intended for study types: Bachelor, Follow-up Master.

Instruction secured by: ALE02 - Doc. Dr. RNDr. Petr Alexa; UHL72 - Mgr. Radim Uhlář, Ph.D.

Extent of instruction for forms of study: Full-time - Credit and Examination - 2+2.

Subject aims expressed by acquired skills and competences: Explain the fundamental principles of the quantum-mechanical approach to problem solving. Apply this theory to selected simple problems. Discuss the achieved results and their measurable consequences.

Teaching methods: The course introduces the most important aspects of non-relativistic quantum mechanics. It includes the fundamental postulates of quantum mechanics and their applications to square wells and barriers, the linear harmonic oscillator, spherical potentials and the hydrogen atom. The remarkable properties of quantum particles and the resulting macroscopic effects are discussed.

Compulsory literature: MERZBACHER, E.: Quantum Mechanics, John Wiley & Sons, NY, 1998.

Recommended literature: SAKURAI, J. J.: Modern Quantum Mechanics, Benjamin/Cummings, Menlo Park, Calif., 1985. MERZBACHER, E.: Quantum Mechanics, Wiley, New York, 1970.

Way of continuous check of knowledge in the course of semester: Written test, active students' participation at seminars.

Other requirements: Systematic off-class preparation. Subject has no prerequisites. Subject has no co-requisites.

Subject syllabus:
1. Introduction - historical context and the need for a new theory.
2. Postulates of quantum mechanics, Schrödinger equation, time dependent and stationary, the equation of continuity.
3.
Operators - linear Hermitian operators, variables, measurability. Coordinate representation.
4. Basic properties of operators, eigenfunctions and eigenvalues, mean value, operators corresponding to the selected physical variables and their properties.
5. Free particle waves, wave packets. The uncertainty relation.
6. Model applications of the stationary Schrödinger equation - piece-wise constant potential, infinitely deep rectangular potential well - continuous and discrete energy spectrum.
7. Other applications: step potential, rectangular potential well, square barrier potentials - tunneling effect.
8. Approximations of selected real-life situations by rectangular potentials.
9. The harmonic oscillator in the coordinate representation and the Fock representation.
10. Spherically symmetric field, the hydrogen atom. Spin.
11. Indistinguishable particles, the Pauli principle. Atoms with more than one electron. Optical and X-ray spectra.
12. The basic approximations in the theory of chemical bonding.
13. Interpretation of quantum mechanics.

Conditions for subject completion: Credit and Examination - max. 100 points, min. 51 points (Credit; Examination). Mandatory attendance participation:

Occurrence in study plans: Academic year 2018/2019 - programme (N2658) Computational Sciences, field of study (2612T078) Computational Sciences, full-time (P), English, tutorial centre Ostrava, year 1, choice-compulsory study plan.

Occurrence in special blocks: (no entries)
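The tunnelling effect listed in item 7 of the syllabus can be sketched numerically. The closed-form transmission coefficient below is the standard textbook result for a rectangular barrier with E < V0; the barrier parameters and the rounded value of ħ²/2m for an electron are illustrative, not course-specified.

```python
import math

HBAR2_OVER_2M = 0.0381  # eV * nm^2, approximately hbar^2 / 2m for an electron

def transmission(E, V0, a):
    """Transmission coefficient through a rectangular barrier of height V0 (eV)
    and width a (nm), for a particle of energy 0 < E < V0 (eV):
    T = 1 / (1 + V0^2 sinh^2(kappa a) / (4 E (V0 - E)))."""
    kappa = math.sqrt((V0 - E) / HBAR2_OVER_2M)   # decay constant in the barrier
    s = math.sinh(kappa * a)
    return 1.0 / (1.0 + (V0 ** 2 * s ** 2) / (4.0 * E * (V0 - E)))

# A 1 eV high, 0.5 nm wide barrier is noticeably transparent to a 0.5 eV
# electron, even though classically the electron could never cross it:
t = transmission(E=0.5, V0=1.0, a=0.5)
```

Making the barrier wider or higher suppresses `t` exponentially, which is the qualitative content of the tunnelling effect.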
A heterojunction is an interface that occurs between two layers or regions of dissimilar semiconductors. These semiconducting materials have unequal band gaps as opposed to a homojunction. It is often advantageous to engineer the electronic energy bands in many solid-state device applications, including semiconductor lasers, solar cells and transistors. The combination of multiple heterojunctions together in a device is called a heterostructure, although the two terms are commonly used interchangeably. The requirement that each material be a semiconductor with unequal band gaps is somewhat loose, especially on small length scales, where electronic properties depend on spatial properties. A more modern definition of heterojunction is the interface between any two solid-state materials, including crystalline and amorphous structures of metallic, insulating, fast ion conductor and semiconducting materials. In 2000, the Nobel Prize in physics was awarded jointly to Herbert Kroemer of the University of California, Santa Barbara, California, USA and Zhores I. Alferov of Ioffe Institute, Saint Petersburg, Russia for "developing semiconductor heterostructures used in high-speed- and opto-electronics".

Manufacture and applications

Heterojunction manufacturing generally requires the use of molecular beam epitaxy (MBE)[1] or chemical vapor deposition (CVD) technologies in order to precisely control the deposition thickness and create a cleanly lattice-matched abrupt interface. A recent alternative under research is the mechanical stacking of layered materials into van der Waals heterostructures.[2] Despite their expense, heterojunctions have found use in a variety of specialized applications where their unique characteristics are critical.

Energy band alignment

The three types of semiconductor heterojunctions organized by band alignment.
Band diagram for straddling gap, n-n semiconductor heterojunction at equilibrium. The behaviour of a semiconductor junction depends crucially on the alignment of the energy bands at the interface. Semiconductor interfaces can be organized into three types of heterojunctions: straddling gap (type I), staggered gap (type II) or broken gap (type III) as seen in the figure.[6] Away from the junction, the band bending can be computed based on the usual procedure of solving Poisson's equation. Various models exist to predict the band alignment. • The simplest (and least accurate) model is Anderson's rule, which predicts the band alignment based on the properties of vacuum-semiconductor interfaces (in particular the vacuum electron affinity). The main limitation is its neglect of chemical bonding. • A common anion rule was proposed which guesses that since the valence band is related to anionic states, materials with the same anions should have very small valence band offsets. This however did not explain the data but is related to the trend that two materials with different anions tend to have larger valence band offsets than conduction band offsets. • Tersoff[7] proposed a gap state model based on more familiar metal-semiconductor junctions where the conduction band offset is given by the difference in Schottky barrier height. This model includes a dipole layer at the interface between the two semiconductors which arises from electron tunneling from the conduction band of one material into the gap of the other (analogous to metal-induced gap states). This model agrees well with systems where both materials are closely lattice matched[8] such as GaAs/AlGaAs. • The 60:40 rule is a heuristic for the specific case of junctions between the semiconductor GaAs and the alloy semiconductor AlxGa1−xAs. As the x in the AlxGa1−xAs side is varied from 0 to 1, the ratio tends to maintain the value 60/40. 
For comparison, Anderson's rule predicts a markedly different division of the band-gap difference for a GaAs/AlAs junction (x=1).[9][10] The typical method for measuring band offsets is by calculating them from measured exciton energies in the luminescence spectra.[10]

Effective mass mismatch

When a heterojunction is formed by two different semiconductors, a quantum well can be fabricated due to the difference in band structure. In order to calculate the static energy levels within the quantum well, understanding the variation or mismatch of the effective mass across the heterojunction becomes substantial. The quantum well defined in the heterojunction can be treated as a finite well potential of width l_w. In addition, in 1966, Conley et al.[11] and BenDaniel and Duke[12] reported a boundary condition for the envelope function in a quantum well, known as the BenDaniel–Duke boundary condition. According to them, the envelope function in a fabricated quantum well must satisfy a boundary condition which states that ψ(z) and (1/m*) dψ/dz are both continuous in the interface regions.

Using the stationary Schrödinger equation for a finite well of width l_w centred at z = 0, with effective mass m_w inside the well and m_b in the barriers of height V0, the solutions are the well-known finite-well ones, only with modified wavenumbers:

k_w = √(2 m_w E) / ħ inside the well, κ_b = √(2 m_b (V0 − E)) / ħ in the barriers.[13]

At z = ±l_w/2, matching ψ and (1/m*) dψ/dz (taking the derivative of the wavefunction match and dividing the two continuity conditions) gives the quantization conditions

even parity: (k_w / m_w) tan(k_w l_w / 2) = κ_b / m_b,

odd parity: −(k_w / m_w) cot(k_w l_w / 2) = κ_b / m_b,

which are solved numerically for E. The difference in effective mass between the materials results in a larger difference in ground-state energies.

Nanoscale heterojunctions

Image of a nanoscale heterojunction between iron oxide (Fe3O4 — sphere) and cadmium sulfide (CdS — rod) taken with a TEM. This staggered gap (type II) offset junction was synthesized by Hunter McDaniel and Dr.
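The even-parity quantization condition can be solved by simple bisection. The sketch below finds the ground state of a well with BenDaniel–Duke mass mismatch; the GaAs/AlGaAs-like parameter values and the rounded constant ħ²/2m0 are illustrative guesses, not values from the article.

```python
import math

HBAR2_OVER_2M0 = 0.0381   # eV * nm^2, approximately hbar^2 / 2 m_electron

def even_parity_mismatch(E, V0, lw, mw, mb):
    """BenDaniel-Duke even-parity condition for a finite well:
    (k_w / m_w) tan(k_w l_w / 2) - kappa_b / m_b, which is zero at a bound
    state. Effective masses mw, mb are in units of the free-electron mass."""
    kw = math.sqrt(mw * E / HBAR2_OVER_2M0)
    kb = math.sqrt(mb * (V0 - E) / HBAR2_OVER_2M0)
    return (kw / mw) * math.tan(kw * lw / 2) - kb / mb

def ground_state(V0, lw, mw, mb):
    """Bisect for the lowest even-parity level, restricting the search to
    k_w l_w / 2 < pi/2 so tan() stays on its first (monotone) branch."""
    e_lo = 1e-9
    e_hi = min(V0, HBAR2_OVER_2M0 / mw * (math.pi / lw) ** 2) - 1e-9
    for _ in range(200):
        e_mid = (e_lo + e_hi) / 2
        if even_parity_mismatch(e_mid, V0, lw, mw, mb) < 0:
            e_lo = e_mid          # condition still negative: energy too low
        else:
            e_hi = e_mid
    return (e_lo + e_hi) / 2

# Illustrative 10 nm GaAs-like well (m* ~ 0.067) in an AlGaAs-like barrier
# (m* ~ 0.092) of height 0.3 eV -- ballpark figures only:
E0 = ground_state(V0=0.3, lw=10.0, mw=0.067, mb=0.092)
```

Repeating the bisection with mw = mb shows how the mass mismatch alone shifts the ground-state energy, which is the point made in the text.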
Moonsub Shim at the University of Illinois in Urbana-Champaign in 2007. In quantum dots the band energies are dependent on crystal size due to the quantum size effects. This enables band offset engineering in nanoscale heterostructures. It is possible[14] to use the same materials but change the type of junction, say from straddling (type I) to staggered (type II), by changing the size or thickness of the crystals involved. The most common nanoscale heterostructure system is ZnS on CdSe (CdSe@ZnS) which has a straddling gap (type I) offset. In this system the much larger band gap ZnS passivates the surface of the fluorescent CdSe core thereby increasing the quantum efficiency of the luminescence. There is an added bonus of increased thermal stability due to the stronger bonds in the ZnS shell as suggested by its larger band gap. Since CdSe and ZnS both grow in the zincblende crystal phase and are closely lattice matched, core shell growth is preferred. In other systems or under different growth conditions it may be possible to grow anisotropic structures such as the one seen in the image on the right. It has been shown[15] that the driving force for charge transfer between conduction bands in these structures is the conduction band offset. By decreasing the size of CdSe nanocrystals grown on TiO2, Robel et al.[15] found that electrons transferred faster from the higher CdSe conduction band into TiO2. In CdSe the quantum size effect is much more pronounced in the conduction band due to the smaller effective mass than in the valence band, and this is the case with most semiconductors. Consequently, engineering the conduction band offset is typically much easier with nanoscale heterojunctions. For staggered (type II) offset nanoscale heterojunctions, photoinduced charge separation can occur since there the lowest energy state for holes may be on one side of the junction whereas the lowest energy for electrons is on the opposite side. 
It has been suggested[15] that anisotropic staggered gap (type II) nanoscale heterojunctions may be used for photocatalysis, specifically for water splitting with solar energy.

References

1. ^ Smith, C. G. (1996). "Low-dimensional quantum devices". Rep. Prog. Phys. 59: 235–282, p. 244.
2. ^ Geim, A. K.; Grigorieva, I. V. (2013). "Van der Waals heterostructures". Nature. 499 (7459): 419–425. arXiv:1307.6718. doi:10.1038/nature12385. ISSN 0028-0836. PMID 23887427. S2CID 205234832.
3. ^ Okuda, Koji; Okamoto, Hiroaki; Hamakawa, Yoshihiro (1983). "Amorphous Si/Polycrystalline Si Stacked Solar Cell Having More Than 12% Conversion Efficiency". Japanese Journal of Applied Physics. 22 (9): L605–L607. doi:10.1143/JJAP.22.L605.
4. ^ Yamamoto, Kenji; Yoshikawa, Kunta; Uzu, Hisashi; Adachi, Daisuke (2018). "High-efficiency heterojunction crystalline Si solar cells". Japanese Journal of Applied Physics. 57 (8S3): 08RB20. doi:10.7567/JJAP.57.08RB20.
5. ^ Kroemer, H. (1963). "A proposed class of hetero-junction injection lasers". Proceedings of the IEEE. 51 (12): 1782–1783. doi:10.1109/PROC.1963.2706.
6. ^ Ihn, Thomas (2010). "ch. 5.1 Band engineering". Semiconductor Nanostructures: Quantum States and Electronic Transport. Oxford University Press. p. 66. ISBN 9780199534432.
7. ^ Tersoff, J. (1984). "Theory of semiconductor heterojunctions: The role of quantum dipoles". Physical Review B. 30 (8): 4874–4877. Bibcode:1984PhRvB..30.4874T. doi:10.1103/PhysRevB.30.4874.
8. ^ Bhattacharya, Pallab (1997). Semiconductor Optoelectronic Devices. Prentice Hall. ISBN 0-13-495656-7.
9. ^ Adachi, Sadao (1993). Properties of Aluminium Gallium Arsenide. ISBN 9780852965580.
10. ^ a b Debbar, N.; Biswas, Dipankar; Bhattacharya, Pallab (1989). "Conduction-band offsets in pseudomorphic InxGa1-xAs/Al0.2Ga0.8As quantum wells (0.07≤x≤0.18) measured by deep-level transient spectroscopy". Physical Review B. 40 (2): 1058. Bibcode:1989PhRvB..40.1058D.
doi:10.1103/PhysRevB.40.1058. PMID 9991928.
11. ^ Conley, J.; Duke, C.; Mahan, G.; Tiemann, J. (1966). "Electron Tunneling in Metal-Semiconductor Barriers". Physical Review. 150 (2): 466. Bibcode:1966PhRv..150..466C. doi:10.1103/PhysRev.150.466.
12. ^ Bendaniel, D.; Duke, C. (1966). "Space-Charge Effects on Electron Tunneling". Physical Review. 152 (2): 683. Bibcode:1966PhRv..152..683B. doi:10.1103/PhysRev.152.683.
14. ^ Ivanov, Sergei A.; Piryatinski, Andrei; Nanda, Jagjit; Tretiak, Sergei; Zavadil, Kevin R.; Wallace, William O.; Werder, Don; Klimov, Victor I. (2007). "Type-II Core/Shell CdS/ZnSe Nanocrystals: Synthesis, Electronic Structures, and Spectroscopic Properties". Journal of the American Chemical Society. 129 (38): 11708–19. doi:10.1021/ja068351m. PMID 17727285.
15. ^ a b c Robel, István; Kuno, Masaru; Kamat, Prashant V. (2007). "Size-Dependent Electron Injection from Excited CdSe Quantum Dots into TiO2 Nanoparticles". Journal of the American Chemical Society. 129 (14): 4136–7. doi:10.1021/ja070099a. PMID 17373799.
Science Is What Works

Science Is What Defines Our Species Best. Science is industrial strength truth, and that works. Science, well done, teaches wonder, and humility. We are all, or we should all be, scientists (those who are paid for that, therefore, ought to spare the public who finance them arrogance, sarcasm and appearing certain of what they ought not to be certain of). Let me wax lyrical on this theme (suggested by an essay of Matthew Francis). Some of these skills could disappear, as artificial intelligence becomes ubiquitous: the driver of a car instinctively learns some rudiments of mechanics. Yet, when automatic cars appear, those rudiments will go away. This happened before: a Neanderthal equipped with a spear-thrower (atlatl) had to know, instinctively, quite a bit of physics about dynamics, aerodynamics, angular momentum, inertia, etc. Astute and cynical commenters will no doubt observe that this is how dogs learn calculus… Instinctively. So what? One hopes to build “Boson Sampling” computers. They will be just something that works, just as spear throwers did. Don’t ask why: nobody knows, not any more than Neanderthals “knew” all this physics to send a dart 100 meters away. Science is just what works. Some revere equations, and feel they differentiate “science” from what came before. Illusion. Equations just depict ideas. Equations can be very hard. Some we have no …idea how to handle (Navier-Stokes, the most useful equation, supposed to depict fluid flow). It’s hard to find new ideas. However, some, once found and accepted, can be amazingly simple. The invention of Non-Euclidean geometry just amounted to admitting a pre-Euclidean idea: one could make geometry on a sphere, or a saddle, not just a flat surface. Inventing Non-Euclidean geometry was more of a philosophical change of perspective than anything else. It took 21 centuries to make it. It was not a question of equations. Actually, there are no equations in Euclidean geometry.
Similarly, Einstein took Poincaré's observation that the constancy of the speed of light should be viewed as a physical law, and got the Lorentz group from it. Modulo some mathematics so trivial that Poincaré had not bothered to make it explicit when he talked about the “Principle Of Relativity”. Again a philosophical change of perspective. Or Einstein (again) took Planck's idea of quantified emission of light, and decided that was proof enough that there was such a thing as light quanta. Planck disapproved. Planck was not impressed that this outrageous idea “explained” the photoelectric effect discovered 80 years earlier. When he recommended Einstein for jobs, Planck asked the would-be employers to overlook that silly mistake of an exuberant young man (Einstein got the 1921 Nobel for that simple “lichtquanten” [light quanta] idea). Philosophical change of perspective, again. The discovery of Dark Matter and Dark Energy was as unexpected as that of Quantum Theory. However there is an important philosophical difference. Planck's quantified emission of radiation “explained” right away two well-known, yet baffling, experimental facts: the non-occurrence of the “ultraviolet catastrophe”, and blackbody radiation. In the present situation, we are not even completely sure that Dark Matter and Dark Energy are really observed facts. The philosophical perspectives, let alone the physical ones, are vast. Breakthroughs will come, first, from simple ideas. Complicated equations will follow. We appreciate the brutal beauty of the universe as our judge, because we evolved that way. We evolved to find those elements of reality we call the truth. Our glorious survival blossomed that way. Science is what we do, as a species. And philosophy is our oracle. We evolved into thinking that we are. We are what we think. Patrice Ayme 17 Responses to “Science Is What Works” 1. Matthew R. Francis Says: Matthew R.
Francis January 11, 2014 at 07:03 What you say sounds reasonable on its face, but there are a number of problems with your arguments. We use equations in physics because they are effective. The Navier-Stokes equation helps us describe physical phenomena successfully; it doesn't matter whether you understand it philosophically or not. To cite the most important example of all: people still debate over the proper way to interpret quantum mechanics, but everyone uses the Schrödinger equation and the other mathematical tools because those are the way to do quantum physics. That's not to say the interpretation isn't important, but the equations are essential. Also, you get the cosmological issues backward. Dark matter and dark energy are observed phenomena ("facts" if you will, though I dislike using that term). "Dark energy" in particular is just the name we give to the observed accelerated expansion of the Universe, for which we currently don't have a good theoretical explanation. "Dark matter" similarly is the name we give to the simplest explanation for a wide variety of astronomical observations, from the rotation of galaxies to the sound waves in the cosmic microwave background (see the detailed discussion in http://galileospendulum.org/2013/03/21/planck-results-our-weird-and-wonderful-universe/ for more on that second point). These are observations for which we need more theory and observation, not philosophical perspectives. Conceptual breakthroughs happen, but they follow hard work. Newton didn't spontaneously come up with gravity, and Einstein didn't spontaneously think of relativity. Both of these breakthroughs came after long strenuous efforts, and were built on ideas, experiments, and observations from many others who came before them. When we figure them out, dark energy and dark matter will be no different. After all, we've known about dark matter since the 1930s and dark energy since 1998 (with inklings of its existence before then).
If all it took was a philosophical perspective, we'd have solved it by now. To reiterate, physics is hard, but worth it. • Patrice Ayme Says: Dear Matthew: I did not say the Navier-Stokes equation had to be understood "philosophically". I just alluded to the fact that, although it depicts fluid flow, the general existence and smoothness of solutions of this non-linear PDE have not been proven (I actually don't believe they exist). Newton did not come up with the gravity law, by the way. He exploited it further. The French astronomer Ismaël Boulliau suggested that Kepler was wrong about the gravitational force. Kepler had declared that the gravitational force holding the planets in place decreased inversely with distance. Boulliau held instead that the force decreased as an inverse square law. He deduced this in analogy to light. Isaac Newton acknowledged Boulliau's discovery. Nobody dares to suggest the equations related to Quantum Theory are not essential. To a great extent, they are all that defines the theory. QFT is all about guessing the Laplacian, aka the equation(s). The situation with Dark Stuff is not similar. They are not directly observed phenomena (just ask LHC people). The "observations" of both Dark Matter and Dark Energy are the fruits of (philosophical) pruning. The former depends, among other things, upon the hypothesis that gravity holds at galactic scales (some employed astronomers claim gravity does not work beyond the Solar System… as seems to be the case, at face value!). It's hard to evaluate things we don't know, such as galactic mass (the Milky Way has grown in astronomers' minds recently), to make further guesses about something else. In the case of supernova studies, outlier explosions are removed from the sampling. I could not read a clear enough description of what was found (I read the original literature) to see if my pet theory survives.
Boldly supposing that something is really going on (I know a Nobel was attributed), we are very far from being able to describe the thing (whether, for example, it's a Cosmological Constant or a Quintessence field). Physics is what we do; it did not start with Newton. Or Buridan, who discovered inertia, or Aristotle, who got that wrong. Physics, finding new physics, is desperately hard, but so worth it; our lives depend upon it. They always have. • Patrice Ayme Says: What I am driving at is that just reducing physics to equations is too reductive. • Paul Handover Says: I'm sure this is familiar to Matthew but for me this recent item on the BBC News website had me spellbound: http://www.bbc.co.uk/news/science-environment-25663810 Universe measured to 1% accuracy Astronomers have measured the distances between galaxies in the universe to an accuracy of just 1%. This staggeringly precise survey – across six billion light-years – is key to mapping the cosmos and determining the nature of dark energy. The new gold standard was set by BOSS (the Baryon Oscillation Spectroscopic Survey) using the Sloan Foundation Telescope in New Mexico, US. It was announced at the 223rd American Astronomical Society meeting in Washington DC. "I now know the size of the universe better than the size of my house" (Prof David Schlegel, BOSS principal investigator). "There are not many things in our daily lives that we know to 1% accuracy," said Prof David Schlegel, a physicist at Lawrence Berkeley National Laboratory and the principal investigator of BOSS. But the aspect that really generated that spellbound feeling was this: The latest results indicate dark energy is a cosmological constant whose strength does not vary in space or time. They also provide an excellent estimate of the curvature of space. "The answer is, it's not curved much. The universe is extraordinarily flat," said Prof Schlegel.
“And this has implications for whether the universe is infinite. While we can’t say with certainty, it’s likely the universe extends forever in space and will go on forever in time. Our results are consistent with an infinite universe,” he said.

“it’s likely the universe extends forever in space and will go on forever in time.” I find that utterly beyond imagination! Matthew, PLEASE help me out! 😉

• Patrice Ayme Says:

Hi Paul! There is a whole culture of people out there who view scientists the way priests used to be seen. This is very wrong. Matthew makes it clear on his site that he does not take kindly to those who do not use proper reverence. This way he reminds me of Mr. Lack. I’m a mathematician, and he, clearly, is not (he thought I said something philosophical about Navier-Stokes!). All research mathematicians know Navier-Stokes is one of the seven “Millennium” problems of the Clay Institute. There is a one million dollar prize for it. I don’t believe it can always be solved, because it neglects QUANTUM effects.

What you reported there is very interesting. I vaguely saw it come across, and took it tongue in cheek. It’s nevertheless striking to see this in print, a few weeks after my own. By coincidence I was writing something about Dark Matter (I have had the same theory for decades; one can say it predicted Dark Matter!)

Here is a little help for you. The Big Bang theory is brutal, definitive, on a limited time span, and based on naïve assumptions. In one word: Biblical. The problem you have is that it seems to conflict with: Well, not really. Anyway this cat can help you more than Mr. Matthew…. Methinks.

• Paul Handover Says:

Thank you for your lengthy reply. Yes, I understand, and share, your criticism of the Big Bang theory. I wasn’t in conflict, per se, with the idea of an infinite universe. It was just that I couldn’t understand, in a scientific sense, a universe that is boundless. I.e. it has no start or end. 
The reason I have such trouble in understanding is that everything material that I am aware of, from the atom to the solar system, has a start and an end. Therefore, if the universe has NO start or end, then somewhere between the fabric of our solar system and the universe there must be a boundary where the rules of matter change. Not even sure if I’m making myself clear!

• Patrice Ayme Says:

Dear Paul: The universe we see now is about 30 billion light years across. That defies understanding. In practice, it’s infinite. I have actually argued that the very notion of infinity, in MATHEMATICS, is defined by the size of the universe itself. I don’t know for sure that there is one mathematician besides myself who understands what it means.

• Paul Handover Says:

Almost the philosophy of mathematics! Or is it the mathematics of philosophy? 😉 Thanks Patrice.

2. Alexi Helligar Says:

Truth is what works.

3. Paul Handover Says:

Sorry about this but becoming fixated on this universe size thing! I see that 1 light year = 9.4605284 × 10^15 meters. Ergo, 3 light years = 28.3815852 × 10^15 meters, or 28.3815852 × 10^12 kilometres (28.38 trillion kilometres). Thus 30 BILLION light years is: 28.3815852 × 10^21. Have I done that correctly, Mr. Mathematician?

• Paul Handover Says:

Sorry, should have included the measure: Thus 30 BILLION light years is: 28.3815852 × 10^21 kilometres.

• Patrice Ayme Says:

Dear Paul: Look, I just called apes apes, in my latest post, so I am tired, and I’m sure you handle the math OK. I think the pictures of billions of galaxies are more telling than powers of ten, anyway. There is a new Hubble Deep Field, showing huge, very bright galaxies. Two things: 1) it seems to show the universe evolved. 2) seems to me like another Big Bang headache, as it’s not clear how such big things could have evolved 500 million years after the alleged BB. 
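As an editorial aside, Paul's conversion can be checked with a few lines of Python (a quick sketch using only the metres-per-light-year figure quoted in the thread):

```python
# Check the light-year arithmetic quoted above.
LY_IN_M = 9.4605284e15         # metres per light year, as quoted in the comment
ly_in_km = LY_IN_M / 1000.0    # 9.4605284e12 km

# 3 light years, in km (matches the "28.38 trillion kilometres" figure)
three_ly_km = 3 * ly_in_km
print(f"3 ly    = {three_ly_km:.7e} km")   # 2.8381585e+13 km

# 30 billion light years, in km
big = 30e9 * ly_in_km
print(f"30e9 ly = {big:.7e} km")           # 2.8381585e+23 km
# i.e. 28.3815852 x 10^22 km -- one power of ten above the 10^21 in the thread
```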
• Patrice Ayme Says:

The size of the universe is an excellent thing to be fixated about. Men used to be the measure of all things; it may be wiser to prefer the universe itself to confer scale.

4. DIVING INTO TRUTH | Patrice Ayme's Thoughts Says:

5. Nature Physical Law From Galileo’s Pendulum Xchge | Tyranosopher Overflow Says:

[…] Physics, finding new physics is desperately hard, but so worth it, our lives depend upon it. They always have. https://patriceayme.wordpress.com/2014/01/11/science-is-what-works/ […]
Randell Mills GUT - Who can do the calculations?

• I'll let you know what Dr. Mills says. Or you can just join us at The Society For Classical Physics. Sorry if I came off as a jerk; you seem to be obviously willing to take an honest look at the theory. Furthermore, not all parts of the theory are fully fleshed out, as you can see, but what it does predict it does so with extreme accuracy and within the confines of classical physics and fundamental constants. There is room to make original contributions to the theory. I suggested the design of using liquid electrodes last year on the forum, to isolate the energetic transition reactions from the solid parts of the reactor and prevent them from melting or vaporizing. This was prior to any liquid fuel injection or liquid electrodes being revealed in the latest design shown last week.

• The vaporized silver provides the conductive matrix; the heat provides the kinetic energy to the reactants, which are catalyst and atomic hydrogen. The kinetic energy is what is responsible for initiating the transition reactions (specifically, dipole/multipole resonant collisions destabilizing the orbitsphere, causing radial acceleration and release of electric potential between electron and proton). If the plasma weren't contained within the pressure vessel, the conditions conducive to the reactions would not persist. The current is mainly to alleviate charge buildup and to provide the initial kinetic energy to the reactants. The energy obviously comes from the transition reactions, which are releasing ~100 or more eV per event depending on which fractional state is being catalyzed. There could likely even be disproportionation occurring, which is when hydrinos collide and drop to even lower energy levels. This also likely occurs within the corona of our star. If the plasma weren't confined somehow, it would simply dissipate. 
Even in single-shot open-air tests three years ago, the plasma persisted long after current ceased to flow, which current theory cannot explain. In all cases there is no high field, only a maximum of 5 volts. Why not just go on the forum and ask Dr. Mills directly?

• If the magic is in the kinetic energy and not in the driving electric current, then the hydrino reaction can spread over N numbers of silver fountains...say 100 electrode sets...an electrode array where the reaction in one electrode set can activate the reaction in many other electrode sets that are nearby the prime driver set.

• I meant that according to the current mainstream physics paradigm it cannot be explained; it is explained using classical physics (GUTCP) as I have described above. I don't claim to be the world's authority on GUTCP but I think I got the major points mostly right. Again, Dr. Mills doesn't mind answering questions on his Society for Classical Physics forum. We interact with him on a daily basis, pretty much. Also, I wasn't sure if you were being sarcastic with your prior posts, but look at what happens in the corona of the Sun. If GUTCP is right, disproportionation hydrino reactions occur on a massive scale, providing the high-energy photons that produce the ionized species of elements observed in the spectrum, not millions of degrees of temperature as is currently assumed.

• @stefan You asked about the relationship between GUTCP and QM. I think Mills had the same question, has a first answer, and gives its derivation from p. 11 ff. His conclusion: “Thus the mathematical relationship of GUTCP and QM is based on the Fourier transform of the radial function. GUTCP requires that the electron is real and physically confined to a two-dimensional surface comprising source currents that match the wave equation solutions for spherical waves in two dimensions (angular) and time. The corresponding Fourier transform is a wave over all space that is a solution of the three-dimensional wave equation (e.g. 
the Schrödinger equation). In essence, QM may be considered as a theory dealing with the Fourier transform of an electron, rather than the physical electron. By Parseval's theorem, the energies may be equivalent, but the quantum mechanical case is nonphysical – only mathematical. It may mathematically produce numbers that agree with experimental energies as eigenvalues, but the mechanisms lack internal consistency and conformity with physical laws.” This is quite a remarkable result.

@Eric Sorry for my strong wording. I think I adopted the verbally strong position of some of the people posting in this thread :-) . To your question: I am no expert, so I am talking about my current understanding of the process of pair production and the fine structure constant. Of course the electron is not moving at light speed. 1/alpha is the fraction where the electron would have the velocity c, and because this is not possible (because GUTCP relies on special relativity as one of its foundations), the last permitted orbit is a fraction of 1/137. Orbit 1/138 would result in an electron velocity greater than c. And in between, the pair production process happens. This transition-state orbitsphere is not a traditional orbit of the electron but rather a short-lived state where (in the case Driscoll describes) the photon wave (photon orbitsphere) changes to become an electron and a positron. To get an impression of how this might work, I think one has to see the animations of the fields of the photon and the free electron. I think they are somewhere on BLP's page.

To your other question regarding my two links: they are linked to Mills's equations because they use the nonradiation condition to construct models for electrons. The paper from 1990 is interesting because they use a simple ad hoc nonradiation condition for the simplest case. 
Then they solve Maxwell's equations for their simple nonradiation condition and can show that the electron can have a stable orbit, and directly show that the spin is a direct physical consequence of their solution and not “inherent” as in QM. They are completely unrelated to Mills but basically had the same idea and could reproduce a small part of Mills's results. Instead of the ad hoc simplest nonradiation condition, Mills took the general case, and as a model of the electron he used the 2D wave equation. Btw. this also shows that Mills is not randomly putting numbers together – because these guys got the same result as Mills, at least for the spin. And the other paper shows that it is possible to construct not only the electron but other particles with this nonradiation condition so that they are stable – it is more or less a proof/indication that Mills's model does not violate any accepted law of nature (Maxwell, Newton) and gives stable models for atoms.

• In regards to Epimetheus's post above, I can't remember where I read it, it was either on the forum or in Brett's book, but apparently many years ago Hermann Haus told Dr. Mills privately that he had correctly solved for the structure of the electron classically. At the time he did not wish to make "waves", so to speak, through public acknowledgement. In regards to K-capture, Eric seems to be asking specifically about the case of capture from the inner shell; I've posted a question on the other forum so we'll see what Dr. Mills says.

• Here's what Dr. Mills posted. Probably not as much information as you would have liked, but you can always prod him for more detail on the forum. Also, GUTCP theorizes that excited states are due to photons expressing "effective charge" and shielding the electron to a degree from the central field of the proton. I guess if one accepts that a high-energy photon can convert into an electron and positron, the idea of photons in certain situations expressing effective charge isn't all that strange. 
I'm not sure how to relate this to K-capture but just thought I'd mention it.

Randy Mills, Today at 5:12 AM: K-capture can only occur if the reaction can form a more stable nucleus. A proton cannot undergo K-capture, for example.

• Mills states as follows: Mills has trademarked “Hydrino.” And because his issued patents claim the hydrino as an invention, BLP asserts that it owns all intellectual property rights involving hydrino research. BLP therefore forbids outside experimentalists from doing even the most basic hydrino research, which could confirm or deny hydrinos, without first signing an IP agreement. “We welcome research partners; we want to get others involved,” Mills says. “But we do need to protect our technology.”

The insulator-metal transition in hydrogen: Very-high-temperature shock wave methods might make metallized hydrogen obtainable. This transition from molecular liquid to atomic liquid is called the PPT (discussed below). Leif Holmlid uses a quantum-mechanical process called Rydberg blockade to produce metallized hydrogen, where a Rydberg matter substance like potassium is used as a QM template to reform the atomic structure of hydrogen into the low-orbit-based liquid metallized form.

* A phase of hydrogen Rydberg matter (RM) is formed in ultra-high vacuum by desorption of hydrogen from an alkali promoted RM emitter (Holmlid 2002 J. Phys.: Condens. Matter 14 13469). The RM phase is studied by pulsed laser-induced Coulomb explosions, which is the best method for detailed studies of the RM clusters. This method gives direct information about the bonding distances in RM from the kinetic energy release in the explosions. At pressures >10^-6 mbar hydrogen, H* Rydberg atoms are released with an energy of 9.4 eV. This gives a bonding distance of 150 ± 8 pm, which corresponds to a metallic phase of atomic hydrogen using the results by Chau et al (2003 Phys. Rev. Lett. 90 245501). The results indicate that a partial 3D structure is formed. 
I believe that there are other theories that have been accepted by science that explain below-base hydrogen orbits; specifically metallized hydrogen. High-pressure physics is directed at producing metallized hydrogen as its major goal. All the experimental data that Mills has accumulated may very well be consistent with high-temperature shock-wave-produced PPT hydrogen.

From Holmlid: Instead of inverted Rydberg matter, it is spin-based Rydberg matter with orbital angular momentum l = 0 for the electrons. It is shown to be both superfluid [4] and superconductive (Meissner effect observed) at room temperature [6,7]. The measured H–H distances are short, normally 2.3 pm [1,3,9]. Several spin states with different internuclear distances exist [3]. It is likely that the main process initiated by the impinging laser pulse is a transition from level s = 2 with H–H distance of 2.3 pm, to level s = 1 with theoretical distance 0.56 pm. At this distance, nuclear reactions are spontaneous and laser-induced nuclear processes are thus relatively easy to start.

• @Epimetheus I don't understand this connection; the radial solutions are essentially Laguerre polynomials + exponential for Schrödinger's equation of hydrogen, and in the derivation you gave me he uses spherical Bessel functions as the radial function. So I can't follow this line of thought. But it is true that the Fourier transform with spherical Bessel functions for the radial part does indeed transform into Mills's charge distribution.

• Has anyone worked through Appendix I to the point they feel comfortable with the derivations? I'm mostly OK with it (except for the discussion of the H() and G() functions). However, the conclusion uses some terms that aren't well explained. While I *think* I understand these, can anyone take a crack at providing a more intuitive justification behind the highlighted equations? Exactly what is represented by the cross product s_n × v_n? Is omega_n the angular frequency of the emitted photon? 
I think s_n is the spatial frequency expressed in rad/m, and v_n is a velocity in m/sec of the current density. And radiation requires that the cross product of the two at some point on the orbitsphere is equal to the photon's wavelength. Am I understanding this correctly?

• I think that in order to understand a proof of this you should, instead of the path taken, expand the plane wave in the Fourier transform into a sum of spherical Bessel functions and spherical harmonics; the sum will cancel almost all terms but a single Bessel function and spherical harmonic that match the same quantum number as the Mills charge distribution, due to orthogonality. You will end up with the Fourier transform being:

(*) j_l(|s|r) Y_lm(theta, phi)

which is much better, because the stated equation (38) in Mills takes the convolution with all factors except the last having s. You just can't show that this expression disappears because of the property of the convolution. Now, for a specific w0, |s| has a certain magnitude for light-like wave numbers, and hence r can be chosen so that (*), e.g. |s|r, represents a zero of the spherical Bessel function, and (*) is shown to be zero for all light-like s, w. To understand everything that is written is hard, though. In all, to motivate the non-radiation condition one only needs half a page, I think, and could keep it much, much simpler than what's written in the book.

• To further explain and highlight that the key to proper mathematical understanding of Mills's theory is the expansion of plane waves in various ways: We have a photon inside the atom that is trapped. Consider the superposition of EM plane waves, and assume that the wave vectors of all the plane waves are evenly distributed, e.g. they live on a sphere with constant radius. 
Again the theorem where you expand the plane wave in Bessel functions and spherical harmonics applies, and we get the explicit solution of the electrical potential as

~ j_0(|r| w / c) exp(i w t),   r = sqrt(x^2 + y^2 + z^2),   j_0(x) = sin(x)/x

and hence, for a zero:

|r| w / c = 2 pi
<=> |r| (2 pi f) / c = 2 pi
<=> |r| / ((1/f) c) = 1
<=> |r| / (T c) = 1
<=> |r| / lambda = 1
<=> |r| = lambda

So the lambda of the trapped photon has to be the same as the radius, as described in option geeks' post above.

What utter nonsense. Even Randell Mills can't patent physics, and how could he possibly forbid someone else from experimenting? Is he planning to put a copyright tag on each and every hydrino? ETA: Next step, GE patents the electron and forbids anyone else using 'pirate' versions.

• Read the paper that the jack-booted thugs at BLP don't want you to see. The fact that BLP tries to suppress basic scientific research only means they are not worthy of our attention. Any organization which would send a cease-and-desist letter to a replicator (who is not seeking to commercialize the technology) is not worthy of existing. My hope is that LENR technologies -- which produce millions of eV per reaction -- arrive on the market soon and cause BLP to lose all future funding.

• What do you want to tell us? That paper sees indications for the unusual development of bright light as claimed by Mills. This is more supportive of Mills's theory than the opposite. But regarding an independent validation, I find the papers of world-class plasma physicists like Kroesen and Conrads much more compelling:

Conrads, H., R. Mills, and Th. Wrubel. (2003) “Emission in the deep vacuum ultraviolet from a plasma formed by incandescently heating hydrogen gas with trace amounts of potassium carbonate.” Plasma Sources Sci Technol 12: 389–395.

Driessen, N. M., E. M. van Veldhuizen, P. Van Noorden, R. J. L. J. De Regt, and G. M. W. Kroesen. 
(2005) “Balmer-alpha line broadening analysis of incandescently heated hydrogen plasmas with potassium catalyst.” In XXVIIth ICPIG, Eindhoven, the Netherlands, 18–22 July.

I don't share your opinion that the cease-and-desist letter has anything to say. It just tells me that after Rossi we have another guy who is totally scared of losing the race to the competitors. Mills wants to make a lot of money and he owes his private investors a huge return on investment. He also needs money to start some new companies that have other products predicted by GUTCP in their focus. And of course he wants to sue the a$$ off everyone who harmed his credibility, like Wikipedia, Rathke, etc. Being the lone wolf can make you a bit weird. In my eyes Mills is way ahead of Rossi regarding basic decent human behavior.

• Randell Mills is not a decent human being. As I said in my previous post, he is a thug. Andrea Rossi, despite his less-than-complete honesty and straightforwardness, has never attempted to sue those who performed replications of his technology. He never sent cease-and-desist letters to Parkhomov, Songsheng, Stepanov, Alan Smith, and a dozen other individuals. Why? Because Andrea Rossi realizes that attempting to prohibit, under threat of litigation, basic scientific research is absolutely repugnant. It's not simply bad, but the polar opposite of the open-source movement. Basically, he is claiming that trying to replicate a scientific phenomenon (in this case the reality of the hydrino) is something no one has the right to do unless they sign up with his company. No one has any duty or obligation whatsoever to ask his permission or sign any document with Black Light Power before performing not-for-profit research. He's basically trying to be the dictator of an entire branch of science, which he has no right to be. But even if he was trying, his dictatorship is a flop. 
After decades of research and making huge claims and pronouncements about a dozen different variations of their technology, the best he can come up with is a giant Rube Goldberg device. Even if his figures and those of his validation team are confirmed, it will be many years before a SunCell would be robust enough to operate for many months or years in an industrial setting. LENR has him beat and he knows it due to the very basic physics involved. His technology isn't really somewhere between nuclear and chemical. That is like saying, "the speed of me on my bicycle is somewhere between a turtle and an ICBM." And if you notice, he doesn't even speak about his beloved "hydrino hydrides" anymore. He used to brag about them. Waving tubes of multi-colored crystals he'd claim they had all sorts of amazing properties. Now they have vanished. My hope is that Black Light Power folds in short order. I would hope the same for any company or organization that would threaten a lawsuit over a simple replication attempt. We've had enough petty, arrogant dictators on this planet -- they've been responsible for all sorts of atrocities. We definitely don't need them in science.
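As a footnote to the j_0 derivation quoted earlier in the thread (the chain from |r|w/c = 2 pi down to |r| = lambda), the algebra can be sanity-checked numerically. This is only a sketch; the values of c and f below are arbitrary illustrations, not anything from the thread:

```python
import math

def j0(x):
    """Spherical Bessel function of order zero: j0(x) = sin(x)/x."""
    return math.sin(x) / x

# The post picks |r|*w/c = 2*pi as a zero of j0 (its zeros sit at n*pi, n >= 1).
assert abs(j0(2 * math.pi)) < 1e-12

# With w = 2*pi*f, T = 1/f and lambda = c*T, solving |r|*w/c = 2*pi gives |r| = lambda.
c = 2.99792458e8        # m/s
f = 5.0e14              # Hz, arbitrary illustrative frequency
w = 2 * math.pi * f
r = 2 * math.pi * c / w         # solve |r|*w/c = 2*pi for |r|
lam = c / f
assert math.isclose(r, lam)     # the radius equals the wavelength
```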
UBC Theses and Dissertations

Behaviour of Solutions to the Nonlinear Schrödinger Equation in the Presence of a Resonance

by Matthew Preston Coles
B.Sc., McMaster University, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY in The Faculty of Graduate and Postdoctoral Studies (Mathematics)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2017

© Matthew Preston Coles 2017

Abstract

The present thesis is split in two parts. The first deals with the focusing Nonlinear Schrödinger Equation in one dimension with pure-power nonlinearity near cubic. We consider the spectrum of the linearized operator about the soliton solution. When the nonlinearity is exactly cubic, the linearized operator has resonances at the edges of the essential spectrum. We establish the degenerate bifurcation of these resonances to eigenvalues as the nonlinearity deviates from cubic. The leading-order expression for these eigenvalues is consistent with previous numerical computations.

The second considers the perturbed energy critical focusing Nonlinear Schrödinger Equation in three dimensions. We construct solitary wave solutions for focusing subcritical perturbations as well as defocusing supercritical perturbations. The construction relies on the resolvent expansion, which is singular due to the presence of a resonance. 
Specializing to pure power focusing subcritical perturbations we demonstrate, via variational arguments, and for a certain range of powers, the existence of a ground state soliton, which is then shown to be the previously constructed solution. Finally, we present a dynamical theorem which characterizes the fate of radially-symmetric solutions whose initial data are below the action of the ground state. Such solutions will either scatter or blow-up in finite time depending on the sign of a certain function of their initial data.

Lay Summary

We conduct a mathematically motivated study to understand qualitative aspects of the nonlinear Schrödinger equation. For this summary, however, we imagine our equation as describing the positions of many cold quantum particles in a cloud. A group of particles may cluster together and persist in this configuration for all time; this structure being called a soliton. One aspect we are interested in is the stability of the soliton. It may be stable - a small disruption of the system will be weathered and the soliton will remain, or unstable - a perturbation will cause the particles to break up, destroying the soliton. The soliton also impacts the overall dynamics of the equation. If the particles do not have enough mass or energy to form a soliton, they spread out and scatter. On the other hand, with too much mass or energy, they may blow-up, coming together to form a singularity.

Preface

A version of Chapter 2 has been published in [19]. I conducted much of the analysis, all of the numerics, and wrote most of the manuscript. A version of Chapter 3 has, at the time of this writing, been submitted for publication to an academic journal. The preprint is available here [20]. I conducted much of the analysis and wrote most of the manuscript.

Table of Contents

Abstract
Lay Summary
Preface
Table of Contents
List of Tables
List of Figures
Acknowledgements
Dedication
1 Introduction
  1.1 The Nonlinear Schrödinger Equation
  1.2 Conserved Quantities, Scaling, and Criticality
  1.3 Scattering and Blow-Up
  1.4 Solitary Waves and Stability
  1.5 Resonance
  1.6 The 1D Linearized NLS
  1.7 A Perturbation of the 3D Energy Critical NLS
2 A Degenerate Edge Bifurcation in the 1D Linearized NLS
  2.1 Setup of the Birman-Schwinger Problem
  2.2 The Perturbed and Unperturbed Operators
  2.3 Bifurcation Analysis
  2.4 Comments on the Computations
3 Perturbations of the 3D Energy Critical NLS
  3.1 Construction of Solitary Wave Profiles
    3.1.1 Mathematical Setup
    3.1.2 Resolvent Estimates
    3.1.3 Solving for the Frequency
    3.1.4 Solving for the Correction
  3.2 Variational Characterization
  3.3 Dynamics Below the Ground States
4 Directions for Future Study
  4.1 Improvements and Extensions of the Current Work
  4.2 Small Solutions to the Gross-Pitaevskii Equation
Bibliography

List of Tables

2.1 Numerical values for 8α2

List of Figures

2.1 The two components of Pw1
2.2 The two components of function g

Acknowledgements

I am grateful to my supervisors, Stephen Gustafson and Tai-Peng Tsai, for suggesting interesting problems, providing stimulating discussion, and supplying constant mentorship. For keeping me motivated, I thank my fellow graduate students. I am in debt to friends and family, in particular my parents, for their enduring emotional support. I acknowledge partial financial support from the NSERC Canada Graduate Scholarship.

for Daina

Chapter 1: Introduction

1.1 The Nonlinear Schrödinger Equation

The nonlinear Schrödinger equation (NLS) is the partial differential equation (PDE)

$$ i\partial_t u = -\Delta u \pm |u|^{p-1}u. \tag{1.1} $$

It attracts interest from both pure and applied mathematicians, with applications including quantum mechanics, water waves, and optics [35], [74]. Before discussing the applications we establish some terminology and notation. Here $u = u(x,t)$ is a complex-valued function, $x$ is often thought of as a spatial variable (in $n$ real dimensions, $x \in \mathbb{R}^n$), and $t$ is typically the temporal variable ($t \in \mathbb{R}$). The operator $\partial_t$ is the partial derivative in time and $\Delta = \partial_{x_1}^2 + \ldots + \partial_{x_n}^2$ is the Laplacian. We have chosen the pure power nonlinearity, $\pm|u|^{p-1}u$, with power $p \in (1,\infty)$, but could also replace this nonlinearity with a more complicated function of $u$. The nonlinearity with the negative sign, $-|u|^{p-1}u$, will be referred to as the focusing or attractive case, while the positive, $+|u|^{p-1}u$, is the defocusing or repulsive nonlinearity. In applications the most common nonlinearities are the cubic ($p = 3$) and the quintic ($p = 5$). 
Also common is the cubic-quintic nonlinearity: the sum or difference of cubic and quintic terms.

Let us now briefly describe some of the contexts in which the NLS can be applied. Firstly, in quantum physics the nonlinear Schrödinger equation, and the closely related Gross-Pitaevskii equation (which is NLS subject to an external potential), describes the so-called Bose-Einstein condensate. A Bose-Einstein condensate is a cloud of cold bosonic particles all of which are in their lowest energy configuration. In this way, the mass of particles can be described by one wave function; that is $u(x,t)$. In this context the absolute value $|u(x,t)|^2$ describes the probability density to find particles within the cloud at point $x$ in space, and at time $t$. The inter-particle forces may be either attractive or repulsive, which corresponds to focusing or defocusing nonlinearities, respectively. See [74] or the review [14] for more information.

The NLS has also applications in water waves, a pursuit that dates back to [91]. For a thorough description see, for example, Chapter 11 of [74] (and references therein). The water-wave problem concerns the dynamics of a wave train propagating at the surface of a liquid. In deep water, the solution $u(x,t)$ to the 1D cubic ($p = 3$) NLS describes an envelope which captures the behaviour of a wave which is modulated only in the direction in which it propagates.

In nonlinear optics, the function $u$ describes a wave propagating in a weakly nonlinear dielectric (see again [74], Chapter 1). In this case $\Delta$ is the Laplacian transverse to the propagation and the time variable $t$ is replaced by the spatial variable along the direction of propagation. For example, in 3D where $x = (x_1, x_2, x_3)$ the NLS may take the form

$$ i\partial_{x_3} u = -(\partial_{x_1}^2 + \partial_{x_2}^2)u \pm |u|^2 u $$

where $x_3$ is in the direction of propagation.

On the pure mathematical side, the NLS is interesting as a general model of dispersive and nonlinear wave phenomena. 
The nonlinear Schrödinger equation provides an arena in which to develop techniques that can be applied to other nonlinear dispersive equations. The NLS is often technically simpler than other equations [15], such as the Korteweg-de Vries equation (KdV), the nonlinear wave equation (NLW), the Zakharov system, the Boussinesq equation, and various other water-wave models.

1.2 Conserved Quantities, Scaling, and Criticality

The facts we review here are standard and can be found in, for example, the books [15, 35, 74, 77]. The NLS (1.1) has the following conserved quantities:
$$M(u) = \frac{1}{2}\int_{\mathbb{R}^n} |u|^2\,dx, \qquad E(u) = \int_{\mathbb{R}^n} \frac{1}{2}|\nabla u|^2 \pm \frac{1}{p+1}|u|^{p+1}\,dx, \qquad (1.2)$$
often called mass and energy, respectively. These quantities are conserved in time, that is, $M(u(x,t)) = M(u(x,0))$ and $E(u(x,t)) = E(u(x,0))$ for sufficiently smooth solutions.

The scaling
$$u(x,t) \mapsto u_\lambda(x,t) := \lambda^{-2/(p-1)} u(\lambda^{-1}x, \lambda^{-2}t) \qquad (1.3)$$
preserves the equation: if $u(x,t)$ is a solution to (1.1) then $u_\lambda(x,t)$ is also a solution.

When the power $p$ and dimension $n$ are chosen according to the relation
$$p = \frac{4}{n} + 1,$$
the scaling (1.3) also preserves the mass, that is, $M(u_\lambda(\cdot,t)) = M(u(\cdot,\lambda^{-2}t))$. Such an equation is called mass critical. For example, in one dimension ($n = 1$) the quintic ($p = 5$) NLS is mass critical. For values of $p < 4/n + 1$ we call the equation mass sub-critical, and for $p > 4/n + 1$ we say mass super-critical.

Similarly, if
$$p = 1 + \frac{4}{n-2}, \qquad n \ge 3,$$
then (1.3) preserves the energy, so $E(u_\lambda(\cdot,t)) = E(u(\cdot,\lambda^{-2}t))$. We call such an equation energy critical. For example, in three dimensions ($n = 3$) the quintic ($p = 5$) NLS is energy critical. Again we refer to $p < 1 + 4/(n-2)$ as energy sub-critical and $p > 1 + 4/(n-2)$ as energy super-critical. Note that for dimensions one and two all equations with $p < \infty$ are energy sub-critical.

1.3 Scattering and Blow-Up

The overall dynamics of the equation are affected by the power $p$'s relation to the mass and energy critical values.
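As an illustrative aside (a numerical sketch, not part of the thesis argument): since $M(u_\lambda(\cdot,0)) = \lambda^{\,n - 4/(p-1)}M(u(\cdot,0))$, the exponent vanishes exactly when $p = 1 + 4/n$. In 1D the quintic scaling therefore preserves the mass of any sample profile, while the cubic scaling does not.

```python
import math

def mass(u, xs, dx):
    # M(u) = (1/2) * integral of |u|^2, via a Riemann sum on the grid xs
    return 0.5 * sum(abs(u(x))**2 for x in xs) * dx

def scaled(u, lam, p):
    # the scaling (1.3) at t = 0: u_lambda(x) = lam^{-2/(p-1)} u(x/lam)
    return lambda x: lam**(-2.0/(p - 1)) * u(x/lam)

u0 = lambda x: math.exp(-x*x)            # sample profile; the scaling acts on any function
dx = 0.01
xs = [i*dx for i in range(-4000, 4001)]  # grid on [-40, 40], well past the effective support

m0        = mass(u0, xs, dx)
m_quintic = mass(scaled(u0, 2.0, p=5), xs, dx)  # p = 5 = 4/1 + 1: mass critical in 1D
m_cubic   = mass(scaled(u0, 2.0, p=3), xs, dx)  # p = 3: mass sub-critical in 1D
```

Here `m_quintic` agrees with `m0` to quadrature accuracy, while `m_cubic` differs, reflecting the exponent $n - 4/(p-1)$.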
By overall dynamics we mean the long-time behaviour of the solution subject to an initial condition, i.e. the Cauchy problem
$$\begin{cases} i\partial_t u = -\Delta u \pm |u|^{p-1}u \\ u(x,0) = u_0(x). \end{cases} \qquad (1.4)$$
We seek theorems which characterize the solution's eventual behaviour for a large class of initial conditions $u_0$.

One possibility is that the solution will scatter (Chapter 7 of [15], Chapter 3.3 of [74]). This means that the solution $u(x,t)$ will eventually look like a solution to the linear Schrödinger equation. That is, after sufficient time, all nonlinear behaviour has disappeared and only linear behaviour remains.

Solutions to the linear Schrödinger equation (Chapter 2 of [15], Chapter 3.1 of [74])
$$\begin{cases} i\partial_t u = -\Delta u \\ u(x,0) = u_0 \end{cases}$$
evolve in a predictable way. The mass, or $L^2$ norm, is preserved, while the $L^\infty$ norm (supremum) decays according to
$$\|u(x,t)\|_{L^\infty(\mathbb{R}^n)} \lesssim t^{-n/2}\|u_0\|_{L^1(\mathbb{R}^n)}. \qquad (1.5)$$
The above norms are defined as
$$\|u(x,t)\|_{L^\infty(\mathbb{R}^n)} = \sup_{x\in\mathbb{R}^n} |u(x,t)| \quad\text{and}\quad \|u(x,t)\|_{L^1(\mathbb{R}^n)} = \int_{\mathbb{R}^n} |u(x,t)|\,dx,$$
and when we write $f(t) \lesssim g(t)$ we mean that there is a constant $C$, independent of $t$, such that $f(t) \le C g(t)$.

We may think of the linear Schrödinger equation as modeling a free quantum particle. The particle has a tendency to "spread out" due to momentum uncertainty, even if it is well localized to start.

A scattering theorem will then have the following form: for all $u_0 \in X$ (or some subset of $X$) we have
$$\|u(x,t) - u_{\mathrm{lin}}\|_X \to 0 \quad\text{as } t \to \infty.$$
Here $u_{\mathrm{lin}}$ is a solution to the linear Schrödinger equation (chosen according to the particular $u_0$), $X$ is an appropriate function space, and $\|\cdot\|_X$ is a norm on this space.

Scattering is, to some extent, expected in the defocusing (repulsive) case. We think of the Laplacian as a force for dispersion, as evidenced by the behaviour of the linear Schrödinger equation, and we think of the defocusing nonlinearity as a repellent to the gathering of mass.
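The decay rate (1.5) can be seen concretely on Gaussian data, for which the free propagator has a closed form (a standard computation, sketched here as an illustrative aside, not part of the thesis): in 1D, $e^{it\Delta}e^{-x^2/(4a)} = \sqrt{a/(a+it)}\,e^{-x^2/(4(a+it))}$, so the sup-norm is $(a^2/(a^2+t^2))^{1/4} \sim t^{-1/2} = t^{-n/2}$.

```python
import cmath

def sup_norm(t, a=1.0):
    # sup-norm of the free evolution of u0(x) = exp(-x^2/(4a)) in 1D,
    # attained at x = 0: |sqrt(a/(a + it))|
    return abs(cmath.sqrt(a / (a + 1j*t)))

# the ratio sup_norm(t) / t^{-1/2} should settle to a^{1/2} = 1 as t grows
r1 = sup_norm(100.0)  * 100.0**0.5
r2 = sup_norm(1000.0) * 1000.0**0.5
```

Both ratios are close to $1$, and the second is closer, exhibiting the $t^{-1/2}$ rate.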
We single out the result [38], a scattering theorem in the energy space $X = H^1$ for the defocusing NLS with power $p$ between the mass critical and energy critical values,
$$1 + \frac{4}{n} < p < 1 + \frac{4}{n-2},$$
in dimensions $n \ge 3$. The space $H^1$ consists of functions $u$ whose $H^1$ norm,
$$\|u(x,t)\|_{H^1(\mathbb{R}^n)}^2 = \|u(x,t)\|_{L^2(\mathbb{R}^n)}^2 + \|u(x,t)\|_{\dot H^1(\mathbb{R}^n)}^2 = \int_{\mathbb{R}^n} |u(x,t)|^2\,dx + \int_{\mathbb{R}^n} |\nabla u(x,t)|^2\,dx,$$
is finite.

For the focusing equation, with $n \ge 3$, $X = H^1$, $1 + 4/n < p < 1 + 4/(n-2)$, we have the scattering theory [73]. The key difference in the focusing case is the assumption that the energy ($H^1$) norm of the initial condition $u_0$ be small. Here we see that if we do not have enough mass or energy to begin with, the force of dispersion will win out over the focusing nonlinearity's tendency to attract mass together.

In the focusing equation, however, there are other possible fates for solutions besides scattering. If too much mass is assembled, the attractive nonlinearity may cause the solution to break down in finite time; that is, some norm of the solution will blow up. For example (Chapter 5 of [74]), consider the focusing NLS with $p \ge 1 + 4/n$. There exist initial conditions $u_0 \in H^1$ and a time $t^* < \infty$ such that
$$\lim_{t\to t^*} \|\nabla u(x,t)\|_{L^2} = \infty.$$

Important for the above discussion of mass super-critical and energy sub-critical equations (scattering in the defocusing case and blow-up in the focusing case) is the local existence theory. The $H^1$ local theory ensures that initial data $u_0 \in H^1$ generates a solution $u(x,t)$ with continuous-in-time $H^1$ norm up to some time $T$. Here $T$ is a non-increasing function of $\|u_0\|_{H^1}$. Repeatedly applying the local theory together with an a priori $H^1$ bound (such as in the defocusing case) then yields global existence, i.e. the solution exists for all time (the maximal time of existence $T_{\max} = \infty$). In the absence of an a priori $H^1$ bound (focusing case) it is possible that the maximal time of existence for our initial data is finite.
Indeed, the local theory provides the following blow-up criterion: if $T_{\max} < \infty$ then $\lim_{t\to T_{\max}^-} \|u(\cdot,t)\|_{H^1} = \infty$.

The energy critical case, $p = 1 + 4/(n-2)$, is more challenging, since the blow-up criterion generated by the corresponding local theory is more complicated. Nevertheless, in the defocusing energy critical case the global existence and scattering theory is more or less complete. First we have scattering theorems [10], [11] in dimensions $n = 2, 3$, also [40] in dimension $n = 3$, and then [76] in dimensions $n \ge 5$, all where the initial data $u_0$ was assumed to be radial. The radial assumption was removed in [21] for $n = 3$, later in [67] for $n = 4$, and finally in [83] for $n \ge 5$. Scattering in the defocusing energy super-critical case remains a substantial open problem. We discuss scattering and blow-up for the focusing energy critical equation (a study which was initiated in [54]) in Section 1.7 and Chapter 3.

1.4 Solitary Waves and Stability

In the focusing NLS,
$$i\partial_t u = -\Delta u - |u|^{p-1}u,$$
the dispersive force of the Laplacian may be balanced by the attractive nonlinearity, leading to solutions that neither scatter nor blow up. Our NLS may admit solutions of the form
$$u(x,t) = e^{i\omega t}Q(x), \qquad (1.6)$$
often called solitary wave solutions or solitons. We think of solitary waves as bound states or equilibrium solutions: the time dependence is confined to the phase, and the absolute value $|u|$ is preserved. We further classify solitons as ground states if they minimize the action
$$S_\omega = E + \omega M$$
among all non-zero solutions of the form (1.6).

The time-independent function $Q(x)$ is the solitary wave profile, which satisfies the following elliptic partial differential equation:
$$-\Delta Q - |Q|^{p-1}Q + \omega Q = 0. \qquad (1.7)$$
The above elliptic equation (1.7) is well studied, with results going back to [72] and [9]. Much of the present thesis concerns solitary waves.
In particular: their existence, stability, and impact on the overall dynamics of the equation.

Ground state solitary waves are stable for mass sub-critical powers $p$ [16] and unstable for mass critical [88] and mass super-critical powers [7]. We call solitary waves (orbitally) stable if initial conditions close to the soliton produce solutions which remain close to the soliton (modulo the symmetries of spatial translation and phase rotation) for all time. More precisely, let $\varphi(x)$ be the spatial profile of the ground state, $u_0(x) \in H^1$ an initial condition, and $u(x,t)$ the solution generated by the initial condition. We say $e^{i\omega t}\varphi$ is (orbitally) stable if for every $\varepsilon > 0$ there exists a $\delta(\varepsilon) > 0$ such that if $\|\varphi - u_0\|_{H^1} \le \delta(\varepsilon)$ then
$$\sup_{t\in\mathbb{R}}\ \inf_{\theta\in\mathbb{R}}\ \inf_{y\in\mathbb{R}^n} \|u(\cdot,t) - e^{i\theta}\varphi(\cdot - y)\|_{H^1} < \varepsilon.$$
Otherwise, we say the soliton is unstable. In fact, both [88] and [7] demonstrate that some initial condition close to the soliton blows up in finite time. To study stability one can employ variational methods, as in [16], [88], [7], or else study the linearized operator around the soliton (see [39], [41], [89]). Understanding the spectrum of the linearized operator is often necessary to establish asymptotic stability results such as those obtained in [6, 13, 22, 23, 26, 36, 65, 68]. A soliton is asymptotically stable if initial conditions close to the soliton produce solutions which converge to a (nearby) soliton as $t \to \infty$. The study we initiate in Section 1.6 and Chapter 2 concerns the linearized operator about 1D solitons.

1.5 Resonance

By a resonance, or resonance eigenvalue, we mean a 'would-be' eigenvalue, usually at the edge of the continuous spectrum. The resonance is not a true eigenvalue because its resonance eigenfunction does not have sufficient decay to be square integrable. The appearance of a resonance, in the linearized operator about a soliton for example, is a non-generic occurrence.
While non-generic, a resonance does appear in a few key equations, in particular those NLS that we study in Chapter 2 and Chapter 3. Such a resonance may complicate analysis, for example by making the resolvent expansion singular and slowing the time-decay of perturbations to a soliton.

For example, let us consider the linear Schrödinger operator
$$H := -\Delta + V \qquad (1.8)$$
in $n$ dimensions, where $V = V(x)$ is a potential. We may have a resonance eigenvalue $\lambda$ with resonance eigenfunction $\xi$ such that
$$H\xi = \lambda\xi \quad\text{but}\quad \xi \notin L^2(\mathbb{R}^n).$$
In 3D we require the resonance $\xi$ to be in $L^3_w(\mathbb{R}^3)$, the weak $L^3$ space. Therefore $\xi$ may decay like $1/|x|$ at infinity. The original paper [47] computes several terms in the resolvent expansion in the presence of a resonance in 3D. The resolvent is singular as $\lambda \to 0$ and takes the following form (assuming we have no edge eigenvalue):
$$(H + \lambda^2)^{-1} = O\!\left(\frac{1}{\lambda}\right) \qquad (1.9)$$
as an operator on suitable spaces. Moreover, the time-decay estimate (1.5) is retarded, decaying in time like $t^{-1/2}$ instead of $t^{-3/2}$. A restated version of these results is crucial in the analysis of Chapter 3.

In 1D a resonance $\xi$ may not decay at all. If $H\xi = \lambda\xi$ and $\xi \notin L^p$ for any $p < \infty$ but $\xi \in L^\infty$, then we regard $\xi$ as a resonance. The more recent work [48] provides a unified approach to resolvent expansions across all dimensions, and so in 1D in particular, which we rely on in Chapter 2. The resolvent expansion itself in 1D appears similar to (1.9). Interestingly, resonances of the Schrödinger operator only appear in dimensions 1-4 [46-48], not in dimensions $n \ge 5$ [45].

1.6 The 1D Linearized NLS

Consider now the following focusing NLS in one space dimension:
$$i\partial_t u = -\partial_x^2 u - |u|^{p-1}u. \qquad \text{(NLS}_p\text{)}$$
Chapter 2 deals with the above equation, and so we supply here some background, motivation, and connection to previous works. The above (NLS$_p$) is known to exhibit solitary waves.
Indeed, since we are in 1D the solitons are available in the following explicit form:
$$u(x,t) = Q_p(x)e^{it}$$
where
$$Q_p^{p-1}(x) = \left(\frac{p+1}{2}\right)\operatorname{sech}^2\left(\frac{p-1}{2}x\right).$$
One naturally asks about the stability of these waves, which leads immediately to an investigation of the spectrum of the linearized operator governing the dynamics close to the solitary wave solution.

The linearized operator is obtained by considering a perturbation of the solitary wave,
$$u(x,t) = (Q_p(x) + h(x,t))e^{it},$$
and neglecting all but the leading order in the resulting system. After we complexify, i.e. letting $\vec h = \begin{pmatrix} h & \bar h \end{pmatrix}^T$, we obtain the linear system
$$i\partial_t \vec h = L_p \vec h$$
where
$$L_p = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}\left(\begin{pmatrix} -\partial_x^2 + 1 & 0 \\ 0 & -\partial_x^2 + 1 \end{pmatrix} - \frac{1}{2}\begin{pmatrix} p+1 & p-1 \\ p-1 & p+1 \end{pmatrix} Q_p^{p-1}\right)$$
is the linearized operator. See Section 2.1. Systematic spectral analysis of the linearized operator has a long history (e.g. [39, 89], and for more recent studies [17, 30, 85, 86]).

The principal motivation for Chapter 2 comes from [17], where resonance eigenvalues (with explicit resonance eigenfunctions) were observed to sit at the edges (or thresholds) of the spectrum for the 1D linearized NLS problem with focusing cubic nonlinearity. Numerically, it was observed that the same problem with power nonlinearity close to $p = 3$ (on both sides) has a true eigenvalue close to the threshold. We establish analytically the observed qualitative behaviour. Stated roughly, the main result of Chapter 2 is:

for $p \approx 3$, $p \ne 3$, the linearization of the 1D (NLS$_p$) about its soliton has purely imaginary eigenvalues, bifurcating from resonances at the edges of the essential spectrum of the linearized (NLS$_3$), whose distance from the thresholds is of order $(p-3)^4$.

The exact statement is given as Theorem 2.3.1 in Section 2.3, and includes the precise leading order behaviour of the eigenvalues. The eigenvalues obtained here, being on the imaginary axis, correspond to stable behaviour at the linear level.
A further motivation for obtaining detailed information about the spectra of linearized operators is that such information is a key ingredient in studying the asymptotic stability of solitary waves: see [6, 13, 22, 23, 26, 36, 65, 68] for some results of this type. Such results typically assume the absence of threshold eigenvalues or resonances. The presence of a resonance is an exceptional case which complicates the stability analysis by retarding the time-decay of perturbations. Nevertheless, the asymptotic stability of solitons in the 1D cubic focusing NLS was recently proved in [29]. The proof relies on integrable systems technology and so is only available for the cubic equation. The solitons are known to be stable in the (weaker) orbital sense for all $p < 5$ (the so-called mass subcritical range), while for $p \ge 5$ they are unstable [41, 90], but the question of asymptotic stability for $p < 5$ and $p \ne 3$ seems to be open. The existence (and location) of eigenvalues on the imaginary axis, which is shown here, should play a role in any attempt on this problem.

The generic bifurcation of resonances and eigenvalues from the edge of the essential spectrum was studied by [28] and [84] in three dimensions. Edge bifurcations have also been studied in one-dimensional systems using the Evans function in [52] and [53], as well as in the earlier works [50], [51] and [64]. We do not follow that route, but rather adopt the approach of [28, 84] (going back also to [48], and in turn to the classical work [47]), using a Birman-Schwinger formulation, resolvent expansion, and Lyapunov-Schmidt reduction. Our work is distinct from [28, 84] due to the unique challenges of working
in one dimension, in particular the strong singularity of the free resolvent at zero energy, which, among other things, increases by one the dimension of the range of the projection involved in the Lyapunov-Schmidt reduction procedure.

Moreover, our work is distinct from all of [28, 52, 53, 84] in that we study the particular (and, as it turns out, non-generic) resonance and perturbation corresponding to the near-cubic pure-power NLS problem. Generically, a resonance is associated with the birth or death of an eigenvalue, and such is the picture obtained in [28, 52, 53, 84]: an eigenvalue approaches the essential spectrum, becomes a resonance on the threshold, and then disappears. In our setting, the eigenvalue approaches the essential spectrum, sits on the threshold as a resonance, then returns as an eigenvalue. The bifurcation is degenerate in the sense that the expansion of the eigenvalue begins at higher order, and the analysis we develop to locate this eigenvalue is thus considerably more delicate.

1.7 A Perturbation of the 3D Energy Critical NLS

In Chapter 3 we consider nonlinear Schrödinger equations in three space dimensions, of the form
$$i\partial_t u = -\Delta u - |u|^4 u - \varepsilon g(|u|^2)u, \qquad (1.10)$$
where $\varepsilon$ is a small, real parameter. Equation (1.10) is a perturbed version of the focusing energy critical NLS. This section is devoted to introducing the above equation, providing some background on the unperturbed critical equation, and stating the main theorems of Chapter 3.

The mass and energy of (1.10) are
$$M(u) = \frac{1}{2}\int_{\mathbb{R}^3} |u|^2\,dx, \qquad E_\varepsilon(u) = \int_{\mathbb{R}^3}\left\{\frac{1}{2}|\nabla u|^2 - \frac{1}{6}|u|^6 - \frac{\varepsilon}{2}G(|u|^2)\right\}dx,$$
where $G' = g$. We are particularly interested in the existence (and dynamical implications) of solitary wave solutions of (1.10) of the form
$$u(x,t) = Q(x)e^{i\omega t}.$$
We will consider only real-valued solitary wave profiles, $Q(x) \in \mathbb{R}$, for which the corresponding stationary problem is
$$-\Delta Q - Q^5 - \varepsilon f(Q) + \omega Q = 0, \qquad f(Q) = g(Q^2)Q. \qquad (1.11)$$
Since the perturbed solitary wave equation (1.11) is the Euler-Lagrange equation for the action
$$S_{\varepsilon,\omega}(u) := E_\varepsilon(u) + \omega M(u),$$
the standard Pohozaev relations [34] give necessary conditions for the existence of finite-action solutions of (1.11):
$$0 = K_\varepsilon(u) := \frac{d}{d\mu}S_{\varepsilon,\omega}(T_\mu u)\Big|_{\mu=1} = \int |\nabla Q|^2 - \int Q^6 + \varepsilon\int\left(3F(Q) - \frac{3}{2}Qf(Q)\right),$$
$$0 = K^{(0)}_{\varepsilon,\omega}(u) := \frac{d}{d\mu}S_{\varepsilon,\omega}(S_\mu u)\Big|_{\mu=1} = \varepsilon\int\left(3F(Q) - \frac{1}{2}Qf(Q)\right) - \omega\int Q^2, \qquad (1.12)$$
where
$$(T_\mu u)(x) := \mu^{3/2}u(\mu x), \qquad (S_\mu u)(x) := \mu^{1/2}u(\mu x)$$
are the scaling operators preserving, respectively, the $L^2$ norm and the $L^6$ (and $\dot H^1$) norm, and $F' = f$ (so $F(Q) = \frac{1}{2}G(Q^2)$).

The corresponding unperturbed ($\varepsilon = 0$) problem, the 3D quintic equation
$$i\partial_t u = -\Delta u - |u|^4 u, \qquad (1.13)$$
is energy critical, i.e. the scaling
$$u(x,t) \mapsto u_\lambda(x,t) := \lambda^{1/2}u(\lambda x, \lambda^2 t),$$
which preserves (1.13), also leaves invariant its energy
$$E_0(u) = \int_{\mathbb{R}^3}\left\{\frac{1}{2}|\nabla u|^2 - \frac{1}{6}|u|^6\right\}dx, \qquad E_0(u_\lambda(\cdot,t)) = E_0(u(\cdot,\lambda^2 t)).$$
One implication of energy criticality is that (1.13) fails to admit solitary waves with $\omega \ne 0$ (as can be seen from (1.12)), but instead admits the Aubin-Talenti static solution
$$W(x) = \left(1 + \frac{|x|^2}{3}\right)^{-1/2}, \qquad \Delta W + W^5 = 0, \qquad (1.14)$$
whose slow spatial decay means it fails to lie in $L^2(\mathbb{R}^3)$, though it does fall in the energy space:
$$W \notin L^2(\mathbb{R}^3), \qquad W \in \dot H^1(\mathbb{R}^3) = \{u \in L^6(\mathbb{R}^3) \mid \|u\|_{\dot H^1} := \|\nabla u\|_{L^2} < \infty\}.$$
By scaling invariance, $W_\mu := S_\mu W = \mu^{1/2}W(\mu x)$, for $\mu > 0$, also satisfy (1.14), as do their negatives and spatial translates $\pm W_\mu(\cdot + a)$ ($a \in \mathbb{R}^3$). These functions (and their multiples) are well known to be the only functions realizing the best constant appearing in the Sobolev inequality [4, 75]
$$\int_{\mathbb{R}^3} |u|^6 \le C_3\left(\int_{\mathbb{R}^3} |\nabla u|^2\right)^3, \qquad C_3 = \frac{\int_{\mathbb{R}^3} W^6}{\left(\int_{\mathbb{R}^3} |\nabla W|^2\right)^3} = \frac{1}{\left(\int_{\mathbb{R}^3} W^6\right)^2},$$
where the last equality used $\int |\nabla W|^2 = \int W^6$ (as follows from (1.12)).
A closely related statement is that $W$, together with its scalings, negatives and spatial translates, are the only minimizers of the energy under the Pohozaev constraint (1.12) with $\varepsilon = \omega = 0$:
$$\min\{E_0(u) \mid 0 \ne u \in \dot H^1(\mathbb{R}^3),\ K_0(u) = 0\} = E_0(W) = E_0(\pm W_\mu(\cdot + a)),$$
$$K_0(u) = \int_{\mathbb{R}^3}\{|\nabla u|^2 - |u|^6\}. \qquad (1.15)$$
It follows that for solutions of (1.13) lying energetically 'below' $W$, $E_0(u) < E_0(W)$, the sets where $K_0(u) > 0$ and where $K_0(u) < 0$ are invariant for (1.13). The celebrated result [54] showed that radially symmetric solutions in the first set scatter to $0$, while those in the second set become singular in finite time (in dimensions 3, 4, 5). In this way, $W$ plays a central role in classifying solutions of (1.13), and it is natural to think of $W$ (together with its scalings and spatial translates) as the ground states of (1.13). The assumption in [54] that solutions be radially symmetric was removed in [57] for dimensions $n \ge 5$ and then for $n = 4$ in [32]. Removing the radial symmetry assumption appears still open for $n = 3$. A characterization of the dynamics for initial data at the threshold $E_0(u_0) = E_0(W)$ appears in [33], and a classification of global dynamics based on initial data slightly above the ground state is given in [60].

Just as the main interest in studying (1.13) is in exploring the implications of critical scaling, the main interest in studying (1.10) and (1.11) here is the effect of perturbing the critical scaling, in particular: the emergence of ground state solitary waves from the static solution $W$, the resulting energy landscape, and its implications for the dynamics.

A natural analogue for (1.11) of the ground state variational problem (1.15) is
$$\min\{S_{\varepsilon,\omega}(u) \mid u \in H^1 \setminus \{0\},\ K_\varepsilon(u) = 0\}. \qquad (1.16)$$
For a study of similar minimization problems see [7] and [8], as well as [3], which treats a large class of critical problems and establishes the existence of ground state solutions.
In space dimensions 4 and 5, [1, 2] showed the existence of minimizers for (the analogue of) (1.16), hence of ground state solitary waves, for each $\omega > 0$ and $\varepsilon g(|u|^2)u$ sufficiently small and subcritical; moreover, a blow-up/scattering dichotomy 'below' the ground states in the spirit of [54] holds. Our intention is to establish the existence of ground states, and the blow-up/scattering dichotomy, in the 3-dimensional setting. In dimension 3, the question of the existence of minimizers for (1.16) is more subtle, and we proceed via a perturbative construction, rather than a direct variational method.

A key role in the analysis is played by the linearization of (1.14) around $W$, in particular the linearized operator
$$H := -\Delta + V := -\Delta - 5W^4, \qquad (1.17)$$
which, as a consequence of scaling invariance, has the following resonance:
$$H\Lambda W = 0, \qquad \Lambda W := \frac{d}{d\mu}S_\mu W\Big|_{\mu=1} = \left(\frac{1}{2} + x\cdot\nabla\right)W \notin L^2(\mathbb{R}^3). \qquad (1.18)$$
Indeed $\Lambda W = W^3 - \frac{1}{2}W$ decays like $|x|^{-1}$, and so
$$W,\ \Lambda W \in L^r(\mathbb{R}^3) \cap \dot H^1(\mathbb{R}^3), \qquad 3 < r \le \infty.$$

Our first goal is to find solutions to (1.11) where $\omega = \omega(\varepsilon) > 0$ is small and $Q(x) \in \mathbb{R}$ is a perturbation of $W$ in some appropriate sense. One obstacle is that $W \notin L^2$ is a slowly decaying function, whereas solutions of (1.11) satisfy $Q \in L^2$, and indeed are exponentially decaying.

Assumption 1.7.1. Take $f : \mathbb{R} \to \mathbb{R}$, $f \in C^1$, such that $f(0) = 0$ and
$$|f'(s)| \lesssim |s|^{p_1-1} + |s|^{p_2-1}$$
with $2 < p_1 \le p_2 < \infty$. Further assume that $\langle \Lambda W, f(W) \rangle < 0$.

Theorem 1.7.2. There exists $\varepsilon_0 > 0$ such that for each $0 < \varepsilon \le \varepsilon_0$, there is $\omega = \omega(\varepsilon) > 0$ and a smooth, real-valued, radially symmetric $Q = Q_\varepsilon \in H^1(\mathbb{R}^3)$ satisfying (1.11) with
$$\omega = \omega_1\varepsilon^2 + \tilde\omega, \qquad (1.19)$$
$$Q(x) = W(x) + \eta(x), \qquad (1.20)$$
where
$$\omega_1 = \left(\frac{-\langle \Lambda W, f(W)\rangle}{6\pi}\right)^2,$$
$\tilde\omega = O(\varepsilon^{2+\delta_1})$ for any $\delta_1 < \min(1, p_1 - 2)$, $\|\eta\|_{L^r} \lesssim \varepsilon^{1-3/r}$ for all $3 < r \le \infty$, and $\|\eta\|_{\dot H^1} \lesssim \varepsilon^{1/2}$. In particular, $Q \to W$ in $L^r \cap \dot H^1$ as $\varepsilon \to 0$.

Remark 1.7.3. We have a further decomposition of $\eta$, but the leading order term depends on whether we measure it in $L^r$ with $r = \infty$ or $3 < r < \infty$. See Lemmas 3.1.9 and 3.1.10.

Remark 1.7.4.
Note that allowable $f$ include $f(Q) = |Q|^{p-1}Q$ with $2 < p < 5$, the subcritical, pure-power, focusing nonlinearities, as well as $f(Q) = -|Q|^{p-1}Q$ with $5 < p < \infty$, the supercritical, pure-power, defocusing nonlinearities. Observe
$$\langle \Lambda W, W^p \rangle = \int\left(\frac{1}{2}W^{p+1} + W^p(x\cdot\nabla)W\right) = \int\left(\frac{1}{2}W^{p+1} + \frac{1}{p+1}(x\cdot\nabla)W^{p+1}\right) = \int\left(\frac{1}{2} - \frac{3}{p+1}\right)W^{p+1},$$
which is negative when $2 < p < 5$ and positive when $p > 5$.

Remark 1.7.5. Since $Q_\varepsilon \to W$ in $L^r$ for $r \in (3,\infty]$, the Pohozaev identity (1.12), together with the divergence theorem, implies that for any such family of solutions, a necessary condition is
$$\langle \Lambda W, f(W)\rangle = \int\left(\frac{1}{2}Wf(W) - 3F(W)\right) = \lim_{\varepsilon\to 0}\int\left(\frac{1}{2}Q_\varepsilon f(Q_\varepsilon) - 3F(Q_\varepsilon)\right) \le 0.$$

Remark 1.7.6. Note that $Q \in L^r \cap \dot H^1$ ($3 < r \le \infty$) satisfying (1.11) lies automatically in $L^2$ (and hence $H^1$), by the Pohozaev relations (1.12):
$$0 = \int|\nabla Q|^2 - \int Q^6 - \varepsilon\int f(Q)Q + \omega\int Q^2. \qquad (1.21)$$
The first two integrals are finite. We can also bound the third:
$$\left|\int f(Q)Q\right| \le \int |f(Q)||Q| \lesssim \int |Q|^{p_1+1} + \int |Q|^{p_2+1} < \infty,$$
since $p_2 + 1 \ge p_1 + 1 > 3$. In this way $\int Q^2$ must be finite. Moreover, since $Q \in L^r$ with $r > 6$, a standard elliptic regularity argument implies that $Q$ is in fact a smooth function. Therefore it suffices to find a solution $Q \in L^r \cap \dot H^1$.

The paper [31] considers an elliptic problem similar to (1.11):
$$-\Delta Q + Q - Q^p - \lambda Q^q = 0$$
with $1 < q < 3$, $\lambda > 0$ large and fixed, and $p < 5$ but $p \to 5$. They demonstrate the existence of three positive solutions, one of which approaches $W$ (1.14) as $p \to 5$. The follow-up [18] established a similar result with $p \to 5$ but $p > 5$ and $3 < q < 5$. While [31] and [18] are perturbative in nature, their method of construction differs from ours.

The proof of Theorem 1.7.2 is presented in Section 3.1. As the statement suggests, the argument is perturbative: the solitary wave profiles $Q$ are constructed as small (in $L^r$) corrections to $W$. The set-up is given in Section 3.1.1. The equation for the correction $\eta$ involves the resolvent of the linearized operator $H$.
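Both facts about $W$ used above, the equation (1.14) and the sign of $\langle\Lambda W, W^p\rangle$, are easy to confirm numerically (an illustrative sketch, not part of the proof); recall that $\Delta = \partial_r^2 + \frac{2}{r}\partial_r$ on radial functions and $\Lambda W = W^3 - \frac12 W$.

```python
import math

W  = lambda r: (1.0 + r*r/3.0)**-0.5     # Aubin-Talenti profile (1.14)
LW = lambda r: W(r)**3 - 0.5*W(r)        # Lambda W = W^3 - W/2

# 1. residual of Delta W + W^5 = W'' + (2/r) W' + W^5, via central differences
def residual(r, h=1e-4):
    d1 = (W(r+h) - W(r-h)) / (2*h)
    d2 = (W(r+h) - 2*W(r) + W(r-h)) / (h*h)
    return d2 + 2.0*d1/r + W(r)**5

res = max(abs(residual(r)) for r in [0.5, 1.0, 3.0, 10.0])

# 2. <Lambda W, W^p> = (1/2 - 3/(p+1)) int W^{p+1}: negative for 2 < p < 5,
#    positive for p > 5 (midpoint rule on a large radial box)
def pairing(p, R=1000.0, n=100000):
    dr = R/n
    return sum(LW((i+0.5)*dr) * W((i+0.5)*dr)**p * 4.0*math.pi*((i+0.5)*dr)**2
               for i in range(n)) * dr

sign_p3 = pairing(3.0)   # expect negative (subcritical power)
sign_p7 = pairing(7.0)   # expect positive (supercritical power)
```

The cutoff $R$ matters only mildly for $p = 3$, where the integrand decays like $r^{-2}$; the computed value matches the closed form $-\frac14\int W^4 = -\frac{3\sqrt3\,\pi^2}{4}$ up to that tail.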
A Lyapunov-Schmidt-type procedure is used to recover uniform boundedness of this resolvent in the presence of the resonance $\Lambda W$ (see Section 3.1.2 for the relevant estimates) and to determine the frequency $\omega$; see Section 3.1.3. Finally, the correction $\eta$ is determined by a fixed point argument in Section 3.1.4.

The next question is whether the solution $Q$ is a ground state in a suitable sense. For this question, we specialize to pure, subcritical powers $f(Q) = |Q|^{p-1}Q$, $3 < p < 5$, for which the 'ground state' variational problem (1.16) reads
$$\min\{S_{\varepsilon,\omega}(u) \mid u \in H^1(\mathbb{R}^3)\setminus\{0\},\ K_\varepsilon(u) = 0\},$$
$$S_{\varepsilon,\omega}(u) = \frac{1}{2}\|\nabla u\|_{L^2}^2 - \frac{1}{6}\|u\|_{L^6}^6 - \frac{\varepsilon}{p+1}\|u\|_{L^{p+1}}^{p+1} + \frac{\omega}{2}\|u\|_{L^2}^2,$$
$$K_\varepsilon(u) = \|\nabla u\|_{L^2}^2 - \|u\|_{L^6}^6 - \frac{3(p-1)}{2(p+1)}\varepsilon\|u\|_{L^{p+1}}^{p+1}. \qquad (1.22)$$

Theorem 1.7.7. Let $f(Q) = |Q|^{p-1}Q$ with $3 < p < 5$. There exists $\varepsilon_0$ such that for each $0 < \varepsilon \le \varepsilon_0$ and $\omega = \omega(\varepsilon) > 0$ furnished by Theorem 1.7.2, the solitary wave profile $Q_\varepsilon$ constructed in Theorem 1.7.2 is a minimizer of problem (1.22). Moreover, $Q_\varepsilon$ is the unique positive, radially-symmetric minimizer.

Remark 1.7.8. It follows from Theorem 1.7.7 that the solitary wave profiles are positive: $Q_\varepsilon(x) > 0$.

Remark 1.7.9 (see Corollary 3.2.12). By scaling, for each $\varepsilon > 0$ there is an interval $[\omega,\infty) \ni \omega(\varepsilon)$ such that for $\omega \in [\omega,\infty)$,
$$Q(x) := \left(\frac{\varepsilon}{\hat\varepsilon}\right)^{\frac{1}{5-p}} Q_{\hat\varepsilon}\!\left(\left(\frac{\varepsilon}{\hat\varepsilon}\right)^{\frac{2}{5-p}} x\right),$$
where $0 < \hat\varepsilon \le \varepsilon_0$ satisfies $(\omega(\hat\varepsilon)/\omega) = (\hat\varepsilon/\varepsilon)^{4/(5-p)}$, solves the corresponding minimization problem (1.22). Here the function $Q_{\hat\varepsilon}$ is the solution constructed by Theorem 1.7.2 with $\hat\varepsilon$ and $\omega(\hat\varepsilon)$.

The proof of Theorem 1.7.7 is presented in Section 3.2. It is somewhat indirect. We first use the $Q = Q_\varepsilon$ constructed in Theorem 1.7.2 simply as test functions to verify
$$S_{\varepsilon,\omega(\varepsilon)}(Q_\varepsilon) < E_0(W)$$
and so confirm, by standard methods, that the variational problems (1.22) indeed admit minimizers.
By exploiting the unperturbed variational problem (1.15), we show these minimizers approach (up to rescaling) $W$ as $\varepsilon \to 0$. Then the local uniqueness provided by the fixed-point argument from Theorem 1.7.2 implies that the minimizers agree with $Q_\varepsilon$.

Finally, as in [1, 2], we use the variational problem (1.22) to characterize the dynamics of radially-symmetric solutions of the perturbed critical nonlinear Schrödinger equation
$$\begin{cases} i\partial_t u = -\Delta u - |u|^4 u - \varepsilon|u|^{p-1}u \\ u(x,0) = u_0(x) \in H^1(\mathbb{R}^3) \end{cases} \qquad (1.23)$$
'below the ground state', in the spirit of [54]. By standard local existence theory (details in Section 3.3), the Cauchy problem (1.23) admits a unique solution $u \in C([0,T_{\max});H^1(\mathbb{R}^3))$ on a maximal time interval, and a central question is whether the solution blows up in finite time ($T_{\max} < \infty$) or is global ($T_{\max} = \infty$), and if global, how it behaves as $t \to \infty$. We have:

Theorem 1.7.10. Let $3 < p < 5$ and $0 < \varepsilon < \varepsilon_0$, let $u_0 \in H^1(\mathbb{R}^3)$ be radially-symmetric and satisfy
$$S_{\varepsilon,\omega(\varepsilon)}(u_0) < S_{\varepsilon,\omega(\varepsilon)}(Q_\varepsilon),$$
and let $u$ be the corresponding solution to (1.23):
1. If $K_\varepsilon(u_0) \ge 0$, $u$ is global and scatters to $0$ as $t \to \infty$;
2. if $K_\varepsilon(u_0) < 0$, $u$ blows up in finite time.

Note that the conclusion is sharp in the sense that $Q_\varepsilon$ itself is a global but non-scattering solution. Below the action of the ground state, the sets where $K_\varepsilon(u) > 0$ and $K_\varepsilon(u) < 0$ are invariant under the equation (1.10). Despite the fact that $K_\varepsilon(u_0) > 0$ gives an a priori bound on the $H^1$ norm of the solution, the local existence theory is insufficient (since we have the energy critical power) to give global existence/scattering, and so we employ concentration compactness machinery. The blow-up argument is classical, while the proof of the scattering result rests on that of [54] for the unperturbed problem, with adaptations to handle the scaling-breaking perturbation coming from [1, 2] (higher-dimensional case) and [56] (defocusing case).
This is given in Section 3.3.

Chapter 2
A Degenerate Edge Bifurcation in the 1D Linearized NLS

In this chapter, we state and prove the theorem alluded to in Section 1.6. The problem is set up in Section 2.1. In Section 2.2 we collect some results about the relevant operators that are necessary for the bifurcation analysis. Section 2.3 is devoted to the statement and proof of the main result of this chapter: Theorem 2.3.1. The positivity of a certain (explicit) coefficient, which is crucial to the proof, is verified numerically; details of this computation are given in Section 2.4.

2.1 Setup of the Birman-Schwinger Problem

We consider the focusing, pure power (NLS) in one space dimension:
$$i\partial_t u = -\partial_x^2 u - |u|^{p-1}u. \qquad (2.1)$$
Here $u = u(x,t) : \mathbb{R}\times\mathbb{R} \to \mathbb{C}$ with $1 < p < \infty$. The NLS (2.1) admits solutions of the form
$$u(x,t) = Q_p(x)e^{it} \qquad (2.2)$$
where $Q_p(x) > 0$ satisfies
$$-Q_p'' - Q_p^p + Q_p = 0. \qquad (2.3)$$
In one dimension the explicit solutions
$$Q_p^{p-1}(x) = \left(\frac{p+1}{2}\right)\operatorname{sech}^2\left(\frac{p-1}{2}x\right) \qquad (2.4)$$
of (2.3), for each $p \in (1,\infty)$, are classically known to be the unique $H^1$ solutions of (2.3) up to spatial translation and phase rotation (see e.g. [15]).

In what follows we study the linearized NLS problem. That is, we linearize (2.1) about the solitary wave solutions (2.2) by considering solutions of the form
$$u(x,t) = (Q_p(x) + h(x,t))e^{it}.$$
Then $h$ solves, to leading order (i.e. neglecting terms nonlinear in $h$),
$$i\partial_t h = (-\partial_x^2 + 1)h - Q_p^{p-1}h - (p-1)Q_p^{p-1}\operatorname{Re}(h).$$
We write the above as a matrix equation
$$\partial_t\vec h = J\hat H\vec h$$
with
$$\vec h := \begin{pmatrix}\operatorname{Re}(h)\\ \operatorname{Im}(h)\end{pmatrix}, \qquad J^{-1} := \begin{pmatrix}0 & -1\\ 1 & 0\end{pmatrix}, \qquad \hat H := \begin{pmatrix}-\partial_x^2 + 1 - pQ_p^{p-1} & 0\\ 0 & -\partial_x^2 + 1 - Q_p^{p-1}\end{pmatrix}.$$
The above $J\hat H$ is the linearized operator as it appears in [17]. We now consider the rotated system
$$i\partial_t\vec h = iJ\hat H\vec h$$
and find $U$ unitary so that $UiJ\hat HU^* = \sigma_3 H$, where $\sigma_3$ is one of the Pauli matrices and $H$ is self-adjoint:
$$\sigma_3 = \begin{pmatrix}1 & 0\\ 0 & -1\end{pmatrix}, \qquad U = \frac{1}{\sqrt 2}\begin{pmatrix}1 & i\\ 1 & -i\end{pmatrix},$$
$$H = \begin{pmatrix}-\partial_x^2 + 1 & 0\\ 0 & -\partial_x^2 + 1\end{pmatrix} - \frac{1}{2}\begin{pmatrix}p+1 & p-1\\ p-1 & p+1\end{pmatrix}Q_p^{p-1} =: \tilde H - V^{(p)}.$$
In this way we are consistent with the formulation of [28, 84].
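As an aside, the explicit profile (2.4) can be checked against the profile equation (2.3) by finite differences (an illustrative numerical sketch, not part of the thesis):

```python
import math

def Q(p, x):
    # Q_p from (2.4): Q_p^{p-1}(x) = ((p+1)/2) sech^2(((p-1)/2) x)
    sech = 1.0 / math.cosh(0.5*(p - 1)*x)
    return (0.5*(p + 1) * sech*sech) ** (1.0/(p - 1))

def residual(p, x, h=1e-4):
    # residual of (2.3): -Q'' - Q^p + Q, with a central-difference second derivative
    d2 = (Q(p, x+h) - 2.0*Q(p, x) + Q(p, x-h)) / (h*h)
    return -d2 - Q(p, x)**p + Q(p, x)

res = max(abs(residual(p, x)) for p in [2.0, 3.0, 5.0] for x in [-2.0, 0.5, 1.0, 3.0])
```

For $p = 3$ this reduces to $Q_3(x) = \sqrt2\,\operatorname{sech}x$, whose square appears throughout the expansions of this chapter.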
We can also arrive at this system, $i\partial_t\vec h = \sigma_3 H\vec h$, by letting $\vec h = \begin{pmatrix}h & \bar h\end{pmatrix}^T$ from the start. Thus we are interested in the spectrum of
$$L_p := \sigma_3 H,$$
and so in what follows we consider the eigenvalue problem
$$L_p u = zu, \qquad z \in \mathbb{C}, \quad u \in L^2(\mathbb{R},\mathbb{C}^2). \qquad (2.5)$$
That the essential spectrum of $L_p$ is
$$\sigma_{\mathrm{ess}}(L_p) = (-\infty,-1] \cup [1,\infty)$$
and that $0$ is an eigenvalue of $L_p$ are standard facts [17].

When $p = 3$ we have the following resonance at the threshold $z = 1$ [17]:
$$u_0 = \begin{pmatrix}2 - Q_3^2\\ -Q_3^2\end{pmatrix} = 2\begin{pmatrix}\tanh^2 x\\ -\operatorname{sech}^2 x\end{pmatrix} \qquad (2.6)$$
in the sense that
$$L_3 u_0 = u_0, \qquad u_0 \in L^\infty, \quad u_0 \notin L^q \text{ for } q < \infty. \qquad (2.7)$$
Our main interest is how this resonance bifurcates when $p \ne 3$ but $|p-3|$ is small. We now seek an eigenvalue of (2.5) in the following form:
$$z = 1 - \alpha^2, \qquad \alpha > 0. \qquad (2.8)$$
We note that the spectrum of $L_p$ for the soliton (2.4) may only be located on the real or imaginary axes [17], and so any eigenvalues in the neighbourhood of $z = 1$ must be real. There is also a resonance at $z = -1$ which we do not mention further; symmetry of the spectrum of $L_p$ ensures the two resonances bifurcate in the same way.

We now recast the problem in accordance with the Birman-Schwinger formulation (p. 85 of [43]), as in [28, 84]. With (2.8), (2.5) becomes
$$(\sigma_3\tilde H - 1 + \alpha^2)u = \sigma_3 V^{(p)}u.$$
The constant-coefficient operator on the left is now invertible, so we can write
$$u = (\sigma_3\tilde H - 1 + \alpha^2)^{-1}\sigma_3 V^{(p)}u =: R(\alpha)V^{(p)}u.$$
After noting that $V^{(p)}$ is positive, we set
$$w := V_0^{1/2}u, \qquad V_0 := V^{(p=3)},$$
and apply $V_0^{1/2}$ to arrive at the problem
$$w = -K_{\alpha,p}w, \qquad K_{\alpha,p} := -V_0^{1/2}R(\alpha)V^{(p)}V_0^{-1/2}, \qquad (2.9)$$
with
$$R(\alpha) = \begin{pmatrix}(-\partial_x^2 + \alpha^2)^{-1} & 0\\ 0 & (-\partial_x^2 + 2 - \alpha^2)^{-1}\end{pmatrix}. \qquad (2.10)$$
We now seek solutions $(\alpha, w)$ of (2.9), which correspond to eigenvalues $1 - \alpha^2$ and eigenfunctions $V_0^{-1/2}w$ of (2.5). The decay of the potential $V^{(p)}$, and hence of $V_0^{1/2}$, now allows us to work in the space $L^2 = L^2(\mathbb{R},\mathbb{C}^2)$, whose standard inner product we denote by $\langle\cdot,\cdot\rangle$.

The resolvent $R(\alpha)$ has integral kernel
$$R(\alpha)(x,y) = \begin{pmatrix}\frac{1}{2\alpha}e^{-\alpha|x-y|} & 0\\ 0 & \frac{1}{2\sqrt{2-\alpha^2}}e^{-\sqrt{2-\alpha^2}|x-y|}\end{pmatrix}$$
for $\alpha > 0$. We expand $R(\alpha)$ as
$$R(\alpha) = \frac{1}{\alpha}R_{-1} + R_0 + \alpha R_1 + \alpha^2 R_R. \qquad (2.11)$$
These operators have the following integral kernels:
$$R_{-1}(x,y) = \begin{pmatrix}\frac12 & 0\\ 0 & 0\end{pmatrix}, \qquad R_0(x,y) = \begin{pmatrix}-\frac{|x-y|}{2} & 0\\ 0 & \frac{e^{-\sqrt2|x-y|}}{2\sqrt2}\end{pmatrix}, \qquad R_1(x,y) = \begin{pmatrix}\frac{|x-y|^2}{4} & 0\\ 0 & 0\end{pmatrix},$$
and for $\alpha > 0$ the remainder term $R_R$ is continuous in $\alpha$ and uniformly bounded as an operator from a weighted $L^2$ space (with sufficiently strong polynomial weight) to its dual. Moreover, since the entries of the full integral kernel $R(\alpha)(x,y)$ are bounded functions of $|x-y|$, we see that the entries of
$$R_R(x,y) = \frac{1}{\alpha^2}\left(R(\alpha)(x,y) - \left(\frac{1}{\alpha}R_{-1}(x,y) + R_0(x,y) + \alpha R_1(x,y)\right)\right)$$
grow at most quadratically in $|x-y|$ as $|x-y| \to \infty$. We also expand the potential $V^{(p)}$ in $\varepsilon := p-3$:
$$V^{(p)} = V_0 + \varepsilon V_1 + \varepsilon^2 V_2 + \varepsilon^3 V_R, \qquad (2.12)$$
and
This is a straightforward consequence of the spatial decay of theweights which surround the resolvent. The facts that ‖V −1/20 ‖ ≤ Ce|x|, andthat ‖V 1/20 ‖ ≤ Ce−|x|, while each of ‖V0‖, ‖V1‖, ‖V2‖ and ‖VR‖ can bebounded by Ce−3|x|/2 (say if we restrict to |ε| < 12) imply easily that theseoperators all have square integrable integral kernels.Remark 2.2.2. The same decay estimates for the potentials used in theproof of Lemma 2.2.1 show that for α > 0 and w ∈ L2 solving (2.9) the corre-sponding eigenfunction of (2.5) u = V−1/20 w lies in L2 and so the eigenvaluez = 1−α2 is in fact a true eigenvalue. Indeed w ∈ L2 =⇒ V (p)V −1/20 w ∈ L2and so u = −R(α)V (p)V −1/20 w ∈ L2, since the free resolvent R(α) preservesL2 for α > 0 .We will also need the projections P and P which are defined as follows:for f ∈ L2 letPf :=〈v, f〉v‖v‖2 , v := V1/20(10)as well as the complementary P := 1−P . A direct computation shows thatfor any f ∈ L2 we haveK−10f = −4Pf. (2.15)Note that all operators in the expansion containing R−1 return outputs inthe direction of v.Lemma 2.2.3. The operator P (K00 + 1)P has a one dimensional kernelspanned byw0 := V1/20 u0as an operator from Ran(P ) to Ran(P ).Proof. First note that by (2.7)−V0u0 = σ3u0 − H˜u0, [−V0u0]1 = [u0]′′1 (2.16)232.2. The Perturbed and Unperturbed Operatorsfrom which it follows thatPw0 = 0, i.e. w0 ∈ Ran(P ).Then a direct computation using (2.16), the expansion (2.14), the expressionfor R0, and integration by parts, shows that(K00 + 1)w0 = 2vand so indeed P (K00 + 1)Pw0 = 0.Theorem 5.2 in [48] shows that the kernel of the analogous scalar operatorcan be at most one dimensional. We will use this argument, adapted to thevector structure, to show that any two non-zero elements of the kernel mustbe multiples of each other. Take w ∈ L2 with 〈w, v〉 = 0 and P (K00 +1)w =0. That is (K00 + 1)w = cv for some constant c. This means−V 1/20 R0V0V −1/20 w + w = cV 1/20(10).Let w = V1/20 u where u =(u1u2). 
We then obtain, after rearranging andexpanding(u1u2)=(c− 12∫R |x− y|Q2(y) (2u1(y) + u2(y)) dy12√2∫R exp(−√2|x− y|)Q2(y)(u1(y) + 2u2(y))dy).We now rearrange the first component. Expand−12∫R|x− y|Q2(y)(2u1(y) + u2(y))dy=− 12∫ x−∞(x− y)Q2(y)(2u1(y) + u2(y))dy− 12∫ ∞x(y − x)Q2(y)(2u1(y) + u2(y))dyand rewrite the first term as− x2∫ x−∞Q2(y)(2u1(y) + u2(y))dy +12∫ x−∞yQ2(y)(2u1(y) + u2(y))dy=x2∫ ∞xQ2(y)(2u1(y) + u2(y))dy + b− 12∫ ∞xyQ2(y)(2u1(y) + u2(y))dy242.2. The Perturbed and Unperturbed Operatorswhereb :=12∫RyQ2(y)(2u1(y) + u2(y))dyand where we used∫R 2Q2u1 + Q2u2 = 0 since 〈w, v〉 = 0. So puttingeverything back together we see(u1u2)=(c+ b+∫∞x (x− y)Q2(y) (2u1(y) + u2(y)) dy12√2∫R exp(−√2|x− y|)Q2(y)(u1(y) + 2u2(y))dy). (2.17)We claim that as x→∞(u1u2)→(c+ b0).Observe ∣∣∣∣ ∫ ∞x(x− y)Q2(y) (2u1(y) + u2(y)) dy∣∣∣∣≤∫ ∞x|y − x|Q2(y)|2u1(y) + u2(y)|dy≤∫ ∞x|y|Q2(y)|2u1(y) + u2(y)|dy→ 0as x→∞. Here we have used the fact that w ∈ L2 implies Q|2u1 +u2| ∈ L2and that |y|Q ∈ L2. As well, in the second component∫Re−√2|x−y|Q2(y)(u1(y) + 2u2(y))dy=e−√2x∫ x−∞e√2yQ2(y)(u1(y) + 2u2(y))dy+ e√2x∫ ∞xe−√2yQ2(y)(u1(y) + 2u2(y))dy252.2. The Perturbed and Unperturbed Operatorsand∣∣∣∣e−√2x ∫ x−∞ e√2yQ2(y)(u1(y) + 2u2(y))dy∣∣∣∣≤ e−√2x∫ x−∞e√2yQ2(y)|u1(y) + 2u2(y)|dy≤ e−√2x(∫ x−∞e2√2yQ2(y)dy)1/2(∫ x−∞Q2(y)|u1(y) + 2u2(y)|2dy)1/2≤ Ce−√2x(∫ x−∞e2√2yQ2(y)dy)1/2≤ Ce−√2x(∫ x−∞e2√2ye−2ydy)1/2≤ Ce−√2x(e−2√2xe−2x)1/2 ≤ Ce−x → 0, x→∞where we again used Q|u1 + 2u2| ∈ L2. Similarly,∣∣∣∣e√2x ∫ ∞xe−√2yQ2(y)(u1(y) + 2u2(y))dy∣∣∣∣→ 0as x→∞ which addresses the claim.Next we claim that if c + b = 0 in (2.17) then u ≡ 0. To address theclaim we first note that if c + b = 0 then u ≡ 0 for all x ≥ X for some X,by estimates similar to those just done. Finally, we appeal to ODE theory.Differentiating (2.17) in x twice returns the systemu′′1 = −2Q2u1 −Q2u2 (2.18)u′′2 − 2u2 = −Q2u1 − 2Q2u2. 
(2.19)Any solution u to the above with u ≡ 0 for all large enough x must beidentically zero.With the claim in hand we finish the argument. Given two non-zeroelements of the kernel, say u and u˜ with limits as x→∞ (written as above)c+ b and c˜+ b˜ respectively, the combinationu∗ = u− c+ bc˜+ b˜u˜satisfies (2.17) but with u∗(x) → 0 as x → ∞, and so u∗ ≡ 0. Therefore, uand u˜ are linearly dependent, as required.262.3. Bifurcation AnalysisNote that K00, and hence P (K00 + 1)P , is self-adjoint. IndeedK00 = −V 1/20 R0V0V −1/20= −V −1/20 V0R0V 1/20= (K00)∗.As we have seen above in Lemma 2.2.1, thanks to the decay of the poten-tial, PK00P is a compact operator. Therefore, the simple eigenvalue −1 ofPK00P is isolated and so(P (K00 + 1)P )−1 : {v, w0}⊥ → {v, w0}⊥ (2.20)exists and is bounded.With the above preliminary facts assembled, we proceed to the bifurca-tion analysis.2.3 Bifurcation AnalysisThis section is devoted to the proof of the main result of Chapter 2:Theorem 2.3.1. There exists ε0 > 0 such that for −ε0 ≤ ε ≤ ε0 theeigenvalue problem (2.13) has a solution (α,w) of the formw = w0 + εw1 + ε2w2 + w˜α = ε2α2 + α˜(2.21)where α2 > 0, w0, w1, w2 are known (given below), and |α˜| < C|ε|3 and‖w˜‖L2 < C|ε|3 for some C > 0.Remark 2.3.2. This theorem confirms the behaviour observed numericallyin [17]: for p 6= 3 but close to 3, the linearized operator JHˆ (which is uni-tarily equivalent to iLp) has true, purely imaginary eigenvalues in the gapbetween the branches of essential spectrum, which approach the thresholdsas p → 3. Note Remark 2.2.2 to see that u = V −1/20 w is a true L2 eigen-function of (2.5). In addition, the eigenfunction approaches the resonanceeigenfunction in some weighted L2 space. Furthermore, we have found thatα2, the distance of the eigenvalues from the thresholds, is to leading orderproportional to (p − 3)4. Finally, note that α = ε2α2 + O(ε3) with α2 > 0gives α > 0 for both ε > 0 and ε < 0, ensuring the eigenvalues appear onboth sides of p = 3.272.3. 
The quantities in (2.21) are defined as follows:

    w_0 := V_0^{1/2} u_0

    P w_1 := \frac14 K_{-11} w_0

    \bar P w_1 := -\left(\bar P (K_{00}+1) \bar P\right)^{-1}\left( \frac14 \bar P K_{00} K_{-11} w_0 + \bar P K_{01} w_0 \right)

    P w_2 := \frac14\left( K_{-11} w_1 + K_{-12} w_0 + \alpha_2 (K_{00}+1) w_0 \right)

    \bar P w_2 := -\left(\bar P (K_{00}+1) \bar P\right)^{-1}\left( \frac14 \bar P K_{00} K_{-11} w_1 + \frac14 \bar P K_{00} K_{-12} w_0 + \frac{\alpha_2}{4} \bar P K_{00} (K_{00}+1) w_0 + \bar P K_{01} w_1 + \bar P K_{02} w_0 + \alpha_2 \bar P K_{10} w_0 \right)

    \alpha_2 := \frac{ -\frac14 \langle w_0, K_{00} K_{-11} w_1 \rangle - \frac14 \langle w_0, K_{00} K_{-12} w_0 \rangle - \langle w_0, K_{01} w_1 \rangle - \langle w_0, K_{02} w_0 \rangle }{ \langle w_0, K_{10} w_0 \rangle + \frac14 \langle w_0, K_{00} (K_{00}+1) w_0 \rangle }.

Remark 2.3.3. A numerical computation shows

    \alpha_2 \approx 2.52/8 > 0.

Since the positivity of \alpha_2 is crucial to the main result, details of this computation are described in Section 2.4.

Note that the functions on which \bar P (K_{00}+1) \bar P is being inverted in the expressions for \bar P w_1 and \bar P w_2 are orthogonal to both w_0 and v, and so these quantities are well-defined by (2.20). The projections onto v vanish by the presence of \bar P. As for the projections onto w_0, the identity

    \langle w_0, \tfrac14 K_{00} K_{-11} w_0 + K_{01} w_0 \rangle = 0    (2.22)

has been verified analytically. It is because of this identity that the O(\varepsilon) term is absent in the expansion of \alpha in (2.21). The fact that

    0 = \langle w_0, \tfrac14 K_{00} K_{-11} w_1 + \tfrac14 K_{00} K_{-12} w_0 + \tfrac{\alpha_2}{4} K_{00}(K_{00}+1) w_0 + K_{01} w_1 + K_{02} w_0 + \alpha_2 K_{10} w_0 \rangle

comes from our definition of \alpha_2.

The above definitions, along with (2.15), imply the relationships

    0 = K_{-10} w_0    (2.23)
    0 = K_{-11} w_0 + K_{-10} w_1    (2.24)
    0 = K_{-10} w_2 + K_{-11} w_1 + K_{-12} w_0 + \alpha_2 (K_{00}+1) w_0    (2.25)
    0 = \bar P (K_{00}+1) w_1 + \bar P K_{01} w_0    (2.26)
    0 = \bar P (K_{00}+1) w_2 + \bar P K_{01} w_1 + \bar P K_{02} w_0 + \alpha_2 \bar P K_{10} w_0    (2.27)

which we will use in what follows.

Using the expression for \alpha in (2.21), our expansion (2.14) for K_{\alpha,\varepsilon} now takes the form

    K_{\alpha,\varepsilon} = \frac{1}{\alpha}\left( K_{-10} + \varepsilon K_{-11} + \varepsilon^2 K_{-12} + \varepsilon^3 K_{R1} \right) + K_{00} + \varepsilon K_{01} + \varepsilon^2 K_{02} + \varepsilon^3 K_{R2} + (\alpha_2\varepsilon^2 + \tilde\alpha) K_{10} + (\alpha_2\varepsilon^2 + \tilde\alpha)\varepsilon K_{R3} + (\alpha_2\varepsilon^2 + \tilde\alpha)^2 K_{R4}

    =: \frac{1}{\alpha}\left( K_{-10} + \varepsilon K_{-11} + \varepsilon^2 K_{-12} + \varepsilon^3 K_{R1} \right) + K_{00} + \varepsilon K_1 + \tilde\alpha K_2

where K_1 is a bounded (uniformly in \varepsilon) operator depending on \varepsilon but not \tilde\alpha, while K_2 is a bounded (uniformly in \varepsilon and \tilde\alpha) operator depending on both \varepsilon and \tilde\alpha.

Further decomposing

    \tilde w = \beta v + W, \quad \langle W, v \rangle = 0,

we aim to show existence of a solution with the remainder terms \tilde\alpha, \beta and W small.
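The orthogonality to v used for well-definedness above can be made concrete by quadrature: since v = V_0^{1/2}(1,0)^T and w_0 = V_0^{1/2}u_0, one computes ⟨v, w_0⟩ = ∫ [V_0 u_0]_1 dx = ∫ (4Q^2 − 3Q^4) dx with Q^2 = 2 sech^2 x, which vanishes since ∫Q^2 = 4 and ∫Q^4 = 16/3. A small spot-check (illustrative only, not part of the proof):

```python
import math

def Q2(x):
    # Q^2 = Q_3^2(x) = 2 sech^2 x
    s = 1.0 / math.cosh(x)
    return 2.0 * s * s

def integrate(f, a=-30.0, b=30.0, n=6000):
    # trapezoid rule; all integrands here decay like e^{-2|x|}
    h = (b - a) / n
    tot = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        tot += f(a + i * h)
    return tot * h

inner_vw0 = integrate(lambda x: 4.0 * Q2(x) - 3.0 * Q2(x) ** 2)
print(inner_vw0)  # ~0, consistent with <v, w0> = 0
```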
We do so via a Lyapunov-Schmidt reduction.First substitute (2.21) to (2.13) and apply the projection P to obtain0 = P (Kα,ε + 1)w= P (Kα,ε + 1)(w0 + εw1 + ε2w2 + βv +W )= P (K00 + 1)w0 + εP (K00 + 1)w1 + εPK01w0+ ε2P (K00 + 1)w2 + ε2PK01w1 + ε2PK02w0 + ε2α2PK10w0+ P (K00 + 1)(βv +W ) + α˜PK10w0 + P(εK1 + α˜K2)(βv +W )+ ε3P(KR2w0 +K02w1 +K01w2 + εK02w2 + εKR2w1 + ε2KR2w2)+ (α2ε2 + α˜)PK10(εw1 + ε2w2) + (α2ε2 + α˜)εPKR3(w0 + εw1 + ε2w2)+ (α2ε2 + α˜)2PKR4(w0 + εw1 + ε2w2).(2.28)292.3. Bifurcation AnalysisMaking some cancellations coming from Lemma 2.2.3, (2.26) and (2.27)leads to−P (K00 + 1)PW =βPK00v + α˜PK10w0 + P(εK1 + α˜K2)(βv +W )+ ε3P(KR2w0 +K02w1 +K01w2 + εK02w2 + εKR2w1 + ε2KR2w2)+ (α2ε2 + α˜)PK10(εw1 + ε2w2) + (α2ε2 + α˜)εPKR3(w0 + εw1 + ε2w2)+ (α2ε2 + α˜)2PKR4(w0 + εw1 + ε2w2)=: F(W ; ε, α˜, β).According to (2.20), inversion of P (K00 + 1)P on F requires the solv-ability conditionP0F = 0, P0 := 1‖w0‖22〈w0, ·〉w0, P 0 := 1− P0 (2.29)which we solve together with the fixed point problemW =(−P (K00 + 1)P )−1 P 0F(W ; ε, α˜, β) =: G(W ; ε, α˜, β) (2.30)in order to solve (2.28).WriteF := P (βK00v + α˜K10w0 + (εK1 + α˜K2) (βv +W ) + ε3f1 + εα˜f2 + α˜2h1)where f1 and f2 denote functions depending on (and L2 bounded uniformlyin) ε but not α˜, while h1 denotes an L2 function depending on (and uniformlyL2 bounded in) both ε and α˜.Lemma 2.3.4. For any M > 0 there exists ε0 > 0 and R > 0 such that forall −ε0 ≤ ε ≤ ε0 and for all α˜ and β with |α˜| ≤M |ε|3 and |β| ≤M |ε|3 thereexists a unique solution W ∈ L2 ∩ {v, w0}⊥ of (2.30) satisfying ‖W‖L2 ≤R|ε|3.Proof. We prove this by means of Banach Fixed Point Theorem. We mustshow that G(W ) maps the closed ball of radius R|ε|3 into itself and thatG(W ) is a contraction mapping. Taking W ∈ L2 orthogonal to v and w0 suchthat ‖W‖L2 ≤ R|ε|3 and given M > 0 where |α˜| ≤ M |ε|3 and |β| ≤ M |ε|3,302.3. 
Bifurcation Analysiswe have, using the boundedness of(−P (K00 + 1)P )−1 P 0,‖G‖L2≤ C|β|‖PK00v + P(εK1 + α˜K2)v‖L2 + C|α˜|‖P (K10w0 + εf2 + α˜h1) ‖L2+ C‖P (εK1 + α˜K2)W‖L2 + |ε|3C‖Pf1‖L2≤ CM |ε|3 + CM |ε|3 + C|ε|‖W‖L2 + C|α˜|‖W‖L2 + C|ε|3≤ C|ε|3 + CR|ε|4≤ R|ε|3for some appropriately chosen R with |ε| small enough. Here C is a positive,finite constant whose value changes at each appearance. Next consider‖G(W1)− G(W2)‖L2≤ C‖P (εK1 + α˜K2) ‖L2→L2‖W1 −W2‖L2≤ C|ε|‖P K1‖L2→L2‖W1 −W2‖L2 + C|α˜|‖P K2‖L2→L2‖W1 −W2‖L2≤ C|ε|‖W1 −W2‖L2 ≤ κ‖W1 −W2‖L2with 0 < κ < 1 by taking |ε| sufficiently small. Hence G(W ) is a contraction,and we obtain the desired result.Lemma 2.3.4 provides W as a function of α˜ and β, which we may thensubstitute into (2.29) to get0 = 〈w0,F〉= β〈w0,K00v〉+ α˜〈w0,K10w0〉+ εβ〈w0,K1v〉+ α˜β〈w0,K2v〉+ ε3〈w0, f1〉+ εα˜〈w0, f2〉+ α˜2〈w0, h1〉+ ε〈w0,K1W 〉+ α˜〈w0,K2W 〉=: β〈w0,K00v〉+ α˜〈w0,K10w0〉+ F1 (2.31)which is the first of two equations relating α˜ and β.The second equation is the complementary one to (2.28): substitute312.3. 
Bifurcation Analysis(2.21) to (2.13) but this time multiply by α and take projection P to see0 = αP (Kα,ε + 1)w= K−10w0 + ε(K−11w0 +K−10w1)+ ε2 (K−10w2 +K−11w1 +K−12w0) + ε2α2(K00 + 1)w0+ ε3(K−11w2 +K−12w1 +KR1w0 + εK−12w2 + εKR1w1 + ε2KR1w2)+ βK−10v +K−10W + ε(K−11 + εK−12 + ε2KR1)(βv +W )+ α˜(K00 + 1)w0 + ε3α2P (K00 + 1)(w1 + εw2) + εα˜P (K00 + 1)(w1 + εw2)+ ε2α2P (K00 + 1)(βv +W ) + α˜P (K00 + 1)(βv +W )+ αP (εK01 + ε2K02 + ε3KR2 + αK10 + αεKR3 + α2KR4)× (w0 + εw1 + ε2w2 + βv +W ).(2.32)After using known information about w0, w1, w2, α2 coming from (2.23),(2.24), (2.25) and noting that K−10W = −4PW = 0 from (2.15) we have0 = βK−10v + α˜(K00 + 1)w0+ ε3(K−11w2 +K−12w1 +KR1w0 + εK−12w2 + εKR1w1 + ε2KR1w2)+ ε(K−11 + εK−12 + ε2KR1)(βv +W )+ ε3α2P (K00 + 1)(w1 + εw2) + εα˜P (K00 + 1)(w1 + εw2)+ ε2α2P (K00 + 1)(βv +W ) + α˜P (K00 + 1)(βv +W )+ αP (εK01 + ε2K02 + ε3KR2 + αK10 + αεKR3 + α2KR4)× (w0 + εw1 + ε2w2 + βv +W ).Written more compactly, this is0 =βK−10v + α˜(K00 + 1)w0+ ε3f4 + εK3(βv +W ) + α˜εf5 + α˜K4(βv +W ) + α˜2h2where K3 is a bounded (uniformly in ε) operator containing ε but not α˜,while K4 is a bounded (uniformly in ε and α˜) operator containing both εand α˜. Functions f4 and f5 depend on ε (and are uniformly L2-bounded)but not α˜, while the function h2 depends on both ε and α˜ (and is uniformlyL2-bounded). To make the relationship between α˜ and β more explicit we322.3. Bifurcation Analysistake inner product with v0 = β〈v,K−10v〉+ α˜〈v, (K00 + 1)w0〉+ ε3〈v, f4〉+ ε〈v,K3(βv +W )〉+ α˜ε〈v, f5〉+ α˜〈v,K4(βv +W )〉+ α˜2〈v, h2〉=: β〈v,K−10v〉+ α˜〈v, (K00 + 1)w0〉+ F2. (2.33)Now let~ζ =(α˜β)and rewrite (2.31) and (2.33) in the following wayA~ζ :=( 〈w0,K10w0〉 〈w0,K00v〉〈v, (K00 + 1)w0〉 〈v,K−10v〉)(α˜β)=(F1F2)which we recast as a fixed point problem~ζ = A−1(F1F2)=: ~F (α˜, β; ε). (2.34)We have computedA =(0 1616 −32)so in particular, A is invertible. We wish to show there is a solution (α˜, β)of (2.34) of the appropriate size. We establish this fact in the followingLemmas. 
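Two entries of A admit elementary cross-checks: by Lemma 2.2.3, (K_{00}+1)w_0 = 2v, so ⟨v, (K_{00}+1)w_0⟩ = 2‖v‖^2, while (2.15) gives ⟨v, K_{-10}v⟩ = −4‖v‖^2; and ‖v‖^2 = ∫ 2Q^2 dx = 8, since |V_0^{1/2}(1,0)^T|^2 = 2Q^2 pointwise. A hedged sketch verifying these two entries and the invertibility of A (the top row is quoted from the text, not recomputed):

```python
import math

def Q2(x):
    # Q^2 = 2 sech^2 x
    s = 1.0 / math.cosh(x)
    return 2.0 * s * s

def integrate(f, a=-30.0, b=30.0, n=6000):
    h = (b - a) / n
    tot = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        tot += f(a + i * h)
    return tot * h

v_norm_sq = integrate(lambda x: 2.0 * Q2(x))   # ||v||^2, should be 8
A = [[0.0, 16.0],                              # <w0, K10 w0>, <w0, K00 v>   (from the text)
     [2.0 * v_norm_sq, -4.0 * v_norm_sq]]      # <v, (K00+1)w0>, <v, K-10 v>
det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
print(v_norm_sq, A[1], det_A)  # ~8, [~16, ~-32], det ~ -256, so A is invertible
```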
Lemmas 2.3.5 and 2.3.6 are accessory to Lemma 2.3.7.Lemma 2.3.5. The operators and functions K2, K4 and h1, h2 are contin-uous in α˜ > 0.Proof. The operators and function in question are compositions of continu-ous functions of α˜.Lemma 2.3.6. The W given by Lemma 2.3.4 is continuous in ~ζ for suffi-ciently small |ε|.Proof. Let (α˜1, β1) give rise to W1 and let (α˜2, β2) give rise to W2 via Lemma2.3.4. Take |α˜1−α˜2| < δ and |β1−β2| < δ. We show that ‖W1−W2‖L2 < Cδ332.3. Bifurcation Analysisfor some constant C > 0. Observing K2 depends on α˜, we see‖W1 −W2‖L2 =‖ (P (K00 + 1)P )−1 P 0‖L2→L2‖F(W1, ~ζ1; ε)−F(W2, ~ζ2; ε)‖L2≤ C∥∥∥∥(β1 − β2)K00v + (α˜1 − α˜2)K10w0 + ε(β1 − β2)K1v+ εK1(W1 −W2) + α˜1β1K2(α˜1)v − α˜2β2K2(α˜2)v + α˜1K2(α˜1)W1− α˜2K2(α˜2)W2 + ε(α˜1 − α˜2)f2 + α˜21h1(α˜1)− α˜22h1(α˜2)∥∥∥∥L2≤ Cδ + C|ε|‖W1 −W2‖L2+ ‖α˜1K2(α˜1)(W1 −W2) +(α˜1K2(α˜1)− α˜2K2(α˜2))W2‖L2≤ Cδ + C|ε|‖W1 −W2‖L2noting that |α˜1| ≤M |ε|3. Rearranging the above gives‖W1 −W2‖L2 < Cδfor small enough |ε|.Lemma 2.3.7. There exists ε0 > 0 such that for all −ε0 ≤ ε ≤ ε0 theequation (2.34) has a fixed point with |α˜|, |β| ≤M |ε|3 for some M > 0.Proof. We prove this by means of the Brouwer Fixed Point Theorem. Weshow that ~F maps a closed square into itself and that ~F is a continu-ous function. Take |α˜|, |β| ≤ M |ε|3 and and so by Lemma 2.3.4 we have‖W‖L2 ≤ |ε|3R for some R > 0. Consider now‖A−1‖ |F1|≤ ‖A−1‖(|ε||β||〈w0,K1v〉|+ |α˜||β||〈w0,K2v〉|+ |ε|3|〈w0, f1〉|+ |ε||α˜||〈w0, f2〉|+ |α˜|2|〈w0, h1〉|+ |ε||〈w0,K1W 〉|+ |α˜||〈w0,K2W 〉|)≤ CM |ε|4 + CM2|ε|6 + C|ε|3 + CM |ε|4 + CM2|ε|6 + CR|ε|4≤ C|ε|3 + CM |ε|4 ≤M |ε|3342.3. Bifurcation Analysisand‖A−1‖ |F2|≤ ‖A−1‖(|ε|3|〈v, f4〉|+ |ε||〈v,K3(βv +W )〉|+ |α˜||ε||〈v, f5〉|+ |α˜||〈v,K4(βv +W )〉|+ |α˜|2|〈v, h2〉|)≤ C|ε|3 + CM |ε|4 + CR|ε|4 + CM |ε|4 + CM2|ε|6 + CMR|ε|6 + CM2|ε|6≤ C|ε|3 + CM |ε|4 ≤M |ε|3for some choice of M > 0 and sufficiently small |ε| > 0. Here C > 0 is aconstant that is different at each instant. 
So ~F maps the closed square toitself.It is left to show that ~F is continuous. Given η > 0 take |α˜1 − α˜2| < δand |β1 − β2| < δ. Let (α˜1, β1) give rise to W1 and let (α˜2, β2) give rise toW2 via Lemma 2.3.4. We will also use Lemma 2.3.5 and Lemma 2.3.6. Nowconsider|F1(α˜1, β1)−F1(α˜2, β2)|=∣∣∣ε(β1 − β2)〈w0,K1v〉+ α˜1β1〈w0,K2(α˜1)v〉 − α˜2β2〈w0,K2(α˜2)v〉+ ε(α˜1 − α˜2)〈w0, f2〉+ α˜21〈w0, h1(α˜1)〉 − α˜22〈w0, h1(α˜2)〉+ ε〈w0,K1(W1 −W2)〉+ α˜1〈w0,K2(α˜1)W1〉 − α˜2〈w0,K2(α˜2)W2〉∣∣∣≤ Cδ + C‖h1(α˜1)− h1(α˜2)‖L2+ C‖W1 −W2‖L2 + C‖K2(α˜1)−K2(α˜2)‖L2→L2≤ Cδ < η‖A−1‖√2for small enough δ. Similarly we can show|F2(α˜1, β1)−F2(α˜2, β2)| ≤ Cδ < η‖A−1‖√2for δ small enough. Putting everything together gives |~F (~ζ1) − ~F (~ζ2)| < ηas required. Hence ~F is continuous.So finally we have solved both (2.28) and (2.32), and hence (2.13), andso have proved Theorem Comments on the Computations2.4 Comments on the ComputationsAnalytical and numerical computations were used in the above to computeinner products such as the ones appearing in the definition of α2 (2.21). Itwas critical to establish that α2 > 0 since the expansion of the resolventR(α) (2.10) requires α > 0. Inner products containing w0 but not w1 canbe written as an explicit single integral and then evaluated analytically ornumerically with good accuracy. For example〈w0,K02w0〉+ 14〈w0,K00K−12w0〉=− 12∫R2|x− y|(4Q2(x)− 3Q4(x))× (Q2(y)q1(y)− q1(y) + 3Q2(y)q2(y)− 4q2(y)− c22Q2(y))dydx+12√2∫R2e−√2|x−y|(2Q2(x)− 3Q4(x))× (Q2(y)q1(y)− q1(y) + 3Q2(y)q2(y)− 2q2(y)− c24Q2(y))dydx=−∫RQ2(y)(Q2(y)q1(y)− q1(y) + 3Q2(y)q2(y)− 4q2(y)− c22Q2(y))dy−∫RQ2(y)(Q2(y)q1(y)− q1(y) + 3Q2(y)q2(y)− 2q2(y)− c24Q2(y))dy≈− 2.9369wherec2 =12∫RQ2q1 − q1 + 3Q2q2 − 4q2.To reduce the double integral to a single integral we recall some facts aboutthe integral kernels. Leth(y) = −12∫R|x− y|(4Q2(x)− 3Q4(x))dx.Then h solves the equationh′′ = −4Q2 + 3Q4.Notice that −4Q2 + 3Q4 = −2Q2u1−Q2u2 where u1 and u2 are the compo-nents of the resonance u0 (2.6). 
Observing the equation (2.18) we see that h = u_1 + c = 2 - Q^2 + c for some constant c. We can directly compute h(0) = -2 to find c = -2 and so h = -Q^2. A similar argument involving (2.19) gives

    \frac{1}{2\sqrt 2} \int_{\mathbb R} e^{-\sqrt 2 |x-y|}\left(2Q^2(x) - 3Q^4(x)\right) dx = u_2(y) = -Q^2(y).

Many of the inner products can be computed analytically. These include the identity (2.22), the entries in the matrix A in (2.34), and the denominator appearing in the expression for \alpha_2. As an example we evaluate the denominator of \alpha_2:

    \langle w_0, K_{10} w_0 \rangle + \frac14 \langle w_0, K_{00}(K_{00}+1) w_0 \rangle
    = -\int\!\!\int \left(3Q^4(x) - 4Q^2(x)\right) \frac{(x-y)^2}{4} \left(3Q^4(y) - 4Q^2(y)\right) dy\, dx
      + \frac12 \int\!\!\int \left(4Q^2(x) - 3Q^4(x)\right) |x-y|\, Q^2(y)\, dy\, dx
      - \frac{1}{4\sqrt 2} \int\!\!\int \left(4Q^2(x) - 3Q^4(x)\right) e^{-\sqrt 2 |x-y|} Q^2(y)\, dy\, dx
    = \frac32 \int_{\mathbb R} Q^4(y)\, dy = 8,

where the first integral is zero by a direct computation and the remaining double integrals are converted to single integrals as above.

Computing inner products containing w_1 is harder. We have an explicit expression for P w_1 but lack an explicit expression for \bar P w_1. Therefore we approximate \bar P w_1 by numerically inverting \bar P (K_{00}+1) \bar P in

    \bar P (K_{00}+1) \bar P w_1 = -\left( \frac14 \bar P K_{00} K_{-11} w_0 + \bar P K_{01} w_0 \right) =: g.

Note that \langle g, v \rangle = \langle g, w_0 \rangle = 0. We represent \bar P (K_{00}+1) \bar P as a matrix with respect to a basis \{\phi_j\}_{j=1}^N. The basis is formed by taking terms from the typical Fourier basis and projecting out the components of each function in the direction of v and w_0. Some basis functions were removed to ensure linear independence of the basis. Let \bar P w_1 = \sum_{j=1}^N a_j \phi_j. Then

    B \vec a = \vec b

where B_{j,k} = \langle \phi_j, (K_{00}+1)\phi_k \rangle and b_j = \langle \phi_j, g \rangle. So we can solve for \vec a by inverting the matrix B. Once we have an approximation for \bar P w_1 we can compute \bar P (K_{00}+1) \bar P w_1 directly to observe agreement with the function g. With this agreement we are confident in our numerical algorithm and that our numerical approximation for \bar P w_1 is accurate. In Figure 2.1 we show the two components of \bar P w_1 as computed numerically.

[Figure 2.1: The two components of \bar P w_1 computed numerically with 32 basis terms.]
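Both kernel reductions (each claimed to equal −Q^2) and the value 8 of the denominator can be spot-checked by direct quadrature; a sketch, using only Q^2 = 2 sech^2 x:

```python
import math

def Q2(x):
    # Q^2 = 2 sech^2 x
    s = 1.0 / math.cosh(x)
    return 2.0 * s * s

def integrate(f, a=-30.0, b=30.0, n=6000):
    h = (b - a) / n
    tot = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        tot += f(a + i * h)
    return tot * h

def h_abs(x):
    # -(1/2) * int |x-y| (4Q^2 - 3Q^4) dy, claimed to equal -Q^2(x)
    return -0.5 * integrate(lambda y: abs(x - y) * (4.0 * Q2(y) - 3.0 * Q2(y) ** 2))

def h_exp(x):
    # (1/(2 sqrt 2)) * int e^{-sqrt2 |x-y|} (2Q^2 - 3Q^4) dy, claimed to equal -Q^2(x)
    r = math.sqrt(2.0)
    return (0.5 / r) * integrate(lambda y: math.exp(-r * abs(x - y)) * (2.0 * Q2(y) - 3.0 * Q2(y) ** 2))

err = max(max(abs(h_abs(x) + Q2(x)), abs(h_exp(x) + Q2(x))) for x in (0.0, 0.7, 2.0))
denom = 1.5 * integrate(lambda y: Q2(y) ** 2)  # (3/2) int Q^4, should be 8
print(err, denom)
```

The residual `err` is small (limited by the kink of the kernels at y = x under the trapezoid rule), and `denom` reproduces the analytic value 8.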
Figure 2.2 shows the components of the function g with the computed \bar P (K_{00}+1) \bar P w_1 on top.

With an approximation for \bar P w_1 in hand we can combine it with our explicit expression for P w_1 and compute inner products containing w_1 in the same way as the previous inner product containing w_0. In this way we establish that \alpha_2 > 0. We list computed values for the numerator of \alpha_2 against the number of basis terms used in Table 2.1.

[Figure 2.2: The two components of the function g with the computed \bar P (K_{00}+1) \bar P w_1 on top. Again 32 basis terms were used in this computation. At this scale the difference can only be seen around zero and at the endpoints.]

    Number of Basis Terms    8\alpha_2
    20                       2.4992
    24                       2.5137
    28                       2.5189
    30                       2.5201
    32                       2.5207

Table 2.1: Numerical values for 8\alpha_2 for the number of basis terms used in the computation.

Chapter 3

Perturbations of the 3D Energy Critical NLS

In this chapter we prove Theorems 1.7.2, 1.7.7, and 1.7.10. The construction of the solitary wave profiles appears in Section 3.1, variational arguments which establish the solitary waves as ground states appear in Section 3.2, and the dynamical (scattering/blow-up) theory appears in Section 3.3.

3.1 Construction of Solitary Wave Profiles

This section is devoted to the proof of Theorem 1.7.2, constructing solitary wave profiles for the perturbed NLS via perturbation from the unperturbed static solution W.

3.1.1 Mathematical Setup

Let \lambda^2 = \omega with \lambda \geq 0. Now substitute (1.20) into (1.11) to see

    (-\Delta - 5W^4 + \lambda^2)\eta = -\lambda^2 W + \varepsilon f(W) + N(\eta)

where

    N(\eta) = (W + \eta)^5 - W^5 - 5W^4\eta + \varepsilon\left( f(W + \eta) - f(W) \right)

collects the higher order terms. We can rewrite the above as

    (H + \lambda^2)\eta = F, \quad H = -\Delta + V, \quad V = -5W^4    (3.1)

where

    F = F(\varepsilon, \lambda, \eta) = -\lambda^2 W + \varepsilon f(W) + N(\eta).

To understand the resolvent (H + \lambda^2)^{-1} for small \lambda, we follow [47]. Use the resolvent identity to write

    (H + \lambda^2)^{-1} = (1 + R_0(-\lambda^2)V)^{-1} R_0(-\lambda^2)
Construction of Solitary Wave ProfileswhereR0(ζ) = (−∆− ζ)−1is the free resolvent, and apply Lemma 4.3 of [47] to obtain the expansion(1 +R0(−λ2)V )−1 = − 1λ〈V ψ, ·〉ψ +O(1) (3.2)where ψ is the normalized resonance eigenfunction (1.18):ψ(x) =1√3piΛW (x),∫R3V ψ =√4pi.The above expansion is understood in [47] in weighted Sobolev spaces. Wechoose instead to work in higher Lp spaces. Precise statements are found inthe following Section 3.1.2.To eliminate the singular behaviour as λ→ 0 we require0 = 〈R0(−λ2)V ψ,F(ε, λ, η)〉. (3.3)Satisfying this condition determines λ = λ(ε, η). This is done in Section3.1.3. With this condition met, we can invert (3.1) to seeη = (H + λ2)−1F = (H + λ2(ε, η))−1F(ε, λ(ε, η), η) =: G(η, ε), (3.4)which can be solved for η via a fixed point argument. This is done inSection Resolvent EstimatesWe collect here some estimates that are necessary for the proof of Theorem1.7.2.In order to apply Lemma 4.3 of [47] and so to use the expansion (3.2)in what follows (Lemmas 3.1.1 and 3.1.4) we must have that the operatorH has no zero eigenvalue. However, it is true that H(∂W/∂xj) = 0 foreach j = 1, 2, 3. To this end, we restrict ourselves to considering only radialfunctions. In this way H has no zero eigenvalues and only the one resonance,ΛW (see [33]).The free resolvent operator R0(−λ2) for λ > 0 has integral kernelR0(−λ2)(x) = e−λ|x|4pi|x| . (3.5)413.1. Construction of Solitary Wave ProfilesAn application of Young’s inequality/generalized Young’s inequality givesthe bounds‖R0(−λ2)‖Lq→Lr . λ3(1/q−1/r)−2, 1 ≤ q ≤ r ≤ ∞ (3.6)‖R0(−λ2)‖Lqw→Lr . λ3(1/q−1/r)−2, 1 < q ≤ r <∞ (3.7)with 3(1/q − 1/r) < 2, as well as‖R0(−λ2)‖Lq→Lr . 1 (3.8)where 1 < q < 3/2 and 3(1/q − 1/r) = 2 (so 3 < r <∞). We will also needthe additional bound‖R0(−λ2)‖L32−∩L 32+→L∞ . 
1, (3.9)where the +/− means the bound holds for any exponent greater/less than3/2, to replace the fact that we do not have (3.8) for r =∞ and q = 3/2.Observe also that R0(0) = G0 has integral kernelG0(x) =14pi|x|and is formally (−∆)−1.We need also some facts about the operator (1 + R0(−λ2)V )−1. Theidea is that we can think of the full resolvent (1 + R0(−λ2)V )−1R0(−λ2)as behaving like the free resolvent R0(−λ2) providing we have a suitableorthogonality condition. Otherwise we lose a power of λ due to the non-invertibility of (1 +G0V ): indeed,ψ ∈ ker(1 +G0V ), V ψ ∈ ker ((1 +G0V )∗ = 1 + V G0) . (3.10)First we recall some results of [47]:Lemma 3.1.1. (Lemmas 2.2 and 4.3 from [47]) Let s satisfy 3/2 < s < 5/2and denote B = B(H1−s, H1−s) where H1−s is the weighted Sobolev space withnorm‖u‖H1−s = ‖(1 + |x|2)−s/2u‖H1 .Then for ζ with Imζ ≥ 0 we have the expansions1 +R0(ζ)V = 1 +G0V + iζ1/2G1V + o(ζ1/2)(1 +R0(ζ)V )−1 = −iζ−1/2〈·, V ψ〉ψ + C10 + o(1)423.1. Construction of Solitary Wave Profilesin B with |ζ| → 0. Here C10 is an explicit operator and G0 and G1 areconvolution with the kernelsG0(x) =14pi|x| , G1(x) =14pi.Remark 3.1.2. The expansion is also valid in B(L2−s, L2−s) where L2−s isthe weighted L2 space with norm‖u‖L2−s = ‖(1 + |x|2)−s/2u‖L2 .Remark 3.1.3. Since our potential only has decay |V (x)| . 〈x〉−4 ourexpansion has one less term than in [47] and we use 3/2 < s < 5/2 ratherthan 5/2 < s < 7/2.The following is a reformulation of Lemma 3.1.1 but using higher Lpspaces rather than weighted spaces. This reformulation was also used in[44].Lemma 3.1.4. Take 3 < r ≤ ∞ and λ > 0 small. Then‖(1 +R0(−λ2)V )−1f‖Lr . 1λ‖f‖Lr .If we also have 〈V ψ, f〉 = 0 then‖(1 +R0(−λ2)V )−1f‖Lr . ‖f‖Lrand‖(1 +R0(−λ2)V )−1f − Q¯(1 +G0V )−1P¯ f‖Lr.{λ1−3/r, 3 < r <∞λ log(1/λ), r =∞}‖f‖Lr (3.11)whereP :=1∫V ψ2〈V ψ, ·〉ψ, P¯ = 1− PQ :=1∫V ψ〈V, ·〉ψ, Q¯ = 1−Q(3.12)Proof. We start with the identityg := (1 +R0(−λ2)V )−1f = f −R0(−λ2)V (1 +R0(−λ2)V )−1f= f −R0(−λ2)V g433.1. 
Construction of Solitary Wave Profilesso‖g‖Lr . ‖f‖Lr + ‖R0(−λ2)V g‖Lr .We treat the above second term in two cases. For 3 < r < ∞ let 1/q =1/r + 2/3 and use (3.8) and for r =∞ use (3.9)‖R0(−λ2)V g‖Lr .{ ‖V g‖Lq , 3 < r <∞‖V g‖L3/2−∩L3/2+ , r =∞.{‖V 〈x〉2‖Lm‖g‖L2−2 , 3 < r <∞‖V 〈x〉2‖L6−∩L6+‖g‖L2−2 , r =∞. ‖g‖L2−2 .Here we used that |V (x)| . 〈x〉−4, and with 1/q = 1/m + 1/2 we have(4 − 2)m > 3. Finally we appeal to Lemma 3.1.1 and use the fact thatLr ⊂ L2−2 to see‖R0(−λ2)V g‖Lr . ‖(1 +R0(−λ2)V )−1f‖L2−2 .1λ‖f‖L2−2 .1λ‖f‖Lrwhere we can remove the factor of 1/λ if our orthogonality condition issatisfied.In light of (3.10),1 +G0V : Lr ∩ V ⊥ → Lr ∩ (V ψ)⊥is bijective, and so we treat the operator (1 +G0V )−1 as acting(1 +G0V )−1 : Lr ∩ (V ψ)⊥ → Lr ∩ V ⊥,which is the meaning of the expression Q¯(1+G0V )−1P¯ involving the projec-tions P¯ and Q¯. That the range should be taken to be V ⊥ is a consequenceof estimate (3.14) below.To prove (3.11), expandR0(−λ2) = G0 − λG1 + λ2R˜,R˜ :=1λ2(R0(−λ2)−G0 + λG1)=1λ(e−λ|x| − 1 + λ|x|4piλ|x|)∗443.1. Construction of Solitary Wave Profilesand consider f ∈ (V ψ)⊥ ∩ Lr with 3 < r ≤ ∞. We first establish theestimates‖h‖Lq .{1, 1 < q <∞log(1/λ), q = 1, h := V R˜V ψ, (3.13)|〈V, (1 +R0(−λ2)V )−1f〉| .{λ, 3 < r <∞λ log(1/λ), r =∞}‖f‖Lr . (3.14)For the purpose of these estimates we may make the following replacements:V ψ → 〈x〉−5, V → 〈x〉−4, and R˜(x)→ min(|x|, 1/λ). To establish (3.13) wemust therefore estimate〈x〉−4∫R3min(|y|, 1/λ)〈y − x〉−5dy,and we proceed in two parts:• Take |y| ≤ 2|x|. Then〈x〉−4∫|y|≤2|x|min(|y|, 1/λ)〈y − x〉−5dy. 〈x〉−4 min(|x|, 1/λ)∫〈y − x〉−5dy. 〈x〉−4 min(|x|, 1/λ)and‖〈x〉−4 min(|x|, 1/λ)‖qLq.∫ 10rq+2dr +∫ 1/λ1r−3q+2dr +1λ∫ ∞1/λr−4q+2dr. 1 +{1, q > 1log(1/λ), q = 1}+ λ4(q−1).{1, q > 1log(1/λ), q = 1.• Take |y| ≥ 2|x|. Then〈x〉−4∫|y|≥2|x|min(|y|, 1/λ)〈y − x〉−5dy . 〈x〉−4∫|y|〈y〉−5dy. 〈x〉−4and‖〈x〉−4‖Lq . 1.453.1. Construction of Solitary Wave ProfilesWith (3.13) established we now prove (3.14). 
Let g = (1+R0(−λ2)V )−1fand observe0 =1λ〈V ψ, f〉=1λ〈V ψ, (1 +R0(−λ2)V )g〉=1λ〈(1 + V R0(−λ2))(V ψ), g〉=1λ〈(1 + V (G0 − λG1 + λ2R˜))(V ψ), g〉= 〈(−V G1 + λV R˜)(V ψ), g〉= − 1√4pi〈V, g〉+ λ〈h, g〉noting that (1 + V G0)(V ψ) = 0. Now|〈V, g〉| . λ‖h‖Lr′‖g‖Lr . λ{1, 3 < r <∞log(1/λ), r =∞}‖f‖Lrapplying (3.13).With (3.14) in place we finish the argument. For f ∈ Lr ∩ (V ψ)⊥ wewriteg = (1 +R0(−λ2)V )−1f and g0 = (1 +G0V )−1f.We have0 = (1 +R0(−λ2)V )g − (1 +G0V )g0and so(1 +G0V )(g − g0) = −RˆV gwhere Rˆ = R0(−λ2) − G0. The above also implies RˆV g ⊥ V ψ. We invertto seeg − g0 = −(1 +G0V )−1RˆV g + αψnoting that ψ ∈ ker(1 +G0V ). Take now inner product with V to seeα〈V, ψ〉 = 〈V, g〉463.1. Construction of Solitary Wave Profilesand so|α| . |〈V, g〉| .{λ, 3 < r <∞λ log(1/λ), r =∞}‖f‖Lrobserving (3.14). It remains to estimate (1 +G0V )−1RˆV g. We note thatRˆ =(e−λ|x| − 14pi|x|)∗and so for estimates we may replace Rˆ(x) with min(λ, 1/|x|). There followsby Young’s inequality‖(1 +G0V )−1RˆV g‖Lr . ‖RˆV g‖Lr. ‖min(λ, 1/|x|)‖Lr‖V g‖L1. ‖min(λ, 1/|x|)‖Lr‖g‖Lr. λ1−3/r‖f‖Lr .And so after putting everything together we obtain (3.11).We end this section by recording pointwise estimates of the nonlineartermsN(η) = (W + η)5 −W 5 − 5W 4η + ε (f(W + η)− f(W )) .Bound the first three terms as follows:|(W + η)5 −W 5 − 5W 4η| .W 3η2 + |η|5.For the other term we use the Fundamental Theorem of Calculus and As-sumption 1.7.1 to see|f(W + η)− f(W )| =∣∣∣∣∫ 10∂δf(W + δη)dδ∣∣∣∣=∣∣∣∣∫ 10f ′(W + δη)ηdδ∣∣∣∣. |η| sup0<δ<1(|W + δη|p1−1 + |W + δη|p2−1). |η| (W p1−1 + |η|p1−1 +W p2−1 + |η|p2−1). |η| (W p1−1 +W p2−1)+ |η|p1 + |η|p2473.1. Construction of Solitary Wave Profilesand so together we have|N(η)| .W 3η2 + |η|5 + ε|η| (W p1−1 +W p2−1)+ ε|η|p1 + ε|η|p2 . (3.15)Similarly|f(W + η1)− f(W + η2)|. |η1 − η2|(W p1−1 + |η1|p1−1 + |η2|p1−1 +W p2−1 + |η1|p2−1 + |η2|p2−1)and so|N(η1)−N(η2)| . 
|η1 − η2|(|η1|+ |η2|)W 3 + |η1 − η2|(|η1|4 + |η2|4)+ ε|η1 − η2|(W p1−1 +W p2−1)+ ε|η1 − η2|(|η1|p1−1 + |η2|p1−1 + |η1|p2−1 + |η2|p2−1) .(3.16)3.1.3 Solving for the FrequencyWe are now in a position to construct solutions to (1.11) and so proveTheorem 1.7.2. The proof proceeds in two steps. In the present section, wewill solve for λ in (3.3) for a given small η. Then in the following Section 3.4,we will treat λ as a function of η and solve (3.4). Both steps involve fixedpoint arguments.We begin by computing the inner product (3.3). Write0 = 〈R0(−λ2)V ψ,F〉 = 〈R0(−λ2)V ψ,−λ2W + εf(W ) +N(η)〉so thatλ · λ〈R0(−λ2)V ψ,W 〉 = ε〈R0(−λ2)V ψ, f(W )〉+ 〈R0(−λ2)V ψ,N(η)〉.(3.17)It is our intention to find a solution λ of (3.17) of the appropriate size. Thisis done in Lemma 3.1.6 but we first make some estimates on the leadingorder inner products appearing above.Lemma 3.1.5. We have the estimates〈R0(−λ2)V ψ, f(W )〉 = −〈ψ, f(W )〉+O(λδ1) (3.18)λ〈R0(−λ2)V ψ,W 〉 = 2√3pi +O(λ) (3.19)where δ1 is defined in the statement of Theorem Construction of Solitary Wave ProfilesProof. Firstly〈R0(−λ2)V ψ, f(W )〉 = 〈G0V ψ, f(W )〉+ 〈(R0(−λ2)−R0(0))V ψ, f(W )〉First note that since Hψ = 0 we have V ψ = −(−∆ψ) so〈G0V ψ, f(W )〉 = 〈−(−∆)−1(−∆ψ), f(W )〉 = −〈ψ, f(W )〉.Note that this inner product is finite. For the other term use the resolventidentity R0(−λ2)−R0(0) = −λ2R0(−λ2)R0(0) to see〈(R0(−λ2)−R0(0))V ψ, f(W )〉 = λ2〈R0(−λ2)ψ, f(W )〉.Observe now thatλ2|〈R0(−λ2)ψ, f(W )〉| ≤ λ2‖R0(−λ2)ψ‖Lr‖f(W )‖Lr∗where 1/r + 1/r∗ = 1. Choose an r∗ > 1 with 3/p1 < r∗ < 3/2. In this wayf(W ) ∈ Lr∗ observing Assumption 1.7.1. We now apply (3.7) with q = 3noting that 3 < r <∞. Henceλ2|〈R0(−λ2)ψ, f(W )〉| . λ2 · λ3(1/3−1/r)−2‖ψ‖L3w‖f(W )‖Lr∗. λ1−3/r.If p1 ≥ 3 we can take r as large as we like. Otherwise we must take 3 < r <3/(3−p1) and so 1−3/r can be made close to p1−2 (from below). We nowsee (3.18).Next on to (3.19). Note that this computation is taken from [44]. 
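An elementary radial identity underlies the λ-independent leading term in (3.19): for the free resolvent kernel (3.5), λ⟨e^{−λ|x|}/(4π|x|), 1/|x|⟩ = λ∫_0^∞ e^{−λr} dr = 1 for every λ > 0, since the volume factor 4πr^2 cancels the 1/(4πr^2) in the integrand. A quadrature sanity check of the λ-independence (illustrative only):

```python
import math

def lam_pairing(lam, rmax=200.0, n=100000):
    # lam * int_0^rmax e^{-lam r} dr, i.e. lam * <e^{-lam|x|}/(4 pi |x|), 1/|x|> over R^3
    # after the 4 pi r^2 volume factor cancels the 1/(4 pi r^2) of the integrand
    h = rmax / n
    tot = 0.5 * (1.0 + math.exp(-lam * rmax))
    for i in range(1, n):
        tot += math.exp(-lam * h * i)
    return lam * tot * h

vals = [lam_pairing(lam) for lam in (0.05, 0.2, 1.0)]
print(vals)  # each close to 1, independent of lam
```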
Firstwe isolate the troublesome part of W and writeW =√3|x| + W˜ .There is no problem with the second term since W˜ ∈ L6/5 and V ψ ∈ L6/5so we can use (3.8) with q = 6/5 and r = 6 to seeλ|〈R0(−λ2)V ψ, W˜ 〉| . λ‖R0(−λ2)V ψ‖L6‖W˜‖L6/5. λ‖V ψ‖L6/5‖W˜‖L6/5 (3.20). λ. (3.21)Set g := V ψ and concentrate onλ√3〈R0(−λ2)g, 1|x|〉=√6piλ〈gˆ(ξ)|ξ|2 + λ2 ,1|ξ|2〉493.1. Construction of Solitary Wave Profileswhere we work on the Fourier Transform side, using Plancherel’s theorem.So √6piλ〈gˆ(ξ)|ξ|2 + λ2 ,1|ξ|2〉=√6piλ gˆ(0)〈1|ξ|2 + λ2 ,1|ξ|2〉+√6piλ〈gˆ(ξ)− gˆ(0)|ξ|2 + λ2 ,1|ξ|2〉where the first term is the leading order. We invert the Fourier Transformand note that gˆ(0) = (2pi)−32∫g to see√6piλ gˆ(0)〈1|ξ|2 + λ2 ,1|ξ|2〉=√3(∫g)λ〈e−λ|x|4pi|x| ,1|x|〉=√3∫g = 2√3pi.We now must bound the remainder term. It is easy for the high frequencies∫|ξ|≥1|gˆ(ξ)− gˆ(0)||ξ|2(|ξ|2 + λ2)dξ . ‖gˆ‖L∞∫|ξ|≥1dξ|ξ|4 . ‖g‖L1 . 1.For the low frequencies note that since |x|g ∈ L1 we have that ∇gˆ is contin-uous and bounded. In light of this seth(ξ) := φ(ξ) (gˆ(ξ)− gˆ(0)−∇gˆ(0) · ξ)where φ is a smooth, compactly supported cutoff function with φ = 1 on|ξ| ≤ 1. Now since ∫|ξ|≤1ξ|ξ|2(|ξ|2 + λ2)dξ = 0we have ∫|ξ|≤1gˆ(ξ)− gˆ(0)|ξ|2(|ξ|2 + λ2)dξ =∫|ξ|≤1h(ξ)|ξ|2(|ξ|2 + λ2)dξand so bound this integral instead. If we recall the form of g we see |g| .〈x〉−5 and so (1+|x|1+α)g ∈ L1 for some α > 0. Therefore (1+|x|1+α)hˇ ∈ L1and noting also that ∇h(0) = 0 we see |∇h(ξ)| . min(1, |ξ|α). The Mean503.1. Construction of Solitary Wave ProfilesValue Theorem along with h(0) = 0 then gives |h(ξ)| . min(1, |ξ|1+α). Withthis bound established we consider two regions of the integral∫|ξ|≤λ|h(ξ)||ξ|2(|ξ|2 + λ2)dξ .∫|ξ|≤λ|ξ||ξ|2(|ξ|2 + λ2)dξ.∫|ζ|≤11|ζ|(|ζ|2 + 1)dζ . 1and∫λ≤|ξ|≤1|h(ξ)||ξ|2(|ξ|2 + λ2)dξ .∫λ≤|ξ|≤1|ξ|1+α|ξ|2(|ξ|2 + λ2)dξ. λα∫1≤|ζ|≤1/λ|ζ|α−1|ζ|2 + 1dζ . λα · λ−α . 1.Putting everything together gives (3.19).With the above estimates in hand we turn our attention to solving (3.17).Lemma 3.1.6. 
For any R > 0 there exists ε0 = ε0(R) > 0 such that for0 < ε ≤ ε0 and given a fixed η ∈ L∞ with ‖η‖L∞ ≤ Rε the equation (3.17)has a unique solution λ = λ(ε, η) satisfying ελ(1)/2 ≤ λ ≤ 3ελ(1)/2 whereλ(1) =−〈ΛW, f(W )〉6pi> 0. (3.22)Moreover, we have the expansionλ = λ(1)ε+ λ˜, λ˜ = O(ε1+δ1). (3.23)Remark 3.1.7. Writing the resolvent as (3.5), and thus the subsequentestimates (3.6)-(3.8), require λ > 0 and so it is essential that we have es-tablished λ(1) > 0. This is the source of the sign condition in Assumption1.7.1.Proof. We first estimate the remainder term. Take ελ(1)/2 ≤ λ ≤ 3ελ(1)/2and η with ‖η‖L∞ ≤ Rε. We establish the estimate|〈R0(−λ2)V ψ,N(η)〉| . ε1+δ1 . (3.24)We deal with each term in (3.15). Take j = 1, 2. We frequently apply (3.6),(3.8) and Ho¨lder:513.1. Construction of Solitary Wave Profiles• |〈R0(−λ2)V ψ,W 3η2〉| . ‖R0(−λ2)V ψ‖L6‖W 3η2‖L6/5. ‖V ψ‖L6/5‖η‖2L∞‖W 3‖L6/5. ε2• |〈R0(−λ2)V ψ, η5〉| . ‖R0(−λ2)V ψ‖L1‖η5‖L∞. λ−2‖V ψ‖L1‖η‖5L∞. ε3• ε|〈R0(−λ2)V ψ, ηpj 〉| . ε‖R0(−λ2)V ψ‖L1‖ηpj‖L∞. ελ−2‖V ψ‖L1‖η‖pjL∞. ε · εpj−2The term that remains requires two cases. First take pj > 3 thenε|〈R0(−λ2)V ψ, ηW pj−1〉| . ε‖R0(−λ2)V ψ‖Lr‖ηW pj−1‖Lr∗. ε‖V ψ‖Lq‖η‖L∞‖W pj−1‖Lr∗. ε2where we have used (3.8) for some r∗ < 3/2 and r > 3. Now if instead2 < pj ≤ 3 we use (3.6) with r∗ = (3/(pj − 1))+ so 1− 1/r = ((pj − 1)/3)−andε|〈R0(−λ2)V ψ, ηW pj−1〉| . ε‖R0(−λ2)V ψ‖Lr‖ηW pj−1‖Lr∗. ελ3(1−1/r)−2‖V ψ‖L1‖η‖L∞‖W pj−1‖Lr∗. ε · ε(pj−2)−and so we establish (3.24).With the estimates (3.18), (3.19), (3.24) in hand we show that a solutionto (3.17) of the desired size exists. For this write (3.17) as a fixed pointproblemλ = H(λ) := ε〈R0(−λ2)V ψ, f(W )〉+ 〈R0(−λ2)V ψ,N(η)〉λ〈R0(−λ2)V ψ,W 〉 (3.25)with the intention of applying Banach Fixed Point Theorem. We show thatfor a fixed η with ‖η‖L∞ . 
$\varepsilon$ the function $H$ maps the interval $\varepsilon\lambda^{(1)}/2 \le \lambda \le 3\varepsilon\lambda^{(1)}/2$ to itself and that $H$ is a contraction.

First note that $-\langle \psi, f(W)\rangle > 0$ by Assumption 1.7.1, and so, after observing (3.18), (3.19), (3.24), we see that $H(\lambda) > 0$. Furthermore, for $\varepsilon$ small enough we have $\varepsilon\lambda^{(1)}/2 \le H(\lambda) \le 3\varepsilon\lambda^{(1)}/2$, and so $H$ maps this interval to itself.

We next show that $H$ is a contraction. Take $\varepsilon\lambda^{(1)}/2 \le \lambda_1, \lambda_2 \le 3\varepsilon\lambda^{(1)}/2$ and again keep $\eta$ fixed with $\|\eta\|_{L^\infty} \le R\varepsilon$. Write $H(\lambda) = \dfrac{a(\lambda)+b(\lambda)}{c(\lambda)}$, so that
$$|H(\lambda_1)-H(\lambda_2)| \le \frac{|a_1||c_2-c_1| + |a_1-a_2||c_1| + |b_1||c_2-c_1| + |b_1-b_2||c_1|}{|c_1 c_2|} \lesssim |a_1-a_2| + |b_1-b_2| + \varepsilon|c_1-c_2|$$
using (3.18), (3.19), (3.24). We treat each piece in turn.

First,
$$|a_1-a_2| = \varepsilon\big|\langle (R_0(-\lambda_1^2)-R_0(-\lambda_2^2))V\psi, f(W)\rangle\big| = \varepsilon|\lambda_1^2-\lambda_2^2|\,\big|\langle R_0(-\lambda_1^2)R_0(-\lambda_2^2)V\psi, f(W)\rangle\big|$$
by the resolvent identity. Continuing, we see
$$|a_1-a_2| \lesssim \varepsilon^2|\lambda_1-\lambda_2|\,\|R_0(-\lambda_1^2)R_0(-\lambda_2^2)V\psi\|_{L^r}\|f(W)\|_{L^{r^*}}$$
where $1/r+1/r^*=1$. Note that by Assumption 1.7.1 we have $f(W) \in L^{r^*}$ for some $1 < r^* < 3/2$, so $3 < r < \infty$. Applying now (3.8) we get $|a_1-a_2| \lesssim \varepsilon^2|\lambda_1-\lambda_2|\,\|R_0(-\lambda_2^2)V\psi\|_{L^q}$ with $3(1/q-1/r)=2$, so $1<q<3/2$. Now apply the bound (3.6):
$$|a_1-a_2| \lesssim \varepsilon^2|\lambda_1-\lambda_2|\,\lambda^{3(1-1/q)-2}\|V\psi\|_{L^1} \lesssim \varepsilon^{3(1-1/q)}|\lambda_1-\lambda_2|,$$
and note that $3(1-1/q)>0$.

Next consider $|b_1-b_2| = \big|\langle R_0(-\lambda_1^2)V\psi, N(\eta)\rangle - \langle R_0(-\lambda_2^2)V\psi, N(\eta)\rangle\big|$. Proceeding as in the previous argument and using (3.6) we see
$$|b_1-b_2| \lesssim \varepsilon|\lambda_1-\lambda_2|\,\|R_0(-\lambda_1^2)R_0(-\lambda_2^2)V\psi\|_{L^r}\|N(\eta)\|_{L^{r^*}} \lesssim \varepsilon|\lambda_1-\lambda_2|\,\lambda_1^{-2}\|R_0(-\lambda_2^2)V\psi\|_{L^r}\|N(\eta)\|_{L^{r^*}}$$
for $1/r+1/r^*=1$. We can estimate this term (using different $r$ and $r^*$ for different portions of $N(\eta)$) using the computations leading to (3.24) to achieve
$$|b_1-b_2| \lesssim \varepsilon^{-1}\cdot\varepsilon^{1+\delta_1}|\lambda_1-\lambda_2| = \varepsilon^{\delta_1}|\lambda_1-\lambda_2|.$$

Lastly consider $\varepsilon|c_1-c_2| = \varepsilon\big|\lambda_1\langle R_0(-\lambda_1^2)V\psi, W\rangle - \lambda_2\langle R_0(-\lambda_2^2)V\psi, W\rangle\big|$. Again we write $W = \sqrt{3}/|x| + \tilde W$ where $\tilde W \in L^{6/5}$. The second term is easy. We compute
$$\varepsilon\big|\lambda_1\langle R_0(-\lambda_1^2)V\psi, \tilde W\rangle - \lambda_2\langle R_0(-\lambda_2^2)V\psi, \tilde W\rangle\big| \lesssim \varepsilon|\lambda_1-\lambda_2|\,\big|\langle R_0(-\lambda_1^2)V\psi, \tilde W\rangle\big| + \varepsilon^3|\lambda_1-\lambda_2|\,\big|\langle R_0(-\lambda_1^2)R_0(-\lambda_2^2)V\psi, \tilde W\rangle\big|$$
$$\lesssim \varepsilon|\lambda_1-\lambda_2| + \varepsilon^3\lambda_1^{-2}\lambda_2^{3(1-1/6)-2}|\lambda_1-\lambda_2|\,\|V\psi\|_{L^1}\|\tilde W\|_{L^{6/5}} \lesssim
$\varepsilon|\lambda_1-\lambda_2|$, where we have used (3.21) once and (3.6) twice. For the harder term we follow the computations which establish (3.19) and so work on the Fourier transform side:
$$\varepsilon\lambda_1\langle R_0(-\lambda_1^2)V\psi, 1/|x|\rangle - \varepsilon\lambda_2\langle R_0(-\lambda_2^2)V\psi, 1/|x|\rangle = C\varepsilon\lambda_1\Big\langle \frac{\hat g(\xi)}{|\xi|^2+\lambda_1^2}, \frac{1}{|\xi|^2}\Big\rangle - C\varepsilon\lambda_2\Big\langle \frac{\hat g(\xi)}{|\xi|^2+\lambda_2^2}, \frac{1}{|\xi|^2}\Big\rangle$$
$$= C\varepsilon(\lambda_1-\lambda_2)\Big\langle \frac{\hat g(\xi)-\hat g(0)}{|\xi|^2+\lambda_1^2}, \frac{1}{|\xi|^2}\Big\rangle + C\varepsilon\lambda_2\Big\langle (\hat g(\xi)-\hat g(0))\Big(\frac{1}{|\xi|^2+\lambda_1^2} - \frac{1}{|\xi|^2+\lambda_2^2}\Big), \frac{1}{|\xi|^2}\Big\rangle,$$
where we have used the fact that
$$\lambda_1\Big\langle \frac{\hat g(0)}{|\xi|^2+\lambda_1^2}, \frac{1}{|\xi|^2}\Big\rangle = \lambda_2\Big\langle \frac{\hat g(0)}{|\xi|^2+\lambda_2^2}, \frac{1}{|\xi|^2}\Big\rangle.$$
Continuing as in the computations used to establish (3.19), we bound
$$\varepsilon|\lambda_1-\lambda_2|\,\Big|\Big\langle \frac{\hat g(\xi)-\hat g(0)}{|\xi|^2+\lambda_1^2}, \frac{1}{|\xi|^2}\Big\rangle\Big| \lesssim \varepsilon|\lambda_1-\lambda_2|$$
and
$$\varepsilon\lambda_2\Big|\Big\langle (\hat g(\xi)-\hat g(0))\Big(\frac{1}{|\xi|^2+\lambda_1^2}-\frac{1}{|\xi|^2+\lambda_2^2}\Big), \frac{1}{|\xi|^2}\Big\rangle\Big| \lesssim \varepsilon\lambda_2(\lambda_1+\lambda_2)|\lambda_1-\lambda_2|\int \frac{d\xi}{|\xi|(|\xi|^2+\lambda_1^2)(|\xi|^2+\lambda_2^2)}$$
$$\lesssim \varepsilon|\lambda_1-\lambda_2|\int \frac{d\zeta}{|\zeta|(|\zeta|^2+1)(|\zeta|^2+\lambda_2^2/\lambda_1^2)} \lesssim \varepsilon|\lambda_1-\lambda_2|.$$
In this way we finally have $\varepsilon|c_1-c_2| \le \varepsilon|\lambda_1-\lambda_2|$. So, putting everything together, we see that by taking $\varepsilon$ sufficiently small,
$$|H(\lambda_1)-H(\lambda_2)| < \kappa|\lambda_1-\lambda_2|$$
for some $0<\kappa<1$, and hence $H$ is a contraction. Therefore (3.25) has a unique fixed point of the desired size.

To find the leading order $\lambda^{(1)}$, let $\lambda$ take the form in (3.23), substitute into (3.17), use estimates (3.18), (3.19), (3.24), and ignore higher order terms. An inspection of the higher order terms gives the order of $\tilde\lambda$.

In this way we now think of $\lambda$ as a function of $\eta$. We will also need the following Lipschitz condition for what follows in Lemma 3.1.9.

Lemma 3.1.8. The $\lambda$ generated via Lemma 3.1.6 is Lipschitz continuous in $\eta$ in the sense that
$$|\lambda_1-\lambda_2| \lesssim \varepsilon^{\delta_1}\|\eta_1-\eta_2\|_{L^\infty}.$$

Proof. Take $\eta_1$ and $\eta_2$ with $\|\eta_1\|_{L^\infty}, \|\eta_2\|_{L^\infty} \le R\varepsilon$. Let $\eta_1$ and $\eta_2$ give rise to $\lambda_1$ and $\lambda_2$ respectively through Lemma 3.1.6. Consider now the difference
$$|\lambda_1-\lambda_2| = \left|\frac{\varepsilon\langle R_0(-\lambda_1^2)V\psi, f(W)\rangle + \langle R_0(-\lambda_1^2)V\psi, N(\eta_1)\rangle}{\lambda_1\langle R_0(-\lambda_1^2)V\psi, W\rangle} - \frac{\varepsilon\langle R_0(-\lambda_2^2)V\psi, f(W)\rangle + \langle R_0(-\lambda_2^2)V\psi, N(\eta_2)\rangle}{\lambda_2\langle R_0(-\lambda_2^2)V\psi, W\rangle}\right| =: \left|\frac{a(\lambda_1)+b(\lambda_1,\eta_1)}{c(\lambda_1)} - \frac{a(\lambda_2)+b(\lambda_2,\eta_2)}{c(\lambda_2)}\right|
observing (3.25). Now we estimate
$$|\lambda_1-\lambda_2| \le \left|\frac{b(\lambda_1,\eta_1)-b(\lambda_1,\eta_2)}{c(\lambda_1)}\right| + \left|\frac{a(\lambda_1)+b(\lambda_1,\eta_2)}{c(\lambda_1)} - \frac{a(\lambda_2)+b(\lambda_2,\eta_2)}{c(\lambda_2)}\right| \le C\big|\langle R_0(-\lambda_1^2)V\psi, N(\eta_1)-N(\eta_2)\rangle\big| + \kappa|\lambda_1-\lambda_2|$$
for some $0<\kappa<1$. The second term has been estimated using the computations of Lemma 3.1.6 and taking $\varepsilon$ small enough. Now we estimate the first. Observing the terms in (3.16), we use the same procedure that established (3.24) to obtain
$$\big|\langle R_0(-\lambda_1^2)V\psi, N(\eta_1)-N(\eta_2)\rangle\big| \lesssim \varepsilon^{\delta_1}\|\eta_1-\eta_2\|_{L^\infty}.$$
So together we now see
$$(1-\kappa)|\lambda_1-\lambda_2| \lesssim \varepsilon^{\delta_1}\|\eta_1-\eta_2\|_{L^\infty},$$
which gives the desired result.

3.1.4 Solving for the Correction

We next solve (3.4), given that (3.3) holds. Recall the formulation of (3.4) as the fixed-point equation
$$\eta = \mathcal G(\eta, \varepsilon) = (H+\lambda^2)^{-1}\mathcal F,$$
where, in light of Lemma 3.1.6, we take $\lambda = \lambda(\varepsilon,\eta)$ and $\mathcal F = \mathcal F(\varepsilon, \lambda(\varepsilon,\eta), \eta)$ so that (3.3) holds.

Lemma 3.1.9. There exists $R_0 > 0$ such that for any $R \ge R_0$, there is $\varepsilon_1 = \varepsilon_1(R) > 0$ such that for each $0 < \varepsilon \le \varepsilon_1$, there exists a unique solution $\eta \in L^\infty$ to (3.4) with $\|\eta\|_{L^\infty} \le R\varepsilon$. Moreover, we have the expansion
$$\eta = \varepsilon\bar Q(1+G_0V)^{-1}\bar P\big(G_0 f(W) - \lambda^{(1)}\sqrt3\,\lambda R_0(-\lambda^2)|x|^{-1}\big) + O_{L^\infty}(\varepsilon^{1+\delta_1}),$$
where $\bar P$ and $\bar Q$ are given in (3.12).

Proof. We proceed by means of the Banach fixed point theorem. We show that $\mathcal G(\eta)$ maps a ball to itself and is a contraction. In this way we establish a solution to $\eta = \mathcal G(\eta, \varepsilon, \lambda(\varepsilon,\eta))$ in (3.4).

Let $R > 0$ (to be chosen) and take $\varepsilon < \varepsilon_0(R)$ as in Lemma 3.1.6. In this way, given $\eta \in L^\infty$ with $\|\eta\|_{L^\infty} \le R\varepsilon$, we can generate
$$\lambda = \lambda(\varepsilon,\eta) = \lambda^{(1)}\varepsilon + o(\varepsilon).$$
We aim to take $\varepsilon$ smaller still in order to run the fixed point argument in the $L^\infty$ ball of radius $R\varepsilon$.

Consider
$$\|\mathcal G\|_{L^\infty} = \|(1+R_0(-\lambda^2)V)^{-1}R_0(-\lambda^2)\mathcal F\|_{L^\infty} \lesssim \|R_0(-\lambda^2)\mathcal F\|_{L^\infty}$$
in light of Lemma 3.1.4 and since we have chosen $\lambda$ to satisfy (3.3). Continuing with
$$\|\mathcal G\|_{L^\infty} \lesssim \big\|R_0(-\lambda^2)\big(-\lambda^2 W + \varepsilon f(W) + N(\eta)\big)\big\|_{L^\infty},$$
we treat each term separately. For the first term it is sufficient to replace $W$ with $1/|x|$ (otherwise we simply apply (3.9)):
$$\lambda^2\|R_0(-\lambda^2)W\|_{L^\infty} \lesssim \lambda\Big\|\lambda R_0(-\lambda^2)\frac{1}{|x|}\Big\|_{L^\infty} \lesssim \lambda\Big\|\lambda\int \frac{e^{-\lambda|y|}}{|y|}\,\frac{1}{|x-y|}\,dy\Big\|_{L^\infty} \lesssim
λ∥∥∥∥∥∫e−|z||z|1|λx− z|dz∥∥∥∥∥L∞. λ∥∥∥∥∥(e−|x||x| ∗1|x|)(λx)∥∥∥∥∥L∞. λ . ε.Now for the second term use (3.9)ε‖R0(−λ2)f(W )‖L∞ . ε‖f(W )‖L3/2−∩L3/2+ . ε.And for the higher order terms we employ (3.6) and (3.9)• ‖R0(−λ2)(W 3η2)‖L∞ . ‖W 3η2‖L3/2−∩L3/2+. ‖W 3‖L3/2−∩L3/2+‖η‖2L∞. R2ε2• ‖R0(−λ2)η5‖L∞ . λ−2‖η5‖L∞ . λ−2‖η‖5L∞ . R5ε3• ε‖R0(−λ2)ηpj‖L∞ . ελ−2‖ηpj‖L∞. ε−1‖η‖pjL∞. Rpjεpj−1573.1. Construction of Solitary Wave Profilesfor j = 1, 2. The remaining remainder term again requires two cases. Forpj > 3 we use (3.9) to seeε∥∥R0(−λ2) (ηW pj−1)∥∥L∞ . ε‖ηW pj−1‖L3/2−∩L3/2+. ε‖η‖L∞‖W pj−1‖L3/2−∩L3/2+. Rε2and for 2 < pj ≤ 3 we apply (3.6)ε∥∥R0(−λ2) (ηW pj−1)∥∥L∞ . ελ(pj−1)−−2‖η‖L∞‖W pj−1‖L3/(pj−1)+. Rε1+(pj−2)−Collecting the above yields‖G‖L∞ ≤ Cε(1 +R2ε+R5ε2 +Rp1εp1−2 +Rp2εp2−2 +Rε+Rε(p1−2)−)(3.26)and so taking R0 = 2C, R ≥ R0, and then ε small enough so that Rε +R4ε2 +Rp1−1εp1−2 +Rp2−1εp2−2 + ε+ ε(p1−2)− ≤ 12C , we arrive at‖G‖L∞ ≤ Rε.Hence G maps the ball of radius Rε in L∞ to itself.Now we show that G is a contraction. Take η1 and η2 and let them giverise to λ1 and λ2 respectively. Again ‖ηj‖L∞ ≤ Rε and denote F(ηj) by Fj ,j = 1, 2. Consider‖G(η1, ε)− G(η2, ε)‖L∞= ‖(1 +R0(−λ21)V )−1R0(−λ21)F1 − (1 +R0(−λ22)V )−1R0(−λ22)F2‖L∞≤ ‖(1 +R0(−λ21)V )−1(R0(−λ21)F1 −R0(−λ22)F2) ‖L∞+ ‖ ((1 +R0(−λ21)V )−1 − (1 +R0(−λ22)V )−1)R0(−λ22)F2‖L∞≤ ‖R0(−λ21)F1 −R0(−λ22)F2‖L∞+ ‖ ((1 +R0(−λ21)V )−1 − (1 +R0(−λ22)V )−1)R0(−λ22)F2‖L∞≤ ‖R0(−λ21) (F1 −F2) ‖L∞ + ‖(R0(−λ21)−R0(−λ22))F2‖L∞+ ‖ ((1 +R0(−λ21)V )−1 − (1 +R0(−λ22)V )−1)R0(−λ22)F2‖L∞=: I + II + IIIwhere we have applied Lemma 3.1.4, observing the orthogonality condition.We treat each part in turn.583.1. Construction of Solitary Wave ProfilesStart with I. This computation is similar to those previous. We alsoapply Lemma 3.1.8:‖R0(−λ21) (F1 −F2) ‖L∞ = ‖R0(−λ21)((λ22 − λ21)W +N(η1)−N(η2)) ‖L∞. |λ1 − λ2|+ εδ1‖η1 − η2‖L∞. 
εδ1‖η1 − η2‖L∞ .Part II is also similar to previous computations:‖ (R0(−λ21)−R0(−λ22))F2‖L∞ = |λ21 − λ22|‖R0(−λ21)R0(−λ22)F2‖L∞. |λ1 + λ2||λ1 − λ2|λ−21 ‖R0(−λ22)F2‖L∞. ε1 · ε−2 · ε|λ1 − λ2|. |λ1 − λ2|. εδ1‖η1 − η2‖L∞ .Part III is the hardest. First we find a common denominator(1 +R0(−λ21)V )−1 − (1 +R0(−λ22)V )−1= (1 +R0(−λ21)V )−1(1 +R0(−λ22)V )(1 +R0(−λ22)V )−1− (1 +R0(−λ21)V )−1(1 +R0(−λ21)V )(1 +R0(−λ22)V )−1= (1 +R0(−λ21)V )−1(R0(−λ22)V −R0(−λ21)V)(1 +R0(−λ22)V )−1so that((1 +R0(−λ21)V )−1 − (1 +R0(−λ22)V )−1)R0(−λ22)F2 =(1 +R0(−λ21)V )−1(R0(−λ22)V −R0(−λ21)V )(1 +R0(−λ22)V )−1R0(−λ22)F2= (1 +R0(−λ21)V )−1(R0(−λ22)V −R0(−λ21)V)G(η2).NowIII = ‖(1 +R0(−λ21)V )−1(R0(−λ22)V −R0(−λ21)V)G(η2)‖L∞and here we just suffer the loss of one λ (Lemma 3.1.4) to achieveIII . λ−11 ‖(R0(−λ22)V −R0(−λ21)V)G(η2)‖L∞. λ−11 |λ22 − λ21|‖R0(−λ22)R0(−λ21)V G(η2)‖L∞. λ−11 |λ2 + λ1||λ2 − λ1|λ−1/22 ‖R0(−λ21)V G(η2)‖L2. ε−1 · ε1|λ2 − λ1|λ−1/22 λ−1/21 ‖V G(η2)‖L1. ε−1|λ2 − λ1|‖V ‖L1‖G(η2)‖L∞593.1. Construction of Solitary Wave Profilesand using Lemma 3.1.8 and (3.26) we seeIII . |λ1 − λ2| . εδ1‖η1 − η2‖L∞ .Hence, by taking ε smaller still if needed, we have‖G(η1, ε)− G(η2, ε)‖L∞ ≤ κ‖η1 − η2‖L∞for some 0 < κ < 1 and so G is a contraction. Therefore, invoking theBanach fixed-point theorem, we have established the existence of a uniqueη, with ‖η‖L∞ ≤ Rε, satisfying (3.4).To see the leading order observe the order of the terms appearing in theprevious computations as well as the following. First if p1 ≥ 3 thenε‖ (R0(−λ2)−G0) f(W )‖L∞ . ελ2‖R0(−λ2)G0f(W )‖L∞. ελ2 · λ−1−‖G0f(W )‖L3+. ελ1−‖f(W )‖L1+. ε2−and if instead 2 < p1 < 3 then take 3/q = (p1 − 2)− andε‖ (R0(−λ2)−G0) f(W )‖L∞ . ελ2‖R0(−λ2)G0f(W )‖L∞. ελ2 · λ3/q−2‖G0f(W )‖Lq. ελ3/q‖f(W )‖L(3/p1)+. ε1+(p1−2)− .The lemma is now proved.With the existence of η established we can improve the space in whichη lives.Lemma 3.1.10. The η established in Lemma 3.1.9 is in Lr ∩ H˙1 for any3 < r ≤ ∞. The function η also enjoys the bounds‖η‖Lr . 
ε1−3/r‖η‖H˙1 . ε1/2for all 3 < r ≤ ∞. Furthermore we have the expansionη = Q¯(1 +G0V )−1P¯R0(−λ2)(−λ2√3|x|−1) + η˜603.1. Construction of Solitary Wave Profileswith‖η˜‖Lr .max{{ε1−, if 2 < p1 < 3 and r = 3/(p1 − 2)ε, else}, εp1−2+1−3/r, ε2(1−3/r)}for 3 < r <∞ and where P¯ and Q¯ are given in (3.12).Proof. The computations which produce (3.26) are sufficient to establish theresult with r =∞. Take 3 < r <∞ and consider:‖η‖Lr . λ2‖R0(−λ2)W‖Lr + ε‖R0(−λ2)f(W )‖Lr + ‖R0(−λ2)N(η)‖Lr .For the first term use (3.7)λ2‖R0(−λ2)W‖Lr . λ2 · λ3(1/3−1/r)−2‖W‖L3w . ε1−3/rto see the leading order contribution.While the second term contributed to the leading order in Lemma 3.1.9it is inferior to the first term when measured in Lr. We do however needseveral cases. Suppose that 3 ≤ p1 < 5 or r > 3/(p1 − 2) and apply (3.8)with 1/q = 1/r + 2/3ε‖R0(−λ2)f(W )‖Lr . ε‖f(W )‖Lq . ε.Note that under these conditions f(W ) ∈ Lq. Now suppose that 2 < p1 < 3and r = 3/(p1 − 2) and apply (3.6) with q = (3/p1)+ε‖R0(−λ2)f(W )‖Lr . ελ3(1/q−(p1−2)/3)−2‖f(W )‖Lq . ε1− .And if 2 < p1 < 3 and 3 < r < 3/(p1 − 2) apply (3.7) with q = 3/p1ε‖R0(−λ2)f(W )‖Lr . ελ3(p1/3−1/r)−2‖f(W )‖Lqw . ε1−3/r+p1−2.And thirdly the remaining terms. First use (3.8) where 1/q = 1/r+ 2/3to see‖R0(−λ2)(W 3η2)‖Lr . ‖W 3η2‖Lq . ‖W 3‖L3/2‖η‖Lr‖η‖L∞ . ε‖η‖Lrand now use (3.6) with 1/q = 1/r to obtain‖R0(−λ2)η5‖Lr . λ−2‖η5‖Lr . λ−2‖η‖Lr‖η‖4L∞ . ε2‖η‖Lr .613.1. Construction of Solitary Wave Profilesand similarly for j = 1, 2ε‖R0(−λ2)ηpj‖Lr . ελ−2‖ηpj‖Lr . ε−1‖η‖pj−1L∞ ‖η‖Lr . εpj−2‖η‖Lrnoting that p2 − 2 ≥ p1 − 2 > 0. For the last remainder term we have twocases. If pj > 3 then use (3.8) with 1/q = 1/r + 2/3ε‖R0(−λ2)(ηW pj−1) ‖Lr . ε‖ηW pj−1‖Lq . ε‖η‖Lr‖W pj−1‖L3/2 . ε‖η‖Lr .If instead 2 < pj ≤ 3 then we need (3.7) with 1/q = (pj − 1)/3 so thatε‖R0(−λ2)(ηW pj−1) ‖Lr . ελ3(1/q−1/r)−2‖ηW pj−1‖Lqw. ελp1−1−3/r−2‖η‖L∞‖W pj−1‖Lqw. 
εpj−2ε1−3/rSo together we have‖η‖Lr ≤ Cε1−3/r + κ‖η‖Lrwhere κ may be chosen sufficiently small to yield the desired Lr bound for3 < r < ∞. An inspection of the higher order terms gives the size of η˜.We also must note Lemma 3.1.4. There are several competing terms whichdetermine the size of η˜ depending on p1 and r.On to the H˙1 norm. We need the identityη = (1 +R0(−λ2)V )−1R0(−λ2)F= R0(−λ2)F −R0(−λ2)V (1 +R0(−λ2)V )−1R0(−λ2)F= R0(−λ2)F −R0(−λ2)V ηso we have two parts‖η‖H˙1 ≤ ‖R0(−λ2)F‖H˙1 + ‖R0(−λ2)V η‖H˙1 .For the first‖R0(−λ2)F‖H˙1 . λ2‖R0(−λ2)W‖H˙1 + ε‖R0(−λ2)f(W + η)‖H˙1+ ‖R0(−λ2)(W 3η2 +W 2η3 +Wη4 + η5) ‖H˙1andλ2‖R0(−λ2)W‖H˙1 . λ2‖R0(−λ2)∇W‖L2. λ2 · λ1/2−2‖∇W‖L3/2w. ε1/2623.1. Construction of Solitary Wave Profilesandε‖R0(−λ2)f(W + η)‖H˙1 . ε‖R0(−λ2)f ′(W + η)(∇W +∇η)‖L2. ελ−1/2‖f ′(W + η)∇W‖L1+ ελ1−‖f ′(W + η)∇η‖L6/5−. ε1/2‖f ′(W + η)‖L3−‖∇W‖L3/2++ ε0+‖f ′(W + η)‖L3−‖∇η‖L2. ε1/2 + κ‖η‖H˙1with κ small and‖R0(−λ2)(W 3η2 +W 2η3 +Wη4 + η5) ‖H˙1. ‖R0(−λ2)η(∇Wf1 +∇ηf2)‖L2where f1 and f2 are in L2 so‖R0(−λ2)(W 3η2 +W 2η3 +Wη4 + η5) ‖H˙1. λ−1/2‖η‖L∞ (‖∇W‖L2‖f1‖L2 + ‖∇η‖L2‖f2‖L2). ε1/2 + κ‖η‖H˙1 .For the second‖R0(−λ2)V η‖H˙1 =∥∥∥∥∥(∇e−λ|x||x|)∗ (V η)∥∥∥∥∥L2=∥∥(λ2g(λx)) ∗ (V η)∥∥L2where g ∈ L3/2w . So using weak Young’s we obtain‖R0(−λ2)V η‖H˙1 . λ2‖g(λx)‖L3/2w ‖V η‖L6/5. λ2 · λ−2‖V ‖L3/2‖η‖L6. ‖η‖L6. ε1/2.So putting everything together gives‖η‖H˙1 ≤ C(ε1/2 + κ‖η‖H˙1)which gives the desired bound by taking κ sufficiently small.633.1. Construction of Solitary Wave ProfilesCombining Lemmas 3.1.6, 3.1.9, 3.1.10 and Remark 1.7.6 completes theproof of Theorem 1.7.2.At this point we demonstrate the following monotonicity result whichwill be used in Section 3.2.Lemma 3.1.11. Suppose that f(W ) = W p with 3 < p < 5. Take ε1 and ε2with 0 < ε1 < ε2 < ε0. Let ε1 give rise to λ1 and η1 and let ε2 give rise toλ2 and η2 via Theorem 1.7.2. We have|(λ2 − λ1)− λ(1)(ε2 − ε1)| . o(1)|ε2 − ε1|. (3.27)Proof. 
We first establish the estimate|λ2 − λ1| ≤(λ(1) + o(1))|ε2 − ε1| (3.28)We write, as in Lemma 3.1.6 and Lemma 3.1.8λ2 − λ1 = a(ε2, λ2) + b(ε2, λ2, η2)c(λ2)− a(ε1, λ1) + b(ε1, λ1, η1)c(λ1)=a(ε2, λ2)− a(ε1, λ2) + b(ε2, λ2, η2)− b(ε1, λ2, η2)c(λ2)+a(ε1, λ2) + b(ε1, λ2, η2)c(λ2)− a(ε1, λ1) + b(ε1, λ1, η1)c(λ1).The second line, containing only ε1 and not ε2, has been dealt with in theproof of Lemma 3.1.8 and so there follows|λ2 − λ1| ≤∣∣∣∣a(ε2, λ2)− a(ε1, λ2) + b(ε2, λ2, η2)− b(ε1, λ2, η2)c(λ2)∣∣∣∣+ o(1)‖η2 − η1‖L∞ + o(1)|λ2 − λ1|≤ |ε2 − ε1|(λ(1) + o(1))+ o(1)‖η2 − η1‖L∞ + o(1)|λ2 − λ1|.For the η’s we estimate‖η2 − η1‖L∞ . o(1)‖η2 − η1‖L∞ + |λ2 − λ1|+ |ε2 − ε1|appealing to Lemma 3.1.9. So putting everything together we have|λ2 − λ1| ≤(λ(1) + o(1))|ε2 − ε1|establishing (3.28).643.2. Variational CharacterizationNow we proceed to the more refined (3.27). Observing the computationsleading to (3.28) we have|λ2 − λ1 − (ε2 − ε1)λ(1)| ≤∣∣∣∣a(ε2, λ2)− a(ε1, λ2)c(λ2) − (ε2 − ε1)λ(1)∣∣∣∣+o(1)‖η2 − η1‖L∞ + o(1)|λ2 − λ1|.By (3.28) the last two terms are of the correct size and so we focus on thefirst. We have∣∣∣∣a(ε2, λ2)− a(ε1, λ2)c(λ2) − (ε2 − ε1)λ(1)∣∣∣∣=∣∣∣∣(ε2 − ε1)( 〈R0(−λ22)V ψ,W p〉λ2〈R0(−λ22)V ψ,W 〉 − λ(1))∣∣∣∣= o(1)|ε2 − ε1|noting (3.18) and (3.19). And so, putting everything together we achieve|λ2 − λ1 − (ε2 − ε1)λ(1)| . o(1)|ε2 − ε1|as desired.3.2 Variational CharacterizationIt is not clear from the construction that the solution Q is in any sense aground state solution. It is also not clear that the solution is positive. Inthis section we first establish the existence of a ground state solution; onethat minimizes the action subject to a constraint. We then demonstratethat this minimizer must be our constructed solution. In this way we proveTheorem 1.7.7.In this section we restrict our nonlinearity and take only f(Q) = |Q|p−1Qwith 3 < p < 5. Then the action isSε,ω(u) = 12‖∇u‖2L2 −16‖u‖6L6 −εp+ 1‖u‖p+1Lp+1+ω2‖u‖2L2 . 
(3.29)We are interested in the constrained minimization problemmε,ω := inf{Sε,ω(u) | u ∈ H1(R3) \ {0}, Kε(u) = 0} (3.30)whereKε(u) = ddµSε,ω(Tµu)∣∣∣∣µ=1= ‖∇u‖2L2 − ‖u‖6L6 −3(p− 1)2(p+ 1)ε‖u‖p+1Lp+1653.2. Variational Characterizationand (Tµu)(x) = µ3/2u(µx) is the L2 scaling operator. Note that for Qε =W + η as constructed in Theorem 1.7.2 we have Kε(Qε) = 0 since anysolution to (1.11) will satisfy Kε(Q) = 0.Before addressing the minimization problem we investigate the impli-cations of our generated solution Qε with specified ε and correspondingω = ω(ε). In particular there is a scaling that generates for us additionalsolutions to the equation−∆Q−Q5 − ε|Q|p−1Q+ ωQ = 0 (3.31)with 3 < p < 5.Remark 3.2.1. For any 0 < ε˜ ≤ ε0, we have solutions to (3.31) given byQµ = µ1/2Qε˜(µ·)with ε = µ(5−p)/2ε˜ and ω = µ2ω(ε˜). So for any ε > 0, we obtain the familyof solutions{ Qµ | µ =(εε˜) 25−p, ε˜ ∈ (0, ε0] }withω =(εε˜) 45−pω(ε˜) ∈[(ε0ε˜0) 45−pω(ε˜0), ∞)since as ε˜ ↓ 0, ( εε˜) 45−p ω(ε˜) ∼ ε˜ 2(3−p)5−p →∞.We now address the minimization problem by first addressing the exis-tence of a minimizer.Lemma 3.2.2. Take 3 < p < 5. Let Q = Qε solving (3.31) with ω = ω(ε)be as constructed in Theorem 1.7.2. There exists ε0 > 0 such that for 0 <ε ≤ ε0 we haveSε,ω(ε)(Qε) <13‖W‖6L6 = S0,0(W ).It follows, see Proposition 2.1 of [1], which is in turn based on the ear-lier [12], that the variational problem (3.30) with ω = ω(ε) admits a non-negative, radially-symmetric minimizer, which moreover solves (3.31).663.2. Variational CharacterizationProof. We compute directly, ignoring higher order contributions. Using(1.21) we write the action asSε,ω(Q) = 13∫Q6 +p− 12(p+ 1)ε∫|Q|p+1=13∫(W + η)6 +p− 12(p+ 1)ε∫|W + η|p+1.Rearranging we haveSε,ω(Q)− 13∫W 6 = 2∫W 5η +p− 12(p+ 1)ε∫W p+1 +O(ε2)where the higher order terms are controlled for 3 < p < 5:• ‖W 4η2‖L1 . ‖W 4‖L1‖η‖2L∞ . ε2• ‖η6‖L1 . ‖η‖6L6 . ε3• ε‖W pη‖L1 . ε‖W p‖L1‖η‖L∞ . ε2• ε‖ηp+1‖L1 . ε‖η‖p+1Lp+1 . 
$\varepsilon^{p-1}$.

We now compute
$$2\int W^5\eta = 2\langle W^5, (H+\lambda^2)^{-1}(\varepsilon W^p - \lambda^2 W + N(\eta))\rangle = 2\langle W^5, (1+R_0(-\lambda^2)V)^{-1}\bar P R_0(-\lambda^2)(\varepsilon W^p - \lambda^2 W + N(\eta))\rangle,$$
where we have inserted the definition of $\eta$ from (3.4), and so identify the two leading order terms. There is no problem to also insert the projection $\bar P$ from (3.12) since we have the orthogonality condition (3.3) by the way we defined $\varepsilon$, $\lambda$, $\eta$.

We approximate in turn, writing only $R_0$ for $R_0(-\lambda^2)$. In what follows we use the operators $(1+VG_0)^{-1}$ and $(1+VR_0)^{-1}$. The former acts on the spaces
$$(1+VG_0)^{-1} : L^1 \cap (\Lambda W)^\perp \to L^1 \cap (1)^\perp$$
and the latter has the expansion
$$(1+VR_0)^{-1} = \frac{1}{\lambda}\langle \Lambda W, \cdot\rangle\, V\Lambda W + O(1)$$
in $L^1$. We record here also the adjoint of $\bar P$:
$$\bar P^* = 1 - P^*, \qquad P^* = \frac{\langle \Lambda W, \cdot\rangle}{\int V(\Lambda W)^2}\, V\Lambda W.$$
To estimate the first term, write
$$2\varepsilon\langle W^5, (1+R_0V)^{-1}\bar P R_0 W^p\rangle = 2\varepsilon\langle (1+VR_0)^{-1}W^5, \bar P R_0 W^p\rangle = 2\varepsilon\langle (1+VG_0)^{-1}W^5, \bar P R_0 W^p\rangle + O(\varepsilon^2).$$
The error is controlled with a resolvent identity:
$$\varepsilon\big|\langle ((1+VR_0)^{-1} - (1+VG_0)^{-1})W^5, \bar P R_0 W^p\rangle\big| = \varepsilon\big|\langle (1+VR_0)^{-1}V(G_0-R_0)(1+VG_0)^{-1}W^5, \bar P R_0 W^p\rangle\big|$$
$$= \varepsilon\big|\langle \bar P^*(1+VR_0)^{-1}V(G_0-R_0)(-W^5/4 + V\Lambda W/2), R_0 W^p\rangle\big| \lesssim \varepsilon\big\|\bar P^*(1+VR_0)^{-1}V(G_0-R_0)(-W^5/4 + V\Lambda W/2)\big\|_{L^1}\|R_0 W^p\|_{L^\infty}$$
$$\lesssim \varepsilon\big\|V\bar R(-W^5/4 + V\Lambda W/2)\big\|_{L^1}\|W^p\|_{L^{3/2-}\cap L^{3/2+}} \lesssim \varepsilon\lambda \lesssim \varepsilon^2,$$
where we have substituted $G_0 - R_0 = \lambda G_1 + \bar R$, note that $G_1(-W^5/4 + V\Lambda W/2) = 0$ since $(-W^5/4 + V\Lambda W/2) \perp 1$, and have computed
$$\big\|V\bar R(-W^5/4 + V\Lambda W/2)\big\|_{L^1} \lesssim \int \langle x\rangle^{-1}dx \int \frac{\lambda|\lambda y|}{\langle \lambda y\rangle}\langle x-y\rangle^{-5}dy \lesssim \lambda.$$
Continuing, we have
$$2\varepsilon\langle W^5, (1+R_0V)^{-1}\bar P R_0 W^p\rangle = 2\varepsilon\langle \bar P^*(1+VG_0)^{-1}W^5, R_0 W^p\rangle + O(\varepsilon^2) = -\tfrac12\varepsilon\langle W^5, R_0 W^p\rangle + O(\varepsilon^2)$$
$$= -\tfrac12\varepsilon\langle R_0 W^5, W^p\rangle + O(\varepsilon^2) = -\tfrac12\varepsilon\langle G_0 W^5, W^p\rangle + O(\varepsilon^2) = -\tfrac12\varepsilon\langle W, W^p\rangle + O(\varepsilon^2) = -\tfrac12\varepsilon\int W^{p+1} + O(\varepsilon^2),$$
where the other error term is bounded:
$$\varepsilon\big|\langle (R_0-G_0)W^5, W^p\rangle\big| \lesssim \varepsilon\lambda^2\big|\langle R_0 G_0 W^5, W^p\rangle\big| \lesssim \varepsilon\lambda^2\big|\langle R_0 W, W^p\rangle\big| \lesssim \varepsilon^2,$$
observing the computations that produce (3.19).

For the second term we proceed in a similar manner:
$$-2\lambda^2\langle W^5, (1+R_0V)^{-1}\bar P R_0 W\rangle = \tfrac12\lambda^2\langle W^5, R_0 W\rangle + O(\varepsilon^{2-}) = \tfrac12\sqrt3\,\lambda\int W^5 + O(\varepsilon^{2-}) = 6\pi\lambda + O(\varepsilon^{2-})$$
$$= -\varepsilon\langle \Lambda W, W^p\rangle + O(\varepsilon^{2-}) = \varepsilon\Big(\frac{3}{p+1} - \frac12\Big)\int W^{p+1} + O(\varepsilon^{2-}),$$
again referring to (3.19) and also Remark 1.7.4.
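For the record, here is the coefficient arithmetic behind the sign of the leading-order term, combining the action contribution $\frac{p-1}{2(p+1)}$ with the two inner-product contributions $-\frac12$ and $\frac{3}{p+1}-\frac12$ computed above (a restatement of numbers already in the text, written out as a check):

```latex
\frac{3}{p+1} - \frac12 - \frac12 + \frac{p-1}{2(p+1)}
  = \frac{6 + (p-1) - 2(p+1)}{2(p+1)}
  = \frac{3-p}{2(p+1)}
  = -\,\frac{p-3}{2(p+1)},
```

which is strictly negative exactly when $p > 3$ (with $\varepsilon > 0$ small), and vanishes when $p = 3$.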
The error term coming fromthe difference of the resolvents is similar. Noteλ2∣∣〈((1 + V R0)−1 − (1 + V G0)−1)W 5, P¯R0W〉∣∣. λ2∥∥P¯ ∗(1 + V R0)−1V (G0 −R0) (−W 5/4 + V ΛW/2)∥∥L1 ‖R0W‖L∞. λ3 ‖R0W‖L∞. λ3λ−1−‖W‖L3+. λ2−. ε2− .The term coming from N(η) is controlled similarly, and so, all togetherwe haveSε,ω(Q)− 13∫W 6 =(3p+ 1− 12− 12+p− 12(p+ 1))ε∫W p+1 +O(ε2−)= − p− 32(p+ 1)ε∫W p+1 +O(ε2−)which is negative for 3 < p < 5 and ε > 0 and small. We note that whenp = 3, this leading order term vanishes.Lemma 3.2.3. Take 3 < p < 5. Denote by V = Vε a non-negative, radially-symmetric minimizer for (3.30) with ω = ω(ε) (as established in Lemma693.2. Variational Characterization3.2.2). Then for any εj → 0, Vεj is a minimizing sequence for the (unper-turbed) variational problemS0,0(W ) = min{S0,0(u) | u ∈ H˙1 \ {0}, K0(u) = 0} (3.32)in the sense thatK0(Vεj )→ 0, lim supε→0S0,0(Vεj ) ≤ S0,0(W ).Proof. Since0 = Kε(V ) = K0(V )− 3(p− 1)2(p+ 1)ε∫V p+1,and by Lemma 3.2.2,S0,0(W ) > mε,ω(ε) = Sε,ω(ε)(V ) = S0,0(V )−1p+ 1ε∫V p+1 +12ω∫V 2,(3.33)the lemma will be implied by the claim:ε∫V p+1 → 0 as ε→ 0. (3.34)To address the claim, first introduce the functionalIε,ω(u) := Sε,ω(u)− 13Kε(u)=16∫|∇u|2 + 16∫|u|6 + p− 32(p+ 1)ε∫|u|p+1 + 12ω∫|u|2and observe that since Kε(V ) = 0,Iε,ω(ε)(V ) = Sε,ω(V ) < S0,0(W )and so the following quantities are all bounded uniformly in ε:∫|∇V |2,∫V 6, ε∫V p+1, ω∫V 2 . 1.By interpolationε∫V p+1 ≤ ε‖V ‖(5−p)/2L2‖V ‖3(p−1)/2L6. εω−(5−p)/4(ω∫V 2)(5−p)/4.703.2. Variational CharacterizationSo (3.34) holds, provided that ε4/(5−p)  ω. Since ω ∼ ε2, this indeed holdsfor 3 < p < 5.With the claim in hand we can finish the argument. The fact thatK0(V )→ 0 now follows from Kε(V ) = 0. Also, from Lemma 3.2.2 we knowthat for ε ≥ 0S0,0(V )− εp+ 1∫V p+1 ≤ Sε,ω(V ) ≤ S0,0(W )and so lim supε→0 S0,0(V ) ≤ S0,0(W ).Lemma 3.2.4. For a sequence εj ↓ 0, let V = Vεj be corresponding non-negative, radially-symmetric minimizers of (3.30) with ω = ω(εj). 
There is a subsequence $\varepsilon_{j_k}$ and a scaling $\mu = \mu_k$ such that, along the subsequence,
$$V^\mu = \mu^{1/2}V(\mu\cdot) \to \nu W$$
in $\dot H^1$ with $\nu = 1$.

Proof. The result with $\nu = 1$ or $\nu = 0$ follows from the bubble decomposition of Gérard [37] (see e.g. the notes of Killip and Vişan [58], in particular Theorem 4.7 and the proof of Theorem 4.4). Therefore we need only eliminate the possibility that $\nu = 0$.

If $\nu = 0$ then $\int|\nabla V_\varepsilon|^2 \to 0$ (along the given subsequence). Then by the Sobolev inequality,
$$0 = K_\varepsilon(V_\varepsilon) = (1+o(1))\int|\nabla V_\varepsilon|^2 - \frac{3(p-1)}{2(p+1)}\varepsilon\int V_\varepsilon^{p+1},$$
and so $\int|\nabla V_\varepsilon|^2 \lesssim \varepsilon\int V_\varepsilon^{p+1}$. However, we have already seen
$$\int|\nabla V_\varepsilon|^2 \lesssim \varepsilon\int V_\varepsilon^{p+1} \lesssim \varepsilon\omega^{-(5-p)/4}\Big(\omega\int V_\varepsilon^2\Big)^{(5-p)/4}\Big(\int|\nabla V_\varepsilon|^2\Big)^{3(p-1)/4}$$
via interpolation, and so
$$\Big(\int|\nabla V_\varepsilon|^2\Big)^{(7-3p)/4} \lesssim \varepsilon\omega^{-(5-p)/4}\Big(\omega\int V_\varepsilon^2\Big)^{(5-p)/4} \to 0$$
as above. Note that $(7-3p)/4 = -3(p-7/3)/4 < 0$. Hence $\nu = 0$ is impossible and so we conclude that $\nu = 1$. The result follows.

Remark 3.2.5. This lemma implies in particular that for $V = V_\varepsilon$, $\omega = \omega(\varepsilon)$, $S_{0,0}(V) = S_{0,0}(V^\mu) \to S_{0,0}(W)$, and so by (3.33) and (3.34), $\omega\int V^2 \to 0$.

Remark 3.2.6. Note that $V^\mu$ is a minimizer of the minimization problem (3.30), and a solution to (3.31), with $\varepsilon$ and $\omega$ replaced with
$$\tilde\varepsilon = \mu^{\frac{5-p}{2}}\varepsilon, \qquad \tilde\omega = \mu^2\omega.$$
Under this scaling the following properties are preserved:
$$\tilde\varepsilon^{\frac{4}{5-p}} = \mu^2\varepsilon^{\frac{4}{5-p}} \ll \mu^2\omega = \tilde\omega, \qquad \tilde\varepsilon\int (V^\mu)^{p+1} = \varepsilon\int V^{p+1} \to 0, \qquad \tilde\omega\int (V^\mu)^2 = \omega\int V^2 \to 0.$$
Moreover, $\tilde\varepsilon \to 0$ and $\tilde\omega \to 0$, the latter since otherwise $\|V^\mu\|_{L^2} \to 0$ along some subsequence, contradicting $V^\mu \to W \notin L^2$ in $\dot H^1$, and then the former by the first relation above.

Lemma 3.2.7. Let
$$V^\mu = W + \tilde\eta, \qquad \|\tilde\eta\|_{\dot H^1} \to 0, \qquad \tilde\varepsilon \to 0$$
be a sequence as provided by Lemma 3.2.4. There is a further scaling $\nu = \nu_{\tilde\varepsilon} = 1 + o(1)$ so that
$$(V^\mu)^\nu = W^\nu + \tilde\eta^\nu =: W + \hat\eta$$
retains $\|\hat\eta\|_{\dot H^1} \to 0$, but also satisfies the orthogonality condition
$$0 = \langle R_0(-\hat\omega)V\psi, \mathcal F(\hat\varepsilon, \hat\omega, \hat\eta)\rangle \tag{3.35}$$
with the corresponding $\hat\varepsilon = \nu^{(5-p)/2}\tilde\varepsilon$ and $\hat\omega = \nu^2\tilde\omega$.

Proof.
We may rewrite the above inner product as
$$\langle R_0(-\hat\omega)V\psi, \mathcal F(\hat\varepsilon,\hat\omega,\hat\eta)\rangle = -5\sqrt3\,\langle \hat\eta, (H+\hat\omega)R_0(-\hat\omega)W^4\Lambda W\rangle$$
and observe from the resonance equation (1.18)
$$5R_0(-\hat\omega)W^4\Lambda W = \Lambda W - \hat\omega R_0(-\hat\omega)\Lambda W,$$
and so
$$(H+\hat\omega)R_0(-\hat\omega)W^4\Lambda W = (-\Delta + \hat\omega - 5W^4)R_0(-\hat\omega)W^4\Lambda W = (1 - 5W^4 R_0(-\hat\omega))W^4\Lambda W = \hat\omega W^4 R_0(-\hat\omega)\Lambda W,$$
so the desired orthogonality condition reads
$$0 = \frac{1}{\sqrt{\hat\omega}}\langle \hat\eta, (H+\hat\omega)R_0(-\hat\omega)W^4\Lambda W\rangle = \sqrt{\hat\omega}\,\langle W^\nu - W + \tilde\eta^\nu, W^4 R_0(-\hat\omega)\Lambda W\rangle.$$
Now since $\Lambda W = \frac{d}{d\mu}W^\mu\big|_{\mu=1}$, by Taylor expansion
$$\|W^\nu - W - (\nu-1)\Lambda W\|_{L^6} \lesssim (\nu-1)^2,$$
and using (3.7)
$$\|W^4 R_0(-\hat\omega)\Lambda W\|_{L^{6/5}} \lesssim \|R_0(-\hat\omega)\Lambda W\|_{L^\infty} \lesssim \frac{1}{\sqrt{\hat\omega}},$$
we arrive at
$$0 = (\nu-1)\big(\sqrt{\hat\omega}\,\langle \Lambda W, W^4 R_0(-\hat\omega)\Lambda W\rangle\big) + O((\nu-1)^2) + O(\|\tilde\eta^\nu\|_{L^6}).$$
Computations exactly as for (3.19) lead to
$$\sqrt{\hat\omega}\,\langle \Lambda W, W^4 R_0(-\hat\omega)\Lambda W\rangle = \frac{6\sqrt3\pi}{5} + O(\sqrt{\hat\omega}),$$
and so the desired orthogonality condition reads
$$0 = (\nu-1)(1+o(1)) + O((\nu-1)^2) + O(\|\tilde\eta^\nu\|_{L^6}),$$
which can therefore be solved for $\nu = 1 + o(1)$ using $\|\tilde\eta^\nu\|_{L^6} = o(1)$.

The functions
$$W_{\hat\varepsilon} := (V^\mu)^\nu = W + \hat\eta$$
produced by Lemma 3.2.7 solve the minimization problem (3.30), and the PDE (3.31), with $\varepsilon$ and $\omega$ replaced (respectively) by $\hat\varepsilon \to 0$ and $\hat\omega$. Since $\nu_{\tilde\varepsilon} = 1 + o(1)$, the properties
$$\hat\varepsilon^{\frac{4}{5-p}} \ll \hat\omega \to 0, \qquad \hat\varepsilon\int W_{\hat\varepsilon}^{p+1} \to 0, \qquad \hat\omega\int W_{\hat\varepsilon}^2 \to 0$$
persist.

It remains to show that $V_{\hat\varepsilon}$ agrees with $Q_{\hat\varepsilon}$ constructed in Theorem 1.7.2. First:

Lemma 3.2.8. For $3 < r \le \infty$, and $\hat\varepsilon$ sufficiently small,
$$\|\hat\eta\|_{L^r} \lesssim \hat\varepsilon + \sqrt{\hat\omega}^{\,1-\frac3r}.$$

Proof. Since $W_{\hat\varepsilon}$ is a solution of (1.11), the remainder $\hat\eta$ must satisfy (3.4). So
$$\|\hat\eta\|_{L^r} = \big\|(H+\hat\omega)^{-1}\big(-\hat\omega W + \hat\varepsilon f(W) + N(\hat\eta)\big)\big\|_{L^r} \lesssim \hat\varepsilon + \sqrt{\hat\omega}^{\,1-\frac3r} + \|R_0(-\hat\omega)N(\hat\eta)\|_{L^r}$$
using (3.35) and after observing the computations of Lemma 3.1.9. We now establish the required bounds on the remainder, beginning with $3 < r < \infty$. Let $q \in (1, \tfrac32)$ satisfy $\tfrac1q - \tfrac1r = \tfrac23$:

• $\|R_0(-\hat\omega)\hat\varepsilon W^{p-1}\hat\eta\|_{L^r} \lesssim \hat\varepsilon\|W^{p-1}\hat\eta\|_{L^q} \lesssim \hat\varepsilon\|W\|^{p-1}_{L^{\frac32(p-1)}}\|\hat\eta\|_{L^r} \lesssim \hat\varepsilon\|\hat\eta\|_{L^r}$

• $\|R_0(-\hat\omega)\hat\varepsilon \hat\eta^p\|_{L^r} \lesssim \hat\varepsilon\hat\omega^{-\frac{5-p}{4}}\|\hat\eta^p\|_{L^{\frac{6r}{6+(p-1)r}}} \lesssim o(1)\|\hat\eta\|^{p-1}_{L^6}\|\hat\eta\|_{L^r} \lesssim o(1)\|\hat\eta\|_{L^r}$

• $\|R_0(-\hat\omega)W^3\hat\eta^2\|_{L^r} \lesssim \|W^3\hat\eta^2\|_{L^q} \lesssim \|W\|^3_{L^6}\|\hat\eta\|_{L^6}\|\hat\eta\|_{L^r} \lesssim o(1)\|\hat\eta\|_{L^r}$

• $\|R_0(-\hat\omega)\hat\eta^5\|_{L^r} \lesssim \|\hat\eta^5\|_{L^q} \lesssim \|\hat\eta\|^4_{L^6}\|\hat\eta\|_{L^r} \lesssim o(1)\|\hat\eta\|_{L^r}$

where in the second inequality we used $\hat\omega \gg \hat\varepsilon^{4/(5-p)}$. Combining, we have achieved $\|\hat\eta\|_{L^r} \lesssim$
εˆ+√ωˆ1− 3r + o(1)‖ηˆ‖Lrand so obtain the desired estimate for 3 < r < ∞. It remains to deal withr = ∞. The first three estimates proceed similarly, while the last one usesthe now-established Lr estimate:743.2. Variational Characterization• ‖R0(−ωˆ)εˆW p−1ηˆ‖L∞ . εˆ‖W p−1ηˆ‖L32−∩L 32+. εˆ‖W‖p−1L32 (p−1)−∩L 32 (p−1)+‖ηˆ‖L∞ . εˆ‖ηˆ‖L∞• ‖R0(−ωˆ)εˆηˆp‖L∞ . εˆωˆ−5−p4 ‖ηˆp‖L6p−1. o(1)‖ηˆ‖p−1L6‖ηˆ‖L∞. o(1)‖ηˆ‖L∞• ‖R0(−ωˆ)W 3ηˆ5‖L∞ . ‖W 3ηˆ2‖L32−∩L 32+ . ‖W‖3L6−∩L6+‖ηˆ‖L6‖ηˆ‖L∞. o(1)‖ηˆ‖L∞• ‖R0(−ωˆ)ηˆ5‖L∞ . ‖ηˆ5‖L32−∩L 32+ . ‖ηˆ‖4L6−∩L6+‖ηˆ‖L∞. (εˆ+ ωˆ 14−)4‖ηˆ‖L∞ . o(1)‖ηˆ‖L∞which, combined, establish the desired estimate with r =∞. Strictly speak-ing, these are a priori estimates, since we do not know ηˆ ∈ Lr for r > 6 tobegin with. However, the typical argument of performing the estimates ona series of smooth functions that approximate η remedies this after passingto the limit.Lemma 3.2.9. Write ωˆ = λˆ2. For εˆ sufficiently small, ‖ηˆ‖L∞ . εˆ, andλˆ = λ(εˆ, ηˆ) as given in Lemma 3.1.6. Moreover, Wεˆ = W + ηˆ = Qεˆ.Proof. From the orthogonality equation (3.35),0 = 〈R0(−λˆ2)V ψ,−λˆ2W + εˆW p +N(ηˆ)〉= −λˆ · λˆ〈R0(−λˆ2)V ψ,W 〉+ εˆ〈R0(−λˆ2)V ψ,W p〉+ 〈R0(−λˆ2)V ψ,N(ηˆ)〉.Now re-using estimates (3.18) and (3.19), as well as• |〈R0(−λˆ2)V ψ,W 3ηˆ2〉| . ‖R0(−λˆ2)V ψ‖L6‖W 3ηˆ2‖L6/5. ‖V ψ‖L6/5‖ηˆ‖2L∞‖W 3‖L6/5 . ‖ηˆ‖2L∞• |〈R0(−λˆ2)V ψ, ηˆ5〉| . ‖R0(−λˆ2)V ψ‖L6‖ηˆ5‖L 65. ‖V ψ‖L65‖ηˆ‖5L6 . ‖ηˆ‖5L6• |〈R0(−λˆ2)V ψ, εˆηˆp〉| . εˆ‖R0(−λˆ2)V ψ‖L6‖ηˆp‖L 65. εˆ‖V ψ‖L65‖ηˆ‖pL65 p. εˆ · ‖ηˆ‖pL65 p• |〈R0(−λˆ2)V ψ, εˆW p−1ηˆ〉| . εˆ‖R0(−λˆ2)V ψ‖L6‖W p−1ηˆ‖L 65. εˆ‖V ψ‖L65‖W p−1‖L32‖ηˆ‖L6 . εˆ‖ηˆ‖L6 ,753.2. Variational Characterizationcombined with Lemma 3.2.8, yields(λˆ− λ(1)εˆ)(1 +O(λˆ1−)) = O(λˆ2 + εˆ2 + εˆλˆ 12 )from which followsλˆ− λ(1)εˆ εˆ,and then by Lemma 3.2.8 again,‖ηˆ‖Lr . 
εˆ1− 3r , 3 < r ≤ ∞.It now follows from Lemma 3.1.6 that λˆ = λ(εˆ, ηˆ) for εˆ small enough.Finally, the uniqueness of the fixed-point in the L∞-ball of radius Rεˆfrom Lemma 3.1.9 implies that Wεˆ = Qεˆ, where Qεˆ is the solution con-structed in Theorem 1.7.2.We have so far established that, up to subsequence, and rescaling, asequence of minimizers Vεj eventually coincides with a solution Qε as con-structed in Theorem 1.7.2: ξ1/2j Vεj (ξj ·) = Qεˆj (here ξj = νjµj). It remainsto remove the scaling ξj and establish that εˆj = εj :Lemma 3.2.10. Suppose V (x) = ξ−12Qεˆ(x/ξ) solves (3.31) with ω = ω(ε)(as given in Theorem 1.7.2), where εˆ = ξ(5−p)/2ε, ωˆ = ξ2ω, and ωˆ = ω(εˆ).Then ξ = 1 and εˆ = ε, and so V = Qε.Proof. By assumption ωˆ = ξ2ω(ε) = ω(εˆ), soω(ε) = Ωε(εˆ), Ωε(εˆ) :=(εεˆ)4/(5−p)ω(εˆ).This relation is satisfied if εˆ = ε (ξ = 1), and our goal is to show it is notsatisfied for any other value of εˆ. Thus we will be done if we can showthat Ωε is monotone in εˆ. Take ε1 and ε2 with 0 < ε1 < ε2 ≤ ε0. Letα = 4/(5− p) > 2 and assume that 0 < ε2 − ε1  ε1. Denoting ω(εj) = λ2j ,we estimate:ε−α (Ωε(ε2)− Ωε(ε1)) = ε−α2 λ22 − ε−α1 λ21= ε−α2 (λ2 − λ1)(λ2 + λ1) + λ21(ε−α2 − ε−α1)≈ ε−α1 λ(1)(ε2 − ε1) · 2λ(1)ε1+ ε21(λ(1))2ε−α1(− αε1(ε2 − ε1) +O((ε2 − ε1ε1)2))≈ ε1−α1 (λ(1))2(ε2 − ε1) (2− α)< 0763.2. Variational Characterizationwhere we have used Lemma 3.27. With the monotonicity argument completewe conclude that ε = εˆ and ξ = 1 so there follows V = Qε.The remaining lemma completes the proof of Theorem 1.7.7:Lemma 3.2.11. There is ε0 > 0 such that for 0 < ε ≤ ε0 and ω = ω(ε), thesolution Qε of (3.31) constructed in Theorem 1.7.2 is the unique positive,radially symmetric solution of the minimization problem (3.30).Proof. This is the culmination of the previous series of Lemmas. We knowthat minimizers V = Vε exist by Lemma 3.2.2. Arguing by contradiction, ifthe statement is false, there is a sequence Vεj , εj → 0, of such minimizers,for which Vεj 6= Qεj . 
We apply Lemmas 3.2.3, 3.2.4, 3.2.7, 3.2.8 and 3.2.10in succession to this sequence, to conclude that along a subsequence, Vεjand Qεj eventually agree, a contradiction.Finally, for a given ε, we establish a range of ω for which a minimizerexists and is, up to scaling, a constructed solution. This addresses Remark1.7.9.Corollary 3.2.12. Fix ε > 0 and take ω ∈ [ω,∞) whereω = ε4/(5−p)ε−4/(5−p)0 ω(ε0) ≤ ω(ε).The minimization problem (3.30) with ε and ω has a solution Q given byQ(x) = µ1/2Qεˆ(µx)where Qεˆ is a constructed solution with 0 < εˆ ≤ ε0 and corresponding ω(εˆ).The scaling factor, µ, satisfies the relationshipsε = εˆµ(5−p)/2, ω = ω(εˆ)µ2.Proof. Fix ε > 0. Take any 0 < εˆ ≤ ε0 and corresponding constructed ω(εˆ)and constructed solution Qεˆ. Then, for scaling µ = (ε/εˆ)2/(5−p) the functionQ(x) = µ1/2Qεˆ(µx)is a solution to the elliptic problem (3.31) with ε and ω = ω(εˆ)µ2. Recallfrom Lemma 3.2.10 that ω(εˆ)µ2 is monotone in εˆ. Taking εˆ ↓ 0 yieldsω →∞. Setting εˆ = ε0 yields ω = ω.In other words if we fix ε and ω ∈ [ω,∞) from the start we determinean εˆ and µ that generate the desired Q. We claim that the function Q(x)773.3. Dynamics Below the Ground Statesis a minimizer of the problem (3.30) with ε and ω. Suppose not. Thatis, suppose there exists a function 0 6= v ∈ H1 with Kε(v) = 0 such thatSε,ω(v) < Sε,ω(Q). Set w(x) = µ−1/2v(µ−1x) and note that 0 = Kε(v) =Kεˆ(w). We now seeSεˆ,ω(εˆ)(w) = Sε,ω(v) < Sε,ω(Q) = Sεˆ,ω(εˆ)(Qεˆ)which contradicts the fact that Qεˆ is a minimizer of the problem (3.30) withεˆ and ω(εˆ). Therefore, Q(x) is a minimizer of (3.30) with ε and ω, whichconcludes the proof.3.3 Dynamics Below the Ground StatesIn this final section we establish Theorem 1.7.10, the scattering/blow-updichotomy for the perturbed critical NLS (2.1).We begin by summarizing the local existence theory for (2.1). 
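As a preliminary check on the exponent pairs that will appear in the Strichartz norms, using the three-dimensional admissibility relation $2/r + 3/q = 3/2$ (recorded in Remark 3.3.2), the arithmetic is:

```latex
% Pairs defining S(I):
(r,q) = (\infty, 2):\quad \tfrac{2}{\infty} + \tfrac{3}{2} = \tfrac{3}{2},
\qquad
(r,q) = (2, 6):\quad \tfrac{2}{2} + \tfrac{3}{6} = \tfrac{3}{2}.

% The pair (10,10) is NOT itself admissible, since 2/10 + 3/10 = 1/2;
% the L^{10}_t L^{10}_x norm instead comes from the admissible pair
% (10, 30/13) combined with a Sobolev embedding:
\tfrac{2}{10} + \tfrac{3}{30/13} = \tfrac{1}{5} + \tfrac{13}{10} = \tfrac{3}{2},
\qquad
\dot W^{1,\,30/13}(\mathbb{R}^3) \hookrightarrow L^{10}(\mathbb{R}^3):
\quad \tfrac{13}{30} - \tfrac{1}{3} = \tfrac{1}{10}.
```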
This is based on the classical Strichartz estimates for solutions of the homogeneous linear Schrödinger equation
$$i\partial_t u = -\Delta u, \quad u|_{t=0} = \phi \in L^2(\mathbb{R}^3) \implies u(x,t) = e^{it\Delta}\phi \in C(\mathbb{R}, L^2(\mathbb{R}^3)),$$
and the inhomogeneous linear Schrödinger equation (with zero initial data)
$$i\partial_t u = -\Delta u + f(x,t), \quad u|_{t=0} = 0 \implies u(x,t) = -i\int_0^t e^{i(t-s)\Delta} f(\cdot,s)\,ds:$$
$$\|e^{it\Delta}\phi\|_{S(\mathbb{R})} \le C\|\phi\|_{L^2(\mathbb{R}^3)}, \qquad \Big\|\int_0^t e^{i(t-s)\Delta} f(\cdot,s)\,ds\Big\|_{S(I)} \le C\|f\|_{N(I)}, \tag{3.36}$$
where we have introduced certain Lebesgue norms for space-time functions $f(x,t)$ on a time interval $t \in I \subset \mathbb{R}$:
$$\|f\|_{L^r_t L^q_x(I)} = \big\|\,\|f(\cdot,t)\|_{L^q(\mathbb{R}^3)}\big\|_{L^r(I)},$$
$$\|f\|_{S(I)} := \|f\|_{L^\infty_t L^2_x(I)\, \cap\, L^2_t L^6_x(I)}, \qquad \|f\|_{N(I)} := \|f\|_{L^1_t L^2_x(I)\, +\, L^2_t L^{6/5}_x(I)},$$
together with the integral (Duhamel) reformulation of the Cauchy problem (2.1):
$$u(x,t) = e^{it\Delta}u_0 + i\int_0^t e^{i(t-s)\Delta}\big(|u|^4 u + \varepsilon|u|^{p-1}u\big)\,ds,$$
which in particular gives the sense in which we consider $u(x,t)$ to be a solution of (2.1). The following lemma summarizing the local theory is standard (see, for example, [15, 56]):

Lemma 3.3.1. Let $3 \le p < 5$, $\varepsilon > 0$. Given $u_0 \in H^1(\mathbb{R}^3)$, there is a unique solution $u \in C((-T_{\min}, T_{\max}); H^1(\mathbb{R}^3))$ of (2.1) on a maximal time interval $I_{\max} = (-T_{\min}, T_{\max}) \ni 0$. Moreover:
1. space-time norms: $u, \nabla u \in S(I)$ for each compact time interval $I \subset I_{\max}$;
2. blow-up criterion: if $T_{\max} < \infty$, then $\|u\|_{L^{10}_t L^{10}_x([0,T_{\max}))} = \infty$ (with a similar statement for $T_{\min}$);
3. scattering: if $T_{\max} = \infty$ and $\|u\|_{L^{10}_t L^{10}_x([0,\infty))} < \infty$, then $u$ scatters (forward in time) to 0 in $H^1$:
$$\exists\, \phi_+ \in H^1(\mathbb{R}^3) \text{ s.t. } \|u(\cdot,t) - e^{it\Delta}\phi_+\|_{H^1} \to 0 \text{ as } t \to \infty$$
(with a similar statement for $T_{\min}$);
4. small data scattering: for $\|u_0\|_{H^1}$ sufficiently small, $I_{\max} = \mathbb{R}$, $\|u\|_{L^{10}_t L^{10}_x(\mathbb{R})} \lesssim \|\nabla u_0\|_{L^2}$, and $u$ scatters (in both time directions).

Remark 3.3.2. The appearance here of the $L^{10}_t L^{10}_x$ space-time norm is natural in light of the Strichartz estimates (3.36). Indeed, interpolation between $L^\infty_t L^2_x$ and $L^2_t L^6_x$ shows that
$$\|e^{it\Delta}\phi\|_{L^r_t L^q_x(\mathbb{R})} \lesssim \|\phi\|_{L^2}, \qquad \frac{2}{r} + \frac{3}{q} = \frac{3}{2}, \quad 2 \le r \le \infty$$
(such an exponent pair $(r,q)$ is called admissible), so then if $\nabla\phi \in L^2$, by a Sobolev inequality, $\|e^{it\Delta}\phi\|_{L^{10}_x} \lesssim$
$\|\nabla e^{it\Delta}\phi\|_{L^{30/13}_x} \in L^{10}_t$, since $(r, q) = (10, \frac{30}{13})$ is admissible.

The next lemma is a standard extension of the local theory, called a perturbation or stability result, which shows that any 'approximate solution' has an actual solution remaining close to it. In our setting (see [56, 78]):

Lemma 3.3.3. Let $\tilde u : \mathbb{R}^3 \times I \to \mathbb{C}$ be defined on a time interval $0 \in I \subset \mathbb{R}$ with
$$\|\tilde u\|_{L^\infty_t H^1_x(I)\, \cap\, L^{10}_t L^{10}_x(I)} \le M,$$
and suppose $u_0 \in H^1(\mathbb{R}^3)$ satisfies $\|u_0\|_{L^2} \le M$. There exists $\delta_0 = \delta_0(M) > 0$ such that, for $0 < \delta < \delta_0$: if $\tilde u$ is an approximate solution of (2.1) in the sense
$$\|\nabla e\|_{L^{10/7}_t L^{10/7}_x(I)} \le \delta, \qquad e := i\partial_t\tilde u + \Delta\tilde u + |\tilde u|^4\tilde u + \varepsilon|\tilde u|^{p-1}\tilde u,$$
with initial data close to $u_0$ in the sense
$$\|\nabla(\tilde u(\cdot, 0) - u_0)\|_{L^2} \le \delta,$$
then the solution $u$ of (2.1) with initial data $u_0$ has $I_{\max} \supset I$, and
$$\|\nabla(u - \tilde u)\|_{S(I)} \le C(M)\,\delta.$$

Remark 3.3.4. The space-time norm $\nabla e \in L^{10/7}_t L^{10/7}_x$ in which the error is measured is natural in light of the Strichartz estimates (3.36), since $L^{10/7}_t L^{10/7}_x$ is the dual space of $L^{10/3}_t L^{10/3}_x$, and $(\frac{10}{3}, \frac{10}{3})$ is an admissible exponent pair.

Given a local existence theory as above, an obvious next problem is to determine whether the solutions from particular initial data $u_0$ are global ($I_{\max} = \mathbb{R}$), or exhibit finite-time blow-up ($T_{\max} < \infty$ and/or $T_{\min} < \infty$). Theorem 1.7.10 solves this problem for radially-symmetric initial data lying 'below the ground state' level of the action: for any $\varepsilon > 0$, $\omega > 0$, set
$$m_{\varepsilon,\omega} := \inf\{S_{\varepsilon,\omega}(u) \mid u \in H^1(\mathbb{R}^3)\setminus\{0\},\ K_\varepsilon(u) = 0\} \tag{3.37}$$
(see (1.22) for expressions for the functionals $S_{\varepsilon,\omega}$ and $K_\varepsilon$), and note that for $\varepsilon \ll 1$ and $\omega = \omega(\varepsilon)$, by Theorem 1.7.7 we have $m_{\varepsilon,\omega} = S_{\varepsilon,\omega}(Q_\varepsilon)$. From here on, we fix a choice of
$$\varepsilon > 0, \qquad \omega > 0, \qquad p \in (3, 5)$$
(though some results discussed below extend to $p \in (\frac{7}{3}, 5)$):

Theorem 3.3.5. Let $u_0 \in H^1(\mathbb{R}^3)$ be radially-symmetric and satisfy $S_{\varepsilon,\omega}(u_0) < m_{\varepsilon,\omega}$, and let $u$ be the corresponding solution to (2.1):
1. If $K_\varepsilon(u_0) \ge 0$, $u$ is global, and scatters to 0 as $t \to \pm\infty$;
2.
if $K_\varepsilon(u_0) < 0$, $u$ blows up in finite time (in both time directions).

Remark 3.3.6. The argument giving finite-time blow-up (the second statement) is classical, going back to [61, 62]. It rests on the following ingredients: conservation of mass and energy implies $S_{\varepsilon,\omega}(u) \equiv S_{\varepsilon,\omega}(u_0) < m_{\varepsilon,\omega}$, so that the condition $K_\varepsilon(u) < 0$ is preserved (by the definition of $m_{\varepsilon,\omega}$); a spatially cut-off version of the formal variance identity for (NLS),
$$\frac{d^2}{dt^2}\,\frac12\int |x|^2|u(x,t)|^2\,dx = \frac{d}{dt}\int x\cdot \mathrm{Im}(\bar u\nabla u)\,dx = 2K_\varepsilon(u); \tag{3.38}$$
and the exploitation of radial symmetry to control the errors introduced by the cut-off. In fact, a complete argument in exactly our setting is given as the proof of Theorem 1.3 in [1] (it is stated there for dimensions $\ge 4$, but in fact the proof covers dimension 3 as well). So we will focus here only on the proof of the first (scattering) statement.

The concentration-compactness approach of Kenig-Merle [54] to proving the scattering statement is by now standard. In particular, [2] provides a complete proof for the analogous problem in dimensions $\ge 5$. In fact, the proof there is more complicated for two reasons: there is no radial symmetry restriction; and in dimension $n$, the corresponding nonlinearity includes a term $|u|^{p-1}u$ with $p > 1 + \frac4n$ which loses smoothness, creating extra technical difficulties. We will therefore provide just a sketch of the (simpler) argument for our case, closely following [56], where this approach is implemented for the defocusing quintic NLS perturbed by a cubic term, taking the additional variational arguments we need from [1, 2], and highlighting points where modifications are needed.

In the next lemma we recall some standard variational estimates for functions with action below the ground-state level $m_{\varepsilon,\omega}$. The idea goes back to [63], but proofs in this setting are found in [1, 2].
Recall that the 'unperturbed' ground-state level is attained by the Aubin-Talenti function $W$:
$$m_{0,0} := E_0(W) = \inf\{E_0(u) \mid u \in H^1(\mathbb{R}^3)\setminus\{0\},\ K_0(u) = 0\},$$
and introduce the auxiliary functional
$$I_\omega(u) := S_{\varepsilon,\omega}(u) - \frac{2}{3(p-1)}K_\varepsilon(u) = \frac{p-\frac73}{2(p-1)}\int|\nabla u|^2 + \frac{5-p}{6(p-1)}\int|u|^6 + \frac{\omega}{2}\int|u|^2,$$
which is useful since all its terms are positive, and note
$$K_\varepsilon(u) \ge 0 \implies \|u\|_{H^1}^2 \lesssim I_\omega(u) \le S_{\varepsilon,\omega}(u). \tag{3.39}$$
Define, for $0 < m^* < m_{\varepsilon,\omega}$, the set
$$\mathcal{A}_{m^*} := \{u \in H^1(\mathbb{R}^3) \mid S_{\varepsilon,\omega}(u) \le m^*,\ K_\varepsilon(u) > 0\}$$
and note that it is preserved by (2.1):
$$u_0 \in \mathcal{A}_{m^*} \implies u(\cdot,t) \in \mathcal{A}_{m^*} \text{ for all } t \in I_{\max}.$$
Indeed, by conservation of mass and energy, $S_{\varepsilon,\omega}(u(\cdot,t)) = S_{\varepsilon,\omega}(u_0) \le m^*$. Moreover, if for some $t_0 \in I_{\max}$, $K_\varepsilon(u(\cdot,t_0)) \le 0$, then by $H^1$ continuity of $u(\cdot,t)$ and of $K_\varepsilon$, we must have $K_\varepsilon(u(\cdot,t_1)) = 0$ for some $t_1 \in I_{\max}$, contradicting $m^* < m_{\varepsilon,\omega}$.

Lemma 3.3.7.
1. $m_{\varepsilon,\omega} \le m_{0,0}$, and (3.37) admits a minimizer if $m_{\varepsilon,\omega} < m_{0,0}$;
2. we have
$$m_{\varepsilon,\omega} = \inf\{I_\omega(u) \mid u \in H^1(\mathbb{R}^3)\setminus\{0\},\ K_\varepsilon(u) \le 0\}, \tag{3.40}$$
and a minimizer for this problem is a minimizer for (3.37), and vice versa;
3. given $0 < m^* < m_{\varepsilon,\omega}$, there is $\kappa(m^*) > 0$ such that
$$u \in \mathcal{A}_{m^*} \implies K_\varepsilon(u) \ge \kappa(m^*) > 0. \tag{3.41}$$

After the local theory, and in particular the perturbation Lemma 3.3.3, the key analytical ingredient is a profile decomposition, introduced into the analysis of critical nonlinear dispersive PDE by [5, 55]. This version, taken from [56] (and simplified to the radially-symmetric setting), can be thought of as making precise the lack of compactness in the Strichartz estimates for $\dot H^1(\mathbb{R}^3)$ data, when the data is bounded in $H^1(\mathbb{R}^3)$:

Lemma 3.3.8. ([56], Theorem 7.5) Let $\{f_n\}_{n=1}^\infty$ be a sequence of radially symmetric functions, bounded in $H^1(\mathbb{R}^3)$. Possibly passing to a subsequence, there is $J^* \in \{0, 1, 2, \dots\} \cup \{\infty\}$ such that for each finite $1 \le j \le J^*$ there exist (radially symmetric) 'profiles' $\phi^j \in \dot H^1 \setminus \{0\}$, 'scales' $\{\lambda^j_n\}_{n=1}^\infty \subset (0,1]$, and 'times' $\{t^j_n\}_{n=1}^\infty \subset \mathbb{R}$ satisfying, as $n \to \infty$,
$$\lambda^j_n \equiv 1 \text{ or } \lambda^j_n \to 0, \qquad t^j_n \equiv 0 \text{ or } t^j_n \to \pm\infty.$$
If $\lambda^j_n \equiv 1$ then additionally $\phi^j \in L^2(\mathbb{R}^3)$. For some $0 < \theta < 1$, define
$$\phi^j_n(x) := \begin{cases} \big[e^{it^j_n\Delta}\phi^j\big](x), & \lambda^j_n \equiv 1, \\[4pt] (\lambda^j_n)^{-\frac12}\big[e^{it^j_n\Delta}P_{\ge(\lambda^j_n)^\theta}\,\phi^j\big]\big(\tfrac{x}{\lambda^j_n}\big), & \lambda^j_n \to 0, \end{cases}$$
where $P_{\ge N}$ denotes a standard smooth Fourier multiplier operator (Littlewood-Paley projector) which removes the Fourier frequencies $\le N$. Then for each finite $1 \le J \le J^*$ we have the decomposition
$$f_n = \sum_{j=1}^J \phi^j_n + w^J_n$$
with:
- small remainder: $\lim_{J\to J^*}\limsup_{n\to\infty}\|e^{it\Delta}w^J_n\|_{L^{10}_t L^{10}_x(\mathbb{R})} = 0$;
- decoupling: for each $J$, $\lim_{n\to\infty}\big[M(f_n) - \sum_{j=1}^J M(\phi^j_n) - M(w^J_n)\big] = 0$, and the same statement for the functionals $E_\varepsilon$, $K_\varepsilon$, $S_{\varepsilon,\omega}$ and $I_\omega$;
- orthogonality: $\lim_{n\to\infty}\Big[\frac{\lambda^j_n}{\lambda^k_n} + \frac{\lambda^k_n}{\lambda^j_n} + \frac{|t^j_n(\lambda^j_n)^2 - t^k_n(\lambda^k_n)^2|}{\lambda^j_n\lambda^k_n}\Big] = \infty$ for $j \ne k$.

The global existence and scattering statement 1 of Theorem 1.7.10 is established by a contradiction argument. For $0 < m < m_{\varepsilon,\omega}$, set
$$\tau(m) := \sup\{\|u\|_{L^{10}_t L^{10}_x(I_{\max})} \mid S_{\varepsilon,\omega}(u_0) \le m,\ K_\varepsilon(u_0) > 0\},$$
where the supremum is taken over all radially-symmetric solutions of (2.1) whose data $u_0$ satisfies the given conditions. It follows from the local theory above that $\tau$ is a non-decreasing, continuous function of $m$ into $[0,\infty]$, and that $\tau(m) < \infty$ for sufficiently small $m$ (by part 4 of Lemma 3.3.1). By parts 2-3 of Lemma 3.3.1, if $\tau(m) < \infty$ for all $m < m_{\varepsilon,\omega}$, the first statement of Theorem 1.7.10 follows. So we suppose this is not the case, and that in fact
$$m^* := \sup\{m \mid 0 < m < m_{\varepsilon,\omega},\ \tau(m) < \infty\} < m_{\varepsilon,\omega}.$$
By continuity, $\tau(m^*) = \infty$, and so there exists a sequence $u_n(x,t)$ of global, radially-symmetric solutions of (2.1) satisfying
$$S_{\varepsilon,\omega}(u_n) \le m^*, \qquad K_\varepsilon(u_n(\cdot,0)) > 0, \tag{3.42}$$
and
$$\lim_{n\to\infty}\|u_n\|_{L^{10}_t L^{10}_x([0,\infty))} = \lim_{n\to\infty}\|u_n\|_{L^{10}_t L^{10}_x((-\infty,0])} = \infty \tag{3.43}$$
(the last condition can be arranged by time shifting, if needed). The idea is to pass to a limit in this sequence in order to obtain a solution sitting at the threshold action $m^*$.

Lemma 3.3.9. There is a subsequence (still labelled $u_n$) such that $u_n(x,0)$ converges in $H^1(\mathbb{R}^3)$.

Proof.
This is essentially Proposition 9.1 of [56], with slight modifications to incorporate the variational structure; we give a brief sketch. The sequence $u_n(\cdot,0)$ is bounded in $H^1$ by (3.39), so we may apply the profile decomposition Lemma 3.3.8: up to subsequence,
$$u_n(\cdot,0) = \sum_{j=1}^J \phi^j_n + w^J_n.$$
If we can show there is only one profile ($J^* = 1$), that $\lambda^1_n \equiv 1$, $t^1_n \equiv 0$, and that $w^1_n \to 0$ in $H^1$, we have proved the lemma. By (3.42) and the decoupling,
$$m^* - \frac{2}{3(p-1)}\kappa(m^*) \ge S_{\varepsilon,\omega}(u_n(\cdot,0)) - \frac{2}{3(p-1)}K_\varepsilon(u_n(\cdot,0)) = I_\omega(u_n(\cdot,0)) = \sum_{j=1}^J I_\omega(\phi^j_n) + I_\omega(w^J_n) + o(1),$$
and since $I_\omega$ is non-negative, we have, for $n$ large enough, $I_\omega(\phi^j_n) < m^*$ for each $j$ and $I_\omega(w^J_n) < m^*$. Since $m^* < m_{\varepsilon,\omega}$, it follows from (3.40) that $K_\varepsilon(\phi^j_n) > 0$ and $K_\varepsilon(w^J_n) \ge 0$, so also $S_{\varepsilon,\omega}(\phi^j_n) > 0$ and $S_{\varepsilon,\omega}(w^J_n) \ge 0$. Hence, if there is more than one profile, by the decoupling
$$m^* \ge S_{\varepsilon,\omega}(u_n(\cdot,0)) = \sum_{j=1}^J S_{\varepsilon,\omega}(\phi^j_n) + S_{\varepsilon,\omega}(w^J_n) + o(1),$$
we have, for each $j$ and $n$ large enough, for some $\eta > 0$,
$$S_{\varepsilon,\omega}(\phi^j_n) \le m^* - \eta, \qquad K_\varepsilon(\phi^j_n) > 0. \tag{3.44}$$
Following [56], we introduce nonlinear profiles $v^j_n$ associated to each $\phi^j_n$.

First, suppose $\lambda^j_n \equiv 1$. If $t^j_n \equiv 0$, then $v^j_n = v^j$ is defined to be the solution to (2.1) with initial data $\phi^j$. If $t^j_n \to \pm\infty$, $v^j$ is defined to be the solution scattering (in $H^1$) to $e^{it\Delta}\phi^j$ as $t \to \pm\infty$, and $v^j_n(x,t) := v^j(t + t^j_n)$. In both cases, it follows from (3.44) that $v^j_n$ is a global solution, with $\|v^j_n\|_{L^{10}_t L^{10}_x(\mathbb{R})} \le \tau(m^* - \eta) < \infty$.

For the case $\lambda^j_n \to 0$, we simply let $v^j_n$ be the solution of (2.1) with initial data $\phi^j_n$. As in [56], Proposition 8.3, $v^j_n$ is approximated by the solution $\tilde u^j_n$ of the unperturbed critical NLS (1.13) with data $\phi^j_n$ (since the profile is concentrating, the sub-critical perturbation 'scales away'), or by a scattering procedure in case $t^j_n \to \pm\infty$. The key additional point here is that by (3.44), and since $m^* < m_{\varepsilon,\omega} \le m_{0,0}$, it follows that for $n$ large enough
$$E_0(v^j_n) \le m^* < m_{0,0}, \qquad K_0(v^j_n) > 0,$$
and so by [54], $\tilde u^j_n$ is a global solution of (1.13), with $\|\tilde u^j_n\|_{L^{10}_t L^{10}_x(\mathbb{R})} \le C(m^*) < \infty$.
It then follows from Lemma 3.3.3 that the same is true of $v^j_n$.

These nonlinear profiles are used to construct what are shown in [56] to be increasingly accurate (for sufficiently large $J$ and $n$) approximate solutions in the sense of Lemma 3.3.3,
$$u^J_n(x,t) := \sum_{j=1}^J v^j_n(x,t) + e^{it\Delta}w^J_n,$$
which are moreover global with uniform space-time bounds. This contradicts (3.43). Hence there is only one profile: $J^* = 1$, and the decoupling also implies $\|w^1_n\|_{H^1} \to 0$. Finally, the possibilities $t^1_n \to \pm\infty$ or $\lambda^1_n \to 0$ are excluded just as in [56], completing the argument.

Given this lemma, let $u_0 \in H^1(\mathbb{R}^3)$ be the $H^1$ limit of (a subsequence of) $u_n(x,0)$, and let $u(x,t)$ be the corresponding solution of (2.1) on its maximal existence interval $I_{\max} \ni 0$. We see $S_{\varepsilon,\omega}(u) = S_{\varepsilon,\omega}(u_0) \le m^*$. Whether $u$ is global or not, it follows from Lemma 3.3.1 (part 2), (3.43), and Lemma 3.3.3 that
$$\|u\|_{L^{10}_t L^{10}_x(I_{\max})} = \infty, \quad \text{hence also } S_{\varepsilon,\omega}(u) = m^*.$$
It follows also that
$$\{u(\cdot,t) \mid t \in I_{\max}\} \text{ is a pre-compact set in } H^1(\mathbb{R}^3).$$
To see this, let $\{t_n\}_{n=1}^\infty \subset I_{\max}$, and note that since
$$\|u\|_{L^{10}_t L^{10}_x((-T_{\min},\,t_n])} = \|u\|_{L^{10}_t L^{10}_x([t_n,\,T_{\max}))} = \infty,$$
(the proof of) Lemma 3.3.9 applied to the sequence $u(x, t - t_n)$ implies that $\{u(x,t_n)\}$ has a convergent subsequence in $H^1$.

The final step is to show that this 'would-be' solution $u$ with these special properties, sometimes called a critical element, cannot exist. For this, first note that $u$ must be global: $I_{\max} = \mathbb{R}$. This is because if, say, $T_{\max} < \infty$, then for any sequence $t_n \to T_{\max}^-$, $u(\cdot,t_n) \to \tilde u_0 \in H^1(\mathbb{R}^3)$ (up to subsequence) in $H^1$, by the pre-compactness.
Then, by comparing $u$ with the solution $\tilde u$ of (2.1) with initial data $\tilde u_0$ at $t = t_n$ using Lemma 3.3.3, we conclude that $u$ exists for times beyond $T_{\max}$, a contradiction.

Finally, the possible existence of the (now global) solution $u$ is ruled out via a suitably cut-off version of the virial identity (3.38), using (3.41) and the compactness to control the errors introduced by the cut-off, exactly as in [56] (Proposition 10, and what follows it).

Chapter 4
Directions for Future Study

We collect here some suggestions (and speculations) for future problems relating to the results of Chapters 2 and 3.

4.1 Improvements and Extensions of the Current Work

As mentioned in Section 1.6, the asymptotic stability for the 1D mass sub-critical ($p \ne 3$) NLS is still open. This problem presents serious challenges. A successful attempt on it may involve detailed knowledge of the spectrum of the linearized operator, as obtained in Chapter 2.

The results of Chapter 3 apply only to dimension 3. The works [1, 2] addressed the dynamics (scattering/blow-up) for the perturbed energy-critical NLS (below the threshold) in dimensions $n \ge 4$. Nevertheless, it would be interesting to see if the construction of Section 3.1 can be achieved in 4D. One would need to redo the resolvent estimates of Section 3.1.2 using [46] in place of [47], as well as perform many of the computations in 4D, since we have used the particular 3D free resolvent expansion (3.5), the particular 3D Young's inequalities (3.6), (3.7), and so on. If the same methods produce constructed solutions in 4D, it is possible we could complete the analysis of Sections 3.2 and 3.3 to achieve a dynamical theorem. While such a theorem already exists [1, 2], some comparison may be interesting.

In dimensions $n \ge 5$ the linearized operator (1.17) no longer has a resonance (a resonance is impossible in dimensions 5 and higher [45]) but does have an edge-eigenvalue.
Replacing the role of [47] with [45], we may be able to use our methods to achieve the results of Chapter 3 in 5D and higher.

One important case missing from Sections 3.2 and 3.3 is the case $p = 3$. The cubic-quintic NLS is important for applications and seems as well to present interesting mathematical difficulties. While we are able to construct solitary wave solutions in Section 3.1, we cannot demonstrate these solutions to be ground states in Section 3.2. In Lemma 3.2.2 the leading order in the computation of $S_{\varepsilon,\omega}(Q)$ is zero when $p = 3$, and the next-to-leading order is difficult to resolve. If we were able to at least demonstrate the existence of a ground state, we could run the arguments of Section 3.3 to achieve our dynamical theorem for $p = 3$ (although a ground state in this case may simply not exist). Supposing that a ground state solution does exist for $p = 3$, we would still have difficulty demonstrating our constructed solutions to be the ground state (which they may simply not be), as the series of Lemmas 3.2.3, 3.2.4, 3.2.7, 3.2.8 and 3.2.10 in Section 3.2 all require $3 < p < 5$. One could look for additional boundary-case arguments, or else try to demonstrate that our constructed solution is not a ground state.

4.2 Small Solutions to the Gross-Pitaevskii Equation

A distinct but related problem is to consider solutions to the Gross-Pitaevskii equation (the nonlinear Schrödinger equation with a potential)
$$i\partial_t\psi = (-\Delta + V)\psi \pm |\psi|^{p-1}\psi \tag{4.1}$$
with small initial data $\psi(x,0) = \psi_0 \in H^1$ in $n$ dimensions. The function $V(x)$ is a potential, which we can think of as a Schwartz (fast-decaying) function. The power $p$ should be taken mass super-critical but energy sub-critical to ensure solutions do not blow up in finite time. Global well-posedness is then a result of conservation of mass and energy and the smallness of the initial data (see [15], Chapter 6).
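The 'mass super-critical but energy sub-critical' window just mentioned is easy to make explicit. A quick arithmetic check (the helper name below is ours, not the thesis's):

```python
from fractions import Fraction as F

def critical_powers(n):
    # mass-critical and energy-critical powers for i u_t = -Delta u +- |u|^(p-1) u
    # on R^n; the inter-critical window is mass_crit < p < energy_crit (n >= 3)
    mass_crit = 1 + F(4, n)
    energy_crit = 1 + F(4, n - 2)
    return mass_crit, energy_crit

# in 3D the window is 7/3 < p < 5, matching the range p in (7/3, 5)
# (and the stronger restriction 3 < p < 5) used in Chapter 3
assert critical_powers(3) == (F(7, 3), 5)
```

The same arithmetic gives, for example, the window $2 < p < 3$ in dimension $n = 4$.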
We think mainly about $H^1$ results, but there are also results where further localizing assumptions ($\psi_0 \in L^1$) are made on the initial data, such as [66], [69], [70], [87].

The simplest situation is when $-\Delta + V$ has neither bound states nor a resonance. In this case all solutions of (4.1) will scatter, a result due to [49] and [73]. The result with a single simple bound state was obtained for a class of nonlinearities in [42] in 3D, and was extended to 1D with supercritical power nonlinearity in [59]. In this situation the bound state of $-\Delta + V$ generates nonlinear bound states of (4.1), which give rise to stable solitary waves. Both [42] and [59] establish that any small solution can be decomposed into a piece approaching a nonlinear bound state and a piece that scatters. The state of the art is [27], which treats the 3D cubic defocusing equation with many simple eigenvalues. There are some conditions on the relative values of the eigenvalues, but the treatable cases are quite generic. While the result obtained is analogous to [42] and [59], the methods used in [27] differ: that paper uses instead variational methods. By means of a Birkhoff normal form argument, they find an effective Hamiltonian which gives rise to a nonlinear Fermi Golden Rule. The connection between the Hamiltonian structure and a Fermi Golden Rule was originally introduced in [24]; see also [25] for the intuition behind the argument. For us, the special case of two eigenvalues in [27] will be most relevant. There are also some other results for two bound states [71], [79], [80], [81], [82], which impose stronger restrictions on the initial data and on the placement of the eigenvalues.

Our questions revolve around the case when the operator $-\Delta + V$ has a resonance. That is, $(-\Delta + V)\phi_1 = 0$ for some $\phi_1 \notin L^2$ with $\phi_1 \in L^q$ for some $2 < q \le \infty$, where $q$ depends on the dimension $n$.
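For intuition about the $q$-range: in 3D a zero-energy resonance function typically decays like $c/|x|$ at infinity (as for the flat Laplacian), and such a tail lies in $L^q(|x| > 1)$ exactly when $q > 3$. A small sketch, under that model-tail assumption:

```python
import math

def tail_integral(q, n, R):
    # closed form of the radial integral  int_1^R r^(n-1) * (r^(-1))^q dr,
    # i.e. the q-th power of the L^q norm of |x|^(-1) over 1 < |x| < R in R^n
    a = n - q
    return math.log(R) if a == 0 else (R ** a - 1) / a

# n = 3: the tail integral converges as R -> infinity iff q > 3
assert tail_integral(4, 3, 1e6) < 1.0    # q = 4: bounded (tends to 1)
assert tail_integral(2, 3, 1e6) > 1e5    # q = 2: diverges
```

This is only a heuristic for the flat-Laplacian model tail; the precise admissible $q$-range for a given $V$ is as in the resolvent-expansion references [47, 48].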
For the statement of the questions below we also assume that $-\Delta + V$ has a simple bound state, that is, $(-\Delta + V)\phi_0 = e_0\phi_0$ with $e_0 < 0$. We could, however, ask analogous questions if $-\Delta + V$ has no bound state and just the resonance, or if instead of a resonance we have a simple threshold eigenvalue.

Firstly, we may suspect that $\phi_1$ will generate nonlinear bound states. When $H := -\Delta + V$ has a bound state, the nonlinear problem (4.1) admits a family of nonlinear bound states $Q_0[z]$, parametrized by $z = (Q_0, \phi_0)$, with eigenvalue $E_0 = E_0[|z|]$. This follows from standard bifurcation theory (see the Appendix of [42]). The existence of nonlinear bound states coming from the resonance eigenfunction does not follow immediately from the true-eigenvalue result. If, for example, we write
$$Q_1(x) = z\phi_1(x) + q(x), \tag{4.2}$$
then to have $Q_1 \in L^2$ we cannot have $q \in L^2$. The Birman-Schwinger trick used in Chapter 2 does not work due to the presence of the nonlinear term in (4.1), but we can instead proceed as in Chapter 3. A formal and preliminary computation suggests that, generically, in the defocusing case the resonance will not yield nonlinear bound states, while in the focusing case we will have nonlinear bound states with corresponding eigenvalues close to the threshold. The idea is to follow the analysis of Section 3.1: substitute (4.2) into the nonlinear eigenvalue equation
$$(H + \lambda^2)Q_1 \pm |Q_1|^{p-1}Q_1 = 0$$
and write the resulting equation as a fixed-point problem for $q$,
$$q = (H + \lambda^2)^{-1}f.$$
Here $f = f(\lambda, z, q)$. From [47] and [48] we have the resolvent expansion
$$(H + \lambda^2)^{-1} = \frac{1}{\lambda}\langle R_0(\lambda)V\phi_1, \cdot\rangle\phi_1,$$
where $R_0(\lambda) = (-\Delta + \lambda^2)^{-1}$ is the free resolvent. The idea now is to solve
$$0 = \langle R_0(\lambda)V\phi_1, f\rangle, \qquad q = (H + \lambda^2)^{-1}f$$
for $\lambda = \lambda(z)$ via fixed-point arguments, where $q$ is in a higher $L^p$ space. Adjustments of the fixed-point arguments of Section 3.1 may prove fruitful in this setting.

Secondly, we may consider the asymptotic stability of the ground state family $e^{-iE_0t}Q_0$.
One approach would be to proceed as in [42] but with weaker decay estimates; taking the power in the nonlinearity higher may help in this direction. An alternative, and perhaps more favourable, approach would be to understand the spectrum of the linearized operator around $Q_0$ in the presence of the resonance $\phi_1$. We comment on this direction. Since we are interested in the stability of the family of ground-state solitary waves, we consider a solution to (4.1) of the form
$$\psi(x,t) = e^{-iE_0t}\big(Q_0(x) + \xi(x,t)\big),$$
where $\xi$ is a small perturbation of $Q_0$. Since $Q_0$ is the ground state, we can take it positive and real. Substituting the above into (4.1) and removing the known information about $Q_0$ yields
$$i\partial_t\xi = (H - E_0)\xi \pm \big((p-1)Q_0^{p-1}\,\mathrm{Re}\,\xi + Q_0^{p-1}\xi\big) + N(\xi),$$
where $N(\xi)$ is nonlinear in $\xi$. The above with $N$ removed is the linearized equation. We complexify by letting
$$\vec\xi := \begin{pmatrix} \xi \\ \bar\xi \end{pmatrix}$$
to see
$$i\partial_t\vec\xi = \mathcal{L}\vec\xi,$$
where
$$\mathcal{L} := \begin{pmatrix} H - E_0 \pm \frac{p+1}{2}Q_0^{p-1} & \pm\frac{p-1}{2}Q_0^{p-1} \\[4pt] \mp\frac{p-1}{2}Q_0^{p-1} & -\big(H - E_0 \pm \frac{p+1}{2}Q_0^{p-1}\big) \end{pmatrix}.$$
The unperturbed operator, for $z = 0$,
$$\mathcal{L}_0 = \begin{pmatrix} H - e_0 & 0 \\ 0 & -(H - e_0) \end{pmatrix},$$
has essential spectrum $(-\infty, e_0] \cup [-e_0, \infty)$, a double eigenvalue at 0, and a resonance on each threshold. The question is then about the spectrum of $\mathcal{L}$. The full operator in question has essential spectrum $(-\infty, E_0] \cup [-E_0, \infty)$ as well as the double eigenvalue at 0, but the fate of the resonance has yet to be seen. Again, a formal preliminary computation suggests that, generically, in the focusing case we have a true eigenvalue close to the threshold, and that in the defocusing case the resonance disappears. In 3D we may be able to apply [28], which treats general perturbations of the linearized operator; in 1D the experience gained from Chapter 2 may be of use. Once the spectrum of the operator $\mathcal{L}$ is understood, we may be able to proceed as in [79-82] to obtain asymptotic stability results for the ground state, as well as for the excited state in any case where it may exist.
A difficulty in this series of papers was the eigenvalues of $H$ being close together or, after complexification, $\mathcal{L}$ having eigenvalues close to zero. For us, any non-zero eigenvalues of $\mathcal{L}$ will be close to the threshold, and so far from zero.

Finally, we may wish to understand the long-time behaviour of all small solutions of (4.1); that is, to obtain an asymptotic stability and completeness theorem in the spirit of [42], [59], [27]. This is the most substantial and challenging question present. The goal is to prove a theorem resembling the following. For any small solution $\psi$ of (4.1) we have the unique decomposition
$$\psi(t) = Q_0[z_0(t)] + Q_1[z_1(t)] + \eta(t)$$
in the presence of nonlinear bound states connected to the resonance, or
$$\psi(t) = Q_0[z_0(t)] + \eta(t)$$
in their absence. In the above, the $z_j$ and $\eta$ must also enjoy some smallness properties. Additionally, the $z_j$ should converge in some sense as $t \to \infty$, with only one $z_j$ converging to a nonzero value. The function $\eta$ should scatter, converging to a solution of the free linear Schrödinger equation. While we would like, after understanding the existence of nonlinear bound states and the linearized operator, to proceed as in [42], [59], [27], we draw attention to the fact that eigenvalues arbitrarily close to the threshold will adversely affect the necessary decay estimates. The convergence of $\psi$ (if it converges at all) to a nonlinear bound state will be slow, and achieving such an asymptotic stability and completeness theorem (if true) may be beyond current mathematical technology.

Bibliography

[1] T. Akahori, S. Ibrahim, H. Kikuchi, H. Nawa, Existence of a Ground State and Blow-Up Problem for a Nonlinear Schrödinger Equation with Critical Growth, Differential and Integral Equations, Vol. 25, No. 3-4, (2012), 383-402.
[2] T. Akahori, S. Ibrahim, H. Kikuchi, H.
Nawa, Existence of a Ground State and Scattering for a Nonlinear Schrödinger Equation with Critical Growth, Selecta Math., Vol. 19, No. 2, (2013), 545-609.
[3] C. O. Alves, M. A. S. Souto, M. Montenegro, Existence of a Ground State Solution for a Nonlinear Scalar Field Equation with Critical Growth, Calc. Var., Vol. 43, No. 3, (2012), 537-554.
[4] T. Aubin, Équations Différentielles Non Linéaires et Problème de Yamabe Concernant la Courbure Scalaire, J. Math. Pures Appl., IX. Sér., Vol. 55, (1976), 269-296.
[5] H. Bahouri, P. Gérard, High Frequency Approximation of Solutions to Critical Nonlinear Wave Equations, Amer. J. Math., Vol. 121, (1999), 131-175.
[6] D. Bambusi, Asymptotic Stability of Ground States in Some Hamiltonian PDE with Symmetry, Comm. Math. Phys., Vol. 320, No. 2, (2013), 499-542.
[7] H. Berestycki, T. Cazenave, Instabilité des États Stationnaires dans les Équations de Schrödinger et de Klein-Gordon Non Linéaires, C. R. Acad. Sci. Paris Sér. I Math., Vol. 293, (1981), 489-492.
[8] H. Berestycki, P.-L. Lions, Nonlinear Scalar Field Equations. I. Existence of a Ground State, Arch. Rational Mech. Anal., Vol. 83, (1983), 313-345.
[9] H. Berestycki, P.-L. Lions, L. A. Peletier, An ODE Approach to the Existence of Positive Solutions for Semilinear Problems in R^N, Indiana Univ. Math. J., Vol. 30, (1981), 141-157.
[10] J. Bourgain, Global Well-Posedness Results for Nonlinear Schrödinger Equation in the Radial Case, J. Am. Math. Soc., Vol. 12, (1999), 145-171.
[11] J. Bourgain, New Global Well-Posedness Results for Nonlinear Schrödinger Equations, AMS Colloquium Publications, Vol. 46, (1999).
[12] H. Brézis, L. Nirenberg, Positive Solutions of Nonlinear Elliptic Equations Involving Critical Sobolev Exponents, Comm. Pure Appl. Math., Vol. 36, (1983), 437-477.
[13] V. S. Buslaev, C. Sulem, On Asymptotic Stability of Solitary Waves for Nonlinear Schrödinger Equations, Ann. Inst. H. Poincaré Anal. Non Linéaire, Vol. 20, No. 3, (2003), 419-475.
[14] R. Carretero-González, D. J. Frantzeskakis, P. G. Kevrekidis, Nonlinear Waves in Bose-Einstein Condensates: Physical Relevance and Mathematical Techniques, Nonlinearity, Vol. 21, (2008), R139-R202.
[15] T. Cazenave, Semilinear Schrödinger Equations, American Mathematical Society, Providence, RI, 2003.
[16] T. Cazenave, P.-L. Lions, Orbital Stability of Standing Waves for Some Nonlinear Schrödinger Equations, Comm. Math. Phys., Vol. 85, No. 4, (1982), 549-561.
[17] S. Chang, S. Gustafson, K. Nakanishi, T. Tsai, Spectra of Linearized Operators for NLS Solitary Waves, SIAM J. Math. Anal., Vol. 39, No. 4, (2007), 1070-1111.
[18] W. Chen, J. Dávila, I. Guerra, Bubble Tower Solutions for a Supercritical Elliptic Problem in R^N, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5), Vol. XV, (2016), 85-116.
[19] M. Coles, S. Gustafson, A Degenerate Edge Bifurcation in the 1D Linearized Nonlinear Schrödinger Equation, DCDS-A, Vol. 36, No. 6, (2016), 2991-3009.
[20] M. Coles, S. Gustafson, Solitary Waves and Dynamics for Subcritical Perturbations of Energy Critical NLS, (2017), (preprint) arXiv:1707.07219.
[21] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Global Well-Posedness and Scattering for the Energy-Critical Nonlinear Schrödinger Equation in R^3, Ann. Math., Vol. 167, No. 3, (2008), 767-865.
[22] S. Cuccagna, Stabilization of Solutions to Nonlinear Schrödinger Equations, Comm. Pure Appl. Math., Vol. 54, (2001), 1110-1145.
[23] S. Cuccagna, On Asymptotic Stability of Ground States of NLS, Rev. Math. Phys., Vol. 15, No. 8, (2003), 877-903.
[24] S. Cuccagna, On Instability of Excited States of the Nonlinear Schrödinger Equation, Physica D, Vol. 238, (2009), 38-54.
[25] S. Cuccagna, The Hamiltonian Structure of the Nonlinear Schrödinger Equation and the Asymptotic Stability of its Ground States, Comm. Math. Phys., Vol. 305, (2011), 279-331.
[26] S. Cuccagna, On Asymptotic Stability of Moving Ground States of the Nonlinear Schrödinger Equation, Trans. Amer. Math. Soc., Vol. 366, (2014), 2827-2888.
[27] S. Cuccagna, M. Maeda, On Small Energy Stabilization in the NLS with a Trapping Potential, (2013), (preprint) arXiv:1309.0655.
[28] S. Cuccagna, D. Pelinovsky, Bifurcations from the Endpoints of the Essential Spectrum in the Linearized Nonlinear Schrödinger Problem, J. Math. Phys., Vol. 46, (2005), 053520.
[29] S. Cuccagna, D. Pelinovsky, The Asymptotic Stability of Solitons in the Cubic NLS Equation on the Line, Applicable Analysis, Vol. 93, (2014), 791-822.
[30] S. Cuccagna, D. Pelinovsky, V. Vougalter, Spectra of Positive and Negative Energies in the Linearized NLS Problem, Comm. Pure Appl. Math., Vol. 58, No. 1, (2005), 1-29.
[31] J. Dávila, M. del Pino, I. Guerra, Non-Uniqueness of Positive Ground States of Non-Linear Schrödinger Equations, Proc. London Math. Soc., Vol. 106, No. 2, (2013), 318-344.
[32] B. Dodson, Global Well-Posedness and Scattering for the Focusing Energy-Critical Nonlinear Schrödinger Problem in Dimension d = 4 for Initial Data Below a Ground State Threshold, (2014), (preprint) arXiv:1409.1950.
[33] T. Duyckaerts, F. Merle, Dynamic of Threshold Solutions for Energy-Critical NLS, Geom. Funct. Anal., Vol. 18, No. 6, (2009), 1787-1840.
[34] L. Evans, Partial Differential Equations (2nd ed.), American Mathematical Society, 2010.
[35] G. Fibich, The Nonlinear Schrödinger Equation: Singular Solutions and Optical Collapse, Springer, 2015.
[36] Z. Gang, I. M. Sigal, Asymptotic Stability of Nonlinear Schrödinger Equations with Potential, Rev. Math. Phys., Vol. 17, No. 10, (2005), 1143-1207.
[37] P. Gérard, Description du Défaut de Compacité de l'Injection de Sobolev, ESAIM Control Optim. Calc. Var., Vol. 3, (1998), 213-233.
[38] J. Ginibre, G. Velo, Scattering Theory in the Energy Space for a Class of Nonlinear Schrödinger Equations, J. Math. Pures Appl., Vol. 64, (1985), 363-401.
[39] M. Grillakis, Linearized Instability for Nonlinear Schrödinger and Klein-Gordon Equations, Comm. Pure Appl. Math., Vol. 41, (1988), 747-774.
[40] M. Grillakis, On Nonlinear Schrödinger Equations, Comm. Partial Differ. Equations, Vol. 25, (2000), 1827-1844.
[41] M. Grillakis, J. Shatah, W. Strauss, Stability Theory of Solitary Waves in the Presence of Symmetry I, J. Funct. Anal., Vol. 74, (1987), 160-197.
[42] S. Gustafson, K. Nakanishi, T. Tsai, Asymptotic Stability and Completeness in the Energy Space for Nonlinear Schrödinger Equations with Small Solitary Waves, IMRN, No. 66, (2004), 3559-3584.
[43] S. Gustafson, I. M. Sigal, Mathematical Concepts of Quantum Mechanics (2nd ed.), Springer-Verlag Berlin Heidelberg, 2011.
[44] S. Gustafson, T.-P. Tsai, I. Zwiers, Energy-Critical Limit of Solitons, and the Best Constant in the Gagliardo-Nirenberg Inequality, (unpublished).
[45] A. Jensen, Spectral Properties of Schrödinger Operators and Time-Decay of the Wave Functions. Results in L^2(R^m), m ≥ 5, Duke Math. J., Vol. 47, No. 1, (1980), 57-80.
[46] A. Jensen, Spectral Properties of Schrödinger Operators and Time-Decay of the Wave Functions. Results in L^2(R^4), J. Math. Anal. Appl., Vol. 101, (1984), 397-422.
[47] A. Jensen, T. Kato, Spectral Properties of Schrödinger Operators and Time-Decay of the Wave Functions, Duke Math. J., Vol. 46, No. 3, (1979), 583-611.
[48] A. Jensen, G. Nenciu, A Unified Approach to Resolvent Expansions at Thresholds, Rev. Math. Phys., Vol. 13, (2001), 717.
[49] J.-L. Journé, A. Soffer, C. D. Sogge, Decay Estimates for Schrödinger Operators, Comm. Pure Appl. Math., Vol. 44, No. 5, (1991), 573-604.
[50] T. Kapitula, Stability Criterion for Bright Solitary Waves of the Perturbed Cubic-Quintic Schrödinger Equation, Physica D, Vol. 166, No. 1-2, (1998), 95-120.
[51] T. Kapitula, B. Sandstede, Stability of Bright Solitary-Wave Solutions to Perturbed Nonlinear Schrödinger Equations, Physica D, Vol. 124, No. 1-3, (1998), 58-103.
[52] T. Kapitula, B. Sandstede, Edge Bifurcations for Near Integrable Systems via Evans Functions, SIAM J. Math. Anal., Vol. 33, No. 5, (2002), 1117-1143.
[53] T. Kapitula, B. Sandstede, Eigenvalues and Resonances Using the Evans Functions, Discrete Contin. Dyn. Syst., Vol. 10, No. 4, (2004), 857-869.
[54] C. Kenig, F. Merle, Global Well-Posedness, Scattering and Blow-Up for the Energy-Critical, Focusing, Non-Linear Schrödinger Equation in the Radial Case, Invent. Math., Vol. 166, (2006), 645-675.
[55] S. Keraani, On the Defect of Compactness for the Strichartz Estimates for the Schrödinger Equations, J. Differential Equations, Vol. 175, No. 2, (2001), 353-392.
[56] R. Killip, T. Oh, O. Pocovnicu, M. Vişan, Solitons and Scattering for the Cubic-Quintic Nonlinear Schrödinger Equation on R^3, (2014), (preprint) arXiv:1409.6734.
[57] R. Killip, M. Vişan, The Focusing Energy-Critical Nonlinear Schrödinger Equation in Dimensions Five and Higher, Amer. J. Math., Vol. 132, No. 2, (2010), 361-424.
[58] R. Killip, M. Vişan, Nonlinear Schrödinger Equations at Critical Regularity, Clay Mathematics Proceedings, Vol. 17, 2013.
[59] T. Mizumachi, Asymptotic Stability of Small Solitary Waves to 1D Nonlinear Schrödinger Equations with Potential, J. Math. Kyoto Univ., Vol. 48, No. 3, (2008), 471-497.
[60] K. Nakanishi, W. Schlag, Global Dynamics Above the Ground State Energy for the Cubic NLS Equation in 3D, Calc. Var., Vol. 44, No. 1, (2012), 1-45.
[61] H. Nawa, Asymptotic and Limiting Profiles of Blowup Solutions of the Nonlinear Schrödinger Equations with Critical Power, Comm. Pure Appl. Math., Vol. 52, (1999), 193-270.
[62] T. Ogawa, Y. Tsutsumi, Blow-Up of H^1 Solution for the Nonlinear Schrödinger Equation, J. Differ. Equations, Vol. 92, (1991), 317-330.
[63] L. E. Payne, D. H. Sattinger, Saddle Points and Instability of Nonlinear Hyperbolic Equations, Israel J. Math., Vol. 22, No. 3, (1975), 273-303.
[64] D. Pelinovsky, Y. Kivshar, V. Afanasjev, Internal Modes of Envelope Solitons, Physica D, Vol. 116, No. 1-2, (1998), 121-142.
[65] G. Perelman, Asymptotic Stability of Multi-Soliton Solutions for Nonlinear Schrödinger Equations, Comm. Partial Differential Equations, Vol. 29, No. 7-8, (2004), 1051-1095.
[66] C. A. Pillet, C. E. Wayne, Invariant Manifolds for a Class of Dispersive, Hamiltonian, Partial Differential Equations, J. Differential Equations, Vol. 141, No. 2, (1997), 310-326.
[67] E. Ryckman, M. Vişan, Global Well-Posedness and Scattering for the Defocusing Energy-Critical Nonlinear Schrödinger Equation in R^{1+4}, Amer. J. Math., Vol. 129, No. 1, (2007), 1-60.
[68] W. Schlag, Stable Manifolds for an Orbitally Unstable Nonlinear Schrödinger Equation, Ann. of Math., Vol. 169, No. 1, (2009), 139-227.
[69] A. Soffer, M. I. Weinstein, Multichannel Nonlinear Scattering for Nonintegrable Equations, Comm. Math. Phys., Vol. 133, No. 1, (1990), 119-146.
[70] A. Soffer, M. I. Weinstein, Multichannel Nonlinear Scattering for Nonintegrable Equations. II. The Case of Anisotropic Potentials and Data, J. Differential Equations, Vol. 98, No. 2, (1992), 376-390.
[71] A. Soffer, M. I. Weinstein, Selection of the Ground State for Nonlinear Schrödinger Equations, Rev. Math. Phys., Vol. 16, (2004), 977-1071.
[72] W. Strauss, Existence of Solitary Waves in Higher Dimensions, Comm. Math. Phys., Vol. 55, No. 2, (1977), 149-162.
[73] W. Strauss, Nonlinear Scattering Theory at Low Energy, J. Funct. Anal., Vol. 41, No. 1, (1981), 110-133.
[74] C. Sulem, P.-L. Sulem, The Nonlinear Schrödinger Equation, Springer, 1999.
[75] G. Talenti, Best Constant in Sobolev Inequality, Ann. Mat. Pura Appl., Vol. 110, (1976), 353-372.
[76] T. Tao, Global Well-Posedness and Scattering for the Higher-Dimensional Energy-Critical Nonlinear Schrödinger Equation for Radial Data, New York J. Math., Vol. 11, (2005), 57-80.
[77] T. Tao, Nonlinear Dispersive Equations: Local and Global Analysis, American Mathematical Society, 2006.
[78] T. Tao, M. Vişan, X.
Zhang, The Nonlinear Schro¨dinger Equation withCombined Power-Type Nonlinearities, Communications in Partial Dif-ferential Equations, Vol. 32, No. 8, (2007), 1281-1343.[79] T.P. Tsai, H.T. Yau, Asymptotic Dynamics of Nonlinear Schro¨dingerEquations: Resonance Dominated and Radiation Dominated Solutions,Comm. Pure Appl. Math, Vol. 55, (2002), 153-216.[80] T.P. Tsai, H.T. Yau, Relaxation of Excited States in NonlinearSchro¨dinger Equations, Int. Math. Res. Not., Vol. 31, (2002), 1629-1673.[81] T.P. Tsai, H.T. Yau, Stable Directions for Excited States of NonlinearSchrdinger Equations, Comm. Partial Differential Equations, Vol 27,No. 11-12, (2002), 2363-2402.[82] T.P. Tsai, H.T. Yau, Classification of Asymptotic Profiles for Nonlin-ear Schro¨dinger Equations with Small Initial Data, Adv. Theor. Math.Phys., Vol. 6, (2002), 107-139.[83] M. Vis¸an, The Defocusing Energy-Critical Nonlinear Schro¨dinger Equa-tion in Higher Dimensions, Duke Math. J., Vol. 138, No. 2, (2007),281-374.99Bibliography[84] V. Vougalter, On Threshold Eigenvalues and Resonances for the Lin-earized NLS Equation, Math. Model. Nat. Phenom., Vol. 5, No. 4,(2010), 448-469.[85] V. Vougalter, On the Negative Index Theorem for the Linearized NLSProblem, Canad. Math. Bull., Vol. 53, (2010), 737-745.[86] V. Vougalter, D. Pelinovsky, Eigenvalues of Zero Energy in the Lin-earized NLS Problem, Journal of Mathematical Physics, Vol. 47, (2006),062701.[87] R. Weder, Center Manifold for Nonintegrable Nonlinear Schro¨dingerEquations on the Line, Comm. Math. Phys., Vol. 215, No. 2, (2000),343-356.[88] M. I. Weinstein, Nonlinear Schro¨dinger equations and sharp interpola-tion estimates, Comm. Math. Phys., Vol. 87, (1983), 567-576.[89] M. I. Weinstein, Modulational Stability of Ground States of NonlinearSchro¨dinger Equations, SIAM J. Math Anal., Vol. 16, (1985), 472-491.[90] M. I. Weinstein, Lyapunov Stability of Ground States of NonlinearDispersive Evolutions Equations, Comm. Pure Appl. Math., Vol. 
39,(1986), 51-68.[91] V.E. Zakharov, Stability of Periodic Waves of Finite Amplitude on theSurface of a Deep Fluid, J. Appl. Mech. Tech. Phys., Vol. 9, (1986),190-194.100 Citation Scheme: Citations by CSL (citeproc-js) Usage Statistics <div id="ubcOpenCollectionsWidgetDisplay"> <script id="ubcOpenCollectionsWidget" async > Related Items
Encyclopædia Britannica, Inc. Classical physics, the body of physics developed until about the turn of the 20th century, cannot account for the behavior of matter and light at extremely small scales. The branch of physics concerned with atomic and subatomic systems is known as quantum mechanics. Its aim is to account for the properties of molecules and atoms and their even tinier constituents, such as electrons, protons, neutrons, and quarks. Quantum mechanics describes how these particles interact with each other and with light, X-rays, gamma rays, and other forms of electromagnetic radiation. One of the great ideas of the 20th century, quantum mechanics continues to be at the forefront of advances in physics in the 21st century. In addition to explaining the structure of atoms and the behavior of subatomic particles, it has explained the nature of chemical bonds, the properties of crystalline solids, nuclear energy, and the forces that stabilize collapsed stars. Quantum theory also led directly to the invention of the laser, the electron microscope, and the transistor. Quantum mechanics has revealed that matter and radiation behave much differently at extremely small scales than at the larger, familiar scales of the everyday world—the world described by classical physics. At atomic scales the behavior of matter and radiation can seem unusual or downright bizarre. The concepts of quantum mechanics often conflict with common-sense notions, notions that have of course been developed through observations of the world at larger scales. Danish physicist Niels Bohr famously said that “anybody who is not shocked by this subject has failed to understand it.” While the laws of classical physics allow one to determine exactly how matter and radiation will behave, quantum mechanics deals only in probabilities. Indeterminacy—randomness or uncertainty—is fundamental to quantum mechanics. Nevertheless, the success of this field is indisputable. 
Using probabilities, quantum mechanics makes very precise predictions about the properties of atomic and subatomic systems. In experiments these predictions have been shown to be extraordinarily accurate—more accurate in fact than those of any other branch of physics. In the 1800s physicists had discovered that light behaves like a wave. The German physicist Max Planck proposed the revolutionary quantum theory of light in 1900, in what he called an “act of desperation,” to account for certain mysterious facts about the emission of light. He proposed that, rather than being emitted continuously, light can be given off only in tiny bundles, or certain specific amounts of energy, which he called quanta (singular, quantum). Albert Einstein used this quantum theory to explain the photoelectric effect in 1905, proposing that in some ways light behaves like a particle. In the 1920s Louis-Victor de Broglie extended this idea to matter, proposing that electrons and other “particles” can behave like a wave. This has been confirmed in experiments. Radiation and matter sometimes have characteristics of waves and sometimes have characteristics of particles; they cannot be said to be one or the other. Among the most important developers of quantum mechanics were Niels Bohr, Erwin Schrödinger, Max Born, and Werner Heisenberg. In 1913 Bohr used the quantum theory to develop a new model of the structure of atoms. In 1926 Schrödinger developed the fundamental mathematical equation of quantum mechanics. It is radically different from Isaac Newton’s laws of motion, which are fundamental to classical physics, in that it indicates only probabilities. The solutions to the Schrödinger equation are wave functions, and Born showed that these functions can indicate the likelihood that a certain particle will be in a given place at a given time. 
According to Heisenberg’s famous uncertainty principle of 1927, it is impossible to measure both the exact position of a particle and its exact velocity at a given moment—even in theory. The more accurately one measures the position, the less accurately one can measure the velocity, and vice versa. The concepts of exact position and exact velocity together, in fact, have no meaning in nature.
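The tradeoff described above can be stated compactly. In terms of the standard deviations $\Delta x$ of position and $\Delta p$ of momentum (with $p = mv$, so the velocity spread is $\Delta p / m$), and with $\hbar$ the reduced Planck constant, the standard form of the bound (stated here for concreteness; it is not spelled out in the article itself) is:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}.$$

Shrinking the position spread $\Delta x$ forces the momentum, and hence velocity, spread to grow, and vice versa.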
Is There Dark Matter in Orbit around the Earth?
By Stephen L. Adler
Published 2009

The earth flyby of the asteroid-lander NEAR spacecraft resulted in an unexplained velocity increase of over one centimeter per second, much larger than estimated measurement errors.

For slightly over a year, I have largely put aside my longtime interests in the foundations of quantum mechanics and in particle physics, and have been working on dark matter. This interest came about in two different ways. The first was a paper on models for modifications of the Schrödinger equation on which I was working with my frequent collaborator Angelo Bassi during the 2007–08 academic year. The models called for a noise source to act weakly on ordinary matter, and one of the mechanisms I decided to try was dark matter collisions with ordinary matter. I started learning about dark matter through a lunch with IAS Visitor Masataka Fukugita, whom I had met at a dinner at Peter and Sue Goldreich's. Following this, I made what I considered a "toy model" for the paper I was writing with Angelo—not a realistic mechanism for our purposes, but it got me learning and thinking about dark matter. The second way was a news item that my wife Sarah showed me in the Economist about the so-called "flyby" anomalies. When spacecraft are put in "flyby" orbits, passing close to the earth to produce large changes in direction, the outgoing velocity is found to deviate from expectations by about a part in a million. (A review for a general physics audience is given in M. M. Nieto and J. D. Anderson, "Earth Flyby Anomalies," Physics Today, October 2009.) Sometimes the spacecraft slows down slightly (as would be expected from normal drag), but in some cases it speeds up, a really weird effect if true. I made a mental note to look for the article when published in a journal, and when a detailed report appeared in Physical Review Letters[1], I asked Scott Tremaine what he thought.
He said that the group at Jet Propulsion Lab that wrote it has a reputation for careful work, so one couldn’t just dismiss it. So I started to think about possible explanations. Having been in physics for nearly fifty years, I have seen many purported new effects be discounted as improvements have been made in the experiments or the analysis. The most probable explanation of the flyby anomalies is that they are the result of an inadvertent omission of some conventional physics from the analysis. People are still actively pursuing this route, but so far nothing convincing has emerged, and many things have been ruled out. Hence there is a chance that the effect is an indicator of new physics. I personally believe that if new physics is involved, it is very unlikely to implicate Maxwell’s equations for electromagnetism, because these, and their relativistic extension to quantum electrodynamics, have been tested to fantastic precision. I am also skeptical that the flyby anomalies can be attributed to changes in Einstein’s theory of general relativity, which has also been well-tested in the framework of metric theories that obey the equivalence principle (in a freely falling elevator, you feel no gravity), and within this framework one can show that possible effects are at least a factor of one hundred too small to be relevant. So deviations from gravity would have to take the form of a theory that does not obey Einstein’s equivalence principle, and this too has been tested to great accuracy. This leaves another possibility: effects of dark matter. We now know that ordinary matter is only a minor component of the universe; for every gram of ordinary matter, there are five grams of a mysterious “dark matter,” which participates in Newtonian gravitational forces, but is electrically neutral and so does not readily emit or absorb light (hence the “dark”). 
So far there are no firm experimental indications of its properties, beyond what is inferred from astrophysics and cosmology. The question I have been investigating is whether dark matter gravitationally bound to the earth could be responsible for the flyby anomalies. Could collisions of the spacecraft with dark matter near the earth cause the observed velocity changes? I have written three papers addressing this question. In the first[2], I showed that if there were a dark matter component that underwent an exothermic (energy releasing) reaction when colliding with a spacecraft proton or neutron, by converting to a lower mass particle, then the spacecraft nucleon would get a “kick” and the observed velocity increases could result. In this paper, I also studied various physical constraints, to see whether there is an allowed range of dark matter particle masses and interaction cross sections that could also explain the magnitude of the observed flyby anomalies. The answer is that there is a small window, but it requires dark matter masses much lighter than conventionally assumed in the standard “cold dark matter” model, and larger interaction cross sections with ordinary matter than conventionally assumed. As a result of circulating the preprint on this, I was invited to give a talk at a space science conference in the summer of 2008, and had some very useful conversations with people there. This led to my second paper[3], which was a determination of an upper limit of how much dark matter could be in orbit around the earth, by using current tracking data for satellites, the moon, and asteroids. 
By comparing lunar laser ranging of the moon, which gives the sum of the gravitational masses of the earth, the moon, and everything in between, with ranging of the LAGEOS geodetic satellite, which gives the earth gravitational mass, and ranging of a spacecraft tracking the Eros asteroid, which gives an accurate lunar mass, one gets an upper limit for the amount of dark matter that can lie between the earth and the moon. It turns out to be four billionths of the earth's mass—not much, but enough to be compatible with a dark matter explanation for the flyby anomalies. The third paper[4], which I just finished this summer, involves making a detailed model for dark matter in orbit around the earth and using it to fit the flyby anomaly data. It is easy to see that Saturn-like rings of dark matter in the earth's equatorial plane don't work—most of the flybys would pass inside the rings, so there would be no scattering. So I tried the next simplest model, which was to consider a bunch of dark matter in a circular orbit, with its orbital plane tilted with respect to the earth's equator. Because the earth has an equatorial bulge, its gravitational field differs from the spherically symmetric field of a point particle, and this symmetry deviation causes a tilted orbit to precess (i.e., slowly rotate) in time around the earth's axis, with the angle between the orbit plane and the earth's equatorial plane remaining fixed. Over a long period of time, this traces out a shell. My model then consists of two dark matter shells, one composed of elastic scatterers (to give flyby velocity decreases) and one composed of inelastic exothermic scatterers (to give the velocity increases). Parameters of the model are the radius, width, density times interaction cross section, and tilt angles of the two shells—eight parameters in all. The six known flybys are very well fitted—better than I had expected—with shell radii in the 30,000 to 35,000 kilometer range.
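The precession of the tilted orbit plane comes from the earth's oblateness (the "J2" term of its gravity field). As a rough numerical sketch, not taken from the article itself, the standard first-order J2 formula for a circular orbit gives the rate at which such an orbit's plane rotates about the earth's axis; the shell radius is taken from the 30,000–35,000 km range quoted above, and the 45-degree tilt is purely illustrative:

```python
import math

# Standard Earth constants (assumed values, not from the article)
GM = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_E = 6.378137e6      # m, Earth's equatorial radius
J2 = 1.08263e-3       # dimensionless oblateness coefficient

def nodal_precession_rate(a, incl):
    """Secular precession rate (rad/s) of the ascending node of a
    circular orbit of radius a (m) and inclination incl (rad),
    from the first-order J2 perturbation formula."""
    n = math.sqrt(GM / a**3)                      # mean motion, rad/s
    return -1.5 * n * J2 * (R_E / a)**2 * math.cos(incl)

a = 3.25e7                  # m, mid-range of the fitted shell radii
incl = math.radians(45.0)   # illustrative tilt angle
omega_dot = nodal_precession_rate(a, incl)
period_years = 2 * math.pi / abs(omega_dot) / (365.25 * 86400)
print(f"node precesses once every {period_years:.0f} years")
```

The rate is slow at these high radii, but over a long enough time the tilted circle sweeps out exactly the axisymmetric shell used in the two-shell fit.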
One might suspect “overfitting,” but if one attempts to use just an inelastic shell and fit only the two flybys with the smallest estimated errors, one cannot get a good fit. And if one omits any one flyby from the two-shell fit, the model still gives a reasonable prediction for the omitted one. So the model has a certain “rigidity,” and is not just a case of fitting a wiggly curve through data points. Incisive tests for the future will include putting in possible constraints from satellites in high-lying orbits, making predictions for new flybys as data becomes available, looking for spacecraft temperature increases caused by the postulated dark matter scattering, and of course seeing the results of earth-bound experiments trying to detect dark matter and determine its properties. In the meantime, others will continue to look for conventional physics explanations of the mysterious flyby anomalies. I think it will take several years, at least, for things to be clarified, and in the meantime I have other projects to pursue, but this has been a fascinating exercise that has taught me new topics in physics, and brought me in touch with a space science community with which I had no previous contact. Stephen L. Adler has been a Professor in the School of Natural Sciences since 1969. In a series of remarkable, difficult calculations, he demonstrated that abstract ideas about the symmetries of fundamental interactions could be made to yield concrete predictions. The successful verification of these predictions was a vital step toward the modern Standard Model of particle physics, which describes elementary particles and their interactions. In some of his more recent work, he has been exploring generalized forms of quantum mechanics, both from a theoretical and a phenomenological standpoint. Published in The Institute Letter Fall 2009
All quantum operations must be unitary to allow reversibility, but what about measurement? Measurement can be represented as a matrix, and that matrix is applied to qubits, so that seems equivalent to the operation of a quantum gate. That's definitively not reversible. Are there any situations where non-unitary gates might be allowed?

Unitary operations are only a special case of quantum operations, which are linear, completely positive maps ("channels") that map density operators to density operators. This becomes obvious in the Kraus representation of the channel, $$\Phi(\rho)=\sum_{i=1}^n K_i \rho K_i^\dagger,$$ where the so-called Kraus operators $K_i$ fulfill $\sum_{i=1}^n K_i^\dagger K_i\leq \mathbb{I}$ (notation). Often one considers only trace-preserving quantum operations, for which equality holds in the previous inequality. If additionally there is only one Kraus operator (so $n=1$), then we see that the quantum operation is unitary. However, quantum gates are unitary, because they are implemented via the action of a Hamiltonian for a specific time, which gives a unitary time evolution according to the Schrödinger equation.

– Everyone interested in quantum mechanics (not just quantum information) should know about quantum operations, e.g. from Nielsen and Chuang. It is worth mentioning (since the Wikipedia page on Stinespring dilation is too technical) that every finite-dimensional quantum operation is mathematically equivalent to some unitary operation in a larger Hilbert space followed by a restriction to the subsystem (by the partial trace). – Ninnat Dangniam

Short Answer

Quantum operations do not need to be unitary.
In fact, many quantum algorithms and protocols make use of non-unitarity.

Long Answer

Measurements are arguably the most obvious example of non-unitary transitions being a fundamental component of algorithms (in the sense that a "measurement" is equivalent to sampling from the probability distribution obtained after the decoherence operation $\sum_k c_k\lvert k\rangle\mapsto\sum_k |c_k|^2\lvert k\rangle\langle k\rvert$). More generally, any quantum algorithm that involves probabilistic steps requires non-unitary operations. A notable example that comes to mind is HHL09's algorithm to solve linear systems of equations (see 0811.3171). A crucial step in this algorithm is the mapping $|\lambda_j\rangle\mapsto C\lambda_j^{-1}|\lambda_j\rangle$, where $|\lambda_j\rangle$ are eigenvectors of some operator. This mapping is necessarily probabilistic and therefore non-unitary.

Any algorithm or protocol that makes use of (classical) feed-forward is also making use of non-unitary operations. This is the basis of one-way quantum computation protocols (which, as the name suggests, require non-reversible operations).

The most notable schemes for optical quantum computation with single photons also require measurements and sometimes post-selection to entangle the states of different photons. For example, the KLM protocol produces probabilistic gates, which are therefore at least partly non-reversible. A nice review on the topic is quant-ph/0512071.

Less intuitive examples are provided by dissipation-induced quantum state engineering (e.g. 1402.0529 or srep10656). In these protocols, one uses dissipative open-system dynamics and engineers the interaction of the state with the environment in such a way that the long-time stationary state of the system is the desired one.
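To make the Kraus-operator picture concrete, here is a small numerical sketch (the channel and the damping parameter are a standard textbook example, not something taken from the answers above): the single-qubit amplitude-damping channel has two Kraus operators, satisfies the completeness relation, preserves the trace, and, precisely because it has more than one Kraus operator, is not unitary:

```python
import numpy as np

gamma = 0.3  # illustrative damping probability
# Kraus operators of the single-qubit amplitude-damping channel
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])
kraus = [K0, K1]

# Completeness: sum_i K_i^dagger K_i = I  (trace-preserving channel)
completeness = sum(K.conj().T @ K for K in kraus)
assert np.allclose(completeness, np.eye(2))

def channel(rho, kraus):
    """Apply Phi(rho) = sum_i K_i rho K_i^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus)

rho = np.array([[0.25, 0.25], [0.25, 0.75]])  # an arbitrary valid density matrix
out = channel(rho, kraus)
print(np.trace(out).real)   # trace is preserved

# Non-unitarity: a pure state is mapped to a *mixed* state, which no
# unitary can do (unitary conjugation preserves the purity Tr[rho^2]).
excited = np.array([[0, 0], [0, 1]])          # pure state |1><1|
purity = np.trace(channel(excited, kraus) @ channel(excited, kraus)).real
print(purity)               # < 1, so the output is mixed
```

With $\gamma = 0.3$ the excited state maps to $\mathrm{diag}(0.3,\,0.7)$, whose purity $0.3^2 + 0.7^2 = 0.58$ is strictly below 1.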
At risk of going off-topic from quantum computing and into physics, I'll answer what I think is a relevant subquestion of this topic, and use it to inform the discussion of unitary gates in quantum computing. The question here is: why do we want unitarity in quantum gates?

The less specific answer is, as above, that it gives us 'reversibility', or, as physicists often put it, a type of symmetry for the system. I'm taking a course in quantum mechanics right now, and there unitary gates were motivated by the desire to have physical transformations $\hat{U}$ that act as symmetries. This imposed two conditions on the transformation $\hat{U}$:

1. The transformation should act linearly on the state (this is what gives us a matrix representation).
2. The transformation should preserve probability, or more specifically the inner product. This means that if we define $$|\psi '\rangle = U |\psi\rangle, \quad |\phi'\rangle = U |\phi\rangle,$$ then preservation of the inner product means that $\langle \phi | \psi \rangle= \langle \phi' | \psi'\rangle$.

From this second specification, unitarity can be derived (for full details see Dr. van Raamsdonk's notes here). So this answers the question of why operations that keep things "reversible" have to be unitary.

The question of why measurement itself is not unitary is more related to quantum computation. A measurement is a projection onto a basis; in essence, it must "answer" with one or more basis states as the state itself. It also leaves the state in a way that is consistent with the "answer" to the measurement, and not consistent with the underlying probabilities that the state began with. So the operation satisfies specification 1 of our transformation $U$, but definitively does not satisfy specification 2. Not all matrices are created equal!

To round things back to quantum computation, the fact that measurements are destructive and projective (i.e.
we can only reconstruct the superposition through repeated measurements of identical states, and every measurement only gives us a 0/1 answer) is part of what makes the separation between quantum computing and regular computing subtle (and part of why it's difficult to pin down). One might assume quantum computing is more powerful because of the mere size of the Hilbert space, with all those state superpositions available to us. But our ability to extract that information is heavily limited. As far as I understand it, this shows that for information storage purposes a qubit is only as good as a regular bit, and no better. But we can be clever in quantum computation with the way that information is traded around, because of the underlying linear-algebraic structure.

– I find the last paragraph a bit cryptic. What do you mean by "slippery" separation here? It is also non-obvious how the fact that measurements are destructive implies something about such separation. Could you clarify these points? – glS
– Good point, that was worded poorly. I don't think I'm saying anything particularly deep, simply that Hilbert space size alone isn't a priori what makes quantum computation powerful (and it doesn't give us any information storage advantages). – Emily Tyhurst

There are several misconceptions here, most of which originate from exposure to only the pure-state formalism of quantum mechanics, so let's address them one by one:

1. All quantum operations must be unitary to allow reversibility, but what about measurement?

This is false.
In general, the states of a quantum system are not just vectors in a Hilbert space $\mathcal{H}$ but density matrices $-$ unit-trace, positive semidefinite operators acting on the Hilbert space $\mathcal{H}$, i.e., $\rho: \mathcal{H} \rightarrow \mathcal{H}$, $\mathrm{Tr}(\rho) = 1$, and $\rho \geq 0$. (Note that the pure state vectors are not vectors in the Hilbert space but rays in a complex projective space; for a qubit this amounts to the state space being $\mathbb{C}P^1$ and not $\mathbb{C}^2$.) Density matrices are used to describe a statistical ensemble of quantum states. The density matrix is called pure if $\rho^2 = \rho$ and mixed otherwise. Once we are dealing with a pure state density matrix (that is, there's no statistical uncertainty involved), since $\rho^2 = \rho$, the density matrix is actually a projection operator and one can find a $|\psi\rangle \in \mathcal{H}$ such that $\rho = |\psi\rangle \langle\psi|$.

The most general quantum operation is a CP (completely positive) map, i.e., $\Phi: L(\mathcal{H}) \rightarrow L(\mathcal{H})$ such that $$\Phi(\rho) = \sum_i K_i \rho K_i^\dagger; \quad \sum_i K_i^\dagger K_i \leq \mathbb{I}$$ (if $\sum_i K_i^\dagger K_i = \mathbb{I}$, this is called a CPTP (completely positive and trace-preserving) map, or a quantum channel), where the $\{K_i\}$ are called Kraus operators.

Now, coming to the OP's claim that all quantum operations are unitary to allow reversibility $-$ this is just not true. The unitarity of the time evolution operator ($e^{-iHt/\hbar}$) in quantum mechanics (for closed-system quantum evolution) is simply a consequence of the Schrödinger equation. However, when we consider density matrices, the most general evolution is a CP map (or a CPTP map for a closed system, to preserve the trace and hence the probability).

2. Are there any situations where non-unitary gates might be allowed?

Yes.
An important example that comes to mind is open quantum systems, where Kraus operators (which are not unitary) are the "gates" with which the system evolves. Note that if there is only a single Kraus operator, then $\sum_i K_i^\dagger K_i = \mathbb{I}$, but there's only one $i$; therefore we have $K^\dagger K = \mathbb{I}$, i.e., $K$ is unitary. So the system evolves as $\rho \rightarrow U \rho U^\dagger$ (which is the standard evolution that you may have seen before). However, in general there are several Kraus operators, and therefore the evolution is non-unitary.

Coming to the final point: in standard quantum mechanics (with wavefunctions etc.), the system's evolution is composed of two parts $-$ a smooth unitary evolution under the system's Hamiltonian, and then a sudden quantum jump when a measurement is made $-$ also known as wavefunction collapse. Wavefunction collapse is described by some projection operator, say $|\phi\rangle \langle\phi|$, acting on the quantum state $|\psi\rangle$, and $|\langle\phi|\psi\rangle|^2$ gives us the probability of finding the system in the state $|\phi\rangle$ after the measurement. Since the measurement operator is after all a projector (or, as the OP suggests, a matrix), shouldn't it be linear and physically similar to the unitary evolution (which also happens via a matrix)? This is an interesting question and, in my opinion, difficult to answer physically. However, I can shed some light on it mathematically.

If we are working in the modern formalism, then measurements are given by POVM elements: Hermitian positive semidefinite operators $\{M_{i}\}$ on a Hilbert space $\mathcal{H}$ that sum to the identity operator (on the Hilbert space), $\sum _{{i=1}}^{n}M_{i}=\mathbb{I}$.
Therefore, a measurement takes the form $$ \rho \rightarrow \frac{E_i \rho E_i^\dagger}{\text{Tr}(E_i \rho E_i^\dagger)}, \text{ where } M_i = E_i^\dagger E_i.$$ The $\text{Tr}(E_i \rho E_i^\dagger) =: p_i$ is the probability of the measurement outcome being $M_i$ and is used to renormalize the state to unit trace. Note that the numerator, $\rho \rightarrow E_i \rho E_i^\dagger$, is a linear operation, but the probabilistic dependence on $p_i$ is what brings in the non-linearity or irreversibility.

Edit 1: You might also be interested in the Stinespring dilation theorem, which gives you an isomorphism between a CPTP map and a unitary operation on a larger Hilbert space followed by partial tracing of the (tensored) Hilbert space (see 1, 2).

I'll add a small bit complementing the other answers, just about the idea of measurement. Measurement is usually taken as a postulate of quantum mechanics. There are usually some preceding postulates about Hilbert spaces, but following that:

• Every measurable physical quantity $A$ is described by an operator $\hat{A}$ acting on a Hilbert space $\mathcal{H}$. This operator is called an observable, and its eigenvalues are the possible outcomes of a measurement.
• If a measurement is made of the observable $A$, in the state of the system $\psi$, and the outcome is $a_n$, then the state of the system immediately after measurement is $$\frac{\hat{P}_n|\psi\rangle}{\|\hat{P}_n|\psi\rangle\|},$$ where $\hat{P}_n$ is the projector onto the eigen-subspace of the eigenvalue $a_n$.

Normally the projection operators themselves should satisfy $\hat{P}^\dagger=\hat{P}$ and $\hat{P}^2=\hat{P}$, which means they are themselves observables by the above postulates, with eigenvalues $1$ or $0$. Supposing we take one of the $\hat{P}_n$ above, we can interpret the $1,0$ eigenvalues as a binary yes/no answer to whether the observable quantity $a_n$ is available as an outcome of measurement of the state $|\psi\rangle$.
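A small numerical sketch of the projective-measurement postulate above (the state and the measurement basis are arbitrary choices for illustration, not taken from the answer): the Born rule gives the outcome probabilities, and the post-measurement update is the renormalized projection, which is manifestly not invertible:

```python
import numpy as np

# An arbitrary normalized single-qubit state, chosen for illustration
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# Projectors onto the computational-basis eigen-subspaces
P = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

for n, Pn in enumerate(P):
    p_n = np.linalg.norm(Pn @ psi) ** 2          # Born rule: p_n = ||P_n psi||^2
    post = Pn @ psi / np.linalg.norm(Pn @ psi)   # renormalized post-measurement state
    print(f"outcome {n}: probability {p_n:.2f}, state after: {post}")

# Irreversibility: two different input states collapse to the same output,
# which no unitary (invertible) map could do.
phi = np.array([np.sqrt(0.9), np.sqrt(0.1)])
out1 = P[0] @ psi / np.linalg.norm(P[0] @ psi)
out2 = P[0] @ phi / np.linalg.norm(P[0] @ phi)
assert np.allclose(out1, out2)  # both collapse to |0>
```

The renormalization by $\|\hat{P}_n|\psi\rangle\|$ is exactly the nonlinear step singled out in the POVM discussion above.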
Measurements are unitary operations too, you just don't see it: a measurement is equivalent to some complicated (quantum) operation that acts not just on the system but also on its environment. If one were to model everything as a quantum system (including the environment), one would have unitary operations all the way. However, there is usually little point in this, because we usually don't know the exact action on the environment and typically don't care. If we consider only the system, then the result is the well-known collapse of the wave function, which is indeed a non-unitary operation.

Quantum states can change in two ways: 1. quantumly, 2. classically.

1. All the state changes taking place quantumly are unitary. All the quantum gates, quantum errors, etc., are quantum changes.

2. There is no obligation on classical changes to be unitary; e.g., measurement is a classical change. All the more reason why it is said that the quantum state is 'disturbed' once it's measured.

• Why would errors be "quantum"? – Norbert Schuch Oct 28 '18 at 22:20

• @NorbertSchuch: Some errors could come in the form of the environment "measuring" the state, which could be considered classical in the language of this user, but other errors may come in the form of rotations/transformations in the Bloch sphere which don't make sense classically. Certainly you need to do full quantum dynamics if you want to model decoherence exactly (non-Markovian and non-perturbative ideally, but even Markovian master equations are quantum). – user1271772 Oct 29 '18 at 1:05

• Surely not all errors are 'quantum', but I meant to say that all 'quantum errors' ($\sigma_x,\sigma_y,\sigma_z$ and their linear combinations) are unitary. Please correct me if I am wrong, thanks.
– alphaQuant Oct 29 '18 at 5:49

• To be more precise: errors which are taken care of by QECCs. – alphaQuant Oct 29 '18 at 5:56

• I guess I'm not sure what "quantum" and "classical" mean. What would a CP map qualify as? – Norbert Schuch Oct 29 '18 at 6:45
What can happen in between? Well, the complete picture is of course complicated: the photon can get absorbed by an electron and then re-emitted (note that even though we are talking about "the photon" here, the emitted photon is actually distinct from the original one; but that doesn't matter much), then it can travel for some time inside the material, get absorbed by another electron, be re-emitted again, and finally fly back to $B$. To make the picture simpler we will just consider the case where the material is a perfect mirror (if it were e.g. glass, you would actually get multiple reflections from all of the layers inside the material, most of which would destructively interfere, leaving you with reflections from the front and back surfaces of the glass; obviously, that would make this already long answer twice as long :-)). For mirrors there is only one major contribution: the photon gets scattered (absorbed and re-emitted) directly by the surface layer of electrons of the mirror and then flies back. Quiz question: what about the process where the photon flies to the mirror, changes its mind, and flies back to $B$ without interacting with any electrons? This is surely a possible trajectory we have to take into account. Is it an important contribution to the path integral or not?

+1, makes sense to me, but what about color? –  Sklivvz Dec 18 '10 at 17:47

@Sklivvz: oh, I completely missed that part of the question. Thanks. But my answer is already too long, so I guess I will suggest OP to ask that as a separate question. Actually, @hwlau gives a correct first view on the problem (quantum mechanical), but the question actually deserves a lot more, I think. –  Marek Dec 18 '10 at 17:55

@Sklivvz: oh, you are OP :-D I am so sorry :-D –  Marek Dec 18 '10 at 17:55

+1, pretty good with this length.
The difficult part is to explain in a few sentences why the other paths cancel exactly, such as why the refracted path bends toward the normal ;) –  hwlau Dec 19 '10 at 10:47

It really deserves a long discussion. You may be interested in the book "QED: The Strange Theory of Light and Matter" by Richard Feynman (or the corresponding video lectures), which gives a comprehensive introduction with almost no numbers or formulas. As for color: in the solution of the Schrödinger equation for the hydrogen atom, the energy levels are discrete, so its absorption spectrum is also discrete. In this case, only a few colors can be seen. However, in a solid the atoms interact strongly with each other and the resulting absorption spectrum can be very complicated. This interaction depends strongly on the structure and on the outer electrons. Temperature can play an essential role: a structural change or phase transition can occur, and with it a color change. I think there is no easy explanation for the exact absorption spectrum, or color, of a material without doing a complicated calculation.
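The "adding arrows" picture used in the accepted answer can be illustrated numerically: sum the amplitude $\exp(iKL)$ over many candidate reflection points on the mirror and compare the full sum with the contribution from paths near the classical reflection point. All the values below ($K$, the geometry, the window sizes) are arbitrary illustration choices, not from the answer:

```python
import numpy as np

# Feynman's "adding arrows" picture for mirror reflection, as a numerical
# sketch: sum exp(i K L) over paths A -> (mirror point) -> B, one path per
# candidate reflection point on the mirror line y = 0.
K = 50.0                     # wavenumber-like constant in exp(i K L)
A = np.array([-1.0, 1.0])    # source, 1 unit above the mirror
B = np.array([ 1.0, 1.0])    # detector

xs = np.linspace(-3, 3, 2001)                             # reflection points
L = np.hypot(xs - A[0], A[1]) + np.hypot(B[0] - xs, B[1])  # path lengths
arrows = np.exp(1j * K * L)

total = arrows.sum()

# Restrict the sum to a window around the classical reflection point (x = 0,
# where the path length is stationary): it recovers most of the full sum,
# because the rapidly rotating far-away "arrows" cancel among themselves.
partial = arrows[np.abs(xs) < 0.5].sum()
print(abs(total), abs(partial))
```

This is the stationary-phase argument behind Fermat's principle: only paths within roughly a Fresnel zone of the extremal path contribute appreciably.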
Making waves - singly

First observed in the waters of a Scottish canal, solitary waves, or solitons, have applications right across physics, Ray Girvan discovers

Scientific Computing World: May/June 2005

Background: 'In the Hollow of a Wave off the Coast at Kanagawa', c. 1830, Katsushika Hokusai.

The discovery of solitons is one of the nicer stories of important science arising from an apparently insignificant observation. In August 1834, the naval engineer John Scott Russell was watching a horse-drawn barge on the Union Canal, Hermiston, Edinburgh, as part of his work on hull design. When the cable snapped and the barge suddenly stopped, Russell was impressed by what happened: 'A mass of water rolled forward with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water, which continued its course along the channel apparently without change of form or diminution of speed. I followed it on horseback, and overtook it still rolling on at a rate of some eight or nine miles an hour, preserving its original figure some 30 feet long and a foot to a foot and a half in height. Its height gradually diminished, and after a chase of one or two miles I lost it in the windings of the channel'. Russell went on to a distinguished career, but despite his experimental work in a homebrew wave tank and a subsequent paper, his contemporaries never shared his view of the importance of what he called the 'Wave of Translation'. Partial vindication came later in the 19th century, when Boussinesq (1872) and Korteweg and de Vries (1895) showed how such self-reinforcing solitary waves arose from partial differential equations describing shallow water motion. But in 1965, Martin Kruskal and Norman Zabusky discovered a surprising result. An entirely different system of coupled harmonic oscillators, the Fermi-Pasta-Ulam experiment, also yielded the Korteweg-de Vries (KdV) equation.
Kruskal and Zabusky coined the term soliton for these travelling wave solutions, alluding to their particle-like properties. Whereas standard waves have peaks and troughs, and disperse as they travel, solitons consist of a single non-dispersing peak: linear effects spreading the waveform are exactly balanced by non-linear ones that focus it. Furthermore they show elastic interaction: two solitons of different sizes (and hence velocities) can pass through each other. This isn't, however, a normal wave collision where the heights add linearly; at the instant of superposition, solitons merge with a broader, lower peak. While solitons were first recognised on the surface of water, the commonest ones in water actually happen underneath, as internal oceanic waves propagating on the pycnocline (the interface between density layers). Sailors have long known of bands of rough and smooth sea - 'tide rips' and 'slicks' - as well as the phenomenon of 'dead water', increased drag at the mouth of fjords. But post-1970s observation, particularly with satellite Synthetic Aperture Radar (SAR) and ship-borne Doppler Current Profiling, revealed these to be the surface manifestation of 'undular bores', subsurface packets of solitons. Typically travelling as 'waves of depression' - troughs only - of tens of metres amplitude and often kilometres in wavelength, they are initiated when tidal flow is perturbed by underwater features such as ridges. They're more than a nautical curiosity, as they affect acoustic propagation in the sea (of military interest); mix sediments and nutrients (of ecological interest); and put potentially dangerous stress on the legs of oil rigs. Oceanic undular bores appear to arise by spontaneous breakdown of larger perturbations in systems governed by the KdV equation. This also occurs in the atmosphere, occasionally in the higher mesosphere, but most commonly in the lower atmosphere, the troposphere. 
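The exact balance between linear spreading and nonlinear focusing can be verified symbolically for the KdV equation. A minimal SymPy sketch, assuming the common normalization u_t + 6uu_x + u_xxx = 0 (the article does not fix a normalization):

```python
import sympy as sp

# Single-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0:
# u(x, t) = (c/2) / cosh^2( (sqrt(c)/2) (x - c t) ), speed c, amplitude c/2,
# so taller solitons travel faster, consistent with the collisions described.
x, t = sp.symbols('x t', real=True)
c = sp.symbols('c', positive=True)
u = (c / 2) / sp.cosh(sp.sqrt(c) / 2 * (x - c * t)) ** 2

# The KdV residual vanishes identically: the dispersive term u_xxx exactly
# balances the nonlinear steepening term 6 u u_x.
residual = sp.diff(u, t) + 6 * u * sp.diff(u, x) + sp.diff(u, x, 3)

# Evaluate at arbitrary points; the result is zero up to rounding error.
f = sp.lambdify((x, t, c), residual)
print(f(0.3, 0.2, 1.7))
```

The same travelling-wave substitution shows why speed and amplitude are locked together: the coefficient matching forces the amplitude to be half the speed.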
A spectacular, much-publicised example is Morning Glory, an undular bore that forms seasonally over the Gulf of Carpentaria, Australia. In this case, the bore travels in an inversion layer - cold air trapped between the ground and warmer air above - each soliton generating a roll-shaped bank of cloud hundreds of kilometres long. Bores, naturally, are better known as a river surface phenomenon. One spectacular example appeared around January 10th this year, when many newspapers worldwide published a photograph supposedly showing the instant of impact of the 2004 'Boxing Day Tsunami'. Other papers were suspicious. Lack of attribution played a part, as did the grins and umbrellas of the onlookers shown in companion photos that joined it on the e-mail circuit. Collective debunking, now summarised at the urban-myths website, soon revealed that the photos dated from 2002 and showed a tidal bore on the Qiantang River, Hangzhou, China. The largest in the world, with a wavefront up to 9 metres high, the Hangzhou bore (called the Black Dragon) is the subject of an annual tide-watching festival. It's very often stated that tidal bores on rivers are the definitive example of solitons. The situation isn't so straightforward (and not helped by a mess of terminology from different fields). A bore is a general term for a moving step-discontinuity in water level, otherwise called a 'hydraulic jump'. The classic bore - regionally a mascaret, pororoca and aegir - arises in funnel-shaped estuaries that amplify incoming tides, the rapid rise propagating upstream against the flow of the river feeding the estuary. The profile depends on the Froude number, a dimensionless ratio of inertial and gravitational effects. At its most energetic, a bore has a turbulent breaking wavefront like an advancing waterfall; this is effectively a shockwave rather than a soliton. Slower bores take on an oscillatory profile with a leading wave (a dispersive shockwave) followed by a train of solitons. 
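The Froude-number criterion mentioned above can be turned into a toy classifier. This is only a rough sketch: the formula Fr = U/sqrt(g h) and the roughly 1.7 threshold between undular and fully breaking bores are approximate figures from the hydraulics literature, supplied here for illustration rather than taken from the article:

```python
import math

# Rough bore classification by Froude number Fr = U / sqrt(g h), where U is
# the bore speed relative to the water ahead of it and h the undisturbed
# depth. The thresholds are approximate literature values (an assumption).
def froude(speed_ms, depth_m, g=9.81):
    return speed_ms / math.sqrt(g * depth_m)

def classify(fr):
    if fr <= 1.0:
        return "no bore (subcritical)"
    elif fr < 1.7:
        return "undular bore (dispersive leading wave + soliton train)"
    else:
        return "breaking bore (turbulent advancing front)"

# e.g. a 5 m/s front advancing into 2 m of undisturbed water:
fr = froude(5.0, 2.0)
print(round(fr, 2), classify(fr))
```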
An even more complex question is whether tsunami waves involve solitons. Tsunami waves generated by sharp localised impulses, such as meteorite strikes into the sea, generally do. Models of the late Jurassic Mjolnir impact by the Simula Research Laboratory, University of Oslo, predicted trains of solitary waves. A similar effect occurred with the 1958 mega-tsunami at Lituya Bay, Alaska, when rock dropped en masse into the bay following a landslide. An even smaller, but still dangerous, equivalent is the wash from high-speed super-ferries that produce soliton wakes. Earthquake tsunamis such as the 2004 tsunami are initiated by broader-scale impulses, and the spreading mechanism depends on the model. As long-wavelength water waves, tsunamis are generally modelled by the shallow water or long wave equations (a simplification of the Navier-Stokes equations for cases where the wavelength is much larger than the water depth). These incorporate the Boussinesq and KdV equations as further approximations, both of which can give soliton solutions. Nevertheless, it's difficult to check models; tsunamis are near-impossible to observe in mid-ocean before they are modified by shore effects: the catastrophic increase in amplitude, and the steepening into bores. Sea level measurement shows, however, that tsunami waves, again unlike solitons in the strict sense, have both peaks and troughs. (A classic warning sign of an impending tsunami is the sea level dropping before the first wavefront arrives - a detail enshrined in the story of Hamaguchi Goryo, who in 1854 burned his rice harvest to warn villagers as the sea receded.) Some analyses have suggested, however, that the regime may depend on travel distance: that tsunami waves close to the epicentre arrive as continuous sinusoidal waves, but long-distance ones may resolve into solutions of the KdV equation and travel as a train of solitary waves.
In general, though, all I can conclude is that 'tsunami' and 'soliton' aren't terms widely associated in the scientific literature. Soliton behaviour has also been seen in other fluid-like systems such as plasmas and flowing sand (barchan dunes have been observed to pass through each other). The Great Red Spot of Jupiter may also be some form of soliton. Following Kruskal and Zabusky's discovery of its broader applicability, soliton theory has extended well beyond the original application to fluids. Solitons appear in many other areas of physics governed by weakly nonlinear PDEs: for instance, the FitzHugh-Nagumo equations describing nerve impulse propagation; and the sine-Gordon equation in solid state physics and non-linear optics. It's hard to predict where solitons will pop up next. One of their more intriguing manifestations is 'light bullets', spherical solitary waves (as predicted by the non-linear Schrödinger equation) in non-linear optical media excited by laser. On collision, they show various behaviours - they can split, fuse, alter path and tunnel through each other - that might be harnessed to make optical computers. One optical effect that has reached fruition, however, is soliton-based communications. The idea was first suggested by Akira Hasegawa and Fred Tappert in 1973, when they showed theoretically that solitons could arise in optical fibres with a suitably tailored non-linear relation between light intensity and refractive index. Practical research took over a decade to catch up. Management of dispersion was one of the problems, and soliton technology, which sends laser data as 'pulse' vs. 'dark' states, has been in long-running competition for bandwidth and distance with the more traditional NRZ (non-return-to-zero) systems that send two intensity states with no zero state. 
Even so, a number of major telecoms providers, such as Marconi and Corvis Corporation, are now using soliton technology for ultra-long-haul (ULH) optical fibre networks that communicate over several thousand kilometres. John Scott Russell, one feels, would have been delighted by the wealth of phenomena arising from his Wave of Translation. As Chris Eilbeck's Solitons Home Page at Heriot-Watt University says, 'It is fitting that a fibre-optic cable linking Edinburgh and Glasgow now runs beneath the very tow-path from which John Scott Russell made his initial observations, and along the aqueduct which now bears his name'.

Solitons Home Page:
The Severn Bore Page:
The Morning Glory:

Tackling the mathematics

Due to the analytical insolubility of the underlying partial differential equations, early work on solitons had to be done by numerical methods. This is still the mainstay of work with general starting conditions and geometries. Femlab, the finite element solver from Comsol, includes the Korteweg-de Vries equation as one of its standard equation-based models (images top and right, from Femlab 3.1). Using a time-dependent solver, it demonstrates a succession of faster solitons passing through a slower one, all reforming after the collision. The post-processed domain plot shows the solution extruded along the time axis. Dr Magnus Olsson, product manager for Comsol's Electromagnetics Module, told me that Femlab 3.2 will stress time-dependent formulations in non-linear optics and electromagnetics, the type of media in which soliton effects are of increasing practical importance. For instance, in photonic crystals, which manipulate light internally through non-linear optical properties, the lossless transmission of a pulse round a right-angled bend in a waveguide can be modelled with sine-Gordon solitons. (This model, incidentally, is also applicable to the motion of dislocations in metals and the 'unzipping' of DNA.)
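Readers without access to a finite element package can reproduce the basic behaviour with a few lines of a Fourier pseudospectral scheme. The sketch below assumes the KdV normalization u_t + 6uu_x + u_xxx = 0; the grid, domain and time step are arbitrary choices. It propagates a single soliton and checks that it keeps its shape and speed:

```python
import numpy as np

# Fourier pseudospectral discretization of the KdV equation
# u_t = -6 u u_x - u_xxx on a periodic domain, time-stepped with classical RK4.
N, Ldom = 128, 40.0
x = np.linspace(-Ldom / 2, Ldom / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Ldom / N)   # spectral wavenumbers

def rhs(u):
    u_hat = np.fft.fft(u)
    u_x = np.real(np.fft.ifft(1j * k * u_hat))
    u_xxx = np.real(np.fft.ifft((1j * k) ** 3 * u_hat))
    return -6.0 * u * u_x - u_xxx

def rk4_step(u, dt):
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Initial condition: one soliton of speed c (amplitude c/2) centred at x0.
c, x0 = 1.0, -10.0
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - x0)) ** 2

dt, T = 5e-4, 2.0
for _ in range(int(round(T / dt))):
    u = rk4_step(u, dt)

# After time T the peak should sit near x0 + c*T, with amplitude still ~c/2.
print(x[np.argmax(u)], u.max())
```

Starting instead from two well-separated solitons of different speeds reproduces the overtaking collision described for the Femlab demonstration: the faster one passes through the slower, and both re-emerge with their shapes intact.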
The Mathematica image plots the exact solution for the interaction of two solitons, showing the characteristic phase shift and nonlinear superposition of amplitudes. It's now known that an isolated soliton solution to the KdV equation has a sech^2 profile. This result arose from the analytical solution of the Korteweg-de Vries equation, found in 1967 using the inverse scattering transform of Gardner, Greene, Kruskal, and Miura. An especial understanding came via the work of Hungarian-born mathematician Peter D Lax, who recently won the Abel Prize 2005 for his lifetime contributions to the solution of PDEs. The transform method worked through several obscure steps (Professor Helge Holden, summing up Lax's work on the Abel Prize site, calls them 'miracles'). Lax reformulated the solution in terms of two operators, a Lax pair, that not only explained how the transform worked but also made a large family of other soliton-generating PDEs integrable.